Well... if I were asked this on the spot, I'd first qualify that I'm assuming a population mean of 0 (i.e., the distribution is centered). With that in mind, my reasoning would be that the skewness is the same as the normal's, which is 0, because every symmetric distribution centered at 0 has all its odd moments equal to 0 (provided those moments exist; for the t that takes enough degrees of freedom). The kurtosis would have to be greater than the normal's because the tails of the t distribution are fatter, BUT as the degrees-of-freedom parameter increases, the kurtosis gets closer and closer to the normal's, which is 3 (or an excess kurtosis of 0, if people prefer that).
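To make that concrete, here's a quick sketch using scipy (the df values are just illustrative). Note that scipy reports *excess* kurtosis, so the normal's reference value is 0 rather than 3, and the closed form 6/(df − 4) only exists for df > 4:

```python
from scipy.stats import t

# Skewness and excess kurtosis of Student's t for increasing df.
# scipy's convention is excess kurtosis: normal = 0 (i.e. raw kurtosis 3).
for df in (5, 10, 30, 100):
    skew, ex_kurt = t.stats(df, moments="sk")
    print(f"df={df:>3}: skew={skew:.1f}, excess kurtosis={ex_kurt:.4f}")
# Closed form: skewness = 0 for df > 3; excess kurtosis = 6/(df - 4) for df > 4.
```

You can see the excess kurtosis shrinking toward 0 as df grows, which is exactly the convergence to the normal described above.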
Yeah, I always remember this from back in the day: the 95% interval for the standard normal uses 1.96, while for the t distribution it's something like 2.12 (depending on the degrees of freedom). So in finite samples their shapes differ in the tails and in peakedness, but as @spunky stated they converge, much like many distributions approach the standard normal as the sample size approaches infinity.
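If you want to see that convergence of critical values directly, a minimal sketch (df values chosen arbitrarily for illustration):

```python
from scipy.stats import t, norm

# 97.5th percentile (the two-sided 95% critical value) of the t
# for increasing df, compared with the normal's 1.96.
print(f"normal:       {norm.ppf(0.975):.4f}")
for df in (5, 16, 30, 100, 1000):
    print(f"t, df={df:>4}: {t.ppf(0.975, df):.4f}")
```

Around df = 16 you get roughly the 2.12 mentioned above, and by df = 1000 the value is essentially indistinguishable from 1.96.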
Actually, that would be correct for the first one. My company uses a lot of electronic components. Component suppliers will sell different levels of precision for components, but can't actually produce different levels of precision, so they sort them. If you order a high precision component, you get the first distribution. If you order non-precision components, you get the second one, which is the leftover tails.
In my world (psychometrics), truncated normals are the default assumption when talking about range restriction. Say you only admit students into a prestigious graduate program if they score above some mark X on the GRE. You usually end up with just the upper tail of the distribution.
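A small sketch of that range-restriction scenario, assuming a hypothetical cutoff at +1 SD on a standardized score (`truncnorm`'s `a` and `b` are in standard-normal units):

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical range restriction: only applicants scoring above +1 SD
# are admitted, leaving just the upper tail of a standard normal.
rng = np.random.default_rng(0)
admitted = truncnorm(a=1.0, b=np.inf)
sample = admitted.rvs(size=10_000, random_state=rng)
print(f"sample mean of admitted scores: {sample.mean():.3f}")
print(f"theoretical mean:               {admitted.mean():.3f}")
print(f"minimum admitted score:         {sample.min():.3f}")
```

The mean of the admitted group sits well above the population mean of 0, which is the classic range-restriction effect: statistics computed on the truncated group (means, correlations) no longer reflect the full applicant pool.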