I have 5 years worth of actual monthly data.
I want to "predict" (examine really) possible values 12 months into the future.
I don't want to be unnecessarily optimistic or pessimistic with regard to the variability.
I'll use a random number generator to "predict" numbers.
Optimistic would be that the variability is small.
Pessimistic would be that the variability is large.
So, I want to use a reasonable value for standard deviation in doing this.
If I compute the standard deviation for all 60 months of actual data, the number can be fairly large (it absorbs the multi-year trend as well as the month-to-month noise) - so perhaps too pessimistic.
I can pick the starting point and the trend, so all the simulation has to do is generate monthly values built from:
- Distribution: the possible distribution % if there's an annualized periodicity;
- Slope: the monthly growth/shrink rate, a straight line added to the starting point;
- RAND()*STDEV.P(): the possible randomness in the outcomes, based on calculating the standard deviation for each year's worth of data.
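A minimal sketch of that simulation in Python, under stated assumptions: the numbers for the starting point, slope, seasonal percentages, and standard deviation are placeholders (not real data), and I've assumed the distribution % multiplies the trend line:

```python
import random

# Placeholder inputs - illustrative values, not actual data
start = 100.0          # chosen starting point
slope = 1.5            # monthly growth added to the starting point
seasonal_pct = [1.00, 0.95, 1.05, 1.10, 1.00, 0.90,
                0.85, 0.90, 1.00, 1.10, 1.15, 1.00]  # annualized periodicity, as multipliers
sd = 8.0               # standard deviation chosen for the noise term

random.seed(42)        # reproducible run

forecast = []
for month in range(1, 13):
    trend = start + slope * month               # straight-line trend from the start point
    seasonal = trend * seasonal_pct[month - 1]  # apply the distribution %
    noise = random.random() * sd                # analogue of RAND() * STDEV.P()
    forecast.append(seasonal + noise)
```

Note that `random.random() * sd` adds only non-negative noise in [0, sd); if symmetric scatter around the trend line is wanted, `random.gauss(0, sd)` would be the usual substitute for the RAND()*STDEV.P() term.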

On the one hand, the standard deviation for the entire 60 months may be OK, but it may be larger than warranted.
Alternatively, the standard deviation for the previous year might be more contemporaneously representative.

If I calculate the simple average of the standard deviations for each of the 5 years, I get a value that looks reasonable. It takes into account the years where the variability is low as well as those where it's higher.
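The two candidate values can be computed side by side; here with made-up monthly data (the numbers are illustrative, not my actual series). `statistics.pstdev` is the population standard deviation, i.e. Excel's STDEV.P:

```python
import statistics

# 60 months of illustrative data: 5 years with differing variability per year
data = [
    [100, 102,  98, 101,  99, 103, 100,  97, 102, 101,  99, 100],  # year 1: low variability
    [105, 110,  95, 108,  92, 112, 104,  96, 109,  93, 111, 105],  # year 2: higher variability
    [110, 111, 109, 112, 108, 113, 110, 107, 112, 111, 109, 110],  # year 3: low
    [115, 125, 105, 122, 102, 128, 114, 106, 124, 103, 126, 115],  # year 4: high
    [120, 121, 119, 122, 118, 123, 120, 117, 122, 121, 119, 120],  # year 5: low
]

all_months = [v for year in data for v in year]

sd_overall = statistics.pstdev(all_months)               # SD over all 60 months
sd_per_year = [statistics.pstdev(year) for year in data] # SD within each year
sd_avg = sum(sd_per_year) / len(sd_per_year)             # simple average of the yearly SDs
```

Because the 60-month SD also absorbs the year-to-year drift, it comes out larger than the average of the within-year SDs here, which is exactly the "too pessimistic" effect described above.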
Now I'd like to put this in more mathematical terms, so that it can be defended (and so that I can justify it to myself).
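The averaging step can be written down explicitly (a sketch of my understanding; here x_{y,m} is the actual value for month m of year y, and s_y is that year's population standard deviation, i.e. STDEV.P over its 12 months):

```latex
s_y = \sqrt{\frac{1}{12}\sum_{m=1}^{12}\bigl(x_{y,m} - \bar{x}_y\bigr)^2},
\qquad
\bar{s} = \frac{1}{5}\sum_{y=1}^{5} s_y
```

A closely related quantity that may be easier to defend formally is the pooled value \sqrt{\tfrac{1}{5}\sum_y s_y^2}, which also weights each year equally but averages on the variance scale; by the RMS-AM inequality it is never smaller than \bar{s}.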