I have time series of data generated by my computer simulations, collected over simulation time. These series show stationary fluctuations. The fluctuations aren't noise, but rather a consequence of the physical nature of the problem.

I can easily compute the mean of these time series. I hope that by averaging over a suitable length of time, I can smooth out the natural fluctuations in the data and obtain an accurate estimate of the series' central tendency. I would also like to quantify the uncertainty in this calculation of the mean. The uncertainty in my mean calculation is not due to experimental error, nor to a small sample size, but to the fact that I can only average over a few periods of the oscillations.

I'm stumped because I'm certain that neither the standard deviation nor the confidence interval will suit my purposes: neither "understands" the data's fluctuation period relative to the length of time over which I average. Please take a moment to think about this last sentence, because it is the crux of my problem.
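To make the issue concrete, here is a toy example (synthetic AR(1) data standing in for my simulation output, so the parameters are illustrative, not from my actual problem). For a correlated series, the naive standard error of the mean, std/sqrt(N), disagrees badly with the true spread of the sample mean, because it ignores how long the fluctuations persist:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.95):
    """Stationary AR(1) process: x[t] = phi * x[t-1] + white noise.
    The correlation time is roughly (1 + phi) / (1 - phi) steps."""
    x = np.empty(n)
    # Draw the first point from the stationary distribution.
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n = 2000
# "True" uncertainty of the mean: spread of the sample mean over many
# independent replicate runs of the same process.
means = np.array([ar1_series(n).mean() for _ in range(200)])
true_sem = means.std(ddof=1)
# Naive standard error from a single run, ignoring correlation.
naive_sem = ar1_series(n).std(ddof=1) / np.sqrt(n)

print(f"true SEM = {true_sem:.3f}, naive SEM = {naive_sem:.3f}")
```

With these parameters the naive standard error underestimates the true uncertainty several-fold, which is exactly the behavior I am worried about in my own data.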

What calculation should I instead perform to quantify the uncertainty of the mean? Thank you.