How to calculate the precision of the mean of univariate time series data, based on the precision of the discrete measurements it consists of

I haven't had much luck getting an answer to the question I asked a week ago, so here's a simpler question that I hope somebody will be able to answer:

If I have measurements of some variable collected at a regular frequency for some period of time, how can I calculate the precision of the mean of all the collected data, based on the precision of the individual measurements?

Assume that the precision of the measurements is fixed and independent – each measurement has the same degree of random error, and is independent of all other measurements.

For example, suppose I have an instrument that measures some variable every minute, and the mean of 7 days of data (with no gaps – 10,080 equally spaced measurements) is 144.2. If the precision of the instrument has been previously established as an SD of 12.5 (the standard deviation of the error of a single measurement), how would I calculate the precision of that mean?
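In case it helps to see where I've gotten on my own: my untested guess is that the ordinary standard-error-of-the-mean formula (SD divided by the square root of n) applies when the only random component is independent instrument error, but I haven't found anything confirming that it's valid when the underlying variable is changing rather than fixed. Here's a minimal sketch of that guess, using the numbers from my example:

```python
import math

# Tentative sketch (my guess, not from any source), assuming the ordinary
# standard-error-of-the-mean formula applies when the only random component
# is independent instrument error.
sd_instrument = 12.5   # previously established SD of a single measurement's error
n = 10_080             # one measurement per minute for 7 days
mean_value = 144.2     # mean of the 7 days of data

se_mean = sd_instrument / math.sqrt(n)      # standard error of the mean?
ci_95 = (mean_value - 1.96 * se_mean,
         mean_value + 1.96 * se_mean)       # approximate 95% confidence interval?

print(f"SE of mean ≈ {se_mean:.3f}")                    # ≈ 0.125
print(f"95% CI ≈ {ci_95[0]:.2f} to {ci_95[1]:.2f}")     # ≈ 143.96 to 144.44
```

If that reasoning is right, the interval comes out almost implausibly tight, which is part of why I'd like to know whether it actually holds here.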

Alternatively, suppose I compare the measurement instrument to a trusted reference instrument that takes individual measurements of the same variable (not modeled or replicated, but the same physical quantity) at random times, about a dozen per day. If I calculate a MAPE of 7.3% from the 84 comparisons made over the 7-day interval, how would I calculate the precision of that 7-day mean of 144.2?
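Again, just to show my own groping in the dark (none of this comes from a source, and I could easily be wrong): if the comparison errors were roughly normal, unbiased, and proportional to the measured value, and if the reference instrument's own error were negligible, then I think the MAPE could be converted to an approximate SD via the √(2/π) relation between a normal distribution's mean absolute deviation and its SD. A sketch of that guess:

```python
import math

# Very rough sketch (again my own guess): convert the 7.3% MAPE into an
# approximate per-measurement SD, assuming the comparison errors are roughly
# normal, unbiased, proportional to the measured value, and that the reference
# instrument's own error is negligible.
mape = 0.073        # mean absolute percentage error vs. the reference instrument
mean_value = 144.2  # the 7-day mean in question
n = 10_080          # number of measurements in the 7-day mean

# For a zero-mean normal error, mean(|error|) = SD * sqrt(2/pi) ≈ 0.798 * SD,
# so SD ≈ mean absolute error / 0.798.
sd_pct = mape / math.sqrt(2 / math.pi)   # ≈ 0.092, i.e. ≈ 9.2% of the measured value
sd_abs = sd_pct * mean_value             # ≈ 13.2 in measurement units

se_mean = sd_abs / math.sqrt(n)          # ≈ 0.13
print(f"Implied per-measurement SD ≈ {sd_abs:.1f}, SE of the 7-day mean ≈ {se_mean:.2f}")
```

But that stacks several assumptions (normality, no bias, negligible reference error, only 84 comparisons), so I'd welcome a better way to go from a MAPE to the precision of the mean.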

If those are not the most useful precision metrics for this purpose, please let me know what would be better. I'm also not fixated on any specific metric for the precision of the mean; I presume it would be best expressed as a standard error or a 95% confidence interval, but I'm open to suggestions.

What all my questions are driving at is how to get from the precision of a measurement instrument to the precision of the average of multiple measurements made with that instrument. I believe the relationship should be the same whether it's repeated measurements of the same target or measurements of a changing variable, as long as the measurement precision is fixed. (I do understand that an instrument's precision is not necessarily constant throughout its measurement range, but to keep things simple, I'm asking about cases where it doesn't change.)
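To pin down exactly what I mean by "the precision of the average", here's a quick simulation sketch (my own construction, with an arbitrary made-up signal) of the kind of numerical check I'd expect to confirm whatever the exact relationship is: simulate many repeats of the same 7-day run with independent instrument error added, and look at how much the resulting means scatter.

```python
import numpy as np

# Simulation sketch of what I mean by "the precision of the mean": simulate many
# repeats of the 7-day measurement run, each measurement contaminated with
# independent instrument error (SD = 12.5), and look at how much the resulting
# means scatter from repeat to repeat. The "true" signal here is an arbitrary
# made-up curve; only the error term matters for this question.
rng = np.random.default_rng(0)
sd_instrument = 12.5
n = 10_080
n_repeats = 1_000

true_signal = 144.2 + 10 * np.sin(np.linspace(0, 14 * np.pi, n))  # arbitrary changing variable
errors = rng.normal(0.0, sd_instrument, size=(n_repeats, n))      # independent instrument error
means = (true_signal + errors).mean(axis=1)                       # mean of each simulated 7-day run

print(f"Empirical SD of the simulated means: {means.std(ddof=1):.3f}")
print(f"sd_instrument / sqrt(n):             {sd_instrument / np.sqrt(n):.3f}")
```

If my reasoning is right, the scatter of those simulated means should come out close to SD/√n, but I'd much rather see the relationship stated and justified properly than infer it from a simulation.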

I've searched and searched and searched, and I've found many sources saying that you can improve precision by taking several measurements and averaging them (thank you, Captain Obvious). Some of those sources even go on to enlighten the reader that the more measurements you average, the greater the precision (no kidding???). But I haven't been able to find a single source that puts that improvement in mathematical terms, i.e. that spells out the relationship between the precision of the measurement method, the number of measurements averaged, and the precision of the resulting mean. If anyone here can do that, or point me to something I can read about it, it would be greatly appreciated.