# Relationship of precision of mean of repeated measurements and precision of single measurement

#### bis225

##### New Member
The random error of the mean of repeated measurements of the same physical quantity decreases as the number of measurements increases. My question is, if I know the standard deviation or 95% CI for a single measurement with a given instrument (in other words, the precision of the instrument is previously documented based on large-scale experiments), how can I calculate the precision of the mean of a given number of measurements?

For example, suppose an instrument has a documented SD of 2.82 units (therefore, the 95% CI would be ±2.82 x 1.96 = ±5.53). So, that would mean that a single measurement with this instrument has a ~68% chance of being within 2.82 of the true value, and a 95% chance of being within 5.53 of the true value. What happens to that precision if I take multiple measurements and average them? The precision should increase with the number of measurements, but what is the mathematical relationship?

I know that the SEM formula divides the sample SD by the square root of the sample size, but that applies to the SD of the measurements in that sample themselves. Can I apply that formula using the documented SD and the number of measurements I'm taking? E.g. if I take five measurements, then the SEM would be 2.82/√5 = 1.26, and the 95% CI would be ±1.26 x 1.96 = ±2.47. The reasoning is that the SEM (i.e. the SD of the mean) is inversely proportional to the square root of the number of measurements, so if each measurement is subject to an uncertainty quantified by SD = 2.82, then the uncertainty of the mean of x measurements would be 2.82/√x.

Or is that reasoning unsound? If it is, then how would I quantify the uncertainty of the mean of x measurements, not based on the statistical spread of just those few measurements themselves, but based on the documented SD of the instrument (which for practical purposes can be taken as the "true" or "population" SD)?
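One way to check this reasoning is with a quick Monte Carlo sketch. The true value (100.0) is an arbitrary assumption; the SD of 2.82 and n = 5 come from the example above. Each simulated measurement is the true value plus Gaussian instrument error:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # assumed true quantity (arbitrary)
SIGMA = 2.82         # documented instrument SD from the example
N = 5                # measurements averaged per run
RUNS = 200_000       # number of simulated averaging runs

# For each run, take N noisy measurements and record their mean.
means = [
    statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(N))
    for _ in range(RUNS)
]

sd_of_means = statistics.stdev(means)
print(f"empirical SD of the mean:  {sd_of_means:.3f}")
print(f"theoretical sigma/sqrt(n): {SIGMA / N**0.5:.3f}")  # ≈ 1.261
```

The empirical SD of the simulated means lands very close to 2.82/√5 ≈ 1.26, consistent with the σ/√n reasoning.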

I'm surprised that after hours upon hours of searching, I haven't been able to find any information on this. Countless resources say that precision can be improved by calculating the mean of repeated measurements, and that increasing the number of measurements improves precision (thank you, Captain Obvious), but none of them say anything about how the number of measurements improves on the established precision of the measurement system.


#### bis225

##### New Member
Am I just being obtuse about this? I was thinking that the standard error of the mean (SEM) is based on the SD of the sample, and I wasn't sure if it's valid to combine a previously established SD with the number of observations in a new sample to find the SEM of that new sample. But I took another look at the SEM, and see that it's actually defined in terms of the population SD, and is usually only estimated with the sample SD because the population SD isn't known. Therefore, if the documented SD is based on large samples of repeat measurements and is for practical purposes insignificantly different from the "population SD" (which in this case is the theoretical actual precision of the instrument), then dividing that SD by the square root of the number of measurements I've taken should in fact give me the precision, in terms of SD, of the mean of those measurements (the SE of that mean).
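Under that assumption (documented SD treated as the known population SD), the calculation is just mean ± z·σ/√n. A minimal sketch, using the numbers from the example and an arbitrary assumed sample mean of 50.0:

```python
import math

def ci_of_mean(sample_mean, sigma, n, z=1.96):
    """95% CI for the mean of n measurements, using a known
    (documented) instrument SD rather than the sample SD."""
    half_width = z * sigma / math.sqrt(n)
    return (sample_mean - half_width, sample_mean + half_width)

# Example numbers from the thread: sigma = 2.82, n = 5
lo, hi = ci_of_mean(50.0, 2.82, 5)
print(f"SE of mean: {2.82 / math.sqrt(5):.2f}")   # ≈ 1.26
print(f"95% CI: ({lo:.2f}, {hi:.2f})")            # ≈ (47.53, 52.47)
```

Because σ is taken as known rather than estimated from the sample, the z multiplier (1.96) applies directly; with an estimated SD and a small sample, a t multiplier would be used instead.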

Does that make sense, or am I missing something? And if so, wouldn't that also apply to my question above about the precision of the mean of a time series? The fact that it's intermittent measurements of a changing variable rather than repeated measurements of the same quantity shouldn't make a difference in the combined precision as long as the measurement precision is the same regardless of the value of the measured variable. How precisely the mean of the measurements estimates the true mean should be the same calculation, the only difference being that with repeated measurements of the same quantity, the true mean is the actual quantity, whereas with the time series the true mean is the actual mean of the variable in the given time span. Or, once again, am I missing something?
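The time-series version of the claim can be sketched the same way: if each observation is the true (changing) value at that moment plus independent instrument noise, then the measured mean differs from the true mean of the sampled values only by the average of the noise, whose SD is σ/√n regardless of how the underlying variable moves. The drifting signal below is an arbitrary assumption for illustration:

```python
import random
import statistics

random.seed(7)
SIGMA = 2.82      # documented instrument SD
N = 12            # number of intermittent measurements
RUNS = 100_000

# Arbitrary changing true signal (linear drift, just for illustration).
true_values = [20.0 + 0.5 * t for t in range(N)]
true_mean = statistics.fmean(true_values)

# For each run, measure the signal with instrument noise and record
# the error of the measured mean relative to the true mean.
errors = []
for _ in range(RUNS):
    measured = [v + random.gauss(0.0, SIGMA) for v in true_values]
    errors.append(statistics.fmean(measured) - true_mean)

print(f"SD of (measured mean - true mean): {statistics.stdev(errors):.3f}")
print(f"sigma/sqrt(n):                     {SIGMA / N**0.5:.3f}")  # ≈ 0.814
```

The measurement-error contribution to the mean comes out at σ/√n whether the underlying quantity is constant or changing, matching the reasoning in the paragraph above.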