I have a question about what the standard deviation means when it's calculated from the standard error.
Here's the scenario:
I'm looking at a pain scale published in a peer-reviewed article that has upper and lower 95% CIs of 67.84 and -129.24, respectively. I've calculated the SE as 50.28 (197.08 / (2 * 1.96)). This number makes sense to me given the upper and lower CIs.
What doesn't make sense to me is the SD that I've calculated. The SD is 732.02 (50.28 * 14.56, where 14.56 is the square root of the sample size of 212). How can the mean deviation of each measured value from the mean be 732.02 when the CI is between 67.84 and -129.24?
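For what it's worth, the arithmetic in the post above can be checked in a few lines of Python (just reproducing the numbers already quoted: CI of (-129.24, 67.84), n = 212):

```python
import math

# Values quoted from the article: 95% CI endpoints and sample size
lower, upper = -129.24, 67.84
n = 212

# SE back-calculated from the CI width: width / (2 * 1.96)
se = (upper - lower) / (2 * 1.96)   # -> about 50.28

# SD recovered from the SE: SE * sqrt(n)
sd = se * math.sqrt(n)              # -> about 732.02

# The implied sample mean is the midpoint of the CI
mean = (upper + lower) / 2          # -> about -30.7

print(round(se, 2), round(sd, 2), round(mean, 1))
```

So the numbers in the post are internally consistent; the question is only about interpretation.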
The basic question is whether it's reasonable for an SD to be around 700 when the CI is between 67 and -129. Doesn't that seem off?
And, yes, I checked your calculations (the mean is -30.7, the SE is 50.28, and the SD is 732.02), and they are correct.
Yeah, this can happen. The idea is that if you were to sample again from the same population your upper and lower limits could be markedly different.
For example, I just sampled 212 data points from a normal population with mu = -30.7 and SD = 732 (which gives a standard error of 50.28) and obtained lower and upper limits of -36.25 and +165.2, which are much different from those in the study you are looking at (above).
I guess the scale from which the measurement is taken doesn't go to 700, so it's hard to understand how the SD could be that high. But if it's to be taken as indicating that the CI could be drastically different, then it makes a little more sense.
You seem content with the answer you got, so feel free to ignore this.
If I understand your question, you're wondering how it could be that the standard deviation (SD) could be so large relative to the precision of the estimate of the mean. How could SD be about 732 when the precision is only about 50?
For the mathematical answer, look at the formula for the precision:
Precision = z * SD / SQRT(n)
Under what circumstances will SD be greater than the precision?
SD > Precision
iff SD > z * SD / SQRT(n)
iff SQRT(n) > z
iff n > z^2
Since z^2 is about 4 (1.96^2 = 3.84), the SD will be larger than the precision whenever the sample size is 4 or more. For most studies, this means all the time.
In fact, the larger the sample size, the larger SD will be relative to the precision.
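The inequality above is easy to see numerically. Here's a quick check using the SD from this thread (any SD would do, since it cancels out of the inequality): the half-width z * SD / sqrt(n) drops below the SD exactly once n exceeds z^2 ≈ 3.84.

```python
import math

z, sd = 1.96, 732.02

# For each sample size, is the precision (CI half-width) smaller than the SD?
for n in (2, 3, 4, 5, 212):
    precision = z * sd / math.sqrt(n)
    print(n, round(precision, 1), precision < sd)
```

The comparison flips from False to True between n = 3 and n = 4, matching the n > z^2 threshold derived above.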
Conceptually, remember that the SD is measuring variability in the underlying data. The precision is based on the SE, which is measuring variability in the sample mean, not the data. For any sample size, the SD is always greater than the SE. So, it is not surprising that the SD will be much larger than the SE and therefore, for realistic samples, much larger than the precision. Indeed, if this were not the case, we would know that the precision had been incorrectly computed.
Hope I'm answering the question you asked and if so, hope this helps!
As an aside, it's important to note that the standard deviation is not the mean deviation from the mean, as you described it in your original post. We do have a statistic that measures that mean deviation; it's called the mean absolute deviation (MAD), but it is different from the sample standard deviation.
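To make that distinction concrete, here's a small comparison on made-up data (the numbers are arbitrary, just for illustration): the sample SD squares the deviations before averaging, so it weights large deviations more heavily than the MAD does.

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # arbitrary example values
n = len(data)
mean = sum(data) / n             # 5.0

# Sample standard deviation (squared deviations, n - 1 denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Mean absolute deviation: the average distance of each value from the mean
mad = sum(abs(x - mean) for x in data) / n

print(round(sd, 3), round(mad, 3))
```

On this data the SD is about 2.14 while the MAD is 1.5; the two statistics agree on "how spread out" only loosely, and the SD is generally the larger of the two.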