if X ~ N(mu, sigma^2)

(that is, if X follows a normal distribution with mean mu and variance sigma^2)

then Xbar ~ N(mu, sigma^2 / n)

(the sample mean Xbar of a sample of size n follows a normal distribution with mean mu and variance sigma^2 / n)
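You can check that second fact by simulation. A minimal sketch (not from the original post; numpy and the values mu = 5, sigma = 2, n = 25 are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 25        # arbitrary illustrative values

# Draw 100,000 samples of size n and keep each sample's mean
sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(sample_means.mean())   # close to mu = 5.0
print(sample_means.std())    # close to sigma / sqrt(n) = 2 / 5 = 0.4
```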

'the' standard deviation usually refers to the square root of the variance of X's distribution,

sqrt(sigma^2) = sigma

the standard error (of the mean) refers to the square root of the variance of Xbar's distribution,

sqrt(sigma^2 / n) = sigma/sqrt(n)

standard deviation = square root of variance of X's distribution

standard error (of the mean) = square root of variance of Xbar's distribution
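In practice sigma is usually unknown, so you estimate it with the sample standard deviation s and compute the standard error from the data. A minimal sketch (assumes numpy; the data values are made up):

```python
import numpy as np

x = np.array([4.1, 5.3, 6.0, 4.8, 5.5])   # made-up sample, n = 5

n = len(x)
sd = x.std(ddof=1)           # sample standard deviation (divides by n - 1)
se = sd / np.sqrt(n)         # standard error of the mean

print(sd, se)                # sd ~ 0.72, se ~ 0.32
```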

Standard deviation is a measure of dispersion. When you are looking at individual datapoints, the standard deviation gives you a measuring stick for putting a probability on the difference between a datapoint and the population mean.

Standard error is EXACTLY the same kind of thing... a measure of dispersion... only for sample means rather than individual datapoints. When you take a sample of size n, the mean of that sample is measured against the population mean using the standard error as the measuring stick instead of the standard deviation. Remember... if you took EVERY POSSIBLE sample of size n from the population, the distribution of those sample means would itself be normal (exactly normal when X is normal, as above, and approximately normal for large n by the central limit theorem). That is why the standard error is the right yardstick for sample means.

The underlying logic is that the mean of a sample is expected to be more representative of the population mean than any individual datapoint, and the larger the sample, the more accurately we expect it to represent the population. That's why the standard error gets smaller as the sample size grows... we are 'judging' sample means by a tougher and tougher standard. Look at the formula for the standard error: if the sample size is 1, you get the standard deviation back. And if the sample were the entire population, you would get... what??
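To make the "tougher standard" concrete, here is sigma/sqrt(n) for a few sample sizes (sigma = 2 is an arbitrary choice; plain Python):

```python
import math

sigma = 2.0
for n in (1, 4, 25, 100, 10_000):
    print(n, sigma / math.sqrt(n))

# n = 1      -> 2.0   (the standard deviation itself)
# n = 4      -> 1.0
# n = 25     -> 0.4
# n = 100    -> 0.2
# n = 10_000 -> 0.02  (the sample mean is pinned down very tightly)
```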

Hope this helps.