What is the difference between standard error and standard deviation, please?
Thank you
Standard deviation is a measure of spread or variability for a given set of scores. The standard error quantifies how much variability to expect between your sample statistic and the population parameter.
maybe this helps
http://www.westgard.com/lesson35.htm
this is assuming you mean the standard error of the mean:
if X ~ N(mu, sigma^2)
(that is, if X follows a normal distribution with mean mu and variance sigma^2)
then Xbar ~ N(mu, sigma^2 / n)
(the sample mean Xbar follows a normal distribution with mean mu and variance sigma^2 / n)
'the' standard deviation usually refers to the square root of the variance of X's distribution,
sqrt(sigma^2) = sigma
the standard error (of the mean) refers to the square root of the variance of Xbar's distribution,
sqrt(sigma^2 / n) = sigma/sqrt(n)
standard deviation = square root of variance of X's distribution
standard error (of the mean) = square root of variance of Xbar's distribution
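To make the two definitions concrete, here is a small Python sketch (the scores are made-up numbers, just for illustration): the standard deviation describes the spread of the scores themselves, and the standard error of the mean is that spread divided by sqrt(n).

```python
import math
import statistics

# A set of scores (hypothetical data, purely for illustration)
scores = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4]
n = len(scores)

# Standard deviation: spread of the individual scores
# (sample SD, i.e. it divides by n - 1)
sd = statistics.stdev(scores)

# Standard error of the mean: spread of Xbar's distribution,
# estimated as sd / sqrt(n)
se = sd / math.sqrt(n)

print(f"standard deviation = {sd:.3f}")
print(f"standard error of the mean = {se:.3f}")
```

Note that the standard error is always smaller than the standard deviation for n > 1, since it is the same quantity divided by sqrt(n).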
Let's get away from stats-speak for a minute and talk about what the difference between these terms is for the average person.
Standard deviation is a measure of dispersion. When you are looking at individual datapoints, standard deviation gives you a measuring tool to put a probability value on the difference between a datapoint and the mean of the population.
Standard error is EXACTLY the same thing... a measure of dispersion... only not for individual datapoints but for sample means. When you take a sample of a certain size "n", the mean of that sample is measured against the population mean using standard error as the measuring stick instead of standard deviation. Remember... if you took EVERY POSSIBLE sample combination of a certain size "n" from a population "N", the distribution of the sample means would be normal, at least approximately (that's the central limit theorem). That is why standard error is applicable to samples.
The underlying logical reason for this is that the mean of a sample would be expected to be more representative of the population mean than an individual datapoint. And, the larger the sample the more accurate we expect it to be in representing the population. That's why standard error gets smaller as the sample size gets larger... we are 'judging' the sample means by a tougher and tougher standard as the sample size grows. Look at the formula for standard error. If the sample size is 1... you get the standard deviation formula. And, if the sample size is the entire population, you would get... what??
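A quick simulation sketch (Python, with a made-up normal population) shows exactly this: the spread of the sample means, measured directly, shrinks like sigma/sqrt(n) as the sample size grows.

```python
import random
import statistics

random.seed(0)  # reproducible
# Hypothetical population: 100,000 normal scores (mean 100, SD 15)
population = [random.gauss(100, 15) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

results = {}
for n in (4, 16, 64):
    # Draw many samples of size n and record each sample's mean
    means = [statistics.mean(random.sample(population, n))
             for _ in range(2000)]
    empirical = statistics.stdev(means)   # measured spread of the sample means
    theoretical = pop_sd / n ** 0.5       # sigma / sqrt(n)
    results[n] = (empirical, theoretical)
    print(f"n={n:3d}  measured SE={empirical:.2f}  sigma/sqrt(n)={theoretical:.2f}")
```

And this matches the limiting cases in the post above: with n = 1 each "sample mean" is just a single score, so the standard error collapses to the standard deviation itself; with n = N every sample mean equals the population mean, so the spread collapses to 0.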
Hope this helps.
In trying to find a good explanation for this I went through this thread. After reviewing the formulas I see mathematically that when the sample size is 1 you get the standard deviation, and when the sample size is the entire population the standard error is 0. What I don't understand is how the standard error is a measure of dispersion of the sample means. The formula for the standard error, as mentioned above, is the sample standard deviation divided by sqrt(n). In the sample standard deviation formula we square the difference between each xi and xbar, divide by n-1, and take the square root of the whole thing. I don't understand how using the xi's in our sample gives us information about the dispersion of all the sample means. Are we assuming here that the xi's represent where all the possible sample means might be, and that taking the differences between the xi's and xbar is a good approximation to taking the differences between all the possible sample means and the true mu?
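One way to see the answer (a simulation sketch in Python, with made-up numbers): the xi's are not stand-ins for the sample means. Rather, s estimates sigma, and the theory earlier in the thread (Xbar has variance sigma^2/n) converts that estimate into the spread of the sample means. So s/sqrt(n) estimates the dispersion of all the sample means without us ever having to see more than one sample, and we can check that directly:

```python
import random
import statistics

random.seed(1)  # reproducible
# Hypothetical population: 50,000 normal scores (mean 50, SD 10)
population = [random.gauss(50, 10) for _ in range(50_000)]
n = 25

# Dispersion of the sample means, measured directly from many samples
means = [statistics.mean(random.sample(population, n))
         for _ in range(3000)]
sd_of_means = statistics.stdev(means)

# Estimate from a single sample: s / sqrt(n).  Here s estimates sigma,
# and dividing by sqrt(n) applies Var(Xbar) = sigma^2 / n.
one_sample = random.sample(population, n)
se_estimate = statistics.stdev(one_sample) / n ** 0.5

print(f"SD of the sample means (measured): {sd_of_means:.2f}")
print(f"s/sqrt(n) from one sample:         {se_estimate:.2f}")
```

The two numbers agree closely (both near sigma/sqrt(n) = 10/5 = 2), even though the single-sample estimate never looked at any sample mean but its own.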