# Standard Error - interpretation and relative standard error

#### DeanP

##### New Member
Hi,

I am unclear about how to interpret the Standard Error of the Mean (SEM) directly. For example, when a mean is reported as 5.00 ± 0.50 SEM, how do you directly relate the 0.50 to the 5.00?

To help address this question I have written dot points about what I know about the SEM and interspersed 5 questions.

1) The standard deviation (SD) is a measure of dispersion around the mean
2) The SEM is the SD of the sampling distribution for the sample mean
3) The sampling distribution is derived from the means of an infinite number of samples from a statistical population and is normally distributed according to the Central Limit Theorem
4) In a normal distribution, 68.3% of (randomly selected) values fall within ±1 SD of the mean, 95.4% within ±2 SD, and 99.7% within ±3 SD
5) The SEM decreases with increasing sample size
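Points 2), 3) and 5) can be checked with a small simulation. A minimal sketch (the population mean of 50 and SD of 10 are arbitrary choices of mine, and a large finite number of samples stands in for the infinite sampling distribution):

```python
import random
import statistics

random.seed(1)

def empirical_sem(pop_mean, pop_sd, n, n_samples=5000):
    """SD of the means of many size-n samples -- an empirical SEM (point 2)."""
    means = [statistics.fmean(random.gauss(pop_mean, pop_sd) for _ in range(n))
             for _ in range(n_samples)]
    return statistics.stdev(means)

# Theory says SEM = SD / sqrt(n), so it shrinks as n grows (point 5).
for n in (4, 16, 64):
    print(n, round(empirical_sem(50, 10, n), 2), round(10 / n ** 0.5, 2))
```

Each printed line shows n, the empirical SEM, and SD/√n; the two should agree closely, and both shrink as the sample size grows.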

QUESTION 1: For a given statistical population being sampled, does the sampling distribution change with sample size? i.e. should point 3) be: "...derived from the means of an infinite number of samples of a given size from a statistical population..."?

QUESTION 2: Given points 2) and 4) is it correct to interpret the SEM in the same way as SD as a descriptive statistic? That is, for a sample with mean 5.00 and SEM 0.50, is it correct to conclude the true population mean lies between 4.50 and 5.50 with probability 68.3%?

6) A 95% confidence interval (CI) for the sample mean is calculated as the mean ± 1.96*SEM (1.96 being the standard normal multiplier; with the t-distribution the multiplier depends on the sample size)
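Point 6) as a quick sketch (1.96 is the 97.5th percentile of the standard normal distribution; the 5.00 and 0.50 are the illustrative numbers from the first post):

```python
# 95% CI for a mean, given the mean and its SEM, using the normal multiplier.
def ci_95(mean, sem):
    half_width = 1.96 * sem
    return (mean - half_width, mean + half_width)

lo, hi = ci_95(5.00, 0.50)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # (4.02, 5.98)
```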

QUESTION 3: Does 1.96 equate to 95.0% because of points 2) and 4)?

QUESTION 4: Is SEM reported in preference to a CI because it is not restricted to a specified probability level (as a CI is)? Put another way, why report the SEM when a CI seems more interpretable?

7) The SEM can be reported as a 'Relative Standard Error of the Mean (RSEM)' (calculated as the SEM/mean*100)
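Point 7) in code (same illustrative numbers as above):

```python
# Relative standard error of the mean, as a percentage of the mean.
def rsem(mean, sem):
    return sem / mean * 100

print(rsem(5.00, 0.50))  # 10.0
```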

QUESTION 5: Why would you choose to report RSEM instead of SEM?

Dean


#### hlsmith

##### Less is more. Stay pure. Stay poor.
To get you started with all of these questions: the SEM can be presented by itself or used in a confidence interval (CI); it depends on how the author wants to present the data. If you just present the SEM, then readers can calculate any CI they wish as long as they know the sample size. However, if the author wants the reader to interpret the material more easily, or if they were examining a hypothesis, then they may calculate a CI in order to prove the hypothesis and show whether or not the interval includes, say, "1" or "0".

#### Dragan

##### Super Moderator
I find the Relative Standard Error (RSE) to be useful when I have two different estimators of something but they are scaled differently, e.g. a conventional moment-based estimator of skew versus an L-moment-based estimator (so-called L-skew). The estimator with the smaller RSE has more precision, because it has less variance relative to its own scale.
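To illustrate the idea with a simpler stand-in (ordinary SD versus the raw IQR as estimators of spread; the choice of estimators and all numbers here are mine, not Dragan's): the two estimators sit on different scales, since for normal data the IQR centres near 1.35 times the SD, so their SEs are not directly comparable, but their RSEs are.

```python
import random
import statistics

random.seed(2)

def rse(estimates):
    """Relative standard error (%) of a set of simulated estimates."""
    return statistics.stdev(estimates) / abs(statistics.fmean(estimates)) * 100

def iqr(xs):
    q = statistics.quantiles(xs, n=4)  # [Q1, Q2, Q3]
    return q[2] - q[0]

# Repeatedly draw normal samples and apply both estimators of spread.
sds, iqrs = [], []
for _ in range(2000):
    sample = [random.gauss(0, 1) for _ in range(50)]
    sds.append(statistics.stdev(sample))
    iqrs.append(iqr(sample))

print(f"RSE of SD:  {rse(sds):.1f}%")
print(f"RSE of IQR: {rse(iqrs):.1f}%")
```

The SD comes out with the smaller RSE on normal data, matching the point: the estimator with the smaller RSE is the more precise one relative to its own scale.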

#### DeanP

##### New Member
Thanks Dragan:
I find the Relative Standard Error (RSE) to be useful when I have two different estimators of something but they are scaled differently
This makes sense. The reason I asked about the RSEM is that I had seen it reported for population size estimates from different localities, instead of an indication of precision in terms of actual size. So I'm not clear on why the author chose to report the RSEM instead of the SEM; I suppose they wanted to show differences in precision for population estimates from different islands, but they do not discuss this at all.

#### DeanP

##### New Member
If you just present the SEM, then readers can calculate any CI they wish as long as they know the sample size. However, if the author wants the reader to interpret the material more easily, or if they were examining a hypothesis, then they may calculate a CI in order to prove the hypothesis and show whether or not the interval includes, say, "1" or "0".
This relates to Question 4.

I get your first point (though note the reverse is also true: the SEM can be calculated from the CI, point 6) and I understand the second, so I'll try to explain myself better:

Since CIs are easier to interpret (i.e. they give the range of values in which the true parameter lies, at a specified probability level), is it not more useful to routinely report them instead of the SEM, given that the SEM can be calculated from the CI? Yet the SEM is the 'convention'. Why (Question 4)?
Is it just personal preference, or are there other things to consider (aside from the one-sample hypothesis testing you refer to, i.e. "...in order to 'prove' the hypothesis and show the interval does or does not include perhaps "1" or "0"")?
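The round trip is trivial to sketch (normal multiplier 1.96; the 5.00 and 0.50 are illustrative):

```python
# SEM -> 95% CI and back: the two presentations carry the same information.
def sem_to_ci(mean, sem, z=1.96):
    return (mean - z * sem, mean + z * sem)

def ci_to_sem(lo, hi, z=1.96):
    return (hi - lo) / (2 * z)

lo, hi = sem_to_ci(5.00, 0.50)
print(ci_to_sem(lo, hi))  # recovers 0.50
```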

#### hlsmith

##### Less is more. Stay pure. Stay poor.
It partially depends on the field and the author, and probably on the peer reviewers (they can always request more or less). Also, in the case of beta coefficients, you can present the estimate, SE and p-value (which is a standard layout), and the p-value will tell you whether the coefficient is different from 0.
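That last point as a sketch: given only an estimate and its SE, a Wald test against 0 recovers the p-value under a normal approximation (the 1.2 and 0.4 are made-up numbers):

```python
import math

def wald_p(estimate, se):
    """Two-sided p-value for H0: coefficient = 0 (normal approximation)."""
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2))

print(round(wald_p(1.2, 0.4), 4))  # z = 3.0, p = 0.0027
```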