# Thread: Confidence interval: true or false?

1. ## Confidence interval: true or false?

We have measured the concentration of cesium-137 in the tissue of 15 tunafish. We may treat the distribution as normal, and a 95% confidence interval for the mean concentration was calculated as (5.03, 6.71) Bq/kg.

Is the following statement true or false: "The interval means that 2.5 % of tunafish are expected to have dosage less than 5.03 Bq/kg."

I am not sure whether to interpret this statement as true or false. I interpret it as false, since "less than 5.03 Bq/kg" is not within the 95% CI, so our P-value is above 5%. But I am not sure what the 2.5 % indicates.

2. ## Re: Confidence interval: true or false?

Is the following statement true or false: "The interval means that 2.5 % of tunafish are expected to have dosage less than 5.03 Bq/kg."
The confidence interval concerns the distribution of sample means under repeated sampling from the same population.

Unfortunately, confidence intervals are worth nearly nothing (IMHO). All you can do with them is state that you had a 95% a priori chance of catching the population mean with the 95% confidence interval you wanted to construct.
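The repeated-sampling reading above is easy to check with a quick simulation: draw many samples of n = 15, build a 95% t-interval from each, and count how often the interval catches the true mean. The population values (mu, sigma) below are invented purely for illustration, not taken from the thread's data.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.87, 1.5, 15, 10_000
t_star = 2.145  # t_{0.975, df=14}

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - t_star * se, sample.mean() + t_star * se
    covered += lo <= mu <= hi  # did this interval catch the true mean?

print(covered / reps)  # close to 0.95
```

Any single realized interval either contains mu or it doesn't; the 95% is a property of the procedure over many repetitions, which is exactly what the loop counts.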

With kind regards

Karabiner

3. ## Re: Confidence interval: true or false?

It is false.

4. ## Re: Confidence interval: true or false?

Originally Posted by Karabiner
The confidence interval concerns the distribution of sample means under repeated sampling from the same population.

Unfortunately, confidence intervals are worth nearly nothing (IMHO). All you can do with them is state that you had a 95% a priori chance of catching the population mean with the 95% confidence interval you wanted to construct.

With kind regards

Karabiner
Thank you Karabiner for your answer!

5. ## Re: Confidence interval: true or false?

I have another assumption that I have a hard time determining if true or false:
"From the interval we may draw as a conclusion that the null-hypothesis (the population-mean is 6.00 Bq/kg) can be rejected on the 5 % significance level"

I assume this is FALSE since 6.00 Bq/kg is within the interval, **therefore the population-mean is significant** and therefore we may not reject the null-hypothesis? This seems somewhat counterintuitive. Would someone who agrees/disagrees please explain why?

6. ## Re: Confidence interval: true or false?

Originally Posted by medicalstatistics
I have another assumption that I have a hard time determining if true or false:
"From the interval we may draw as a conclusion that the null-hypothesis (the population-mean is 6.00 Bq/kg) can be rejected on the 5 % significance level"
Yeah, this is definitely wrong also. If it's in the CI, there's insufficient evidence to reject the particular null hypothesis at that given level of significance.

Originally Posted by medicalstatistics
I assume this is FALSE since 6.00 Bq/kg is within the interval, **therefore the population-mean is significant** and therefore we may not reject the null-hypothesis? This seems somewhat counterintuitive. Would someone who agrees/disagrees please explain why?
If you nix the bold part, you've got it (a hypothesis test at the 5% level with the null hypothesis of mu = 6 Bq/kg would be nonsignificant: failure to reject the null due to insufficient evidence to support the alternative).
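The duality between the interval and the test can be checked numerically. The sketch below backs the sample mean and standard error out of the reported interval (5.03, 6.71), assuming it was a standard t-interval with n = 15; the implied SE is an assumption of this reconstruction, not a figure given in the thread.

```python
# A 95% t-CI for the mean is m ± t* · SE, with t* = t_{0.975, df=14} ≈ 2.145
lo, hi = 5.03, 6.71
t_star = 2.145

m = (lo + hi) / 2            # sample mean: 5.87
se = (hi - lo) / 2 / t_star  # implied standard error, about 0.39

# Two-sided test of H0: mu = 6.00 at the 5% level
t_stat = (m - 6.00) / se
print(abs(t_stat) < t_star)  # True: 6.00 is inside the CI, so we fail to reject
```

Because the CI and the test use the same statistic and the same critical value, "inside the 95% CI" and "not rejected at the 5% level" are the same statement.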

7. ## Re: Confidence interval: true or false?

Originally Posted by Karabiner

Unfortunately, confidence intervals are worth nearly nothing (IMHO). All you can do with them is state that you had a 95% a priori chance of catching the population mean with the 95% confidence interval you wanted to construct.

With kind regards

Karabiner
Since you did state this was an opinion, I suppose we can both be reasonable yet disagree. I think most people misuse confidence intervals (as we've seen), but CIs do convey quite a bit of information. After all, they're a method of estimation and allow us to assess the precision of our estimate (and I don't mind a method that has good long-run performance, although other people may disagree). I think they lose value when people misinterpret them as Bayesian credible intervals (i.e., saying there's a 95% chance we got it right with this specific interval, or something to that effect). But there's nothing wrong with the tool. The problem is with the mechanic, if you will.

8. ## Re: Confidence interval: true or false?

Yes, there is nothing wrong with CIs. They are just worth nearly nothing (IMHO), and might be even more misleading (if that is possible) than p-values. As for the assessment of precision: this is one thing I do not understand, since as far as I know a CI is not a tool for assessing the precision of an estimate, and I have never read a discussion that used CIs as an assessment of precision, except those that misinterpreted CIs. But it might be possible to use CIs that way. I simply don't know.

With kind regards

Karabiner

9. ## Re: Confidence interval: true or false?

Originally Posted by medicalstatistics
Is the following statement true or false: "The interval means that 2.5 % of tunafish are expected to have dosage less than 5.03 Bq/kg."
This statement would be true for a prediction interval, but not for a confidence interval.
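The contrast can be made concrete. The sketch below back-solves a sample SD consistent with the reported interval (an assumption, since the thread never gives s) and compares the CI for the mean with a 95% prediction interval for a single new fish; the prediction interval is the one that speaks to where individual dosages fall.

```python
import math

m, n = 5.87, 15
t_star = 2.145  # t_{0.975, df=14}
s = 1.517       # sample SD chosen so the CI reproduces (5.03, 6.71)

ci_half = t_star * s / math.sqrt(n)        # half-width for the MEAN
pi_half = t_star * s * math.sqrt(1 + 1/n)  # half-width for ONE new observation

print(m - ci_half, m + ci_half)  # ≈ (5.03, 6.71)
print(m - pi_half, m + pi_half)  # far wider, ≈ (2.51, 9.23)
```

The CI shrinks toward zero width as n grows, but the prediction interval never shrinks below ±t*·s, because a single fish keeps its full individual variability no matter how precisely we pin down the mean.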

10. ## Re: Confidence interval: true or false?

Originally Posted by Karabiner
Yes, there is nothing wrong with CIs. They are just worth nearly nothing (IMHO), and might be even more misleading (if that is possible) than p-values. As for the assessment of precision: this is one thing I do not understand, since as far as I know a CI is not a tool for assessing the precision of an estimate, and I have never read a discussion that used CIs as an assessment of precision, except those that misinterpreted CIs. But it might be possible to use CIs that way. I simply don't know.

With kind regards

Karabiner
The standard error for any sample statistic is often referred to, and can be interpreted, as a measure of precision. The fact that it's incorporated in the confidence interval allows for an assessment of the precision of that estimate. A wider interval, all else constant, indicates less precision in the estimated parameter; a narrower confidence interval, all else constant, indicates more.

Remember that precision is a way of measuring the "tightness" or "spread" (a lower SE indicates less variability, hence higher precision), whereas accuracy is about hitting the mark (think of a biased vs. unbiased estimator). The easiest way to think of this is that for a sampling distribution with n approaching infinity, the SE approaches zero, indicating practically no variation in the sample statistics. This is precise, but it tells us nothing about accuracy: you can have a precise and unbiased estimate, a precise and biased estimate, or just lack precision altogether.

Some people also use the target-practice analogy. Irrespective of where I hit the target, a tight grouping indicates more precision in the shot (less variability) and a wide grouping indicates less precision (more variability).

At least, that's what I heard from many places (and I don't know if I've actually heard it from an actual statistician [Ph.D. or MS in stats/biostats]). I could be wrong. Any chance you can entertain more thoughts on this possibly?

Edit: this is interesting http://stats.stackexchange.com/quest...-anything?rq=1 I'm somewhat disregarding the point from the author of the paper (Morey), because we know his viewpoint. Some other posters noted that there isn't a necessary connection of precision to a CI, but there nearly always is between standard errors and precision. This is what I had thought after the first few times I heard a CI described as indicating precision-- the connection seemed natural in my mind, but I never gave it much serious thought past that. Definitely interesting.

Edit2: I usually consider Minitab a generally credible source as they often employ statisticians, so I'll include this too (describing a narrower confidence interval as more precise). http://support.minitab.com/en-us/min...-more-precise/

Edit3: So I spoke with a statistician today, and he more or less confirmed that the width of a CI doesn't necessarily indicate precision. He agreed with the stackexchange comments about an empty or infinite interval, and that in general you can't say a narrow confidence interval indicates precision in an estimate. He also said people should be clearer about what they refer to as "precision." If you compared standard errors, though, you could talk about precision.
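The uncontested part of the discussion above, that the SE measures the spread of the sampling distribution and shrinks like sigma/sqrt(n), can be shown by simulation. The population values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 5.87, 1.5

ses = {}
for n in (15, 60, 240):
    # 10,000 sample means of size n: their SD is the empirical SE
    means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)
    ses[n] = means.std()
    print(n, round(ses[n], 3), round(sigma / np.sqrt(n), 3))
```

Quadrupling n halves the empirical SE each time, matching the sigma/sqrt(n) theory: tighter spread of the estimator, i.e. more precision, regardless of whether the estimator is aimed at the right target.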

11. ## Re: Confidence interval: true or false?

Originally Posted by ondansetron
Since you did state this was an opinion, I suppose we can both be reasonable yet disagree. I think that most people misuse confidence intervals (as we've seen), but CI's do convey quite a bit of information. After all, they're a method of estimation and allow us to assess the precision of our estimate (and I don't mind a method that has good long run performance assuming, although other people may disagree). I think they lose value when people misinterpret them as Bayesian credible intervals (i.e. saying there's a 95% chance we got it right with this specific interval or something to that effect). But, there's nothing wrong with the tool. The problem is with the mechanic, if you will.
I hear you, but the problem is that what people want and what a confidence interval appears to be is a credible interval (but it isn't actually that). Personally I think the best way to make sense of a confidence interval is to treat it as a credible interval but make the assumption on which that treatment is based explicit - i.e., assuming we had no prior information and any effect size between -Inf and Inf was equally likely, then we can be 95% certain that the true effect size lies within the interval.

(But typically we would have prior information; typically we know that small effect sizes are much more likely than extremely large ones).

PS. This treatment of a confidence interval as a credible interval works for location parameters such as effect sizes and means. It doesn't for scale parameters like variances, afaik.
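The flat-prior reading above can be checked numerically for a mean. Assuming (for simplicity) normal data with known sigma and illustrative numbers loosely matching the thread, the posterior under a flat prior is just the normalized likelihood, and its central 95% matches the z-interval:

```python
import numpy as np

xbar, sigma, n = 5.87, 1.5, 15  # illustrative values; sigma assumed known

# Frequentist 95% z-interval for the mean
half = 1.96 * sigma / np.sqrt(n)
freq = (xbar - half, xbar + half)

# Posterior for mu under a flat prior: proportional to the likelihood,
# evaluated on a grid, then normalized; take the 2.5% and 97.5% quantiles.
grid = np.linspace(0, 12, 200_001)
post = np.exp(-n * (grid - xbar) ** 2 / (2 * sigma ** 2))
post /= post.sum()
cdf = np.cumsum(post)
cred = (grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)])

print(np.round(freq, 3))  # about (5.111, 6.629)
print(np.round(cred, 3))  # essentially the same interval
```

Under the flat prior the two intervals coincide, which is why the credible-interval reading "works" here; with an informative prior the credible interval would pull toward the prior and the two would differ.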

12. ## Re: Confidence interval: true or false?

Originally Posted by CowboyBear
I hear you, but the problem is that what people want and what a confidence interval appears to be is a credible interval (but it isn't actually that). Personally I think the best way to make sense of a confidence interval is to treat it as a credible interval but make the assumption on which that treatment is based explicit - i.e., assuming we had no prior information and any effect size between -Inf and Inf was equally likely, then we can be 95% certain that the true effect size lies within the interval.

(But typically we would have prior information; typically we know that small effect sizes are much more likely than extremely large ones).

PS. This treatment of a confidence interval as a credible interval works for location parameters such as effect sizes and means. It doesn't for scale parameters like variances, afaik.
Is the correct phrase "noninformative priors"? I think I've heard before that a Bayesian credible interval and a frequentist CI are the same when the Bayesian's priors are noninformative (or something along those lines).

13. ## Re: Confidence interval: true or false?

Originally Posted by ondansetron
Is the correct phrase "noninformative priors"? I think I've heard before that a Bayesian credible interval and a frequentist CI are the same when the Bayesian's priors are noninformative (or something along those lines).
Yep, that's the one, though the terminology is contested (Andrew Gelman reckons there's no such thing as a non-informative prior - the prior always has some effect).

14. ## The Following User Says Thank You to CowboyBear For This Useful Post:

ondansetron (02-19-2017)
