Bonferroni Adjustment/Correction and Confidence Intervals

#1
Hello,

I have two questions here.

RESOLVED (for now): you perform the Bonferroni correction for multiple tests and/or multiple comparisons.
FIRST: I have performed 4 mixed between-within subjects ANOVAs that look at the differences between time (2 time points) and group (2 groups). I have been told in some places that you divide your alpha level by the number of tests you're running, but the quote below from Pallant (2011) says "comparisons" rather than "tests", and it has me wondering: with 4 mixed between-within subjects ANOVAs looking at time points and group differences, do I need to divide the alpha level by 4 or by 8?

SECOND: If I'm reporting confidence intervals with the rest of my output, is there anything I need to do with them relating to the Bonferroni adjustment: any adjustments or calculations? I know that for the Bonferroni adjustment I examine my p-values against the adjusted alpha (in this case I think it's 0.0125, or 0.013 rounded), but is there anything different I need to do when reporting the confidence intervals?

I have searched the forum and looked in my textbooks, but I can't find a clear (to me) answer. Thank you for your help.

"The other alternative is to apply what is known as a Bonferroni adjustment to the alpha level that you will use to judge statistical significance. To achieve this, you can divide your alpha level (usually .05) by the number of comparisons that you intend to make, and then use this new value as the required alpha level. For example, if you intend to make three comparisons the new alpha level would be .05 divided by 3, which equals .017."
 

hlsmith

Less is more. Stay pure. Stay poor.
#2
You typically correct for the number of comparisons. So if you did post-ANOVA t-tests for the 3 group means, you would either divide your alpha by 3 for the cutoff or multiply your p-values by 3. If you are also reporting values with CIs, then you need to modify them too. So you would still report CIs, but due to the correction they would actually be 98.33...% CIs rather than 95% CIs.
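If it helps to see the arithmetic, here's a quick sketch in Python; the three p-values are made up purely for illustration:

[code]
# Bonferroni correction, minimal sketch (all numbers invented for illustration).
# For m comparisons: either shrink the alpha cutoff or inflate the p-values.
alpha = 0.05
m = 3                                   # number of comparisons
p_values = [0.010, 0.020, 0.040]        # illustrative raw p-values

adjusted_alpha = alpha / m              # 0.0167: compare the raw p-values to this
adjusted_p = [min(p * m, 1.0) for p in p_values]   # or inflate p, compare to 0.05

ci_level = 1 - alpha / m                # 0.9833: report 98.33% CIs instead of 95%
print(adjusted_alpha, adjusted_p, ci_level)
[/code]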
 
#3
I'm doing the Bonferroni adjustment/correction because I'm performing multiple ANOVAs, as I was not able to perform a MANOVA. I'm following this advice from Pallant (just re-found this; so I think I understand that you do it for multiple tests or comparisons):

"One way to control for the Type 1 error across multiple tests is to use a Bonferroni adjustment. To do this, you divide your normal alpha value (typically .05) by the number of tests that you intend to perform. If there are three dependent variables, you would divide .05 by 3 (which equals .017 after rounding) and you would use this new value as your cut-off. Differences between your groups would need a probability value of less than .017 before you could consider them statistically significant."

I have performed 4 mixed between-within subjects ANOVAs, aka split-plot ANOVA design (SPANOVA). For each ANOVA I have the same 2 groups, and 2 time points.

So, I performed a Bonferroni adjustment due to performing multiple tests, following Pallant's (2011) advice. I'm happy that dividing the alpha by 4 is what I should be doing for the 4 ANOVAs (the first question is, for me, for now, resolved).

QUESTIONS: How did you get to that 98.33...% CI number?
In my case, if I'm using the Bonferroni adjustment to account for the multiple ANOVAs, rather than comparisons, should I alter anything to do with the confidence intervals?
And, how do I do that?
What is the calculation?
Do I just write in a different number for the CI percentage, or do I adjust the upper and lower confidence intervals themselves?


Also, there's no mention of post-hoc tests in the guide that I'm following, but I'm starting that as a separate question/thread, for the purposes of being concise: http://www.talkstats.com/showthread...NOVA-aka-split-plot-ANOVA?p=174939#post174939
 
#6
Thank you. :)

Then if I were to use the Bonferroni adjustment, following Pallant's advice, and adjusted my alpha level, dividing it by 4, then I'd do 100% - (5%/4) = 98.75%?

Is there a text source for that calculation? (I have to cite/reference decisions like this).

And, do I just write in a different number for the CI percentage, or do I adjust the upper and lower confidence intervals themselves?
 

hlsmith

Less is more. Stay pure. Stay poor.
#8
Yeah, perhaps I was not clear enough. You would then look up the z-value for the adjusted confidence level (98.75% for your four tests, i.e. the value that puts .0125/2 in each tail) on the standard normal table, multiply it by the SE, and +/- it from the estimate.


Calling it 5% instead of 0.05 was misleading.


Does that seem right?
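In code, that lookup might go like this; a sketch only, assuming a normal sampling distribution, with an invented estimate and SE (scipy's norm.ppf stands in for the standard normal table):

[code]
# Bonferroni-adjusted CI from a z-table lookup (estimate and SE are invented).
from scipy.stats import norm

alpha, m = 0.05, 4                 # 4 tests, as in the question
estimate, se = 1.20, 0.40          # hypothetical point estimate and standard error

level = 1 - alpha / m              # 0.9875 -> report a 98.75% CI
z = norm.ppf(1 - (alpha / m) / 2)  # two-sided critical value, about 2.50
ci = (estimate - z * se, estimate + z * se)
print(level, z, ci)
[/code]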
 

Mean Joe

TS Contributor
#9
**** lost connection while trying to post.

It's interesting to me, because I've always heard of Bonferroni used to adjust the alpha level, but nothing being done to the ORs/CIs.

Let's say you do 1,000 tests. In one test, you get a p=.0001 with OR=1.50 [1.25, 1.80].
Doing a Bonferroni correction to alpha, you would say you do not have a statistically significant result. Yet clearly your OR is greater than 1, and 1 is outside of your CI.
I always thought it would be strange to see in a paper that the OR=1.50 [1.25, 1.80] is not significant.

Looking at Wikipedia now, I see the confidence level for the CI would be 1 - (alpha)/(number of tests), i.e. the confidence level increases with more tests.
Which doesn't seem right to me; shouldn't you have less confidence? You know you've done so many tests that the probability of rejecting a true null result has increased.

Returning to the example: we've done 1,000 tests, so our alpha = .00005. We've made 1,000 CIs, and we expect 999.95 of them to contain the true OR.
So we're even more confident (almost certainly confident) that the OR is clearly greater than 1.
But at the same time, we conclude that the result is not statistically significant. Because. P-values.
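To make the mechanics concrete, here's a Python sketch that backs the SE out of the quoted 95% CI (on the log-odds scale) and rebuilds the interval at the Bonferroni-adjusted level; all the figures are the invented ones above, not real data:

[code]
# Widening a 95% CI for an OR after Bonferroni correction over 1,000 tests.
# OR = 1.50 [1.25, 1.80] is the invented example from the post above.
import math
from scipy.stats import norm

n_tests = 1000
alpha = 0.05 / n_tests                  # 0.00005 per-test alpha
or_hat, lo, hi = 1.50, 1.25, 1.80

# Back the standard error out of the 95% CI, on the log scale.
se = (math.log(hi) - math.log(lo)) / (2 * norm.ppf(0.975))

# Rebuild the interval at the 1 - alpha = 99.995% level.
z = norm.ppf(1 - alpha / 2)             # about 4.06
adj_lo = math.exp(math.log(or_hat) - z * se)
adj_hi = math.exp(math.log(or_hat) + z * se)
print(f"99.995% CI: [{adj_lo:.2f}, {adj_hi:.2f}]")  # roughly [1.03, 2.19] here
[/code]

(With these invented numbers the widened interval happens to still sit just above 1, and the quoted p = .0001 doesn't correspond exactly to the quoted CI, so treat them as loose illustrations of the mechanics rather than a consistent dataset.)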
 

Dason

Ambassador to the humans
#10
Looking at Wikipedia now, I see the confidence level for the CI would be 1 - (alpha)/(number of tests), i.e. the confidence level increases with more tests.
Which doesn't seem right to me; shouldn't you have less confidence? You know you've done so many tests that the probability of rejecting a true null result has increased.
There are several things going on here. If we just consider that specific interval, it will be a \((1 - \alpha/n) \times 100\%\) confidence interval. But in the grand scheme of things, if we want to be 95% confident that all of the intervals contain the parameters of interest, then we are only 95% confident in that after modifying the intervals to be wider.
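A small simulation sketch (invented setup: ten independent z-tests with known SD and true null means) shows the two kinds of coverage being distinguished here:

[code]
# Per-interval vs. simultaneous ("all intervals at once") coverage, simulated.
# Setup invented for illustration: n independent means, known SD of 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, alpha, reps = 10, 0.05, 10_000
z95 = norm.ppf(1 - alpha / 2)             # unadjusted 95% critical value
z_adj = norm.ppf(1 - alpha / (2 * n))     # Bonferroni-adjusted critical value

x = rng.normal(0.0, 1.0, size=(reps, n))  # each column estimates a true mean of 0

covers95 = np.abs(x) < z95                # does each 95% CI contain the truth?
covers_adj = np.abs(x) < z_adj            # does each adjusted (wider) CI?

print(covers95.mean())                    # per interval: about 0.95
print(covers95.all(axis=1).mean())        # all 10 at once: only about 0.60
print(covers_adj.all(axis=1).mean())      # all 10 adjusted: at least about 0.95
[/code]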
 

hlsmith

Less is more. Stay pure. Stay poor.
#11
It is just adjusting your alpha, or level of significance, which gets used with both p-values and CIs. So with odds ratios, or when comparing multiple groups to the reference, you would correct the p-values and the CIs.
 
#12
I am still unsure on what to do, and what to report.

I am doing 4 ANOVAs, with a Bonferroni adjustment resulting in an adjusted alpha of 0.0125, or 0.013 rounded (0.05 / 4 = 0.0125).

What should I do re: reporting CIs? Should I report a different confidence interval percentage (97/98/99%, etc.), and if so, how do I work out what to report? And/or do I need to do anything to the upper and lower confidence interval numbers themselves?

Thanks for your help so far.

You can't over-simplify for me; I have no degree in maths or statistics.
 

Dragan

Super Moderator
#13
Very simply, use the bootstrap, i.e. construct 95% bootstrap confidence intervals for each of your research questions, which ostensibly appear to be orthogonal to each other; this also obviates the usual assumptions associated with the parametric ANOVA.

Subsequently, look at the confidence intervals and see whether they overlap... which is a conservative approach.
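For anyone unfamiliar with the idea, here's a bare-bones percentile bootstrap CI for a single mean in Python; the sample is fabricated just to make the sketch runnable, and with real data you would resample each group/outcome in turn:

[code]
# Percentile bootstrap 95% CI for a mean, from scratch (data fabricated).
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=30)  # stand-in for one group's scores

n_boot = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])    # 95% percentile interval
print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
[/code]

You would then build one such interval per research question and, as suggested, check whether the two groups' intervals overlap (a conservative comparison).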

Well, how convenient :)
 
#14
Thank you for your assistance, but could you explain that in simpler terms please? And/or provide any resources that do?

Very simply, use the bootstrap, i.e. construct 95% bootstrap confidence intervals for each of your research questions,
I have never been taught anything about bootstrapping, what it is, how or why to do it, and there appears to be a lot of information out there about different types/applications of it.

which ostensibly appear to be orthogonal to each other; this also obviates the usual assumptions associated with the parametric ANOVA.
I have no idea what you mean here. Can you please clarify?

Subsequently, look at the confidence intervals and see whether they overlap... which is a conservative approach.
And, consequently, because I don't understand that previous information, I have no context to understand this simpler sentence.