Is Bonferroni correction necessary?

#1
Hi,
I would like to ask you for a little help :)
I have 4 types/groups of artificial lenses (named BY, Softec, ZA, ZCB). For each type, a calculation was done using 3 different formulas (1, 2, 3). I have 30 subjects in each lens type.
I would like to compare the means between ZCB and the other lenses for each formula...
Example:
1. ZCB_Formula 1 vs BY__Formula 1
2. ZCB_Formula 1 vs Softec_Formula 1
3. ZCB_Formula 1 vs ZA_Formula 1
.
.
Then the same for formulas 2 and 3.

I have done a t-test for each pair.
My question is: is it necessary to use a Bonferroni correction in this case? There are 3 measurements... so should I multiply the p-value from each t-test by 3?
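(For concreteness, a minimal sketch of those per-pair tests in Python, with simulated numbers standing in for the attached data; Welch's t-test is used so equal variances are not assumed.)

```python
# Sketch of the ZCB-vs-other comparisons for one formula. The values are
# simulated placeholders; the real ones would come from the attached file.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
zcb = rng.normal(0.10, 0.35, 30)                 # ZCB, Formula 1 (simulated)
others = {name: rng.normal(0.10, 0.35, 30)       # BY, Softec, ZA, Formula 1
          for name in ["BY", "Softec", "ZA"]}

for name, values in others.items():
    t, p = stats.ttest_ind(zcb, values, equal_var=False)  # Welch's t-test
    print(f"ZCB vs {name} (Formula 1): t={t:.2f}, p={p:.4f}")
```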

I'm attaching the file for better understanding.

Thank you for your answers.
Best regards
IP
 

#2
The 0.05 is designed to give you 95% protection against falsely claiming that there is a difference. Even if there is no real difference, about 5% of the time your test will show that there is (falsely). It's a bit like Russian roulette with a 20-chambered revolver holding 1 bullet. Every now and again you will get a p-value less than 0.05 when there is no real difference and shoot yourself in the foot.
In your experiment you plan to do 18 tests (I think). Even if there is no real difference anywhere you are very likely to find at least one significant difference (and probably more). You are playing the roulette game 18 times so you have 18 chances to shoot yourself in the foot instead of one.
It seems cruel, but in my opinion you need to make some adjustment if a false positive has any sort of consequence. Strictly, you should set the significance level to 0.05/18 ≈ 0.003, but this may be too strict. I'd use 0.01 and even then be cautious.
It's a problem that often happens if you set your net too wide and test all over the place.
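As a concrete sketch of that adjustment (the p-values below are placeholders, not values from the attached file): dividing the significance level by the number of tests is equivalent to multiplying each p-value by that number and capping it at 1.

```python
# Sketch of a Bonferroni adjustment over m tests: either compare each raw
# p-value to alpha/m, or (equivalently) multiply each p-value by m, cap it
# at 1, and compare to alpha. raw_p would hold all 18 values in this design;
# only a few placeholder values are shown here.
raw_p = [0.001, 0.004, 0.012, 0.031, 0.048, 0.110]   # ... and so on, 18 in total
m = 18
alpha = 0.05
threshold = alpha / m        # 0.05 / 18 ~= 0.0028

for p in raw_p:
    adjusted = min(p * m, 1.0)
    print(f"raw p={p:.3f}  Bonferroni-adjusted p={adjusted:.3f}  "
          f"significant: {p < threshold}")
```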
 

obh

Active Member
#3
I also think that a correction is necessary, but the Bonferroni correction is too conservative, as it assumes all the tests are totally independent.
You could use the Holm correction instead of the Bonferroni correction, or use the Tukey HSD test.
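A minimal sketch of the Holm correction next to Bonferroni, using statsmodels on a set of placeholder p-values (not values from the attached file):

```python
# Holm is a step-down procedure: it works through the sorted p-values and is
# never less powerful than Bonferroni while still controlling the
# family-wise error rate.
from statsmodels.stats.multitest import multipletests

raw_p = [0.001, 0.004, 0.012, 0.020, 0.031, 0.048, 0.060, 0.110, 0.250]
_, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
_, p_bonf, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")

for raw, h, b in zip(raw_p, p_holm, p_bonf):
    print(f"raw={raw:.3f}  holm={h:.3f}  bonferroni={b:.3f}")
```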
 
#4
Yes. Multiple p-values are a thorny problem for statisticians, with many suggested solutions, none of which really solves the problem. The Holm correction helps a little, but not a great deal in practice, and Tukey's HSD is for multiple comparisons after a single ANOVA and can't really be applied to the 18 p-values you will generate.
 

obh

Active Member
#5
Hi Katxt,

Yes, usually when you can see many solutions to the same problem, none of them is really a good one...

If I understand the question correctly, ivipopi checks all the combinations for each formula, so you can run a one-way ANOVA for each formula with 4 groups (BY, Softec, ZCB, ZA),
followed by the Tukey HSD test for each formula (see the sketch at the end of this post).
You may also run only the Tukey HSD test for each formula, without the one-way ANOVA (depending on the main question...).

Since you run 3 sets of Tukey HSD tests, you may also need to correct the significance level for each set... but dividing by 3 instead of 18...
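A minimal sketch of that suggestion for a single formula, with simulated numbers standing in for the 30 measurements per lens (repeat the same steps for formulas 2 and 3):

```python
# One-way ANOVA across the four lens groups for one formula, followed by
# Tukey's HSD. The data is simulated; the real values would come from the
# attached file.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {name: rng.normal(0.10, 0.35, 30)
          for name in ["BY", "Softec", "ZA", "ZCB"]}

F, p = stats.f_oneway(*groups.values())        # omnibus test for this formula
print(f"One-way ANOVA: F={F:.2f}, p={p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 30)    # group label for each value
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```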
 
#6
That sounds like a good idea.
Rereading the original post, it looks as if each test involves ZCB. That comes to 9 tests rather than 18. This is like post hoc tests against a control, so perhaps Dunnett's test? Again, you would need to make each set more stringent.
This is all rather involved without much improvement over Bonferroni with significance at 0.05/9 ≈ 0.006.
I would suggest you set the significance level at 0.01 before you start (probably a little liberal, but it is a nice round number, just as 0.05 is) and explain in any report why this has been done. Then people can make their own decisions over any borderline p-values.
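A minimal sketch of Dunnett's test with ZCB as the control, run one formula at a time; it needs SciPy 1.11 or newer, and the numbers are simulated stand-ins for the real data:

```python
# Dunnett's test compares each treatment group against a single control
# (here ZCB) while controlling the family-wise error rate within the family.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
zcb = rng.normal(0.10, 0.35, 30)                 # control group (simulated)
others = {name: rng.normal(0.10, 0.35, 30)
          for name in ["BY", "Softec", "ZA"]}

res = stats.dunnett(*others.values(), control=zcb)
for name, p in zip(others, res.pvalue):
    print(f"ZCB vs {name}: p={p:.4f}")
# Repeating this for the three formulas gives three families of tests, so a
# further tightening of the significance level (e.g. 0.05/3) may still apply,
# as discussed above.
```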