I'm helping a friend with the statistics used in her dissertation, and she needs to look at controlling Type I error.
She has two main predictor variables X & Y, and one outcome Z. Her main goal was to test the significance of the interaction X*Y as a predictor, but she found that in some cases X alone is a significant predictor -- sometimes even when X*Y is not.
Benjamini & Hochberg (1995) outlined a procedure for controlling the false discovery rate (FDR), and Benjamini & Yekutieli (2001) later extended it to handle dependence (positive or negative correlation) among the tests. We're thinking the latter would be most appropriate for this study, but we're not sure which hypotheses to include in the line-up of p-values. Should we always include every X*Y, X, and Y test for a significant contribution?
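For anyone who wants to see concretely what we're doing: here's a minimal sketch of the Benjamini-Yekutieli step-up procedure in pure Python. The p-values at the bottom are just made-up placeholders for the X*Y, X, and Y tests, not my friend's actual results.

```python
def by_reject(pvals, q=0.05):
    """Benjamini-Yekutieli (2001) step-up procedure.

    Sort the m p-values, find the largest rank k such that
    p(k) <= k * q / (m * c(m)), where c(m) = sum_{i=1..m} 1/i
    is the correction that makes the procedure valid under
    arbitrary dependence among tests. Reject hypotheses with
    rank <= k. Returns True/False flags in the original order.
    """
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))  # dependence correction
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p

    # largest rank whose p-value falls under its BY threshold
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * q / (m * c_m):
            k_max = rank

    # reject everything at or below that rank
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

# hypothetical p-values for the X*Y, X, and Y tests
pvals = [0.003, 0.04, 0.20]
print(by_reject(pvals))  # -> [True, False, False]
```

Note that because of the c(m) factor, BY is noticeably more conservative than plain BH, so the choice of which tests go into the list really does matter.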
I just want to make sure we're on the right track, so if anyone is familiar with controlling FDR in this way please let me know! Thanks so much,