- Thread starter javedbtk
- Start date
- Tags bonferroni correction hypothesis test

In my example, I have about 50 statistical analyses, so is it feasible to use the Bonferroni correction in this case?

0.05/50 will be a very small value,

and it will be impossible for one type of algorithm to significantly outperform the other. Thanks
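For concreteness, the arithmetic behind this worry can be sketched in a few lines of Python (the numbers simply mirror the example above):

```python
# Bonferroni correction: to keep the family-wise error rate at alpha
# across m tests, each individual test is judged against alpha / m.
m = 50                      # number of statistical analyses
alpha = 0.05                # desired family-wise error rate
per_test_alpha = alpha / m  # 0.05 / 50 = 0.001

# A single comparison now needs p < 0.001 to be declared significant,
# which is the "very small value" the question worries about.
print(per_test_alpha)
```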


Well, it depends. In some genome studies, for example, this would be a big value.

Would this be a problem for you?

Maybe some information about your study (topic, research questions, study design, sample size, practical and/or

theoretical relevance) would be useful.

With kind regards

Karabiner


Yes, it would be a problem. For example, if algorithm A significantly outperforms B with a p-value of 0.0001, it means algorithm A is quite a bit better than B, but after the Bonferroni correction we would end up concluding that no algorithm performed better than the other.

In my example, I have about 50 statistical analyses,

For example, algorithm A significantly performs better than B

Algorithms and treatments are different things.

So, what is it? And what criteria do you want to use?

Yes, it would be a problem. For example, if algorithm A significantly outperforms B with a p-value of 0.0001, it means algorithm A is quite a bit better than B,

Small p values do not indicate a large effect. Usually, they are due to large sample sizes.

Maybe some information about your study (topic, research questions, study design, sample size, practical and/or

theoretical relevance) would be useful.

With kind regards

Karabiner

Last edited:
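The point about sample size can be illustrated with a small sketch (a hypothetical one-sample z-test, not part of the original discussion): the very same negligible standardized effect yields a tiny or a large p-value depending only on n.

```python
import math

def one_sample_z_p(effect_size: float, n: int) -> float:
    """Two-sided p-value of a one-sample z-test for a standardized
    mean difference `effect_size` observed in a sample of size n."""
    z = effect_size * math.sqrt(n)
    # 2 * (1 - Phi(|z|)) expressed via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

# The same negligible effect (d = 0.01), two different sample sizes:
print(one_sample_z_p(0.01, 1_000_000))  # tiny p-value: "highly significant"
print(one_sample_z_p(0.01, 100))        # p close to 1: not significant at all
```

Both calls describe an effect most fields would call practically zero; only the sample size differs.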

No, it just tells you that you can reject the null hypothesis "the difference between A and B is = 0.00000000000000000000000000".



Indeed, but at least it shows that A and B differ significantly.

It also sounds, from your OP, like your goal is to have something be significant, since your concern is that [one won't be able to outperform the other] if you use a smaller alpha level per test. This should not be your goal.

This is a pretty useless thing to explain, in general. P-values have limited information to convey and it's a misconception that "significance" is some targeted endpoint with tons of value.


Due to the nearly complete lack of information about the study, we don't know the research design, nor even the measurement level of the dependent variable, nor why and for what purpose the study is undertaken. It is difficult to suggest solutions when the problem is described so poorly.

Maybe you can perform all comparisons in one analysis (perhaps repeated measures ANOVA or mixed ANOVA or multilevel modeling, if the dependent variable is interval scaled), and attach 95% confidence intervals to the estimated parameters. Such confidence intervals will give you an impression about how reliable the estimations are.

What you then consider a "significant" difference (in the sense of important/relevant/remarkable..., I suppose?) will be up to your own judgement. No statistical procedure can take this task off your hands.

With kind regards

Karabiner


Last edited:
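The confidence-interval suggestion above can be sketched as follows; the fold-level accuracy differences here are made up purely for illustration, and the normal approximation is only a rough reliability check:

```python
import math
import statistics

def mean_diff_ci(diffs, z=1.96):
    """Approximate 95% confidence interval for the mean of paired
    differences (normal approximation)."""
    n = len(diffs)
    center = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return center - z * se, center + z * se

# Made-up paired accuracy differences (algorithm A minus B) on 10 folds:
diffs = [0.021, 0.034, 0.008, 0.015, 0.027, 0.019, 0.031, 0.012, 0.024, 0.018]
lo, hi = mean_diff_ci(diffs)
print(f"A - B: [{lo:.4f}, {hi:.4f}]")
# If the interval excludes 0 *and* its endpoints are large enough to matter
# in practice, the difference is both reliable and relevant.
```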

Are you evaluating two treatments or are you evaluating two algorithms?

3 algorithms are compared pairwise to assess their predictive accuracy. These comparisons are repeated for 4 different datasets. So for each dataset there are 3 comparisons, but overall there are 12, i.e. 3 pairwise comparisons * 4 datasets.

Now, is it possible to divide 0.05 by 3 (the comparisons within each dataset) rather than by 12 (the comparisons across all datasets and algorithms)?

Thanks for understanding
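The counting in this post can be sketched as follows (algorithm labels are hypothetical); which divisor is appropriate depends on which family of tests you want the error rate controlled over, per dataset or across all datasets:

```python
from itertools import combinations

algorithms = ["A", "B", "C"]               # hypothetical labels
n_datasets = 4

pairs = list(combinations(algorithms, 2))  # 3 pairwise comparisons per dataset
n_total = len(pairs) * n_datasets          # 12 comparisons overall

alpha = 0.05
per_dataset = alpha / len(pairs)  # 0.05 / 3: controls the family-wise error
                                  # rate within each dataset's 3 tests only
overall = alpha / n_total         # 0.05 / 12: controls it across all 12 tests
```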

I hope you agree that this is absurd.

Usually one creates a model, and the model should fit the data.

Then you choose an estimator that is appropriate, e.g. least squares or maximum likelihood.

Then you choose an algorithm that can compute the estimator.

Of course you can call all three steps an "algorithm" but the data must still fit the model and the estimator must be relevant.

- - -

Besides, four different data sets are not many. And you can't really define a population from which the data sets are taken. So what are you doing inference about?