Multiple comparisons to a reference value - which test?

#1
Hi,

If I wanted to compare a sample to a fixed reference value, I would use the one-sample t-test (or the nonparametric one-sample Wilcoxon signed-rank test).

However, I don't know which test I should use if I want to compare several different samples to that single reference value. Dunnett's multiple comparison test, for example, is meant for comparison against a reference sample/control (consisting of replicates), not against a single reference value.

Can anyone tell me which statistical test would be appropriate here? The question to be answered by the test(s) is whether the samples differ significantly from the reference value.
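
Just to make the single-sample case concrete, this is the kind of comparison I mean (a minimal Python sketch; the measurements and the reference value of 100.0 are made-up numbers, purely for illustration):

```python
# One sample compared to a fixed reference value with a one-sample t-test,
# plus the Wilcoxon signed-rank test as the nonparametric alternative.
# All numbers are hypothetical placeholders.
from scipy import stats

ref = 100.0                                    # fixed reference value (example)
sample = [98.1, 101.4, 99.3, 100.8, 97.6]      # one sample of measurements

t_stat, p_t = stats.ttest_1samp(sample, popmean=ref)
w_stat, p_w = stats.wilcoxon([x - ref for x in sample])

print(f"one-sample t-test:         t = {t_stat:.3f}, p = {p_t:.4f}")
print(f"Wilcoxon signed-rank test: W = {w_stat:.3f}, p = {p_w:.4f}")
```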
 

hlsmith

Omega Contributor
#2
You would likely use the same ol' test you normally would, but repeat it for the number of categories you have. You then adjust your alpha (level of significance) with whichever correction you want. The most common is Bonferroni: given that correction, you would change your alpha cut-off to 0.05/number of tests. I just used 0.05 as an example; you select whichever value you were originally gonna use. If you are using 95% confidence intervals rather than p-values, you do the same thing: with 5 comparisons, for example, you would calculate 99% CIs instead.
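
Roughly like this (a sketch with placeholder p-values, just to show the arithmetic of the cut-off):

```python
# Bonferroni: keep the individual p-values, but compare each of them to
# alpha / number_of_tests instead of alpha. The p-values are placeholders.
p_values = [0.012, 0.048, 0.003, 0.200, 0.030]   # one per sample/comparison
alpha = 0.05                                     # whatever level you planned to use
m = len(p_values)                                # number of comparisons

cutoff = alpha / m                               # here 0.05 / 5 = 0.01
for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p <= cutoff else "not significant"
    print(f"comparison {i}: p = {p:.3f} -> {verdict} at cut-off {cutoff:.3f}")

# The confidence-interval version: with 5 comparisons, build each interval
# at the 1 - 0.05/5 = 99% level instead of 95%.
```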
 
#3
I understand. Thanks for your answer. Bonferroni is quite conservative when lots of comparisons are made. I know of several other multiple comparison tests, but I don't know which adjustment/correction can also relatively easily be performed with Microsoft Excel. Is there any less conservative correction that can be performed manually within Excel (similar to the Bonferroni correction)?
 

hlsmith

Omega Contributor
#4
Conservative means fewer false positives and erring on the side of caution. You should pick one before ever applying it; otherwise you bias things.

Benjamini-Hochberg is popular as well.
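
A rough sketch of the Benjamini-Hochberg steps, again with placeholder p-values; since it is just sort, rank, and compare, the same procedure can be reproduced in a spreadsheet:

```python
# Benjamini-Hochberg step-up procedure: controls the false discovery rate and
# is less conservative than Bonferroni. The p-values are placeholders.
p_values = [0.012, 0.048, 0.003, 0.200, 0.030]
q = 0.05                                 # target false discovery rate
m = len(p_values)

# Sort the p-values ascending, remembering their original positions.
order = sorted(range(m), key=lambda i: p_values[i])
sorted_p = [p_values[i] for i in order]

# Find the largest rank k with p_(k) <= (k / m) * q ...
k_max = 0
for k, p in enumerate(sorted_p, start=1):
    if p <= (k / m) * q:
        k_max = k

# ... and declare the k_max smallest p-values significant.
rejected = set(order[:k_max])
for i, p in enumerate(p_values):
    verdict = "significant" if i in rejected else "not significant"
    print(f"comparison {i + 1}: p = {p:.3f} -> {verdict}")
```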
 
#5
Conservative means fewer false positives and erring on the side of caution.
That is right. But it also means that I might dismiss real differences because of it.

You should pick one before ever applying it; otherwise you bias things.
I know, for example, that one should always formulate a hypothesis before applying statistics, not afterwards.
But what do you mean by the above sentence?
When doing multiple comparison tests, I select one from the options that are available within my main statistics software package (https://www.graphpad.com/guides/pri..._of_multiple_comparison.htm?toc=0&printWindow). The software often gives recommendations. Anything wrong with that approach?

Benjamini-Hochberg is popular as well.
OK, thanks. I will see if I can manually implement this with Excel.
 

hlsmith

Omega Contributor
#6
Yeah, I was just referring to having an a priori plan, so that you don't end up selecting the correction that serves your hypotheses best after running a couple of different ones. I am sure you know this; I just try to hammer it home over and over again, since it is a contributor to the reproducibility crisis in science. Good luck.