- Thread starter Cameron

Dear all,

An ANCOVA was used to compare the performance of three groups on a psychological test. There are 19, 19, and 15 subjects in the groups. Some of the p-values are around 0.092. Is it a good idea to take them into consideration and report them as "marginally significant" in the manuscript?

A hypothesis test is either significant or it is not.

In either case, I think a p-value near .1 constitutes weaker evidence against the null hypothesis. If you or someone else conducted the test, it would be dishonest not to report it, especially if the only reason for omitting it is its p-value. This is part of why research is in a reproducibility and replication crisis.

Dear Ondansetron,

Actually, I want to know: when there is a p-value like 0.092 and only 15 subjects in some of the groups, is it possible to take it as evidence in favor of a relationship?

It depends on the alpha level you set prior to seeing your data and conducting analyses. What alpha level did you choose ahead of time? What do the graphs look like?
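To make the alpha-threshold point concrete, here is a small Python sketch (my own illustration, not anything from the original study). Under a true null hypothesis, p-values are uniformly distributed, so if you also count 0.05 < p < 0.10 results as "marginal" findings, you roughly double your false-positive rate relative to a pre-set alpha of 0.05. The simulation uses a simple two-group comparison with a normal approximation for the p-value, all with the standard library:

```python
import math
import random

def normal_p_value(z):
    """Two-sided p-value from a standard normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def one_null_experiment(rng, n=50):
    """Simulate two groups with identical means; return the test's p-value."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    # Estimate the variances the way a real analysis would,
    # even though we know the true variance is 1 here.
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = math.sqrt(var_a / n + var_b / n)
    return normal_p_value((mean_a - mean_b) / se)

rng = random.Random(42)
trials = 20_000
p_values = [one_null_experiment(rng) for _ in range(trials)]

# False-positive rate with the pre-set alpha vs. the "marginal" habit.
strict = sum(p < 0.05 for p in p_values) / trials
lenient = sum(p < 0.10 for p in p_values) / trials

print(f"rejected at p < 0.05: {strict:.3f}")   # close to 0.05
print(f"rejected at p < 0.10: {lenient:.3f}")  # close to 0.10
```

Both groups are drawn from the same distribution, so every "significant" result here is a false positive; that is why fixing alpha before looking at the data matters.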

I think part of the issue is pressure to publish, but part of it is a lack of understanding, as I have seen firsthand. I literally had a well-respected researcher (a master in his primary field, but not at all in stats) refer to an observational study, without any attempt to adjust for confounding, as "bullet proof" because "...Fisher's exact test is so robust. You don't need fancy, technical methods such as logistic regression..." The guy was clearly missing out on many things.

I think it is okay to publish after a single study as long as conclusions are tempered to reflect the quality and strength of evidence against a null hypothesis. This would just allow readers to see groundwork that has been done and how it has been done and allow for possible improvement on or replication of a study.

I am more inclined these days to agree with @Miner: you cannot completely disregard a p-value of 0.0501. There is information there. Should you make a decision based on it? That is up to you. You also have to keep in mind possible study-design problems (e.g., misclassification, selection bias), and that all models are wrong and only a proxy for understanding the phenomenon. Given these things, is there truly no effect, or is it simply lost in the noise? Many biases push estimates toward the null, and repeat studies are always needed.