To report or not to report post hocs for a 'backup' test

Esme

New Member
#1
Short version - I have run some non-parametric tests (Friedman) as a sort of backup to an ANOVA because of an assumption violation - should I report the pairwise comparisons for these tests in addition to the planned comparisons I did following the ANOVA?

TIA, E
-----------------------------------------------------------------------
Longer version - I have run a 2 (groups) x 3 (conditions) mixed repeated measures ANOVA on response time data. For one of the groups, the distributions in all three conditions violated the assumption of normality, so I ran a couple of Friedman tests (one for each group). The ANOVA showed a main effect of condition and an interaction between group and condition (the main effect of group was not significant). Planned comparisons showed that both groups scored significantly higher in Condition A than in Condition B, but only one group scored significantly higher in Condition C than in Condition B. Beyond finding that the Friedman tests revealed significant effects of condition for both groups, should I also report the pairwise comparisons that SPSS provides with the Friedman results, to say whether they corroborate the planned comparisons done following the ANOVA?

Does the main test affect the outcome of the follow-ups, or is, for example, a pairwise comparison a pairwise comparison no matter whether it follows an ANOVA or a Friedman test, so long as the data are the same?
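For concreteness, here is roughly the non-parametric side of what I mean, sketched in Python with scipy rather than SPSS. The data are placeholders, and the pairwise Wilcoxon signed-rank follow-ups are just one common way of doing the comparisons - I am not assuming it is the same procedure SPSS uses for its pairwise output after a Friedman test.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Placeholder response times (ms) for one group:
# rows = participants, columns = conditions A, B, C
rng = np.random.default_rng(0)
rt = rng.normal(loc=[520, 480, 510], scale=40, size=(20, 3))

# Omnibus Friedman test across the three conditions for this group
chi2, p = friedmanchisquare(rt[:, 0], rt[:, 1], rt[:, 2])
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")

# Pairwise follow-ups: Wilcoxon signed-rank tests, Bonferroni-corrected
pairs = [("A", "B", 0, 1), ("A", "C", 0, 2), ("B", "C", 1, 2)]
for name1, name2, i, j in pairs:
    w, p_pair = wilcoxon(rt[:, i], rt[:, j])
    p_adj = min(p_pair * len(pairs), 1.0)
    print(f"{name1} vs {name2}: W = {w:.1f}, corrected p = {p_adj:.4f}")

The omnibus test and the pairwise follow-ups are separate tests on the same data, which is exactly what I am unsure about reporting twice.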

Many thanks in advance,
E
 

noetsi

Fortran must die
#2
It depends on who you are reporting to. In a journal, where space is at a premium, I suspect you would normally only report that you ran the tests (and why) and whether or not they supported your ANOVA results. Of course, if the non-parametric results are far from your ANOVA results, the reviewer is going to ask why you chose to go with the ANOVA, so you have to have a justification.
 

Esme

New Member
#3
Ah. Of course, I should have said - I'm writing a PhD thesis. So in that case, would you suggest I include both the planned comparisons from the ANOVA and the pairwise comparisons SPSS supplies with the Friedman test, or is that overkill? I wasn't sure whether they were essentially the same test or completely different. The Friedman tests do jibe with the ANOVA - if I only reported the main statistic and p value from each of them and pointed out that they back up the story the ANOVA is telling, would it be okay to leave it there, or might the reviewers wonder why I didn't report post hocs?

Thanks very much for your advice - it is very much appreciated!
 
#4
I'm writing a thesis (single subject, based on within-session measures), and I would only report the results I was basing my conclusions on in relation to the hypothesis. For example, I'm running three experiments and using a Wilcoxon signed-rank test, because at least one of the times I used the test the assumption of normality was violated (hence the non-parametric test rather than a t-test). Economy of writing and word limits are really important, so only report what you are actually using, not irrelevant peripheral background. It's a bit like referencing 300 books that all say the same thing.

Of course, if you're doing a test for significance, that's one thing; if you want to assess more specifics, like the size of the difference, because the hypothesis calls for it, then do it. The first two of my experiments had significant increases or no significant changes rather than decreases (the hypothesis), so there was no need to go further, but in the third there were significant decreases, so now I need to talk more about whether they are big enough to be 'taken seriously'. So it really depends on your original argument and what constitutes support for the hypothesis.
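For example, here is a rough sketch in Python with scipy, using made-up numbers, of what 'going further' can look like: the Wilcoxon signed-rank test gives the significance, and a matched-pairs rank-biserial correlation is one common way of putting a size on the difference (my choice of effect size for the illustration, not the only one).

import numpy as np
from scipy.stats import wilcoxon, rankdata

# Placeholder within-session measures for one experiment: baseline vs intervention
rng = np.random.default_rng(1)
baseline = rng.normal(loc=50, scale=10, size=15)
intervention = baseline - rng.normal(loc=5, scale=8, size=15)  # simulated decrease

# Significance: Wilcoxon signed-rank test on the paired scores
w, p = wilcoxon(baseline, intervention)
print(f"Wilcoxon W = {w:.1f}, p = {p:.4f}")

# Size of the difference: matched-pairs rank-biserial correlation,
# computed from the signed ranks of the non-zero differences
d = baseline - intervention
d = d[d != 0]  # the signed-rank test drops zero differences too
ranks = rankdata(np.abs(d))
r = (ranks[d > 0].sum() - ranks[d < 0].sum()) / ranks.sum()
print(f"Rank-biserial r = {r:.2f}")

Whether an r of that size is 'big enough to be taken seriously' is then back to the original argument and the hypothesis.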
 