Overall test differs from lsmeans/pdiff results

#1
Hi all,

I ran PROC MIXED and I am not sure how to interpret the following results...

I am measuring starch content in 3 parts of the same plant (leaves, roots, husks), and I want to test the effect of one factor (3 levels) on starch content in the entire plant and in the individual parts.

When I run the program, the overall factor is significant (p = 0.0016). However, none of the comparisons of individual parts differ among the 3 levels of the factor...!

How is this possible?
Any ideas?

Thanks
 

hlsmith

Less is more. Stay pure. Stay poor.
#2
How is the variable formatted in the model (categorized as 3 groups, dummy coded)? By "comparisons of individual parts", do you mean requested pairwise comparisons rather than the model's beta coefficients against the reference group? If so, a multiple-comparison correction may be applied, adjusting the alpha to prevent rejecting the null when it is actually true.
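To see why such a correction matters, here is a minimal Python sketch (an illustration, not part of the SAS output) of the familywise error rate: the chance of at least one false positive when several independent comparisons are each run at alpha = 0.05 without adjustment.

```python
# Familywise error rate: probability of at least one false positive
# across m independent comparisons, each run at per-test alpha.
def familywise_error(alpha, m):
    return 1 - (1 - alpha) ** m

# 3 factor levels give 3 pairwise comparisons per plant part.
print(round(familywise_error(0.05, 3), 4))  # 0.1426
```

With three unadjusted comparisons the effective error rate is already about 14%, nearly triple the nominal 5%, which is why adjusted tests (like Tukey's) are stricter than the overall test.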
 
#3
Hi hlsmith,
The factor X is categorized as 3 groups (a, b, c). Although X is significant overall, none of the following pairwise comparisons are:

roots-a vs roots-b
leaves-a vs leaves-c
.
.
husks-b vs husks-a.

I use the following statement for the above comparisons:
lsmeans x/pdiff adjust=tukey;

So do you suggest trusting the results from the pairwise comparisons (since the alpha is corrected) rather than the overall test?

Thanks
 

hlsmith

Less is more. Stay pure. Stay poor.
#4
For those variables, yes. The adjustment corrects for pairwise error. If I compare our two mean weights, we use a probability of 1 in 20 for chance; if I then also compare my mean weight to someone else's, I need to correct the alpha so it is no longer 1 in 20, since there are multiple comparisons. Use the adjusted p-values.
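The "no longer 1 in 20" idea can be sketched numerically. This hedged Python example shows two common per-comparison alpha adjustments (Bonferroni and Šidák) for 3 pairwise comparisons; note that Tukey's HSD, which the ADJUST=TUKEY option actually uses, is based on the studentized range distribution rather than either of these, but the direction of the adjustment is the same.

```python
# Illustrative alpha adjustments for m pairwise comparisons,
# keeping the familywise error rate near the nominal 0.05.
m = 3          # pairwise comparisons among 3 factor levels
alpha = 0.05   # nominal per-family error rate ("1 in 20")

bonferroni = alpha / m                  # simple, slightly conservative
sidak = 1 - (1 - alpha) ** (1 / m)     # exact for independent tests

print(round(bonferroni, 4))  # 0.0167
print(round(sidak, 4))       # 0.017
```

Each individual comparison must now clear a stricter threshold (about 0.017 instead of 0.05), which is how an overall p = 0.0016 can coexist with no significant adjusted pairwise differences.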