Does anybody know why the statistical significance of Roy's largest root does not imply the statistical significance of the other three popular statistics used to test for differences in a MANOVA analysis (Wilks' Lambda, Pillai-Bartlett Trace, and Hotelling-Lawley Trace)?
Are the F distributions into which they get transformed different, even though they come from the same data? That would make me wonder whether the null hypothesis being tested through each statistic is different or not...
Usually you will see Roy's GR significant when the others are not. Is this the case? If so - don't pay attention to it - you have no significance.
Thank you! That's a good start... although it still doesn't help me understand why. Mine's more of a theoretical question than an applied one.
I know the whole point of the distributions of multivariate statistics is to get an approximation to an F and use it to test for significance. Now, I'm a little more familiar with Wilks' lambda, and I do sort of follow the logic of using determinants to create a ratio of volumes: the volume of what I believe is a higher-dimensional "error variance" parallelepiped relative to a "total variance" volume. If I follow things correctly, the eigenvalues of the covariance matrix define the coordinates of how big the error-variance volume can be relative to the total variance (the worst-case scenario being a ratio of 1).
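That determinant ratio can be sketched numerically. Below is a hypothetical two-group, two-variable example with made-up numbers; Wilks' lambda is |E| / |H + E|, the "error volume" over the "total volume":

```python
import numpy as np

# Hypothetical 2-group example: 3 observations per group, 2 response variables.
X1 = np.array([[2.0, 3.0], [3.0, 4.0], [5.0, 4.0]])   # group 1
X2 = np.array([[6.0, 8.0], [7.0, 7.0], [8.0, 9.0]])   # group 2
X = np.vstack([X1, X2])
grand_mean = X.mean(axis=0)

# E: within-group (error) SSCP matrix; H: between-group (hypothesis) SSCP matrix.
E = np.zeros((2, 2))
H = np.zeros((2, 2))
for G in (X1, X2):
    m = G.mean(axis=0)
    d = G - m
    E += d.T @ d                                   # accumulate within-group scatter
    diff = (m - grand_mean).reshape(-1, 1)
    H += G.shape[0] * (diff @ diff.T)              # accumulate between-group scatter

# Wilks' lambda: ratio of generalized variances, |E| / |H + E|.
wilks = np.linalg.det(E) / np.linalg.det(H + E)
print(wilks)
```

Small values of lambda mean the error volume is small relative to the total, i.e. the groups differ.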
Roy's largest root, I presume, gives us some sort of pointer to the dimension (whose coordinate is already being taken into account by Wilks' lambda) where the maximum differences are. However, it seems as if the other dimensions of the data, where there may not be as much "signal-to-noise", somehow water these differences down; same in the case of the Pillai-Bartlett trace, where all the eigenvalues get combined. That's why I sort of said to myself: "OK, I can see why maybe one dimension by itself might not be salient enough to yield significant findings, and hence I find it reasonable that Wilks' lambda/Pillai's trace may be significant when Roy's largest root is not. However, when Roy's largest root is significant, that same largest root is being used in the calculation of Wilks' lambda/Pillai's trace. So why is it that that apparently important dimension is being discarded by how the analysis is conducted?"
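To see that "watering down" concretely, here is a sketch using made-up eigenvalues of E^{-1}H (in a real analysis these come from your H and E matrices): Roy's statistic keeps only the largest root, while the other three fold in the small roots as well, pulling their values toward "no effect":

```python
import numpy as np

# Hypothetical eigenvalues (roots) of E^{-1}H: one strong dimension, two weak ones.
eigs = np.array([1.8, 0.15, 0.02])

wilks     = np.prod(1.0 / (1.0 + eigs))   # product over ALL roots (small = significant)
pillai    = np.sum(eigs / (1.0 + eigs))   # sum over ALL roots
hotelling = np.sum(eigs)                  # sum of ALL roots
roy       = eigs.max()                    # ONLY the largest root

print(wilks, pillai, hotelling, roy)
```

The largest root is not discarded by the other statistics; it is averaged in with the near-zero roots, which is exactly why Roy's test can flag a single strong dimension that the combined statistics dilute.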
My initial idea was that the marginal F distributions to which those statistics get transformed are probably different in each case... although I stopped myself there, because it feels to me that would imply the hypothesis being tested in each case is different, and if that is the case, well... what would be the correct way to state the null hypothesis in a MANOVA analysis? This is when I decided to get expert help, heh.
Now, with regards to your answer: do you happen to have any reference/author/book where I can find that? Thanks!
(PS: I did my undergraduate degree in mathematics but never really took statistics, and now that I need it I'm scrambling to self-teach, lolz.)
Yeah - good questions. I'll let some of the more theoretical members address this for you. But I'll tell you that you're right: each of the four tests has a different F statistic and degrees of freedom. Roy's GR is an "upper bound" and should not be trusted when it alone is significant. It is an upper bound because it uses only the largest eigenvalue (root), like you mentioned.
I very much appreciate the time you took in replying to my question. If you happen to know the reference for the claim that when Roy's largest root is significant but the other ones are not, there really is no significance, I would greatly appreciate it. Having at least one journal article/book usually helps me go to its reference section so I can start jumping from article to article looking for information.
thanks!
Here is David Garson's site on MANOVA. If you have not seen it before - it will be an excellent statistical review for you. Go back to the statnotes homepage and see what all he offers.
http://faculty.chass.ncsu.edu/garson/PA765/manova.htm
I use Tabachnick and Fidell: Using Multivariate Statistics, which is pretty good in that it deals with everything from data cleaning, assumptions, derivations, hypotheses, software (spss & sas), and APA write-ups of the results. It reviews much more than just MANOVA. Most university libraries carry it.
My suggestion is for you to take a small example and run through the H and E matrices, eigenvalues, etc. Compute each of the four statistics (Wilks, etc.) by hand, along with the DFs and F tests. Then compare everything with your software output. A great problem would be one with an interaction (a two-way MANOVA); then you would have to compute three different SETS of tests.
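As one starting point for that hand computation, here is a hedged sketch of Rao's F approximation for Wilks' lambda (exact when p <= 2 or the hypothesis df <= 2; the other three statistics use different approximations, which is where the differing F tests come from). The lambda, p, and df values below are made up purely for illustration:

```python
import numpy as np
from scipy.stats import f

# Made-up inputs: in practice, compute wilks from your H and E matrices.
wilks = 0.30    # Wilks' lambda
p     = 3       # number of response variables
dfh   = 2       # hypothesis df (groups - 1)
dfe   = 27      # error df (N - groups)

# Rao's approximation: transform lambda to an approximate F statistic.
denom = p**2 + dfh**2 - 5
t = np.sqrt((p**2 * dfh**2 - 4) / denom) if denom > 0 else 1.0

df1 = p * dfh
df2 = t * (dfe - (p - dfh + 1) / 2) - (p * dfh - 2) / 2

lam_t = wilks ** (1 / t)
F = (1 - lam_t) / lam_t * df2 / df1
p_value = f.sf(F, df1, df2)          # upper-tail probability of F(df1, df2)
print(F, df1, df2, p_value)
```

Running this by hand and comparing F, df1, and df2 against SPSS/SAS output, as suggested above, is a good way to see that each statistic is mapped to its own F distribution.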
I'm also sure someone else on the list will soon help out. Good luck.