
- Thread starter will22

are the F-distributions into which they get transformed different, even though they come from the same data? that would make me wonder whether the null hypothesis being tested through each statistic is different or not...

i know the whole point of the distributions of multivariate statistics is to get an approximation to an F and use it to test for significance. now, i'm a little more familiar with wilks' lambda, and i do sort of follow the logic of using determinants to form the ratio of the volume of an "error variance" parallelepiped to the volume of the higher-dimensional "total variance" parallelepiped. if i follow things correctly, the eigenvalues of E⁻¹H (error SSCP matrix inverse times hypothesis SSCP matrix) determine how big the error-variance volume is relative to the total variance, where the worst-case scenario (no effect) would be a ratio of 1.
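The two views of Wilks' lambda mentioned above (a ratio of determinants, and a product over the eigenvalues of E⁻¹H) can be checked numerically. A minimal numpy sketch, with made-up hypothesis (H) and error (E) SSCP matrices purely for illustration:

```python
import numpy as np

# Hypothetical 3x3 hypothesis (H) and error (E) SSCP matrices, just for illustration.
H = np.array([[10.0,  2.0, 1.0],
              [ 2.0,  8.0, 0.5],
              [ 1.0,  0.5, 6.0]])
E = np.array([[20.0,  3.0, 2.0],
              [ 3.0, 15.0, 1.0],
              [ 2.0,  1.0, 12.0]])

# Determinant ("volume ratio") view: Wilks' lambda = |E| / |H + E|
wilks_det = np.linalg.det(E) / np.linalg.det(H + E)

# Eigenvalue view: lambda = prod_i 1/(1 + l_i), l_i eigenvalues of E^-1 H
eigvals = np.linalg.eigvals(np.linalg.inv(E) @ H).real
wilks_eig = np.prod(1.0 / (1.0 + eigvals))

print(wilks_det, wilks_eig)  # the two expressions should agree
```

The equality holds because |E|/|H+E| = 1/|E⁻¹H + I|, and the determinant of E⁻¹H + I is the product of (1 + λᵢ).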

roy's largest root, i presume, gives us some sort of pointer to the dimension (whose contribution is already being taken into account by wilks' lambda) where the maximum differences are... however, it seems as if the other dimensions of the data, where there may not be as much "signal-to-noise", somehow water these differences down... same in the case of the pillai-bartlett trace, where all the eigenvalues get combined. that's why i sorta said to myself: "ok, i can see why maybe one dimension by itself might not be salient enough to yield significant findings and, hence, i find it reasonable that wilks' lambda/pillai's trace may be significant when roy's largest root is not. however, when roy's largest root is significant, that same largest root is being used in the calculation of wilks' lambda/pillai's trace. therefore, why is it that those can come out non-significant?"
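The relationship described above (all four statistics being different functions of the same eigenvalues of E⁻¹H, with Roy's root keeping only the largest) can be made concrete. A small sketch with made-up eigenvalues, chosen so one dimension carries most of the "signal":

```python
import numpy as np

# Hypothetical eigenvalues of E^-1 H -- the per-dimension "signal-to-noise" ratios.
eigvals = np.array([1.8, 0.15, 0.05])

wilks     = np.prod(1.0 / (1.0 + eigvals))    # uses all roots, multiplicatively
pillai    = np.sum(eigvals / (1.0 + eigvals)) # uses all roots, additively
hotelling = np.sum(eigvals)                   # Hotelling-Lawley trace
roy       = eigvals.max()                     # uses ONLY the largest root

print(wilks, pillai, hotelling, roy)
```

Here the small roots pull Wilks/Pillai toward "no effect" relative to what the dominant root alone would suggest, which is exactly the "watering down" intuition in the post.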

my initial idea was that probably the marginal F distributions to which those statistics get transformed are different in each case... although i stopped myself there, because it feels to me that this would imply the hypothesis being tested in each case is different and, if that is the case, well... what would be the correct way to state the null hypothesis in a MANOVA analysis? this is when i decided to get expert help, heh.
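For what it's worth, the F approximations really do differ between statistics, even though the null hypothesis (equality of the group mean vectors) is the same. A sketch of the standard textbook approximations (e.g. Rencher, Methods of Multivariate Analysis): Rao's F for Wilks' lambda and the usual F for Pillai's trace, with made-up dimensions and eigenvalues. Note the degrees of freedom come out different:

```python
import numpy as np

p, nu_h, nu_e = 3, 2, 20               # responses, hypothesis df, error df (made up)
eigvals = np.array([1.8, 0.15, 0.05])  # hypothetical eigenvalues of E^-1 H

# Wilks' lambda -> Rao's F approximation
lam = np.prod(1.0 / (1.0 + eigvals))
t = np.sqrt((p**2 * nu_h**2 - 4) / (p**2 + nu_h**2 - 5))
w = nu_e + nu_h - (p + nu_h + 1) / 2
df1_w = p * nu_h
df2_w = w * t - (p * nu_h - 2) / 2
F_wilks = (1 - lam**(1 / t)) / lam**(1 / t) * df2_w / df1_w

# Pillai's trace -> its own F approximation, with different df
V = np.sum(eigvals / (1.0 + eigvals))
s = min(p, nu_h)
m = (abs(p - nu_h) - 1) / 2
n = (nu_e - p - 1) / 2
df1_p = s * (2 * m + s + 1)
df2_p = s * (2 * n + s + 1)
F_pillai = (2 * n + s + 1) / (2 * m + s + 1) * V / (s - V)

print((df1_w, df2_w), (df1_p, df2_p))  # different denominator df
```

So the statistics are referred to different (approximate) F distributions, but that reflects different ways of summarizing the same eigenvalues, not different null hypotheses.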

now, with regards to your answer... do you happen to have any reference/author/book where i can find that? thanks!

(ps- i did my undergraduate in mathematics but i never really took statistics and now that i need them i'm scrambling to self-teach them, lolz).

thanks!

http://faculty.chass.ncsu.edu/garson/PA765/manova.htm

I use Tabachnick and Fidell: Using Multivariate Statistics, which is pretty good in that it deals with everything from data cleaning, assumptions, derivations, hypotheses, software (SPSS & SAS), and APA write-ups of the results. It covers much more than just MANOVA. Most university libraries carry it.

My suggestion is for you to take a small example and run through the H and E matrices, eigenvalues, etc. Compute each of the four statistics (Wilks, etc.), their df, and the F tests by hand. Then compare it all with your software output. A great problem would be one with an interaction (2-way MANOVA) - then you would have to compute three different SETS of tests.
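The hand-computation exercise above can be sketched in a few lines of numpy. A tiny made-up one-way MANOVA (2 groups, 2 response variables, 3 cases each), building H and E from the data and then all four statistics:

```python
import numpy as np

# Made-up data: 2 groups, 2 response variables, 3 cases per group.
g1 = np.array([[2.0, 3.0], [3.0, 4.0], [4.0, 6.0]])
g2 = np.array([[5.0, 6.0], [6.0, 8.0], [7.0, 9.0]])
groups = [g1, g2]
grand = np.vstack(groups).mean(axis=0)

# Between-groups (H) and within-groups (E) SSCP matrices.
H = sum(len(g) * np.outer(g.mean(0) - grand, g.mean(0) - grand) for g in groups)
E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)

eigvals = np.linalg.eigvals(np.linalg.inv(E) @ H).real
wilks     = np.prod(1.0 / (1.0 + eigvals))
pillai    = np.sum(eigvals / (1.0 + eigvals))
hotelling = np.sum(eigvals)
roy       = eigvals.max()
print(wilks, pillai, hotelling, roy)
```

With only two groups, H has rank 1, so there is a single nonzero eigenvalue and all four statistics are simple functions of that one root - a nice sanity check before moving to the 2-way case with its three sets of tests.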

I'm also sure someone else on the list will soon help out. Good luck.