I don't know how to report my results: did I use the wrong test?

#1
The aim of my thesis is to see whether anxiety score can predict sleep quality and dream content.
For this, I have one continuous variable (= anxiety score) that I want to use as a predictor of multiple dependent variables, to see whether it can predict the outcomes on those variables. To do this, I used the manova() function in R:

model1 = manova(cbind(d1, d2) ~ X)

(in this function: d1 = dependent variable 1, d2 = dependent variable 2, X = anxiety score)

Now, when I type summary(model1), I get this output:

> summary(model1)
          Df  Pillai approx F num Df den Df   Pr(>F)
X          1 0.30237    7.585      2     35 0.001834 **
Residuals 36
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And then, because it is significant, I look at the results per dependent variable by typing
summary.aov(model1)

> summary.aov(model1)
 Response d1 :
            Df Sum Sq Mean Sq F value  Pr(>F)
X            1 1791.5  1791.5   6.505 0.01515 *
Residuals   36 9914.2   275.4
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

 Response d2 :
            Df Sum Sq Mean Sq F value  Pr(>F)
X            1   2164 2164.05  7.3081 0.01041 *
Residuals   36  10660  296.12

Now, I interpreted this as: "so anxiety score is a significant predictor of both d1 and d2, interesting!" But now that I have to report my results, I'm a bit confused about how to do that. I read that there should be a beta coefficient that indicates the relationship, but the output doesn't show one. What did I actually test? Did I use the right test?

Everything I can find online uses multiple predictors with one outcome variable, rather than the other way around (one predictor, multiple outcomes), which is what I intended to do...
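
For completeness, a minimal sketch of the kind of code I mean; dat is just a placeholder name for the data frame, and I am not sure whether the coefficients it prints are the "betas" I should report:

# Sketch only: placeholder data frame `dat` with columns d1, d2 and X (anxiety score)
model1 <- manova(cbind(d1, d2) ~ X, data = dat)
summary(model1)                           # multivariate test (Pillai's trace)

# Refitting the same multivariate model with lm() prints one intercept
# and one slope for X per dependent variable:
coef(lm(cbind(d1, d2) ~ X, data = dat))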
 

Karabiner

TS Contributor
#3
And then, because it is significant, I look at the results per dependent variable by typing
summary.aov(model1)
Why did you do that? If you did a MANOVA, then you assumed that the variables
jointly represent a construct. You now know that this construct and X are statistically
associated. I do not know why you did not just use the sum score and instead used
MANOVA, but anyway, the analysis can stop here. MANOVA is not analogous to the
omnibus test in ANOVA. If you were interested in the relationships between X and
each dependent variable separately, why did you not perform separate analyses
from the start?
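
For concreteness, the sum-score approach would look roughly like this (dat, d1 and d2 are placeholder names, and this assumes the outcomes are on comparable scales):

# Sketch only: composite of the outcomes, then one ordinary regression
dat$sum_score <- dat$d1 + dat$d2
summary(lm(sum_score ~ X, data = dat))    # a single slope for X predicting the composite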

With kind regards

Karabiner
 
#4
Why did you do that? If you did a MANOVA, then you assumed that the variables
jointly represent a construct. You now know that this construct and X are statistically
associated. I do not know why you did not just use the sum score and instead used
MANOVA, but anyway, the analysis can stop here. MANOVA is not analogous to the
omnibus test in ANOVA. If you were interested in the relationships between X and
each dependent variable separately, why did you not perform separate analyses
from the start?

With kind regards

Karabiner
Because there were many dependent variables to be tested, I decided to form groups of moderately correlated variables and perform multiple MANOVAs instead. I was under the impression that this would reduce the chance of a Type I error, so that it would be better than performing multiple univariate tests. Quite a few of the MANOVAs were not significant, and for those I did not perform the univariate analyses. This way, I have performed fewer tests overall.
I thought performing univariate tests was a common follow-up to a MANOVA, to check whether the effect can be explained by one (or more) of the variables or whether there is in fact a multivariate effect... Am I wrong about this?
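
Roughly, what I did looks like this (all variable names here are placeholders for my actual outcome groups):

# Sketch only: one MANOVA per group of moderately correlated dependent variables
man_group1 <- manova(cbind(emo1, emo2) ~ X, data = dat)      # e.g. night-time emotions
man_group2 <- manova(cbind(sleep1, sleep2) ~ X, data = dat)  # e.g. sleep measures

summary(man_group1)      # multivariate test per group
summary(man_group2)

# Univariate follow-ups only for groups whose multivariate test is significant
summary.aov(man_group1)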
 
#5
Now you know they're significant, try doing them separately.
Isn't that already what I did when running summary.aov(model1)?
Another way in which I have done the univariate tests as a post hoc is with model2 = lm(cbind(d1, d2) ~ X).
When I run summary(model2), this yields exactly the same results as summary.aov(model1).
I also corrected the p-values from these univariate post-hoc tests for multiple testing, using the False Discovery Rate method.
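
As a sketch, assuming the univariate p-values are collected into a vector by hand, that step looked like this (dat is a placeholder, and the p-values are the ones from the output above):

model2 <- lm(cbind(d1, d2) ~ X, data = dat)
summary(model2)                            # same per-outcome tables as summary.aov(model1)

# False Discovery Rate (Benjamini-Hochberg) correction of the univariate p-values
p_raw <- c(d1 = 0.01515, d2 = 0.01041)
p.adjust(p_raw, method = "fdr")            # "fdr" is an alias for method = "BH"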
 

Karabiner

TS Contributor
#6
I am sorry, I do not fully understand. What do you mean by "multiple dependent variables (= anxiety score)"?
This seems like a contradiction in itself. An anxiety score is one variable, which is the sum of several items,
or not? So why do you analyse several items (?) instead of the score?

With kind regards

Karabiner
 
#7
I am sorry, I do not fully understand. What do you mean by "multiple dependent variables (= anxiety score)"?
This seems like a contradiction in itself. An anxiety score is one variable, which is the sum of several items,
or not? So why do you analyse several items (?) instead of the score?

With kind regards

Karabiner
Oh, you're right, I apologize. There is just one independent variable: the anxiety score. The multiple dependent variables include different emotions experienced during the night, sleep quality, and scores on other questionnaires.