Statistically Challenged Psychology student!!!

#1
Hi everyone, I am really hoping that someone here can help me out. I am really struggling with the analysis for a Psychology study I have conducted, and have lots of questions :( I find stats really difficult, and just cannot begin my analysis yet because I am so stuck. I will try to give some background to the study to help:


Purpose of study: To investigate whether a self-affirmation intervention can reduce teacher stress

Study design: I used a 3x2 mixed factorial design with a 3-level between-subjects factor (group: self-affirmation condition, control condition 1, control condition 2) and a 2-level within-subjects factor (time: Time 1, pre-intervention; Time 2, post-intervention).

So all participants received the same baseline survey at Time 1, then the experimental group completed the self-affirmation task a week later, and a week after that all participants received the same survey at Time 2.

My IV is the group condition: the experimental group wrote about an important value, control condition 1 wrote about their least important value, and control condition 2 did not have any task.

My DVs are: Perceptions of Stressors, Perceptions of Strain, Resilience, Coping Style, Work Engagement, Affect, Rumination, Affective Commitment, and Self-esteem. I am hoping that the intervention will decrease perceptions of strain and increase resilience. There may also be an effect on the other variables; however, I do not expect self-esteem to change. The following variables may be mediators: Rumination, Affect, and Coping Style.

Analysis:

I have looked at other similar studies (i.e. pre-post intervention comparisons) to see what stats I should do and I think I need to conduct a repeated measures MANOVA. Does this sound right?

Many questions have come up, though, from what I have read and due to my lack of knowledge. Huge apologies for the length; any help is very much appreciated. The MANOVA-specific questions start at Q5.


1. What do I do with missing data (unanswered items)? Does it matter if it’s just the demographics questions that are missing?

2. Should I test for homogeneity of variance using Levene’s test? (We want a non-significant Levene.)

3. Were there any differences between those who dropped out (after Time 1 and/or the Intervention week) and those who completed all three surveys? E.g. were those who dropped out more ‘stressed’ at Time 1? Were there any demographic differences? Do I use an independent samples t-test for this?

4. I want to check that there are no systematic differences between the experimental group and each control group (both in terms of demographics and the pre-intervention survey responses). Do I use an independent samples t-test?

5. Should I conduct a MANOVA with all measures in? Or conduct a series of MANOVAs? (e.g. perceptions of stressors; perceptions of strain; resilience; work outcomes (affective commitment and work engagement); coping; rumination; affect; self-esteem).

6. If I run one MANOVA with all measures in, and there is a significant main effect (Group) and interaction effect (Group X Time) is my next step to then run a series of ANOVAs on each measure to identify where the change occurred?

7. I am also interested in whether some measures are acting as mediators: rumination, affect, coping style, and self-esteem. At what point do I look into this?

8. I am not sure when post hoc tests come into things (or the difference between simple and pair-wise comparisons).

9. Do I need to check Mauchly’s test for sphericity for both the MANOVA(s) and the ANOVAs, or just the ANOVAs?

10. I want to examine whether there are differences between the experimental group post-intervention and the control group post-intervention, but also whether the experimental group post-intervention is different from experimental group pre-intervention; how would I go about doing this?


11. One of the assumptions of MANOVAs is that the time intervals are equally spaced (e.g. responses are a week apart) however I cannot guarantee this as participants completed the survey at slightly different times. Would this be an issue during data analysis, or is it just a point to make in the discussion?


12. I would like to look at whether teachers high in resilience report lower levels of strain (and stressors). How do I do this?


As you can tell from my questions I really am very poor/unconfident at stats and also inexperienced. If there is anyone out there wiser who can help I would be so, so grateful!

Thank you all,
Lolly
 

nbjo

New Member
#2
First off, great description of the problem! You seem to be headed in the right direction.

What analysis software are you going to use?

Typically, a repeated measures (mixed) ANOVA is used for a design like this. MANOVA takes more than one DV into account at once. I don’t have a background in MANOVA, so maybe others can respond on whether it would be more appropriate for certain DV groups.

Repeated measures ANOVA would be run on every DV (be sure to think about the level of measurement for all DVs - I am assuming they are all interval).
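If you end up working in Python rather than SPSS, here is a minimal sketch of what a 3 (group) x 2 (time) mixed ANOVA on one DV could look like with the pingouin package. The data file, the long format, and the column names (subject, group, time, strain) are all hypothetical placeholders for your own data.

```python
# Minimal sketch: 3 (group) x 2 (time) mixed ANOVA for a single DV with pingouin.
# Assumes a long-format table with one row per participant per time point and
# hypothetical columns: subject (ID), group (between), time (within), strain (DV).
import pandas as pd
import pingouin as pg

df = pd.read_csv("teacher_stress_long.csv")  # hypothetical file name

aov = pg.mixed_anova(data=df, dv="strain",
                     within="time", subject="subject",
                     between="group")
print(aov)  # rows for the group main effect, time main effect, and group x time interaction
```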

1. What do I do with missing data (unanswered items). Does it matter if it’s just the demographics questions missing?
- This is a hot topic. The various techniques people use result in greater discrepancies the larger the percentage of missing data present. What percentage of responses for the variables is missing?
- Demographics should not be an issue unless you think not responding to the item is related to other variables or your sample is not representative of the population.

2. Should I test for homogeneity of variance using Levene’s test? (We want a non-significant Levene.)
- Yes, and SPSS can produce it for you. Correct: you do not want a significant value here.
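Outside SPSS, scipy can run Levene’s test directly; a sketch, with hypothetical group labels and column names:

```python
# Sketch: Levene's test for homogeneity of variance across the three groups on one DV.
import pandas as pd
from scipy import stats

wide = pd.read_csv("teacher_stress_wide.csv")  # hypothetical wide file, one row per participant
samples = [wide.loc[wide["group"] == g, "strain_t1"].dropna()
           for g in ["affirmation", "control1", "control2"]]

w, p = stats.levene(*samples, center="median")  # median-centred version is robust to non-normality
print(f"Levene W = {w:.2f}, p = {p:.3f}")       # a non-significant p is what you are hoping for
```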

3. Were there any differences between those who dropped out (after Time 1 and/or the Intervention week) and those who completed all three surveys? E.g. were those who dropped out more ‘stressed’ at Time 1? Were there any demographic differences? Do I use an independent samples t-test for this?
- You could run a t-test after splitting the sample into completers and dropouts and comparing them on those variables. Make sure you report Ms and SDs. This is a bit of overkill unless you have a high attrition rate.
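A sketch of the completers-vs-dropouts comparison in Python (the 'completed' flag and the column names are made up; replace them with whatever you record):

```python
# Sketch: compare Time 1 strain for completers vs. dropouts with an independent-samples t-test.
import pandas as pd
from scipy import stats

wide = pd.read_csv("teacher_stress_wide.csv")              # hypothetical file
completers = wide.loc[wide["completed"] == 1, "strain_t1"].dropna()
dropouts   = wide.loc[wide["completed"] == 0, "strain_t1"].dropna()

t, p = stats.ttest_ind(completers, dropouts, equal_var=False)  # Welch's t-test is a safe default
print(f"t = {t:.2f}, p = {p:.3f}")
print("completers:", completers.mean(), completers.std())      # report Ms and SDs
print("dropouts:  ", dropouts.mean(), dropouts.std())
```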

4. I want to check that there are no systematic differences between the experimental group and each control group (both in terms of demographics and the pre-intervention survey responses). Do I use an independent samples t-test?
- You would want to look at the variables pre-treatment. Since you have three groups, a one-way ANOVA is more appropriate than a series of t-tests.
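For the three-group baseline check, a one-way ANOVA sketch (again with hypothetical column names):

```python
# Sketch: one-way ANOVA testing whether the three groups differed at Time 1.
import pandas as pd
from scipy import stats

wide = pd.read_csv("teacher_stress_wide.csv")  # hypothetical file
groups = [wide.loc[wide["group"] == g, "strain_t1"].dropna()
          for g in ["affirmation", "control1", "control2"]]

f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.3f}")  # non-significant suggests the groups started out comparable
```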

5. Should I conduct a MANOVA with all measures in? Or conduct a series of MANOVAs? (e.g. perceptions of stressors; perceptions of strain; resilience; work outcomes (affective commitment and work engagement); coping; rumination; affect; self-esteem).
- My understanding is that MANOVA is appropriate for correlated groups of DVs. I see repeated measures ANOVAs far more often than MANOVA but, as I said, I do not have a background in MANOVA.
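If you do go the MANOVA route and want something outside SPSS, statsmodels can fit a one-way MANOVA from a formula; a sketch on a hypothetical set of correlated Time 2 DVs:

```python
# Sketch: one-way MANOVA on a correlated set of Time 2 DVs using statsmodels.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

wide = pd.read_csv("teacher_stress_wide.csv")  # hypothetical file and column names
mv = MANOVA.from_formula("strain_t2 + stressors_t2 + resilience_t2 ~ group", data=wide)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, etc. for the group effect
```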

6. If I run one MANOVA with all measures in, and there is a significant main effect (Group) and interaction effect (Group X Time) is my next step to then run a series of ANOVAs on each measure to identify where the change occurred?

7. I am also interested in whether some measures are acting as mediators: rumination, affect, coping style, and self-esteem. At what point do I look into this?
- For mediation, the best way to do this is structural equation modeling (SEM). You could also look at the SPSS macros by Hayes [http://afhayes.com/spss-sas-and-mplus-macros-and-code.html]. You may see a lot about the Baron and Kenny method in older articles, but this is not the preferred method among scholars.
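If you want a Python alternative to the Hayes macros, statsmodels has a Mediation class based on the Imai et al. simulation approach. The sketch below is only illustrative: it treats the exposure as a binary 0/1 indicator (affirmation vs. pooled controls), and the column names are hypothetical.

```python
# Sketch: simple mediation (condition -> rumination -> strain) with statsmodels' Mediation class.
# 'treat' is assumed to be a 0/1 indicator (1 = affirmation, 0 = control); columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

wide = pd.read_csv("teacher_stress_wide.csv")

outcome_model  = smf.ols("strain_t2 ~ treat + rumination_t2 + strain_t1", data=wide)
mediator_model = smf.ols("rumination_t2 ~ treat + strain_t1", data=wide)

med = Mediation(outcome_model, mediator_model, exposure="treat", mediator="rumination_t2")
res = med.fit(n_rep=1000)   # simulation-based estimates of the indirect (mediated) effect
print(res.summary())
```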

8. I am not sure when post hoc tests come into things (or the difference between simple and pair-wise comparisons).
- ‘Simple’ and ‘pairwise’ are SPSS terms. If a factor has more than two levels and you have no specific hypotheses about which means differ, you will want to do post hoc testing.
- ANOVA results will tell you that there is at least one significant difference between groups; they do not tell you how many or which ones. That is why you need post hocs.
- An alternative is a priori, planned contrasts that test exactly what you hypothesized without the post hoc alpha corrections. Think of post hocs as ‘unplanned, after-the-fact’ analyses.
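For a concrete post hoc example in Python, Tukey’s HSD for all pairwise group comparisons on one DV (hypothetical column names again):

```python
# Sketch: Tukey HSD pairwise comparisons between the three groups on a Time 2 DV.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wide = pd.read_csv("teacher_stress_wide.csv").dropna(subset=["strain_t2", "group"])
print(pairwise_tukeyhsd(endog=wide["strain_t2"], groups=wide["group"], alpha=0.05))
```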

9. Do I need to check Mauchly’s test for sphericity for both the MANOVA(s) and the ANOVAs, or just the ANOVAs?
- Sphericity only needs checking when a within-subjects factor has three or more levels. With only two time points there is just one set of difference scores, so sphericity is satisfied automatically and Mauchly’s test is not an issue here. As with Levene’s, you would be looking for a non-significant value.

10. I want to examine whether there are differences between the experimental group post-intervention and the control group post-intervention, but also whether the experimental group post-intervention is different from experimental group pre-intervention; how would I go about doing this?
- A repeated measures (mixed) ANOVA will give you the main effects (between, within) and the interaction (between x within). If the interaction is significant, follow it up with simple-effects/pairwise comparisons: compare the groups at Time 2, and compare Time 1 with Time 2 within the experimental group (see the sketch below).
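A sketch of those two specific comparisons (independent-samples t-test between groups at Time 2, paired-samples t-test within the experimental group), with hypothetical column names:

```python
# Sketch of the two follow-up comparisons from question 10:
#   (a) affirmation vs. control at Time 2    -> independent-samples t-test
#   (b) affirmation group, Time 1 vs. Time 2 -> paired-samples t-test
import pandas as pd
from scipy import stats

wide = pd.read_csv("teacher_stress_wide.csv")  # hypothetical file and column names
aff = wide[wide["group"] == "affirmation"].dropna(subset=["strain_t1", "strain_t2"])
ctl = wide[wide["group"] == "control1"].dropna(subset=["strain_t2"])

t_between, p_between = stats.ttest_ind(aff["strain_t2"], ctl["strain_t2"], equal_var=False)
t_within,  p_within  = stats.ttest_rel(aff["strain_t1"], aff["strain_t2"])
print(f"Time 2, affirmation vs control: t = {t_between:.2f}, p = {p_between:.3f}")
print(f"Affirmation, pre vs post:       t = {t_within:.2f}, p = {p_within:.3f}")
```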

11. One of the assumptions of MANOVAs is that the time intervals are equally spaced (e.g. responses are a week apart) however I cannot guarantee this as participants completed the survey at slightly different times. Would this be an issue during data analysis, or is it just a point to make in the discussion?
- You only have one interval (T1 -> T2), so equal spacing is not something you can actually violate here.
- You should describe the timing in the method section. If participants completed each survey at roughly the same point in time, this should not be a problem.

12. I would like to look at whether teachers high in resilience report lower levels of strain (and stressors). How do I do this?
- This sounds like a correlation/regression question rather than part of the ANOVA: regress strain (and stressors) on resilience.
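A regression sketch for question 12 with statsmodels (column names hypothetical; you could add covariates such as demographics to the formula):

```python
# Sketch: does resilience predict (lower) strain? Simple OLS regression with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

wide = pd.read_csv("teacher_stress_wide.csv")  # hypothetical file and column names
model = smf.ols("strain_t2 ~ resilience_t2", data=wide).fit()
print(model.summary())  # a negative resilience coefficient means higher resilience, lower strain
```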


Don’t forget to think about statistical power! G*Power is a good (and free) calculator. If you have a small sample, many of the above analyses are going to be underpowered (Type II errors become likely).
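If you would rather stay in Python, statsmodels can do the same sort of a priori power calculation as G*Power; the medium effect size (f = 0.25) below is just an assumed planning value, not a result:

```python
# Sketch: a priori power analysis for a one-way ANOVA with three groups.
# effect_size is Cohen's f; 0.25 ('medium') is an assumption.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(f"Total N required: {n_total:.0f}")
```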

You have a lot of tests. I suggest (if this is for a report or thesis) that you focus on the tests and relationships of most importance to you.