Hi,

I have an experimental design with three dependent variables (the Reading the Mind in the Eyes test and two other measures of social cognition; for the purposes of this thread let's call them A, B and C). They are continuous variables.

The first research hypothesis was that following treatment (actually an intervention program, but that doesn't change much) scores would increase on all these variables in the experimental group but not in the control group. The second research hypothesis was that each pair of the three measures would be correlated regardless of condition, and that the size of each pairwise correlation would remain the same across conditions (though the correlation between A and B may differ from the correlation between B and C, for instance).

So the design actually has 4 conditions: 2 levels of time (pretest and post-test) × two groups (control and intervention). Simple main effects revealed effects of both time and group, with the intervention group scoring higher on all three measures after the intervention.

But as for the correlations, things are less clear to me: I computed simple Pearson correlations between A and B, A and C, and B and C in each combination of time and group (12 correlations in total). All correlations were medium (~0.2) to large (~0.8) and significant (p < 0.01). I then wanted to show that the same mechanism underlies performance on these tests (the dependent variables) in all four conditions; to that end I had to show that the r values between the variables did not differ across conditions, even though the mean scores differed over time and group.
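For concreteness, here is roughly how I computed those 12 correlations (a minimal Python sketch with made-up data and hypothetical column names; my actual data are different):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per subject per time point;
# A, B, C are the three social-cognition scores.
n = 40  # subjects per group
df = pd.DataFrame({
    "group": np.repeat(["control", "intervention"], n),
    "time": np.tile(np.repeat(["pre", "post"], n // 2), 2),
    "A": rng.normal(size=2 * n),
    "B": rng.normal(size=2 * n),
    "C": rng.normal(size=2 * n),
})

# One Pearson r per pair per (group, time) cell: 3 pairs x 4 cells = 12.
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
rs = {}
for (g, t), cell in df.groupby(["group", "time"]):
    for x, y in pairs:
        rs[(g, t, x, y)] = cell[x].corr(cell[y])  # Pearson by default

for key, r in rs.items():
    print(key, round(r, 2))
print(len(rs))  # 12 correlations in total
```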

So I ran Steiger's z tests for dependent correlations (same group, different time) and the corresponding test for independent samples (different group, same time). The results supported this hypothesis too. But I have been told that this is not the proper way to compare the strength of correlations between conditions as described, and that I should have used multiple correlation and/or regression from the start. I don't quite understand how I'm expected to perform this regression. The only thing I could think of was something like stepwise linear regression with the time and group variables, both nominal, dummy-coded as 0/1, entered together with the remaining variables, with each of the three test variables serving in turn as the dependent variable (three regressions, given there are three tests). This might show that time and group contribute less to the variance in each test score than the other tests do, but how does that compare the sizes of the different correlations? I was also considering MANCOVA and regression, but to tell the truth, at this point I'm a bit confused. I tried to find studies with a somewhat similar design and similar assumptions, but that hasn't helped me much.
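For the independent-samples comparisons (different group, same time), what I did amounts to the standard Fisher r-to-z comparison of two correlations from independent samples; a minimal sketch (the r and n values below are made up, and the dependent-samples case, i.e. Steiger's test proper, needs a more involved formula that accounts for the correlation between the correlations):

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided test of H0: rho1 == rho2 for correlations from two
    independent samples, via Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)          # Fisher z of each r
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from standard normal
    return z, p

# e.g. r(A,B) = .55 in controls (n = 40) vs .60 in the intervention
# group (n = 42) at pretest (illustrative numbers only):
z, p = fisher_z_compare(0.55, 40, 0.60, 42)
print(round(z, 3), round(p, 3))
```

A non-significant p here is what I took as support for "the correlations do not differ between groups", which is exactly the inference I've been told may not be appropriate.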

I would appreciate good guidance here, meaning which analysis I should do, step by step.

Best regards,

Gilian
