Test for comparing correlation values. The winner is?

Hi everyone,

My problem is relatively simple, but not trivial. I've already read some threads about it, but I really need straightforward answers and/or opinions.

I have data divided by one fixed effect (Emotion, with 2 levels: negative and neutral), and I have a covariate (reaction time) for 20 subjects. My aims are to 1) evaluate the correlation between the data and the covariate;
2) determine whether the correlation between the data and reaction times in the negative condition is significantly larger than the correlation in the neutral condition (or vice versa).
Analytically, that means comparing regression slopes.

So far, I have considered three different strategies:

a) Cohen's q. This gives no p-value, but an 'effect size' (small/medium/large) based on the difference between r values (transformed into z values using Fisher's procedure).
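As a minimal sketch, Cohen's q is just the absolute difference of the Fisher-transformed correlations (the two r values below are hypothetical, for illustration only):

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transform: z = atanh(r)."""
    return math.atanh(r)

def cohens_q(r1, r2):
    """Cohen's q: absolute difference between Fisher-transformed correlations."""
    return abs(fisher_z(r1) - fisher_z(r2))

# Hypothetical correlations in the two conditions
q = cohens_q(0.60, 0.30)
# Conventional benchmarks (Cohen, 1988): < 0.1 negligible,
# 0.1-0.3 small, 0.3-0.5 medium, > 0.5 large
```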

b) Fisher's method. This takes the sample size into account, in addition to the transformed r values.
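A stdlib-only sketch of Fisher's z-test for the difference between two correlations. One caveat: this formula assumes the two correlations come from independent samples, whereas here both conditions come from the same 20 subjects, so it is at best approximate. The r and n values below are hypothetical:

```python
import math
from statistics import NormalDist

def fisher_z_test(r1, n1, r2, n2):
    """Fisher's z-test for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))  # two-tailed p-value
    return z, p

# Hypothetical values: r = .60 vs r = .30, n = 20 per condition
z, p = fisher_z_test(0.60, 20, 0.30, 20)
```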

c) ANCOVA. In particular, an analysis of covariance that doesn't force the slopes to be equal. MATLAB's 'aoctool' can do that work.
To be clear:
Standard ANCOVA regression -> y = (a1 + a2) + b*X + e
ANCOVA with separate slopes -> y = (a1 + a2) + (b1 + b2)*X + e
In this case, not only the sample sizes but also the variability within each group is taken into account.
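The separate-slopes ANCOVA above can be fit as an ordinary regression with a group-by-covariate interaction; the t-test on the interaction coefficient (b2 in the notation above) tests whether the slopes differ. A sketch with NumPy/SciPy on simulated data (all numbers hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
# Simulated data: reaction times (covariate) and a response whose
# slope differs between the two Emotion conditions
rt_neg = rng.normal(600, 50, n)
rt_neu = rng.normal(600, 50, n)
y_neg = 0.8 * rt_neg + rng.normal(0, 20, n)
y_neu = 0.3 * rt_neu + rng.normal(0, 20, n)

x = np.concatenate([rt_neg, rt_neu])
y = np.concatenate([y_neg, y_neu])
g = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = negative, 0 = neutral

# Design matrix: intercept, group, covariate, group*covariate interaction
X = np.column_stack([np.ones_like(x), g, x, g * x])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# OLS standard errors and t-test for the interaction term (slope difference)
resid = y - X @ beta
dof = len(y) - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_int = beta[3] / np.sqrt(cov[3, 3])
p_int = 2 * stats.t.sf(abs(t_int), dof)
```

With the simulated slope difference of 0.5, the interaction term should come out clearly positive; with real data, beta[3] estimates the slope difference directly.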

Cohen's q depends only on the correlations. Fisher's method also takes the sample size into account, and ANCOVA additionally accounts for the variability; that means that the latter two methods, with a small sample size, will tend not to detect a difference between slopes.
Actually, in neuroimaging studies it is common to have 15-25 subjects. Therefore Cohen's q appears useful for getting an estimate of the difference between correlations. Anyway, I'd like to hear comments about that.
What would you recommend for comparing correlation values?

Re: Test for comparing correlation values. The winner is?

Hi, welcome.

What is your dependent variable?

I'm not sure I know the answer, but I found this thread, which may be helpful (especially the second reply).

I think ANCOVA may be the cleanest solution, because it takes all the data and the variability into account. You can still look at the different slopes even if the Emotion*RT interaction is not significant. See also this thread, second reply.

Have you done a power analysis? 10 (or 20?) per cell seems like a very small sample for detecting this effect. It would have to be huge for you to detect it (having worked with cognitive neuroscientists, I sympathize with the fact that costs are enormous... but it is making neuroscience notoriously underpowered).

Re: Test for comparing correlation values. The winner is?

Hi Junes! Thanks for the reply.

Dependent variables are functional connectivity values (statistical associations between brain regions) during the different conditions (Negative/Neutral) of a task, estimated using psycho-physiological interactions (PPI).
The problem, after the ANCOVA, will still be how to look at the different slopes... I'm digging into the threads you linked... very useful!

Actually, there are 20 per cell, with repeated measures. I agree with your general statement about neuroscience. However, while this may look like a small sample, it is almost double that of many studies.
A power analysis was conducted before the study. Overall, the project involved almost 100 subjects; preliminary studies and analyses were done in order to increase the effect both in the target population and at the single-subject level (not a common approach in the neurosciences).

EDIT: I was wondering whether it is plausible to apply a bootstrap or jackknife procedure in order to solve the problem.
For example, I could use a jackknife procedure to obtain N correlation values for each condition (where N is the sample size), each computed with one subject left out; then, I could run a t-test between the jackknife statistics obtained in the two conditions.
Does this (or a similar bootstrap approach) seem reasonable?
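A sketch of that jackknife idea with NumPy/SciPy, on simulated data (all values hypothetical). One caveat worth flagging: leave-one-out correlations are strongly mutually dependent, so a t-test on them does not keep its nominal error rate; treat the result as descriptive rather than an exact test.

```python
import numpy as np
from scipy import stats

def jackknife_correlations(x, y):
    """Leave-one-out Pearson correlations: one r per omitted subject."""
    n = len(x)
    idx = np.arange(n)
    return np.array([np.corrcoef(x[idx != i], y[idx != i])[0, 1]
                     for i in range(n)])

# Hypothetical data for 20 subjects in the two conditions
rng = np.random.default_rng(1)
rt_neg = rng.normal(600, 50, 20)
rt_neu = rng.normal(600, 50, 20)
fc_neg = 0.01 * rt_neg + rng.normal(0, 0.5, 20)  # connectivity tracks RT
fc_neu = rng.normal(0, 0.5, 20)                  # connectivity unrelated to RT

r_neg = jackknife_correlations(rt_neg, fc_neg)
r_neu = jackknife_correlations(rt_neu, fc_neu)

# Paired t-test, since the same subjects appear in both conditions
t, p = stats.ttest_rel(r_neg, r_neu)
```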

Last edited by smndpln; 11-05-2016 at 01:30 PM.
Reason: New idea!