I need help choosing and interpreting the correct statistical test for my data.

I have data from (a) the Minnesota Job Satisfaction Questionnaire short form (20 questions, each answered from 1 (= very dissatisfied) to 5 (= very satisfied)) and

(b) the Social Readjustment Rating Scale (43 items; the minimum summed score of stress events is 12 and the maximum is 500).

Number of respondents (doctors) = 60.

My hypothesis was that the more stress events a person experienced in the last 6 months, the less satisfied he/she will be with their job; that is, scores on the Social Readjustment Rating Scale (which measures the quantity and intensity of stress events) will be negatively associated with Job Satisfaction scores.

In my data, Job Satisfaction scores range from 20 to 60 and SRRS scores range from 12 to 500.

First I checked Pearson's r because all the research I found on this topic used this correlation. The result I got was r = 0.01, which is very low and means there is no correlation between the two variables, and sig = 0.997, which is higher than 0.05, so it means I cannot reject the null hypothesis that there is no correlation, right?

Then I thought maybe it's because my data is ordinal, since it can be ranked just as an ordinal scale requires (from less stressed respondents to more stressed ones, or from less satisfied people to more satisfied ones). So I calculated Spearman's rho, which was -0.45 (which seems logical), but the significance value was 0.73.
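For reference, here is a minimal sketch of how the two statistics compared above are computed side by side. The arrays are randomly generated stand-ins for the real data (same n = 60 and the score ranges described earlier), not the actual scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real data: 60 SRRS totals (12-500)
# and 60 job-satisfaction totals (observed range 20-60).
srrs = rng.integers(12, 501, size=60)
satisfaction = rng.integers(20, 61, size=60)

# Pearson's r measures linear association; Spearman's rho measures
# monotonic (rank-based) association. Each call returns the
# coefficient together with its p-value.
r, p_pearson = stats.pearsonr(srrs, satisfaction)
rho, p_spearman = stats.spearmanr(srrs, satisfaction)

print(f"Pearson  r   = {r:.2f}, p = {p_pearson:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_spearman:.3f}")
```

Note that the coefficient and its significance come from the same function call, so each p-value belongs to its own coefficient.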

So how should I understand this? The correlation coefficient tells me there is a moderate negative correlation between the two variables, but at the same time the correlation isn't significant, so I can't trust it?

Where did I go wrong? (When I chose Pearson or Spearman?) What should I do, and how should I interpret these results? Is there a correlation between being stressed out by stress events and being satisfied with your job, or not?

Sorry for my English (I'm at an intermediate level) and for my potentially naive question (I'm a beginner here).

Thank you in advance for replying!

The difficulty of the experimental design is that I cannot present the same speech (same content) in different genders' voices to the same participants, because they could tell the speeches are the same. Therefore, my experimental design is as follows:

1. the same speech is reproduced in a female and a male voice

2. half of the participants rate the female speech, and the other half rate the male speech

3. I cannot control the order of presentation of the different speeches, so counter-balancing the presentation sequence is not possible

My problem is that I want to find out not only whether the gender of the speech modulates the ratings, but also which participants rate differently. The former can be analysed with independent-samples t-tests. However, for the second purpose, I don't really know how to analyse it: should I run a one-sample t-test that uses one participant's rating as the test mean against all the other ratings (both male and female conditions) of the same speech? Is there a standard way to run this kind of experiment?
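For the first question, a minimal sketch of the independent-samples t-test on this between-subjects design. The ratings below are simulated stand-ins, not the real data, and Welch's variant is used because it does not assume equal group variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stand-ins: each participant heard only one voice,
# so the two rating groups are independent (30 per group assumed).
female_voice_ratings = rng.normal(loc=5.0, scale=1.0, size=30)
male_voice_ratings = rng.normal(loc=4.5, scale=1.0, size=30)

# Welch's t-test (equal_var=False) compares the two group means
# without assuming the groups have the same variance.
t, p = stats.ttest_ind(female_voice_ratings, male_voice_ratings,
                       equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```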

I'm researching a moderated mediation (2x2) with two mediators.

To test my MANOVA and moderated mediation, I have to check the normality assumption of the errors (residuals). My sample size is 400.

1.) One of my mediators has non-normally distributed residuals (which I think is logical, because I'm quantifying attributions [stereotypes] of helpers) --> Do I have to transform my mediators? If so, do I have to transform all variables or only the affected one?

2.) How do I test for non-normality in SPSS? Do I have to include all variables?
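Outside SPSS, the same residual check can be sketched in a few lines of Python. This is a minimal illustration with simulated residuals standing in for the real ones (n = 400, matching the sample size above); the Shapiro-Wilk test's null hypothesis is that the residuals are normally distributed, so a small p-value suggests non-normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical stand-in: residuals from regressing one mediator
# on the IVs (n = 400, as in the question).
residuals = rng.normal(size=400)

# Shapiro-Wilk test of normality; p < 0.05 would suggest the
# residuals deviate from a normal distribution.
w, p = stats.shapiro(residuals)
print(f"W = {w:.3f}, p = {p:.3f}")
```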

My model:

IV 1: nominal scale

IV 2 (& moderator): nominal scale

Mediator 1: interval scale

Mediator 2: interval scale

DV 1: interval scale

DV 2: interval scale