I know the basic interpretation of r, i.e., that -1 ≤ r ≤ 1, and I know about the critical values of r for a given sample size n, etc.
However, while helping a research nursing student, I learned that her prof. had told her that "a 1 standard deviation change in the independent variable (x) will result in a change in the dependent variable's (y) standard deviation by a factor of r." This is supposed to hold even if r is less than the critical value for that sample size, that is, even if the linear correlation is not statistically significant. If this is true, is it because of the way r is calculated, or is it simply wrong?
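To make sure I'm not misreading the claim, here is a minimal simulation sketch of what I think she might mean, assuming (my assumption, not anything in her notes) that the statement is really about the least-squares regression prediction, where the fitted slope is b = r * s_y / s_x. The data and numbers below are made up.

    # Sketch of my reading of the claim: if the slope is b = r * s_y / s_x,
    # then moving x up by one s_x moves the *predicted* y by r * s_y,
    # whether or not r clears the critical value for significance.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 25                                  # small sample, so r may well fall below the critical value
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(size=n)        # weak linear relationship

    r = np.corrcoef(x, y)[0, 1]
    s_x, s_y = x.std(ddof=1), y.std(ddof=1)
    slope = np.polyfit(x, y, 1)[0]          # least-squares slope

    print(slope, r * s_y / s_x)             # these agree (up to rounding)
    print(slope * s_x, r * s_y)             # a +1 s_x step in x changes the predicted y by r * s_y

If that is what she meant, then the statement is about the predicted value of y, not about y's standard deviation changing, which is where my confusion starts.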
Also, the prof. told her that if r is negative and you change the independent variable by +1 standard deviation, the standard deviation of the dependent variable goes down by a factor of r (r being negative). How can increasing the independent variable by one standard deviation cause any kind of decrease in a standard deviation when the variables are correlated? And yes, the prof. was speaking about the value of r and the standard deviations, not about the values of x and y with respect to the negative slope. I felt there were contradictions in what was being said. The prof. wrote this down in printed notes!
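For what it's worth, here is the arithmetic I tried for the negative case, with made-up numbers and again under my assumption that the claim is about the regression prediction; it is exactly what I can't reconcile with the wording about a standard deviation itself going down.

    # Hypothetical numbers, purely to check my reading of the negative-r claim.
    r, s_x, s_y = -0.6, 2.0, 5.0
    slope = r * s_y / s_x        # least-squares slope = -1.5
    print(slope * s_x)           # change in predicted y for a +1 s_x step: -3.0, i.e. r * s_y
    # s_x and s_y themselves do not change; only the predicted y moves down
    # by |r| standard deviations.

So under this reading, the predicted y decreases, but the standard deviations stay what they are. Is that what the prof. meant, or is the statement about standard deviations as written simply incorrect?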