Questions about effect size, correlation, others...

I have a few questions.

1. I tried to correlate 2 variables in my data via Pearson's r, but the coefficient I got was much too low (around 0.15). Is there a method by which I can test for a non-linear relationship?

2. What exactly is effect size? How can I use it in my studies? For example, if I ran post-hoc tests and found significant results between some groups, how would I go about interpreting those results via effect sizes? Or is that even possible?

3. A confidence interval is one where, if the procedure were repeated on many samples, the true parameter would be contained in, for example, 95% of the samples' confidence intervals. This says nothing about the actual probability that the true parameter lies in your particular sample's confidence interval, since you don't know whether your sample is in the 95% "true" group or the 5% "false" group. Is there any form of interval estimation that yields a "probability interval" instead of a confidence interval? i.e. an interval in which you are 95% or 99% sure the mean would be found.

4. Related to question 3, is there any stringent method by which you can rank-order confidence intervals?

Thank you for taking the time to respond to my questions.


New Member
1) Run a scatter plot and look for any obvious visual signs of a relationship. If you find either a quadratic or cubic effect, you can model it with regression rather than correlation: for a quadratic effect, add the IV squared as a predictor; for a cubic effect, add the IV cubed as a predictor as well.
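To illustrate the point above, here is a minimal sketch (with made-up data) of a U-shaped relationship that Pearson's r misses almost entirely, but that a regression with a squared predictor picks up:

```python
# Hypothetical illustration: a strong quadratic relationship that
# Pearson's r misses, but polynomial regression recovers.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = x**2 + rng.normal(0, 0.5, size=x.size)  # U-shaped relationship plus noise

# Pearson's r is near zero because the relationship is not linear.
r = np.corrcoef(x, y)[0, 1]

# Regression using x and x**2 as predictors captures the curve.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"Pearson r = {r:.2f}, quadratic R^2 = {r_squared:.2f}")
```

With these simulated data, r is close to 0 while the quadratic model's R² is close to 1, which is exactly the situation a scatter plot would reveal.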

2) Effect size is a measure of the magnitude of an effect, i.e. how much of the variance in the relationship is accounted for. Most reputable journals expect effect sizes to be reported in your results section.
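As a concrete example for your post-hoc case: a common effect size for a pairwise comparison between two groups is Cohen's d. A minimal sketch with hypothetical group data:

```python
# Cohen's d for a pairwise group comparison (hypothetical data),
# e.g. to accompany a significant post-hoc test result.
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = a.size, b.size
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

group1 = [5.1, 5.8, 6.0, 5.5, 6.2, 5.9]  # made-up scores
group2 = [4.2, 4.8, 4.5, 5.0, 4.4, 4.6]
d = cohens_d(group1, group2)
print(f"Cohen's d = {d:.2f}")
```

Conventional benchmarks (Cohen, 1988) treat d around 0.2 as small, 0.5 as medium, and 0.8 as large, so reporting d alongside the post-hoc p-value tells readers how big the difference actually is, not just that it is significant.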

3) I asked the same question of my professor last year. Rather than try to repeat him, I have attached his answer.
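For reference, the usual name for the "probability interval" asked about in question 3 is a Bayesian credible interval. A minimal sketch, assuming normally distributed data and a flat (noninformative) prior on the mean, in which case the posterior for the mean is a t distribution and the credible interval happens to coincide numerically with the classical t confidence interval (only the interpretation differs):

```python
# 95% Bayesian credible interval for a mean, assuming normal data
# and a noninformative (flat) prior. Under these assumptions the
# posterior of the mean is a t distribution centred at the sample
# mean with scale equal to the standard error.
import numpy as np
from scipy import stats

data = np.array([9.8, 10.2, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3])  # hypothetical sample
n = data.size
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(n)

lower, upper = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"95% credible interval for the mean: ({lower:.2f}, {upper:.2f})")
```

With an informative prior the interval would differ from the classical one; this flat-prior case is just the simplest illustration of an interval with a direct probability interpretation.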

4) Hmmm... ? Don't know :eek: