# Cronbach Alpha vs Pearson r

#### dgi

##### New Member
Hi all,

This might sound like a completely amateurish question, but... what is the practical difference between Cronbach's alpha and the Pearson product-moment correlation coefficient?

I just made up an example: let's say you want to measure the degree of literacy of a population and you use two items: 1) the number of books read per year and 2) the education level.

You have a reasonable expectation that they measure the same underlying phenomenon (the literacy of that sample), and very likely there is a strong linear correlation between the two variables (i.e., a high value of r). Would computing Cronbach's alpha add any additional information in such a case?

#### hlsmith

##### Less is more. Stay pure. Stay poor.
With two variables, and in your scenario, it wouldn't really add much at all; it would seem redundant. You typically select the one that fits your goals and is common practice for that area.

#### noetsi

##### No cake for spunky
They are used for different things in different fields. Pearson's r looks at the correlation between interval variables, regardless of what they are. Cronbach's alpha (commonly used in psychology and education) is really geared to whether raters are consistent among themselves in the way they code data, although it can get at reliability more generally.

#### dgi

##### New Member
> They are used for different things in different fields. Pearson's r looks at the correlation between interval variables, regardless of what they are. Cronbach's alpha (commonly used in psychology and education) is **really geared to whether raters are consistent among themselves in the way they code data**, although it can get at reliability more generally.
Thanks a lot for your answer! I myself have the feeling that it is field-dependent.

I'd very much like it if you could expand on the part of your answer in bold. Couldn't you get a measure of raters' consistency simply by looking at Pearson's r between test items?

#### dgi

##### New Member
> With two variables and in your scenario it wouldn't really add much at all, would seem redundant. You typically select the one that fits your goals and is common practice for that area.

What would be a scenario in which they would not be redundant?

#### hlsmith

##### Less is more. Stay pure. Stay poor.
You probably wouldn't ever report them together to examine the same thing since, as has been pointed out, they are context-specific. You should just look at when certain fields use each one. An example of how both could be used during a project, but to examine DIFFERENT things, would be:

- Cronbach's alpha to examine how well a set of survey items measures a single characteristic
- Pearson correlation (or Spearman) to examine relationships among individual survey item responses, whether or not those items were grouped in the Cronbach's alpha

#### Dragan

##### Super Moderator
I think it is easier to consider the relationship between Cronbach's alpha and the Pearson correlation in standardized form as:

$$\alpha =\frac{k\bar{\rho }}{1+\left ( k-1 \right )\bar{\rho }}$$

where k is the number of classes (e.g., number of test items or raters). Thus, coefficient alpha can be considered a function of the average of the pairwise Pearson correlations across the k classes.
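Dragan's standardized formula is easy to check numerically. A minimal sketch in Python (function and variable names are my own, and NumPy is assumed):

```python
import numpy as np

def standardized_alpha(data):
    """Standardized Cronbach's alpha from the average pairwise
    Pearson correlation of the k columns (items) of `data`."""
    corr = np.corrcoef(data, rowvar=False)      # k x k Pearson correlation matrix
    k = corr.shape[0]
    rho_bar = (corr.sum() - k) / (k * (k - 1))  # mean off-diagonal correlation
    return k * rho_bar / (1 + (k - 1) * rho_bar)

# toy data: three noisy measurements of a single latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.column_stack([latent + rng.normal(scale=1.0, size=500)
                         for _ in range(3)])
print(standardized_alpha(items))  # fairly high, since the items share one cause
```

Because each item here is the latent score plus unit-variance noise, the pairwise correlations hover around .5 and the three-item alpha lands around .75, illustrating how alpha summarizes the whole correlation matrix in one number.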

#### CB

##### Super Moderator
As Dragan shows, the two are closely related: if we know the correlation between two items, that is enough information to calculate Cronbach's alpha, and vice versa. (The alpha value will be higher than the correlation, assuming the correlation is positive.)
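For two items, Dragan's standardized formula reduces to 2r / (1 + r), which is the same step-up as the Spearman-Brown prophecy for doubling test length, so alpha always exceeds a positive r. A quick numeric check in plain Python (names are illustrative):

```python
# For k = 2 items the standardized alpha reduces to 2r / (1 + r).
def alpha_two_items(r):
    return 2 * r / (1 + r)

for r in (0.3, 0.5, 0.8):
    # alpha is always above r for 0 < r < 1
    print(f"r = {r:.1f} -> alpha = {alpha_two_items(r):.3f}")
# prints alpha = 0.462, 0.667, 0.889
```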

> Cronbach alpha to examine how well a set of survey items measures a single characteristic
Cronbach's alpha is often referred to as a measure of unidimensionality, but in reality it gives you very little information about unidimensionality. Useful paper: *On the use, the misuse, and the very limited usefulness of Cronbach's alpha* (Sijtsma, 2009).

#### dgi

##### New Member
Thanks a lot for all your excellent replies!

#### Basma

##### New Member
Hello everyone,
I have a question regarding Cronbach's alpha which might seem daft, but when do you actually conduct a Cronbach's alpha test? Is it during piloting your questionnaire, or after you collect the data from your sample?

Help a confused postgrad student make her dissertation happen!

Thanx

#### CB

##### Super Moderator
You can do both, but what's reported in journal articles etc. is usually the Cronbach's alpha from the "real" sample: often the questionnaire will be slightly altered after piloting, so what's of most interest to readers is the reliability of the final version in the full-sized sample.

#### Basma

##### New Member
Thanx for the great answers everyone.

But I am still confused, as I did my pilot testing without the respondents actually taking the questionnaire; they just looked at it and told me whether it made sense, etc.
Also, all my items are taken from papers which report a Cronbach's alpha of at least 0.65. Does this mean I should still compute Cronbach's alpha? Because I really want to start collecting data, and I'm not sure how to run this test going from Qualtrics to SPSS.

#### Basma

##### New Member
Oh, that alpha thing sucks IMHO, as the experts in this thread have suggested. Because I have had studies in which alpha was not checked, and after collecting the results we saw our alpha was less than 0.5, but the results from our questionnaires still quite met our expectations. Unfortunately, many reviewers ask for it, although if the other aspects of your research are good, they might ignore it. It depends, then.

I think I might as well try to do it, so as not to be surprised by my results later and leave my supervisor dissatisfied!

#### HiLo

##### New Member
Hi Basma

Cronbach's alpha is pretty much standard in all psychometric research, and it can actually be useful both for developing survey instruments and for post-hoc validation of the instrument. If you have a measurement model with a few subscales or "dimensions," each manifested by, say, 5 or 6 questions on the pilot survey, the alpha can help you determine whether those questions are truly reflective of their respective dimensions. After a pilot test, where you have actual test subjects take the survey, the alpha can help you decide which questions are important and which can be dropped to streamline the final survey. Some statistical programs (e.g., SAS) will calculate an alpha after dropping each successive question, to show you how much higher alpha would be without that question. An ideal alpha is .7 to .9, but .6 is acceptable. If it's above .9, then your questions are probably redundant and not varied enough in scope for that dimension (they're asking nearly the same thing).
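The alpha-if-item-deleted output described above is straightforward to reproduce yourself. A minimal sketch using the raw (variance-based) alpha formula, with made-up data and my own function names:

```python
import numpy as np

def cronbach_alpha(data):
    """Raw Cronbach's alpha for respondents x items `data`:
    (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(data):
    """Alpha recomputed with each item dropped in turn
    (mirrors the per-item output SAS/SPSS report)."""
    return [cronbach_alpha(np.delete(data, j, axis=1))
            for j in range(data.shape[1])]

# made-up pilot data: 20 respondents x 4 items
rng = np.random.default_rng(42)
trait = rng.normal(size=20)
survey = np.column_stack([trait + rng.normal(scale=0.8, size=20)
                          for _ in range(4)])
print(cronbach_alpha(survey))
print(alpha_if_item_deleted(survey))
```

If any alpha-if-deleted value exceeds the full-scale alpha, that item is hurting internal consistency and is a candidate for dropping.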

It sounds like you did not actually "pilot test" your survey, but that you did qualitative testing to confirm face validity (the survey looked right and seemed relevant to potential subjects) and perhaps content validity (the questions were correct in scope and depth, at least according to the subjects). A pilot test is usually accomplished by administering the survey to test subjects (n>30) and assessing the results. Sometimes this is out of your scope or resource budget, but it's the best way to catch problems if you're serious about your survey's reliability and validity and you can't afford to make a big mistake with the "live" survey.

HTH !

#### DaniellaS

##### New Member
> I think it is easier to consider the relationship between Cronbach's alpha and the Pearson correlation in standardized form as:
>
> $$\alpha =\frac{k\bar{\rho }}{1+\left ( k-1 \right )\bar{\rho }}$$
>
> where k is the number of classes (e.g., number of test items or raters). Thus, coefficient alpha can be considered a function of the average of the pairwise Pearson correlations across the k classes.
Given that Cronbach's alpha is often used in psychology to measure the reliability of scale items that are on the ordinal scale of measurement, wouldn't it be meaningful to use the Spearman r instead of the Pearson r to compute the Cronbach alpha?

#### spunky

##### Can't make spagetti
> Wouldn't it be meaningful to use the Spearman r instead of the Pearson r to compute the Cronbach alpha?
Spearman's rho doesn't translate as easily into the ratio of variance explained to total variance the way Pearson's r does. Keep in mind that Spearman's rho assumes a rank transformation of continuous data.

The more correct way to approach reliability for ordinal data from a factor-analytic point of view (as Cronbach's alpha does) is to use the matrix of tetrachoric or polychoric correlations to calculate alpha.
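Given a polychoric correlation matrix (which you would estimate with a specialized package; that step is not shown here), the alpha formula itself is unchanged. A sketch with an illustrative, made-up matrix:

```python
import numpy as np

def alpha_from_corr(R):
    """Standardized alpha computed from a k x k correlation matrix R.
    Feeding it a polychoric matrix yields the 'ordinal alpha' described
    above; feeding it a Pearson matrix yields the usual standardized alpha."""
    k = R.shape[0]
    rho_bar = (R.sum() - k) / (k * (k - 1))  # mean off-diagonal correlation
    return k * rho_bar / (1 + (k - 1) * rho_bar)

# illustrative (made-up) polychoric matrix for three Likert items
R_poly = np.array([[1.00, 0.60, 0.55],
                   [0.60, 1.00, 0.50],
                   [0.55, 0.50, 1.00]])
print(round(alpha_from_corr(R_poly), 3))  # -> 0.786
```

Because polychoric correlations are typically larger than the Pearson correlations computed on the same ordinal responses, the ordinal alpha is usually higher than the conventional one.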

#### yue86231

##### New Member
> They are used for different things in different fields. Pearson's r looks at the correlation between interval variables, regardless of what they are. Cronbach's alpha (commonly used in psychology and education) is really geared to whether raters are consistent among themselves in the way they code data, although it can get at reliability more generally.
Hi, I'm still a little bit confused here.

I know I can compute Pearson's correlation among all the items, regardless of what they are. But in reality it is unlikely that I would correlate, for example, several attitude statements with the length of someone's last name. I would only put certain variables (XYZ) into the correlation analysis, and if I were running a Cronbach's alpha analysis, I would put the same variables XYZ into that analysis.

Could I say that if XYZ are all well correlated with each other, they are consistent?

#### spunky

##### Can't make spagetti
> If XYZ are all well correlated with each other, they are consistent?
Define 'consistency', please. When you say 'consistency', are you implying it from a reliability/classical test theory perspective? 'Consistency' has many definitions within mathematics/statistics, and these definitions do not necessarily imply each other.

#### yue86231

##### New Member
> "Cronbach's alpha is a measure of internal consistency, that is, how closely related a set of items are as a group."
> (http://www.ats.ucla.edu/stat/spss/faq/alpha.html)

From my understanding of the definition in the sentence above, Cronbach's alpha measures how highly correlated the items are with each other.

So I cannot understand why we can't just run a correlation among XYZ. It tells us how highly they are correlated with each other.