## Reliability in SPSS

1. I have 66 students who have been using peer assessment in small groups. On average, each student was graded by 8 peers. I want to see how reliable the results of the peer assessment were. I have set up my data in SPSS as follows:
(Peer Grade) P1 P2 P3 P4 P5 P6 P7 P8
Student 1
Student 2
Student 3
. . .
Student 66

If I run Scale --> Reliability Analysis on the data in this way (selecting P1-P8), is this the Cronbach's alpha that I'm looking for? (i.e., is this the correct measure to tell me how reliable the grades given by the peers were?)

I hope my question is clear! thanks in advance.

2. Cronbach's alpha is, mathematically, the average of all possible split-half reliability estimates, so in order to get a valid alpha each column would need to represent the same survey item (or the same rater) for every row.
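To make the computation concrete: outside of SPSS, alpha for an n-respondents-by-k-items score matrix is usually computed from the individual item variances and the variance of the row totals. A minimal Python/NumPy sketch (the function name and the data layout are illustrative, not something from SPSS itself):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each column
    total_var = scores.sum(axis=1).var(ddof=1)  # sample variance of the row totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

With perfectly redundant columns, alpha comes out as 1; as the columns become less consistent, alpha drops.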

I assume that, for each student, column "P1" doesn't necessarily represent the same peer? If so, then what you could do is look at the standard deviation for each student (row): if it's small, the raters generally had the same attitude toward that person; if it's large, opinion about that person varied...
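As a sketch of that row-by-row check (assuming the grades have already been coded numerically; the example matrix here is made up):

```python
import numpy as np

# Hypothetical peer-grade matrix: rows = students, columns = peer grades P1..P8
grades = np.array([
    [3, 3, 2, 3, 3, 2, 3, 3],  # raters mostly agree about this student
    [0, 3, 1, 3, 0, 2, 3, 1],  # raters disagree widely about this one
])

# One sample standard deviation per student (per row)
row_sd = grades.std(axis=1, ddof=1)
```

A low row SD means the peers converged on a grade for that student; a high row SD flags students the peers could not agree on.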

3. Thanks John M.

I do have enough data that I could get it into a format where P1 would always be the same person, but then I'd only have a few ratings in each group at a time (because P1 only rated the others in his/her group, and groups were composed of about 8 people). Would this be useful?

Also, if I transposed the data so that the students were the columns and the peer assessors were the rows, would Cronbach's alpha work? (The grades given by each peer would be the 'survey' data.) Although, would I then need to get an alpha for each group, since the peer assessors aren't the same across groups?

Or were you saying to just use the standard deviation because Cronbach's alpha isn't appropriate for this data?

Thanks again.

4. A little bit more background:

I have 2 years of data. Year 1 is data from the first year we ever used peer assessment. In year 2, some new methods were used. I'm trying to see if the new methods have made a difference in the quality of the grades given by the peer assessors. (that's why I was trying to get a reliability score, so I could compare year 1 with year 2). One of the concerns of the Y1 data was that the students were just giving everyone the same score.

Also, the grades given by the students are categorical because of the grading scheme we use (Fail, Poor, Pass, Excellent).

5. "Or were you saying just use the standard deviation because a cronbach's alpha isn't appropriate for this data?"

This is my practical / pragmatic side speaking: I would just stick with the standard deviation. There's enough departure from the typical situation for Cronbach's alpha for me to have reservations about its proper interpretation.

6. Originally Posted by kowalsks
I'm trying to see if the new methods have made a difference in the quality of the grades given by the peer assessors. ... Also, the grades given by the students are categorical because of the grading scheme we use (Fail, Poor, Pass, Excellent).
My pragmatic side again: Because of these differences, you may want to consider just using a typical, standard measure of variation, such as the standard deviation. For the purposes of getting some sort of quantitative estimate of variation, I would assign numerical values to each category (Fail = 0, Poor = 1, Pass = 2, Excellent = 3).
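A small sketch of that coding step (Python assumed; the mapping simply follows the numerical values suggested above, and the function name is illustrative):

```python
import numpy as np

# Ordinal grade labels mapped to the numeric codes suggested above
GRADE_CODES = {"Fail": 0, "Poor": 1, "Pass": 2, "Excellent": 3}

def peer_grade_sd(labels):
    """Sample standard deviation of one student's peer grades, given as labels."""
    coded = [GRADE_CODES[g] for g in labels]
    return np.std(coded, ddof=1)
```

If every peer gives the same grade the SD is 0 (the Year 1 concern that students "just gave everyone the same score" would show up as many near-zero SDs), and it grows as the grades spread across the four categories.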
