Comparing one rater with a group of raters

#1
Hi all,

I'm working through a research project and am looking for advice. Students were asked to work through a simulated clinical scenario and were rated in real time using a checklist. In addition, each scenario was videotaped and rated by 3 different raters (using the same checklist). What I'm trying to look at is whether there is good correlation between the real-time rater's score and the video raters' scores. Typically for these scenarios, the "gold standard" is multiple video raters. What I'm trying to see is whether it's feasible in this situation to just use one live rater (i.e., does the live rater see everything that the video raters saw?).

Looking at the data from the video raters on 30 students in our pilot, they have an inter-rater reliability (Krippendorff's alpha) of 0.749. My question is: is it reasonable to examine the correlation between the average of the 3 video raters and the score from the live rater, or is there another technique that I should look at?
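For concreteness, here's a rough sketch in Python of what I have in mind (the `krippendorff` package and the made-up score arrays are just placeholders for illustration, not my actual data):

```python
import numpy as np
import krippendorff                     # pip install krippendorff
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: 3 video raters x 30 students, plus 1 live rater.
# Real data would be the checklist totals per student.
video_scores = rng.integers(10, 21, size=(3, 30)).astype(float)
live_scores = rng.integers(10, 21, size=30).astype(float)

# Inter-rater reliability among the 3 video raters (interval-level alpha).
alpha = krippendorff.alpha(reliability_data=video_scores,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha (video raters): {alpha:.3f}")

# Agreement between the live rater and the mean of the video raters.
video_mean = video_scores.mean(axis=0)
r, p = pearsonr(live_scores, video_mean)
print(f"Pearson r (live vs. video-rater mean): {r:.3f} (p = {p:.3f})")
```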

All comments welcome, thanks all!