interrater reliability

  1.

    Interrater reliability test with 3 raters (2 experts and 1 non-expert) on ordinal data

    I need to find the interrater reliability between 3 raters. A nurse (non-expert) performed 60 ultrasound scans and categorised patients into 3 ordered categories (hypovolemic, euvolemic, hypervolemic) according to the scans. The recorded scans were subsequently...
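For three raters on ordered categories, a common starting point is pairwise weighted kappa (e.g. nurse vs. each expert in turn). Below is a minimal stdlib Python sketch of quadratically weighted Cohen's kappa; the ratings are made up to stand in for the scan classifications, and the function name is mine:

```python
from collections import Counter

# Hypothetical ratings: two raters classify the same scans into
# 3 ordered categories (0=hypovolemic, 1=euvolemic, 2=hypervolemic).
rater_a = [0, 1, 1, 2, 2, 0, 1, 2, 1, 0]
rater_b = [0, 1, 2, 2, 1, 0, 1, 2, 2, 0]

def weighted_kappa(a, b, n_cat=3):
    """Quadratically weighted Cohen's kappa for two raters on ordinal data."""
    n = len(a)
    obs = Counter(zip(a, b))       # observed joint counts
    marg_a = Counter(a)            # marginal counts, rater A
    marg_b = Counter(b)            # marginal counts, rater B
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = (i - j) ** 2 / (n_cat - 1) ** 2           # quadratic disagreement weight
            num += w * obs.get((i, j), 0) / n             # observed weighted disagreement
            den += w * (marg_a[i] / n) * (marg_b[j] / n)  # chance-expected disagreement
    return 1 - num / den

print(round(weighted_kappa(rater_a, rater_b), 3))  # -> 0.769
```

Quadratic weights penalise hypovolemic-vs-hypervolemic disagreements more than adjacent-category ones, which is usually what you want for ordinal scales; for a single chance-corrected statistic over all three raters at once, Krippendorff's alpha with ordinal weights is an alternative.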
  2.

    Inter-rater reliability - what analysis do I use?

    During an intensive 5-day group therapy camp, where counselors were paired 1:1 with campers (25 campers, 25 counselors), we collected data on each camper's ability to verbally respond to questions during the group morning meeting and closing meeting. If the...
  3.

    Which interrater reliability test to choose, and how to set up the data matrix?

    I have a question regarding which interrater reliability test I should use. The situation is as follows: 12 judges rated 20 profiles with 14 questions (each profile was rated with the same 14 questions). I want to know to what degree the raters agreed in their judgments. I was thinking...
  4.

    Interrater Reliability. Is MORE better?

    Which design is more powerful: testing interrater reliability with 2 judges or 4 judges? (citations appreciated)
  5.

    Comparing one rater with a group of raters

    Hi all, I'm working through a research project, and am looking for advice. Students were asked to work through a simulated clinical scenario and were rated in real-time using a checklist. In addition, each scenario was videotaped and rated by 3 different raters (using the same checklist)...
  6.

    I need help with Inter-Rater agreement

    I'm developing a scale and my items were already checked by a group of 4 judges. They gave a dichotomous response (yes, leave the item, or no, remove it). Now what should I do? I know I have to calculate something called "inter-rater agreement", but I don't know how and I don't know which...
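For several judges making the same dichotomous keep/remove call on each item, Fleiss' kappa is the usual chance-corrected agreement statistic. A stdlib Python sketch, where the counts are made up for illustration (each row is one scale item; columns count how many of the 4 judges chose [keep, remove]):

```python
def fleiss_kappa(table):
    """Fleiss' kappa. table[i][j] = number of raters assigning item i to category j."""
    n_items = len(table)
    n_raters = sum(table[0])
    # Per-item agreement P_i, then mean observed agreement.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in table]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the category marginals.
    totals = [sum(row[j] for row in table) for j in range(len(table[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical counts for 6 scale items rated by 4 judges.
counts = [[4, 0], [3, 1], [4, 0], [1, 3], [4, 0], [2, 2]]
print(round(fleiss_kappa(counts), 3))  # -> 0.259
```

One caveat worth knowing: kappa can be low even with high raw agreement when nearly every item gets "keep" (skewed marginals), so it is common to report raw percent agreement alongside it.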
  7.

    Cohen's kappa or Fleiss kappa or both?

    Hi all, I am trying to compare 2 instruments, A and B, against the gold standard. The measurement outcome is dichotomous. 2 different raters, R1 and R2, each use instruments A and B to rate each subject. So the data looks something like: ID, A_R1, A_R2, B_R1, B_R2...
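With only two raters per instrument, one straightforward piece of the analysis is Cohen's kappa between R1 and R2, computed separately for instrument A and for instrument B (Fleiss' kappa is designed for 3+ raters, so a single R1/R2 pair doesn't need it). A stdlib sketch with made-up dichotomous ratings for instrument A:

```python
def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two raters with dichotomous (0/1) ratings."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # Chance agreement from each rater's marginal probability of rating 1.
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

# Hypothetical columns A_R1, A_R2 for instrument A.
a_r1 = [1, 0, 1, 1, 0, 0, 1, 0]
a_r2 = [1, 0, 1, 0, 0, 0, 1, 1]
print(cohen_kappa(a_r1, a_r2))  # -> 0.5
```

Agreement with the gold standard itself is a separate question from inter-rater agreement, and is usually reported as sensitivity/specificity for each rater-instrument combination.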
  8.

    Intraclass correlation coefficient producing unexpected results... can you help me understand?

    Hi, so I am trying to measure the reliability of measurements taken with callipers by 4 different users on the same 10 samples. This is my data in millimeters:

                  Barry   Sarah   Aoife   Jen
        Sample 1  2.18    2.15    2.27    1.62
        Sample 2  1.695   1.82    2.07    1.33
        Sample 3  1.76    1.46    2.20    1.18
        Sample 4  1.83    1.94    3.00    1.51
        Sample 5  ...
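A likely culprit with data like this is systematic differences between users (Jen reads lowest in every visible row, Aoife highest), which drags down any ICC that targets absolute agreement. Below is a stdlib Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement), using only the four rows that are fully visible in the post; the function name is mine:

```python
# The four complete rows of the calliper data; columns: Barry, Sarah, Aoife, Jen.
data = [
    [2.18,  2.15, 2.27, 1.62],
    [1.695, 1.82, 2.07, 1.33],
    [1.76,  1.46, 2.20, 1.18],
    [1.83,  1.94, 3.00, 1.51],
]

def icc2_1(rows):
    """ICC(2,1) from the two-way ANOVA mean squares (Shrout & Fleiss, 1979)."""
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(r[j] for r in rows) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between samples
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # between raters
    sse = sum((rows[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                               # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(round(icc2_1(data), 3))
```

If this comes out much lower than expected, compare it with a consistency-type ICC (ICC(3,1)), which ignores systematic offsets between raters: a large gap between the two indicates that rater bias, not random measurement noise, is what is hurting the coefficient.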