Inter-rater reliability

  1. HELP! Which inter-rater reliability test?

    Hi, for my research I need to calculate inter-rater reliability. There are 4 observers, who each observe the same objects and score them on a total of 7 variables. The variables are ordinal, but not every variable has the same number of outcome levels. Which inter-rater... (a Krippendorff's alpha sketch follows this list)
  2. Inter-subject agreement re timings of (unequal numbers of) events

    Hi everyone, I am trying to figure out the best statistic to quantify the amount of agreement between subjects who were asked to press a button whenever they felt a certain emotion while listening to a short (2-minute) piece of music. Plotted as time series, the... (a rough binning sketch follows this list)
  3. A measure of agreement between raters' scores

    Hi - I'm looking for a measure of the level of agreement between a number of raters who each score the performance of something on a 1-4 range. So, for example, we have 10 raters, who have each independently assessed something: 1. 4, 2. 4, 3. 3, 4. 3, 5. 2, 6. 3, 7. 4, 8... (an r_WG sketch follows this list)
  4. Inter-rater reliability with multiple raters in SPSS Statistics

    Hi everyone! I need help with a research assignment. I'm new to IBM SPSS Statistics, and actually to statistics in general, so I'm pretty overwhelmed. My coworkers and I created a new observation scale to improve the concise transfer of information between nurses and other psychiatric staff... (an ICC sketch follows this list)
  5. Comparing Cronbach's alpha intraclass correlations with varying rater permutations

    I've done a study for my PhD comparing different methods/conditions that might affect inter-rater agreement when rating the creativity of other people's drawings. I had 600+ drawings that needed to be rated, so it wasn't feasible to have the same few people rate all the drawings. So 24 raters... (a missing-data Krippendorff's alpha sketch follows this list)
  6. Inter-rater Reliability in a Grading Rubric -- How do I analyze this?

    Hello, I'm tasked with analyzing how reliable raters are when rating a single piece of work along multiple dimensions. I'm trying to answer questions like: 1) Which rubric dimensions (some or all) have unacceptable variance across the raters? How do I define the cutoff for unacceptable... (a per-dimension dispersion sketch follows this list)
  7. Inter-rater Reliability measure for ordinal data with multiple raters

    I want to obtain a measure of IRR for a diagnostic tool used in the NHS. 6 professionals assessed an individual on multiple health domains, such as Behaviour, Cognition, Psychology, Communication and Mobility. Each... (a Kendall's W sketch follows this list)
  8. Q re Kappa statistic

    I am trying to calculate inter-rater reliability scores for 10 survey questions -- most of which are binary (yes/no). The agreement level between the two raters is 70-90% on nearly every question; however, the Kappa score is often very poor (0.2-0.4). Can this be right? Secondly, can you... (a worked kappa-paradox example follows this list)
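
For thread 1 (ordinal scores from 4 observers, with variables that have different numbers of levels), one commonly suggested statistic is Krippendorff's alpha, which handles any number of raters, ordinal data, and missing ratings, and is computed one variable at a time. A minimal sketch using the third-party krippendorff package, with made-up scores for a single variable:

```python
import numpy as np
import krippendorff  # third-party: pip install krippendorff

# Rows = the 4 observers, columns = the observed objects, np.nan = missing.
# Alpha is computed per variable, so scales of different lengths are no
# problem: just run this once for each of the 7 variables.
scores = np.array([
    [1, 2, 3, 3, 2, 1, 4],
    [1, 2, 3, 3, 2, 2, 4],
    [np.nan, 3, 3, 3, 2, 1, 4],
    [1, 2, 4, 3, 2, 1, 4],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=scores,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha (ordinal): {alpha:.3f}")
```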
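
For thread 2 (agreement on button-press timings), there is no single standard statistic; one rough approach is to bin each subject's presses onto a common time grid and compute pairwise association between the resulting 0/1 series. A sketch with invented press times and an arbitrary 1-second bin width:

```python
import numpy as np
from itertools import combinations

def binned_series(press_times, duration=120.0, bin_width=1.0):
    """0/1 series: 1 if the subject pressed at least once in that bin."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts, _ = np.histogram(press_times, bins=edges)
    return (counts > 0).astype(int)

# Invented press times (seconds) for three subjects over a 2-minute piece.
subjects = [
    [10.2, 34.7, 71.0, 95.5],
    [11.0, 35.1, 70.2, 96.0],
    [33.9, 50.4, 94.8],
]
series = np.array([binned_series(s) for s in subjects])

# Mean pairwise Pearson correlation between subjects' binary series.
# The bin width matters: too narrow and near-misses count as disagreement.
r = np.mean([np.corrcoef(series[i], series[j])[0, 1]
             for i, j in combinations(range(len(series)), 2)])
print(f"mean pairwise correlation: {r:.3f}")
```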
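
For thread 3 (10 raters scoring one performance on a 1-4 scale), chance-corrected indices like kappa need multiple rating targets; with a single target, a within-group agreement index such as r_WG (James, Demaree & Wolf, 1984) is one option, comparing the observed variance to that of a uniform "no agreement" distribution. A sketch (the first seven scores follow the thread's example; the last three are made up):

```python
import numpy as np

# Scores from 10 raters on a 1-4 scale for a single rating target.
scores = np.array([4, 4, 3, 3, 2, 3, 4, 3, 4, 3])

A = 4                              # number of scale points
sigma2_eu = (A ** 2 - 1) / 12      # variance of the uniform null (1.25 here)
s2 = scores.var(ddof=1)            # observed variance across raters

r_wg = 1 - s2 / sigma2_eu          # 1 = perfect agreement, 0 = uniform noise
print(f"observed variance: {s2:.3f}, r_WG: {r_wg:.3f}")
```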
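
For thread 4 (a new observation scale with multiple raters), the usual statistic is an intraclass correlation, which SPSS reports under Analyze > Scale > Reliability Analysis. If stepping outside SPSS is an option, the same two-way ICC models can be computed with the third-party pingouin package; a sketch with invented data:

```python
import pandas as pd
import pingouin as pg  # third-party: pip install pingouin

# Invented long-format data: each row is one rater's score for one patient.
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "rater":   ["A", "B", "C"] * 4,
    "score":   [3, 3, 4, 2, 2, 2, 4, 3, 4, 1, 2, 1],
})

icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="rater", ratings="score")
# The table covers ICC(1), ICC(2,1), ICC(3,1) and their average-measure
# forms -- the same models SPSS labels in its reliability output.
print(icc[["Type", "Description", "ICC", "CI95%"]])
```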
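
For thread 5 (600+ drawings, 24 raters, nobody rating everything), one statistic that tolerates an incomplete design directly is Krippendorff's alpha, since it is defined over whatever rater-unit pairs exist and treats unrated cells as missing. A sketch with a simulated incomplete design:

```python
import numpy as np
import krippendorff  # third-party: pip install krippendorff

rng = np.random.default_rng(0)

# Simulated design: 24 raters x 600 drawings, each drawing rated by only
# 3 raters; np.nan marks "did not rate this drawing".
ratings = np.full((24, 600), np.nan)
for j in range(600):
    who = rng.choice(24, size=3, replace=False)
    ratings[who, j] = rng.integers(1, 6, size=3)  # 1-5 creativity scale

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")  # ~0 here: ratings are random
```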
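
For thread 6 (which rubric dimensions have unacceptable rater variance), a first descriptive pass is simply the per-dimension spread of scores across raters; where to draw the "unacceptable" cutoff is a judgment call, not a statistical constant. A sketch with invented rubric data:

```python
import pandas as pd

# Invented long-format data: one piece of work, four raters, three rubric
# dimensions, scores on a 1-5 scale.
df = pd.DataFrame({
    "rater":     ["A", "B", "C", "D"] * 3,
    "dimension": ["clarity"] * 4 + ["structure"] * 4 + ["evidence"] * 4,
    "score":     [4, 4, 5, 4,  2, 4, 5, 3,  3, 3, 3, 4],
})

# Dimensions with a large SD (or range) across raters are the candidates
# for unacceptable disagreement.
summary = df.groupby("dimension")["score"].agg(["mean", "std", "min", "max"])
print(summary.sort_values("std", ascending=False))
```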
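
For thread 7 (6 professionals giving ordinal scores across several health domains), one classical option is Kendall's coefficient of concordance W, which asks whether the raters order the units the same way. A sketch treating the domains as the ranked units, with invented scores and the standard tie correction:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Tie-corrected Kendall's W. ratings: array (m raters, n units)."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # average ranks
    R = ranks.sum(axis=0)                              # column rank sums
    S = ((R - R.mean()) ** 2).sum()
    T = 0.0                                            # tie correction
    for row in ratings:
        _, counts = np.unique(row, return_counts=True)
        T += (counts ** 3 - counts).sum()
    return 12 * S / (m ** 2 * (n ** 3 - n) - m * T)

# Invented scores: 6 professionals x 5 domains (e.g. Behaviour, Cognition,
# Psychology, Communication, Mobility), each on an ordinal scale.
ratings = np.array([
    [3, 2, 4, 1, 2],
    [3, 1, 4, 2, 2],
    [2, 2, 4, 1, 3],
    [3, 2, 5, 1, 2],
    [3, 2, 4, 2, 2],
    [2, 1, 4, 1, 3],
])
print(f"Kendall's W: {kendalls_w(ratings):.3f}")  # 1 = perfect concordance
```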
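
For thread 8, what's described is the well-known "kappa paradox": when almost every answer falls in one category, expected chance agreement is already high, so kappa can be low (or even negative) despite 70-90% raw agreement. So yes, it can be right. A worked example with invented data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Two raters answer a yes/no question for 50 cases; they agree on 46 (92%),
# but 94% of each rater's answers are "yes", so chance agreement is huge.
r1 = np.array([1] * 45 + [1, 1] + [0, 0] + [0])
r2 = np.array([1] * 45 + [0, 0] + [1, 1] + [0])

raw = (r1 == r2).mean()            # 0.92
kappa = cohen_kappa_score(r1, r2)  # ~0.29

# Expected chance agreement: 0.94*0.94 + 0.06*0.06 = 0.8872, so
# kappa = (0.92 - 0.8872) / (1 - 0.8872) = 0.29 despite 92% agreement.
print(f"raw agreement: {raw:.2f}, kappa: {kappa:.2f}")
```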