# Recent content by Burnsie_UK

1. ### Weighted kappa - so confused

Hi, thanks for your detailed answer. I agree, I have been finding info that says some form of pilot study is required.... which is a bit hard for a project approval form! Additionally, I also believe that the raters required will differ depending on the response options. i.e. if the raters can...
2. ### Weighted kappa - so confused

Yes, it will be. I suppose the hypothesis will be that there is a difference (due to skill level and knowledge level), with the null being that there is agreement
3. ### Weighted kappa - so confused

I have posted about this subject regarding a variety of different inferential statistical tests...so apologies in advance. I'm running a project where one of my investigations will look at the level of agreement between multiple raters who will be split into two groups; novice vs expert...
4. ### Krippendorff’s alpha - sample size

Hi all, I’m not good at this stuff, so please bear with me. I am doing an investigation where I want to look at agreement between different groups of raters. The raters (a group of novice/new raters and a group of expert raters) will watch participants before an action and grade that (probably)...
5. ### Simple question. Type of data?

Well, thinking out loud here, if rater A gives participant A a score of 2 on Monday, we then compare all raters (in two groups) from Monday on all participants. If there is agreement between the groups, then great. But surely with only 3 possible outcomes, the chance of accidental agreement is...
6. ### Simple question. Type of data?

Great. So comparing rater groups for the individual screens (1-3 scoring system) using Kendall's. And comparing the composite scores between groups, ICC? I feel I should also look at intra-rater scores, but this would involve a further testing protocol as currently I'm trying to do "live"...
7. ### Simple question. Type of data?

Thanks. Kendall's tau??
8. ### Simple question. Type of data?

Thanks for replies both. So, I am trying to compare the scores given by two groups. One group is an expert group, and the other is a novice group. I am trying to see if the scores both groups give relate to each other. The scores are going to be ordinal, in that 1= fail, 2=pass with error...
9. ### Simple question. Type of data?

So, there is a test that you can score 1, 2 or 3 on. Each number is a fault category. 1=fail 2=pass with errors 3=pass no error Therefore 3 is better than 1. However, I have had to define it as such. I still believe that this is ordinal data. Am I being stupid here?
10. ### PhD – Power calculation

Ah, the model they (the literature) have used to compare is the general ICC and also the kappa
11. ### PhD – Power calculation

Sorry for being annoying, and thanks for your speedy and detailed replies. So we're singing from the same hymn sheet.... What do you mean by "model"?
12. ### PhD – Power calculation

Yes. So the scale is a/will be a subjective scoring system (to take out the need for experience, measuring equipment etc.). For example, you do a hop on one leg. If you fall over when landing = 1, if you wobble with arms out when landing = 2, if you don’t wobble when landing = 3. I want to test...
13. ### PhD – Power calculation

Are you reporting the means of the whole data set? They can only be whole numbers.... Obvs the means can have decimal places. But yes, the closer the means, surely the closer the agreement between the novice and the expert. This means the novice finds the same info from the screen as the expert and thus...
14. ### PhD – Power calculation

I.e. no sig.
15. ### PhD – Power calculation

The screen would be better if there is closer agreement between the two groups
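The weighted-kappa posts above all circle the same computation: chance-corrected agreement on a 1-3 ordinal scale, with partial credit for near-misses. As a rough, non-authoritative sketch (pure Python, quadratic weights; the novice/expert rating lists are made up for illustration):

```python
def weighted_kappa(ratings_a, ratings_b, categories=(1, 2, 3)):
    """Quadratic-weighted Cohen's kappa for two paired lists of ordinal scores."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)

    # Observed joint frequencies of (rater A score, rater B score) pairs.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginals give the agreement expected by chance alone.
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Quadratic disagreement weights: a 1-vs-3 disagreement costs more
    # than a 2-vs-3 one, which is the point of *weighted* kappa.
    w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]

    disagree_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp

# Hypothetical scores: 8 participants rated by one novice and one expert.
novice = [1, 2, 2, 3, 1, 3, 2, 3]
expert = [1, 2, 3, 3, 1, 3, 2, 2]
print(round(weighted_kappa(novice, expert), 3))  # → 0.795
```

With only three outcomes, raw percent agreement overstates reliability (the "accidental agreement" worry in post 5); kappa subtracts the chance-expected part, so 1.0 means perfect agreement and 0 means no better than chance.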
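For the "Kendall's tau??" question: with a 1-3 scale, ties are everywhere, so the tie-corrected tau-b variant is the usual choice. A minimal sketch (pure Python; the score lists are invented, and this counts all pairs, so it is O(n²) and only meant for small samples):

```python
def kendall_tau_b(x, y):
    """Kendall's tau-b (tie-corrected) for two paired lists of ordinal scores."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[i] - x[j]
            dy = y[i] - y[j]
            if dx == 0 and dy == 0:
                continue              # tied in both lists: excluded everywhere
            elif dx == 0:
                ties_x += 1           # tied in x only
            elif dy == 0:
                ties_y += 1           # tied in y only
            elif dx * dy > 0:
                concordant += 1       # both pairs ordered the same way
            else:
                discordant += 1       # ordered oppositely
    denom = ((concordant + discordant + ties_x)
             * (concordant + discordant + ties_y)) ** 0.5
    return (concordant - discordant) / denom

print(kendall_tau_b([1, 1, 2], [1, 2, 2]))  # → 0.5
```

Worth noting the design trade-off raised in the thread: tau-b measures whether two sets of scores *rank* participants consistently, while kappa/ICC measure whether the scores actually *match*, so a novice who is consistently one grade harsher than the expert can still get a high tau.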