This may be too basic a question, but I would greatly appreciate any guidance.
Say you have a dataset in which several people have visually estimated the number of dots in a set of samples (say, boxes of dots). It turns out almost everyone overestimates.
You also count the actual dots in each sample, establishing a gold standard.
How would you derive a coefficient/correction factor that describes the overestimation, and how would you validate it, so that you could more accurately predict the real count from future visual estimates? And how big a dataset would you need?
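For concreteness, here is roughly what I have in mind, as a minimal Python sketch on toy placeholder data (the linear calibration model and 5-fold cross-validation are just my guesses at a reasonable approach, and the variable names are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy placeholder data; in practice these would be the real columns:
# `estimated` = each person's visual count, `actual` = the gold-standard count.
rng = np.random.default_rng(0)
actual = rng.integers(10, 200, size=60)
estimated = np.round(actual * 1.3 + rng.normal(0, 5, size=60))  # simulated overestimation

# Regress the true count on the estimate: actual ~ a + b * estimate.
# The slope b would be the "correction factor" (presumably < 1 here),
# with the intercept absorbing any constant bias.
X = estimated.reshape(-1, 1)
model = LinearRegression().fit(X, actual)
print(f"correction: actual ~ {model.intercept_:.1f} + {model.coef_[0]:.2f} * estimate")

# Validate on held-out data with 5-fold cross-validation (R^2 by default).
scores = cross_val_score(LinearRegression(), X, actual, cv=5)
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Is something along these lines the right idea, or is there a more standard way to calibrate and validate this kind of estimate?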
Does that make sense?
Thank you so much!