# Thread: Sensitivity/specificity - I'm confused

1. ## Sensitivity/specificity - I'm confused

Hi All,

I'm thinking about Sens vs Spec, like in the worked example here:

http://en.wikipedia.org/wiki/Sensiti...nd_specificity

but there is something that confuses me. Maybe somebody can explain it to me.

If a test has a high false positive rate (i.e., it gives many positives when you do NOT have the disease), I would say that the true positive rate cannot be lower than the false positive rate. Why? Well, if the reason for the test coming out falsely positive is purely noise/randomness, then that same process also occurs in the condition positive group, right? So, if the FP rate is 70%, then AT LEAST 70% of the condition positive group should come out positive; in fact, I would expect the proportion of positives in this group to be 70% PLUS the true detection rate times the remaining 30%, i.e., FP + TP*(1-FP).
And of course we would not label these as false positives (since these people are in the condition positive group); they are in fact correctly determined positive by a fluke.
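
A quick numeric sketch of what I mean, in Python (the numbers are made up, and the model is just my assumption that a positive occurs if random noise fires OR the test genuinely detects the disease, independently):

```python
# Toy illustration of the "noise + signal" idea above (all numbers made up).
f = 0.70  # noise / false-alarm probability (the FP rate in my sense)
d = 0.90  # hypothetical probability of a genuine detection in a diseased person

# Probability that a diseased person tests positive under this model:
p_pos_diseased = f + d * (1 - f)  # = 0.70 + 0.90 * 0.30 = 0.97
print(p_pos_diseased)             # 0.97, which is indeed >= f
```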

I hope this makes sense. What I want to know is how (or whether?) you should compensate for this.

The ultimate reason I ask is that I want to set up a maximum likelihood estimation program on a binary data set, and the FP and FN rates are some of the parameters.
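
For concreteness, here is a minimal sketch of the kind of program I mean (the parameterization and toy data are my own illustration):

```python
# Minimal MLE sketch: binary outcomes, with prevalence, FP rate, and FN rate
# as parameters (toy data; this is only an illustrative setup).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    """params = (prevalence pi, FP rate f, FN rate m); y is a 0/1 array."""
    pi, f, m = params
    # Marginal probability of a positive result:
    # diseased and detected, or healthy and falsely flagged.
    p = pi * (1 - m) + (1 - pi) * f
    p = np.clip(p, 1e-12, 1 - 1e-12)  # numerical safety
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # toy data
res = minimize(neg_log_lik, x0=[0.5, 0.1, 0.1], args=(y,),
               bounds=[(0.001, 0.999)] * 3)
print(res.x)
```

(I realize that with a single test per subject only one marginal probability is observable, so the three parameters would not be separately identifiable without replicates or extra structure; that is part of what I am trying to sort out.)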

Thanks for any help,
Bas

2. ## Re: Sensitivity/specificity - I'm confused

Originally Posted by Bas van Dijk
So, if the FP rate is 70%, then AT LEAST 70% of the condition positive group should come out positive
Is that necessarily so? There are nearly always condition-positive cases that come out as false negatives.

Kind regards

K.

3. ## Re: Sensitivity/specificity - I'm confused

Yes, OK, that's correct, but my point stands.
What I meant is that the expected proportion of positives in the condition positive group is at least the FP rate.

5. ## Re: Sensitivity/specificity - I'm confused

Originally Posted by Bas van Dijk
So, if the FP rate is 70%
Hold on right there. How are you defining "false positive rate"? Number of false positives divided by what?

Also:
Originally Posted by Bas van Dijk
What I meant is that the expected proportion of positives in the condition positive group is at least the FP rate.
Since the false positive results are a subset of all the positive results, obviously the number of false positives could never exceed the total number of positives... am I misunderstanding what you're trying to say?

6. ## Re: Sensitivity/specificity - I'm confused

Since the OP refers to the Wikipedia page for sensitivity and specificity, I'm guessing he is using the terms based on the definitions there:

True positive rate = sensitivity = true positives / (true positives + false negatives)
False positive rate = false positives / (false positives + true negatives) = 1 - specificity
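
In code, those definitions look like this (a throwaway helper of my own, with made-up counts, not anything from the Wikipedia page):

```python
# Compute both rates from raw confusion-matrix counts, per the definitions above.
def rates(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)  # true positive rate
    fp_rate = fp / (fp + tn)      # 1 - specificity
    return sensitivity, fp_rate

# Toy counts for illustration:
print(rates(tp=20, fp=180, tn=1820, fn=10))  # (0.667, 0.09)
```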

It's hard to imagine a case where the TP rate could be lower than the FP rate. That would basically mean that a person is less likely to receive a positive test result if they actually do have the disease than if they don't. Strange things happen, though, especially if tests are validated with one sample and then used with a completely different population. The OP talks about assuming that:

the reason for the test to come out false positive is purely 'noise/random'
But I'm not sure what this really means in concrete terms.

7. ## Re: Sensitivity/specificity - I'm confused

I see. It helps me to use the language of signal detection theory when thinking about binary classification problems. In this language, the true positive rate is the 'hit rate' and the false positive rate is the 'false alarm rate.' A standard measure of accuracy in detection theory is d' = z(hit rate) - z(false alarm rate), where z() is the inverse of the standard normal CDF (often loosely called the z-transform in this literature). Sometimes people do away with the z-transform and use the "corrected hit rate," i.e., hit rate - false alarm rate.

So yes, one certainly hopes that the false alarm rate does not exceed the hit rate for a modern medical test, as this would indicate "negative accuracy." But there is certainly no logical requirement that the hit rate must exceed the false alarm rate. Consider a group of decision makers who choose completely at random: if we tallied the accuracy of all of these decision makers, we would find that their corrected hit rates were 0 on average, with some above and some below, implying that for some of them the false alarm rate exceeded the hit rate.

On the other hand, if we assume that a test has above-chance accuracy, then by definition the hit rate exceeds the false alarm rate (i.e., this is what it means for a test to have above-chance accuracy). Maybe this very last part is close to what the OP is getting at.
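
Here is a quick sketch of both points in Python (the numbers and the simulation setup are illustrative assumptions of mine):

```python
# d' and the random-guesser point, sketched with made-up numbers.
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    # z() here is the inverse of the standard normal CDF (norm.ppf)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.90, 0.20))  # a reasonably accurate test: d' > 0
print(d_prime(0.20, 0.90))  # "negative accuracy": d' < 0

# Decision makers who guess completely at random: the corrected hit rate
# (hit rate - false alarm rate) is 0 on average, but individual guessers
# land on either side of 0.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=(1000, 100))  # 1000 guessers, 100 trials each
guess = rng.integers(0, 2, size=(1000, 100))
hits = ((guess == 1) & (truth == 1)).sum(axis=1) / (truth == 1).sum(axis=1)
fas = ((guess == 1) & (truth == 0)).sum(axis=1) / (truth == 0).sum(axis=1)
corrected = hits - fas
print(corrected.mean())        # ~0 on average
print((corrected < 0).mean())  # ~0.5: about half have FA rate > hit rate
```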
