# Confidence intervals with sensitivity and specificity

#### throughstream

##### New Member
Hi, I'm reading a journal article that reports its sensitivity and specificity with 95% confidence intervals, but I'm struggling to see how they worked them out.


They've worked out:

Sensitivity of 99% (95% CI 94% to 100%)
Specificity of 60% (95% CI 56% to 64%)

I tried using:

but that didn't seem to work out.

Any ideas??

Cheers

#### hlsmith

##### Omega Contributor
Yeah, for the first I got (0.9676, 1.000), and (0.558, 0.633) for the second. I used the exact numbers, pretty much, but perhaps they have rounding errors. The binomial formula you presented is the most commonly used, but perhaps they used a different one (I think there may be a likelihood-based formula).
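For reference, the "simple asymptotic" (Wald) binomial interval being discussed here is just p̂ ± z·sqrt(p̂(1 − p̂)/n), clipped to [0, 1]. A quick Python sketch (not the posters' code; the counts 90/91 and 390/654 come from later in the thread) reproduces these numbers:

```python
from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    """Simple asymptotic (Wald) CI for a binomial proportion, clipped to [0, 1]."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for 95%
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

sens = wald_ci(90, 91)    # sensitivity: 90 of 91 positives detected
spec = wald_ci(390, 654)  # specificity: 390 of 654 negatives detected
print(sens)  # about (0.9676, 1.0)
print(spec)  # about (0.5587, 0.6339)
```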

#### ondansetron

##### TS Contributor
I decided to chime in. I plugged these numbers (90/91 for sensitivity and 390/654 for specificity) into a few different methods and got this:

Sensitivity

| Method | 95% Confidence Interval |
| --- | --- |
| Simple Asymptotic | (0.96759, 1.00000) |
| Simple Asymptotic with CC | (0.96210, 1.00000) |
| Wilson Score | (0.94035, 0.99806) |
| Wilson Score with CC | (0.93168, 0.99943) |

Specificity

| Method | 95% Confidence Interval |
| --- | --- |
| Simple Asymptotic | (0.55873, 0.63393) |
| Simple Asymptotic with CC | (0.55796, 0.63470) |
| Wilson Score | (0.55827, 0.63326) |
| Wilson Score with CC | (0.55750, 0.63401) |

Notes on C.I.: 1) CC means continuity correction. 2) The Wilson score method with CC is the preferred method, particularly for small samples or for proportions close to 0 or 1.
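For anyone wanting to reproduce the preferred method, the Wilson score interval with continuity correction has a closed form (Newcombe's formulation). Here is a Python sketch, assuming the standard formulas rather than whatever software produced the table above:

```python
from statistics import NormalDist

def wilson_cc(x, n, conf=0.95):
    """Wilson score CI for a binomial proportion, with continuity correction."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 2 * (n + z * z)
    lower = (2 * x + z * z - 1
             - z * (z * z - 2 - 1 / n + 4 * p * (n * (1 - p) + 1)) ** 0.5) / denom
    upper = (2 * x + z * z + 1
             + z * (z * z + 2 - 1 / n + 4 * p * (n * (1 - p) - 1)) ** 0.5) / denom
    return max(0.0, lower), min(1.0, upper)

print(wilson_cc(90, 91))    # sensitivity -> about (0.93168, 0.99943)
print(wilson_cc(390, 654))  # specificity -> about (0.55750, 0.63401)
```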

Quite possibly they rounded. Like hlsmith, I used the exact numbers (since the computer did it for me).

You could always email the corresponding author, ask how they arrived at their numbers, and explain the approach you used. From the looks of it, sensitivity most closely matches the Wilson score method without continuity correction, after rounding. However, the upper bound on specificity doesn't round to 0.64 under any of these methods (unless they rounded earlier than the final step).


#### hlsmith

##### Omega Contributor
You can also always post a link to the paper. Perhaps they were controlling for other variables?

#### ondansetron

##### TS Contributor
> You can also always post a link to the paper. Perhaps they were controlling for other variables?
They could also have bootstrapped the CIs, I suppose. Seeing the paper is probably the way to go.
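To illustrate the bootstrap idea: resampling the 654 binary specificity outcomes with replacement and taking percentile limits gives essentially the same interval as the analytic methods. A hypothetical sketch (the counts 390/654 come from earlier in the thread; `B` and the seed are arbitrary choices):

```python
import random

def bootstrap_ci(successes, n, B=2000, conf=0.95, seed=42):
    """Percentile bootstrap CI for a proportion.

    Resampling n binary outcomes with replacement from data containing
    `successes` ones is equivalent to drawing Binomial(n, p-hat)."""
    rng = random.Random(seed)
    phat = successes / n
    boots = sorted(sum(rng.random() < phat for _ in range(n)) / n
                   for _ in range(B))
    alpha = 1 - conf
    return boots[int(alpha / 2 * B)], boots[int((1 - alpha / 2) * B) - 1]

lo, hi = bootstrap_ci(390, 654)
print(lo, hi)  # close to the analytic (0.558, 0.633) interval
```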

#### GretaGarbo

##### Human
I got this:

```r
# R code below
library(binom)

n <- 654
p <- 390/654
p
x <- 390

binom.confint(x, n, conf.level = 0.95, methods = "all")
```

```
          method   x   n      mean     lower     upper
1  agresti-coull 390 654 0.5963303 0.5582669 0.6332686
2     asymptotic 390 654 0.5963303 0.5587279 0.6339327
3          bayes 390 654 0.5961832 0.5585685 0.6336334
4        cloglog 390 654 0.5963303 0.5576499 0.6328017
5          exact 390 654 0.5963303 0.5575981 0.6341890
6          logit 390 654 0.5963303 0.5582320 0.6333012
7         probit 390 654 0.5963303 0.5583392 0.6334330
8        profile 390 654 0.5963303 0.5584055 0.6335018
9            lrt 390 654 0.5963303 0.5584113 0.6334876
10     prop.test 390 654 0.5963303 0.5574997 0.6340131
11        wilson 390 654 0.5963303 0.5582711 0.6332644
```
It seems difficult to round the upper interval to 0.64.
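As a cross-check on the `wilson` row above (and on the rounding question), the plain Wilson score interval has a simple closed form that can be verified in a few lines of Python. This is a sketch of the standard formula, not the binom package's code:

```python
from statistics import NormalDist

def wilson(x, n, conf=0.95):
    """Wilson score CI (no continuity correction) for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) + z * z / (4 * n)) / n) ** 0.5 / denom
    return centre - half, centre + half

lo, hi = wilson(390, 654)
print(lo, hi)     # close to the wilson row: 0.5582711, 0.6332644
print(round(hi, 2))  # 0.63 -- hard to see how they got 0.64
```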

#### hlsmith

##### Omega Contributor
What plans do you have for the results in this paper? If you are just trying to see what they did, well that is always hard to do unless authors are very detailed or post their code. If you want to see how the test may impact your population, well the difference seems fairly trivial to me.