# Sample size help

#### statsfail2

##### New Member
Hi, I would really appreciate any help, as I've spent nearly a day trying to work out what to do.

I have two diagnostic tests and am looking at the specificity of each. Each person will have both tests, so it will be a within-subjects design. I want to work out a sample size to detect a 15% difference in specificity between the tests. I'd like a power of 80% and alpha at 0.05.

Any help would be appreciated.

#### EdGr

##### New Member
I assume that all the patients you are testing are healthy, since specificity only applies in a group without the illness being tested for.

I also assume each patient would then be classified as positive (a false positive) or negative (a true negative) by each of your two tests.

If so, the comparison would be by McNemar's test.
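For reference, the exact version of McNemar's test is just a binomial sign test on the discordant pairs. A minimal sketch in Python (the counts `b` and `c` in the example are made up for illustration):

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar's test: a binomial sign test on the
    discordant pairs. b = count of +/- pairs, c = count of -/+ pairs."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: nothing to test
    k = min(b, c)
    # P(X <= k) under Binomial(n, 0.5), doubled for a two-sided test
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: 1 pair flipped one way, 9 the other
print(mcnemar_exact(1, 9))  # 0.021484375
```

Note that the concordant cells (negative on both, positive on both) never enter the calculation, which is exactly the sticking point discussed below.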

This is where things get sticky!

Let's suppose you had 100 subjects: 65 are correctly negative on one test, and 50 on the other (an absolute 15% difference).

The problem is that McNemar's doesn't care about those percentages. It asks only about the discordant pairs: how many subjects changed + to − and − to +.

So the best power is if all 50 subjects negative on test 2 are among the 65 negative on test 1, leaving 15 discordant pairs, all in the same direction. Then the p-value for McNemar's is basically 0.5^15.

But what if 20 of the negatives on test 2 were positive on test 1, so only 30 of the 65 test-1 negatives remain negative on test 2? Now you are comparing 20 + to − against 35 − to +, which gives a very different p-value.
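To make the two scenarios concrete, here are the exact two-sided p-values (a self-contained sketch; the counts are the hypothetical ones from the 100-subject example above):

```python
from math import comb

def mcnemar_exact(b, c):
    # exact two-sided sign test on the b + c discordant pairs
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Best case: all 15 discordant pairs flip the same way
print(mcnemar_exact(15, 0))   # ~6.1e-05, highly significant

# Messier case: 20 flip one way, 35 the other
print(mcnemar_exact(20, 35))  # roughly 0.06 -- not significant at 0.05
```

Same 15-point gap in specificity in both cases, but the answer swings from overwhelming significance to none, purely because of how the discordant pairs split.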

Unless you have some idea how these patterns are likely to play out (consistency-wise), I think power for a McNemar's test is tricky.
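If you are willing to assume a discordance pattern, power is straightforward to get by simulation. A sketch under assumed numbers (25% of healthy subjects discordant, split 20% vs. 5%, i.e. the 15-point gap; n = 60 is just a trial value, and all of these figures are assumptions to be replaced with your own):

```python
import random
from math import comb

def mcnemar_exact(b, c):
    # exact two-sided sign test on the discordant pairs
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

def power_sim(n, p_disc, p_dir, alpha=0.05, reps=2000, seed=1):
    """Simulated power of exact McNemar's for n healthy subjects.
    p_disc = assumed fraction of subjects discordant between tests;
    p_dir  = assumed fraction of discordant pairs flipping one way."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        b = c = 0
        for _ in range(n):
            if rng.random() < p_disc:       # subject is discordant
                if rng.random() < p_dir:
                    b += 1
                else:
                    c += 1
        if mcnemar_exact(b, c) <= alpha:
            hits += 1
    return hits / reps

# Assumed scenario: 25% discordance, split 20% vs 5%
print(power_sim(60, p_disc=0.25, p_dir=0.20 / 0.25))
```

You can then increase `n` until the simulated power reaches 0.80. The key input, as noted above, is the assumed discordance: the same specificity gap gives very different power depending on how consistent the two tests are with each other.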

Maybe some others have good suggestions.