Sample size calculation for 1:2 matched case-control study

I have a matched case-control study with 12 cases and two controls for each case. In evaluating our findings, I want to calculate what sample size we would have needed to detect a difference found in another study of a different population.

For the sample size calculation, I intend to use the mean values from the other study but apply the standard deviation from our study.

Mean among cases: 3.14
Mean among controls: 1.58
SD: 1.41
Power 80%, significance level 5%.

How can I go about finding out what sample size we would need in our 1:2 matched study? Can I use any of the calculators found here? Any comments on two-sided versus one-sided testing? Normally, I use Stata.
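In Stata the usual route would be `power twomeans` (or the older `sampsi`) with a 1:2 allocation ratio. As a minimal sketch of the same calculation, here is the standard unmatched two-sample normal approximation in stdlib Python; note it ignores the matching (a matched analysis with positively correlated sets would generally need somewhat fewer cases), and the function name is just illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_cases_two_means(delta, sd, ratio=2, alpha=0.05, power=0.80):
    """Cases needed to detect a mean difference `delta` with `ratio`
    controls per case, two-sided test, equal SDs in both groups.
    Unmatched two-sample normal approximation (ignores the matching)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = z.inv_cdf(power)           # power
    n1 = (1 + 1 / ratio) * (sd * (z_a + z_b) / delta) ** 2
    return ceil(n1)

# Figures from the question: means 3.14 vs 1.58, SD 1.41
n1 = n_cases_two_means(delta=3.14 - 1.58, sd=1.41, ratio=2)
print(n1, "cases and", 2 * n1, "controls")  # → 10 cases and 20 controls
```

With equal allocation (`ratio=1`) the same formula gives 13 cases per group, illustrating the efficiency trade-off of recruiting extra controls instead of extra cases.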

Please do not hesitate to request more info if needed. Thank you in advance.
I do have the data and can provide it if necessary. I have used conditional logistic regression.

Note that the SD is from our study sample, for which I have all the data. The means I provided are from the other study, for which I have nothing other than summary data. The SD is of an index and is a dimensionless number.
Typically, power calculations are performed before sampling. Post hoc power calculations using the realized numbers are fairly uninformative compared with the a priori calculations.
Absolutely. I will try to explain a little more:
1. The research is done in an area where cases are very hard to find. Thus, our findings are long-awaited and interesting even when sample size is small.
2. Other studies have found statistically significant differences between cases and controls. We (and others in the international literature) hypothesize that these differences may be exaggerated due to various biases. Our matched study definitely has its own biases, but we are controlling for several of the most important known confounders. Therefore, we expect that we will find less difference between cases and controls than in other studies.
3. Our results point to almost no differences between cases and controls. However, we must be very cautious about reporting a negative finding because of the small sample size (false negative / type II error). I want to calculate how large our study should have been to be able to detect the difference reported in another study.
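The type II error concern in point 3 can also be framed the other way round: with the same unmatched two-sample z-approximation (again ignoring the matching, so only a rough sketch), one can ask what power 12 cases and 24 controls would give against the externally reported difference:

```python
from math import sqrt
from statistics import NormalDist

def power_two_means(delta, sd, n_cases, n_controls, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a mean
    difference `delta`, equal SDs, ignoring the matching."""
    z = NormalDist()
    se = sd * sqrt(1 / n_cases + 1 / n_controls)
    z_a = z.inv_cdf(1 - alpha / 2)
    return z.cdf(delta / se - z_a)

# External means 3.14 vs 1.58, our SD 1.41, our 12 cases and 24 controls
print(round(power_two_means(3.14 - 1.58, 1.41, 12, 24), 2))  # → 0.88
```

Under this approximation, the realized 1:2 design already has roughly 80–90% power against a difference as large as the one reported elsewhere; a much smaller true difference is, of course, a different question.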

Hope that makes sense.


Omega Contributor
Sorry, I know this isn't answering your question, but it sounds like your hypothesis is for a negative study. If so, do you still want to run your study and then do a post hoc power calculation, rather than an equivalence test?

Why did you run logistic when it seems like you are reporting data for a continuous outcome?
What do you mean by an equivalence test (perhaps a dumb question)?

The outcome is dichotomous (case/control status). The means and SD are for one of the independent variables.


Omega Contributor
Most of the time, people test whether two groups differ in their dependent variable (a two-sided test for a difference). However, if the p-value in such a test is not below 0.05, or whatever your cutoff is, that does not imply the two groups are equal: you have merely failed to reject the null hypothesis that they are equal, perhaps because of a small sample size. Thus, if your aim is to show that the two groups are equal, you don't run the above test; you run something very similar, an equivalence test, in which the alternative hypothesis is that the two groups are equivalent (differ by less than a pre-specified margin) in their dependent variable.

Does this make sense?