# High power, bias or low power, unbiased?

#### giordano

##### New Member
Assuming we have two estimators and a fixed sample size. One estimator is biased and has high power; the other is unbiased and has low power. Which estimator should be preferred? Which questions do I have to ask to solve this problem? Are there methods to decide which one should be used?

To clarify my problem, here are some figures:
The true value is an effect of 50% (a proportion), the alpha level is 5%, and the sample size is fixed.
Estimator A has a power of 90% and estimates an effect of 30%.
Estimator B has a power of 70% and estimates an effect of 50%.

I appreciate any hint.
Thanks.
giordano

#### Dason

So you're assuming the true difference is 50%? It's really up to you to decide on an estimator, or at least give us the criteria you want the estimator to meet. Most of the time we choose estimators that are unbiased, but sometimes we choose biased estimators that have a much smaller MSE. Typically, if you choose a biased estimator you want it to be consistent, but that's entirely up to you. If you wanted to, you could use the estimator X = 50% with probability 1. It's an estimator, but it's probably not what most would consider a good one, because it doesn't even take the data into consideration. It comes down to what you want to accomplish and what properties you want the estimator to have.
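(If it helps to see the bias/MSE trade-off concretely, here is a quick simulation sketch. The binomial setting and the values p = 0.3, n = 20 are made up for illustration, and the shrinkage estimator is just the Beta(2,2) posterior mean, which pulls estimates toward 0.5.)

```python
import numpy as np

rng = np.random.default_rng(42)
p_true, n, reps = 0.3, 20, 200_000

# Simulate binomial counts and compare two estimators of p:
x = rng.binomial(n, p_true, size=reps)
p_mle = x / n                  # unbiased sample proportion
p_shrink = (x + 2) / (n + 4)   # biased: shrinks toward 0.5 (Beta(2,2) posterior mean)

mse = lambda est: np.mean((est - p_true) ** 2)
print(f"MSE unbiased: {mse(p_mle):.5f}")    # ~ 0.0105
print(f"MSE biased:   {mse(p_shrink):.5f}") # ~ 0.0084, smaller despite the bias
```

So at this particular p and n, the biased estimator wins on MSE, which is exactly why bias alone doesn't settle the choice.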

#### fed1

##### TS Contributor
This question is weird because estimators don't really have power, to my knowledge!

#### giordano

##### New Member
@fed1. You are right, thank you. I should say: using estimator A results in a power of 90% at the given sample size and alpha level. Is this wording correct?

@Dason. Thanks for the quick reply. You ask for the criteria, so I will give you the criteria. First, a bit more background and motivation: 50% is the hypothetical true efficacy of a vaccine, meaning that the vaccine (V) halves the number of disease cases compared to a placebo (P). 0% means the vaccine has no effect, and 100% means the disease is eliminated.

Let's assume that we want to detect an efficacy > 0% and that the sample size is given due to practical considerations. With the assumptions above (method A estimates 30% with a power of 90%, method B estimates 50% with a power of 70%), both methods would reject the null hypothesis efficacy = 0%, but method B has only 70% power.
Now, another scenario: the efficacy estimated by method A is 5%, and all other figures remain the same (method A has a power of 90%, method B has a power of 70% and an estimate of 50%, the alpha level is 5%).

I would prefer method B in both scenarios, even if the power were lower than 70%. The reason I tend toward B (based on my gut feeling) is that I know B does not measure what I want to measure very precisely, but it does measure it "correctly". Method A measures a biased efficacy very precisely, and that seems more dangerous to me if I do not know how biased the estimator is.

#### Dason

> @fed1. You are right, thank you. I should say: using estimator A results in a power of 90% by given sample size and alpha-level. Is this wording correct?
Not really. Power is something that is calculated for a decision/test. If you're talking about two competing tests, then it makes sense to analyze them in terms of power. Estimators don't have power. They have variance and MSE, and they can be biased/unbiased or have a bunch of other properties. We have to evaluate a point estimator based on the properties we want it to have.
> I would prefer method B for both scenario, even if the power is lower than 70%. The reason I tend to B (based on my gut feeling) is that I know that B does not measure very accurate what I want to measure, but it does it "correct". Method A does measure a biased efficacy very accurate and this seems to me more dangerous if I do not know how biased the estimator is.
What exactly are you measuring? How are you coming up with the idea that one is biased and one isn't?

#### giordano

##### New Member
Hi Dason,
Thank you for the explanation.
> What exactly are you measuring?
I am doing a simulation study on malaria. The problem is that an exact diagnosis of malaria is not possible. One can have Plasmodium parasites (which are the cause of malaria) in the blood and have fever, but the fever may not be caused by the parasites. There are also individuals with parasites but without fever; thus, they don't have malaria. The more parasites in the blood of a febrile individual, the more probable it is that the individual has malaria.
For a vaccine study, we could form two groups, placebo and vaccine, and measure the incidence rate of "malaria" in each (I(P) and I(V)). Vaccine efficacy is 1 - I(V)/I(P).
Now, I could use a deterministic method (A) to measure the incidence rate: an individual is diagnosed with malaria if he has a fever > 37°C and a parasite density above some cutoff (to be defined).
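A minimal sketch of what method A's case definition and the efficacy formula could look like in code. The cutoff of 2,500 parasites/µL and all the distributional values are hypothetical placeholders, since the real cutoff is still to be defined:

```python
import numpy as np

CUTOFF = 2500  # hypothetical parasite density cutoff ("to be defined")

def diagnose_A(temp_c, density):
    # Method A: malaria iff fever (> 37 degrees C) AND density above the cutoff
    return (temp_c > 37.0) & (density > CUTOFF)

def vaccine_efficacy(cases_v, n_v, cases_p, n_p):
    # VE = 1 - I(V) / I(P)
    return 1.0 - (cases_v / n_v) / (cases_p / n_p)

# Toy data: the vaccine arm has fewer fevers and lower parasite densities
rng = np.random.default_rng(1)
n = 1000
temp_p = rng.normal(37.5, 1.0, n)    # placebo-arm body temperatures
temp_v = rng.normal(37.1, 1.0, n)    # vaccine-arm body temperatures
dens_p = rng.lognormal(8.0, 1.0, n)  # placebo-arm parasite densities
dens_v = rng.lognormal(7.3, 1.0, n)  # vaccine-arm parasite densities

cases_p = diagnose_A(temp_p, dens_p).sum()
cases_v = diagnose_A(temp_v, dens_v).sum()
print(f"VE (method A): {vaccine_efficacy(cases_v, n, cases_p, n):.2f}")
```

As a sanity check on the formula: with 50 cases per 1,000 in the vaccine arm versus 100 per 1,000 in the placebo arm, `vaccine_efficacy(50, 1000, 100, 1000)` gives 0.5.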
> How are you coming up with the idea that one is biased and one isn't?
This method, depending on the vaccine type, can be biased. If the misclassifications in the two groups, placebo and vaccine, are similar, then there is no bias; if not, there will be bias.

Another method is a probabilistic one (B). Using logistic regression or a latent class model, it would be possible to attribute to both groups (placebo and vaccine) a probability of having malaria given fever, P(M|fever). The incidence rate of malaria would then be P(M|fever,P) * I(fever|P) and P(M|fever,V) * I(fever|V), respectively. This method gives unbiased estimates.
I performed a simulation based on hypothetical distributions of parasite densities and malaria cases and computed the power of both methods. Method A has considerably lower power than method B at the same sample size.
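For reference, a bare-bones sketch of how such a power comparison can be set up. I don't know the details of your simulation, so the 20% incidence in the placebo arm, the true efficacy of 50%, the arm size of 200, and the one-sided two-proportion z-test are all placeholder assumptions; each diagnostic method would feed its own case counts into the same machinery:

```python
import numpy as np
from statistics import NormalDist

def power_sim(inc_placebo, true_ve, n_per_arm, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo power for rejecting H0: efficacy = 0 (equal incidence)
    using a one-sided two-proportion z-test."""
    rng = np.random.default_rng(seed)
    inc_vaccine = inc_placebo * (1 - true_ve)
    z_crit = NormalDist().inv_cdf(1 - alpha)
    rejections = 0
    for _ in range(reps):
        x_p = rng.binomial(n_per_arm, inc_placebo)  # placebo-arm cases
        x_v = rng.binomial(n_per_arm, inc_vaccine)  # vaccine-arm cases
        pool = (x_p + x_v) / (2 * n_per_arm)        # pooled incidence under H0
        se = np.sqrt(2 * pool * (1 - pool) / n_per_arm)
        if se > 0 and (x_p - x_v) / n_per_arm / se > z_crit:
            rejections += 1
    return rejections / reps

print(power_sim(0.20, 0.50, 200))  # high power in this hypothetical setting
```

Running the same harness with biased case counts (method A) versus model-based counts (method B) would then make the power difference you describe directly comparable at a fixed sample size.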