# Normal Approximation, Fisher's Exact Test, Relative Risk or Odds Ratio?

#### WeeG

##### TS Contributor
A clinical trial is being planned in which the primary outcome is binary (success/failure). There are two treatment groups (treatment/control). The aim of the trial is to show that the treatment is better. The outcome is an adverse event, so we wish to see a smaller proportion in the treatment group.

The events of interest are rare, and assumed to be around 7% in the control group and 5% in the treatment group.

I did a lot of reading, and some people say that with a low event rate it is better to use the risk ratio (relative risk) rather than the difference in proportions (with the normal approximation). It's not clear to me why.

I wanted to ask, under the conditions described above, which is best: Fisher's exact test, the odds ratio (with a CI or a hypothesis test of OR = 1), or the relative risk (with a CI or a hypothesis test of RR = 1)? Will I get similar results from all of them? Should the sample sizes be roughly the same?
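For concreteness, the three candidate effect measures under the assumed rates work out as follows (a quick Python sketch; the 7%/5% figures are the assumptions above):

```python
# Effect measures for the assumed rates: 7% (control) vs. 5% (treatment).
p_control, p_treatment = 0.07, 0.05

risk_difference = p_treatment - p_control                                 # additive scale
risk_ratio = p_treatment / p_control                                      # multiplicative scale
odds_ratio = (p_treatment / (1 - p_treatment)) / (p_control / (1 - p_control))

print(f"Risk difference: {risk_difference:+.3f}")   # -0.020
print(f"Risk ratio:      {risk_ratio:.3f}")         # 0.714
print(f"Odds ratio:      {odds_ratio:.3f}")         # 0.699
```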

Thank you!

#### hlsmith

##### Not a robit
If you are doing a trial, it is likely prospective. You always want to be examining risk; people mostly use odds when they cannot confidently collect incidence data (e.g., in case-control designs), so they are forced into odds measures instead. If you are collecting prospective measures, you should be using a risk measure. Fisher's? Well, skip it and go directly with risk measures. Now the question is whether you should examine risk on the additive (difference) or multiplicative (ratio) scale. I will tell you that the additive scale is much more interpretable for clinicians and lay people, e.g., a 2-percentage-point reduction in risk versus a risk ratio of 0.71 for your 7% vs. 5% rates. It is typically a no-brainer.

Now, you mention that some say the multiplicative scale is better than the additive scale with rare outcomes! Hmm, can you provide some sources, so we can see what the posited arguments are?

This question is ripe for playing around with some simulations to see what can be discerned empirically. I know there is a current debate about the scales, and the push is to look at differences.

Now it's my turn to ask questions. Is the control group exposure a known drug? If so, should you be looking at a one-sided superiority test instead of a two-sided test, given your hypothesis, or is there insufficient info?

Second, will this be a randomized trial, so you don't need to control for confounding patient differences?

Third, are you dealing with HUMANS? If so, I would imagine an institutional review board or ethics committee needs to approve this. In addition, that may mean you should use a data safety monitoring board to run interim analyses as well, to prevent undue harm from between-group treatment effects. Lastly, a general comment: with a 2% difference, I would imagine this is going to have to be a large study!

#### WeeG

##### TS Contributor
Hi, thank you for your response; it is nice to find someone who knows not only the statistics but also the methodology and jargon of clinical trials!

I do not have a paper discussing the low-rate issue; while searching on Google I found a few websites stating that when the rates are low, a ratio is better than a difference. I do not know why; they did not specify.

I ran a SAS simulation in which I drew 1000 samples from the binomial distribution according to my assumed proportions. I did this for several sample sizes. For each sample I ran both Fisher's exact test and a logistic regression, from which I took the CI. Both approaches gave almost identical results for power! The CI is a Wald CI. I think this answers the Fisher vs. OR question.
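For anyone wanting to reproduce this, a rough analogue of that simulation can be sketched in Python (assuming the 7%/5% rates above; the Wald CI for the log odds ratio computed directly from the 2x2 table matches what a logistic regression with a single binary covariate would report):

```python
import numpy as np
from math import log, sqrt
from scipy.stats import fisher_exact

def simulate_power(n_per_arm, p_control=0.07, p_treatment=0.05,
                   n_sims=500, alpha=0.05, seed=42):
    """Estimate power of Fisher's exact test and of a Wald CI for the OR."""
    rng = np.random.default_rng(seed)
    fisher_hits = wald_hits = 0
    for _ in range(n_sims):
        a = rng.binomial(n_per_arm, p_treatment)   # events in treatment arm
        c = rng.binomial(n_per_arm, p_control)     # events in control arm
        b, d = n_per_arm - a, n_per_arm - c        # non-events
        # Fisher's exact test on the 2x2 table
        _, p_value = fisher_exact([[a, b], [c, d]])
        fisher_hits += p_value < alpha
        # Wald CI for the log OR; add 0.5 to each cell if any cell is zero
        if min(a, b, c, d) == 0:
            a, b, c, d = (x + 0.5 for x in (a, b, c, d))
        log_or = log((a * d) / (b * c))
        se = sqrt(1/a + 1/b + 1/c + 1/d)
        # "Significant" when the 95% CI for the OR excludes 1
        wald_hits += (log_or + 1.96 * se < 0) or (log_or - 1.96 * se > 0)
    return fisher_hits / n_sims, wald_hits / n_sims

fisher_power, wald_power = simulate_power(n_per_arm=2000)
print(f"Fisher power: {fisher_power:.2f}, Wald-OR power: {wald_power:.2f}")
```

As in the SAS run, the two approaches tend to give very similar power estimates at these event rates.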

The control group receives the standard of care. This is a procedure in which complications are rare but, when they occur, very severe. The new treatment comes in addition to the standard of care and is supposed to reduce the rate of complications. For the clinicians every 1% difference is important; however, as you said, for the statistics small effects mean huge sample sizes, which the sponsor cannot pay for!

The trial will be randomized, but not double-blinded, only single-blinded, due to restrictions of the treatment (it's not a drug, so the physician will know what he is using).

I am dealing with humans, and of course an ethics committee will be involved. Regarding your question about a one-sided test: I don't think it will make a difference. As far as I know, when you approach the FDA with a proposal for a one-sided superiority test, they ask you to use a significance level of 2.5% instead of 5%.

I have a question regarding your advice to use the risk ratio. If I decide to do just that, what is the preferred method? Should I calculate the risk ratio and report the 95% CI? Is there a hypothesis-testing procedure for the RR? Any idea how SAS/R does it? Should I write it manually?

#### ondansetron

##### TS Contributor
> Hi, thank you for your response, it is nice to find someone who knows not only the statistics, but also the methodology and jargon of the clinical trials!
Hlsmith is pretty darn good

> I do not have a paper talking about the low rates, while searching on Google I found a few websites which stated that when the rates are low, ratio is better than difference. I do not know why, they did not specify.
I have actually heard the opposite of those websites, but more for an ethical reason. Assume drug X reduced disease recurrence from 0.5% (0.005) to 0.25% (0.0025). The absolute risk reduction is 0.25 percentage points, but people want this to look more substantial, so they choose a relative measure such as the relative risk reduction: (0.0025/0.0050) - 1 = -0.5, i.e., a 50% decrease in risk! WOW!!!... or so they would like it to appear. Whether the 0.25-point absolute risk reduction is clinically meaningful is obviously contextual, but some people will focus on the relative number as a dishonest way to portray results with minimal clinical significance.

Long story short: it depends on the context, and it isn't necessarily unethical to use a relative measure (I'm assuming this is what was meant by "ratio"). However, one must clearly give both the absolute and the relative numbers in order to fully inform readers. It's the same as changing the axes on a graph to make a tiny trend look huge, for example. If there is a mathematical reason for the choice, such as an estimation or modeling procedure that needs it, that's obviously more justified. A bit off topic, but somewhat related to the issue.

> I ran a SAS simulation, in which I draw 1000 samples from the Binomial distribution according the my assumed proportions. I did this for several sample sizes. For each sample I ran both the Fisher's exact test and the logistic regression from which I took the CI. Both simulations gave almost identical results regarding the power! The CI is a Wald CI. I think this answers the Fisher vs. OR question.
Maybe hlsmith can clarify, but I believe you may want to ditch the Wald CIs in smaller samples in favor of profile-likelihood CIs (or another option from exact logistic regression, if I recall correctly). So this could depend on how many participants you end up recruiting.

> The control group is a standard of care. This is a procedure in which complications are rare, but when occur, very severe. The new treatment comes in addition to the standard of care, and suppose to reduce the rate of complications. ...
I think it would then be important to calculate measures like the NNT/NNH. What do you think?
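Under the rates assumed in this thread (7% control vs. 5% treatment), the NNT works out as follows (a quick sketch):

```python
# NNT = 1 / absolute risk reduction.
p_control, p_treatment = 0.07, 0.05

arr = p_control - p_treatment   # absolute risk reduction: 0.02
nnt = 1 / arr                   # ~50 patients treated to prevent one event
print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")
```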

> The trial will be randomized, however not double blinded, only single blinded, due to restrictions of the treatment (it's not a drug, so the physician will know what he is using).
I've heard the trend is to simply state who was blinded and who was not. I understand the old terminology of single- or double-blind to mean the patient, or the patient plus anyone who is not a patient, but maybe my interpretation is incorrect. Several research faculty at my school harp on this as a source of confusion: they say it makes evaluating the quality of evidence harder, because you can't tell who was blinded when people attach different meanings to the terms. Long story short, it may be helpful to state precisely who was blinded: was it only the patient, or were the data analysts, nurses, and others blinded as well, with the exception of the physician? Just something to consider.

> I am dealing with humans, and of course ethics committee will be involved. Regarding your question about a one-sided test. I don't think it will make a difference. As far as I know, when you approach the FDA with a suggestion for a one-sided superiority test, they ask you to use a significance level of 2.5% instead of 5%.
I've never dealt with the FDA. Do they have a preset guideline that if you are doing x, you must do y?

> I have a question regarding your advice to use risk ratio. If I decide to do just that, what is the preferred method? Should I calculate the risk ratio and report the 95% CI? Is there an hypothesis testing procedure for the RR? Any idea how SAS/R does it? Should I write it manually?

The odds ratio will approximate the risk ratio as the prevalence of the outcome declines (think rare events; just write out the formulas for the odds and the risk to see why this works). If I'm not mistaken, some people use logistic regression for this purpose. Hlsmith, what can you comment? Either way, there will certainly be methods for you to use.
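The rare-outcome approximation is easy to see numerically (a Python sketch; the 5%/7% pair matches this thread's assumptions, while the 50%/70% pair is just for contrast):

```python
def risk_ratio(p1, p0):
    """Ratio of two risks (probabilities)."""
    return p1 / p0

def odds_ratio(p1, p0):
    """Ratio of the corresponding odds, p / (1 - p)."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Rare outcome: the OR is close to the RR.
print(risk_ratio(0.05, 0.07), odds_ratio(0.05, 0.07))   # ~0.714 vs. ~0.699
# Common outcome: the OR is much further from 1 than the RR.
print(risk_ratio(0.50, 0.70), odds_ratio(0.50, 0.70))   # ~0.714 vs. ~0.429
```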

#### hlsmith

##### Not a robit
Odan has many good points. I would be interested in links to texts recommending ratios.

Yes, risks are the way to go, and then, as mentioned, you can easily get the NNT, etc. Yes, ORs will approximate RRs when the outcome is under 10%, though you can get risks directly using PROC GENMOD. If I get time later I will upload some basic code.

Great to see you ran some sims! Yeah, the 2.5% alpha seems reasonable enough. Good to see it is standard care plus the new therapy.

#### hlsmith

##### Not a robit
Funny enough, I was just trying to look up a SAS paper on getting risk differences from GENMOD, but then I remembered that you are randomizing. Given that the study is not biased by the lack of investigator blinding and you achieve covariate balance via a solid randomization protocol, you can just use something like:

Code:
```sas
PROC FREQ DATA=your_file;
    TABLE outcome*exposure / relrisk riskdiff;
RUN;
```
I was just guessing, but the code should be something very similar to the above. Though, you may still want to examine systematic errors in your study design via GENMOD (I believe DIST=POISSON and LINK=LOG), e.g., you could stratify by individual provider, etc. You can also use the above approach in your power calculation.
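If you ever want to check the risk ratio and its CI by hand rather than via PROC FREQ, the standard Katz log-based interval is easy to compute (a Python sketch; the counts below are hypothetical, purely for illustration):

```python
from math import log, sqrt, exp

# Hypothetical 2x2 results: events and totals in each arm.
a, n1 = 50, 1000    # events and total, treatment arm (5% observed)
c, n0 = 70, 1000    # events and total, control arm (7% observed)

rr = (a / n1) / (c / n0)
se_log_rr = sqrt((1 - a/n1) / a + (1 - c/n0) / c)   # SE of log(RR), Katz method
lo = exp(log(rr) - 1.96 * se_log_rr)
hi = exp(log(rr) + 1.96 * se_log_rr)
# A 95% CI that excludes 1 corresponds to rejecting H0: RR = 1 at the 5% level.
print(f"RR = {rr:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```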

#### hlsmith

##### Not a robit
Also keep in mind the possibility of broken randomization, meaning not everyone got the treatment they were assigned, or they discontinued treatment. Patients may not know their treatment group per se, but side effects or other implicit reasons can break the assignment. So both intent-to-treat and per-protocol analyses may be worth considering.

There can also be loss to follow-up, including disproportionate loss to follow-up between the groups.

So keep these in mind when doing sample size calcs.
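For a feel of the scale involved, a textbook normal-approximation sample-size calculation for two proportions can be sketched as follows (using the thread's 7%/5% rates, a one-sided alpha of 2.5%, and 80% power; real planning should use validated software and account for dropout):

```python
from math import ceil
from scipy.stats import norm

p0, p1 = 0.07, 0.05          # control and treatment event rates
alpha, power = 0.025, 0.80   # one-sided alpha, target power

z_alpha = norm.ppf(1 - alpha)   # ~1.96
z_beta = norm.ppf(power)        # ~0.84
variance = p0 * (1 - p0) + p1 * (1 - p1)
n_per_arm = ceil((z_alpha + z_beta) ** 2 * variance / (p0 - p1) ** 2)
print(f"~{n_per_arm} patients per arm")   # roughly 2200 per arm
```

With a 2-percentage-point difference at these rates, the study does indeed need to be large, before even accounting for loss to follow-up.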