Question about sample size penalty for looking at data mid-trial

#1
Hi, we're running a clinical trial of a skin treatment delivered by a medical device that has a strong effect. The design is inter-patient: a no-treatment control arm vs. an active treatment arm. We estimate we need only 8 patients enrolled to reach significance. That said, for marketing reasons, we're going to enroll 30 patients.
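(For context, here's the quick power calculation behind that estimate. It's only a sketch, assuming a two-sample t-test at two-sided alpha = .05 with 80% power, which may not match our actual endpoint; it mainly shows how big the effect has to be for 8 patients total to suffice.)

```python
# Sketch only: required standardized effect size (Cohen's d) for a
# two-sample t-test with 4 patients per arm, two-sided alpha = .05,
# 80% power. Our real endpoint/test may differ.
from statsmodels.stats.power import TTestIndPower

d_required = TTestIndPower().solve_power(
    effect_size=None,  # solve for the effect size
    nobs1=4,           # patients in the control arm
    ratio=1.0,         # equal allocation, so 4 in the active arm too
    alpha=0.05,
    power=0.8,
    alternative="two-sided",
)
print(f"required Cohen's d: {d_required:.2f}")  # prints a very large d (roughly 2.5)
```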

I'm the CEO and would like to take a look at the data after 15 patients to help with fundraising. I would keep the investigators blinded to the results and would not use the data for any decisions (stopping or continuing). For this reason, I don't think it's an interim analysis. Do I need to lower my significance threshold at the 15-patient or 30-patient analysis? I'm trying to argue with my statistician that I don't (I'm losing right now).

If so, what is a reasonable approach to use? O'Brien-Fleming seems too conservative to me at the first look. Should I try a different approach? If so, which one?
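For concreteness, here's a rough Monte Carlo sketch of the classic two-look O'Brien-Fleming boundaries (z ≈ 2.797 at the halfway look, z ≈ 1.977 at the final; these assume two equally spaced looks and overall two-sided alpha = .05). The first look corresponds to a nominal p of about .005, which is what feels so conservative to me:

```python
# Monte Carlo sanity check (a sketch, not an analysis plan): under H0,
# the classic K=2 O'Brien-Fleming boundaries hold the overall two-sided
# Type-I error near .05 across a look at n=15 and a final look at n=30.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_trials, n_interim, n_final = 200_000, 15, 30

# Under H0 the per-patient treatment differences are pure noise; the
# z-statistic at each look is the standardized running sum.
x = rng.standard_normal((n_trials, n_final))
z1 = x[:, :n_interim].sum(axis=1) / np.sqrt(n_interim)
z2 = x.sum(axis=1) / np.sqrt(n_final)

c1, c2 = 2.797, 1.977  # O'Brien-Fleming boundaries for two equal looks
reject = (np.abs(z1) > c1) | (np.abs(z2) > c2)
print("overall Type-I error:", round(reject.mean(), 4))    # ~0.05
print("nominal p at look 1 :", round(2 * norm.sf(c1), 4))  # ~0.005
```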
 

Junes

New Member
#2
Hi, welcome.

That's an interesting problem. In principle, I think there is nothing wrong with looking at the results ad interim, even with null hypothesis testing, as long as you stick to your plan and don't base any decisions on what you see. And by your description, you won't. There's nothing necessarily wrong with looking, or even with communicating preliminary results.

However, when you're talking about p-values and significance, you're inevitably in testing territory, and other rules apply. Most likely you would not announce non-significance at the fundraiser if the result were, say, p = .3 (you'd probably say something about "interesting preliminary results" instead, letting your audience hear about the "significance" after the study), whereas you might be tempted to proclaim a result of p = .03 "significant". Thus, for a member of the audience at the fundraiser, the "internal Type-I error" (so to speak) is inflated: they effectively get two chances to hear a false positive.
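To put a rough number on that inflation, here's a quick null simulation (a sketch, assuming the simplest case of a normal test statistic with looks at 15 and 30 patients) of an audience that treats p < .05 at either look as "significant":

```python
# Sketch: under the null, getting two chances to see p < .05 (at 15
# and at 30 patients) inflates the effective Type-I error to ~8%.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_trials = 200_000
x = rng.standard_normal((n_trials, 30))  # outcomes under H0

z15 = x[:, :15].sum(axis=1) / np.sqrt(15)
z30 = x.sum(axis=1) / np.sqrt(30)

crit = norm.ppf(0.975)  # the naive two-sided 5% cutoff, ~1.96
hit_either = (np.abs(z15) > crit) | (np.abs(z30) > crit)
print("P(false positive at either look):", round(hit_either.mean(), 3))  # ~0.083
```

So roughly 8% instead of the nominal 5%.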

Personally, I would just report preliminary effect sizes but leave out all talk of p-values.
 

rogojel

TS Contributor
#3
hi,
I would take the side of your statistician here. The point of the sample size calculation is to make sure you have enough data to draw reliable conclusions. This also means that estimates from smaller samples are unreliable, essentially random numbers. How would random numbers help you?
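Just to illustrate the point, a small simulation, assuming a purely hypothetical true standardized effect of d = 1:

```python
# Hypothetical illustration: spread of the estimated treatment effect
# at roughly half enrollment (7 per arm) vs full enrollment (15 per
# arm), assuming a made-up true standardized effect of d = 1.
import numpy as np

rng = np.random.default_rng(2)
d_true, n_sims = 1.0, 50_000

def effect_estimates(n_per_arm):
    treated = rng.normal(d_true, 1.0, (n_sims, n_per_arm))
    control = rng.normal(0.0, 1.0, (n_sims, n_per_arm))
    return treated.mean(axis=1) - control.mean(axis=1)

for label, n_arm in [("~15 patients", 7), ("30 patients", 15)]:
    lo, hi = np.percentile(effect_estimates(n_arm), [2.5, 97.5])
    print(f"{label:>12}: 95% of estimates in [{lo:+.2f}, {hi:+.2f}]")
```

With only about half the patients in, the estimate can land anywhere from essentially zero to double the true effect. Those are the random numbers.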

And this is only the most obvious problem. Even with the best of intentions, a CEO looking at intermediate results would be a huge warning sign to me; there are so many unconscious signals that can influence an experiment...

regards
 

hlsmith

Omega Contributor
#4
Agreed. An early look at the data is typically underpowered and runs the risk that random sampling hasn't yet done what it is supposed to do. Meaning that, by chance, sampling variation may have placed the first extreme case into one of the two groups, and balance of covariates across treatment arms may not have been achieved yet. Subconsciously, if the results are great you embrace them, and if not you just say it is too early, which isn't rational.
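A small sketch of that balance point, assuming a hypothetical 50/50 binary baseline covariate and simple 1:1 randomization of 15 patients:

```python
# Sketch: with only 15 patients randomized 1:1, a 50/50 binary baseline
# covariate (hypothetical, e.g. severe vs mild at baseline) can be
# badly imbalanced between arms purely by chance.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_pat = 100_000, 15

arm = rng.integers(0, 2, (n_sims, n_pat))    # 0 = control, 1 = active
trait = rng.integers(0, 2, (n_sims, n_pat))  # 50/50 baseline covariate

n_active = arm.sum(axis=1)
with np.errstate(invalid="ignore", divide="ignore"):
    p_active = (trait * arm).sum(axis=1) / n_active
    p_control = (trait * (1 - arm)).sum(axis=1) / (n_pat - n_active)

gap = np.abs(p_active - p_control)  # arm-to-arm difference in prevalence
print("P(gap >= 25 percentage points):", round(np.nanmean(gap >= 0.25), 3))
```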