Hi!
I read your question but I didn't really understand what exactly you want.
What was your initial hypothesis test?
I will admit that I have not sat around racking my brain on this one, but thought it may be nice to get some input.
I was enlisted to help out on a study that is attempting to show no effect of a drug. There is only one study group: those who received the drug. The outcome is a continuous variable that was dichotomized into a binary variable (20-unit change: yes/no).
I ran a hierarchical model (PROC GLIMMIX in SAS) looking for predictors, since patients received multiple doses and could have had a 20-unit change after each dose. No significant predictors beyond the baseline level of the continuous dependent variable were found.
I suggested that we do a power calculation to show that the negative finding was not due to sample size issues (though we are confident it is not). Now I am pondering which power calculation is appropriate with only one treatment group and repeated measures/checks of the concentration. I could also use the continuous outcome of the uncategorized data.
I could look at something along the lines of a one-group test of the proportion of outcomes: all patients would start positive, and then what proportion had a decrease (which test would that be?). That approach does not take the hierarchical structure of the data into account, though. I am not sure that controlling for the hierarchy is overly important here either; maybe a basic power test would convey the message.
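For the basic (non-hierarchical) version, a one-sample proportion test against some benchmark proportion would be one option. A minimal Python sketch using the normal-approximation z-test, with entirely made-up numbers (12 of 60 patients showing a 20-unit decrease, tested against a hypothetical benchmark of 0.25 -- not the study's actual data):

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def one_sample_prop_test(successes, n, p0):
    """Two-sided one-sample proportion z-test (normal approximation).

    Tests H0: true proportion == p0 against Ha: proportion != p0.
    Returns (z statistic, two-sided p-value).
    """
    p_hat = successes / n
    se = sqrt(p0 * (1.0 - p0) / n)  # standard error under H0
    z = (p_hat - p0) / se
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 12 of 60 patients had a 20-unit decrease,
# compared against an assumed benchmark proportion of 0.25.
z, p = one_sample_prop_test(12, 60, 0.25)
print(round(z, 3), round(p, 3))
```

For small samples an exact binomial test would be preferable to the normal approximation, but the one-group logic is the same.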
Stop cowardice, ban guns!
Hi!
I read your question but I didn't really understand what exactly you want.
What was your initial hypothesis test?
There was not a clean traditional hypothesis, since this was an exploratory/preliminary (observational) study.
Ho: Use of drug results in a null change in the biomarker.
Ha: Use of drug results in a 20 unit decrease in biomarker.
I had to go back and look to see what I ended up doing, since this study has been over for a while. It appears I attempted to validate the change in the biomarker, and to address potential sampling variation or sample size limitations, by presenting the primary endpoint (median change) with a 95% confidence interval, calculated as a nonparametric bootstrap percentile interval (10,000 resamples with replacement).
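For anyone curious, that kind of percentile interval is straightforward to reproduce. A minimal Python sketch of the procedure described (resample with replacement, take the median of each resample, read off the 2.5th and 97.5th percentiles), using fabricated illustrative numbers rather than the study's actual biomarker changes:

```python
import random
import statistics

def bootstrap_median_ci(data, n_resamples=10_000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile CI for the median.

    Resamples the data with replacement, computes the median of each
    resample, and returns the (alpha/2, 1 - alpha/2) percentiles of
    the bootstrap distribution of medians.
    """
    rng = random.Random(seed)
    n = len(data)
    medians = sorted(
        statistics.median(rng.choices(data, k=n)) for _ in range(n_resamples)
    )
    lo = medians[int((alpha / 2) * n_resamples)]
    hi = medians[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-patient changes in the biomarker (negative = decrease).
changes = [-25, -18, -30, -5, -22, -12, -28, -8, -19, -15, -21, -10]
lo, hi = bootstrap_median_ci(changes)
print(lo, hi)
```

If the interval excludes zero, that supports a real change; if it excludes a 20-unit decrease, that speaks against the hypothesized effect size.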
I guess I missed this thread when it was originally posted. Actually I would discourage you from doing a power analysis at all. The general opinion among statisticians is that post-hoc power analyses are uninformative and/or nonsensical and basically should not be done. To see why they are uninformative, see this wonderful paper by Russ Lenth:
http://www.stat.uiowa.edu/files/stat/techrep/tr378.pdf
What Lenth shows is that post-hoc power is a simple transformation of the p-value and thus adds no new information beyond what is already known once the study has been conducted. If you failed to reject the null hypothesis, then power is necessarily less than 50%. (Technically that is only exactly true in "large samples," but it is a close approximation to the truth for all but the tiniest sample sizes -- all of which is explained in the paper.)
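This is easy to verify numerically. For a two-sided z-test, "observed power" (plugging the observed effect back in as if it were the true effect) is a deterministic function of the p-value alone, and it sits at 50% exactly when p = alpha. A quick Python sketch (my own construction to illustrate the point, not code from the paper):

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (plenty of precision for a demo)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def observed_power(p_value, alpha=0.05):
    """'Post-hoc' power of a two-sided z-test, treating the observed
    effect as the true effect. Note it depends only on the p-value."""
    z_obs = norm_ppf(1.0 - p_value / 2.0)   # |z| implied by the p-value
    z_crit = norm_ppf(1.0 - alpha / 2.0)    # two-sided rejection threshold
    # Probability that a future Z centered at z_obs lands in the
    # rejection region (either tail):
    return (1.0 - norm_cdf(z_crit - z_obs)) + norm_cdf(-z_crit - z_obs)

print(round(observed_power(0.05), 2))  # at p = alpha, power is ~0.50
print(round(observed_power(0.20), 2))  # nonsignificant p => power < 0.50
```

So reporting "the power was low" after a nonsignificant result just restates the p-value in different units.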
In God we trust. All others must bring data.
~W. Edwards Deming
Why don't you try working with confidence intervals? People are starting to accept justifications based on confidence intervals, setting "p-values" aside. You can actually make some interesting observations just by playing around with confidence intervals and combining them with information obtained via regression analysis.
This is just an idea...