Let's say I want to know whether a certain weight loss pill works. I have before and after weights of 15 people who took the pill and 20 people who took a placebo. What would be the most appropriate statistical test to determine whether the pill causes weight loss?
Thanks.
Eric
Were individuals randomized to treatment groups? If so, and changes in weight loss appear reasonably normally distributed, a two-sample t-test would work.
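For concreteness, here is a minimal sketch of that two-sample t-test on the weight changes, using SciPy; all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented weight changes (after - before, in pounds) for illustration.
pill = np.array([-8., -5., -12., -3., -7., -9., -4., -6., -10., -2.,
                 -5., -8., -6., -7., -4.])                          # n = 15
placebo = np.array([-2., 1., -3., 0., -1., 2., -4., -1., 0., -2.,
                    1., -3., -1., 0., -2., 1., -1., 0., -2., -1.])  # n = 20

# Welch's two-sample t-test on the change scores (no equal-variance assumption).
t, p = stats.ttest_ind(pill, placebo, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A negative t with a small p would suggest the pill group lost more weight than the placebo group; checking a histogram or Q-Q plot of the change scores first is a good idea, since the test assumes they are roughly normal.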
Stop cowardice, ban guns!
efoss (08-21-2016)
Hi,
wouldn't testing the before-after differences between the groups give better power?
regards
I was also thinking in terms of a paired-sample t-test, because it will certainly have better power (i.e., I'm agreeing with rogojel's "before-after" suggestion). But I should change my example a bit, because it wasn't quite what I wanted to convey. (The real thing I'm dealing with is more complicated, and I tried to simplify it with a more intuitive example.)

With a paired-sample t-test, I can ask whether my weight-loss pill made a difference in weight, and I can ask whether my placebo made a difference in weight. But what if I have two pills and I don't know whether the effect of either of them on weight differs from the other? Either could cause weight to go up, go down, or have no effect. So I have a bunch of pairable before-and-after data, and I want to know whether the two groups differ from each other in how their before and after measurements change. The paired t-test can tell me whether my "afters" differ from my "befores" for either pill, but it seems a step more complicated to ask whether the degree to which the "afters" differ from the "befores" is different in one group than in the other. I hope this makes sense.

(The real data are the numbers of different pollen types that come from heterozygous or homozygous corn plants, and I want to see whether the numbers of two pollen types differ between the two types of crosses. My analogy to the weight-loss pill may be flawed, but I'm not going to worry about that for now.)
Last edited by efoss; 08-19-2016 at 01:11 PM.
That is what I was referring to as well: compare the differences between the two groups.
Are you now trying to say the treatments weren't randomized so you need to control for other variables?
Hi hlsmith,
To try to clarify what I was saying: Let's say I have two pills, A and B, both of which may have effects on someone's weight. I weigh 15 people, give them pill A, wait a month, and then weigh them again. And I do the same thing with pill B. Now I can do paired t-tests to determine whether the weight of the people who were given pill A is different after one month than it was before, and I can do the same thing for pill B. But what if I learn that both groups weighed significantly less at the end of the month than at the beginning? Did one pill cause more weight loss than the other, or are the two pills, both of which caused weight loss, indistinguishable from each other? How can I know that answer? My two paired t-tests, one before-and-after for pill A and the other before-and-after for pill B, won't answer that question.
Thanks.
Eric
P.S. Having grown up in Ames, IA, I like your "location" signature!
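A sketch of the distinction, with invented numbers: each paired t-test only compares before vs. after within one group, so a third test on the change scores is what actually compares the pills against each other.

```python
import numpy as np
from scipy import stats

# Invented before/after weights for two pill groups.
before_A = np.array([200., 180., 150., 170., 160.])
after_A = before_A + np.array([-10., -8., -6., -9., -7.])
before_B = np.array([190., 175., 155., 165., 185.])
after_B = before_B + np.array([-9., -7., -8., -6., -10.])

# Within-group paired t-tests: "did weight change after taking this pill?"
t_A, p_A = stats.ttest_rel(after_A, before_A)
t_B, p_B = stats.ttest_rel(after_B, before_B)

# Between-group test on the change scores: "did the pills differ from each other?"
t_AB, p_AB = stats.ttest_ind(after_A - before_A, after_B - before_B,
                             equal_var=False)
print(p_A, p_B, p_AB)  # here both pills "work", yet they are indistinguishable
```

In this toy data both within-group tests come out significant while the between-group test does not, which is exactly the situation described above.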
How do randomized controlled trials show causality? By randomizing treatment status. So why can't you attribute the effect?
Look up the Bradford Hill criteria. In addition, if you can rule out confounding, each person has a probability of being in either group, the effect of one treatment does not affect the outcome of the other group, and there aren't multiple versions of the treatment, why can't A be the source of the change?
If there are two groups and you are looking at the per-person differences in weight, why would you use a paired test?
Small world. Was in Seattle last summer and had a good time!
hi,
it is not exactly a paired t-test that I would propose: I would first calculate the before-after differences for the group treated with A, do the same for group B, and then run a simple t-test on the differences.
The reason to do this is to increase the power of the test by eliminating the variance of the individual weights.
regards
Agreed, though this assumes treatments were randomized. If they weren't, you may need to control for initial weight.
If you had heavier people in one group and both treatments were efficacious, the group with heavier people might appear to be associated with greater loss simply because those people had a higher potential for loss.
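One standard way to "control for initial weight" is an ANCOVA-style regression of final weight on baseline weight plus a group indicator. Here is a sketch using plain NumPy least squares; the numbers are invented and noise-free so the coefficients come out exactly.

```python
import numpy as np

# Invented data: everyone keeps 95% of baseline; group B loses 5 lb extra.
before = np.array([200., 180., 95., 150., 210., 120., 160., 140.])
group = np.array([0., 0., 0., 0., 1., 1., 1., 1.])  # 0 = pill A, 1 = pill B
after = 0.95 * before - 5.0 * group

# Design matrix: intercept, baseline weight, group indicator.
X = np.column_stack([np.ones_like(before), before, group])
beta, *_ = np.linalg.lstsq(X, after, rcond=None)
print(beta)  # group coefficient ~ -5: B ends 5 lb lighter at any given baseline
```

In practice one would use a regression package that also reports a standard error and p-value for the group coefficient; the point here is only that the group effect is estimated after adjusting for where each person started.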
Right,
I actually assumed that, but it definitely needs to be stated.
regards
Hi hlsmith and rogojel,
This approach of t-tests on differences is what I had first tried. It gave me answers that seemed to make sense, i.e., when I just looked at the data it seemed borderline significant, and the test gave me p = 0.06. But it seemed wrong because of the weight issue you both mentioned. My two groups are randomized, so it's not as if the group that took pill A was heavier than the group that took pill B. And certainly if everyone started out at identical weights, the difference t-test would be fine. But if someone starts out at 200 pounds and loses 10 pounds, I'll score that the same as if someone starts out at 90 pounds and loses 10 pounds. I could convert everything to "fraction of initial weight gained or lost", but that seems like it will open a whole new can of worms.
hlsmith: I’m reading up on your comments about Bradford-Hill and still digesting what I am learning to figure out whether I’m on the right track with your suggestions. Thanks.
Eric
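One common way around the can of worms just mentioned: instead of raw "fraction of initial weight", analyze log(after/before), which puts proportional changes on a symmetric additive scale. A sketch with invented numbers:

```python
import numpy as np
from scipy import stats

# Invented data: group A members keep ~93-96% of baseline, group B ~97-99%.
before_A = np.array([200., 90., 150., 180., 120.])
after_A = before_A * np.array([0.95, 0.94, 0.96, 0.95, 0.93])
before_B = np.array([210., 95., 140., 175., 130.])
after_B = before_B * np.array([0.98, 0.99, 0.97, 0.98, 0.99])

# log(after/before) measures proportional change, so a 10 lb loss at 200 lb
# is no longer scored the same as a 10 lb loss at 90 lb.
logratio_A = np.log(after_A / before_A)
logratio_B = np.log(after_B / before_B)
t, p = stats.ttest_ind(logratio_A, logratio_B, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Whether a proportional or an absolute scale is the right one depends on the science, not the statistics, so this is an option rather than a fix.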
What do the differences and standard deviations look like? Post a boxplot comparison.
Perhaps, given your sample size, there just isn't a difference!
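The quantities a boxplot draws can also be inspected numerically; a sketch with invented change scores:

```python
import numpy as np

# Invented change scores (after - before) for the two groups.
changes_A = np.array([-10., -8., -6., -9., -7., -12., -3., -5.])
changes_B = np.array([-9., -7., -8., -6., -10., -4., -11., -5.])

def five_number_summary(x):
    """Min, Q1, median, Q3, max: the numbers a boxplot displays."""
    return np.percentile(x, [0, 25, 50, 75, 100])

for name, x in (("A", changes_A), ("B", changes_B)):
    print(name, five_number_summary(x), "sd =", round(x.std(ddof=1), 2))
```

If the two five-number summaries overlap heavily relative to their spreads, a null result from the t-test is unsurprising.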
hi,
I guess you will need to be wary of p-value hacking here. Re-analysing data that was marginally significant with new methods to get a "better" result is definitely a no-go. I agree with hlsmith: maybe your power is too low or your effect size is small?
regards
Yes, I think that probably in the end there just isn't a difference. Attached is a box plot using my actual data, as hlsmith suggested. (Obviously the data aren't really from weight loss pills - they are actually data my brother-in-law collected regarding corn pollen. If they were truly diet pills I had developed, I'd be out on a yacht in the Caribbean instead of posting questions on TalkStats.)
I think I now understand the answer to the question I raised. Thanks so much, hlsmith and rogojel. I very much appreciate the help.
Best wishes,
Eric
How do you know we are not on a yacht right now? Tell your brother-in-law that the sample was small, but even after accounting for sampling variability there still wasn't a difference.
The image was telling, but you could also overlay the actual data points on the boxplot to see where the individual values fall.