Test for a difference between ratios of means, unequal variances.

#1
Hello there,

I have an experimental design where I wish to test whether the effect of a treatment on several subject types is different from its effect on a control subject type.

(Background: I am a biologist and have several different cell types. For each, I wish to know whether the treatment has a larger effect than it does on the control type.)

I have four means and four sample variances for the 2x2 design (test subject type vs. control subject type, and treatment vs. no treatment).

I'm guessing that the variances are not equal, because there is a strong correlation between the mean and the variance of each sample.

The two obvious ways to test this would be to test the difference between the two treatment-vs-no-treatment pairs, i.e.
H0: m1 - m2 = m3 - m4,

or to use an ANOVA and look at the interaction term (although both of these assume equal variances, right?).
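For concreteness, the interaction version might be set up like this; a rough sketch in Python/statsmodels, where the data, group sizes, and column names are all invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data in long format, one row per observation. The group
# sizes and effect sizes here are made up purely for illustration.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cell_type": ["test"] * 14 + ["control"] * 17,
    "treated":   [1] * 6 + [0] * 8 + [1] * 7 + [0] * 10,
    "response":  np.concatenate([rng.normal(10, 2, 6), rng.normal(5, 1, 8),
                                 rng.normal(9, 2, 7), rng.normal(5, 1, 10)]),
})

# The cell_type:treated interaction term tests H0: m1 - m2 = m3 - m4
# (and, as noted above, ordinary ANOVA assumes equal variances).
model = smf.ols("response ~ C(cell_type) * C(treated)", data=df).fit()
print(anova_lm(model, typ=2))
```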

However, these methods both compare the absolute difference between treatment and no treatment. I think I want to compare the ratio, i.e.:

H0: m1/m2 = m3/m4


You could do this by taking the log of the data and testing the linear contrast on the log scale, but then the variables in the comparison would no longer be normally distributed.
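For what it's worth, here is a rough sketch of that log-scale contrast with a Welch-type (unequal-variance) correction, again with made-up data in Python. One caveat: after logging, the contrast compares ratios of geometric rather than arithmetic means.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up log-normal samples for the four cells (replace with real data):
# test treated (m1), test untreated (m2),
# control treated (m3), control untreated (m4).
groups = [rng.lognormal(1.2, 0.4, 6), rng.lognormal(1.0, 0.4, 8),
          rng.lognormal(1.1, 0.4, 7), rng.lognormal(1.0, 0.4, 10)]
logs = [np.log(g) for g in groups]

# On the log scale, H0: m1/m2 = m3/m4 becomes the linear contrast
# (mean1 - mean2) - (mean3 - mean4) = 0, in terms of geometric means.
means = [g.mean() for g in logs]
v = [g.var(ddof=1) / len(g) for g in logs]  # squared standard errors
contrast = (means[0] - means[1]) - (means[2] - means[3])
se = np.sqrt(sum(v))

# Welch-Satterthwaite degrees of freedom, so equal variances need not hold.
df = sum(v) ** 2 / sum(vi ** 2 / (len(g) - 1) for vi, g in zip(v, logs))
t = contrast / se
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.3f}, df = {df:.1f}, p = {p:.4f}")
```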

I read that the ratio of two normal random variables can follow a Cauchy distribution (although I gather that the unequal variances might be a problem), but I have no idea how to use this information to construct a test.

Anyone have any ideas on how I might construct a test for this data?

Cheers,

Ian
---
 
#4
I had nearly the same question. Despite searching the forums, I had not found this post before I posted separately (a link to my original post is included at the end). In any case, if one were to go ahead and implement the randomization/permutation test in something like MATLAB, how might the resampling be structured? Using the above post's nomenclature, I too am interested in:
H0: m1/m2 = m3/m4
or, restated:
H0: m1/m2 - m3/m4 = 0

where, for example, m1 represents the mean of 6 observations; m2, the mean of 8 observations; m3, of 7; and m4, of 10. Equal variances are not assumed.

Does the following setup seem correct?

I could bootstrap the difference of the ratios: randomly pick (with replacement) samples from the observations behind each of m1, m2, m3, and m4, and compute the difference of the ratios each time. This would give me a distribution of the difference, which would likely be centered near my observed value, and I could then see where zero falls on that distribution. If zero is off in one of the tails, beyond 95% of the distribution points, I could conclude that the ratios are significantly different. I'm not sure whether it would be valid to report the proportional location of zero in the distribution as a p-value, though. So if I do 10,000 resamples and zero falls as the 99th smallest value, can I say the difference in ratios is > 0 with p < 0.01?
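To make the setup concrete, the resampling might be structured like this; a rough sketch in Python rather than MATLAB (the same loop structure carries over directly), with made-up data in the group sizes stated above:

```python
import numpy as np

rng = np.random.default_rng(42)
# Made-up data with the stated group sizes (replace with real samples).
g1 = rng.normal(10, 2, 6)    # observations behind m1
g2 = rng.normal(5, 1, 8)     # observations behind m2
g3 = rng.normal(9, 2, 7)     # observations behind m3
g4 = rng.normal(5, 1, 10)    # observations behind m4

def ratio_diff(a, b, c, d):
    """The statistic of interest: m1/m2 - m3/m4."""
    return a.mean() / b.mean() - c.mean() / d.mean()

observed = ratio_diff(g1, g2, g3, g4)

# Resample each group independently, with replacement, and recompute
# the statistic each time.
B = 10_000
boot = np.array([ratio_diff(*(rng.choice(g, size=len(g), replace=True)
                              for g in (g1, g2, g3, g4)))
                 for _ in range(B)])

# Percentile confidence interval for the difference; if zero lies
# outside it, the ratios differ at the corresponding level.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed = {observed:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```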

From looking at other references, it seems that to set this up as a true hypothesis test, it may be better to use a permutation test instead of the bootstrap. That would give me the distribution of the statistic assuming H0, and the location of my observed value on that distribution would yield the appropriate p-value. However, in this case I can't see how to set up the randomization/reassignment for the permutation test: there are no individual observations to shuffle between the minuend and the subtrahend.
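One possibility might be to impose H0 directly instead of permuting: rescale one group so that the sample ratios match exactly, then resample under that null and see where the observed statistic falls. A rough sketch with the same made-up data as above; I'm not sure this counts as a proper permutation test, so treat it as a guess:

```python
import numpy as np

rng = np.random.default_rng(7)
# Same made-up groups as in the bootstrap sketch above.
g1 = rng.normal(10, 2, 6)
g2 = rng.normal(5, 1, 8)
g3 = rng.normal(9, 2, 7)
g4 = rng.normal(5, 1, 10)

def ratio_diff(a, b, c, d):
    return a.mean() / b.mean() - c.mean() / d.mean()

observed = ratio_diff(g1, g2, g3, g4)

# Impose H0 by rescaling g3 so that m1/m2 = m3/m4 holds exactly in the
# sample; resampling from the rescaled data then approximates the
# distribution of the statistic under the null.
scale = (g1.mean() / g2.mean()) / (g3.mean() / g4.mean())
g3_null = g3 * scale

B = 10_000
null = np.array([ratio_diff(*(rng.choice(g, size=len(g), replace=True)
                              for g in (g1, g2, g3_null, g4)))
                 for _ in range(B)])

# Two-sided p-value: how often the null statistic is at least as extreme
# as the observed one (+1 is the usual small-sample correction).
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (B + 1)
print(f"observed = {observed:.3f}, p = {p:.4f}")
```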

Any suggestions will be greatly appreciated. Thanks in advance.


My original post, where I was seeking an analytical solution (though at this point I'd be happy with a numerical one too):
http://www.talkstats.com/showthread...r-single-or-multiple-comparisons-Fieller-s-CI
 