Search results

  1. J

    hypothesis testing: proportions with 3 samples

    I conclude that the means differ, all 3 and all pairs. I'm working without a net. Why don't you show us that I'm incorrect? Anybody?
  2. J

    hypothesis testing: proportions with 3 samples

    You folk seem a little nervous.
  3. J

    hypothesis testing: proportions with 3 samples

    No, Dason, you are wrong. I learned many years ago, and taught my students, that there are two times when statistics is inappropriate. This case is one where we know everything we need to know by looking at the data. We can dazzle the uninformed, but bring nothing to the table. The second is...
  4. J

    hypothesis testing: proportions with 3 samples

    "How could I test if there is a significant difference in the proportion of customers that would return for each hotel?" All you need do is calculate NO/REPLIES, 41%, 14% and 26%, there are obviously significant differences between each pair. Its easy. joe b.
  5. J

    Need help finding sampling error

    Do you have the Excel file? I'd like to see it.
  6. J

    Need help finding sampling error

    It is not possible. There is no question; you need a question or hypothesis. Even then, nobody knows the sampling error. Alpha is the allowed sampling error, the probability of accepting Ha when Ho is true; but nobody knows THE sampling error. Just like we never know mu. You need more info, including a question...
  7. J

    hypothesis testing: proportions with 3 samples

    I would add to the table; it looks like no %NO is even close to another. Then check the binomial, and find that the probability that any two %NO agree is ~zero. (A pairwise binomial sketch appears after these results.)
  8. J

    CV constant for Normal distribution based variables

    The model covers n = 2 to n = 10. Could the CV have anything to do with the Tippett bias corrections for estimating sigma from the range? Just a thought; I can't make that or any connection. Thanks; joe b. (A small simulation of those range-to-sigma constants appears after these results.)
  9. J

    CV constant for Normal distribution based variables

    With mu and sigma defining a random Normal distribution, sampled pairs (x, y) define points. The max distance between pairs of points has s/x-bar constant for each n, for any/many values of mu and sigma. For n = 5 (5 pairs of points), the CV is .27 for any mu and sigma tried. The .27 has been sorta... (A simulation sketch of this setup appears after these results.)
  10. J

    t, Z and sigma again

    Got it, now I understand, thanks; joe b.
  11. J

    percent difference

    Some thoughts: An s of .0004 means that measurements of .0797 and .0799 were happening, +/- 1 in 10,000. Measurements with a reasonable unit of measure, of about anything, are complex even today. Measurements to .0001" require equipment capable of precision of +/- .00005". An s of .0004 / mean...
  12. J

    percent difference

    What are the standard deviations? Where do +/- .0004 and .0009 come from? If s is teeny, it may not be proper to say that the percent difference is 0%. If s is huge, it may be proper to say that the percent difference is 0%. As somebody once said, it's all relative. joe b.
  13. J

    t, Z and sigma again

    If you use the Z-test with s and a reasonable n, which is the incorrect use of the test, you get a Type I error rate similar to the significance level (0.05). (A quick simulation of this appears after these results.)
  14. J

    Comparing 3 treatments (bimodal results)

    With 2 going in (Ill and Healthy), 3 treatments, and 2 coming out (Ill and Healthy), there are 12 outcomes. Why not make a bar graph, look at it, and a set of answers/conclusions should be clear? (A bar-graph sketch appears after these results.) Why confuse the process with these exotic statistical tests? Turning a set of data into tables and...
  15. J

    Comparing 3 treatments (bimodal results)

    Are there six patients?
  16. J

    t, Z and sigma again

    Back in the old days, ANOVA assumed that variances were equal; the thinking was that squared differences led to the conclusion that the means differed. I guess that has changed.
  17. J

    t, Z and sigma again

    Well, it depends on where you're standing. In Stats 101, I thought and acted as though the Normal distribution was of great importance, introducing the notions of a distribution, a probability distribution, the area under the curve, and the CLT. About random and independent and hypothesis testing and...
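
Sketches referenced in the results above

The comparison in result 4 (and result 1) can be written down formally as a chi-square test of homogeneity across the three hotels. The counts below are hypothetical stand-ins chosen to give roughly the quoted 41%, 14%, and 26% NO rates; the thread's actual reply counts are not in the excerpt.

```python
# Chi-square test of homogeneity for "would not return" proportions at three hotels.
# The counts are hypothetical; only the ~41%, 14%, 26% NO rates come from the thread.
import numpy as np
from scipy.stats import chi2_contingency

no_counts  = np.array([41, 14, 26])   # hypothetical NO replies per hotel
yes_counts = np.array([59, 86, 74])   # hypothetical YES replies per hotel
table = np.vstack([no_counts, yes_counts])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
# A tiny p-value says at least one hotel's NO proportion differs from the others;
# pairwise follow-ups (next sketch) pin down which pairs differ.
```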
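Result 7's "check the binomial" step, sketched with the same hypothetical counts: for each pair of hotels, ask how plausible hotel B's NO count is if its true NO rate equaled hotel A's observed rate.

```python
# Pairwise binomial check: is hotel B's NO count plausible under hotel A's observed NO rate?
# Counts are hypothetical stand-ins; only the quoted NO percentages come from the thread.
from itertools import combinations
from scipy.stats import binomtest

no_counts = {"A": 41, "B": 14, "C": 26}     # hypothetical NO replies
replies   = {"A": 100, "B": 100, "C": 100}  # hypothetical total replies

for a, b in combinations(no_counts, 2):
    p_a = no_counts[a] / replies[a]                    # observed NO rate at hotel a
    result = binomtest(no_counts[b], replies[b], p_a)  # b's count under a's rate
    print(f"{b} vs rate of {a}: p = {result.pvalue:.2e}")
# With rates as far apart as 41%, 14%, and 26%, every pairwise p-value comes out near zero,
# which is the post's "probability of any two %NO [agreeing] is ~zero" observation.
```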
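On result 8's question about Tippett's bias correction: the correction is the constant d2 = E[range]/sigma for samples of size n, used as sigma-hat = R-bar/d2. A small simulation sketch of those constants for n = 2 to 10, the sample sizes the post mentions:

```python
# Simulate d2 = E[sample range] / sigma for samples of size n from a Normal distribution.
# This is the Tippett-style bias-correction constant behind sigma-hat = mean(range) / d2.
import numpy as np

rng = np.random.default_rng(0)
sigma, reps = 1.0, 200_000

for n in range(2, 11):
    samples = rng.normal(0.0, sigma, size=(reps, n))
    ranges = samples.max(axis=1) - samples.min(axis=1)
    d2 = ranges.mean() / sigma
    print(f"n = {n:2d}: d2 ~ {d2:.3f}")
# Simulated values land near the tabled constants (e.g. ~1.128 at n = 2, ~2.326 at n = 5,
# ~3.078 at n = 10), which is the correction the post is asking about.
```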
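Result 9 is ambiguous in the excerpt, but one plausible reading is: draw n points whose x and y coordinates are N(mu, sigma), take the maximum pairwise distance, and look at s/x-bar of that statistic over many repeats. Since distances ignore mu and scale with sigma, that CV should depend only on n. A sketch under that reading (the reported .27 for n = 5 is the thread's number, not verified here):

```python
# CV of the maximum pairwise distance among n random Normal points.
# Distances ignore mu (translation) and scale with sigma, so the CV depends only on n.
import numpy as np
from itertools import combinations

def max_distance_cv(n, mu, sigma, reps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    max_d = np.empty(reps)
    for r in range(reps):
        pts = rng.normal(mu, sigma, size=(n, 2))  # n points, coords ~ N(mu, sigma)
        max_d[r] = max(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
    return max_d.std(ddof=1) / max_d.mean()       # CV = s / x-bar

for mu, sigma in [(0, 1), (100, 1), (0, 50), (7, 3)]:
    print(f"n=5, mu={mu}, sigma={sigma}: CV ~ {max_distance_cv(5, mu, sigma):.3f}")
# The printed CVs should agree with each other for a given n, whatever mu and sigma are.
```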
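Result 13's claim, that a Z-test run with the sample s at a reasonable n still rejects a true H0 at roughly the nominal rate, is easy to check by simulation. A sketch assuming Normal data, n = 30, and alpha = 0.05 (all of which are my choices, not the thread's):

```python
# Type I error of a two-sided Z-test that (incorrectly) plugs in the sample s for sigma.
# With Normal data and a reasonable n, the rejection rate under H0 stays near alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma, n, alpha, reps = 50.0, 4.0, 30, 0.05, 100_000
z_crit = norm.ppf(1 - alpha / 2)

samples = rng.normal(mu0, sigma, size=(reps, n))  # H0 is true in every replication
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)                   # s used in place of sigma
z = (xbar - mu0) / (s / np.sqrt(n))
rejection_rate = np.mean(np.abs(z) > z_crit)

print(f"Type I error with s and n={n}: {rejection_rate:.3f} (nominal {alpha})")
# Expect a value slightly above 0.05 (roughly 0.06 at n = 30), shrinking toward 0.05 as n grows.
```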
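Result 14's suggestion can be sketched directly: tabulate treatments against outcomes and look at the bars (collapsing over the "going in" state for brevity). The counts are hypothetical placeholders; only the 3-treatments-by-2-outcomes structure comes from the thread.

```python
# Grouped bar chart of 3 treatments x 2 outcomes (Healthy / Ill), as result 14 suggests.
# The counts are hypothetical placeholders; only the 3x2 structure comes from the thread.
import numpy as np
import matplotlib.pyplot as plt

treatments = ["Treatment A", "Treatment B", "Treatment C"]
healthy = np.array([8, 5, 2])  # hypothetical counts of patients coming out Healthy
ill     = np.array([2, 5, 8])  # hypothetical counts of patients coming out Ill

x = np.arange(len(treatments))
width = 0.35
plt.bar(x - width / 2, healthy, width, label="Healthy")
plt.bar(x + width / 2, ill, width, label="Ill")
plt.xticks(x, treatments)
plt.ylabel("Number of patients")
plt.legend()
plt.show()
# If one treatment's bars clearly dominate, the conclusion is visible without a formal test.
```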