Recent content by ondansetron

  1. High statistical significance with low R squared coefficient

    Right, but the OP is presenting a Frequentist statistic, and encouraging the interpretation in the Bayesian framework contributes to the poor literacy we see today and the illusion that a p-value of .08 means there's an 8% chance the null is true or that a "mistake" has been committed. You and...
  2. High statistical significance with low R squared coefficient

    So you're trying to move from a Frequentist interpretation to a Bayesian one. I think your argument, in the Frequentist framework, is flawed in that the null is either true or it is not, "probability" 1 or 0. The p-value doesn't change that in the slightest. It also doesn't imply that [small...
  3. High statistical significance with low R squared coefficient

    As far as I'm aware, the easiest way would be introducing biases that readily occur in practice. If you don't want to, you could certainly repeat the experiment as many times as you want to keep interpreting p-values and end up making many different claims regarding the probability of Ho based...
  4. High statistical significance with low R squared coefficient

    Sure, if you want a real experiment versus a simulation, either can be done. Fail to randomize people into groups in a way that introduces bias away from the null, or introduce some other form of bias that typically occurs in studies, and you can get a p-value less than alpha even though the null is...
  5. High statistical significance with low R squared coefficient

    This is what I was saying is incorrect. The p-value doesn't tell you anything regarding the probability the null hypothesis is true or false. I can design an experiment where the null hypothesis is true but where the p-value is very low. This illustrates why it's incorrect to say the p-value...
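The point in the last few posts — that the null can be true while the p-value comes out small, and that repeated experiments will produce such p-values by chance — can be illustrated with a quick simulation (a hypothetical stdlib-only sketch; the test, sample size, and replication count are made up for illustration). With the null true in every replication, about alpha of the p-values fall below alpha:

```python
import math
import random

random.seed(42)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for a mean, with sigma known."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# H0 (mean = 0) is TRUE in every replication: data are N(0, 1)
reps, n, alpha = 5000, 30, 0.05
rejections = sum(
    z_test_p([random.gauss(0, 1) for _ in range(n)]) < alpha
    for _ in range(reps)
)
print(rejections / reps)  # rejection rate close to alpha (~0.05)
```

Every one of those "significant" results occurs with the null exactly true, which is why a small p-value cannot be read as the probability that the null is false.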
  6. High statistical significance with low R squared coefficient

    The bold part isn't accurate. A small p-value for the case you provided would indicate that it's improbable to see a coefficient at least as contradictory to the null hypothesis, IF what we saw is entirely due to chance (null is true). In other words, p-values don't tell you any sort of...
  7. Significant 2x2 interaction, but non-significant simple main effects - how to interpret?

    As a general rule, test higher order terms first, before testing the nested terms, and don't test nested terms if the higher order term is significant. For example, A*B should be tested before either A or B. If A*B is significant, then, by definition, A and B are statistically useful variables...
  8. Confidence Interval in Statistics test

    So, in the traditional sense of frequentist statistics, I think you can say "95% confident" because they didn't mean it as a probability statement about the interval. They meant it to refer to the methodology and its long-run success rate if used properly. It was almost a shorthand way of saying "this...
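That "long run success rate" reading of a 95% interval can be checked directly (a hypothetical stdlib-only sketch; the true mean, sigma, and sample sizes are invented for illustration). Repeat the interval-building procedure many times and roughly 95% of the intervals cover the true mean — that coverage rate, not a probability statement about any single interval, is what the "confidence" refers to:

```python
import math
import random

random.seed(0)

def ci95(sample, sigma=1.0):
    """95% z-interval for a mean, with sigma known."""
    n = len(sample)
    m = sum(sample) / n
    half = 1.96 * sigma / math.sqrt(n)
    return m - half, m + half

mu, reps, n = 5.0, 4000, 25  # mu is the (known-to-us) true mean
covered = sum(
    lo <= mu <= hi
    for lo, hi in (
        ci95([random.gauss(mu, 1.0) for _ in range(n)]) for _ in range(reps)
    )
)
print(covered / reps)  # long-run coverage near 0.95
```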
  9. How serious are violations of regression assumptions

    One of the big problems that I've seen is the misunderstanding that these things are "just calculations" and boil down to a black and white matter (not saying this is you, noetsi, but the people pushing the program). The violation may affect one conclusion in a material way and another in an...
  10. How serious are violations of regression assumptions

    Great example of how "big data" and "analytics" are watered down statistics...I think you can see how violated assumptions screw with estimates and conclusions when you've worked on something that changed dramatically when the assumption violations were remedied or a more appropriate method was...
  11. Ratio of sizes data

    This was my thought after reading your first post. I think it would be reasonable to use ANCOVA to model Golgi size (volume, area, or however you intended) as a function of genotype after accounting for the covariate cell area/volume (again, whichever size measurement you planned to use). Is...
  12. Logistic Regression Models Without Main Effects?

    Another note is that you will have to work with your data to determine which method of relieving collinearity is best. Centering may work in some cases and not in others, depending on the variable, whereas a ridge regression may be worthwhile in other cases.
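A concrete illustration of the centering point (a hypothetical sketch; the predictor and its range are made up): for a strictly positive predictor, x and x² are almost perfectly correlated, and centering x before squaring removes most of that collinearity — though, as noted above, whether it helps in a given model still depends on the data:

```python
import math
import random

random.seed(1)

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

# Strictly positive predictor: x and x^2 are nearly collinear
x = [random.uniform(2, 10) for _ in range(500)]
raw = corr(x, [v * v for v in x])

# Center first, then square: the linear and quadratic terms decouple
mean_x = sum(x) / len(x)
xc = [v - mean_x for v in x]
centered = corr(xc, [v * v for v in xc])

print(round(raw, 3), round(centered, 3))  # high vs. near zero
```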
  13. Logistic Regression Models Without Main Effects?

    Also, there are ways to handle collinearity if you need to make inferences on the beta estimates (ridge regression, possible centering, partialling out a variable you don't care about, dimension reduction). It is also not advisable, for estimation purposes, to exclude variables that are...
  14. Logistic Regression Models Without Main Effects?

    I don't think it's very reasonable to exclude main effects. By definition, if the interaction is important, you've specified that the variables are important for illustrating the relationship accurately. As a general principle, it's not good to test main effects or lower order terms after a...
  15. Immortality & Bayesian Statistics

    This is different than improbable. Now, you're saying H=>~X (if Hypothesis is true, then we won't see X). The contrapositive is true: X => ~H (If we see X, then H is not true). However, the converse isn't necessarily true. That is, you can't say ~X => H (if we don't see X, H is true)...
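The propositional claim in the last post — that H => ~X is equivalent to its contrapositive X => ~H but not to its converse ~X => H — can be verified by an exhaustive truth-table check (a small illustrative sketch):

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

# All four truth assignments for (H, X)
cases = list(product([False, True], repeat=2))

stmt   = [implies(h, not x) for h, x in cases]  # H => ~X
contra = [implies(x, not h) for h, x in cases]  # X => ~H (contrapositive)
conv   = [implies(not x, h) for h, x in cases]  # ~X => H (converse)

print(stmt == contra)  # True: the contrapositive is equivalent
print(stmt == conv)    # False: the converse is not
```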