
Thread: bonferroni correction in multivariate regression

#16 CowboyBear (TS Contributor, New Zealand)

    Re: bonferroni correction in multivariate regression




    Quote Originally Posted by rogojel View Post
    If I understand correctly what CowboyBear says then I would disagree - the assumption that most of the coefficients are exactly zero (that is, they have no effect on my DV) seems reasonable to me. That seems to me to be equivalent to Occam's razor, or the paucity-of-effects principle from the DoE crowd.
    Interesting, I've never heard of the paucity of effects principle. Could you expand?

    Occam's razor says we should prefer the simpler of two explanations that are equally good at explaining the same set of observations. It is not a guarantee that the world itself is simple, so I don't think it's relevant here. You could say that the idea that parameters tend often to be zero is a theory, but that theory would by definition not be as good at explaining actual observations as a theory allowing parameters to vary, so again the razor is of limited relevance.

    If you'd like an empirical demonstration of my point, see this article: Empirical statistics: IV. Illustrating Meehl's sixth law of soft psychology: everything correlates with everything

    The authors take a haphazard grab-bag of 135 educational and biographical variables and examine their pairwise associations in 2058 subjects. Despite many being conceptually unrelated, each variable had a significant correlation with about 41% of the other variables. (Not a regression approach, but you get the idea.)
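    As a point of comparison for that 41%, here is a small simulation sketch (Python with numpy/scipy; the variable count here is an arbitrary choice for speed, much smaller than the article's 135). With purely independent noise, only about 5% of pairwise correlations come out "significant" at alpha = .05, so a 41% rate reflects real (if small) associations rather than multiple testing alone:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_vars = 2058, 30          # arbitrary; the article used 135 variables
    X = rng.standard_normal((n_subjects, n_vars))  # independent noise, no true associations

    sig = total = 0
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            r, p = stats.pearsonr(X[:, i], X[:, j])
            total += 1
            sig += p < 0.05

    # by chance alone, roughly 5% of pairs come out "significant"
    print(f"{sig}/{total} pairs significant ({sig / total:.1%})")
    ```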


#17 Injektilo

    Quote Originally Posted by rogojel View Post
    If I understand correctly what CowboyBear says then I would disagree - the assumption that most of the coefficients are exactly zero (that is, they have no effect on my DV) seems reasonable to me. That seems to me to be equivalent to Occam's razor, or the paucity-of-effects principle from the DoE crowd.
    Well, remember how statistical significance tests are applied: we either reject the null hypothesis or we fail to reject it. Note that failing to reject a null hypothesis does not equate to accepting the null hypothesis.

#18 rogojel (TS Contributor, Hungary)

    hi,
    the paucity of effects means that when we design a screening experiment we can assume that most of the factors will be inert, that is, they will have no effect on the studied outcome. E.g. if we have 10 factors then we can investigate 1024 different terms for main effects and the various interactions between the factors - and we very definitely expect most of those to have a coefficient that is exactly zero.
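    A quick check of that count (Python; plain combinatorics, nothing DoE-specific assumed): the effects of each order for 10 factors are the binomial coefficients C(10, k), which sum to 1023 effects, or 1024 model terms once the intercept is included.

    ```python
    from math import comb

    n_factors = 10
    # number of possible effects of each order: main effects, 2-way, ..., 10-way
    effects = {k: comb(n_factors, k) for k in range(1, n_factors + 1)}
    print(effects[1], effects[2])  # 10 main effects, 45 two-way interactions
    print(sum(effects.values()))   # 1023 effects; with the intercept, 1024 terms
    ```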

    I see Occam's razor in a bit less restrictive way. My Latin fails me, but IIRC the original statement says something like you should not multiply the causes without need. That would fit with the view that one should aim for the model that has the least number of parameters without a serious deterioration of model quality, this latter being defined in some objective way, e.g. predictive RMSE or such.

    @injektilo - I do not see the practical difference. If I cannot reject the null hypothesis, that would mean, in practice, that I have no basis, for example, to request a new investment in a plant for some machine that will control that factor. If I can show the null hypothesis to be false, I have the needed arguments to request such an investment.

    regards
    rogojel

#19 Dason (Tampa, FL)

    I haven't heard of paucity of effects but have heard of sparsity of effects which seems to be the same thing.
    I don't have emotions and sometimes that makes me very sad.



#21 Karabiner (TS Contributor, Germany)

    If we accept the idea for a moment that true hypotheses "b=0" exist, then a significant F test in multiple regression suggests that at least one regression coefficient deviates from zero. If we have a model with 12 coefficients and 3 of them are "significant", then 2 of them could be false positives (the actual probability that there are false positives depends on additional factors, but it is basically > 0). So why don't we have to adjust for multiple testing? Somewhere it was asked why we have to adjust in ANOVA post hoc tests but not in the case of regression with dummy coding.
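    That false-positive worry is easy to illustrate with a small simulation (Python with numpy/scipy; the sample size, predictor count, and replication count here are arbitrary choices, not from any post in this thread). With 12 pure-noise predictors tested at alpha = .05, the chance of at least one "significant" coefficient is roughly 1 - 0.95^12 ≈ 0.46 with no correction, and close to 0.05 with a Bonferroni-adjusted threshold:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, k, reps, alpha = 200, 12, 2000, 0.05

    any_raw = any_bonf = 0
    for _ in range(reps):
        X = rng.standard_normal((n, k))
        y = rng.standard_normal(n)              # y is unrelated to every predictor
        Xd = np.column_stack([np.ones(n), X])   # add an intercept column
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta
        s2 = resid @ resid / (n - k - 1)
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
        p = 2 * stats.t.sf(np.abs(beta / se)[1:], df=n - k - 1)  # skip the intercept
        any_raw += (p < alpha).any()            # any coefficient "significant"?
        any_bonf += (p < alpha / k).any()       # same, with Bonferroni threshold

    print(f"P(>=1 false positive), uncorrected: {any_raw / reps:.2f}")   # ~0.46
    print(f"P(>=1 false positive), Bonferroni:  {any_bonf / reps:.2f}")  # ~0.05
    ```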

    With kind regards

    K.


#22 rogojel (TS Contributor, Hungary)

    Hi Karabiner,
    exactly! That is what is bugging me too.

    BTW I am just reading a book http://www.amazon.de/Data-Mining-Bus...words=Ledolter

    and it has a whole chapter about this problem. The author basically suggests using lasso regression to select the parameters. The problem is that the lasso-selected parameters might have high p-values if the model is checked traditionally - I am not sure what to do then.

    I already eliminated the non-significant terms to get a model with low p-values and this seemed to work, but I believe the other alternative is also acceptable, i.e. to keep all the terms and screw the p-values.
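    For what it's worth, a minimal sketch of the lasso-selection idea (this uses scikit-learn's LassoCV purely for illustration - the book may well do it differently; the data are simulated, with only 3 of 20 coefficients truly nonzero, matching the sparsity-of-effects assumption):

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(2)
    n, k = 200, 20
    X = rng.standard_normal((n, k))
    beta = np.zeros(k)
    beta[:3] = [2.0, -1.5, 1.0]        # only the first 3 coefficients are truly nonzero
    y = X @ beta + rng.standard_normal(n)

    # cross-validated lasso shrinks most coefficients exactly to zero
    model = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(model.coef_ != 0)
    print("selected predictors:", selected)
    ```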

    regards

#23 noetsi

    Think of what a slope of 0 means substantively. It means that empirically (and in practice this commonly means correlational data, not experimental tests) one variable has absolutely no relationship with another variable. It seems unlikely that a researcher would include many, if any, variables that are not somehow in the same domain in their analysis, because including totally nonsensical variables serves no research purpose and violates the concept of parsimony. Realistically the IVs and DV will commonly be in the same domain even if the author feels one will not drive the other. So some correlation is always likely, even if not very much.

    Also, by sheer random chance it is likely that there will be some spurious correlation between variables (even if you had a population, and even more so if you have a sample). This might be pure noise, but it would likely occur. This ignores, of course, measurement error and movement over time, which make the problem worse.

    So exactly-zero slopes are probably unrealistic - certainly in correlational studies, which I imagine most regression addresses.
    "Very few theories have been abandoned because they were found to be invalid on the basis of empirical evidence...." Spanos, 1995

#24 Injektilo

    Quote Originally Posted by rogojel View Post
    @injektilo - I do not see the practical difference. If I cannot reject the null hypothesis, that would mean, in practice, that I have no basis, for example, to request a new investment in a plant for some machine that will control that factor. If I can show the null hypothesis to be false, I have the needed arguments to request such an investment.

    regards
    rogojel
    The difference may look like splitting hairs, but it is nonetheless important. You have to conclude that you don't have enough information to say the null hypothesis is false, rather than conclude that the null hypothesis is true. It's like saying that because you don't have enough evidence to show something is not equal to 0, it must be equal to 0. That's a leap in logic.

#25 CowboyBear (TS Contributor, New Zealand)

    Quote Originally Posted by Injektilo View Post
    The difference is that you have to conclude that you don't have enough information to say the null hypothesis is false
    Yep. The logical argument:

    This test statistic would be quite probable if the null hypothesis were true (p > 0.05).
    Therefore the null hypothesis is probably true.

    is obviously a fallacy. But what about the other possibility? I.e., what happens if the p value is statistically significant? Then you have this logical argument:

    This test statistic would be improbable if the null hypothesis were true (p < 0.05).
    Therefore the null hypothesis is probably false.

    But that's a logical fallacy too... (probabilistic modus tollens).

#26 noetsi

    I don't know about Bayesian approaches, but I was always taught you could never conclude the null was true. It is either rejected or not rejected. That is why researchers set up the alternate hypothesis to test what they really believe is true. They are hoping to reject the null, because if they don't, they have learned a lot less than if they do.

    Not good statistics, but good if you want to get published, since not finding something is a lot harder sell than finding something.

#27 CowboyBear (TS Contributor, New Zealand)

    Quote Originally Posted by noetsi View Post
    I don't know about Bayesian approaches, but I was always taught you could never conclude the null was true. It is either rejected or not rejected.
    It depends a little on the framework. In Fisherian NHST, and the "hybrid" method most people learn now, you can't conclude the null is true.

    But in Neyman-Pearson NHST, you can "accept" the null hypothesis if p > alpha. That said, one isn't strictly saying that the null hypothesis is true, just that you have evidence to justify acting as if it were true (i.e., you have grounds for a decision to guide behaviour).

    In some Bayesian tests - e.g., some of the Bayes Factor tests being developed at the moment - it is possible to provide evidence to support a point null hypothesis. Which is useful if you're doing something like testing for extrasensory perception, where the exact null hypothesis being tested might actually be true. But in Bayesian estimation more generally you typically specify a continuous prior probability distribution for the estimated parameters, which means you're implicitly saying that the probability that the null hypothesis is exactly true is zero.

    Quote Originally Posted by noetsi View Post
    That is why researchers set up the alternate hypothesis to test what they really believe is true.
    In obscure tidbits of information for the day: another way to use NHST is the "strong" form. I.e., you have a theory that makes a quantitative prediction about what the exact true value of the parameter should be (which might not be zero). You then specify this value as the null hypothesis and see if you can find evidence to reject it. This use of NHST fits more with a Popperian approach to science (i.e., you specify a theory and then try to falsify it). I've never seen it done in practice - theories in psych are almost never specific enough to predict the exact value of a parameter. But apparently this approach has been used in physics.

#28 noetsi


    In classes in social science it was drilled into my head over and over again that you can only reject the null.

    In no research that I have been involved in (all social sciences) would you reasonably know an exact value for the null.
