
Thread: Meta-analysis: How should one handle non-significant studies containing no raw data?

  1. #1 Speldosa

    Meta-analysis: How should one handle non-significant studies containing no raw data?




    Let's say that I'm conducting a meta-analysis comparing the performance of group A and group B on a certain construct. Some of the studies I come across will report that no statistically significant differences were found between the two groups, but will present no exact test statistics and/or raw data. In a meta-analysis, how should I handle such studies?

    Basically, I see three different alternatives here:
    1. Include them all and assign each of them an effect size of 0.
    2. Throw them all out.
    3. Do some kind of power analysis for each of them, or set a threshold at a certain number of participants. Include all studies that should have been able to reach statistical significance and assign each of them an effect size of 0. Throw the rest out.

    I can see merits in all of these options. Option one is fairly conservative; you only risk making a type II error. Option two raises the risk of a type I error, but it also avoids having your results ruined by a bunch of underpowered studies. Option three seems like a middle road between the first two, but it requires a lot of assumptions and/or outright guesses (What effect size should you base your power analyses on? How many participants should you demand from each study for it to pass?), probably making the final result less reliable and more subjective.
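
    To make the trade-off between options one and two concrete, here is a minimal sketch (not from the original post; all study numbers are made up) of a fixed-effect, inverse-variance pooling of standardized mean differences, computed once with the no-data studies imputed as d = 0 and once with them dropped:

    Code:
    import numpy as np

    def var_d(d, n1, n2):
        # Large-sample variance of Cohen's d for two independent groups
        return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

    def pooled_d(studies):
        # Fixed-effect, inverse-variance pooled estimate of d
        d = np.array([s[0] for s in studies])
        w = np.array([1.0 / var_d(*s) for s in studies])
        return float(np.sum(w * d) / np.sum(w))

    # Hypothetical studies that report an effect size: (d, n1, n2)
    reported = [(0.45, 40, 40), (0.30, 60, 55), (0.50, 25, 30)]
    # Hypothetical "no significant difference, no numbers given" studies: (n1, n2)
    no_data = [(15, 15), (20, 18), (12, 14)]

    option1 = pooled_d(reported + [(0.0, n1, n2) for n1, n2 in no_data])  # impute d = 0
    option2 = pooled_d(reported)                                          # drop them

    print(f"Option 1 (impute 0): pooled d = {option1:.3f}")
    print(f"Option 2 (drop):     pooled d = {option2:.3f}")

    How far the imputed zeros pull the estimate down depends on how many such studies there are and how small they are; the extreme version of this comes up in post #6 below.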

  2. #2 Karabiner (TS Contributor)

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?

    Quote Originally Posted by Speldosa View Post
    Do some kind of power analysis for each of them, or set a threshold at a certain number of participants. Include all studies that should have been able to reach statistical significance and assign each of them an effect size of 0. Throw the rest out.
    I don't quite understand this. But one can often work out which effect would have been needed to reach significance, given the sample size. Maybe some value(s) between zero and these threshold values (the mean value? a random value between these limits?) could be used.
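
    One way to read this suggestion, sketched below under the usual two-sample t-test approximation (the group sizes are hypothetical): compute, for each study, the smallest standardized mean difference that would just have reached two-sided significance with that study's sample sizes, and impute something between zero and that bound.

    Code:
    from scipy import stats

    def just_significant_d(n1, n2, alpha=0.05):
        # Smallest |d| that would just reach two-sided significance in a
        # two-sample t-test, using |t| = |d| / sqrt(1/n1 + 1/n2)
        df = n1 + n2 - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return t_crit * (1 / n1 + 1 / n2) ** 0.5

    for n1, n2 in [(15, 15), (30, 30), (100, 100)]:
        print(f"n1={n1}, n2={n2}: smallest just-significant d = "
              f"{just_significant_d(n1, n2):.2f}")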

    It's a bit surprising that a relevant proportion of studies doesn't report descriptive statistics together with the test results. Maybe the pages of the Cochrane Collaboration have information on how to deal with results for which only a p-value is given.
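
    For the related case where a study does report an exact p-value and group sizes but nothing else, a common back-calculation (along the lines of what the Cochrane Handbook describes for recovering effect sizes from test statistics) is to recover the t statistic from the p-value and convert it to a standardized mean difference. A rough sketch, assuming a two-sided p from a two-sample t-test:

    Code:
    from scipy import stats

    def d_from_p(p_two_sided, n1, n2):
        # Recover |t| from a two-sided p-value, then convert to Cohen's d;
        # the sign/direction has to come from the paper's text
        df = n1 + n2 - 2
        t_abs = stats.t.ppf(1 - p_two_sided / 2, df)
        return t_abs * (1 / n1 + 1 / n2) ** 0.5

    # e.g. a hypothetical study reporting only "p = .20, n = 30 per group"
    print(f"d = {d_from_p(0.20, 30, 30):.2f}")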

    With kind regards

    K.

  3. #3 Speldosa

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?

    Quote Originally Posted by Karabiner View Post
    I don't quite understand this. But one can often work out which effect would have been needed to reach significance, given the sample size. Maybe some value(s) between zero and these threshold values (the mean value? a random value between these limits?) could be used.
    No, you're right. That was both poorly thought through and poorly explained. What I meant was something along the lines of sorting out all the clearly underpowered studies by assuming some small effect size favoring either A or B. For example, if a study would have had less than, say, a 5% chance of detecting an effect size of d = 0.10, we'd proclaim it underpowered and throw it out.
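
    As an illustration of that screening rule, here is a sketch of the per-study power calculation. The target effect size d = 0.10 and the 5% cut-off are the hypothetical values from the post, not recommendations, and the group sizes are made up; it assumes a two-sided, two-sample t-test and uses statsmodels:

    Code:
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()

    def power_for_study(n1, n2, target_d=0.10, alpha=0.05):
        # Power of a two-sided, two-sample t-test to detect |d| = target_d
        # given this study's group sizes (nobs1 = n1, ratio = n2 / n1)
        return power_calc.power(effect_size=target_d, nobs1=n1,
                                alpha=alpha, ratio=n2 / n1)

    for n1, n2 in [(10, 10), (100, 100), (1000, 1000)]:
        p = power_for_study(n1, n2)
        verdict = "keep, impute d = 0" if p >= 0.05 else "underpowered, drop"
        print(f"n1={n1}, n2={n2}: power = {p:.3f} -> {verdict}")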

    What you're suggesting seems interesting, but since we're comparing two groups, wouldn't the mean of the effect sizes that would just have reached significance always be zero (since we're testing both for group A being better than group B and for group B being better than group A)?


    Quote Originally Posted by Karabiner View Post
    It's a bit surprising that a relevant proportion of studies doesn't report descriptive
    statistics together with the test results.
    The question I'm investigating in my meta-analysis is very often not the main question of the studies I'm looking at (or not even a question the authors are asking at all). That might explain why the authors often don't bother reporting any precise numbers. Still, I think it's bad practice; in my opinion, a statistical test should always be accompanied by F-values, p-values, or what have you.

  4. #4 Karabiner (TS Contributor)

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?

    Quote Originally Posted by Speldosa View Post
    we'd proclaim it underpowered and throw it out.
    But study selection for a meta-analysis shouldn't use whether a study is underpowered as a criterion, should it? Throwing out studies with low power and p > 0.05 will possibly skew the overall results.
    Quote Originally Posted by Speldosa View Post
    Since we're comparing two groups, wouldn't the mean of the effect sizes that would just have reached significance always be zero (since we're testing both for group A being better than group B and for group B being better than group A)?
    Ah, yes, I see. I didn't take this into account.

    With kind regards

    K.

  5. #5 Mean Joe (TS Contributor)

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?

    I don't see how you can claim to include a study in your meta-analysis when you don't even have its results. Wouldn't the study's authors be alarmed to find you writing a paper saying they had an effect size of 0? Have you tried contacting them?

    Quote Originally Posted by Speldosa View Post
    The question I'm investigating in my meta-analysis is very often not the main question of the studies I'm looking at (or not even a question the authors are asking at all). That might explain why the authors often don't bother reporting any precise numbers. Still, I think it's bad practice; in my opinion, a statistical test should always be accompanied by F-values, p-values, or what have you.
    It is bad practice, though it's slowly improving. Some studies have websites where you can see fuller results that were cut from print for space reasons. There may be some doubt, but the test results are probably stored somewhere, especially by the original authors. If they're willing to print a result, they should be willing to share the effect sizes and p-values.
    All things are known because we want to believe in them.

  6. #6 Speldosa

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?

    Quote Originally Posted by Karabiner View Post
    But study selection for a meta-analysis shouldn't use whether a study is underpowered as a criterion, should it? Throwing out studies with low power and p > 0.05 will possibly skew the overall results.
    Yes, normally, power doesn't have to be taken into account, since you're weighting each effect size (whether it's statistically significant or not) by the number of participants. This means that even in a sample of studies where most use a small number of participants (imagine they all use the smallest number possible for a between-subjects design, n = 2 per group), you can still get a sensible estimate of the true effect size. However, imagine an extreme situation: a couple of studies with a large number of participants that report the effect size, plus a humongously large set of n = 2 studies where we're only told that there were no significant differences between the groups and so have to assume an effect size of 0. Then the estimate of the true effect size would approach zero as the number of underpowered studies increased, no matter how large the true effect size actually is. And we can't have that.
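
    A quick simulation sketch of that extreme scenario (all numbers hypothetical): three large studies reporting effects near a true d of 0.5, plus an ever-larger pile of n = 2 per group studies imputed as d = 0, pooled with fixed-effect inverse-variance weights. The pooled estimate drifts toward zero as the imputed-zero studies pile up:

    Code:
    import numpy as np

    def var_d(d, n1, n2):
        # Large-sample variance of Cohen's d
        return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

    def pooled_d(ds, n1s, n2s):
        # Fixed-effect, inverse-variance pooled estimate
        w = np.array([1.0 / var_d(d, a, b) for d, a, b in zip(ds, n1s, n2s)])
        return float(np.sum(w * np.array(ds)) / np.sum(w))

    # Three well-powered hypothetical studies reporting d close to a true d of 0.5
    big_d, big_n = [0.48, 0.52, 0.50], [200, 150, 300]

    for k in [0, 10, 100, 1000, 10000]:
        ds  = big_d + [0.0] * k    # k tiny studies imputed as d = 0
        n1s = big_n + [2] * k      # n = 2 per group in each tiny study
        n2s = big_n + [2] * k
        print(f"{k:6d} imputed-zero studies -> pooled d = {pooled_d(ds, n1s, n2s):.3f}")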

  7. #7 Speldosa

    Re: Meta-analysis: How should one handle non-significant studies containing no raw data?


    Quote Originally Posted by Mean Joe View Post
    I don't see how you can claim to include a study in your meta-analysis when you don't even have its results. Wouldn't the study's authors be alarmed to find you writing a paper saying they had an effect size of 0? Have you tried contacting them?
    You're absolutely right that the best-case scenario would be contacting the authors and actually getting the raw data. However, this question pertains to what to do when that's not possible. (In my specific case, many of the studies are so old that many of the authors are probably dead or retired, and there are so many of them that it might not be worth the time.)
