Meta-analysis: How should one handle non-significant studies containing no raw data?

#1
Let's say that I'm conducting a meta-analysis comparing the performance of group A and group B with respect to a certain construct. Some of the studies I come across will report that no statistically significant difference was found between the two groups, but will present no exact test statistics and no raw data. How should I handle such studies in the meta-analysis?

Basically, I see three different alternatives here:

  1. Include them all and assign to each one of them an effect size of 0.
  2. Throw them all out.
  3. Do some kind of power analysis for each one of them, or set a threshold at a certain number of participants. Include all studies that should have been able to reach statistical significance and assign each of them an effect size of 0. Throw the rest out.

I can see merits in all three options. Option one is fairly conservative: you only risk making a type II error. Option two raises the risk of a type I error, but it also avoids having your results ruined by a bunch of underpowered studies. Option three seems like a middle road between options one and two, but it requires a lot of assumptions and/or pure guesses (What effect size should you base the power analyses on? What number of participants should you demand of each study for it to pass? See the sketch below for the kind of calculation involved), which probably makes the final result less reliable and more subjective.
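To give an idea of what option 3 would involve, here is a minimal sketch (Python with statsmodels; the assumed effect size d = 0.3, the 80% power target, and alpha = 0.05 are placeholder guesses, which is exactly the problem):

```python
# Hypothetical threshold for option 3: how many participants per group
# would a study need to detect an assumed effect of d = 0.3 with 80% power?
# All three inputs are assumptions, not values from any actual study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, power=0.8,
                                   alpha=0.05, ratio=1.0,
                                   alternative='two-sided')
print(f"required n per group: {n_per_group:.0f}")  # roughly 176
```

Studies below whatever threshold comes out of this would be thrown out; the rest would be kept with an imputed effect size of 0.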
 

Karabiner

TS Contributor
#2

> Do some kind of power analysis for each one of them, or set a threshold at a certain number of participants. Include all studies that should have been able to reach statistical significance and assign each of them an effect size of 0. Throw the rest out.
I don't quite understand this. But one can often assess which effect would have been needed to reach significance, given the sample size. Maybe some value(s) between zero and these threshold values (the mean value? a random value between these limits?) could be used.
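As an illustration, a minimal sketch in Python (my own, with hypothetical group sizes) of the effect that would have been needed:

```python
# Smallest Cohen's d that reaches two-sided significance for an
# independent-samples (pooled-SD) t-test, where t = d / sqrt(1/n1 + 1/n2).
# The group sizes below are hypothetical.
from scipy import stats

def minimum_detectable_d(n1, n2, alpha=0.05):
    df = n1 + n2 - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit * (1 / n1 + 1 / n2) ** 0.5

print(minimum_detectable_d(20, 20))  # ~0.64; any |d| below this was n.s.
```

A reported non-significant result then tells us the study's effect lay somewhere between these limits.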

It's a bit surprising that a relevant proportion of studies doesn't report descriptive statistics together with the test results. Maybe the pages of the Cochrane Collaboration have information on how to deal with results for which only a p-value is given.

With kind regards

K.
 
#3

> I don't quite understand this. But one can often assess which effect would have been needed to reach significance, given the sample size. Maybe some value(s) between zero and these threshold values (the mean value? a random value between these limits?) could be used.
No, you're right, that was both poorly thought through and poorly explained. What I meant was something along the lines of screening out all the clearly underpowered studies by assuming some small effect size favoring either A or B. For example, if a study wouldn't be able to detect an effect size of, let's say, d = 0.10 in more than 5% of cases, we'll proclaim it underpowered and throw it out.
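Here is a minimal sketch of that screening rule (my own illustration; all the numbers are placeholders, and note that the power of a two-sided test at alpha = 0.05 never falls below about 5%, so that particular cutoff would exclude essentially nothing; in practice a higher cutoff would be needed):

```python
# Flag a study as underpowered if its power to detect an assumed d = 0.10
# falls below some cutoff. Every number here is a placeholder assumption.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

def is_underpowered(n_per_group, d=0.10, alpha=0.05, cutoff=0.05):
    power = power_calc.power(effect_size=d, nobs1=n_per_group,
                             alpha=alpha, ratio=1.0)
    return power < cutoff

for n in (10, 100, 1000):
    print(n, power_calc.power(effect_size=0.10, nobs1=n, alpha=0.05))
```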

What you're suggesting seems interesting, but since we're comparing two groups, wouldn't the mean of the two threshold effect sizes always be zero (since we're testing both for group A being better than group B and for group B being better than group A, the thresholds are symmetric around zero)?


> It's a bit surprising that a relevant proportion of studies doesn't report descriptive statistics together with the test results.
The question I'm investigating in my meta-analysis is very often not the main question of the studies I'm looking at (sometimes it's not a question the authors are asking at all). That might explain why they often don't bother reporting any precise numbers. I still think it's bad practice: in my opinion, a statistical test should always be accompanied by F-values, p-values, or what have you.
 

Karabiner

TS Contributor
#4

> we'll proclaim it underpowered and throw it out.
But study selection for a meta-analysis shouldn't use the criterion of whether a study is underpowered, should it? Throwing out studies with low power and p > 0.05 could well skew the overall results.
> Since we're comparing two groups, wouldn't the mean of the two threshold effect sizes always be zero (since we're testing both for group A being better than group B and for group B being better than group A)?
Ah, yes, I see. I didn't take this into account.

With kind regards

K.
 

Mean Joe

TS Contributor
#5

I don't see how you can claim to include a study in your meta-analysis when you don't even have its results. Wouldn't that study's authors be alarmed to find you writing a paper saying they had an effect size of 0? Have you tried contacting them?

> The question I'm investigating in my meta-analysis is very often not the main question of the studies I'm looking at (sometimes it's not a question the authors are asking at all). That might explain why they often don't bother reporting any precise numbers. I still think it's bad practice: in my opinion, a statistical test should always be accompanied by F-values, p-values, or what have you.
It is bad practice, though it's slowly improving. Some studies have websites where you can see the fuller results that were cut from print for space reasons. There may be some doubt, but the test results are probably stored somewhere, especially by the original study team. If they're willing to print a result, they should be willing to share the effect sizes and p-values.
 
#6

> But study selection for a meta-analysis shouldn't use the criterion of whether a study is underpowered, should it? Throwing out studies with low power and p > 0.05 could well skew the overall results.
Yes, normally power doesn't have to be taken into account, since you're just weighting each effect size (whether it's statistically significant or not) by the number of participants. That means that even in a sample of studies where most use very few participants (imagine they all use the smallest number we can think of for a between-subjects design, n = 2), you can still get a sensible estimate of the true effect size.

However, imagine an extreme situation: a couple of studies with large numbers of participants that report their effect sizes, plus a hugely large set of n = 2 studies for which we're only told that there were no significant differences, so we have to assume an effect size of 0. The estimate of the true effect size would then approach zero as the number of underpowered studies grows, no matter how large the true effect actually is. And we can't have that.
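To put numbers on that extreme situation, here's a toy calculation (entirely made-up figures, assuming simple sample-size weighting):

```python
# Sample-size-weighted pooled effect when "n.s." studies are imputed as d = 0.
def pooled_d(studies):
    """studies: list of (d, n_total) pairs; weight each d by total n."""
    num = sum(d * n for d, n in studies)
    den = sum(n for _, n in studies)
    return num / den

large = [(0.50, 200)] * 5      # five large studies, each reporting d = 0.50
for m in (0, 10, 100, 1000):
    tiny = [(0.0, 4)] * m      # m tiny studies (2 per group), imputed d = 0
    print(m, round(pooled_d(large + tiny), 3))
# prints 0.5, 0.481, 0.357, 0.1: the pooled estimate drifts toward zero
# as the zero-imputed studies pile up, no matter what the true effect is.
```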
 
#7

> I don't see how you can claim to include a study in your meta-analysis when you don't even have its results. Wouldn't that study's authors be alarmed to find you writing a paper saying they had an effect size of 0? Have you tried contacting them?
You're absolutely right that the best-case scenario would be contacting the authors and actually getting the raw data. However, this question pertains to what to do when that's not possible. (In my specific case, many of the studies are so old that the authors are probably dead or retired, and there are so many of them that it might not be worth the time.)