I am in the process of coding a number of study results for a meta-analysis. At present I am having an issue with the number of effects I am coding for each study.

OK, a bit of a diversion here to set up the structure of the meta-analysis. In a nutshell, I'm looking at differences between two populations of individuals (defined by behavioural criteria) on neuropsychological measures of Executive Functioning (EF). The problem with EF is that it can be measured/assessed using a WIDE range of instruments. As a result, studies generally administer several measures of the construct, meaning that multiple effects are present in each study.

In the methodological literature I have read on meta-analysis, it is strongly recommended that you derive only ONE effect from each study, since taking more than one effect from the same sample will bias the results.

So here is the question: is there any kind of statistical adjustment I can use to correct for using multiple effects from the same sample (i.e., non-independent effect sizes)?

Any help would be greatly appreciated

That does not seem to agree with the literature I am looking at -- the Cochrane Handbook for Systematic Reviews of Interventions. Anyway, isn't multilevel modelling usually what we use to control for this bias? And hasn't the weight we give each study already biased the results? Finally, is there a way to summarise (or aggregate) the effect sizes within one study at the first stage of data extraction?
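On the aggregation point: one common option is to combine the correlated effects from a study into a single composite effect, assuming a common correlation between the measures (which usually has to be guessed and sensitivity-analysed, since studies rarely report it). A minimal sketch of that calculation -- the function name, `r` value, and example numbers below are mine, not from any study:

```python
import numpy as np

def aggregate_effects(effects, variances, r=0.5):
    """Combine m correlated effect sizes from one sample into a single
    composite effect with an appropriately inflated variance.

    effects   : effect sizes (e.g. Hedges' g) for each EF measure
    variances : their sampling variances
    r         : assumed correlation between measures (sensitivity-analyse
                this, since it is rarely reported)
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    m = len(y)
    mean_effect = y.mean()
    sd = np.sqrt(v)
    # sum of r * sqrt(v_j * v_k) over all ordered pairs j != k:
    # (sum of sd)^2 equals the full outer-product sum, so subtract the
    # diagonal terms (the variances themselves) before scaling by r
    cov_sum = r * (sd.sum() ** 2 - v.sum())
    composite_var = (v.sum() + cov_sum) / m ** 2
    return mean_effect, composite_var

# e.g. three EF measures from the same sample, assumed r = 0.5
g, vg = aggregate_effects([0.40, 0.55, 0.30], [0.04, 0.05, 0.04], r=0.5)
```

Note the variance is larger than if the effects were treated as independent (where it would just be the mean variance divided by m): ignoring the correlation understates the variance and gives the study too much weight. The alternative the Cochrane line of reasoning points to is fitting a multilevel model that keeps all effects but nests them within studies.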

maybe I am not really helping