I am a biostatistician consulting on a meta-analysis of 16 studies, each with two treatment arms. The main interest is in the effect of plateau pressure (PP from now on) on mortality rate. Since there is a strongly significant difference in PP between the two arms, I do not believe both PP and treatment arm should be included in the model as fixed effects. However, since the results of the two arms of a given study are clearly not independent, it seemed to me that one way to account for this would be to include the study ID variable (1, 1, 2, 2, ..., 16, 16) as a random effect.

When I first ran the model with an observation-level ID (1, 2, ..., 32) as the random effect, the estimated effect of PP was approximately 0.01 with p < 0.0001; when I changed the random effect to study ID, the effect of PP was 0.0038 with p = 0.119. I am having difficulty understanding which model is appropriate, and why this change results from estimating the random effects at the observation level versus the study level.
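To make my confusion concrete, here is a toy simulation (all numbers hypothetical, not my data) of the mechanism I suspect: a study-level intercept absorbs between-study variation in PP, so the fixed effect is left to be estimated mainly from the within-study (arm) contrast. Comparing a pooled slope to a within-study (demeaned) slope reproduces the kind of drop I saw:

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_arms = 16, 2
study = np.repeat(np.arange(n_studies), n_arms)   # 0,0,1,1,...,15,15
arm = np.tile([0, 1], n_studies)

# Hypothetical data: study-mean PP varies a lot between studies,
# and the two arms differ in PP within each study.
pp_study = rng.normal(28, 4, n_studies)[study]
pp = pp_study + 2 * (arm - 0.5) + rng.normal(0, 0.5, n_studies * n_arms)

# Build mortality with a stronger between-study PP effect (0.01) than
# within-study effect (0.004) -- numbers chosen to echo my two fits.
y = (0.05 + 0.01 * pp_study + 0.004 * (pp - pp_study)
     + rng.normal(0, 0.005, n_studies * n_arms))

def slope(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

# Pooled slope: roughly what an observation-level random effect lets through
pooled = slope(pp, y)

# Within-study slope: the contrast left over once a study-level intercept
# soaks up between-study differences (demeaning by study mimics this)
pp_w = pp - (np.bincount(study, weights=pp) / n_arms)[study]
y_w = y - (np.bincount(study, weights=y) / n_arms)[study]
within = slope(pp_w, y_w)

print(f"pooled slope:  {pooled:.4f}")   # near the between-study value, 0.01
print(f"within slope:  {within:.4f}")   # near the within-study value, 0.004
```

So if the between-study and within-study PP–mortality relationships differ, the two random-effect choices are not estimating the same quantity, which would explain part of the discrepancy.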
I asked about this on Stack Exchange and was advised to include a random intercept for study ID and a random slope for treatment arm. I can't understand why it isn't enough to specify a random effect for study ID alone, given that the two arms are not random levels drawn from some population of levels of interest, nor why the p-value of the fixed effect changes so much when the arm random effect is added. If anyone has insight into why this change occurs, it would be very helpful and much appreciated.
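To see (for myself) what ignoring arm-effect heterogeneity might do, I also ran a small Monte Carlo with made-up variance components (tau and sigma below are hypothetical, not estimated from my data). If the treatment-arm effect varies across studies and the model omits that random slope, the standard error of the within-study PP slope is understated, which could account for the unstable p-values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_studies, n_sims = 16, 4000
true_slope, sigma, tau = 0.004, 0.01, 0.03  # hypothetical variance components

# Per-study PP difference between arms (treatment arm has the higher PP)
dpp = rng.normal(5, 1, n_studies)

ests = np.empty(n_sims)
for i in range(n_sims):
    u = rng.normal(0, tau, n_studies)                   # study-varying arm effect
    eps = rng.normal(0, sigma * np.sqrt(2), n_studies)  # noise in the arm difference
    d = true_slope * dpp + u + eps                      # within-study outcome difference
    ests[i] = (dpp @ d) / (dpp @ dpp)                   # within-study PP slope

# SE implied by a model with NO random slope for arm (residual noise only)
se_no_slope = sigma * np.sqrt(2) / np.sqrt(dpp @ dpp)
# Actual sampling SD of the estimate when arm effects vary by study
se_true = ests.std()

print(f"SE ignoring arm heterogeneity: {se_no_slope:.5f}")
print(f"actual SE of the estimate:     {se_true:.5f}")
```

Under these made-up values the true standard error is roughly twice the naive one, so a test based on the naive SE would be badly anticonservative. I don't know whether this is the whole story in my case, which is why I'm asking.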


TL;DR: 1) Why is it appropriate/necessary to use a random effect for the factor arm in a multilevel meta-regression (when the interest is in a different predictor)? 2) Why does the significance of that predictor change so drastically when using study ID as the random intercept instead of a random effect for each observation?