# Risk Ratio and Confidence Intervals...

#### datacrunch

##### New Member
Hello,
I would be hugely grateful if someone could help shed some light?! As I understand it, a Risk Ratio (RR) of <1 indicates under-prediction, 1 indicates correct prediction, and >1 indicates over-prediction.
For example:
Events predicted 4, events observed 6 (4/6 ≈ 0.67, which indicates under-prediction)
Events predicted 4, events observed 4 (4/4 = 1, which indicates correct prediction)
Events predicted 4, events observed 2 (4/2 = 2, which indicates over-prediction)
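To make the arithmetic concrete, a minimal sketch in Python (using the example numbers above; the helper name is just for illustration):

```python
# Ratio of predicted to observed events, using the example numbers above.
def predicted_over_observed(predicted, observed):
    return predicted / observed

for predicted, observed in [(4, 6), (4, 4), (4, 2)]:
    rr = predicted_over_observed(predicted, observed)
    label = ("under-prediction" if rr < 1
             else "over-prediction" if rr > 1
             else "correct prediction")
    print(f"predicted={predicted}, observed={observed}: ratio={rr:.2f} ({label})")
```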

So, with this in mind, results from a study are as follows:

RR 0.86, 95% Confidence Interval (0.47-1.58)
RR 0.99, 95% Confidence Interval (0.67-1.47)
RR 0.84, 95% Confidence Interval (0.6-1.19)

The authors conclude that these results show the prediction rule correctly predicts risk in the three groups, but as I see it, because each confidence interval crosses the value 1, these results cannot really be applied to the general population. Yet the authors make no big deal about this.

Any thoughts would be gratefully received!

Thanks

#### hlsmith

##### Omega Contributor
What paper is this? Please cite it.

Prediction and risk ratios are not always quite the same thing.

#### datacrunch

##### New Member
> What paper is this? Please cite it.
>
> Prediction and risk ratios are not always quite the same thing.
Hi there,

Prognostic value of the ABCD2 clinical prediction rule: a systematic review and meta-analysis
Galvin et al. 2011

The only other thing I can think of is that the wide CIs are due to significant heterogeneity in the studies.

Thanks for taking the time to look at this..

#### rogojel

##### TS Contributor
Hi,
I would guess the logic is that the confidence interval contains the value 1 in each case, so there is no reason to reject the hypothesis that the prediction is correct. That assumes the null hypothesis is that the prediction is correct, by analogy with the odds ratio in binary logistic regression.
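For illustration, a rough sketch (Python; the 2×2 counts are hypothetical, not the paper's data) of how a 95% CI for a risk ratio is commonly built on the log scale and then checked against 1:

```python
import math

# Hypothetical 2x2 counts (NOT from the paper): events/non-events in each group.
a, b = 6, 94   # exposed (e.g. predicted-high-risk) group
c, d = 7, 93   # comparison group

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# If the interval contains 1, there is no reason to reject the null
# hypothesis that the prediction is correct.
print("CI contains 1:", lower <= 1 <= upper)
```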

regards

#### CE479

##### New Member
Hi,

It looks like they have compared the PREDICTED number of strokes across the risk groups to the OBSERVED number of strokes in each of the subsequent validation studies.

Therefore, if PREDICTED / OBSERVED is greater than 1 this is over-prediction, and vice versa.

As mentioned by Rogojel, they expect there NOT to be a difference, so the fact that the confidence intervals include 1 means they do not reject the null hypothesis of no difference.

I am no stats expert, but I do not think the wide confidence intervals are due to the heterogeneity in the studies; they are more a reflection of sample size.

They use Mantel-Haenszel:
http://sphweb.bumc.bu.edu/otlt/MPH-...nfounding-EM/BS704-EP713_Confounding-EM7.html

The above link talks through this approach and also shows the difference between risk ratios and odds ratios, which is very useful.
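As a rough sketch of what the Mantel-Haenszel pooled risk ratio does (Python; the per-study counts below are made up purely for illustration):

```python
# Mantel-Haenszel pooled risk ratio across strata (here, across studies).
# Each stratum: (a, b, c, d) = events/non-events exposed, events/non-events unexposed.
# These counts are invented for illustration only.
strata = [
    (5, 95, 6, 94),
    (12, 188, 15, 185),
    (3, 47, 4, 46),
]

numerator = sum(a * (c + d) / (a + b + c + d) for a, b, c, d in strata)
denominator = sum(c * (a + b) / (a + b + c + d) for a, b, c, d in strata)
rr_mh = numerator / denominator
print(f"Mantel-Haenszel pooled RR = {rr_mh:.2f}")
```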

#### hlsmith

##### Omega Contributor
Sorry for the delay, I was at home hanging out with my girls yesterday. I just opened the paper to decipher your question.

Yes, so this was a meta-analysis of the ABCD2 rule for predicting TIA outcomes. Pretty straightforward approach if you treat it as a diagnostic meta-analysis using a scoring system (clinical indicators). So they are looking at the generalizability of the system; it reminds me of the Wells scoring system for pulmonary embolism.

So the M-H RRs are kind of moot; I wouldn't focus on them. They are telling you the risk of TIA for that risk stratum. I didn't read the whole paper, but they likely collapsed the other strata into the unexposed (comparison) group. So it makes sense that they kind of suck, and as a clinician, how or why would you care about these? They would be interpreted as no greater risk of TIA in that risk stratum compared to the comparison group. Kind of worthless, but I am guilty of doing this in a paper I wrote on a diagnostic prediction system, where I focused on also presenting the equivalent of Table 5. The low-risk group didn't have signs of heterogeneity (chi-square and I^2 < 0.50) between studies, though the latter two groups did. That just means that if they did not use random effects, which I don't think they did, they will have smaller confidence intervals and p-values, because they are not accounting for the sampling variation between studies.
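For reference, a minimal sketch of how the I^2 heterogeneity statistic mentioned above relates to Cochran's Q (generic formula, not the paper's numbers):

```python
# Higgins' I^2 from Cochran's Q: the proportion of between-study variability
# attributable to heterogeneity rather than sampling error.
def i_squared(q, num_studies):
    df = num_studies - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Hypothetical example: Q = 6.0 across 4 studies gives I^2 = 50%.
print(f"I^2 = {i_squared(6.0, 4):.0%}")
```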

Depends on your purpose, but all of the good results are in Table 5. So the ABCD2 sucks at ruling in a positive diagnosis (i.e., low specificity), but performs well for ruling out TIA risk, with high sensitivity and probably a nice low negative likelihood ratio ((1 - sensitivity)/specificity) at the 3-or-greater cut-off; moreover, your probability of having a TIA when your score is 3 or greater is only 7.4%. Then just look at all levels of the score and examine those metrics. The 5-or-greater cut-off is kind of a wash, with plenty of false positives and false negatives. But if this is the best prediction tool there is, then you have to take what you've got.
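A small sketch of the negative likelihood ratio and how it would update a pre-test probability (Python; the sensitivity, specificity and pre-test probability below are illustrative, not the paper's values):

```python
# Negative likelihood ratio = (1 - sensitivity) / specificity,
# applied to a pre-test probability via odds. Illustrative numbers only.
sensitivity, specificity = 0.90, 0.35
pretest_prob = 0.10

lr_negative = (1 - sensitivity) / specificity
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr_negative
posttest_prob = posttest_odds / (1 + posttest_odds)

print(f"LR- = {lr_negative:.2f}")
print(f"P(event | negative test) = {posttest_prob:.1%}")
```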

#### CE479

##### New Member
Hi,

@hlsmith - I have never really understood why people talk about ruling in and ruling out with regard to specificity and sensitivity. For example, if you had the following:

True Positives = 4, False Negatives = 1, False Positives = 4, and True Negatives = 91

Then specificity is 91/95, but the PPV is only 4/8.
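For what it's worth, a quick Python check of how these metrics come apart on the counts above:

```python
# 2x2 table from the example above.
tp, fn, fp, tn = 4, 1, 4, 91

sensitivity = tp / (tp + fn)   # 4/5   = 0.80
specificity = tn / (tn + fp)   # 91/95 = 0.96
ppv = tp / (tp + fp)           # 4/8   = 0.50
npv = tn / (tn + fn)           # 91/92 = 0.99

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```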

Just wondering what your thoughts were on this?

#### hlsmith

##### Omega Contributor
PPV = 4/4 =1.00

The PPV and specificity will be similar; both have FP in the denominator.

The NPV and sensitivity will be similar; both have FN in the denominator.

#### CE479

##### New Member
Isn't PPV = TP / (TP + FP), therefore 4/8, which is quite different from 91/95?
I had wondered if the justification was related to the change in risk (i.e. the risk before the test is 5/100 (have disease / whole population), whilst if you have a positive test it is 4/8 = 1/2)?
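A minimal sketch of that "change in risk" reading of the same table (Python, reusing the counts from the earlier example):

```python
# Pre-test risk (prevalence) vs risk given a positive test (PPV),
# using the 2x2 counts from the earlier example.
tp, fn, fp, tn = 4, 1, 4, 91
total = tp + fn + fp + tn

pretest_risk = (tp + fn) / total   # 5/100 = 0.05
posttest_risk = tp / (tp + fp)     # 4/8   = 0.50

print(f"risk before testing = {pretest_risk:.0%}, "
      f"risk given a positive test = {posttest_risk:.0%}")
```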