# Thread: Parallel and sequential diagnostic tests

1. ## Parallel and sequential diagnostic tests

Hey guys,

I just ran into a problem:

I have two predictive models:

Model 1:
Sensitivity: 0.8
Specificity: 0.4

Model 2:
Sensitivity: 0.7
Specificity: 0.4

As both lack specificity, I tried combining the predictions of the two models:

At first I took the conservative approach: if either model predicts a case as positive, it is treated as positive. This is essentially simultaneous (parallel) testing, and, as I would expect, it increases sensitivity further and reduces specificity.

Next I wanted to use the models sequentially, but since both lack specificity there is no reason to put one of them first. So I decided to predict a case as positive only if both models give positive predictions.
With this approach I get approximately 0.6 for both sensitivity and specificity, which is not great, but it is the best I have achieved so far.

I would explain this by the sequential use of the tests, which increases specificity, but somehow it is not really sequential, since the order of the models does not matter.
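The numbers above can be sketched with the textbook formulas for combining two tests, under the (strong, often violated) assumption that the models err independently given the true class:

```python
# Combined sensitivity/specificity of two tests, ASSUMING the tests are
# conditionally independent given the true class -- real models that share
# features usually violate this, so treat these as rough estimates.

def parallel_or(sens1, spec1, sens2, spec2):
    """'Believe either': positive if either test is positive."""
    sens = 1 - (1 - sens1) * (1 - sens2)  # a case is missed only if both miss it
    spec = spec1 * spec2                  # a correct negative needs both to say negative
    return sens, spec

def serial_and(sens1, spec1, sens2, spec2):
    """'Believe both': positive only if both tests are positive."""
    sens = sens1 * sens2                  # detected only if both detect it
    spec = 1 - (1 - spec1) * (1 - spec2)  # a false positive needs both to err
    return sens, spec

print(parallel_or(0.8, 0.4, 0.7, 0.4))  # sens rises to ~0.94, spec drops to ~0.16
print(serial_and(0.8, 0.4, 0.7, 0.4))   # sens falls to ~0.56, spec rises to ~0.64
```

With the sensitivities and specificities from the original post, the AND rule lands near 0.56/0.64, which roughly matches the ~0.6/0.6 reported above.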

Is there any explanation for why this can, should, or shouldn't be done, or why it works?

I would appreciate your help =)

2. ## Re: Parallel and sequential diagnostic tests

Hmm, there are lots of directions this could go in. Are you just trying to get maximum accuracy?

Question: how do they compare in their discriminatory function? Are they both mostly labeling the observations the same or do they catch different observations?

Once you know this, you could funnel, say, only the false positives and true negatives through the second model.
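One quick way to check whether the models label the same observations or catch different ones is to cross-tabulate their predictions against the truth. A minimal sketch with hypothetical toy labels:

```python
from collections import Counter

# Hypothetical toy labels, purely for illustration: cross-tabulate where
# two binary models agree and disagree, split by the true class.
truth   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
model_a = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0]
model_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

table = Counter((t, a, b) for t, a, b in zip(truth, model_a, model_b))
for (t, a, b), n in sorted(table.items()):
    print(f"truth={t} model_a={a} model_b={b}: {n}")

# If the disagreements fall on different observations, the models carry
# complementary information, and combining them has something to gain.
```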

Super fun topic.

3. ## Re: Parallel and sequential diagnostic tests

Hey,

The models differ completely. The first uses physicochemical properties together with structural fragments in a partial logistic regression; the second is just a rule base: if a structural fragment is present, it predicts a positive.

I am trying for the best accuracy, but not at any cost: it has to make sense why I am choosing the approach. That is, I would have to explain why I am using a different approach than the usual one of labeling a case positive only if both models predict it as such.

4. ## Re: Parallel and sequential diagnostic tests

Just curious whether this is for a work project or a publishable research project. If the former, you have a little more flexibility.

If your sample is large enough, this would be a good project for splitting it: a model-building set and a test/validation set to try things out on.

5. ## Re: Parallel and sequential diagnostic tests

It's not intended to be published in a journal, but I will have to write my thesis about it, so I need a good, or at least logical, explanation.

I have already done the splitting and validation, and the outcome was the two models mentioned above.

But I have looked further at my problem and came up with the theory that the order actually does not matter (I would be glad if you could help me in case I made a huge mistake while playing this through):

Suppose I use model 2 (the fragment-based one), being the less specific model, as a first screening. If it gives a positive prediction, I run model 1 and look at the result (a kind of confirmation test): if model 1 confirms the positive prediction, I assume a TP; if it does not confirm, I assume, due to model 1's high sensitivity, that the screening gave me a FP here (just as is done with medical tests).

So if I swap the order of the models: in the cases where the (formerly second) confirmation test gives a negative prediction, with the changed order I would simply not go on to a second prediction, so I get the same results even when the order of the models is changed.
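The reasoning above boils down to a logical point: "positive only if both models say positive" is an AND, and AND is commutative, so the screening order cannot change the final labels. A minimal sketch with hypothetical stand-ins for the two models:

```python
# Both sequential protocols reduce to the same rule: a case is positive
# iff both models call it positive. AND is commutative, so order is moot.

def sequential(first, second, case):
    """Run `first`; only if it is positive, confirm with `second`.
    Python's short-circuit `and` implements exactly this protocol."""
    return first(case) and second(case)

# Hypothetical stand-ins (not the real models from the thread):
model_1 = lambda case: case["prob"] > 0.5  # regression score with a cutoff
model_2 = lambda case: case["fragment"]    # rule: structural fragment present?

cases = [{"prob": p, "fragment": f} for p in (0.2, 0.7) for f in (False, True)]
for case in cases:
    assert sequential(model_1, model_2, case) == sequential(model_2, model_1, case)
print("order never changes the AND rule's output")
```

What sequencing *does* change is cost: the second model is only consulted for cases the first one flags, which matters for expensive tests but not for the final classification.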

Does this explanation make any sense? Or is it more of a wishful-thinking explanation?
Somehow it does not seem very logical (at least compared with medical screening and confirmatory testing) that the order of the models is freely interchangeable.

Sorry for all the confusing questions ^^

6. ## Re: Parallel and sequential diagnostic tests

I think this would greatly benefit if you drew figures illustrating your options and the results of the different combinations. This would also help your committee to understand.

Why can't you incorporate your rule model into the first model? Or would you lose an understanding of how it functions?

Is there any cost related to the variables collected or the test components in the models? Or a time element in obtaining the variables or results? If so, that could influence the ordering and the options you have.

7. ## Re: Parallel and sequential diagnostic tests

I have illustrated it using ROC curves, which show that using both models is the best option in terms of sensitivity and specificity. They also show that I get the same result no matter which model is put first. Are there other (better?) ways to illustrate the performance?

The problem is that I am using licensed software for the regression model, which cannot incorporate the rules, and the same holds for the rule-based model, so I can only use them separately. (There are of course options for integrating both models, but as I am approaching the deadline this is no longer possible for me.)

No, they are purely predictive models of chemical structures, so there is no real cost associated with running a model, apart from the few minutes needed to load in a virtual structure and let the model run.

8. ## Re: Parallel and sequential diagnostic tests

Sounds good. Are you able to plot the individual ROC curves for each model and then their combination?

There are also formal tests for comparing curves to confirm improvement. That may be an option for you as well.

Good luck!

9. ## Re: Parallel and sequential diagnostic tests

Hey,
thank you a lot for your help! It seems I have not completely lost track.

Unfortunately only one model (the regression) gives continuous probabilities; the other is only a binary classifier, so I can only plot the single data point I get from evaluation.

10. ## Re: Parallel and sequential diagnostic tests

Do you not create a threshold on the continuous value for classification purposes? Or do you run a logistic model with the continuous variable, which gives you the increase in the odds of classification per unit increase? That may be fine for your situation, but if you are feeding values through both models, I would imagine that defining the optimal cutoff is beneficial, to ensure you have a set rule.
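One common way to pick such a cutoff is to maximize Youden's J statistic (J = sensitivity + specificity − 1) over the candidate thresholds. A small sketch with hypothetical toy scores and labels:

```python
# Choose a cutoff for a continuous score by maximizing Youden's J
# (J = sensitivity + specificity - 1). Scores/labels below are toy data.

def youden_cutoff(scores, labels):
    best_j, best_t = -1.0, None
    pos = sum(labels)              # number of true positives in the data
    neg = len(labels) - pos        # number of true negatives
    for t in sorted(set(scores)):  # every observed score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(youden_cutoff(scores, labels))  # best cutoff with its J value
```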

What is the binary rule, such that you can't insert it into the model?

11. ## Re: Parallel and sequential diagnostic tests

Hey,

both models perform a binary classification, active/inactive for a molecule. I can only tune parameters as far as the programs allow, so I cannot change much, since most parameters are set by the program according to the data.
The only difference is that with the regression I can set a threshold myself thanks to the continuous output, whereas the other model only gives a yes/no answer based on whichever rules apply to the data.

12. ## Re: Parallel and sequential diagnostic tests

Are you familiar with nomograms? I attached a link to a document on them; there are probably better sites, I just wanted one with multiple predictors. You score each predictor variable and then sum the scores for an overall post-test probability.

http://www.zlotnik.net/stata/nomograms/
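Under the hood, a diagnostic nomogram is a graphical shortcut for the likelihood-ratio update: post-test odds = pre-test odds × LR, with LR+ = sensitivity / (1 − specificity). A sketch of that calculation, using model 1's figures from the opening post and an assumed 30% pre-test probability:

```python
# Bayesian post-test probability via likelihood ratios -- the arithmetic
# a diagnostic nomogram performs graphically. The 30% pre-test probability
# is an assumed example value, not from the thread.

def post_test_prob(pretest_prob, sens, spec, result_positive=True):
    # LR+ for a positive result, LR- for a negative one
    lr = sens / (1 - spec) if result_positive else (1 - sens) / spec
    pre_odds = pretest_prob / (1 - pretest_prob)  # probability -> odds
    post_odds = pre_odds * lr                     # Bayes' rule in odds form
    return post_odds / (1 + post_odds)            # odds -> probability

# Model 1 (sens 0.8, spec 0.4) returning a positive prediction:
p = post_test_prob(0.30, 0.8, 0.4)
print(round(p, 3))  # roughly 0.364
```

With such weak specificity the positive likelihood ratio is only 0.8/0.6 ≈ 1.33, so a positive call barely moves the probability, which is another way of seeing why combining the models is attractive.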

13. ## Re: Parallel and sequential diagnostic tests

Hey,

no, I haven't heard of them so far. This would be a great representation of the model, especially for a presentation trying to explain what the models are doing. I'll see if I can produce one, but as the model is not implemented in software like R or Stata, I guess I can't extract the equation for that.
Thanks a lot for the input!

