When using multilevel models estimated by maximum likelihood, one can use either deviance statistics (e.g., -2LLs) for nested models or information criteria (e.g., the Bayesian Information Criterion, BIC) to test which of two models better fits the data, as long as both models are fit to exactly the same data.

However, I would like to compare the fit of two models that are based on different subsets of the same data. Does anyone know how one can do this?

(A bit more info if it helps:
I'm trying to see whether respondents who are high on one variable produce better-fitting models than respondents who are low on that same variable. So I did a median split on that variable and would like to compare the fit of the model for the subset of people high on the variable with the fit of the model for the subset of people low on it. I was thinking, for example, of taking the difference in -2LLs between these two models and checking whether that difference is bigger than the difference in -2LLs between two models fit to halves of the data set split at random; a rough sketch of that idea is below.)
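Here is a minimal sketch of the permutation comparison I have in mind, written in Python/statsmodels only because I had to pick something; the file name, the DataFrame `df`, the columns `y`, `x`, `cluster`, and the split variable `m`, and the simple random-intercept formula are all placeholders for illustration, not my actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder: long-format data with outcome y, predictor x, cluster id, and split variable m
df = pd.read_csv("mydata.csv")


def deviance(subset: pd.DataFrame) -> float:
    """Fit a random-intercept model to a subset and return its -2 log-likelihood.

    reml=False requests ML estimation so the -2LL values are deviances.
    """
    result = smf.mixedlm("y ~ x", data=subset, groups=subset["cluster"]).fit(reml=False)
    return -2 * result.llf


# Observed statistic: median split on m (if m is a respondent-level variable, the
# split should be done per respondent so each respondent's observations stay together)
high = df[df["m"] >= df["m"].median()]
low = df[df["m"] < df["m"].median()]
observed_diff = deviance(low) - deviance(high)

# Null distribution: the same statistic under random splits of the same sizes
rng = np.random.default_rng(1)
null_diffs = []
for _ in range(1000):
    idx = rng.permutation(len(df))
    rand_high = df.iloc[idx[: len(high)]]
    rand_low = df.iloc[idx[len(high):]]
    null_diffs.append(deviance(rand_low) - deviance(rand_high))

# One-sided p-value: how often does a random split separate the fits as much
# as the median split does?
p_value = np.mean(np.array(null_diffs) >= observed_diff)
print(f"observed -2LL difference: {observed_diff:.2f}, permutation p: {p_value:.3f}")
```

Keeping the random splits the same sizes as the high/low halves is deliberate, since -2LL grows with the number of observations; I'm not sure whether this is a sound way to compare fit across subsets, which is exactly what I'd like advice on.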

Thank you all for any help you can give me!