Never heard of such a thing, though I don't know a tremendous amount. It reminds me of cross-validation, or of using one dataset to hone a model and then trying to validate it on a comparable dataset. However, in those approaches you would determine variables (covariates of interest), not probabilities. When you say the next dataset is "completely different," do you also mean the sampling procedure used to procure it? Customarily the comparison is between random samples, or between one year and the next. If it were truly a completely different dataset, that seems problematic.
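To make the fit-then-validate idea concrete, here is a minimal sketch using scikit-learn; the simulated dataset and the choice of logistic regression are purely illustrative, not something from the question:

```python
# A minimal sketch of honing a model on one sample and checking it
# on a comparable sample drawn the same way. Everything here is
# simulated; the model choice is illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# "Hone" the model on one half of the data...
X_fit, X_val, y_fit, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0
)
model = LogisticRegression().fit(X_fit, y_fit)

# ...then validate it on the comparable held-out half.
print(round(model.score(X_val, y_val), 2))
```

The key point is that both halves come from the same sampling procedure; if the validation data were "completely different," the score would tell you little.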

Propensity scores are calculated this way, but they are then applied back into the model as weights to balance the covariate levels when predicting the actual outcome. So that approach is still not like what you describe.
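A hedged sketch of what "applied back as weights" means in practice, using inverse-propensity weighting on fully simulated data (the treatment, covariates, outcome, and true effect of 2 are all assumptions of this toy example):

```python
# Toy inverse-propensity-weighting sketch; all data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))                      # covariates
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))  # treatment depends on x
y = 2.0 * t + x[:, 0] + rng.normal(size=n)       # outcome, true effect = 2

# Step 1: estimate the propensity score P(t = 1 | x).
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Step 2: apply the scores back as inverse-probability weights,
# balancing the covariate levels across treated and untreated units.
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))

# Step 3: the weighted difference in means estimates the treatment
# effect on the actual outcome, which is what the scores are for.
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(round(ate, 2))
```

The scores themselves are never the end product; they only reweight the comparison so the outcome estimate is less confounded by the covariates.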