Hello everybody!
I am currently working on a robust regression problem where model R^2 and Spearman correlation recommend different models.
Are there any newer normed goodness-of-fit, association, or correlation measures beyond the ones mentioned above (scale of target: interval)? By "normed" I mean a measure with at least a normed maximum fit (like 1 for R^2 and correlation).
Ideally they would also work for robust problems.
Thanks
Consuli
Prediction is very difficult, especially about the future. (Niels Bohr)
Hmm, can you provide more details!
Would a grouping of predictions (e.g., by decile) vs. observed values be of any benefit?
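A decile comparison like that could be sketched as follows (the data here is made up purely for illustration; with a real model you would use its held-out predictions):

```python
import numpy as np

# Hypothetical data: observed values and predictions from some model.
rng = np.random.default_rng(0)
y_obs = rng.normal(size=1000)
y_pred = y_obs + rng.normal(scale=0.5, size=1000)  # stand-in for a decent model

# Cut the predictions into deciles and compare mean prediction
# to mean observed value within each decile.
edges = np.quantile(y_pred, np.linspace(0, 1, 11))
bins = np.digitize(y_pred, edges[1:-1])  # bin labels 0..9

decile_table = np.array([
    [y_pred[bins == d].mean(), y_obs[bins == d].mean()]
    for d in range(10)
])
# For a well-calibrated model the two columns track each other
# and both rise across the deciles.
```

If the observed-value column fails to increase with the prediction deciles, the model's ranking of cases is suspect regardless of its R^2.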
Sure.
I want to compare machine learning models (decision tree, neural network) with different kinds of regression models including interaction terms (OLS, GLM, quantile, M-estimator). In particular, I want to know what proportion of the best-fitting model's overall explanation a simpler model (say, OLS regression) already achieves. For that I need a normed measure.
This is all on robust data problems (target on an interval/ratio scale), with errors NOT following a classical theoretical distribution. As far as I know, AIC and BIC rely on maximum likelihood, which is always based on a theoretical distribution, so I question whether they are reliable for robust data problems. Usually they are not recommended for robust problems.
Maybe there are no new normed goodness of fit measures.
Compare
https://scholar.google.de/scholar?hl...ness+of+fit%22
https://scholar.google.de/scholar?q=...e&as_sdt=0%2C5
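One ad-hoc way to get the "proportion of the best model's explanation" described above is simply the ratio of the two models' normed fits. A sketch with invented data (a linear and a quadratic fit stand in for the simple and the flexible model; the Spearman correlation is computed on ranks, which is distribution-free and may be more robust to outliers):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=500)
y = x + 0.5 * x**2 + rng.normal(scale=0.3, size=500)

pred_simple = np.polyval(np.polyfit(x, y, 1), x)  # stand-in for OLS
pred_flex = np.polyval(np.polyfit(x, y, 2), x)    # stand-in for a flexible model

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def spearman(a, b):
    # Spearman rho = Pearson correlation of the ranks (no ties here).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

r2_simple, r2_flex = r_squared(y, pred_simple), r_squared(y, pred_flex)
rho_simple, rho_flex = spearman(y, pred_simple), spearman(y, pred_flex)

# Share of the best model's explained variation the simple model keeps.
share = r2_simple / r2_flex
```

The ratio is normed in the sense asked for (it is 1 when the simple model matches the best fit), but note it inherits whatever non-robustness the underlying measure has.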
Is this on simulated data or real data?
Matt aka CB | twitter.com/matthewmatix
I like to use omega-naught-squared from Xu (2003). I think this might help a little: the measure is somewhat more robust than classical coefficients of determination.
Here is the reference:
Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22(22), 3527-3541.
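As I read Xu (2003), the measure is a proportional reduction in residual variance relative to a null (intercept-only) model; in a mixed model the two residual variances come from the fitted full and null mixed models. Stripped of the random effects, the idea reduces to something like this sketch (my reading, not Xu's exact estimator):

```python
import numpy as np

def omega_squared(y, yhat):
    # Proportional reduction in residual variance vs. the null model
    # (in the spirit of Xu 2003; for mixed models, substitute the
    # residual variance estimates from the fitted mixed models).
    resid_var_model = np.var(y - yhat)     # residual variance, full model
    resid_var_null = np.var(y - y.mean())  # residual variance, null model
    return 1 - resid_var_model / resid_var_null
```

With fixed effects only, this coincides with the usual R^2; the gain is in how the residual variances are estimated in the mixed-model setting.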
Are you sure this omega-naught-squared is a general goodness-of-fit measure? In R it is located in the "Time-Domain Deconvolution of Seismometer Response" (TDD) package. Compare https://cran.r-project.org/web/packages/TDD/TDD.pdf
Looks strange to me.
For the simulated datasets you can assess how well the different methods do more directly: e.g., estimate the bias of the coefficient estimates, or calculate the mean squared deviation between each parameter's estimates and its true value to compare efficiency. Not so sure about the real datasets...
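That simulation-based assessment could look like this (all numbers invented; the heavy-tailed t errors stand in for the "no classical distribution" setting, and an OLS slope stands in for whichever estimator is being compared):

```python
import numpy as np

rng = np.random.default_rng(3)
true_beta = 2.0
n_reps, n = 200, 100

estimates = []
for _ in range(n_reps):
    x = rng.normal(size=n)
    y = true_beta * x + rng.standard_t(df=3, size=n)  # heavy-tailed errors
    beta_hat = np.polyfit(x, y, 1)[0]                 # OLS slope estimate
    estimates.append(beta_hat)

estimates = np.asarray(estimates)
bias = estimates.mean() - true_beta              # systematic error
mse = np.mean((estimates - true_beta) ** 2)      # for efficiency comparisons
```

Running the same loop with each competing estimator (M-estimator, quantile regression, etc.) and comparing the MSEs gives a direct efficiency ranking that needs no distributional assumptions about the errors.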