Using ttest instead of R-Squared for acceptance


New Member
I have an independent variable X and a dependent variable Y, and I am trying to compare the predictive power of a number of regression-based models. For each model I obtain an R-squared value, and I can also obtain the predicted Y (call it Y_new). What if I perform a t-test on (Y_given - Y_new) to test the null hypothesis that the difference is 0?

Why should I use the R-squared method rather than performing a t-test on (Y_given - Y_new) and selecting the model that fails to reject the null hypothesis of the difference being 0?


Ambassador to the humans
What makes you think your method is any good? A better question: do you realize that for any regression model that includes an intercept, the sum of the residuals (the differences Y_given - Y_new, which I think is what you're referring to) is always exactly 0? The mean difference is therefore 0 by construction, so the p-value for a t-test will be 1 for any model that includes an intercept.
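A quick sketch illustrating the point above (not from the thread; the simulated data and tolerances are my own). It fits a straight line with an intercept by ordinary least squares and checks that the residuals sum to (numerically) zero, so the one-sample t statistic against 0 is itself 0:

```python
# Demonstration: with an intercept, OLS residuals sum to zero,
# so a t-test of "mean residual = 0" is uninformative (t = 0, p = 1).
import random, math

random.seed(0)
n = 50
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 1) for xi in x]  # simulated data

# OLS slope and intercept for y = a + b*x
mx = sum(x) / n
my = sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Residuals: observed minus predicted (Y_given - Y_new)
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# The normal equations force the residuals to sum to zero whenever
# an intercept is included, so the mean residual is zero too.
mean_res = sum(residuals) / n
print(abs(mean_res) < 1e-9)  # True

# One-sample t statistic against 0: t = mean / (sd / sqrt(n)) = 0,
# which corresponds to a p-value of 1 regardless of the model fit.
sd = math.sqrt(sum((r - mean_res) ** 2 for r in residuals) / (n - 1))
t = mean_res / (sd / math.sqrt(n))
print(abs(t) < 1e-9)  # True
```

This holds no matter how good or bad the model is, which is why the proposed t-test cannot distinguish between models.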