Testing for a difference between 2 regression models

#1
Hello,

I am working on multiple regressions at the moment and have come to a point that I need a little assistance. I have the following question:

Is there a way to test for significant difference between 2 regression models?
That would mean testing for the difference between 2 R-Squared values?
If so, which test should I use? (partial F-test or F-test?). Assuming this cannot be done in SPSS, how does it work manually?

I know these are actually 3 questions but could someone help me out with this predicament (another question:eek:)?

Thanks so much!
 

Dragan

Super Moderator
#2

I think the Chow test might be helpful to you.

http://www.stata.com/support/faqs/stat/chow.html
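For anyone who wants the mechanics behind the link rather than the SPSS recipe: the Chow test compares the residual sum of squares from a pooled regression against the two group-wise fits. A minimal sketch in Python (the function and the data in the example are made up for illustration, not taken from the Stata FAQ):

```python
import numpy as np
from scipy.stats import f

def chow_test(X1, y1, X2, y2):
    """Chow test: do two groups share the same regression coefficients?"""
    def ssr(X, y):
        Xc = np.column_stack([np.ones(len(y)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        return resid @ resid, Xc.shape[1]               # SSR, number of params

    ssr1, k = ssr(X1, y1)
    ssr2, _ = ssr(X2, y2)
    ssr_pooled, _ = ssr(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    df2 = len(y1) + len(y2) - 2 * k                     # denominator df
    F = ((ssr_pooled - (ssr1 + ssr2)) / k) / ((ssr1 + ssr2) / df2)
    return F, f.sf(F, k, df2)                           # upper-tail p-value
```

Note F can never go negative here: the pooled (restricted) fit cannot beat the two separate fits.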
 
#4
I think I mean something else...

Thanks a lot for your answers although my problem is slightly different, let me explain in more detail:

I ran a regression equation with 25 variables. Subsequently, I removed 5 of those variables and ran the equation again. I would like to test whether the R-squared of the first regression equation is significantly different from that of the second regression equation.

How do I go about doing this and which test do I use?

Thanks again people!
 
#5
No ... the Chow test is indeed the way to go ...

I think the link I sent you is the easiest one to follow for the kind of Chow test you want ...

You'll have two grouping variables ... let's call them 20 and 25 to reflect the number of variables - so, a big long column of 20s followed by a bunch more 25s. That's your fixed factor in the univariate ANOVA.

Then your DV can be whatever the outcome predictor was - following the regression. A single column of outcomeiness; let's call it outcome.

Then your covariate is whatever you were trying to predict.

Use paste instead of OK

To the syntax design add *outcome (the last line)

Run all
(actually - read the link carefully on this ...)

Then you'll get an F and a p value.

Booya ... that's your test for any sig diff between the two models.
 
#6
Oh - here's what you might also be describing ...
Assume SPSS 12 (it's all I speak)

Put the first 20 variables into the regression using the ENTER METHOD and click next
Put the remaining 5 variables in - again ENTER
Make sure you opt for r-square change
This is a hierarchical multiple regression

Check the model summary for results.
The Sig. F Change will tell you whether those 5 extra variables were ... significantly helpful

(a little bit of a different question though)
 

Dragan

Super Moderator
#7
By hand calculation it is:

F = [(R^2_Full - R^2_Reduced) / (25 - 20) ] / [ (1-R^2_Full) / ( N - 25 - 1) ]

where R^2_Full is associated with the model with 25 I.V.'s and
R^2_Reduced is associated with the model with 20 I.V.'s.

N is the total sample size (which I hope is large).
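That hand calculation is easy to script; a minimal sketch in Python using scipy (the R-squared values and sample size in the example are hypothetical, just to show the arithmetic):

```python
from scipy.stats import f

def partial_f_test(r2_full, r2_reduced, k_full, k_reduced, n):
    """Partial F-test for the R-squared change between nested models.

    k_full / k_reduced: number of predictors in each model; n: sample size.
    """
    df1 = k_full - k_reduced                 # numerator df (predictors dropped)
    df2 = n - k_full - 1                     # denominator df
    F = ((r2_full - r2_reduced) / df1) / ((1 - r2_full) / df2)
    p = f.sf(F, df1, df2)                    # upper-tail p-value
    return F, p

# Example: hypothetical R-squared values for the 25- vs 20-predictor models
F, p = partial_f_test(r2_full=0.62, r2_reduced=0.58, k_full=25, k_reduced=20, n=200)
print(f"F = {F:.3f}, p = {p:.4f}")
```

With df = (25 - 20, N - 25 - 1), you can either read off the p-value or compare F against the usual critical value.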
 
#8
Dragan

Do you know anything about a Chow test for categoricals?

I typically deal with discriminant analysis rather than MR ...

any thoughts?

I guess the bottom line is sig diffs between F1 values because that's what I end up with.