Swapping hypotheses when doing hypothesis testing

lo2

New Member
#1
Hi all,

Assume I have two linear models:

M1: y=a1+a2
M2: y=a1

And then I wanted to test whether M2 describes my data better than M1.
However, if I for instance fit an lm in R, I get a test of whether M1 describes my data better than M2.

So is there any way I can get a p-value for H1: a2 = 0 against H0: a2 != 0?
Or does that not make any sense? Does H0 always have to be that a parameter is equal to 0?
 

Dason

Ambassador to the humans
#2
It's actually impossible for M2 to describe your data "better" than M1. Maybe if you explain *why* you want to test this we might be able to help more though.
 

hlsmith

Not a robit
#3
Yes, you can compare these models if one is a reduced form of the other (i.e., the models are nested). The typical method is a -2 log-likelihood (likelihood ratio) test. No, your null hypothesis does not have to be "0", though you would have to have good information to propose otherwise.

Alternative hypotheses can also include equivalence, superiority, and non-inferiority tests.

I will see if I can find any code for the likelihood test.

P.S., does your M1 represent an intercept-only model?
@Dason, what if M2 was an intercept-only model and M1 was the same thing but with a2 a random multinomial variable not associated with the outcome? Thus, the intercept would be the base case, though the reference group has no explanatory power. Could that make the M1 model worse, based on some criterion?
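Found some code for this. In R you would fit both models with lm() and compare them with anova(m2, m1) (the partial F-test), or use lmtest::lrtest() for the likelihood-ratio version. The same nested-model comparison can also be done by hand; here is a rough pure-Python sketch (simulated data and hand-rolled OLS, no-intercept models as in the thread, purely illustrative):

```python
import math
import random

random.seed(0)
n = 100
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]      # unrelated to y
y = [2.0 * a + random.gauss(0, 1) for a in x1]   # y depends on x1 only

def rss_reduced_model(x, y):
    # OLS for y = a1*x (no intercept): a1 = sum(x*y) / sum(x*x)
    a1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return sum((yi - a1 * xi) ** 2 for xi, yi in zip(x, y))

def rss_full_model(x1, x2, y):
    # OLS for y = a1*x1 + a2*x2 via the 2x2 normal equations
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, y))
    s2y = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    a1 = (s1y * s22 - s2y * s12) / det
    a2 = (s11 * s2y - s12 * s1y) / det
    return sum((c - a1 * a - a2 * b) ** 2 for a, b, c in zip(x1, x2, y))

rss_full = rss_full_model(x1, x2, y)   # M1: y = a1*x1 + a2*x2
rss_red = rss_reduced_model(x1, y)     # M2: y = a1*x1

# The full model can never fit worse: rss_full <= rss_red always.
# Partial F statistic for H0: a2 = 0 (1 extra parameter, n - 2 residual df):
F = (rss_red - rss_full) / (rss_full / (n - 2))
# -2 * log-likelihood difference for the two nested Gaussian models:
lrt = n * math.log(rss_red / rss_full)
```

In R, anova(m2, m1) reports the same F statistic with its p-value.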
 

lo2

New Member
#4
Dason said:
It's actually impossible for M2 to describe your data "better" than M1. Maybe if you explain *why* you want to test this we might be able to help more though.
Hi, I see your point. I will try to explain the models better:

M1: y = a1*x1 + a2*x2
M2: y = a1*x1

The idea is that y (response) and x1 (data points) are correlated, but y and x2 are not correlated.
I would like to test these two models against each other, showing that M2 is significantly better at explaining y.
Does that make sense?
 

Dason

Ambassador to the humans
#5
Once again, it's impossible for M2 to be "better" at explaining y, since M1 could just set a2 to 0 and then it would be equivalent to M2. Thus M2 can't ever do a "better" job at predicting y than M1 can. It might be that x2 doesn't provide enough evidence that it actually improves the prediction of y, but that's just the typical test of H0: a2 = 0 against H1: a2 != 0.
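To make that concrete, here is a minimal pure-Python sketch of that typical test (simulated data, no-intercept model as in the thread; the p-value uses a normal approximation to the t distribution, which is fine at this sample size — all names and numbers are made up):

```python
import math
import random

random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]      # no real relationship to y
y = [2.0 * a + random.gauss(0, 1) for a in x1]

# Fit the full model y = a1*x1 + a2*x2 via the 2x2 normal equations.
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * c for a, c in zip(x1, y))
s2y = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
a1 = (s1y * s22 - s2y * s12) / det
a2 = (s11 * s2y - s12 * s1y) / det

rss = sum((c - a1 * a - a2 * b) ** 2 for a, b, c in zip(x1, x2, y))
sigma2 = rss / (n - 2)                  # residual variance estimate
se_a2 = math.sqrt(sigma2 * s11 / det)   # [(X'X)^-1]_{22} = s11 / det
t = a2 / se_a2                          # test statistic for H0: a2 = 0
# Two-sided p-value, normal approximation to the t distribution:
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
```

In R, summary(lm(y ~ x1 + x2 - 1)) reports this same coefficient test for a2 directly.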
 

lo2

New Member
#6
Dason said:
Once again, it's impossible for M2 to be "better" at explaining y, since M1 could just set a2 to 0 and then it would be equivalent to M2. Thus M2 can't ever do a "better" job at predicting y than M1 can. It might be that x2 doesn't provide enough evidence that it actually improves the prediction of y, but that's just the typical test of H0: a2 = 0 against H1: a2 != 0.
Yes, I guess you have a point. I kind of overlooked that fact, so thanks for the great answer :)