# Swapping hypotheses when doing hypothesis testing

#### lo2

##### New Member
Hi all,

Assume I have two linear models:

M1: y=a1+a2
M2: y=a1

I then want to test whether M2 describes my data better than M1.
However, if I for instance fit the model with lm in R, I get a test of whether M1 describes my data better than M2.

So is there any way I can get a p-value for H1: a2 = 0 and H0: a2 != 0?
Or does that not make any sense? Does H0 always have to be that a parameter is equal to 0?

#### Dason

It's actually impossible for M2 to describe your data "better" than M1. Maybe if you explain *why* you want to test this we might be able to help more though.
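Dason's point can be illustrated with a small sketch (simulated data and variable names are assumptions, not from the thread): the nested model's R&sup2; can never exceed the full model's, because least squares could always set the extra coefficient to zero and recover the reduced fit.

```r
set.seed(7)
n  <- 50
x1 <- rnorm(n)
x2 <- rnorm(n)                 # pure noise, unrelated to y by construction
y  <- x1 + rnorm(n)

r2_full    <- summary(lm(y ~ x1 + x2))$r.squared   # larger model
r2_reduced <- summary(lm(y ~ x1))$r.squared        # nested model
r2_full >= r2_reduced          # always TRUE for nested least-squares fits
```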

#### hlsmith

##### Not a robit
Yes, you can compare these models, since one is a reduced form of the other. The typical method is a -2 log-likelihood (likelihood ratio) test. No, your null hypothesis does not have to be "0", though you would need good prior information to propose otherwise.

Alternative hypotheses can also include equivalence, superiority, and non-inferiority tests.

I will see if I can find any code for the likelihood test.
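In the meantime, here is a minimal sketch of the likelihood ratio comparison in base R (the simulated data and variable names are placeholders, not from the thread):

```r
set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 2 * x1 + rnorm(n)            # x2 plays no role in generating y

m_full    <- lm(y ~ x1 + x2)       # the larger model (M1)
m_reduced <- lm(y ~ x1)            # the nested model (M2)

# -2 log-likelihood (likelihood ratio) statistic, compared to a
# chi-squared distribution with df = difference in parameter count (here 1)
lr_stat <- as.numeric(2 * (logLik(m_full) - logLik(m_reduced)))
p_value <- pchisq(lr_stat, df = 1, lower.tail = FALSE)
p_value
```

Equivalently, `anova(m_reduced, m_full)` performs the same nested comparison as an F test.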

P.S. Does your M1 represent an intercept-only model?
@Dason, what if M2 was an intercept-only model and M1 was the same thing but with a2 a random multinomial variable not associated with the outcome? Then the intercept would be the base case, though the reference group has no explanatory power. Could that make the M1 model worse, based on some criterion?


#### lo2

##### New Member
> It's actually impossible for M2 to describe your data "better" than M1. Maybe if you explain *why* you want to test this we might be able to help more though.

Hi, I see your point. I will try to explain the models better:

M1: y = a1*x1 + a2*x2
M2: y = a1*x1

The idea is that y (the response) and x1 are correlated, but y and x2 are not.
I would like to test these two models against each other, showing that M2 is significantly better at explaining y.
Does that make sense?
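A sketch of this setup in R (simulated data; names are assumptions): fitting the larger model and reading off the t-test p-value for a2 tests H0: a2 = 0. Note that this cannot show M2 is "better"; it can only fail to find evidence that a2 differs from 0.

```r
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)                     # constructed to be unrelated to y
y  <- 1.5 * x1 + rnorm(n)

# M1: y = a1*x1 + a2*x2 (no intercept, matching the models as written)
m1 <- lm(y ~ 0 + x1 + x2)

# p-value of the t-test for H0: a2 = 0
p_a2 <- summary(m1)$coefficients["x2", "Pr(>|t|)"]
p_a2
```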