1. ## Hypothesis

If you have this model:

model 1) y = a + b1*x1 + b2*x2 + u

and this one:

model 2) y = a + b1*(x1 - x2) + u

what do you call the restriction that has been imposed on model 1 to obtain model 2?

Is it just b1 = -b2?
If yes, how do you test this kind of restriction? I really don't know. :S

2. ## Re: Hypothesis

You could test the hypothesis:

Ho: b1 = -b2
Ha: b1 != -b2

through the use of contrasts. You could also do it through a likelihood ratio test with model 1 being the full model and model 2 being the reduced model.
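The full-vs-reduced comparison described above can be sketched in R (simulated data; every variable name and coefficient value below is made up for illustration): fit the unrestricted model, fit the restricted model using the constructed regressor x1 - x2, and compare the two fits with `anova()`, which carries out the F test of the single restriction.

```r
# Sketch of the full-vs-reduced comparison (simulated data;
# the data-generating values are illustrative, not from a real dataset)
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 3 + 2 * (x1 - x2) + rnorm(n)  # truth satisfies b1 = -b2

full    <- lm(y ~ x1 + x2)          # model 1: y = a + b1*x1 + b2*x2 + u
reduced <- lm(y ~ I(x1 - x2))       # model 2: imposes b1 = -b2

# F test of the restriction; for nested linear models this is
# equivalent to the likelihood ratio test
anova(reduced, full)
```

Since the simulated data actually satisfy the restriction, the F test should typically fail to reject here.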

3. ## Re: Hypothesis

Yes, okay, but I really don't know how to test that kind of hypothesis.
Could you provide me with an example?

4. ## Re: Hypothesis

What software are you using? Most packages have an easy way to specify a contrast.

5. ## Re: Hypothesis

I use Stata. Really hope you can help. I need to deliver an answer tomorrow. :S

6. ## Re: Hypothesis

I've never used Stata so I can't help there.

7. ## Re: Hypothesis

No? Okay. But can you tell me the theory I should use, point by point?
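For what it's worth, the point-by-point theory here is the standard F test of linear restrictions (a sketch in textbook notation): estimate the unrestricted model 1 and the restricted model 2, record each model's residual sum of squares (SSR_ur and SSR_r), and compare:

```latex
% q = number of restrictions (here q = 1, namely b1 = -b2),
% k = number of regressors in the unrestricted model (here k = 2)
F \;=\; \frac{(SSR_r - SSR_{ur})/q}{SSR_{ur}/(n - k - 1)}
\;\sim\; F_{q,\; n-k-1} \quad \text{under } H_0 .
```

Reject H0 at level alpha if F exceeds the critical value of the F(q, n-k-1) distribution. In the normal linear model this F test, the Wald/contrast test, and the likelihood ratio test all lead to the same conclusion.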

9. ## Re: Hypothesis

Okay friends, this is my exam. I did not think it was necessary to read up on the material because I thought I knew everything. I thought I got all knowledge from heaven

10. ## Re: Hypothesis

Originally Posted by Dason
What software are you using? Most packages have an easy way to specify a contrast.
I'd like to see the R way.

11. ## Re: Hypothesis

Well there is a package called "contrast" in R but I typically just do them by hand.

Code:
``````test.contrast = function(lm.out, C, d = 0){
# Provides a test of Ho: C*b = d vs. Ha: C*b != d
# lm.out: - is the linear model used
# C: ----- is a matrix with the desired set of contrasts
#          which may contain more than one row
# d: ----- a vector of values to test against

b <- coef(lm.out)
V <- vcov(lm.out)
df.numerator <- nrow(C)
df.denominator <- lm.out$df.residual  # residual degrees of freedom
Cb.d <- (C %*% b) - d
Fstat <- drop(t(Cb.d) %*% solve(C %*% V %*% t(C)) %*% Cb.d/df.numerator)
pvalue <- 1 - pf(Fstat, df.numerator, df.denominator)
ans <- list(Fstat = Fstat, pvalue = pvalue)
return(ans)
}

# Generate some fake data
n <- 20
sigma <- 1
betas <- c(7, 2, -2)
x1 <- 1:n
x2 <- runif(n)
X <- matrix(c(rep(1,n),x1, x2), ncol = 3)
y <- X %*% betas + rnorm(n, 0, sigma)

# Fit the linear model
o <- lm(y ~ x1 + x2)

# Test the restriction b1 = -b2, i.e. b1 + b2 = 0
C <- matrix(c(0, 1, 1), nrow = 1)
test.contrast(o, C)``````

12. ## Re: Hypothesis

dudes, im not much cleaver here. Please introduce me to the rapidhole

13. ## Re: Hypothesis

Originally Posted by Mikkelsoeren
Okay friends, this is my exam. I did not think it was necessary to read up on the material because I thought I knew everything. I thought I got all knowledge from heaven
What? You thought you didn't have to read up on anything and that knowledge just came from heaven? Well you're wrong and you should probably start reading up on things...

Originally Posted by Mikkelsoeren
dudes, im not much cleaver here. Please introduce me to the rapidhole
What are you trying to say here? Rapidhole? What?

Are you just asking for quick answers? You could try googling "linear contrast".

14. ## Re: Hypothesis

Dason :=) you are right. With a probability of 100%.

15. ## Re: Hypothesis

Isn't there some way to deal with the contrasts in the lm parameter list?
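As far as I know, the `contrasts=` argument of `lm()` only applies to factor variables, not to continuous regressors, so it doesn't directly express b1 = -b2. One common workaround (a sketch, with made-up simulated data) is to reparameterize the model so the restriction becomes a single coefficient that the usual summary t test checks for free:

```r
# y = a + b1*x1 + b2*x2 + u can be rewritten as
#   y = a + g1*(x1 - x2) + g2*(x1 + x2) + u
# where g1 = (b1 - b2)/2 and g2 = (b1 + b2)/2, so
# H0: b1 = -b2 is exactly H0: g2 = 0.
set.seed(3)
n  <- 40
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2*x1 - 2*x2 + rnorm(n)          # generated with b1 = -b2

o <- lm(y ~ I(x1 - x2) + I(x1 + x2))
summary(o)$coefficients["I(x1 + x2)", ]   # t test of g2 = 0 <=> b1 = -b2
```

The t test on `I(x1 + x2)` is equivalent to the one-restriction F test from the contrast approach above.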