
Thread: Regression MC

#1

    Regression MC




Can someone confirm the answers below, and advise which ones are incorrect? Thanks!

    1) True/False: Models selected by automated variable selection techniques do not need to be validated since they are ‘optimal’ models.

    (2) Compute the Akaike Information Criterion (AIC) value for the linear regression model
    Y = b0 + b1*X1 + b2*X2 + b3*X3.
    The regression model was fitted on a sample of 250 observations and yielded a likelihood value of 0.18.



    (a) 9.49

    (b) 11.43

    (c) 25.52

    (d) 15.55


    (3) Compute the Bayesian Information Criterion (BIC) value for the linear regression model
    Y = b0 + b1*X1 + b2*X2 + b3*X3.
    The regression model was fitted on a sample of 250 observations and yielded a likelihood value of 0.18.

    (a) 9.49

    (b) 11.43

    (c) 25.52

    (d) 15.55


    (4) True/False: Consider a categorical predictor variable that has three levels denoted by 1, 2, and 3. We can include this categorical predictor variable in a regression model using this specification, where X1 is a dummy variable for level 1, X2 is a dummy variable for level 2, and X3 is a dummy variable for level 3.
    Y = b0 + b1*X1 + b2*X2 + b3*X3

    True

    False


    (5) True/False: The model Y = b0 + exp(b1*X1) + e can be transformed to a linear model.

    True

    False


    (6) True/False: A variable transformation can be used as a remedial measure for heteroscedasticity.

    True

    False


(7) When comparing models of different sizes (i.e., a different number of predictor variables), which metrics can we use?

    a. R-Squared and Adjusted R-Squared

    b. R-Squared and Mallow’s Cp

    c. AIC and R-Squared

    d. AIC and BIC


    (8) True/False: When using Mallow’s Cp for model selection, we should choose the model with the largest Cp value.

    True

    False


    (9) True/False: Consider the case where the response variable Y is constrained to the interval [0,1]. In this case one can fit a linear regression model to Y without any transformation to Y.

    True

    False


    (10) True/False: Consider the case where the response variable Y takes only two values: 0 and 1. A linear regression model can be fit to this data.

    True

    False




    1) False
    2) b
    3) c
    4) T
    5) T
    6) F
    7) D
    8) F
    9) F
    10) F
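As a sanity check on (2) and (3), the standard formulas AIC = 2k − 2 ln(L) and BIC = k ln(n) − 2 ln(L) can be evaluated directly. A minimal sketch, assuming k = 4 estimated coefficients (b0 through b3):

```python
import math

# Given: n = 250 observations, likelihood L = 0.18, and (assumed) k = 4
# estimated coefficients (b0, b1, b2, b3)
n, L, k = 250, 0.18, 4

aic = 2 * k - 2 * math.log(L)            # AIC = 2k - 2 ln(L)
bic = k * math.log(n) - 2 * math.log(L)  # BIC = k ln(n) - 2 ln(L)

print(round(aic, 2))  # 11.43 -> answer (b)
print(round(bic, 2))  # 25.52 -> answer (c)
```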

#2
hlsmith

    Re: Regression MC

Well, I skipped the questions that required a calculator and answered the following off the top of my head, without looking anything up.


    1. f
4. T or F: you can do this, but when dummy coding you typically use k−1 terms in the model (k = # of categories), so I would say false. You can do it the other way, though the program may complain.
    5. T
    6. T
    7. d
    8. F
9. T, if the majority of values land around 0.5 with minimal dispersion; otherwise you would want to transform the variable or use beta regression.
    10. T or F, you can do it, but ideally you would use logistic regression, so I would prefer False.
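To see why logistic regression is usually preferred for (10): ordinary least squares will happily fit a 0/1 response, but nothing keeps its fitted values inside [0, 1]. A quick sketch with simulated data (variable names and the data-generating setup are just for illustration, assuming NumPy):

```python
import numpy as np

# Simulated binary response: y = 1 when x plus noise is positive
x = np.linspace(-5, 5, 200)
rng = np.random.default_rng(0)
y = (x + rng.normal(0, 1, 200) > 0).astype(float)

# Ordinary least squares on the 0/1 response (a "linear probability model")
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# Nothing constrains the fitted values to [0, 1]
print(fitted.min(), fitted.max())  # typically below 0 and above 1 here
```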
    Stop cowardice, ban guns!

#3

    Re: Regression MC

I don't understand #10. Here is what I think:

    I think it is true because we will predict a value between -1 and 1; and then equate <=0 to 0 and >0 to 1. That is what even logistic regression would do.

    So, with this, which one would you say #10 is?

#4

    Re: Regression MC

Could you also confirm that #2 is 11.43 and #3 is 25.52? Thanks!

#5
rogojel

    Re: Regression MC

    hi,
just for my education: how would you linearize (5) if b0 is not 0?

    regards

#6
ondansetron

    Re: Regression MC

    Quote Originally Posted by hlsmith View Post

    4. t or f, you can do this, but when dummy coding you typically use k-1 terms in the model, k=# categories. so I would say false, but you can do the other way, though the program may yell at you.
Four is false. If you fit the intercept with that model using k dummies for k groups, you will have perfect collinearity and the matrix will be singular (not invertible). As you noted, the program will probably give you an error message or will sometimes just drop one of the dummies to allow a fit. Search "dummy variable trap" for more background. You would need to fit the model without an intercept to use k dummies for k groups.
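The singularity described above is easy to see numerically. A small sketch (simulated data, assuming NumPy) building a design matrix with an intercept plus all three dummies:

```python
import numpy as np

rng = np.random.default_rng(0)
levels = rng.integers(1, 4, size=30)  # categorical predictor, levels 1..3

# Intercept PLUS all three dummies: the "dummy variable trap"
X = np.column_stack([np.ones(30)] +
                    [(levels == j).astype(float) for j in (1, 2, 3)])

# The three dummies sum to the intercept column, so X is rank-deficient
print(np.linalg.matrix_rank(X))            # 3, not 4
print(abs(np.linalg.det(X.T @ X)) < 1e-6)  # True: X'X is singular
```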

#7
Dason

    Re: Regression MC

    Quote Originally Posted by ondansetron View Post
Four is false. If you fit the intercept with that model using k dummies for k groups, you will have perfect collinearity and the matrix will be singular (not invertible). As you noted, the program will probably give you an error message or will sometimes just drop one of the dummies to allow a fit. Search "dummy variable trap" for more background. You would need to fit the model without an intercept to use k dummies for k groups.
While I don't disagree that the answer they are probably looking for is "false," I do disagree that that is the answer. We can easily fit that model, but the solutions won't be unique and we won't be able to do any sort of inference on the parameter estimates directly. At that point we'd need to make sure that anything we want to talk about is a so-called "estimable function".
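That point can be demonstrated: a least-squares routine that tolerates rank deficiency (e.g. NumPy's `lstsq`, which returns the minimum-norm solution) still fits the model, and an estimable function like b0 + b1 (the group-1 mean) is pinned down even though the individual coefficients are arbitrary. A sketch with simulated data (group means chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
levels = rng.integers(1, 4, size=30)
# Group means 2, 5, 9 plus a little noise
y = (np.where(levels == 1, 2.0, np.where(levels == 2, 5.0, 9.0))
     + rng.normal(0, 0.1, 30))

# Same rank-deficient design: intercept plus k dummies for k groups
X = np.column_stack([np.ones(30)] +
                    [(levels == j).astype(float) for j in (1, 2, 3)])

# lstsq tolerates the rank deficiency and returns the minimum-norm
# solution; other solutions exist, so beta itself is not unique
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# But the estimable function b0 + b1 (the group-1 mean) is recovered:
print(np.isclose(beta[0] + beta[1], y[levels == 1].mean()))  # True
```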
    I don't have emotions and sometimes that makes me very sad.

#8
ondansetron

    Re: Regression MC


    Quote Originally Posted by Dason View Post
    While I don't disagree that the answer they are probably looking for is "false" I do disagree that that is the answer. We can easily fit that model but the solutions won't be unique and we won't be able to do any sort of inference on the parameter estimates directly. At that point we'd need to make sure that anything we want to talk about is a so called "estimable function".
True, there is the distinction that you won't get unique estimates. Is there any benefit to the estimation in that case? I'm not sure, so I have to ask.

