Thread: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

1. Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Cross-sectional over .

's were estimated in separate regressions (specifically, 300 time-series regressions). The data were generated by , where D_t is a dummy equal to unity from t=150 to t=200. A and B are normally distributed N(0.01, 0.2), and epsilon satisfies all the standard Gauss-Markov assumptions (including the optional normality assumption).

Questions:
1) Can measurement error cause heteroscedasticity? I know that it will always inflate the variance (the model is equivalent to using the population Y_i as the LHS variable, but with disturbances u_i = e_i - v_i, where v_i is the measurement error in Y_i).

Perhaps if v_i has non-constant variance? But what conditions could generate such measurement error?

2) Suppose the dependent variable is skewed. What are the consequences for my standard errors (of the intercept and slope)?
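One quick way to probe question 2 is a Monte Carlo sketch. Everything below (n=200, 2000 replications, Exp(1) errors, the coefficient values) is an illustrative assumption, not from the setup above; the idea is to compare the true sampling variability of the OLS slope under skewed errors with the standard error that OLS reports.

```python
import numpy as np

# Hypothetical Monte Carlo: does a skewed disturbance break the usual OLS
# standard errors?  All design values here are illustrative assumptions.
rng = np.random.default_rng(1)
n, reps = 200, 2000
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)

slopes, reported_se = [], []
for _ in range(reps):
    eps = rng.exponential(1.0, n) - 1.0      # skewed, mean-zero errors
    y = 0.5 + 2.0 * x + eps
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)             # usual OLS variance estimate
    slopes.append(beta[1])
    reported_se.append(np.sqrt(s2 * XtX_inv[1, 1]))

mc_sd = np.std(slopes)                       # true sampling variability
avg_se = np.mean(reported_se)                # what OLS reports on average
print(mc_sd, avg_se)  # should be close: skewness alone is not the problem
```

If the two numbers agree, mild skewness is not hurting the standard errors; with mean-zero, homoskedastic errors the conventional SEs remain consistent, and normality only matters for exact finite-sample t distributions.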

2. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Perhaps if you were regressing an intelligence marker on age using an online format, and one age group was completely ignorant of the technology, there could be greater variability in responses for those individuals. This is just a hypothetical in my mind; not sure if it would translate (it's arguably more about systematic bias).

Or what about height regressed on weight using a spring scale that is more likely to falter with very heavy weights? If the population did not have any obese persons scattered throughout, and weight was just related to height, tall persons might show more variability.

Perhaps you could plug in some fictitious data and see what happens, then report back.
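Taking up that suggestion, here is a minimal numpy sketch with fictitious data. It borrows the thread's N(0.01, 0.2) values for illustration, but the regressor, sample size, and the assumption that the measurement error's standard deviation grows with x are all made up; the point is just that x-dependent error variance in Y shows up as heteroskedastic residuals.

```python
import numpy as np

# Hypothetical sketch: measurement error in Y whose variance grows with x.
# All names and parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0, 1, n)
eps = rng.normal(0, 0.2, n)          # homoskedastic model error
y_true = 0.01 + 0.01 * x + eps

# Measurement error whose sd grows with x: v_i ~ N(0, (0.5 * x_i)^2)
v = rng.normal(0, 0.5 * x)
y_obs = y_true + v                   # observed LHS variable

# OLS of observed y on x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
resid = y_obs - X @ beta

# Residual variance in the low-x half vs the high-x half
lo = resid[x < 0.5].var()
hi = resid[x >= 0.5].var()
print(lo, hi)  # hi should be noticeably larger than lo
```

A formal check could run a Breusch-Pagan test on these residuals, but eyeballing the two halves already answers question 1: yes, measurement error in Y can cause heteroskedasticity whenever its variance varies with the regressors.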

3. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Hmm, interesting ideas. Another possibility: if the assumptions needed for BLUE are violated in some proper subset of the 300 underlying time-series regressions, then the measurement error in Y_i would have non-constant variance.

Simulations are a good idea!

4. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Talked to my econometrics prof ... he says it's not important if the measurement error is not severe, because the additional variance on the diagonal of the residual variance-covariance matrix will uniformly increase the standard-error estimates of the coefficients, leading to conservative inference. So I expect weaker power of tests but a low type I error rate.

Bias is only an issue when a predictor variable has measurement error.
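The professor's point can be illustrated with a small simulation (all design values below are illustrative assumptions): classical, homoskedastic measurement error in Y leaves the slope unbiased but uniformly inflates its reported standard error.

```python
import numpy as np

# Hypothetical sketch: classical measurement error in Y inflates OLS
# standard errors, making inference conservative.  Values are illustrative.
rng = np.random.default_rng(2)
n, reps = 300, 2000
x = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)

se_clean, se_noisy = [], []
for _ in range(reps):
    eps = rng.normal(0, 1.0, n)
    v = rng.normal(0, 1.0, n)        # measurement error in Y, same sd as eps
    y = 1.0 + 0.3 * x + eps
    for y_used, store in ((y, se_clean), (y + v, se_noisy)):
        beta = XtX_inv @ X.T @ y_used
        resid = y_used - X @ beta
        s2 = resid @ resid / (n - 2)
        store.append(np.sqrt(s2 * XtX_inv[1, 1]))

print(np.mean(se_clean), np.mean(se_noisy))
# With Var(u) = Var(eps) + Var(v) = 2 * Var(eps), the noisy SE should be
# roughly sqrt(2) times the clean one.
```

The inflated standard errors are honest (they reflect the true noise in observed Y), which is exactly why inference is conservative rather than invalid; the cost shows up as lost power.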

5. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Your professor is right, of course, but think about what that entails. As the standard errors inflate, the chance of making a type II error increases as well: your power falls. If the null is not true, do you really want to fail to reject it?

Depends on whether any of your money is involved in the decision.

6. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

Yep, true. He might send an e-mail to a top bootstrap guy to see what can be done.

7. Re: Dependent variable issues: (i) Mild skewness and (ii) Measurement error

The frequently heard statement that heteroskedasticity or multicollinearity won't bias the estimates but will make them inefficient sounds pretty comforting, until you consider how large the resulting uncertainties can be (and that type II errors really do matter). I have a "joke" that knowing the answer is 3 plus or minus 5000 is not very useful, even if you are sure 3 is unbiased.

Ok my jokes aren't really funny
