Thread: Importance of bias in estimators

1. Importance of bias in estimators

I always thought that bias was the central issue with an estimator: the one thing you had to avoid above all else, and the gold standard for evaluating an estimator.

Then I read this by Greene [in an econometrics text]:

"Note that no mention has been made of unbiasedness. The linear least squares estimator in the linear regression model is essentially alone in the estimators considered in this book. It is generally not possible to establish unbiasedness for any other estimator. As we saw earlier, unbiasedness is of fairly limited virtue in any event; we found, for example, that the property would not differentiate an estimator based on a sample of 10 observations from one based on 10,000. Outside the linear case, consistency is the primary requirement of an estimator. Once this is established, we consider questions of efficiency and, in most cases, whether we can rely on asymptotic normality as the basis for statistical inference."
Can this be true?
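Greene's 10-versus-10,000 remark can be made concrete with a quick simulation (my own sketch, not from the text): the MLE of a normal variance divides by n rather than n - 1, so it is biased at every fixed n, yet the bias shrinks as n grows. The true variance of 4.0 here is just an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0          # variance of the simulated population (illustrative)
mean_mle = {}

for n in (10, 100, 10_000):
    # np.var with the default ddof=0 is the MLE (divides by n, not n - 1).
    # Averaging it over many replications approximates E[MLE] at this n;
    # theory says E[MLE] = (n - 1)/n * true_var, e.g. 3.6 at n = 10.
    reps = np.var(rng.normal(0.0, 2.0, size=(1000, n)), axis=1)
    mean_mle[n] = float(reps.mean())
    print(n, round(mean_mle[n], 3))
```

The bias is visible at n = 10 and negligible at n = 10,000: sample size, not unbiasedness, is doing the work, which seems to be Greene's point.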

2. Re: Importance of bias in estimators

It is saying that if the normality assumption on the residuals is met, the model should provide unbiased estimates, perhaps meaning you are not making any asymptotic assumptions. There is another component at work here that I am not fully remembering, but it has to do with the structure of the hat matrix (perhaps the orthogonal projection concept) and the linear combination of its components.

3. The Following User Says Thank You to hlsmith For This Useful Post:

noetsi (12-30-2014)

4. Re: Importance of bias in estimators

Really? I read that completely differently: that in fact it was impossible to know if the estimates were unbiased.

5. Re: Importance of bias in estimators

Hmm, it is hard to know outside of context.

6. Re: Importance of bias in estimators

As is generally true, there is not a lot of context in the Greene book. He jumps from topic to topic without detailed discussion. This is unavoidable given the amount of material he is covering, but it can be confusing [well, it is to me].

7. Re: Importance of bias in estimators

You have to be careful on the term "bias" here.

For most estimators, consistency is a very common and important requirement, because most of us want the estimator to be arbitrarily close to the true parameter as the sample size increases.

Closely related to this is asymptotic unbiasedness. If an estimator is asymptotically unbiased and its variance goes to zero, then it is a consistent estimator.

Unbiasedness is another good property, but note that it is neither a necessary nor a sufficient condition for consistency. So the text has stated the importance of consistency correctly.
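The argument above can be checked numerically (a rough sketch under assumed normal data, not from the thread): for the biased divide-by-n variance estimator, both the bias and the variance vanish as n grows, so the fraction of estimates landing within any fixed epsilon of the truth climbs toward 1, which is what consistency means.

```python
import numpy as np

rng = np.random.default_rng(1)
true_var, eps = 4.0, 0.5    # illustrative variance and tolerance
within = {}

for n in (10, 100, 10_000):
    # MLE of the variance for each of 2000 replicated samples of size n.
    est = np.array([np.var(rng.normal(0.0, 2.0, size=n))
                    for _ in range(2000)])
    # Empirical P(|estimate - truth| < eps): should rise toward 1 with n.
    within[n] = float(np.mean(np.abs(est - true_var) < eps))
    print(n, within[n])
```

Note that the estimator is biased at every one of these sample sizes, yet it still concentrates on the truth: bias to zero plus variance to zero is enough.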

8. The Following User Says Thank You to BGM For This Useful Post:

noetsi (12-30-2014)

9. Re: Importance of bias in estimators

Originally Posted by hlsmith
It is saying that if the normality assumption on the residuals is met, the model should provide unbiased estimates, perhaps meaning you are not making any asymptotic assumptions.

What I think it was saying was that outside of a linear model we rarely see unbiased estimators. Outside of linear models what is the 'typical' estimator? The MLE. And this isn't guaranteed to be unbiased. That's pretty much what I think it was trying to get across.

10. The Following 2 Users Say Thank You to Dason For This Useful Post:

hlsmith (12-30-2014), noetsi (12-30-2014)

11. Re: Importance of bias in estimators

Agreed. I was rambling because I heard someone reference the linear combination in y-hat as unbiased, but I can't remember the details. Thanks.

12. Re: Importance of bias in estimators

If I understood the author, and I may not have, he feels least squares is the only estimator for which you can ensure, or even care about, unbiasedness. He talks about a wide range of estimators in his book.

If the author is arguing that you get close to the true value with large samples (asymptotic approaches), then I guess I misunderstood what he means by bias. I understand bias to mean you are not accurately estimating the population parameter, and accurate estimation seems like the single most important thing an estimator can do. Being consistently wrong does not seem very useful.

13. Re: Importance of bias in estimators

No. Least squares in linear regression is just one of the few commonly used estimators that we can easily show to be unbiased. There are other cases; for example, the MLE for a Poisson mean is unbiased.

But your concerns are basically what the author is saying don't matter. It might seem weird to say that we don't care much whether an estimator is unbiased, but honestly consistency is something we care about more. Exact unbiasedness, holding regardless of the true parameter value, isn't easy to come by, so we settle for being happy if our estimator gets closer to the truth as the sample size increases (consistency). There are other properties we talk about, but if your estimator isn't consistent you would need a very good reason for using it.
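The Poisson case mentioned above is easy to check, because its MLE is just the sample mean, and the expectation of a sample mean equals the population mean at every sample size. A small simulation sketch (the lambda of 3.0 is a hypothetical choice, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 3.0               # hypothetical Poisson mean, chosen for illustration
avg_mle = {}

for n in (10, 1000):
    # The Poisson MLE is the sample mean; E[sample mean] = lam exactly,
    # so the average of many MLEs should sit at lam even for n = 10.
    est = rng.poisson(lam, size=(5000, n)).mean(axis=1)
    avg_mle[n] = float(est.mean())
    print(n, round(avg_mle[n], 3))
```

Unlike the normal-variance MLE, there is no divide-by-n correction missing here, which is why this particular MLE happens to be exactly unbiased.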

14. Re: Importance of bias in estimators

With larger samples you get closer to the truth and, in many cases, to the normal distribution or whatever distribution the estimator is approximating.

15. Re: Importance of bias in estimators

Originally Posted by Dason
What I think it was saying was that outside of a linear model we rarely see unbiased estimators. Outside of linear models what is the 'typical' estimator? The MLE. And this isn't guaranteed to be unbiased.
I think that is the most amazing thing I have learned since I came here [which covers a lot of territory, obviously]. Yow. Note, however, that the author talks about many estimators, not just least squares and MLE.

16. Re: Importance of bias in estimators

Originally Posted by Dason
No. Least squares in linear regression is just one of the few commonly used estimators that we can easily show to be unbiased. There are other cases; for example, the MLE for a Poisson mean is unbiased.

But your concerns are basically what the author is saying don't matter. It might seem weird to say that we don't care much whether an estimator is unbiased, but honestly consistency is something we care about more. Exact unbiasedness, holding regardless of the true parameter value, isn't easy to come by, so we settle for being happy if our estimator gets closer to the truth as the sample size increases (consistency). There are other properties we talk about, but if your estimator isn't consistent you would need a very good reason for using it.
I have not read the whole book, so it is possible he does not cover the estimator you mention. I actually missed this comment by the author until this thread.

The linear least squares estimator in the linear regression model is essentially alone in the estimators considered in this book.
