It doesn't. But for that prediction interval to be correct all the assumptions you make in the process of creating the interval need to be correct as well.
What did you assume about u? How about x?
I don't have emotions and sometimes that makes me very sad.
TrueTears (08-27-2013)
Okay, so how would you go about answering this question then? I'm really confused.
The assumptions about u are given, i.e. iid normally distributed with mean 0 and variance 9.
We condition on x_{n+1} for the posterior of y_{n+1}, hence x_{n+1} is a known value.
And you're being asked to identify where the process fails to account for uncertainty. Do you honest to God believe that you absolutely for sure know that the variance of u is 9? Are you absolutely positive a normal distribution for u works? These are the kinds of questions it is asking you to ask.
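To make the setup concrete, here's a quick Python sketch (my own toy numbers, since the thread gives no data) of the interval you get when you take every assumption at face value: y_i = beta*x_i + u_i, u_i iid N(0, 9) with the variance treated as known, flat prior on beta, and x_{n+1} known exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (my own numbers -- the thread gives no data): y_i = beta * x_i + u_i,
# u_i ~ iid N(0, 9) with the variance treated as KNOWN, flat prior on beta.
sigma2 = 9.0
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 * x + rng.normal(0, np.sqrt(sigma2), n)

# Posterior of beta under the flat prior: N(beta_hat, sigma2 / sum(x_i^2))
beta_hat = (x @ y) / (x @ x)
post_var = sigma2 / (x @ x)

# Posterior predictive for y_{n+1} at a known x_{n+1}:
# N(beta_hat * x_new, sigma2 + x_new^2 * post_var)
x_new = 5.0
pred_mean = beta_hat * x_new
pred_sd = np.sqrt(sigma2 + x_new**2 * post_var)

# Central 95% prediction interval (a, b)
a = pred_mean - 1.96 * pred_sd
b = pred_mean + 1.96 * pred_sd
print(f"95% prediction interval: ({a:.2f}, {b:.2f})")
```

Notice that every line leans on an assumption: the known variance of 9, the normality of u, and the exactly-measured x values. Those are exactly the places the question wants you to probe.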
Okay, from my point of view I think we are meant to assume everything given is true, i.e. the TRUE data generating process for u is as described and all distributional assumptions hold. In that case I don't see how the credibility interval for y_{n+1} fails to account for uncertainty. There are only 2 possibilities I can think of:
1. We assumed a perfectly symmetrical confidence interval about y_{n+1}, so there are many other choices of a and b that give the same probability.
2. We somehow failed to account for the uncertainty in beta, since all we did was marginalise (i.e. "average") beta out in order to produce the posterior distribution for y_{n+1}.
Btw, the variance of u is 9 because the question assumes it; that is the premise, not something estimated from empirical data.
Well, if you want to ignore my suggestions, that's fine. But I don't think either of your ideas is what it's going for either. The symmetric *prediction interval* (not confidence interval) isn't failing to account for anything just because there are other ways to make the interval. And how would integrating beta out fail to account for it? You accounted for it precisely by integrating it out... It's not like you took the posterior expected value of beta and just plugged it in - your uncertainty about beta is already in the analysis, because you used the posterior distribution of beta when you did the integration.
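To see that integrating beta out really does carry beta's uncertainty, here's a small Monte Carlo sketch (my own toy data, not from the question) comparing "draw beta from its posterior, then draw y_new" against the plug-in shortcut that really would ignore beta's uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed toy model as the question: known error variance 9, flat prior on beta.
sigma2 = 9.0
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 * x + rng.normal(0, np.sqrt(sigma2), n)

beta_hat = (x @ y) / (x @ x)
post_sd = np.sqrt(sigma2 / (x @ x))

# Predict at a point outside the sample so beta's contribution is visible.
x_new = 20.0
m = 100_000

# "Integrating beta out" by simulation: draw beta from its posterior first,
# then draw y_new given that beta.  Beta's uncertainty is baked into the draws.
beta_draws = rng.normal(beta_hat, post_sd, m)
y_marginal = beta_draws * x_new + rng.normal(0, np.sqrt(sigma2), m)

# Plug-in shortcut: fix beta at its posterior mean.  THIS is what would
# ignore beta's uncertainty -- and it gives a visibly narrower interval.
y_plugin = beta_hat * x_new + rng.normal(0, np.sqrt(sigma2), m)

q_marg = np.percentile(y_marginal, [2.5, 97.5])
q_plug = np.percentile(y_plugin, [2.5, 97.5])
width_marginal = q_marg[1] - q_marg[0]
width_plugin = q_plug[1] - q_plug[0]
print(width_marginal, width_plugin)
```

The marginal interval comes out wider than the plug-in one, which is exactly the extra width that integrating beta out buys you.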
No, I don't mean to reject your suggestions - I appreciate your ideas. It's just that the question specified the variance as a given, so I thought it's not something we should doubt as possibly incorrect. Of course, if the variance is incorrect, then our entire posterior for y_{n+1} would be incorrect, and we wouldn't even be able to create a prediction interval.
Exactly. And how likely is it that you will ever know the variance? I mean seriously - you don't know the regression coefficients but somehow you know the exact variance?
If you have data why not use it to estimate the variance?
But this isn't the only area that might cause trouble. I thought I mentioned it before but here it is assumed that you measure the x values perfectly. If there is any uncertainty in the measurements that isn't taken into account in this analysis. That might be what they're going for here...
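For what it's worth, a quick simulation (entirely made-up numbers; the measurement noise level is hypothetical) shows how noise in the recorded x values wrecks the nominal 95% coverage of the standard interval:

```python
import numpy as np

rng = np.random.default_rng(2)

# Entirely made-up numbers for illustration: the analysis in the thread
# assumes x is measured exactly.  Here the recorded x carries N(0, meas_sd^2)
# noise (meas_sd = 2 is hypothetical), and the "95%" interval under-covers.
sigma2, beta, n, reps = 9.0, 2.0, 50, 2000
meas_sd = 2.0
covered = 0
for _ in range(reps):
    x_true = rng.uniform(0, 10, n)
    y = beta * x_true + rng.normal(0, np.sqrt(sigma2), n)
    x_obs = x_true + rng.normal(0, meas_sd, n)      # what we actually record

    # Standard analysis, trusting x_obs and the known variance of 9
    beta_hat = (x_obs @ y) / (x_obs @ x_obs)
    post_var = sigma2 / (x_obs @ x_obs)

    x_new_true = rng.uniform(0, 10)
    x_new_obs = x_new_true + rng.normal(0, meas_sd)
    y_new = beta * x_new_true + rng.normal(0, np.sqrt(sigma2))

    centre = beta_hat * x_new_obs
    sd = np.sqrt(sigma2 + x_new_obs**2 * post_var)
    covered += (centre - 1.96 * sd) <= y_new <= (centre + 1.96 * sd)

coverage = covered / reps
print(coverage)   # noticeably below the nominal 0.95
```

The interval is built as if x_obs were the truth, so the extra noise never enters the predictive variance and the realised coverage falls short of 95%.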
Ah, now that makes sense - I see what you mean. I'll take that into account. Thanks for your kind assistance.