Demonstrating that the Bayes estimate -> MLE

Note - I can't work out how to write LaTeX on here, and ASCII maths can be quite painful, so I have included the LaTeX source in this post and provided a link where it can be read rendered.

Also, any answers that make use of R are fine; I don't have access to anything such as Minitab, SPSS, or other packages, though.

Here's a link to the formatted LaTeX : http://mathb.in/32195

If there's anything I can do to improve the post, please let me know.

Thank you

========================



I'm not sure how to go about this, as the question just says to

* find the Bayes estimate of $\theta$ under the square loss function

* show that the derived Bayes estimate converges to the MLE as the sample size becomes large

I'm a bit confused by the phrase 'sample size becomes large', because I'm
not sure whether it refers to $n$ only or to $n$ and $x$ increasing together
(so that the proportion stays the same).

Here's the context -

I have $n$ observations and $x$ successes; $\theta$ is the underlying proportion of
successes, and this is what we're interested in.

The prior for $\theta$ is $B(\alpha , \beta)$.

This means that the posterior mean is

\begin{align*}
E(\theta \vert \vec{x}) &=
\frac{\alpha + \sum_i x_i}{\alpha + \beta + n} \\
&=
\frac{\alpha + \beta}{\alpha + \beta + n}E(\theta)
+
\frac{n}{\alpha + \beta + n}\hat{\theta}
\end{align*}

where

\begin{align*}
E(\theta) = \frac{\alpha}{\alpha + \beta} , \;\;\;\;
\hat{\theta} = \frac{\sum_i x_i}{n}
\end{align*}

So it seems that I need to consider $\vec{x}$ here, as I need it for
$\hat{\theta}$?
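
As a quick numerical sanity check of the weighted-average form above, I put together a short Python snippet (I know I said R is fine for answers - Python is just what I had to hand, and the numbers, a $B(2, 3)$ prior with the observed proportion held at $0.4$, are made up purely for illustration). It seems to confirm that the posterior mean matches the decomposition and drifts towards $\hat{\theta}$ as $n$ grows:

```python
# Sanity check of the posterior-mean decomposition above.
# Made-up assumptions: Beta(2, 3) prior, observed proportion x/n held at 0.4.
alpha, beta_ = 2.0, 3.0
prior_mean = alpha / (alpha + beta_)            # E(theta)

for n in [10, 100, 1000, 10000]:
    x = 0.4 * n                                 # successes, keeping x/n fixed
    theta_hat = x / n                           # MLE
    post_mean = (alpha + x) / (alpha + beta_ + n)
    weighted = ((alpha + beta_) / (alpha + beta_ + n)) * prior_mean \
             + (n / (alpha + beta_ + n)) * theta_hat
    print(n, round(post_mean, 5), round(weighted, 5), theta_hat)
```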

Anyway, the square loss function gives

\begin{align*}
E((\hat{\theta} - \theta)^2 \vert \vec{x})
&= E(\hat{\theta}^2 - 2\hat{\theta}\theta + \theta^2 \vert \vec{x} ) \\
&= \hat{\theta}^2 - 2 \hat{\theta} E(\theta \vert \vec{x}) + E(\theta^2 \vert \vec{x})
- E^2(\theta \vert \vec{x}) + E^2(\theta \vert \vec{x}) \\
&= \left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x})
\end{align*}

$\longrightarrow \min_{\hat{\theta}} E( (\hat{\theta} - \theta)^2 \vert \vec{x}) = Var(\theta \vert \vec{x})$, attained at $\hat{\theta} = E(\theta \vert \vec{x})$, so the Bayes estimate is the posterior mean.
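
To convince myself that the posterior mean really is the minimiser here, I also tried a small Monte Carlo check - again with numbers I made up purely for illustration ($B(2, 3)$ prior, $n = 20$, $x = 8$):

```python
import numpy as np

# Monte Carlo check that E((t - theta)^2 | x) is minimised at t = E(theta | x).
# Made-up assumptions: Beta(2, 3) prior, n = 20 observations, x = 8 successes.
rng = np.random.default_rng(0)
alpha, beta_, n, x = 2.0, 3.0, 20, 8
theta = rng.beta(alpha + x, beta_ + n - x, size=100_000)   # draws from the posterior

candidates = np.linspace(0.01, 0.99, 99)                   # candidate estimates t
risk = ((candidates[:, None] - theta[None, :]) ** 2).mean(axis=1)
print("minimiser of expected squared loss:", candidates[np.argmin(risk)])
print("posterior mean:", (alpha + x) / (alpha + beta_ + n))
```

If I've set this up correctly, the minimiser should land at (essentially) the posterior mean, matching the algebra above.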

The maximum likelihood estimate for $\theta$ is found from the likelihood $p^x(1 - p)^{n-x}$,
which is maximal at $p = x/n$.
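
A quick grid check of that likelihood (made-up numbers again, $n = 20$ and $x = 8$, so the MLE should be $0.4$):

```python
import numpy as np

# Grid check that the binomial likelihood p^x (1 - p)^(n - x) peaks at p = x/n.
# Made-up assumptions: n = 20, x = 8, so the MLE should be 0.4.
n, x = 20, 8
p = np.linspace(0.001, 0.999, 999)
log_lik = x * np.log(p) + (n - x) * np.log(1 - p)            # log-likelihood
print("argmax of the likelihood:", p[np.argmax(log_lik)])
```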

I really don't see how the following works, though: that as $n \to \infty$ we have

\begin{align*}
\left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x})
\to x/n
\end{align*}

The right hand side of this will tend to zero, or should I be holding the right
hand side fixed and adjusting the LHS?

If I sub some terms in then I get

\begin{align*}
\left( \hat{\theta} - E(\theta \vert \vec{x}) \right)^2 + Var(\theta \vert \vec{x})
&=
\left( \hat{\theta} -
\left(
\frac{\alpha + \beta}{\alpha + \beta + n}E(\theta)
+
\frac{n}{\alpha + \beta + n}\hat{\theta}
\right)
\right)^2 + Var(\theta \vert \vec{x}) \\
&=
\left(
\frac{\sum_i x_i}{n}
-
\left(
\frac{\alpha (\alpha + \beta)}{(\alpha + \beta)(\alpha + \beta + n)}
+
\frac{n}{\alpha + \beta + n}
\frac{\sum_i x_i}{n}
\right)
\right)^2 + Var(\theta \vert \vec{x})
\end{align*}

I don't really see why this doesn't just give $Var(\theta \vert \vec{x})$ as $n$ increases.
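
To see what the two terms actually do numerically, I computed both with the same made-up $B(2, 3)$ prior as before, holding $x/n$ fixed at $0.4$ while $n$ grows:

```python
# Behaviour of the two terms as n grows, with x/n held fixed.
# Made-up assumptions: Beta(2, 3) prior, observed proportion 0.4.
alpha, beta_ = 2.0, 3.0

for n in [10, 100, 1000, 10000]:
    x = 0.4 * n
    theta_hat = x / n
    a_post, b_post = alpha + x, beta_ + n - x      # posterior is Beta(a_post, b_post)
    post_mean = a_post / (a_post + b_post)
    post_var = a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    bias_sq = (theta_hat - post_mean) ** 2
    print(n, bias_sq, post_var, post_mean)
```

If I've done this right, both the squared difference and $Var(\theta \vert \vec{x})$ seem to shrink towards zero, while $E(\theta \vert \vec{x})$ heads towards $x/n$ - which makes me suspect I'm misreading what the convergence statement is actually about.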

Hopefully the error that I'm making is quite obvious, so I'll leave this for now and wait for assistance.

Thanks