Proof of the day

Englund

TS Contributor
#1
In this thread we post one proof a day (or more if you can't wait). I'll start by proving that

[math]V[b|X]=\sigma^2(X'X)^{-1}[/math]

in the linear regression model. We can write the beta estimator as

[math](X'X)^{-1}X'y=(X'X)^{-1}X'(X\beta+\epsilon)=[/math][math](X'X)^{-1}X'X\beta+(X'X)^{-1}X'\epsilon=\beta+(X'X)^{-1}X'\epsilon[/math].

Then we have that

[math]V[b|X]=E[(b-\beta)(b-\beta)'|X]=E[(\beta+(X'X)^{-1}X'\epsilon-\beta)(\beta+(X'X)^{-1}X'\epsilon-\beta)'|X]=[/math][math]E[((X'X)^{-1}X'\epsilon)((X'X)^{-1}X'\epsilon)'|X]=E[(X'X)^{-1}X'\epsilon\epsilon'X(X'X)^{-1}|X]=[/math][math](X'X)^{-1}X'E[\epsilon\epsilon'|X]X(X'X)^{-1}=[/math][math](X'X)^{-1}X'\sigma^2IX(X'X)^{-1}=\sigma^2(X'X)^{-1}X'X(X'X)^{-1}=\sigma^2(X'X)^{-1}[/math].

Here,

[math]E[\epsilon\epsilon'|X]=\sigma^2I[/math]

follows from one of the assumptions of the classical linear model: the spherical disturbances assumption.
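
For anyone who wants to convince themselves numerically, here is a minimal Monte Carlo sketch in Python/NumPy (the dimensions, the "true" [math]\beta[/math] and [math]\sigma[/math] are just made-up illustration values, not from the proof): holding [math]X[/math] fixed and redrawing the errors, the empirical covariance of the OLS estimates should come out close to [math]\sigma^2(X'X)^{-1}[/math].

[CODE]
import numpy as np

rng = np.random.default_rng(0)

n, k, sigma = 200, 3, 2.0           # illustration values
X = rng.normal(size=(n, k))         # design matrix, held fixed (we condition on X)
beta = np.array([1.0, -0.5, 2.0])   # "true" coefficients (arbitrary)
XtX_inv = np.linalg.inv(X.T @ X)

# Monte Carlo: redraw the errors many times, keep X fixed, collect b = (X'X)^{-1}X'y
reps = 20000
b_draws = np.empty((reps, k))
for r in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    b_draws[r] = XtX_inv @ (X.T @ y)

print(np.round(np.cov(b_draws, rowvar=False), 3))   # empirical V[b|X]
print(np.round(sigma**2 * XtX_inv, 3))              # theoretical sigma^2 (X'X)^{-1}
[/CODE]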
 

spunky

Super Moderator
#2
me likes this. most of my proofs would come from the field of psychometrics or quantitative psychology though (mostly factor analysis and structural equation modelling).

here i'm doing the (rather simple) proof of how the linear factor analysis model can be parameterised as a covariance structure model. it's relevant because as a linear factor model it is unsolvable, but as a covariance structure model it is possible to obtain parameter estimates.

let the observed score [MATH]x[/MATH] be defined by the linear factor model [MATH]x = \Lambda F+\epsilon[/MATH]. since it is known that (in the case of multivariate normality with mean zero) [MATH]E(xx')=\Sigma[/MATH], it trivially follows that:

[MATH]
xx' = (\Lambda F+\epsilon)(\Lambda F+\epsilon)'
[/MATH]
[MATH]
xx' = (\Lambda F+\epsilon)(F'\Lambda' + \epsilon')
[/MATH]
[MATH]
xx' = \Lambda FF'\Lambda' + \epsilon F'\Lambda'+ \Lambda F \epsilon' + \epsilon\epsilon'
[/MATH]

so taking the expectation of both sides:

[MATH]
E(xx') = E(\Lambda FF'\Lambda') + 0 + 0 + E(\epsilon\epsilon')
[/MATH]

which happens because the errors are random and assumed uncorrelated with the Factors and estimated loadings. Now, by linearity of expectation and substituting the covariance matrices of the Factors and of the errors, we can see that:

[MATH]
E(xx') = \Lambda E(FF')\Lambda' + E(\epsilon\epsilon')
[/MATH]

[MATH]
\Sigma=\Lambda \Phi \Lambda' + \Psi
[/MATH]

which is known as the fundamental equation of Factor Analysis.
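
a quick numerical illustration of the identity in Python/NumPy (not part of the proof; the loadings, factor covariance and unique variances below are arbitrary made-up values): build [MATH]\Lambda[/MATH], [MATH]\Phi[/MATH] and [MATH]\Psi[/MATH], simulate zero-mean scores from the linear factor model, and check that the sample covariance of [MATH]x[/MATH] matches [MATH]\Lambda \Phi \Lambda' + \Psi[/MATH].

[CODE]
import numpy as np

rng = np.random.default_rng(1)

p, m = 6, 2                                   # observed variables, factors (arbitrary)
Lambda = rng.normal(size=(p, m))              # made-up loading matrix
Phi = np.array([[1.0, 0.3], [0.3, 1.0]])      # factor covariance matrix
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))  # diagonal unique-variance matrix

implied = Lambda @ Phi @ Lambda.T + Psi       # Sigma = Lambda Phi Lambda' + Psi

# simulate zero-mean scores x = Lambda F + eps and compare E(xx') to the implied Sigma
n = 200000
F = rng.multivariate_normal(np.zeros(m), Phi, size=n)
eps = rng.multivariate_normal(np.zeros(p), Psi, size=n)
X = F @ Lambda.T + eps

print(np.round(np.cov(X, rowvar=False) - implied, 2))  # approximately the zero matrix
[/CODE]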
 

Englund

TS Contributor
#3
Okay, since this day is soon over (at least according to Swedish time) and no one has posted a proof yet today, I'll post another one. I'll give a very simple, and possibly boring, proof this time: I'll prove that [math]\bar{x}[/math] is the value of [math]a[/math] that minimizes the sum [math]\sum_{i=1}^n{(x_i-a)^2}[/math] (1).

By taking the first derivative with respect to a and setting it equal to zero, we get [math]\sum_{i=1}^n{-2(x_i-a)}=0 \Leftrightarrow -2\sum_{i=1}^n{x_i}+2na=0 \Leftrightarrow \sum_{i=1}^n{x_i}=na \Leftrightarrow \bar{x}=a[/math].

By checking the second-order condition we see that the second derivative equals 2n, which is always positive, so now we know that [math]\bar{x}[/math] is at least a local minimum. By investigating (1) it is easily seen that it is also the global minimum.
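
For anyone who wants a quick sanity check of this in Python/NumPy (just a sketch with a made-up sample), a brute-force grid search over candidate values of [math]a[/math] lands essentially on [math]\bar{x}[/math]:

[CODE]
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=100)     # made-up sample

# brute-force search: evaluate sum((x_i - a)^2) on a fine grid of candidate a values
grid = np.linspace(x.min(), x.max(), 10001)
sse = ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
a_star = grid[np.argmin(sse)]

print(a_star, x.mean())   # agree up to the grid resolution
[/CODE]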
 

Dason

Ambassador to the humans
#4
I prefer the version that doesn't require the use of calculus.

[math]\sum (x_i - a)^2 = \sum (x_i - \bar{x} + \bar{x} - a)^2 = \sum \left[ (x_i - \bar{x})^2 + (\bar{x} - a)^2 + 2(x_i - \bar{x})(\bar{x} - a) \right][/math]

[math] = \sum (x_i - \bar{x})^2 + \sum(\bar{x} - a)^2 + 2\sum(x_i - \bar{x})(\bar{x} - a)[/math]

Now consider the last summation. Note that in that sum both [math]a[/math] and [math]\bar{x}[/math] are constant, so we can pull them out:

[math] = 2(\bar{x} - a) \sum (x_i - \bar{x})[/math]
We know that this sum is equal to 0, so the third summation disappears.

We are left with

[math]\sum (x_i - a)^2 = \sum (x_i - \bar{x})^2 + n(\bar{x} - a)^2[/math]

The first summation we can't control, and the second term is always non-negative, so the minimum occurs when we can make it equal to 0 - which happens when [math]a=\bar{x}[/math].

Now clearly I need a few more details to make it fully rigorous, but I like that version a little bit more because it also hints at what we do in ANOVA when decomposing the sums of squares.
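
Here's a small numerical illustration of that decomposition (a sketch with arbitrary made-up data, not part of the proof): the left- and right-hand sides of the identity agree up to floating-point error for any choice of [math]a[/math].

[CODE]
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=50)   # made-up sample
a = 1.7                   # any candidate value

lhs = np.sum((x - a) ** 2)
rhs = np.sum((x - x.mean()) ** 2) + len(x) * (x.mean() - a) ** 2

print(lhs, rhs)           # identical up to floating-point error
[/CODE]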
 

spunky

Super Moderator
#5
a while ago (before Englund became an MVC) I posted a proof about another result in factor analysis. I thought it would be nice to resurrect it (briefly) and add it here to our small (but growing) compendium of proofs. the original thread is here

http://www.talkstats.com/showthread.php/45041-Factor-Analysis-PROOF?highlight=

and the proof goes like this:

Let [MATH]\bf{S}[/MATH] be a covariance matrix with eigenvalue-eigenvector pairs ([MATH]\lambda_1, \mathbf{e}_1[/MATH]), ([MATH]\lambda_2, \mathbf{e}_2[/MATH]), ..., ([MATH]\lambda_p, \mathbf{e}_p[/MATH]), where
[MATH]\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_p[/MATH]. Let [MATH]m<p[/MATH] and define:

[MATH]\bf{L} = \{l_{ij}\} = \left[\sqrt{\lambda_1 }\mathbf{e}_1\ |\ \sqrt{\lambda_2} \mathbf{e}_2\ |\ ...\ |\ \sqrt{\lambda_m} \mathbf{e}_m \right] [/MATH]

and:

[MATH]
\mathbf{\Psi} =
\left(
\begin{array}{cccc}
\psi_1 & 0 & \cdots & 0 \\
0 & \psi_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \psi_p \\
\end{array}
\right)
\text{ with } \psi_i = s_{ii} - \sum_{j=1}^{m} l_{ij}^2[/MATH]

Then, PROVE:

[MATH]
\text{Sum of squared entries of } (\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi})) \le \lambda_{m+1}^2 + \cdots + \lambda_p^2[/MATH]

Spunky's attempt at a proof:

By definition of [MATH]\psi_i[/MATH], we know that the diagonal of [MATH](\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi}))[/MATH] is all zeroes. Since
[MATH](\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi}))[/MATH] and [MATH](\mathbf{S} - \mathbf{LL'})[/MATH] have the same elements except on the diagonal, we know that

[MATH]\text{(Sum of squared entries of } (\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi}))) \leq \text{ Sum of squared entries of } (\mathbf{S} - \mathbf{LL'}) [/MATH]

Since [MATH]\mathbf{S} = \lambda_1 \mathbf{e}_1 \mathbf{e}'_1 + \cdots + \lambda_p \mathbf{e}_p \mathbf{e}'_p [/MATH]
and [MATH]\mathbf{LL'} = \lambda_1 \mathbf{e}_1 \mathbf{e}'_1 + \cdots + \lambda_m \mathbf{e}_m \mathbf{e}'_m [/MATH], then it follows that
[MATH]\mathbf{S} - \mathbf{LL'} = \lambda_{m+1} \mathbf{e}_{m+1} \mathbf{e}'_{m+1} + \cdots + \lambda_p \mathbf{e}_p \mathbf{e}'_p[/MATH]

Writing it in matrix form, this is saying [MATH]\mathbf{S} - \mathbf{LL'} = \mathbf{P}_2 \mathbf{\Lambda}_2 \mathbf{P}'_2[/MATH] where
[MATH]\mathbf{P}_2 = [ \mathbf{e}_{m+1} | \cdots | \mathbf{e}_p ][/MATH] and [MATH]\mathbf{\Lambda}_2 = Diag(\lambda_{m+1}, \cdots, \lambda_{p})[/MATH]

Then, the following is true:

[MATH]\text{Sum of squared entries of }(\mathbf{S}- \mathbf{LL'})= \text{tr}((\mathbf{S} - \mathbf{LL'}) (\mathbf{S} - \mathbf{LL'})')=[/MATH]

[MATH]\text{tr} (( \mathbf{P}_2 \mathbf{\Lambda}_2 \mathbf{P}'_2)( \mathbf{P}_2 \mathbf{\Lambda}_2 \mathbf{P}'_2)')=\text{tr}( \mathbf{P}_2 \mathbf{\Lambda}_2\mathbf{\Lambda}_2 \mathbf{P}'_2)[/MATH]

[MATH]=\text{tr}(\mathbf{\Lambda}_2\mathbf{\Lambda}_2 \mathbf{P}'_2 \mathbf{P}_2)=\text{tr}(\mathbf{\Lambda}_2\mathbf{\Lambda}_2)=\lambda_{m+1}^2 + \cdots + \lambda_p^2.[/MATH]

All the [MATH]\mathbf{P}_2[/MATH] terms disappear because the trace is invariant under cyclic permutations and, by the definition of [MATH]\mathbf{P}_2[/MATH], we know that [MATH]\mathbf{P}_2'\mathbf{P}_2=\mathbf{I}[/MATH].
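
a quick numerical check of the bound in Python/NumPy (just a sketch with a randomly generated covariance matrix and an arbitrary choice of [MATH]m[/MATH], not part of the proof):

[CODE]
import numpy as np

rng = np.random.default_rng(4)

# random symmetric positive definite matrix playing the role of S
A = rng.normal(size=(6, 6))
S = A @ A.T
m = 2                                       # arbitrary number of retained components

# eigen-pairs of S sorted by decreasing eigenvalue
eigval, eigvec = np.linalg.eigh(S)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# L and the diagonal Psi exactly as defined in the statement above
L = eigvec[:, :m] * np.sqrt(eigval[:m])
Psi = np.diag(np.diag(S) - np.sum(L ** 2, axis=1))

residual = S - (L @ L.T + Psi)
lhs = np.sum(residual ** 2)                 # sum of squared entries
rhs = np.sum(eigval[m:] ** 2)               # lambda_{m+1}^2 + ... + lambda_p^2

print(lhs, rhs, lhs <= rhs + 1e-10)         # the bound should hold
[/CODE]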
 

Englund

TS Contributor
#6
a while ago (before Englund became an MVC)
Time wasn't even defined before I became an MVC, so that's by definition impossible ;)
I posted a proof about another result in factor analysis. I thought it would be nice to resurrect it (briefly) and add it here to our small (but growing) compendium of proofs.

and the proof goes like this:
Very nice. If you keep posting stuff on FA I'll be forced to get more familiar with it, which is good :)
 

spunky

Super Moderator
#7
If you keep posting stuff on FA I'll be forced to get more familiar with it, which is good :)
i don't quite understand why, but pretty much NO ONE in the Statistics world even touches on Factor Analysis. when it comes to dimension reduction techniques, almost all of the undergrad stats textbooks i've seen that deal with intro to multivariate analysis stop at principal components. there may be some small subsection in some nameless appendix that says something about Factor Analysis... but that's it!

WHY!??!
 
#14
Nice thread, so I'll make my debut here: the derivation of the ridge estimator in the linear regression model.

[math]
\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{u}, \quad \mathbf{u} \sim N_n(\mathbf{0},\sigma_u^2\mathbf{I}_n)
[/math]

with strong correlation patterns among the column vectors of the data matrix [math] \mathbf{X} \in Mat_{n,k}(\mathbb{R}) [/math]. The problem with multicollinearity is that single components of the estimate of the parameter vector [math] \boldsymbol{\beta} \in \mathbb{R}^k [/math] can take absurdly large values. So the general idea is to restrict the length of said vector to a prespecified positive real number. Let this restriction be denoted by [math] \left\| \boldsymbol{\beta} \right\|_2^2=c [/math], where [math] \left\|\cdot \right\|_2 [/math] is just the Euclidean norm on [math] \mathbb{R}^k[/math].

Eventually one faces the restricted least squares problem

[math]
Q_n(\boldsymbol{\beta},\lambda) := \left\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\right\|_2^2 + \lambda (\left\|\boldsymbol{\beta}\right\|_2^2-c) \rightarrow \min_{\boldsymbol{\theta} \in \mathbf{\Theta}}
[/math]

where the Lagrange parameter [math]\lambda[/math] is assumed to be positive and [math]\mathbf{\Theta} \subseteq \mathbb{R}^k \times \mathbb{R}_{>0}[/math] is the parameter space associated with [math]\boldsymbol{\theta}=(\boldsymbol{\beta}',\lambda)'[/math]. The optimization problem is equivalent to

[math]
Q_n(\boldsymbol{\beta},\lambda):= (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}) + \lambda (\boldsymbol{\beta}'\boldsymbol{\beta}-c) \rightarrow \min_{\boldsymbol{\theta} \in \mathbf{\Theta}}
[/math]

Taking the derivative with respect to [math] \boldsymbol{\beta} [/math] yields

[math]
\displaystyle \frac{\partial}{\partial \boldsymbol{\beta}} Q_n(\boldsymbol{\beta},\lambda) = -2\mathbf{X}'(\mathbf{y}-\mathbf{X}\hat{\boldsymbol{\beta}})+ 2\lambda \hat{\boldsymbol{\beta}}
[/math]

This leads to the first-order condition (note that the hats can already be set, since the potential minimizers of the problem above are given implicitly by this equation)

[math]
-\mathbf{X}'\mathbf{y} + \mathbf{X}'\mathbf{X}\hat{\boldsymbol{\beta}} + \lambda \hat{\boldsymbol{\beta}} = \mathbf{0}
[/math]

Arranging terms leads to the modified normal equations

[math]
(\mathbf{X}'\mathbf{X}+ \lambda \mathbf{I}_k)\hat{\boldsymbol{\beta}} = \mathbf{X}'\mathbf{y}
[/math]

Since [math] \mathbf{X}'\mathbf{X} [/math] is at least positive semidefinite and [math] \lambda \mathbf{I}_k [/math] is positive definite, it follows that*

[math]
det(\mathbf{X}'\mathbf{X}+ \lambda \mathbf{I}_k) \geq det(\mathbf{X}'\mathbf{X})+det(\lambda\mathbf{I}_k) = det(\mathbf{X}'\mathbf{X}) + \lambda^k >0
[/math]

so that [math] (\mathbf{X}'\mathbf{X}+ \lambda \mathbf{I}_k) [/math] is an invertible matrix even if the data matrix is of less than full column rank. This finally yields the ridge estimator in its known form

[math]
\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X}+ \lambda \mathbf{I}_k)^{-1}\mathbf{X}'\mathbf{y}
[/math]

Also, this is the unique global minimizer of [math] Q_n [/math] in [math]\boldsymbol{\beta}[/math], due to the fact that the problem under consideration is just a sum of convex functions and [math] \hat{\boldsymbol{\beta}} [/math] is the only local minimizer, so one doesn't need to check the second-order condition and the associated Hessians.

*One can find a good proof of that inequality in Magnus, J.R. & Neudecker, H. (1999). Matrix Differential Calculus. Wiley and Sons, p. 227, Theorem 28.
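
A small numerical sketch in Python/NumPy (the dimensions, [math]\lambda[/math] and the deliberately collinear design below are made-up illustration values): even when [math]\mathbf{X}'\mathbf{X}[/math] is singular, [math]\mathbf{X}'\mathbf{X}+\lambda\mathbf{I}_k[/math] can be inverted and the ridge estimator is computable.

[CODE]
import numpy as np

rng = np.random.default_rng(5)

n, k, lam = 100, 3, 1.0                      # illustration values
z = rng.normal(size=n)
# deliberately collinear design: the third column is an exact copy of the first
X = np.column_stack([z, rng.normal(size=n), z])
y = X @ np.array([1.0, 2.0, 1.0]) + rng.normal(size=n)

# X'X is singular here (rank 2 < k), so the ordinary normal equations
# have no unique solution, but X'X + lam*I_k is positive definite
print(np.linalg.matrix_rank(X.T @ X))

beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
print(beta_ridge)                            # the ridge estimator
[/CODE]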
 

Dragan

Super Moderator
#15
The Pearson product-moment coefficient of correlation can be interpreted as the cosine of the angle between variable vectors in [math]n[/math]-dimensional space. Here, I will show the relationship between the Pearson and Spearman (rank-based) correlation coefficients for the bivariate normal distribution through the following series:

[math]\sum_{n=1}^{\infty }\frac{\cos nx}{n} [/math].

If we let [math] z=\cos x+i\sin x[/math], then

[math]\sum_{n=1}^{m}y^{n-1}z^{n}=\frac{z\left \{ 1-\left ( yz \right )^{m} \right \}}{1-yz} [/math]

where it follows for [math] \left | y \right |<1 [/math],

[math] \sum_{n=1}^{\infty }y^{n-1}\left ( \cos nx+i\sin nx \right )=\frac{\cos x+i\sin x}{1-y\cos x-yi\sin x} [/math]

[math] =\frac{\left ( \cos x-y \right )+i\sin x}{1-2y\cos x+y^{2}} [/math], so that, taking real parts,

[math] \sum_{n=1}^{\infty }y^{n-1}\cos nx=\frac{\cos x-y}{1-2y\cos x+y^{2}} [/math].

This series is uniformly convergent for all values of [math]x[/math] and for [math] \left | y \right |\leq \rho<1 [/math]. Hence, integrating with respect to [math]y[/math], where [math] 0<y<1 [/math], gives

[math] \sum_{n=1}^{\infty }y^{n}\frac{\cos nx}{n} [/math]

[math] =\int_{0}^{y}\frac{\cos x-t}{1-2t\cos x+t^{2}}dt [/math]

[math] =-\frac{1}{2}\ln \left ( 1-2y\cos x+y^{2} \right ) [/math].

Suppose that [math] x [/math] is neither zero nor a multiple of [math] 2\pi [/math].

Then the series [math]\sum_{n=1}^{\infty }\frac{\cos nx}{n} [/math] is convergent and, for [math] 0\leq y\leq 1 [/math], the sequence [math] y^{n} [/math] is positive, monotonically decreasing, and bounded. As such the series:

[math] \sum_{n=1}^{\infty }y^{n}\frac{\cos nx}{n} [/math]

is therefore uniformly convergent on the interval [math] 0\leq y\leq 1 [/math].

Subsequently, letting [math] y\rightarrow 1 [/math], it follows that if [math] x [/math] is neither [math] 0 [/math] nor a multiple of [math] 2\pi [/math] we have

[math] \sum_{n=1}^{\infty }\frac{\cos nx}{n} =-\frac{1}{2}\ln \left ( 2-2\cos x \right ) [/math]

[math] =-\ln \left ( 2\sin \frac{1}{2} x\right ) [/math].

Setting [math] x=\frac{\pi }{3}r_{s} [/math] and exponentiating (applying [math] e^{-(\cdot)} [/math] to both sides) gives the relationship (for large sample sizes) between the Pearson and Spearman correlation coefficients as:

[math] r_{p}=2\sin\left ( \frac{\pi }{6}r _{s}\right ) [/math]

for the bivariate normal distribution.
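
A quick simulation check of this large-sample relationship in Python with SciPy (the population correlation [math]\rho[/math] and sample size below are arbitrarily chosen): for bivariate normal data, the sample Pearson correlation and [math]2\sin(\pi r_s/6)[/math] computed from the sample Spearman correlation should nearly coincide.

[CODE]
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

rho = 0.6                                    # chosen population Pearson correlation
n = 500000
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

r_p, _ = stats.pearsonr(x, y)                # sample Pearson correlation
r_s, _ = stats.spearmanr(x, y)               # sample Spearman (rank) correlation

print(r_p, 2 * np.sin(np.pi * r_s / 6))      # both should be close to rho
[/CODE]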