Difference of expectations of RVs given arbitrary pdfs

#1
I've been beating my head on this for a while...

Let X and Y have pdfs f and g respectively, such that [math]
\begin{cases}
f(x) \geq g(x) & \mbox{if } x \leq a \\
f(x) \leq g(x) & \mbox{if } x > a
\end{cases} [/math]. Show that [math] E[X] \leq E[Y] [/math].

As the expectation is a probability-weighted average, it stands to reason that if more "weight" is given to larger values (i.e., larger values are more likely to occur), then the expectation is larger. I just don't know how to show it mathematically. So far, I've tried looking at the difference of the pdfs:
[math] E[Y]-E[X] = \int_{-\infty}^{\infty} t g(t)dt - \int_{-\infty}^{\infty} t f(t)dt = \int_{-\infty}^{\infty} t \cdot [g(t)-f(t)]dt = \int_{-\infty}^{\infty} t \cdot h(t)dt [/math]. I can show that h(t) integrates to zero (as the difference of the integrals of two pdfs), so [math]\int_{-\infty}^a h(t)dt=-\int_a^{\infty} h(t)dt[/math], but I don't know where to go from there.
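
Not a proof, of course, but a quick numerical sanity check in Python seems to back up the intuition (I just picked X ~ N(0,1) and Y ~ N(1,1) as an example pair whose densities cross exactly once, at a = 0.5):

[code]
import numpy as np
from scipy.stats import norm

# Grid fine enough for a crude Riemann-sum approximation
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]

f = norm.pdf(t, loc=0.0, scale=1.0)  # pdf of X ~ N(0, 1)
g = norm.pdf(t, loc=1.0, scale=1.0)  # pdf of Y ~ N(1, 1)

a = 0.5  # the two densities cross once, at t = 0.5
assert np.all(f[t <= a] >= g[t <= a] - 1e-12)  # f >= g for t <= a
assert np.all(f[t > a] <= g[t > a] + 1e-12)    # f <= g for t > a

EX = np.sum(t * f) * dt  # approximates E[X]
EY = np.sum(t * g) * dt  # approximates E[Y]
print(EX, EY)            # roughly 0.0 and 1.0
assert EX <= EY          # E[X] <= E[Y], as claimed
[/code]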

Any thoughts are appreciated!
 

BGM

TS Contributor
#2
There may be a simpler and more direct method, but you may need to make use of the following formula:

[math] E[X] = -\int_{-\infty}^0 F(x)dx + \int_0^{+\infty} [1 - F(x)]dx [/math]
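
In case that formula is unfamiliar: it is the standard tail-integral representation of the mean. A sketch of where it comes from, assuming [math] E[X] [/math] is finite, is to write the tails as probabilities and swap the order of integration:

[math] \int_0^{+\infty} [1 - F(x)]dx = \int_0^{+\infty} P(X > x)dx = E[\max(X,0)], \qquad \int_{-\infty}^0 F(x)dx = \int_{-\infty}^0 P(X \leq x)dx = E[\max(-X,0)] [/math]

and the difference of these two terms is [math] E[X] [/math].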

Then you may obtain the following:

[math] E[Y] - E[X] = \int_{-\infty}^{+\infty} [F(t) - G(t)]dt [/math]

where [math] F, G [/math] are the CDFs of [math] X, Y [/math] respectively.
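
To spell out that step: subtracting the two representations of the means term by term,

[math] E[Y] - E[X] = \int_{-\infty}^0 [F(t) - G(t)]dt + \int_0^{+\infty} [(1 - G(t)) - (1 - F(t))]dt = \int_{-\infty}^{+\infty} [F(t) - G(t)]dt [/math]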

Next you may show that the integrand tends to zero as [math] t \to \pm\infty [/math] (property of CDFs) and, by the usual calculus check with the given property, that it is non-decreasing for [math] t \leq a [/math] and non-increasing for [math] t > a [/math] (its derivative is [math] f(t) - g(t) [/math], so it attains its maximum at [math] a [/math]). Together with the zero limits at both ends, this shows that the integrand, and therefore the integral, is non-negative, which completes the proof.
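
Equivalently, here is a sketch of why the integrand is non-negative, written directly in terms of the densities and the given crossing condition:

[math] t \leq a: \quad F(t) - G(t) = \int_{-\infty}^t [f(s) - g(s)]ds \geq 0, \qquad t > a: \quad F(t) - G(t) = \int_t^{+\infty} [g(s) - f(s)]ds \geq 0 [/math]

so [math] F(t) \geq G(t) [/math] on the whole real line, and the integral is non-negative.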