maximizing probability that one variable is greater than another

Vova

New Member
#1
hi,
There is a random variable X with a known distribution on its support interval, say, [0,a].

I need to find the distribution function of another, independent random variable Y with a given expected value E(Y), distributed on the same interval [0,a], such that P(Y>X) is maximal.

Here http://math.stackexchange.com/quest...ability-that-one-random-variable-is-less-than it is explained how to calculate the probability that one variable is greater than another, but how can it be maximized?

In my particular example X has a point mass at zero and is uniformly distributed on (0,a]. My candidate solution Y has a point mass at a and is uniformly distributed on [0,a), but I'm unable to show that this Y indeed maximizes P(Y>X). Also, as I've said before, I'm looking for a general method to determine Y.
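
For concreteness, here is a small simulation sketch of this setup (the values of a, p and E(Y) below are placeholders, not part of the problem):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a, p, mu = 1.0, 0.3, 0.7  # placeholders: support [0, a], P(X = 0) = p, E(Y) = mu

# X: point mass p at 0, otherwise uniform on (0, a]
x = np.where(rng.random(n) < p, 0.0, rng.uniform(0.0, a, n))

# Candidate Y: point mass q at a, otherwise uniform on [0, a);
# E(Y) = q*a + (1 - q)*a/2 = mu gives q = 2*mu/a - 1 (needs mu >= a/2)
q = 2 * mu / a - 1
y = np.where(rng.random(n) < q, a, rng.uniform(0.0, a, n))

print("estimated P(Y > X):", (y > x).mean())
[/code]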

Thanks in advance.
 

BGM

TS Contributor
#2
General formulation:

Given a random variable [math] X [/math] with CDF [math] F_X [/math], find a random variable [math] Y [/math] (equivalently, its CDF [math] F_Y [/math]) to

Maximize
[math] \Pr\{Y > X\} = \int_{-\infty}^{+\infty} \int_{-\infty}^y dF_X(x)dF_Y(y) [/math]

subject to

[math] \int_{-\infty}^{+\infty} ydF_Y(y) = \mu [/math]

where [math] \mu < +\infty [/math] is a given constant.

In a general mathematical context this should be a problem of functional optimization; I cannot go into the details here, as I do not know much about it either.
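
That said, for any concrete candidate [math] F_Y [/math] the objective is easy to estimate numerically. A minimal Monte Carlo sketch (the function and sampler names are mine):

[code]
import numpy as np

def prob_y_gt_x(sample_x, sample_y, n=1_000_000, seed=0):
    """Monte Carlo estimate of P(Y > X) for independent X and Y.

    sample_x, sample_y: callables mapping (rng, n) to n i.i.d. draws.
    """
    rng = np.random.default_rng(seed)
    return float((sample_y(rng, n) > sample_x(rng, n)).mean())

# Example: X uniform on [0, 1], Y uniform on [0.2, 0.8]; the estimate
# should be close to E(Y) = 0.5 since P(X < y) = y here
print(prob_y_gt_x(lambda rng, n: rng.uniform(0.0, 1.0, n),
                  lambda rng, n: rng.uniform(0.2, 0.8, n)))
[/code]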


For your particular example, I can try to give a heuristic guess.

Let the point mass of [math] X [/math] at [math] 0 [/math] be [math] p [/math].

Let's test some trivial cases first:


If you set [math] Y = \mu [/math] to be a constant, then

[math] \Pr\{Y > X\} = \Pr\{X < \mu\} = p + (1 - p)\frac {\mu} {a} [/math]
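
A quick numerical check of this case (a sketch; the values [math] a = 1, p = 0.3, \mu = 0.6 [/math] are arbitrary):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a, p, mu = 1.0, 0.3, 0.6  # arbitrary illustration values

# X: point mass p at 0, otherwise uniform on (0, a]
x = np.where(rng.random(n) < p, 0.0, rng.uniform(0.0, a, n))

print((mu > x).mean())       # Monte Carlo estimate of P(X < mu)
print(p + (1 - p) * mu / a)  # closed form: 0.3 + 0.7 * 0.6 = 0.72
[/code]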



If you set [math] Y [/math] to be a discrete random variable with 2 support points [math] \{y_1, y_2\} [/math] such that [math] 0 < y_1 < \mu < y_2 < a [/math] and

[math] p_Yy_1 + (1 - p_Y)y_2 = \mu [/math]

then

[math] \Pr\{Y > X\} = \Pr\{X < y_1\}p_Y + \Pr\{X < y_2\}(1 - p_Y) [/math]

[math] = \left[p + (1 - p)\frac {y_1} {a}\right]p_Y +
\left[p + (1 - p)\frac {y_2} {a}\right](1 - p_Y) [/math]

[math] = p + (1 - p)\frac {\mu} {a} [/math]

which is the same. Since [math] \Pr\{X < y\} = p + (1 - p)\frac {y} {a} [/math] is affine in [math] y [/math] on [math] (0, a] [/math], the same computation shows that any discrete random variable with no mass at [math] 0 [/math] gives the same probability [math] p + (1 - p)\frac {\mu} {a} [/math]. And it is easy to see that any discrete random variable with mass at [math] 0 [/math] must do worse, since [math] \Pr\{X < 0\} = 0 [/math] falls below that affine function.
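
The same check for the two-point case (with [math] y_1, y_2 [/math] chosen to bracket [math] \mu [/math]; values again arbitrary):

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
a, p, mu = 1.0, 0.3, 0.6     # arbitrary illustration values
y1, y2 = 0.2, 0.8            # 0 < y1 < mu < y2 < a
p_y = (y2 - mu) / (y2 - y1)  # solves p_y*y1 + (1 - p_y)*y2 = mu

x = np.where(rng.random(n) < p, 0.0, rng.uniform(0.0, a, n))
y = np.where(rng.random(n) < p_y, y1, y2)

print((y > x).mean())        # should again be close to...
print(p + (1 - p) * mu / a)  # ...p + (1 - p)*mu/a = 0.72
[/code]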


If [math] Y [/math] is a purely continuous random variable with density [math] f_Y [/math] on [math] [0, a] [/math] satisfying

[math] \int_0^a yf_Y(y)dy = \mu [/math]

then

[math] \Pr\{Y > X\} [/math]

[math] = \int_0^a \Pr\{X < y\}f_Y(y)dy [/math]

[math] = \int_0^a \left[p + (1 - p)\frac {y} {a}\right] f_Y(y)dy [/math]

[math] = p + (1 - p) \frac {\mu} {a} [/math]

Again the probability is the same.
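
And a check of one continuous example, [math] Y [/math] uniform on [math] [\mu - c, \mu + c] [/math] (again with arbitrary values, chosen so the interval stays inside [math] [0, a] [/math]):

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
a, p, mu, c = 1.0, 0.3, 0.6, 0.3  # arbitrary; [mu - c, mu + c] lies in [0, a]

x = np.where(rng.random(n) < p, 0.0, rng.uniform(0.0, a, n))
y = rng.uniform(mu - c, mu + c, n)  # continuous Y with E(Y) = mu

print((y > x).mean())        # again approximately p + (1 - p)*mu/a
print(p + (1 - p) * mu / a)
[/code]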


For the mixed case, I will need to work it out tomorrow.
 

Vova

New Member
#3
Thank you very much for your answer!

So, as I understand it, the described X produces the same P(Y>X) for any Y with the given E(Y) (as long as Y has no mass at 0).
And this is probably true in the opposite direction, i.e. given the Y described above, every possible X with a given E(X) produces the same P(Y>X).

Best,
Vova.