General formulation:

Given a random variable [math] X [/math] with CDF [math] F_X [/math], find a random variable [math] Y [/math] (equivalently, its CDF [math] F_Y [/math]) to

Maximize

[math] \Pr\{Y > X\} = \int_{-\infty}^{+\infty} \int_{-\infty}^y dF_X(x)dF_Y(y) [/math]

subject to

[math] \int_{-\infty}^{+\infty} ydF_Y(y) = \mu [/math]

where [math] \mu < +\infty [/math] is a given constant.

In a general mathematical context this is a problem of functional optimization (optimizing over a space of distribution functions), which I cannot go into in detail here, as I do not know much about that area either.

For your particular example, I can try to give a heuristic guess for it.

Let the point mass of [math] X [/math] at [math] 0 [/math] be [math] p [/math], with the remaining mass [math] 1 - p [/math] spread uniformly on [math] (0, a) [/math], so that [math] \Pr\{X < y\} = p + (1 - p)\frac {y} {a} [/math] for [math] 0 < y \le a [/math].

Let's test some trivial cases first:

If you set [math] Y = \mu [/math] to be a constant, then

[math] \Pr\{Y > X\} = \Pr\{X < \mu\} = p + (1 - p)\frac {\mu} {a} [/math]
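This is easy to check numerically. Here is a quick Monte Carlo sketch, assuming [math] X [/math] is the mixture of a point mass [math] p [/math] at [math] 0 [/math] and a Uniform[math] (0, a) [/math] distribution as above (the values p = 0.3, a = 2, mu = 0.8 are arbitrary illustrative choices):

```python
import random

def sample_x(p, a):
    """Draw X: point mass p at 0, otherwise Uniform(0, a)."""
    return 0.0 if random.random() < p else random.uniform(0, a)

# arbitrary illustrative values
p, a, mu = 0.3, 2.0, 0.8

random.seed(0)
n = 200_000
# Y = mu is a constant, so Pr{Y > X} = Pr{X < mu}
est = sum(mu > sample_x(p, a) for _ in range(n)) / n
exact = p + (1 - p) * mu / a
print(est, exact)  # the estimate should agree with the formula to ~2 decimals
```

With these numbers the formula gives [math] 0.3 + 0.7 \cdot 0.4 = 0.58 [/math].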

If you set [math] Y [/math] to be a discrete random variable with two support points [math] \{y_1, y_2\} [/math] such that [math] 0 < y_1 < \mu < y_2 < a [/math], where [math] p_Y = \Pr\{Y = y_1\} [/math] and

[math] p_Yy_1 + (1 - p_Y)y_2 = \mu [/math]

then

[math] \Pr\{Y > X\} = \Pr\{X < y_1\}p_Y + \Pr\{X < y_2\}(1 - p_Y) [/math]

[math] = \left[p + (1 - p)\frac {y_1} {a}\right]p_Y + \left[p + (1 - p)\frac {y_2} {a}\right](1 - p_Y) [/math]

[math] = p + (1 - p)\frac {p_Yy_1 + (1 - p_Y)y_2} {a} = p + (1 - p)\frac {\mu} {a} [/math]

which is the same. By the same computation, any discrete random variable supported on [math] (0, a) [/math] with mean [math] \mu [/math] gives the same probability. And it is easy to see that any discrete random variable with an atom at [math] 0 [/math] must be worse: that atom contributes [math] \Pr\{X < 0\} = 0 [/math], whereas moving it to any [math] y > 0 [/math] would contribute at least [math] p [/math].
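The two-point case can be sanity-checked the same way. A small simulation, again assuming the point-mass-plus-uniform model for [math] X [/math], with illustrative values p = 0.3, a = 2, mu = 0.8 and an arbitrary pair y1, y2 straddling the mean:

```python
import random

def sample_x(p, a):
    """Draw X: point mass p at 0, otherwise Uniform(0, a)."""
    return 0.0 if random.random() < p else random.uniform(0, a)

p, a, mu = 0.3, 2.0, 0.8
y1, y2 = 0.5, 1.5                    # 0 < y1 < mu < y2 < a
p_y = (y2 - mu) / (y2 - y1)          # solves p_y*y1 + (1 - p_y)*y2 = mu

def sample_y():
    """Draw Y: y1 with probability p_y, else y2."""
    return y1 if random.random() < p_y else y2

random.seed(0)
n = 200_000
est = sum(sample_y() > sample_x(p, a) for _ in range(n)) / n
exact = p + (1 - p) * mu / a         # same value as the constant case
print(est, exact)
```

The estimate lands on the same [math] p + (1 - p)\frac {\mu} {a} [/math] as before, matching the algebra.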

Now let [math] Y [/math] be a purely continuous random variable with density [math] f_Y [/math] on [math] (0, a) [/math], satisfying

[math] \int_0^a yf_Y(y)dy = \mu [/math]

Then

[math] \Pr\{Y > X\} [/math]

[math] = \int_0^a \Pr\{X < y\}f_Y(y)dy [/math]

[math] = \int_0^a \left[p + (1 - p)\frac {y} {a}\right] f_Y(y)dy [/math]

[math] = p + (1 - p) \frac {\mu} {a} [/math]

Again the probability is the same.
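The continuous case can be checked numerically too. A sketch, again under the point-mass-plus-uniform assumption for [math] X [/math], taking for [math] Y [/math] an arbitrary uniform distribution inside [math] (0, a) [/math] centered at the mean:

```python
import random

def sample_x(p, a):
    """Draw X: point mass p at 0, otherwise Uniform(0, a)."""
    return 0.0 if random.random() < p else random.uniform(0, a)

p, a, mu = 0.3, 2.0, 0.8
half = min(mu, a - mu) * 0.5         # half-width keeping the support inside (0, a)

def sample_y():
    """Draw Y ~ Uniform(mu - half, mu + half), which has mean mu."""
    return random.uniform(mu - half, mu + half)

random.seed(0)
n = 200_000
est = sum(sample_y() > sample_x(p, a) for _ in range(n)) / n
exact = p + (1 - p) * mu / a
print(est, exact)
```

Any other density on [math] (0, a) [/math] with mean [math] \mu [/math] would give the same estimate, since only the mean enters the formula.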

The mixed case should work out the same way, since [math] \Pr\{Y > X\} [/math] is linear in [math] F_Y [/math] and a mixed distribution on [math] (0, a) [/math] is a convex combination of a discrete part and a continuous part; I will need to work out the details tomorrow.