Standard Uniform Distribution

#1
Hi, how do I prove that U ~ uniform[0,1] is equivalent to (i.e. if and only if) [nU] ~ uniform{0,1,2,...,n-1} for all positive integers n? Here [ ] is the floor function. I have no idea how to start, so guidance would be greatly appreciated. Also, why does the range run from 0 to n-1? If you start with [0,1] and multiply by n, shouldn't it run from 0 to n? I am confused.
 
#4
Anyway, your question has given me an idea.

[nU] = n-1 if and only if n-1 <= nU < n, if and only if (n-1)/n <= U < 1. The probability is 1 - (n-1)/n = 1/n.
[nU] = n-2 if and only if n-2 <= nU < n-1, if and only if (n-2)/n <= U < (n-1)/n. The probability is 1/n again.
Continue with n-3, n-4, etc., down to [nU] = 0 for 0 <= U < 1/n, so each of the n values gets probability 1/n. Is this correct?
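(A quick numerical sanity check of the 1/n claim; this is just a minimal Python sketch, where NumPy, the choice n = 5, and the sample size are my own arbitrary assumptions.)

```python
import numpy as np

# Check empirically that floor(n*U) is (approximately) uniform on {0, ..., n-1}
# when U ~ Uniform[0, 1].
rng = np.random.default_rng(0)
n = 5                                  # arbitrary choice of n
u = rng.uniform(0.0, 1.0, size=1_000_000)
values, counts = np.unique(np.floor(n * u).astype(int), return_counts=True)

print(values)                          # expected: [0 1 2 3 4]
print(counts / counts.sum())           # each frequency should be close to 1/n = 0.2
```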
 
#5
So is my solution above good enough, or is more needed? Anyway, I need help with a further question:

Let U_n = [nU]/n. Compute P(U_(n+1) > U_n) and P(U_(n+1) = U_n) for each natural number n, to show that the convergence is not monotone.

I have no idea how to do this, so I will show you my attempt below, and I won't be surprised if it is totally on the wrong track.

So here is my attempt:

[(n+1)U]/(n+1) > [nU]/n when k/(n+1) <= U < k/n, where k is an integer from 1 to n
[(n+1)U]/(n+1) = [nU]/n when U < 1/(n+1) or when U = 1

So P(U_(n+1) > U_n) = sum over k from 1 to n of (k/n - k/(n+1)) = 1/2
P(U_(n+1) = U_n) = 1/(n+1)

Assuming that my probabilities are correct (please let me know if I am wrong), how do I show that the convergence is not monotone?
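(A minimal Monte Carlo sketch of these two probabilities, again in Python with NumPy; the choice n = 4 and the number of draws are my own assumptions.)

```python
import numpy as np

# Estimate P(U_{n+1} > U_n) and P(U_{n+1} = U_n), where U_n = floor(n*U)/n
# and U ~ Uniform[0, 1].
rng = np.random.default_rng(1)
n = 4                                   # arbitrary choice of n
u = rng.uniform(0.0, 1.0, size=1_000_000)

u_n = np.floor(n * u) / n
u_np1 = np.floor((n + 1) * u) / (n + 1)

print(np.mean(u_np1 > u_n))             # should be close to 1/2
print(np.mean(u_np1 == u_n))            # should be close to 1/(n+1) = 0.2
```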
 

Dragan

Super Moderator
#6
I think a better way to do this is to show convergence in probability - which is weaker than convergence almost surely.

So, consider a sample space \(\Omega =\left [ 0,1 \right ] \) with a uniform probability measure P. That is, the probability associated with an interval \( \left [ a,b \right ]\subset \Omega \) is \( b-a \).

Let \( m\left ( n \right ) \) be the sequence of intervals of the form \( \left [ \frac{i}{k},\frac{i+1}{k} \right ] \), for \( i=0,\ldots,k-1 \) and \( k\in \mathbb{N} \), listed in order so that

\( m\left ( 1 \right )=\left [ 0,1 \right ] \), \( m\left ( 2 \right )=\left [ 0,\frac{1}{2} \right ] \), ..., \( m\left ( 6 \right )=\left [ \frac{2}{3},1 \right ] \).

Next, define a sequence of random variables on this sample space by \( X_{n}\left ( \omega \right )=\delta \left \{ \omega ;m\left ( n \right ) \right \} \), the indicator that \( \omega \in m\left ( n \right ) \), for \( \omega \in \left [ 0,1 \right ] \).

Let \( \varepsilon > 0 \) and note that \( X_{n}\left ( \omega \right ) \) equals 1 only on the interval \( m\left ( n \right ) \), so that
\( P\left ( \left |X _{n} \right |< \varepsilon \right )\geq 1-\frac{1}{k\left ( n \right )} \),
where \( k\left ( n \right ) \) is the denominator of the interval \( m\left ( n \right ) \), so that \( \frac{1}{k\left ( n \right )} \) is its length, a sequence that converges to 0 as \( n\rightarrow \infty \).

It follows that for any \( \varepsilon > 0 \),

\( \lim_{n\rightarrow \infty }P\left ( \left |X _{n} \right |<\varepsilon \right )=1 \),

so it follows that \( X_{n}\overset{p}{\rightarrow}0 \) as \( n \to \infty \).
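(A minimal Python sketch of this construction; the enumeration helper and names are my own, not part of the original argument. It lists the intervals m(n) in the order above and prints P(|X_n| >= eps), which is just the length of m(n) and shrinks to 0.)

```python
from fractions import Fraction

def intervals(count):
    """Yield m(1), m(2), ...: [0,1], then the halves, the thirds, the quarters, ..."""
    n, k = 0, 1
    while True:
        for i in range(k):
            n += 1
            if n > count:
                return
            yield n, Fraction(i, k), Fraction(i + 1, k)
        k += 1

# X_n(w) = 1 if w lies in m(n) and 0 otherwise.  For any 0 < eps < 1,
# P(|X_n| >= eps) = P(X_n = 1) = length of m(n) = 1/k, which tends to 0,
# so X_n -> 0 in probability.
for n, a, b in intervals(10):
    print(f"m({n}) = [{a}, {b}],  P(|X_{n}| >= eps) = {b - a}")
```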
 
#7
Dragan said: (post #6 quoted in full)
How do you compute the intervals m(n)? That is, how do you get \( m\left ( 1 \right )=\left [ 0,1 \right ] \), \( m\left ( 2 \right )=\left [ 0,\frac{1}{2} \right ] \), ..., \( m\left ( 6 \right )=\left [ \frac{2}{3},1 \right ] \)? And how do you show that the convergence is not monotone?
 

Dragan

Super Moderator
#8
Well, to be clear, you compute the intervals like this: m(1)=[0, 1], m(2)=[0, 1/2], m(3)=[1/2, 1], m(4)=[0, 1/3], m(5)=[1/3, 2/3], m(6)=[2/3, 1] - do you see the pattern?
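(If it helps, here is one way to turn that pattern into a formula; this closed-form mapping via triangular numbers is my own illustration, not something from the posts above: intervals 1 through k(k+1)/2 end with denominator k.)

```python
import math
from fractions import Fraction

def interval(n):
    """Map n = 1, 2, 3, ... to m(n) = [i/k, (i+1)/k]: one interval for k = 1,
    two for k = 2, three for k = 3, and so on."""
    k = math.ceil((math.sqrt(8 * n + 1) - 1) / 2)   # smallest k with k(k+1)/2 >= n
    i = n - k * (k - 1) // 2 - 1                    # 0-based position within denominator k
    return Fraction(i, k), Fraction(i + 1, k)

for n in range(1, 7):
    a, b = interval(n)
    print(f"m({n}) = [{a}, {b}]")
# m(1) = [0, 1], m(2) = [0, 1/2], m(3) = [1/2, 1], ..., m(6) = [2/3, 1]
```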

What I am showing is convergence in probability. It will not converge almost surely. As I said, convergence in probability is weaker than convergence almost surely.

As for your question regarding monotonicity: are you being asked to use Lebesgue's Monotone Convergence Theorem? I'm just not sure why you are asking this.
 

Dragan

Super Moderator
#10
Well, the problem asked me to find those probabilities and to use them to show that the convergence is not monotone.
Okay, so there is convergence in probability (and not convergence almost surely) - but the entire sequence of intervals m(n) is not (strictly) monotone:

m(1)=[0, 1], m(2)=[0, 1/2], m(3)=[1/2, 1], m(4)=[0, 1/3], m(5)=[1/3, 2/3], m(6)=[2/3, 1], m(7)=[0, 1/4], m(8)=[1/4, 1/2], m(9)=[1/2, 3/4], m(10)=[3/4, 1].
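(To make the non-monotonicity concrete, here is a small Python sketch that fixes a single sample point and tracks X_n along these intervals; the helper and the choice omega = 3/10 are my own assumptions.)

```python
from fractions import Fraction

def m(n):
    """n-th interval in the enumeration [0,1], [0,1/2], [1/2,1], [0,1/3], ..."""
    k = 1
    while n > k:        # skip whole blocks: 1 interval for k=1, 2 for k=2, ...
        n -= k
        k += 1
    return Fraction(n - 1, k), Fraction(n, k)

omega = Fraction(3, 10)                    # arbitrary fixed sample point
x = []
for n in range(1, 16):
    a, b = m(n)
    x.append(1 if a <= omega <= b else 0)  # X_n(omega)
print(x)                                   # 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, ...
# For every denominator k, omega falls inside at least one of the k intervals,
# so X_n(omega) returns to 1 infinitely often: the sequence is not monotone and
# does not converge for any omega, even though X_n -> 0 in probability.
```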