My intuition tells me that the probability of one random value being larger than another is 0.5, provided both values are sampled independently from the same distribution. I ran some simple tests of this with normally and uniformly distributed values, and the results support my expectation.
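
For concreteness, this is the kind of quick check I ran (a minimal sketch using NumPy; the particular distributions, sample size, and seed are just what I happened to try):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Draw two independent samples X and Y from the same distribution
# and estimate P(X > Y) as the fraction of pairs with X > Y.
for name, sampler in [
    ("normal", lambda: rng.normal(size=n)),
    ("uniform", lambda: rng.uniform(size=n)),
]:
    x, y = sampler(), sampler()
    print(f"{name}: P(X > Y) ≈ {np.mean(x > y):.4f}")
```

In both cases the estimate comes out very close to 0.5.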

What I am not sure about is whether this is a 'universal' truth. Does it hold for all possible distributions? Does it make a difference whether the distribution is continuous, discrete, or empirical? Is there a paper or book that shows this explicitly? I need a better reference than 'my intuition'.
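
The discrete case is the one I am most unsure about, since ties X = Y can occur with positive probability there. A sketch of how I would extend the same test, again assuming NumPy (the Poisson distribution and its parameter are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# With a discrete distribution, ties X == Y happen with positive
# probability, so P(X > Y), P(X == Y), and P(X < Y) must be
# inspected separately rather than assuming P(X > Y) = 0.5.
x = rng.poisson(lam=3.0, size=n)
y = rng.poisson(lam=3.0, size=n)
print("P(X > Y) ≈", np.mean(x > y))
print("P(X = Y) ≈", np.mean(x == y))
print("P(X < Y) ≈", np.mean(x < y))
```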