Can different measures of central tendency/dispersion be compared?

Hi there,

I'm wondering, let's say I have two variables, one that is ordinal and one that is interval/ratio.

For the ordinal variable I am able to compute a median and range, and for the interval/ratio variable I can compute a mean and standard deviation, in order to measure central tendency and dispersion.

Would it be accurate to compare the central tendency and dispersion of the two variables (i.e. compare a mean to a median and compare a standard deviation to a range)?

With the range vs. standard deviation, I know the range is the highest minus the lowest score, whereas the standard deviation uses all scores to measure how dispersed they are. So are the two even comparable? Or could I still compare them to get a sense of which variable is more dispersed?
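To illustrate what I mean, here is a minimal sketch (Python, with two made-up samples) of how the range and the standard deviation can disagree, since the range only looks at the two extreme scores:

```python
from statistics import stdev

# Two hypothetical samples with the same range but different overall spread.
a = [0, 5, 10]             # scores spread evenly
b = [0, 0, 0, 10, 10, 10]  # scores piled at the extremes

print(max(a) - min(a), max(b) - min(b))  # both ranges are 10
print(stdev(a), stdev(b))                # 5.0 vs. about 5.48
```

Same range, different standard deviations, so the two measures don't have to agree even on the same scale.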

Likewise, does comparing a mean and median for two separate variables even tell me anything, or am I comparing apples to oranges?

If anyone is able to provide some clarification here, I would be enormously grateful. I know I can compare the mean and median within a single distribution (for example, that comparison helps me ascertain whether the distribution is skewed).

But I'm finding it difficult to understand whether I can compare the mean/standard deviation of one distribution/variable (e.g. INCOME) to the median and range of another distribution/variable (e.g. SOCIAL CLASS). Can I compare the standard deviation and range, for example, and say whether one of those variables is MORE or LESS dispersed than the other? Or am I comparing apples to oranges when comparing a standard deviation to a range?

Thanks again,


Fortran must die
A mean is not the same thing as a median whenever the data are at all non-normal, so comparing one to the other is doubtful as a comparison of central tendency; it says more about the distributions themselves. This is true even for a single variable, let alone two different variables with different distributions. An obvious example is a distribution with three values: 1, 1, and 901. The median is 1 and the mean is 301. That tells you the distribution is skewed, but it tells you little about the central value, and it is why the median is used rather than the mean in some analyses. Generally, the median is used instead of the mean because, as the data become more skewed, the mean loses validity as a descriptor of central tendency.
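The three-value example above can be checked directly; a quick sketch in Python:

```python
from statistics import mean, median

data = [1, 1, 901]
print(mean(data))    # 301: dragged upward by the single extreme value
print(median(data))  # 1: the middle score, unaffected by the outlier
```

The gap between the two is a symptom of skew, not a usable summary of the center.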

That said, I have not seen the analysis you are suggesting actually used, so this is only my opinion. My experience is with formal methods like regression, where the standard deviation or standard error would be more likely to appear.


New Member
I suggest comparing apples to apples. You can compute the interquartile range (IQR) and range for any distribution, and you can always compare them between variables.

However, the question becomes: what exactly do you mean by more or less dispersed? If the variables are in different units and are constructed in different ways, I'm not sure the question is meaningful unless you define degree of dispersion very carefully. Social class tends to have a fixed upper limit (don't people often use a 6- or 7-point scale?); if so, how would you compare its dispersion to income, which can run from 0 to billions with no set upper limit? You need a definition of degree of dispersion. Something like IQR/median comes to mind, but that only works consistently for a nonnegative variable with a positive median: a change score could have a median of 0, which would give infinite dispersion by this rule.
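The IQR/median idea can be sketched like this (Python; the function name `relative_dispersion` and the sample figures are purely illustrative), including the failure case when the median is 0:

```python
from statistics import median, quantiles

def relative_dispersion(data):
    """IQR divided by the median: only meaningful for nonnegative
    data with a positive median, as noted above."""
    q1, _, q3 = quantiles(data, n=4)  # the three quartile cut points
    m = median(data)
    if m == 0:
        raise ValueError("median is 0: relative dispersion undefined")
    return (q3 - q1) / m

incomes = [20_000, 35_000, 50_000, 80_000, 250_000]
print(relative_dispersion(incomes))  # 2.75 for this made-up sample

change_scores = [-3, -1, 0, 1, 3]  # median 0, so the rule breaks down
# relative_dispersion(change_scores) would raise ValueError
```

Because the units cancel in the ratio, a definition like this at least puts the two variables on comparable footing, which a raw range vs. standard deviation comparison does not.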

You may want to search for nonparametric dispersion comparisons or the like to see what may have been proposed. I don't see much at a very quick glance, however.

Thank you very much, both noetsi and EdGr. It sounds like comparing apples to apples makes a whole lot more sense, and that comparing the mean of one distribution to the median of another, or an interquartile range to a standard deviation, doesn't make much sense, because they capture central tendency and dispersion in different ways.

As noetsi mentioned, a mean can be very different from a median, because the mean uses every score in a distribution in its calculation, whereas the median is just the middle score.

And as EdGr suggests, how are we measuring dispersion? The range of something like social class, measured on a 6- or 7-point scale, is a very different quantity from the standard deviation of income.

I guess there isn't really a clear way to compare dispersion between an ordinal variable like social class and an interval/ratio variable like income, and I'm not sure why you would even want to compare them on dispersion anyway, since they are such different measures.