#### BigBugBuzzz

##### New Member
Consider two probability distributions, A and B. A has greater variance than B, but A and B have equal means. We are not told anything about their skewness or kurtosis; these can differ between A and B (A might have fat tails or be skewed, while B might be similar or entirely different in shape).

Here is the question:

Is it possible to draw n observations from A and n observations from B and find that the variance of the observations drawn from A is greater than the variance of the observations drawn from B, even for very large values of n (perhaps given different skewness and kurtosis)?


#### Dason

... What? Are you sure you phrased that correctly?

#### BigBugBuzzz

##### New Member
> ... What? Are you sure you phrased that correctly?

Sorry, I have amended the question. I hope that avoids confusion.

#### interestednew

##### New Member
You haven't shown any work, either.

#### Dason

I'm confused. Do you think for some reason that we wouldn't be able to do this? And when you say "even for large values of n" it implies that you think that we would be less likely to show this when n is large. Or am I reading too much into what you said?

#### interestednew

##### New Member
I agree. Why are you placing emphasis on large values of n? Do you know the implications of large n? Why don't you start off by trying to answer the question yourself?

#### BGM

##### TS Contributor
I am also curious here. Usually in statistics, the larger the sample size, the merrier we are.

And under some mild conditions, the sample variance is a consistent estimator of the population variance, which can be used to construct a consistent test comparing the two population variances; i.e., as the sample size gets very large, the sample variance (although random) will get very close to the actual population variance, and eventually we get close to 100% certainty about which one is larger.
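This consistency argument can be checked with a small Monte Carlo sketch (my own illustration; the normal distributions, seed, and the helper `prob_A_larger` are assumptions, not anything from the thread):

```python
# Sketch of BGM's point: as n grows, the sample variance concentrates
# around the population variance, so Var(A) > Var(B) implies the event
# "sample var of A > sample var of B" happens with probability -> 1.
# The distributions here are illustrative choices, not from the thread.
import random
import statistics

random.seed(42)

def prob_A_larger(n, trials=500):
    """Fraction of trials in which the sample variance of A exceeds
    the sample variance of B."""
    wins = 0
    for _ in range(trials):
        a = [random.gauss(0, 2) for _ in range(n)]  # A: Var = 4
        b = [random.gauss(0, 1) for _ in range(n)]  # B: Var = 1
        if statistics.variance(a) > statistics.variance(b):
            wins += 1
    return wins / trials

for n in (5, 50, 500):
    print(n, prob_A_larger(n))
```

The printed fraction should climb toward 1 as n grows, which is exactly the consistency described above.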

#### BigBugBuzzz

##### New Member
> I am also curious here. Usually in statistics, the larger the sample size, the merrier we are.
>
> And under some mild conditions, the sample variance is a consistent estimator of the population variance, which can be used to construct a consistent test comparing the two population variances; i.e., as the sample size gets very large, the sample variance (although random) will get very close to the actual population variance, and eventually we get close to 100% certainty about which one is larger.
BGM, it is exactly along these lines that I am thinking. Interestednew, I am sorry I didn't show any work, but I can tell you that the phenomenon of systematically (I mean, every time) finding greater sample variance from distributions that have smaller variances (compared to other, differently shaped distributions) is occurring in a simulation I have built. I am worried that I have made a programming error, but I cannot find one, so I thought I would post the question here in the hope that you could settle the case.

So, considering BGM's more lucid reasoning: would, for example, sampling from a fat-tailed distribution (with smaller variance) systematically result in a larger sample variance than sampling from a negatively skewed distribution with an objectively larger variance?

Perhaps I am missing some understanding of the importance of n, but my intuition tells me that as n approaches infinity, this should not occur. But perhaps there is a wide "practical" range of n where this will occur with some very large probability.
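That intuition can be probed directly with a sketch of the scenario in the question. The specific distributions, the seed, and the helper `reversal_rate` below are my own assumptions chosen purely for illustration:

```python
# Sketch of the question's scenario: one sample is fat-tailed
# (Pareto, alpha = 3, true variance 0.75) and the other is negatively
# skewed with the objectively larger true variance (negated
# exponential, variance 4). Both choices are illustrative assumptions.
import random
import statistics

random.seed(7)

def reversal_rate(n, trials=400):
    """Fraction of trials where the fat-tailed, smaller-variance sample
    shows the LARGER sample variance."""
    reversals = 0
    for _ in range(trials):
        fat = [random.paretovariate(3) for _ in range(n)]      # Var = 0.75
        skewed = [-random.expovariate(0.5) for _ in range(n)]  # Var = 4
        if statistics.variance(fat) > statistics.variance(skewed):
            reversals += 1
    return reversals / trials

for n in (10, 100, 1000):
    print(n, reversal_rate(n))
```

In a run like this, reversals should become rare as n grows. If a simulation shows reversals every single time rather than with a shrinking probability, that points toward a programming error rather than a distributional effect.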

#### Dason

It sounds like maybe you messed up the original post? Did you mean to say that A has larger variance than B but you're finding consistently that the sample variance of B is larger than the sample variance of A?

#### leavesof3

##### New Member
If I'm understanding you right, I think this depends on the degree of skewness and kurtosis of A and B. If either distribution exhibits extreme kurtosis, then I think you are in a regime where one observation can end up being essentially THE variance of the entire sample. If the kurtosis is mild, then I think you can use Harold Edwin Hurst's rescaled-range equation, relating it back to the extreme observations of your distribution. I think it would be easier to discuss if you came up with some data from distributions A and B.
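The point about extreme kurtosis can be made concrete: in a heavy-tailed sample, a single observation can carry most of the sum of squared deviations. In this sketch, the distribution choices and the `max_share` helper are my own, not anything from the thread:

```python
# How much of the sample variance is due to the single most extreme
# observation? With extreme kurtosis, the answer can be "most of it".
# Distribution choices are illustrative assumptions.
import random
import statistics

random.seed(3)

def max_share(sample):
    """Share of the total sum of squared deviations contributed by the
    single most extreme observation."""
    m = statistics.fmean(sample)
    sq = [(x - m) ** 2 for x in sample]
    return max(sq) / sum(sq)

n = 2000
heavy = [random.paretovariate(2.2) for _ in range(n)]  # infinite kurtosis
light = [random.gauss(0, 1) for _ in range(n)]         # kurtosis 3

print("heavy-tailed share:", round(max_share(heavy), 3))
print("gaussian share:    ", round(max_share(light), 3))
```

The Pareto sample's largest squared deviation typically dwarfs the Gaussian one's share, which is why sample variances from such distributions are so erratic even at large n.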

#### BigBugBuzzz

##### New Member
@ Dason, Yes, exactly.


#### BigBugBuzzz

##### New Member
@ Leavesof3, This is sounding very promising given the data I have. I would like to provide data. Is it possible to upload data here?

#### errence boyoyo

##### New Member
The only way to establish that the variance of A is greater than the variance of B is to do a hypothesis test, ideally with the same n for both samples. Null hypothesis: sigma of A ≤ sigma of B; alternative hypothesis: sigma of A > sigma of B.
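One way to carry out such a one-sided test without distributional assumptions is a permutation test on the difference in sample variances. This is a sketch under my own assumptions (the data, seed, and helper name are illustrative; a classical alternative is the F-test, which additionally assumes normality):

```python
# Permutation test for H0: Var(A) <= Var(B) vs H1: Var(A) > Var(B).
# Under the null of equal variances (and equal means here), the pooled
# observations are exchangeable, so we compare the observed difference
# in sample variances against its permutation distribution.
import random
import statistics

random.seed(1)

def perm_test_var(a, b, n_perm=2000):
    """One-sided permutation p-value for Var(A) > Var(B)."""
    observed = statistics.variance(a) - statistics.variance(b)
    pooled = list(a) + list(b)
    at_least_as_extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = (statistics.variance(pooled[:len(a)])
                - statistics.variance(pooled[len(a):]))
        if diff >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (n_perm + 1)

a = [random.gauss(0, 2) for _ in range(100)]  # true variance 4
b = [random.gauss(0, 1) for _ in range(100)]  # true variance 1
print(perm_test_var(a, b))  # small p-value expected here
```

The `+ 1` in numerator and denominator is the standard correction that keeps the permutation p-value strictly positive and slightly conservative.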