Independence of Gaussian and non-Gaussian variables

Hi,

If X is a Gaussian random variable, and Y is a non-Gaussian random variable, what are the necessary conditions for X and Y to be independent? I'm only interested in the case where the pdf of Y is continuous with well-defined derivatives (but is otherwise unspecified).

If X and Y were jointly Gaussian, it would be sufficient to establish that they have vanishing covariance; I'm fine with that.

If X and Y were both non-Gaussian, my understanding is that it would in general be necessary to establish that the cross-moments factorise, i.e. E(X^m Y^n) = E(X^m) E(Y^n) for all integers m, n > 0 (where E denotes expectation).

Does the fact that X is Gaussian simplify the situation, even without knowing anything about the properties of Y? Certainly X and Y being uncorrelated is insufficient (consider the case Y = X^2, for instance). I can't think of a case where <XY> = 0 and <X^2 Y> = 0 yet X and Y are still dependent, but I'm not sure how to go about proving or disproving this.
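(To make the Y = X^2 example concrete, here is a quick Monte Carlo check; Python and the standard-normal choice for X are mine, purely for illustration. It shows the covariance vanishing while the next cross-moment fails to factorise, which is why I'm looking at conditions beyond <XY> = 0.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X standard normal, Y = X^2: uncorrelated but clearly dependent
x = rng.standard_normal(n)
y = x**2

# <XY> = E[X^3] = 0, so the covariance vanishes...
print(np.mean(x * y) - np.mean(x) * np.mean(y))  # should be ~0

# ...but the next cross-moment does not factorise:
# E[X^2 Y] = E[X^4] = 3, while E[X^2] E[Y] = 1 * 1 = 1
print(np.mean(x**2 * y), np.mean(x**2) * np.mean(y))
```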

I'm an astrophysicist, with little formal statistics training. This question arose in connection with my research, although since it's not related to some specific data I thought I would ask here rather than in the Applied Statistics forum.

Re: Independence of Gaussian and non-Gaussian variables

The most general definition of independence for two continuous random variables X and Y is that their joint pdf satisfies

f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x, y,

where f_X and f_Y are the marginal pdfs of X and Y respectively.

The other necessary and sufficient condition, based on expectations, is that

E[g(X) h(Y)] = E[g(X)] E[h(Y)]

for all continuous functions g and h (whenever the expectations exist).

For compactly supported random variables, the Stone-Weierstrass theorem guarantees that these continuous functions can be uniformly approximated by polynomials, so your assertion holds. However, for random variables with unbounded support it is another story.

Another condition is based on the mgfs (if they exist):

M_{X,Y}(s, t) = M_X(s) M_Y(t) for all s, t,

due to the uniqueness of the Laplace transform. And not surprisingly you have the equivalent condition based on the characteristic functions, φ_{X,Y}(s, t) = φ_X(s) φ_Y(t), due to the uniqueness of the Fourier transform.
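The characteristic-function condition can also be probed from samples (a sketch; the sample size and the single test point (s, t) = (1, 1) are arbitrary choices of mine, and a real test would scan many (s, t) pairs). For an independent pair the empirical gap |φ_{X,Y}(s,t) − φ_X(s) φ_Y(t)| is of order 1/sqrt(n), while for the dependent pair Y = X^2 from the first post it stays visibly nonzero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.standard_normal(n)

def cf_gap(x, y, s, t):
    """|phi_{X,Y}(s,t) - phi_X(s) phi_Y(t)| estimated from samples."""
    joint = np.mean(np.exp(1j * (s * x + t * y)))
    marg = np.mean(np.exp(1j * s * x)) * np.mean(np.exp(1j * t * y))
    return abs(joint - marg)

# Independent pair: gap ~ 0 (up to sampling noise)
y_ind = rng.standard_normal(n)
print(cf_gap(x, y_ind, 1.0, 1.0))

# Dependent pair (Y = X^2): gap clearly bounded away from 0
y_dep = x**2
print(cf_gap(x, y_dep, 1.0, 1.0))
```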

Currently I cannot think of any way to relax these conditions if you only specify one of the random variables to be Gaussian, as the other one is still quite arbitrary. One point that may be useful is that a multivariate Gaussian distribution is an elliptical distribution. So if you specified the joint distribution of X and Y to be another elliptical distribution, something may happen. I am not sure.

Re: Independence of Gaussian and non-Gaussian variables

Thanks for the reply. Yes, I agree these conditions must still hold; I guess what I was wondering (and should have written more clearly) is whether demonstrating these conditions is easier when one variable is known to be Gaussian. For example, looking at the expectation condition you wrote, and assuming compact support for Y, does the fact that X is Gaussian reduce the number of polynomial combinations I have to show satisfy the condition?

I thought about writing the characteristic function of Y in terms of a Gaussian characteristic function, and then seeing whether I could express the interdependence of X and Y in terms of this new Gaussian and X, but so far I haven't gotten anything useful out of that.