- Thread starter Mada

It was. I even had a response I was going to post once I had time.

If you have a random variable Y (let's say a continuous one) with distribution function F(), and you take p = F(Y), then p will be uniformly distributed on [0, 1].

And if you then apply the inverse standard normal distribution function, i.e. the standard normal quantile function Q(p), you will get a standard normal random variable. That is one method of doing a normalising transformation. Is that correct?
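As a quick sanity check, here is a minimal sketch of the two-step transform described above. The Exponential(1) starting distribution, sample size, and variable names are illustrative choices, not from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Start from an arbitrary continuous distribution; Exponential(1) is
# just an illustrative choice.
y = rng.exponential(scale=1.0, size=100_000)

# Probability integral transform: p = F(Y) is Uniform(0, 1)
p = stats.expon.cdf(y)

# Standard normal quantile function Q(p) = Phi^{-1}(p) gives a
# standard normal variable
z = stats.norm.ppf(p)

# z should now look standard normal (mean ~ 0, sd ~ 1)
print(p.mean(), z.mean(), z.std())
```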

Yup, that's basically it, although I think their question was mainly about the notation, but you covered it. This is a case where we do need to care whether Y is continuous, though (which you mentioned, but it deserves an extra call-out here).

Let's talk about an empirical issue. Suppose we identify the correct distribution F(), but we estimate its parameters with some error. How sensitive would the resulting normal distribution, and the inference from a normal model (say an ANOVA), be to that estimation error?
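One way to probe this question is a small simulation sketch: fit the correct family on the sample itself, transform with the estimated CDF, and see how normal the result looks. The Gamma(2, 3) setup, the sample size, and the use of Shapiro-Wilk as the check are all illustrative assumptions, not from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Correct family (Gamma), but parameters must be estimated from the
# same sample -- the plug-in situation described above.
n = 200
y = rng.gamma(shape=2.0, scale=3.0, size=n)

# Maximum-likelihood estimates; floc=0 fixes the location at zero
shape_hat, loc_hat, scale_hat = stats.gamma.fit(y, floc=0)

# Transform with the *estimated* F, then the normal quantile function
z = stats.norm.ppf(stats.gamma.cdf(y, shape_hat, loc=0, scale=scale_hat))

# Shapiro-Wilk as one rough measure of how normal the result looks
stat, pval = stats.shapiro(z)
print(z.mean(), z.std(), pval)
```

Repeating this over many replications (and feeding z into the downstream test of interest) would give a Monte Carlo picture of how much the estimation error actually costs.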

Suppose now that we identify an incorrect distribution F(), but a similar one. How sensitive would our normal-distribution-based inference be to that misspecification?
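The misspecification case can be sketched the same way: generate from one family, fit a similar-but-wrong one, and check the transformed values. Fitting a lognormal to Gamma data is an illustrative choice of "similar but incorrect", not something from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# True distribution is Gamma(2, 3), but we (wrongly) assume a
# lognormal F -- a similar, right-skewed, positive distribution.
n = 200
y = rng.gamma(shape=2.0, scale=3.0, size=n)

# Fit the wrong family and transform with its estimated CDF
shape_hat, loc_hat, scale_hat = stats.lognorm.fit(y, floc=0)
z = stats.norm.ppf(stats.lognorm.cdf(y, shape_hat, loc=0, scale=scale_hat))

# Shapiro-Wilk as one rough measure of the damage done
stat, pval = stats.shapiro(z)
print(z.mean(), z.std(), pval)
```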

I would not be surprised if there has been a lot written in this area. Has anybody seen anything about this?

This also matters for multivariate analysis. If a marginal distribution is not normal, then the joint distribution cannot be multivariate normal. If the marginal distributions are normal, the joint distribution does not have to be multivariate normal, but the possibility exists. Maybe the transformation can bring the data closer to multivariate normality.
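The second point (normal marginals do not imply joint normality) has a classic counterexample that is easy to simulate: take X standard normal and flip its sign outside a threshold. Both marginals are exactly N(0, 1), yet the pair cannot be bivariate normal, because X + Y would then have to be normal, and here it has a point mass at zero. The threshold c = 1 is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# X ~ N(0, 1); Y equals X inside [-c, c] and -X outside.
# By symmetry Y is also exactly standard normal.
c = 1.0
x = rng.standard_normal(200_000)
y = np.where(np.abs(x) <= c, x, -x)

# The marginal of Y matches N(0, 1) moments...
print(y.mean(), y.std())

# ...but (X, Y) is not bivariate normal: X + Y equals 2X when
# |X| <= c and exactly 0 otherwise -- a point mass no normal sum has.
s = x + y
print(np.mean(s == 0.0))  # roughly P(|X| > 1) ~ 0.317
```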

Views?