I have a very basic question regarding confidence intervals.

Suppose I want to construct a confidence interval for the expected value mu of normally distributed random variables X_1, ..., X_n with known variance sigma^2.

As an estimator for mu I use the sample mean, that is, \bar{X} = (1/n) sum_{i=1}^n X_i.

Now I want the true value to lie in the constructed interval with probability 1 - alpha:

P(parameter mu in confidence interval) = 1 - alpha

What I don't understand is why, when constructing the interval, one works with quantiles of the *standardized* sample mean instead of quantiles relating directly to mu:

P(-z_{1-alpha/2} <= (\bar{X} - mu)/(sigma/sqrt(n)) <= z_{1-alpha/2}) = 1 - alpha

But this is a statement about the standardized quantity (\bar{X} - mu)/(sigma/sqrt(n)) lying in an interval with probability 1 - alpha, not about mu lying in an interval!
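To make my confusion concrete: empirically, the interval \bar{X} ± z_{1-alpha/2} * sigma/sqrt(n) does seem to cover mu about 95% of the time, which is what I'd want but don't see how to justify from the statement above. Here is a quick NumPy sanity check (the particular values mu = 5, sigma = 2, n = 25 are arbitrary choices of mine, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, alpha = 5.0, 2.0, 25, 0.05
z = 1.959963984540054  # z_{1-alpha/2} = Phi^{-1}(0.975) for alpha = 0.05

trials = 100_000
# Each row is one sample of size n from N(mu, sigma^2)
samples = rng.normal(mu, sigma, size=(trials, n))
xbar = samples.mean(axis=1)

# Interval xbar +/- z * sigma / sqrt(n); check whether it contains mu
half_width = z * sigma / np.sqrt(n)
covered = (xbar - half_width <= mu) & (mu <= xbar + half_width)
print(covered.mean())  # ≈ 0.95
```

So numerically the coverage matches 1 - alpha, but the probability statement I was given is about the standardized quantity, not mu.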

So, what is it I am not understanding correctly?

Thanks