I'm wondering if anyone could help shed some light on why different textbooks show two different ways to calculate confidence intervals for proportions.

In the formula I know, p +/- z * sqrt( P( 1 - P ) / n ), there is P, the population parameter we are trying to estimate using the sample statistic.

The way I am familiar with is to always use 0.5 as the value for P (the population parameter). By doing so, the numerator under the square root always equals 0.25, because we are multiplying P by ( 1 - P ).

This is good, because if we were to use any value other than 0.5, the expression would decrease in value (i.e. be less than 0.25). Setting P at 0.5 ensures that P( 1 - P ) is at its maximum possible value, and therefore the interval we construct is at its maximum width. This is the most conservative possible solution to the dilemma posed by having to assign a value to P in the estimation equation.
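To make sure I have it right, here is how I'd sketch the conservative version in Python (the function name is mine, and I'm assuming a 95% level, so z = 1.96):

```python
import math

def conservative_ci(p_hat, n, z=1.96):
    """Conservative CI: uses P = 0.5, which maximizes P(1 - P) at 0.25,
    so the margin of error depends only on n and z."""
    margin = z * math.sqrt(0.25 / n)
    return (p_hat - margin, p_hat + margin)

# e.g. 240 successes out of n = 400 gives p_hat = 0.6
lo, hi = conservative_ci(0.6, 400)  # margin = 1.96 * 0.025 = 0.049
```

Because the margin doesn't involve the sample proportion at all, two samples of the same size always get intervals of the same width under this approach.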

Some other textbooks, however, use the sample value (Ps) in place of P in the equation. This is not how I learned it, so it seems odd to me, because we don't know whether the sample value accurately represents the population value -- though from what we know about the sampling distribution, it is likely to be close to the population value (the mean of the sampling distribution).
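As I understand it, that second version just plugs the sample proportion into the standard-error formula (I believe this is what's called the Wald interval). A minimal sketch, again with a hypothetical function name and z = 1.96:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Plug-in (Wald) CI: uses the sample proportion p_hat in place of P,
    so the margin shrinks as p_hat moves away from 0.5."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

# same data as before: p_hat = 0.6, n = 400
lo, hi = wald_ci(0.6, 400)
```

For the same data this interval is narrower than the conservative one (width about 0.096 versus 0.098 here), and the gap widens the further p_hat is from 0.5.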

Why are there these two different ways of calculating a confidence interval for a proportion? And which is better?

Thank you to anyone who is able to help, and I apologize if this question has been asked already.

Thanks,

Frodo