As I understand it, the Central Limit Theorem and confidence intervals are related. What is the best way to explain why the confidence interval formula uses only one sample, whereas the Central Limit Theorem involves 'many' samples? Isn't the x-bar in the CI formula the mean of a single sample? Don't we take many samples to explain the CLT?
Is the best thing we can say that a 95% CI means that if we took, say, 100 samples, then about 95 of the resulting intervals would contain the true population mean?
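To check whether I have that interpretation right, I sketched a quick simulation (a minimal Python sketch under my own assumptions: a normal population with mean 10 and known standard deviation 2, samples of size 30, and the z-based CI formula; none of these numbers come from anywhere in particular):

```python
import numpy as np

# Coverage check for the 95% CI interpretation:
# draw many samples, build a CI from each, and count how many
# of the intervals contain the true population mean.
rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0   # assumed true mean and known sd
n = 30                  # size of each individual sample
num_samples = 100       # "if we took, say, 100 samples"
z = 1.96                # critical value for a 95% CI

covered = 0
for _ in range(num_samples):
    sample = rng.normal(mu, sigma, size=n)
    x_bar = sample.mean()                      # the x-bar from ONE sample
    half_width = z * sigma / np.sqrt(n)        # CI half-width, known sigma
    if x_bar - half_width <= mu <= x_bar + half_width:
        covered += 1

print(f"{covered} of {num_samples} intervals contain the true mean")
# Typically prints a number close to 95.
```

Running it, the count comes out near 95, which seems to support the "95 out of 100 intervals" reading, but I'd like to confirm that this is the right way to connect it to the CLT.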