Originally Posted by **spunky**
I was thinking about adding something like variance or efficiency there but my advisor said (and I think he has a point) that techniques like the bootstrap obviate the need to know the properties of asymptotic variance (or a population parameter in general) since, when the assumptions are satisfied, we end up with correct confidence intervals.
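(For concreteness, the sort of procedure the advisor presumably means is something like the percentile bootstrap: resample the data with replacement, recompute the statistic each time, and read the confidence limits off the quantiles of the resampled statistics, with no asymptotic-variance formula in sight. A minimal sketch, where the data and the statistic, a mean over simulated Gaussians, are placeholders of my own, not anything from the thread:)

```python
import random

def percentile_bootstrap_ci(data, statistic, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement, take quantiles
    of the bootstrap distribution of the statistic."""
    stats = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        stats.append(statistic(resample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Placeholder example: an interval for a mean, without ever writing
# down the estimator's asymptotic variance.
random.seed(1)
sample = [random.gauss(10, 2) for _ in range(200)]
mean = lambda xs: sum(xs) / len(xs)
print(percentile_bootstrap_ci(sample, mean))
```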

I'm not quite sure I follow why the possibility of bootstrapping would imply that we don't need to worry about efficiency. Surely we want our estimates to be as precise as possible, regardless of whether our interval estimates have correct coverage? I mean, extreme example:

With probability 0.95, report the whole real line (-∞, ∞) as your interval; with probability 0.05, report the empty set.

^That's a 95% confidence interval with 95% coverage for any parameter, but it's not really one you'd want to use! Efficiency still matters, I think?
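(A quick simulation bears this out. This is a sketch under my own assumptions about the degenerate interval: with probability 0.95 it is the whole real line, otherwise empty, so its coverage is 95% for literally any parameter value, even though it carries no information about the parameter.)

```python
import random

def silly_interval():
    # With probability 0.95 return the whole real line, else the empty set.
    if random.random() < 0.95:
        return (float("-inf"), float("inf"))  # covers any parameter
    return None  # empty set: covers nothing

def coverage(theta, n_sims=100_000):
    """Fraction of simulated intervals that contain theta."""
    hits = 0
    for _ in range(n_sims):
        ci = silly_interval()
        if ci is not None and ci[0] <= theta <= ci[1]:
            hits += 1
    return hits / n_sims

# Coverage is ~0.95 whether theta is 0 or a million,
# yet the interval tells us nothing about theta.
print(coverage(0.0), coverage(1e6))
```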

While finishing up my next manuscript, I started thinking that one way to “buff it up” would be to talk about the advantages of having a population-defined parameter that some sample statistic purportedly estimates.

Ok so back to the main question... I guess at the end of the day, statistics (or the fun part of it anyway) is all about making inferences about things we haven't directly observed. That might mean using a sample to make inferences about a population, but it could also mean using observations to make inferences about the causal influences that produced those observations.

So the value of statistics relies very heavily on the idea that they are estimates of parameters (parameters of populations, or parameters that describe causal effects). If we have a sample statistic but don't even know what parameter that statistic is intended to estimate, then that statistic surely doesn't achieve anything beyond describing the sample at hand?