Quantifying statistical power when confidence intervals are the same?

#1
I derive two different statistics for characterizing the dispersion of a random variable. One, call it E, is an average of ranges. The other, call it R, is an unbiased estimator of the random variable's dispersion parameter. But E is far more popular than R because it's easier to calculate and observe.

Analyzing a particular sample, I find that the 90% confidence interval for E has a margin of error of +/-20%. On the same sample, the 90% confidence interval for R also has a margin of error of +/-20%. Based on this, it looks like E is as "efficient" as R.

However, I believe that R is a much more powerful and efficient statistic, because E ignores useful information that R doesn't. How can I quantify this?

I calculated the coefficient of variation for each statistic; on this sample it is 0.12 for E and 0.10 for R. Does that demonstrate that R is a better statistic? If so, what is the plain-language explanation of that? Or, how would that difference manifest itself given that the confidence intervals are identical?
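
To make the setup concrete, here is the kind of Monte Carlo comparison I have in mind. The E and R below are illustrative stand-ins only (a d2-scaled average of subgroup ranges and a bias-corrected sample standard deviation for normal data), not my actual statistics:

```python
import numpy as np
from math import lgamma, sqrt, exp

rng = np.random.default_rng(0)
sigma = 1.0                 # true dispersion parameter
n_subgroups, k = 10, 5      # 10 subgroups of size 5 -> total sample of 50
d2 = 2.326                  # E[range]/sigma for normal subgroups of size 5

def c4(n):
    # E[s]/sigma for a normal sample of size n (unbiasing constant)
    return sqrt(2.0 / (n - 1)) * exp(lgamma(n / 2) - lgamma((n - 1) / 2))

E_vals, R_vals = [], []
for _ in range(20000):
    x = rng.normal(0.0, sigma, size=(n_subgroups, k))
    ranges = x.max(axis=1) - x.min(axis=1)
    E_vals.append(ranges.mean() / d2)            # range-based estimate of sigma
    R_vals.append(x.std(ddof=1) / c4(x.size))    # bias-corrected sample sd

for name, v in (("E", np.asarray(E_vals)), ("R", np.asarray(R_vals))):
    print(f"{name}: mean={v.mean():.4f}  var={v.var(ddof=1):.5f}  CV={v.std(ddof=1) / v.mean():.4f}")
```

Both stand-ins are approximately unbiased for sigma, so any difference between them shows up in the spread of the replicated estimates.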
 
#2
Statistical efficiency of an estimator can be quantified by the estimator's variance. Do you know the analytical variance of each estimator, E and R?
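
Concretely: if E and R are both (at least approximately) unbiased for the same parameter, the efficiency of E relative to R is Var(R)/Var(E), and a value below 1 tells you how much precision E gives up. A minimal sketch for estimating it from simulation output, assuming you already have arrays of replicated estimates (the names E_vals and R_vals are placeholders):

```python
import numpy as np

def relative_efficiency(E_vals, R_vals):
    """Efficiency of E relative to R, both estimating the same parameter."""
    return np.var(R_vals, ddof=1) / np.var(E_vals, ddof=1)
```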
 
#3
I have the simulated variance of each estimator. I think I can get an analytic expression of the variance of R, but definitely not of E.

Either way, wouldn't we want to use the coefficient of variation, which normalizes the standard deviation by the mean?
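
For reference, here is how I'm computing the CV from the simulation output (sketch; E_vals and R_vals are placeholder names for the arrays of replicated estimates):

```python
import numpy as np

def cv(vals):
    """Coefficient of variation of the replicated estimates: sd / mean."""
    vals = np.asarray(vals)
    return vals.std(ddof=1) / vals.mean()

# Since E and R target the same parameter, their means roughly agree, so
# cv(E_vals)**2 / cv(R_vals)**2 is approximately var(E_vals) / var(R_vals).
```

If that's right, comparing CVs is essentially the same comparison as comparing variances whenever the two estimators have the same mean.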