We're using the Rayleigh distribution to model some real-world scenarios, and we often need to estimate its parameter (sigma) from a sample R of size N, where N is very small.
The estimator we're using for sigma, \(\widehat{\sigma} = \sqrt{\frac{\sum r_i^2}{2n}}\), is biased.
Through Monte Carlo analysis we noticed that \(c_4^2\) is a good correction factor for N > 10. For N = 2, 3, 4, 5, however, it has the following errors: 5.3%, 2.5%, 1.4%, 0.9%. Am I correct in assuming that the "correct" correction factor would have only rounding error, even for small N?
Now, while we can use the Monte Carlo correction factors in practice, we're curious to see the analytic correction for this estimator (even if it's as difficult to use as \(c_4\)). I don't think I have the capacity to derive it myself, as I lack experience with formal statistics. Could any statisticians help, or at least weigh in on whether the derivation is likely to be trivial, hard, or practically impossible?
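For reference, here is a minimal sketch of the kind of Monte Carlo check we ran (our actual code differs; all function names here are illustrative). It estimates the bias factor \(E[\widehat{\sigma}]/\sigma\) of the estimator above for a few small values of N:

```python
# Monte Carlo estimate of the bias of sigma-hat = sqrt(sum(r_i^2) / (2n))
# for Rayleigh(sigma) samples. Illustrative sketch, not our production code.
import math
import random

def rayleigh_sample(sigma, rng):
    # A Rayleigh(sigma) variate is the norm of two independent N(0, sigma^2) draws.
    return math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))

def sigma_hat(r):
    # The biased estimator discussed above: sqrt(sum(r_i^2) / (2n)).
    n = len(r)
    return math.sqrt(sum(x * x for x in r) / (2.0 * n))

def bias_factor(n, sigma=1.0, trials=200_000, seed=1):
    # Estimates E[sigma-hat] / sigma for samples of size n; the exact
    # correction factor we are asking about would be the reciprocal.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sigma_hat([rayleigh_sample(sigma, rng) for _ in range(n)])
    return total / (trials * sigma)

if __name__ == "__main__":
    for n in (2, 3, 4, 5):
        print(f"N = {n}: E[sigma-hat]/sigma ~ {bias_factor(n):.4f}")
```

Dividing \(\widehat{\sigma}\) by the estimated bias factor gives the empirical correction we currently use in place of the analytic one.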