What's the difference between Rao-Blackwell Theorem and Lehmann-Scheffé Theorem?

Rao–Blackwell says the conditional expected value of an unbiased estimator given a sufficient statistic is another unbiased estimator that's at least as good, i.e. its variance is no larger for any value of the parameter. (I seem to recall that you can drop the assumption of unbiasedness and all you lose is the conclusion of unbiasedness; you still improve the estimator, at least under squared-error or any other convex loss, so you can apply it to MLEs and other possibly biased estimators.) In examples that are commonly exhibited, the Rao–Blackwell estimator is immensely better than the estimator you start with. That's because you usually start with something really crude, because it's easy to find, and you know that the Rao–Blackwell estimator will be pretty good no matter how crude the thing you start with is.
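As a concrete illustration (a standard textbook example, not something taken from the question): suppose $X_1,\dots,X_n$ are i.i.d. Poisson($\lambda$) and you want to estimate $\theta = P(X_1 = 0) = e^{-\lambda}$. The crude estimator $\delta = \mathbf{1}\{X_1 = 0\}$ is unbiased, and $T = \sum_{i=1}^n X_i$ is sufficient. Conditioning on $T$ gives
$$
E[\delta \mid T = t] \;=\; P(X_1 = 0 \mid T = t) \;=\; \left(\frac{n-1}{n}\right)^{t},
$$
since $X_1 \mid T = t \sim \mathrm{Binomial}(t, 1/n)$. This conditional expectation is still unbiased for $e^{-\lambda}$ and has a much smaller variance than the indicator you started with.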

The Lehmann–Scheffé theorem has an additional hypothesis: the sufficient statistic is complete, i.e. the only unbiased estimator of zero that is a function of it is zero itself (almost surely). It also has a stronger conclusion: the estimator you get is the unique uniformly minimum-variance unbiased estimator (UMVUE).
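Spelled out (this is just the standard definition, not a quote from the question), completeness of $T$ means
$$
E_\theta\!\left[g(T)\right] = 0 \ \text{ for every } \theta \quad\Longrightarrow\quad g(T) = 0 \ \text{ almost surely},
$$
so the only unbiased estimator of zero that can be built from $T$ is the trivial one.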

So if an estimator is unbiased and is a function of a complete sufficient statistic, then it's the best possible unbiased estimator. Lehmann–Scheffé gives you that conclusion, but Rao–Blackwell does not. So the statement in the question about what Rao–Blackwell says is incorrect.
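Continuing the Poisson example above, here is a small simulation sketch (hypothetical code written for this answer, not part of the original) that checks both claims numerically: the crude indicator and its Rao–Blackwellization are both unbiased for $e^{-\lambda}$, but the latter has far smaller variance, and since $T = \sum_i X_i$ is complete, Lehmann–Scheffé says it is in fact the UMVUE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (illustrative): X_1, ..., X_n i.i.d. Poisson(lam);
# the target is theta = P(X_1 = 0) = exp(-lam).
n, lam, n_reps = 10, 2.0, 200_000
theta = np.exp(-lam)

x = rng.poisson(lam, size=(n_reps, n))

# Crude unbiased estimator: indicator that the first observation equals zero.
crude = (x[:, 0] == 0).astype(float)

# Rao-Blackwellized estimator: E[crude | T] = ((n - 1) / n) ** T, with T = sum of the sample.
t = x.sum(axis=1)
rb = ((n - 1) / n) ** t

print(f"true theta                   : {theta:.4f}")
print(f"crude estimator   mean / var : {crude.mean():.4f} / {crude.var():.5f}")
print(f"Rao-Blackwellized mean / var : {rb.mean():.4f} / {rb.var():.5f}")
```

On a run like this you should see both sample means land near $e^{-2} \approx 0.135$, while the variance of the conditioned estimator is smaller by more than an order of magnitude.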

It should also be remembered that in some cases it's far better to use a biased estimator than an unbiased estimator.
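A standard illustration (added here for concreteness, not part of the original answer): for normal data, the scaled sum of squares
$$
\hat\sigma^2_c = \frac{1}{c}\sum_{i=1}^n (X_i - \bar X)^2
$$
has its mean squared error minimized at $c = n + 1$, even though only $c = n - 1$ makes it unbiased; the biased choice trades a little bias for a larger reduction in variance.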

Tags:

Statistics