Optimally combining samples to estimate averages

Suppose I have two tables, each of unknown size, and I'd like to estimate the average of their true sizes. I hire two contractors: one guarantees good precision (i.e., her measurement is normally distributed about the true value, with a standard deviation of 5 mm), while the other is a dimwit (i.e., his measurement is also unbiased, but with a standard deviation of 100 mm). What's the optimal way to combine the two measurements into a final estimate of the true average size? The formal way to ask the question is: "Given a sample from a normally-distributed random variable with known variance, and a sample from a second normally-distributed random variable with known (but potentially different) variance, what is the best guess for the expected value of the average of the two random variables?" Is the answer simply to average them? Ideally, I want to prove the answer, whatever it is.

EDIT: I’m not sure if it was clear, but one contractor measures table 1, while the other contractor measures table 2.

Answers

Just averaging them does not account for the different variances, and therefore does not yield an optimal result.

The best (as in minimum-variance, unbiased) estimate of the common expected value $\mu$ of the two distributions $X_1, X_2$ with variances $\sigma^2_1,\sigma^2_2$ is $$\hat\mu(x_1, x_2) = \frac{\sigma_1^2x_2+\sigma_2^2x_1}{\sigma_1^2+\sigma_2^2}.$$

Please correct my notation; I learned this material under its German names. =)
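As a quick numerical sketch (my own illustration, not part of the original answer, and assuming as this answer does that both measurements are of the same underlying quantity), here is how the weighting plays out for the question's $\sigma_1 = 5$ mm and $\sigma_2 = 100$ mm; the measurement values are made up:

```python
# Inverse-variance weighting of two measurements of the same quantity.
# The measurement values are hypothetical; the standard deviations are
# the 5 mm and 100 mm from the question.
sigma1, sigma2 = 5.0, 100.0          # known standard deviations (mm)
x1, x2 = 1002.0, 950.0               # hypothetical measurements (mm)

w1 = sigma2**2 / (sigma1**2 + sigma2**2)   # weight on the precise measurement
w2 = sigma1**2 / (sigma1**2 + sigma2**2)   # weight on the noisy measurement
mu_hat = w1 * x1 + w2 * x2

print(f"weights: {w1:.4f}, {w2:.4f}")        # ~0.9975 and ~0.0025
print(f"combined estimate: {mu_hat:.2f} mm") # dominated by the precise contractor
```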

The joint distribution of $X_1, X_2$ forms an exponential family in $T(x_1, x_2) = \sigma_1^2x_2+\sigma_2^2x_1$, since its probability density function has the form

$$f(x_1, x_2) = \frac{1}{\sqrt{2\pi\sigma_1^2}\sqrt{2\pi\sigma_2^2}}\exp\left(-\frac{(x_1-\mu)^2}{2\sigma_1^2}-\frac{(x_2-\mu)^2}{2\sigma_2^2}\right)$$ $$= h(x)\,A(\mu)\exp\left(\frac{\mu(\sigma_1^2x_2+\sigma_2^2x_1)}{\sigma_1^2\sigma_2^2}\right)$$ with $h(x)A(\mu)$ collecting all the uninteresting stuff. =)
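For completeness, expanding the two squares shows where $T$ comes from:

$$-\frac{(x_1-\mu)^2}{2\sigma_1^2}-\frac{(x_2-\mu)^2}{2\sigma_2^2} = \underbrace{-\frac{x_1^2}{2\sigma_1^2}-\frac{x_2^2}{2\sigma_2^2}}_{\text{depends only on }x}\;\underbrace{-\frac{\mu^2}{2\sigma_1^2}-\frac{\mu^2}{2\sigma_2^2}}_{\text{depends only on }\mu}\;+\;\frac{\mu x_1}{\sigma_1^2}+\frac{\mu x_2}{\sigma_2^2},$$

and the last two terms combine to $\frac{\mu(\sigma_2^2 x_1+\sigma_1^2 x_2)}{\sigma_1^2\sigma_2^2} = \frac{\mu\,T(x_1,x_2)}{\sigma_1^2\sigma_2^2}$, which is exactly the exponent above.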

Therefore $T$ is sufficient; the estimator $\hat\mu$ above depends only on $T$ and is unbiased, so the Lehmann–Scheffé theorem, together with the completeness of $T$ in this exponential family, concludes the proof.
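A small Monte Carlo check (my own sketch, assuming a common true value of 1000 mm and the question's standard deviations) illustrates how much smaller the variance of the weighted estimator is compared to the plain average:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 1000.0                     # assumed common true value (mm)
sigma1, sigma2 = 5.0, 100.0          # known standard deviations (mm)
n = 100_000                          # number of simulated measurement pairs

x1 = rng.normal(mu_true, sigma1, n)  # precise contractor
x2 = rng.normal(mu_true, sigma2, n)  # imprecise contractor

plain = (x1 + x2) / 2
weighted = (sigma2**2 * x1 + sigma1**2 * x2) / (sigma1**2 + sigma2**2)

print("variance of plain average:   ", plain.var())     # theory: (25 + 10000)/4 ≈ 2506
print("variance of weighted average:", weighted.var())  # theory: 25*10000/10025 ≈ 24.9
```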

More generally, the minimum-variance unbiased linear combination of independent, unbiased estimators is their inverse-variance-weighted average.
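As a sketch (not from the original answers; the `combine` helper is hypothetical), the same weighting extends to any number of independent, unbiased estimates with known variances:

```python
import numpy as np

def combine(estimates, variances):
    """Inverse-variance-weighted average of independent, unbiased estimates,
    returning the combined estimate and its variance."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)  # reciprocal variances
    weights = precisions / precisions.sum()
    return weights @ estimates, 1.0 / precisions.sum()

# Two-estimate case from above: sigma = 5 mm and 100 mm.
estimate, variance = combine([1002.0, 950.0], [25.0, 10000.0])
print(estimate, variance)   # ~1001.87 mm, variance ~24.94 mm^2
```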

You can do away with the normality assumption if you restrict yourself to linear estimators. Given $\mu, \sigma_1^2, \sigma_2^2, \sigma_{12}$, you have random variables $X_1, X_2$ with $E(X_1) = E(X_2) = \mu$, $\text{Var}(X_1) = \sigma_1^2$, $\text{Var}(X_2) = \sigma_2^2$, and $\text{Cov}(X_1,X_2) = \sigma_{12}$. You want $l_0, l_1, l_2$ such that $E(l_0 + l_1 X_1 + l_2 X_2) = \mu$ (unbiasedness) and $\text{Var}(l_0 + l_1 X_1 + l_2 X_2)$ is as small as possible (the minimum-variance unbiased linear estimator).

The first condition gives $l_0 + l_1 \mu + l_2 \mu = \mu$ for all $\mu$, i.e., $l_0 = \mu(1 - l_1 - l_2)$ for all $\mu$, which is possible iff $l_0 = 0$ and $l_1 + l_2 = 1$. So the second condition reduces to minimizing $\text{Var}(l_1 X_1 + l_2 X_2)$ over all $l_1, l_2$ subject to $l_1 + l_2 = 1$. Applying Lagrange multipliers to $\text{Var}(l_1 X_1 + l_2 X_2) - \lambda(l_1 + l_2 - 1)$ leads to
$$\begin{aligned}
l_1 \sigma_1^2 + l_2 \sigma_{12} - \lambda &= 0,\\
l_1 \sigma_{12} + l_2 \sigma_{2}^2 - \lambda &= 0,\\
l_1 + l_2 &= 1,
\end{aligned}$$
from which we get our estimator as
$\frac{\alpha_1}{\alpha_1 + \alpha_2} X_1 + \frac{\alpha_2}{\alpha_1 + \alpha_2} X_2$, where $\alpha_1 = \frac{\sigma_2^2 - \sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$ and $\alpha_2 = \frac{\sigma_1^2 - \sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$. Note that if $\sigma_{12} = 0$, this reduces to $\frac{\sigma_2^2 X_1 + \sigma_1^2 X_2}{\sigma_1^2 + \sigma_2^2}$, as noted above.
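A short sketch of this correlated case (the numbers and the `combine_correlated` helper are my own, hypothetical additions):

```python
def combine_correlated(x1, x2, var1, var2, cov12):
    """Minimum-variance unbiased linear combination l1*X1 + l2*X2 (with l1 + l2 = 1)
    of two unbiased estimates with known variances and covariance."""
    denom = var1 * var2 - cov12**2
    a1 = (var2 - cov12) / denom
    a2 = (var1 - cov12) / denom
    l1, l2 = a1 / (a1 + a2), a2 / (a1 + a2)
    return l1 * x1 + l2 * x2, (l1, l2)

# With cov12 = 0 this recovers the inverse-variance weights from above.
print(combine_correlated(1002.0, 950.0, 25.0, 10000.0, 0.0))
# With cov12 > var1 the optimal weight on the noisy measurement turns negative.
print(combine_correlated(1002.0, 950.0, 25.0, 10000.0, 40.0))
```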