
Thread: How to combine correlated vector-valued estimates

  1. #1

    How to combine correlated vector-valued estimates




    I'd like to combine several vector-valued estimates of a physical quantity in order to obtain a better estimate with less uncertainty.

    As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. For independent estimates we simply replace the variance \sigma^2 by the covariance matrix \Sigma and the scalar reciprocal by the matrix inverse (both written with the superscript ^{-1}); the weight matrix then reads (see https://en.wikipedia.org/wiki/Weight...lued_estimates)
    W_i =\Sigma_i^{-1},
    where \Sigma_i stands for the covariance matrix of the vector-valued quantity x_i.

    The weighted mean in this case is:
    \bar{x} = \Sigma_{\bar x} \left(\sum_{i=1}^n \text{W}_i \mathbf{x}_i\right)
    (note that the matrix–vector products do not commute, so the order of the factors must be respected).
    The covariance of the weighted mean is:
    \Sigma_{\bar x} = \left(\sum_{i=1}^n \text{W}_i\right)^{-1}

    For example, consider the weighted mean of the point [1~0]^\top, which has high variance in its second component, and the point [0~1]^\top, which has high variance in its first component:
    x_1 := \begin{bmatrix}1\\0\end{bmatrix}, \qquad \Sigma_1 := \begin{bmatrix}1 & 0\\ 0 & 100\end{bmatrix}
    x_2 := \begin{bmatrix}0\\1\end{bmatrix}, \qquad \Sigma_2 := \begin{bmatrix}100 & 0\\ 0 & 1\end{bmatrix}
    The weighted mean is then:
    \bar {x} = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1} \left(\Sigma_1^{-1} \mathbf{x}_1 + \Sigma_2^{-1} \mathbf{x}_2\right) \\ =\begin{bmatrix}  0.9901 &0\\ 0& 0.9901\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0.9901 \\ 0.9901\end{bmatrix}
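
    For concreteness, here is a minimal NumPy sketch of the example above (the variable names are my own, chosen only for illustration):

    Code:
    import numpy as np

    x1 = np.array([1.0, 0.0])
    x2 = np.array([0.0, 1.0])
    S1 = np.diag([1.0, 100.0])    # Sigma_1
    S2 = np.diag([100.0, 1.0])    # Sigma_2

    W1 = np.linalg.inv(S1)        # weight matrices W_i = Sigma_i^{-1}
    W2 = np.linalg.inv(S2)

    S_bar = np.linalg.inv(W1 + W2)        # covariance of the weighted mean
    x_bar = S_bar @ (W1 @ x1 + W2 @ x2)   # weighted mean

    print(x_bar)   # [0.99009901 0.99009901]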

    On the other hand, for scalar quantities it is well known that correlations between estimates can easily be accounted for. In the general case (see https://en.wikipedia.org/wiki/Weight...r_correlations), suppose that X=[x_1,\dots,x_n]^\top, C is the covariance matrix relating the quantities x_i, \bar x is the common mean to be estimated, and U is the design matrix U=[1, ..., 1]^\top (of length n). The Gauss–Markov theorem states that the minimum-variance estimate of the mean is given by:
    \bar x = \sigma^2_{\bar x} (U^\top C^{-1} X)
    with
    \sigma^2_{\bar x}=(U^\top C^{-1} U)^{-1}
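
    As a quick illustration of these two formulas, here is a sketch with made-up correlated scalar estimates (the numbers are purely illustrative):

    Code:
    import numpy as np

    X = np.array([1.0, 1.2, 0.9])            # scalar estimates x_1, ..., x_n
    C = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 2.0]])          # covariance matrix of the estimates
    U = np.ones_like(X)                      # design matrix [1, ..., 1]^T

    Cinv = np.linalg.inv(C)
    var_xbar = 1.0 / (U @ Cinv @ U)          # sigma^2_{x_bar}
    x_bar = var_xbar * (U @ Cinv @ X)        # minimum-variance estimate of the mean

    print(x_bar, var_xbar)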

    The question is: how can correlated vector-valued estimates be combined?
    In our case, how should we proceed if x_1 and x_2 are not independent and all the terms of their joint covariance matrix are known?
    In other words, are there expressions analogous to the last two formulas for vector-valued estimates?
    Any suggestion or reference, please?

  2. #2
    spunky (TS Contributor)

    Re: How to combine correlated vector-valued estimates


    Got stuck in the moderation queue!
    for all your psychometric needs! https://psychometroscar.wordpress.com/about/
