
Thread: Marginal of Gaussian

  1. #1

    Marginal of Gaussian




    I have a slight problem with the marginal distribution of a Gaussian. Assume (d, r) are jointly Gaussian of dimension 2n. It is known that d = Wr + e, where W is a convolution matrix and e is Gaussian error. We define r_t^{(k)} = (r_{t-k+1}, \ldots, r_t). Marginalizing gives \left(d, r_t^{(k)}\right) \sim \mathcal{N}_{n+k}\left(\begin{pmatrix} \mu_d \\ \mu_{r_t^{(k)}}\end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right).
    The conditional distribution is then
    d|r_t^{(k)} \sim \mathcal{N}_n\left(d; \mu_d + \Sigma_{12}\Sigma_{22}^{-1}\left(r_t^{(k)}-\mu_{r_t^{(k)}}\right), \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right). Assume further that r_t^{(k)}|\kappa_t^{(k)} \sim \mathcal{N}_k\left(r_t^{(k)}; \mu_{r_t^{(k)}|\kappa_t^{(k)}}, \Sigma_3\right).
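    For concreteness, the partitioned-covariance conditioning formulas above can be sketched in numpy. The sizes n = 3, k = 2 and the randomly generated joint covariance are invented purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 3, 2  # hypothetical sizes of d and r_t^(k)

    # Invent a symmetric positive-definite joint covariance for (d, r_t^(k)).
    A = rng.standard_normal((n + k, n + k))
    Sigma = A @ A.T + (n + k) * np.eye(n + k)
    mu = rng.standard_normal(n + k)

    # Partition into the blocks used in the formulas above.
    S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
    S21, S22 = Sigma[n:, :n], Sigma[n:, n:]
    mu_d, mu_r = mu[:n], mu[n:]

    def conditional(r):
        """Mean and covariance of d given r_t^(k) = r."""
        K = S12 @ np.linalg.inv(S22)     # Sigma_12 Sigma_22^{-1}
        cond_mean = mu_d + K @ (r - mu_r)
        cond_cov = S11 - K @ S21         # Schur complement
        return cond_mean, cond_cov

    m, C = conditional(np.zeros(k))
    ```

    The Schur complement S11 - K @ S21 is symmetric positive definite whenever the joint covariance is, which is easy to verify numerically on this example.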

    Is the following then correct? Given that d depends on \kappa only through r,
    p\left(d|\kappa_t^{(k)} \right) = \int p\left(d|r_t^{(k)}\right)p\left(r_t^{(k)}|\kappa_t^{(k)}\right)dr_t^{(k)},
    so d|\kappa_t^{(k)} is also Gaussian. However, I am not sure what the corresponding mean and covariance matrix are. I guess the mean is \mu_d + \Sigma_{12}\Sigma_{22}^{-1}\left(\mu_{r_t^{(k)}|\kappa_t^{(k)}}-\mu_{r_t^{(k)}}\right), but what about the covariance matrix?
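    If the premise holds, this integral is the standard linear-Gaussian compound: writing K = \Sigma_{12}\Sigma_{22}^{-1}, the marginal d|\kappa_t^{(k)} has the guessed mean and covariance \Sigma_{11} - K\Sigma_{21} + K\Sigma_3 K^T. A numpy sketch with made-up block matrices, checking the closed form against Monte Carlo samples:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 2, 2  # hypothetical sizes

    # Invent the joint covariance blocks of (d, r_t^(k)).
    A = rng.standard_normal((n + k, n + k))
    Sigma = A @ A.T + (n + k) * np.eye(n + k)
    S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
    S21, S22 = Sigma[n:, :n], Sigma[n:, n:]
    mu_d, mu_r = np.zeros(n), np.zeros(k)

    # Invent the conditional p(r | kappa): mean mu_{r|kappa} and Sigma_3.
    mu_r_k = np.array([1.0, -0.5])
    B3 = rng.standard_normal((k, k))
    Sigma3 = B3 @ B3.T + np.eye(k)

    K = S12 @ np.linalg.inv(S22)

    # Closed-form compound: d | kappa is Gaussian with
    #   mean = mu_d + K (mu_{r|kappa} - mu_r)
    #   cov  = S11 - K S21 + K Sigma3 K^T
    mean_closed = mu_d + K @ (mu_r_k - mu_r)
    cov_closed = S11 - K @ S21 + K @ Sigma3 @ K.T

    # Monte Carlo check: sample r | kappa, then d | r.
    N = 200_000
    r = rng.multivariate_normal(mu_r_k, Sigma3, size=N)
    cond_cov = S11 - K @ S21
    d = (mu_d + (r - mu_r) @ K.T
         + rng.multivariate_normal(np.zeros(n), cond_cov, size=N))
    ```

    The extra term K\Sigma_3 K^T is the usual "uncertainty in r propagated through the regression matrix" contribution; without it you would be treating r as known rather than random.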

  2. #2
    TS Contributor

    Re: Marginal of Gaussian

    Let me see if I get your point:

    You now assume (d, r, \kappa) jointly follows a multivariate Gaussian distribution. If you know its covariance matrix, you can solve your problem.

    Let

    \Sigma = \begin{bmatrix} \Sigma_d & \Sigma_{d,r} & \Sigma_{d,\kappa} \\ \Sigma_{d,r}^T & \Sigma_r & \Sigma_{r,\kappa} \\ \Sigma_{d,\kappa}^T & \Sigma_{r,\kappa}^T & \Sigma_{\kappa} \end{bmatrix}

    be the covariance matrix of (d, r, \kappa).

    You already know the conditional distributions of d|r and r|\kappa, so \Sigma_{d,r} and \Sigma_{r,\kappa} are known. I also assume you know the marginal distribution of each variable, so the diagonal blocks are known as well.

    The key question is how to interpret your statement:

    given that d only depends on \kappa through r

    If \Sigma_{d,\kappa} is a zero matrix, then d and \kappa are independent. If that is not the case, we need further assumptions on the structure of the covariance matrix, I guess. Or do you mean that (d, \kappa) are conditionally independent given r?
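    Under the conditional-independence reading, \Sigma_{d,\kappa} is determined by the other blocks as \Sigma_{d,r}\Sigma_r^{-1}\Sigma_{r,\kappa}, and equivalently the (d,\kappa) block of the precision matrix is zero. A small numpy sketch where all matrices are invented and the chain structure d \leftarrow r \leftarrow \kappa is built in by construction:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    p = 2  # hypothetical size of each of d, r, kappa

    # Chain d <- r <- kappa: d depends on kappa only through r.
    Sigma_kappa = np.eye(p)
    F = rng.standard_normal((p, p))     # r = F kappa + noise
    G = rng.standard_normal((p, p))     # d = G r + noise
    Sigma_r = F @ Sigma_kappa @ F.T + np.eye(p)
    Sigma_rk = F @ Sigma_kappa
    Sigma_d = G @ Sigma_r @ G.T + np.eye(p)
    Sigma_dr = G @ Sigma_r
    Sigma_dk = G @ Sigma_rk             # = Sigma_dr Sigma_r^{-1} Sigma_rk

    Sigma = np.block([
        [Sigma_d,    Sigma_dr,   Sigma_dk],
        [Sigma_dr.T, Sigma_r,    Sigma_rk],
        [Sigma_dk.T, Sigma_rk.T, Sigma_kappa],
    ])

    # Conditional independence of d and kappa given r shows up as a
    # zero (d, kappa) block in the precision matrix.
    Prec = np.linalg.inv(Sigma)
    ```

    This is the Gaussian graphical-model fact that zeros in the precision matrix encode conditional independences, which is why the question of interpretation matters for filling in \Sigma_{d,\kappa}.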

  3. The Following User Says Thank You to BGM For This Useful Post:

    Tosha (03-26-2015)

  4. #3

    Re: Marginal of Gaussian


    I will try to make myself clearer; I apologize for my poor English.

    We assume \kappa follows a first-order Markov chain, so the joint (d, r, \kappa) is not Gaussian (if I am correct). I will try to formulate the model, a two-level convolved model, below:

    1. Prior: p(\kappa) = \prod_t p(\kappa_t|\kappa_{t-1}), a discrete first-order Markov chain
    2. Likelihood 1: p(r|\kappa) = \mathcal{N}_n\left(r; \mu_{r|\kappa}, \Sigma_{r|\kappa}\right), where \mu_{r|\kappa} depends on which class each \kappa_i belongs to
    3. Likelihood 2: p(d|r) = \mathcal{N}_n\left(d; Wr, \Sigma_{d|r}\right), where W is a fairly broad convolution matrix
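    As a sanity check, the three levels above can be simulated with a minimal numpy sketch; the transition matrix, class means, convolution kernel, and sizes are all invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 50                      # hypothetical chain length
    n_classes = 2
    P = np.array([[0.9, 0.1],   # assumed Markov transition matrix
                  [0.2, 0.8]])
    class_means = np.array([-1.0, 1.0])

    # 1. Prior: discrete first-order Markov chain kappa.
    kappa = np.empty(n, dtype=int)
    kappa[0] = rng.integers(n_classes)
    for t in range(1, n):
        kappa[t] = rng.choice(n_classes, p=P[kappa[t - 1]])

    # 2. Likelihood 1: r | kappa, with the mean picked by each class.
    r = class_means[kappa] + rng.standard_normal(n)

    # 3. Likelihood 2: d = W r + e, W a (here narrow) convolution matrix.
    kernel = np.array([0.25, 0.5, 0.25])
    W = np.zeros((n, n))
    for t in range(n):
        for j, w in zip(range(t - 1, t + 2), kernel):
            if 0 <= j < n:
                W[t, j] = w
    d = W @ r + 0.1 * rng.standard_normal(n)
    ```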

    I have attached a DAG of the model.

    Our goal is to assess
    p(\kappa|d) \propto p(d|\kappa) p(\kappa)
    where
    p(d|\kappa) = \mathcal{N}_n\left(d; W\mu_{r|\kappa}, W\Sigma_{r|\kappa}W^T + \Sigma_{d|r} \right),
    however, we are not able to assess it since the normalization constant is computationally infeasible. Therefore, we use k-th order states \kappa_t^{(k)} = (\kappa_{t-k+1}, \ldots, \kappa_t) and look at the k-th order approximate posterior
    p^{(k)}\left( d|\kappa_t^{(k)}\right) = \int p^{(k)}\left(d|r_t^{(k)}\right)p\left(r_t^{(k)}|\kappa_t^{(k)}\right) dr_t^{(k)}
    As far as I have understood, this is Gaussian since p(d, r) = p(d|r)p(r), where p(r) is a Gaussian approximation to r. Then we can marginalize to find p(d, r_t^{(k)}) by extracting the appropriate rows and columns. Similarly, p(r_t^{(k)}|\kappa_t^{(k)}) = p(r_t^{(k)}|\kappa_t) is also Gaussian.

    Until now I understand what happens, but now I have a note saying: "combining the results above, p(d, r_t^{(k)}|\kappa_t^{(k)}) = p(d|r_t^{(k)})p(r_t^{(k)}|\kappa_t^{(k)}) is also Gaussian; marginalize with respect to r_t^{(k)} to obtain p(d|\kappa_t^{(k)})." To be honest, I really don't see how to find the mean and covariance...
    Last edited by Tosha; 03-26-2015 at 04:50 PM.
