3.2 Integrated nested Laplace approximation

I still do not get it!

It can be used for:

  1. generalized linear mixed models: check this (from Anna B. Kawieck)

  2. spatial and spatio-temporal models

Models that follow this form (latent Gaussian models):

\[y_i|x, \theta \sim \pi(y_i|x_i, \theta), i = 1, ..., n,\]

\[x|\theta \sim N(\mu_\theta, Q(\theta)^{-1})\]

\[\theta \sim \pi(\theta)\]

\(y\) : observed data

\(x\) : the latent Gaussian field (latent effects), assumed to be a Gaussian Markov random field (GMRF)

\(\theta\) : hyperparameters (here you have parameters of the likelihood plus parameters for the parameters of the latent field)

Caution: \(x\) and \(y\) can be high-dimensional, but \(\theta\) should not be (the number of hyperparameters must stay small for INLA to work well)

\(\mu_{\theta}\) : mean of our Gaussian field

\(Q(\theta)\) : the precision matrix, i.e. \(C^{-1}\) (the inverse of the covariance matrix); its inverse \(Q(\theta)^{-1}\) is the covariance we want inside the \(N()\)
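As one concrete (made-up) instance of this general form, take a Poisson GLMM with one covariate \(z_i\) and an iid group effect \(u_{g(i)}\):

\[ y_i|\eta_i \sim \text{Poisson}(e^{\eta_i}), \quad \eta_i = \alpha + \beta z_i + u_{g(i)} \]

\[ x = (\alpha, \beta, u_1, ..., u_m)|\theta \sim N(0, Q(\theta)^{-1}), \quad \theta = \tau_u \sim \pi(\tau_u) \]

Here the only hyperparameter is the precision \(\tau_u\) of the group effect.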

Why are we using precision instead of variance (source: Wikipedia):

  • the precision matrix of a GMRF is sparse, which makes computation much easier (see the sketch below)

  • in a multivariate (joint) normal distribution, conditional independence between two components corresponds to a zero entry in the precision matrix, which makes things easier
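A minimal sketch in base R (made-up numbers, no INLA involved) of the first point: for an AR(1) process, a very simple GMRF, the precision matrix is tridiagonal (mostly zeros, reflecting the Markov / conditional-independence structure), while the corresponding covariance matrix is completely dense.

```r
n   <- 6
phi <- 0.7  # hypothetical AR(1) coefficient

## Tridiagonal precision matrix of a stationary AR(1) with unit innovation variance
Q <- diag(c(1, rep(1 + phi^2, n - 2), 1))
Q[cbind(1:(n - 1), 2:n)] <- -phi
Q[cbind(2:n, 1:(n - 1))] <- -phi

round(Q, 2)        # sparse: zeros everywhere except between neighbours
round(solve(Q), 2) # the covariance matrix: no zeros at all
```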

We are assuming that the mean \(\mu_i\) of our observations \(y_i\) can be linked to a linear predictor \(\eta_i\) through a link function \(g\) (i.e. \(\mu_i = g^{-1}(\eta_i)\)):

\[\eta_i = \alpha + \sum_k \beta_k z_{ki} + \sum_j f^{(j)}(u_{ji})\]

The intercept \(\alpha\), the fixed effects \(\{\beta_k\}\) and the random effects \(\{f^{(j)}\}\) together make up the latent field:

\[ x = (\alpha, \{\beta_k\}, \{f^{(j)}\})|\theta \sim N(\mu(\theta), Q(\theta)^{-1}) \]
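As a hedged sketch of how such a model is specified with the R-INLA package (assuming INLA is installed from www.r-inla.org; `my_data`, `x1` and `group` are hypothetical names used only for illustration), the fixed effects go in the formula as usual and each \(f^{(j)}\) term is declared with `f()`:

```r
library(INLA)

## Hypothetical Poisson GLMM: one fixed effect (x1) and an iid group effect,
## i.e. the latent field x = (intercept, beta_x1, group effects)
formula <- y ~ x1 + f(group, model = "iid")

result <- inla(formula, family = "poisson", data = my_data)
summary(result)

## Posterior marginals are returned directly, e.g.:
## result$marginals.fixed, result$marginals.random, result$marginals.hyperpar
```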

INLA focuses on the individual posterior marginals of the model parameters (see below!) and approximates them, instead of estimating the joint posterior distribution of all parameters (which is what MCMC does).

Posterior marginals of the components of the latent Gaussian field:

\[\pi(x_i|y), \quad i = 1, ..., n\]

Posterior marginals for the hyperparameters of the latent Gaussian model:

\[\pi(\theta_j|y), \quad j = 1, ..., \dim(\theta) \]

Posterior marginals of each element \(x_i\) of the latent field \(x\):

\[\pi(x_i|y) = \int \pi(x_i|\theta, y)\pi(\theta|y)d\theta \]

Posterior marginals for each hyperparameter (integrating out the other hyperparameters \(\theta_{-j}\)):

\[ \pi(\theta_j|y) = \int \pi(\theta|y)\,d\theta_{-j} \]

INLA then replaces these integrals by approximations evaluated on a grid of hyperparameter values, which leads to:

\[\tilde{\pi}(x_i|y) = \sum_k \tilde{\pi}(x_i|\theta_k, y)\,\tilde{\pi}(\theta_k|y) \times \Delta_k \]

\[ \tilde{\pi}(\theta_j|y) = \sum_l \tilde{\pi}(\theta^{*}_l|y) \times \Delta^*_l\]

where the \(\theta_k\) (and \(\theta^*_l\)) are integration points with weights \(\Delta_k\) (and \(\Delta^*_l\)).
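A toy numerical sketch of this weighted-sum idea in base R (not R-INLA itself, and every number is made up): one observation \(y \sim N(x, 1/\theta)\), a latent \(x \sim N(0, 1)\), and a hyperparameter \(\theta\) (the observation precision) with an Exp(1) prior. In this conjugate toy case \(\pi(x|\theta, y)\) is exactly Gaussian, so no Laplace approximation is needed, but the integration over a grid of \(\theta_k\) values with weights \(\Delta_k\) works just like in the formula above.

```r
y <- 1.3  # a single made-up observation

## Grid of hyperparameter values theta_k with constant spacing Delta
theta <- seq(0.05, 10, by = 0.05)
Delta <- diff(theta)[1]

## pi(theta | y) up to a constant: marginal likelihood (y | theta ~ N(0, 1 + 1/theta))
## times the Exp(1) prior, then normalised over the grid
post_theta <- dnorm(y, mean = 0, sd = sqrt(1 + 1 / theta)) * dexp(theta, rate = 1)
post_theta <- post_theta / sum(post_theta * Delta)

## pi(x | theta_k, y) is exactly Gaussian here, with precision 1 + theta_k
## and mean theta_k * y / (1 + theta_k)
x_grid <- seq(-3, 4, by = 0.01)
post_x <- sapply(seq_along(theta), function(k) {
  dnorm(x_grid, mean = theta[k] * y / (1 + theta[k]), sd = 1 / sqrt(1 + theta[k]))
})

## Weighted sum over the grid: pi(x | y) ~ sum_k pi(x | theta_k, y) pi(theta_k | y) Delta_k
marginal_x <- post_x %*% (post_theta * Delta)

plot(x_grid, marginal_x, type = "l", xlab = "x", ylab = "approximate pi(x | y)")
```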

Approximations used for \(\tilde{\pi}(x_i|\theta, y)\):

  • Gaussian approximation

  • Laplace approximation (more accurate, but more costly)

  • simplified Laplace approximation (a compromise between the two), used by default in R-INLA (see the sketch below)
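A hedged sketch of how to switch between these approximations in R-INLA (reusing the hypothetical `formula` and `my_data` from the sketch above): the `strategy` element of `control.inla` takes `"gaussian"`, `"simplified.laplace"` (the default) or `"laplace"`.

```r
## Same hypothetical model as before, fitted with the full Laplace approximation
result_laplace <- inla(formula, family = "poisson", data = my_data,
                       control.inla = list(strategy = "laplace"))
```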

3.2.1 Resources

Blog post from Kathryn Morrison

Book *Bayesian Inference with INLA* from Virgilio Gómez-Rubio