Random vectors

  • Now we return to more than two variables and look at random vectors:

\[ f_\mathbf{X}(\mathbf{x}) = f_{X_1,\dots,X_N}(x_1,\dots,x_N) \]

where

\[ \mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{bmatrix} \]

  • Expectation is straightforward: the vector mean \(\mathbf{\mu}\) is defined element-wise (a short numerical sketch follows the definition):

\[ \mathbf{\mu} = \mathbb{E}[\mathbf{X}] = \begin{bmatrix} \mathbb{E}[X_1] \\ \vdots \\ \mathbb{E}[X_N] \end{bmatrix} \]
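As a quick numerical check, here is a minimal numpy sketch (the distribution, its parameters, and the sample size are illustrative assumptions, not from the notes): the sample mean vector is just the component-wise average of the draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10,000 draws of X = [X_1, X_2, X_3],
# one row per draw, with true mean [1.0, -2.0, 0.5].
samples = rng.normal(loc=[1.0, -2.0, 0.5], scale=1.0, size=(10_000, 3))

# The sample mean vector is the component-wise average,
# matching the element-wise definition of mu = E[X].
mu_hat = samples.mean(axis=0)
print(mu_hat)  # approximately [1.0, -2.0, 0.5]
```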

  • For covariance, each variable has a variance, and there is a covariance between every pair of variables. These are organized into a convenient matrix \(\mathbf{\Sigma}\):

\[ \mathbf{\Sigma} = Cov(\mathbf{X}) = \begin{bmatrix} Var[X_1] & Cov(X_1,X_2) & \cdots & Cov(X_1, X_N)\\ Cov(X_2,X_1) & Var[X_2] & \cdots & Cov(X_2, X_N)\\ \vdots & \vdots & \ddots & \vdots \\ Cov(X_N,X_1) & Cov(X_N, X_2) & \cdots & Var[X_N] \end{bmatrix} \]

which is written more compactly as (see the numerical sketch below):

\[ \mathbf{\Sigma} = \mathbb{E}[(\mathbf{X}-\mathbf{\mu})(\mathbf{X}-\mathbf{\mu})^T] \]
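To make the compact form concrete, here is a short numpy sketch (the correlated data and sample size are illustrative assumptions) that estimates \(\mathbf{\Sigma}\) as the average outer product of the centered samples and checks it against numpy's built-in estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative correlated data: 10,000 draws of X in R^3,
# produced by linearly mixing independent normals.
A = rng.normal(size=(3, 3))
samples = rng.normal(size=(10_000, 3)) @ A.T

# Estimate Sigma = E[(X - mu)(X - mu)^T] as the average
# outer product of the centered samples.
mu_hat = samples.mean(axis=0)
centered = samples - mu_hat
Sigma_hat = centered.T @ centered / len(samples)

# numpy's estimator agrees (rowvar=False: rows are observations;
# bias=True: divide by N, matching the average above).
assert np.allclose(Sigma_hat, np.cov(samples, rowvar=False, bias=True))
print(np.round(Sigma_hat, 3))
```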

  • If the variables are all independent, the covariances are all zero and we have a diagonal covariance matrix (illustrated below):

\[ \mathbf{\Sigma} = Cov(\mathbf{X}) = \begin{bmatrix} Var[X_1] & 0 & \cdots & 0\\ 0 & Var[X_2] & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Var[X_N] \end{bmatrix} \]
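A quick illustration of the diagonal case (the distributions and variances are made up for the example): sampling each component independently gives a sample covariance matrix that is approximately diagonal, with the individual variances on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent components with different variances:
# Var[X_1] = 1, Var[X_2] = 4, Var[X_3] = 0.25.
samples = np.column_stack([
    rng.normal(scale=1.0, size=n),
    rng.normal(scale=2.0, size=n),
    rng.normal(scale=0.5, size=n),
])

Sigma_hat = np.cov(samples, rowvar=False)
print(np.round(Sigma_hat, 3))
# Off-diagonal entries are near zero; the diagonal is
# approximately [1.0, 4.0, 0.25].
```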