Factorization Algorithm

How does this help us do the factorization? The key observation is that, given a vector \(\mathbf{z}\), we can choose a \(\mathbf{v}\) so that \(\mathbf{P}\) reflects \(\mathbf{z}\) onto the \(\mathbf{e}_1\) axis:

\[ \mathbf{P}\mathbf{z} = \begin{bmatrix} \pm \| \mathbf{z} \|\\0 \\ \vdots \\ 0 \end{bmatrix} = \pm \| \mathbf{z} \| \mathbf{e}_1. \]

This uses the fact that \(\mathbf{P}\) is orthogonal and so preserves the norm.

One vector that accomplishes this (giving the \(+\) sign above) is:

\[ \mathbf{v} = \frac{\mathbf{w}}{\| \mathbf{w} \|}, \qquad \mathbf{w} = \| \mathbf{z} \| \mathbf{e}_1 - \mathbf{z}. \]
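
To make this concrete, here is a minimal sketch in Julia, assuming \(\mathbf{P}\) is the Householder reflector \(\mathbf{I} - 2\mathbf{v}\mathbf{v}^T\) defined earlier; the particular \(\mathbf{z}\) and the explicit construction of \(\mathbf{P}\) are purely illustrative:

```julia
using LinearAlgebra

# Reflect z onto the e₁ axis with the Householder matrix P = I - 2vvᵀ.
z = [3.0, 4.0, 0.0]
w = norm(z) * [1.0, 0.0, 0.0] - z   # w = ‖z‖e₁ - z
v = w / norm(w)
P = I - 2 * v * v'

P * z    # ≈ [5.0, 0.0, 0.0] = ‖z‖e₁
```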

The book describes the process in detail, but the essence is to apply such reflections successively, column by column, to turn the matrix \(\mathbf{A}\) into the upper-triangular \(\mathbf{R}\). The product of the orthogonal reflection matrices forms \(\mathbf{Q}\).
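
A bare-bones sketch of that sweep, using the same reflector as above; householder_qr is a made-up name, and a practical implementation would choose the sign of \(\mathbf{w}\) to avoid cancellation and would never build \(\mathbf{P}\) or \(\mathbf{Q}\) as dense matrices:

```julia
using LinearAlgebra

# Sketch: reduce A to upper-triangular R one column at a time,
# accumulating the reflectors into Q.
function householder_qr(A)
    m, n = size(A)
    R = float.(A)                        # work on a copy
    Q = Matrix{Float64}(I, m, m)
    for k in 1:n
        z = R[k:m, k]
        w = norm(z) * [1.0; zeros(m - k)] - z
        norm(w) == 0 && continue         # column already aligned with e₁
        v = w / norm(w)
        P = I - 2 * v * v'
        R[k:m, k:n] = P * R[k:m, k:n]    # zero column k below the diagonal
        Q[:, k:m] = Q[:, k:m] * P        # Q = P₁P₂⋯Pₙ (each Pₖ = Pₖᵀ)
    end
    return Q, R
end

A = rand(5, 3)
Q, R = householder_qr(A)
norm(Q * R - A)    # ≈ 0, and R is upper triangular
```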

Q-less QR and least squares

  • Since we only need \(\mathbf{Q}\) to compute \(\mathbf{Q}^T \mathbf{b}\) in the least-squares solution, we don’t need to form the full \(\mathbf{Q}\) explicitly

  • This leads to a “Q-less” factorization: in Julia, qr returns a factorization whose \(\mathbf{Q}\) is a special “QRCompactWYQ” object that computes \(\mathbf{Q}^T \mathbf{b}\) efficiently without ever forming \(\mathbf{Q}\) (see the sketch below)
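
As a rough illustration of how that looks in practice, using the standard-library LinearAlgebra qr (the matrix sizes here are arbitrary):

```julia
using LinearAlgebra

A = rand(100, 4)
b = rand(100)

F = qr(A)    # Householder QR in compact form
F.Q          # a LinearAlgebra.QRCompactWYQ: the reflectors, not a dense matrix

# F.Q' * b applies the stored reflectors to b; Q itself is never formed.
x = F.R \ (F.Q' * b)[1:size(A, 2)]

x ≈ A \ b    # true: matches Julia's built-in least-squares solve
```

Because the reflectors are stored implicitly, applying \(\mathbf{Q}^T\) to \(\mathbf{b}\) costs far less than constructing the full \(m \times m\) matrix \(\mathbf{Q}\) would.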