QR-Decomposition and Least Squares Approximation

(Part of Section 6.2 pp 343-344 and Section 6.5)

Introduction

If A is an mxn matrix with linearly independent columns, then by constructing an orthonormal basis for the column space of A one obtains a factorization of A as a product QR, where the columns of Q form an orthonormal basis for the column space of A and R is an upper triangular matrix with positive entries on its main diagonal. This factorization is useful, for example, in finding inverses, in solving systems of linear equations and in finding eigenvalues.

> with(linalg):with(linpdt);

```
Warning, the protected names norm and trace have been redefined and unprotected
```

```
Warning, the names GramSchmidt, QRdecomp and leastsqrs have been redefined
```


Description of the QR-Algorithm

Example 1: Consider the matrix

> A:=matrix([[1,1,1],[1,0,2],[1,1,0]]);


whose columns are the linearly independent vectors

> v1:=vector([1,1,1]);
v2:=vector([1,0,1]);
v3:=vector([1,2,0]);

1. Convert the set of vectors {v1, v2, v3} into an orthonormal set.

Normalize the first vector v1 to obtain the first orthonormal vector u1

> r11:=sqrt(innerprod(v1,v1));
u1:= evalm((1/r11)*v1);


Find a vector w2 that is orthogonal to u1

> r12:=innerprod(v2,u1);
p1:=evalm(r12*u1):
w2:=evalm(v2-p1);

Normalize the vector w2 to obtain the second orthonormal vector u2

> r22:=sqrt(innerprod(w2,w2)) ;
u2:=evalm((1/r22)*w2);

Find a vector w3 that is orthogonal to both u1 and u2 by computing the inner products and subtracting the projections:

> r13:=innerprod(v3,u1);
r23:=innerprod(v3,u2);
p2:=evalm(r23*u2 + r13*u1):
w3:=evalm(v3-p2);


Normalize the vector w3 to obtain the third orthonormal vector u3

> r33:=sqrt(innerprod(w3,w3));
u3:=evalm((1/r33)*w3);
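For readers who want to check these steps outside Maple, here is a plain-Python sketch of the same Gram-Schmidt computation (the helper functions `dot`, `scale` and `sub` are our own, not part of the worksheet):

```python
# Gram-Schmidt on the columns of A from Example 1 (plain Python, no Maple).
from math import sqrt

def dot(x, y):
    """Inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(x, y))

def scale(c, x):
    return [c * a for a in x]

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

v1, v2, v3 = [1, 1, 1], [1, 0, 1], [1, 2, 0]

# Step 1: normalize v1 to get u1.
r11 = sqrt(dot(v1, v1))
u1 = scale(1 / r11, v1)

# Step 2: remove the component of v2 along u1, then normalize.
r12 = dot(v2, u1)
w2 = sub(v2, scale(r12, u1))
r22 = sqrt(dot(w2, w2))
u2 = scale(1 / r22, w2)

# Step 3: remove the components of v3 along u1 and u2, then normalize.
r13 = dot(v3, u1)
r23 = dot(v3, u2)
w3 = sub(sub(v3, scale(r13, u1)), scale(r23, u2))
r33 = sqrt(dot(w3, w3))
u3 = scale(1 / r33, w3)
```

Printing `u1`, `u2`, `u3` should match the Maple output up to rounding, and the `rij` values are exactly the entries of the matrix R constructed in step 3.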

2. Construct the matrix Q whose columns are the orthonormal vectors u1, u2 and u3

> Q:=augment(u1,u2,u3);


3. Construct an upper triangular matrix R whose entries are the inner products rij obtained in step 1

> R:=matrix([[r11,r12,r13],[0,r22,r23],[0,0,r33]]);


Perform the multiplication of matrix Q with matrix R.

> multiply(Q,R);


which is the matrix A.
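The Maple output is not reproduced here, but the exact Q and R that the steps above yield can be multiplied by hand or in plain Python; the square-root entries below are the values the computation produces (worth checking against your own worksheet output):

```python
# Verify Q*R = A for Example 1 using the exact factors written with radicals.
from math import sqrt

s2, s3, s6 = sqrt(2), sqrt(3), sqrt(6)

# Q's columns are u1, u2, u3 from the Gram-Schmidt steps above.
Q = [[1 / s3,  1 / s6,  1 / s2],
     [1 / s3, -2 / s6,  0.0   ],
     [1 / s3,  1 / s6, -1 / s2]]

# R collects the rij, with the positive norms r11, r22, r33 on the diagonal.
R = [[s3,  2 / s3,  s3    ],
     [0.0, s6 / 3, -s6 / 2],
     [0.0, 0.0,    1 / s2 ]]

A = [[1, 1, 1], [1, 0, 2], [1, 1, 0]]

# Plain triple-loop product Q*R; it should reproduce A.
QR = [[sum(Q[i][k] * R[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
```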

Least Squares Approximation

Least squares problems are introduced in Section 6.5. The system Ax = b does not have a solution if b is not in the column space of A. In this case an approximate solution is sought.

This problem can be stated as:

(LSP) Find an x that minimizes the quantity || b - Ax ||

Algorithm:

If the vector b is in the column space of A, then x is the solution to Ax = b and we are done.

If b is not in the column space of A, then find the projection p of b onto the column space of A. Now p is the closest vector in the column space of A to b, i.e. || p - b || < || c - b || for any other vector c in the column space. Since p lies in the column space, p = Ax* for some vector x*, so || Ax* - b || <= || Ax - b || for every x. Hence x* is the solution to the least squares problem. Note that the residual Ax* - b is orthogonal to the column space of A; that is, transpose(A)(Ax* - b) = 0. From this we may deduce that the least squares solution must satisfy:

transpose(A) A x = transpose(A) b

This is called the normal equation. The solution x of the normal equation is called the least squares solution to the original system Ax = b.
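The normal equation can be illustrated on a small made-up system. The 3x2 matrix below is hypothetical (it is not from the text), and exact rational arithmetic is used so that the orthogonality of the residual comes out exactly:

```python
# Illustrate transpose(A) A x = transpose(A) b on a hypothetical 3x2 system.
from fractions import Fraction as F

A = [[F(1), F(0)], [F(0), F(1)], [F(1), F(1)]]
b = [F(1), F(2), F(2)]

# Form the 2x2 matrix transpose(A)*A and the 2-vector transpose(A)*b.
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 normal equation by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]

# The residual b - Ax is orthogonal to the column space: transpose(A)(b - Ax) = 0.
r = [b[k] - (A[k][0] * x[0] + A[k][1] * x[1]) for k in range(3)]
```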

Example 2: Given the matrices

> A:=matrix([[4,0],[0,2],[2,2]]);
b:=matrix([[2],[0],[5]]);
x:=matrix([[x1],[x2]]);

find the least squares solution to:

> evalm(A)*evalm(x)=evalm(b);

The system Ax = b is inconsistent. We can verify this by writing the augmented matrix

> AUG:=matrix([[4,0,2],[0,2,0],[2,2,5]]);

and applying Gauss elimination:

> ReducedMatrix:=gausselim(AUG);


The reduced augmented matrix has a row of the form [0 0 | c] with c nonzero, so the system is inconsistent: b is not in the column space of A. Therefore we seek an approximate solution.
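The same elimination can be reproduced in plain Python with exact fractions; the final row of the form [0 0 | c] with c nonzero confirms the inconsistency:

```python
# Forward elimination on the augmented matrix [A | b] from Example 2.
from fractions import Fraction as F

AUG = [[F(4), F(0), F(2)],
       [F(0), F(2), F(0)],
       [F(2), F(2), F(5)]]

# Eliminate below each pivot (no row swaps needed; pivots are already nonzero).
for i in range(2):
    for j in range(i + 1, 3):
        m = AUG[j][i] / AUG[i][i]
        AUG[j] = [a - m * p for a, p in zip(AUG[j], AUG[i])]
```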

Least Squares Procedure:

Premultiply the original system Ax = b by transpose(A) to get the new system

transpose(A) A x = transpose(A) b.

Construct the matrix transpose(A) A by premultiplying A by its transpose:

> trAA:=multiply (transpose(A),A);


Construct the vector transpose(A) b by premultiplying b by the transpose of A:

> trAb:=multiply(transpose(A),b);


The new system trAA*x = trAb is consistent.

> evalm(trAA)*evalm(x) = evalm(trAb);


Construct the augmented matrix of the new system

> AUG1:=matrix([[20,4,18],[4,8,10]]);

apply Gauss elimination

> AUG2:=gausselim(AUG1);

apply backsubstitution to the reduced matrix to get the least squares solution:

> xsol := backsub(AUG2);
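As a cross-check outside Maple, the same 2x2 elimination and backsubstitution can be carried out with exact fractions, giving x1 = 13/18 and x2 = 8/9:

```python
# Solve the normal system 20*x1 + 4*x2 = 18, 4*x1 + 8*x2 = 10 exactly.
from fractions import Fraction as F

AUG1 = [[F(20), F(4), F(18)],
        [F(4),  F(8), F(10)]]

# Eliminate x1 from the second row, then backsubstitute.
m = AUG1[1][0] / AUG1[0][0]
row2 = [a - m * p for a, p in zip(AUG1[1], AUG1[0])]
x2 = row2[2] / row2[1]
x1 = (AUG1[0][2] - AUG1[0][1] * x2) / AUG1[0][0]
```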


******************************************************************

Notice that the solution to minimizing || b - Ax || can be written as x = (transpose(A) A)^(-1) transpose(A) b. Replace A = QR from the QR-factorization; then transpose(A) A = transpose(R) transpose(Q) Q R = transpose(R) R and transpose(A) b = transpose(R) transpose(Q) b. Thus, the least squares solution can be found by solving R x = transpose(Q) b.

******************************************************************

Compute the QR-factorization of A.

> Q := matrix([[2/5*sqrt(5), -2/15*sqrt(5)], [0, 1/3*sqrt(5)], [1/5*sqrt(5), 4/15*sqrt(5)]]);
R := matrix([[2*sqrt(5), 2/5*sqrt(5)], [0, 6/5*sqrt(5)]]);


Solve the linear system R x = transpose(Q) b.

> Qtb := multiply(transpose(Q),b);


> Aug := augment(R,Qtb);


> rref(Aug);
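As a final check (in Python rather than Maple), backsubstitution on R x = transpose(Q) b reproduces the least squares solution found from the normal equations:

```python
# Solve R x = transpose(Q) b for Example 2 using the Q and R given above.
from math import sqrt, isclose

s5 = sqrt(5)
Q = [[2 / 5 * s5, -2 / 15 * s5],
     [0.0,         1 / 3 * s5],
     [1 / 5 * s5,  4 / 15 * s5]]
R = [[2 * s5, 2 / 5 * s5],
     [0.0,    6 / 5 * s5]]
b = [2.0, 0.0, 5.0]

# Compute the 2-vector transpose(Q)*b.
Qtb = [sum(Q[k][i] * b[k] for k in range(3)) for i in range(2)]

# R is upper triangular, so backsubstitute.
x2 = Qtb[1] / R[1][1]
x1 = (Qtb[0] - R[0][1] * x2) / R[0][0]
```

This gives the same solution (13/18, 8/9) as the normal-equation route, as the starred note above predicts.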


Exercises

(Section 6.5) 16 - 20