**Section 5.1**

**Eigenvalues and Eigenvectors**

`> `
**with(linalg):**

Warning, the protected names norm and trace have been redefined and unprotected

**Introduction**

The problem of finding nonzero solutions **x** of the matrix equation *A***x** = λ**x** arises in the formulation of many problems in engineering and the physical sciences. The differential equation *y''* = -*k* *y*, which models spring oscillation with *y* being the displacement from the equilibrium position and *k* the spring constant, is an example of such a problem, with *A* being a second-order linear differential transformation. This is called the "**eigenvalue problem**". The nonzero vector **x** is called the **eigenvector** corresponding to the **eigenvalue** λ.

The World’s Largest Matrix Computation.

Google’s PageRank is an eigenvector of a matrix of order 2.7 billion.

http://www.mathworks.com/company/newsletter/clevescorner/oct02_cleve.shtml

The Anatomy of a Large-Scale Hypertextual Web Search Engine

Sergey Brin and Lawrence Page

http://www7.scu.edu.au/programme/fullpapers/1921/com1921.htm

Face recognition using eigenfaces

http://www.cs.ucsb.edu/~mturk/research.htm

Scrabble Recognition Using EigenLetters

http://www.cc.gatech.edu/classes/cs7322_97_spring/participants/Sumner/final/report.html

**What are Eigenvalues and Eigenvectors?**

*********************************************************************

Let *A* be an *n x n* matrix. A scalar λ is an **eigenvalue** of *A* if there is a nonzero vector **v** in R^n such that *A***v** = λ**v**. The vector **v** is then an **eigenvector** of *A* corresponding to λ.

**********************************************************************

If *A***v** = λ**v**, then (*A* - λ *Id*)**v** = **0**, where **v** is a __NONZERO__ vector.

1.) If **v** is a nonzero vector, then the matrix *A* - λ *Id* is singular. Why?

2.) Since *A* - λ *Id* is singular, we have det(*A* - λ *Id*) = 0.
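As an illustrative cross-check outside the worksheet (a NumPy sketch; the matrix is the one used in Example 1 below), the determinant of *A* - λ *Id* vanishes exactly when λ is an eigenvalue:

```python
import numpy as np

# Matrix from Example 1 below; it is upper triangular,
# so its eigenvalues are the diagonal entries 1, 2, 3.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
Id = np.eye(3)

# 2 is an eigenvalue, so A - 2*Id is singular: det is (essentially) 0.
print(np.linalg.det(A - 2 * Id))

# 5 is not an eigenvalue, so A - 5*Id is nonsingular:
# det = (1-5)*(2-5)*(3-5) = -24.
print(np.linalg.det(A - 5 * Id))
```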

**Example 1: **
Let us consider the
*3x3*
matrix A

`> `

`> `
**A:=matrix([[1,1,1],[0,2,1],[0,0,3]]); **

`> `

Let x be a vector

`> `
**x:=matrix([[x1],[x2],[x3]]);**

`> `

and Id be the
*3x3*
identity matrix

`> `
**Id:=diag(1,1,1);**

The
__eigenvalue problem__
*A***x** = λ**x**
reduces to finding all λ's and the corresponding nonzero vectors x satisfying the equation

`> `
**evalm(A)*evalm(x) = lambda*evalm(x); **

This is equivalent to solving the matrix equation

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0]]);**

`> `
**S:=evalm((A-lambda*Id)*(x))=matrix([[0],[0],[0]]):**

When does this homogeneous system possess a nontrivial solution?

Recall that a homogeneous system has
** nontrivial**
solutions if and only if the determinant of the

coefficient matrix is equal to zero. That is, the coefficient matrix

`> `
**A1:=evalm(A-lambda*Id);**

`> `

must have a zero determinant

`> `
**det(A1)=0;**

`> `

The left-hand side of the resulting equation is called the
__characteristic polynomial__
of A; the Maple command is

`> `
**charpoly(A, lambda);**

The solution set of the characteristic polynomial is

`> `
**s:={solve(det(A1)=0,lambda)};**

`> `

The elements of this solution set are the eigenvalues of A

`> `
**lambda1:=s[1];lambda2:=s[2]; lambda3:=s[3];**

`> `

How do we compute the
__eigenvectors__
; that is, the nonzero vectors associated with each eigenvalue?

To compute the eigenvectors, we substitute the eigenvalues in the matrix equation S. An

eigenvector associated with the eigenvalue
**1**
is the solution of the system

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0]]);**

`> `
**E1:=subs(lambda=s[1],S);**

`> `

The system implies that x2 = x3 = 0 and x1 is arbitrary. An eigenvector corresponding to the

eigenvalue 1 is

`> `
**evector1:=vector([1,0,0]);**

An eigenvector associated with the eigenvalue
**2**
is the solution of the system:

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0]]);**

`> `
**E2:=subs(lambda=s[2],S);**

`> `

The system implies that x3 = 0 and x1 = x2. Thus, an eigenvector corresponding to the

eigenvalue 2 is

`> `
**evector2:=vector([1,1,0]);**

An eigenvector associated with the eigenvalue
**3**
is the solution of the system:

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0]]);**

`> `
**E3:=subs(lambda=s[3],S);**

`> `

The system implies that x1 = x2 = x3. Thus, an eigenvector corresponding to the eigenvalue
**3**
is

`> `
**evector3:=vector([1,1,1]);**

`> `

Matrix A has three distinct eigenvalues and three linearly independent eigenvectors.
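As a hedged cross-check in another system (a NumPy sketch, not part of the Maple worksheet), the same eigenvalues and eigenvectors can be verified directly from the definition *A***v** = λ**v**:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

# NumPy recovers the eigenvalues 1, 2, 3 found with Maple's solve.
assert np.allclose(sorted(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])

# Each eigenvector found above satisfies A v = lambda v.
pairs = [(1.0, np.array([1.0, 0.0, 0.0])),
         (2.0, np.array([1.0, 1.0, 0.0])),
         (3.0, np.array([1.0, 1.0, 1.0]))]
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
print("all three eigenpairs verified")
```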

**Example 2: **
Let us consider another
*4x4*
matrix A

`> `
**A:=matrix([[1,1,1,1],[0,2,0,0],[1,0,3,1],[0,0,0,1]]); **

together with the identity matrix

`> `
**Id:=diag(1,1,1,1);**

and the vector x

`> `
**x:=matrix([[x1],[x2],[x3],[x4]]);**

We want to find all λ's and the corresponding nonzero vectors x satisfying the equation

`> `
**evalm(A)*evalm(x) = lambda*evalm(x); **

This is equivalent to solving the matrix equation

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0],[0]]);**

`> `
**S:=evalm((A-lambda*Id)*(x))=
matrix([[0],[0],[0],[0]]):**

When does this homogeneous system possess a nontrivial solution?

This matrix equation has a nonzero solution x provided the coefficient matrix

`> `
**A1:=evalm(A-evalm(lambda*Id));**

`> `

has a zero determinant

`> `
**det(A1)=0;**

`> `

We solve the characteristic polynomial of A to obtain the eigenvalues. The solution set is

`> `
**s:={solve(det(A1)=0,lambda)};**

`> `

The eigenvalues are:

`> `
**lambda1:=s[1]; lambda2:=s[2]; lambda3:=s[3];lambda4:=s[4];**

`> `

To get an eigenvector associated with the eigenvalue
**1**
, we solve the system:

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0],[0]]);**

`> `
**E1:=subs(lambda=1,S);**

`> `

We get x2 = 0, x3 = -x4, x1 = x4, and x4 is arbitrary. Thus, an eigenvector corresponding to

eigenvalue 1 is

`> `
**ev1:=vector([1,0,-1,1]);**

To get an eigenvector associated with eigenvalue
**2**
, we solve the system

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0],[0]]);**

`> `
**E2:=subs(lambda=2,S);**

`> `

We get x4 = 0, x1 = -x3, x2 = -2 x3, and x3 is arbitrary. Thus, an eigenvector associated with

eigenvalue
**2**
is

`> `
**ev2:=vector([-1,-2,1,0]);**

`> `

To get an eigenvector associated with eigenvalue 2 + √2, we solve the system

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0],[0]]);**

`> `
**E3:=subs(lambda=2+sqrt(2),S);**

`> `

We get x2 = x4 = 0 and x1 = (-1 + √2) x3, with x3 arbitrary. Thus, an eigenvector associated

with the eigenvalue 2 + √2 is

`> `
**ev3:=vector([-1+sqrt(2),0,1,0]);**

`> `

To get an eigenvector associated with eigenvalue 2 - √2, we solve the system

`> `
**(evalm(A)-lambda*evalm(Id))*evalm(x)=
matrix([[0],[0],[0],[0]]);**

`> `
**E4:=subs(lambda=2-sqrt(2),S);**

`> `

We get x2 = x4 = 0 and x1 = (-1 - √2) x3, with x3 arbitrary. Thus, an eigenvector associated

with the eigenvalue 2 - √2 is

`> `
**ev4:=vector([-1-sqrt(2),0,1,0]);**

`> `

Again, matrix A has four distinct eigenvalues and four linearly independent eigenvectors.
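Once more as a hedged cross-check (a NumPy sketch, not part of the worksheet), all four eigenpairs, including the irrational eigenvalues 2 ± √2, can be verified directly:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 2.0, 0.0, 0.0],
              [1.0, 0.0, 3.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
r2 = np.sqrt(2.0)

# The characteristic polynomial factors as (1-l)(2-l)(l^2 - 4l + 2),
# giving eigenvalues 1, 2, 2 - sqrt(2), 2 + sqrt(2).
assert np.allclose(sorted(np.linalg.eigvals(A).real),
                   sorted([1.0, 2.0, 2.0 - r2, 2.0 + r2]))

# Each eigenvector found above satisfies A v = lambda v.
pairs = [(1.0,      np.array([ 1.0,       0.0, -1.0, 1.0])),
         (2.0,      np.array([-1.0,      -2.0,  1.0, 0.0])),
         (2.0 + r2, np.array([-1.0 + r2,  0.0,  1.0, 0.0])),
         (2.0 - r2, np.array([-1.0 - r2,  0.0,  1.0, 0.0]))]
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
print("all four eigenpairs verified")
```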

******************************************************************

How do we compute eigenvalues and eigenvectors?

**Summary**

λ is an eigenvalue of an *n x n* matrix A

if and only if

(A - λ Id)x = 0 has a non-trivial solution

if and only if

A - λ Id is a singular matrix

if and only if

det(A - λ Id) is equal to zero.

The Maple command is: **charpoly(A, lambda);**

***********************************************************************

**Some Properties of Eigenvalues and Eigenvectors**

****************************************************************

Let
*A*
be an
*n x n *
matrix.

1.) If λ is an eigenvalue of A with v as a corresponding eigenvector, then λ^k is an

eigenvalue of A^k, again with v as a corresponding eigenvector, for any positive

integer k.

2.) If λ is an eigenvalue of an invertible matrix A with v as a corresponding eigenvector,

then λ ≠ 0 and 1/λ is an eigenvalue of A^(-1), again with v as a corresponding eigenvector.

3.) With each eigenvalue λ of A, we associate an eigenspace E_λ. This space consists of all

linear combinations of the linearly independent eigenvectors associated with

the eigenvalue, together with the zero vector.

*********************************************************************
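Properties 1.) and 2.) can be illustrated numerically. A sketch (not from the worksheet), using the Example 1 matrix and its eigenpair λ = 2, v = [1, 1, 0]:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
lam = 2.0
v = np.array([1.0, 1.0, 0.0])   # eigenvector of A for lambda = 2

# Property 1.): lambda^k is an eigenvalue of A^k, same eigenvector v.
k = 3
assert np.allclose(np.linalg.matrix_power(A, k) @ v, lam**k * v)

# Property 2.): A is invertible (det = 1*2*3 = 6 != 0), so 1/lambda
# is an eigenvalue of A^(-1), again with eigenvector v.
assert np.allclose(np.linalg.inv(A) @ v, (1.0 / lam) * v)
print("properties 1 and 2 hold for this eigenpair")
```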

********************************************************************

Let T be a linear transformation of a vector space V into itself. A scalar λ is an eigenvalue

of T if there is a nonzero vector v in V such that T(v) = λ v. The vector v is called

an eigenvector of T corresponding to λ.

*********************************************************************

In
**Example 1**
, for the matrix A

`> `
**A:=matrix([[1,1,1],[0,2,1],[0,0,3]]);**

`> `

For the eigenvalue
**1**
, the eigenspace E_1 consists of all vectors of the form [t, 0, 0] with t

an arbitrary scalar. The set consisting of the eigenvector [1,0,0] is a basis for this eigenspace.

For the eigenvalue
**2**
, the eigenspace E_2 consists of all vectors of the form [t, t, 0] with t

an arbitrary scalar. The set consisting of the eigenvector [1,1,0] is a basis for this eigenspace.

For the eigenvalue
**3**
, the eigenspace E_3 consists of all vectors of the form [t, t, t] with t

an arbitrary scalar. The set consisting of the eigenvector [1,1,1] is a basis for this eigenspace.

**Example 3:**
Consider the matrix

`> `

`> `
**A:=matrix([[3,-1,-2],[2,0,-2],[2,-1,-1]]);**

`> `

`> `

`> `
**eigenvals(A);**

`> `
**eigenvects(A);**

`> `
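The worksheet output is not reproduced above, so here is a hedged sketch of what `eigenvals` and `eigenvects` should report (a NumPy illustration, assuming the matrix as entered): the characteristic polynomial is λ(λ - 1)², so the eigenvalue 1 is repeated, yet its eigenspace is two-dimensional, and A still has three linearly independent eigenvectors.

```python
import numpy as np

A = np.array([[3.0, -1.0, -2.0],
              [2.0,  0.0, -2.0],
              [2.0, -1.0, -1.0]])

# Eigenvalues: 0 (simple) and 1 (repeated twice).
eigs = np.linalg.eigvals(A)
assert np.allclose(np.sort(eigs.real), [0.0, 1.0, 1.0], atol=1e-6)

# A - 1*Id has rank 1, so the eigenspace of the repeated
# eigenvalue 1 has dimension 3 - 1 = 2: despite the repeated
# eigenvalue, A has three linearly independent eigenvectors.
assert np.linalg.matrix_rank(A - np.eye(3)) == 1
print("eigenvalue 1 has a 2-dimensional eigenspace")
```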

**Example 4:**
Consider the matrix

`> `

`> `
**A:=matrix([[1,0,0],[0,2,0],[3,0,1]]);**

`> `

`> `

`> `
**eigenvals(A);**

`> `
**eigenvects(A);**

`> `
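By contrast with Example 3, here the repeated eigenvalue has only a one-dimensional eigenspace, so this matrix has fewer independent eigenvectors than its size. A NumPy sketch (an illustration, not worksheet output):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [3.0, 0.0, 1.0]])

# Eigenvalues: 2 (simple) and 1 (repeated twice).
eigs = np.linalg.eigvals(A)
assert np.allclose(np.sort(eigs.real), [1.0, 1.0, 2.0], atol=1e-6)

# A - 1*Id has rank 2, so the eigenspace of the repeated
# eigenvalue 1 has dimension 3 - 2 = 1: only one independent
# eigenvector for lambda = 1, and only two in total.
assert np.linalg.matrix_rank(A - np.eye(3)) == 2
print("eigenvalue 1 has only a 1-dimensional eigenspace")
```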

**Exercises**

1-15 (odd).