Eigenvalues

The document provides an introduction to eigenvalues and eigenvectors, detailing the equation Ax=λx and its implications in linear algebra. It discusses applications of matrix diagonalization, the significance of eigenspaces, and the conditions under which a matrix can be diagonalized. Additionally, it covers the relationship between eigenvalues, eigenvectors, and matrix transformations, including examples and proofs of concepts.


Eigenvalues and Eigenvectors - Introduction Solutions of Ax=x

Ax=x A=S S-1 When  and x are both unknown, Ax=x is a nonlinear equation.

 Applications If is known   A  I  x  0 a linear problem.

- Applications of Matrix Diagonalization: Powers Ak and (i) The vector x is in the nullspace of AI

Exponential eAt (ii) The number  is chosen so that AI has a nullspace

 Markov processes (application of Ak)  The nullspace of AI is then called “eigenspace” of A. All x’s in the

 Ordinary Differential Equation (the most often mentioned but is eigenspace are eigenvectors corresponding to the same .

just one of many eAt applications)  With the nullspace of A is an eigenspace of A.

- Minimum and Maximum Principles  We are interested in nonzero vector x  AI must be singular 

 Gradient Search det  A  I   0. (Characteristic equation)

 Principal Component/Discriminant Analysis  Conventional solution: find first by det  A  I   0 . Then, the

- Change of Basis = Similarity Transformation corresponding x can be found by the nullspace  A   I  x  0.


 Ax=x  this is an attempt to simplify the transformation by a 4  5
 Example: A    Find x and  such that Ax=x
2  3
matrix A to a simple multiplication by a number .
4   5 
det( A  I )  det  =0  4    3     10  0 
 Problem:  2  3   

Find certain x such that transformation A can be simplified into  b  b2  4ac 1  9


2    2  0      1 or 2
2a 2
multiplication of a number , i.e. Ax=x
5  5   y  0  1
1  1:  A  1 I x        . x1  c1  .
This is only possible for certain vectors, x, called eigenvectors in  2  2  z  0  1
certain subspaces called eigenspaces and is called “eigenvalue” of A 2  5  y  0  5
2  2 :  A   2 I x        . x2  c2  .
2  5  z  0 2
1 2
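The worked example can be checked numerically. A minimal sketch with NumPy (which these notes do not use; `np.linalg.eig` returns the eigenvalues and unit-norm eigenvector columns, so the eigenvectors agree with c·[1; 1] and c·[5; 2] up to scaling):

```python
import numpy as np

# A from the worked example: eigenvalues should be -1 and 2.
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])

lams, X = np.linalg.eig(A)   # columns of X are (normalized) eigenvectors

# Each pair must satisfy A x = lambda x
for lam, x in zip(lams, X.T):
    assert np.allclose(A @ x, lam * x)

print(sorted(lams.real))     # approximately -1 and 2
```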
A different perspective of Ax=x Trace and Eigenvalues
 Ax: Transformation of a vector x to Ax 1 0 0 0
0 0 0 0
 x: a multiple of the vector x  a vector in the same direction Ex: P    has   1, 1, 0, 0.
0 0 0 0
0 0 0 1
 Ax=x: A transformation of “certain” vector x by A becomes a 

multiple of the vector x itself. Ex: a triangular A


1 4 5  1  4 5
1 1
1  
Ex: P   12 2
1
projects any vector onto   Eigenvectors? Eigenvalues? A= 0 3 / 4 6  det  A   I   0 3
 6  1     43     12    .
1   4
 2 2 
2 
0 0 1 / 2  0 0 1

Px=1x=x  column space of P is the eigenspace with 1=1  = 1 or 3/4 or 1/2  eigenvalues are diagonal entries

Px=0x=0  nullspace is an eigenspace too with 2=0  How to transform matrix A into a diagonal or triangular matrix

If a projection matrix P has dim()=r and dim()=nr, =1 has a without changing its eigenvalues? Elimination doesn’t work any more!

r-dimensional eigenspace (repeats r times) and =0 has a  To find eigenvalues is already a headache (unlike solving Ax=b where

(nr)-dimensional eigenspace (repeats nr times). elimination always works)

 Only for “certain” vector x, transformation A can be simplified?  Some checks on the eigenvalues (proof?)

Useless for other vectors? 1. 1    n  a11    ann  trace

 3 0 1 0 2. det A  12  n


Ex: A    has 1  3 with x1   , 2  2 with x2   .
0 2 0 1
a b 
1
How about x    ? Not an eigenvector!  c d  has trace a  d , determinant ad  bc
 
5 12
trace    trace   4 det 
2

3 a   b    .
Let x=x1+5x2  Ax  1 x1  52 x2   . The action of A can still be det    2   trace    determinant;  
 c d    2
10
determined by eigenvalues and eigenvectors!
3 4
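The two quick checks (sum of eigenvalues = trace, product = determinant) are easy to verify numerically; a small sketch on a random matrix:

```python
import numpy as np

# Check: sum of eigenvalues = trace, product of eigenvalues = det.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lams = np.linalg.eigvals(A)   # may be complex, in conjugate pairs

assert np.allclose(lams.sum(), np.trace(A))          # check 1
assert np.allclose(np.prod(lams), np.linalg.det(A))  # check 2
```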
Diagonal Form of a Matrix

- An n×n matrix A with n linearly independent eigenvectors from its eigenspaces satisfies
  S⁻¹AS = Λ = diag(λ1, λ2, …, λn),
  where S's columns are formed by the eigenvectors and λ1, λ2, …, λn may not all be distinct.

  Proof:
  AS = A·[x1 x2 … xn] = [λ1·x1 λ2·x2 … λn·xn] = [x1 x2 … xn]·diag(λ1, …, λn) = SΛ.
  → AS = SΛ, or S⁻¹AS = Λ, or A = SΛS⁻¹.

- Not all matrices are diagonalizable. When diagonalizable, only an S with eigenvector columns can diagonalize A into Λ with the eigenvalues as its diagonal entries.
- If there are n distinct eigenvalues, all eigenvectors are independent → the matrix can be diagonalized.
- S is not unique. Example: A = I: eigenvalue = 1; the eigenspace fills up the entire n-dimensional space.

Independent Eigenvectors

  Ex: defective matrix A = [0 1; 0 0] with λ = 0, 0.
  If S⁻¹AS = Λ = 0, then A = SΛS⁻¹ = S·0·S⁻¹ = 0.
  But A is not zero! Contradictory! → A is not diagonalizable!
  A is defective not because the eigenvalues are repeated zeros but because not enough independent eigenvectors can be found.

  Ex: A = [3 1; 0 3] (invertible, not diagonalizable) and A = I (diagonalizable, repeated eigenvalue):
  Diagonalizability is concerned with the eigenvectors.
  Invertibility is concerned with the eigenvalues.

- If the eigenvectors correspond to distinct eigenvalues, then those eigenvectors are linearly independent.
  Eigenvalues different "λ" → eigenvectors independent.
  Reason: for a 2 by 2 matrix, let the eigenvectors be x1 and x2 corresponding to λ1 and λ2. Let c1·x1 + c2·x2 = 0 (1); then A(c1·x1 + c2·x2) = 0 →
  c1·λ1·x1 + c2·λ2·x2 = 0 (2).  (2) − λ2·(1) →
  c1·(λ1 − λ2)·x1 = 0. But λ1 ≠ λ2 and x1 ≠ 0 → c1 = 0;
  the same holds for c2, and for any number of eigenvectors.
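Both sides of this slide pair can be demonstrated in a few lines: diagonalize the earlier 2×2 example, then watch diagonalization fail for the defective matrix [0 1; 0 0], whose eigenvector matrix is singular (a sketch, using NumPy's `eig`):

```python
import numpy as np

# Diagonalization A = S Lambda S^-1 for a matrix with distinct eigenvalues.
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
lams, S = np.linalg.eig(A)      # S's columns are eigenvectors
Lam = np.diag(lams)

assert np.allclose(S @ Lam @ np.linalg.inv(S), A)   # A = S Lambda S^-1

# Defective matrix [0 1; 0 0]: eigenvalue 0 repeats, but only one
# independent eigenvector exists, so no invertible eigenvector matrix S.
D = np.array([[0.0, 1.0],
              [0.0, 0.0]])
_, V = np.linalg.eig(D)
assert np.linalg.matrix_rank(V) == 1   # eigenvector matrix is singular
```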
Examples of Diagonalization

  Ex: projection matrix A = [1/2 1/2; 1/2 1/2]
  → S = [1 1; 1 −1] and AS = SΛ with Λ = [1 0; 0 0].

  Ex: rotation K = [0 −1; 1 0].
  K rotates any vector through 90°. Are there vectors rotated without changing their direction? Does K have eigenvalues? Yes!

  det(K − λI) = λ² + 1 = 0 → λ = i and −i
  (K − λ1·I)x1 = [−i −1; 1 −i][y; z] = [0; 0] and x1 = [1; −i]
  (K − λ2·I)x2 = [i −1; 1 i][y; z] = [0; 0] and x2 = [1; i]
  → S = [1 1; −i i] and S⁻¹KS = [i 0; 0 −i].

- Complex numbers are needed even for real matrices.

  Ex: A²x = A(Ax) = A(λx) = λ²x → the eigenvalues of A² are the squares of the eigenvalues of A.
  Ex: (S⁻¹AS)(S⁻¹AS) = Λ², or S⁻¹A²S = Λ².

More on Eigenvalues and Eigenvectors

  Example 1: the 4×4 all-ones matrix
  [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1]
  has rank 1, so λ = 4, 0, 0, 0 (check: trace = 4 = sum of the eigenvalues).

  Example 2: the 4×4 checkerboard matrix
  [0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0]
  has rank 2, so λ = 2, −2, 0, 0 (check: trace = 0).
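The rotation example makes the "complex numbers are needed" point concrete; a quick numerical check that K has eigenvalues ±i and is still diagonalizable, just over the complex numbers:

```python
import numpy as np

# A real rotation matrix has complex eigenvalues: K rotates by 90 degrees.
K = np.array([[0.0, -1.0],
              [1.0,  0.0]])

lams, S = np.linalg.eig(K)
assert np.allclose(sorted(lams, key=lambda z: z.imag), [-1j, 1j])

# Diagonalization works over the complex numbers: S^-1 K S = diag(i, -i)
assert np.allclose(np.linalg.inv(S) @ K @ S, np.diag(lams))
```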
Powers: A^k = SΛ^kS⁻¹

- A^k = (SΛS⁻¹)(SΛS⁻¹)…(SΛS⁻¹) = SΛ^kS⁻¹
- For invertible A:
  if Ax = λx then x = A⁻¹λx and (1/λ)x = A⁻¹x.
  The eigenvalues of A⁻¹ are 1/λ.

  Example: rotation through 90°:
  K = [0 −1; 1 0], K² = [−1 0; 0 −1], and K⁻¹ = [0 1; −1 0].
  Eigenvalues for the square are i² = −1 and (−i)² = −1;
  eigenvalues for the inverse are 1/i = −i and 1/(−i) = i.
  K⁴ = [1 0; 0 1] and also Λ⁴ = [i⁴ 0; 0 (−i)⁴] = [1 0; 0 1].
  Why? Physically: rotating 90° four times = rotating 360°.

- A = SΛS⁻¹ is extremely useful when taking powers of A:
  A^k = SΛ^kS⁻¹,
  and A = LU does not help at all in this aspect.

Product of Two Matrices

- ABx = Aβx = βAx = βλx, where β is the eigenvalue of B?
  THIS IS FALSE!! The eigenvectors are different for A and B!

  Example: AB = [0 1; 0 0][0 0; 1 0] = [1 0; 0 0].
  (A and B have only zero eigenvalues, yet AB has eigenvalue 1.)

- Diagonalizable A and B share the same eigenvector matrix S if and only if AB = BA.
  Proof: if A = SΛ1S⁻¹ and B = SΛ2S⁻¹, then
  AB = SΛ1S⁻¹SΛ2S⁻¹ = SΛ1Λ2S⁻¹ and
  BA = SΛ2S⁻¹SΛ1S⁻¹ = SΛ2Λ1S⁻¹, but Λ2Λ1 = Λ1Λ2
  → AB = BA.

  Opposite direction: let AB = BA →
  ABx = BAx = Bλx = λBx.
  Bx and x are both eigenvectors of A corresponding to the same λ:
  assuming distinct eigenvalues, Bx must then be a multiple of x: Bx = βx
  (for a repeated eigenvalue corresponding to an eigenspace, the proof is longer).
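Both claims on this slide pair are easy to test numerically: A^k via SΛ^kS⁻¹ against repeated multiplication, and the counterexample showing eigenvalues of AB are not products of eigenvalues (a sketch):

```python
import numpy as np

# A^k via diagonalization versus repeated multiplication.
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])   # eigenvalues -1 and 2
lams, S = np.linalg.eig(A)

k = 8
Ak = S @ np.diag(lams**k) @ np.linalg.inv(S)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))

# Eigenvalues of a product are NOT products of eigenvalues:
B1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [1.0, 0.0]])
# B1 and B2 have only zero eigenvalues, yet B1 @ B2 has eigenvalue 1.
assert np.allclose(sorted(np.linalg.eigvals(B1 @ B2)), [0.0, 1.0])
```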
Fibonacci Sequence and Difference Equations

- Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, …
  F_{k+2} = F_{k+1} + F_k.
  This is a form of difference equation. Numbers in the sequence turn up in fantastic natural patterns (e.g. sunflower seeds).

- Question: the 1000th Fibonacci number?
  Reduce it to a u_{k+1} = A·u_k problem (like the compounded-interest problem). Let u_k = [F_{k+1}; F_k]. Then
  F_{k+2} = F_{k+1} + F_k
  F_{k+1} = F_{k+1}
  becomes u_{k+1} = [1 1; 1 0]·u_k.
  The second equation is trivial and is a standard trick for second-order difference equations. For higher-order equations, say order s, we need s − 1 trivial equations.

- The solution to u_{k+1} = A·u_k:
  u_k = A^k·u_0.

Power of a Matrix and Difference Equations

- If A = SΛS⁻¹ then
  u_k = A^k·u_0 = (SΛS⁻¹)(SΛS⁻¹)…(SΛS⁻¹)·u_0 = SΛ^kS⁻¹·u_0.
  By setting S⁻¹u_0 = c (so Sc = u_0, i.e. expressing u_0 as a linear combination of eigenvectors), the solution becomes
  u_k = SΛ^k·c = [x1 … xn]·diag(λ1^k, …, λn^k)·[c1; …; cn] = c1·λ1^k·x1 + … + cn·λn^k·xn.
  The solution is a combination of the "pure solutions" λi^k·xi.

- Another way of looking at the solution:
  If u_0 = c1·x1 + … + cn·xn, i.e. u_0 = Sc, then
  u_k = A^k·u_0 = A^k(c1·x1 + … + cn·xn)
      = c1·A^k·x1 + … + cn·A^k·xn
      = c1·λ1^k·x1 + … + cn·λn^k·xn.
  The c's are determined by the initial conditions:
  u_0 = [x1 … xn]·[c1; …; cn] = Sc and c = S⁻¹u_0.
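The u_{k+1} = A·u_k recursion above can be run directly; a minimal sketch using exact integer arithmetic (object dtype), so there is no eigenvalue round-off:

```python
import numpy as np

# Fibonacci via the 2x2 companion matrix: u_k = A^k u_0, u_k = (F_{k+1}, F_k).
A = np.array([[1, 1],
              [1, 0]], dtype=object)   # object dtype keeps exact big integers

u = np.array([1, 0], dtype=object)     # u_0 = (F_1, F_0)
for _ in range(10):
    u = A @ u                          # u_{k+1} = A u_k

print(u[1])   # F_10 = 55
```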
More on Fibonacci and Difference Equations

  A = [1 1; 1 0], det(A − λI) = λ² − λ − 1 = 0, λ1 = (1 + √5)/2, λ2 = (1 − √5)/2,
  with eigenvectors x1 = [λ1; 1] and x2 = [λ2; 1],
  and initial conditions u_0 = [F_1; F_0] = [1; 0]:
  c = S⁻¹u_0 = [λ1 λ2; 1 1]⁻¹·[1; 0] = (1/(λ1 − λ2))·[1; −1].

  We have u_k = c1·λ1^k·x1 + c2·λ2^k·x2 →
  F_k = (λ1^k − λ2^k)/(λ1 − λ2) = (1/√5)·[((1 + √5)/2)^k − ((1 − √5)/2)^k].

  Since the second term ((1 − √5)/2)^k/√5 is less than 1/2 and becomes insignificant when k is large,
  F_{k+1}/F_k → (1 + √5)/2 ≈ 1.618 — the Golden ratio!!

- Simplicity of A^k computation with diagonalization:
  A = [−4 −5; 10 11] has λ1 = 1 with x1 = [1; −1] and λ2 = 6 with x2 = [−1; 2]:
  A^k = SΛ^kS⁻¹ = [1 −1; −1 2][1 0; 0 6^k][2 1; 1 1]
      = [2 − 6^k, 1 − 6^k; −2 + 2·6^k, −1 + 2·6^k].

Markov Processes and Difference Equations

- Example: each year 1/10 of the people outside California move in, and 2/10 of the people inside California move out. What is the population of California after 5 years, 10 years, or 100 years? (y: population outside, z: inside)

  y_1 = .9·y_0 + .2·z_0
  z_1 = .1·y_0 + .8·z_0      or      [y_1; z_1] = [.9 .2; .1 .8][y_0; z_0].

- Markov matrix: 1. Each column adds up to 1.  2. All entries are nonnegative.
- Solving the Markov process:
  A = [.9 .2; .1 .8], det(A − λI) = λ² − 1.7λ + .7 = 0,
  λ1 = 1 and λ2 = .7,
  A = SΛS⁻¹ = [2 1; 1 −1][1 0; 0 .7]·(1/3)[1 1; 1 −2].

  [y_k; z_k] = A^k·[y_0; z_0]
             = (y_0 + z_0)·(1)^k·(1/3)[2; 1] + (y_0 − 2·z_0)·(.7)^k·(1/3)[1; −1].

- Solution when time approaches infinity:
  [y_∞; z_∞] = (y_0 + z_0)·(1/3)[2; 1], i.e.
  [.9 .2; .1 .8][2/3; 1/3] = [2/3; 1/3], or A·u_∞ = u_∞.
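The California example can be iterated directly to see the convergence to the λ = 1 eigenvector; a small sketch (total population normalized to 1):

```python
import numpy as np

# The California Markov chain: iterate u_{k+1} = A u_k until the
# lambda = 0.7 component dies out, leaving the steady state (2/3, 1/3).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

u = np.array([1.0, 0.0])           # everyone starts outside California
for _ in range(100):               # 0.7^100 is negligibly small
    u = A @ u

assert np.allclose(u, [2/3, 1/3])  # steady state: eigenvector with lambda = 1
assert np.allclose(A @ u, u)       # A can no longer change it
```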
Stability of Markov Process

In the example of the Markov process: the steady state is the eigenvector of A corresponding to λ = 1.

- For a Markov process:
  (a) λ1 = 1 is an eigenvalue (each column adds up to 1).
  (b) Its eigenvector x1 is nonnegative and is the steady state, since Ax1 = x1 (i.e. A can no longer change x1).
  (c) The other eigenvalues satisfy |λi| ≤ 1.
  (d) If any power of A has all positive entries, then these other |λi| < 1. The solution A^k·u_0 then approaches a multiple of x1 = u_∞
      (e.g. California's population approaches (y_0 + z_0)·(1/3)[2; 1]).

  Reason for (a): each column of A − 1·I adds up to 1 − 1 = 0 → the rows of A − I add up to the zero row → A − I is singular.
  (b): the steady state should maintain positive proportions.
  (c)(d): otherwise u_k = c1·λ1^k·x1 + … + cn·λn^k·xn would blow up. λi^k (other than λ = 1) goes to zero when k becomes very large → u_k → c1·x1 = u_∞.

System Stability and Eigenvalues

  Example:
  Fibonacci numbers and compounded interest become larger and larger → unstable.
  A Markov process converges to constants → neutrally stable.

- Given u_k = SΛ^kS⁻¹·u_0 = c1·λ1^k·x1 + … + cn·λn^k·xn, the difference equation u_{k+1} = A·u_k is
  stable if all eigenvalues satisfy |λi| < 1 (A^k → zero),
  neutrally stable if some |λi| = 1 and the other |λi| < 1,
  unstable if at least one eigenvalue has |λi| > 1.

  Example: the stable matrix A = [0 4; 0 1/2] has eigenvalues 0 and 1/2:
  u_0 = [0; 1], u_1 = [4; 1/2], u_2 = [2; 1/4], u_3 = [1; 1/8], u_4 = [1/2; 1/16], …
  The first step is to split u_0 into two eigenvectors:
  [0; 1] = [−8; 0] + [8; 1].
  The first vector is multiplied by its eigenvalue 0 and is thus annihilated.
  The second vector is cut in half at every step.
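The decay in the [0 4; 0 1/2] example is quick to reproduce; a sketch that matches the hand computation step by step:

```python
import numpy as np

# Stable difference equation u_{k+1} = A u_k with eigenvalues 0 and 1/2.
A = np.array([[0.0, 4.0],
              [0.0, 0.5]])

u = np.array([0.0, 1.0])       # u_0 = (0, 1) = -8*(1, 0) + 1*(8, 1)
seq = [u]
for _ in range(4):
    u = A @ u
    seq.append(u)

# Matches the hand computation: (4, 1/2), (2, 1/4), (1, 1/8), (1/2, 1/16)
assert np.allclose(seq[1], [4.0, 0.5])
assert np.allclose(seq[4], [0.5, 0.0625])
# After the first step the lambda = 0 part is gone; each step halves u
assert np.allclose(seq[3], 2 * seq[4])
```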
Differential Equations Example: Diffusion Model

- v and w are concentrations.
- At each time t, the diffusion rate between two adjacent segments equals the difference in concentrations.
- Concentrations in S0 and S3 are forever zero because they have infinite ends.
- Differential equations:
  dv/dt = (w − v) + (0 − v)
  dw/dt = (v − w) + (0 − w)
- Then the system becomes:
  du/dt = Au, u = u_0 at t = 0.

Solution to Ordinary Differential Equations (ODE)

- Solution? Look at du/dt = au, u = u_0 at t = 0, first.
  Let u(t) = k·e^{at}. Given the initial value u = u_0 at t = 0 → u(t) = e^{at}·u_0.
- a > 0 → unstable; a = 0 → neutrally stable; a < 0 → stable.
- If a = α + iβ → e^{at} = e^{αt}·e^{iβt} = e^{αt}(cos βt + i·sin βt).
  α: stability; β: oscillations.
- For systems:
  du/dt = [−2v + w; v − 2w] = [−2 1; 1 −2]·u, where u = [v; w] and u = u_0 at t = 0
  → du/dt = Au, u = u_0 at t = 0.
  Solution: u(t) = e^{At}·u_0 = Se^{Λt}S⁻¹·u_0.
- Difference equation: u_k = A^k·u_0, depending on the power of A.
- Differential equation: u(t) = e^{At}·u_0, depending on the exponential of A.
- Problem: what is the exponential of A, e^{At}?
Exponential of a Matrix

- Imitate e^x = 1 + x + x²/2! + x³/3! + ⋯:
  e^{At} = I + At + (At)²/2! + (At)³/3! + ⋯

  e^{At} has the following properties:
  1. e^{As}·e^{At} = e^{A(s+t)}   2. e^{At}·e^{−At} = I   3. (d/dt)e^{At} = A·e^{At}

- Property 3 is how we solve the differential equations.

- e^{At} = I + At + (At)²/2! + (At)³/3! + ⋯
        = I + SΛS⁻¹·t + SΛ²S⁻¹·t²/2! + SΛ³S⁻¹·t³/3! + ⋯
        = S(I + Λt + (Λt)²/2! + (Λt)³/3! + ⋯)S⁻¹
        = Se^{Λt}S⁻¹,
  where e^{Λt} = diag(e^{λ1·t}, …, e^{λn·t}).

More on the Exponential: e^{At} = Se^{Λt}S⁻¹

- Example: the exponential of A = [−2 1; 1 −2] (eigenvalues −1 and −3):
  e^{At} = Se^{Λt}S⁻¹ = [1 1; 1 −1][e^{−t} 0; 0 e^{−3t}]·(1/2)[1 1; 1 −1]
        = (1/2)[e^{−t} + e^{−3t}, e^{−t} − e^{−3t}; e^{−t} − e^{−3t}, e^{−t} + e^{−3t}].

- If x1 is the eigenvector of A corresponding to eigenvalue λ1, then x1 is the eigenvector of e^{At} corresponding to eigenvalue e^{λ1·t}, i.e.
  e^{At}·x1 = e^{λ1·t}·x1.

  Proof:
  e^{At}·x1 = I·x1 + At·x1 + (At)²·x1/2! + (At)³·x1/3! + ⋯
           = I·x1 + A·x1·t + A²·x1·t²/2! + A³·x1·t³/3! + ⋯
           = I·x1 + λ1·x1·t + λ1²·x1·t²/2! + λ1³·x1·t³/3! + ⋯
           = (I + λ1·t + (λ1·t)²/2! + (λ1·t)³/3! + ⋯)·x1 = e^{λ1·t}·x1.

- The matrix e^{At} is never singular: its eigenvalues e^{λt} are never zero, and
  det e^{At} = e^{λ1·t}·e^{λ2·t}·⋯·e^{λn·t} = e^{trace(At)}.
  The inverse of e^{At} is e^{−At}, which always exists.
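The e^{At} = Se^{Λt}S⁻¹ recipe can be checked against the closed form just derived for A = [−2 1; 1 −2]; a sketch:

```python
import numpy as np

# e^{At} via diagonalization, compared with the closed form
# (1/2)[[e^-t + e^-3t, e^-t - e^-3t], [e^-t - e^-3t, e^-t + e^-3t]].
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
lams, S = np.linalg.eig(A)               # eigenvalues -1 and -3

t = 0.7
expAt = S @ np.diag(np.exp(lams * t)) @ np.linalg.inv(S)

closed = 0.5 * np.array([
    [np.exp(-t) + np.exp(-3*t), np.exp(-t) - np.exp(-3*t)],
    [np.exp(-t) - np.exp(-3*t), np.exp(-t) + np.exp(-3*t)],
])
assert np.allclose(expAt, closed)

# det e^{At} = e^{trace(A) t}, so e^{At} is never singular
assert np.isclose(np.linalg.det(expAt), np.exp(np.trace(A) * t))
```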
Matrix Exponential and ODE Solution

- If A can be diagonalized, A = SΛS⁻¹, then the differential equation du/dt = Au has the solution:
  u(t) = e^{At}·u_0 = Se^{Λt}S⁻¹·u_0.
  The columns of S are the eigenvectors of A, so that
  u(t) = [x1 … xn]·diag(e^{λ1·t}, …, e^{λn·t})·S⁻¹u_0.
  By setting S⁻¹u_0 = c (so Sc = u_0, i.e. expressing u_0 as a linear combination of eigenvectors), the solution becomes
  u(t) = [x1 … xn]·diag(e^{λ1·t}, …, e^{λn·t})·[c1; …; cn] = c1·e^{λ1·t}·x1 + … + cn·e^{λn·t}·xn.

- Another way of looking at the solution:
  If u_0 = c1·x1 + … + cn·xn, i.e. u_0 = Sc, then
  u(t) = e^{At}·u_0 = e^{At}(c1·x1 + … + cn·xn)
       = c1·e^{At}·x1 + … + cn·e^{At}·xn
       = c1·e^{λ1·t}·x1 + … + cn·e^{λn·t}·xn.

Back to the ODE for the Diffusion Model

- For the system
  du/dt = [−2v + w; v − 2w] = [−2 1; 1 −2]·u, where u = [v; w] and u = u_0 at t = 0
  → du/dt = Au, u = u_0 at t = 0.

- First step: diagonalize A. Solve [−2 1; 1 −2][x1; x2] = λ[x1; x2]:
  A·[1; 1] = −1·[1; 1],  A·[1; −1] = −3·[1; −1].

- General solution:
  u(t) = c1·e^{λ1·t}·x1 + c2·e^{λ2·t}·x2  or  u = [1 1; 1 −1][e^{−t} 0; 0 e^{−3t}][c1; c2].

- At time zero (initial condition, e⁰ = 1):
  u_0 = c1·x1 + c2·x2  or  u_0 = [1 1; 1 −1][c1; c2] = Sc → c = S⁻¹u_0.

- Solution:
  u(t) = [1 1; 1 −1][e^{−t} 0; 0 e^{−3t}]·S⁻¹u_0 = Se^{Λt}S⁻¹·u_0.
Stability of Differential Equations

  u(t) = Se^{Λt}S⁻¹·u_0 = c1·e^{λ1·t}·x1 + … + cn·e^{λn·t}·xn

- The eigenvalues decide how u(t) behaves as t → ∞.
- Stability is governed by e^{λi·t} → by the real part of λi.
- If λ = a + ib,
  e^{λt} = e^{at}·e^{ibt} = e^{at}(cos bt + i·sin bt) and |e^{λt}| = e^{at}:
  it decays for a < 0, becomes constant for a = 0, and explodes for a > 0.
- The du/dt = Au system is
  stable, with e^{At} → 0, whenever all Re λi < 0;
  neutrally stable when all Re λi ≤ 0 and some Re λi = 0;
  unstable, with e^{At} unbounded, if any eigenvalue has Re λi > 0.
- All solutions approach zero if and only if all eigenvalues have a negative real part → asymptotic stability.

Stability for a 2 by 2 System

  du/dt = [a b; c d]·u
  det[a−λ b; c d−λ] = λ² − (trace)·λ + det = 0 →
  λ = (1/2)(trace ± √(trace² − 4·det)).

- Stability test: 1. The trace a + d must be negative.  2. The determinant ad − bc must be positive.
- When the eigenvalues are real, the two tests guarantee them to be negative.
- When the eigenvalues are a complex pair x ± iy, the trace = 2x and the determinant = x² + y².
- If b = c then trace² − 4·det = (a + d)² − 4(ad − b²) = (a − d)² + 4b² ≥ 0
  → a symmetric matrix is on or below the parabola trace² = 4·det.
- Neutrally stable: the boundaries of the 2nd quadrant (of the trace–determinant plane).
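The two-part stability test (trace < 0, det > 0) can be cross-checked against the eigenvalues themselves; a minimal sketch, where `stable` is a helper name introduced here for illustration:

```python
import numpy as np

def stable(A):
    """2x2 test for du/dt = A u: all Re(lambda) < 0 iff trace < 0 and det > 0."""
    return bool(np.trace(A) < 0 and np.linalg.det(A) > 0)

diffusion = np.array([[-2.0, 1.0], [1.0, -2.0]])   # eigenvalues -1, -3
rotation  = np.array([[0.0, -1.0], [1.0, 0.0]])    # eigenvalues +-i

assert stable(diffusion)
assert not stable(rotation)    # trace = 0: only neutrally stable

# Cross-check against the eigenvalues directly
for A in (diffusion, rotation):
    assert stable(A) == bool(np.all(np.linalg.eigvals(A).real < 0))
```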
Example of a 2 by 2 System

  Example 2: the diffusion equation du/dt = [−2 1; 1 −2]·u is stable, since its eigenvalues are −1 and −3.

  Example 3: the diffusion model with two ends closed off:
  du/dt = [−1 1; 1 −1]·u  or  dv/dt = w − v, dw/dt = v − w.

- This is a continuous Markov process.
  Markov matrix: each column adds up to 1 = λmax.
  Continuous Markov matrix: each column adds up to 0 = λmax.
- A is a Markov matrix if and only if B = A − I is a continuous Markov matrix.
- The steady state (v = w) is the eigenvector [1; 1] corresponding to λmax.

Skew-Symmetric Matrices

  Example 1: du/dt = [0 −1; 1 0]·u, u = u_0 at t = 0.
  trace = 0; det = 1 → λ² + 1 = 0, so λ = i and −i.
  Eigenvectors: [1; −i] and [1; i]:
  u(t) = c1·e^{it}·[1; −i] + c2·e^{−it}·[1; i],
  or, substituting e^{it} = cos t + i·sin t:
  u(t) = [(c1 + c2)·cos t + i(c1 − c2)·sin t; −i(c1 − c2)·cos t + (c1 + c2)·sin t].
  At t = 0, u_0 = (a, b) → Sc = u_0 → (c1 + c2) = a and −i(c1 − c2) = b →
  u(t) = [a·cos t − b·sin t; b·cos t + a·sin t] = [cos t −sin t; sin t cos t]·u_0:
  it sends u_0 around a circle.

- Also, since A² = −I:
  e^{At} = I + At + (At)²/2! + ⋯
        = [1 − t²/2 + ⋯, −(t − t³/6 + ⋯); t − t³/6 + ⋯, 1 − t²/2 + ⋯]
        = [cos t −sin t; sin t cos t]
  → e^{At} is an orthogonal matrix!

  FACT: If A is skew-symmetric, then e^{At} is an orthogonal matrix:
  Aᵀ = −A and (e^{At})ᵀ = e^{Aᵀt} = e^{−At} → e^{At}·(e^{At})ᵀ = I → e^{At} is orthogonal
  → ‖e^{At}·u_0‖ = ‖u_0‖ → conservative systems: no energy is lost.
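The FACT above is easy to verify for K = [0 −1; 1 0]: e^{Kt}, computed via diagonalization, comes out as the rotation matrix, which is orthogonal and so conserves lengths (a sketch):

```python
import numpy as np

# For skew-symmetric K, e^{Kt} is the rotation [cos t, -sin t; sin t, cos t].
K = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(K.T, -K)              # skew-symmetric

lams, S = np.linalg.eig(K)               # lambda = +-i
t = 1.3
expKt = (S @ np.diag(np.exp(lams * t)) @ np.linalg.inv(S)).real

rot = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
assert np.allclose(expKt, rot)

# Orthogonal: Q Q^T = I, so ||e^{Kt} u0|| = ||u0|| (no energy is lost)
assert np.allclose(expKt @ expKt.T, np.eye(2))
u0 = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(expKt @ u0), np.linalg.norm(u0))
```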