Kalman's Beautiful Filter: (An Introduction)

This document introduces the Kalman filter. The filter provides the best estimate of the state vector x of a linear dynamical system with process and measurement noise. It works recursively: at each step it makes a prediction and then corrects that prediction using the measurement. The correction Δx is chosen to minimize the distance between prediction and measurement, weighted by their respective uncertainties. The filter also propagates the covariance of the estimate, so it produces a distribution rather than a single value.


Kalman’s Beautiful Filter

(an introduction)

George Kantor
presented to
Sensor Based Planning Lab
Carnegie Mellon University
December 8, 2000
What does a Kalman Filter do, anyway?
Given the linear dynamical system:
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k)   = H(k) x(k) + w(k)

x(k) is the n-dimensional state vector (unknown)
u(k) is the m-dimensional input vector (known)
y(k) is the p-dimensional output vector (known, measured)
F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
v(k), w(k) are zero-mean, white Gaussian noise with (known) covariance matrices Q(k), R(k)

the Kalman Filter is a recursion that provides the “best” estimate of the state vector x.
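To make the model concrete, here is a minimal simulation sketch in Python/NumPy (not from the original slides); the constant-velocity matrices F, G, H and the noise covariances Q, R are illustrative assumptions.

```python
# A minimal sketch (not from the slides) that simulates the system
#   x(k+1) = F x(k) + G u(k) + v(k),   y(k) = H x(k) + w(k)
# for an assumed 1-D constant-velocity model (state = [position, velocity]).
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix (assumed)
G = np.array([[0.5 * dt**2], [dt]])     # input matrix for an acceleration input (assumed)
H = np.array([[1.0, 0.0]])              # we measure position only (assumed)
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])                # true state (unknown to the filter)
for k in range(5):
    u = np.array([0.1])                                  # known input
    v = rng.multivariate_normal(np.zeros(2), Q)          # process noise v(k)
    w = rng.multivariate_normal(np.zeros(1), R)          # measurement noise w(k)
    x = F @ x + G @ u + v                                # state update
    y = H @ x + w                                        # noisy measurement
    print(f"k={k + 1}  true x={x}  measured y={y}")
```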
What’s so great about that?

x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k)   = H(k) x(k) + w(k)

• noise smoothing (improve noisy measurements)
• state estimation (for state feedback)
• recursive (computes next estimate using only the most recent measurement)

How does it work?
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k)   = H(k) x(k) + w(k)
1. prediction based on last estimate:
x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
ŷ(k+1) = H(k) x̂(k+1|k)

2. calculate correction based on prediction and current measurement:


Δx = f( y(k+1), x̂(k+1|k) )

3. update prediction:  x̂(k+1|k+1) = x̂(k+1|k) + Δx

Finding the correction (no noise!)
y = Hx

Given prediction x̂⁻ and output y, find Δx so that x̂ = x̂⁻ + Δx is the “best” estimate of x.

[Figure: the prediction x̂⁻ and the solution set Ω = { x | Hx = y }.]

Δx = Hᵀ ( H Hᵀ )⁻¹ ( y − H x̂⁻ )

A Geometric Interpretation

[Figure: the hyperplane Ω = { x | Hx = y }, the prediction x̂⁻, and the correction Δx projecting x̂⁻ onto Ω.]
A Simple State Observer
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k)   = H x(k)

Observer:

1. prediction:
x̂(k+1|k) = F x̂(k|k) + G u(k)

2. compute correction:
Δx = Hᵀ ( H Hᵀ )⁻¹ ( y(k+1) − H x̂(k+1|k) )

3. update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
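A minimal sketch of this observer in Python/NumPy (not from the slides); the matrices, input, and measurement stream are assumed values, and the Hᵀ(HHᵀ)⁻¹ factor is applied via a linear solve.

```python
# Minimal sketch of the simple (covariance-free) state observer above.
# F, G, H, the input, and the measurements are assumed for illustration.
import numpy as np

def simple_observer_step(x_hat, u, y_next, F, G, H):
    """One predict / correct / update cycle of the noise-free observer."""
    x_pred = F @ x_hat + G @ u                                  # 1. prediction
    # 2. correction: dx = H^T (H H^T)^{-1} (y(k+1) - H x_pred)
    dx = H.T @ np.linalg.solve(H @ H.T, y_next - H @ x_pred)
    return x_pred + dx                                          # 3. update

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])

x_hat = np.zeros(2)
for k, y_next in enumerate([0.11, 0.23, 0.29]):                 # assumed measurements
    x_hat = simple_observer_step(x_hat, np.array([0.0]), np.array([y_next]), F, G, H)
    print(f"k={k + 1}  x_hat={x_hat}")
```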

Estimating a distribution for x
Our estimate of x is not exact!
We can do better by estimating a joint Gaussian distribution p(x).

p(x) = 1 / ( (2π)^(n/2) |P|^(1/2) ) · exp( −½ (x − x̂)ᵀ P⁻¹ (x − x̂) )

where P = E[ (x − x̂)(x − x̂)ᵀ ] is the covariance matrix
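As a quick sanity check (not in the original slides), the density above can be evaluated directly and compared against SciPy's multivariate normal; the estimate x̂ and covariance P below are assumed values.

```python
# Evaluate the Gaussian density p(x) above by hand and check it against SciPy.
import numpy as np
from scipy.stats import multivariate_normal

x_hat = np.array([1.0, 2.0])                  # assumed estimate x̂
P = np.array([[0.5, 0.1], [0.1, 0.3]])        # assumed covariance
x = np.array([1.2, 1.8])                      # point at which to evaluate p(x)

n = len(x_hat)
diff = x - x_hat
p_manual = np.exp(-0.5 * diff @ np.linalg.solve(P, diff)) / (
    (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(P))
)
p_scipy = multivariate_normal(mean=x_hat, cov=P).pdf(x)
print(p_manual, p_scipy)                      # the two values agree
```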

Finding the correction (geometric intuition)
Given prediction x̂⁻, covariance P, and output y, find Δx so that x̂ = x̂⁻ + Δx is the “best” (i.e. most probable) estimate of x.

1

( x  xˆ )T P 1 ( x  xˆ ) 
  x | Hx  y p ( x) 
1
1/ 2
e2
(2 ) n / 2 P

The most probable x is the one that :


x̂
 1. satisfies xˆ  xˆ   x
x
2. minimizes x T P 1x

A new kind of distance
Suppose we define a new inner product on Rⁿ to be:

⟨x₁, x₂⟩ = x₁ᵀ P⁻¹ x₂    (this replaces the old inner product x₁ᵀ x₂)

Then we can define a new norm ‖x‖² = ⟨x, x⟩ = xᵀ P⁻¹ x

The x̂ in Ω that minimizes ‖Δx‖ is the orthogonal projection of x̂⁻ onto Ω, so Δx is orthogonal to Ω, i.e. orthogonal to every ω in null(H) (the subspace parallel to Ω):

⟨ω, Δx⟩ = ωᵀ P⁻¹ Δx = 0 for all ω in null(H)   iff   Δx ∈ column(P Hᵀ)

Finding the correction (for real this time!)

Assuming that Δx is linear in ν = y − H x̂⁻:

Δx = P Hᵀ K ν

The condition y = H ( x̂⁻ + Δx )  ⟹  H Δx = y − H x̂⁻ = ν

Substitution yields:

H Δx = ν = H P Hᵀ K ν
⟹ K = ( H P Hᵀ )⁻¹
⟹ Δx = P Hᵀ ( H P Hᵀ )⁻¹ ν

A Better State Observer
We can create a better state observer following the same 3 steps, but now we must also estimate the covariance matrix P.
We start with x̂(k|k) and P(k|k).
Step 1: Prediction
x̂(k+1|k) = F x̂(k|k) + G u(k)

What about P? From the definition:



P(k|k) = E[ (x(k) − x̂(k|k)) (x(k) − x̂(k|k))ᵀ ]
and
P(k+1|k) = E[ (x(k+1) − x̂(k+1|k)) (x(k+1) − x̂(k+1|k))ᵀ ]
Continuing Step 1
To make life a little easier, let's shift notation slightly: write x̂⁻_{k+1} for x̂(k+1|k), P⁻_{k+1} for P(k+1|k), and x̂_k, P_k for x̂(k|k), P(k|k).

P⁻_{k+1} = E[ (x_{k+1} − x̂⁻_{k+1})(x_{k+1} − x̂⁻_{k+1})ᵀ ]
         = E[ (F x_k + G u_k + v_k − (F x̂_k + G u_k)) (F x_k + G u_k + v_k − (F x̂_k + G u_k))ᵀ ]
         = E[ (F (x_k − x̂_k) + v_k) (F (x_k − x̂_k) + v_k)ᵀ ]
         = E[ F (x_k − x̂_k)(x_k − x̂_k)ᵀ Fᵀ + 2 F (x_k − x̂_k) v_kᵀ + v_k v_kᵀ ]
         = F E[ (x_k − x̂_k)(x_k − x̂_k)ᵀ ] Fᵀ + E[ v_k v_kᵀ ]
         = F P_k Fᵀ + Q

P(k+1|k) = F P(k|k) Fᵀ + Q
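A small Monte Carlo sketch (not in the slides) that checks this propagation rule empirically; the F, P(k|k), and Q values are assumed.

```python
# Empirically check P(k+1|k) = F P(k|k) F^T + Q by propagating sampled errors.
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[1.0, 0.1], [0.0, 1.0]])        # assumed dynamics
P = np.array([[0.4, 0.05], [0.05, 0.2]])      # assumed P(k|k)
Q = 0.01 * np.eye(2)                          # assumed process noise covariance

N = 100_000
err = rng.multivariate_normal(np.zeros(2), P, size=N)   # samples of x_k - x̂_k
v = rng.multivariate_normal(np.zeros(2), Q, size=N)     # samples of v_k
err_next = err @ F.T + v                                # F (x_k - x̂_k) + v_k

print(np.cov(err_next.T))          # empirical covariance of the predicted error
print(F @ P @ F.T + Q)             # analytic P(k+1|k); the two agree closely
```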

Step 2: Computing the correction
From step 1 we get x̂(k+1|k) and P(k+1|k).
Now we use these to compute Δx:


Δx = P(k+1|k) Hᵀ ( H P(k+1|k) Hᵀ )⁻¹ ( y(k+1) − H x̂(k+1|k) )

For ease of notation, define W = P(k+1|k) Hᵀ ( H P(k+1|k) Hᵀ )⁻¹ and ν = y(k+1) − H x̂(k+1|k), so that

Δx = W ν

Step 3: Update

x̂(k+1|k+1) = x̂(k+1|k) + W ν


Pk 1  E ( xk 1  xˆ k 1 )( xk 1  xˆ k 1 )T 
 E( x k 1  ˆ
x 
k 1  W )( xk 1  ˆ
x 
k 1  W )T

(just take my word for it…)

P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) Hᵀ Wᵀ

Just take my word for it…

Pk 1  E ( xk 1  xˆ k 1 )( xk 1  xˆ k 1 )T 
 E( x ˆk1  W )(xk 1  xˆk1  W )T
k 1  x 
 
 E  ( xk 1  xˆ k1 )  W ( xk 1  xˆ k1

)  W  
T


 E ( xk 1  xˆ k1 )( xk 1  xˆ k1 )T  2W ( xk 1  xˆ k1 )T  W W T 

 Pk1  E  2WH ( xk 1  xˆk1 )(xk 1  xˆk1 )T  WH ( xk 1  xˆk1 )(xk 1  xˆk1 )T H T W T 
 Pk1  2WHPk1  WHPk1H T W T


 Pk1  2Pk1H T HPk1H T  HP
1 
k 1  WHPk1H T W T

 Pk1  2Pk1H T HP 


k 1 H  HP
T 1 
k 1H
T
HP 
k 1 H 
T 1
HPk1  WHPk1H T W T

 Pk1  2WHPk1H T W T  WHPk1H T W T

Better State Observer Summary
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k)   = H x(k)

Observer:

1. Predict:
x̂(k+1|k) = F x̂(k|k) + G u(k)
P(k+1|k) = F P(k|k) Fᵀ + Q

2. Correction:
W = P(k+1|k) Hᵀ ( H P(k+1|k) Hᵀ )⁻¹
Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) Hᵀ Wᵀ

Finding the correction (with output noise)

y = H x + w

[Figure: the prediction x̂⁻ and the set Ω = { x | Hx = y }, which is no longer known exactly.]

Since you don’t have a hyperplane to aim for, you can’t solve this with algebra!

You have to solve an optimization problem.

That’s exactly what Kalman did! Here’s his answer:

Δx = P Hᵀ ( H P Hᵀ + R )⁻¹ ( y − H x̂⁻ )

LTI Kalman Filter Summary
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k)   = H x(k) + w(k)

Kalman Filter:

1. Predict:
x̂(k+1|k) = F x̂(k|k) + G u(k)
P(k+1|k) = F P(k|k) Fᵀ + Q

2. Correction:
S = H P(k+1|k) Hᵀ + R
W = P(k+1|k) Hᵀ S⁻¹
Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
P(k+1|k+1) = P(k+1|k) − W S Wᵀ
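Putting the three steps together, here is a compact sketch of the filter in Python/NumPy (not from the slides); the constant-velocity system matrices reuse the assumptions from the earlier sketches, and setting R to zero recovers the noise-free observer summarized above.

```python
# One predict / correct / update cycle of the LTI Kalman filter summarized above.
import numpy as np

def kalman_step(x_hat, P, u, y_next, F, G, H, Q, R):
    """Advance the estimate (x_hat, P) one step using input u and measurement y_next."""
    # 1. Predict
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. Correction
    S = H @ P_pred @ H.T + R                       # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)            # gain
    dx = W @ (y_next - H @ x_pred)
    # 3. Update
    return x_pred + dx, P_pred - W @ S @ W.T

# Example use with the assumed constant-velocity model from the earlier sketches.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x_hat, P = np.zeros(2), np.eye(2)
for k, y_next in enumerate([0.12, 0.21, 0.33]):    # assumed measurements
    x_hat, P = kalman_step(x_hat, P, np.array([0.0]), np.array([y_next]), F, G, H, Q, R)
    print(f"k={k + 1}  x_hat={x_hat}")
```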

