COL726 Lecture Notes 6

The document discusses numerical algorithms and mathematical problems related to function evaluation, optimization, and error analysis, including absolute and relative errors. It covers concepts of conditioning, stability, floating-point arithmetic, and various norms in linear algebra, including singular value decomposition and QR factorization. Additionally, it addresses the implications of these mathematical concepts in computational contexts and their applications.


Numerical Algorithms

A mathematical problem: y = F(x), F : X → Y.

Typical tasks:
  - Evaluation: given x, evaluate y = F(x).
  - Inverse problem: given y, find x s.t. y = F(x).
  - Optimisation over X.

X may be discrete or continuous; F may be linear or nonlinear.

Sources of Approximation

Before the computation:
  - Modeling error
  - Errors in the input data (measurements, previous computations)

During the computation:
  - Truncation / discretisation: simplifying the computation of F
  - Rounding: finite-precision arithmetic and representation of the input

Absolute Error = F̂(x) − F(x)

Relative Error = (F̂(x) − F(x)) / F(x)

where F(x) is the exact value and F̂(x) is the approximate (computed) value.

Forward Error

  Actual value:   y = F(x)
  Computed value: ŷ = F̂(x)
  Forward error = ŷ − y

Backward Error

  Find x̅ such that F(x̅) = ŷ, i.e. the computed answer is the exact answer for a perturbed input.
  Backward error = x̅ − x

      x  --F-->  y = F(x)
      |              |
  backward        forward
   error           error
      |              |
      x̅  --F-->  ŷ = F(x̅)

Example: F(x) = √x, x = 2, computed value ŷ = 1.4.
  Forward error  = |1.4 − √2| ≈ 0.014
  Backward error: x̅ = 1.4² = 1.96, so |x̅ − x| = |1.96 − 2| = 0.04
(Think about the forward and backward errors for cos, tan.)
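
A quick numerical check of the example above (a minimal sketch in Python; the function and values are the ones used in the notes):

import math

def F(x):
    return math.sqrt(x)

x = 2.0
y = F(x)            # exact value: sqrt(2) ≈ 1.41421
y_hat = 1.4         # computed / approximate value

forward_error = abs(y_hat - y)    # ≈ 0.0142
x_bar = y_hat ** 2                # input for which sqrt(x_bar) = 1.4 exactly
backward_error = abs(x_bar - x)   # = 0.04

print(forward_error, backward_error)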


Conditioning
Condition No. = |relative change in output| / |relative change in input|
              = |(F(x̅) − F(x)) / F(x)| / |(x̅ − x) / x|

For a small perturbation Δx (writing x̅ = x + Δx):

  Absolute fwd error:  F(x + Δx) − F(x) ≈ F′(x)·Δx
  Relative fwd error:  ≈ F′(x)·Δx / F(x)

  Condition no. ≈ |x·F′(x) / F(x)|

Condition no. of F⁻¹ (at the corresponding point) = 1 / condition no. of F,
i.e. cond(F⁻¹)·cond(F) = 1.

Example: F(x) = √x.
  Condition no. = |x · (1/(2√x)) / √x| = 1/2.
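
A small sketch (Python, assuming F is differentiable) that estimates the relative condition number by a finite difference and checks the √x example; the helper name and the tan illustration are mine, not from the notes:

import math

def condition_number(F, x, h=1e-6):
    # finite-difference estimate of |x F'(x) / F(x)|
    dF = (F(x + h) - F(x - h)) / (2 * h)
    return abs(x * dF / F(x))

print(condition_number(math.sqrt, 2.0))   # ≈ 0.5, matching the hand computation
print(condition_number(math.tan, 1.57))   # large: tan is ill-conditioned near pi/2
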
Stability
Accuracy requires both: stability (of the algorithm) and good conditioning (of the problem).

Floating Point Numbers


Examples (normalised scientific notation):
  0.0079 = 7.9 × 10⁻³
  4132   = 4.132 × 10³

IEEE Double Precision: 52-bit fraction, so
  in [1, 2): the representable numbers are 1, 1 + 2⁻⁵², 1 + 2·2⁻⁵², 1 + 3·2⁻⁵², …
  in [2, 4): the spacing doubles: 2, 2 + 2⁻⁵¹, 2 + 2·2⁻⁵¹, …, 4

General floating-point system (β, p, L, U):
  β = base,  p = precision,  [L, U] = exponent range.

A real number x is represented as
  x = ± (d₀.d₁d₂…d_{p−1}) × β^E,   dᵢ ∈ {0, 1, …, β−1},   E ∈ [L, U].

Normalised if d₀ ≠ 0.

Number of NORMALISED floating point numbers, in terms of β, p, L, U:
  2(β − 1)·β^{p−1}·(U − L + 1) + 1      (the +1 is to represent 0)

Underflow limit (smallest positive normalised number):  β^L
Overflow limit (largest representable number):          β^{U+1}(1 − β^{−p}) = (β − β^{1−p})·β^U

Be careful of the arithmetic system (β, p, L, U) specified in assignments.

Machine number: a number that can be exactly represented in the particular arithmetic system.

Rounding: x ↦ fl(x), mapping a real number to a nearby machine number. Two rules:
  - Chop (truncate toward zero)
  - Round to nearest

Example (β = 10, p = 2): 1.6999 → 1.6 (chop), 1.7 (round to nearest).

Machine epsilon:
  ε_machine = max over x of |fl(x) − x| / |x|

  Chop:              ε_mach = β^{1−p}
  Round to nearest:  ε_mach = ½ β^{1−p}

Rounding to nearest has less error but is more computationally intensive.

Addition or subtraction: shift the smaller operand so the exponents match (match the higher exponent), add the significands d₀.d₁…d_{p−1}, take the first p significant digits, then renormalise.
Multiplication: add the exponents, multiply the representations.

Example (β = 10, p = 6):
  x = 1.92403 × 10²,  y = 6.3587 × 10⁻¹ = 0.0063587 × 10²   (match the higher exponent)
  x + y = 1.9303887 × 10²  →  fl(x + y) = 1.93039 × 10²

Standard model of floating-point arithmetic:
  fl(x ⊛ y) = (x ⊛ y)(1 + ε)  for some |ε| ≤ ε_mach,  ⊛ ∈ {+, −, ×, ÷}.
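
A quick illustration of the spacing of IEEE doubles in Python (the specific probe values are just for illustration):

import sys

eps = sys.float_info.epsilon   # 2**-52: spacing between 1.0 and the next double
                               # (in the notes' convention, round-to-nearest eps_mach = 2**-53)
print(eps, 2.0**-52)

# spacing doubles on [2, 4) compared with [1, 2)
print((1.0 + eps) - 1.0)       # eps: smallest step above 1
print((2.0 + eps) - 2.0)       # 0.0: eps is below the spacing at 2
print((2.0 + 2*eps) - 2.0)     # 2*eps: the actual spacing at 2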

Cancellation

If x ≈ y, then x − y has few correct significant digits: the leading digits cancel, and the relative error of the result can be huge even when x and y are themselves accurate.

Quadratic equation: ax² + bx + c = 0,
  x = (−b ± √(b² − 4ac)) / (2a)

Example: a = 0.05010, b = −98.78, c = 5.015. The roots are ≈ 1971.605 and ≈ 0.05077. For the smaller root, −b and √(b² − 4ac) are nearly equal, so the subtraction cancels and destroys accuracy. Use the equivalent form
  x = 2c / (−b ∓ √(b² − 4ac))
for that root instead.
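
A sketch of the two formulas (Python). The notes' 4-digit example only shows the effect in 4-digit arithmetic, so this sketch uses a deliberately extreme b to expose the same cancellation in double precision; the coefficients below are an assumption for illustration only:

import math

def roots_naive(a, b, c):
    d = math.sqrt(b*b - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

def root_stable_small(a, b, c):
    # rewrite that avoids subtracting nearly equal numbers for the small root
    d = math.sqrt(b*b - 4*a*c)
    return (2*c) / (-b + d) if b < 0 else (2*c) / (-b - d)

a, b, c = 1.0, -1e8, 1.0
print(roots_naive(a, b, c)[1])      # small root via naive formula: badly wrong
print(root_stable_small(a, b, c))   # ≈ 1e-8, correct
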
Linear Algebra

Matrix-vector multiplication

For any matrix A ∈ ℝ^{m×n} and x, y ∈ ℝⁿ, α ∈ ℝ:
  A(x + y) = Ax + Ay,   A(αx) = α·Ax,
so x ↦ Ax is a linear transformation from ℝⁿ to ℝᵐ.

Range(A) consists of the vectors spanned by the columns of A.

Rank of the transformation = number of linearly independent vectors in the column space.

column rank = row rank

Full rank: rank = min(#columns, #rows).

Basis: a minimal set of linearly independent vectors that spans the vector space.

If the columns of A ∈ ℝ^{m×m} form a basis, then solving Ax = y gives the coefficients x of y in that basis.

Orthogonal Basis
B = {b₁, …, bₙ} with bᵢ ⊥ bⱼ for i ≠ j.

Component of b₂ in the direction of b₁: write
  b₂ = b₂′ + c·b₁   with ⟨b₂′, b₁⟩ = 0   (b₂′ is the part orthogonal to b₁).
Then ⟨b₂, b₁⟩ = ⟨b₂′, b₁⟩ + c⟨b₁, b₁⟩ = c⟨b₁, b₁⟩, so
  c = ⟨b₂, b₁⟩ / ⟨b₁, b₁⟩.

Norm: ‖·‖ : ℝⁿ → ℝ

  1. ‖x‖ ≥ 0, and ‖x‖ = 0 iff x = 0
  2. ‖x + y‖ ≤ ‖x‖ + ‖y‖        (triangle inequality)
  3. ‖kx‖ = |k|·‖x‖ for k ∈ ℝ

ℓ₁ norm:        ‖x‖₁ = Σᵢ |xᵢ|
Euclidean norm: ‖x‖₂ = (Σᵢ xᵢ²)^{1/2}
ℓ_p norm:       ‖x‖_p = (Σᵢ |xᵢ|^p)^{1/p}
∞ norm:         ‖x‖_∞ = maxᵢ |xᵢ|

For x ∈ ℝⁿ:  ‖x‖₁ ≥ ‖x‖₂ ≥ ‖x‖_∞,  and  ‖x‖₁ ≤ √n·‖x‖₂  (Cauchy-Schwarz).

Weighted p-norm: ‖x‖_{W,p} = ‖Wx‖_p for a nonsingular weight matrix W.
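
A quick numerical check of these norms and the inequalities between them (a sketch using NumPy; the vector is arbitrary):

import numpy as np

x = np.array([3.0, -4.0, 1.0])

n1   = np.abs(x).sum()            # l1 norm
n2   = np.sqrt((x**2).sum())      # Euclidean norm
ninf = np.abs(x).max()            # infinity norm

print(n1, n2, ninf)                   # 8.0, 5.099..., 4.0
print(n1 >= n2 >= ninf)               # True
print(n1 <= np.sqrt(len(x)) * n2)     # True (Cauchy-Schwarz bound)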

Matrix norm: A ∈ ℝ^{m×n}, x ∈ ℝⁿ.

Induced norm of a matrix: ‖A‖ = the smallest c such that ‖Ax‖ ≤ c‖x‖ for every x.   (Q: prove this is a norm.)

‖A‖ is the maximum factor by which A can stretch a vector:
  ‖A‖ = sup_{x ≠ 0} ‖Ax‖ / ‖x‖ = sup_{‖x‖ = 1} ‖Ax‖

Triangle inequality: ‖A + B‖ ≤ ‖A‖ + ‖B‖, since
  ‖(A + B)x‖ = ‖Ax + Bx‖ ≤ ‖Ax‖ + ‖Bx‖ ≤ ‖A‖‖x‖ + ‖B‖‖x‖ = (‖A‖ + ‖B‖)‖x‖.

1-norm: ‖A‖₁ = sup_{x ≠ 0} ‖Ax‖₁ / ‖x‖₁, and with aⱼ the columns of A,
  ‖Ax‖₁ = ‖Σⱼ xⱼaⱼ‖₁ ≤ Σⱼ |xⱼ|·‖aⱼ‖₁ ≤ (maxⱼ ‖aⱼ‖₁)·‖x‖₁.

Induced Matrix Norm

For A ∈ ℝ^{m×n} and a vector norm ‖·‖:
  ‖A‖ = sup_{‖x‖ = 1} ‖Ax‖

Writing A = [a₁ … aₙ] (columns), for the 1-norm:
  ‖Ax‖₁ = ‖Σᵢ xᵢaᵢ‖₁ ≤ Σᵢ |xᵢ|·‖aᵢ‖₁ ≤ maxᵢ ‖aᵢ‖₁  for ‖x‖₁ = 1,
and the bound is attained at x = eᵢ for the maximising column, so
  ‖A‖₁ = maxᵢ ‖aᵢ‖₁       (maximum column sum).

Similarly,
  ‖A‖_∞ = maxᵢ ‖ãᵢ‖₁      (maximum row sum, ãᵢ = i-th row of A).
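
A small check of the max-column-sum / max-row-sum formulas against NumPy's induced norms (sketch; the matrix is arbitrary):

import numpy as np

A = np.array([[1.0, -2.0,  3.0],
              [4.0,  0.0, -1.0]])

col_sum = np.abs(A).sum(axis=0).max()     # max column sum
row_sum = np.abs(A).sum(axis=1).max()     # max row sum

print(col_sum, np.linalg.norm(A, 1))       # both 5.0  -> ||A||_1
print(row_sum, np.linalg.norm(A, np.inf))  # both 6.0  -> ||A||_inf
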
Hölder's Inequality

For 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1 and x, y ∈ ℝⁿ:
  |xᵀy| ≤ ‖x‖_p · ‖y‖_q

Cauchy-Schwarz Inequality: the special case p = q = 2:
  |xᵀy| ≤ ‖x‖₂ · ‖y‖₂

Q: For a ∈ ℝⁿ, let A = aᵀ (a 1×n matrix). Find ‖A‖₂.

  ‖A‖₂ = sup_{‖x‖₂ = 1} ‖Ax‖₂, and Ax = aᵀx is a scalar.
  |aᵀx| ≤ ‖a‖₂‖x‖₂ by Cauchy-Schwarz, so ‖A‖₂ ≤ ‖a‖₂.
  For x = a/‖a‖₂: |aᵀx| = ‖a‖₂, so the bound is attained.
  ∴ ‖A‖₂ = ‖a‖₂.

Q (minor quiz): For A ∈ ℝ^{n×m}, B ∈ ℝ^{m×k}, show ‖AB‖ ≤ ‖A‖·‖B‖ (any induced norm).

  ‖ABx‖ = ‖A(Bx)‖ ≤ ‖A‖·‖Bx‖ ≤ ‖A‖·‖B‖·‖x‖
  ∴ ‖AB‖ ≤ ‖A‖·‖B‖

Frobenius Norm

For A ∈ ℝ^{m×n}:  ‖A‖_F = (Σᵢ Σⱼ aᵢⱼ²)^{1/2}

Q: Why is this a norm?
This quantity is the ℓ₂ norm of an mn-sized vector with elements aᵢⱼ.

Submultiplicativity: ‖AB‖_F ≤ ‖A‖_F·‖B‖_F.
  (AB)ᵢⱼ = cᵢⱼ = ãᵢᵀbⱼ, where ãᵢ is the i-th row of A and bⱼ the j-th column of B, so by Cauchy-Schwarz
  ‖AB‖_F² = Σᵢ Σⱼ cᵢⱼ² ≤ Σᵢ Σⱼ ‖ãᵢ‖₂²·‖bⱼ‖₂² = (Σᵢ ‖ãᵢ‖₂²)(Σⱼ ‖bⱼ‖₂²) = ‖A‖_F²·‖B‖_F².

Unitary Matrix

Q ∈ ℝ^{m×m} is unitary (orthogonal) if QᵀQ = I = QQᵀ.

Singular Value Decomposition (factoring matrices)

Let A ∈ ℝ^{m×n} be full rank, m ≥ n. We want orthonormal directions u₁, …, uₘ for the output space and orthonormal directions v₁, …, vₙ for the input space, with stretch factors σ₁ ≥ σ₂ ≥ … ≥ σₙ.

Geometric idea: A maps the unit sphere of ℝⁿ to an ellipsoid in ℝᵐ. Find the direction v₁ that A stretches the most (the longest vector Av₁); restrict to the orthogonal complement (the reduced space), find the longest there, and repeat.

Singular values of A:      σ₁ ≥ σ₂ ≥ … ≥ σₙ ≥ 0
Left singular vectors:     u₁, …, uₘ
Right singular vectors:    v₁, …, vₙ

  Avᵢ = σᵢ uᵢ

Q: Prove v₁, v₂, …, vₙ are orthonormal.
  (Avᵢ)ᵀ(Avⱼ) = vᵢᵀ(AᵀA)vⱼ. The vᵢ can be taken as eigenvectors of the symmetric matrix AᵀA, and since A is full rank, AᵀA ≻ 0. Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal (see the lemma in the Cholesky section), so vᵢᵀvⱼ = 0 for i ≠ j.
  Hence orthogonal.

Reduced SVD

For A ∈ ℝ^{m×n}, m ≥ n, full rank:
  A [v₁ … vₙ] = [u₁ … uₙ] Σ̂,   i.e.   A V = Û Σ̂,
with V = [v₁ … vₙ] ∈ ℝ^{n×n} unitary (right singular vectors), Û = [u₁ … uₙ] ∈ ℝ^{m×n} with orthonormal columns (left singular vectors), and Σ̂ = diag(σ₁, …, σₙ). Since V is unitary,
  A = Û Σ̂ Vᵀ        (reduced SVD).

Full SVD

Extend u₁, …, uₙ to an orthonormal basis u₁, …, uₘ of ℝᵐ, giving a unitary U = [u₁ … uₘ] ∈ ℝ^{m×m}, and pad Σ̂ with m − n zero rows to get Σ ∈ ℝ^{m×n} (diagonal):
  A = U Σ Vᵀ        (full SVD),  U and V unitary, Σ diagonal.

General case (rank r): for A : ℝⁿ → ℝᵐ of rank r, there exist orthonormal v₁, …, vₙ, orthonormal u₁, …, uₘ, and σ₁ ≥ σ₂ ≥ … ≥ σᵣ > 0 = σ_{r+1} = … such that
  Avᵢ = σᵢ uᵢ.
In matrix form A V = U Σ, hence A = U Σ Vᵀ.

Multiplying by a unitary matrix preserves the 2-norm: for unitary W and any vector x,
  ‖Wx‖₂² = (Wx)ᵀ(Wx) = xᵀWᵀWx = xᵀx = ‖x‖₂²,
and the same holds for the induced 2-norm of a matrix (and, below, for the Frobenius norm).

Therefore
  ‖A‖₂ = ‖U Σ Vᵀ‖₂ = ‖Σ‖₂ = σ₁,
since UᵀAV = Σ, and ‖Σx‖₂ ≤ σ₁‖x‖₂ with equality at x = e₁.   (Verify.)

Claim: if A is a square matrix, the singular values σ₁ ≥ σ₂ ≥ … ≥ σₙ ≥ 0 are unique, and σₙ > 0 iff A is full rank.

Action of A through the SVD: for A = U Σ Vᵀ of rank r, A : ℝⁿ → ℝᵐ, and x ∈ ℝⁿ,
  Vᵀx      gives the coefficients of x in the basis v₁, …, vₙ,
  Σ(Vᵀx)   scales the i-th coefficient by σᵢ,
  U(ΣVᵀx)  reassembles the result in the output basis u₁, …, uₘ,
so y = Ax = U Σ Vᵀ x.

Frobenius norm: also preserved on multiplying with a unitary matrix, so
  ‖A‖_F = ‖Σ‖_F = (σ₁² + … + σᵣ²)^{1/2}.
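
A sketch checking these identities numerically with NumPy's SVD (random matrix for illustration):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD: A = U @ diag(s) @ Vt

print(np.allclose(A, U @ np.diag(s) @ Vt))               # True
print(np.linalg.norm(A, 2), s[0])                        # ||A||_2 = sigma_1
print(np.linalg.norm(A, 'fro'), np.sqrt((s**2).sum()))   # ||A||_F = sqrt(sum sigma_i^2)
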
Applications of SVD

Low Rank Approximation

  A = U Σ Vᵀ = Σᵢ₌₁ʳ σᵢ uᵢ vᵢᵀ,
a sum of rank-1 matrices. Truncating after ν terms gives
  A_ν = Σᵢ₌₁^ν σᵢ uᵢ vᵢᵀ,
which has rank ν (each term only produces multiples of a single vector uᵢ).

Theorem (best low-rank approximation): for 0 ≤ ν ≤ r = rank(A),
  ‖A − A_ν‖₂ = inf { ‖A − B‖₂ : B ∈ ℝ^{m×n}, rank(B) ≤ ν } = σ_{ν+1}
(interpreting σ_{ν+1} = 0 when ν ≥ min(m, n)).

First, ‖A − A_ν‖₂ = ‖Σ_{i=ν+1}^{r} σᵢuᵢvᵢᵀ‖₂ = σ_{ν+1}: subtracting A_ν cancels out the ν largest stretching factors, and the worst remaining factor is σ_{ν+1}.

Now take any B ∈ ℝ^{m×n} with rank(B) ≤ ν. Its null space contains a subspace S of dimension ≥ n − ν, i.e. Bw = 0 for all w ∈ S. Assume for contradiction that ‖A − B‖₂ < σ_{ν+1}. For every w ∈ S,
  ‖Aw‖₂ = ‖(A − B)w‖₂ ≤ ‖A − B‖₂·‖w‖₂ < σ_{ν+1}‖w‖₂.
But span{v₁, …, v_{ν+1}} has dimension ν + 1, and (ν + 1) + (n − ν) > n, so it intersects S in some nonzero w. For such w = Σᵢ₌₁^{ν+1} cᵢvᵢ,
  ‖Aw‖₂² = Σᵢ₌₁^{ν+1} σᵢ²cᵢ² ≥ σ_{ν+1}²‖w‖₂²,
i.e. ‖Aw‖₂ ≥ σ_{ν+1}‖w‖₂. This is a contradiction. Hence ‖A − B‖₂ ≥ σ_{ν+1} for every B of rank ≤ ν, and A_ν attains the bound.
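
A numerical sketch of the theorem: build A_ν from the truncated SVD and check that the 2-norm error equals σ_{ν+1} (random matrix, ν = 2 chosen for illustration):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

nu = 2
A_nu = U[:, :nu] @ np.diag(s[:nu]) @ Vt[:nu, :]   # rank-nu truncation

err = np.linalg.norm(A - A_nu, 2)
print(err, s[nu])                    # equal: ||A - A_nu||_2 = sigma_{nu+1}
print(np.linalg.matrix_rank(A_nu))   # nu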

Q (application of SVD): given a problem statement, explain how the SVD minimises storage via a low-rank approximation (storing A_k = Σᵢ₌₁ᵏ σᵢuᵢvᵢᵀ needs only k(m + n + 1) numbers instead of mn) and why the effect of A is not reduced much despite the low rank, as long as the discarded σᵢ are small:
  A ≈ σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + σ₃u₃v₃ᵀ.

Projectors
P is a square matrix with P² = P. Range(P) is the subspace P projects onto.

Geometric picture: Pv is the "shadow" of v cast onto Range(P) by a light source; for an oblique projector the light comes in at a slant, for an orthogonal projector it comes in perpendicular to Range(P).

The complement I − P is also a projector:
  (I − P)(I − P) = I − 2P + P² = I − P.

Range and null space:
  Pv ∈ Range(P), and (I − P)Pv = Pv − P²v = 0, so Range(P) ⊆ Null(I − P).
  Conversely, (I − P)v = 0 ⟹ v = Pv ⟹ v ∈ Range(P).
  Hence Null(I − P) = Range(P).
  Also, for v ∈ Range(P): v = Pw for some w, so Pv = P²w = Pw = v, i.e. P acts as the identity on its range.

P is an orthogonal projector iff P = Pᵀ. Then Range(P) is orthogonal to Null(P):
  take Pv ∈ Range(P) and w ∈ Null(P) (Pw = 0); then
  (Pv)ᵀw = vᵀPᵀw = vᵀPw = 0.
Equivalently, the split v = Pv + (I − P)v is an orthogonal decomposition:
  (Px)ᵀ(I − P)y = xᵀ(Pᵀ − PᵀP)y = xᵀ(P − P²)y = 0.
(For the converse, that an orthogonal projector must satisfy P = Pᵀ, one can use the SVD.)

Projector from an orthonormal basis: let q₁, …, qₘ be an orthonormal basis of ℝᵐ and Q̂ = [q₁ … qₙ]. Then
  P = Q̂Q̂ᵀ = Σᵢ₌₁ⁿ qᵢqᵢᵀ
is the orthogonal projector onto span{q₁, …, qₙ}: Pqᵢ = qᵢ for i ≤ n and Pqⱼ = 0 for j > n.
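
A small sketch: build P = Q̂Q̂ᵀ from an orthonormal basis of a column space and check the projector identities (NumPy; the matrix is arbitrary):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
Q, _ = np.linalg.qr(A)          # orthonormal basis of span(A) (reduced QR)

P = Q @ Q.T                     # orthogonal projector onto span(A)

print(np.allclose(P @ P, P))    # P^2 = P
print(np.allclose(P, P.T))      # P = P^T (orthogonal projector)

v = rng.standard_normal(6)
r = v - P @ v                   # residual component
print(np.allclose(A.T @ r, 0))  # residual is orthogonal to span(A)
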
QR Factorisation

A = [a₁ … aₙ] ∈ ℝ^{m×n}, m ≥ n.

  span{a₁} ⊆ span{a₁, a₂} ⊆ … ⊆ span{a₁, …, aₙ}

We want orthonormal vectors q₁, …, qₙ such that for every j ≤ n,
  span{q₁, …, qⱼ} = span{a₁, …, aⱼ}      (equality when A is full rank).

Since the qᵢ are orthonormal, any v ∈ span{q₁, …, qⱼ} expands as
  v = (q₁ᵀv)q₁ + (q₂ᵀv)q₂ + … + (qⱼᵀv)qⱼ,
and the residual v − Σᵢ (qᵢᵀv)qᵢ is orthogonal to every qᵢ.

In matrix form:
  [a₁ … aₙ] = [q₁ … qₙ] R̂,   R̂ upper triangular (n×n)      (Reduced QR factorisation)
Appending q_{n+1}, …, qₘ to complete an orthonormal basis of ℝᵐ, and padding R̂ with zero rows, gives
  A = Q R,   Q ∈ ℝ^{m×m} unitary                            (Full QR factorisation)

Gram Schmidt Orthogonalization

  q₁ = a₁ / ‖a₁‖₂
  vⱼ = aⱼ − Σ_{i<j} (qᵢᵀaⱼ) qᵢ,    qⱼ = vⱼ / ‖vⱼ‖₂

Entries of R̂:
  rᵢⱼ = qᵢᵀ aⱼ   for i < j
  rⱼⱼ = ‖aⱼ − Σ_{i<j} rᵢⱼ qᵢ‖₂
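
A minimal sketch of classical Gram-Schmidt QR in NumPy, following the formulas above (the helper name is mine):

import numpy as np

def gram_schmidt_qr(A):
    """Reduced QR of A (m x n, m >= n, full rank) by classical Gram-Schmidt."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # r_ij = q_i^T a_j
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)       # r_jj = ||v_j||
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.random.default_rng(3).standard_normal((5, 3))
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))
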
Existence

For any matrix A ∈ ℝ^{m×n}, m ≥ n, a QR factorisation exists, with the condition that for every j, span{a₁, …, aⱼ} ⊆ span{q₁, …, qⱼ}. If you encounter a linearly dependent aⱼ (so vⱼ = 0), take an arbitrary unit vector orthogonal to q₁, …, q_{j−1} as qⱼ. When A is full rank, all the rⱼⱼ are nonzero.

Full QR is not unique: we can choose q_{n+1}, …, qₘ arbitrarily (any completion to an orthonormal basis). When A is not full rank, some qⱼ were also chosen arbitrarily.

Uniqueness

For a full-rank matrix A, if the diagonal entries of R̂ are required to be positive, the reduced QR decomposition is unique. (For complex matrices the columns are otherwise only determined up to multiplication by a scalar z with |z| = 1.)

The Gram-Schmidt algorithm is generally not implemented in this way (it is numerically unstable; see below).

Solving Ax = b with QR

Given square A and b, the goal is to find x with Ax = b.
  A = QR  ⟹  QRx = b  ⟹  Rx = Qᵀb      (Q is unitary)
which is very easy to solve because R is square and upper triangular (back substitution).

Least squares setup

Given A ∈ ℝ^{m×n} with m > n and b ∈ ℝᵐ, the goal is to find x minimising ‖Ax − b‖₂. If b is picked arbitrarily (say uniformly) from ℝᵐ, an exact solution of Ax = b is unlikely to exist: there are m equations but only n variables.

QR Factorization

Classical Gram-Schmidt orthogonalization is unstable. Viewed as matrix operations, Gram-Schmidt is "triangular orthogonalization": it multiplies A on the right by triangular matrices until the result is orthonormal, A R₁⁻¹R₂⁻¹… = Q̂.

At step j we subtract from aⱼ its components along the previous q's:
  vⱼ = aⱼ − Σ_{i<j} qᵢ(qᵢᵀaⱼ) = (I − Σ_{i<j} qᵢqᵢᵀ) aⱼ = Pⱼ aⱼ,
where Pⱼ = I − Q̂_{j−1}Q̂_{j−1}ᵀ is the orthogonal projector onto the complement of span{q₁, …, q_{j−1}}.

The alternative is "orthogonal triangularization": apply unitary matrices Q₁, Q₂, …, Qₙ on the left to introduce zeros below the diagonal, one column at a time,
  Qₙ … Q₂ Q₁ A = R   (upper triangular),
so A = (Q₁ᵀQ₂ᵀ … Qₙᵀ) R = Q R, a full QR factorization. Each Qₖ is unitary, acts only on rows k, …, m, zeroes the entries below the diagonal in column k, and, being unitary, preserves the 2-norms of the columns.

Householder QR Factorization

Each Qₖ is a Householder reflector, of the form P = I − 2qqᵀ with ‖q‖₂ = 1: it reflects vectors across the hyperplane orthogonal to q, and q is chosen so that the k-th column (below the diagonal) is mapped onto a multiple of e₁.

Projection onto a column space: for A with independent columns, the orthogonal projector onto span(A) is
  P = A(AᵀA)⁻¹Aᵀ,   y = Pv.
If A has orthonormal columns, this simplifies to P = AAᵀ.
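
A compact sketch of Householder QR (Python/NumPy), assuming A is m×n with m ≥ n; the sign choice on the reflector avoids cancellation, and the helper name is mine:

import numpy as np

def householder_qr(A):
    """Full QR via Householder reflectors: returns Q (m x m) and R (m x n)."""
    R = np.asarray(A, dtype=float).copy()
    m, n = R.shape
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        sign = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += sign * np.linalg.norm(x)   # reflect x onto -sign*||x||*e1
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue
        v /= norm_v
        # apply the reflector I - 2 v v^T to the trailing blocks of R and to Q
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:]  -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.default_rng(4).standard_normal((5, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))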

Least Square Problem

Given A ∈ ℝ^{m×n} (full column rank, m ≥ n) and b ∈ ℝᵐ: find x such that ‖Ax − b‖₂ is minimum. Let r = b − Ax be the residual. Equivalent characterisations of the minimiser:
  1. Aᵀr = 0                       (normal equation)
  2. AᵀA x = Aᵀb
  3. Ax = Pb, where P is the orthogonal projection onto span(A)

(1) ⟺ (2): with r = b − Ax,
  Aᵀr = Aᵀb − AᵀAx, so Aᵀr = 0 ⟺ AᵀAx = Aᵀb.
(2) ⟹ (3): multiply by A(AᵀA)⁻¹:
  A(AᵀA)⁻¹(AᵀAx) = A(AᵀA)⁻¹Aᵀb  ⟹  Ax = Pb,  with P = A(AᵀA)⁻¹Aᵀ.

Why this minimises: for any other point z = Ax′ in span(A),
  ‖z − b‖₂² = ‖z − Pb‖₂² + ‖Pb − b‖₂² ≥ ‖Pb − b‖₂²,
since z − Pb lies in span(A) and Pb − b is orthogonal to span(A). So any z ≠ Pb does not minimise the distance, and the x with Ax = Pb is the minimiser.

If A is full rank then x is unique:
  AᵀAx = 0 ⟹ xᵀAᵀAx = ‖Ax‖₂² = 0 ⟹ Ax = 0 ⟹ x = 0,
so AᵀA is nonsingular and
  x = (AᵀA)⁻¹Aᵀb = A⁺b,   y = Ax = Pb,
where A⁺ = (AᵀA)⁻¹Aᵀ is the pseudoinverse.

Three ways to solve the least squares problem:

1. Normal Equations
   AᵀA is symmetric positive definite, so we can find an upper triangular matrix R such that
     AᵀA = RᵀR      (Cholesky factorisation).
   Then RᵀRx = Aᵀb: solve Rᵀw = Aᵀb for w (forward substitution), then Rx = w for x (back substitution).
     x = A⁺b,  y = Pb,  A⁺ = (AᵀA)⁻¹Aᵀ.

2. QR factorization
   A = Q̂R̂ (reduced), and P = Q̂Q̂ᵀ, so
     Ax = Pb  ⟹  Q̂R̂x = Q̂Q̂ᵀb  ⟹  R̂x = Q̂ᵀb.

3. SVD
   A = ÛΣ̂Vᵀ (reduced), and P = ÛÛᵀ, so
     Ax = Pb  ⟹  ÛΣ̂Vᵀx = ÛÛᵀb  ⟹  Σ̂Vᵀx = Ûᵀb.
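
A sketch comparing the three approaches on a random overdetermined system (NumPy; all three agree for a well-conditioned full-rank A):

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

# 1. normal equations: solve A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# 2. reduced QR: R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# 3. reduced SVD: Sigma V^T x = U^T b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(np.allclose(x_ne, x_qr), np.allclose(x_qr, x_svd))
print(np.allclose(x_qr, np.linalg.lstsq(A, b, rcond=None)[0]))
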
Gaussian Elimination

A ∈ ℝ^{m×m} is a square matrix.

Goal: transform A into an upper triangular matrix by introducing 0s below the diagonal, column by column. Each elimination step is multiplication by a lower triangular matrix Lₖ, so this is "triangular triangularization":
  L_{m−1} … L₂ L₁ A = U   (upper triangular)
  ⟹  A = L U,   L = L₁⁻¹L₂⁻¹…L_{m−1}⁻¹   (lower triangular, unit diagonal).

For the k-th column, Lₖ = I − (multipliers below the diagonal in column k)·eₖᵀ; its inverse just negates those multiplier entries, so L collects the multipliers ℓ_{jk} directly.

Algorithm (no pivoting):
  U = A, L = I
  for k = 1 to m−1
    for j = k+1 to m
      ℓ_{jk} = u_{jk} / u_{kk}
      u_{j, k:m} = u_{j, k:m} − ℓ_{jk} · u_{k, k:m}

Operation count: Σₖ (m − k)(2(m − k) + 1) ≈ (2/3)m³ flops.
Solving Ax = b: factor A = LU (≈ 2m³/3), then solve Ly = b and Ux = y by substitution, O(m²) each.

Stability example:
  A = [ 10⁻²⁰  1 ]
      [   1    1 ]
Without pivoting:
  L = [   1    0 ]      U = [ 10⁻²⁰      1       ]
      [ 10²⁰   1 ]          [   0     1 − 10²⁰   ]
In floating point, 1 − 10²⁰ rounds to −10²⁰, so the computed product L·U is
  [ 10⁻²⁰  1 ]
  [   1    0 ]
which differs from A in the (2,2) entry: the machine error is amplified by the huge entries of L and U, and the computed solution is badly wrong. Bad stability.

Partial pivoting: at step k, swap rows so the pivot u_{kk} is the largest entry (in magnitude) in column k on or below the diagonal. Here, swapping the two rows first keeps all multipliers ≤ 1 in magnitude, the entries of L and U stay small, and the answer is accurate.
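
A sketch reproducing the stability example numerically (NumPy; lu_nopivot is a naive factorisation written only for this illustration):

import numpy as np

def lu_nopivot(A):
    """LU factorisation without pivoting (for illustration only)."""
    U = np.asarray(A, dtype=float).copy()
    m = U.shape[0]
    L = np.eye(m)
    for k in range(m - 1):
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]
            U[j, k:] -= L[j, k] * U[k, k:]
    return L, U

A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

L, U = lu_nopivot(A)
y = np.linalg.solve(L, b)        # forward substitution
x_bad = np.linalg.solve(U, y)    # back substitution: ruined by 1 - 1e20 -> -1e20
x_good = np.linalg.solve(A, b)   # LAPACK solve uses partial pivoting

print(L @ U - A)                 # (2,2) entry is off by 1: computed LU != A
print(x_bad, x_good)             # x_bad is badly wrong; x_good ≈ (1, 1)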

Cholesky

Lemma: if x₁, x₂ are eigenvectors of a symmetric (Hermitian) matrix A corresponding to distinct eigenvalues, then they are orthogonal.
  Ax₁ = λ₁x₁,  Ax₂ = λ₂x₂,  A = Aᵀ
  λ₁·x₂ᵀx₁ = x₂ᵀ(Ax₁) = (Ax₂)ᵀx₁ = λ₂·x₂ᵀx₁
  ⟹ (λ₁ − λ₂)·x₂ᵀx₁ = 0, and λ₁ ≠ λ₂, so x₂ᵀx₁ = 0.
  ∴ orthogonal.

Cholesky Factorisation

A positive definite Hermitian (here: symmetric positive definite).

  A = RᵀR,   R upper triangular.

One step: write
  A = [ a₁₁  wᵀ ]
      [  w   K  ]
Then, with the upper triangular
  R₁ = [ √a₁₁   wᵀ/√a₁₁ ]
       [  0        I     ],
we have
  A = R₁ᵀ [ 1       0        ] R₁ = R₁ᵀ A₁ R₁,   A₁ = [ 1  0 ; 0  K − wwᵀ/a₁₁ ].
          [ 0  K − wwᵀ/a₁₁  ]

Corollary: if A is positive definite Hermitian, then the submatrix K − wwᵀ/a₁₁ appearing in A₁ is also positive definite Hermitian.
  From A = R₁ᵀA₁R₁ we get A₁ = R₁⁻ᵀAR₁⁻¹; we only need R₁ to be full rank, which it is because its determinant is √a₁₁ ≠ 0 (a₁₁ > 0 since A is positive definite). Then for any x ≠ 0, taking y = R₁⁻¹[0; x],
    xᵀ(K − wwᵀ/a₁₁)x = yᵀAy > 0.

Existence of the Cholesky factorisation follows: the same step applies recursively to K − wwᵀ/a₁₁, and it always works for positive definite Hermitian matrices. In the Gaussian-elimination framework,
  A = R₁ᵀA₁R₁,  A₁ = R₂ᵀA₂R₂, …   ⟹   R = Rₘ…R₂R₁,   A = RᵀR.
Cost: about m³/3 flops, roughly half of Gaussian elimination (check why).

Stability: compare A = LU (Gaussian elimination) with Cholesky A = RᵀR.

  ‖A‖₂ = ‖RᵀR‖₂ ≤ ‖Rᵀ‖₂‖R‖₂ = ‖R‖₂²,
and in fact equality holds. Use the SVD:
  R = UΣVᵀ,   Rᵀ = VΣUᵀ
  A = RᵀR = VΣUᵀUΣVᵀ = VΣ²Vᵀ
so ‖A‖₂ = σ_max(R)² = ‖R‖₂², i.e. ‖R‖₂ = ‖A‖₂^{1/2}. The Cholesky factor can never be much larger than A itself, which is why Cholesky is stable without any pivoting, unlike LU, where the factors can blow up.
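
A minimal sketch of the recursive Cholesky step described above (NumPy; the helper name is mine), plus the norm identity ‖R‖₂² = ‖A‖₂:

import numpy as np

def cholesky_upper(A):
    """Return upper triangular R with A = R^T R, for symmetric positive definite A."""
    A = np.asarray(A, dtype=float).copy()
    m = A.shape[0]
    R = np.zeros((m, m))
    for k in range(m):
        R[k, k] = np.sqrt(A[k, k])
        R[k, k+1:] = A[k, k+1:] / R[k, k]
        # Schur complement update: K - w w^T / a_kk
        A[k+1:, k+1:] -= np.outer(R[k, k+1:], R[k, k+1:])
    return R

M = np.random.default_rng(6).standard_normal((4, 4))
A = M.T @ M + 4 * np.eye(4)      # symmetric positive definite
R = cholesky_upper(A)

print(np.allclose(R.T @ R, A))
print(np.isclose(np.linalg.norm(R, 2)**2, np.linalg.norm(A, 2)))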
