Lecture Notes on Mathematical Methods of Physics
Dr. A. N. Njah,
Department of Physics,
University of Agriculture,
Abeokuta.
PHS 471: Linear Algebra: Transformations in linear vector spaces and matrix theory.
Functional analysis: Hilbert space, complete sets of orthogonal functions; linear operators.
Special functions: gamma, hypergeometric, Legendre, Bessel, Hermite and Laguerre functions; the Dirac delta function.
Integral transforms and Fourier series: Fourier series and Fourier transform; application of transform methods to the solution of elementary differential equations in Physics and Engineering.
Suggested reading.
Contents

1 LINEAR ALGEBRA
1.1 Vector Space or Linear Space
1.1.1 Algebraic Operations on Vectors
1.1.2 Linearly Dependent and Independent sets of vectors
1.2 Matrix Theory
1.2.1 Determinant of a Matrix
1.2.2 General properties of determinants
1.2.3 Adjugate Matrix or Adjoint of a Matrix
1.2.4 Reciprocal Matrix or Inverse of a Matrix
1.2.5 The Transpose of a Matrix
1.2.6 Symmetric, Skew-symmetric and Orthogonal Matrices
1.3 Complex Matrices
1.3.1 The Conjugate of a Matrix
1.3.2 The Conjugate Transpose or Hermitian Conjugate of a Matrix
1.3.3 Hermitian, Skew-Hermitian and Unitary Matrices
1.4 Matrix Algebra
1.4.1 Rank of a Matrix
1.5 Consistency of equations
1.5.1 Homogeneous and Non-Homogeneous Linear Equations
1.5.2 Uniqueness of Solutions
1.6 Solution of sets of Equations
1.6.1 Inverse method
1.6.2 Row Transformation method
1.6.3 Gaussian elimination method
1.6.4 Triangular Decomposition method: LU-decomposition
1.6.5 Cramer's Rule
1.7 Eigenvalues and Eigenvectors of a Matrix
1.7.1 Nature of the eigenvalues and eigenvectors of special types of matrices
1.7.2 Diagonalisation of a matrix
1.8 Transformation
1.8.1 Transformation
1.8.2 Resultant of two linear transformations
1.8.3 Similarity transformation
1.8.4 Unitary transformation
1.8.5 Orthogonal transformation
1.8.6 Orthogonal set
1.9 Bases and dimension
1.9.1 Linear Maps

2 FUNCTIONAL ANALYSIS
2.1 Normed Spaces
2.1.1 Cauchy Sequences
2.1.2 Completeness
2.1.3 Pre-Hilbert spaces
2.1.4 Hilbert Spaces
2.1.5 Geometry of Hilbert space

3 SPECIAL FUNCTIONS
3.1 The gamma and beta functions
3.1.1 The gamma function Γ
3.1.2 The beta function, β
3.1.3 Application of gamma and beta functions
3.2 Bessel's Functions
3.3 Legendre's Polynomials
3.4 Hermite Polynomials
3.5 Laguerre Polynomials
3.5.1 Hypergeometric Function

4 INTEGRAL TRANSFORMS AND FOURIER SERIES
4.2.1 Integration involving the impulse function
4.2.2 Differential equations involving the impulse function
4.3 Fourier Series
4.3.1 Fourier series of functions of period 2π
4.3.2 Half-range series
4.3.3 Functions with arbitrary period T
4.3.4 Sum of a Fourier series at a point of finite discontinuity
4.4 Fourier Integrals
4.4.1 The Fourier integral
Chapter 1
LINEAR ALGEBRA
1.1.1 Algebraic Operations on Vectors
If A and B are two vectors with components (a1, a2, ..., an) and (b1, b2, ..., bn) respectively, then
(i) A ± B = (a1 ± b1, a2 ± b2, ..., an ± bn)
(ii) kA = (ka1, ka2, ..., kan), k a scalar
(iii) A · B = a1b1 + a2b2 + ... + anbn
(iv) A will be a unit vector if the magnitude |A| = 1
(v) The vectors A and B will be orthogonal if A · B = 0
1.1.2 Linearly Dependent and Independent sets of vectors
A set of vectors is linearly independent if the only scalars for which a linear combination of them vanishes are all zero; otherwise it is linearly dependent.
1. The set of vectors (1,2,3), (2,-2,0) is linearly independent since k1(1, 2, 3) + k2(2, −2, 0) = (0, 0, 0) is equivalent to the set of equations k1 + 2k2 = 0, 2k1 − 2k2 = 0 and 3k1 = 0, which gives k1 = k2 = 0.
2. The set of vectors (2,4,10), (3,6,15) is linearly dependent since k1(2, 4, 10) + k2(3, 6, 15) = (0, 0, 0) gives the system 2k1 + 3k2 = 0, 4k1 + 6k2 = 0, 10k1 + 15k2 = 0, which is satisfied by the non-zero pair k1 = 3, k2 = −2.
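These two checks can be reproduced numerically. The sketch below (assuming numpy is available; it is not part of the original notes) uses the fact that a set of vectors is linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors:

```python
import numpy as np

# Rows are the vectors of each example set.
set1 = np.array([[1, 2, 3], [2, -2, 0]])    # example 1
set2 = np.array([[2, 4, 10], [3, 6, 15]])   # example 2

# Independent iff rank equals the number of vectors.
print(np.linalg.matrix_rank(set1))  # 2 -> linearly independent
print(np.linalg.matrix_rank(set2))  # 1 -> linearly dependent
```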
1.2 Matrix Theory
A set of numbers arranged in a rectangular array of m rows and n columns such as

( a11  a12  ...  a1n )
( a21  a22  ...  a2n )
( ...  ...  ...  ... )
( am1  am2  ...  amn )

is called a matrix of order m × n or an m × n matrix. If m = n (i.e. number of rows = number of columns) it is called a square matrix of order n. The aij (i = 1, 2, ..., m; j = 1, 2, ..., n) are called its elements or constituents or entries; aij represents the element in the ith row and jth column of the matrix. The elements aij with i = j of a square matrix A lie on the main diagonal or principal diagonal and are called its diagonal elements. The sum of the diagonal elements is called the trace of A and is denoted by tr A = Σⁿᵢ₌₁ aii.
Multiplication of a matrix by a matrix: For two matrices A = [aij] and B = [bij] to be multiplied (i.e. for C = AB to be defined) the number of columns of A must equal the number of rows of B; i.e. if A is a p × n matrix, B must be an n × q matrix. C = AB is then a p × q matrix whose element cij in the ith row and jth column is the algebraic sum of the products of the elements in the ith row of A by the corresponding elements in the jth column of B:
cij = Σⁿₖ₌₁ aik bkj = ai1 b1j + ai2 b2j + ... + ain bnj
Examples

D = ( d11  0    0   )    S = ( λ  0  0 )    I = ( 1  0  0 )
    ( 0    d22  0   )        ( 0  λ  0 )        ( 0  1  0 )
    ( 0    0    d33 )        ( 0  0  λ )        ( 0  0  1 )

(D is a diagonal matrix, S a scalar matrix, and I the unit or identity matrix.)
1.2.2 General properties of determinants
Theorem 1: (Behaviour of an nth-order determinant under elementary row operations)
(a) Interchange of two rows multiplies the value of the determinant by −1.
(b) Addition of a multiple of a row to another row does not alter the value of the determinant.
(c) Multiplication of a row by c multiplies the value of the determinant by c.
Theorem 2: (further properties of nth-order determinants)
(a)-(c) in theorem 1 hold also for columns.
(d) Transposition leaves the value of a determinant unaltered.
(e) A zero row or column renders the value of a determinant zero.
(f) Proportional rows or columns render the value of a determinant zero. In particular, a determinant with two identical rows or columns has the value zero.
Singular and Non-singular matrices
A square matrix A is known as a singular matrix if its determinant |A| = 0. If |A| ≠ 0 then A is known as a non-singular matrix.
• If |A| = 0 then A(adj A) = (adj A)A = 0
• adj(AB) = (adj B)(adj A), or adj(ABC) = (adj C)(adj B)(adj A) (prove!!)
A square matrix A is said to be:
(i) symmetric if A′ = A, i.e. if transposition leaves it unaltered
(ii) skew-symmetric if A′ = −A, i.e. if transposition gives the negative of A
(iii) orthogonal if A′ = A⁻¹, i.e. if transposition gives the inverse of A.
Every square matrix can be written A = P + Q,
where P = (1/2)(A + A′) is a symmetric matrix
and Q = (1/2)(A − A′) is a skew-symmetric matrix.
Properties:
• (A†)† = A
• (A + B)† = A† + B†
• If α is a complex number and A a matrix, then (αA)† = α∗A†
• (AB)† = B†A† (prove!!)
1. A = ( 4 2 ; 1 5 ) is of rank 2 since |A| = |4 2; 1 5| = 18, i.e. not zero.
2. B = ( 6 3 ; 8 4 ) gives |B| = |6 3; 8 4| = 0, so the rank of B is less than 2.
If the determinant of a square matrix vanishes, its second-order minors are tested:
|4 5; 2 3|, |1 2; 4 5|, |2 3; 5 6|, |3 5; 1 3|, |1 3; 4 6|, |4 5; 5 6|, |3 5; 4 6|, |3 4; 4 5|,
i.e. we test all possible second-order minors to find one that is not zero.
For a rectangular matrix of order m × n the rank is given by the order of the largest square sub-matrix (formed from its elements) with non-vanishing determinant.
Example:
For the 3 × 4 matrix
( 2 2 3 1 )
( 0 8 2 4 )
( 1 7 3 2 )
the largest square sub-matrix cannot be greater than order 3. We try
| 2 2 3 |
| 0 8 2 | = 0.
| 1 7 3 |
But we must also try other 3 × 3 sub-matrices, e.g.
| 2 3 1 |
| 8 2 4 | = 30 ≠ 0,
| 7 3 2 |
therefore B is of rank 3.
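A numerical cross-check of this example — a sketch assuming numpy is available, with `np.linalg.matrix_rank` standing in for the testing of minors by hand:

```python
import numpy as np

B = np.array([[2, 2, 3, 1],
              [0, 8, 2, 4],
              [1, 7, 3, 2]])

# The 3x3 sub-matrix of columns 1-3 is singular, but columns 2-4 give 30.
print(round(np.linalg.det(B[:, :3])))  # 0
print(round(np.linalg.det(B[:, 1:])))  # 30
print(np.linalg.matrix_rank(B))        # 3
```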
1.5 Consistency of equations
1.5.1 Homogeneous and Non-Homogeneous Linear Equations
A set of m simultaneous linear equations in n unknowns
a11 x1 + a12 x2 + . . . + a1nxn = b1
a21 x1 + a22 x2 + . . . + a2nxn = b2
..............................
am1 x1 + am2 x2 + . . . + amn xn = bm
can be written in the matrix form as follows
( a11  a12  ...  a1n ) ( x1 )   ( b1 )
( a21  a22  ...  a2n ) ( x2 ) = ( b2 )
( ...  ...  ...  ... ) ( .. )   ( .. )
( am1  am2  ...  amn ) ( xn )   ( bm )

i.e. Ax = b
This set of equations is homogeneous if b = 0, i.e. (b1, b2, . . . , bm ) = (0, 0, . . . , 0)
otherwise it is said to be non-homogeneous. This set of equations is said to
be consistent if solutions for x1, x2, . . . , xn exist and inconsistent if no such
solutions can be found.
The augmented coefficient matrix Ab of A is

Ab = ( a11  a12  ...  a1n  b1 )
     ( a21  a22  ...  a2n  b2 )
     ( ...  ...  ...  ...  .. )
     ( am1  am2  ...  amn  bm )
Example:
If ( 1 3 ; 2 6 )( x1 ; x2 ) = ( 4 ; 5 ) then A = ( 1 3 ; 2 6 ) and Ab = ( 1 3 4 ; 2 6 5 ).
Rank of A: |1 3; 2 6| = 0 ⇒ rank of A = 1
Rank of Ab: |1 3; 2 6| = 0 as before,
but |3 4; 6 5| = −9 ⇒ rank of Ab = 2. In this case the rank of A is less than the rank of Ab ⇒ no solution exists.
Example:
Show that ( −4 5 ; −8 10 )( x1 ; x2 ) = ( −3 ; −6 ) has an infinite number of solutions.
A = ( −4 5 ; −8 10 ) and Ab = ( −4 5 −3 ; −8 10 −6 )
Rank of A: |−4 5; −8 10| = 0 ⇒ rank of A = 1
Rank of Ab: |−4 5; −8 10| = 0, |5 −3; 10 −6| = 0, |−4 −3; −8 −6| = 0 ⇒ rank of Ab = 1.
Here rank of A = rank of Ab = 1, which is less than the number of unknowns n = 2.
Therefore an infinite number of solutions exist.
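Both worked examples follow the same rank test (rank A < rank Ab: inconsistent; rank A = rank Ab = n: unique solution; rank A = rank Ab < n: infinitely many). A minimal sketch of that test, assuming numpy:

```python
import numpy as np

def classify(A, b):
    # Compare rank of A with rank of the augmented matrix [A|b].
    r_A = np.linalg.matrix_rank(A)
    r_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if r_A < r_Ab:
        return "inconsistent: no solution"
    return "unique solution" if r_A == n else "infinitely many solutions"

print(classify(np.array([[1, 3], [2, 6]]), [4, 5]))       # inconsistent
print(classify(np.array([[-4, 5], [-8, 10]]), [-3, -6]))  # infinitely many
```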
For the homogeneous linear equations

( a11  a12  ...  a1n ) ( x1 )   ( 0 )
( a21  a22  ...  a2n ) ( x2 ) = ( 0 )
( ...  ...  ...  ... ) ( .. )   ( . )
( am1  am2  ...  amn ) ( xn )   ( 0 )

i.e. Ax = 0 (∗)
Let r be the rank of the matrix A of order m × n. We have the following
results:
1. A system of homogeneous linear equations always has one or more solutions. The two cases are r = n and r < n. For r = n, eq(*) has no linearly independent solutions, for in that case the trivial (zero) solution is the only solution, while in the case r < n there will be (n − r) independent solutions and therefore eq(*) will have more than one solution.
2. The number of linearly independent solutions of Ax = 0 is (n − r), i.e. if we assign arbitrary values to (n − r) of the variables, then the values of the others can be uniquely determined. Since the rank of A is r, it has r linearly independent columns.
3. If the number of equations is less than the number of variables, then a solution other than x1 = x2 = ... = xn = 0 always exists (i.e. there is always a non-trivial solution).
4. If the number of equations is equal to the number of variables a necessary
and sufficient condition for solutions other than x1 = x2 = . . . = xn = 0
is that the determinant of the coefficients must be zero.
1.6 Solution of sets of Equations
1.6.1 Inverse method
To solve Ax = b, with A square and non-singular, we form x = A⁻¹b, obtaining A⁻¹ as follows:
(i) Evaluate |A|, the determinant of A
(ii) Form C, the matrix of cofactors of A (the cofactor of any element is its minor together with its 'place sign')
(iii) Write C′, the transpose of C
(iv) Then A⁻¹ = (1/|A|) × C′
Example: To solve the system
3x1 + 2x2 − x3 = 4
2x1 − x2 + 2x3 = 10
x1 − 3x2 − 4x3 = 5
we rewrite it in matrix form as

( 3  2 −1 ) ( x1 )   (  4 )
( 2 −1  2 ) ( x2 ) = ( 10 )
( 1 −3 −4 ) ( x3 )   (  5 )

i.e. Ax = b, with
A = ( 3 2 −1 ; 2 −1 2 ; 1 −3 −4 )

(i) |A| = | 3 2 −1 ; 2 −1 2 ; 1 −3 −4 | = 55

(ii) C = ( c11 c12 c13 ; c21 c22 c23 ; c31 c32 c33 ), where
c11 = |−1 2; −3 −4| = 10, c12 = −|2 2; 1 −4| = 10, c13 = |2 −1; 1 −3| = −5,
c21 = −|2 −1; −3 −4| = 11, c22 = |3 −1; 1 −4| = −11, c23 = −|3 2; 1 −3| = 11,
c31 = |2 −1; −1 2| = 3, c32 = −|3 −1; 2 2| = −8, c33 = |3 2; 2 −1| = −7

So C = ( 10 10 −5 ; 11 −11 11 ; 3 −8 −7 )

(iii) C′ = ( 10 11 3 ; 10 −11 −8 ; −5 11 −7 ) (i.e. the adjoint of A)

(iv) A⁻¹ = (1/|A|)C′ = (1/55)( 10 11 3 ; 10 −11 −8 ; −5 11 −7 )

So
( x1 )        ( b1 )          ( 10  11   3 ) (  4 )          ( 40 + 110 + 15 )   (  3 )
( x2 ) = A⁻¹ ( b2 ) = (1/55) ( 10 −11  −8 ) ( 10 ) = (1/55) ( 40 − 110 − 40 ) = ( −2 )
( x3 )        ( b3 )          ( −5  11  −7 ) (  5 )          ( −20 + 110 − 35 )  (  1 )

Therefore x1 = 3, x2 = −2, x3 = 1
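The cofactor route above can be mirrored step by step in code. The following sketch (assuming numpy is available) builds C, transposes it to the adjoint, and divides by |A|:

```python
import numpy as np

A = np.array([[3, 2, -1],
              [2, -1, 2],
              [1, -3, -4]], dtype=float)
b = np.array([4, 10, 5], dtype=float)

C = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, 0), j, 1)       # strike row i, col j
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor

A_inv = C.T / np.linalg.det(A)   # A^{-1} = C' / |A|
print(np.round(A_inv @ b, 10))   # [ 3. -2.  1.]
```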
With A = ( 2 1 1 ; 1 3 2 ; 3 −2 −4 ), the augmented matrix

( 2  1  1 | 1 0 0 )
( 1  3  2 | 0 1 0 )
( 3 −2 −4 | 0 0 1 )

is transformed by row operations to

( 1 0 0 |  8/17  −2/17   1/17 )
( 0 1 0 | −10/17  11/17   3/17 )
( 0 0 1 |  11/17  −7/17  −5/17 )

We now have

( 1 0 0 ) ( x1 )          (  8  −2   1 ) (  5 )
( 0 1 0 ) ( x2 ) = (1/17) ( −10  11   3 ) (  1 )
( 0 0 1 ) ( x3 )          (  11  −7  −5 ) ( −4 )

which gives x1 = 2, x2 = −3, x3 = 4
or (A − λI)X = 0
Since X ≠ 0, the matrix (A − λI) is singular, so that its determinant |A − λI|, which is known as the characteristic determinant of A, is zero. This leads to the characteristic equation of A,
|A − λI| = 0 (∗)
from which it follows that every characteristic root λ of a matrix A is a root of its characteristic equation, eq(*).
Example:
If A = ( 5 4 ; 1 2 ), find its characteristic roots and vectors.
The characteristic equation of A is given by |A − λI| = 0,
i.e. | 5−λ  4 ; 1  2−λ | = 0
i.e. λ² − 7λ + 6 = 0
or (λ − 1)(λ − 6) = 0
Therefore λ1 = 1, λ2 = 6 are the eigenvalues of A.
For λ2 = 6, (A − λI)X = 0 gives
−x1 + 4x2 = 0
and x1 − 4x2 = 0
which yield x1 = 4x2.
If we take x1 = 4, then x2 = 1.
Therefore X2 = ( x1 ; x2 ) = ( 4 ; 1 );
hence the normalised eigenvector is X2 = (1/√17)( 4 ; 1 )
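For comparison, numpy's eigensolver reproduces these values — a sketch; the eigenvectors are returned normalised and may differ by sign or order:

```python
import numpy as np

A = np.array([[5.0, 4.0], [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)

print(vals)            # eigenvalues 6 and 1 (order may vary)
idx = np.argmax(vals)  # pick lambda = 6
print(vecs[:, idx])    # proportional to (4, 1)/sqrt(17)
```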
Theorem 3: The modulus of each eigenvalue of a unitary matrix is unity, i.e. the eigenvalues of a unitary matrix have absolute value 1.
Corollary: The eigenvalues of an orthogonal matrix have the absolute value
unity and are real, or complex conjugate in pairs.
Theorem 4: Any two eigenvectors corresponding to two distinct eigenvalues
of a Hermitian matrix are orthogonal.
Proof: Let X 1 and X 2 be two eigenvectors corresponding to two distinct
eigenvalues λ1 and λ2 of a Hermitian matrix A; then
AX 1 = λ1 X 1 ...............................(1)
AX 2 = λ2 X 2 ...............................(2)
From theorem 1, λ1 and λ2 are real. Premultiplying (1) by X2† and (2) by X1† respectively,
X2†AX1 = λ1X2†X1 ...............................(3)
X1†AX2 = λ2X1†X2 ...............................(4)
But (X2†AX1)† = X1†A†X2.
For a Hermitian matrix A† = A, and also (X2†)† = X2; therefore we have, from (3) and (4),
(λ1X2†X1)† = λ2X1†X2
or λ1X1†X2 = λ2X1†X2 (λ1 being real)
or (λ1 − λ2)X1†X2 = 0
Since λ1 − λ2 ≠ 0 for distinct roots, we have X1†X2 = 0 ⇒ X1 and X2 are orthogonal.
Corollary: Any two eigenvectors corresponding to two distinct eigenvalues of
a real symmetric matrix are orthogonal.
Theorem 5: Any two eigenvectors corresponding to two distinct eigenvalues
of a unitary matrix are orthogonal.
Theorem 6: The eigenvectors corresponding to distinct eigenvalues of a matrix are linearly independent.
Theorem 7: The characteristic polynomial and hence the eigenvalues of similar matrices are the same. Also if X is an eigenvector of A corresponding to the eigenvalue λ, then P⁻¹X is an eigenvector of B corresponding to the eigenvalue λ, where B = P⁻¹AP.
Proof: Let A and B be two similar matrices. Then there exists an invertible matrix P such that B = P⁻¹AP. Consider
B − λI = P⁻¹AP − λI = P⁻¹(A − λI)P
since P⁻¹(λI)P = λP⁻¹P = λI.
Therefore |B − λI| = |P⁻¹||A − λI||P| = |A − λI|,
from which it follows that A and B have the same characteristic polynomial and so they have the same eigenvalues.
Corollary 1: The eigenvalues of a matrix are invariant under similarity trans-
formation.
Corollary 2: If A is similar to a diagonal matrix D then the diagonal elements
of D are the eigenvalues of A.
Show that AM = MS, where M is the matrix whose columns are the eigenvectors of A and S is the diagonal matrix of the eigenvalues
⇒ M⁻¹AM = S (i.e. a similarity transformation of A to S, which implies that S and A are similar matrices)
Note
1. M−1AM transforms the square matrix A into a diagonal matrix S
2. A square matrix A of order n can be so transformed if the matrix has n
independent eigenvectors.
3. A matrix A always has n linearly independent eigenvectors if it has n
distinct eigenvalues or if it is a symmetric matrix.
4. If the matrix has repeated eigenvalues, it may or may not have n linearly independent eigenvectors.
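A short sketch of note 1 (assuming numpy; the example matrix from section 1.7 is reused): the modal matrix M is built from the eigenvectors, and M⁻¹AM comes out diagonal:

```python
import numpy as np

A = np.array([[5.0, 4.0], [1.0, 2.0]])   # distinct eigenvalues 1 and 6
vals, M = np.linalg.eig(A)               # columns of M = eigenvectors

S = np.linalg.inv(M) @ A @ M             # similarity transformation
print(np.round(S, 10))                   # diagonal matrix of the eigenvalues
```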
1.8 Transformation
Linear form: An expression of the form Σⁿⱼ₌₁ aij xj is said to be a linear form in the variables xj.
1.8.1 Transformation
If the aij are given constants and the xj variables, then the set of equations
yi = Σⁿⱼ₌₁ aij xj (for i = 1, 2, 3, ..., n) (1)
defines a linear transformation of the variables xj into the variables yi.
If A is an identity matrix I then we have the identical transformation and
its determinant is unity. In this case y1 = x1, y2 = x2, y3 = x3, . . . , yn = xn
The condition for a square matrix A to be orthogonal is AA′ = I. In eq.(2), if P is orthogonal, then P⁻¹ = P′ and eq.(2) is an orthogonal transformation. The product of two orthogonal transformations is an orthogonal transformation. Two n-vectors x and y are orthogonal to each other if x · y = ⟨x, y⟩ = 0, i.e. if x†y = 0.
Chapter 2
FUNCTIONAL ANALYSIS
2.1.2 Completeness
A normed space in which every Cauchy sequence has a limit is said to be
complete.
2.1.3 Pre-Hilbert spaces
An inner product space or a pre-Hilbert space is a pair (H, ⟨·, ·⟩) consisting of a linear space H over K and a functional ⟨·, ·⟩ : H × H → K, called the inner product of H, with the following properties:
(i) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩, ∀x, y, z ∈ H
(ii) ⟨αx, y⟩ = α⟨x, y⟩, ∀x, y ∈ H, α ∈ K
(iii) ⟨x, y⟩ = ⟨y, x⟩* (complex conjugation), ∀x, y ∈ H
(iv) ⟨x, x⟩ ≥ 0, ∀x ∈ H and ⟨x, x⟩ = 0 iff x = 0.
Remark
1. For x, y ∈ H the number ⟨x, y⟩ is called the inner product of x and y.
2. For x ∈ H, define ‖x‖ by ‖x‖ = √⟨x, x⟩, x ∈ H. Then ‖·‖ is a norm on H, whence (H, ‖·‖) is a normed space. ‖·‖ is called the norm induced by the inner product ⟨·, ·⟩.
3. With ‖·‖ as in 2., one can show that
‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖² for all x, y ∈ H.
This result is called the parallelogram law and is a characterizing property of pre-Hilbert spaces, i.e. if a norm does not satisfy the parallelogram law, then it is not induced by an inner product.

2.1.4 Hilbert Spaces
A pre-Hilbert space that is complete in the induced norm is called a Hilbert space. For example, on Kⁿ define
⟨x, y⟩ = Σⁿᵢ₌₁ xᵢyᵢ*
where x = (x1, x2, ..., xn), y = (y1, y2, ..., yn). Then (Kⁿ, ⟨·, ·⟩) is a Hilbert space of finite dimension.
Chapter 3
SPECIAL FUNCTIONS
3.1 The gamma and beta functions
3.1.1 The gamma function Γ
The gamma function is defined for x > 0 by Γ(x) = ∫₀^∞ t^{x−1}e^{−t}dt, and integration by parts gives the recurrence relation Γ(x + 1) = xΓ(x). Hence
Γ(n + 1) = nΓ(n)
= n(n − 1)Γ(n − 1) since Γ(n) = (n − 1)Γ(n − 1)
= n(n − 1)(n − 2)Γ(n − 2) since Γ(n − 1) = (n − 2)Γ(n − 2)
= ....................................................
= n(n − 1)(n − 2)(n − 3) ... 1 · Γ(1)
= n!Γ(1)
But Γ(1) = ∫₀^∞ t⁰e^{−t}dt = [−e^{−t}]₀^∞ = 1
⇒ Γ(n + 1) = n! (3)
Examples:
Γ(7) = 6! = 720, Γ(8) = 7! = 5040, Γ(9) = 8! = 40320
We can also use the recurrence relation in reverse:
Γ(x + 1) = xΓ(x) ⇒ Γ(x) = Γ(x + 1)/x
Example:
If Γ(7) = 720 then Γ(6) = Γ(6 + 1)/6 = Γ(7)/6 = 720/6 = 120
If x = 1/2 it can be shown that Γ(1/2) = √π (F.E.M. 150)
Using the recurrence relation Γ(x + 1) = xΓ(x) we can obtain the following:
Γ(3/2) = (1/2)Γ(1/2) = (1/2)√π ⇒ Γ(3/2) = √π/2
Γ(5/2) = (3/2)Γ(3/2) = (3/2)(√π/2) ⇒ Γ(5/2) = 3√π/4
Negative values of x
Since Γ(x) = Γ(x + 1)/x, then as x → 0, Γ(x) → ∞ ⇒ Γ(0) = ∞
The same result occurs for all negative integral values of x.
Examples:
At x = −1, Γ(−1) = Γ(0)/(−1) = ∞
At x = −2, Γ(−2) = Γ(−1)/(−2) = ∞ etc.
Also at x = −1/2, Γ(−1/2) = Γ(1/2)/(−1/2) = −2√π
and at x = −3/2, Γ(−3/2) = Γ(−1/2)/(−3/2) = (4/3)√π
[Graph of y = Γ(x)]
Examples:
1. Evaluate ∫₀^∞ x⁷e^{−x}dx
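By the definition of the gamma function this integral is Γ(8) = 7! = 5040. A numerical cross-check — a sketch, assuming scipy is available:

```python
import math
from scipy.integrate import quad

# integral_0^inf x^7 e^{-x} dx = Gamma(8) = 7!
val, _ = quad(lambda x: x**7 * math.exp(-x), 0, math.inf)
print(round(val))                        # 5040
print(math.gamma(8), math.factorial(7))  # 5040.0 and 5040
```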
Put x = y^{1/2}, so that x^{1/2} = y^{1/4} and dx = dy/(2y^{1/2}).
I = ∫₀^∞ (y^{1/4}e^{−y}/(2y^{1/2}))dy = (1/2)∫₀^∞ y^{−1/4}e^{−y}dy = (1/2)∫₀^∞ y^{v−1}e^{−y}dy where v = 3/4 ⇒ I = (1/2)Γ(3/4)
From tables, Γ(0.75) = 1.2254 ⇒ I = 0.613
It can be shown that the beta function and the gamma function are related as
β(m, n) = Γ(m)Γ(n)/Γ(m + n) = (m − 1)!(n − 1)!/(m + n − 1)!
For I = ∫₀¹ x⁵(1 − x)⁴dx, then m − 1 = 5 ⇒ m = 6 and n − 1 = 4 ⇒ n = 5, so
I = β(6, 5) = 5!4!/10! = 1/1260
2. Evaluate I = ∫₀¹ x⁴√(1 − x²)dx
Comparing this with β(m, n) = ∫₀¹ x^{m−1}(1 − x)^{n−1}dx (after putting x² for x, which supplies the factor 1/2):
m − 1 = 3/2 ⇒ m = 5/2 and n − 1 = 1/2 ⇒ n = 3/2
Therefore, I = (1/2)β(5/2, 3/2) = (1/2)Γ(5/2)Γ(3/2)/Γ(5/2 + 3/2) = (1/2)(3√π/4)(√π/2)/3! = π/32
3. Evaluate I = ∫₀³ x³/√(3 − x) dx (F.E.M. 170)
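The two results above (β(6, 5) = 1/1260 and I = π/32) can be checked with scipy — a sketch; `scipy.special.beta` implements β(m, n) = Γ(m)Γ(n)/Γ(m + n):

```python
import math
from scipy.special import beta
from scipy.integrate import quad

print(beta(6, 5), 1 / 1260)      # both 7.936507...e-04

I, _ = quad(lambda x: x**4 * math.sqrt(1 - x**2), 0, 1)
print(I, math.pi / 32)           # both 0.0981747...
```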
3.2 Bessel's Functions
Bessel's equation of order v is
x²(d²y/dx²) + x(dy/dx) + (x² − v²)y = 0 (1)
It is solved by the Frobenius method: we assume a series solution
y = x^c(a0 + a1x + a2x² + a3x³ + ... + a_r x^r + ...) or y = x^c Σ_{r=0}^∞ a_r x^r
i.e. y = a0x^c + a1x^{c+1} + a2x^{c+2} + ... + a_r x^{c+r} + ... or y = Σ_{r=0}^∞ a_r x^{c+r} (2)
where c, a0, a1, a2, ..., a_r are constants; a0 is the first non-zero coefficient and c is called the indicial constant.
dy/dx = a0cx^{c−1} + a1(c + 1)x^c + a2(c + 2)x^{c+1} + ... + a_r(c + r)x^{c+r−1} + ... (3)
d²y/dx² = a0c(c − 1)x^{c−2} + a1c(c + 1)x^{c−1} + a2(c + 1)(c + 2)x^c + ... + a_r(c + r − 1)(c + r)x^{c+r−2} + ... (4)
Substituting eqs.(2), (3) and (4) into (1) and equating coefficients of equal powers of x, we have c = ±v and a1 = 0.
The recurrence relation is a_r = a_{r−2}/(v² − (c + r)²) for r ≥ 2.
y = Ay1 + By2
Let a0 = 1/(2^v Γ(v + 1)); then the solution y1 gives, for c = v = n (where n is a positive integer), Bessel's functions of the first kind of order n, denoted by Jn(x), where
Jn(x) = (x/2)^n { 1/Γ(n + 1) − x²/(2²(1!)Γ(n + 2)) + x⁴/(2⁴(2!)Γ(n + 3)) − ... }
= (x/2)^n Σ_{k=0}^∞ (−1)^k x^{2k}/(2^{2k}(k!)Γ(n + k + 1))
= (x/2)^n Σ_{k=0}^∞ (−1)^k x^{2k}/(2^{2k}(k!)(n + k)!)
Similarly for c = −v = −n (a negative integer)
J−n(x) = (x/2)^{−n} { 1/Γ(1 − n) − x²/(2²(1!)Γ(2 − n)) + x⁴/(2⁴(2!)Γ(3 − n)) − ... }
= (x/2)^{−n} Σ_{k=0}^∞ (−1)^k x^{2k}/(2^{2k}(k!)Γ(k − n + 1))
= (−1)^n (x/2)^n Σ_{k=0}^∞ (−1)^k x^{2k}/(2^{2k}(k!)(n + k)!) (for details see F.E.M. 247)
= (−1)^n Jn(x)
⇒ The two solutions Jn(x) and J−n(x) are dependent on each other. Furthermore the series for Jn(x) is
Jn(x) = (x/2)^n { 1/n! − (1/(1!(n + 1)!))(x/2)² + (1/(2!(n + 2)!))(x/2)⁴ − ... }
Remark: Note that J0(x) and J1(x) are similar to cos x and sin x respectively.
Generating function: If we want to study a certain sequence {fn(x)} and can find a function G(t, x) = Σ_{n=0}^∞ fn(x)tⁿ, we may obtain the properties of {fn(x)} from those of G, which "generates" this sequence and is called a generating function of it.
The generating function for Jn(x) is e^{(x/2)(t − 1/t)} = Σ_{n=−∞}^∞ Jn(x)tⁿ
Recurrence formula: Jn(x) can also be obtained from the recurrence formula
Jn+1(x) = (2n/x)Jn(x) − Jn−1(x)
On (0 < x < 1) the Jn(x) are orthogonal.
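The recurrence formula is easy to verify numerically with scipy's Bessel function `jv` — a sketch at an arbitrarily chosen point:

```python
from scipy.special import jv

x, n = 1.5, 2
lhs = jv(n + 1, x)
rhs = (2 * n / x) * jv(n, x) - jv(n - 1, x)
print(lhs, rhs)   # agree to machine precision
```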
3.3 Legendre's Polynomials
They are solutions of Legendre's equation
(1 − x²)(d²y/dx²) − 2x(dy/dx) + k(k + 1)y = 0
where k is a real constant. Solving it by the Frobenius method as before we obtain c = 0 and c = 1, and the corresponding solutions are
a) c = 0: y = a0{1 − (k(k + 1)/2!)x² + (k(k − 2)(k + 1)(k + 3)/4!)x⁴ − ...}
b) c = 1: y = a1{x − ((k − 1)(k + 2)/3!)x³ + ((k − 1)(k − 3)(k + 2)(k + 4)/5!)x⁵ − ...}
where a0 and a1 are the usual arbitrary constants. When k is an integer n, one of the solution series terminates after a finite number of terms. The resulting polynomial in x, denoted Pn(x), is called the Legendre polynomial, with a0 and a1 being chosen so that the polynomial has unit value when x = 1. The Pn(x) are orthogonal on (−1 < x < 1).
e.g. P0(x) = a0{1 − 0 + 0 − ...} = a0. We choose a0 = 1 so that P0(x) = 1
P1(x) = a1{x − 0 + 0 − ...} = a1x
a1 is then chosen to make P1(x) = 1 when x = 1 ⇒ a1 = 1 ⇒ P1(x) = x
P2(x) = a0{1 − (2·3/2!)x² + 0 + 0 + ...} = a0{1 − 3x²}
If P2(x) = 1 when x = 1 then a0 = −1/2 ⇒ P2(x) = (1/2)(3x² − 1)
Using the same procedure obtain:
P3(x) = (1/2)(5x³ − 3x)
P4(x) = (1/8)(35x⁴ − 30x² + 3)
P5(x) = (1/8)(63x⁵ − 70x³ + 15x) etc.
Legendre polynomials can also be expressed by Rodrigues' formula,
Pn(x) = (1/(2ⁿn!)) dⁿ/dxⁿ (x² − 1)ⁿ
(Use this formula to obtain P0(x), P1(x), P2(x), P3(x), etc.)
The generating function is
1/√(1 − 2xt + t²) = Σ_{n=0}^∞ Pn(x)tⁿ
To show this, start from the binomial expansion of 1/√(1 − v) where v = 2xt − t², multiply the powers of 2xt − t² out, collect all the terms involving tⁿ, and verify that the sum of these terms is Pn(x)tⁿ.
The recurrence formula for Legendre polynomials is
Pn+1(x) = ((2n + 1)/(n + 1))xPn(x) − (n/(n + 1))Pn−1(x)
This means that if we know Pn−1(x) and Pn(x) we can calculate Pn+1(x), e.g. given that P0(x) = 1 and P1(x) = x we can calculate P2(x) using the recurrence formula by taking Pn−1 = P0, Pn = P1 and Pn+1 = P2 ⇒ n = 1.
Substituting these in the formula,
P2(x) = ((2×1 + 1)/(1 + 1))xP1(x) − (1/(1 + 1))P0(x) = (1/2)(3x² − 1)
Similarly to find P3(x) we set Pn−1 = P1, Pn = P2 and Pn+1 = P3 where n = 2. Substituting these in the formula we have
P3(x) = ((2×2 + 1)/(2 + 1))xP2(x) − (2/(2 + 1))P1(x)
= (5/3)x × (1/2)(3x² − 1) − (2/3)x
= (1/2)(5x³ − 3x)
(Using the recurrence formula obtain P4(x) and P5(x))
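The recurrence can be turned directly into a small routine. The sketch below (assuming scipy for the reference values) rebuilds Pn(x) from P0 and P1:

```python
from scipy.special import eval_legendre

def legendre(n, x):
    # P_{k+1} = ((2k+1) x P_k - k P_{k-1}) / (k+1), starting from P0 = 1, P1 = x
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(3, 0.7))        # 0.5*(5*0.7**3 - 3*0.7) = -0.1925
print(eval_legendre(3, 0.7))   # same value
```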
3.5 Laguerre Polynomials
They are solutions of the Laguerre differential equation
x(d²y/dx²) + (1 − x)(dy/dx) + vy = 0 (∗1)
Using the Frobenius method again we have
y = Σ_{r=0}^∞ a_r x^{c+r}, where a0 ≠ 0
dy/dx = Σ_{r=0}^∞ a_r(c + r)x^{c+r−1} and
d²y/dx² = Σ_{r=0}^∞ a_r(c + r)(c + r − 1)x^{c+r−2}
3.5.1 Hypergeometric Function
The solutions of Gauss's hypergeometric differential equation
x(1 − x)y″ + [γ − (α + β + 1)x]y′ − αβy = 0
are obtained by the Frobenius method. The exponent c = 0 gives y1(x) = F(α, β, γ; x), and c = 1 − γ gives y2(x) = x^{1−γ}F(α − γ + 1, β − γ + 1, 2 − γ; x). The complete solution is
y = AF(α, β, γ; x) + Bx^{1−γ}F(α − γ + 1, β − γ + 1, 2 − γ; x)
Chapter 4
INTEGRAL TRANSFORMS AND FOURIER SERIES
The Laplace transform of f(t) is defined by L{f(t)} = ∫₀^∞ f(t)e^{−st}dt.
1. If f(t) = a, a constant, then L{a} = ∫₀^∞ ae^{−st}dt = a/s
⇒ L{a} = a/s (1)
e.g. for a = 1, L{1} = 1/s
2. If f(t) = e^{at}
L{e^{at}} = ∫₀^∞ e^{at}e^{−st}dt = ∫₀^∞ e^{−(s−a)t}dt = [e^{−(s−a)t}/(−(s − a))]₀^∞ = −(1/(s − a))[0 − 1] = 1/(s − a)
⇒ L{e^{at}} = 1/(s − a) (2)
Similarly L{e^{−at}} = 1/(s + a) (3)
3. If f(t) = sin at
⇒ L{sin at} = a/(s² + a²) (4)
e.g. L{sin 2t} = 2/(s² + 4)
4. If f(t) = cos at
⇒ L{cos at} = s/(s² + a²) (5)
5. If f(t) = tⁿ
⇒ L{tⁿ} = n!/s^{n+1} (6)
e.g. L{t³} = 3!/s^{3+1} = 6/s⁴.
6. If f(t) = sinh at
L{sinh at} = ∫₀^∞ sinh(at)e^{−st}dt = ∫₀^∞ ((e^{at} − e^{−at})/2)e^{−st}dt
= (1/2)[∫₀^∞ e^{−(s−a)t}dt − ∫₀^∞ e^{−(s+a)t}dt]
= (1/2)[1/(s − a) − 1/(s + a)]
= a/(s² − a²)
⇒ L{sinh at} = a/(s² − a²) (7)
Similarly L{cosh at} = s/(s² − a²) (8)
e.g. L{4 cosh 3t} = 4 · s/(s² − 3²) = 4s/(s² − 9)
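The table of transforms (1)-(8) can be reproduced symbolically with sympy — a sketch, assuming sympy is installed; `noconds=True` suppresses the convergence conditions:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

print(sp.laplace_transform(sp.sin(2*t), t, s, noconds=True))    # 2/(s**2 + 4)
print(sp.laplace_transform(t**3, t, s, noconds=True))           # 6/s**4
print(sp.laplace_transform(sp.sinh(3*t), t, s, noconds=True))   # 3/(s**2 - 9)
print(sp.laplace_transform(4*sp.cosh(3*t), t, s, noconds=True)) # 4*s/(s**2 - 9)
```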
4.1.1 Inverse Transform
Given a Laplace transform F(s), one can find the function f(t) by the inverse transform
f(t) = L⁻¹{F(s)}, where L⁻¹ indicates the inverse transform.
e.g. L⁻¹{1/(s − 2)} = e^{2t}
L⁻¹{4/s} = 4
L⁻¹{s/(s² + 25)} = cos 5t
L⁻¹{(3s + 1)/(s² − s − 6)} = L⁻¹{1/(s + 2) + 2/(s − 3)} (by partial fractions)
= L⁻¹{1/(s + 2)} + L⁻¹{2/(s − 3)}
(Note: L, L⁻¹ are linear operators. Prove it)
= e^{−2t} + 2e^{3t}
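The partial-fraction step and the inversion can both be checked with sympy — a sketch; sympy attaches a Heaviside(t) factor, which equals 1 for t > 0:

```python
import sympy as sp

t, s = sp.symbols('t s')

F = (3*s + 1) / (s**2 - s - 6)
print(sp.apart(F, s))                         # 2/(s - 3) + 1/(s + 2)
print(sp.inverse_laplace_transform(F, s, t))  # (exp(-2t) + 2exp(3t))*Heaviside(t)
```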
5. Similarly (s + a)³ gives A/(s + a) + B/(s + a)² + C/(s + a)³
6. A quadratic factor (s² + ps + q) gives (As + B)/(s² + ps + q)
7. A repeated quadratic factor (s² + ps + q)² gives (As + B)/(s² + ps + q) + (Cs + D)/(s² + ps + q)²
Examples
1. (s² − 15s + 41)/((s + 2)(s − 3)²) = 3/(s + 2) − 2/(s − 3) + 1/(s − 3)²
2. Find L⁻¹{(4s² − 5s + 6)/((s + 1)(s² + 4))}
but (4s² − 5s + 6)/((s + 1)(s² + 4)) ≡ A/(s + 1) + (Bs + C)/(s² + 4) = 3/(s + 1) + (s − 6)/(s² + 4) = 3/(s + 1) + s/(s² + 4) − 6/(s² + 4)
⇒ f(t) = L⁻¹{(4s² − 5s + 6)/((s + 1)(s² + 4))} = L⁻¹{3/(s + 1) + s/(s² + 4) − 6/(s² + 4)}
⇒ f(t) = 3e^{−t} + cos 2t − 3 sin 2t
2. If L{t²} = 2/s³ then L{t²e^{4t}} = 2/(s − 4)³
Integrating by parts,
L{f′(t)} = [e^{−st}f(t)]₀^∞ − ∫₀^∞ f(t){−se^{−st}}dt
i.e. L{f′(t)} = −f(0) + sL{f(t)}
Similarly L{f″(t)} = −f′(0) + sL{f′(t)} = −f′(0) + s[−f(0) + sL{f(t)}]
c) Now we rearrange this to give an expression for x̄:
i.e. x̄ = (s + 4)/(s(s − 2))
a) L{x} = x̄
L{ẋ} = sx̄ − x0
L{ẍ} = s²x̄ − sx0 − x1
The equation becomes (s²x̄ − sx0 − x1) − 3(sx̄ − x0) + 2x̄ = 2/(s − 3)
Solve the pair of simultaneous equations
ẏ − x = e^t
ẋ + y = e^{−t}
given that at t = 0, x = 0 and y = 0.
a) (sȳ − y0) − x̄ = 1/(s − 1)
(sx̄ − x0) + ȳ = 1/(s + 1)
b) Insert the initial conditions x0 = 0 and y0 = 0:
sȳ − x̄ = 1/(s − 1)
sx̄ + ȳ = 1/(s + 1)
c) Eliminating ȳ we have
sȳ − x̄ = 1/(s − 1)
sȳ + s²x̄ = s/(s + 1)
⇒ x̄ = (s² − 2s − 1)/((s − 1)(s + 1)(s² + 1)) = −(1/2)·1/(s − 1) − (1/2)·1/(s + 1) + s/(s² + 1) + 1/(s² + 1)
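As a cross-check, sympy's `dsolve` handles this pair directly — a sketch; recent sympy versions solve such linear systems with initial conditions. Inverting x̄ term by term gives the expected x(t) = −(1/2)eᵗ − (1/2)e⁻ᵗ + cos t + sin t:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

eqs = [sp.Eq(y(t).diff(t) - x(t), sp.exp(t)),
       sp.Eq(x(t).diff(t) + y(t), sp.exp(-t))]

sol = sp.dsolve(eqs, [x(t), y(t)], ics={x(0): 0, y(0): 0})
print(sol[0])   # x(t) = -exp(t)/2 - exp(-t)/2 + sin(t) + cos(t)
```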
If the Dirac delta function is at the origin, a = 0 and so it is denoted by δ(t).
Now consider ∫_p^q f(t)δ(t − a)dt. Since f(t)δ(t − a) is zero for all values of t within the interval [p, q] except at the point t = a, f(t) may be regarded as a constant f(a), so that
∫_p^q f(t)δ(t − a)dt = f(a)∫_p^q δ(t − a)dt = f(a)
Examples
Evaluate ∫₁³ (t² + 4)δ(t − 2)dt. Here a = 2, f(t) = t² + 4 ⇒ f(a) = f(2) = 2² + 4 = 8
Evaluate
1. ∫₀⁶ 5δ(t − 3)dt
2. ∫₂⁵ e^{−2t}δ(t − 4)dt
Laplace transform of δ(t − a)
Recall that ∫_p^q f(t)δ(t − a)dt = f(a), p < a < q
c) Rearranging the denominator by completing the square, this can be written as
x̄ = 2(s + 2)/((s + 2)² + 9) + 6/((s + 2)² + 9)
4.3.1 Fourier series of functions of period 2π
Any periodic function f(x) = f(x + 2πn) can be written as a Fourier series
f(x) = (1/2)a0 + Σ_{n=1}^∞ (an cos nx + bn sin nx)
= (1/2)a0 + a1 cos x + a2 cos 2x + ... + b1 sin x + b2 sin 2x + ...
(where a0, an, bn, n = 1, 2, 3, ... are the Fourier coefficients) or as
f(x) = (1/2)a0 + c1 sin(x + α1) + c2 sin(2x + α2) + ...
where ci = √(ai² + bi²) and αi = arctan(ai/bi).
c1 sin(x + α1) is the first harmonic or fundamental,
c2 sin(2x + α2) is the second harmonic,
cn sin(nx + αn) is the nth harmonic.
For the Fourier series to accurately represent f(x) it should be such that if we put x = x1 in the series the answer should be approximately equal to the value of f(x1), i.e. the value should converge to f(x1) as more and more terms of the series are evaluated. For this to happen f(x) must satisfy the following Dirichlet conditions:
a) f(x) must be defined and single-valued.
b) f(x) must be continuous or have a finite number of discontinuities within a periodic interval.
c) f(x) and f′(x) must be piecewise continuous in the periodic interval.
If these conditions are met, the series converges fairly quickly to f(x1) at x = x1, and only the first few terms are required to give a good approximation of the function f(x).
Fourier coefficients: The Fourier coefficients above are given by
a0 = (1/π)∫_{−π}^{π} f(x)dx
an = (1/π)∫_{−π}^{π} f(x) cos nx dx
bn = (1/π)∫_{−π}^{π} f(x) sin nx dx
Odd and even functions
a) Even functions: A function f(x) is said to be even if f(−x) = f(x). The graph of an even function is, therefore, symmetrical about the y-axis, e.g.
f(x) = x², f(x) = cos x
⇒ f(x) = 2 + (8/π){cos x − (1/3) cos 3x + (1/5) cos 5x − (1/7) cos 7x + ...}
Theorem 2: If f(x) is an odd function defined over the interval −π < x < π, then the Fourier series for f(x) contains sine terms only. Here a0 = an = 0.
Example
f(x) = −6, −π < x < 0
f(x) = 6, 0 < x < π
f(x) = f(x + 2π)
This is an odd function so f(x) contains only the sine terms,
i.e. f(x) = Σ_{n=1}^∞ bn sin nx
and bn = (1/π)∫_{−π}^{π} f(x) sin nx dx
f(x) sin nx is even since it is a product of two odd functions.
⇒ bn = (2/π)∫₀^π f(x) sin nx dx = (2/π)∫₀^π 6 sin nx dx = (12/π)[−cos nx/n]₀^π = (12/(πn))(1 − cos nπ)
f(x) = (24/π){sin x + (1/3) sin 3x + (1/5) sin 5x + ...}
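The partial sums of this series can be evaluated numerically to watch the convergence towards ±6 — a sketch, assuming numpy:

```python
import numpy as np

def square_wave_series(x, terms=200):
    # f(x) = (24/pi) * [sin x + sin(3x)/3 + sin(5x)/5 + ...]
    n = np.arange(1, 2 * terms, 2)   # odd harmonics only
    return (24 / np.pi) * np.sum(np.sin(np.outer(x, n)) / n, axis=1)

x = np.array([-1.0, 0.5, 2.0])
print(square_wave_series(x))   # approximately [-6, 6, 6]
```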
If f(x) is neither even nor odd we must obtain expressions for a0, an and bn in full.
Examples
Determine the Fourier series of the function shown.
f(x) = 2x/π, 0 < x < π
f(x) = 2, π < x < 2π
f(x) = f(x + 2π)
This is neither odd nor even,
⇒ f(x) = (1/2)a0 + Σ_{n=1}^∞ {an cos nx + bn sin nx}
a) a0 = (1/π)∫₀^{2π} f(x)dx = (1/π){∫₀^π (2x/π)dx + ∫_π^{2π} 2dx}
= (1/π){[x²/π]₀^π + [2x]_π^{2π}} = (1/π){π + 4π − 2π} = 3
⇒ a0 = 3
b) an = (1/π)∫₀^{2π} f(x) cos nx dx
= (1/π){∫₀^π (2x/π) cos nx dx + ∫_π^{2π} 2 cos nx dx}
= (2/π){(1/π)[x sin nx/n]₀^π − (1/(πn))∫₀^π sin nx dx + [sin nx/n]_π^{2π}}
= (2/π){(1/n) sin nπ + (1/(πn²))(cos nπ − 1) + (1/n)(sin 2nπ − sin nπ)}
= (2/π)(1/(πn²))(cos nπ − 1)
an = 0 (n even); an = −4/(π²n²) (n odd)
c) bn = (1/π){∫₀^π (2x/π) sin nx dx + ∫_π^{2π} 2 sin nx dx}
= (2/π){(1/π)[−x cos nx/n]₀^π + (1/(πn))∫₀^π cos nx dx + [−cos nx/n]_π^{2π}}
= (2/π){−(1/n) cos nπ + 0 − (1/n)(cos 2nπ − cos nπ)}
= −(2/(πn)) cos 2nπ
But cos 2nπ = 1 ⇒ bn = −2/(πn)
f(x) = 3/2 − (4/π²){cos x + (1/9) cos 3x + (1/25) cos 5x + ...} − (2/π){sin x + (1/2) sin 2x + (1/3) sin 3x + (1/4) sin 4x + ...}
Simplifying, an = 0 for n even and an = −8/(πn²) for n odd. In this case bn = 0 and so
f(x) = π − (8/π){cos x + (1/9) cos 3x + (1/25) cos 5x + ...}
Obtain a half-range sine series for f(x).
4.3.3 Functions with arbitrary period T
f(t) = (1/2)a0 + Σ_{n=1}^∞ {an cos(2πnt/T) + bn sin(2πnt/T)}
where, with ω = 2π/T,
a0 = (2/T)∫₀^T f(t)dt = (ω/π)∫₀^{2π/ω} f(t)dt
an = (2/T)∫₀^T f(t) cos nωt dt = (ω/π)∫₀^{2π/ω} f(t) cos nωt dt
bn = (2/T)∫₀^T f(t) sin nωt dt = (ω/π)∫₀^{2π/ω} f(t) sin nωt dt
Example
Determine the Fourier series for a periodic function defined by
f(t) = 2(1 + t), −1 < t < 0
f(t) = 0, 0 < t < 1
f(t) = f(t + 2)
Answer:
f(t) = 1/2 + (4/ω²){cos ωt + (1/9) cos 3ωt + (1/25) cos 5ωt + ...} − (2/ω){sin ωt + (1/2) sin 2ωt + (1/3) sin 3ωt + (1/4) sin 4ωt + ...}
A function f(x) defined on (−∞, ∞) can be represented by a Fourier integral as follows:
f(x) = ∫₀^∞ {A(k) cos kx + B(k) sin kx}dk (1)
where A(k) = (1/π)∫_{−∞}^∞ f(x) cos kx dx (2)
B(k) = (1/π)∫_{−∞}^∞ f(x) sin kx dx (3)
If x is a point of discontinuity, then f(x) must be replaced by (f(x + 0) + f(x − 0))/2, as in the case of Fourier series. This can, in other words, be expressed by the following theorem.
Theorem 1: If f(x) is piecewise continuous in every finite interval, has a right-hand derivative and a left-hand derivative at every point, and if ∫_{−∞}^∞ |f(x)|dx exists, then f(x) can be represented by a Fourier integral. At a point where f(x) is discontinuous the value of the Fourier integral equals the average of the left- and right-hand limits of f(x) at that point.
Examples
Find the Fourier integral representation of the function
f(x) = 1 if |x| < 1
f(x) = 0 if |x| > 1
Solution: From (2) and (3) we have
A(k) = (1/π)∫_{−∞}^∞ f(x) cos kx dx = (1/π)∫_{−1}^1 cos kx dx = [sin kx/(πk)]_{−1}^1 = 2 sin k/(πk)
B(k) = (1/π)∫_{−1}^1 sin kx dx = 0
and (1) gives the answer
f(x) = (2/π)∫₀^∞ (cos kx sin k)/k dk (4)
The average of the left- and right-hand limits of f(x) at x = 1 is equal to (1 + 0)/2, that is, 1/2.
Furthermore, from (4) and Theorem 1 we obtain
∫₀^∞ (cos kx sin k)/k dk = (π/2)f(x) =
π/2 if 0 ≤ x < 1
π/4 if x = 1
0 if x > 1
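The three cases can be confirmed symbolically. The sketch below (assuming sympy) uses the product-to-sum identity cos(kx) sin k = (1/2)[sin((1 + x)k) + sin((1 − x)k)] together with the standard integral ∫₀^∞ sin(ak)/k dk = (π/2) sgn a:

```python
import sympy as sp

k = sp.symbols('k', positive=True)

def fourier_integral(x):
    # cos(kx) sin(k)/k rewritten via the product-to-sum identity
    expr = (sp.sin((1 + x) * k) + sp.sin((1 - x) * k)) / (2 * k)
    return sp.integrate(expr, (k, 0, sp.oo))

print(fourier_integral(sp.Rational(1, 2)))   # pi/2  (inside |x| < 1)
print(fourier_integral(1))                   # pi/4  (at the discontinuity)
print(fourier_integral(2))                   # 0     (outside)
```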