Agenda
• Matrices and their types
• REF and RREF
• Rank, its computation and properties
• Determinant, its computation and properties
• Consistency and inconsistency of linear systems
• Nature of solutions of linear systems
Matrices
• A matrix is a rectangular array of numbers or functions
which we will enclose in brackets. For example,
$$
\begin{bmatrix} 0.3 & 1 & -5 \\ 0 & -0.2 & 16 \end{bmatrix}, \quad
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \quad
\begin{bmatrix} e^{-x} & 2x^{2} \\ e^{6x} & 4x \end{bmatrix}, \quad
\begin{bmatrix} a_{1} & a_{2} & a_{3} \end{bmatrix}, \quad
\begin{bmatrix} 4 \\ \tfrac{1}{2} \end{bmatrix}
\qquad (1)
$$
• The numbers (or functions) are called entries or, less
commonly, elements of the matrix.
• The first matrix in (1) has two rows, which are the
horizontal lines of entries.
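As a quick illustration (a minimal NumPy sketch, not part of the original slides), the first matrix in (1) can be entered and inspected as follows:

    import numpy as np

    # First matrix in (1): two rows (the horizontal lines of entries), three columns
    A = np.array([[0.3, 1.0, -5.0],
                  [0.0, -0.2, 16.0]])
    print(A.shape)   # (2, 3)
    print(A[0, 2])   # -5.0, the entry in row 1, column 3 (NumPy indices are 0-based)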
Matrix – Notations
• We shall denote matrices by capital boldface letters A, B,
C, … , or by writing the general entry in brackets; thus A =
[ajk], and so on.
• By an m × n matrix (read m by n matrix) we mean a matrix
with m rows and n columns—rows always come first! m
× n is called the size of the matrix. Thus an m × n matrix
is of the form
$$
\mathbf{A} = [a_{jk}] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\qquad (2)
$$
Vectors
• A vector is a matrix with only one row or column. Its
entries are called the components of the vector.
• We shall denote vectors by lowercase boldface letters a,
b, … or by their general component in brackets, a = [aj], and
so on. Our special vectors in (1) suggest that a (general)
row vector is of the form
$$
\mathbf{a} = [a_{1} \ \ a_{2} \ \ \cdots \ \ a_{n}].
$$
A column vector is of the form
$$
\mathbf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{m} \end{bmatrix}.
$$
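To make the row/column distinction concrete, here is a minimal NumPy sketch (the entries are chosen to echo the vectors in (1)):

    import numpy as np

    row = np.array([[1, 2, 3]])    # a 1 x 3 row vector, like [a1 a2 a3]
    col = np.array([[4], [0.5]])   # a 2 x 1 column vector, like the last matrix in (1)
    print(row.shape, col.shape)    # (1, 3) (2, 1)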
Equality of Matrices
• Two matrices A = [ajk] and B = [bjk] are equal, written A = B,
if and only if (1) they have the same size and (2) the
corresponding entries are equal, that is, a11 = b11, a12 = b12,
and so on.
• Matrices that are not equal are called different. Thus,
matrices of different sizes are always different.
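Both conditions of the definition can be checked in NumPy (a small sketch with illustrative matrices):

    import numpy as np

    A = np.array([[4, 0], [3, -1]])
    B = np.array([[4, 0], [3, -1]])
    C = np.array([[4, 0, 0], [3, -1, 0]])  # a different size

    print(np.array_equal(A, B))  # True: same size and equal corresponding entries
    print(np.array_equal(A, C))  # False: matrices of different sizes are always different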
Algebra of Matrices
1. Addition of Matrices
• The sum of two matrices A = [ajk] and B = [bjk] of the same size is written A + B
and has the entries ajk + bjk obtained by adding the corresponding entries of A
and B. Matrices of different sizes cannot be added.
2. Scalar Multiplication (Multiplication by a Number)
• The product of any m × n matrix A = [ajk] and any scalar c (number c) is written
cA and is the m × n matrix cA = [cajk] obtained by multiplying each entry of A by c.
$$
\begin{alignedat}{2}
&\text{(a)}\ \ \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} &\qquad\qquad &\text{(a)}\ \ c(\mathbf{A} + \mathbf{B}) = c\mathbf{A} + c\mathbf{B} \\
&\text{(b)}\ \ (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}) \ \text{(written } \mathbf{A} + \mathbf{B} + \mathbf{C}\text{)} &\qquad\qquad &\text{(b)}\ \ (c + k)\mathbf{A} = c\mathbf{A} + k\mathbf{A} \\
&\text{(c)}\ \ \mathbf{A} + \mathbf{0} = \mathbf{A} &\qquad\qquad &\text{(c)}\ \ c(k\mathbf{A}) = (ck)\mathbf{A} \ \text{(written } ck\mathbf{A}\text{)} \\
&\text{(d)}\ \ \mathbf{A} + (-\mathbf{A}) = \mathbf{0}. &\qquad\qquad &\text{(d)}\ \ 1\mathbf{A} = \mathbf{A}.
\end{alignedat}
$$
• Here 0 denotes the zero matrix (of size m × n), that is, the m × n matrix with all
entries zero.
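The rules above can be spot-checked numerically; a minimal sketch with arbitrary illustrative matrices:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    c, k = 2, 3

    print(np.array_equal(A + B, B + A))                 # (a) A + B = B + A
    print(np.array_equal(c*(A + B), c*A + c*B))         # (a) c(A + B) = cA + cB
    print(np.array_equal((c + k)*A, c*A + k*A))         # (b) (c + k)A = cA + kA
    print(np.array_equal(A + (-A), np.zeros((2, 2))))   # (d) A + (-A) = 0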
Matrix Multiplication
Multiplication of a Matrix by a Matrix
• The product C = AB (in this order) of an m × n matrix A = [ajk]
times an r × p matrix B = [bjk] is defined if and only if r = n
and is then the m × p matrix C = [cjk] with entries
$$
c_{jk} = \sum_{l=1}^{n} a_{jl} b_{lk} = a_{j1} b_{1k} + a_{j2} b_{2k} + \cdots + a_{jn} b_{nk},
\qquad j = 1, \dots, m; \quad k = 1, \dots, p.
\qquad (3)
$$
• The condition r = n means that the second factor, B, must
have as many rows as the first factor has columns, namely n.
A diagram of sizes that shows when matrix multiplication
is possible is as follows:
A B = C
[m × n] [n × p] = [m × p].
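Definition (3) translates directly into code. The sketch below (the function name matmul is our own, not a library API) implements the triple loop and compares it with NumPy's built-in product:

    import numpy as np

    def matmul(A, B):
        # C = AB as in (3): c_jk = a_j1*b_1k + a_j2*b_2k + ... + a_jn*b_nk
        m, n = A.shape
        r, p = B.shape
        assert r == n, "B must have as many rows as A has columns"
        C = np.zeros((m, p))
        for j in range(m):
            for k in range(p):
                C[j, k] = sum(A[j, l] * B[l, k] for l in range(n))
        return C

    A = np.array([[1., 2.], [3., 4.]])           # 2 x 2
    B = np.array([[5., 6., 7.], [8., 9., 10.]])  # 2 x 3
    print(np.allclose(matmul(A, B), A @ B))      # True; the result is 2 x 3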
Matrix Multiplication
EXAMPLE 1
$$
\mathbf{A}\mathbf{B} =
\begin{bmatrix} 3 & 5 & -1 \\ 4 & 0 & 2 \\ -6 & -3 & 2 \end{bmatrix}
\begin{bmatrix} 2 & -2 & 3 & 1 \\ 5 & 0 & 7 & 8 \\ 9 & -4 & 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 22 & -2 & 43 & 42 \\ 26 & -16 & \boxed{14} & 6 \\ -9 & 4 & -37 & -28 \end{bmatrix}
$$
• Here c11 = 3 · 2 + 5 · 5 + (−1) · 9 = 22, and so on. The entry
in the box is c23 = 4 · 3 + 0 · 7 + 2 · 1 = 14.
• The product BA is not defined.
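Example 1 can be verified directly in NumPy (the @ operator performs matrix multiplication):

    import numpy as np

    A = np.array([[ 3,  5, -1],
                  [ 4,  0,  2],
                  [-6, -3,  2]])
    B = np.array([[2, -2, 3, 1],
                  [5,  0, 7, 8],
                  [9, -4, 1, 1]])
    print(A @ B)
    # [[ 22  -2  43  42]
    #  [ 26 -16  14   6]
    #  [ -9   4 -37 -28]]
    # B @ A raises an error: B is 3 x 4 and A is 3 x 3, so the inner sizes (4 and 3) differ.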
Matrix Multiplication
Matrix Multiplication Is Not Commutative, AB ≠ BA in
General
• This is illustrated by Example 1, where one of the two
products is not even defined. But it also holds for
square matrices. For instance,
$$
\begin{bmatrix} 1 & 1 \\ 100 & 100 \end{bmatrix}
\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}
=
\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\qquad \text{but} \qquad
\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 100 & 100 \end{bmatrix}
=
\begin{bmatrix} 99 & 99 \\ -99 & -99 \end{bmatrix}.
$$
• It is interesting that this also shows that AB = 0 does
not necessarily imply BA = 0 or A = 0 or B = 0.
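The same matrices confirm this numerically (a short NumPy check):

    import numpy as np

    A = np.array([[  1,   1],
                  [100, 100]])
    B = np.array([[-1,  1],
                  [ 1, -1]])
    print(A @ B)   # [[0 0], [0 0]]        -- AB is the zero matrix
    print(B @ A)   # [[99 99], [-99 -99]]  -- yet BA != 0, A != 0, B != 0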
Transposition of Matrices & Vectors
• The transpose of an m × n matrix A = [ajk] is the n × m
matrix AT (read A transpose) that has the first row of A as its
first column, the second row of A as its second column, and
so on. Thus the transpose of A in (2) is AT = [akj], written out
$$
\mathbf{A}^{\mathsf T} = [a_{kj}] =
\begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{bmatrix}
$$
• As a special case, transposition converts row vectors to
column vectors and conversely.
Transposition of Matrices
• Rules for transposition are
$$
\begin{aligned}
\text{(a)}\quad & (\mathbf{A}^{\mathsf T})^{\mathsf T} = \mathbf{A} \\
\text{(b)}\quad & (\mathbf{A} + \mathbf{B})^{\mathsf T} = \mathbf{A}^{\mathsf T} + \mathbf{B}^{\mathsf T} \qquad (5) \\
\text{(c)}\quad & (c\mathbf{A})^{\mathsf T} = c\mathbf{A}^{\mathsf T} \\
\text{(d)}\quad & (\mathbf{A}\mathbf{B})^{\mathsf T} = \mathbf{B}^{\mathsf T} \mathbf{A}^{\mathsf T}.
\end{aligned}
$$
Note that in (d) the transposed matrices are in reversed order.
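Rules (a)-(d) are easy to spot-check; a minimal sketch with illustrative matrices:

    import numpy as np

    A = np.array([[3, 5, -1], [4, 0, 2]])      # 2 x 3
    B = np.array([[2, -2], [5, 0], [9, -4]])   # 3 x 2
    print(np.array_equal((A.T).T, A))              # (a)
    print(np.array_equal((2*A).T, 2*(A.T)))        # (c) with c = 2
    print(np.array_equal((A @ B).T, B.T @ A.T))    # (d): note the reversed order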
Special Matrices
• Symmetric: aij = aji (verified in the sketch after this list). Eg:
$$
\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 5 \end{bmatrix}
$$
• Skew Symmetric: aij = −aji. Eg:
$$
\begin{bmatrix} 0 & 1 & -2 \\ -1 & 0 & 3 \\ 2 & -3 & 0 \end{bmatrix}
$$
• Triangular: Upper Triangular aij = 0 for all i > j
Lower Triangular aij = 0 for all i < j
• Diagonal Matrix: aij = 0 for all i ≠ j. Eg:
$$
\begin{bmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 5 \end{bmatrix}
$$
• Sparse Matrix: Many zero and few non-zero entries.
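The defining conditions for symmetric and skew-symmetric matrices can be tested with the transpose, using the example matrices from the list above:

    import numpy as np

    S = np.array([[1, 1, 1], [1, 2, 0], [1, 0, 5]])     # symmetric example
    K = np.array([[0, 1, -2], [-1, 0, 3], [2, -3, 0]])  # skew-symmetric example
    print(np.array_equal(S, S.T))    # True: aij = aji
    print(np.array_equal(K, -K.T))   # True: aij = -aji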
Positive Definite Matrix
• A square matrix is called positive definite if it is symmetric and all
its eigenvalues λ are positive, that is λ > 0.
• If A is positive definite, then it is invertible and det A > 0.
• In fact, a symmetric matrix A is positive definite if and only if it can be factored as A = UᵀU, where U is an upper triangular matrix with positive elements on the main diagonal (the Cholesky factorization).
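Both the eigenvalue test and the UᵀU factorization can be demonstrated in NumPy (a minimal sketch; the matrix A is an illustrative choice):

    import numpy as np

    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])              # symmetric

    print(np.linalg.eigvalsh(A))            # approx [1.44, 5.56]: all eigenvalues positive
    print(np.linalg.det(A) > 0)             # True, consistent with det A > 0

    # np.linalg.cholesky returns lower-triangular L with A = L @ L.T;
    # setting U = L.T gives the slide's factorization A = U^T U.
    L = np.linalg.cholesky(A)
    U = L.T
    print(np.allclose(U.T @ U, A))          # True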
Positive Definite Matrix
Here are some of the key ways in which positive definite matrices are useful:
1. Optimization: In quadratic optimization problems, a positive definite matrix appears in the objective function, and positive definiteness guarantees the existence and uniqueness of the optimal solution.
2. Numerical Analysis: Positive definiteness ensures the efficiency and stability of algorithms such as Cholesky factorization and the conjugate gradient method.
3. Statistics: In multivariate analysis, positive definiteness ensures that a covariance matrix is well-behaved and invertible, allowing for the estimation of parameters in multivariate distributions.
4. Machine Learning: Positive definite kernel matrices, used for example in support vector machines, define inner products in high-dimensional feature spaces, enabling the application of linear methods in nonlinear settings.
5. Signal Processing: Positive definiteness is a requirement for the autocorrelation matrix, ensuring that it is invertible and well-conditioned.
6. Geometry: Positive definite matrices are used to define metrics, distances, and angles in spaces, making them fundamental in areas like differential geometry and optimization on manifolds.
7. Eigenvalue Problems: Positive definite matrices have real and positive eigenvalues, which makes them particularly useful in eigenvalue problems and simplifies the analysis and computation of eigenvalues and eigenvectors.