
NAU 07212: LINEAR ALGEBRA AND DISCRETE MATHEMATICS

Rosemary Jasson Nzobo (nzobor@gmail.com)


Dar-es-Salaam Maritime Institute (DMI)

April 2024
Contents

1 Set, Logic and Matrix
  1.1 Set
  1.2 Logic
  1.3 Matrix
1. Set, Logic and Matrix.
1.1 Set
Definition 1.1.1. A set is a collection of well defined and distinct objects.

Example 1.1.2. The following are examples of sets:

1. The set of positive integers.
2. The set of even numbers between 5 and 21.
3. The set of colors of the Tanzanian flag.

Example 1.1.3. A set can be represented by the listing method. For example:

1. {1, 2, 3, . . .} is the set of positive integers.
2. {6, 8, 10, 12, 14, 16, 18, 20} is the set of even numbers between 5 and 21.
3. {black, blue, yellow, green} is the set of colors of the Tanzanian flag.
A set can be finite or infinite depending on the number of elements.

Definition 1.1.4. A set is said to be finite if it has a finite number of elements; otherwise it is
infinite.

Example 1.1.5. The set {6, 8, 10, 12, 14, 16, 18, 20} is finite, while the set R of real numbers is infinite.

Definition 1.1.6. The cardinality of a set is the number of elements in the set. Two
sets are said to be equivalent if they have the same cardinality, while two sets are said to be equal
if they contain exactly the same elements.

Example 1.1.7. Let A = {a, e, i, o, u}, B = {e, a, i, u, o} and C = {a, b, c, d, e}. We note that
A is equal to B but not equal to C. All three sets are equivalent to each other, i.e. A = B ≠ C
while A ≡ B ≡ C.

Definition 1.1.8 (Set difference). The difference of two sets A and B, written A \ B or A − B,
is the set of elements of A that are not in B. Mathematically, A − B = A ∩ Bᶜ.

Definition 1.1.9 (Symmetric difference). The symmetric difference of two sets A and B, written
A ⊖ B or A∆B, is the set of elements which are in A or in B but not in both.
Mathematically, A∆B = (A ∩ Bᶜ) ∪ (B ∩ Aᶜ).

Example 1.1.10. If A = {1, 2, 3, 6, 7, 8} and B = {3, 5, 7, 9}, then A \ B = {1, 2, 6, 8} while
A ⊖ B = {1, 2, 5, 6, 8, 9}.
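As a quick computational illustration (not part of the original notes), Python's built-in set type supports both operations directly; the values below reuse Example 1.1.10.

```python
# Illustration of Example 1.1.10 using Python's built-in set type.
A = {1, 2, 3, 6, 7, 8}
B = {3, 5, 7, 9}

print(A - B)   # set difference A \ B  -> {1, 2, 6, 8}
print(A ^ B)   # symmetric difference  -> {1, 2, 5, 6, 8, 9}
```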


1.1.11 Algebra of Sets. Let A, B and C be sets. The following identities hold (a short
computational check of some of them is sketched after the list).

1. De Morgan's Laws.
   (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
   (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ

2. Commutative Laws.
   A ∪ B = B ∪ A
   A ∩ B = B ∩ A

3. Associative Laws.
   (A ∪ B) ∪ C = A ∪ (B ∪ C)
   (A ∩ B) ∩ C = A ∩ (B ∩ C)

4. Distributive Laws.
   (A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C)
   (A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C)

5. Absorption Laws.
   (A ∪ B) ∩ A = A
   (A ∩ B) ∪ A = A

6. Identity Laws.
   A ∪ ∅ = ∅ ∪ A = A
   A ∩ U = U ∩ A = A

7. Dominance Laws (Universal Bound Laws).
   A ∪ U = U
   A ∩ ∅ = ∅

8. Complement Laws.
   A ∪ Aᶜ = U
   A ∩ Aᶜ = ∅

9. Double Complement.
   (Aᶜ)ᶜ = A

10. Complements of U and ∅.
    Uᶜ = ∅
    ∅ᶜ = U

11. Idempotent Laws.
    A ∪ A = A
    A ∩ A = A
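The laws above can be spot-checked on small sample sets. The sketch below is an illustration only; the finite universal set U used for taking complements is an assumption of this sketch, not part of the notes.

```python
# Spot-check of De Morgan's and the absorption laws on small sample sets.
U = set(range(10))            # assumed universal set for this sketch
A = {1, 2, 3, 6, 7, 8}
B = {3, 5, 7, 9}

def complement(S):
    # complement relative to the assumed universal set U
    return U - S

# De Morgan's laws
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Absorption laws
assert (A | B) & A == A
assert (A & B) | A == A

print("All checked identities hold for these sample sets.")
```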

Exercise 1.1.12. Let A, B and C be non-empty sets. Use set identities to:
1. Prove that:
(a) (A − B) ∪ B = A ∪ B.
(b) (A − B) ∩ B = ∅.
2. Show that, if A ∩ B = ∅, then A − B = A.
3. Show that:
(a) (A ∩ B) − (A ∩ C) = A ∩ (B − C).
(b) (A \ B) ∪ (A \ C) = A \ (B ∩ C).
4. Prove that (A \ B) \ (B \ C) = A \ B.
5. Prove that (A − B) ∩ (B − A) = ∅.

Relation
Cartesian Product.

Definition 1.1.13. Let A and B be non-empty sets. The Cartesian product of A with B is the
set
A × B = {(a, b) : a ∈ A and b ∈ B}.
The element (a, b) is called an ordered pair with coordinates a and b.
If A1, A2, . . . , An are non-empty sets, then the product of these sets is:
A1 × A2 × . . . × An = {(a1, a2, . . . , an) : ai ∈ Ai, i = 1, 2, . . . , n}.
If A and B are finite sets with |A| = m and |B| = n, then |A × B| = m × n.

Example 1.1.14. If A = {1, 2} and B = {a, b, c}, then:


A × B = {(1, a), (1, b), (1, c), (2, a), (2, b), (2, c)} and |A × B| = 2 × 3 = 6
B × A = {(a, 1), (a, 2), (b, 1), (b, 2), (c, 1), (c, 2)} and |B × A| = 3 × 2 = 6.
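For illustration (this sketch is not part of the original notes), Example 1.1.14 can be reproduced with Python's itertools.product:

```python
from itertools import product

# Example 1.1.14 reproduced with itertools.product.
A = [1, 2]
B = ['a', 'b', 'c']

AxB = list(product(A, B))
BxA = list(product(B, A))

print(AxB)        # [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'a'), (2, 'b'), (2, 'c')]
print(len(AxB))   # 6 = |A| * |B|
print(BxA)        # [('a', 1), ('a', 2), ('b', 1), ('b', 2), ('c', 1), ('c', 2)]
```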

Relation on a Set.

Definition 1.1.15. Let A and B be non-empty sets. A relation R from A to B is a subset of
the Cartesian product A × B; if (a, b) ∈ R we write a R b.

Example 1.1.16. Let A = {a, b, c} and B = {1, 2, 3, 4};


1. R = {(a, 1), (a, 2), (c, 4)} is a relation from A to B.
2. R = {(1, a), (1, b), (3, a)} is a relation from B to A.
3. R = {(a, b), (c, a), (c, c)} is a relation from A to A (A relation on A).

Example 1.1.17. "Is equal to" (=) is a relation on R.

Definition 1.1.18. Let R be a relation from A to B. The inverse of R, denoted by R⁻¹, is the
relation R⁻¹ = {(b, a) : (a, b) ∈ R}.

Example 1.1.19. Let A = {a, b, c} and B = {1, 2, 3, 4}. Then R = {(a, 1), (a, 2), (c, 4)} is a relation
from A to B and R⁻¹ = {(1, a), (2, a), (4, c)} is the inverse of R.

Definition 1.1.20. Let A be a non empty set and R be a relation on A.


1. R is reflexive if for all a in A, a R a, i.e. (a, a) is in R.
2. R is symmetric if for all a, b in A, if (a, b) is in R then (b, a) is in R.
3. R is transitive if for all a, b, c in A, if (a, b) and (b, c) are in R then (a, c) is in R.
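The three properties above translate directly into code. The following is a minimal sketch (the function names is_reflexive, is_symmetric and is_transitive are illustrative, not from the notes) that tests a relation given as a set of ordered pairs on a set A:

```python
# Check whether a relation R on a set A is reflexive, symmetric and transitive.
def is_reflexive(A, R):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    # for every pair of pairs (a, b), (b, d) in R, (a, d) must also be in R
    return all((a, d) in R
               for (a, b) in R
               for (c, d) in R
               if b == c)

A = {'a', 'b', 'c'}
R = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'b'), ('b', 'a')}
print(is_reflexive(A, R), is_symmetric(R), is_transitive(R))  # True True True
```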

Exercise 1.1.21. Write a relation on {a, b, c} which is:


1. Reflexive but not symmetric.
2. Symmetric but not reflexive.
3. Transitive and symmetric but not reflexive.
4. Transitive and reflexive but not symmetric.
5. Reflexive, symmetric and transitive.

Definition 1.1.22. A relation R on a set A is called an equivalence relation if it is reflexive,
symmetric and transitive.

Example 1.1.23. "Is equal to" (=) is an equivalence relation on R.



1.2 Logic.
Logic is the science of reasoning. Reasoning in logic is called inference (drawing conclusions from
premises).

Example 1.2.1. The following are examples of inferences.


1. You see clouds and infer that it will rain.
2. You count 15 people in a group that originally had 17 people and infer that two people are
missing.
3. You see smoke and infer that there is fire.

Definition 1.2.2. An argument is a collection of statements, one of which is the conclusion and
the others are the premises.

Example 1.2.3. There is smoke; therefore there is fire.

Logic can be deductive or inductive. Consider the following arguments.


1. There is smoke; therefore, there is fire.
2. Originally, there were five people; currently, there are three people; therefore, two people
are missing.
The investigation of argument 1 is called inductive logic while that of argument 2 is called
deductive logic.

Definition 1.2.4. Inductive logic is the form of logic which draws a probable (likely) conclusion.
That is, the truth of the premises does not guarantee the truth of the conclusion.
Argument 1 is inductive, as the existence of smoke does not guarantee the existence of fire.

Definition 1.2.5. Deductive logic is the form of logic whereby the truth value of the premises
necessitates the truth of the conclusion.

Statement Vs Proposition.

Definition 1.2.6. A statement is a declarative sentence that is either true or false, but not both;
the truth or falsehood of the statement may be unknown.

Example 1.2.7. The following are examples of statements.


1. It is raining.
2. 5 + 9 = 12.
3. God exists.

Definition 1.2.8. A proposition is a declarative sentence that is either true or false but not
both, and whose truth or falsehood is known.
1. 5 + 9 = 12 is a proposition (its truth value is known: it is false).
2. The statement "God exists" is not a proposition, as the existence of God is unknown.

Remark 1.2.9. Every proposition is a statement but not every statement is a proposition.

Atomic Proposition Vs Complex Proposition.

Definition 1.2.10. A proposition is said to be atomic if its truth value does not depend on the
truth value of any other proposition. For example, the proposition 2 + 3 = 5 is atomic.

Definition 1.2.11. A proposition is said to be complex (compound) if its truth value depends on
the truth value of other propositions. For example, the proposition "If it is cold and it is cloudy,
then it is raining" is a compound proposition.

Logical Connectives and Truth Tables.


1. Conjunction: This is the logical "and"; it is denoted by ∧. If P and Q are arbitrary propositions,
then P ∧ Q is true iff both P and Q are true. The following is the truth table of P ∧ Q.
P Q P ∧ Q
T T T
T F F
F T F
F F F
2. Disjunction: This is the logical "or"; it is denoted by ∨. If P and Q are arbitrary propositions,
then P ∨ Q is true iff either P is true, or Q is true, or both P and Q are true. The following
is the truth table of P ∨ Q.
P Q P ∨ Q
T T T
T F T
F T T
F F F
3. Conditional (Implication): This is the logical "if · · · , then · · · "; it is denoted by −→. If P
and Q are arbitrary propositions, then P −→ Q is true iff either P is false or Q is true.
The following is the truth table of P −→ Q.
P Q P −→ Q
T T T
T F F
F T T
F F T

4. Bi-conditional (Double Implication): This is the logical "iff"; it is denoted by ↔. If P and
Q are arbitrary propositions, then P ↔ Q is true iff both P and Q are true, or both P and
Q are false. The following is the truth table of P ↔ Q.
P Q P ↔ Q
T T T
T F F
F T F
F F T
5. Negation: This is the logical "not"; it is denoted by ∼ or ¬. Negation is a unary connective, as
it takes only one argument. If P is an arbitrary proposition, then ¬P is true iff P
is false. The following is the truth table of ¬P.
P ¬P
T F
F T
To nest atomic formulae into complex formulae, parentheses are used to disambiguate. Let P, Q
and R be atomic propositions. The following are nested formulae.
1. P ∧ Q −→ R (ambiguous).
2. P ∧ (Q −→ R).
3. (P ∧ Q) −→ R.
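A truth table such as the ones above can be generated mechanically by evaluating a formula under every combination of truth values. The sketch below is illustrative only (it is not part of the original notes); truth_table and implies are hypothetical helper names.

```python
from itertools import product

# Print a truth table for a formula given as a Python function of Booleans.
def truth_table(formula, names):
    print(" ".join(names) + " | result")
    for values in product([True, False], repeat=len(names)):
        result = formula(*values)
        row = " ".join("T" if v else "F" for v in values)
        print(f"{row} | {'T' if result else 'F'}")

def implies(p, q):
    # P -> Q is true iff P is false or Q is true
    return (not p) or q

# Truth table of (P and Q) -> R, the first nested formula above.
truth_table(lambda p, q, r: implies(p and q, r), ["P", "Q", "R"])
```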

Exercise 1.2.12. Let P, Q, R and S be atomic propositions. Construct the truth table for each
of the following.
1. (P ∧ Q) −→ R.
2. (P ∧ (Q −→ R)) ∨ S.
3. (P ∨ Q) −→ (P ∧ Q).
4. P ↔ (R ∧ S).
5. ¬(¬P ∨ Q).
6. ¬((P ∧ (¬R ∨ Q)) ∧ R).

Tautology, Contradiction, Contingency and Logical Equivalence.


A valuation is a function which assigns a truth value to each primitive proposition. Each line in
the truth table of a formula corresponds to a valuation.

Definition 1.2.13. A statement is a tautology if it is always true. That is, a formula is a
tautology if it is true under every valuation. Example: P ∨ ¬P.

Definition 1.2.14. A statement is a contradiction (inconsistency) if it is always false. That is, a
formula is a contradiction if it is false under every valuation. Example: P ∧ ¬P.

Definition 1.2.15. A statement is a contingency if it is neither a tautology nor a contradiction; that is, it is true under at least one valuation and false under at least one valuation. Example: P ∧ Q.

Definition 1.2.16. Two statements are said to be logically equivalent if they always produce
the same truth value (they have the same truth value under every valuation). If P is equivalent
to Q, we write P ≡ Q.
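Since a valuation is just an assignment of truth values, the three notions above can be checked by exhaustive evaluation. The following sketch (not from the notes; the name classify is illustrative) labels a formula as a tautology, contradiction or contingency:

```python
from itertools import product

# Evaluate a formula under every valuation and classify it.
def classify(formula, n_vars):
    results = [formula(*vals) for vals in product([True, False], repeat=n_vars)]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingency"

print(classify(lambda p: p or not p, 1))      # tautology
print(classify(lambda p: p and not p, 1))     # contradiction
print(classify(lambda p, q: p and q, 2))      # contingency
```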

Exercise 1.2.17. Use truth tables to:

1. Show whether each of the following statements is a tautology, a contingency or a contradiction.
(a) (P −→ Q) ↔ ¬(P ∧ ¬Q).
(b) He is healthy or he is not healthy.
(c) ¬(P −→ Q) ∨ (P −→ Q).
(d) (¬P ∧ Q) −→ ¬(P ∨ Q).
(e) P −→ (P ∨ P ).
2. Show whether each of the following equivalences holds.
(a) ¬P ∧ ¬Q ≡ ¬Q ∧ ¬P .
(b) ¬(P ∧ Q) ≡ ¬P ∨ ¬Q.
(c) ¬(P ∨ Q) ≡ ¬P ∧ ¬Q.
(d) ¬(P ∧ Q) ≡ ¬P ∧ ¬Q.

Algebra of Propositions (Laws of Logical Equivalence).

Let P, Q and R be logical propositions. The following equivalences hold.
1. De Morgan's Laws.
   ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
   ¬(P ∧ Q) ≡ ¬P ∨ ¬Q

2. Commutative Laws.
   P ∨ Q ≡ Q ∨ P
   P ∧ Q ≡ Q ∧ P

3. Associative Laws.
   (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)
   (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)

4. Distributive Laws.
   (P ∨ Q) ∧ R ≡ (P ∧ R) ∨ (Q ∧ R)
   (P ∧ Q) ∨ R ≡ (P ∨ R) ∧ (Q ∨ R)

5. Absorption Laws.
   (P ∨ Q) ∧ P ≡ P
   (P ∧ Q) ∨ P ≡ P

6. Identity Laws.
   P ∨ F ≡ F ∨ P ≡ P
   P ∧ T ≡ T ∧ P ≡ P

7. Dominance Laws (Universal Bound Laws).
   P ∨ T ≡ T
   P ∧ F ≡ F

8. Negation Laws.
   P ∨ ¬P ≡ T
   P ∧ ¬P ≡ F

9. Double Negation Law.
   ¬(¬P) ≡ P

10. Negation of T and F.
    ¬T ≡ F
    ¬F ≡ T

11. Idempotent Laws.
    P ∨ P ≡ P
    P ∧ P ≡ P

Exercise 1.2.18. Repeat exercise 1.2.17 by using the laws of logical equivalences.

1.3 Matrix
1.3.1 Definition and notations. A representation of items (information) stored or arranged in
an ordered fashion is called a matrix. Mathematically, a matrix is a set of numbers, variables or
functions arranged in rows and columns.

Definition 1.3.2. A matrix is a rectangular array of numbers, letters or symbols.


Matrices are usually denoted by upper-case letters, and the entries inside the array are called
elements; each element is uniquely identified by its position, namely its ith row and jth column.

Example 1.3.3. Consider the matrix A below.

        [ a11  a12  a13 ]
    A = [ a21  a22  a23 ]
        [ a31  a32  a33 ]
        [ a41  a42  a43 ]

From Example 1.3.3 above, A is a matrix with 4 rows and 3 columns. Identifying a matrix in terms
of its number of rows and columns gives the order (size) of the matrix; matrix A above has order
4 × 3 ("4 by 3"). Generally, the order of a matrix A with m rows and n columns is m × n,
pronounced "m by n".

Definition 1.3.4. A matrix with only one column is called a column matrix (column vector),
while a matrix with only one row is called a row matrix (row vector).

Example 1.3.5. The matrix

        [ 1 ]
    A = [ 2 ]
        [ 3 ]
        [ 4 ]

is a column matrix, while the matrix

    B = [ 1  2  3  4 ]

is a row matrix.

Exercise 1.3.6. Revision: Matrix Operations.

Evaluate the following:

    1.  [ 3  7 ]   [ 2 ]
        [ 4  5 ] · [ 9 ]

    2.  [  0  -5 ]   [ 4  -1  3 ]
        [ -1  -4 ] · [ 1  -2  9 ]
        [  0  -1 ]

    3.  5 · [  5  0  4 ]
            [ -1  2  3 ]

    4.  [ 3  7   5 ]   [ 2   1   1 ]
        [ 4  5  -2 ] + [ 9  -11  0 ]
        [ 3  0  -7 ]   [ 2   3   1 ]

    5.  [ 3  7   5 ]   [ 2   1   1 ]
        [ 4  5  -2 ] − [ 9  -11  0 ]
        [ 3  0  -7 ]   [ 2   3   1 ]

Trace and transpose of a matrix.

Definition 1.3.7 (Trace of a matrix). If A is a square matrix, then the trace of A, denoted by
tr(A), is defined as the sum of all entries on the main diagonal of A.

Example 1.3.8. If

    A = [ 2   1   1 ]
        [ 9  -11  0 ]
        [ 2   3   1 ],

then tr(A) = 2 + (−11) + 1 = −8.

Definition 1.3.9 (Transpose of a matrix). The transpose Aᵀ of an n × m matrix A is the m × n
matrix obtained by interchanging the rows and columns of A.

Example 1.3.10. If

    A = [ 2   1   1 ]
        [ 9  -11  0 ]
        [ 2   3   1 ]
        [ 1  -1  -2 ],

then

    Aᵀ = [ 2   9   2   1 ]
         [ 1  -11  3  -1 ]
         [ 1   0   1  -2 ].
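For a numerical cross-check (this example is not part of the original notes and assumes NumPy is available), the trace and transpose of Examples 1.3.8 and 1.3.10 can be computed directly:

```python
import numpy as np

# Example 1.3.8: trace of a square matrix.
A = np.array([[2, 1, 1],
              [9, -11, 0],
              [2, 3, 1]])
print(np.trace(A))   # -8

# Example 1.3.10: transpose of a 4 x 3 matrix is 3 x 4.
B = np.array([[2, 1, 1],
              [9, -11, 0],
              [2, 3, 1],
              [1, -1, -2]])
print(B.T)
```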

Special types of matrices.

Definition 1.3.11 (Square Matrix). A square matrix is an n × n matrix, i.e. a matrix
with the same number of rows and columns.

Example 1.3.12. The matrix

    A = [ a11  a12  a13 ]
        [ a21  a22  a23 ]
        [ a31  a32  a33 ]

is a square matrix.

Definition 1.3.13 (Identity Matrix). An identity matrix is a square matrix in which all the elements of the
principal (main) diagonal are ones and all other elements are zeros.

Example 1.3.14. The matrix

    I4 = [ 1  0  0  0 ]
         [ 0  1  0  0 ]
         [ 0  0  1  0 ]
         [ 0  0  0  1 ]

is an identity matrix.

Definition 1.3.15 (Diagonal Matrix). A diagonal matrix is an n × n matrix whose non-diagonal
entries are all zero.

Example 1.3.16. The matrix

    D = [ d11   0    0    0  ]
        [  0   d22   0    0  ]
        [  0    0   d33   0  ]
        [  0    0    0   d44 ]

is a diagonal matrix.

Example 1.3.17. Any identity matrix is a diagonal matrix.

Definition 1.3.18 (Triangular Matrix). It is a square matrix which is either upper triangular or
lower triangular. An upper triangular matrix is an n × n matrix in which all entries below the
main diagonal are zero. A lower triangular matrix is a square matrix in which all entries above
the main diagonal are zero.

Example 1.3.19. The matrix

    U = [ u11  u12  u13  u14 ]
        [  0   u22  u23  u24 ]
        [  0    0   u33  u34 ]
        [  0    0    0   u44 ]

is an upper triangular matrix, while the matrix

    L = [ l11   0    0    0  ]
        [ l21  l22   0    0  ]
        [ l31  l32  l33   0  ]
        [ l41  l42  l43  l44 ]

is a lower triangular matrix.

Definition 1.3.20 (Symmetric Matrix). A matrix A is said to be symmetric if aij = aji for all i
and j. That is, Aᵀ = A.

Example 1.3.21. The matrix

    S = [ 1   2   3  4 ]
        [ 2   5  -1  0 ]
        [ 3  -1  -2  2 ]
        [ 4   0   2  2 ]

is a symmetric matrix.

Definition 1.3.22 (Skew Matrix). A matrix A is said to be skew (skew-symmetric) if Aᵀ = −A.

Example 1.3.23. The matrix

    S = [  0   2   3  4 ]
        [ -2   0  -1  5 ]
        [ -3   1   0  2 ]
        [ -4  -5  -2  0 ]

is a skew matrix.

Definition 1.3.24 (Orthogonal Matrix). An n × n real-valued matrix A is said to be an orthogonal
matrix if AᵀA = I, i.e. Aᵀ = A⁻¹.

Example 1.3.25. The matrix

            [ 3  -6   2 ]
    S = 1/7 [ 2   3   6 ]
            [ 6   2  -3 ]

is an orthogonal matrix.
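Orthogonality can be verified numerically by checking SᵀS = I. A small sketch (not part of the original notes), assuming NumPy is available:

```python
import numpy as np

# Checking Example 1.3.25: S is orthogonal iff S^T S = I.
S = np.array([[3, -6, 2],
              [2, 3, 6],
              [6, 2, -3]]) / 7

print(np.allclose(S.T @ S, np.eye(3)))      # True
print(np.allclose(S.T, np.linalg.inv(S)))   # True, so S^T = S^{-1}
```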

Determinant and Inverse of a matrix.

The determinant of a matrix is a scalar (number) obtained from the elements of a square
matrix by specified operations. It is denoted by det(A) or |A| for a square matrix A.

Definition 1.3.26 (Minor and Cofactor). The minor of the entry aij of an n × n matrix A, denoted
by Mij, is the determinant of order (n − 1) × (n − 1) obtained after deleting the ith
row and jth column of A. The scalar (number) Cij = (−1)^(i+j) Mij is called the cofactor
of the element aij of the matrix A.

Example 1.3.27. Consider the matrix

    A = [  1   0  -2 ]
        [ -3   1   4 ]
        [  2  -3   4 ].

We have:

    M11 = |  1   4 | = 4 − (−12) = 16   and   C11 = (−1)^(1+1) (16) = 16;
          | -3   4 |

    M21 = |  0  -2 | = 0 − 6 = −6       and   C21 = (−1)^(2+1) (−6) = (−1)(−6) = 6.
          | -3   4 |

Definition 1.3.28. Generally, we define the determinant det(A) of an n × n matrix A to be

    det(A) = a11 C11 + a12 C12 + a13 C13 + · · · + a1n C1n.

Example 1.3.29. The determinant of

    A = [ 3  2   1 ]
        [ 0  1  -2 ]
        [ 1  3   4 ]

is

    det(A) = 3 | 1  -2 | − 2 | 0  -2 | + 1 | 0  1 | = 3(4 + 6) − 2(2) + (−1) = 25.
               | 3   4 |     | 1   4 |     | 1  3 |
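The cofactor expansion above can be written as a short recursive routine. The sketch below (not part of the original notes; the function name det is illustrative and practical for small matrices only) expands along the first row and agrees with np.linalg.det on Example 1.3.29:

```python
import numpy as np

# Recursive cofactor expansion along the first row.
def det(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det(minor)
    return total

A = [[3, 2, 1],
     [0, 1, -2],
     [1, 3, 4]]
print(det(A))              # 25.0
print(np.linalg.det(A))    # approximately 25.0
```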

Definition 1.3.30 (Cofactor matrix and adjoint matrix). If A is a square matrix and Cij is the
cofactor of aij, then the matrix

    C = [ C11  C12  · · ·  C1n ]
        [ C21  C22  · · ·  C2n ]
        [  ⋮    ⋮    ⋱      ⋮  ]
        [ Cn1  Cn2  · · ·  Cnn ]

is called the cofactor matrix. The transpose of the cofactor matrix is called the adjoint matrix.
A matrix A is said to be invertible (to have an inverse) if det(A) ≠ 0.

Definition 1.3.31. If the matrix A is invertible, then the inverse of A, denoted by A⁻¹, is given
by:

    A⁻¹ = Adj(A) / det(A).

Example 1.3.32. Given the matrix

    A = [ 3  2   1 ]
        [ 0  1  -2 ]
        [ 1  3   4 ],

the cofactors are:

    C11 = | 1  -2 | = 10,   C12 = −| 0  -2 | = −2,   C13 = | 0  1 | = −1,
          | 3   4 |                | 1   4 |               | 1  3 |

    C21 = −| 2  1 | = −5,   C22 = | 3  1 | = 11,     C23 = −| 3  2 | = −7,
           | 3  4 |               | 1  4 |                  | 1  3 |

    C31 = | 2   1 | = −5,   C32 = −| 3   1 | = 6,    C33 = | 3  2 | = 3.
          | 1  -2 |                | 0  -2 |               | 0  1 |

It follows that:

    Cof(A) = [ 10  -2  -1 ]                [ 10  -5  -5 ]
             [ -5  11  -7 ]   and Adj(A) = [ -2  11   6 ].
             [ -5   6   3 ]                [ -1  -7   3 ]

Also, the determinant of A is:

    det(A) = 3 | 1  -2 | − 2 | 0  -2 | + 1 | 0  1 | = 3(4 + 6) − 2(2) + (−1) = 25.
               | 3   4 |     | 1   4 |     | 1  3 |

From A⁻¹ = Adj(A)/det(A), we get

    A⁻¹ = (1/25) [ 10  -5  -5 ]   [  2/5    -1/5    -1/5  ]
                 [ -2  11   6 ] = [ -2/25   11/25    6/25 ].
                 [ -1  -7   3 ]   [ -1/25   -7/25    3/25 ]
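The adjoint-based formula can be cross-checked numerically. A minimal sketch (not from the notes), assuming NumPy is available, verifies that Adj(A)/det(A) matches np.linalg.inv(A) for the matrix of Example 1.3.32:

```python
import numpy as np

# Verify Example 1.3.32 numerically.
A = np.array([[3, 2, 1],
              [0, 1, -2],
              [1, 3, 4]])

adj = np.array([[10, -5, -5],
                [-2, 11, 6],
                [-1, -7, 3]])

A_inv = adj / np.linalg.det(A)                # det(A) = 25
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(3)))      # True
```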

Exercise 1.3.33. For each of the following matrices, evaluate its determinant and compute the
inverse (if it exists).

    1. [ 1   2  3 ]   2. [  1  0  3 ]   3. [ 3   2  -1 ]   4. [ 5  8   9 ]   5. [ 1  -7   3 ]
       [ 0  -1  1 ]      [ -7  0  1 ]      [ 1   6   3 ]      [ 0  1  -2 ]      [ 0   0   0 ]
       [ 3   2  2 ]      [  3  0  2 ]      [ 2  -4   0 ]      [ 1  3   4 ]      [ 3   5  -2 ]

Elementary row operations (ERO).

There are three basic row operations, corresponding to operations on the augmented matrix of a
linear system. These are:
1. Scaling: Multiply all entries in a row by a nonzero constant.
2. Interchange: Interchange two rows.
3. Replacement: Replace one row by the sum of itself and a multiple of another row (add to
one row a multiple of another row).

Definition 1.3.34. Two matrices are said to be row equivalent if there is a sequence of ERO
that transforms one matrix into the other.

Remark 1.3.35. Row operations are reversible. If two rows are interchanged, they can be returned
to their original positions by another interchange. If a row is scaled by a constant c, then
multiplying by 1/c produces the original row. Also, suppose that c times row 1 is added to row 2
to produce a new row 2; to reverse this operation, add −c times row 1 to the new row 2 and you
will obtain the original row 2.

Definition 1.3.36. A row (column) in a matrix is called a nonzero row (column) if it contains at
least one nonzero entry. The leading entry of a row is the leftmost nonzero entry in a nonzero row.

Row Echelon Form (Gauss method).

Definition 1.3.37. A rectangular matrix is in echelon form / row echelon form (EF or REF) if it
has the following three properties:
1. All nonzero rows are above any rows of all zeros.
2. The leading entry of each row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.

Definition 1.3.38 (Rank of a matrix). Suppose an m × n matrix A is reduced by row operations


to an echelon form E. The rank of A denoted by r(A) is equal to the number of non-zero rows
in E.
Reduced Row Echelon Form (Gauss Jordan method).

Definition 1.3.39. The matrix A is in RREF if:


1. It is in REF.
2. The leading entry in each nonzero row is 1, which is the only nonzero entry in its column.
 
Example 1.3.40. Given the matrix

    A = [ 1  2  1  1 ]
        [ 2  4  2  2 ]
        [ 3  6  3  4 ],

applying R2 = 2R1 − R2 and R3 = 3R1 − R3 gives

    [ 1  2  1   1 ]
    [ 0  0  0   0 ]
    [ 0  0  0  -1 ],

and then interchanging R2 ↔ R3 gives

    [ 1  2  1   1 ]
    [ 0  0  0  -1 ]
    [ 0  0  0   0 ],

which is in REF, and rank(A) = 2.

Also, applying R1 = R1 + R2 and then R2 = −1·R2 gives

    [ 1  2  1  0 ]
    [ 0  0  0  1 ]
    [ 0  0  0  0 ],

which is in RREF.
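Example 1.3.40 can be cross-checked with a computer algebra system. The sketch below is an illustration only and assumes SymPy is available; Matrix.rref() returns the reduced row echelon form together with the pivot columns.

```python
from sympy import Matrix

# Example 1.3.40 checked with SymPy.
A = Matrix([[1, 2, 1, 1],
            [2, 4, 2, 2],
            [3, 6, 3, 4]])

print(A.rank())           # 2
rref_A, pivots = A.rref()
print(rref_A)             # Matrix([[1, 2, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(pivots)             # (0, 3)
```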

Computation of the inverse matrix by elementary row operations.

If A is an invertible n × n matrix, then [A|In] is row equivalent to [In|A⁻¹]. To find the inverse of A:
1. Form the augmented matrix [A|In],
2. Reduce [A|In] by Gauss-Jordan elimination to obtain [In|A⁻¹].
 
Example 1.3.41. Use Gauss-Jordan elimination to evaluate the inverse of the matrix

    A = [ 3  2   1 ]
        [ 0  1  -2 ]
        [ 1  3   4 ].

Solution:

    [A|I] = [ 3  2   1 | 1  0  0 ]
            [ 0  1  -2 | 0  1  0 ]
            [ 1  3   4 | 0  0  1 ]

R3 = 3R3 − R1:

            [ 3  2   1 |  1  0  0 ]
            [ 0  1  -2 |  0  1  0 ]
            [ 0  7  11 | -1  0  3 ]

R1 = R1 − 2R2 and R3 = R3 − 7R2:

            [ 3  0   5 |  1  -2  0 ]
            [ 0  1  -2 |  0   1  0 ]
            [ 0  0  25 | -1  -7  3 ]

R1 = 5R1 − R3 and R2 = 25R2 + 2R3:

            [ 15   0   0 |  6  -3  -3 ]
            [  0  25   0 | -2  11   6 ]
            [  0   0  25 | -1  -7   3 ]

R1 = (1/15)R1, R2 = (1/25)R2, R3 = (1/25)R3:

            [ 1  0  0 |  2/5    -1/5    -1/5  ]
            [ 0  1  0 | -2/25   11/25    6/25 ]
            [ 0  0  1 | -1/25   -7/25    3/25 ]

Hence

    A⁻¹ = (1/25) [ 10  -5  -5 ]
                 [ -2  11   6 ].
                 [ -1  -7   3 ]
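The [A|I] procedure can be expressed as a short routine. The following is a minimal Gauss-Jordan sketch (not part of the original notes; no pivoting safeguards, and the function name is illustrative), applied to the matrix of Example 1.3.41:

```python
import numpy as np

# Reduce [A | I] to [I | A^{-1}] by Gauss-Jordan elimination.
# Assumes A is invertible and that no zero pivot is encountered.
def gauss_jordan_inverse(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the augmented matrix [A | I]
    for i in range(n):
        M[i] = M[i] / M[i, i]                # scale the pivot row to make the pivot 1
        for k in range(n):
            if k != i:
                M[k] = M[k] - M[k, i] * M[i]  # eliminate column i in the other rows
    return M[:, n:]

A = [[3, 2, 1],
     [0, 1, -2],
     [1, 3, 4]]
print(gauss_jordan_inverse(A))
print(np.linalg.inv(A))   # agrees with the hand computation above
```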

Exercise 1.3.42. Reduce each of the following matrices into REF, state its rank, and reduce it into RREF.

    1. [ 1   2  3 ]   2. [  1  0  3 ]   3. [ 3   2  -1 ]   4. [ 5  8   9 ]   5. [ 1  -7   3 ]
       [ 0  -1  1 ]      [ -7  0  1 ]      [ 1   6   3 ]      [ 0  1  -2 ]      [ 0   0   0 ]
       [ 3   2  2 ]      [  3  0  2 ]      [ 2  -4   0 ]      [ 1  3   4 ]      [ 3   5  -2 ]

Find the inverse (if it exists) of each of the above matrices by elementary row operations.
