Chapter 1 – Vector, Matrix and Tensor Review
1.1. Vectors
$$\{v\} = \begin{Bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{n-1} \\ v_n \end{Bmatrix} \qquad \{v\}^T = \begin{Bmatrix} v_1 & v_2 & v_3 & \cdots & v_{n-1} & v_n \end{Bmatrix}$$
{v} is an n-vector. The vector has n components.
Notations
Matrix notation: $\{v\}$
Vector notation: $\vec{v}$
Matrix component notation: $\begin{Bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{n-1} \\ v_n \end{Bmatrix}$
Indicial notation: $v_i$ where $i = 1, 2, 3, \ldots, n$
1.2. Tensors
The physical laws of continuum mechanics are expressed by tensor equations.
o Tensor equations are valid in any coordinate system (i.e., they are invariant under coordinate transformation).
In 3-D physical space a tensor of order (or rank) n has $3^n$ components.
A tensor of order 0 has 1 component and is also called a scalar.
A tensor of order 1 has 3 components and is also called a vector.
A tensor of order 2 has 9 components.
A tensor of order 3 has 27 components.
A tensor of order 4 has 81 components.
Tensors of order higher than five are very rarely used in continuum mechanics.
2nd order tensor:
Matrix notation: $[T]$
Tensor notation: $\boldsymbol{T}$
Matrix component notation: $\begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}$
Indicial notation: $T_{ij}$ where $i, j = 1, 2, 3$
3rd order tensor:
Indicial notation: $T_{ijk}$ where $i, j, k = 1, 2, 3$
1.3. Matrices
A matrix is a rectangular array of numbers.
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,m-1} & a_{1m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2,m-1} & a_{2m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3,m-1} & a_{3m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{n-1,1} & a_{n-1,2} & a_{n-1,3} & \cdots & a_{n-1,m-1} & a_{n-1,m} \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{n,m-1} & a_{nm} \end{bmatrix}$$
$[A]$ is an $n \times m$ matrix, where:
n = number of rows
m = number of columns
The element of the matrix at row $i$ and column $j$ is $a_{ij}$,
where: $i = 1, 2, 3, \ldots, n$
$j = 1, 2, 3, \ldots, m$
$i, j$ are called indices.
Square matrix: $n = m$
Column matrix = column vector = vector: $m = 1$, $a_{i1} = a_i$
Row matrix = row vector = transpose of a vector: $n = 1$, $a_{1j} = a_j$
A scalar can be thought of as a matrix with one row and one column, i.e., $n = m = 1$.
Notations:
A matrix can be written in matrix notation as: $[A]$
A matrix can be written in component notation as:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,m-1} & a_{1m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2,m-1} & a_{2m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3,m-1} & a_{3m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{n-1,1} & a_{n-1,2} & a_{n-1,3} & \cdots & a_{n-1,m-1} & a_{n-1,m} \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{n,m-1} & a_{nm} \end{bmatrix}$$
A vector can be written in indicial notation as: $v_i$ where $i = 1, 2, 3, \ldots, n$
A 3-D tensor of order 2 is represented using a $3 \times 3$ matrix.
Higher-order tensors are represented using higher-dimensional arrays.
1.4. Norm of a vector
$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$$
p = 1: 1-norm, Cartesian norm, Manhattan norm (2nd most common)
$$\|x\|_1 = \sum_{i=1}^{n} |x_i|$$
p = 2: 2-norm, Euclidean norm, radius norm, distance norm, magnitude, length (1st most common and most useful norm)
$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2} = \sqrt{\vec{x} \cdot \vec{x}} = \sqrt{x_i x_i}$$
p = $\infty$: $\infty$-norm, max-norm (3rd most common)
$$\|x\|_\infty = \max_{1 \le i \le n} |x_i|$$
[Figure: unit circles using various vector norms.]
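As a numerical illustration, the three common norms can be computed directly or through numpy.linalg.norm; a minimal NumPy sketch (the vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

norm_1 = np.sum(np.abs(x))       # 1-norm (Manhattan): 8.0
norm_2 = np.sqrt(np.sum(x**2))   # 2-norm (Euclidean): ~5.10
norm_inf = np.max(np.abs(x))     # infinity-norm (max): 4.0

# np.linalg.norm computes the same p-norms through its `ord` argument.
assert np.isclose(norm_1, np.linalg.norm(x, 1))
assert np.isclose(norm_2, np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
```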
1.5. Vector addition
$$\{w\} = \{v\} + \{u\} \qquad w_i = v_i + u_i$$
1.6. Multiplication of a vector by a scalar
$$\{w\} = \alpha \{v\} \qquad w_i = \alpha v_i$$
1.7. Unit vector
$$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|} \qquad \hat{v}_i = \frac{v_i}{\sqrt{v_j v_j}}$$
1.8. Dot product (inner vector product)
$$\{v\} \cdot \{u\} = \sum_{i=1}^{n} v_i u_i = v_1 u_1 + v_2 u_2 + \cdots + v_n u_n$$
The result of a dot product is a scalar.
Using the Einstein summation convention, the dot product can be written as:
$\{v\} \cdot \{u\} = v_i u_i$, where a repeated index implies summation over the total range of the index.
Notes:
If $\{v\} \cdot \{u\} = 0$ then the vectors are orthogonal or normal to each other.
In 2-D and 3-D: $\vec{v} \cdot \vec{u} = \|\vec{v}\|\,\|\vec{u}\| \cos(\theta)$, where $\theta$ is the angle between the two vectors.
The inner vector product reduces the total tensor order of the product by two.
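A small NumPy sketch of these properties (the vectors are arbitrary and happen to be orthogonal):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
u = np.array([2.0, 0.0, -1.0])

dot = np.dot(v, u)                # v_i u_i = 2 + 0 - 2 = 0
same = np.einsum('i,i->', v, u)   # summation-convention form v_i u_i
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(u))
print(dot, same, cos_theta)       # 0.0 0.0 0.0 -> the vectors are orthogonal
```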
1.9. Outer vector product
$$A_{ij} = v_i u_j$$
The outer vector product keeps the total tensor order of the product the same.
1.10. Matrix addition
Matrix notation: $[C] = [A] + [B]$
$$\begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1m} \\ c_{21} & c_{22} & \cdots & c_{2m} \\ \vdots & \vdots & & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nm} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1m}+b_{1m} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2m}+b_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1}+b_{n1} & a_{n2}+b_{n2} & \cdots & a_{nm}+b_{nm} \end{bmatrix}$$
Matrix $[A]$ and matrix $[B]$ must have the same number of rows and columns, i.e. both matrices must be of size $n \times m$.
Indicial notation: $c_{ij} = a_{ij} + b_{ij}$
Subtraction: $[C] = [A] - [B]$, $c_{ij} = a_{ij} - b_{ij}$
Properties of matrix addition: $[C] = [A] + [B] = [B] + [A]$, $c_{ij} = a_{ij} + b_{ij} = b_{ij} + a_{ij}$
1.11. Multiplication by a scalar
$$[C] = \alpha [A] \qquad c_{ij} = \alpha a_{ij}$$
1.12. Transpose of a matrix
$$[C] = [A]^T \qquad c_{ij} = a_{ji}$$
$$\begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$$
If $[A]$ is an $n \times m$ matrix, then $[C]$ is an $m \times n$ matrix.
1.13. Matrix multiplication
Matrix notation: $[C] = [A][B]$
$[A]$ is an $n \times m$ matrix
$[B]$ must be an $m \times p$ matrix
$[C]$ is an $n \times p$ matrix
The number of columns of the first matrix must be equal to the number of rows of the second matrix.
Indicial notation:
$$[C] = [A][B] \quad\Rightarrow\quad c_{ik} = \sum_{j=1}^{m} a_{ij} b_{jk} = a_{i1}b_{1k} + a_{i2}b_{2k} + a_{i3}b_{3k} + \cdots + a_{i,m-1}b_{m-1,k} + a_{im}b_{mk}$$
$$[C] = [B][A] \quad\Rightarrow\quad c_{ik} = \sum_{j=1}^{m} b_{ij} a_{jk} = b_{i1}a_{1k} + b_{i2}a_{2k} + b_{i3}a_{3k} + \cdots + b_{i,m-1}a_{m-1,k} + b_{im}a_{mk}$$
In general $[A][B] \neq [B][A]$.
Example:
$$[A] = \begin{bmatrix} 2 & 5 \\ 3 & 2 \\ 0 & 1 \end{bmatrix} \qquad [B] = \begin{bmatrix} 1 & 7 & 0 & 2 \\ 3 & 2 & 1 & 1 \end{bmatrix}$$
$$[C] = [A][B] = \begin{bmatrix} 17 & 24 & 5 & 9 \\ 9 & 25 & 2 & 8 \\ 3 & 2 & 1 & 1 \end{bmatrix}$$
Note that the product [ B][ A] does not exist because the number of columns of [B] is not equal to the number of rows of [A].
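The example can be verified with NumPy (a sketch; `@` is NumPy's matrix-multiplication operator):

```python
import numpy as np

A = np.array([[2, 5],
              [3, 2],
              [0, 1]])           # 3 x 2
B = np.array([[1, 7, 0, 2],
              [3, 2, 1, 1]])     # 2 x 4

C = A @ B                        # 3 x 4 product
print(C)
# B @ A raises ValueError: [B] has 4 columns but [A] has 3 rows.
```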
Einstein summation convention:
$c_{ik} = a_{ij} b_{jk}$; summation over the index $j$ is implied because it is a repeated index in a multiplication.
- Repeated indices in a multiplication are called "dummy indices" because they are summed over and do not appear in the result.
- Non-repeated indices are called “free indices.”
1.14. Tensor multiplication
$a_i b_j = T_{ij}$ (outer product)
$a_i b_{jk} = T_{ijk}$ (outer product)
$a_i b_{ik} = T_k$ (inner product; reduces the resulting tensor order by 2)
$a_{ij} b_{km} = T_{ijkm}$ (outer product)
$a_{ij} b_{jk} = T_{ik}$ (inner product)
$a_{ij} b_{ij}$ (inner product; contracted to a scalar)
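np.einsum expresses each of these index operations literally; a sketch with small arbitrary arrays, where each subscript string matches the indicial form above:

```python
import numpy as np

a = np.arange(3.0)                 # a_i
b = np.arange(9.0).reshape(3, 3)   # used as b_jk, b_ik, b_km, b_jk below
A = np.arange(9.0).reshape(3, 3)   # a_ij

T_ij   = np.einsum('i,j->ij', a, a)       # outer product a_i b_j
T_ijk  = np.einsum('i,jk->ijk', a, b)     # outer product a_i b_jk
T_k    = np.einsum('i,ik->k', a, b)       # inner product a_i b_ik
T_ijkm = np.einsum('ij,km->ijkm', A, b)   # outer product a_ij b_km
T_ik   = np.einsum('ij,jk->ik', A, b)     # inner product a_ij b_jk
s      = np.einsum('ij,ij->', A, b)       # full contraction a_ij b_ij (scalar)
```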
1.15. Square matrix
$[A]$ is a square matrix if it has the same number of rows as columns, i.e. $n = m$.
1.16. Zero matrix
$[A]$ is a zero matrix if: $a_{ij} = 0$ for all values of $i$ and $j$.
1.17. Symmetric matrix
Given a square matrix $[A]$: $[A]$ is a symmetric matrix if $a_{ij} = a_{ji}$, i.e. $[A] = [A]^T$.
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\ a_{12} & a_{22} & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\ a_{13} & a_{23} & a_{33} & \cdots & a_{3,n-1} & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{1,n-1} & a_{2,n-1} & a_{3,n-1} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{1n} & a_{2n} & a_{3n} & \cdots & a_{n-1,n} & a_{nn} \end{bmatrix}$$
Note that by definition [A] is a square matrix.
1.18. Anti-symmetric matrix
Given a square matrix $[A]$: the matrix is called an anti-symmetric matrix if $a_{ij} = -a_{ji}$, i.e. $[A] = -[A]^T$.
$$[A] = \begin{bmatrix} 0 & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\ -a_{12} & 0 & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\ -a_{13} & -a_{23} & 0 & \cdots & a_{3,n-1} & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ -a_{1,n-1} & -a_{2,n-1} & -a_{3,n-1} & \cdots & 0 & a_{n-1,n} \\ -a_{1n} & -a_{2n} & -a_{3n} & \cdots & -a_{n-1,n} & 0 \end{bmatrix}$$
All the diagonal terms of an anti-symmetric matrix are zero.
An anti-symmetric tensor is called a “skew-symmetric tensor.”
1.19. Matrix/tensor decomposition into symmetric and anti-symmetric parts
If [A] is a square matrix or a second order tensor then:
The matrix $(a_{ij} + a_{ji})$ is a symmetric matrix.
The matrix $(a_{ij} - a_{ji})$ is an anti-symmetric matrix.
Any square matrix or tensor can be expressed as the sum of a symmetric and an anti-symmetric matrix:
$$a_{ij} = 0.5(a_{ij} + a_{ji}) + 0.5(a_{ij} - a_{ji})$$
$0.5(a_{ij} + a_{ji})$ is the symmetric matrix/tensor
$0.5(a_{ij} - a_{ji})$ is the anti-symmetric matrix (skew-symmetric tensor)
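A NumPy sketch of this decomposition (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])

sym  = 0.5 * (A + A.T)    # symmetric part:      sym == sym.T
skew = 0.5 * (A - A.T)    # anti-symmetric part: skew == -skew.T, zero diagonal
assert np.allclose(A, sym + skew)
```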
1.20. Diagonal matrix
Given a square matrix $[A]$: $[A]$ is a diagonal matrix if $a_{ij} = 0$ for $i \neq j$.
$$[A] = \begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 & 0 \\ 0 & a_{22} & 0 & \cdots & 0 & 0 \\ 0 & 0 & a_{33} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a_{n-1,n-1} & 0 \\ 0 & 0 & 0 & \cdots & 0 & a_{nn} \end{bmatrix}$$
Note that by definition [A] is a square matrix.
1.21. Identity matrix
The identity matrix is denoted by [I]. It is a special case of a diagonal matrix where:
$$[I] = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 1 \end{bmatrix}$$
Note that by definition [I] is a square matrix.
In indicial notation $[I]$ is denoted by the tensor $\delta_{ij}$, which is called the Kronecker delta tensor.
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
1.22. Tri-Diagonal matrix
Given a square matrix $[A]$: $[A]$ is a tri-diagonal matrix if:
$$a_{ij} = 0 \quad \text{for} \quad |i - j| > 1$$
$$[A] = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 & 0 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 & 0 \\ 0 & a_{32} & a_{33} & \ddots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & a_{n-1,n-1} & a_{n-1,n} \\ 0 & 0 & 0 & \cdots & a_{n,n-1} & a_{nn} \end{bmatrix}$$
1.23. Banded matrix
Given a square matrix $[A]$: $[A]$ is a banded matrix with a band-width equal to $b$ if:
$$a_{ij} = 0 \quad \text{for} \quad |i - j| > b$$
$b = 1$ gives a tri-diagonal matrix.
1.24. Triangular matrix
Given a square matrix [A].
$[A]$ is a lower triangular matrix if: $a_{ij} = 0$ for $i < j$
$$[A] = \begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 & 0 \\ a_{21} & a_{22} & 0 & \cdots & 0 & 0 \\ a_{31} & a_{32} & a_{33} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & a_{n-1,2} & a_{n-1,3} & \cdots & a_{n-1,n-1} & 0 \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{n,n-1} & a_{nn} \end{bmatrix}$$
$[A]$ is an upper triangular matrix if: $a_{ij} = 0$ for $i > j$
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\ 0 & 0 & a_{33} & \cdots & a_{3,n-1} & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ 0 & 0 & 0 & \cdots & 0 & a_{nn} \end{bmatrix}$$
1.25. Inverse of a matrix
Given a square matrix $[A]$: $[A]^{-1}$ is defined as the inverse of $[A]$ if:
$$[A]^{-1}[A] = [I] \qquad \text{and also} \qquad [A][A]^{-1} = [I]$$
If $[B] = [A]^{-1}$, then $b_{ik} a_{kj} = \delta_{ij}$.
A matrix is singular if $[A]^{-1}$ does not exist.
1.26. Orthogonal matrix
Given a square matrix $[A]$: $[A]$ is orthogonal if:
$$[A]^{-1} = [A]^T \qquad [A]^T[A] = [A][A]^T = [I]$$
$$a_{ik} a_{jk} = a_{ki} a_{kj} = \delta_{ij}$$
Example of an orthogonal matrix:
The rotation matrix:
$$[A] = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \qquad [A]^T = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
$$[A][A]^T = [A]^T[A] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
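A quick numerical check of this property (a sketch; the angle is arbitrary):

```python
import numpy as np

theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# For an orthogonal matrix the transpose equals the inverse.
assert np.allclose(R @ R.T, np.eye(2))
assert np.allclose(R.T @ R, np.eye(2))
assert np.allclose(np.linalg.inv(R), R.T)
```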
1.27. Polar decomposition theorem
Any arbitrary non-singular second order tensor can be decomposed into the product of an orthogonal second order tensor and a positive symmetric second order tensor, or into the product of a positive symmetric second order tensor and an orthogonal second order tensor. The orthogonal tensor is the same in both decompositions, but the symmetric tensors are different:
$$A_{ij} = R_{ik} S_{kj} = T_{ik} R_{kj}$$
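Numerically, the polar decomposition can be obtained from the singular value decomposition, since $A = U\Sigma V^T = (UV^T)(V\Sigma V^T) = (U\Sigma U^T)(UV^T)$; a NumPy sketch with an arbitrary non-singular matrix (scipy.linalg.polar provides the same factorization directly):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 2.5]])

U, s, Vt = np.linalg.svd(A)
R = U @ Vt                      # orthogonal factor (a rotation when det(A) > 0)
S = Vt.T @ np.diag(s) @ Vt      # right symmetric factor: A = R S
T = U @ np.diag(s) @ U.T        # left symmetric factor:  A = T R

assert np.allclose(A, R @ S)
assert np.allclose(A, T @ R)
assert np.allclose(R @ R.T, np.eye(3))
```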
1.28. Matrix determinant
Determinant of a $2 \times 2$ matrix:
$$[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \qquad |A| = \det(A) = a_{11}a_{22} - a_{12}a_{21}$$
Determinant of a $3 \times 3$ matrix:
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \qquad |A| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$
The two determinants above can be used to recursively calculate the determinant of an arbitrary $n \times n$ matrix:
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}$$
$$|A| = a_{11}|A_{11}| - a_{12}|A_{12}| + a_{13}|A_{13}| - a_{14}|A_{14}| + \cdots = \sum_{j=1}^{n} (-1)^{1+j} a_{1j}\,|A_{1j}|$$
where $[A_{1j}]$ is the matrix formed by deleting the first row and column $j$ of the matrix (the last term is $+a_{1n}|A_{1n}|$ if $n$ is odd).
Properties of a determinant:
- $|[A][B]| = |[B][A]| = |A|\,|B|$
- If [A] is an upper or lower triangular matrix then the determinant of [A] is the product of the diagonal terms.
- If we can decompose a matrix into a product of an upper and lower triangular matrix, then the determinant can be easily evaluated:
$$[A] = [L][U] \qquad |A| = |L|\,|U|$$
Notes:
- The determinant is only defined for a square matrix.
- If $|A| = 0$ the matrix is said to be "singular" and $[A]^{-1}$ does not exist.
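A NumPy sketch of the determinant properties above (matrix values arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

# det(AB) = det(BA) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))

# For a triangular matrix the determinant is the product of the diagonal.
L = np.array([[2.0, 0.0], [7.0, 3.0]])
assert np.isclose(np.linalg.det(L), 2.0 * 3.0)
```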
1.29. Writing a system of linear algebraic equations in matrix form
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1,m-1}x_{m-1} + a_{1m}x_m &= c_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2,m-1}x_{m-1} + a_{2m}x_m &= c_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3,m-1}x_{m-1} + a_{3m}x_m &= c_3 \\ &\;\;\vdots \\ a_{n-1,1}x_1 + a_{n-1,2}x_2 + a_{n-1,3}x_3 + \cdots + a_{n-1,m-1}x_{m-1} + a_{n-1,m}x_m &= c_{n-1} \\ a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{n,m-1}x_{m-1} + a_{nm}x_m &= c_n \end{aligned}$$
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1,m-1} & a_{1m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2,m-1} & a_{2m} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3,m-1} & a_{3m} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{n-1,1} & a_{n-1,2} & a_{n-1,3} & \cdots & a_{n-1,m-1} & a_{n-1,m} \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{n,m-1} & a_{nm} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{m-1} \\ x_m \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{n-1} \\ c_n \end{Bmatrix}$$
$$[A]\{x\} = \{c\} \qquad a_{ij} x_j = c_i$$
If $n = m$, then we have $n$ equations in $n$ unknowns.
o If $\det(A) \neq 0$ then all the equations are linearly independent and the above system of equations has one solution,
i.e. one set of $x_i$ values. We can find the solution by inverting the matrix $[A]$:
$$[A]^{-1}[A]\{x\} = [A]^{-1}\{c\} \;\Rightarrow\; [I]\{x\} = [A]^{-1}\{c\} \;\Rightarrow\; \{x\} = [A]^{-1}\{c\}$$
o If $\det(A) = 0$, then one or more of the equations are linearly dependent. This means there is either an infinite number of solutions or there is no solution.
If $n < m$ then we have more unknowns than equations. Therefore, there are an infinite number of possible solutions that satisfy the equations.
If $n > m$ then we have more equations than unknowns. If all the equations are linearly independent, then there is no solution.
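For the square case, a minimal NumPy sketch (matrix values arbitrary; np.linalg.solve factorizes $[A]$ rather than forming the inverse, which is the preferred route in practice):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, c)       # solves [A]{x} = {c}
assert np.allclose(A @ x, c)

x_inv = np.linalg.inv(A) @ c    # the {x} = [A]^{-1}{c} route (less stable)
assert np.allclose(x, x_inv)
```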
1.30. Cross-product
The cross product is only defined for vectors with 3 components.
Cross product or vector product (defined only for 3-D vectors):
$$\vec{w} = \vec{u} \times \vec{v} = \{u\} \times \{v\} = \begin{vmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = (u_2v_3 - u_3v_2)\vec{e}_1 - (u_1v_3 - u_3v_1)\vec{e}_2 + (u_1v_2 - u_2v_1)\vec{e}_3 = \begin{Bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{Bmatrix}$$
The result of the cross-product is a vector that is normal to the two vectors {u} and {v}.
$$\|\vec{w}\| = \|\vec{u}\|\,\|\vec{v}\| \sin(\theta)$$
Indicial notation: $w_k = \epsilon_{ijk} u_i v_j$ or $w_i = \epsilon_{ijk} u_j v_k$, where $\epsilon_{ijk}$ is called the permutation tensor.
$$\epsilon_{ijk} = \begin{cases} 1 & i, j, k \text{ appear in the order } 1,2,3,1,2 \\ -1 & i, j, k \text{ appear in the order } 3,2,1,3,2 \\ 0 & \text{two or more of the indices have the same value} \end{cases}$$
Box or triple scalar product:
$$\epsilon_{ijk} a_i b_j c_k = (\vec{a} \times \vec{b}) \cdot \vec{c} = \vec{a} \cdot (\vec{b} \times \vec{c}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$$
If $\epsilon_{ijk} a_i b_j c_k = (\vec{a} \times \vec{b}) \cdot \vec{c} = 0$ then the vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ are coplanar.
Triple vector product: $\vec{d} = \vec{a} \times (\vec{b} \times \vec{c}) = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c}$
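A NumPy sketch of the cross product and the two triple-product identities (vector values arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)                   # normal to both u and v
assert np.isclose(np.dot(w, u), 0.0) and np.isclose(np.dot(w, v), 0.0)

a, b, c = u, v, np.array([2.0, -1.0, 1.0])
box = np.dot(np.cross(a, b), c)      # triple scalar product
assert np.isclose(box, np.linalg.det(np.array([a, b, c])))

d = np.cross(a, np.cross(b, c))      # triple vector product
assert np.allclose(d, np.dot(a, c) * b - np.dot(a, b) * c)
```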
1.31. Principal values of a second order tensor
If $T$ is a tensor and $\vec{n}$ is a unit normal vector, then
$v_i = T_{ij} n_j$ is a vector in a conjugate direction to $\vec{n}$.
If we want to find the conjugate direction of $T$ which is in the same direction as $\vec{n}$, then:
$$T_{ij} n_j = \lambda n_i \;\Rightarrow\; T_{ij} n_j - \lambda n_i = 0 \;\Rightarrow\; (T_{ij} - \lambda \delta_{ij}) n_j = 0 \;\Rightarrow\; |T_{ij} - \lambda \delta_{ij}| = 0$$
$$\begin{vmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{21} & T_{22} - \lambda & T_{23} \\ T_{31} & T_{32} & T_{33} - \lambda \end{vmatrix} = 0$$
$$(T_{11} - \lambda)\big[(T_{22} - \lambda)(T_{33} - \lambda) - T_{23}T_{32}\big] - T_{12}\big[T_{21}(T_{33} - \lambda) - T_{23}T_{31}\big] + T_{13}\big[T_{21}T_{32} - (T_{22} - \lambda)T_{31}\big] = 0$$
This is a third order polynomial equation in $\lambda$ that has 3 solutions.
$\lambda_1, \lambda_2, \lambda_3$ are the eigenvalues of $T$ and are also called the principal values of tensor $T$.
If the tensor is symmetric then:
$$\begin{vmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{12} & T_{22} - \lambda & T_{23} \\ T_{13} & T_{23} & T_{33} - \lambda \end{vmatrix} = 0$$
and the values of $\lambda$ are real.
$$(T_{11} - \lambda)\big[(T_{22} - \lambda)(T_{33} - \lambda) - T_{23}^2\big] - T_{12}\big[T_{12}(T_{33} - \lambda) - T_{23}T_{13}\big] + T_{13}\big[T_{12}T_{23} - (T_{22} - \lambda)T_{13}\big] = 0$$
$$\lambda^3 - I_1\lambda^2 + I_2\lambda - I_3 = 0$$
where:
$$I_1 = T_{ii} = T_{11} + T_{22} + T_{33}$$
$$I_2 = \frac{1}{2}\left(T_{ii}T_{jj} - T_{ij}T_{ij}\right)$$
$$I_3 = \det(T) = |T_{ij}|$$
$I_1$ is the first invariant of $T$.
$I_2$ is the second invariant of $T$.
$I_3$ is the third invariant of $T$.
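For a symmetric tensor, the principal values and invariants can be checked numerically; a sketch with an arbitrary symmetric tensor (np.linalg.eigvalsh is NumPy's eigenvalue routine for symmetric matrices):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])    # symmetric second order tensor

lam = np.linalg.eigvalsh(T)        # principal values (real for symmetric T)

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.sum(T * T))
I3 = np.linalg.det(T)

# The invariants equal the elementary symmetric functions of the eigenvalues.
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0]*lam[1] + lam[0]*lam[2] + lam[1]*lam[2])
assert np.isclose(I3, lam.prod())
```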
1.32. Scalar, Vector and Tensor fields
Scalar field: $\phi = \phi(x_k) = \phi(\vec{x}) = \phi(\{x\})$, where $i, j, k = 1, 2, 3$
Vector field: $v_i = v_i(x_k)$, $\vec{v} = \vec{v}(\vec{x}) = \vec{v}(\{x\})$
Tensor field: $T_{ij} = T_{ij}(x_k)$, $T = T(\vec{x})$
Note: The field can also be a function of time t.
1.33. Derivative of a tensor
Time derivatives:
$$\frac{d(u_i v_i)}{dt} = \frac{d(\vec{u} \cdot \vec{v})}{dt} = \vec{u} \cdot \frac{d\vec{v}}{dt} + \frac{d\vec{u}}{dt} \cdot \vec{v}$$
$$\frac{d(\vec{u} \times \vec{v})}{dt} = \vec{u} \times \frac{d\vec{v}}{dt} + \frac{d\vec{u}}{dt} \times \vec{v}$$
Del or gradient operator: $\nabla = \vec{e}_i \dfrac{\partial}{\partial x_i}$
Gradient of $\vec{v}$ (outer product): $\nabla \vec{v} = \dfrac{\partial v_i}{\partial x_j} = v_{i,j}$
Divergence of $\vec{v}$ (inner product): $\nabla \cdot \vec{v} = \mathrm{div}(\vec{v}) = \dfrac{\partial v_i}{\partial x_i} = v_{i,i} = \dfrac{\partial v_1}{\partial x_1} + \dfrac{\partial v_2}{\partial x_2} + \dfrac{\partial v_3}{\partial x_3}$
Gradient of a scalar: $\nabla \phi = \dfrac{\partial \phi}{\partial x_i} = \phi_{,i}$
Curl of $\vec{v}$: $\nabla \times \vec{v} = \epsilon_{ijk} \dfrac{\partial v_j}{\partial x_i}$
Square gradient (Laplacian) of a scalar: $\nabla^2 \phi = \phi_{,ii} = \dfrac{\partial^2 \phi}{\partial x_i \partial x_i} = \dfrac{\partial^2 \phi}{\partial x_1^2} + \dfrac{\partial^2 \phi}{\partial x_2^2} + \dfrac{\partial^2 \phi}{\partial x_3^2}$
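These differential operators can be approximated numerically on a grid; a finite-difference sketch using np.gradient (the field $\phi = x_1^2 + x_2 x_3$ and the grid are arbitrary choices):

```python
import numpy as np

# Sample phi(x1, x2, x3) = x1^2 + x2*x3 on a uniform grid.
x = np.linspace(0.0, 1.0, 41)
h = x[1] - x[0]
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
phi = X1**2 + X2 * X3

# Gradient of the scalar field: one array per component of d(phi)/dx_i.
g1, g2, g3 = np.gradient(phi, h)

# Divergence of the gradient is the Laplacian; analytically nabla^2 phi = 2.
lap = (np.gradient(g1, h, axis=0)
       + np.gradient(g2, h, axis=1)
       + np.gradient(g3, h, axis=2))
print(lap[20, 20, 20])   # ~2.0 away from the boundaries
```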
1.34. Derivative of a tensor
$$\nabla T = \frac{\partial T_{ij}}{\partial x_k} \qquad \nabla \cdot T = \frac{\partial T_{ij}}{\partial x_i} \ \text{or} \ \frac{\partial T_{ij}}{\partial x_j}$$
1.35. Green-Gauss Theorem
If we have a vector field: $a_i = a_i(x_k) = a_i(x_1, x_2, x_3)$
For an infinitesimal rectangular volume: $dV = dx_1\,dx_2\,dx_3$
The net flux in or out of the volume is given by: $f = \displaystyle\oint_S \vec{a} \cdot \vec{n}\,dS$
$$f = \big[a_1(x_1 + dx_1, x_2, x_3) - a_1(x_1, x_2, x_3)\big]\,dx_2\,dx_3 + \big[a_2(x_1, x_2 + dx_2, x_3) - a_2(x_1, x_2, x_3)\big]\,dx_1\,dx_3 + \big[a_3(x_1, x_2, x_3 + dx_3) - a_3(x_1, x_2, x_3)\big]\,dx_1\,dx_2$$
(the three terms are the fluxes through the face pairs $S_{23}$, $S_{13}$, and $S_{12}$).
Or, expanding to first order:
$$\big[a_1(x_1 + dx_1, x_2, x_3) - a_1(x_1, x_2, x_3)\big]\,dx_2\,dx_3 = \left[a_1(x_1, x_2, x_3) + \frac{\partial a_1}{\partial x_1}\,dx_1 - a_1(x_1, x_2, x_3)\right] dx_2\,dx_3 = \frac{\partial a_1}{\partial x_1}\,dx_1\,dx_2\,dx_3$$
Similarly:
$$\big[a_2(x_1, x_2 + dx_2, x_3) - a_2(x_1, x_2, x_3)\big]\,dx_1\,dx_3 = \frac{\partial a_2}{\partial x_2}\,dx_1\,dx_2\,dx_3$$
And:
$$\big[a_3(x_1, x_2, x_3 + dx_3) - a_3(x_1, x_2, x_3)\big]\,dx_1\,dx_2 = \frac{\partial a_3}{\partial x_3}\,dx_1\,dx_2\,dx_3$$
Therefore:
$$f = \left(\frac{\partial a_1}{\partial x_1} + \frac{\partial a_2}{\partial x_2} + \frac{\partial a_3}{\partial x_3}\right) dx_1\,dx_2\,dx_3 = \int_V \frac{\partial a_i}{\partial x_i}\,dV = \int_V \nabla \cdot \vec{a}\,dV$$
$$\oint_S \vec{a} \cdot \vec{n}\,dS = \int_V \nabla \cdot \vec{a}\,dV \qquad \oint_S a_i n_i\,dS = \int_V \frac{\partial a_i}{\partial x_i}\,dV \qquad \oint_S a_i \hat{n}_i\,dS = \int_V a_{i,i}\,dV$$
This is called the Green-Gauss theorem.
Also, we can easily prove:
$$\oint_S \phi\,\vec{n}\,dS = \int_V \nabla \phi\,dV \qquad \oint_S \nabla\phi \cdot \vec{n}\,dS = \int_V \nabla^2 \phi\,dV$$
$$\oint_S \vec{n} \times \vec{a}\,dS = \int_V \nabla \times \vec{a}\,dV$$
$$\oint_S T_{ij}\,n_i\,dS = \int_V \frac{\partial T_{ij}}{\partial x_i}\,dV$$
1.36. Coordinate transformations
In order to transform from one coordinate system $(x_1, x_2, x_3)$ to another coordinate system $(\xi_1, \xi_2, \xi_3)$ we use:
$$dx_i = \frac{\partial x_i}{\partial \xi_j}\,d\xi_j$$
$$\begin{Bmatrix} dx_1 \\ dx_2 \\ dx_3 \end{Bmatrix} = \begin{bmatrix} \dfrac{\partial x_1}{\partial \xi_1} & \dfrac{\partial x_1}{\partial \xi_2} & \dfrac{\partial x_1}{\partial \xi_3} \\ \dfrac{\partial x_2}{\partial \xi_1} & \dfrac{\partial x_2}{\partial \xi_2} & \dfrac{\partial x_2}{\partial \xi_3} \\ \dfrac{\partial x_3}{\partial \xi_1} & \dfrac{\partial x_3}{\partial \xi_2} & \dfrac{\partial x_3}{\partial \xi_3} \end{bmatrix} \begin{Bmatrix} d\xi_1 \\ d\xi_2 \\ d\xi_3 \end{Bmatrix}$$
1.37. Infinitesimal volume in different coordinates
Let $d\vec{x}^{(1)} = \dfrac{\partial \vec{x}}{\partial \xi_1}\,d\xi_1$, $\quad d\vec{x}^{(2)} = \dfrac{\partial \vec{x}}{\partial \xi_2}\,d\xi_2$, $\quad d\vec{x}^{(3)} = \dfrac{\partial \vec{x}}{\partial \xi_3}\,d\xi_3$
$$dV = d\vec{x}^{(1)} \times d\vec{x}^{(2)} \cdot d\vec{x}^{(3)} = \epsilon_{ijk}\,\frac{\partial x_i}{\partial \xi_1}\,\frac{\partial x_j}{\partial \xi_2}\,\frac{\partial x_k}{\partial \xi_3}\,d\xi_1\,d\xi_2\,d\xi_3 = \left|\frac{\partial x_i}{\partial \xi_j}\right| d\xi_1\,d\xi_2\,d\xi_3 = J\,d\xi_1\,d\xi_2\,d\xi_3$$
where $J = \left|\dfrac{\partial x_i}{\partial \xi_j}\right|$ is the Jacobian of the transformation.
1.38. Tensor transformation to another coordinate system
A tensor T can be transformed from one coordinate system to another coordinate system using:
$$T'_{ij} = R_{im} R_{jn} T_{mn} \qquad T' = R\,T\,R^T$$
where R is the orthogonal rotation matrix between the two coordinate systems.
$$\hat{e}'_i = R_{ij}\,\hat{e}_j$$
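A NumPy sketch of the transformation rule (the rotation angle and tensor components are arbitrary; np.einsum mirrors the indicial form $T'_{ij} = R_{im} R_{jn} T_{mn}$):

```python
import numpy as np

theta = np.deg2rad(40.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])          # orthogonal rotation about x3

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

T_prime = np.einsum('im,jn,mn->ij', R, R, T)   # T'_ij = R_im R_jn T_mn
assert np.allclose(T_prime, R @ T @ R.T)

# Invariants are unchanged by the coordinate transformation.
assert np.isclose(np.trace(T), np.trace(T_prime))
assert np.isclose(np.linalg.det(T), np.linalg.det(T_prime))
```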