ADVANCED THEORY OF STRUCTURES
INTRODUCTION
1. REFERENCE
a. Problems in Matrix Structural Analysis, Bhatt
2. SYLLABUS
a. Introduction
b. Matrix Structural Analysis
i. Special Matrix Operations
ii. Truss (Ch. 3)
1. Manual Solution
2. Computer Based Solution
iii. Plane Frames (Ch. 4)
iv. Plane Grids (Ch. 5)
v. Combination (Ch. 6)
vi. Self-straining Loads (Ch. 7)
c. Lagrangian Mechanics (Energy Methods)
i. Principle of Virtual Work (PVW)
ii. Stationary Potential Energy (SPE)
iii. Stability
iv. Continuous Systems
v. Calculus of Variations
vi. Rayleigh / Rayleigh-Ritz Method
vii. Timoshenko Beams (Deep Beams) (Optional)
BASIC MATRIX OPERATIONS
Addition & Subtraction: A(mxn) ± B(mxn) = C(mxn)
	cij = aij ± bij
Multiplication: A(mxn) * B(nxo) = C(mxo)
	cij = Σ aik * bkj, summed over k = 1 to n
	Chained products: A(mxn) B(nxo) C(oxp) D(pxq) = E(mxq); the inner dimensions must match pairwise.
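The multiplication rule above can be sketched in plain Python (an illustrative implementation only; production structural codes use optimized libraries):

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x o matrix B: c[i][j] = sum_k a[i][k]*b[k][j]."""
    m, n, o = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(o)]
            for i in range(m)]

A = [[1, 2], [3, 4]]   # 2 x 2
B = [[5, 6], [7, 8]]   # 2 x 2
C = matmul(A, B)       # [[19, 22], [43, 50]]
```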
Division: there is no direct matrix division. Instead, given A(nxn) and b(n), solve A x = b for x(n),
e.g. by Gaussian elimination or Cramer's rule.
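As an illustration of solving A x = b, here is a minimal Gaussian elimination with back substitution (a sketch with no pivoting safeguards, so it assumes all pivots are nonzero):

```python
def gauss_solve(A, b):
    """Solve A x = b by naive Gaussian elimination (assumes nonzero pivots)."""
    n = len(b)
    A = [row[:] for row in A]              # work on copies, keep caller's data intact
    b = b[:]
    for i in range(n):                     # forward elimination
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 3 and x + 3y = 5 give x = 0.8, y = 1.4
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```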
Special Problem:
1. Simultaneous linear equations are best solved using matrix operations
when the matrices are large.
2. “A” matrix is square: the number of rows = the number of equations, and the
number of columns = the number of unknowns. For a unique solution, the number
of independent equations must equal the number of unknowns.
3. “A” matrix is symmetrical (aij = aji)
30 -5 4 -2
-5 20 -3 6
4 -3 25 -7
-2 6 -7 40
aij = aji
and positive definite:
	every square sub-matrix whose main-diagonal elements lie on the main
	diagonal of matrix A (in particular, every leading principal sub-matrix)
	has a positive determinant.
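Positive definiteness can be checked numerically through the leading principal minors (Sylvester's criterion); a small sketch for the example matrix above:

```python
# Check positive definiteness of the 4x4 example matrix by testing that every
# leading principal minor (determinant of the top-left k x k sub-matrix) is positive.
def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[30, -5,  4, -2],
     [-5, 20, -3,  6],
     [ 4, -3, 25, -7],
     [-2,  6, -7, 40]]

minors = [det([row[:k] for row in A[:k]]) for k in range(1, 5)]
is_pos_def = all(m > 0 for m in minors)   # True for this matrix
```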
Cholesky Procedure
1. Decomposition of Matrix A
2. Forward Elimination
3. Backward Substitution
Decomposition
o A=LxU
L Lower triangular matrix
x 0 0 0
x x 0 0
x x x 0
x x x x
U Upper triangular matrix
x x x x
0 x x x
0 0 x x
0 0 0 x
o In the Cholesky decomposition, L & U are transposes of each other: lij = uji
o There are infinitely many matrix pairs L & U whose product is A, but only
one pair in which L and U are transposes of each other.
o If we compare it to scalar values,
E.g. 12 = 3x4 or 2x6 or 5x2.4 or 10x1.2 or …
But unique pair is the pair where the factors are equal,
meaning 12 = √12 x √12
Step 1. Decomposition Procedure
A11 A12 A13 A14     L11   0   0   0     U11 U12 U13 U14
A21 A22 A23 A24  =  L21 L22   0   0  *    0 U22 U23 U24
A31 A32 A33 A34     L31 L32 L33   0       0   0 U33 U34
A41 A42 A43 A44     L41 L42 L43 L44       0   0   0 U44

Since L is the transpose of U:

A11 A12 A13 A14     U11   0   0   0     U11 U12 U13 U14
A21 A22 A23 A24  =  U12 U22   0   0  *    0 U22 U23 U24
A31 A32 A33 A34     U13 U23 U33   0       0   0 U33 U34
A41 A42 A43 A44     U14 U24 U34 U44       0   0   0 U44
a11 = u11^2                               u11 = (a11)^0.5
a12 = u11*u12                             u12 = a12 / u11
a13 = u11*u13                             u13 = a13 / u11
a14 = u11*u14                             u14 = a14 / u11
a22 = u12^2 + u22^2                       u22 = (a22 - u12^2)^0.5
a23 = u12*u13 + u22*u23                   u23 = (a23 - u12*u13) / u22
a24 = u12*u14 + u22*u24                   u24 = (a24 - u12*u14) / u22
a33 = u13^2 + u23^2 + u33^2               u33 = (a33 - (u13^2 + u23^2))^0.5
a34 = u13*u14 + u23*u24 + u33*u34         u34 = (a34 - (u13*u14 + u23*u24)) / u33
a44 = u14^2 + u24^2 + u34^2 + u44^2       u44 = (a44 - (u14^2 + u24^2 + u34^2))^0.5
For the main diagonal elements,
	u44 = (a44 - (u14^2 + u24^2 + u34^2))^0.5
	uii = (aii - Σk uki^2)^0.5, for k = 1 to i-1
For the off-diagonal elements,
	u34 = (a34 - (u13*u14 + u23*u24)) / u33
	uij = (aij - Σk uki*ukj) / uii, for k = 1 to i-1
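The two recurrences above translate directly into code; a sketch of the decomposition, tried on the example matrix from earlier:

```python
import math

def cholesky_U(A):
    """Return the upper-triangular U with A = U^T * U, using
    u_ii = (a_ii - sum_k u_ki^2)^0.5 and u_ij = (a_ij - sum_k u_ki*u_kj) / u_ii."""
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = sum(U[k][i] * U[k][j] for k in range(i))
            if i == j:
                U[i][i] = math.sqrt(A[i][i] - s)   # requires A positive definite
            else:
                U[i][j] = (A[i][j] - s) / U[i][i]
    return U

A = [[30, -5,  4, -2],
     [-5, 20, -3,  6],
     [ 4, -3, 25, -7],
     [-2,  6, -7, 40]]
U = cholesky_U(A)   # U^T * U reproduces A
```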
Note: Ax = b. Given A and b, solve for x.
A = LU ==> LUx = b
Let Ux = y ==> Ly = b
Step 2. Forward Elimination: Ly = b. Given L from Step 1 and b, solve for y.
U11   0   0   0     y1     b1
U12 U22   0   0  *  y2  =  b2
U13 U23 U33   0     y3     b3
U14 U24 U34 U44     y4     b4

u14*y1 + u24*y2 + u34*y3 + u44*y4 = b4
	y4 = (b4 - (u14*y1 + u24*y2 + u34*y3)) / u44
	yi = (bi - Σk uki*yk) / uii, for k = 1 to i-1
Step 3. Backward Substitution: Ux = y. Given U from Step 1 and y from Step
2, solve for x.
U11 U12 U13 U14     x1     y1
  0 U22 U23 U24  *  x2  =  y2
  0   0 U33 U34     x3     y3
  0   0   0 U44     x4     y4

u11*x1 + u12*x2 + u13*x3 + u14*x4 = y1
	x1 = (y1 - (u12*x2 + u13*x3 + u14*x4)) / u11
	xi = (yi - Σk uik*xk) / uii, for k = i+1 to n
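Steps 2 and 3 can be sketched as two short routines. This is a minimal illustration assuming U has already been obtained from Step 1; the small 2x2 example below is made up so the arithmetic can be checked by hand:

```python
def forward_elim(U, b):
    """Step 2: solve U^T y = b (a lower-triangular system) for y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(U[k][i] * y[k] for k in range(i))
        y[i] = (b[i] - s) / U[i][i]
    return y

def back_sub(U, y):
    """Step 3: solve U x = y (an upper-triangular system) for x."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# Hand check: with U = [[2, 1], [0, 3]], A = U^T * U = [[4, 2], [2, 10]];
# for b = [8, 22] the exact solution of A x = b is x = [1, 2].
U = [[2.0, 1.0], [0.0, 3.0]]
y = forward_elim(U, [8.0, 22.0])   # y = [4.0, 6.0]
x = back_sub(U, y)                 # x = [1.0, 2.0]
```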
ALGORITHMS
INPUT N
DIM A(N,N), B(N), U(N,N), Y(N), X(N)
REM INPUT ROUTINE (READ UPPER TRIANGLE OF SYMMETRIC A, AND VECTOR B)
FOR I = 1 TO N
	FOR J = I TO N
		INPUT A(I,J)
	NEXT J
	INPUT B(I)
NEXT I
REM DECOMPOSITION
FOR I = 1 TO N
	FOR J = I TO N
		IF I = J THEN
			SUM = 0
			FOR K = 1 TO I-1
				SUM = SUM + U(K,I)^2
			NEXT K
			U(I,I) = (A(I,I) - SUM)^0.5
		ELSE
			SUM = 0
			FOR K = 1 TO I-1
				SUM = SUM + U(K,I)*U(K,J)
			NEXT K
			U(I,J) = (A(I,J) - SUM) / U(I,I)
		END IF
	NEXT J
NEXT I
REM FORWARD ELIMINATION
FOR I = 1 TO N
	SUM = 0
	FOR K = 1 TO I-1
		SUM = SUM + U(K,I)*Y(K)
	NEXT K
	Y(I) = (B(I) - SUM) / U(I,I)
NEXT I
REM BACKWARD SUBSTITUTION
FOR I = N TO 1 STEP -1
	SUM = 0
	FOR K = I+1 TO N
		SUM = SUM + U(I,K)*X(K)
	NEXT K
	X(I) = (Y(I) - SUM) / U(I,I)
NEXT I
EXTRA SPECIAL MATRIX OPERATIONS
Consider a system of 1,000 equations: Matrix A has 1,000 rows by 1,000 columns.
A double-precision variable requires 8 bytes of memory, so storing the full matrix takes
1,000 x 1,000 = 1M values x 8 bytes = 8 MB.
To save memory, which accordingly also speeds up the process, software uses
"block" operations. The size of the block is HBW x HBW,
where HBW = half-band width,
and HBW = (BW + 1) / 2, where BW = band width.
[Sketch: the full system A x = b, with A of 1,000 rows x 1,000 columns]
With block operations, we convert Matrix A into a “banded” matrix. A banded
matrix looks like what is shown below.
[Sketch: a 1,000 x 1,000 banded matrix; the nonzero values lie in a diagonal band
10 entries wide on each side of the main diagonal, with zeroes outside the band]
Band width BW = 10 + 1 + 10 = 21
HBW = (21 + 1) / 2 = 11, the size of the block
For Cholesky operations, we only use HBW x HBW of Matrix A at any given time
and HBW values for the corresponding “x” and “b” vectors.
[Sketch: only an HBW x HBW block of the banded A, and HBW entries of the
corresponding x and b vectors, are in use at any time]
Decomposition then works on 11 x 11 = 121 variables at a time, compared to 1,000,000 for the full matrix.
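One common way to exploit the band is to store only the upper band of the symmetric matrix as an N x HBW array. The index mapping below is a generic band-storage sketch, not the scheme of any particular software:

```python
# Band storage: entry a_ij of a symmetric banded matrix (with j >= i and
# j - i < HBW) is kept at AB[i][j - i]; everything outside the band is zero
# and is simply not stored.
N, HBW = 1000, 11

def to_band(i, j):
    """Map full-matrix indices (i, j), j >= i, to band-storage indices,
    or None if (i, j) lies outside the band."""
    return (i, j - i) if 0 <= j - i < HBW else None

full_count = N * N     # 1,000,000 stored values for the full matrix
band_count = N * HBW   # only 11,000 stored values in band storage
```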
The band width can be minimized by adopting a joint numbering such that the difference
between the joint numbers at the two ends of each member is likewise minimized. This is
achieved by numbering the joints in a "wave-like" fashion in the direction of the
"longer" dimension.
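The effect of the joint numbering on the half-band width can be illustrated on a small hypothetical 2 x 4 grid of joints (the grid, numberings, and member lists below are made up for illustration; in joint terms, HBW = max joint-number difference + 1, to be multiplied by the degrees of freedom per joint for the actual matrix):

```python
def half_band_width(members):
    """Half-band width, in joint terms, for members given as (joint_i, joint_j) pairs."""
    return max(abs(i - j) for i, j in members) + 1

# Numbering across the short (2-joint) dimension first:
#   1 3 5 7
#   2 4 6 8
good = [(1, 2), (3, 4), (5, 6), (7, 8),                   # vertical members
        (1, 3), (3, 5), (5, 7), (2, 4), (4, 6), (6, 8)]   # horizontal members

# Numbering along the long (4-joint) dimension first:
#   1 2 3 4
#   5 6 7 8
bad = [(1, 5), (2, 6), (3, 7), (4, 8),
       (1, 2), (2, 3), (3, 4), (5, 6), (6, 7), (7, 8)]

half_band_width(good)   # 3
half_band_width(bad)    # 5
```

The "wave-like" numbering keeps the two ends of every member close in number, so the band stays narrow.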