Python Solutions for PDEs

MAT-614 ASSIGNMENT

NUMERICAL SOLUTIONS AND APPLICATIONS OF PARTIAL DIFFERENTIAL EQUATIONS

Name: TAFFOHOUO NWAFFEU YVES VALDEZ
Module: Python Program for Solving n-Dimensional Linear Systems
Matricule No: SC23P191
Date of Presentation: 2024.04.12
Date of Submission: 2024.04.12

1 ASSIGNMENT 1

1 Download and install the Python programming language, including the associated graphics packages: DONE

2 Consider a typical linear system of the form A⃗x = ⃗b in Rⁿ, with the n × n matrix A = (aᵢⱼ), aᵢⱼ ∈ R, the vector ⃗b = (b₁, b₂, ..., bₙ)ᵀ, and ⃗x = (x₁, x₂, ..., xₙ)ᵀ.
The program output below shows how the coefficients and the components of the constant vector are arranged and stored in the array matrix.

3 Required: program two different direct methods to solve for ⃗x = (x₁, x₂, ..., xₙ)ᵀ.
To design a program capable of solving the stated problem, we consider two commonly used direct methods:

1. GAUSS ELIMINATION

2. THOMAS ALGORITHM

1. GAUSS ELIMINATION
The elimination comes first: at each stage, we remove one unknown from all but one of the equations to produce a set of equations of order one smaller than the last.
Back substitution follows: the last equation contains just the last unknown, which is found directly. All unknowns found so far can then be substituted into the equation with one extra unknown, which is found in the next step.
For a small set (a value of n that is not too large) with simple numbers, the result is exact. For sets in which the numbers are less simple, or for large sets, the calculations will in general be affected by rounding errors. We note that this is the method of systematic elimination that is commonly taught in school.
The computed result usually depends on the order of the equations, because of rounding errors. Moreover, the result depends, in general, on the type of arithmetic used: calculations that would be exact in decimal arithmetic need not be exact in binary arithmetic.
The process also relies on the pivots; they must be non-zero because we have to divide by them. Their choice depends on the order of the equations and the number of variables.
Application of Gauss Elimination. We suppose that $a_{11} \neq 0$ and fix $m_{i1} = a_{i1}/a_{11}$, $i = 2, 3, \ldots, n$. Computing $a_{ij}^{(2)}$ and $b_i^{(2)}$, we get
$$a_{ij}^{(2)} = a_{ij} - m_{i1} a_{1j}, \qquad b_i^{(2)} = b_i - m_{i1} b_1, \qquad i, j = 2, 3, \ldots, n.$$
This computation yields an (n−1) × (n−1) subsystem in $x_2, x_3, \ldots, x_n$. Adjoining the first equation to this subsystem, one obtains the n × n system $A^{(2)}x = b^{(2)}$. We continue recursively: the next step assumes $a_{22}^{(2)} \neq 0$ and takes
$$m_{i2} = a_{i2}^{(2)}/a_{22}^{(2)}, \qquad i = 3, 4, \ldots, n.$$
Similarly computing $a_{ij}^{(3)}$ and $b_i^{(3)}$, we get
$$a_{ij}^{(3)} = a_{ij}^{(2)} - m_{i2} a_{2j}^{(2)}, \qquad b_i^{(3)} = b_i^{(2)} - m_{i2} b_2^{(2)}, \qquad i, j = 3, 4, \ldots, n,$$
to obtain a subsystem in $x_3, x_4, \ldots, x_n$. We adjoin the first two equations to this subsystem to obtain the n × n system $A^{(3)}x = b^{(3)}$. This we continue until we find $a_{nn}^{(n)} x_n = b_n^{(n)}$, assuming that $a_{nn}^{(n)} \neq 0$, as the first step of the back-substitution. The whole back-substitution process is applied to a system of equations of the form:

 
$$
A^{(n)}x =
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1(n-1)} & a_{1n} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2(n-1)}^{(2)} & a_{2n}^{(2)} \\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3(n-1)}^{(3)} & a_{3n}^{(3)} \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & a_{(n-1)(n-1)}^{(n-1)} & a_{(n-1)n}^{(n-1)} \\
0 & 0 & 0 & \cdots & 0 & a_{nn}^{(n)}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2^{(2)} \\ b_3^{(3)} \\ \vdots \\ b_{n-1}^{(n-1)} \\ b_n^{(n)} \end{pmatrix}
$$

Here is a Python program that solves an n × n system of linear equations using the Gauss elimination method and also reports the operation count:

Python Code

import numpy as np

# Gauss elimination with partial pivoting; also counts the floating-point
# operations performed during elimination and back substitution
def gauss_elimination(A, b):
    n = len(b)        # number of equations (and variables) in the system
    ops_count = 0     # operation count

    for i in range(n):
        # Partial pivoting: find the row with the maximum absolute
        # value in the current column
        max_row_index = i
        for j in range(i + 1, n):
            if abs(A[j, i]) > abs(A[max_row_index, i]):
                max_row_index = j

        # Swap rows to bring the pivot into place
        A[[i, max_row_index]] = A[[max_row_index, i]]
        b[[i, max_row_index]] = b[[max_row_index, i]]

        # Perform row operations to eliminate coefficients below the diagonal
        for j in range(i + 1, n):
            factor = A[j, i] / A[i, i]
            ops_count += 1
            for k in range(i, n):
                A[j, k] -= factor * A[i, k]
                ops_count += 2
            b[j] -= factor * b[i]
            ops_count += 2

    # Backward substitution to find the solution
    x = np.zeros(n)
    x[n - 1] = b[n - 1] / A[n - 1, n - 1]
    ops_count += 1

    for i in range(n - 2, -1, -1):
        sum_val = b[i]
        for j in range(i + 1, n):
            sum_val -= A[i, j] * x[j]
            ops_count += 2
        x[i] = sum_val / A[i, i]
        ops_count += 1

    return x, ops_count

# Ask the user for the number of equations and variables
n = int(input("Enter the number of equations and variables: "))

# Initialize coefficient matrix A and constant vector b
A = np.zeros((n, n))
b = np.zeros(n)

# Read coefficients and constants from the user
print("Enter the coefficients of the matrix A (one row per line):")
for i in range(n):
    A[i] = list(map(float, input().split()))

print("Enter the constants:")
b[:] = list(map(float, input().split()))

# Solve the system of linear equations using the Gauss elimination method
x, ops_count = gauss_elimination(A, b)
print("Solution:", x)
print("Operation count:", ops_count)

The output of this Python code is displayed below.
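As a quick sanity check of the elimination result, here is a small self-contained example (the 3 × 3 data is our own test case, not from the assignment) solved with NumPy's built-in solver, which uses an LU factorization internally:

```python
import numpy as np

# Hypothetical 3x3 test system
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

# Solve A x = b with NumPy's dense solver
x = np.linalg.solve(A, b)
print(x)  # close to [2, 3, -1]
```

Any hand-rolled elimination routine should reproduce this solution on the same data, up to rounding.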

2. THOMAS ALGORITHM
Definition (Banded matrices):
An n × n matrix A is called banded if there exists an integer r < n such that $a_{ij} = 0$ for all $|i - j| > r$. For a banded matrix, all non-zero elements lie in a band of width 2r + 1 along the main diagonal.
For r = 1 (a tridiagonal matrix):
$$
A =
\begin{pmatrix}
b_{11} & c_{12} & 0 & \cdots & 0 \\
a_{21} & b_{22} & c_{23} & \ddots & \vdots \\
0 & a_{32} & b_{33} & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & c_{(n-1)n} \\
0 & \cdots & 0 & a_{n(n-1)} & b_{nn}
\end{pmatrix}
$$

Python Code
Code Listing 1: TDMAsolver.py
def TDMAsolver(n, a, b, c, d):
    # Apply the Thomas algorithm (0-based indexing):
    # a is the sub-diagonal (a[0] unused), b the main diagonal,
    # c the super-diagonal (c[n-1] unused), d the right-hand side
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]

    # Backward substitution
    x = [0.0] * n
    x[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]

    return x

# Example usage:
n_equations = int(input("Enter the number of equations: "))

print("Enter coefficients for the tridiagonal matrix:")
a = [0.0] * n_equations
b = [0.0] * n_equations
c = [0.0] * n_equations
d = [0.0] * n_equations

for i in range(n_equations):
    a[i] = float(input("a[{}]: ".format(i)))
    b[i] = float(input("b[{}]: ".format(i)))
    c[i] = float(input("c[{}]: ".format(i)))
    d[i] = float(input("d[{}]: ".format(i)))

solution = TDMAsolver(n_equations, a, b, c, d)
print("Solution:", solution)

The Thomas algorithm is an efficient method for solving tridiagonal systems. This Python implementation can be used to solve such systems of linear equations with real coefficients.
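As a sketch of how the algorithm can be verified, the following self-contained example (the 4 × 4 test data is made up) runs the same forward sweep and back substitution, then cross-checks against a dense solve:

```python
import numpy as np

# Hypothetical tridiagonal test system: diagonal 4, off-diagonals 1
n = 4
a = [0.0, 1.0, 1.0, 1.0]   # sub-diagonal (a[0] unused)
b = [4.0, 4.0, 4.0, 4.0]   # main diagonal
c = [1.0, 1.0, 1.0, 0.0]   # super-diagonal (c[n-1] unused)
d = [5.0, 6.0, 6.0, 5.0]   # right-hand side

# Forward sweep: eliminate the sub-diagonal
for i in range(1, n):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]

# Back substitution
x = [0.0] * n
x[n - 1] = d[n - 1] / b[n - 1]
for i in range(n - 2, -1, -1):
    x[i] = (d[i] - c[i] * x[i + 1]) / b[i]

# Build the same system densely for a cross-check
A = np.diag([4.0] * n) + np.diag([1.0] * (n - 1), -1) + np.diag([1.0] * (n - 1), 1)
print(x)  # close to [1, 1, 1, 1]
```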

Remark 1
The two best direct methods for solving an n-variable system of linear equations with real coefficients, as commonly encountered when solving P.D.Es numerically in Python, are:
LU decomposition: This method factors the coefficient matrix A into the product of a lower triangular matrix L and an upper triangular matrix U, such that A = LU. The system of linear equations can then be solved by first solving the lower triangular system L⃗y = ⃗b for ⃗y, and then solving the upper triangular system U⃗x = ⃗y for ⃗x. This method is efficient and stable, and is often used in numerical methods for solving P.D.Es.
QR decomposition: This method factors the coefficient matrix A into the product of an orthogonal matrix Q and an upper triangular matrix R, such that A = QR. The system of linear equations can then be solved by solving the upper triangular system R⃗x = Qᵀ⃗b for ⃗x. This method is also efficient and stable, and is often used in numerical methods for solving P.D.Es.
Both the LU and QR decomposition methods are implemented in popular Python libraries for numerical computing, such as NumPy and SciPy. For example, the scipy.linalg.lu_factor and numpy.linalg.qr functions can be used to compute the LU and QR decompositions of a matrix, respectively. The scipy.linalg.lu_solve function can then be used to solve the system of linear equations using the computed LU factorization.

It’s worth noting that there are other direct methods for solving systems of linear equations, such as the Cholesky decomposition and the Gaussian elimination method. However, LU and QR decomposition are generally considered to be among the most efficient and stable methods for solving large systems of linear equations.
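A brief sketch of the QR route using the library function numpy.linalg.qr (the 2 × 2 system is our own made-up example):

```python
import numpy as np

# Hypothetical 2x2 test system
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# QR route: factor A = QR, then solve the triangular system R x = Q^T b
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)
print(x)  # close to [1, 2]
```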

LU Decomposition Python Code

import numpy as np

def lu_decomposition(A):
    # Doolittle factorization A = LU, with unit diagonal on L
    # (no pivoting: assumes every pivot U[i, i] encountered is non-zero)
    n = A.shape[0]
    L = np.identity(n)
    U = np.zeros(A.shape)

    for i in range(n):
        # Row i of U
        for j in range(i, n):
            s = sum(L[i, k] * U[k, j] for k in range(i))
            U[i, j] = A[i, j] - s

        # Column i of L, below the diagonal
        for j in range(i + 1, n):
            s = sum(L[j, k] * U[k, i] for k in range(i))
            L[j, i] = (A[j, i] - s) / U[i, i]

    return L, U

QR Factorization Python Code

import numpy as np

def qr_factorization(A):
    # Classical Gram-Schmidt QR factorization
    # (assumes the columns of A are linearly independent)
    n = A.shape[0]
    Q = np.zeros(A.shape)
    R = np.zeros(A.shape)

    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            # Project column j onto the already-orthonormal columns of Q
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]

    return Q, R

4 The two best iterative methods for solving an n-variable system of linear equations with real coefficients, as commonly encountered when solving P.D.Es numerically in Python, are:

1. Jacobi Iteration

2. Gauss-Seidel Iteration

(1) Jacobi Iteration
This is a simple iterative method that uses the formula $\vec{x}^{(k+1)} = D^{-1}(\vec{b} - (L + U)\vec{x}^{(k)})$ to update the solution vector $\vec{x}^{(k)}$, where D is the diagonal part of A, L is the strictly lower triangular part, and U is the strictly upper triangular part. The Jacobi iteration method is easy to implement and requires less memory than other iterative methods. However, it may converge slowly, or not converge at all, for some matrices.
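A standard sufficient condition for Jacobi convergence is strict diagonal dominance of A. A small helper to check it (the function name is ours):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| over j != i, for every row i
    diag = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off))

print(is_strictly_diagonally_dominant(np.array([[4.0, 1.0], [2.0, 5.0]])))  # True
print(is_strictly_diagonally_dominant(np.array([[1.0, 2.0], [3.0, 1.0]])))  # False
```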
Jacobi Iteration Algorithm

import numpy as np

def jacobi(A, b, N=100, x=None, tol=1e-5):
    """
    Solves the system Ax = b using the Jacobi iterative method.

    Parameters:
    A (numpy array): An n x n matrix of coefficients.
    b (numpy array): An n x 1 vector of constants.
    N (int): The maximum number of iterations.
    x (numpy array): The initial guess for the solution. If None, a
        zero vector is used.
    tol (float): The tolerance for convergence.

    Returns:
    numpy array: The solution to the system Ax = b.
    """
    # Get the size of the system
    n = len(b)

    # If no initial guess is provided, set x to a zero vector
    if x is None:
        x = np.zeros(n)

    # Create a copy of x for the next iteration
    x_next = x.copy()

    # Iterate until the error is below the tolerance or the maximum
    # number of iterations is reached
    for i in range(N):
        for j in range(n):
            # Jacobi update: use only values from the previous iterate x,
            # excluding the current component
            sum_j = sum(A[j, k] * x[k] for k in range(n) if k != j)
            x_next[j] = (b[j] - sum_j) / A[j, j]

        # Calculate the error between the current and next iterates
        error = max(abs(x_next[k] - x[k]) for k in range(n))
        # Update x to be x_next for the next iteration
        x = x_next.copy()

        # Print the iteration number and error
        print(f"Iteration {i + 1}: Error = {error:.5f}")

        # If the error is below the tolerance, break the loop
        if error < tol:
            break

    return x

Jacobi Iteration Python Code

import numpy as np

def jacobi_iteration(A, b, x0, tol=1e-5, max_iter=1000):
    n = len(b)

    # Input coefficients and right-hand side vector from the user
    print("Enter coefficients and right-hand side vector:")
    for i in range(n):
        for j in range(n):
            A[i][j] = float(input(f"a[{i}][{j}]: "))
        b[i] = float(input(f"b[{i}]: "))

    # Split A into its diagonal, strictly lower, and strictly upper parts
    # (this must happen after A has been filled in)
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    iter_count = 0

    while iter_count < max_iter:
        x = np.linalg.solve(D, b - (L + U) @ x0)
        diff = np.linalg.norm(x - x0)
        x0 = x
        if diff < tol:
            break
        iter_count += 1

    return x0, iter_count

# Example usage:
n_equations = int(input("Enter the number of equations: "))
A = np.zeros((n_equations, n_equations))
b = np.zeros(n_equations)
x0 = np.zeros(n_equations)

solution, no_iterations = jacobi_iteration(A, b, x0)
print("Solution:", solution)
print("Number of iterations:", no_iterations)

(2) Gauss-Seidel Iteration

This method is a more efficient iterative method than the Jacobi iteration. It uses the formula $\vec{x}^{(k+1)} = (D - L)^{-1}(\vec{b} - U\vec{x}^{(k)})$ to update the solution vector $\vec{x}^{(k)}$, where D is the diagonal part of A, L is the strictly lower triangular part, and U is the strictly upper triangular part. The Gauss-Seidel method is more efficient than Jacobi iteration because it uses the updated values of the solution vector within each iteration. However, it may also converge slowly, or not converge at all, for some matrices.
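More precisely, the iteration converges for every initial guess exactly when the spectral radius of the iteration matrix $(D - L)^{-1}U$ is below 1. A short check on a made-up 2 × 2 matrix:

```python
import numpy as np

# Hypothetical test matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Gauss-Seidel iteration matrix (D - L)^{-1} U; the sign convention does
# not affect the eigenvalue magnitudes
G = np.linalg.solve(np.tril(A), np.triu(A, k=1))
rho = max(abs(np.linalg.eigvals(G)))
print(rho)  # 1/12, well below 1, so the iteration converges
```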
Gauss-Seidel Iteration Algorithm

import numpy as np

def gauss_seidel(A, b, N=100, x=None, tol=1e-5):
    """
    Solves the system Ax = b using the Gauss-Seidel iterative method.

    Parameters:
    A (numpy array): An n x n matrix of coefficients.
    b (numpy array): An n x 1 vector of constants.
    N (int): The maximum number of iterations.
    x (numpy array): The initial guess for the solution. If None, a
        zero vector is used.
    tol (float): The tolerance for convergence.

    Returns:
    numpy array: The solution to the system Ax = b.
    """
    # Get the size of the system
    n = len(b)

    # If no initial guess is provided, set x to a zero vector
    if x is None:
        x = np.zeros(n)

    # Create a copy of x for the next iteration
    x_next = x.copy()

    # Iterate until the error is below the tolerance or the maximum
    # number of iterations is reached
    for i in range(N):
        for j in range(n):
            # Gauss-Seidel update: already-updated components for k < j,
            # previous-iterate components for k > j
            sum_j = sum(A[j, k] * x_next[k] for k in range(n) if k < j)
            sum_j += sum(A[j, k] * x[k] for k in range(n) if k > j)
            x_next[j] = (b[j] - sum_j) / A[j, j]

        # Calculate the error between the current and next iterates
        error = max(abs(x_next[k] - x[k]) for k in range(n))
        # Update x to be x_next for the next iteration
        x = x_next.copy()

        # Print the iteration number and error
        print(f"Iteration {i + 1}: Error = {error:.5f}")

        # If the error is below the tolerance, break the loop
        if error < tol:
            break

    return x

Gauss-Seidel Iteration Python Code

import numpy as np

def gauss_seidel_iteration(A, b, x0, tol=1e-5, max_iter=1000):
    n = len(b)

    # Input coefficients and right-hand side vector from the user
    print("Enter coefficients and right-hand side vector:")
    for i in range(n):
        for j in range(n):
            A[i][j] = float(input(f"a[{i}][{j}]: "))
        b[i] = float(input(f"b[{i}]: "))

    # Split A into its diagonal, strictly lower, and strictly upper parts
    # (this must happen after A has been filled in)
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    iter_count = 0

    while iter_count < max_iter:
        x = x0.copy()
        for i in range(n):
            # Uses the components of x already updated in this sweep
            x[i] = (b[i] - np.dot(L[i], x) - np.dot(U[i], x0)) / D[i, i]
        diff = np.linalg.norm(x - x0)
        x0 = x
        if diff < tol:
            break
        iter_count += 1

    return x0, iter_count

# Example usage:
n_equations = int(input("Enter the number of equations: "))
A = np.zeros((n_equations, n_equations))
b = np.zeros(n_equations)
x0 = np.zeros(n_equations)

solution, no_iterations = gauss_seidel_iteration(A, b, x0)
print("Solution:", solution)
print("Number of iterations:", no_iterations)

Remark 2
Successive Over-Relaxation (SOR) is a variant of the Gauss-Seidel method that can converge faster for certain types of linear systems, but it is not always more reliable than the Jacobi or Gauss-Seidel methods for solving the linear systems generated by PDEs when estimating numerical solutions in Python.

The reason is that the SOR method introduces a relaxation parameter, ω (omega), which can accelerate convergence for certain matrices but can also cause divergence for others. The optimal value of ω depends on the matrix A and the initial guess ⃗x₀, and finding it can be challenging.

In addition, the SOR method can be more sensitive to roundoff errors and numerical instabilities than the Jacobi and Gauss-Seidel methods. This is because the SOR method updates the solution vector ⃗x after each component is computed, which can amplify any errors in the intermediate calculations.

Furthermore, the SOR method requires more computations per iteration than the Jacobi and Gauss-Seidel methods, which can offset its faster convergence rate for certain matrices.

Therefore, while the SOR method can be a powerful tool for solving certain types of linear systems, it is not always more reliable than the Jacobi or Gauss-Seidel methods for the linear systems generated by PDEs. It is important to consider carefully the properties of the matrix A and the initial guess ⃗x₀ when choosing an iterative method.
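A minimal SOR sketch (the function name and the 2 × 2 test data are our own; setting ω = 1 recovers plain Gauss-Seidel):

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=500):
    # Successive Over-Relaxation: blend the Gauss-Seidel update with the
    # previous value, weighted by the relaxation parameter omega
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Already-updated components for k < i, old components for k > i
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Hypothetical symmetric positive definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b, omega=1.1)
print(x)  # close to [1/11, 7/11]
```

For symmetric positive definite matrices, SOR converges for any ω in (0, 2); the slight over-relaxation here (ω = 1.1) is an arbitrary illustrative choice, not the optimum.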

• Gauss Elimination: Input and Output
• Thomas Algorithm for Banded Matrices: Input and Output
• Jacobi Iteration: Input and Output
• Gauss-Seidel Iteration: Input and Output
(Screenshots of the program runs; not reproduced here.)
