
Multiple Integrals and Probability: A Numerical Exploration

March 30, 2012

The homework questions are due in class on Monday 9 April.

1 Probability
Definition 1.1. $f : U \subset \mathbb{R}^2 \to \mathbb{R}^+$ is a probability density function if
$$\iint_U f \, dA = 1.$$
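For example, the constant function $f = 1$ on the unit square $U = [0, 1]^2$ is a probability density function, since
$$\iint_U f \, dA = \int_0^1 \int_0^1 1 \, dy \, dx = 1.$$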

Definition 1.2. If $f$ is a probability density function defined on the set $U \subset \mathbb{R}^2$, then the probability of events in the set $W \subset U$ occurring is
$$P(W) = \iint_W f \, dA.$$

Example 1.1. The joint density for it to snow $x$ inches tomorrow and for Kelly to win $y$ dollars in the lottery tomorrow is given by
$$f = \frac{c}{(1+x)(100+y)}$$
for $(x, y) \in [0, 100] \times [0, 100]$ and $f = 0$ otherwise. Find $c$.
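One way to find $c$: since the density factors into a function of $x$ times a function of $y$, the condition in Definition 1.1 becomes a product of one-dimensional integrals,
$$1 = c \int_0^{100} \frac{dx}{1+x} \int_0^{100} \frac{dy}{100+y} = c \ln(101) \ln(2),$$
so $c = 1/(\ln(101)\ln(2))$.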
Definition 1.3. Suppose $X$ is a random variable with probability density function $f_1(x)$ and $Y$ is a random variable with a probability density function $f_2(y)$. Then $X$ and $Y$ are independent random variables if their joint density function is
$$f(x, y) = f_1(x) f_2(y).$$
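For example, the joint density $f(x, y) = e^{-x-y}$ on $[0, \infty) \times [0, \infty)$ factors as
$$f(x, y) = e^{-x} \cdot e^{-y} = f_1(x) f_2(y),$$
so the corresponding random variables $X$ and $Y$ are independent.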

Example 1.2. The amount of snow that falls tomorrow and the amount Kelly wins in the lottery tomorrow are independent random variables.

Definition 1.4. If $f(x, y)$ is a probability density function for the random variables $X$ and $Y$, the $X$ mean is
$$\mu_1 = \bar{X} = \iint x f \, dA$$
and the $Y$ mean is
$$\mu_2 = \bar{Y} = \iint y f \, dA.$$

Remark 1.1. The X mean and the Y mean are the expected values of X and Y.

Definition 1.5. If $f(x, y)$ is a probability density function for the random variables $X$ and $Y$, the $X$ variance is
$$\sigma_1^2 = \overline{(X - \bar{X})^2} = \iint (x - \bar{X})^2 f \, dA$$
and the $Y$ variance is
$$\sigma_2^2 = \overline{(Y - \bar{Y})^2} = \iint (y - \bar{Y})^2 f \, dA.$$

Definition 1.6. The standard deviation is defined to be the square root of the variance.
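For example, for the uniform density $f = 1$ on $[0, 1]^2$, these definitions give
$$\bar{X} = \int_0^1 \int_0^1 x \, dx \, dy = \frac{1}{2}, \qquad \sigma_1^2 = \int_0^1 \int_0^1 \left( x - \tfrac{1}{2} \right)^2 dx \, dy = \frac{1}{12},$$
so the standard deviation of $X$ is $1/\sqrt{12}$.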

Example 1.3. Find an expression for the probability that it will snow more than
1.1 times the expected snowfall and also that Kelly will win more than 1.2 times the
expected amount in the lottery.
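Using Definition 1.2 with the density from Example 1.1, one way to write the required expression (with $\bar{X}$ and $\bar{Y}$ the means from Definition 1.4) is
$$P = \int_{1.1\bar{X}}^{100} \int_{1.2\bar{Y}}^{100} \frac{c}{(1+x)(100+y)} \, dy \, dx.$$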

Homework Question 1: A class is graded on a curve. It is assumed that the class is a representative sample of the population and that the probability density function for the numerical score $x$ is given by
$$f(x) = C \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right).$$
For simplicity we assume that $x$ can take any value in $(-\infty, \infty)$, though in actual fact the exam is scored from 0 to 100.

a) Determine C using results from your previous homework.

b) Suppose there are 240 students in the class and the mean and standard deviation for the class are not reported. As an enterprising student, you poll 60 of your fellow students (we shall suppose they are selected randomly). You find that the mean for these 60 students is 55% and the standard deviation is 10%. Use the Student's t distribution (http://en.wikipedia.org/wiki/Student%27s_t-distribution) to estimate the 90% confidence interval for the actual class mean. Make a sketch of the t-distribution probability density function and shade the region which corresponds to the 90% confidence interval. (The Student's t distribution is implemented in many numerical packages such as Maple, Mathematica, Matlab, R, Sage, etc., so if you need numerical results, it is helpful to use one of these packages.)

Remark: Fortunately, all the students are hard working, so the possibility of a negative score, although nonzero, is extremely low, and so we neglect it to make the above computation easier.

2 Riemann Integration
Recall that we can approximate integrals by Riemann sums. There are many integrals
one cannot evaluate analytically, but for which a numerical answer is required. In
this section, we shall explore a simple way of doing this on a computer. Suppose we
want to find
$$I_{2d} = \int_0^1 \int_0^4 \left( x^2 + 2y^2 \right) dy \, dx.$$
If we do this analytically we find
$$I_{2d} = 44.$$
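For reference, the analytic evaluation is
$$I_{2d} = \int_0^1 \left[ x^2 y + \frac{2y^3}{3} \right]_{y=0}^{y=4} dx = \int_0^1 \left( 4x^2 + \frac{128}{3} \right) dx = \frac{4}{3} + \frac{128}{3} = 44.$$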

Let us suppose we have forgotten how to integrate, and so we do this numerically. We can do so using the following Matlab code:
% A program to approximate an integral

clear all; format compact; format short;

nx=1000; % number of points in x
xend=1; % last discretization point
xstart=0; % first discretization point
dx=(xend-xstart)/(nx-1); % size of each x sub-interval

ny=4000; % number of points in y
yend=4; % last discretization point
ystart=0; % first discretization point
dy=(yend-ystart)/(ny-1); % size of each y sub-interval

% create vectors with points for x and y
for i=1:nx
    x(i)=xstart+(i-1)*dx;
end
for j=1:ny
    y(j)=ystart+(j-1)*dy;
end

% Approximate the integral by a sum
I2d=0;
for i=1:nx
    for j=1:ny
        I2d=I2d+(x(i)^2+2*y(j)^2)*dy*dx;
    end
end
% print out final answer
I2d
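For comparison, here is a rough sketch of how the same sum could be written using Matlab's matrix and vector operations, avoiding the explicit loops (the grid is the same as in the program above):

% A vectorized sketch of the Riemann sum above
nx=1000; ny=4000;                 % number of points in x and y
x=linspace(0,1,nx); dx=x(2)-x(1); % discretization points and spacing in x
y=linspace(0,4,ny); dy=y(2)-y(1); % discretization points and spacing in y
[X,Y]=meshgrid(x,y);              % matrices of all (x,y) grid pairs
I2d=sum(sum(X.^2+2*Y.^2))*dx*dy   % approximate the integral by a sum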

We can do something similar in three dimensions. Suppose we want to calculate
$$I_{3d} = \int_0^1 \int_0^1 \int_0^4 \left( x^2 + 2y^2 + 3z^2 \right) dz \, dy \, dx.$$
Analytically we find that
$$I_{3d} = 68.$$
Homework Question 2:

a) Modify the Matlab code to perform the three dimensional integral.

b) Try to determine how the accuracy of either the two or three dimensional method varies as the number of subintervals is changed.

3 Monte Carlo Integration

(This section is taken from Chapter 3 of Vector Calculus by M. Corral, available at http://www.mecmath.net/, where Sage programs for doing Monte Carlo integration can also be found.)

It is possible to extend the above integration schemes to higher and higher dimensional integrals. This can become computationally intensive, and an alternate method of integration based on probability is often used. The method we will discuss is called the Monte Carlo method. The idea behind it is based on the concept of the average value of a function, which you learned in single-variable calculus. Recall that for a continuous function $f(x)$, the average value $\bar{f}$ of $f$ over an interval $[a, b]$ is defined as
$$\bar{f} = \frac{1}{b-a} \int_a^b f(x) \, dx. \tag{1}$$

The quantity $b - a$ is the length of the interval $[a, b]$, which can be thought of as the “volume” of the interval. Applying the same reasoning to functions of two or three variables, we define the average value of $f(x, y)$ over a region $R$ to be
$$\bar{f} = \frac{1}{A(R)} \iint_R f(x, y) \, dA, \tag{2}$$
where $A(R)$ is the area of the region $R$, and we define the average value of $f(x, y, z)$ over a solid $S$ to be
$$\bar{f} = \frac{1}{V(S)} \iiint_S f(x, y, z) \, dV, \tag{3}$$
where $V(S)$ is the volume of the solid $S$. Thus, for example, we have
$$\iint_R f(x, y) \, dA = A(R) \bar{f}. \tag{4}$$

The average value of $f(x, y)$ over $R$ can be thought of as representing the sum of all the values of $f$ divided by the number of points in $R$. Unfortunately there are an infinite number (in fact, uncountably many) of points in any region, i.e. they can not be listed in a discrete sequence. But what if we took a very large number $N$ of random points in the region $R$ (which can be generated by a computer) and then took the average of the values of $f$ for those points, and used that average as the value of $\bar{f}$? This is exactly what the Monte Carlo method does. So in formula (4) the approximation we get is
$$\iint_R f(x, y) \, dA \approx A(R) \bar{f} \pm A(R) \sqrt{\frac{\overline{f^2} - (\bar{f})^2}{N}}, \tag{5}$$
where
$$\bar{f} = \frac{\sum_{i=1}^{N} f(x_i, y_i)}{N} \quad \text{and} \quad \overline{f^2} = \frac{\sum_{i=1}^{N} (f(x_i, y_i))^2}{N}, \tag{6}$$
with the sums taken over the $N$ random points $(x_1, y_1), \ldots, (x_N, y_N)$. The $\pm$ “error term” in formula (5) does not really provide hard bounds on the approximation. It represents a single standard deviation from the expected value of the integral. That is, it provides a likely bound on the error. Due to its use of random points, the Monte Carlo method is an example of a probabilistic method (as opposed to deterministic methods such as the Riemann sum approximation method, which use a specific formula for generating points).
For example, we can use formula (5) to approximate the volume $V$ under the surface $z = x^2 + 2y^2$ over the rectangle $R = [0, 1] \times [0, 4]$. Recall that the actual volume is 44. Below is a Matlab code that calculates the volume using Monte Carlo integration:

% A program to approximate an integral using the Monte Carlo method

% This program can be made much faster by using Matlab's matrix and vector
% operations, however to allow easy translation to other languages we have
% made it as simple as possible.

Numpoints=512; % number of random points

I2d=0;         % initialize value
I2dsquare=0;   % initialize sum of squares, used for the variance
for n=1:Numpoints
    % generate random numbers drawn from a uniform distribution on (0,1),
    % scaling y to lie in (0,4)
    x=rand(1);
    y=rand(1)*4;
    I2d=I2d+x^2+2*y^2;
    I2dsquare=I2dsquare+(x^2+2*y^2)^2;
end
% we scale the integral by the total area and divide by the number of
% points used
I2d=I2d*4/Numpoints
% we also output an estimated error, the standard deviation in formula (5)
I2dsquare=I2dsquare*4/Numpoints;
EstimError=sqrt((4*I2dsquare-I2d^2)/Numpoints)

The results of running this program with various numbers of random points are
shown below:
N = 16: 41.3026 +/- 30.9791
N = 256: 47.1855 +/- 9.0386
N = 4096: 43.4527 +/- 2.0280
N = 65536: 44.0026 +/- 0.5151
As you can see, the approximation is fairly good. As $N \to \infty$, it can be shown that the Monte Carlo approximation converges to the actual volume, with the error decreasing on the order of $O(1/\sqrt{N})$.
In the above example the region $R$ was a rectangle. To use the Monte Carlo method for a nonrectangular (bounded) region $R$, only a slight modification is needed. Pick a rectangle $\tilde{R}$ that encloses $R$, and generate random points in that rectangle as before. Then use those points in the calculation of $\bar{f}$ only if they are inside $R$. There is no need to calculate the area of $R$ for formula (5) in this case, since the exclusion of points not inside $R$ allows you to use the area of the rectangle $\tilde{R}$ instead, similar to before.
For instance, one can show that the volume under the surface $z = 1$ over the nonrectangular region $R = \{(x, y) : x^2 + y^2 \le 1\}$ is $\pi$. Since the rectangle $\tilde{R} = [-1, 1] \times [-1, 1]$ contains $R$, we can use a similar program to the one we used, the largest change being a check to see if $x^2 + y^2 < 1$ for a random point $(x, y)$ in $[-1, 1] \times [-1, 1]$. A Matlab code listing which demonstrates this is below:
% This program can be made much faster by using Matlab's matrix and vector
% operations, however to allow easy translation to other languages we have
% made it as simple as possible.

Numpoints=65536; % number of random points

I2d=0;         % initialize value
I2dsquare=0;   % initialize sum of squares, used for the variance
for n=1:Numpoints
    % generate random numbers drawn from a uniform distribution on (0,1) and
    % scale them to (-1,1)
    x=2*rand(1)-1;
    y=2*rand(1)-1;
    if ((x^2+y^2) <1)
        I2d=I2d+1;
        I2dsquare=I2dsquare+1;
    end
end
% we scale the integral by the total area and divide by the number of
% points used
I2d=I2d*4/Numpoints
% we also output an estimated error, the standard deviation in formula (5)
I2dsquare=I2dsquare*4/Numpoints;
EstimError=sqrt((4*I2dsquare-I2d^2)/Numpoints)

The results of running the program with various numbers of random points are shown
below:

N = 16: 3.5000 +/- 2.9580
N = 256: 3.2031 +/- 0.6641
N = 4096: 3.1689 +/- 0.1639
N = 65536: 3.1493 +/- 0.0407

To use the Monte Carlo method to evaluate triple integrals, you will need to generate random triples $(x, y, z)$ in a parallelepiped, instead of random pairs $(x, y)$ in a rectangle, and use the volume of the parallelepiped instead of the area of a rectangle in formula (5), as sketched below. For a more detailed discussion of numerical integration methods, please take a further course in mathematics such as Math 371, Math 471 or Math 472.
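As a rough sketch in Matlab, the following program approximates the integral $I_{3d}$ from Section 2 by sampling random triples in the box $[0, 1] \times [0, 1] \times [0, 4]$, which has volume 4; the exact value, 68, lets you check the output (the variable names mirror the two dimensional programs above):

% A sketch of Monte Carlo integration for a triple integral over the
% box [0,1]x[0,1]x[0,4], which has volume 4
Numpoints=65536; % number of random points
I3d=0;           % initialize value
for n=1:Numpoints
    % generate a random triple (x,y,z) uniformly distributed in the box
    x=rand(1);
    y=rand(1);
    z=rand(1)*4;
    I3d=I3d+x^2+2*y^2+3*z^2;
end
% scale by the volume of the box and divide by the number of points used
I3d=I3d*4/Numpoints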
Homework Question 3

a) Write a program that uses the Monte Carlo method to approximate the double integral $\iint_R e^{xy} \, dA$, where $R = [0, 1] \times [0, 1]$. Show the program output for $N$ = 10, 100, 1000, 10000, 100000 and 1000000 random points.

b) Write a program that uses the Monte Carlo method to approximate the triple integral $\iiint_S e^{xyz} \, dV$, where $S = [0, 1] \times [0, 1] \times [0, 1]$. Show the program output for $N$ = 10, 100, 1000, 10000, 100000 and 1000000 random points.

c) Use the Monte Carlo method to approximate the volume of a sphere of radius
1.

4 Parallel Monte Carlo Integration

As you may have noticed, the algorithms are simple, but can require very many sample points to become accurate. It is therefore useful to run these algorithms on a parallel computer. We will demonstrate a parallel Monte Carlo calculation of $\pi$. Before we can do this, we need to learn how to use a parallel computer. We shall use Trestles, located at the San Diego Supercomputer Center. Information on this computer is available at: http://www.sdsc.edu/us/resources/trestles/

4.1 MPI
Before we can do Monte Carlo integration, we need to learn a little about parallel programming. A copy of the current MPI standard can be found at http://www.mpi-forum.org/. It allows for parallelization of Fortran, C and C++ programs. There are newer parallel programming languages such as Co-Array Fortran (CAF) and Unified Parallel C (UPC) which allow the programmer to view memory as a single addressable space even on a distributed memory machine. However, computer hardware limitations imply that most of the programming concepts used when writing MPI programs will be required to write programs in CAF and UPC. Compiler technology for these languages is also not as well developed as compiler technology for older languages such as Fortran and C, so at the present time, Fortran and C dominate high performance computing. An introduction to the essential concepts required for writing and using MPI programs can be found at http://www.shodor.org/refdesk/Resources/Tutorials/. More information on MPI can be found in Gropp, Lusk and Skjellum, Using MPI, MIT Press (1999), in Gropp, Lusk and Thakur, Using MPI-2, MIT Press (1999), and at https://computing.llnl.gov/tutorials/mpi/. There are many resources available online; once the basic concepts have been mastered, what is most useful is an index of MPI commands. A search engine will usually give you sources of listings, but we have found the following sites useful:

• http://publib.boulder.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.fomp200%2Fipezps00172.htm

• http://www.open-mpi.org/doc/v1.4/

Homework Question 4

a) What does MPI stand for?

b) Please read the tutorials at http://www.shodor.org/refdesk/Resources/Tutorials/BasicMPI/ and at https://computing.llnl.gov/tutorials/mpi/, then explain what the following commands do:

• USE mpi or INCLUDE 'mpif.h'
• MPI_INIT
• MPI_COMM_SIZE
• MPI_COMM_RANK
• MPI_FINALIZE

c) What is the version number of the current MPI standard?

d) Try to understand the Hello World program in listing 1. Run the program in listing 1 on 32 and 64 MPI processes. (One can run this program on many more than 64 processes, however the output becomes quite excessive.) Put the output of each run in your solutions; the output will be in a file of the form
joboutput
An example makefile to compile this on Trestles is in listing 2. An example submission script is in listing 3. On Trestles, there is a maximum of 32 cores per node, so if more than 32 MPI processes are required, one needs to change the number of nodes as well. The total number of cores required is equal to the number of nodes multiplied by the number of processes per node. To change the number of MPI processes that the program will run on from 32 to 64, change
nodes=1:ppn=32
to
nodes=2:ppn=32
and also change the submission script from
mpirun_rsh -np 32 -hostfile $PBS_NODEFILE helloworld
to
mpirun_rsh -np 64 -hostfile $PBS_NODEFILE helloworld.

!---------------------------------------------------------------------
!
! PURPOSE
!
! This program uses MPI to print hello world from all available
! processes
!
! .. Scalars ..
!  myid     = process id
!  numprocs = total number of MPI processes
!  ierr     = error code
!
! REFERENCES
!  http://en.wikipedia.org/wiki/OpenMP
!
! ACKNOWLEDGEMENTS
!  The program below was modified from one available at the internet
!  address in the references. This internet address was last checked
!  on 30 December 2011
!
! External libraries required
!  MPI library
!---------------------------------------------------------------------

PROGRAM hello90
  USE MPI
  INTEGER(kind=4) :: myid, numprocs, ierr

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

  PRINT*, 'Hello World from process', myid

  CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
  IF (myid == 0) THEN
    PRINT*, 'There are', numprocs, 'MPI processes'
  END IF
  CALL MPI_FINALIZE(ierr)
END PROGRAM

Listing 1: A Fortran program which demonstrates parallelism using MPI.


# define the compiler
COMPILER = mpif90
# compilation settings, optimization, precision, parallelization
FLAGS = -O0

# libraries
LIBS =
# source list for main program
SOURCES = helloworld.f90

test: $(SOURCES)
	${COMPILER} -o helloworld $(FLAGS) $(SOURCES)

clean:
	rm *.o

clobber:
	rm helloworld

Listing 2: An example makefile for compiling the helloworld program in listing 1.


#!/bin/bash
# the queue to be used.
#PBS -q normal
# specify your project allocation
#PBS -A mia122
# number of nodes and number of processors per node requested
#PBS -l nodes=1:ppn=32
# requested wall-clock time.
#PBS -l walltime=00:05:00
# name of the standard out file to be "output-file".
#PBS -o joboutput
# name of the job
#PBS -N MPIHello
# Email address to send a notification to, change "youremail" appropriately
#PBS -M youremail@umich.edu
# send a notification for job abort, begin and end
#PBS -m abe
#PBS -V
cd $PBS_O_WORKDIR # change to the working directory
mpirun_rsh -np 32 -hostfile $PBS_NODEFILE helloworld

Listing 3: An example submission script for use on Trestles.


We now examine a Fortran program for calculating π. These programs are taken
from http://chpc.wustl.edu/mpi-fortran.html, where further explanation can
be found. The original source of these programs appears to be Using MPI by Gropp,
Lusk and Skjellum.

Serial

!---------------------------------------------------------------------
!
! PURPOSE
!
! This program uses a Monte Carlo method to calculate pi
!
! .. Parameters ..
!  npts    = total number of Monte Carlo points
!  xmin    = lower bound for integration region
!  xmax    = upper bound for integration region
! .. Scalars ..
!  i       = loop counter
!  f       = average value from summation
!  sum     = total sum
!  randnum = random number generated from (0,1) uniform distribution
!  x       = current Monte Carlo location
!
! REFERENCES
!  http://chpc.wustl.edu/mpi-fortran.html
!  Gropp, Lusk and Skjellum, "Using MPI" MIT Press (1999)
!
! ACKNOWLEDGEMENTS
!  The program below was modified from one available at the internet
!  address in the references. This internet address was last checked
!  on 30 March 2012
!
! External libraries required
!  None
!---------------------------------------------------------------------
PROGRAM montecarlo
  IMPLICIT NONE

  INTEGER(kind=8), PARAMETER :: npts = 1e10
  REAL(kind=8), PARAMETER    :: xmin=0.0d0, xmax=1.0d0
  INTEGER(kind=8)            :: i
  REAL(kind=8)               :: f, sum, randnum, x

  ! set initial sum to zero
  sum = 0.0d0
  DO i=1,npts
    CALL random_number(randnum)
    x = (xmax-xmin)*randnum + xmin
    sum = sum + 4.0d0/(1.0d0 + x**2)
  END DO
  f = sum/npts
  PRINT*, 'PI calculated with', npts, 'points =', f

  STOP
END PROGRAM

Listing 4: A serial Fortran program which uses a Monte Carlo method to calculate π.


# define the compiler
COMPILER = mpif90
# compilation settings, optimization, precision, parallelization
FLAGS = -O0

# libraries
LIBS =
# source list for main program
SOURCES = montecarloserial.f90

test: $(SOURCES)
	${COMPILER} -o montecarloserial $(FLAGS) $(SOURCES)

clean:
	rm *.o

clobber:
	rm montecarloserial

Listing 5: An example makefile for compiling the serial Monte Carlo program in listing 4.


#!/bin/bash
# the queue to be used.
#PBS -q shared
# specify your project allocation
#PBS -A mia122
# number of nodes and number of processors per node requested
#PBS -l nodes=1:ppn=1
# requested wall-clock time.
#PBS -l walltime=00:05:00
# name of the standard out file to be "output-file".
#PBS -o joboutput
# name of the job
#PBS -N MCserial
# Email address to send a notification to, change "youremail" appropriately
#PBS -M youremail@umich.edu
# send a notification for job abort, begin and end
#PBS -m abe
#PBS -V
cd $PBS_O_WORKDIR # change to the working directory
mpirun_rsh -np 1 -hostfile $PBS_NODEFILE montecarloserial

Listing 6: An example submission script for use on Trestles.


Parallel

!---------------------------------------------------------------------
!
! PURPOSE
!
! This program uses MPI to do a parallel Monte Carlo calculation of pi
!
! .. Parameters ..
!  npts    = total number of Monte Carlo points
!  xmin    = lower bound for integration region
!  xmax    = upper bound for integration region
! .. Scalars ..
!  mynpts  = this process's number of Monte Carlo points
!  myid    = process id
!  nprocs  = total number of MPI processes
!  ierr    = error code
!  i       = loop counter
!  f       = average value from summation
!  sum     = total sum
!  mysum   = sum on this process
!  randnum = random number generated from (0,1) uniform distribution
!  x       = current Monte Carlo location
!  start   = simulation start time
!  finish  = simulation end time
!
! REFERENCES
!  http://chpc.wustl.edu/mpi-fortran.html
!  Gropp, Lusk and Skjellum, "Using MPI" MIT Press (1999)
!
! ACKNOWLEDGEMENTS
!  The program below was modified from one available at the internet
!  address in the references. This internet address was last checked
!  on 30 March 2012
!
! External libraries required
!  MPI library
!---------------------------------------------------------------------
PROGRAM montecarlompi
  USE MPI
  IMPLICIT NONE

  INTEGER(kind=8), PARAMETER :: npts = 1e10
  REAL(kind=8), PARAMETER    :: xmin=0.0d0, xmax=1.0d0
  INTEGER(kind=8)            :: mynpts
  INTEGER(kind=4)            :: ierr, myid, nprocs
  INTEGER(kind=8)            :: i
  REAL(kind=8)               :: f, sum, mysum, randnum
  REAL(kind=8)               :: x, start, finish

  ! Initialize MPI
  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  start = MPI_WTIME()

  ! Calculate the number of points each MPI process needs to generate
  IF (myid .eq. 0) THEN
    mynpts = npts - (nprocs-1)*(npts/nprocs)
  ELSE
    mynpts = npts/nprocs
  ENDIF

  ! set initial sum to zero
  mysum = 0.0d0
  ! use loop on local process to generate portion of Monte Carlo integral
  DO i=1,mynpts
    CALL random_number(randnum)
    x = (xmax-xmin)*randnum + xmin
    mysum = mysum + 4.0d0/(1.0d0 + x**2)
  ENDDO

  ! Do a reduction and sum the results from all processes
  CALL MPI_REDUCE(mysum, sum, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                  0, MPI_COMM_WORLD, ierr)
  finish = MPI_WTIME()

  ! Get one process to output the result and running time
  IF (myid .eq. 0) THEN
    f = sum/npts
    PRINT*, 'PI calculated with', npts, 'points =', f
    PRINT*, 'Program took', finish-start, 'for Time stepping'
  ENDIF

  CALL MPI_FINALIZE(ierr)

  STOP
END PROGRAM

Listing 7: A Fortran program which uses MPI to do a parallel Monte Carlo calculation of π.


# define the compiler
COMPILER = mpif90
# compilation settings, optimization, precision, parallelization
FLAGS = -O0

# libraries
LIBS =
# source list for main program
SOURCES = montecarloparallel.f90

test: $(SOURCES)
	${COMPILER} -o montecarloparallel $(FLAGS) $(SOURCES)

clean:
	rm *.o

clobber:
	rm montecarloparallel

Listing 8: An example makefile for compiling the parallel Monte Carlo program in listing 7.


#!/bin/bash
# the queue to be used.
#PBS -q normal
# specify your project allocation
#PBS -A mia122
# number of nodes and number of processors per node requested
#PBS -l nodes=1:ppn=32
# requested wall-clock time.
#PBS -l walltime=00:05:00
# name of the standard out file to be "output-file".
#PBS -o joboutput
# name of the job, you may want to change this so it is unique to you
#PBS -N MPI_MCPARALLEL
# Email address to send a notification to, change "youremail" appropriately
#PBS -M youremail@umich.edu
# send a notification for job abort, begin and end
#PBS -m abe
#PBS -V

# change to the job submission directory
cd $PBS_O_WORKDIR
# Run the job
mpirun_rsh -np 32 -hostfile $PBS_NODEFILE montecarloparallel

Listing 9: An example submission script for use on Trestles.


Homework Question 5

a) Explain why using Monte Carlo to evaluate
$$\int_0^1 \frac{1}{1+x^2} \, dx$$
allows you to find $\pi$ and, in your own words, explain what the serial and parallel programs do.

b) Find the time it takes to run the parallel Monte Carlo program on 32, 64, 128, 256 and 512 cores.
Bonus Questions

a) Use a parallel Monte Carlo integration program to evaluate
$$\iint \left( x^2 + y^6 + \exp(xy) \cos(y \exp(x)) \right) dA$$
over the unit circle.

b) Use a parallel Monte Carlo integration program to approximate the volume of the ellipsoid $\frac{x^2}{9} + \frac{y^2}{4} + \frac{z^2}{1} = 1$.

c) Write parallel programs to find the volume of the 4 dimensional sphere
$$1 \ge \sum_{i=1}^{4} x_i^2.$$
Try both Monte Carlo and Riemann sum techniques.
