
Parallel & Distributed Computing
Lecture # 2
By Ms. Maryam Arshad (maryam.arshad@iisat.edu.pk)


The need to learn about parallel computing
In today's digital era, parallel computing is everywhere. The field is not limited to those who deal with speed, time, and money (the major benefits of parallelism); the behavior of nature itself is also parallel:
• Rush hour traffic
• Ordering a hamburger
• Automobile assembly
• Galaxy formation
• Planetary movement
• Weather / ocean patterns
To solve daily-life problems at large scale with the help of computing, one must understand how to work with parallelism. Beyond nature, parallel computing is at work in your mobile phones, laptops, desktop computers, and other devices. So it is in the interest of all of us to understand parallelism.
Major Levels of Parallel Processing
There are three major levels of parallel processing. In this course we are going to study each of them individually, in detail.
• Thread level parallelism: CPU / GPU parallel computing using CUDA C
• Core level parallelism: multicore SMP (symmetric multiprocessing) programming using OpenMP
• Node level parallelism: cluster / multicomputer programming using MPI

Refreshing some concepts
Threads: Threads are virtual components that divide a physical CPU core into multiple logical cores.
Working of threads: A thread is created by the operating system as part of a process. Every time you open an application, it creates at least one thread to handle the tasks of that specific application.
Example: A smartphone application illustrates this. When you open an app and it shows a continuously spinning circle, that animation is handled by a thread created for this purpose only, while a second thread loads the information and presents it in the graphical user interface.
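The two-thread pattern described above can be sketched in plain C with POSIX threads. This is a minimal illustration, not production code: load_data and show_spinner are hypothetical names chosen for this example, and a real program would protect the shared flag with a mutex or a C11 atomic.

/* Minimal sketch of the spinner/loader pattern using POSIX threads.
 * load_data and show_spinner are hypothetical names for illustration. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int loading_done = 0;  /* shared flag; a real program would
                                          use a mutex or C11 atomic */

static void *load_data(void *arg) {
    (void)arg;
    sleep(2);                 /* pretend to fetch and parse information */
    loading_done = 1;
    return NULL;
}

static void *show_spinner(void *arg) {
    (void)arg;
    while (!loading_done) {   /* the "spinning circle" thread */
        printf("loading...\n");
        usleep(300000);       /* redraw the spinner every 0.3 s */
    }
    return NULL;
}

int main(void) {
    pthread_t ui, worker;
    pthread_create(&ui, NULL, show_spinner, NULL);
    pthread_create(&worker, NULL, load_data, NULL);
    pthread_join(worker, NULL);
    pthread_join(ui, NULL);
    printf("data ready: updating the GUI\n");
    return 0;
}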

Refreshing some concepts
Please note that a traditional processor core supports two hardware threads: a single CPU core can run up to 2 threads per core. For example, a dual-core CPU (2 cores) has 4 threads, an octa-core CPU (8 cores) has 16 threads, and so on. By increasing the number of threads, processors can perform better with multithreaded applications.
GPU (graphics processing unit): A GPU has a very large number of threads, thousands or even tens of thousands, for processing data in parallel. If you have an image, a matrix of pixels, to process on a GPU, you can process every pixel in parallel. GPUs are typically located on a video card, alongside a general-purpose CPU on the motherboard.
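As a quick check on your own machine, the number of logical processors (hardware threads) can be queried in C; a minimal sketch, assuming a Linux or other Unix-like system where sysconf supports this query:

/* Print the number of online logical processors (hardware threads).
 * On a dual-core CPU with 2-way SMT this typically prints 4. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors: %ld\n", n);
    return 0;
}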

Thread level parallelism
This is the lowest level of parallelism, achieved through multithreading. As we know, every CPU core can run multiple threads. To achieve thread level parallelism, one must understand the architecture of the CPU / GPU and CUDA.
CUDA: CUDA is a parallel computing platform and programming model for programming GPUs. CUDA makes using a GPU for general-purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages.
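As a small taste of what later lectures cover in detail, here is a minimal CUDA C sketch in which each GPU thread handles exactly one array element, the same pattern used to process one pixel per thread. The kernel name scale is just an illustrative choice.

/* Minimal CUDA C sketch: one GPU thread per array element. */
#include <stdio.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)
        data[i] *= factor;     /* each thread touches exactly one element */
}

int main(void) {
    const int n = 1 << 20;
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    /* ... fill the array with cudaMemcpy from host data ... */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;  /* enough blocks to cover n */
    scale<<<blocks, threads>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}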

Core level Parallelism
Multicore processor: A multicore processor is a computer processor on a single integrated circuit with two or more separate processing units, called cores.
Within a node (a PC or any digital device), the processor can contain multiple cores. One needs to understand the concept of cores and their architecture before programming core level parallelism.
Core level parallelism is achieved through multicore SMP (symmetric multiprocessing) programming using OpenMP.
OpenMP: OpenMP is an API (compiler directives plus a runtime library) for parallel programming in the SMP model. OpenMP supports C, C++, and Fortran.
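A minimal OpenMP sketch of the SMP model: a single directive asks the compiler to split the loop iterations across the available cores (compile with, e.g., gcc -fopenmp).

/* OpenMP sketch: the pragma splits loop iterations across cores. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;   /* iterations run concurrently on different cores */
        sum += a[i];
    }

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}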

Node Level Parallelism
When independent devices, each with its own processing units, work together as a group to achieve parallelism, they form a cluster. These devices are linked to each other through a high-speed network using high-speed switches.

One login node receives the job and distributes it among all the other nodes over the high-speed network. This is where distributed computing comes into the picture.
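A minimal MPI sketch of this arrangement: the same program starts on every node, each process learns its rank, and rank 0 plays the role of the login/root node that hands out work (launch with, e.g., mpirun -np 4 ./a.out).

/* MPI sketch: rank 0 acts as the login/root node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

    if (rank == 0)
        printf("root node: distributing the job among %d workers\n", size - 1);
    else
        printf("worker %d of %d: waiting for my piece of the job\n", rank, size);

    MPI_Finalize();
    return 0;
}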

Distributed Computing
Distributed computing is a field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal.

Parallel Computing Redefined
Parallel computing is the use of multiple processing elements simultaneously to solve a problem. Problems are broken down into instructions and solved concurrently, with every resource applied to the work operating at the same time.
• Advantages of parallel computing over serial computing are as follows:
• It saves time and money, as many resources working together reduce the time and cut potential costs.
• Larger problems can be impractical to solve with serial computing.
• It can take advantage of non-local resources when the local resources are finite.
• Serial computing 'wastes' potential computing power; parallel computing makes better use of the hardware.

Usage of Parallel Computing

Programming Parallel Computers – How?
• Extend compilers: translate sequential programs into parallel programs
• Extend languages: add parallel operations (see the sketch after this list)
• Add a parallel language layer on top of a sequential language
• Define a totally new parallel language and compiler system
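The "extend languages" approach is the one OpenMP takes. A minimal sketch of the idea: the base C loop is untouched, and one added directive makes it parallel (a compiler without OpenMP support simply ignores the pragma).

/* "Extend languages" in practice: one directive added to ordinary C. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma omp parallel for   /* the only line that differs from serial code */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}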

Thank You

