Research Scope in Parallel Programming and Parallel Computing
By Shitalkumar R. Sukhdeve
(www.Balloys.com)
Parallel computing
• Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved at the same time.
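For illustration, here is a minimal sketch of this principle in C++ (using std::thread; the million-element array is an arbitrary stand-in for a "large problem"): the summation is divided into two halves, which are computed at the same time and then combined.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<long long> data(1'000'000, 1);  // the "large problem": sum a million values
    long long left = 0, right = 0;
    auto mid = data.begin() + data.size() / 2;

    // Divide the problem into two smaller ones and solve them simultaneously.
    std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "sum = " << left + right << '\n';  // combine the partial results
}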
Parallel computing
• Different forms of parallel computing:
1. bit-level,
2. instruction-level,
3. data, and
4. task parallelism (the last two are sketched below).
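The first two forms live inside the hardware, but data and task parallelism can be sketched directly in code. A hypothetical C++ example contrasting them (the data and the two tasks are arbitrary choices):

#include <algorithm>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6, 7, 8};

    // Data parallelism: the same operation (doubling) applied to different halves of one dataset.
    auto dbl = [](int* first, int* last) { std::for_each(first, last, [](int& x) { x *= 2; }); };
    std::thread d1(dbl, v.data(), v.data() + 4);
    std::thread d2(dbl, v.data() + 4, v.data() + 8);
    d1.join(); d2.join();

    // Task parallelism: two unrelated tasks running at the same time.
    std::thread t1([]  { std::cout << "task A: logging\n"; });
    std::thread t2([&] { std::sort(v.begin(), v.end()); });  // task B: sorting
    t1.join(); t2.join();
}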
Parallel computing
• Parallel computing is closely related to concurrent computing, but the two are distinct:
• it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU).
Parallel computing
• Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task.
Parallel Computing and Implementation Issues
• In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism.
• Explicitly parallel algorithms, however, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common (see the sketch below).
• Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
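A minimal sketch of the most common such bug, a race condition on a shared counter, together with its mutex-based fix (C++, std::thread and std::mutex):

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;           // shared state
std::mutex counter_mutex;  // protects counter in the fixed version

void unsafe_increment() {
    for (int i = 0; i < 100000; ++i)
        ++counter;  // race: the read-modify-write is not atomic
}

void safe_increment() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // synchronized access
        ++counter;
    }
}

int main() {
    std::thread a(unsafe_increment), b(unsafe_increment);
    a.join(); b.join();
    std::cout << "racy total (often < 200000):  " << counter << '\n';

    counter = 0;
    std::thread c(safe_increment), d(safe_increment);
    c.join(); d.join();
    std::cout << "locked total (always 200000): " << counter << '\n';
}

The lock makes the result deterministic, but it also serializes the increments, which is exactly the kind of synchronization cost described above.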
Software Solutions
1. Programming parallel computers
• Concurrent programming languages,
• libraries,
• APIs, and
• parallel programming models (such as algorithmic skeletons, sketched below).
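To illustrate the last item: an algorithmic skeleton packages a recurring parallel pattern behind a simple interface, so the caller supplies only the sequential operation. A hypothetical minimal "map" skeleton in C++ (the name parallel_map and the fixed two-way split are my own simplifications):

#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// A minimal "map" skeleton: apply op to every element, split across two threads.
// The caller never touches threads; the pattern itself is reusable.
template <typename T, typename Op>
void parallel_map(std::vector<T>& v, Op op) {
    std::size_t half = v.size() / 2;
    std::thread t([&] { for (std::size_t i = 0; i < half; ++i) op(v[i]); });
    for (std::size_t i = half; i < v.size(); ++i) op(v[i]);
    t.join();
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    parallel_map(v, [](int& x) { x = x * x; });  // the user writes only the operation
    for (int x : v) std::cout << x << ' ';       // prints: 1 4 9 16
    std::cout << '\n';
}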
Software Solutions Classification
• The above solutions can be divided into classes based on the assumptions they make about the underlying memory architecture:
1. shared memory,
2. distributed memory, or
3. distributed shared memory.
Software Solutions Classification
• Shared memory programming languages communicate by manipulating shared memory variables.
• Distributed memory uses message passing. POSIX Threads and OpenMP are two of the most widely used shared memory APIs, whereas Message Passing Interface (MPI) is the most widely used message-passing system API.
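As a sketch of the message-passing style, here is a minimal MPI program (standard MPI C API, usable from C++; assumes an installed MPI implementation such as Open MPI, compiled with mpicxx and run with mpirun -np 2) in which process 0 sends one integer to process 1:

#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // which process am I?

    if (rank == 0) {
        int value = 42;
        // Processes share no memory; data moves only by explicit messages.
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
}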
Software Solutions Classification
• The "future" concept is also useful when implementing parallel programs.
• With futures, one part of a program promises to deliver a required datum to another part of the program at some future time.
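C++ exposes this idea directly as std::future; a minimal sketch in which one part of the program promises a datum that another part collects later:

#include <future>
#include <iostream>

int expensive_computation() {  // hypothetical stand-in for real work
    return 6 * 7;
}

int main() {
    // One part of the program promises to deliver a datum at some future time...
    std::future<int> result = std::async(std::launch::async, expensive_computation);

    // ...while the caller continues with other work in parallel.
    std::cout << "doing other work...\n";

    // get() blocks only if the promised datum is not ready yet.
    std::cout << "the future delivered: " << result.get() << '\n';
}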
Software Solutions Classification
2. Automatic parallelization
• Automatic parallelization of a sequential program
by a compiler.
• Despite decades of work by compiler researchers,
automatic parallelization has had only limited
success.
• Mainstream parallel programming languages
remain either explicitly parallel or (at
best) partially implicit, in which a programmer
gives the compiler directives for parallelization.
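OpenMP is the canonical example of this partially implicit style: the loop below is ordinary sequential code, and a single directive tells the compiler to parallelize it. A minimal sketch (compile with, e.g., g++ -fopenmp; without that flag the directive is simply ignored and the program runs sequentially):

#include <cstdio>

int main() {
    const int n = 1000000;
    double sum = 0.0;

    // The directive is the only parallel element; the compiler generates the threading
    // and combines the per-thread partial sums via the reduction clause.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += 1.0 / (i + 1);

    std::printf("harmonic(%d) ~ %f\n", n, sum);
}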
Software Solutions Classification
• A few fully implicit parallel programming languages exist:
1. SISAL,
2. Parallel Haskell,
3. SystemC (for FPGAs),
4. Mitrion-C,
5. VHDL, and
6. Verilog.
Software Solutions Classification
3. Application checkpointing
• Application checkpointing is a technique whereby the computer system takes a "snapshot" of the application: a record of all current resource allocations and variable states, akin to a core dump. This information can be used to restore the program if the computer should fail.
• Checkpointing means that the program has to restart from only its last checkpoint rather than from the beginning.
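A toy sketch of the idea in C++ (the file name checkpoint.dat, and a single loop counter standing in for the full application state, are hypothetical simplifications):

#include <fstream>
#include <iostream>

int main() {
    long long i = 0;
    const long long n = 10'000'000;

    // Restore: if a snapshot exists, resume from the last checkpoint instead of from 0.
    std::ifstream in("checkpoint.dat");
    if (in) in >> i;

    for (; i < n; ++i) {
        // ... the real work of iteration i would happen here ...
        if (i % 1'000'000 == 0) {
            std::ofstream out("checkpoint.dat");  // snapshot the current state
            out << i;                             // a crash now loses at most one interval
        }
    }
    std::cout << "finished\n";
}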
Algorithmic Methods
As parallel computers become larger and faster, it becomes feasible to solve problems that previously took too long to run. Common types of problems found in parallel computing applications are:
• Dense linear algebra
• Sparse linear algebra
• Spectral methods (such as the Cooley–Tukey fast Fourier transform)
• n-body problems (such as Barnes–Hut simulation)
• Structured grid problems (such as lattice Boltzmann methods)
• Unstructured grid problems (such as found in finite element analysis)
Algorithmic Methods
• Monte Carlo simulation (sketched below)
• Combinational logic (such as brute-force cryptographic techniques)
• Graph traversal (such as sorting algorithms)
• Dynamic programming
• Branch and bound methods
• Graphical models (such as detecting hidden Markov models and constructing Bayesian networks)
• Finite-state machine simulation
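As a minimal sketch of the first item above, a two-thread Monte Carlo estimate of π (C++; the trial counts and seeds are arbitrary choices):

#include <iostream>
#include <random>
#include <thread>

// Count random points in the unit square that land inside the quarter circle.
long long hits(long long trials, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    long long inside = 0;
    for (long long i = 0; i < trials; ++i) {
        double x = u(gen), y = u(gen);
        if (x * x + y * y <= 1.0) ++inside;
    }
    return inside;
}

int main() {
    const long long trials = 2'000'000;  // per thread

    // The trials are independent, so the work parallelizes trivially.
    long long a = 0, b = 0;
    std::thread t1([&] { a = hits(trials, 1); });
    std::thread t2([&] { b = hits(trials, 2); });
    t1.join(); t2.join();

    std::cout << "pi ~ " << 4.0 * (a + b) / (2 * trials) << '\n';
}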
Conclusion
• Based on the above literature survey, parallel computing and parallel programming have immense scope for research in the areas of implementation and performance analysis.
References
• https://en.wikipedia.org
Thank you
