Parallel Computing - Question Bank

SUBJECT CODE AND TITLE: Parallel Computing (BCS702)
DEPARTMENT: ISE
SCHEME: 2022    BATCH: 2022
SEMESTER & SECTION: 7th
FACULTY NAME: Ms. Maheshwari Patil

Module 1
1. Differentiate between sequential and parallel programming. Write a neat
comparison table.

2. With neat diagrams, explain types of parallelism (data parallelism and task
parallelism) with suitable examples.

3. Explain Flynn’s Taxonomy of computer architectures. Write short notes on SISD, SIMD, MISD, and MIMD systems.

4. Explain in detail the characteristics, advantages, and limitations of SIMD systems.

5. Describe the types of MIMD systems (shared-memory and distributed-memory) with examples.

6. Explain interconnection networks in parallel computing. Compare ring, mesh, torus, and hypercube topologies with neat diagrams.

7. What is cache coherence? Explain snooping-based and directory-based cache coherence protocols.

8. With neat diagram, explain the MESI protocol used in cache coherence.

9. Compare and contrast shared-memory and distributed-memory architectures with respect to communication, synchronization, scalability, and programming model.

10. Discuss in detail the coordination mechanisms in parallel programming: locks, barriers, condition variables, and message passing.

11. Explain the steps involved in writing a parallel program with examples
from OpenMP or MPI.
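As a reference point for this question, a minimal C/OpenMP sketch of the usual steps (identify independent work, partition it among threads, handle shared data); the summation loop is only an illustrative placeholder:

#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Partition the loop iterations among threads; reduction(+:sum)
       gives each thread a private copy of sum and combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

Compile with gcc -fopenmp. The same decomposition expressed with explicit messages instead of a shared variable would be the MPI version.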

12. What are the major challenges in parallel programming? Explain with
solutions.

13. Write a detailed note on false sharing and cache-related issues in shared-
memory parallel systems.
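A small C/OpenMP sketch of the pattern this question targets; the 64-byte cache-line size and the counter arrays are illustrative assumptions:

#include <stdio.h>
#include <omp.h>

#define NTHREADS 4
#define N 10000000L

/* Adjacent counters sit on the same cache line: every increment by one
   thread invalidates that line in the other cores' caches (false sharing). */
long shared_counters[NTHREADS];

/* Common fix: pad each counter so it occupies a full cache line by itself
   (64-byte lines assumed here). */
struct padded { long count; char pad[64 - sizeof(long)]; };
struct padded padded_counters[NTHREADS];

int main(void) {
    #pragma omp parallel num_threads(NTHREADS)
    {
        int id = omp_get_thread_num();
        for (long i = 0; i < N; i++)
            padded_counters[id].count++;  /* swap in shared_counters[id]++ to observe the slowdown */
    }
    printf("counter 0 = %ld\n", padded_counters[0].count);
    return 0;
}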

14. Explain load balancing and scheduling techniques in parallel computing.

15. With neat diagram, explain the shared-memory architecture. Discuss its
advantages and disadvantages.

16. With neat diagram, explain the distributed-memory architecture. Discuss its advantages and disadvantages.

17. Write a detailed note on real-time applications of parallel programming in areas like scientific simulations, AI, and robotics.

18. Explain in detail the need for parallel programming in modern computing
with suitable examples.

19. Explain with examples the coordination of processes and threads in parallel programming.

20. Compare and contrast shared-memory and distributed-memory programming models with examples from OpenMP and MPI.
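For the message-passing side of this comparison, a minimal C/MPI sketch (the shared-memory counterpart is the OpenMP sketch under question 11); the rank-squared payload is only illustrative:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process owns private data; values move only via explicit messages. */
    int local = rank * rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of rank^2 over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}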

Module 2
1. Explain the memory organization of a GPU. Differentiate between global
memory and processor-local memory.

2. What is branch divergence in GPU programming? Illustrate with an example and discuss its impact on performance.

3. Explain how GPU hardware schedules threads. Why is this efficient compared to CPU scheduling?

4. What is hybrid programming in cluster systems? Explain with an example and list its advantages and disadvantages.
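A minimal sketch of the hybrid pattern this question refers to, assuming MPI across nodes and OpenMP threads within each node:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[]) {
    int rank, provided;

    /* Request FUNNELED support: only the main thread of each
       process will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threads inside one MPI process share that process's memory;
       communication between processes still requires messages. */
    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}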

5. Discuss the challenges of performing I/O in parallel programs. What rules are
followed to simplify I/O?

6. Compare I/O handling in MIMD systems and GPU-based systems.

7. Define speedup and efficiency of a parallel program. Derive their mathematical expressions.
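For reference, the standard definitions in question:

\[ S = \frac{T_{\text{serial}}}{T_{\text{parallel}}}, \qquad E = \frac{S}{p} = \frac{T_{\text{serial}}}{p \, T_{\text{parallel}}}, \]

so linear speedup S = p corresponds to efficiency E = 1.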

8. A program has Tserial = 24 ms, p = 8, and Tparallel = 4 ms. Calculate:
a) Speedup
b) Efficiency
c) Parallel overhead per process
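A worked sketch using the definitions under question 7, taking per-process overhead as Toverhead = Tparallel − Tserial/p (one common textbook convention):

\[ S = \frac{24}{4} = 6, \qquad E = \frac{6}{8} = 0.75, \qquad T_{\text{overhead}} = 4 - \frac{24}{8} = 1 \text{ ms per process.} \]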

9. Explain the effect of problem size on efficiency of a parallel program.

10. State and explain Amdahl’s Law. Derive the formula for maximum
speedup.
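Outline of the derivation: if a fraction f of the serial run time can be perfectly parallelized, then

\[ T_{\text{parallel}} = (1 - f)\,T_{\text{serial}} + \frac{f\,T_{\text{serial}}}{p}, \qquad S(p) = \frac{1}{(1 - f) + f/p}, \qquad S_{\max} = \lim_{p \to \infty} S(p) = \frac{1}{1 - f}. \]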

11. A program has a 90% parallel portion. If Tserial = 20 s, calculate the maximum speedup possible using:
a) 10 processors
b) Infinite processors
Why should we not be overly concerned about Amdahl’s Law? Explain with Gustafson’s perspective.
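Worked values for parts a) and b), using the formula under question 10 with f = 0.9 (note that the speedup bound does not depend on Tserial = 20 s):

\[ S(10) = \frac{1}{0.1 + 0.9/10} = \frac{1}{0.19} \approx 5.26, \qquad S(\infty) = \frac{1}{0.1} = 10. \]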
12. Define scalability. Differentiate between strong scalability and weak
scalability with examples.

13. For a program with Tserial = n and Tparallel = n/p + 1, prove that the program is weakly scalable.
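Sketch of the argument:

\[ E = \frac{T_{\text{serial}}}{p\,T_{\text{parallel}}} = \frac{n}{p\,(n/p + 1)} = \frac{n}{n + p}. \]

If the problem size grows in proportion to the number of processes, say n = kp, then E = kp/(kp + p) = k/(k + 1), which stays constant as p increases; by definition the program is weakly scalable.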

14. Why is wall clock time preferred over CPU time in parallel programs?
Explain with examples.

15. Explain the steps involved in taking timings of a distributed-memory program.
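A minimal C/MPI timing sketch of those steps: synchronize with a barrier, read the wall clock with MPI_Wtime, then take the maximum over processes, since the program is only as fast as its slowest process. The work() body is a placeholder:

#include <stdio.h>
#include <mpi.h>

void work(void) { /* code being timed goes here */ }

int main(int argc, char *argv[]) {
    int rank;
    double start, local, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start every process at (roughly) the same time */
    start = MPI_Wtime();           /* wall-clock time, not CPU time */
    work();
    local = MPI_Wtime() - start;

    /* Report the slowest process's time as the program's run time. */
    MPI_Reduce(&local, &elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("elapsed = %e seconds\n", elapsed);

    MPI_Finalize();
    return 0;
}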

16. Why is efficiency not commonly used to evaluate GPU performance?

17. How is GPU performance measured? Explain timing strategies for GPU
programs.

18. Apply Amdahl’s Law to GPU programs. Explain with an example how the
CPU-handled serial portion limits speedup.

19. Write a short note on scalability in GPU programs.
