HPC stands for high performance computing and refers to systems that provide more computing power than is generally available. HPC bridges the gap between what small organizations can afford and what supercomputers provide. HPC uses clusters of commodity hardware and parallel processing techniques to increase processing speed and efficiency while reducing costs. Key applications of HPC include geographic information systems, bioinformatics, weather forecasting, and online transaction processing.
Introduction to High Performance Computing. To achieve more than the normal operational ability, in terms of throughput (the actual operation or execution of an algorithm). In simple terms, HPC is a computing system that provides more computing performance, power, or resources than is generally available.
Why HPC? Traditionally, universities, research organizations, and large companies have used supercomputers to achieve more computing power. Supercomputers are extremely expensive machines, and small organizations cannot afford them. However, most organizations need more computing power to solve business problems. The right solution is HPC, which bridges this gap.
How HPC? An HPC cluster made up of cheap workstations can match a supercomputer: comparable performance at far lower cost.
How HPC? (cont.) The main area of this discipline is developing parallel-processing algorithms and software: programs that can be divided into little pieces, each of which can be executed on a separate processor.
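The idea of dividing a program into little pieces can be sketched in a few lines of Python. This is only an illustration of the pattern, not HPC-grade code: a thread pool stands in for the separate processors of a real cluster, and the function names (`partial_sum`, `parallel_sum_of_squares`) are our own.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work on one little piece of the problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the data into roughly equal pieces, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Each piece is executed by a separate worker; results are combined.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

For example, `parallel_sum_of_squares(list(range(10)))` splits the list into chunks, sums the squares of each chunk in parallel, and combines the partial results.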
Goals of HPC: increase processing speed, reduce cost, increase efficiency, and achieve scalability.
Applications of HPC: geographic information systems, bioinformatics and drug discovery, weather forecasting, statistical analysis and mathematical modeling, online transaction processing, virtual reality, search engines, and other mission-critical applications.
HPC Architecture: HPC systems can be broadly classified into shared-memory systems, distributed-memory systems, and hybrid systems.
Shared Memory Systems: Symmetric Multi-Processing (SMP) systems, for example the Compaq AlphaServer ES40, operate on a single shared bank of memory. The operating system manages the allocation of processor time to applications, and the parallel nature of the machine is generally hidden from the user. [Diagram: several CPUs, each with its own cache, all connected to one main memory.]
Shared Memory Systems (cont.): A special case of the SMP architecture is parallel vector processing: the ability to apply an instruction to a whole vector of data at once rather than a single element at a time. Specially designed memory is filled and passed along a vector instruction pipeline, returning results much faster than traditional scalar processors.
Distributed Memory Systems: These include Massively Parallel Processing (MPP) systems and clustered systems. The MPP architecture employs many processing nodes, each consisting of CPU and memory, connected together by high-speed interconnects. Loosely coupled in nature, it allows scaling up to tens of thousands of CPUs.
Distributed Memory Systems (cont.): The cluster concept is to tie together a lot of computing elements. Its primary goals are economies of scale and low entry-level cost.
Distributed Memory Systems (cont.): The components used to build HPC clusters include commercial off-the-shelf (COTS) nodes, cluster interconnects such as Ethernet, Myrinet, or ServerNet, and operating systems such as Linux. Cluster management software is an important component for managing the cluster as a single system.
Hybrid Systems: distributed-memory architecture combined with shared-memory architecture. Non-Uniform Memory Access (NUMA) is an example of a hybrid system.
Why Hybrid Systems? Two disadvantages of the SMP shared-memory model: first, all processors must communicate with a single bank of memory, which creates the danger of memory bottlenecks; second, a single memory bus connects the processors to the memory, so adding more processors quickly saturates the bus capacity and kills scalability.
Hybrid Model: In this model, memory is physically distributed, and a hardware controller maintains a single shared-memory image at the user level. [Diagram: each CPU has its own cache, memory, and controller; the controllers are interconnected.]
HPC Programming Concepts: Vectorisation, the software technique in which vector arrays are used for partial calculations as an intermediate step. Threads and multithreading, an old concept but an excellent way to program today's symmetric multiprocessors. Data decomposition, the divide-and-conquer parallel approach to large data applications. Task-based parallelism, scalable client-server parallelism for high throughput.
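The last of these concepts, task-based parallelism, can be sketched as a client that submits tasks to a queue and a pool of "server" worker threads that drain it. This is our own illustrative code, using threads and queues rather than a real client-server deployment, and all names (`worker`, `run_tasks`) are assumptions.

```python
import queue
import threading

def worker(tasks, results):
    """A 'server' worker: repeatedly take a task, run it, report the result."""
    while True:
        task = tasks.get()
        if task is None:              # sentinel: no more work
            break
        func, arg = task
        results.put(func(arg))

def run_tasks(task_list, n_workers=3):
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in task_list:               # the 'client' submits tasks
        tasks.put(t)
    for _ in workers:                 # one stop sentinel per worker
        tasks.put(None)
    for w in workers:
        w.join()
    return sorted(results.get() for _ in range(results.qsize()))
```

Because workers pull tasks as they become free, throughput scales with the number of workers regardless of how uneven the individual tasks are. For example, `run_tasks([(lambda x: x * 2, i) for i in range(5)])` returns the doubled values in sorted order.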
HPC Software: We take a look at some major software techniques and standards in two key areas of parallel HPC: data parallelism and message passing.
HPC Software – Data Parallelism: Successful when the operations performed on the distributed data subsets are largely independent of each other. It is easier for the programmer, since parallelism is applied by the compiler rather than by the programmer. Two important technologies fall under this heading: HPF (High Performance Fortran) and shared-memory programming.
HPC Software – Message Passing: An inter-processor communication standard for flexible, efficient applications. The programmer has to understand and hand-code the necessary data communication between processors, which makes the programs more difficult to write. The advantage is full control over the application and detailed knowledge of its ins and outs: human programmers can almost always write faster and more efficient parallel programs than automatic compilers can generate.
HPC Software – Message Passing (cont.): The two most important message-passing systems are the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM).
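The message-passing style can be illustrated without an MPI installation. The sketch below mimics MPI-style point-to-point `send`/`recv` between two "ranks" using threads and per-rank inbox queues; real MPI programs would use a library such as mpi4py, and the rank-0/rank-1 division of labour here is our own example (summing 0..99 split across two processes).

```python
import queue
import threading

# One inbox per "rank"; send/recv loosely mimic MPI_Send / MPI_Recv.
inboxes = [queue.Queue(), queue.Queue()]

def send(dest, msg):
    inboxes[dest].put(msg)

def recv(rank):
    return inboxes[rank].get()   # blocks until a message arrives

def rank0(result):
    # Rank 0 hand-codes the communication: ship half the data to rank 1,
    # compute its own half, then combine with rank 1's partial result.
    data = list(range(100))
    send(1, data[50:])
    local = sum(data[:50])
    result.append(local + recv(0))

def rank1():
    chunk = recv(1)              # receive work from rank 0
    send(0, sum(chunk))          # send back the partial result

def run():
    result = []
    t0 = threading.Thread(target=rank0, args=(result,))
    t1 = threading.Thread(target=rank1)
    t0.start(); t1.start()
    t0.join(); t1.join()
    return result[0]
```

Note how the data movement is entirely explicit, exactly the property the slides describe: more work for the programmer, but full control over what travels between processors.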
Conclusion: Science and technology play an important role in improving the quality of life. The interest in extending the capability of workstations toward high-performance computing systems will definitely improve the quality of life further.