High Performance Computing Presentation (PPTX)
High Performance Computing
Latest Advancements
Prepared by
Omar Altayyan
Muhammad Ayoub
Ahmad Yasser Al-Shalabi
What is HPC?
• Suitable environments
• Solid infrastructure
• Software and hardware components
• Allows scientists and researchers to solve problems in math, biology, machine learning, physics simulations, and numerous other fields
• Enabling significant breakthroughs
What is HPC?
• A large number of high-end computers called servers
• Huge amounts of disk space, memory, and CPUs
• Large cooling systems
• Backup power sources
• Spare hardware
• Software tolerant to hardware faults
• Easy to swap any component if damaged
Hardware Components
HPC
• Withstands heat, usage pressure, and electrical outages
• Costs multiple times (often 10x) more than regular consumer products, despite not having any significant computation, space, or speed advantage
• Components have a low degradation rate
Consumer
• Damaged under sustained pressure or constant heat, or in the case of a power outage or overcharge
• Regular cost
• Relatively high degradation rate
HPC Applications
• Biology:
  • Protein folding
  • DNA sequencing
  • MRI image analysis
• Physics simulations:
  • Galaxy collisions
  • Particle interactions
• Math:
  • Fractals
  • Theorem proving
• Cloud services:
  • Storage rental
  • Web hosting
  • Compute time
• Machine learning:
  • Neural network training
  • Data clustering
• Stocks:
  • Stock price forecasting
  • Market data analysis
HPC Technologies
• Message Passing Interface
• Requires high-speed networking (e.g., InfiniBand)
• Inter-component messages
• Synchronization
• Single task, multiple servers
MPI
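Real MPI programs call `MPI_Send`/`MPI_Recv` across servers and are launched with a runtime such as `mpirun`; as a rough stand-in that runs anywhere, Python's `multiprocessing` module can illustrate the same message-passing pattern: several independent workers ("ranks") each compute a partial result and send it to a coordinator, which gathers the pieces.

```python
# Illustration of the MPI send/receive pattern using Python's
# multiprocessing module (a stand-in only: real MPI exchanges messages
# between servers over a fast network such as InfiniBand).
from multiprocessing import Process, Pipe

def worker(rank, conn):
    # Each "rank" computes its partial result, then sends it as a message.
    partial = sum(range(rank * 100, (rank + 1) * 100))
    conn.send((rank, partial))
    conn.close()

def main():
    conns, procs = [], []
    for rank in range(4):  # 4 workers, like 4 MPI ranks
        parent, child = Pipe()
        p = Process(target=worker, args=(rank, child))
        p.start()
        conns.append(parent)
        procs.append(p)
    # "Rank 0" gathers the partials (similar in spirit to MPI_Gather/MPI_Reduce).
    total = sum(partial for _, partial in (c.recv() for c in conns))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(main())  # sum(range(400)) = 79800
```

The single-task/multiple-servers idea is exactly this split-compute-gather shape, only with the workers living on different machines.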
HPC Technologies
• Standardized
• Portable across platforms
• Functional
• Widely available
MPI
HPC Technologies
• Compute Unified Device Architecture
• Programming the GPU
• Uses the large number of cores in a GPU compared to a CPU (thousands vs. tens)
• From graphics to GPGPU
• Multiple GPUs on a single board
CUDA
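A CUDA kernel is written from the viewpoint of a single thread; the GPU then launches thousands of instances, each selecting its own element by index. A plain-Python sketch of that per-thread view (real CUDA would express this as a `__global__` kernel launched with `<<<blocks, threads>>>` and compiled with nvcc):

```python
# Plain-Python sketch of CUDA's data-parallel model: the kernel body
# is written for ONE thread; the GPU runs one instance per index.
def saxpy_kernel(i, a, x, y):
    # In real CUDA: i = blockIdx.x * blockDim.x + threadIdx.x
    y[i] = a * x[i] + y[i]

def launch(n, a, x, y):
    # A GPU would run these n "threads" concurrently across its
    # thousands of cores; here we simply loop to show the same logic.
    for i in range(n):
        saxpy_kernel(i, a, x, y)

x = [1.0] * 8
y = [2.0] * 8
launch(8, 3.0, x, y)
print(y)  # every element: 3.0 * 1.0 + 2.0 = 5.0
```

The point of the model: because each element is independent, the same kernel scales from 8 elements to millions simply by launching more threads.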
HPC Technologies
• Used extensively in the most famous neural network libraries, including:
• TensorFlow
• MXNet
• Caffe
• DaDianNao
CUDA
HPC Technologies Comparison
MPI
• Multiple servers talking to each other
• Many few-core CPUs
• Needs expensive hardware to be effective
• Relatively simple to learn and develop
• One extra layer of memory management
• Broad programming-language support
CUDA
• Single server/PC
• One or a couple of many-core GPUs
• Good performance on consumer-grade GPUs
• Very challenging to learn
• Over 5 different types of memory
• Limited mainly to C/C++ and Fortran
HPC Technologies
Combine MPI and CUDA?
Enter CUDA-aware MPI
• A new technology
• Many servers containing GPUs
• A CUDA program runs on every GPU
• MPI for inter-server communication
• Robust and well optimized (unlike ad-hoc combinations)
Enter CUDA-aware MPI
• Combine GPUs to reach high computational power
• Very low cost
• Very low power usage
Example: Building a Petaflops System
• To understand the gains from this technology, let's build a powerful petaflops system
• What is a petaflop?
One million billion (10^15) floating-point operations per second
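The "million billion" phrasing is just unit arithmetic, which a two-line check makes concrete:

```python
# One petaflop/s = 10^15 floating-point operations per second,
# i.e., a million (10^6) times a billion (10^9).
PETAFLOP = 10 ** 15
print(f"{PETAFLOP:,}")  # 1,000,000,000,000,000
```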
Example: Building a Petaflops System
• Cost: $1,200
• 3,840 cores
• 12 teraflops
Titan Xp
Example: Building a Petaflops System
• Combine 100 of these GPUs
• A 1.2-petaflops system
• $120K for GPUs + at most $380K for infrastructure:
• Total cost = $500K
• Power usage: 600 W × 100 = 60 kW
Titan Xp
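The build's numbers follow directly from the per-GPU figures on the slides, and a quick script makes the arithmetic auditable (all inputs are the slides' own values, not independent measurements):

```python
# Back-of-the-envelope check of the 100-GPU build from the slides.
N_GPUS = 100
TFLOPS_PER_GPU = 12      # Titan Xp throughput, per the slide
COST_PER_GPU = 1_200     # USD, per the slide
INFRA_COST = 380_000     # USD, slide's stated upper bound
WATTS_PER_GPU = 600      # per-GPU power budget used on the slide

petaflops = N_GPUS * TFLOPS_PER_GPU / 1000   # 1000 TF = 1 PF
total_cost = N_GPUS * COST_PER_GPU + INFRA_COST
power_kw = N_GPUS * WATTS_PER_GPU / 1000

print(petaflops, total_cost, power_kw)  # 1.2 500000 60.0
```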
Example: Building a Petaflops System
For comparison, a traditional 2008-era supercomputer:
• Built in 2008
• About 1.7 petaflops
• Cost: $100 million
• Power usage: 2.5 MW
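Putting the two systems side by side shows the scale of the gain; the figures below are the slides' own numbers, and they compare headline peak figures only, not delivered application performance:

```python
# Cost and power per petaflop, using the figures from the slides.
diy = {"pf": 1.2, "cost": 500_000, "kw": 60}          # 100x Titan Xp build
sc_2008 = {"pf": 1.7, "cost": 100_000_000, "kw": 2500}  # 2008 supercomputer

def per_pf(sys):
    return sys["cost"] / sys["pf"], sys["kw"] / sys["pf"]

diy_cost_pf, diy_kw_pf = per_pf(diy)
sc_cost_pf, sc_kw_pf = per_pf(sc_2008)

print(round(sc_cost_pf / diy_cost_pf))  # ~141x cheaper per petaflop
print(round(sc_kw_pf / diy_kw_pf))      # ~29x less power per petaflop
```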
Thanks for Listening!!
