Virtualization concept slideshare
Yogesh Kumar
Introduction to Virtualization
Terminologies of Virtualization
Basic Concepts & Types
Introduction to Hypervisors
Bare metal
KVM as a use case
The idea behind VMs originates in the concepts of virtual memory and time sharing, both introduced in
the early 1960s and pioneered at the Massachusetts Institute of Technology and the Cambridge
Scientific Center.
The most popular open-source virtualization suites include Xen, KVM, Libvirt and VirtualBox.
 Benefits:
Server consolidation, with savings in hardware cost and gains in performance.
Isolation and Ease of Management - users can run concurrent operating systems on one computer
and run potentially hazardous applications in a sandbox, all of which can be managed from a
single terminal.
Virtualization tools like Qemu and KVM are widely used by Linux developers during their
development cycle and testing.
Virtual testbeds for education.
Demerits:
Single point of failure
Chance of a performance hit
Limited application support
 Virtual Machine (VM):
A virtual machine is the machine that is being run itself: a machine that is "fooled" into thinking
it is running on real hardware, when in fact its software or operating system runs on an abstraction
layer that sits between the VM and the hardware.
 Virtual Machine Monitor/Hypervisor (VMM):
The VMM is what sits between the VM and the hardware. There are two types of VMMs, Type-1 and Type-2.
A Type-1 (native) hypervisor sits directly on top of the hardware; it was used in the traditional
virtualization systems of the 1960s from IBM and is used in the modern virtualization suite Xen.
A Type-2 hypervisor sits on top of an existing operating system; this type is the most prominent in
modern virtualization systems like KVM, VirtualBox, VMware Workstation etc. The abbreviation VMM
can stand for both virtual machine manager and virtual machine monitor.
 Type-1 Virtualization
Figure: Type-1 or bare-metal Hypervisor sits directly on host hardware
 Type-2 Virtualization:
Figure: Type-2 hypervisor runs as an application on host operating system
 Operating System level Virtualization:
 Open Source Solution
 KVM is a kernel device driver for the Linux kernel that takes full advantage of the hardware
virtualization extensions to the x86 architecture.
 KVM allows guests to run unmodified, thus making full virtualization of guests possible on x86
processors.
 Uses existing Intel VT-x and AMD-V technology to allow for virtualization: a goal of
KVM was not to reinvent the wheel. The Linux kernel already has among the best
hardware support and a plethora of drivers available, in addition to being a full-blown
operating system, so the KVM developers decided to make use of the facilities already
present in the Linux kernel and let Linux be the hypervisor. KVM is the virtualization
solution for the Linux kernel on the x86 platform.
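Because KVM is exposed to user space as the character device /dev/kvm and driven through ioctl
calls, a user-space tool can probe it and create a VM directly. The following is a minimal C
sketch (error handling trimmed) that checks the KVM API version and creates an empty VM; it only
illustrates the device-driver interface, not the full setup a real tool such as Qemu performs.

    /* kvm_probe.c - sketch: open the /dev/kvm device, check the API version,
       and create an empty virtual machine through the KVM ioctl interface. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);        /* KVM is a kernel device driver */
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);     /* new guest, no memory or vCPUs yet */
        if (vm < 0) { perror("KVM_CREATE_VM"); close(kvm); return 1; }

        close(vm);
        close(kvm);
        return 0;
    }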
The KVM developers use the facilities already present in the Linux kernel (its plethora of
drivers, and the fact that it is a full-blown operating system), and Linux itself acts as
the hypervisor.
KVM allows guests to be scheduled on the host Linux system as regular processes; in fact,
a KVM guest simply runs as a process, with one thread for each virtual processor core of
the guest.
Accepted into the Linux kernel since version 2.6.20. Red Hat had Xen as the foundation of
its virtualization solution, but shifted to KVM with version 6 of its operating system.
 All guests have to be initialized from a user-space tool; this is
usually a version of Qemu with KVM support.
 Each guest processor runs in its own thread spawned from the
user-space tool, which then gets scheduled by the
hypervisor.
 Each guest process and processor thread gets scheduled like any
other user process alongside other processes by the Linux
kernel. Each of these threads can be pinned to a specific
processor core on a multi-core processor, to allow some manual
load balancing (see the pinning sketch after this list).
 The memory of a guest is allocated by the user-space tool,
which maps the guest's physical memory into the host's virtual
memory (see the memory-mapping sketch after this list).
 I/O and storage are handled by the user-space tools.
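To illustrate the per-vCPU threading model, the sketch below pins the calling thread to one CPU
core with pthread_setaffinity_np, which is essentially what happens when a guest's vCPU threads
are pinned for manual load balancing. The core number used here is only an example.

    /* pin_thread.c - sketch: pin the current thread (e.g. a vCPU thread)
       to one physical core, as done for manual load balancing of guests. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                           /* core 2 is just an example */

        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
            return 1;
        }
        printf("thread pinned to core 2\n");
        return 0;
    }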
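The guest-physical to host-virtual mapping mentioned above is done with the
KVM_SET_USER_MEMORY_REGION ioctl. A minimal sketch, assuming a VM file descriptor created as in
the earlier /dev/kvm example, with most error handling omitted:

    /* guest_memory.c - sketch: back 64 MiB of guest "physical" memory
       with anonymous host virtual memory, as a user-space tool would. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        size_t size = 64 << 20;                            /* 64 MiB of guest RAM */
        void *host_mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .guest_phys_addr = 0,                          /* guest physical address 0 */
            .memory_size     = size,
            .userspace_addr  = (unsigned long)host_mem,    /* host virtual address */
        };
        if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
            perror("KVM_SET_USER_MEMORY_REGION");
            return 1;
        }
        printf("guest physical 0x0-0x%zx backed at host virtual %p\n", size, host_mem);
        return 0;
    }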
 Process emulator and virtualizer: Qemu in itself is only an emulator; put
together with virtualization tools like KVM, it becomes a powerful virtualization
tool.
 Supports a mix of binary translation and native execution running directly on the
hardware.
 Provides access to low-level serial and parallel ports to communicate with the
desired hardware.
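As a hedged example of how such a user-space tool is started, the sketch below launches Qemu with
KVM acceleration from C via execvp. The -enable-kvm, -m, -smp and -serial options are standard
Qemu flags; the disk image name is purely a placeholder.

    /* launch_guest.c - sketch: start a KVM-accelerated Qemu guest.
       The disk image name below is a placeholder, not a real file. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char *argv[] = {
            "qemu-system-x86_64",
            "-enable-kvm",             /* use the KVM kernel module instead of pure emulation */
            "-m", "2048",              /* 2 GiB of guest memory */
            "-smp", "2",               /* two virtual cores -> two vCPU threads on the host */
            "-serial", "stdio",        /* expose the guest's serial port on the terminal */
            "-hda", "guest-disk.img",  /* placeholder disk image */
            NULL
        };
        execvp(argv[0], argv);         /* replaces this process with Qemu */
        perror("execvp qemu-system-x86_64");
        return 1;
    }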
Thank You
