A Cloud Computing

Cloud Computing is a technology that provides on-demand access to computing resources over the Internet, allowing users to rent services instead of maintaining physical hardware. Key features include scalability, cost efficiency, and global availability, with underlying principles of parallel and distributed computing enhancing performance. The document also discusses cloud characteristics, enabling technologies, and various virtualization types that facilitate efficient resource management in cloud environments.


Rohit Bind

Section:B

Cloud Computing
Introduction to Cloud Computing
Cloud Computing is a technology that allows users to access computing resources (servers, storage,
databases, networking, software, analytics, etc.) over the Internet on a pay-as-you-go basis.
• Instead of buying and maintaining physical hardware, you rent IT services from providers like
Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).

Why is it called “Cloud”?


• The internet is often represented as a cloud in network diagrams.
• Computing resources are delivered from a “virtual cloud” of servers instead of a specific physical
location.
Key Features
1. On-Demand Access – Use resources whenever needed, no manual setup required.
2. Scalability & Elasticity – Increase or decrease resources based on demand.
3. Cost Efficiency – Pay only for what you use.
4. Global Availability – Accessible from anywhere via the Internet.

[Figure: a diagram explaining Cloud Computing]

Definition of Cloud
Cloud Computing is the delivery of computing services—such as servers, storage, databases, networking,
software, and analytics—over the internet (the cloud) instead of using a local computer or physical
hardware.
Key Points
• On-demand access to IT resources.
• Pay-as-you-go model – pay only for what you use.
• Accessible from anywhere via the internet.

Example
• Google Drive: You don’t need to buy storage devices; you use Google’s storage space online and
access it anytime.
Evolution of Cloud Computing
Cloud Computing evolved from mainframes → client-server → virtualization → internet-based services.

1. Mainframe Era (1960s – 1970s)


• Large mainframe computers shared by multiple users via terminals.
• Time-sharing concept introduced – multiple users could access computing power at the same time.
2. Client-Server Era (1980s – 1990s)
• Shift from centralized mainframes to client-server architecture.
• Data stored on servers; users accessed through personal computers (clients).
3. Virtualization Era (2000s)
• Virtualization technology allowed multiple virtual machines (VMs) to run on a single physical
machine.
• Improved resource utilization and efficiency.
4. Modern Cloud Era (2010s – Present)
• Internet-based delivery of services (IaaS, PaaS, SaaS).
• Large data centers and service providers like AWS, Azure, and Google Cloud provide on-demand,
scalable resources globally.

Underlying Principles of Parallel and Distributed Computing
1. Parallel Computing
• Definition: Performing multiple tasks simultaneously by dividing a problem into smaller
sub-tasks and executing them on multiple processors at the same time.
• Key Principle: Divide and Conquer – Split a large task into smaller ones, run them
in parallel, and combine the results.
• Example: Processing a large image by dividing it into sections and using multiple
CPUs to process each section at the same time.
Parallel computing is the use of multiple processing elements simultaneously to solve a problem: the problem is broken down into smaller instructions that are executed concurrently, with every processing element working at the same time.
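The divide-and-conquer principle above can be sketched in a few lines of Python. This is an illustrative example, not production code: it uses a thread pool for portability, while CPU-bound work in practice would use a process pool (e.g. `ProcessPoolExecutor`) to get true parallelism past the GIL.

```python
# Divide-and-conquer in code: split a list into chunks, sum each chunk
# in a separate worker, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = max(1, len(data) // workers)
    # Divide: split the input into roughly equal chunks.
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Run: each chunk is summed by a worker.
        partials = list(pool.map(chunk_sum, chunks))
    # Combine: merge the partial results into the final answer.
    return sum(partials)
```

The same split/run/combine shape applies whether the workers are threads, processes, or machines.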
2. Distributed Computing
• Definition: A computing model where tasks are distributed across multiple machines
(nodes) connected through a network, and they work together to achieve a common
goal.
• Key Principle: Workload Distribution – Multiple computers work on different parts
of a task and share results.
• Example: Google Search uses thousands of servers worldwide to process search
queries simultaneously.
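The workload-distribution principle can be illustrated with a toy map-reduce word count. This is a single-process simulation: in a real distributed system each `map_on_node` call would run on a different machine and the results would travel over the network to a coordinator.

```python
# Simulated distributed word count: each "node" processes its share of
# the documents independently; a coordinator merges the partial results.
from collections import Counter

def map_on_node(document):
    # Runs independently on one node.
    return Counter(document.lower().split())

def reduce_results(partials):
    # The coordinator merges partial counts into the final result.
    total = Counter()
    for partial in partials:
        total += partial
    return total

docs = ["the cloud is elastic", "the cloud scales"]
counts = reduce_results([map_on_node(d) for d in docs])
```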

Why Important for Cloud Computing?


• Cloud platforms like AWS, Google Cloud, and Azure use parallel computing for
faster processing and distributed computing for reliability and scalability.
• Together, they ensure high performance, fault tolerance, and efficient handling of
massive workloads.

Cloud Characteristics
Cloud Characteristics are the essential features that define how cloud computing delivers services,
including on-demand access, scalability, resource pooling, broad network access, and measured service.

Cloud Characteristics refer to the defining attributes that make cloud computing unique
and efficient. These include on-demand self-service, allowing users to access resources
anytime; broad network access from any device; resource pooling for shared usage;
rapid elasticity for dynamic scaling; and measured service for pay-as-you-go billing.
Together, these features ensure flexibility, scalability, and cost-effectiveness in
delivering IT services over the internet.
1. On-Demand Self-Service
On-Demand Self-Service allows users to access and manage computing resources automatically whenever
needed, without human intervention from the service provider.
Means users can instantly access resources like servers, storage, and applications whenever required. No
manual setup or approval from the provider is needed.

2. Broad Network Access


Broad Network Access means cloud services are available over the internet and can be accessed from
any device (laptop, smartphone, tablet) at any time.
Broad Network Access is a fundamental characteristic of cloud computing where services are available
over the internet and can be accessed through standard protocols from various devices like laptops,
desktops, smartphones, or tablets. This feature ensures global accessibility, enabling users to work from
anywhere and at any time. For example, Google Drive and Dropbox allow users to store and retrieve files
via web browsers or mobile apps.
3. Resource Pooling
Resource Pooling means cloud providers use shared computing resources to serve multiple customers
dynamically, based on their demand.
Resource Pooling is a key characteristic of cloud computing where service providers maintain a large pool
of computing resources—such as servers, storage, and processing power—that are shared among multiple
customers. These resources are dynamically allocated, ensuring efficiency and cost-effectiveness while
maintaining data security and privacy through isolation. For example, Netflix streams videos to millions of
users simultaneously using shared cloud infrastructure.
4. Rapid Elasticity
Rapid Elasticity ensures that cloud computing resources can quickly expand or shrink according to the
user’s needs, providing flexibility and cost savings.
For example, e-commerce sites like Amazon automatically add more servers during big sales events and
scale back afterward to save costs.
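The scaling decision behind rapid elasticity can be sketched as a simple rule: add or remove servers so that average utilization moves toward a target. The formula and the numbers below are illustrative assumptions (real autoscalers, such as Kubernetes' horizontal pod autoscaler, use a similar ratio plus cooldowns and smoothing).

```python
def desired_servers(current, utilization, target=0.60, lo=1, hi=20):
    """Return the server count that brings average utilization near target.

    current     -- number of servers running now
    utilization -- current average CPU utilization (0.0 to 1.0)
    target      -- desired utilization fraction (illustrative default)
    lo, hi      -- scaling bounds to avoid runaway growth or shrink
    """
    wanted = round(current * utilization / target)
    return max(lo, min(hi, wanted))
```

For example, 4 servers at 90% utilization scale out to 6, while 10 servers at 30% scale in to 5.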
5. Measured Service
Measured Service means cloud usage is monitored, controlled, and billed based on the amount of
resources consumed.
Measured Service in cloud computing ensures that resource usage (CPU time, storage, bandwidth) is
tracked and users pay only for what they use, similar to utilities like electricity or water.
For example, AWS and Google Cloud bill customers based on the exact amount of storage or compute power consumed.
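Metered billing boils down to multiplying measured usage by a unit price per resource. The resource names and prices below are made up for illustration; real provider pricing is tiered and region-dependent.

```python
# Illustrative unit prices (hypothetical, not real provider rates).
RATES = {
    "compute_hours": 0.05,     # price per VM-hour
    "storage_gb_month": 0.02,  # price per GB-month stored
    "egress_gb": 0.09,         # price per GB transferred out
}

def monthly_bill(usage):
    # Measured service: charge = units used * unit price, per resource.
    return round(sum(units * RATES[resource]
                     for resource, units in usage.items()), 2)
```

So a month of 100 compute hours, 50 GB stored, and 10 GB egress would cost 5.00 + 1.00 + 0.90 under these made-up rates.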
Cloud Enabling Technologies
Cloud Enabling Technologies are the foundational technologies that make cloud computing possible by
providing flexibility, scalability, and efficient resource management.
Why Important?
• Provides flexibility (services can be used independently).
• Improves scalability (resources can be allocated dynamically).
• Enhances efficiency (through virtualization and automation).
1. Service-Oriented Architecture (SOA)
• Definition: A design approach where services are delivered as independent components that can be
accessed over a network.
• Example: REST APIs used for communication between different applications.
2. Web Services
• Enable communication between applications over the internet using protocols like HTTP, XML, and
JSON.
3. Publish-Subscribe Model
• A messaging pattern where senders (publishers) send messages without knowing receivers
(subscribers), who receive only relevant messages.
4. Virtualization
• The creation of virtual versions of resources like servers, storage, and networks to optimize usage.

REST And System Of System


REST (Representational State Transfer) is an architectural style used for designing networked applications.
It uses standard HTTP methods for operations and treats everything as a resource identified by a unique
URI. REST is stateless, meaning each client request contains all the necessary information to process it.
This makes REST ideal for scalable and lightweight web services, commonly used in cloud applications like
APIs.
REST is an architectural style for building web services that use standard HTTP methods (GET, POST, PUT,
DELETE) for communication between clients and servers.
Use of HTTP Methods –
• GET – Retrieve data
• POST – Create data
• PUT – Update data
• DELETE – Remove data

1. Stateless – Each request contains all necessary information; the server does not store session state.
2. Resource-Based – Everything is treated as a resource and identified by a URI.
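The mapping from HTTP methods to CRUD operations can be sketched without a real web server. The handler below is a toy: `store` stands in for the server's resources, and the status codes follow common REST conventions (201 for created, 404 for missing, 405 for an unsupported method).

```python
store = {}  # resource URI -> its current representation

def handle(method, uri, body=None):
    # Stateless: everything needed to process the request arrives with it.
    if method == "GET":                      # retrieve a resource
        return (200, store[uri]) if uri in store else (404, None)
    if method in ("POST", "PUT"):            # create or update a resource
        store[uri] = body
        return (201 if method == "POST" else 200, body)
    if method == "DELETE":                   # remove a resource
        return (204, store.pop(uri, None))
    return (405, None)                       # method not allowed
```

A typical exchange: `POST /users/1` creates the resource, `GET /users/1` retrieves it, and after `DELETE /users/1` a further `GET` returns 404.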
System of Systems (SoS)
System of Systems (SoS) is an architecture where multiple independent systems integrate to achieve a
larger, common objective.
A System of Systems (SoS) is an integrated collection of independent and self-contained systems that
collaborate to achieve a purpose that none of the individual systems can achieve alone. Each system
operates independently but is connected via a larger framework. In cloud computing, SoS ensures large-
scale interoperability, enabling services like healthcare, finance, and logistics systems to work together
seamlessly.
In Cloud Computing
• Different services (storage, database, payment gateways, user authentication) work together to
provide a complete application.
• Example: Amazon uses inventory, billing, delivery, and recommendation systems together to provide a
seamless shopping experience.
Easy Example
Think about an airport:
• One system handles flight schedules.
• Another handles baggage tracking.
• Another manages security checks.
• Another manages fuel and ground operations.
Each system works independently but combines to make the entire airport function smoothly. That
combination is a System of Systems.

Web services
Web services are standardized methods that allow applications to communicate and exchange data over
the Internet using protocols like HTTP, XML, or JSON.
Web services are self-contained, modular software applications that provide functionality or data to other
applications over the Internet using standardized protocols such as HTTP, XML, SOAP, and REST. They
allow interoperability between different systems, enabling them to communicate and share information
seamlessly. For example, when a weather app fetches live weather data from an online weather service, it
uses web services to communicate.
Example
• Google Maps API – Allows apps to integrate maps and location services.
• Payment Gateways (Razorpay, PayPal) – Allow e-commerce websites to process payments.

How Web Services Work


• A client application sends a request to a web service, often in XML or JSON format.
• The web service processes the request and responds with data, also in a standard format like XML
or JSON.
• Protocols such as SOAP (Simple Object Access Protocol) or REST (Representational State Transfer)
are commonly used to specify how messages are formatted and transmitted.
Publish, Subscribe model
A publish-subscribe model is a messaging pattern where publishers send messages to a channel, and
subscribers receive only the messages they are interested in.
In the Publish-Subscribe (Pub/Sub) model, the sender of a message (publisher) does not send it directly
to specific receivers. Instead, it publishes the message to a channel or topic, and all subscribers who
have expressed interest in that topic will receive the message. This approach decouples the sender and
receiver, allowing for scalable and efficient communication. It is widely used in event-driven systems,
IoT applications, and cloud messaging services.

• Publisher: Creates and sends messages to topics—does not know subscribers.


• Subscriber: Receives messages from topics it is subscribed to—does not know publishers.
• Topic: Named channels for message organization; publishers send messages here, subscribers
receive from here.
• Message Broker: Routes messages from publishers to subscribers; handles delivery and other
features.
• Message: The data sent by publishers to subscribers; can be any format.
• Subscription: Defines which topics a subscriber receives messages from; controls message delivery
details.

Example
• YouTube Notifications – You subscribe to a channel; whenever a video is published, you receive a
notification.
• Stock Market Apps – Publish real-time stock prices, and subscribed users get updates instantly.
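The decoupling described above can be shown with a minimal in-memory broker. This is a sketch of the pattern only: real brokers (e.g. Google Cloud Pub/Sub, Kafka) add persistence, delivery guarantees, and network transport.

```python
from collections import defaultdict

class Broker:
    """Routes messages from publishers to subscribers by topic."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never learns who (if anyone) receives the message;
        # the broker delivers it to every subscriber of the topic.
        for deliver in self._subs[topic]:
            deliver(message)

broker = Broker()
inbox = []
broker.subscribe("videos", inbox.append)   # like subscribing to a channel
broker.publish("videos", "New upload!")    # like the channel posting a video
```

Publishing to a topic with no subscribers is simply a no-op, which is exactly the decoupling the pattern promises.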
Basics of Virtualization
Virtualization is the process of creating a virtual version of computing resources like hardware, storage, or
networks, allowing multiple systems to run on a single physical machine.
Virtualization is a way to use one computer as if it were many. Before virtualization, most computers were
only doing one job at a time, and a lot of their power was wasted. Virtualization lets you run several
virtual computers on one real computer, so you can use its full power and do more tasks at once.
In cloud computing, this idea is taken further. Cloud providers use virtualization to split one big server
into many smaller virtual ones, so businesses can use just what they need, no extra hardware, no extra
cost.

Working of Virtualization
Virtualization uses special software, known as a hypervisor, to create many virtual computers (cloud instances) on one physical computer. The virtual machines behave like actual computers but share the same physical machine.

Hypervisors
A hypervisor is the software that gets virtualization to work. It serves as an intermediary between the
physical computer and the virtual machines. The hypervisor controls the virtual machines' use of the
physical resources (such as the CPU and memory) of the host computer.
For instance, if one virtual machine wants additional computing capability, it requests it from the hypervisor. The hypervisor forwards the request to the physical hardware and ensures it is fulfilled.
Types of Hypervisors:
1. Type 1 (Bare-Metal): Installed directly on hardware; faster and more efficient.
2. Type 2: Runs on top of an existing OS; suitable for running multiple OS on one machine.
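The hypervisor's core job of mediating resource requests can be sketched as a simple admission check: grant a VM's request only if enough physical capacity remains. This toy class is a conceptual model, not how any real hypervisor is implemented.

```python
class Hypervisor:
    """Toy model: tracks physical capacity and admits or rejects VMs."""
    def __init__(self, total_cpus, total_mem_gb):
        self.free_cpus = total_cpus
        self.free_mem = total_mem_gb
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        # Admit the VM only if the physical host can back its resources.
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

    def destroy_vm(self, name):
        # Reclaim the VM's resources for future allocations.
        cpus, mem_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_mem += mem_gb
```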

Types of Virtualization
1. Application Virtualization
• What it is: Runs apps remotely without installing them on the user’s device. Data and settings stay on
the server but are accessed via the internet.
• Why useful: Allows using multiple versions of software easily.
• Example: Microsoft Azure – apps run on Azure servers but feel like they’re installed locally,
improving speed, security, and access from any device.
2. Network Virtualization
• What it is: Creates multiple virtual networks on one physical network, each working independently.
Includes virtual switches, routers, firewalls, and VPNs.
• Why useful: Simplifies network setup, management, and scaling without new hardware.
• Example: Google Cloud – lets companies build, manage, and expand virtual networks via software,
saving cost and offering flexibility.
3. Desktop Virtualization
• What it is: Runs a desktop environment on a remote server instead of a local computer. Users access
it via the internet.
• Why useful: Enables access to the same desktop setup from any device and simplifies software
updates.
• Example: VMware Horizon – allows employees to access their desktop and apps remotely.
4. Storage Virtualization
• What it is: Combines multiple physical storage devices into a single virtual storage pool.
• Why useful: Makes data management easier, improves performance, and ensures better resource
use.
• Example: IBM Spectrum Virtualize – pools storage from different systems into one, managed
centrally.
5. Server Virtualization
• What it is: Divides a single physical server into multiple virtual servers, each running its own
operating system and applications.
• Why useful: Improves resource utilization, reduces hardware costs, and allows running multiple
environments on one machine.
• Example: VMware ESXi – creates multiple virtual servers on one physical server for different
workloads.
6. Hardware Virtualization
• What it is: Creates virtual versions of physical hardware (CPU, memory, storage) using a hypervisor.
• Why useful: Allows multiple operating systems and applications to run on a single physical machine
efficiently.
• Example: Oracle VM VirtualBox – virtualizes hardware so different OS environments can run on one
computer.
Implementation Levels of Virtualization
Virtualization can be implemented at different layers of a computer system, such as hardware, operating
system, library, or application level, to create virtual environments for better resource utilization and
flexibility.
Implementation levels of virtualization describe where virtualization is applied within a computer system.
At the hardware level, hypervisors create virtual machines directly on physical hardware. At the operating
system level, containers isolate applications within the same OS. At the library level, calls are redirected to
virtualized libraries, and at the application level, individual apps run in isolated environments. Each level
offers different flexibility, performance, and use cases, making virtualization a key enabler of cloud
computing.

1. Instruction Set Architecture (ISA) Level Virtualization


At the ISA level, virtualization works by using an emulator to
mimic the instruction set of one hardware architecture on
another. This means software compiled for one type of processor
(e.g., x86) can run on a different type (e.g., ARM) without
modification. Although this method provides high flexibility and
compatibility, it can be slower than hardware-level virtualization
because instructions must be translated in real-time.
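The idea of translating one instruction set in software can be shown with a toy interpreter. The three-instruction "guest ISA" below is entirely made up for illustration; real emulators such as QEMU do the same thing for full architectures, often with dynamic translation for speed.

```python
# Toy ISA emulator: the host interprets each "guest" instruction at
# run time, which is why pure emulation is slower than running natively.
def emulate(program):
    regs = {"A": 0, "B": 0}                # the guest's register file
    for op, *args in program:
        if op == "LOAD":                   # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":                  # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
    return regs

result = emulate([("LOAD", "A", 2), ("LOAD", "B", 3),
                  ("ADD", "A", "B"), ("HALT",)])
```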

2. Hardware Level Virtualization


In Hardware Level Virtualization, a hypervisor (Type 1 – Bare-
Metal) is installed directly on a server without a host operating
system. This hypervisor manages CPU, memory, and storage
allocation to multiple virtual machines, enabling them to function
like independent computers. It provides high performance,
security, and isolation because there is no additional OS layer.
This level is widely used in cloud computing for efficient resource
utilization.

3. Operating System (OS) Level Virtualization


In OS Level Virtualization, the host operating system creates multiple isolated user spaces called
containers. These containers share the same kernel but run independently, each with its own libraries,
configurations, and applications. Unlike full virtual machines, containers are lightweight, start quickly, and
consume fewer resources because they do not require separate operating systems. This makes them ideal
for cloud environments and microservices architectures.
Example
• Docker, LXC (Linux Containers), and OpenVZ – provide container-based OS virtualization for fast and
efficient deployment.
4. Library Level Virtualization
At this level, instead of virtualizing the whole operating system or hardware, only the libraries that an
application needs are virtualized. When an application makes a system call, it is captured and redirected to
a virtual library that mimics the required environment. This enables software designed for one platform to
run on another without modification. It is lightweight compared to OS-level virtualization and avoids the
overhead of running a full virtual machine.
Example
• WINE – allows Windows applications to run on Linux by providing a virtualized set of Windows
libraries.
5. Application Level Virtualization
Application Level Virtualization allows applications to run in isolated environments without being installed
on the local system.
In Application Level Virtualization, only the application is virtualized—not the entire operating system or
hardware. The application runs in a self-contained package with all necessary files, libraries, and settings,
making it independent of the underlying OS configuration. This improves portability, simplifies updates, and
prevents conflicts with other software. It is commonly used to deploy applications across multiple devices
without manual installation.
Example
• Citrix Virtual Apps or VMware ThinApp – deliver applications to users
without requiring local installation.

Virtualization Structure
Virtualization structure refers to how different components—hardware, hypervisors, and virtual machines—
are organized to enable virtualization.

1. Physical Hardware Layer


• Includes CPU, memory, storage, and I/O devices.
• Provides raw computing power to be virtualized.
2. Hypervisor Layer
• Software that controls and allocates hardware resources to virtual machines.
• Type 1 Hypervisor: Runs directly on hardware for high performance (e.g., VMware ESXi).
• Type 2 Hypervisor: Runs on top of an OS (e.g., VirtualBox).
3. Virtual Machine Layer
• Contains multiple VMs, each with its own operating system and applications.
• Ensures isolation so one VM does not affect another.
4. Application Layer
• Applications run within VMs or containers, accessing virtualized resources seamlessly.
Tools and mechanisms
Virtualization utilizes mechanisms like hypervisors, which create and manage virtual machines, and techniques such as full virtualization, paravirtualization, and OS-level virtualization to enable multiple operating systems to run on a single physical machine. Common tools and platforms implementing these mechanisms include VMware, Xen, KVM, and Docker, which provide comprehensive solutions for server, desktop, and network virtualization.

Mechanisms of Virtualization
These are the core technologies that allow virtualization to function:
Hypervisor/Virtual Machine Monitor (VMM):
A software or firmware layer that sits between the physical hardware and the operating systems, enabling
multiple guest operating systems to run concurrently on a single host machine.
Full Virtualization:
The hypervisor creates a complete virtual hardware environment for each guest OS, which remains
unaware of the virtualization layer.
Paravirtualization:
A method that modifies guest operating systems to make them aware of the virtualization layer, improving
performance by providing optimized interfaces and allowing direct hardware access through the
hypervisor.
Operating System-Level Virtualization (OS-level virtualization):
This approach virtualizes the operating system itself, allowing multiple isolated user-space instances
(containers) to run on a single OS kernel.
Hardware-Assisted Virtualization:
Modern processors include hardware support, such as virtualization extensions (e.g., Intel VT-x, AMD-V) and memory management units, to accelerate virtualization tasks and make them more efficient.
Tools and Platforms
These are popular software solutions that implement the mechanisms listed above:
VMware:
A broad suite of products including VMware vSphere and VMware Workstation, which offers both server
and desktop virtualization solutions.
Xen:
A micro-kernel hypervisor that can be used for both full and paravirtualization, providing a foundation for
server virtualization and cloud infrastructure.
Kernel-based Virtual Machine (KVM):
A full virtualization solution for Linux that uses the Linux kernel to manage VMs, leveraging the existing
operating system's features.
QEMU:
A powerful and versatile emulator that can also function as a hypervisor, often used in conjunction with
KVM for enhanced performance.
Docker:
A prominent platform for OS-level virtualization, enabling the creation and deployment of portable
software containers for applications.
Microsoft Hyper-V:
Microsoft's built-in virtualization solution, functioning as a hypervisor that can run multiple operating
systems on a Windows host.
Virtualization of CPU
CPU Virtualization allows multiple virtual machines to share a single physical CPU by creating virtual CPUs
(vCPUs) for each VM.
CPU virtualization is a technology that creates multiple virtual CPUs (vCPUs) from a single physical CPU,
enabling a single server to run multiple operating systems and applications simultaneously in isolated
virtual machines (VMs). This technique, which involves a hypervisor or virtualization software managing
the physical hardware, is a cornerstone of cloud computing, enhancing resource utilization, reducing costs,
and improving workload management by abstracting the physical hardware from the software.

How It Works
1. Abstraction Layer:
A software layer, called the hypervisor, sits between the physical hardware and the operating systems
running on it.
2. Virtual CPU Creation:
The hypervisor creates multiple virtual CPUs (vCPUs) from the single physical CPU.
3. Resource Allocation:
The hypervisor allocates physical resources, such as processing power and memory, to each virtual
machine.
4. Isolation:
Each VM operates independently, with its own operating system and applications, and cannot see or
interact with other VMs, ensuring security and stability.
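One way the hypervisor shares a single physical core among several vCPUs is time-slicing: each vCPU runs in turn, and each VM believes it owns a CPU of its own. The round-robin sketch below is a deliberate simplification; real schedulers also weigh priorities, pinning, and NUMA placement.

```python
def run_round_robin(vcpus, slices):
    """Return which vCPU owns the physical core during each time slice.

    vcpus  -- list of vCPU identifiers sharing one physical core
    slices -- number of time slices to simulate
    """
    # Each slice is handed to the next vCPU in circular order.
    return [vcpus[t % len(vcpus)] for t in range(slices)]
```

With two vCPUs and four slices, the core alternates between them, so each VM gets roughly half the physical CPU time.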

Virtualization Support and Disaster Recovery


Virtualization helps in disaster recovery by enabling backup, migration, and restoration of virtual machines
quickly, reducing downtime and data loss.
Virtualization provides strong support for Disaster Recovery (DR) in cloud environments. Since virtual
machines are just software files, they can be easily copied, backed up, or moved across servers. In case of
system failure, disasters, or hardware crashes, VMs can be restored quickly on another host machine
without reinstalling the OS or applications.
Creating a disaster recovery plan is crucial for every organization, as it helps prevent permanent data loss or corruption in the event of a disaster. Virtualization helps in data recovery by creating a virtual copy of your hardware environment that can be brought up again after a disaster.

Virtualization reduces downtime, helps to recover data from the hardware, reduces hardware needs, and
facilitates testing your data recovery plans. However, you must note that virtual data recovery is only a
part of a failproof disaster recovery plan. You must make provisions for an off-premises backup site for
more robust protection.

Data Recovery Strategies for Virtualization

Below are some practical strategies to help build a robust data recovery plan for your organization’s
virtual environment:

Backup and Replication


Create regular backups of your virtual machines that will be stored in a different location—for instance,
an external drive or a cloud service. You can also create replicas and copies of your virtual machines that
are synchronized with the original. You can switch from the original to a replica in case of failure.

Snapshot and Restore


Snapshots capture your data at specific preset moments, creating point-in-time images of it. Restore points also capture data but include all changes made since the last snapshot. You can use snapshots and restore points to return your data to the state it was in before the loss or corruption occurred.
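Because a VM's state is just data, snapshot-and-restore amounts to saving a deep copy of that state under a name and swapping it back in later. The class below is a conceptual sketch (a real snapshot is a copy-on-write image of disk and memory, not a Python dict).

```python
import copy

class VirtualDisk:
    """Toy model of snapshot/restore on a VM's storage state."""
    def __init__(self):
        self.state = {}      # current contents
        self.snapshots = {}  # name -> frozen point-in-time copy

    def snapshot(self, name):
        # Capture the state as it is right now.
        self.snapshots[name] = copy.deepcopy(self.state)

    def restore(self, name):
        # Roll the disk back to the named point in time.
        self.state = copy.deepcopy(self.snapshots[name])
```

A typical flow: snapshot before a risky upgrade, and if the upgrade corrupts data, restore to roll back.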

Encryption and Authentication


Encryption and authentication are essential security measures that work in tandem to safeguard data from unauthorized access. Employing both methods establishes robust layers of defense, fortifying your data against potential cyber threats and mitigating the risks of corruption and theft.
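The authentication half of this pairing can be sketched with an HMAC: a tag over a backup that only a holder of the shared key can produce, so tampering is detectable. This example covers integrity/authenticity only; actual encryption of the backup contents would use a cipher such as AES, which is outside this sketch.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # shared secret; in practice, from a key store

def sign(backup: bytes) -> str:
    # Authentication tag: only someone holding KEY can compute it.
    return hmac.new(KEY, backup, hashlib.sha256).hexdigest()

def verify(backup: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(backup), tag)
```

Storing the tag alongside each backup lets the restore process refuse any copy that has been altered.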
