
UNIT NO. 2
Cloud Infrastructure & Virtualization

• Historical Perspective of Data Centers, Data Center Components: IT Equipment and Facilities
• Design Considerations: Requirements, Power, Efficiency, & Redundancy, Power Calculations, PUE and Challenges in Cloud
• Data Centers, Cloud Management and Cloud Software Deployment Considerations
• Virtualization: Introduction to virtualization, Types of Virtualizations, Pros and cons of virtualization, Virtualization applications in enterprises: Server virtualization, Desktop and Application Virtualization, Storage and Network Virtualization.
Historical Perspective of Data Centers

1940s and 1950s

• The Electronic Numerical Integrator and Computer (ENIAC) was the first electronic
digital programmable general-purpose computer. The U.S. military designed the
ENIAC to calculate artillery firing tables. However, it was not completed until late
1945. Its first program was a thermonuclear weapon feasibility study.
• The ENIAC was large, weighing more than 27 tons and occupying about 1,800 square feet
of space. Its thousands of vacuum tubes, resistors, capacitors, relays, and crystal
diodes could only perform numeric calculations. Punch cards served as data
storage.
• The first data center (called a “mainframe”) was built in 1945 to house the ENIAC
at the University of Pennsylvania.

Mainframes as the first data centers:


• Large mainframe computers (like ENIAC, IBM 701) were used for scientific and
military applications.
• These machines were huge, power-hungry, and expensive.
Environment:
• Required specialized rooms with cooling systems, raised floors, and controlled
humidity.
• Only governments, research labs, and large corporations could afford them.
Operations:
• Batch processing (punch cards and magnetic tapes).
• Centralized computing with dumb terminals (no local processing power).
---------------------------------------------------------------------------------------------------------------------
1960s and 1970s

• The development of the transistor transformed the computing industry. Bell Labs
developed the first transistorized computer, called TRADIC (Transistor Digital
Computer), in 1954.
• IBM introduced its first completely transistorized computer in 1955. The IBM 608
was half the size of a comparable system based on vacuum tubes and required 90
percent less power. It cost $83,210 in 1957.
• The smaller size and lower cost of transistorized computers made them suitable
for commercial applications. By the 1960s, data centers (or “computer rooms”)
were built in office buildings.
• The new mainframes were faster and more powerful than early machines, with
innovations such as memory and storage.
• Reliability was critical because the entire enterprise IT infrastructure ran on one
system.
• Data centers were designed to maintain ideal operating conditions, such as proper
cooling and airflow; data center downtime was already a concern in the 60s and 70s.
The Minicomputer & Early Networking Era

Transition to smaller systems:


• Minicomputers (DEC PDP, VAX series) made computing more accessible.
Networking Begins:
• ARPANET (precursor to the internet) enabled remote access and data sharing.
Data Centers in this era:
• Still centralized but with multiple smaller systems.
• Physical security became more important as more organizations adopted
computing.
---------------------------------------------------------------------------------------------------------------------
1980s and 1990s

• Minicomputers and microcomputers began to replace mainframes, and data centers
adapted accordingly. The first personal computers were introduced in 1981.
• PCs were adopted rapidly and installed everywhere, with minimal concern about
environmental conditions.
• By the early 1990s, PCs were connecting to “servers” in the client-server model. This gave
rise to the first true “data centers,” where multiple microcomputers and PC servers
replaced mainframes.
• The .com boom of the mid-1990s drove the construction of ever-larger data center facilities
with hundreds or even thousands of servers.
• VMware introduced the concept of virtualization in 1999.

Client-Server & PC Revolution (1980s)


• Shift from centralized mainframes to distributed client-server architecture.
• Clients (PCs) connected to servers for shared applications and databases.
• Rise of Local Area Networks (LANs): Ethernet made intra-office networking common.
Enterprise Data Centers:
• Companies built dedicated rooms for servers, mainframes, and networking gear.
• UPS (Uninterruptible Power Supplies) and backup generators introduced.
Examples:
• Oracle databases, IBM AS/400, Novell NetWare networks.

Dot-Com Boom & Virtualization (1990s)


Explosion of Internet & Web Services:
• Data centers had to support email, websites, and e-commerce.
Standardization of Server Racks:
• Rack-mounted x86 servers replaced proprietary systems.
• Blade servers emerged to save space.
Virtualization:
• VMware introduced virtualization in the late 1990s → multiple virtual servers on one
physical machine.
Data Center Features:
• Fire suppression, climate control, structured cabling, access control.
Growth of Colocation Facilities:
• Companies rented space in professional data centers instead of building their own.
2000s and 2010s

• The dot-com bubble peaked by March 2000 and began crashing over the next two
years. Tech companies lost funding and much of their capital investment.
• However, the buildout of the Internet backbone during the dot-com era led to a new
concept in the early 2000s — cloud services.
• Salesforce.com pioneered the concept of delivering applications via the web in
1999. Amazon Web Services began offering compute, storage, and other IT
infrastructure in 2006. This led to the buildout of ever-larger data centers to support
these cloud services.
• Those facilities grew into what are now known as hyperscale data centers, often
surpassing a million square feet and serving as the backbone for the largest
technology platforms in the world.
• By 2012, 38 percent of organizations were using cloud services. Cloud service
providers needed facilities that allowed them to scale rapidly while minimizing
operating costs. Facebook launched the Open Compute Project in 2011, providing
best practices and specifications for developing economical and energy-efficient
data centers.

Early 2000s – The Rise of Web-Scale Data Centers


Major Drivers: Google, Amazon, and Facebook.
Design Innovations:
• Commodity hardware + software redundancy (instead of expensive high-end
servers).
• Distributed computing frameworks (Google File System, MapReduce → led to
Hadoop).
Key Characteristics:
• Modular server racks.
• Redundant power & cooling.
• Advanced monitoring systems.

Cloud Computing Era (2010s)

On-Demand Computing:
• Data centers became cloud platforms (AWS, Microsoft Azure, Google Cloud).
• Pay-as-you-go model replaced fixed infrastructure investment.
Software-Defined Data Centers (SDDCs):
Virtualization extended to networking (SDN) and storage (SDS).
Characteristics:
• Multi-tenant environments.
• Global availability zones.
• Service models (IaaS, PaaS, SaaS).
Green Data Centers:
• Focus on energy efficiency (free cooling, renewable energy).
2020s and Beyond

• Data center operators face unprecedented challenges and opportunities. Rising
energy costs and sustainability initiatives are forcing them to rethink their power
and cooling models.
• At the same time, artificial intelligence, 5G cellular, and other innovative
technologies are enabling the delivery of new products and services. Data centers
must find ways to support these applications efficiently and continue to drive
down energy usage.
• Data centers have come a long way. Their use cases have evolved from top-secret
military purposes to near-ubiquity. They have their own acronyms and
terminology and are poised for even more growth as new technologies such as AI,
IoT, and 5G mature.

Modern & Future Data Centers (2020s–Present)

Hyperscale Data Centers:


• Facilities with hundreds of thousands of servers (Amazon, Google, Microsoft,
Meta).
• Designed for AI, big data, machine learning workloads.
Edge Computing:
• Small-scale data centers closer to users for low-latency applications (IoT, 5G,
autonomous vehicles).
Automation & AI in Data Centers:
• Predictive maintenance, workload balancing, and self-healing systems.
Sustainability Focus:
• Liquid cooling, carbon-neutral operations, renewable energy.
Trends:
• Containerization (Docker, Kubernetes).
• Serverless computing.
• Quantum computing integration (experimental).

Era | Key Features | Examples
1940s–1950s | Mainframes, centralized, batch processing | ENIAC, IBM 701
1960s–1970s | Minicomputers, networking begins | DEC PDP, ARPANET
1980s | Client-server, LANs, enterprise IT | Oracle DB, IBM AS/400
1990s | Internet boom, virtualization, colocation | VMware, Rack servers
2000s | Web-scale, distributed computing | Google GFS, Hadoop
2010s | Cloud computing, SDDCs, green IT | AWS, Azure, GCP
2020s–Future | Hyperscale, AI-driven, edge computing | AWS, Kubernetes, Edge nodes


Data center Components: IT Equipment and Facilities
Data Centre IT Infrastructure:

Servers

• IT servers take many forms and provide many different services and functions, but
the fundamental goal is the same: they provide a service as one side of two-way
communication between a client and a server.
• A server may be a software program connected locally to the same hardware
machine, or reached remotely via networking infrastructure.
• Servers are generally software or hardware systems designed to carry out
dedicated functions: e-mail server, web server, print server or database server.
• Within a data centre, the IT hardware used to host a software server may differ in
design, efficiency and function.
• A server may be designed to host a particular operating system (OS), and within a
data centre there may be capacity for different OSs. Each server machine consists
of the physical hardware, the OS and the software service.
• Physically, servers are the computers that host applications and services.
Types:
• Rack servers – standardized servers mounted in racks.
• Blade servers – thin, modular servers that share power and cooling.
• Mainframes & supercomputers (in specialized centers).
Functions: Run applications, manage databases, host virtual machines.

---------------------------------------------------------------------------------------------------------------------

Networking:

• The gateway machine of a data centre sits at the entrance to the data centre.
• Its primary function is protocol translation in and out of the data centre, acting as
the connection point between the data centre's internal local area network (LAN)
and the wide area network (WAN) outside the data centre – in most cases, the
Internet Service Provider's network.

Networking Equipment
• Routers – connect data center to external networks (Internet).
• Switches – connect servers, storage, and devices within the data center.
• Firewalls & Intrusion Detection Systems (IDS/IPS) – security layers.
• Load Balancers – distribute traffic across multiple servers.
• WAN Optimization & SDN (Software-Defined Networking) for performance and
flexibility.
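To make the load-balancer bullet above concrete, here is a minimal sketch in Python (purely illustrative; the backend addresses and the static "healthy" flag are assumptions, and real load balancers use active health probes and richer algorithms). It distributes incoming requests across a pool of servers in round-robin order, skipping servers that are marked unhealthy.

from itertools import cycle

# Illustrative backend pool; a real load balancer discovers and probes its backends.
servers = [
    {"host": "10.0.0.11", "healthy": True},
    {"host": "10.0.0.12", "healthy": True},
    {"host": "10.0.0.13", "healthy": False},  # assume this node failed a health check
]

pool = cycle(servers)

def next_backend():
    """Return the next healthy backend in round-robin order."""
    for _ in range(len(servers)):
        server = next(pool)
        if server["healthy"]:
            return server["host"]
    raise RuntimeError("no healthy backends available")

# Route five incoming requests across the pool.
for request_id in range(5):
    print(f"request {request_id} -> {next_backend()}")

The same idea scales up in hardware appliances and software balancers; only the request-forwarding and health-checking machinery becomes more sophisticated.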
Storage

• Data storage is a critical element of data centre design. A number of options exist,
each catering to the requirements of the other elements of the overall IT
infrastructure.
• The key differentiator between storage types lies in the way the client machine – in
our case, the data centre server – logically sees the storage medium.
• This determines how the server manages the space and which access protocols are
available for reading and writing the stored data.
• Network-attached storage (NAS) appears to the client machine as a network-based
file server able to support the Network File System (NFS) protocol.
• In a storage area network (SAN), the disk space appears local to the client machine.
This is a key point: it enables the client (our data centre servers) to use disk
management utility software to configure and optimize the space to best suit the
needs of the server application.
• Storage systems are used to store and retrieve data efficiently.

Types:

• Direct-Attached Storage (DAS) – connected directly to a server.


• Network-Attached Storage (NAS) – file-level storage over network.
• Storage Area Network (SAN) – block-level storage with high speed.
• Object Storage – scalable cloud-native storage (e.g., Amazon S3).

Examples: Dell EMC, NetApp, HPE, Hitachi.

---------------------------------------------------------------------------------------------------------------------

Virtualization

• An additional approach to compute server provision is virtualization.
• Rather than a single hardware machine supporting a single OS, which in turn hosts
a single server application (web server, mail server, etc.), a virtualized system
enables a single hardware machine running a single OS to host multiple virtual
machines, which may or may not be running the same OS.
• This leads to a single host machine running multiple virtual machines.
• Virtualization presents the opportunity to scale the service provision within a data
centre, make far more efficient use of hardware resources, reduce running costs
and reduce energy consumption.

Virtualization & Cloud Platforms


• Hypervisors (VMware ESXi, Microsoft Hyper-V, KVM) → run multiple VMs on one server.
• Containers & Orchestration (Docker, Kubernetes) → lightweight, cloud-native
applications.
• Management Software → monitors resources, automates scaling, and provisioning.
Cabling & Connectivity
• Copper & Fiber-optic cables are used for high-speed connections.
• Structured cabling systems are installed for easy management and scalability.

Backup and Disaster Recovery Equipment:


• Tape libraries, redundant storage arrays, and backup generators are essential for
data backup and disaster recovery.

---------------------------------------------------------------------------------------------------------------------

Data Centre Facility Infrastructure

Power Systems

• In terms of power, the primary difference between a normal office or home environment
and a data centre relates to the 'criticality' of the electrical load. Losing power in most
situations is nothing more than an inconvenience, whereas losing power to critical IT
services in a data centre (e.g. at a financial institution) can be extremely disruptive,
even catastrophic.

• To avoid such disruption, a data centre employs a UPS together with a battery bank to
ensure that smooth and uninterrupted power is supplied to the critical IT load.

• Power distribution units (PDUs), which usually contain electrical transformers, are also
used to condition the alternating current (AC) power and to distribute that power to the IT
equipment racks within the data centre.

• Within the IT equipment, AC power is subsequently converted to direct current (DC) power,
which is used by the individual IT components. If the electrical supply is lost, the UPS uses
the batteries to provide 'ride-through' power to the critical load.

• The objective of the ride-through power is to allow time for standby electrical
generators (usually diesel powered) to come online and carry the load until the mains
power supply is restored.

Physical Building:
• Data centers are constructed to meet stringent environmental and security requirements,
often located in secure, nondescript buildings.

Power Infrastructure:
• Data centers need robust electrical infrastructure with redundant power sources, backup
generators, and advanced power management systems.

Cooling Infrastructure:
• Precision cooling systems, such as raised floor cooling and hot/cold aisle containment,
help maintain an ideal temperature and humidity level.
Fire Suppression Systems:
• Data centers use specialized fire suppression systems, like clean agents or inert gas
systems, to protect IT equipment without causing damage.

Physical Security:
• Facilities have multiple layers of security, including access control, biometric
authentication, and security personnel.

Redundancy:
• To ensure high availability, data centers often employ redundancy in power, cooling, and
networking components.

Monitoring and Management:


• Advanced monitoring and management systems continuously track environmental
conditions, power usage, and equipment health.

Power Infrastructure
• Utility Power Supply – electricity from grid.
• Uninterruptible Power Supply (UPS) – battery backup for short outages.
• Diesel Generators – long-term backup during power failures.
• Power Distribution Units (PDUs) – distribute electricity to racks and servers.
• Redundant Power Paths (N+1, 2N) – ensure high availability.

Cooling & Environmental Control


• IT equipment generates huge amounts of heat, so cooling is critical.
• CRAC (Computer Room Air Conditioners) – regulate temperature and humidity.
• Chillers & Cooling Towers – for large-scale heat management.
• Hot/Cold Aisle Containment – separates airflow to improve cooling efficiency.
• Liquid Cooling – advanced systems for HPC and AI workloads.

Physical Infrastructure
• Racks & Cabinets – house servers, networking gear, and storage.
• Raised Floors & Overhead Cable Trays – manage airflow and cabling.
• Fire Suppression Systems – gas-based (FM200, Novec 1230) to protect equipment.
• Lighting & Flooring – designed for safety and energy efficiency.

Security Systems
• Physical Security:
▪ Biometric access control, smart cards, and mantraps.
▪ 24/7 video surveillance (CCTV).
▪ Security guards.
• Cybersecurity Integration: Firewalls, intrusion detection, DDoS protection.

Monitoring & Management Systems


• DCIM (Data Center Infrastructure Management) software:
▪ Tracks power, cooling, equipment health, and energy usage.
▪ Provides predictive analytics for preventive maintenance.

Building Management Systems (BMS) – centralizes facility monitoring.


Component Category | Examples | Purpose
Servers | Rack, Blade, Mainframe | Compute power
Storage | SAN, NAS, DAS, Object storage | Data storage & retrieval
Networking | Routers, Switches, Firewalls, Load balancers | Connectivity & security
Virtualization/Cloud | VMware, Hyper-V, Kubernetes | Efficient use of resources
Power Systems | UPS, Generators, PDUs | Continuous power supply
Cooling Systems | CRAC, Chillers, Hot/Cold aisles | Prevent overheating
Racks & Cabling | Cabinets, Fiber, Copper | Organization & airflow
Security Systems | CCTV, Biometrics, Fire suppression | Physical & cyber protection
Monitoring | DCIM, BMS | Performance & efficiency

Design Considerations: Requirements, Power, Efficiency, & Redundancy, Power Calculations, PUE and Challenges in Cloud
Requirements:

Business & workload:


• Use cases: Enterprise apps, VDI, AI/HPC, storage, CDN/edge, cloud on-ramp.
• Criticality/SLAs: Target uptime (e.g., 99.982% Tier III vs 99.995% Tier IV), RTO/RPO,
maintenance windows.
• Growth model: Initial IT load (kW), ramp curve (12–36 months), modular expansion
blocks (e.g., +250 kW blocks).
Location & risk:
• Grid reliability, energy prices, renewable mix, seismic risk, flood plains, wildfire
exposure, ambient temperatures (affects economization hours).
• Regulatory: Building/fire codes, environmental permits, data residency and compliance
(HIPAA, PCI DSS, ISO 27001, SOC 2).
Architecture:
• White space density: kW/rack (typical enterprise 3–8 kW; modern >15 kW; AI/HPC 30–
100 kW with liquid cooling).
• Topology: Single site vs. active–active regions; colo vs. owned; edge satellites; network
ingress/egress needs.
• Security: Zoning (perimeter, MMR, staging, white space), mantraps, biometrics, CCTV
retention, cyber/OT segmentation.
Scalability and Flexibility
• Data centers must be designed to accommodate future growth and changing needs. This
involves modular designs, flexible layouts, and adaptable infrastructure.
Power and Cooling:
• Adequate power and cooling systems are essential for reliable operation. This includes
robust power distribution, backup generators, and efficient cooling technologies.
High Availability and Redundancy:
• Redundancy in power, network, and cooling systems is crucial for minimizing downtime
and ensuring continuous operation.
Security and Physical Protection:
• Data centers need robust security measures, including physical access controls,
surveillance systems, and cybersecurity protocols.
Building Structure:
• The building itself must be designed to support the weight and environmental
requirements of the data center equipment.
-------------------------------------------------------------------------------------------------------------------------------
Power Efficiency:
• Energy-efficient hardware and cooling systems can significantly reduce operational costs.
• Implement virtualization and server consolidation to improve resource utilization.
• Use of energy-efficient power supplies and cooling solutions, such as hot/cold aisle
containment.
-------------------------------------------------------------------------------------------------------------------------------
Power Calculations:
• Calculate the power requirements for servers, networking equipment, and cooling
systems.
• Consider peak load scenarios to size the power infrastructure adequately.
• Use Power Usage Effectiveness (PUE) as a metric to measure and optimize energy
efficiency.
• Calculate the power consumption of servers, storage, networking devices, and other IT
equipment.
• Estimate the power needed for cooling systems (e.g., chillers, CRAC units).
• Account for lighting, power distribution, and other non-IT equipment.
• Ensure sufficient power capacity from the grid and backup power sources (e.g.,
generators, batteries).
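A rough worked sizing sketch follows (illustrative Python only; the rack counts, per-rack loads, overhead factors and UPS module size are assumptions, not a standard). It totals the IT load, applies cooling and facility overheads, and sizes UPS modules with N+1 redundancy.

import math

# Assumed per-rack IT loads (kW) for an illustrative facility.
rack_loads_kw = [6.0] * 40 + [15.0] * 10          # 40 enterprise racks + 10 high-density racks

it_load_kw = sum(rack_loads_kw)                   # critical IT load carried by the UPS
cooling_kw = it_load_kw * 0.40                    # assumed cooling overhead (40% of IT load)
other_kw = it_load_kw * 0.10                      # assumed lighting, PDU losses, offices

total_facility_kw = it_load_kw + cooling_kw + other_kw

# Size UPS modules for the IT load with N+1 redundancy.
ups_module_kw = 100.0
n = math.ceil(it_load_kw / ups_module_kw)         # modules needed to carry the load
ups_modules = n + 1                               # one spare module for redundancy

print(f"IT load: {it_load_kw:.0f} kW, total facility load: {total_facility_kw:.0f} kW")
print(f"UPS: {ups_modules} x {ups_module_kw:.0f} kW modules (N+1, N={n})")

With these assumed numbers the IT load is 390 kW, the total facility load is 585 kW, and five 100 kW UPS modules give N+1 cover for the critical load.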
-------------------------------------------------------------------------------------------------------------------------------
Cooling Efficiency:
• Optimize the data center's cooling system for efficiency.
• This might involve using free cooling, hot/cold aisle containment, or liquid cooling.
• Employ temperature and humidity control systems to maintain optimal conditions.
-------------------------------------------------------------------------------------------------------------------------------
Redundancy:
• Data centers implement redundancy in power supplies, cooling systems, network
connections, and storage to prevent single points of failure.
• A common approach is "N+1" redundancy, where there is one extra component beyond
what is needed for normal operation. For example, if four 250 kW UPS modules are
required to carry the IT load (N = 4), an N+1 design installs five, while a 2N design
installs eight across two independent paths.
• Redundancy minimizes downtime, improves reliability, and ensures continuous operation
during equipment failures.
-------------------------------------------------------------------------------------------------------------------------------
PUE (Power Usage Effectiveness):
• PUE is a metric that measures data center efficiency: PUE = Total Facility Energy ÷ IT
Equipment Energy.
• A lower PUE indicates better efficiency; the theoretical minimum is 1.0, where every watt
delivered goes to IT equipment.
• Calculate PUE regularly and strive to improve it by reducing non-IT energy consumption
(cooling, lighting, power distribution losses).
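As a quick worked example (illustrative numbers only, reusing the figures from the sizing sketch above), the snippet below computes PUE from total facility energy and IT equipment energy.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Example: 585 kWh drawn by the whole facility in an hour, 390 kWh of it by IT equipment.
print(round(pue(585, 390), 2))   # 1.5 -> for every watt of IT load, 0.5 W goes to cooling, lighting and losses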

-------------------------------------------------------------------------------------------------------------------------------

Challenges in Cloud and Cloud design:


Security and Privacy:
Data Breaches:
• Protecting sensitive data from unauthorized access and breaches is paramount,
requiring robust security measures and continuous monitoring.
Unauthorized Access:
• Preventing unauthorized access to cloud resources is crucial, often involving strong
authentication and authorization mechanisms.
Compliance:
• Meeting various regulatory and compliance requirements related to data privacy and
security can be complex and time-consuming.
Insider Threats:
• Mitigating the risk of data loss or leakage from malicious or accidental actions by
insiders is a constant concern.

Performance and Reliability:


Scalability and Performance:
• Ensuring that cloud resources can scale to meet fluctuating demands without
performance degradation is a major challenge.
High Availability:
• Guaranteeing high availability and reliability of cloud services is essential for business
continuity.
Connectivity Issues:
• Dependence on stable and reliable network connections can impact performance,
especially for latency-sensitive applications.

Cost Management:
Cost Optimization:
• Optimizing cloud spending and avoiding unexpected costs requires careful resource
management and cost monitoring.
Resource Allocation:
• Efficiently allocating and managing cloud resources to avoid over-provisioning or under-
utilization is crucial for cost control.

Multi-Cloud Complexity:
Integration and Interoperability:
• Managing applications and data across multiple cloud platforms can be complex,
requiring robust integration and interoperability solutions.
Vendor Lock-in:
• Avoiding vendor lock-in by choosing flexible and portable cloud solutions is important for
long-term success.

Other Challenges:

Lack of Expertise:
• Finding and retaining skilled cloud professionals can be a hurdle for many
organizations.
Migration Challenges:
• Migrating existing applications and data to the cloud can be a complex and time-
consuming process.
Compliance:
• Cloud environments may present unique compliance challenges, requiring organizations
to stay abreast of evolving regulations.
Sustainability:
• Organizations are increasingly considering the environmental impact of their cloud
usage and the need for sustainable cloud practices.
-------------------------------------------------------------------------------------------------------------------------------

Overcoming these challenges requires a proactive approach, including:

Developing a comprehensive cloud strategy:


• This includes defining goals, selecting appropriate cloud services, and establishing clear
security and compliance policies.
Implementing robust security measures:
• This includes using encryption, access controls, and security monitoring tools.
Optimizing resource utilization:
• This includes using auto-scaling, cost monitoring tools, and right-sizing resources.
Fostering a culture of cloud expertise:
• This includes investing in training and development for cloud professionals.
Choosing the right cloud provider:
• This includes evaluating providers based on security, reliability, cost, and support.
Data Centers, Cloud Management and Cloud Software
Deployment Considerations
Data Centers
• A data center is a physical facility where IT equipment and supporting infrastructure
is housed to deliver computing, storage, and networking services.

Components
• IT Equipment: Servers, storage, networking gear, virtualization platforms.
• Facilities Infrastructure: Power (UPS, generators), cooling (CRAC/CRAH, liquid
cooling), cabling, racks.
• Security Systems: Physical (biometrics, CCTV, fire suppression) + Cyber (firewalls,
IDS/IPS, Zero Trust).
• Monitoring & Automation: DCIM (Data Center Infrastructure Management),
telemetry, predictive AI-driven controls.

Types of Data Centers


• Enterprise Data Centers – Owned/operated by businesses.
• Colocation (Colo) – Rent rack/cage space in shared facility.
• Hyperscale Data Centers – Run by cloud giants (AWS, Azure, Google).
• Edge Data Centers – Small, distributed centers for low latency (5G, IoT).

Key Design Considerations


• Availability/Uptime: Tier I–IV standards.
• Redundancy: N+1, 2N for power/cooling.
• Efficiency: PUE (Power Usage Effectiveness), WUE (Water Usage Effectiveness).
• Scalability: Modular design for growth.
• Sustainability: Renewable energy, liquid cooling, carbon neutrality.

Data Centers in Cloud Computing

• A Data Center is the physical backbone of cloud computing. Cloud platforms
(AWS, Azure, Google Cloud, etc.) run their services on globally distributed data
centers. While the cloud is virtual, it relies heavily on physical data centers for
storage, processing, and networking.
• A Data Center in Cloud Computing is a centralized physical facility that houses
servers, storage systems, networking equipment, and infrastructure to support
cloud services like IaaS, PaaS, SaaS.
• It is where virtual resources (VMs, containers, serverless functions) are actually
hosted.

Role of Data Centers in Cloud


In cloud computing, data centers:

• Host cloud services (compute, storage, networking, AI/ML, databases).


• Provide scalability (elastic resources for millions of users).
• Ensure reliability (redundant power, cooling, failover systems).
• Enable global access (cloud providers deploy data centers worldwide for low-
latency services).
• Support security & compliance (firewalls, IDS/IPS, encryption, regulatory
certifications).

Key Components of Cloud Data Centers

• Compute Resources → Virtualized servers, GPUs for AI/ML.


• Storage Systems → Object storage (S3, Blob), block storage, distributed file
systems.
• Networking Equipment → High-speed switches, routers, load balancers, SDN
(Software-Defined Networking).
• Power & Cooling → UPS, generators, CRAC/CRAH units, liquid cooling.
• Security → Biometric access, CCTV, Zero Trust architecture, fire suppression.
• Automation & Monitoring → DCIM, AI-based predictive maintenance.

Data Centers & Cloud Service Models

• IaaS (Infrastructure as a Service): VMs, storage, networks run inside the data
center (e.g., AWS EC2, Azure VMs).
• PaaS (Platform as a Service): Databases, runtimes, middleware hosted in data
centers (e.g., Google App Engine).
• SaaS (Software as a Service): End-user apps (e.g., Salesforce, Office 365)
delivered from cloud data centers.

Challenges in Cloud Data Centers

• High Power Consumption → Huge energy costs.


• Cooling Requirements → Rising demand due to AI/ML workloads.
• Security Risks → Physical + cyber threats.
• Data Privacy → Compliance with GDPR, HIPAA, etc.
• Vendor Lock-In → Migration across providers is difficult.
---------------------------------------------------------------------------------------------------------------------
Cloud Management
• Cloud Management is the process of monitoring, administering, and controlling
cloud computing resources, services, and applications.
• It ensures that cloud environments (public, private, or hybrid) operate efficiently,
securely, and cost-effectively.

Objectives of Cloud Management


• Resource Optimization – Efficient use of compute, storage, and networking.
• Cost Management – Track and optimize spending (pay-as-you-go billing).
• Security & Compliance – Enforce policies, protect data, meet regulations.
• Automation & Orchestration – Reduce manual work, speed up deployments.
• Monitoring & Performance – Ensure services run with minimal downtime.
• Scalability – Dynamically allocate resources based on workload demand.

Key Functions of Cloud Management

Provisioning & Orchestration


• Deploying virtual machines, containers, databases, and applications.
• Tools: Terraform, Ansible, Kubernetes.
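As a small provisioning sketch (assuming the official Kubernetes Python client is installed and a kubeconfig with cluster access is available; the image name, labels and replica count are arbitrary examples), the code below declares and creates a two-replica nginx Deployment.

from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Submit the desired state; the cluster's controllers do the actual provisioning.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Tools such as Terraform and Ansible follow the same declarative pattern: you describe the desired resources and the orchestrator reconciles reality against that description.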

Monitoring & Performance Tracking


• Metrics for CPU, memory, storage, and network utilization.
• Tools: Prometheus, Grafana, CloudWatch (AWS), Azure Monitor.
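A minimal monitoring sketch, assuming the prometheus_client Python library is installed (the metric names, port and random values are placeholders): it exposes CPU and memory gauges on an HTTP endpoint that a Prometheus server could scrape.

import random
import time

from prometheus_client import Gauge, start_http_server

cpu_gauge = Gauge("host_cpu_utilization_percent", "CPU utilization of a monitored host")
mem_gauge = Gauge("host_memory_used_bytes", "Memory in use on a monitored host")

start_http_server(9100)  # metrics exposed at http://localhost:9100/metrics

while True:
    # In a real agent these values would come from the OS; random values stand in here.
    cpu_gauge.set(random.uniform(5, 95))
    mem_gauge.set(random.uniform(2e9, 8e9))
    time.sleep(15)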

Cost & Billing Management


• Usage tracking, cost forecasting, and chargeback to departments.
• Tools: CloudHealth, AWS Cost Explorer.

Security & Identity Management


• Role-based access control (RBAC), multi-factor authentication (MFA), Zero Trust.
• Tools: IAM (AWS), Azure Active Directory, Okta.

Backup & Disaster Recovery (DR)


• Automated backups, replication across regions, failover strategies.
• Tools: Veeam, Rubrik, native cloud snapshots.

Automation & Self-Service


• Users can self-provision resources without IT intervention.
• DevOps CI/CD pipelines integrated with cloud platforms.

Compliance & Governance


• Enforce policies for GDPR, HIPAA, ISO 27001, PCI-DSS, etc.
• Cloud policy enforcement frameworks (Azure Policy, AWS Config).

Cloud Software Deployment Considerations


Architecture & Design
• Choose cloud-native architectures (microservices, containers, serverless).
• Ensure stateless design for scalability.
• Use APIs & service mesh for inter-service communication.
• Plan for disaster recovery (DR) with multi-region deployments.

Scalability & Performance


• Use auto-scaling (horizontal/vertical scaling).
• Load balancing across multiple servers or availability zones.
• Optimize applications for latency and throughput (CDN, edge computing).
• Capacity planning → Ensure resources are sufficient for peak loads.
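The scaling decision itself can be sketched in a few lines (purely illustrative thresholds and limits; real autoscalers in AWS, Azure, GCP or Kubernetes use richer policies such as cooldowns and multiple metrics).

def desired_instances(current: int, cpu_percent: float,
                      scale_out_at: float = 75.0, scale_in_at: float = 30.0,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Simple horizontal auto-scaling rule based on average CPU utilization."""
    if cpu_percent > scale_out_at:
        current += 1            # add capacity under load
    elif cpu_percent < scale_in_at:
        current -= 1            # release capacity when idle
    return max(min_instances, min(max_instances, current))

print(desired_instances(current=4, cpu_percent=82.0))  # -> 5
print(desired_instances(current=4, cpu_percent=12.0))  # -> 3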
Security & Compliance
• Implement Identity and Access Management (IAM) with RBAC/ABAC.
• Apply encryption (at rest and in transit).
• Use firewalls, WAFs, and intrusion detection/prevention systems.
• Meet compliance standards (GDPR, HIPAA, ISO 27001, PCI-DSS).
• Regular vulnerability scanning and patch management.

Automation & CI/CD (Continuous Integration and Continuous Deployment)


• Automate deployments with Infrastructure as Code (IaC) (Terraform, Ansible,
CloudFormation).
• Use CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, Azure DevOps).
• Blue-Green / Canary deployments for zero-downtime releases.
• Container orchestration with Kubernetes, Docker Swarm, or OpenShift.

Monitoring & Observability


• Real-time monitoring of CPU, memory, storage, and network.
• Centralized logging (ELK Stack, Splunk, CloudWatch, Azure Monitor).
• Distributed tracing (Jaeger, Zipkin).
• Proactive alerts and predictive analytics with AI/ML.

Cost Management
• Choose the right pricing model (On-demand, Reserved, Spot instances).
• Implement cost monitoring tools (CloudHealth, AWS Cost Explorer).
• Right-size resources to avoid over-provisioning.
• Plan for scalability vs. budget trade-offs.

Reliability & Redundancy


• Deploy across multiple availability zones/regions.
• Implement backup and disaster recovery strategies.
• Design for fault tolerance (redundant servers, network paths, storage).
• Use content delivery networks (CDNs) for global reach.

Data Management
• Decide on databases (SQL, NoSQL, managed DBaaS like AWS RDS, Firestore,
Cosmos DB).
• Implement data replication & sharding for scalability.
• Ensure data residency compliance (storing data in specific regions).
• Backup policies & lifecycle management for archival.
Virtualization: Introduction to virtualization, Types of
Virtualizations, Pros and cons of virtualization, Virtualization
applications in enterprises: Server virtualization, Desktop and
Application Virtualization, Storage and Network Virtualization.

• Virtualization, in computing, refers to the act of creating a virtual (rather than actual)
version of something, including but not limited to a virtual computer hardware platform,
operating system (OS), storage device, or computer network resources.
• Virtualization is a technology that allows multiple virtual instances of computing resources,
like servers, storage, and networks, to run on a single physical machine.
• It essentially creates a layer of abstraction between the physical hardware and the
operating systems and applications, enabling them to function as if they were on separate
dedicated machines.
• This leads to increased resource utilization, improved flexibility, and reduced costs.
Types of Virtualizations, Virtualization applications in enterprises

Application Virtualization:
Application virtualization enables remote access: users interact directly with deployed
applications without installing them on their local machines.
Personal data and the application's settings are stored on the server, but the application can
still be used locally, delivered over the internet. This is useful when you need to work with
multiple versions of the same software. Common examples include hosted or packaged apps.
Working:
• An application is packaged into a virtual container with all its dependencies (DLLs,
registry entries, config files).
• When the user launches the app, the virtualization layer intercepts calls between
the application and the OS.
• To the user, it looks like a normal application, but in reality, it runs in a
sandboxed/virtualized environment.
• The app can be streamed from a server or run locally from the container.
Types:
Server-Based Application Virtualization
• Applications are installed on a centralized server.
• Users access them remotely through thin clients or remote display protocols (RDP,
ICA).
• Example: Citrix Virtual Apps (XenApp), Microsoft RemoteApp.
Streaming Application Virtualization
• Application is streamed on demand from a server.
• Only the necessary parts are downloaded when needed.
• Example: Microsoft App-V (Application Virtualization).
Network Virtualization:
• This allows multiple virtual networks to run on the same physical network, each operating
independently. You can quickly set up virtual switches, routers, firewalls, and VPNs,
making network management more flexible and efficient.
Working:
• A virtualization layer (software-defined networking controller or hypervisor) sits between
the physical hardware and virtual networks.
• Each virtual network behaves as if it were a completely separate physical network.
• Administrators can create, modify, or delete virtual networks without changing the
underlying hardware.
• Technologies like VLANs, VXLANs, GRE tunnels, and SDN (Software-Defined
Networking) are often used.
Types:
External Network Virtualization
• Combines multiple physical networks into a single logical network.
• Uses technologies like VLAN (Virtual LAN) and VPN (Virtual Private Network).
• Example: Enterprises connecting multiple branch networks securely.
Internal Network Virtualization
• Provides virtual networks inside a single server or data center.
• Virtual switches and routers connect VMs within a host or cluster.
• Example: VMware vSphere Distributed Switch, Microsoft Hyper-V Virtual Switch.
Key Components of Network Virtualization
• Virtual Switch (vSwitch) → Connects virtual machines (VMs) to each other or to the
physical network.
• Virtual Router → Provides routing between virtual networks.
• Virtual Firewall → Provides security and traffic filtering.
• Software-Defined Networking (SDN) → Centralized control plane separates network
management from hardware.
• Overlay Protocols → VXLAN, GRE, NVGRE for creating virtual networks over physical
networks.
Desktop Virtualization:

• Desktop virtualization is a process by which you can create virtual desktops that
users can access from any device, such as a laptop or tablet.
• It is great for users who need flexibility, as it simplifies software updates and
provides portability.

Working:

• A virtual machine (VM) runs the desktop OS (like Windows, Linux) on a server in
a data center or cloud.

• Users connect to the virtual desktop through a remote display protocol (like
Microsoft RDP, Citrix ICA, VMware Blast).

• Input/output (keyboard, mouse, display) happens on the user’s device, but all
computing and storage happen on the server.

Types:

Virtual Desktop Infrastructure (VDI)

• Each user gets a dedicated VM running a full desktop OS.

• Hosted in a centralized server or cloud.

• Example: VMware Horizon, Citrix Virtual Apps and Desktops, Microsoft AVD.

Remote Desktop Services (RDS) / Session-based

• Multiple users share the same OS instance on a server.

• Each user gets an isolated desktop session.

• Example: Microsoft Remote Desktop Services.

Desktop-as-a-Service (DaaS)
• Cloud providers host virtual desktops as a service.
• Users pay subscription fees instead of managing infrastructure.
• Example: Amazon Workspaces, Microsoft Azure Virtual Desktop, Citrix Cloud.
Storage Virtualization:

• This combines storage from different servers into a single system, making it easier
to manage. It ensures smooth performance and efficient operations even when the
underlying hardware changes or fails.

Working:

• A virtualization layer (software or hardware) sits between the physical storage devices
and the applications/servers.

• It hides the complexity of the underlying hardware and presents virtual storage volumes
(logical units).

• Administrators can allocate, resize, and manage storage dynamically.

Types:

Block-Level Virtualization

• Virtualizes data at the block level (used by storage area networks – SAN).

• Applications see logical blocks of data rather than physical disks.

• Example: IBM SAN Volume Controller, Dell EMC VPLEX.

File-Level Virtualization

• Virtualizes storage at the file system level.

• Allows files to be accessed and managed across multiple file servers.

• Example: Network Attached Storage (NAS), DFS (Distributed File System).
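To show the idea behind block-level virtualization conceptually (a toy Python model, not how a real SAN controller or volume manager works; the 512-byte block size and disk sizes are arbitrary assumptions), the sketch below presents several small "physical disks" as one contiguous logical volume and maps logical block addresses back to the right disk.

class LogicalVolume:
    """Concatenates several physical disks into one logical block address space."""

    def __init__(self, disk_sizes_in_blocks):
        self.disks = [bytearray(size * 512) for size in disk_sizes_in_blocks]  # 512-byte blocks
        self.sizes = disk_sizes_in_blocks

    def _locate(self, logical_block):
        # Walk the disks until we find which one holds this logical block.
        for disk_index, size in enumerate(self.sizes):
            if logical_block < size:
                return disk_index, logical_block
            logical_block -= size
        raise IndexError("logical block out of range")

    def write_block(self, logical_block, data: bytes):
        disk_index, offset = self._locate(logical_block)
        self.disks[disk_index][offset * 512:(offset + 1) * 512] = data.ljust(512, b"\0")

    def read_block(self, logical_block) -> bytes:
        disk_index, offset = self._locate(logical_block)
        return bytes(self.disks[disk_index][offset * 512:(offset + 1) * 512])

# Two 100-block disks appear to the "server" as a single 200-block volume.
vol = LogicalVolume([100, 100])
vol.write_block(150, b"hello SAN")     # actually lands on block 50 of the second disk
print(vol.read_block(150)[:9])         # b'hello SAN'

The server only ever sees the logical volume; the virtualization layer decides where the data physically lives.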


Server Virtualization:
This splits a physical server into multiple virtual servers, each functioning independently.
It helps improve performance, cut costs, and simplify tasks such as server migration and
energy management.
Working:
• A physical server has CPU, memory, storage, and networking resources.
• The hypervisor (virtualization layer) abstracts these resources and allocates them
to virtual machines (VMs).
• Each VM behaves like an independent physical server, with its own OS and
applications.
• Multiple VMs can run simultaneously on the same physical machine.
Types:

Full Virtualization
• Each VM runs its own unmodified OS.
• Hypervisor handles all hardware calls.
• Example: VMware ESXi, Microsoft Hyper-V.
Para-Virtualization
• The guest OS is modified to be aware it is running in a virtualized environment.
• Improves performance by reducing hypervisor overhead.
• Example: Xen Hypervisor.
OS-Level Virtualization (Containerization)
• No hypervisor is used. Instead, the host OS creates isolated environments
(containers).
• Containers share the same OS kernel but run apps in isolation.
• Example: Docker, LXC, Kubernetes
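For OS-level virtualization, here is a minimal sketch using the Docker SDK for Python (assuming the docker package is installed and a local Docker daemon is running; the image and command are arbitrary examples). It starts an isolated container that shares the host kernel, which is what makes containers lighter than full virtual machines.

import docker

client = docker.from_env()   # connect to the local Docker daemon

# Run a short-lived container; it gets its own filesystem, process and network namespaces
# but shares the host OS kernel, unlike a hypervisor-based VM.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,
)
print(output.decode().strip())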
Data Virtualization:

• This brings data from different sources together in one place without needing to
know where or how it’s stored. It creates a unified view of the data, which can be
accessed remotely via cloud services.

Working:

• A data virtualization layer (middleware) connects to multiple heterogeneous data
sources (SQL, NoSQL, cloud storage, APIs, spreadsheets, etc.).

• It abstracts and integrates this data into a single unified virtual database.

• Users can query and analyze data in real time using SQL, BI tools, or applications,
without worrying about where the data resides.

• No need for ETL (Extract, Transform, Load) into a data warehouse unless
necessary.

Key Characteristics

• Abstraction → Users don’t need to know data source details.

• Real-time Access → Query data on demand without replication.

• Heterogeneous Integration → Supports multiple data types (structured, semi-structured,
unstructured).

• Security & Governance → Centralized control over data access policies.

• Minimal Data Movement → Unlike ETL, data remains in its original source.
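A tiny federation sketch (plain Python, with hypothetical in-memory "sources" standing in for a SQL database and a REST API; the customer and order records are made up) shows the core idea: queries are answered by reaching into each source at request time, with no copy into a warehouse.

# Hypothetical source adapters; real ones would wrap a SQL driver, a REST client, etc.
def customers_from_sql():
    return [{"id": 1, "name": "Asha", "region": "EU"},
            {"id": 2, "name": "Ravi", "region": "APAC"}]

def orders_from_api():
    return [{"customer_id": 1, "total": 120.0},
            {"customer_id": 1, "total": 80.0},
            {"customer_id": 2, "total": 300.0}]

def customer_spend(region=None):
    """Unified, on-demand view joining both sources without moving data into a warehouse."""
    customers = [c for c in customers_from_sql() if region is None or c["region"] == region]
    orders = orders_from_api()
    return {c["name"]: sum(o["total"] for o in orders if o["customer_id"] == c["id"])
            for c in customers}

print(customer_spend(region="EU"))   # {'Asha': 200.0}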
Pros and cons of virtualization

Advantages (Pros) of Virtualization:

Reduced Costs:
• Running multiple virtual machines on a single physical server minimizes hardware
requirements, lowering both initial purchase costs and ongoing expenses like
power and cooling.
Improved Hardware/Resource Utilization:
• Virtualization allows for more efficient use of resources by distributing workloads
across available hardware, preventing underutilized servers.
• Virtualization allows multiple virtual machines (VMs) to run on a single physical
server.
• Better use of CPU, memory, and storage instead of leaving hardware underutilized.
Increased Agility and Flexibility:
• Virtual machines can be quickly provisioned, cloned, and moved, enabling faster
deployment of new applications and easier adaptation to changing business
needs.
Simplified & Improved Disaster Recovery:
• Virtualization makes it easier to create backups and restore systems in case of
failure, ensuring business continuity.
Enhanced Security:
• Virtualization can create isolated environments for testing and development,
reducing the risk of impacting the main system.
Flexibility & Scalability
• It is easy to create, clone, or delete virtual machines as needed.
• Can run different operating systems (Windows, Linux, etc.) on the same hardware.
Simplified Management
• Centralized tools (like VMware vCenter, Hyper-V Manager) manage all VMs.
• Automation improves efficiency.
Disadvantages (Cons) of Virtualization:

Performance Overhead
• VMs share hardware resources.
• Some overhead in CPU, memory, and I/O due to hypervisor management.
• Not ideal for high-performance computing (HPC) or real-time applications.
Single Point of Failure
• If the physical server crashes, all hosted VMs go down.
• Needs redundancy and failover mechanisms to avoid downtime.
Complexity in Management
• Large virtualized environments require skilled administrators.
• Needs proper monitoring to prevent “VM sprawl” (too many unmanaged VMs).
Licensing & Compliance Issues
• Some OS/software vendors have strict licensing rules for virtual environments.
• May increase costs if not planned properly.
Security Risks
• Hypervisor vulnerabilities can compromise multiple VMs.
• Shared resources increase risks if not isolated properly.
Resource Contention
• When many VMs compete for CPU, memory, or storage, performance drops.
• Requires capacity planning.
Initial Setup Costs
• Though cost-saving in the long run, initial investment in servers, hypervisors, and
management tools can be high.
