Unit.2 CT (Notes)
Cloud Infrastructure & Virtualization
• The Electronic Numerical Integrator and Computer (ENIAC) was the first electronic
digital programmable general-purpose computer. The U.S. military designed the
ENIAC to calculate artillery firing tables. However, it was not completed until late
1945. Its first program was a thermonuclear weapon feasibility study.
• The ENIAC was large, weighing more than 27 tons and occupying about 1,800 square feet
of space. Its thousands of vacuum tubes, resistors, capacitors, relays, and crystal
diodes could only perform numeric calculations. Punch cards served as data
storage.
• The first data center (called a “mainframe”) was built in 1945 to house the ENIAC
at the University of Pennsylvania.
• The development of the transistor transformed the computing industry. Bell Labs
developed the first transistorized computer, called TRADIC (Transistor Digital
Computer), in 1954.
• IBM introduced its first completely transistorized computer in 1955. The IBM 608
was half the size of a comparable system based on vacuum tubes and required 90
percent less power. It cost $83,210 in 1957.
• The smaller size and lower cost of transistorized computers made them suitable
for commercial applications. By the 1960s, data centers (or “computer rooms”)
were built in office buildings.
• The new mainframes were faster and more powerful than early machines, with
innovations such as memory and storage.
• Reliability was critical because the entire enterprise IT infrastructure ran on one
system.
• Data centers were designed to ensure ideal operating conditions, with careful attention
to cooling and airflow; data center downtime was a concern even in the 1960s and
1970s.
The Dot-Com Era & Rise of Cloud Services
• The dot-com bubble peaked by March 2000 and began crashing over the next two
years. Tech companies lost funding and much of their capital investment.
• However, the buildout of the Internet backbone during the dot-com era led to a new
concept in the early 2000s — cloud services.
• Salesforce.com pioneered the concept of delivering applications via the web in
1999. Amazon Web Services began offering compute, storage, and other IT
infrastructure in 2006. This led to the buildout of ever-larger data centers to support
these cloud services.
• Those facilities grew into what are now known as hyperscale data centers, often
surpassing a million square feet and serving as the backbone for the largest
technology platforms in the world.
• By 2012, 38 percent of organizations were using cloud services. Cloud service
providers needed facilities that allowed them to scale rapidly while minimizing
operating costs. Facebook launched the Open Compute Project in 2011, providing
best practices and specifications for developing economical and energy-efficient
data centers.
On-Demand Computing:
• Data centers became cloud platforms (AWS, Microsoft Azure, Google Cloud).
• Pay-as-you-go model replaced fixed infrastructure investment.
Software-Defined Data Centers (SDDCs):
Virtualization extended to networking (SDN) and storage (SDS).
Characteristics:
• Multi-tenant environments.
• Global availability zones.
• Service models (IaaS, PaaS, SaaS).
Green Data Centers:
• Focus on energy efficiency (free cooling, renewable energy).
2020s and Beyond
Servers
• IT servers take many forms and provide many different services and functions, but
the fundamental goal is the same: they provide a service as one side of two-way
communication between a client and a server.
• A server may be a software program accessed locally on the same hardware
machine, or remotely via networking infrastructure.
• Servers are generally software or hardware systems designed to carry out
dedicated functions: e-mail server, Web server, print server or database server.
• Within a data centre, the IT hardware used to host a software server may differ in
design, efficiency and function.
• A server may be designed to host a particular operating system (OS), and within a
data centre there may be capacity for different OSs. Each server machine consists
of the physical hardware, the OS and the software service.
• Servers are physical computers that host applications and services.
Types:
• Rack servers – standardized servers mounted in racks.
• Blade servers – thin, modular servers that share power and cooling.
• Mainframes & supercomputers (in specialized centers).
Functions: Run applications, manage databases, host virtual machines.
---------------------------------------------------------------------------------------------------------------------
Networking:
• The gateway machine of a data centre sits at the entrance to the data centre.
• Its primary function is protocol translation in and out of the data centre, acting as
the connection point between the data centre's internal local area network (LAN)
and the wide area network (WAN) outside of the data centre – in most cases, the
Internet Service Provider's network.
Networking Equipment
• Routers – connect data center to external networks (Internet).
• Switches – connect servers, storage, and devices within the data center.
• Firewalls & Intrusion Detection Systems (IDS/IPS) – security layers.
• Load Balancers – distribute traffic across multiple servers.
• WAN Optimization & SDN (Software-Defined Networking) for performance and
flexibility.
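The load-balancer role listed above can be sketched in a few lines. Round-robin is one common distribution policy; the backend names below are hypothetical, and a real load balancer would also health-check and actually forward traffic:

```python
# Minimal sketch of round-robin load balancing across backend servers.
# Illustrative only: real load balancers also health-check and forward traffic.
from itertools import cycle

class RoundRobinBalancer:
    """Cycles incoming requests across a fixed pool of backend servers."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request):
        backend = next(self._pool)   # pick the next server in turn
        return backend               # a real LB would forward the request here

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
targets = [lb.route(f"req-{i}") for i in range(6)]
print(targets)  # each server receives an equal share of the six requests
```

With three backends and six requests, each server is chosen exactly twice, which is the even spread the round-robin policy is designed to give.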
Storage
• Data storage is a critical element of data centre design. A number of options exist,
each of which caters to the requirements of the other elements in the overall IT
infrastructure.
• The key differentiator between storage types lies in the way the client machine – in
our case, the data centre server – logically sees the storage medium.
• This plays a part in how the server manages the space and in the access protocols
available for accessing the stored data.
• Network-attached storage (NAS) appears to the client machine as a network-
based file server able to support the Network File System (NFS) protocol.
• In a SAN, the disc space appears as local to the client machine. This is a key point
that enables the client (our data Centre servers) to use disc management utility
software to configure and optimize the space to best suit the needs of the server
application.
• Storage systems are used to store and retrieve data efficiently.
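The file-level (NAS) versus block-level (SAN) distinction above can be sketched from the client's point of view. In this hedged sketch, a temporary local file stands in for the network share and for the raw "LUN"; the file names are made up:

```python
# Sketch of the client's view of storage: file-level (NAS-like) access via
# paths on a file server vs block-level (SAN-like) access by block offset.
# A temporary local directory/file stands in for the share and block device.
import os, tempfile

BLOCK_SIZE = 512
share = tempfile.mkdtemp()

# NAS-like: the client sees named files on a network file server (e.g. NFS).
path = os.path.join(share, "report.txt")
with open(path, "w") as f:
    f.write("quarterly figures")
with open(path) as f:
    nas_payload = f.read()

# SAN-like: the client sees a raw disc and addresses it by block offset,
# leaving layout and optimisation to its own disc-management software.
disk = os.path.join(share, "lun0.img")
with open(disk, "wb") as f:
    f.write(b"\x00" * (BLOCK_SIZE * 4))     # a tiny 4-block "LUN"
with open(disk, "r+b") as f:
    f.seek(2 * BLOCK_SIZE)                  # jump straight to block 2
    f.write(b"raw block data")
    f.seek(2 * BLOCK_SIZE)
    san_payload = f.read(14)

print(nas_payload, san_payload)
```

The point of the contrast: the NAS client works with file names and lets the server manage layout, while the SAN client addresses raw blocks itself, which is what lets it run its own disc-management utilities over the space.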
Types:
---------------------------------------------------------------------------------------------------------------------
Virtualization
---------------------------------------------------------------------------------------------------------------------
Power Systems
• In terms of power, the primary difference between a normal office or home environment
and a data centre relates to the 'criticality' of the electrical load. Losing power in most
situations is nothing more than an inconvenience, whereas losing power to critical data
centre IT services (e.g. in the case of a financial institution) can be extremely disruptive,
even catastrophic.
• To avoid such disruption, a data centre employs a UPS together with a battery bank to
ensure that smooth and uninterrupted power is supplied to the critical IT load.
• Power distribution units (PDUs), which usually contain electrical transformers, are also
used to smooth the alternating current (AC) power and to distribute that power to the IT
equipment racks within the data centre.
• Within the IT equipment, AC power is subsequently converted to the direct current (DC)
power utilized by the individual IT components. If the electrical supply is lost, the UPS
uses the batteries to provide 'ride-through' power to the critical load.
• The objective of the ride-through power is to allow time for backup electrical
generators (usually diesel powered) to come online and supply the load until the mains
power supply is restored.
Physical Building:
• Data centers are constructed to meet stringent environmental and security requirements,
often located in secure, nondescript buildings.
Power Infrastructure:
• Data centers need robust electrical infrastructure with redundant power sources, backup
generators, and advanced power management systems.
Cooling Infrastructure:
• Precision cooling systems, such as raised floor cooling and hot/cold aisle containment,
help maintain an ideal temperature and humidity level.
Fire Suppression Systems:
• Data centers use specialized fire suppression systems, like clean agents or inert gas
systems, to protect IT equipment without causing damage.
Physical Security:
• Facilities have multiple layers of security, including access control, biometric
authentication, and security personnel.
Redundancy:
• To ensure high availability, data centers often employ redundancy in power, cooling, and
networking components.
Power Infrastructure
• Utility Power Supply – electricity from grid.
• Uninterruptible Power Supply (UPS) – battery backup for short outages.
• Diesel Generators – long-term backup during power failures.
• Power Distribution Units (PDUs) – distribute electricity to racks and servers.
• Redundant Power Paths (N+1, 2N) – ensure high availability.
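The value of redundant power paths (N+1, 2N) can be quantified with the standard parallel-availability formula: if each independent path is available a fraction A of the time, the load loses power only when all paths fail at once. The 99% single-path figure below is an illustrative assumption, not a measured value:

```python
# Availability of n redundant (parallel, independent) power paths:
#   A_parallel = 1 - (1 - A_single)**n
# A_single = 0.99 is an assumed, illustrative per-path availability.

def parallel_availability(a: float, n: int) -> float:
    """All n paths must fail simultaneously for the load to lose power."""
    return 1 - (1 - a) ** n

A_single = 0.99
for n in (1, 2):   # N (single path) vs 2N (fully duplicated path)
    print(f"{n} path(s): {parallel_availability(A_single, n):.4%} available")
```

Under this assumption, duplicating the path (2N) takes availability from roughly 99% to roughly 99.99%, which is why redundancy, not a single very good component, is the usual route to high availability.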
Physical Infrastructure
• Racks & Cabinets – house servers, networking gear, and storage.
• Raised Floors & Overhead Cable Trays – manage airflow and cabling.
• Fire Suppression Systems – gas-based (FM200, Novec 1230) to protect equipment.
• Lighting & Flooring – designed for safety and energy efficiency.
Security Systems
• Physical Security:
▪ Biometric access control, smart cards, and mantraps.
▪ 24/7 video surveillance (CCTV).
▪ Security guards.
• Cybersecurity Integration: Firewalls, intrusion detection, DDoS protection.
Component | Examples | Purpose
Storage | SAN, NAS, DAS, Object storage | Data storage & retrieval
Security Systems | CCTV, Biometrics, Fire suppression | Physical & cyber protection
Requirements:
-------------------------------------------------------------------------------------------------------------------------------
Cost Management:
Cost Optimization:
• Optimizing cloud spending and avoiding unexpected costs requires careful resource
management and cost monitoring.
Resource Allocation:
• Efficiently allocating and managing cloud resources to avoid over-provisioning or under-
utilization is crucial for cost control.
Multi-Cloud Complexity:
Integration and Interoperability:
• Managing applications and data across multiple cloud platforms can be complex,
requiring robust integration and interoperability solutions.
Vendor Lock-in:
• Avoiding vendor lock-in by choosing flexible and portable cloud solutions is important for
long-term success.
Other Challenges:
Lack of Expertise:
• Finding and retaining skilled cloud professionals can be a hurdle for many
organizations.
Migration Challenges:
• Migrating existing applications and data to the cloud can be a complex and time-
consuming process.
Compliance:
• Cloud environments may present unique compliance challenges, requiring organizations
to stay abreast of evolving regulations.
Sustainability:
• Organizations are increasingly considering the environmental impact of their cloud
usage and the need for sustainable cloud practices.
-------------------------------------------------------------------------------------------------------------------------------
Components
• IT Equipment: Servers, storage, networking gear, virtualization platforms.
• Facilities Infrastructure: Power (UPS, generators), cooling (CRAC/CRAH, liquid
cooling), cabling, racks.
• Security Systems: Physical (biometrics, CCTV, fire suppression) + Cyber (firewalls,
IDS/IPS, Zero Trust).
• Monitoring & Automation: DCIM (Data Center Infrastructure Management),
telemetry, predictive AI-driven controls.
• IaaS (Infrastructure as a Service): VMs, storage, networks run inside the data
center (e.g., AWS EC2, Azure VMs).
• PaaS (Platform as a Service): Databases, runtimes, middleware hosted in data
centers (e.g., Google App Engine).
• SaaS (Software as a Service): End-user apps (e.g., Salesforce, Office 365)
delivered from cloud data centers.
Cost Management
• Choose the right pricing model (On-demand, Reserved, Spot instances).
• Implement cost monitoring tools (CloudHealth, AWS Cost Explorer).
• Right-size resources to avoid over-provisioning.
• Plan for scalability vs. budget trade-offs.
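Choosing between pricing models often comes down to a break-even calculation on expected utilization. The prices below are made-up placeholders, not real cloud rates:

```python
# Break-even analysis: on-demand vs reserved pricing for one instance.
# Both prices are hypothetical placeholders, not real provider rates.
ON_DEMAND_PER_HOUR = 0.10    # pay only for hours actually used
RESERVED_PER_MONTH = 50.0    # flat monthly fee regardless of usage

def cheaper_option(hours_per_month: float) -> str:
    """Return whichever pricing model costs less at this utilization."""
    on_demand_cost = ON_DEMAND_PER_HOUR * hours_per_month
    return "reserved" if RESERVED_PER_MONTH < on_demand_cost else "on-demand"

break_even = RESERVED_PER_MONTH / ON_DEMAND_PER_HOUR   # hours/month
print(f"break-even at ~{break_even:.0f} h/month;",
      cheaper_option(200), "for 200 h;", cheaper_option(700), "for 700 h")
```

The same comparison generalizes to spot instances (cheaper still, but interruptible), which is why monitoring actual utilization is a prerequisite for picking the right model.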
Data Management
• Decide on databases (SQL, NoSQL, managed DBaaS like AWS RDS, Firestore,
Cosmos DB).
• Implement data replication & sharding for scalability.
• Ensure data residency compliance (storing data in specific regions).
• Backup policies & lifecycle management for archival.
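The replication-and-sharding bullet above can be sketched with a simple hash-based shard router. The shard names, key format, and "next shard holds the replica" rule are all illustrative assumptions:

```python
# Hash-based sharding sketch: each key maps deterministically to one shard,
# and each shard's data is mirrored to one replica for redundancy.
# Shard names and the replica-placement rule are made up for illustration.
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2"]

def shard_for(key: str) -> str:
    # Stable hash, so every client routes the same key to the same shard.
    digest = hashlib.sha256(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

def replicas_for(key: str) -> list[str]:
    primary = SHARDS.index(shard_for(key))
    # Simple rule: the replica lives on the next shard in the ring.
    return [SHARDS[primary], SHARDS[(primary + 1) % len(SHARDS)]]

print(shard_for("user:42"), replicas_for("user:42"))
```

Sharding spreads write load across machines (scalability), while replication keeps a second copy alive if one shard fails (availability); production systems layer consistency protocols on top of this routing.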
Virtualization: Introduction to virtualization, Types of
Virtualizations, Pros and cons of virtualization, Virtualization
applications in enterprises: Server virtualization, Desktop and
Application Virtualization, Storage and Network Virtualization.
• Virtualization, in computing, refers to the act of creating a virtual (rather than actual)
version of something, including but not limited to a virtual computer hardware platform,
operating system (OS), storage device, or computer network resources.
• Virtualization is a technology that allows multiple virtual instances of computing resources,
like servers, storage, and networks, to run on a single physical machine.
• It essentially creates a layer of abstraction between the physical hardware and the
operating systems and applications, enabling them to function as if they were on separate
dedicated machines.
• This leads to increased resource utilization, improved flexibility, and reduced costs.
Types of Virtualizations, Virtualization applications in enterprises
Application Virtualization:
Application virtualization enables remote access: users can interact directly with
deployed applications without installing them on their local machines.
Your personal data and the application's settings are stored on the server, but you can
still run it locally via the internet. It's useful if you need to work with multiple versions of
the same software. Common examples include hosted or packaged apps.
Working:
• An application is packaged into a virtual container with all its dependencies (DLLs,
registry entries, config files).
• When the user launches the app, the virtualization layer intercepts calls between
the application and the OS.
• To the user, it looks like a normal application, but in reality, it runs in a
sandboxed/virtualized environment.
• The app can be streamed from a server or run locally from the container.
Types:
Server-Based Application Virtualization
• Applications are installed on a centralized server.
• Users access them remotely through thin clients or remote display protocols (RDP,
ICA).
• Example: Citrix Virtual Apps (XenApp), Microsoft RemoteApp.
Streaming Application Virtualization
• Application is streamed on demand from a server.
• Only the necessary parts are downloaded when needed.
• Example: Microsoft App-V (Application Virtualization).
Network Virtualization:
• This allows multiple virtual networks to run on the same physical network, each operating
independently. You can quickly set up virtual switches, routers, firewalls, and VPNs,
making network management more flexible and efficient.
Working:
• A virtualization layer (software-defined networking controller or hypervisor) sits between
the physical hardware and virtual networks.
• Each virtual network behaves as if it were a completely separate physical network.
• Administrators can create, modify, or delete virtual networks without changing the
underlying hardware.
• Technologies like VLANs, VXLANs, GRE tunnels, and SDN (Software-Defined
Networking) are often used.
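The overlay idea behind VXLAN can be sketched at the byte level: an 8-byte header carrying a 24-bit virtual network identifier (VNI) is prepended to the inner frame, so many virtual networks can share one physical network. This is a simplified, non-wire-compatible sketch of the RFC 7348 header layout:

```python
# Simplified VXLAN-style encapsulation: prepend an 8-byte header carrying a
# 24-bit VNI (virtual network identifier) to the inner Ethernet frame.
# Illustrative only - not a wire-compatible VXLAN implementation.
import struct

VXLAN_FLAG_VNI_VALID = 0x08

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    # Header layout (per RFC 7348): flags(1B) + reserved(3B) + VNI(3B) + reserved(1B)
    header = struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    flags, word = struct.unpack("!B3xI", packet[:8])
    assert flags & VXLAN_FLAG_VNI_VALID, "VNI flag must be set"
    return word >> 8, packet[8:]

pkt = encapsulate(5001, b"inner-ethernet-frame")
print(decapsulate(pkt))  # (5001, b'inner-ethernet-frame')
```

Because the VNI is 24 bits, an overlay can carry about 16 million isolated virtual networks, compared with the 4,094 usable IDs of a classic VLAN tag.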
Types:
External Network Virtualization
• Combines multiple physical networks into a single logical network.
• Uses technologies like VLAN (Virtual LAN) and VPN (Virtual Private Network).
• Example: Enterprises connecting multiple branch networks securely.
Internal Network Virtualization
• Provides virtual networks inside a single server or data center.
• Virtual switches and routers connect VMs within a host or cluster.
• Example: VMware vSphere Distributed Switch, Microsoft Hyper-V Virtual Switch.
Key Components of Network Virtualization
• Virtual Switch (vSwitch) → Connects virtual machines (VMs) to each other or to the
physical network.
• Virtual Router → Provides routing between virtual networks.
• Virtual Firewall → Provides security and traffic filtering.
• Software-Defined Networking (SDN) → Centralized control plane separates network
management from hardware.
• Overlay Protocols → VXLAN, GRE, NVGRE for creating virtual networks over physical
networks.
Desktop Virtualization:
Working:
• A virtual machine (VM) runs the desktop OS (like Windows, Linux) on a server in
a data center or cloud.
• Users connect to the virtual desktop through a remote display protocol (like
Microsoft RDP, Citrix ICA, VMware Blast).
• Input/output (keyboard, mouse, display) happens on the user’s device, but all
computing and storage happen on the server.
Types:
Virtual Desktop Infrastructure (VDI)
• Desktop OSs run as VMs on servers in the organization's own data center.
• Example: VMware Horizon, Citrix Virtual Apps and Desktops, Microsoft AVD.
Desktop-as-a-Service (DaaS)
• Cloud providers host virtual desktops as a service.
• Users pay subscription fees instead of managing infrastructure.
• Example: Amazon Workspaces, Microsoft Azure Virtual Desktop, Citrix Cloud.
Storage Virtualization:
• This combines storage from different servers into a single system, making it easier
to manage. It ensures smooth performance and efficient operations even when the
underlying hardware changes or fails.
Working:
Types:
Block-Level Virtualization
• Virtualizes data at the block level (used by storage area networks – SAN).
File-Level Virtualization
• Virtualizes data at the file level (used by network-attached storage – NAS).
Server Virtualization:
Full Virtualization
• Each VM runs its own unmodified OS.
• Hypervisor handles all hardware calls.
• Example: VMware ESXi, Microsoft Hyper-V.
Para-Virtualization
• The guest OS is modified to be aware it is running in a virtualized environment.
• Improves performance by reducing hypervisor overhead.
• Example: Xen Hypervisor.
OS-Level Virtualization (Containerization)
• No hypervisor is used. Instead, the host OS creates isolated environments
(containers).
• Containers share the same OS kernel but run apps in isolation.
• Example: Docker, LXC, Kubernetes
Data Virtualization:
• This brings data from different sources together in one place without needing to
know where or how it’s stored. It creates a unified view of the data, which can be
accessed remotely via cloud services.
Working:
• It abstracts and integrates this data into a single unified virtual database.
• Users can query and analyze data in real time using SQL, BI tools, or applications,
without worrying about where the data resides.
• No need for ETL (Extract, Transform, Load) into a data warehouse unless
necessary.
Key Characteristics
• Minimal Data Movement → Unlike ETL, data remains in its original source.
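The "unified virtual database, no ETL copy" idea above can be sketched with sqlite3's ATTACH, which lets one connection query several independent database files as if they were a single database. The table and file names are hypothetical, and ATTACH stands in for a real data-virtualization layer:

```python
# Data-virtualization sketch: two independent "source" databases are queried
# as one virtual database via sqlite3 ATTACH, with no ETL copy step.
# File and table names are hypothetical examples.
import os, sqlite3, tempfile

tmp = tempfile.mkdtemp()
crm, billing = os.path.join(tmp, "crm.db"), os.path.join(tmp, "billing.db")

# Two separate source databases, as left behind by their own applications.
with sqlite3.connect(crm) as db:
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.execute("INSERT INTO customers VALUES (1, 'Acme')")
with sqlite3.connect(billing) as db:
    db.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
    db.execute("INSERT INTO invoices VALUES (1, 250.0)")

# The "virtualization layer": one connection federating both sources in place.
hub = sqlite3.connect(crm)
hub.execute("ATTACH DATABASE ? AS billing", (billing,))
rows = hub.execute(
    "SELECT c.name, i.amount FROM customers c "
    "JOIN billing.invoices i ON i.customer_id = c.id"
).fetchall()
print(rows)  # [('Acme', 250.0)]
```

The data never moved: the join runs across the sources where they sit, which is exactly the "minimal data movement" property that distinguishes data virtualization from an ETL pipeline into a warehouse.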
Pros and cons of virtualization
Advantages (Pros) of Virtualization:
Reduced Costs:
• Running multiple virtual machines on a single physical server minimizes hardware
requirements, lowering both initial purchase costs and ongoing expenses like
power and cooling.
Improved Hardware/Resource Utilization:
• Virtualization allows for more efficient use of resources by distributing workloads
across available hardware, preventing underutilized servers.
• Virtualization allows multiple virtual machines (VMs) to run on a single physical
server.
• Better use of CPU, memory, and storage instead of leaving hardware underutilized.
Increased Agility and Flexibility:
• Virtual machines can be quickly provisioned, cloned, and moved, enabling faster
deployment of new applications and easier adaptation to changing business
needs.
Simplified & Improved Disaster Recovery:
• Virtualization makes it easier to create backups and restore systems in case of
failure, ensuring business continuity.
Enhanced Security:
• Virtualization can create isolated environments for testing and development,
reducing the risk of impacting the main system.
Flexibility & Scalability
• It is easy to create, clone, or delete virtual machines as needed.
• Can run different operating systems (Windows, Linux, etc.) on the same hardware.
Simplified Management
• Centralized tools (like VMware vCenter, Hyper-V Manager) manage all VMs.
• Automation improves efficiency.
Disadvantages (Cons) of Virtualization:
Performance Overhead
• VMs share hardware resources.
• Some overhead in CPU, memory, and I/O due to hypervisor management.
• Not ideal for high-performance computing (HPC) or real-time applications.
Single Point of Failure
• If the physical server crashes, all hosted VMs go down.
• Needs redundancy and failover mechanisms to avoid downtime.
Complexity in Management
• Large, virtualized environments require skilled administrators.
• Needs proper monitoring to prevent “VM sprawl” (too many unmanaged VMs).
Licensing & Compliance Issues
• Some OS/software vendors have strict licensing rules for virtual environments.
• May increase costs if not planned properly.
Security Risks
• Hypervisor vulnerabilities can compromise multiple VMs.
• Shared resources increase risks if not isolated properly.
Resource Contention
• When many VMs compete for CPU, memory, or storage, performance drops.
• Requires capacity planning.
Initial Setup Costs
• Though cost-saving in the long run, initial investment in servers, hypervisors, and
management tools can be high.