Cloud Computing Overview
Cloud Computing refers to storing and accessing data and
applications over the internet instead of on a local computer or
server.
It is also called Internet-based computing.
It provides computing resources (like storage, servers, and
software) as services over the internet.
Users can access these resources anytime and from anywhere
with an internet connection.
The stored data can include files, images, documents, or any other
digital content.
It eliminates the need to own and maintain physical infrastructure,
which reduces capital and maintenance costs.
Access: Resources are accessible from anywhere with an internet
connection, on-demand.
Services Provided:
Storage: Saving data like files, images, or documents online.
Computing Power: Running applications or performing data
analysis.
Software: Accessing applications (like Google Docs or Microsoft
Office 365) via browsers.
How Cloud Computing Works
Infrastructure:
Cloud computing relies on a network of remote servers hosted on
the internet to store, manage, and process data instead of using
local machines.
On-Demand Access:
Users can access computing resources anytime as needed.
Services can be scaled up or down based on demand, with no
need to invest in or maintain physical hardware.
Types of Cloud Services:
IaaS (Infrastructure as a Service) – virtual servers and storage.
PaaS (Platform as a Service) – tools for app development.
SaaS (Software as a Service) – ready-to-use software over the
internet.
Deployment Models:
Public Cloud – services provided over a public network.
Private Cloud – dedicated resources for one organization.
Hybrid Cloud – combination of public and private clouds.
Benefits:
Cost-effective (pay only for what you use).
Scalable and flexible.
Automatic updates and maintenance.
Reliable and secure data storage.
Advantages / Need of Cloud Computing
1. Easy Integration
o Connects quickly with existing apps (custom or third-party).
o Reduces time and effort for system integration.
2. Scalability and Reliability
o Easily handles increased workload or traffic.
o Provides strong uptime and disaster recovery support.
3. No Hardware or Software Installation
o Runs fully online with no physical setup required.
o Lowers capital costs and setup time.
4. Fast and Low-Risk Deployment
o Applications go live in weeks, not months.
o Reduces risk and complexity during setup.
5. Supports Deep Customization
o Allows extensive personalization of features.
o Preserves changes even after updates.
6. Empowers Business Users
o Users can make changes without coding.
o Enables easy report creation without IT support.
7. Automatic Upgrades
o Updates happen without downtime.
o Keeps all custom settings and data intact.
8. Cost Efficiency
o Pay only for what you use (pay-as-you-go).
o Avoids large upfront hardware/software costs.
9. Remote Access / Mobility
o Access data and apps from anywhere.
o Supports remote work and flexible teams.
10. Enhanced Security
o Built-in firewalls and encryption protect data.
o 24/7 monitoring ensures safe operations.
11. Access to Advanced Technology
o Use tools like AI, ML, and analytics easily.
o No need to buy or maintain high-end servers.
12. Improved Collaboration
o Teams work in real time from different locations.
o Reduces errors and speeds up decision-making.
History of Cloud Computing
1. Client-Server Computing (Before Cloud)
All data and control were stored on a central server.
Users had to connect to the server to access data.
Limitations: Poor scalability, single point of failure, and heavy
server dependency.
2. Distributed Computing
Multiple computers were connected via a network.
Allowed users to share resources and tasks.
Limitations: Complex setup, difficult maintenance, and less
flexibility.
3. Beginning of Cloud Computing
1961: John McCarthy (MIT) proposed the idea that “computing
could be sold like a utility (e.g., water or electricity).”
The idea was ahead of its time and not widely accepted back then.
4. Rise of Cloud Computing
1999 – Salesforce.com:
o Delivered enterprise applications over the internet.
o Marked the beginning of commercial cloud services.
2002 – Amazon Web Services (AWS):
o Started offering storage and computing services online.
2006 – Amazon EC2 (Elastic Compute Cloud):
o Launched one of the first widely used public cloud services,
allowing anyone to rent virtual servers on demand.
2009 – Google and Microsoft:
o Google Cloud and Microsoft Azure launched cloud platforms
for enterprise use.
Later Years:
o Companies like IBM, Oracle, HP, and Alibaba entered the
cloud market.
o Cloud computing grew into a mainstream, essential
technology.
Cloud stakeholders
The NIST Cloud Computing reference architecture defines five major actors:
Cloud Provider
Cloud Carrier
Cloud Broker
Cloud Auditor
Cloud Consumer
Each actor is an entity (a person or an organization) that participates in a transaction or process
and/or performs tasks in cloud computing. The five major actors defined in the NIST cloud computing
reference architecture are described below:
1. Cloud Service Providers: A person or organization that delivers cloud services to cloud consumers or
end users. It offers the various components of cloud computing, and consumers purchase a growing
variety of cloud services from it. The main categories of cloud-based services are mentioned below:
IaaS Providers: In this model, the cloud service providers offer infrastructure components that
would exist in an on-premises data center. These components consist of servers, networking,
and storage as well as the virtualization layer.
SaaS Providers: In Software as a Service (SaaS), vendors provide a wide range of business
applications, such as human resource management (HRM) and customer relationship
management (CRM) software, all of which the SaaS vendor hosts and delivers over the
internet.
PaaS Providers: In Platform as a Service (PaaS), vendors offer cloud infrastructure and services
that users can access to perform many functions; PaaS offerings are mostly used in software
development. PaaS providers offer more than IaaS providers: they supply the operating system,
middleware, and application stack on top of the underlying infrastructure.
2. Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between
cloud service providers and cloud consumers. It enables access to cloud services through networks,
telecommunication links, and other access devices. Network and telecom carriers, or a transport agent,
can provide this distribution. A consistent level of service is maintained when cloud providers set up
Service Level Agreements (SLAs) with a cloud carrier. In general, the carrier may be required to offer
dedicated and encrypted connections.
3. Cloud Broker: An organization or unit that manages the use, performance, and delivery of cloud
services, enhances specific capabilities, and offers value-added services to cloud consumers. It
combines and integrates multiple services into one or more new services and provides service arbitrage,
which allows flexible and opportunistic choices. There are three major services offered by a cloud
broker:
Service Intermediation.
Service Aggregation.
Service Arbitrage.
4. Cloud Auditor: An entity that can conduct independent assessments of cloud services, security,
performance, and the information system operations of a cloud implementation. The services provided
by Cloud Service Providers (CSPs) can be evaluated by auditors in terms of privacy impact, security
controls, and performance. A cloud auditor can assess the security controls of the information system
to determine the extent to which they are implemented correctly, operating as intended, and producing
the desired outcome with respect to the system's security requirements. The three major roles of a
cloud auditor are mentioned below:
Security Audit.
Privacy Impact Audit.
Performance Audit.
5. Cloud Consumer: A person or organization that maintains a business relationship with, and uses the
services of, Cloud Service Providers (CSPs).
They set up service contracts with the cloud provider.
Payment is made per use of the provisioned services (measured services).
Organizations with shared regulatory constraints perform security and risk assessments before
cloud migrations and deployments.
Service-Level Agreements (SLAs) are used to specify:
o Technical performance requirements.
o Quality of service expectations.
o Security measures.
o Remedies for performance failures (a simple availability check is sketched below).
SLAs may also include:
o Limitations or boundaries.
o Obligations that cloud consumers must follow.
In a mature market, cloud consumers can choose providers offering:
o Better pricing.
o More favorable terms.
Typically:
o Public pricing and SLAs are non-negotiable.
o However, large-scale consumers may be able to negotiate better contracts.
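Because services are metered, a consumer can check whether the provider met an SLA's availability target with simple arithmetic. The minimal Python sketch below assumes an illustrative 99.9% monthly target and a made-up downtime figure; neither comes from a real contract.

```python
# Minimal sketch: check measured uptime against an SLA availability target.
# The 99.9% target and the downtime figure are illustrative assumptions.
SLA_TARGET_PERCENT = 99.9

def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Availability as a percentage of the measurement period."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

month_minutes = 30 * 24 * 60          # a 30-day month
measured = availability(month_minutes, downtime_minutes=50)
print(f"Measured availability: {measured:.3f}%")
if measured < SLA_TARGET_PERCENT:     # 99.9% of a 30-day month allows ~43 minutes of downtime
    print("SLA breached: the consumer may be entitled to the agreed remedies (e.g., credits).")
```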
End User
An end user is the final person who uses a product, application, or service. In cloud computing, the end
user does not manage or interact directly with the cloud infrastructure — instead, they access services
built and maintained by cloud users (such as developers or IT admins) and hosted by cloud providers.
Characteristics of End Users:
Non-technical in most cases.
Use applications via web browsers, mobile apps, or software interfaces.
Often unaware that the application runs on cloud infrastructure.
Expect performance, reliability, and security, even if they don’t understand how it’s delivered.
Examples of End Users:
A student using Google Docs to write an assignment.
A customer shopping on an e-commerce website hosted on AWS.
A person streaming videos on Netflix (Netflix uses AWS cloud services).
Difference from Cloud Users:
Cloud users configure or deploy the application on the cloud.
End users just consume the output of that application.
Characteristics of Cloud Computing
1. On-Demand Self-Service
Users can automatically provision computing resources like servers, storage, or applications
whenever they need them, without requiring human interaction with the service provider.
Example: Creating a virtual machine on AWS or Google Cloud within minutes.
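As a hedged illustration of on-demand self-service, the sketch below provisions a virtual machine programmatically using the AWS SDK for Python (boto3). The region, AMI ID, and instance type are placeholder assumptions, and configured AWS credentials are assumed; this is a sketch of the idea, not a recommended setup.

```python
# Minimal sketch: self-service provisioning of a virtual machine via the AWS SDK (boto3).
# Assumptions: boto3 is installed, AWS credentials are configured, and the AMI ID below
# is a placeholder that must be replaced with a real image ID for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # small instance type, chosen for illustration
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned {instance_id} without any human interaction with the provider.")
```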
2. Broad Network Access
Cloud services are accessible over the internet through standard devices like laptops,
smartphones, or tablets.
Ensures availability anywhere, anytime using standard protocols (HTTP, HTTPS).
Example: Accessing Dropbox or Google Drive from any device with an internet connection.
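To make "standard protocols" concrete, the small sketch below fetches a resource over HTTPS using only Python's standard library; the URL is a stand-in for any cloud-hosted endpoint, not a real service.

```python
# Minimal sketch: any device with an internet connection can reach a cloud-hosted
# service over standard HTTPS. The URL below is a stand-in, not a real service endpoint.
from urllib.request import urlopen

with urlopen("https://example.com/") as response:   # placeholder for a cloud service URL
    print(response.status)                          # HTTP status code (200 if reachable)
    print(response.read(200))                       # first bytes of the response body
```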
3. Resource Pooling
Cloud providers use multi-tenant models where physical and virtual resources are shared
among multiple users.
Resources like memory, processing, and storage are dynamically assigned based on demand.
The users don’t know the exact physical location of the resources but may specify a location at
a higher level (e.g., region).
4. Rapid Elasticity
Resources can be quickly scaled up or down as needed.
From the consumer’s perspective, resources often appear unlimited and immediately available.
Example: A website automatically handling sudden spikes in traffic during a sale.
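A minimal sketch of the elasticity idea: a toy autoscaling rule that derives the number of servers from current traffic. The per-server capacity and the scaling limits are illustrative assumptions, not values from any real provider.

```python
# Toy autoscaling rule illustrating rapid elasticity: capacity follows demand up and down.
# All numbers are illustrative assumptions.
import math

REQUESTS_PER_SERVER = 500          # assumed capacity of one server (requests/second)
MIN_SERVERS, MAX_SERVERS = 1, 20   # assumed scaling limits

def desired_servers(requests_per_second: float) -> int:
    """Return how many servers are needed for the current traffic level."""
    needed = math.ceil(requests_per_second / REQUESTS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

# Traffic spike during a sale: the fleet grows with the spike, then shrinks again.
for load in (200, 1800, 6000, 900):
    print(f"{load:>5} req/s -> {desired_servers(load)} server(s)")
```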
5. Measured Service
Cloud systems automatically control and optimize resource usage.
Usage is monitored, controlled, and reported—providing transparency for both provider and
consumer.
Example: AWS charges you based on how many hours you run a server, or how much data you
store.
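Pay-per-use billing is just arithmetic over metered usage. The sketch below computes a monthly bill from server-hours and stored gigabytes; the rates are made-up assumptions, not actual AWS prices.

```python
# Minimal sketch of measured service: the bill is derived from metered usage.
# The rates below are made-up assumptions, not real provider prices.
HOURLY_RATE_PER_SERVER = 0.05   # assumed $ per server-hour
STORAGE_RATE_PER_GB = 0.02      # assumed $ per GB-month

def monthly_bill(server_hours: float, storage_gb: float) -> float:
    """Charge only for what was actually used (pay-as-you-go)."""
    return server_hours * HOURLY_RATE_PER_SERVER + storage_gb * STORAGE_RATE_PER_GB

# Example: one server running 300 hours plus 50 GB of stored data.
print(f"Monthly bill: ${monthly_bill(300, 50):.2f}")  # 300*0.05 + 50*0.02 = $16.00
```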
Cloud Computing Challenges
1. Data Security and Privacy
Critical issue since sensitive data is stored on the cloud.
Responsibilities include user authentication, encryption, and access control.
Risks: identity theft, data breaches, malware, data leaks.
Can cause loss of trust, reputation, and revenue.
2. Cost Management
“Pay As You Go” helps, but hidden costs arise from:
o Underutilized resources
o Performance degradation
o Sudden usage spikes
o Forgotten active instances/services
Leads to unexpectedly high bills.
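One practical way to catch forgotten active instances is to list long-running servers for review. The hedged sketch below uses boto3 to report running EC2 instances older than an assumed 7-day threshold; configured AWS credentials and a default region are assumed.

```python
# Hedged sketch: flag running EC2 instances older than a threshold so that forgotten
# machines do not keep accruing charges. Assumes boto3 is installed and AWS
# credentials/region are configured; the 7-day threshold is an arbitrary assumption.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
threshold = datetime.now(timezone.utc) - timedelta(days=7)

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["LaunchTime"] < threshold:
                print(f"Review {instance['InstanceId']}: running since {instance['LaunchTime']}")
```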
3. Multi-Cloud Environments
Many companies use multiple cloud providers or hybrid clouds (84%+).
Managing different platforms creates complexity for IT teams.
Integration and operation can be difficult.
4. Performance Challenges
Latency and slow load times drive users away.
Issues stem from inefficient load balancing and lack of fault tolerance.
Performance is crucial to user satisfaction and profitability.
5. Interoperability and Flexibility
Switching cloud providers often requires rewriting applications.
Data migration, network setup, and security reconfiguration add complexity.
Results in reduced flexibility and vendor lock-in.
6. High Dependence on Network
Cloud depends on high-speed, reliable internet connectivity.
Limited bandwidth or outages cause downtime and business loss.
Smaller businesses may struggle with the cost of maintaining a strong network connection.
7. Lack of Knowledge and Expertise
Cloud computing requires specialized, up-to-date skills.
Talent shortage leads to high salaries and vacancies.
Constant upskilling is necessary for managing complex cloud environments effectively.
Grid Computing
Definition:
Grid computing is a distributed computing architecture where multiple computers, often geographically
dispersed, work together as a virtual supercomputer to perform large-scale tasks. It breaks down
complex jobs into smaller subtasks that run concurrently across the network.
Key Features:
Combines computing resources like processing power and storage from different machines.
Machines may have different hardware and operating systems (heterogeneous networks).
Uses middleware software to manage resource allocation and task distribution.
Importance:
Scalability: Easily add or remove resources based on demand.
Efficient Resource Utilization: Idle computers contribute to processing, reducing waste.
Solves Complex Problems: Suitable for tasks like climate modeling, scientific simulations, and
genome analysis.
Facilitates Collaboration: Enables researchers worldwide to share resources and work together.
Cost-effective: Reuses existing hardware and reduces the need for expensive supercomputers.
Working:
A control node manages the grid by tracking available resources.
Providers contribute their computing resources to the grid.
Users request resources to run their applications.
The control node allocates resources without overloading any provider.
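A minimal sketch of the allocation step described above: a toy control node that assigns subtasks to providers, always picking the least-loaded one so that no provider is overloaded. The provider names and the greedy policy are illustrative assumptions, not part of any real grid middleware, which would also handle failures, heterogeneous capacities, and data movement.

```python
# Toy control node for a grid: split a job into subtasks and assign each one to
# the currently least-loaded provider. Provider names and capacities are illustrative.
providers = {"lab-pc-1": 0, "lab-pc-2": 0, "campus-server": 0}  # provider -> assigned subtasks

def allocate(subtasks):
    """Greedy allocation: each subtask goes to the provider with the least work so far."""
    plan = {}
    for task in subtasks:
        least_loaded = min(providers, key=providers.get)
        providers[least_loaded] += 1
        plan[task] = least_loaded
    return plan

job = [f"subtask-{i}" for i in range(7)]  # a complex job broken into smaller subtasks
for task, provider in allocate(job).items():
    print(f"{task} -> {provider}")
```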
Types of Grid Computing:
Computational Grid: Focused on processing power.
Scavenging Grid: Uses idle resources of regular computers.
Data Grid: Connects distributed data storage for large data access.
Advantages:
High resource utilization and parallel task execution.
Scalability and flexibility in resource management.
Disadvantages:
Complexity in managing heterogeneous resources.
Security concerns due to distributed nature.
Software and standards are still evolving.
Fog Computing
Definition:
Fog Computing, coined by Cisco in 2014, is an extension of cloud computing to the edge of the
network, closer to the data source (host).
Also known as Edge Computing or Fogging.
It provides computing, storage, and networking services between end devices and cloud data
centers.
Key Points:
1. Fog Nodes: Devices in the fog infrastructure are called fog nodes.
2. Placement: Data, storage, computation, and applications are placed between the cloud and
physical host.
3. Proximity to Host: Processing happens near the data source, reducing latency.
4. Efficiency & Security: Improves system speed and enhances data security.
History:
Term introduced by Cisco in January 2014.
"Fog" refers to clouds close to the ground—similar to placing computing close to the data.
In 2015, IBM introduced a similar term: Edge Computing.
When to Use Fog Computing:
1. When only selected data needs to be sent to the cloud (see the sketch after this list).
2. When low latency (fast processing) is required.
3. For large-scale geographically distributed services.
4. For intensive computations on local devices.
5. Used in IoT devices, sensor networks, cameras, and industrial IoT (e.g., the Car-to-Car
Consortium in Europe).
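A minimal sketch of point 1 above: a fog node that analyses sensor readings locally and forwards only the selected (abnormal) ones to the cloud, saving bandwidth and reducing latency. The temperature threshold and the upload placeholder are illustrative assumptions, not part of any real fog platform.

```python
# Toy fog node: analyse sensor readings locally and send only selected readings
# (here, abnormal temperatures) to the cloud. The threshold and the upload step
# are illustrative assumptions.
TEMP_THRESHOLD_C = 80.0  # assumed alert threshold

def send_to_cloud(reading: dict) -> None:
    """Placeholder for an upload to a cloud endpoint (e.g., over HTTPS)."""
    print(f"Forwarding to cloud: {reading}")

def process_locally(readings: list[dict]) -> None:
    """Fog-layer processing near the data source: filter before forwarding."""
    for reading in readings:
        if reading["temperature_c"] > TEMP_THRESHOLD_C:
            send_to_cloud(reading)   # only selected data leaves the edge
        # normal readings are handled locally and never consume cloud bandwidth

process_locally([
    {"sensor": "s1", "temperature_c": 25.3},
    {"sensor": "s2", "temperature_c": 91.7},  # abnormal -> forwarded
])
```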
Advantages:
Reduces data sent to the cloud.
Saves network bandwidth.
Reduces response time.
Enhances security and privacy.
Enables local data analysis.
Disadvantages:
Traffic congestion may occur between host and fog node.
Increased power consumption due to the extra layer.
Complex task scheduling between host, fog, and cloud.
Difficult data management (encryption/decryption overhead).
Applications:
Healthcare Monitoring: Real-time patient data analysis and emergency alerts.
Railway Monitoring: Low-latency monitoring for high-speed trains.
Oil & Gas Pipelines: Local data analysis due to large data volume.
Note: Be able to explain the differences among cloud, grid, and fog computing.