Cloud Computing Notes FINAL

The document outlines a comprehensive syllabus for Cloud Computing, focusing on AWS fundamentals, global infrastructure, storage, security, and AWS essentials. It includes detailed information on Amazon EC2, its benefits, instance types, and steps for creating and managing instances, as well as services like Elastic Load Balancing, Simple Notification Service, and Simple Queue Service. The notes serve as a study guide for AWS-related topics and preparation for the AWS Certified Cloud Practitioner examination.

Cloud Computing Notes (Based on Past QPs & Concise Notes)
Syllabus:-

Module 1: AWS Fundamentals and Compute in the Cloud


Benefits of the AWS Cloud

Differences between on‑demand delivery and cloud deployments

Benefits of Amazon Elastic Compute Cloud (Amazon EC2) and Auto Scaling

Overview of Elastic Load Balancing, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service
(Amazon SQS)

Module 2: Global Infrastructure, Reliability and Networking


Benefits of Amazon CloudFront and Edge locations

Networking basics

Real‑life scenarios for virtual private networks (VPN)

Benefits of AWS Direct Connect and hybrid deployments

Module 3: Storage and Databases


Overview of Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and Amazon Elastic
File System (Amazon EFS)

Introduction to Amazon Relational Database Service (Amazon RDS)

Module 4: Security, Monitoring and Analytics


The shared responsibility model and foundational security policies

Overview of the benefits of compliance with AWS

How to monitor your AWS environment

Overview of AWS monitoring services, including Amazon CloudWatch, AWS CloudTrail, and AWS Trusted Advisor

Module 5: AWS Essentials (Pricing & Support, Migration & Innovation, AWS Certified Cloud Practitioner
Basics)
Overview of AWS pricing and support models

Intro to AWS services like AWS Budgets, AWS Cost Explorer, and AWS Pricing Calculator

What is migration and innovation in the AWS Cloud?

What is the AWS Cloud Adoption Framework (AWS CAF)?

The five pillars of the AWS Well‑Architected Framework and the six benefits of cloud computing

Resources for preparing for the AWS Certified Cloud Practitioner examination

Note: For subnet mask and network-related problems (Mod 2), refer to YouTube tutorials.
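The subnet-mask arithmetic that the note above points to can also be checked with Python's standard `ipaddress` module. This is a small sketch; the /26 network used here is just an example value, not something from a specific question paper.

```python
import ipaddress

# A typical Mod-2 style problem: given 192.168.1.0/26, find the mask,
# the number of usable hosts, and whether a host address belongs to it.
net = ipaddress.ip_network("192.168.1.0/26")

print(net.netmask)        # 255.255.255.192
print(net.num_addresses)  # 64 total addresses in the block
usable = net.num_addresses - 2  # minus the network and broadcast addresses
print(usable)             # 62 usable hosts
print(ipaddress.ip_address("192.168.1.70") in net)  # False (it falls in the next /26)
```

The same approach works for any CIDR prefix, which makes it a quick way to verify answers worked out by hand.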

****Colors represent the modules, so that you won’t lose track****



Module - 1
1) What is Amazon EC2?
Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services
(AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use
Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website
traffic. When usage decreases, you can reduce capacity (scale down) again.

An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify
determines the hardware available to your instance. Each instance type offers a different balance of compute, memory,
network, and storage resources.

What it does:
Offers scalable compute capacity.

Allows you to launch virtual machines (VMs) with your preferred OS and configuration.

Provides flexibility to scale up/down based on demand.

Supports multiple architectures: x86, ARM, and GPU-based computing.

2) EC2 Instance Categorization


EC2 instances are categorized based on their purpose, hardware, and performance characteristics. The main categories are:

| Category | Purpose | Prefix |
|---|---|---|
| General Purpose | Balanced compute, memory, and networking | t, m |
| Compute Optimized | High-performance processors for compute-heavy tasks | c |
| Memory Optimized | High memory for databases, caching, in-memory apps | r, x, z |
| Storage Optimized | High IOPS for large data workloads | i, d, h |
| Accelerated Computing | GPUs or FPGAs for ML, HPC, 3D rendering | p, inf, trn, f |
| Bare Metal | Direct hardware access, no virtualization | Same prefixes with .metal suffix |
| ARM-Based | Cost-effective Graviton processors | a, t4g, m6g |

Sample EC2 Instances with Configuration Details

| Instance Type | Category | vCPUs | Memory (GiB) | Storage | Use Case |
|---|---|---|---|---|---|
| t2.micro | General Purpose | 1 | 1 | 8 GiB EBS General Purpose SSD | Small workloads, development environments |
| m5.large | General Purpose | 2 | 8 | 32 GiB EBS General Purpose SSD | Web servers, small DBs, processing |
| c5.large | Compute Optimized | 2 | 8 | 32 GiB EBS SSD | Compute-intensive tasks, ML, batch jobs |
| c6g.large | Compute Optimized (ARM) | 2 | 8 | 32 GiB EBS SSD | Next-gen compute with AWS Graviton2 |
| r5.large | Memory Optimized | 2 | 32 | 32 GiB EBS General Purpose SSD | In-memory DBs, memory-intensive apps |
| r6g.large | Memory Optimized (ARM) | 2 | 32 | 32 GiB EBS General Purpose SSD | Next-gen memory apps with Graviton2 |

🖥️ Steps to Create an Amazon EC2 Instance


1. Open AWS Cloud Management Console

Visit https://console.aws.amazon.com and sign in.

2. Search and Open EC2 Dashboard

In the top search bar, type EC2 and select it.

Click on the Launch Instance button.

3. Name Your Instance



Provide a relevant name tag for easy identification (e.g., MyTestInstance).

4. Select Amazon Machine Image (AMI)

Choose the desired operating system (Amazon Linux 2, Ubuntu, Windows, etc.).

Amazon Linux 2 is free-tier eligible and suitable for most basic tasks.

5. Choose Instance Type

Select the instance type based on your resource needs.

For beginners or testing purposes, choose t2.micro (free tier eligible).

6. Create or Select a Key Pair (SSH)

Create a new key pair or use an existing one.

Download the .pem file – required for SSH access to the instance.

7. Configure Network Settings (Security Group)

Set up firewall rules to control traffic.

Allow required ports (e.g., port 22 for SSH, port 80 for HTTP, etc.).

You can restrict access by IP address or CIDR range.

8. Configure Storage

Default is 8 GB General Purpose SSD (gp2).

You can increase size or change volume type as needed.

9. Configure Advanced Settings

Optional settings like:

Enable Termination Protection (prevents accidental deletion),

Enable CloudWatch monitoring,

User data scripts for boot-time initialization.

10. Launch Your Instance

Click on Launch Instance at the bottom right.

Wait a few moments until the instance status becomes Running.

✅ Post-Launch Access
Connect to your instance via SSH (Linux/Mac) or PuTTY (Windows) using the downloaded key.

3) Benefits of the AWS Cloud / Cloud Computing



1. Scalability & Elasticity

On-demand resources: Instantly add or remove compute, storage, database, and other resources to match workload.

Elastic services: Auto Scaling ensures capacity adjusts automatically based on defined policies (e.g., CPU/memory
thresholds).

2. Cost-Effectiveness

Pay-as-you-go: Only pay for resources you actually use (per hour, per second).

Reserved & Savings Plans: Commit to 1–3 year usage for steep discounts (~30–72% off).

3. Global Reach & Low Latency

Multiple Regions & Availability Zones (AZs): Deploy your applications near end-users.

Edge locations (CloudFront): Cache content at edge for sub-100 ms latency worldwide.

4. Reliability & High Availability

Fault-tolerant design: Spread applications across AZs.

Service SLAs: 99.99%+ availability for many services.

5. Security & Compliance

Shared responsibility model: AWS is responsible for security of the cloud; you are responsible for security in the cloud.

Built-in security controls: IAM, VPC network isolation, encryption at rest/in transit, AWS Shield/WAF.

Certifications: ISO, SOC, HIPAA, GDPR, PCI DSS.

6. Innovation & Agility

Rapid provisioning: Spin up new environments in minutes.

Broad service portfolio: Compute, storage, databases, ML, analytics, IoT, serverless, etc.

7. Managed Services

Offload heavy lifting (patching, backups, scaling) for databases (RDS).
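The pay-as-you-go and Reserved-pricing ideas above come down to simple arithmetic. The sketch below uses made-up rates purely for illustration; real prices vary by region and instance type (see the AWS Pricing Calculator).

```python
# Hypothetical hourly rates for illustration only; not actual AWS prices.
ON_DEMAND_RATE = 0.10     # $/hour for some instance type (made-up figure)
RESERVED_DISCOUNT = 0.40  # 40% off for a 1-year commitment (made-up figure)

hours_per_month = 730     # approximate hours in a month
on_demand_cost = ON_DEMAND_RATE * hours_per_month
reserved_cost = on_demand_cost * (1 - RESERVED_DISCOUNT)

print(f"On-demand: ${on_demand_cost:.2f}/month")  # $73.00
print(f"Reserved:  ${reserved_cost:.2f}/month")   # $43.80
```

The point of the comparison: steady, predictable workloads benefit from commitment discounts, while spiky or short-lived workloads stay cheaper on pure pay-as-you-go.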

4) Differences Between On-Demand Delivery and Traditional Cloud Deployments


| Aspect | On-Demand Delivery (AWS) | Traditional Cloud Deployments |
|---|---|---|
| Provisioning Speed | Seconds or minutes via API/console | Days or weeks; manual hardware orders |
| Billing Model | Usage-based (hour/second) | Often monthly blocks or fixed flat fees |
| Hardware Ownership | AWS owns & maintains infrastructure | You (or provider) own/rent physical nodes |
| Capacity Planning | Autoscale dynamically | Must overprovision to handle peaks |
| Upgrades & Patching | Managed by AWS (for managed services) | Handled in-house or by third party |
| Global Footprint | 30+ Regions, 90+ AZs worldwide | Limited datacenter locations |
| APIs & Automation | Full API-driven lifecycle (Infrastructure-as-Code) | Often manual or semi-automated |
| Availability SLAs (Service Level Agreement) | 99.99%+ with multi-AZ designs | Varies; often lower without redundancy |

On-Demand Delivery refers to instantly provisionable, metered cloud resources (compute, storage, network) that you spin
up/down as needed.

Traditional Cloud often means managed/private hosting where resources are provisioned in larger blocks, with longer lead
times.

5) Auto Scaling

Auto Scaling is a service in AWS that automatically adjusts the number of EC2 instances (or other resources) in response to
changes in demand. It helps maintain consistent performance at the lowest possible cost by launching or terminating instances
based on traffic load or custom metrics.

Auto Scaling is managed using Auto Scaling Groups (ASGs), which define scaling rules, minimum and maximum instance
counts, and health check policies.

Benefits of Auto Scaling


1. Improved Availability – Automatically replaces unhealthy instances to ensure application uptime.

2. Cost Efficiency – Reduces costs by scaling in during low traffic.

3. Elasticity – Dynamically handles sudden traffic spikes or drops.

4. Fault Tolerance – Monitors instance health and performs automatic recovery.

5. Better Resource Management – Ensures optimal resource usage without manual intervention.
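The scaling behaviour described above can be sketched as a small decision rule. This is a toy illustration of threshold-based scaling, not how AWS Auto Scaling is actually implemented; the thresholds and bounds are made-up parameters standing in for an ASG's policy, minimum, and maximum.

```python
def desired_capacity(current, cpu_percent, low=30, high=70, minimum=1, maximum=10):
    """Toy scaling rule: add an instance above the high CPU threshold,
    remove one below the low threshold, and clamp to the ASG bounds."""
    if cpu_percent > high:
        current += 1   # scale out on high load
    elif cpu_percent < low:
        current -= 1   # scale in on low load
    return max(minimum, min(maximum, current))

print(desired_capacity(2, 85))  # 3 (traffic spike -> scale out)
print(desired_capacity(2, 10))  # 1 (low traffic -> scale in, saving cost)
print(desired_capacity(1, 5))   # 1 (never drops below the minimum)
```

The clamp to `minimum`/`maximum` mirrors why ASGs always define instance-count bounds: scaling decisions are automatic, but they never violate the limits you set.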

6) Elastic Load Balancing (ELB)



ELB is a fully managed service that automatically distributes incoming traffic across multiple targets (e.g., EC2 instances,
containers, IPs) across Availability Zones for increased availability and fault tolerance. It monitors target health and only routes
to healthy endpoints.

Types
Application Load Balancer (ALB) – Layer 7 (HTTP/HTTPS); supports advanced routing, SSL termination, Web Application
Firewall (WAF) integration.

Network Load Balancer (NLB) – Layer 4; ultralow latency, high throughput, static IP support.

Gateway Load Balancer (GWLB) – Layers 3–4; used for integrating third-party virtual appliances.

Key Features
Automatic scaling to handle variable traffic.

Health checks for registered targets.

Sticky sessions (session affinity).

Monitoring with CloudWatch, logging, and CloudTrail audit logs.

Deletion protection to avoid accidental removal.
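The core idea — distribute traffic across targets, but only the healthy ones — can be sketched in a few lines. This is a conceptual round-robin simulation, not the actual ELB algorithm; the instance names are made up.

```python
import itertools

class LoadBalancer:
    """Toy round-robin balancer that skips targets failing health checks."""
    def __init__(self, targets):
        self.targets = targets                    # {instance_name: healthy?}
        self._cycle = itertools.cycle(list(targets))

    def route(self):
        for _ in range(len(self.targets)):        # try each target at most once
            target = next(self._cycle)
            if self.targets[target]:              # route only to healthy endpoints
                return target
        raise RuntimeError("no healthy targets")

lb = LoadBalancer({"i-aaa": True, "i-bbb": False, "i-ccc": True})
print([lb.route() for _ in range(4)])  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```

Note how `i-bbb` never receives traffic while unhealthy — the same property the health-check feature above provides.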

7) Simple Notification Service (SNS)

Amazon SNS is a fully managed publish–subscribe (pub/sub) messaging service. Publishers send messages to topics; subscribers (such as email, SMS, HTTP endpoints, SQS queues, or Lambda functions) receive them. It supports application-to-application (A2A) and application-to-person (A2P) communication.

Types & Delivery Modes


Standard topics: high throughput, best-effort ordering



FIFO topics: preserve strict message order and deduplication

Supports multiple subscriber types: mobile push, SMS/text, email, HTTP/S, Lambda, SQS, Firehose, etc.

Features
Fan‑out delivery: send one message to many subscribers.

Message filtering: subscribers receive only matching messages via filter policies.

Durability & retries: retries on delivery failure, dead-letter queues optional, messages stored across multiple servers.

Security: encryption with AWS KMS, VPC endpoints for private traffic.

Message attributes & metadata.
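Fan-out delivery and filter policies, taken together, can be illustrated with a toy topic. This is a simulation of the pub/sub pattern only — real SNS filter policies are JSON documents evaluated by the service, and the message bodies and attribute names here are invented.

```python
class Topic:
    """Toy pub/sub topic: fan out each message to every subscriber whose
    filter policy matches the message attributes."""
    def __init__(self):
        self.subscribers = []                     # (deliver_fn, filter_policy)

    def subscribe(self, deliver, filter_policy=None):
        self.subscribers.append((deliver, filter_policy or {}))

    def publish(self, message, attributes=None):
        attributes = attributes or {}
        for deliver, policy in self.subscribers:
            # an empty policy matches everything, like an unfiltered subscription
            if all(attributes.get(k) in allowed for k, allowed in policy.items()):
                deliver(message)

inbox_a, inbox_b = [], []
topic = Topic()
topic.subscribe(inbox_a.append)                              # receives everything
topic.subscribe(inbox_b.append, {"type": ["order_placed"]})  # filtered subscriber

topic.publish("order #1", {"type": "order_placed"})
topic.publish("login event", {"type": "user_login"})
print(inbox_a)  # ['order #1', 'login event']
print(inbox_b)  # ['order #1']
```

One publish, many deliveries — and each subscriber sees only what its filter policy allows.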

8) Simple Queue Service (SQS)

SQS is a fully managed message queuing service that enables reliable asynchronous messaging between distributed
application components. Ideal for decoupling and buffering workloads.

Types
Standard queues: unlimited throughput, at‑least‑once delivery (possible duplicates), best-effort ordering.

FIFO queues: messages are processed in order and exactly once.

Features
Payload up to 256 KB (can offload larger payloads to S3).

Batch operations (send/receive/delete up to 10 messages at once).

Long polling (wait up to 20 s for messages to reduce empty responses).

Visibility timeout to prevent other consumers from processing a message while it is in flight.

Message retention configurable up to 14 days.

Dead‑letter queues (DLQs) to handle failed processing.

Server-side encryption (SSE) with AWS KMS.

Cross-account queue sharing.
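The visibility timeout and at-least-once delivery behaviours listed above fit together as follows: a received message becomes invisible for the timeout window, and reappears for redelivery if the consumer never deletes it. The sketch below simulates that contract; it is not the SQS API, and the timeout value and message body are invented.

```python
import time

class Queue:
    """Toy SQS-style queue: received messages turn invisible for a
    visibility timeout; undeleted messages reappear (at-least-once)."""
    def __init__(self, visibility_timeout=30):
        self.messages = {}        # id -> (body, invisible_until)
        self.timeout = visibility_timeout
        self._next_id = 0

    def send(self, body):
        self.messages[self._next_id] = (body, 0.0)
        self._next_id += 1

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for mid, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:            # visible again?
                self.messages[mid] = (body, now + self.timeout)
                return mid, body
        return None                               # nothing visible right now

    def delete(self, mid):
        self.messages.pop(mid, None)              # consumer acknowledges success

q = Queue(visibility_timeout=30)
q.send("resize image #42")
mid, body = q.receive(now=0.0)
print(body)               # 'resize image #42'
print(q.receive(now=10))  # None: the message is in flight (invisible)
print(q.receive(now=31))  # redelivered, because the consumer never deleted it
q.delete(mid)             # deleting ends the redelivery cycle
```

This is exactly why dead-letter queues exist: a message that keeps reappearing without ever being deleted is eventually moved aside for inspection.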

SQS vs. SNS – Key Differences

| Feature | SQS (Queue-Based Messaging) | SNS (Pub/Sub Messaging) |
|---|---|---|
| Message Model | Point-to-point (polling) | Publish–subscribe (push) |
| Delivery Mechanism | Messages stored in queue | Messages pushed to subscribers |
| Message Persistence | Stored until read | Immediate delivery |
| Use Case | Task processing, decoupling microservices | Notifications, event-driven applications |

Possible Questions (5/10 Marks) – Mod 1

✅ 1. Gopal needs a highly configured machine for a short period of time. Suggest a suitable solution and explain steps. (10 Marks)



📌 Problem:
Gopal needs a powerful computing machine for a short-term project. Buying a physical machine is expensive and not
practical.

✅ Suggested Solution: Use Amazon EC2 (Elastic Compute Cloud)


Amazon EC2 provides on-demand, resizable virtual servers (instances) in the cloud. Gopal can launch a high-configuration
instance, use it for his project, and terminate it to avoid further cost.

🧾 Steps to Implement the Solution:


Step 1: Sign in to AWS
Visit https://aws.amazon.com

Sign in with an AWS account or create a new one.

Step 2: Launch EC2 Instance


Go to the EC2 Dashboard.

Click Launch Instance.

Step 3: Choose Amazon Machine Image (AMI)


Select an OS like Amazon Linux, Ubuntu, or Windows Server depending on project needs.

Step 4: Choose Instance Type


Choose a high-performance instance like:

c5.2xlarge for CPU-intensive tasks

p3.2xlarge for GPU/ML workloads

r5.2xlarge for memory-intensive workloads

Step 5: Configure Instance Details


Set number of instances, shutdown behavior, IAM role, etc.

Add Auto-termination script if needed.

Step 6: Add Storage


Increase root volume size if required.

Add EBS volumes for extra space.

Step 7: Configure Security Group


Add rules to allow SSH (port 22), HTTP (port 80), or other necessary ports.

Step 8: Review & Launch


Download the .pem key file for SSH login.

Click Launch Instance.

Step 9: Connect and Use


SSH into the instance from terminal:

ssh -i your-key.pem ec2-user@<Public-IP>

Install necessary tools and run the project.

Step 10: Terminate When Done


Go to EC2 dashboard → Select instance → Click Terminate to stop billing.

✅ Advantages:
No upfront hardware cost

Fully scalable

Pay-as-you-go model

Quick setup in minutes

✅ 2. You are asked to create a website to share CA-1 exam dates using EC2. Explain all steps. (10 Marks)
🔹 Objective:
Host a basic website showing “Advanced Statistics CA-1 date” using an Apache Web Server on an EC2 instance.

✅ Steps to Host a Website in EC2:


Step 1: Sign in to AWS
Open AWS Console → Go to EC2 Service.

Step 2: Launch an EC2 Instance


Click Launch Instance

Choose:

AMI: Amazon Linux 2 (Free tier)

Instance Type: t2.micro

Click Next

Step 3: Configure Instance


Leave defaults or choose your network/VPC.

Set shutdown behavior to "stop" or "terminate".

Step 4: Add Storage


Default: 8 GiB EBS (can increase if needed)

Step 5: Configure Security Group


Allow:

SSH (Port 22) – for remote login

HTTP (Port 80) – for web traffic

Step 6: Launch & Connect


Review and Launch the instance

Download .pem key file

Connect using terminal:

ssh -i "key.pem" ec2-user@<Public-IP>

✅ Install Apache Web Server


sudo yum update -y             # refresh the package index
sudo yum install httpd -y      # install the Apache web server
sudo systemctl start httpd     # start the web server now
sudo systemctl enable httpd    # start it automatically on every boot

✅ Host Website
echo "<h1>CA-1 Exam for Advanced Statistics is on 28 August 2025</h1>" | sudo tee /var/www/html/index.html

✅ Verify Website
Go to browser:

http://<EC2 Public IP>

You should see the exam date message.

✅ Benefits of Using EC2:


Cost-effective (Free Tier available)

Easy to manage

Scalable and reliable

Temporary setup with on-demand termination

Module - 2
1) Edge Location

Edge locations are data centers in AWS’s Content Delivery Network (CDN) used by Amazon CloudFront to cache content closer to end users.

An edge location is not a full AWS Region or Availability Zone, but a smaller data center located globally.

Edge locations help reduce latency by serving cached data quickly.

Example:
If a video is stored in the Mumbai AWS Region, but a user in Chennai requests it, CloudFront serves it from the Chennai Edge
Location, reducing latency.

Benefits:
1. Low Latency – Delivers content quickly by serving from nearest location.

2. Improved User Experience – Faster page loads, smoother video streaming.

3. Reduced Server Load – Offloads repeated requests from origin servers.

4. High Availability – Content is available even if origin is temporarily down.
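The hit/miss behaviour behind these benefits can be sketched as a tiny cache in front of an origin. This is conceptual only — the latency figures are made-up illustrations of "origin far away, edge nearby", not measured CloudFront numbers.

```python
class EdgeLocation:
    """Toy CDN edge cache: serve from cache on a hit, otherwise fetch
    from the origin and cache the result for later requests."""
    ORIGIN_MS, CACHED_MS = 120, 15   # illustrative round-trip times (made up)

    def __init__(self, origin):
        self.origin = origin          # stand-in for the origin server's content
        self.cache = {}

    def get(self, path):
        if path in self.cache:                    # cache hit: fast local serve
            return self.cache[path], self.CACHED_MS
        body = self.origin[path]                  # cache miss: go to the origin
        self.cache[path] = body                   # store for subsequent users
        return body, self.ORIGIN_MS

edge = EdgeLocation(origin={"/video.mp4": b"..."})
_, first = edge.get("/video.mp4")    # miss: fetched from the origin region
_, second = edge.get("/video.mp4")   # hit: served from the nearby edge
print(first, second)  # 120 15
```

This mirrors the Mumbai/Chennai example above: the first Chennai viewer pays the origin round trip, and every later viewer is served from the Chennai edge cache.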

2) Four Business Factors to Consider When Choosing an AWS Region


| Factor | Explanation |
|---|---|
| 1. Latency | Choose a region closest to your customers to reduce network delays. |
| 2. Compliance | Some data must reside in specific countries or regions to meet legal policies. |
| 3. Cost | AWS pricing varies by region. Select a region that offers cost-effective services. |
| 4. Service Availability | Not all AWS services (e.g., SageMaker, EKS) are available in every region. |

3) Availability Zones

In AWS, an Availability Zone (AZ) is a physically separate data center within a specific AWS Region.

Each AZ consists of one or more data centers with independent power, cooling, and networking, yet all AZs in a region are
connected via low-latency private fiber-optic links.

Key Features of AZs:


They are isolated from failures in other zones.

They are connected to each other for high throughput and low latency.

Each AWS Region has at least two AZs (some have 3 or more).

Why Multi-AZ Deployment is Better than Single AZ


| Factor | Single-AZ Deployment | Multi-AZ Deployment |
|---|---|---|
| Fault Tolerance | If the AZ fails, the application goes down. | Automatically shifts workload to another healthy AZ. |
| Availability | Lower – single point of failure. | Higher – redundant systems in other AZs. |
| Disaster Recovery | Difficult and time-consuming. | Built-in redundancy supports quick failover. |
| Data Replication | Manual setup needed. | Automatic data replication (e.g., for RDS databases). |
| Cost | Slightly cheaper. | Slightly more expensive, but worth it for critical apps. |

Real-World Example:
If you're running an EC2 instance or RDS database in a Single AZ, and that zone suffers power failure, your app becomes
unavailable.

But in Multi-AZ setup, AWS automatically switches to the backup instance in another AZ — no downtime.

Services That Support Multi-AZ:


Amazon RDS (Automatic failover across AZs)

Elastic Load Balancing (Distributes traffic across AZs)

Auto Scaling Groups (Launch instances in multiple AZs)
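The availability gain from the multi-AZ designs listed above follows from basic probability: with independent zones, the deployment is down only when every zone is down at once. The per-AZ availability below is a made-up figure for illustration, not an AWS SLA value.

```python
# Assume a single AZ is available 99.0% of the time (illustrative figure).
az_availability = 0.99
single_az = az_availability
# Two independent AZs fail together only when both fail simultaneously:
multi_az = 1 - (1 - az_availability) ** 2

print(f"Single-AZ: {single_az:.4%}")  # 99.0000%
print(f"Multi-AZ:  {multi_az:.4%}")   # 99.9900%
```

Even a modest per-zone availability compounds into far less downtime once a second zone is added — the arithmetic behind "no downtime" failover claims.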



4) AWS CloudFront

Amazon CloudFront is a Content Delivery Network (CDN) that delivers data, videos, applications, and APIs globally with
low latency and high transfer speed.

It caches content at edge locations close to the user, reducing the need to fetch data from the origin server repeatedly.

How It Works
If content exists at the nearest edge location, it’s delivered immediately.

Otherwise, CloudFront retrieves it from an origin server (e.g., Amazon S3, EC2, MediaPackage, or custom HTTP server).

Two delivery types:

Web Distribution – for static/dynamic content (HTML, CSS, JS, images).

RTMP Distribution – for streaming media.

Benefits of AWS CloudFront


1. Global low-latency network with 225+ PoPs.

2. Built-in DDoS protection via AWS Shield.

3. Edge computing with Lambda@Edge for personalized, low-latency responses.

4. Integration with AWS services like S3, EC2, Route 53, and ELB.

5. Cost-efficient with no transfer charges for AWS-origin fetches and free TLS certificates via AWS ACM.

Setting up CloudFront for a Static Website


1. Upload HTML files to an S3 bucket and make it public.

2. Enable static website hosting.

3. Create a CloudFront distribution using the S3 website endpoint as the origin.

4. Use the CloudFront domain to access the website.

5. Observe reduced latency using browser network inspection tools.

Use Cases
Website delivery and security.

Dynamic content and API acceleration.

Live and on-demand video streaming.

Software and game delivery.

Lambda@Edge
A CloudFront feature to run functions closer to the user.



Improves performance, reduces latency.

No need for provisioning; only pay for execution time.

Conclusion
Use CloudFront in front of load balancers to improve content delivery speed.

Ideal for delivering real-time and static content efficiently.

5) AWS Direct Connect

It is a dedicated fiber-optic Ethernet connection between your internal network and AWS.

Bypasses internet service providers to ensure low latency, high performance, and secure connectivity.

You can create virtual interfaces over this connection to public AWS services (for instance, Amazon S3) or to Amazon VPC.

How It Works?
Traffic stays within the AWS global network, avoiding the public internet.

Available through 100+ global locations.

Can be set up as a dedicated or hosted connection.

SiteLink enables fast data transfer between global AWS Direct Connect locations.

Core Components
1. Connections: Physical links from your data center to AWS.

2. Virtual Interfaces (VIFs):

Public VIF: Access to public AWS services (e.g., S3).

Private VIF: Access to Amazon VPC.

3. Direct Connect Gateway: Connects VPCs across multiple regions.

4. Redundancy Setup:

Active/Active: Load-balanced.



Active/Passive: One standby for failover.

5. ASN (Autonomous System Numbers): Used for external routing policies.

Key Features
Bandwidth cost reduction – direct transfer lowers costs.

Full AWS service compatibility (S3, EC2, VPC, etc.).

Private high-speed connectivity to VPCs.

Scalable bandwidth (1 Gbps to 10 Gbps).

Simple setup via AWS Console.

Use Cases
Hybrid networks: Connect on-premises systems with AWS.

Network extension: Move data across locations via SiteLink.

Large dataset handling: Real-time analytics, backups, and media processing.

Benefits
Up to 44% lower latency.

60–70% cost savings on data egress.

Enhanced security through private links.

Improved reliability with reduced network unpredictability.

Monitoring
1. You can apply tags to your Direct Connect resources to manage or categorize them; you define both the key and the optional value that make up a tag.

2. All AWS Direct Connect API calls are recorded by CloudTrail as events.

3. Create CloudWatch alarms to monitor metrics.

Pricing
1. Billing has two components: port hours and outgoing data transfer. The cost of a port hour depends on capacity and connection type (dedicated connection or hosted connection).

2. Data Transfer Out is charged to the AWS account responsible for the traffic on private virtual interfaces and transit virtual interfaces. Using an AWS Direct Connect gateway with multiple accounts is free.

3. For publicly addressable AWS resources (such as Amazon S3 buckets, Classic EC2 instances, or EC2 traffic that passes through an internet gateway), Data Transfer Out (DTO) usage is metered toward the resource owner at the AWS Direct Connect data transfer rate, provided the outbound traffic is headed for public prefixes owned by the same AWS payer account and actively advertised to AWS through an AWS Direct Connect public virtual interface.
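The two billing components can be combined in a short estimate. Every rate below is a made-up placeholder — actual port-hour and DTO rates depend on location, capacity, and connection type, and should be taken from AWS pricing pages.

```python
# Illustrative Direct Connect billing arithmetic; the rates are invented.
PORT_RATE = 0.30   # $/port-hour for a hypothetical 1 Gbps dedicated port
DTO_RATE = 0.02    # $/GB data transfer out over Direct Connect

hours = 730        # approximately one month of port hours
egress_gb = 5000   # hypothetical monthly data transfer out

port_cost = PORT_RATE * hours
dto_cost = DTO_RATE * egress_gb
print(f"Port hours: ${port_cost:.2f}")             # $219.00
print(f"Data out:   ${dto_cost:.2f}")              # $100.00
print(f"Total:      ${port_cost + dto_cost:.2f}")  # $319.00
```

Because the port is billed for every hour it is provisioned, Direct Connect pays off mainly at sustained, high egress volumes.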

6) AWS Shared Responsibility Model

The AWS Shared Responsibility Model divides security responsibilities between AWS and the customer.

"In the Cloud" = Customer Responsibility

"Of the Cloud" = AWS Responsibility

Customer Responsibility – Security “IN” the Cloud


Customers are responsible for:

1. Customer Data – Protect and manage data stored or processed.

2. Platform, Applications, IAM – Secure application code, configure Identity & Access Management (IAM).

3. OS, Network & Firewall Config – Manage guest OS, firewall rules, and networking.

4. Encryption & Authentication:

Client-side encryption & integrity

Server-side encryption for files/data

Network traffic protection (encryption, identity, and integrity)

Focus: Configuration and control over what you run in AWS.

Customer Responsibility by Cloud Service Model

IaaS (Infrastructure as a Service)

Examples: Amazon EC2, EBS, VPC

Customer manages:

OS patching & maintenance

Network & firewall configuration

Identity & Access Management (IAM)

Data encryption & backups

Application security



🔸Maximum control, maximum responsibility

PaaS (Platform as a Service)

Examples: AWS Lambda, RDS, Elastic Beanstalk

Customer manages:

Application code

Data access and integrity

IAM roles and user permissions

🔸 Moderate responsibility; AWS handles OS, patching, and middleware


SaaS (Software as a Service)

Examples: AWS Shield, Trusted Advisor

Customer manages:

User access controls

Data entered into the application

Compliance with business policies

🔸 Minimal responsibility; focus on safe usage of the service


AWS Responsibility – Security “OF” the Cloud
AWS is responsible for securing:

1. Software Services:

Compute (e.g., EC2)

Storage (e.g., S3)

Database (e.g., RDS)

Networking (e.g., VPC infrastructure)

2. Global Infrastructure:

Regions

Availability Zones

Edge Locations

Focus: Infrastructure maintenance and security.



This model ensures clear accountability in cloud security and is critical to understand for AWS certifications and secure
deployments.

🤝 Security Collaboration Tips


To ensure strong security in AWS, customers must:

Use strong IAM policies and least privilege access

Encrypt data at rest and in transit

Configure firewalls (SGs, NACLs) properly

Monitor activity using CloudTrail and CloudWatch

Enable DDoS protection with AWS Shield

Conduct regular audits and compliance checks

7) AWS VPC

AWS VPC (Amazon Virtual Private Cloud) allows you to provision a logically isolated section of the AWS Cloud where you can
launch AWS resources in a customizable virtual network.

Key Features of AWS VPC:


Subnets: Divide your VPC into public/private zones within Availability Zones.

Route Tables: Control traffic routing between subnets and external networks.

Internet Gateway (IGW): Enables internet access for resources in public subnets.

NAT Gateway: Allows instances in private subnets to access the internet (outbound only).

Security Groups: Virtual firewalls for EC2 instances.

Network ACLs: Optional stateless firewalls for subnets.

VPC Peering: Connects two VPCs for private communication.

Use Cases:
Hosting secure web applications

Isolated environments for dev/test

Hybrid cloud architectures with VPN or Direct Connect
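Planning the subnets mentioned above is mostly CIDR arithmetic, which the standard `ipaddress` module can do. The VPC CIDR, subnet sizes, and names below are hypothetical examples of a common layout (a public and a private subnet per Availability Zone), not values from any particular setup.

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC into four equal /24 subnets:
# one public and one private subnet in each of two Availability Zones.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]

names = ["public-az-a", "public-az-b", "private-az-a", "private-az-b"]
for name, subnet in zip(names, subnets):
    print(name, subnet)
# public-az-a 10.0.0.0/24
# public-az-b 10.0.1.0/24
# private-az-a 10.0.2.0/24
# private-az-b 10.0.3.0/24
```

Non-overlapping subnets like these are what route tables, NAT gateways, and security groups then attach to; the module guarantees the blocks never collide.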

8) AWS Hybrid Cloud


Definition:
A hybrid cloud is an IT infrastructure that combines a company’s on-premises resources with third-party cloud services,
allowing data and applications to operate across multiple environments. It provides centralized management, scalability, and
flexibility.



Why Businesses Use Hybrid Cloud
To integrate existing legacy systems with modern cloud technologies.

To support low-latency, local data processing, data residency, and compliance needs.

To improve cost efficiency, application modernization, and user experience.

Key Benefits
Increased Development Agility: Faster product deployment and testing.

Scalability: Dynamically shift workloads between environments to meet demand.

Business Continuity: Maintain operations during outages or maintenance by offloading to the cloud.

Use Cases
Low-latency apps: Gaming, AR/VR, automation.

Local data processing: Big data tasks done on-site; backups in cloud.

Regulatory compliance: Store data in specific regions as required.

Data center extension: Handle seasonal traffic spikes without overbuying hardware.

Cloud migration: Gradual transition of assets to cloud without downtime.

How It Works
Hybrid cloud relies on application portability, not just infrastructure links. Developers use:

Unified platforms

OS consistency

Containers and automation (e.g., Kubernetes)

Orchestration tools for deployment and resource management

Hybrid Cloud Strategy


Factors to plan:

Cloud provider selection

Workload placement based on:

Security

Compliance

Cost

Accessibility

Environment compatibility

AWS Hybrid Cloud – Pros and Cons


Pros:

Flexibility: Choose between cloud or in-house based on need.

Speed: Faster project development and data analysis.

Data Security: Sensitive data stored securely and redundantly.

Scalability: Offload heavy computing to cloud services, freeing local resources.

Cons:

Complexity: Requires expertise to manage mixed environments.

Cost: Dual environments can be expensive without proper planning.

Security Concerns: Third-party access may raise privacy issues.

Compatibility Issues: Ensuring smooth communication and file compatibility between clouds.

Conclusion:



Hybrid cloud offers the best of both private and public cloud worlds, ideal for businesses needing flexibility, scalability, and
compliance—but it demands careful planning and expertise to implement effectively.

9) AWS Availability Zone

An Availability Zone (AZ) in AWS is a physically isolated data center within a region, with independent power, cooling, and
networking, but connected via low-latency links to other AZs in the same region.

Each AWS Region contains multiple AZs.

AZs are designed to be isolated from failures in other zones.

Services like EC2, RDS, and ELB can be launched in specific AZs.

Why Multi-AZ Deployment is Better than Single-AZ Deployment?

Feature Multi-AZ Deployment Single-AZ Deployment

Fault Tolerance High – one AZ failure won’t affect service Low – AZ failure causes downtime

Availability Ensures 99.99%+ uptime Limited to the availability of one AZ

Disaster Recovery Built-in through redundancy No backup or failover support

Data Durability Replication across AZs protects data Data loss if AZ fails

Use Case Ideal for production & mission-critical apps Suitable for testing or non-critical apps

🔹 Multi-AZ = High Availability + Resilience


🔹 Single-AZ = Simpler but Risky
Conclusion:
Multi-Availability Zone deployment ensures better fault tolerance, high availability, and disaster recovery, making it more
reliable and robust than single-AZ deployments.
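The availability argument can be illustrated with a toy model (not an AWS API): a request succeeds as long as at least one AZ hosting the service is healthy.

```python
# Toy model (not an AWS API): a service is deployed across one or more AZs.
# It stays available if at least one hosting AZ is still healthy.
def is_available(deployment_azs, failed_azs):
    """Return True if at least one AZ hosting the service is healthy."""
    return any(az not in failed_azs for az in deployment_azs)

single_az = ["us-east-1a"]
multi_az = ["us-east-1a", "us-east-1b"]

# AZ us-east-1a suffers an outage:
print(is_available(single_az, {"us-east-1a"}))  # False - downtime
print(is_available(multi_az, {"us-east-1a"}))   # True - standby AZ serves traffic
```

The same check generalizes: with n AZs, service is lost only if all n fail together, which is exactly why Multi-AZ deployments achieve higher availability.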

10)Cloud computing Architecture and components



Cloud Computing refers to the delivery of computing services such as servers, storage, databases, networking, software,
analytics, and intelligence over the internet with pay-as-you-go pricing. It's also called Internet-based computing, where users
get resources and services through the internet. This offers benefits like faster innovation, flexible resources, and cost savings.
The data stored can be files, images, documents, or any other storable data. Rather than buying, owning, and maintaining physical data centers and servers, users can access technology services, such as computing power, storage, and
databases, on an as-needed basis from a cloud provider like AWS.

Cloud Computing Architecture – Components


🔹 1. Client Infrastructure (Frontend)
Represents the user interface that interacts with the cloud.

Includes web browsers, mobile apps, and devices.

Responsible for sending requests via the internet.

🔹 2. Internet
Acts as a medium of communication between client and cloud service.

Facilitates the transmission of data, commands, and responses.

🔹 3. Backend (Core of Cloud Computing)


This includes all the layers and services that process and manage requests from clients:

✅ a. Application
Cloud-hosted software or applications (e.g., Gmail, Office365).

End-user interacts directly with this layer.

✅ b. Service
Core cloud services offered:

SaaS – Software as a Service

PaaS – Platform as a Service

IaaS – Infrastructure as a Service

✅ c. Cloud Runtime
Provides the runtime environment for execution (e.g., Java, Python runtimes).

Ensures application execution consistency across platforms.

✅ d. Storage
Handles data storage (object, block, file-based).

Examples: Amazon S3, EBS, Azure Blob Storage.

✅ e. Infrastructure
Physical components: servers, networking, virtual machines.

Foundation layer of the cloud (compute, storage, network hardware).

🔹 4. Management
Ensures monitoring, control, and orchestration of resources.

Examples: AWS CloudWatch, Azure Monitor, dashboards, alerts.

🔹 5. Security
Encompasses authentication, encryption, access control, and compliance.

Examples: IAM, firewalls, SSL, DDoS protection.

11)AWS IAM

AWS Identity and Access Management (IAM) is a security service that helps you control access to AWS services and
resources securely. With IAM, you can create and manage users, groups, roles, and permissions to allow or deny access to
specific AWS resources.

Key Features:
User & Group Management: You can create users and group them to manage permissions.

Policies: You attach permissions policies (written in JSON) to IAM users, groups, or roles to allow or deny access.

Roles: Allow temporary access to services. For example, an EC2 instance (Label 1) can be assigned an IAM role to access
S3 (Label 4) securely.

Fine-Grained Access: You can allow access to specific AWS resources, like a specific S3 bucket or EC2 instance.

Based on Diagram Context:


IAM (3) defines what EC2 (1) is allowed to do.

IAM roles and policies enable EC2 to:

Access S3 buckets (4) to read/write data.

Interact with AWS Backup (2) for creating and restoring backups.

This ensures secure, role-based, and auditable access without hardcoding credentials.



Module - 3
To know:-

1)Simple comparison table for Amazon EBS, S3, and EFS / Discuss object-level, file-level, and block-
level storage in AWS cloud services.
| Feature | Amazon EBS | Amazon S3 | Amazon EFS |
| --- | --- | --- | --- |
| Type of Storage | Block Storage | Object Storage | File Storage |
| Access Type | Attached to a single EC2 instance | Accessible from anywhere via HTTP | Shared across multiple EC2 instances |
| Use Cases | Databases, boot volumes, apps requiring low-latency storage | Backups, archives, media storage, static websites | Shared file systems, CMS, Big Data |
| Scalability | Fixed size, must manually resize | Automatically scales to any size | Automatically scales with usage |
| Persistence | Data persists after instance stops | Data always persists | Data always persists |
| Performance | High performance, low latency | Optimized for throughput | Scalable performance modes |
| Cost Basis | Pay for provisioned size (GB/month) | Pay per GB stored, requests made | Pay for storage used (GB/month) |
| Backup Options | Snapshots | Lifecycle policies, versioning | Snapshots (via backup tools) |

Object-level storage → Amazon S3 → Used for storing objects like images, videos, backups.

File-level storage → Amazon EFS → Supports file hierarchy and shared access.

Block-level storage → Amazon EBS → Used like a hard disk for EC2, low-latency performance

2)Amazon S3

Amazon S3 is a scalable, durable, and secure object storage service offered by AWS.



Key Features:
Stores objects (data + metadata) inside buckets

Designed for 11 9s durability (99.999999999%)

Offers fine-grained access control, versioning, lifecycle policies

Pay-as-you-go pricing model

Accessible via:

AWS Console

AWS CLI / SDK

REST API

Amazon S3 Storage Classes (Types)


Amazon S3 provides different storage classes optimized for use case, access frequency, and cost:

| Storage Class | Use Case | Durability | Availability | Retrieval Time | Min Storage Duration | Retrieval Cost | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| S3 Standard | Frequently accessed data | 99.999999999% (11 9s) | 99.99% | Milliseconds | None | No | Ideal for active content, websites, analytics |
| S3 Intelligent-Tiering | Changing access patterns | 99.999999999% (11 9s) | 99.9–99.99% | Milliseconds to hours | 30 days | Varies, plus monitoring fee | Optimizes cost by auto-moving data between tiers |
| S3 Standard-IA | Infrequent access | 99.999999999% (11 9s) | 99.9% | Milliseconds | 30 days | Yes | Great for backups, DR, long-lived infrequent data |
| S3 One Zone-IA | Infrequent, re-creatable data | 99.999999999% (11 9s) | 99.5% | Milliseconds | 30 days | Yes | Lower cost, but stores data in one AZ only |
| S3 Glacier | Archival storage | 99.999999999% (11 9s) | N/A | Minutes to hours | 90 days | Yes | Low-cost archive with flexible retrieval options |
| S3 Glacier Deep Archive | Long-term archival data | 99.999999999% (11 9s) | N/A | Hours (up to 12 hrs) | 180 days | Yes | Lowest-cost storage for compliance, archival data |
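To make the cost trade-off between classes concrete, here is a toy comparison of storage-only monthly cost. The per-GB rates below are illustrative placeholders, not actual AWS prices, and the calculation ignores request, retrieval, and monitoring fees.

```python
# Hypothetical per-GB-month prices for illustration only -- NOT real AWS pricing.
PRICE_PER_GB = {
    "S3 Standard": 0.023,
    "S3 Standard-IA": 0.0125,
    "S3 Glacier Deep Archive": 0.00099,
}

def monthly_storage_cost(storage_class, gb):
    """Storage-only cost (ignores requests, retrieval, and monitoring fees)."""
    return round(PRICE_PER_GB[storage_class] * gb, 2)

# Compare 1 TB (1000 GB) stored for one month in each class:
for cls in PRICE_PER_GB:
    print(cls, monthly_storage_cost(cls, 1000))
```

The pattern matches the table: colder classes are cheaper to store in, but add minimum-duration and retrieval charges that this sketch deliberately leaves out.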

🌐 Hosting a Static Website on Amazon S3


Use Case:
Deploy HTML/CSS/JS-based static websites (no backend/server-side code).



🔧 Step-by-Step Deployment Process
Step 1: Create an S3 Bucket
Go to the AWS Management Console → Open S3.

Click Create bucket.

Bucket name must be unique globally.

Region: Select closest region to your audience.

Uncheck Block all public access (important for website access).

Acknowledge warning → Create bucket.

Step 2: Upload Website Files


Select your bucket → Click Upload.

Add files: index.html , error.html , CSS, JS, images, etc.

Click Upload.

Step 3: Set File Permissions


Select all uploaded files → Click Actions → Make public.

(Or use a bucket policy instead – explained in Step 5)

Step 4: Enable Static Website Hosting


Go to Properties tab of the bucket.

Scroll to Static website hosting → Click Edit.

Choose Enable.

Enter:

Index document: index.html

Error document: error.html (optional)

Save changes.

You will now get a public endpoint URL (e.g., http://bucket-name.s3-website-region.amazonaws.com )

Step 5: Configure Bucket Policy for Public Access


To allow public read access to all objects:

Go to Permissions → Bucket Policy, paste this JSON:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket-name/*"
}
]
}

Replace your-bucket-name with your actual bucket name.
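As a convenience, the policy above can also be generated programmatically so the bucket ARN is never mistyped. A minimal sketch (the bucket name `my-site-bucket` is a placeholder):

```python
import json

def public_read_policy(bucket_name):
    """Build the public-read bucket policy shown above for a given bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }

# Produce the JSON to paste into the Permissions -> Bucket Policy editor:
print(json.dumps(public_read_policy("my-site-bucket"), indent=2))
```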

Step 6: Test the Website


Open the S3 static website endpoint URL.



Your website should load from the bucket.

🪣 How to Upload Data to an Amazon S3 Bucket

Step 1: Log in to AWS Console


Go to: https://console.aws.amazon.com/s3

Sign in to your AWS account.

Step 2: Open the S3 Dashboard


In the Search bar, type S3 and click on it under Services.

Step 3: Choose or Create a Bucket


If you already have a bucket, click on its name.

To create a new one:

1. Click Create Bucket

2. Enter a unique bucket name

3. Choose region, configure permissions

4. Click Create

Step 4: Upload Data


1. Open the bucket you want to upload data to.

2. Click on the “Upload” button.

3. In the upload window:

Click “Add files” or “Add folder”

Browse and select files from your local system

4. Click Next

Step 5: Set Permissions


By default, files are private.



If you want public access (e.g., for static website hosting):

Click “Grant public read access” (not recommended for sensitive data)

Step 6: Set Properties (Optional)


Choose storage class (Standard, IA, Glacier)

Enable encryption (SSE-S3, SSE-KMS, etc.)

Step 7: Review & Upload


Review your settings and click Upload

Wait for the files to upload (progress bar shown)

Step 8: Verify Upload


After uploading, you’ll see your files listed in the bucket

Click on a file to:

View its Object URL

See metadata

Download or delete it

3)AWS EFS

Amazon EFS is a scalable, fully managed NFS (Network File System) that provides shared file storage for multiple EC2
instances.

Key Features of EFS:


1. Shared Access – Multiple EC2 instances (across AZs) can simultaneously mount the same file system.

2. Elastic Scaling – Automatically grows/shrinks based on usage.

3. Fully Managed – No provisioning or management needed.

4. POSIX-Compliant – Supports standard Linux file system permissions and operations.

5. Highly Available & Durable – Data is stored across multiple AZs.

6. Secure – Supports encryption in transit and at rest.

7. Pay-as-you-go – Billed only for the actual storage used.

EFS Storage Classes:

Storage Class Description Use Cases

EFS Standard Default, high availability/performance Web servers, CMS, file shares

EFS IA Infrequent Access (cheaper) Backups, older project archives



You can enable Lifecycle Management to move files from Standard to IA automatically if they’re unused for a set number of
days.
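The lifecycle rule above can be sketched as a simple age check. This is a toy model, with the 30-day cutoff as an assumed setting:

```python
# Toy model of EFS Lifecycle Management: files untouched for `ia_after_days`
# days are moved from the Standard class to Infrequent Access (IA).
def assign_storage_class(days_since_last_access, ia_after_days=30):
    return "EFS IA" if days_since_last_access >= ia_after_days else "EFS Standard"

# Age (in days since last access) of some example files:
files = {"report.pdf": 45, "index.html": 2, "archive.tar": 400}
placement = {name: assign_storage_class(age) for name, age in files.items()}
print(placement)
```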

4)AWS EBS

Amazon EBS is a block-level storage service designed for use with Amazon EC2 (Elastic Compute Cloud). It works like a virtual
hard disk, allowing you to store persistent data that survives instance termination.

Typical workflow: Create a gp3 volume → Attach it to an EC2 instance → Format and mount it → Store database files → Back up with a snapshot.

Key Features of EBS


1. Block Storage

Data is stored in blocks, like a traditional hard drive.

Suitable for databases, OS disks, and applications requiring low-latency access.

2. Persistent Storage

Data remains available even after the EC2 instance is stopped or terminated (if not deleted explicitly).

3. Availability Zone Specific

EBS volumes reside within a single Availability Zone (AZ) but can be backed up or copied across regions using
snapshots.

4. Encryption Support

Supports encryption at rest and in-transit using AWS KMS.

5. Snapshots

Point-in-time backup of an EBS volume stored in Amazon S3.

Can be used to restore data or create new volumes.

6. Elastic Volumes

You can dynamically increase volume size, change volume type, or adjust performance without downtime.

EBS Volume Types


Volume Type Description Use Case

gp3 (General Purpose SSD) Default; cost-effective SSD Boot volumes, dev/test apps

io1/io2 (Provisioned IOPS SSD) High performance with custom IOPS Databases, mission-critical apps

st1 (Throughput-optimized HDD) Low-cost, high throughput Big data, data warehouses

sc1 (Cold HDD) Lowest cost, infrequent access Archive, cold data storage

EBS Snapshots
Snapshots are backups of EBS volumes stored in Amazon S3.

Incremental: Only changed blocks are saved after the first snapshot.

Snapshots can:

Be copied to other regions

Be used to create new EBS volumes

Support automated backup via Data Lifecycle Manager
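The incremental behavior can be illustrated with a toy model (not the real EBS implementation) where a volume is a map of block IDs to contents:

```python
# Toy model of incremental EBS snapshots: after the first full snapshot,
# each later snapshot stores only blocks that changed since the previous one.
def incremental_snapshot(previous_blocks, current_blocks):
    """Return the blocks that must be stored for this snapshot."""
    if previous_blocks is None:          # first snapshot: store everything
        return dict(current_blocks)
    return {blk: data for blk, data in current_blocks.items()
            if previous_blocks.get(blk) != data}

vol_v1 = {0: "aaaa", 1: "bbbb", 2: "cccc"}
vol_v2 = {0: "aaaa", 1: "XXXX", 2: "cccc"}   # only block 1 changed

snap1 = incremental_snapshot(None, vol_v1)    # full: stores 3 blocks
snap2 = incremental_snapshot(vol_v1, vol_v2)  # incremental: stores 1 block
print(len(snap1), len(snap2))
```

This is why snapshot storage costs stay low even with frequent backups: unchanged blocks are never stored twice.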



Benefits of Amazon EBS

Benefit Description

Durability Automatically replicated within AZ to protect against failure

High Performance SSD-backed options for low-latency, high IOPS workloads

Flexibility Choose between performance and cost with various volume types

Scalability Volumes can scale from gigabytes to petabytes

Cost Efficiency Pay-as-you-go pricing; snapshots reduce storage costs

Security Built-in encryption, IAM access control, compliance ready

Use Cases
Hosting databases (MySQL, PostgreSQL, MongoDB)

Boot volumes for EC2 instances

File systems and applications

Data warehousing and analytics

5)AWS RDS

Amazon RDS (Relational Database Service) is a managed cloud database service by AWS that makes it easy to set up,
operate, and scale a relational database in the cloud.

It supports popular database engines:

MySQL

PostgreSQL

MariaDB

Oracle

Microsoft SQL Server

Amazon Aurora (AWS-optimized engine)

Key Features of Amazon RDS:


Automated backups and snapshots

Easy replication and high availability (Multi-AZ)

Monitoring via Amazon CloudWatch

Built-in security with VPC, encryption, IAM

Auto patching and maintenance



Scaling compute/storage with minimal downtime

Advantages of Amazon RDS


Feature Benefit

1. Fully Managed AWS handles provisioning, setup, patching, and backups—so you can focus on your application.

2. High Availability (Multi-AZ) Automatically replicates to a standby in another AZ for fault tolerance and failover.

3. Scalability Easy to scale compute and storage vertically without impacting availability.

4. Automated Backups Supports point-in-time recovery. Snapshots can also be created manually.

5. Security Integrates with AWS IAM, VPC, encryption at rest (KMS) and in transit (SSL/TLS).

6. Performance Monitoring Uses Amazon CloudWatch and Performance Insights to monitor DB health.

7. Cost-Efficient Pay-as-you-go model with Reserved Instance options for long-term savings.

8. Supports Multiple DB Engines Flexibility to use familiar open-source or commercial databases.

Use Cases:
Web & mobile app backends

E-commerce platforms

Analytics and business intelligence workloads

Commonly Asked Questions

You are assigned to add the new volume (/dev/sdf) to a Linux instance as an ext3 file system under the “/mnt/obj-store” mount
point and on your mounted volume, create a file and add some text, also configure the Linux instance to mount this volume
whenever the instance is started. Write the necessary Linux commands for all the above tasks and explain each command.

Steps to Add and Mount /dev/sdf as ext3


1. Format the volume as ext3

sudo mkfs.ext3 /dev/sdf

Formats the volume with the ext3 file system.

2. Create a mount point

sudo mkdir -p /mnt/obj-store

Creates the directory (as given in the task) where the volume will be mounted.

3. Mount the volume

sudo mount /dev/sdf /mnt/obj-store

Mounts the volume at the mount point.

4. Create a file and add text

echo "Hello AWS!" | sudo tee /mnt/obj-store/hello.txt

Creates a file on the mounted volume and writes sample text to it.

5. Get the UUID of the volume

sudo blkid /dev/sdf

Finds the UUID, which is used for auto-mounting (UUIDs stay stable across reboots, unlike device names).

6. Edit /etc/fstab to auto-mount at boot

sudo nano /etc/fstab

Add this line (replace with your UUID):

UUID=your-uuid-here /mnt/obj-store ext3 defaults 0 0

7. Test the auto-mount entry

sudo mount -a

Mounts everything listed in /etc/fstab, confirming the entry works without rebooting.

Module - 4
1)Shared responsibility Model - Already in Module 2

2)AWS CloudFront - Already in Module 2

3)AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, and operational and risk auditing of your AWS account. It
automatically records and logs every action made through the AWS Management Console, CLI, SDKs, and other AWS
services, giving you a complete history of API calls for your account.

Key Features
Feature Description

Event Logging Records all API calls as events: who did what, when, and from where.

Multi-Region Support You can log activity from all regions to a single S3 bucket.

Integration with CloudWatch Allows real-time monitoring and alerting for suspicious activity.

Data Integrity Validation Verifies that log files haven't been tampered with using SHA-256 hashing.

Organization-wide Trail Can create a single trail across multiple accounts using AWS Organizations.

S3 and CloudWatch Logging Trails are stored in S3 and optionally delivered to CloudWatch Logs for analysis.

What Does It Record?


CloudTrail records information such as:

User identity (IAM, root, federated)

Time of the API call

Source IP address



Request parameters

Response elements

Resources accessed or changed

How It Works
1. Enable CloudTrail (one trail per region or one for all regions).

2. Choose a destination S3 bucket for log storage.

3. Optionally enable CloudWatch integration for real-time monitoring.

4. Log files are delivered within ~15 minutes of the API call.

Use Cases
Security analysis (e.g., detect unauthorized access).

Resource change tracking.

Compliance auditing (HIPAA, PCI DSS, etc.).

Operational troubleshooting.

🧾 Example CloudTrail Event (JSON) (not necessary to remember)


{
"eventTime": "2025-08-02T12:00:00Z",
"eventName": "StartInstances",
"awsRegion": "us-east-1",
"sourceIPAddress": "203.0.113.0",
"userAgent": "aws-cli/2.0.0",
"requestParameters": {
"instancesSet": {
"items": [{"instanceId": "i-1234567890abcdef0"}]
}
},
"userIdentity": {
"type": "IAMUser",
"userName": "john.doe"
}
}
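A small sketch of how such an event record might be parsed to answer "who did what, when, and from where" (plain JSON handling, not the CloudTrail API):

```python
import json

def summarize_event(event_json):
    """Pull the audit-relevant fields out of a CloudTrail event record."""
    e = json.loads(event_json)
    return {
        "who": e["userIdentity"].get("userName", e["userIdentity"]["type"]),
        "what": e["eventName"],
        "when": e["eventTime"],
        "where": e["sourceIPAddress"],
    }

# A trimmed version of the example event above:
sample = json.dumps({
    "eventTime": "2025-08-02T12:00:00Z",
    "eventName": "StartInstances",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.0",
    "userIdentity": {"type": "IAMUser", "userName": "john.doe"},
})
print(summarize_event(sample))
```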

Important Notes
CloudTrail is enabled by default for all accounts (for management events for the last 90 days).

You must create a trail to retain logs beyond 90 days and to log data events.

Can monitor both management events (control-plane) and data events (e.g., S3 object-level access).

4)AWS CloudWatch



Amazon CloudWatch is a monitoring and observability service that provides data and actionable insights for AWS resources,
applications, and services. It allows you to collect metrics, logs, events, and alarms to track performance, detect anomalies,
and respond to system-wide issues.

Key Features of CloudWatch


Feature Description

Metrics Monitoring Collects and tracks standard & custom metrics (CPU usage, memory, API calls, etc.)

Logs Management Centralized collection, storage, and analysis of logs from EC2, Lambda, VPC, etc.

Alarms Automatically trigger actions (e.g., stop instance, send notification) based on thresholds.

Dashboards Custom visualizations for real-time and historical data (graphs, widgets, etc.)

Events / Rules (EventBridge) Detect changes in your environment and trigger automated responses.

CloudWatch Agent Installed on EC2 or on-prem servers to push custom metrics and logs.

Anomaly Detection Automatically detects outliers in metric patterns using ML.

How It Works
1. Collect Metrics and Logs from AWS services (e.g., EC2, Lambda, RDS) or your own applications.

2. Store this data in CloudWatch for analysis and visualization.

3. Set Alarms to monitor for specific conditions.

4. Trigger Actions such as SNS notifications, Lambda functions, or auto-scaling adjustments.

Common Use Cases

Use Case Example

System Monitoring Monitor EC2 CPU usage, memory, disk I/O

Application Logging View Lambda logs, container logs

Alarm Notifications Alert if latency > 100ms

Automated Remediation Restart failed instances automatically

Cost Optimization Spot unused resources through low-usage metrics

Custom Dashboards Create team-specific monitoring views

Example CloudWatch Alarm Logic


Metric: CPUUtilization

Threshold: > 80%

Period: 5 minutes

Action: Send alert via Amazon SNS
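The alarm logic above can be sketched as a simple threshold check over the period's samples (a toy model, not the CloudWatch API):

```python
# Toy evaluation of the alarm above: fire when the average CPUUtilization
# over the 5-minute period exceeds the 80% threshold.
def alarm_state(cpu_samples, threshold=80.0):
    avg = sum(cpu_samples) / len(cpu_samples)
    return "ALARM" if avg > threshold else "OK"

print(alarm_state([70, 75, 72]))   # OK
print(alarm_state([85, 92, 88]))   # ALARM -> would publish to SNS
```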

Difference Between CloudTrail and CloudWatch and CloudFront



| Feature | CloudTrail | CloudWatch | CloudFront |
| --- | --- | --- | --- |
| Purpose | Records API activity | Monitors metrics/logs/performance | Content Delivery Network (CDN) for faster static/dynamic content delivery |
| Focus | Who did what and when | How the system is behaving | Deliver content with low latency and high transfer speed |
| Data Type | Event logs of API calls | Logs, metrics, alarms, dashboards | Cached web content (e.g., HTML, CSS, JS, media) |
| Best For | Auditing, security, compliance | Monitoring, alerting, auto-scaling actions | Accelerating websites, media delivery, and securing edge locations |

5)Benefits of compliance with AWS


1)Better Security

Protects your data using encryption, access control, and monitoring.

2)Customer Trust

Shows clients and partners that your system is secure and reliable.

3)Faster Launch

Use AWS’s ready-to-use compliant services to save setup time.

4)Meets Global Laws

Helps follow local and international rules like GDPR, HIPAA, etc.

5)Real-Time Monitoring

Use AWS tools to track and fix compliance issues automatically.

6)Avoids Fines

Reduces risk of legal penalties for not following regulations.

7)Business Advantage

Makes it easier to win contracts with government and large companies.

6)AWS Trusted Advisor


AWS Trusted Advisor is a monitoring and recommendation tool that helps optimize your AWS environment by providing real-
time best practice checks across five categories:

Five Key Categories


1. Cost Optimization

Finds idle or underutilized resources

Suggests ways to reduce costs (e.g., unused EC2 instances, EBS volumes)

2. Performance

Identifies bottlenecks or inefficient configurations

Recommends steps to improve speed and responsiveness

3. Security

Detects vulnerabilities (e.g., open ports, exposed S3 buckets)

Recommends actions to harden security

4. Fault Tolerance

Checks for backup setups, multi-AZ deployments

Helps ensure high availability and disaster recovery

5. Service Limits

Monitors your usage of AWS service quotas

Warns when you're approaching limits (e.g., EC2 instances)

How It Helps:
Saves cost by removing waste



Improves performance with optimization tips

Boosts security by flagging risks

Ensures reliability with fault tolerance checks

Avoids service disruptions by tracking limits

Access:
Available via the AWS Console

Basic checks available to all AWS users

Full checks require Business or Enterprise Support Plan

7)Multi-factor Authentication (MFA)


Definition:

Adds an extra layer of security by requiring an additional authentication factor beyond just a password.
Process:

User enters IAM user ID and password.

Provides a one-time code from an MFA device (e.g., smartphone app or hardware key).

Best Practice:
Enable MFA for all IAM users and especially the root user to prevent unauthorized access.

Example:
An IAM user " AdminUser " logs in using a password and then enters a 6-digit code generated by their Google Authenticator app
to complete the sign-in securely.

Other Things to know:


AWS Artifact
Provides on-demand access to security & compliance reports and online agreements.

Two sections:

Artifact Agreements (e.g., NDAs, GDPR)

Artifact Reports (e.g., audit reports)

Customer Compliance Center offers resources for learning AWS compliance.

AWS Shield
Protects applications from DDoS attacks.

Shield Standard – Free, automatic protection for all AWS users.

Shield Advanced – Paid, enhanced protection; integrates with:

Amazon CloudFront

Route 53

Elastic Load Balancing

Additional Security Services


AWS KMS – Performs encryption using managed cryptographic keys.

AWS WAF – Web Application Firewall that filters and monitors HTTP requests.

Amazon Inspector – Automated security assessments for applications.

Amazon GuardDuty – Threat detection service that monitors network traffic and account behavior.

AWS Shield – DDoS Protection Service


Purpose:
Protects your applications running on AWS from Distributed Denial of Service (DDoS) attacks.



🔸 Types of AWS Shield:
1. Shield Standard (Free):

Automatically protects all AWS customers.

Defends against common, most frequent DDoS attacks (e.g., SYN/UDP floods).

2. Shield Advanced (Paid):

Provides enhanced protection and real-time attack visibility.

Includes DDoS cost protection and integration with:

Amazon CloudFront

Route 53

Elastic Load Balancing (ELB)

Access to the AWS DDoS Response Team (DRT) for assistance during attacks.

Amazon GuardDuty – Threat Detection Service


Purpose:
Continuously monitors your AWS environment for malicious activity and unauthorized behavior.

🔍 Key Features:
Intelligent threat detection using:

Machine Learning

Anomaly detection

Integrated AWS threat intelligence

Monitors:

VPC Flow Logs

AWS CloudTrail Events

DNS logs

Detects:

Suspicious API calls

Unusual network traffic

Compromised EC2 instances

IAM credential misuse

⚖️ Key Differences:
Feature AWS Shield Amazon GuardDuty

Focus DDoS protection Threat detection & account monitoring

Type of Threats Network-level DDoS attacks Malware, compromised accounts, anomalies

Cost Standard (Free), Advanced (Paid) Pay-as-you-go

Detection Method Pattern matching (network traffic) ML + behavioral analysis + AWS threat intel

Integration With CloudFront, ELB, Route 53 With VPC Flow Logs, CloudTrail, DNS logs

IAM Users & IAM Policies


IAM User
A person or application with credentials to access AWS.

Has username, password, and optionally access keys.

Example:

dev-john is an IAM user who logs in to manage EC2 instances.



IAM Policy
A JSON document that defines permissions (Allow/Deny).

Attached to a user, group, or role.

Example Policy: (Allows reading S3 objects)

{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/*"
}
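A toy evaluator for a single-statement policy like this one illustrates how Effect, Action, and Resource combine. Real IAM evaluation also handles explicit Deny precedence, conditions, and multiple statements, which this sketch omits:

```python
from fnmatch import fnmatch

# Toy evaluator (NOT real IAM logic): allow a request only if the statement's
# Effect is Allow and its Action/Resource patterns match the request.
def is_allowed(policy, action, resource):
    return (policy["Effect"] == "Allow"
            and fnmatch(action, policy["Action"])
            and fnmatch(resource, policy["Resource"]))

policy = {"Effect": "Allow", "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::mybucket/*"}

print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::mybucket/report.csv"))  # True
print(is_allowed(policy, "s3:PutObject", "arn:aws:s3:::mybucket/report.csv"))  # False
```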

IAM Groups & IAM Roles


IAM Group
A collection of IAM users with shared permissions.

Easier to manage users with the same access.

Example:
Create a Developers group → attach EC2 full access → add john , alice , and raj .

IAM Role
Temporary access identity assumed by users or services.

Used by EC2, Lambda, cross-account access, etc.

Example:
Create a role with S3 access, attach it to an EC2 instance → now EC2 can access S3 without keys.

Quick Summary Table

Concept Description Example

IAM User Individual identity john-dev

IAM Policy Permissions document (JSON) Allow EC2 or S3 access

IAM Group Collection of users with same policy Developers group

IAM Role Temporary access for service/user EC2 assumes a role to access S3

Module - 5
5.1 AWS Pricing
Understand how AWS charges for services and how to optimize costs using pricing models.

Pay-as-you-go: No upfront cost; pay only for what you use.

Pricing Models:

On-Demand: Flexible, pay per hour/second.

Reserved Instances (RI): Commit for 1 or 3 years to save up to 72%.

Spot Instances: Use spare capacity at up to 90% discount (for fault-tolerant workloads).

Free Tier Types:

Always Free (e.g., 1M Lambda requests/month)

12-Month Free



Trials (short-term offers)

Volume-based Discounts: Lower per-unit cost with higher usage (e.g., S3 storage).

Compute Savings Plan: Commit usage and save on EC2, Lambda, Fargate.
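The discount figures above can be made concrete with simple arithmetic. The hourly rate here is a made-up placeholder, not a real AWS price, and the discounts use the "up to" maxima quoted above:

```python
# Illustrative only: compare the three EC2 pricing models using the maximum
# discounts quoted above (72% Reserved, 90% Spot). Rates are NOT real prices.
ON_DEMAND_RATE = 0.10  # $/hour, hypothetical

def monthly_cost(model, hours=730):
    discount = {"on-demand": 0.0, "reserved": 0.72, "spot": 0.90}[model]
    return round(ON_DEMAND_RATE * hours * (1 - discount), 2)

for model in ("on-demand", "reserved", "spot"):
    print(model, monthly_cost(model))
```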

5.2 AWS Budget Setup


Learn how to set usage and cost budgets to avoid overspending in AWS.

AWS Budgets help:

Track service usage

Monitor costs

Set alerts for overspending

Types of Budgets:

Cost Budget: Limits total cost (e.g., stay under $100/month).

Usage Budget: Limits resource usage (e.g., 500 compute hours).

Reservation Budget: Monitor RI and Savings Plans usage.

Budget Alerts:

Email notifications when threshold is reached (e.g., 50%, 80%, 100%)

Forecasting:

Budget dashboard predicts monthly spend.

Helps make decisions proactively.
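The threshold-based alerts can be sketched as follows (a toy model; the 50/80/100% thresholds are the example values above):

```python
# Sketch of AWS Budgets alerting: given a monthly cost budget and current
# spend, report which configured thresholds (in %) have been crossed.
def crossed_thresholds(budget, spend, thresholds=(50, 80, 100)):
    pct = spend / budget * 100
    return [t for t in thresholds if pct >= t]

print(crossed_thresholds(budget=100, spend=85))   # 50% and 80% alerts fired
print(crossed_thresholds(budget=100, spend=120))  # all three alerts fired
```

In the real service, each crossed threshold would trigger an email (or SNS) notification rather than a return value.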

5.3 AWS Cost Explorer


Tool for visualizing and analyzing AWS cost and usage data.

Detailed Notes:
AWS Cost Explorer Features:

Shows past usage and costs

Forecasts future spend based on trends

Filters by service, account, tag, or usage type

Use Cases:

Identify high-cost services

Optimize reserved instance usage

Visualize month-to-date vs budget

Cost Categories:

Break down spend by linked accounts or services

Useful for consolidated billing

Data Updates: Updated daily

Integrations:

Works with Budgets and Reports

Export usage data for detailed analysis

1)Five Pillars of the AWS Well‑Architected Framework


These pillars are the foundation for building secure, high-performing, resilient, and efficient infrastructure in the cloud.

1. Operational Excellence
Focus: Run and monitor systems effectively.



Key Practices:

Automate changes

Monitor performance

Quickly respond to events

Continuously improve processes

2. Security
Focus: Protect data, systems, and assets.

Key Practices:

Identity & access management

Data encryption

Logging and monitoring

Security incident response

3. Reliability
Focus: Ensure a workload performs its intended function correctly and consistently.

Key Practices:

Recover from failures

Handle change automatically

Design for fault tolerance and redundancy

4. Performance Efficiency
Focus: Use IT and computing resources efficiently.

Key Practices:

Choose the right instance types

Monitor and adapt to changing requirements

Use serverless architectures where applicable

5. Cost Optimization
Focus: Avoid unnecessary costs.

Key Practices:

Use a consumption model

Measure overall efficiency

Eliminate unused resources

Right-size resources

2)Six Benefits of Cloud Computing


These are the core advantages that make cloud computing ideal for modern businesses.

1. Trade Capital Expense for Variable Expense


No need for heavy upfront investment in hardware.

Pay only for the computing resources you use.

2. Benefit from Massive Economies of Scale


AWS aggregates demand from millions of customers.

Lower prices due to high-volume purchasing.

3. Stop Guessing Capacity



Easily scale up or down based on demand.

Prevents over-provisioning or under-provisioning.

4. Increase Speed and Agility


Deploy new servers in minutes, not weeks.

Faster innovation and reduced time to market.

5. Stop Spending Money Running and Maintaining Data Centers


No need to manage physical infrastructure.

Focus on development and business goals.

6. Go Global in Minutes
Deploy applications in multiple AWS Regions worldwide.

Reduce latency and improve user experience globally.

3)Migration Strategies
When migrating from on-premises or non-AWS cloud:

1. Rehosting (Lift-and-Shift)

Move apps without changes.

2. Replatforming (Lift, Tinker, and Shift)

Minor cloud optimizations; no core architecture change.

3. Refactoring/Re-architecting

Major changes to meet business needs (scaling, performance).

4. Repurchasing

Shift to SaaS (e.g., move CRM to Salesforce.com).

5. Retaining

Keep critical apps on-premises (due to complexity or priority).

6. Retiring

Remove obsolete apps.

4) AWS Snow Family – For Offline Data Migration


1. AWS Snowcone

Small, rugged, secure device

2 CPUs, 4 GB RAM, 8 TB storage

Edge computing + data transfer

2. AWS Snowball

Two types:

Storage Optimized: Large-scale data migration

Compute Optimized: Local compute, ML, analytics

Higher capacity than Snowcone

3. AWS Snowmobile

45-foot truck

Transfers up to 100 PB per device

For exabyte-scale transfers
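
The case for shipping data physically becomes obvious with a quick back-of-the-envelope calculation. A minimal sketch, assuming an ideal, fully saturated 1 Gbps link (the link speed is an illustrative assumption, not an AWS figure):

```python
# Rough comparison of offline (Snow Family) vs. online data transfer.
# Assumes decimal units and an ideal link with no protocol overhead.

def network_transfer_days(data_tb: float, link_gbps: float) -> float:
    """Days needed to move `data_tb` terabytes over a dedicated link."""
    bits = data_tb * 1e12 * 8            # TB -> bits
    seconds = bits / (link_gbps * 1e9)   # ideal throughput
    return seconds / 86400

# Moving 100 PB (one Snowmobile load) over a saturated 1 Gbps line:
days = network_transfer_days(100_000, 1.0)  # 100 PB = 100,000 TB
print(f"{days:,.0f} days (~{days / 365:.0f} years)")
```

At roughly 9,000+ days (decades) for 100 PB over 1 Gbps, driving a truck full of disks is clearly faster.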

5) Innovation with AWS


1. Serverless Applications

No server management needed

Built-in fault tolerance & scalability

Use AWS Lambda to run code on-demand

Developers focus on features, not infrastructure
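
To make the "no infrastructure" point concrete, here is a minimal sketch of a Lambda function. The `lambda_handler(event, context)` signature follows the Python runtime convention; the `name` field in the event is an illustrative assumption:

```python
# Minimal sketch of an AWS Lambda handler (Python runtime).
# Lambda invokes this function on demand; there is no server to manage.
import json

def lambda_handler(event, context):
    """Entry point Lambda calls for each invocation."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of one invocation (Lambda passes a real context object):
print(lambda_handler({"name": "AWS"}, None)["body"])
```

The developer writes only this function; provisioning, scaling, and fault tolerance are handled by the service.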

2. Artificial Intelligence Services


Amazon Transcribe: Speech-to-text

Amazon Comprehend: Text pattern analysis

Amazon Fraud Detector: Detect fraud

Amazon Lex: Build chatbots (text + voice)

3. Machine Learning with SageMaker


Simplifies ML model building, training, and deployment

Saves time, cost, and complexity

Use ML to predict outcomes and analyze data

6) AWS Pricing and Cost Management


AWS Free Tier (3 Types)
1. Always Free

No expiry (e.g., Lambda: 1M free requests/month, DynamoDB: 25GB storage)

2. 12-Months Free

Valid for 1 year post sign-up (e.g., EC2, S3, CloudFront limited usage)

3. Trials

Short-term free offers (e.g., Amazon Inspector – 90 days)

Pricing Concepts
Pay as you go

Pay only for what you use, no upfront commitment.

Pay less when you reserve

Save up to 72% with Savings Plans or Reserved Instances (e.g., EC2)

Pay less with volume

Tiered pricing = lower per-unit cost at higher usage (e.g., S3)
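
The effect of tiered pricing can be sketched with a small calculator. The tier boundaries and per-GB rates below are illustrative assumptions modeled on S3-style tiers, not current list prices:

```python
# Illustrative tiered pricing: the per-GB rate drops as monthly usage grows.
# Tier limits are in GB; rates are assumed, not current S3 list prices.
TIERS = [
    (51_200,       0.023),  # first 50 TB per month
    (512_000,      0.022),  # next 450 TB
    (float("inf"), 0.021),  # beyond 500 TB
]

def monthly_storage_cost(gb: float) -> float:
    cost, prev_limit = 0.0, 0
    for limit, rate in TIERS:
        if gb <= prev_limit:
            break
        billable = min(gb, limit) - prev_limit  # GB that fall in this tier
        cost += billable * rate
        prev_limit = limit
    return round(cost, 2)

print(monthly_storage_cost(10_000))    # 10 TB, all in the first tier
print(monthly_storage_cost(100_000))   # 100 TB spans two tiers
```

Note how the 100 TB bill is less than ten times the 10 TB bill: the portion above 50 TB is charged at the cheaper rate.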

AWS Pricing Calculator

Tool to estimate AWS service costs.

Service Pricing Examples


AWS Lambda

Charged per request and compute time.

Free tier: 1M requests and 3.2M seconds of compute time (400,000 GB-seconds) per month.

Save more with Compute Savings Plans.
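
The request + compute billing model can be sketched as a small calculator. The $0.20 per 1M requests and per-GB-second rates mirror commonly published us-east-1 pricing, but treat them as illustrative assumptions:

```python
# Sketch of Lambda's pay-per-use billing: requests + GB-seconds of compute.
# Rates are illustrative, not authoritative current pricing.
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000        # == 3.2M seconds at 128 MB of memory
REQUEST_RATE = 0.20              # assumed $ per 1M requests
GB_SECOND_RATE = 0.0000166667    # assumed $ per GB-second

def lambda_monthly_cost(requests: int, gb_seconds: float) -> float:
    req_cost = max(requests - FREE_REQUESTS, 0) / 1_000_000 * REQUEST_RATE
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_RATE
    return round(req_cost + compute_cost, 2)

# 5M requests, each running 100 ms with 1 GB of memory:
print(lambda_monthly_cost(5_000_000, 5_000_000 * 0.1))
```

Usage entirely inside the free tier (1M requests, 400,000 GB-seconds) costs nothing; only the overage is billed.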

Amazon EC2

Charged for compute time, EBS storage, load balancing.

Save via:

Spot Instances (up to 90% discount)


Savings Plans

Reserved Instances
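
A quick sketch of how those discounts compare over a month. The $0.10/hour on-demand rate is an assumed figure, and actual discounts vary by instance type, term, and Region:

```python
# Comparing EC2 purchase options by effective monthly cost.
# The on-demand rate and discount percentages are illustrative only.
HOURS_PER_MONTH = 730
ON_DEMAND_RATE = 0.10  # assumed $/hour

def monthly_cost(discount_pct: float = 0.0) -> float:
    rate = ON_DEMAND_RATE * (1 - discount_pct / 100)
    return round(rate * HOURS_PER_MONTH, 2)

for option, discount in [("On-Demand", 0),
                         ("Savings Plan (up to 72%)", 72),
                         ("Spot (up to 90%)", 90)]:
    print(f"{option}: ${monthly_cost(discount)}/month")
```

The trade-off: Spot is cheapest but interruptible, while Savings Plans and Reserved Instances require a 1- or 3-year commitment.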

Amazon S3

Costs include:

Storage used

Number of requests

Data transfer

Management and replication

Billing Tools
Billing & Cost Management Dashboard

Pay bill, monitor usage, compare month-to-month, forecast costs.

Consolidated Billing

One bill for all accounts in an AWS organization (default maximum: 4 accounts; more on request)

AWS Budgets

Set cost/usage limits with custom alerts (e.g., EC2 usage budget = $200 → alert at $100)
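
The alert logic behind that example can be sketched as a threshold check (the $200 budget and 50% threshold match the example above; the function itself is a hypothetical illustration, not the AWS Budgets API):

```python
# Sketch of the AWS Budgets idea: alert when actual spend crosses a
# configured percentage of the budget.

def budget_alerts(budget: float, spend: float, thresholds=(50, 100)):
    """Return the threshold percentages the current spend has crossed."""
    pct_used = spend / budget * 100
    return [t for t in thresholds if pct_used >= t]

print(budget_alerts(200, 100))   # half the budget used -> 50% alert fires
print(budget_alerts(200, 210))   # over budget -> both alerts fire
```

In the real service, the thresholds and notification targets (e.g., email or SNS) are configured per budget in the console or API.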

AWS Cost Explorer

Visualize and analyze cost/usage trends over time.

AWS Support Plans


1. Basic – Free, limited support.

2. Developer – Low-cost, technical support for early-stage development.

3. Business – 24/7 phone/email/chat support, full AWS Trusted Advisor checks, production workload support.

4. Enterprise – Mission-critical support, TAM (Technical Account Manager), fastest response times.

AWS Marketplace
What is it?

Digital catalog of 3rd-party software products.

Use it to:

Find, try, and buy software that runs on AWS.

Categories include:

Infrastructure, Business Apps, DevOps tools, Data Products, and more.

7) AWS Cloud Adoption Framework (AWS CAF)


Purpose:
Helps organizations plan, structure, and manage their cloud adoption journey effectively.

CAF Perspectives (6 Pillars)


Each perspective outlines roles, responsibilities, and key capabilities needed for cloud adoption success.

1. Business Perspective
Focus: Align cloud adoption with business goals

Key Stakeholders: Business managers, finance, strategy teams

Goals:

Identify business outcomes


Measure success

Maximize return on investment (ROI)

2. People Perspective
Focus: Prepare the workforce for cloud adoption

Key Stakeholders: HR, training, change management

Goals:

Update skills and roles

Promote cloud fluency

Support organizational change

3. Governance Perspective
Focus: Manage cloud usage and compliance

Key Stakeholders: Risk management, audit, compliance teams

Goals:

Define policies and controls

Align IT investments with business strategies

Manage and monitor cloud resources effectively

4. Platform Perspective
Focus: Design and build the cloud infrastructure

Key Stakeholders: IT architects, infrastructure teams

Goals:

Define architecture blueprints

Build scalable and secure cloud environments

Manage provisioning and automation

5. Security Perspective
Focus: Protect data, systems, and assets

Key Stakeholders: Security teams, IT compliance

Goals:

Implement security controls

Manage identity and access

Ensure data privacy and regulatory compliance

6. Operations Perspective
Focus: Manage and monitor cloud services

Key Stakeholders: IT operations, support teams

Goals:

Ensure high availability and performance

Implement incident and problem management

Automate operations and optimize cost

How AWS CAF Helps


Identifies gaps in skills/processes

Guides organizations through best practices

Supports a structured approach to cloud migration and modernization


📢Conclusion & Disclaimer (Please Read Before Throwing Anything at Me):
All the notes provided here are lovingly stitched together from Previous CA & FA questions and the legendary Stolen PPTs of
doom and a brain running on 2% battery and 0% hope to live.
Yes, this is an arrear exam & I understand that, but if by some miracle none of these topics show up in the exam… I am
not your villain.
The real mastermind behind the chaos is our professor (one already left, we all know who the other Siamese twin is), so please
direct all stones, slippers, water bottles, sharp instruments and emotional breakdowns accordingly.
As for me, I’ll be in the nearest boys restroom, staring into the void and waiting for the flush to take me with it.

Good luck, fellow warriors. May the AI not mark your paper.
