UNIT – 1
1. Basic Concepts of Cloud Computing and AWS
• Definition of Cloud Computing:
o Delivery of computing services (servers, storage, databases, networking, etc.)
over the internet.
o Key characteristics:
▪ On-demand resources
▪ Scalability
▪ Pay-as-you-go model
▪ High availability and fault tolerance
• Benefits of Cloud Computing:
o Cost savings: No need for physical infrastructure.
o Scalability: Adjust resources based on demand.
o Flexibility: Access data and applications from anywhere.
o Security: Data backups and compliance with global standards.
• Amazon Web Services (AWS):
o Leading cloud service provider.
o Offers over 200 services in categories like compute, storage, networking,
databases, and machine learning.
o Key features: Reliability, flexibility, and scalability.
2. Create and manage an AWS Account
• Steps to Create an AWS Account:
1. Visit the AWS website (aws.amazon.com) and choose Create an AWS Account.
2. Sign up using your email address.
3. Enter billing details (the AWS Free Tier is available for new accounts).
4. Set up multi-factor authentication for security.
5. Access the AWS Management Console.
• Managing AWS Resources:
o Use AWS Management Console, AWS CLI, or SDKs.
o Monitor usage and billing via the Billing Dashboard.
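For example, a minimal CLI sketch of connecting to an account from a terminal (this assumes the AWS CLI is installed and an access key has been created for an IAM user):
# Configure credentials and a default region (prompts for access key, secret key, region)
aws configure
# Verify which account and identity the CLI is using
aws sts get-caller-identity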
3. AWS Regions, Availability Zones, and Global Infrastructure
• AWS Regions:
o Geographical areas hosting AWS data centres.
o Each region operates independently to ensure fault tolerance.
• Availability Zones (AZs):
o Isolated locations within a region.
o Enable high availability by hosting applications across multiple AZs.
• Global Infrastructure:
o 30+ Regions and 100+ Availability Zones worldwide (the counts grow continually).
o 400+ edge locations and regional edge caches for content delivery.
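The Regions and Availability Zones visible to an account can be listed from the CLI; a small sketch (us-east-1 is only an example Region):
# List all Regions enabled for the account
aws ec2 describe-regions --query "Regions[].RegionName" --output table
# List the Availability Zones inside one Region
aws ec2 describe-availability-zones --region us-east-1 --output table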
4. Various Types of AWS Services
• Compute:
o Amazon EC2: Virtual servers for running applications.
o AWS Lambda: Serverless compute service.
• Storage:
o Amazon S3: Object storage for any type of data.
o Amazon EBS: Block storage for EC2 instances.
• Databases:
o Amazon RDS: Managed relational databases.
o Amazon DynamoDB: NoSQL database.
• Networking:
o Amazon VPC: Virtual private cloud for networking.
o Amazon CloudFront: Content delivery network (CDN).
• Security:
o IAM: Manage access and permissions for AWS resources.
• Monitoring:
o Amazon CloudWatch: Monitor performance and usage.
Amazon Elastic Compute Cloud (EC2)
Introduction to Amazon EC2
• Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable
compute capacity in the cloud.
• It eliminates the need to invest in hardware upfront, allowing businesses to scale
resources up or down based on requirements.
Key Features of Amazon EC2
1. Scalability:
o Automatically adjust computing resources to meet demand using Auto Scaling
and Elastic Load Balancing.
2. Cost-Effective:
o Pay-as-you-go pricing model: Only pay for what you use.
3. Wide Selection of Instance Types:
o Optimized for various workloads (compute-intensive, memory-intensive,
storage-intensive).
4. Security:
o Integrated with AWS Identity and Access Management (IAM) for access
control.
o Allows you to create Virtual Private Clouds (VPCs) for secure environments.
5. Global Availability:
o Instances can be launched in multiple AWS regions and availability zones for
high availability and disaster recovery.
6. Elasticity:
o Quickly launch, stop, or terminate instances as needed.
7. Customizable AMIs:
o Amazon Machine Images (AMIs) allow users to launch instances with pre-
configured OS and applications.
8. Integration:
o Seamless integration with other AWS services like S3, RDS, and CloudWatch.
Types of EC2 Instances
1. General Purpose:
o Balanced compute, memory, and networking.
o Use Case: Web servers, development environments.
o Examples: t2, t3, m5 instances.
2. Compute Optimized:
o High-performance processors for compute-intensive tasks.
o Use Case: High-performance computing, gaming.
o Examples: c5, c6g instances.
3. Memory Optimized:
o For memory-intensive workloads.
o Use Case: In-memory databases, data analytics.
o Examples: r5, x1 instances.
4. Storage Optimized:
o Designed for workloads needing high, sequential read/write access to large datasets on local storage.
o Use Case: Big data processing, data warehousing.
o Examples: i3, d2 instances.
5. Accelerated Computing:
o Use GPUs for machine learning and graphics processing.
o Use Case: AI, machine learning, 3D rendering.
o Examples: p3, g4 instances.
Launching an EC2 Instance
1. Step 1: Log in to AWS Management Console
o Navigate to the EC2 dashboard.
2. Step 2: Choose an Amazon Machine Image (AMI)
o Select an AMI with the desired operating system (e.g., Linux, Windows).
3. Step 3: Select an Instance Type
o Choose an instance type based on workload requirements.
4. Step 4: Configure Instance Details
o Set options like the number of instances, network settings (VPC, subnet), and
auto-scaling.
5. Step 5: Add Storage
o Specify the amount and type of storage (e.g., EBS volume).
6. Step 6: Configure Security Groups
o Define inbound and outbound rules for the instance.
7. Step 7: Review and Launch
o Review all settings and click "Launch."
o Choose a key pair for SSH access.
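The same launch can also be scripted with the AWS CLI. The sketch below is illustrative only; the AMI ID, key-pair name, and security group ID are placeholders that must be replaced with values from your own account:
# Launch one t2.micro instance from a placeholder AMI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1
# Check the state of launched instances
aws ec2 describe-instances --query "Reservations[].Instances[].State.Name"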
EC2 Pricing Models
1. On-Demand:
o Pay for instances by the hour or second without commitments.
o Suitable for short-term workloads or unpredictable demand.
2. Reserved Instances:
o Commit to one- or three-year terms for discounted pricing.
o Best for steady-state or predictable usage.
3. Spot Instances:
o Purchase unused EC2 capacity at discounted rates.
o Ideal for batch processing and workloads that can handle interruptions.
4. Savings Plans:
o Flexible pricing plans offering lower costs for consistent usage.
Key Concepts in EC2
1. Elastic Load Balancer (ELB):
o Automatically distributes incoming traffic across multiple instances.
2. Auto Scaling:
o Automatically adjusts the number of EC2 instances based on demand.
3. EBS (Elastic Block Store):
o Persistent block storage for EC2 instances.
4. Key Pairs:
o Public/private key pairs used to authenticate secure connections (e.g., SSH) to instances.
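As a small illustration, a key pair can be created from the CLI and used for SSH; the key name and instance address below are placeholders:
# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name my-key-pair \
    --query "KeyMaterial" --output text > my-key-pair.pem
chmod 400 my-key-pair.pem
# Connect to a running Linux instance (replace the public DNS name)
ssh -i my-key-pair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com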
Amazon Simple Storage Service (S3) with Demo
Introduction to Amazon S3
• Amazon S3 (Simple Storage Service):
o A scalable, secure, and highly durable object storage service offered by AWS.
o Used for storing and retrieving any amount of data, anytime and anywhere.
• Key Characteristics:
o Object Storage: Data is stored as objects (files) along with metadata.
o Scalable: Automatically scales to handle any amount of data.
o Durable: 99.999999999% (11 9’s) durability.
Key Features of Amazon S3
1. Buckets:
o Containers for storing objects (files).
o Each bucket name must be globally unique across all AWS accounts.
2. Data Management:
o Lifecycle policies for automated archival or deletion.
o Versioning to retain multiple versions of an object.
3. Data Security:
o Encrypt data using server-side or client-side encryption.
o Control access with IAM roles, bucket policies, and ACLs.
4. Data Access:
o Access objects via HTTP/HTTPS.
o Integrates with AWS SDKs and CLI.
5. Storage Classes:
o S3 Standard: Frequent access.
o S3 Intelligent-Tiering: Automatic cost optimization based on access patterns.
o S3 Glacier: Long-term archival storage.
Use Cases of Amazon S3
1. Backup and disaster recovery.
2. Static website hosting.
3. Data lakes for big data analytics.
4. Media storage and content distribution.
Steps to Use Amazon S3 (Demo)
Step 1: Create an S3 Bucket
1. Open the AWS Management Console and go to the S3 service.
2. Click Create bucket.
3. Provide:
o Bucket name (e.g., my-demo-bucket).
o Select a region (e.g., us-east-1).
4. Configure options like versioning and encryption (optional).
5. Click Create bucket.
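The same bucket can be created from the CLI; a minimal sketch (the bucket name must be globally unique, and us-east-1 is only an example Region):
aws s3 mb s3://my-demo-bucket --region us-east-1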
Step 2: Upload Files to the Bucket
1. Select the created bucket and click Upload.
2. Add files or folders from your local computer.
3. Set permissions (public/private) and click Upload.
Step 3: Manage Object Access
1. Use bucket policies or IAM roles to control access.
2. Example policy for public access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-demo-bucket/*"
    }
  ]
}
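If you prefer the CLI, the same policy can be applied after saving it to a file (policy.json is an assumed filename); note that the bucket's Block Public Access settings must also allow public policies for this to take effect:
aws s3api put-bucket-policy --bucket my-demo-bucket --policy file://policy.json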
Step 4: Enable S3 Versioning
1. Navigate to the bucket settings.
2. Enable Versioning to retain multiple versions of objects.
Step 5: Configure Lifecycle Rules
1. Go to Management > Lifecycle Rules.
2. Create a rule to transition objects to another storage class or delete them after a
specified time.
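A hedged CLI sketch of one such rule, moving noncurrent object versions to Glacier after 30 days (the rule ID, filename, and 30-day value are illustrative):
# Save the rule to a file, then apply it to the bucket
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionTransitions": [
        {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-demo-bucket --lifecycle-configuration file://lifecycle.json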
Step 6: Access Files
1. Open an object and copy its URL for access.
2. Test access in a browser or via the AWS CLI:
aws s3 cp s3://my-demo-bucket/myfile.txt ./localfile.txt
Demo Walkthrough
1. Scenario: Upload and manage a file in S3.
o Create a bucket (demo-bucket).
o Upload a file (example.txt) to the bucket.
o Configure public access to the file.
o Enable versioning and upload an updated version of the file.
o Set a lifecycle rule to transition old versions to Glacier.
2. Testing Access:
o Open the public URL of the file in a browser.
o Use the AWS CLI to download the file:
aws s3 cp s3://demo-bucket/example.txt ./example.txt
3. Monitoring:
o Use CloudWatch to monitor S3 metrics like bucket size and request counts.
Important Commands for AWS CLI
1. List all buckets:
aws s3 ls
2. Upload a file:
aws s3 cp ./example.txt s3://demo-bucket/
3. Download a file:
aws s3 cp s3://demo-bucket/example.txt ./example.txt
4. Enable versioning:
aws s3api put-bucket-versioning --bucket demo-bucket --versioning-configuration Status=Enabled
Conclusion
• Why Use S3?
o Flexible and cost-effective storage.
o Secure and easy to integrate with other AWS services.
• Next Steps:
o Explore advanced features like Cross-Region Replication (CRR) and S3 Event
Notifications.
AWS Identity and Access Management (IAM) with Demo
Introduction to IAM
• What is IAM?
o AWS Identity and Access Management (IAM) is a service that enables you to
securely manage access to AWS resources.
o It allows fine-grained control over who can access resources and what actions
they can perform.
• Purpose of IAM:
o Enhance security by controlling user access.
o Manage permissions for multiple users, groups, and roles.
o Ensure least-privilege access.
Key Features of IAM
1. User Management:
o Create and manage IAM users to access AWS services.
o Each user has their own credentials (username and password, access keys).
2. Groups:
o Organize users into groups to manage permissions collectively.
o Example: AdminGroup, ReadOnlyGroup.
3. Roles:
o An IAM identity with permissions that trusted entities (AWS services, applications, or federated users) can assume temporarily, without long-term credentials.
o Example: An EC2 instance assuming a role to access S3 buckets.
4. Policies:
o JSON documents that define permissions.
o Two types:
▪ AWS Managed Policies: Predefined policies provided by AWS.
▪ Customer Managed Policies: Custom policies created by users.
5. Multi-Factor Authentication (MFA):
o Add an extra layer of security by requiring an additional authentication factor.
6. Federation:
o Integrate with corporate identity systems (e.g., Active Directory) for single
sign-on (SSO).
7. Access Keys:
o Used for programmatic access to AWS via CLI or SDKs.
IAM Best Practices
1. Use Groups to Assign Permissions:
o Avoid assigning permissions directly to users.
2. Enable MFA for All Users:
o Protect accounts with an additional layer of security.
3. Follow Least Privilege Principle:
o Grant only the permissions needed to perform specific tasks.
4. Regularly Review Permissions:
o Audit and update policies as needed.
5. Use IAM Roles for AWS Services:
o Avoid using long-term access keys.
Demo: IAM Setup and Usage
Scenario: Create an IAM user, assign permissions, and test access.
Step 1: Create an IAM User
1. Go to the AWS Management Console > IAM > Users.
2. Click Add user.
3. Enter a username (e.g., DemoUser).
4. Select the access type:
o Programmatic access: For CLI/SDK access.
o AWS Management Console access: For browser-based login.
5. Click Next.
Step 2: Attach a Policy to the User
1. Choose a policy to assign:
o AdministratorAccess: Full access to all AWS services.
o AmazonS3ReadOnlyAccess: Read-only access to S3.
2. Click Next and Create User.
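Steps 1 and 2 can also be done with the CLI; a minimal sketch reusing the example user and the read-only S3 policy above:
# Create the user and attach a managed policy
aws iam create-user --user-name DemoUser
aws iam attach-user-policy --user-name DemoUser \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Create an access key for programmatic (CLI/SDK) access
aws iam create-access-key --user-name DemoUser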
Step 3: Test Access
1. Log in using the IAM user's credentials.
2. Verify the user can access only the permitted services.
Step 4: Create an IAM Group
1. Navigate to IAM > Groups > Create Group.
2. Name the group (e.g., AdminGroup).
3. Attach a policy (e.g., AdministratorAccess).
4. Add users to the group.
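A CLI equivalent of the group setup, reusing the example names above:
aws iam create-group --group-name AdminGroup
aws iam attach-group-policy --group-name AdminGroup \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam add-user-to-group --group-name AdminGroup --user-name DemoUser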
Step 5: Create and Assign a Role
1. Go to IAM > Roles > Create Role.
2. Select a service that will assume this role (e.g., EC2).
3. Attach a policy (e.g., AmazonS3FullAccess).
4. Launch an EC2 instance and assign the role during setup.
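A hedged CLI sketch of the same role setup; the role and instance profile names are illustrative, and trust-policy.json is an assumed filename containing the EC2 trust relationship shown in the heredoc:
# Trust policy that lets EC2 assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name EC2S3AccessRole \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name EC2S3AccessRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# EC2 uses the role through an instance profile
aws iam create-instance-profile --instance-profile-name EC2S3AccessProfile
aws iam add-role-to-instance-profile --instance-profile-name EC2S3AccessProfile \
    --role-name EC2S3AccessRole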
Step 6: Implement MFA for a User
1. Navigate to IAM > Users > Select a user.
2. Click Security credentials > Assign MFA device.
3. Follow instructions to set up a virtual MFA device (e.g., Google Authenticator).
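A virtual MFA device can also be set up from the CLI; in this sketch the device name, account ID in the serial-number ARN, and the two one-time codes are placeholders:
# Create a virtual MFA device and save the QR-code seed
aws iam create-virtual-mfa-device --virtual-mfa-device-name DemoUser \
    --outfile qr-code.png --bootstrap-method QRCodePNG
# Associate it with the user using two consecutive codes from the authenticator app
aws iam enable-mfa-device --user-name DemoUser \
    --serial-number arn:aws:iam::123456789012:mfa/DemoUser \
    --authentication-code1 123456 --authentication-code2 789012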
IAM Policy Example
Policy for S3 Read-Only Access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-demo-bucket/*"
    }
  ]
}
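To turn this JSON into a reusable customer managed policy via the CLI (the policy name and filename are illustrative):
aws iam create-policy --policy-name S3DemoReadOnly \
    --policy-document file://s3-read-only.json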
IAM Monitoring and Logging
1. AWS CloudTrail:
o Records API calls made by IAM users, roles, and AWS services in your account.
o Useful for auditing and troubleshooting.
2. IAM Access Analyzer:
o Identify resources with public or cross-account access.
o Ensure compliance with security best practices.
Use Cases of IAM
1. Managing user access in an organization with multiple teams.
2. Assigning specific permissions to EC2 instances or Lambda functions.
3. Enforcing MFA for critical accounts.
4. Integrating with corporate identity systems for single sign-on.
Summary
• IAM provides secure and centralized management of AWS access.
• Key components include users, groups, roles, and policies.
• Follow best practices like least privilege, MFA, and regular audits to ensure security.
Monitor and Optimize AWS Performance Using Amazon CloudWatch
Introduction to Amazon CloudWatch
• What is CloudWatch?
o A monitoring and management service offered by AWS.
o Provides metrics, logs, and alarms to track the health and performance of
AWS resources.
• Purpose:
o Monitor AWS services (e.g., EC2, S3, Lambda).
o Set alarms for specific conditions (e.g., CPU usage > 80%).
o Gain insights into application performance and resource utilization.
Key Features of CloudWatch
1. Metrics:
o Collects metrics for AWS resources like CPU utilization, disk I/O, and
network activity.
o Custom metrics can also be defined for applications.
2. Alarms:
o Notify users when a metric exceeds or falls below a defined threshold.
o Trigger actions like scaling or sending notifications via Amazon SNS.
3. Logs:
o Aggregates logs from various AWS services like EC2, Lambda, and RDS.
o Enables real-time analysis and troubleshooting.
4. Dashboards:
o Visualize metrics and alarms in a centralized dashboard.
o Create custom views for better insights.
5. Events:
o Automate responses to state changes in AWS resources.
o Example: Triggering a Lambda function when an EC2 instance stops.
6. Insights:
o CloudWatch Logs Insights: Analyze logs with a purpose-built query language.
o CloudWatch Application Insights: Detect anomalies in application behavior.
Benefits of CloudWatch
• Centralized monitoring of AWS resources.
• Proactive issue detection with alarms and notifications.
• Optimization of resource utilization and cost efficiency.
• Seamless integration with AWS Auto Scaling and Lambda.
Demo: Monitoring and Optimizing AWS Resources
Scenario: Monitor an EC2 instance and set up alarms for high CPU utilization.
Step 1: Enable Monitoring for an EC2 Instance
1. Launch an EC2 instance from the AWS Management Console.
2. Go to the Monitoring tab in the EC2 dashboard.
3. Verify that basic monitoring is enabled (5-minute intervals).
4. For detailed monitoring (1-minute intervals), enable Detailed Monitoring in the
instance settings.
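Detailed monitoring can also be toggled from the CLI; the instance ID below is a placeholder:
# Turn on detailed (1-minute) monitoring for an instance
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0
# Revert to basic (5-minute) monitoring
aws ec2 unmonitor-instances --instance-ids i-0123456789abcdef0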
Step 2: Create a CloudWatch Alarm
1. Open the CloudWatch console.
2. Navigate to Alarms > Create Alarm.
3. Select a metric:
o Go to Browse > EC2 > Per-Instance Metrics.
o Choose CPUUtilization for your instance.
4. Define the threshold:
o Example: Trigger alarm if CPU utilization exceeds 70%.
5. Configure actions:
o Send a notification via Amazon SNS.
o Optionally, trigger an Auto Scaling action.
6. Name the alarm and click Create Alarm.
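A CLI sketch of the same alarm; the alarm name, instance ID, and SNS topic ARN are placeholders:
aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu-demo \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 300 \
    --threshold 70 --comparison-operator GreaterThanThreshold \
    --evaluation-periods 2 \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts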
Step 3: View Metrics in Dashboards
1. Open the CloudWatch console.
2. Navigate to Dashboards > Create Dashboard.
3. Add widgets:
o Choose Line Graph or Number for metrics like CPU, memory, or disk usage.
4. Save the dashboard for ongoing monitoring.
Step 4: Analyze Logs with CloudWatch Logs
1. Ensure that the EC2 instance has an IAM role with the
CloudWatchAgentServerPolicy.
2. Install the CloudWatch agent on the EC2 instance:
# Install the agent (Amazon Linux)
sudo yum install -y amazon-cloudwatch-agent
# Run the interactive configuration wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# Start the agent
sudo systemctl start amazon-cloudwatch-agent
3. Stream application logs to CloudWatch Logs.
4. Analyze logs using CloudWatch Logs Insights:
o Example query to filter errors:
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
Step 5: Use CloudWatch Events
1. Navigate to Events > Rules (CloudWatch Events rules are now managed through Amazon EventBridge).
2. Create a rule to trigger on specific events (e.g., EC2 instance state change).
3. Define a target, such as a Lambda function or SNS notification.
4. Save and test the rule.
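A hedged CLI sketch of such a rule; the rule name and SNS topic ARN are illustrative:
# Rule that fires when any EC2 instance enters the "stopped" state
aws events put-rule --name ec2-stopped-rule \
    --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["stopped"]}}'
# Send matching events to an SNS topic
aws events put-targets --rule ec2-stopped-rule \
    --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:my-alerts"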
Step 6: Optimize Resource Utilization
• Review metrics to identify underutilized resources (e.g., low CPU usage).
• Use Auto Scaling to adjust capacity dynamically.
• Analyze billing reports to optimize costs.
CloudWatch Pricing
• Metrics: Charged per metric and per API request.
• Logs: Charged based on log data ingestion and storage.
• Alarms: Charged per alarm created.
• Free Tier:
o Includes 10 custom metrics, 3 dashboards, and 5 GB of log ingestion.
Use Cases of CloudWatch
1. Monitor EC2 instances for performance issues.
2. Analyze Lambda execution metrics and troubleshoot errors.
3. Optimize S3 storage costs by monitoring usage patterns.
4. Automate scaling actions for dynamic workloads.
Summary
• CloudWatch is a powerful tool for monitoring and optimizing AWS resources.
• Key features include metrics, alarms, logs, dashboards, and events.
• Proactive monitoring helps maintain application performance and reduce costs.