
Summer Internship Report on

AWS CLOUD VIRTUAL


INTERNSHIP

Submitted to
Jawaharlal Nehru Technological University Kakinada

Submitted in accordance with the requirements for the degree of


BACHELOR OF TECHNOLOGY

Department of

INFORMATION TECHNOLOGY

Submitted by:

VEMURI SRI VARDHAN

Reg.No:22NG1A1263

USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY


Student’s Declaration

I, Vemuri Sri Vardhan, a student of the SUMMER INTERNSHIP Program, Reg. No. 22NG1A1263, Department of INFORMATION TECHNOLOGY, Usha Rama College of Engineering and Technology, do hereby declare that I have completed the mandatory internship from MAY to JULY 2023 in AMAZON WEB SERVICES under faculty supervision, Department of INFORMATION TECHNOLOGY, USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY.

(Signature and Date)


OFFICIAL CERTIFICATION

This is to certify that ANUMANEDI NAGA SAI KIRAN with Reg. No. 20NG1A12105 has completed his internship in AWS on the CLOUD VIRTUAL INTERNSHIP under my supervision, in partial fulfilment of the requirements for the degree of B.Tech in Information Technology at Usha Rama College of Engineering and Technology.

This is accepted for evaluation

Head of the department

(signature with date and seal)


AWS CLOUD VIRTUAL INTERNSHIP

Certificate from Intern Organization

NAGA SAI KIRAN ANUMANEDI


Usha Rama College of Engineering and
Technology


TABLE OF CONTENTS
Chapter-1 Orientation and Introduction (Week 1)
1.1 Introduction to the AWS Cloud platform
1.2 Setting up AWS accounts and accessing the AWS Management Console
1.3 Overview of basic AWS services and terminology
Chapter-2 AWS Fundamentals (Week 2)
2.1 Deep dive into AWS core services such as EC2, S3, and RDS
2.2 Hands-on experience with launching virtual machines (EC2 instances) and storing data in S3 buckets
2.3 Introduction to AWS Identity and Access Management (IAM)
Chapter-3 Networking and Security (Week 3)
3.1 Understanding AWS Virtual Private Cloud (VPC)
3.2 Configuring VPC, subnets, and security groups
3.3 Implementing network security best practices
3.4 Learning about AWS security services like AWS WAF and AWS Shield
Chapter-4 Databases and Storage (Week 4)
4.1 Exploring AWS database services like RDS, DynamoDB, and Aurora
4.2 Hands-on experience with creating and managing databases
4.3 Understanding data backup and recovery strategies
4.4 Introduction to Amazon EBS and storage options
Chapter-5 Serverless Computing (Week 5)
5.1 Introduction to serverless architecture using AWS Lambda
5.2 Building and deploying serverless functions
5.3 Exploring Amazon API Gateway for creating RESTful APIs
5.4 Integration of Lambda with other AWS services
Chapter-6 IAM and Security (Week 6)
6.1 Work on Identity and Access Management (IAM)
6.2 Configuring roles, policies, and permissions
6.3 The importance of security in AWS
Chapter-7 Big Data and Analytics (Week 7)
7.1 Introduction to AWS data analytics services like Amazon Redshift and Athena
7.2 Working with data lakes and AWS Glue for ETL
7.3 Building data pipelines and performing analysis
7.4 Visualization with Amazon QuickSight
Chapter-8 Application Monitoring and Management (Week 8)
8.1 Implementing application monitoring with Amazon CloudWatch
8.2 Learning about AWS CloudTrail for auditing and compliance
8.3 Troubleshooting and optimizing AWS resources
8.4 Cost management and billing in AWS
Chapter-9 Security Best Practices (Week 9)
9.1 Advanced IAM concepts and roles
9.2 Implementing encryption and data protection
Chapter-10 Conclusion (Week 10)


1. Orientation and Introduction WEEK 1

1.1 Introduction to AWS Cloud platform


In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as
web services—now commonly known as cloud computing. One of the key benefits of cloud
computing is the opportunity to replace upfront capital infrastructure expenses with low variable
costs that scale with your business. With the cloud, businesses no longer need to plan for and
procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly
spin up hundreds or thousands of servers in minutes and deliver results faster. Today, AWS
provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers
hundreds of thousands of businesses in 190 countries around the world.
Amazon Web Services (AWS) is a comprehensive and widely-used cloud computing platform
provided by Amazon. It offers a vast array of cloud services and solutions that enable organizations
to build, deploy, and manage applications and infrastructure in a flexible, scalable, and cost-
effective manner. Here's an introduction to the key components and concepts of the AWS cloud
platform:
1.1.1 Global Reach: AWS operates data centers grouped into Availability Zones across multiple
geographic Regions around the world. This global presence allows users to deploy resources closer
to their customers, improving latency and redundancy.
1.1.2 Core Services:
 Compute: AWS provides various compute services, including Amazon EC2 (Elastic
Compute Cloud) for virtual servers and AWS Lambda for serverless computing.
 Storage: Options like Amazon S3 (Simple Storage Service) for object storage, Amazon
EBS (Elastic Block Store) for block storage, and Amazon Glacier for long-term archival.
 Database: AWS offers managed database services like Amazon RDS (Relational Database
Service) and Amazon DynamoDB for NoSQL databases.
 Networking: AWS provides Virtual Private Cloud (VPC) for network isolation, Amazon
Route 53 for domain name services, and more.
 Security and Identity: AWS Identity and Access Management (IAM) for access control,
AWS Key Management Service (KMS) for encryption, and AWS Certificate Manager for
SSL/TLS certificates.
1.1.3 Scalability: AWS resources can be easily scaled up or down to meet demand. This elasticity
allows organizations to pay only for the resources they use.


1.1.4 Managed Services: AWS offers many managed services that handle operational tasks like
patching, provisioning, and scaling, reducing the administrative burden on users.
1.1.5 DevOps and Automation: AWS supports automation and DevOps practices through
services like AWS Elastic Beanstalk, AWS CodePipeline, and AWS CodeDeploy.
1.1.6 Analytics and Machine Learning: AWS provides services like Amazon Redshift for data
warehousing, Amazon EMR for big data processing, and Amazon SageMaker for machine
learning.
1.1.7 IoT (Internet of Things): AWS IoT Core and related services enable the management of
IoT devices, data, and applications.
1.1.8 Serverless Computing: AWS Lambda allows you to run code without provisioning or
managing servers, making it easy to build highly scalable and cost-efficient applications.
1.1.9 Containers: AWS offers services like Amazon ECS (Elastic Container Service) and Amazon
EKS (Elastic Kubernetes Service) for container management and orchestration.
1.1.10 Content Delivery and Edge Computing: Amazon CloudFront is a content delivery
network (CDN) for fast and secure content delivery, while AWS Wavelength brings AWS services
to the edge for low-latency applications.
1.1.11 Cost Management: AWS provides tools like AWS Cost Explorer and AWS Budgets to
help users monitor and control their cloud costs.
1.1.12 Security and Compliance: AWS takes security seriously, offering features like VPC,
security groups, and identity and access management. AWS also complies with various industry
standards and certifications.
1.1.13 Support and Ecosystem: AWS has a vast ecosystem of partners, a supportive community,
and various support plans to help users with their AWS deployments.
1.1.14 Hybrid and Multi-Cloud: AWS supports hybrid and multi-cloud architectures, allowing
organizations to integrate their on-premises data centers with the cloud or use multiple cloud
providers.
1.1.15 AWS Marketplace: A marketplace for third-party software and services that can be easily
integrated into your AWS environment.

1.2 Setting up AWS accounts and accessing the AWS Management Console
Setting up an AWS account and accessing the AWS Management Console is the first step in using
Amazon Web Services. Here's a step-by-step guide on how to do this:
Setting Up an AWS Account:
Step 1: Open the AWS Registration Page


Go to the AWS registration page: https://aws.amazon.com/.


Step 2: Click "Create an AWS Account"

Click the "Create an AWS Account" button to start the account creation process.
Step 3: Provide Account Information
Fill in the required information, including your email address, password, and account name. Ensure
that your password meets AWS security requirements.
Step 4: Contact Information
Enter your contact information, including your name, address, and phone number.
Step 5: Payment Information
Enter your payment information. AWS offers a Free Tier with limited resources for the first 12
months, but you'll need a valid payment method to create an account.
Step 6: Choose a Support Plan
AWS offers different support plans, including a free basic plan. Choose the one that best suits your
needs.
Step 7: Identity Verification
AWS may require identity verification to prevent misuse of its services. You can
choose between receiving a phone call or using a text message for verification.
Step 8: Accept the AWS Customer Agreement
Review the AWS Customer Agreement and the AWS Service Terms, and then click "Create
Account and Continue" if you agree with the terms.


Step 9: Complete Registration


Follow the on-screen instructions to complete the registration process. You may need to enter a
security code sent to your phone or email.
Step 10: Set Up Multi-Factor Authentication (MFA) (Optional but Recommended)
After creating your account, it's highly recommended to enable Multi-Factor Authentication (MFA)
for added security. You can set this up in the AWS Management Console.

Accessing the AWS Management Console:


Step 1: Sign In to the AWS Management Console
Go to the AWS Management Console login page: https://aws.amazon.com/console/.
Step 2: Enter Your AWS Account Credentials
Enter the email address and password you used to create your AWS account.
Step 3: MFA Authentication (if enabled)
If you've enabled Multi-Factor Authentication (MFA), you'll be prompted to enter the
authentication code from your MFA device.
Step 4: Dashboard and Services
After successful authentication, you'll be redirected to the AWS Management Console dashboard.
From here, you can access various AWS services and resources.
Step 5: Select a Region
AWS operates in multiple regions around the world. Choose a region where you want to create and
manage your AWS resources. The region can affect factors like latency and data residency.
Step 6: Explore and Use AWS Services
The AWS Management Console provides an intuitive interface to explore and use AWS services.
You can use the search bar or browse the services by category to find
the specific service you want to use.
Step 7: Billing and Cost Management
It's important to monitor your AWS usage and billing. You can access the AWS Billing and Cost
Management Dashboard to view your current charges, set up billing alerts, and manage your
payment methods.


AWS Management Console

1.3 Overview of basic AWS services and terminology


Amazon Web Services (AWS) offers a wide range of services and terminology that form the
foundation of cloud computing. Here's an overview of some of the basic AWS services and key
terminology:
1. Amazon EC2 (Elastic Compute Cloud):
EC2 provides resizable virtual machines (instances) in the cloud. It allows you to run applications
and workloads on virtual servers, providing scalability and flexibility.
2. Amazon S3 (Simple Storage Service):
S3 is a highly scalable object storage service for storing and retrieving data. It's often used for
backup, data archiving, static website hosting, and data sharing.
3. Amazon RDS (Relational Database Service):
RDS is a managed relational database service that supports multiple database engines, including
MySQL, PostgreSQL, and SQL Server. It simplifies database administration tasks like patching,
backups, and scaling.
4. Amazon VPC (Virtual Private Cloud):
VPC allows you to create isolated networks within AWS. You can control network settings,
subnets, and security groups to build secure and segmented environments for your resources.
5. Amazon IAM (Identity and Access Management):
IAM is AWS's access control service. It enables you to manage users, groups, and roles, and
control their permissions to AWS resources.
6. Amazon Lambda:
Lambda is a serverless compute service that lets you run code in response to events without
provisioning or managing servers. It's commonly used for event-driven applications and
microservices.

7. Amazon SNS (Simple Notification Service):


SNS is a fully managed messaging service that allows you to send notifications and alerts to a
variety of endpoints, including email, SMS, and more.
8. Amazon Route 53:
Route 53 is a scalable Domain Name System (DNS) web service that enables you to route traffic to
various AWS resources and external endpoints.
9. Amazon EBS (Elastic Block Store):
EBS provides block storage volumes for EC2 instances. It's used for storing persistent data and can
be attached or detached from instances as needed.
10. Amazon CloudWatch:
CloudWatch is a monitoring and management service that collects and tracks metrics, collects
and monitors log files, and sets alarms. It helps you gain insights into the health and performance
of your AWS resources.
11. Auto Scaling:
Auto Scaling automatically adjusts the number of EC2 instances in a group based on traffic,
ensuring that your application can handle varying workloads.
12. Amazon VPC Peering:
VPC peering allows you to connect two VPCs and route traffic between them privately.

2. AWS FUNDAMENTALS

2.1 Deep dive into AWS core services such as EC2, S3, and RDS

Amazon EC2 (Elastic Compute Cloud):


What is EC2?
EC2 is a fundamental AWS service that provides resizable virtual machines, known as instances, in
the cloud. These instances are designed to run various types of applications, from simple web
servers to complex distributed computing clusters.

Key Concepts:
Instances: These are the virtual servers you launch in the AWS cloud. You can choose from
various instance types, each optimized for different use cases, such as compute-intensive, memory-
intensive, or GPU-accelerated workloads.
AMI (Amazon Machine Image): An AMI is a pre-configured virtual machine image used to
create instances. AWS provides many publicly available AMIs, and you can create your custom


AMIs.


Security Groups: Security groups act as virtual firewalls for your instances. You can define
inbound and outbound traffic rules to control access to your instances.
Key Pairs: Key pairs are used to securely access your EC2 instances. You create a key pair, and
AWS stores the public key while you keep the private key.
Elastic IP Addresses: Elastic IP addresses are static IP addresses that you can allocate to your
instances. They're useful when you want to ensure your instance has a consistent IP address.
Instance Storage: EC2 instances can have instance storage (ephemeral storage) attached. This
storage is temporary and typically used for caching and scratch space.
Elastic Load Balancers: You can use Elastic Load Balancers to distribute incoming traffic across
multiple EC2 instances for high availability and fault tolerance.

Use Cases:
Web hosting, application hosting, scalable and on-demand computing resources, batch processing,
machine learning, and more…
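The EC2 concepts above (AMI, instance type, key pair, security groups) come together when launching an instance programmatically. The sketch below uses only the standard library to build the keyword arguments in the shape boto3's `ec2.run_instances` call expects; the AMI ID, key pair name, and security group ID are hypothetical placeholders, not real resources.

```python
# Build the parameter dict for an EC2 launch request. The keys mirror
# boto3's ec2.run_instances keyword arguments; all IDs below are
# hypothetical placeholders, not real resources.
def build_run_instances_params(ami_id, instance_type, key_name, security_group_ids):
    return {
        "ImageId": ami_id,                       # the AMI to boot from
        "InstanceType": instance_type,           # e.g. a burstable t2.micro
        "KeyName": key_name,                     # key pair for SSH access
        "SecurityGroupIds": security_group_ids,  # virtual firewall(s) to attach
        "MinCount": 1,                           # launch exactly one instance
        "MaxCount": 1,
    }

params = build_run_instances_params("ami-0abcdef1234567890", "t2.micro",
                                    "my-key-pair", ["sg-0123456789abcdef0"])
print(params["InstanceType"])  # → t2.micro
```

In a real session this dict would be passed straight to a boto3 EC2 client; keeping it as plain data makes the launch request easy to inspect and test before anything is created.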

Amazon S3 (Simple Storage Service):


What is S3?
Amazon S3 is a highly scalable object storage service designed for storing and retrieving data. It's
known for its durability, availability, and cost-effectiveness.


Key Concepts:
Buckets: S3 uses containers called buckets to store objects. Each bucket has a globally unique name.
Objects: Objects are the data files stored in S3 buckets. They can be of any file type, and S3
provides features for versioning, lifecycle management, and access control.
Data Consistency: S3 provides strong read-after-write consistency for all objects, ensuring that
once an object is written, it can be read immediately.
Data Encryption: S3 supports encryption in transit (SSL/TLS) and at rest (server-side and client-
side encryption).
Storage Classes: S3 offers multiple storage classes, including Standard, Intelligent-Tiering, Glacier,
and others. Each class is designed for different use cases and has different costs associated with it.
Access Control: S3 allows you to control access to your objects using bucket policies, IAM
policies, and Access Control Lists (ACLs).
Event Notifications: You can configure S3 to trigger events (e.g., Lambda functions) based on
changes to objects in a bucket.

Use Cases:
Data storage for web applications, backup and archiving, content distribution, big data analytics,
data lakes, and serving static assets for websites.

Amazon RDS (Relational Database Service):


What is RDS?

Amazon RDS is a managed relational database service that simplifies the setup, operation, and
scaling of relational databases such as MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.

Key Concepts:
DB Instances: RDS provides fully managed database instances running on your choice of database
engine.
Automated Backups: RDS automatically takes daily backups and allows you to create manual
snapshots of your databases.
Multi-AZ Deployments: RDS supports high availability through Multi-AZ deployments, where a
standby instance is automatically provisioned in a separate Availability Zone for failover.
Read Replicas: RDS allows you to create read replicas of your database for read scalability and
redundancy.
Security: RDS offers various security features, including encryption at rest and in transit, VPC
isolation, and IAM database authentication.
Scalability: You can vertically scale (resize) your RDS instances or horizontally scale by adding
read replicas.
Database Engines: RDS supports several popular database engines, each with its own
configuration options and features.

Use Cases:
Hosting web applications, managing e-commerce databases, data warehousing, business
intelligence, and disaster recovery for relational databases.


2.2 Hands-on experience with launching virtual machines (EC2 instances) and storing data in S3 buckets
Amazon S3 is a repository for internet data. Amazon S3 provides access to reliable, fast, and
inexpensive data storage infrastructure. It is designed to make web-scale computing easier by
enabling you to store and retrieve any amount of data, at any time, from within Amazon EC2 or
anywhere on the web. Amazon S3 stores data objects redundantly on multiple devices across
multiple facilities and allows concurrent read or write access to these data objects by many separate
clients or application threads. You can use the redundant data stored in Amazon S3 to recover
quickly and reliably from instance or application failures.

Amazon EC2 uses Amazon S3 for storing Amazon Machine Images (AMIs). You use AMIs for
launching EC2 instances. In case of instance failure, you can use the stored AMI to
immediately launch another instance, thereby allowing for fast recovery and business
continuity.

Amazon EC2 also uses Amazon S3 to store snapshots (backup copies) of the data volumes. You
can use snapshots for recovering data quickly and reliably in case of application or system failures.
You can also use snapshots as a baseline to create multiple new data volumes, expand the size of
an existing data volume, or move data volumes across multiple Availability Zones, thereby making
your data usage highly scalable. For more information about using data volumes and snapshots,
see Amazon Elastic Block Store.

Objects are the fundamental entities stored in Amazon S3. Every object stored in Amazon S3 is
contained in a bucket. Buckets organize the Amazon S3 namespace at the highest level and identify
the account responsible for that storage. Amazon S3 buckets are similar to internet domain names.
Objects stored in the buckets have a unique key value and are retrieved using a URL. For example,
if an object with a key value /photos/mygarden.jpg is stored in the DOC-EXAMPLE-BUCKET1 bucket, then
it is addressable using the URL https://DOC-EXAMPLE-BUCKET1.s3.amazonaws.com/photos/mygarden.jpg.
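The bucket-plus-key addressing described above can be reproduced in a few lines of Python. This is a sketch assuming the default virtual-hosted-style endpoint (`<bucket>.s3.amazonaws.com`); region-specific endpoints differ, and keys with special characters would also need URL-encoding.

```python
# Build the virtual-hosted-style URL for an S3 object, assuming the
# default s3.amazonaws.com endpoint (regional endpoints differ).
def s3_object_url(bucket: str, key: str) -> str:
    key = key.lstrip("/")  # a leading slash is not part of the object key
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(s3_object_url("DOC-EXAMPLE-BUCKET1", "/photos/mygarden.jpg"))
# → https://DOC-EXAMPLE-BUCKET1.s3.amazonaws.com/photos/mygarden.jpg
```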

2.3 Introduction to AWS Identity and Access Management (IAM).


AWS Identity and Access Management (IAM) is a web service that helps you securely control
access to AWS resources. With IAM, you can centrally manage permissions that control which
AWS resources users can access. You use IAM to control who is authenticated (signed in) and
authorized (has permissions) to use resources.

IAM Features
IAM gives you the following features:
Shared access to your AWS account
You can grant other people permission to administer and use resources in your AWS account
without having to share your password or access key.

Granular permissions
You can grant different permissions to different people for different resources. For example, you
might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2),
Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and
other AWS services. For other users, you can allow read-only access to just some S3 buckets, or
permission to administer just some EC2 instances, or to access your billing information but
nothing else.
Secure access to AWS resources for applications that run on Amazon EC2
You can use IAM features to securely provide credentials for applications that run on EC2
instances. These credentials provide permissions for your application to access other AWS
resources. Examples include S3 buckets and DynamoDB tables.
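Granular permissions like the read-only S3 access described above are written as JSON policy documents. The sketch below builds one with the standard library; the bucket name is a hypothetical placeholder, and the `Version`, `Statement`, `Action`, and `Resource` elements follow the standard IAM policy grammar.

```python
import json

# A read-only IAM policy for a single S3 bucket. The bucket name is a
# hypothetical placeholder; "2012-10-17" is the current IAM policy
# language version string.
def read_only_s3_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # the bucket itself (for ListBucket)
                f"arn:aws:s3:::{bucket}/*",  # every object in it (for GetObject)
            ],
        }],
    }
    return json.dumps(policy, indent=2)

print(read_only_s3_policy("DOC-EXAMPLE-BUCKET1"))
```

A document like this can be attached to a user, group, or role; the same JSON shape also works as an S3 bucket policy once a `Principal` element is added.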

Accessing IAM
You can work with AWS Identity and Access Management in any of the following ways.
AWS Management Console
The console is a browser-based interface to manage IAM and AWS resources. For more
information about accessing IAM through the console, see How to sign in to AWS in the AWS
Sign-In User Guide.
AWS Command Line Tools
You can use the AWS command line tools to issue commands at your system's command line
to perform IAM and AWS tasks. Using the command line can be faster and more convenient
than the console. The command line tools are also useful if you want to build scripts that
perform AWS tasks.


3. Networking and Security

3.1 Understanding AWS Virtual Private Cloud (VPC).


With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically
isolated virtual network that you've defined. This virtual network closely resembles a traditional
network that you'd operate in your own data center, with the benefits of using the scalable
infrastructure of AWS.

The following diagram shows an example VPC. The VPC has one subnet in each of the
Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow
communication between the resources in your VPC and the internet.

Features
The following features help you configure a VPC to provide the connectivity that your
applications need:
Virtual private clouds (VPC)

A VPC is a virtual network that closely resembles a traditional network that you'd operate in your
own data center. After you create a VPC, you can add subnets.
Subnets

A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability
Zone. After you add subnets, you can deploy AWS resources in your VPC.
IP addressing


You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also
bring your public IPv4 and IPv6 GUA addresses to AWS and allocate them to resources in your
VPC, such as EC2 instances, NAT gateways, and Network Load Balancers.
Routing

Use route tables to determine where network traffic from your subnet or gateway is directed.
Gateways and endpoints

A gateway connects your VPC to another network. For example, use an internet gateway to
connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately,
without the use of an internet gateway or NAT device.
Peering connections

Use a VPC peering connection to route traffic between the resources in two VPCs.
Traffic Mirroring

Copy network traffic from network interfaces and send it to security and monitoring appliances
for deep packet inspection.
Transit gateways

Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN
connections, and AWS Direct Connect connections.
VPC Flow Logs

A flow log captures information about the IP traffic going to and from network interfaces in your
VPC.
VPN connections

Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS
VPN).

3.2 Configuring VPC, subnets, and security groups.


Configuring a Virtual Private Cloud (VPC), subnets, and security groups in Amazon Web
Services (AWS) involves several steps to create a network environment that meets your specific
requirements. Here's a step-by-step guide to help you set up these components:

Step 1: Create a VPC

 Sign in to the AWS Management Console:

Go to https://aws.amazon.com/, click "Sign In to the Console," and enter your AWS credentials.


 Access the VPC Dashboard:

In the AWS Management Console, navigate to the VPC service by clicking on "Services"
and selecting "VPC" under the "Networking & Content Delivery" section.

 Create a VPC:

Click on "Your VPCs" in the VPC dashboard, and then click the "Create VPC" button.

Enter a name for your VPC, specify the IPv4 CIDR block (IP address range), and
configure any additional settings as needed.

Click "Create VPC" to create the VPC.

Step 2: Create Subnets

 Create Subnets:

Within your VPC, you can create one or more subnets by specifying a unique CIDR block
for each. Subnets can be public or private, depending on their routing configuration.

Go to the "Subnets" section in the VPC dashboard and click the "Create Subnet" button.

Specify the VPC you created earlier, provide a name, and choose an Availability Zone
(AZ) for the subnet.

Define the CIDR block for the subnet and click "Create."

 Repeat the Process:

Create additional subnets for different purposes, such as public-facing subnets for web
servers and private subnets for databases.
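Choosing non-overlapping CIDR blocks for the subnets in Step 2 is plain address arithmetic, which Python's standard `ipaddress` module can do. The sketch below carves a hypothetical 10.0.0.0/16 VPC into /24 subnets, one candidate block per subnet.

```python
import ipaddress

# Carve a VPC CIDR block into equal-sized, non-overlapping subnet CIDRs.
# The 10.0.0.0/16 range below is a hypothetical example.
def plan_subnets(vpc_cidr: str, new_prefix: int, count: int):
    vpc = ipaddress.ip_network(vpc_cidr)
    # subnets() enumerates every child network of the requested size in order
    return [str(net) for net in list(vpc.subnets(new_prefix=new_prefix))[:count]]

print(plan_subnets("10.0.0.0/16", 24, 4))
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```

Note that AWS reserves the first four and the last IP address in each subnet, so a /24 yields 251 usable addresses rather than 254.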
Step 3: Configure Route Tables

 Create Route Tables:

In the VPC dashboard, navigate to "Route Tables" and click the "Create Route Table"
button.

Give the route table a name and associate it with your VPC.

 Edit Route Tables:

Edit the route table's routes to control traffic flow. For example, create a public route table
with a route to an Internet Gateway (IGW) for public subnets and a private route table
with routes to a Network Address Translation (NAT) Gateway for private subnets.

 Associate Subnets:

Associate each subnet with the appropriate route table, ensuring that public subnets use the


public route table and private subnets use the private route table.
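A route is ultimately just a destination CIDR paired with a target. The sketch below builds route entries shaped like boto3's `create_route` parameters; the gateway IDs are hypothetical placeholders.

```python
# Build a route entry shaped like boto3's create_route parameters.
# 0.0.0.0/0 is the catch-all (default) route; the IDs are placeholders.
def default_route(target_key: str, target_id: str) -> dict:
    return {"DestinationCidrBlock": "0.0.0.0/0", target_key: target_id}

# Public subnets send internet-bound traffic to an Internet Gateway...
public_route = default_route("GatewayId", "igw-0abc1234def567890")
# ...while private subnets send it through a NAT Gateway.
private_route = default_route("NatGatewayId", "nat-0abc1234def567890")

print(public_route)
```

The only difference between the two route tables is the target of the default route, which is exactly what makes a subnet "public" or "private".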

Step 4: Configure Security Groups

 Create Security Groups:

In the VPC dashboard, go to "Security Groups" and click the "Create Security Group"
button.

Specify a name, description, and VPC for the security group.

 Define Inbound and Outbound Rules:

Configure inbound and outbound rules for the security group to control traffic. Rules are
stateful, meaning that if you allow inbound traffic from a specific IP address, outbound
traffic is automatically allowed in response.

 Associate Security Groups:

Associate the security group with the relevant EC2 instances, RDS databases, or other
resources. You can do this when launching or modifying resources.
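An inbound rule boils down to a protocol, a port range, and a source. The sketch below builds rules in the `IpPermissions` shape that boto3's `authorize_security_group_ingress` expects; the office CIDR is a hypothetical placeholder.

```python
# Build one ingress rule in the IpPermissions shape used by boto3's
# authorize_security_group_ingress. The office CIDR is a placeholder.
def ingress_rule(protocol: str, port: int, source_cidr: str, description: str) -> dict:
    return {
        "IpProtocol": protocol,  # "tcp", "udp", or "-1" for all protocols
        "FromPort": port,        # start of the allowed port range
        "ToPort": port,          # end of the range (a single port here)
        "IpRanges": [{"CidrIp": source_cidr, "Description": description}],
    }

# Allow HTTPS from anywhere, but SSH only from a hypothetical office network.
rules = [
    ingress_rule("tcp", 443, "0.0.0.0/0", "HTTPS from the internet"),
    ingress_rule("tcp", 22, "203.0.113.0/24", "SSH from the office"),
]
print(rules[0]["FromPort"])  # → 443
```

Because security groups are stateful, no matching outbound rule is needed for the responses to this traffic.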

Step 5: Test and Refine

 Launch Resources:

Create EC2 instances, RDS databases, or other resources within your subnets and associate
them with the appropriate security groups.

 Test Connectivity:

Verify that your resources can communicate as expected. Ensure that security group rules
and routing tables are correctly configured.

 Refine as Needed:

Adjust security group rules, subnet configurations, and routing tables based on your testing
and specific requirements.


3.3 Implementing network security best practices


The following best practices are general guidelines and don’t represent a complete security
solution. Because these best practices might not be appropriate or sufficient for your environment,
treat them as helpful considerations rather than prescriptions.

 When you add subnets to your VPC to host your application, create them in
multiple Availability Zones. An Availability Zone is one or more discrete data
centers with redundant power, networking, and connectivity in an AWS Region.
Using multiple Availability Zones makes your production applications highly
available, fault tolerant, and scalable. For more information, see Amazon VPC on
AWS.
 Use security groups to control traffic to EC2 instances in your subnets. For more
information, see Security groups.
 Use network ACLs to control inbound and outbound traffic at the subnet level. For
more information, see Control traffic to subnets using network ACLs.
 Manage access to AWS resources in your VPC using AWS Identity and Access
Management (IAM) identity federation, users, and roles. For more information,
see Identity and access management for Amazon VPC.
 Use VPC Flow Logs to monitor the IP traffic going to and from a VPC, subnet, or
network interface. For more information, see VPC Flow Logs.
 Use Network Access Analyzer to identify unintended network access to resources
in your VPCs. For more information, see the Network Access Analyzer Guide.
 Use AWS Network Firewall to monitor and protect your VPC by filtering inbound
and outbound traffic. For more information, see the AWS Network Firewall Guide.

3.4 Learning about AWS security services like AWS WAF and AWS Shield.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that
are forwarded to your protected web application resources. You can protect the following
resource types:
 Amazon CloudFront distribution
 Amazon API Gateway REST API
 Application Load Balancer
 AWS AppSync GraphQL API

 Amazon Cognito user pool


 AWS App Runner service


 AWS Verified Access instance
AWS WAF lets you control access to your content. Based on conditions that you specify, such as
the IP addresses that requests originate from or the values of query strings, your protected
resource responds to requests either with the requested content, with an HTTP 403 status code
(Forbidden), or with a custom response.
At the simplest level, AWS WAF lets you choose one of the following behaviors:
 Allow all requests except the ones that you specify – This is useful when you want
Amazon CloudFront, Amazon API Gateway, Application Load Balancer, AWS
AppSync, Amazon Cognito, AWS App Runner, or AWS Verified Access to serve
content for a public website, but you also want to block requests from attackers.
 Block all requests except the ones that you specify – This is useful when you want to
serve content for a restricted website whose users are readily identifiable by properties
in web requests, such as the IP addresses that they use to browse to the website.
 Count requests that match your criteria – You can use the Count action to track your
web traffic without modifying how you handle it. You can use this for general
monitoring and also to test your new web request handling rules. When you want to
allow or block requests based on new properties in the web requests, you can first
configure AWS WAF to count the requests that match those properties. This lets you
confirm your new configuration settings before you switch your rules to allow or block
matching requests.
 Run CAPTCHA or challenge checks against requests that match your criteria –
You can implement CAPTCHA and silent challenge controls against requests to help
reduce bot traffic to your protected resources.
Using AWS WAF has several benefits:
 Additional protection against web attacks using criteria that you specify. You
can define criteria using characteristics of web requests such as the following:
 IP addresses that requests originate from.
 Country that requests originate from.
 Values in request headers.
 Strings that appear in requests, either specific strings or strings that match
regular expression (regex) patterns.
 Length of requests
 Presence of SQL code that is likely to be malicious (known as SQL injection).

 Presence of a script that is likely to be malicious (known as cross-site scripting).


o Rules that can allow, block, or count web requests that meet the specified criteria.
Alternatively, rules can block or count web requests that not only meet the
specified criteria, but also exceed a specified number of requests in any 5-minute
period.
o Rules that you can reuse for multiple web applications.
o Managed rule groups from AWS and AWS Marketplace sellers.
o Real-time metrics and sampled web requests.
o Automated administration using the AWS WAF API.
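The allow, block, and count behaviors described above can be sketched as a first-match rule evaluation. This is a hypothetical model, not the AWS WAF API: count rules only tally matches and never change the outcome, which is what makes them useful for testing new rules before enforcing them.

```python
# Hypothetical sketch of WAF-style request handling (not the AWS WAF API).
# Each rule pairs a predicate over the request with one of the actions
# described above: "allow", "block", or "count".

def evaluate(rules, request, default_action="allow"):
    """Apply the first matching allow/block rule; 'count' rules only tally."""
    counts = {}
    for name, predicate, action in rules:
        if predicate(request):
            if action == "count":
                counts[name] = counts.get(name, 0) + 1
                continue            # counting never changes the outcome
            return action, counts   # first allow/block match wins
    return default_action, counts

blocked_ips = {"198.51.100.23"}
rules = [
    ("block-bad-ips", lambda r: r["ip"] in blocked_ips, "block"),
    ("count-long-uris", lambda r: len(r["uri"]) > 64, "count"),
]
```

Switching a rule from "count" to "block" after reviewing its tallies mirrors the recommended workflow of confirming a new configuration before enforcing it.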

4. Databases and Storage

4.1 Exploring AWS database services like RDS, DynamoDB, and Aurora.
The relational database is the most widely used type of database. SQL databases store data in
interconnected tables, and all data in a relational database is organized in a logically structured
way that preserves data integrity. Put simply, a relational database stores and retrieves
information that is related to other information.
Amazon RDS is a managed relational database service that simplifies database administration tasks
like provisioning, patching, backup, recovery, and scaling. It supports popular relational database
engines such as MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.
Now we will look at some popular relational database services offered by AWS.
1. Amazon RDS (Relational database service)
Amazon RDS is the most widely used AWS service in the relational database category. It
supports a variety of commercial and open-source database engines. With RDS, clients can easily
set up, manage, and scale their data in the AWS cloud with little technical effort, operating
everything from the Amazon console in a few clicks. One of its main advantages is
cost-effectiveness compared with other database services, and you can connect various
applications to databases launched in RDS. Its key features and some use cases follow.

Key Features:
Managed Service: AWS handles database maintenance tasks, allowing you to focus on your
application and data.


Multi-AZ Deployment: RDS provides high availability by replicating your database in multiple
Availability Zones (AZs).


Automated Backups: It automatically takes daily backups and allows you to create manual snapshots
for point-in-time recovery.
Scaling: You can vertically scale (resize) your RDS instances or horizontally scale by adding read
replicas.
Security: RDS supports encryption at rest and in transit, IAM database authentication, and VPC
isolation.
Compatibility: It is compatible with various database engines, ensuring a familiar environment for
developers

Use cases
Amazon RDS is an ideal database service for small and medium-scale eCommerce businesses.
Amazon RDS can provide highly affordable and scalable database solutions to the apps of these
businesses.
You can easily set up and scale your database for your web and mobile applications.
Amazon RDS is well suited to online gaming businesses, providing database infrastructure
that scales up automatically with demand.
A notable feature of Amazon RDS is that it automates tasks such as backup and restore,
database setup, upgrades, maintenance, and hardware provisioning.

2. AWS DynamoDB

AWS DynamoDB is a fast, easy-to-manage NoSQL database service offered by AWS.
DynamoDB can work across multiple servers located in different Regions and comes with
in-memory caching and built-in security features.

DynamoDB can handle more than 10 trillion requests per day, and it includes built-in backup
and restore features. DynamoDB has three components: tables, items, and attributes.

Unlike a relational database, a DynamoDB table is not structured as a fixed set of rows and
columns. Attributes are similar to data values in relational databases, and a collection of
related attributes forms an item.

Amazon DynamoDB is a fully managed NoSQL database service designed for seamless
scalability, high performance, and low latency. It is ideal for applications that require fast and
flexible data storage with automatic scaling.

Key Features:

Serverless and On-Demand: DynamoDB is serverless, so you don't need to manage infrastructure.
You pay only for the resources you use.

Scalability: It can handle massive workloads and automatically scales to accommodate increased
traffic.

Flexible Schema: DynamoDB is schema-less, allowing you to store and retrieve data without
predefined schemas.

Security: It offers fine-grained access control, encryption, and integrates with AWS Identity and
Access Management (IAM).

Global Tables: DynamoDB supports multi-region, multi-master deployments for global


applications.

Streams: You can use DynamoDB Streams to capture changes in your data and trigger event-
driven processing.


Use Cases:

Real-time applications, gaming, IoT, mobile apps, and any application requiring a fast and highly
available NoSQL database.
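DynamoDB's flexible, schema-less items can be made concrete with a small sketch. At the low-level API, every attribute carries a type descriptor ("S" for string, "N" for number, "BOOL", "L" for list, "M" for map). The toy serializer below shows that shape; in practice, boto3's resource layer performs this conversion for you.

```python
# Toy serializer for DynamoDB's attribute-value wire format. This only
# illustrates the shape of the format; it is not a replacement for the
# real SDK serializer.

def to_dynamodb(value):
    if isinstance(value, bool):          # check bool before int (bool is an int subclass)
        return {"BOOL": value}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}         # numbers travel as strings
    if isinstance(value, list):
        return {"L": [to_dynamodb(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_dynamodb(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value)!r}")

# A schema-less item: any mix of attributes beyond the key is allowed.
item = {"pk": "USER#42", "score": 987, "active": True}
wire_item = {k: to_dynamodb(v) for k, v in item.items()}
```

Because only the key attributes are fixed, two items in the same table can carry entirely different attribute sets, which is what "flexible schema" means in practice.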

3. Amazon Aurora
Amazon Aurora is a relational database engine that is fully managed by Amazon RDS. Aurora
was built for the cloud and is secure, scalable, and high performing.

The storage infrastructure behind Amazon Aurora uses a distributed, cloud-native design that
makes it efficient and fast; AWS reports throughput up to five times that of standard MySQL.
Aurora also lets users configure the storage setup to match their workload, and its storage
grows automatically in 10 GB increments up to a cap of 64 TB.

Another crucial benefit of Amazon Aurora is that you can keep using your existing software,
drivers, and programs, because Aurora is compatible with the popular relational databases
MySQL and PostgreSQL.

Beyond that, Amazon Aurora offers features such as backup and recovery, security, monitoring,
compliance, and automatic repair of faults in the underlying storage.

Amazon Aurora is a fully managed, highly available, and high-performance relational database
engine that is compatible with MySQL and PostgreSQL. It provides the performance and
availability of commercial databases at a fraction of the cost.

Key Features:
Performance: Aurora is designed for high performance, with fast read and write operations.
Compatibility: It is compatible with MySQL and PostgreSQL, allowing you to use familiar tools
and drivers.

Replication: Aurora provides automated replication for high availability and failover.

Automatic Backups: It takes continuous backups with no performance impact and offers point-in-
time recovery.

Global Databases: Aurora supports cross-region replication for global deployments.

Security: Aurora offers encryption at rest and in transit, IAM database authentication, and VPC
isolation.

Use cases
Amazon Aurora offers a robust managed database service, letting businesses focus on core
areas such as building high-quality software and delivering good SaaS offerings.
Amazon Aurora provides large-scale storage for online gaming applications.
Amazon Aurora can help businesses cut costs by moving large existing databases onto a
managed, pay-as-you-go service.

4.2 Hands-on experience with creating and managing databases.


To create a MySQL DB instance
1. Sign in to the AWS Management Console and open the Amazon RDS
console at https://console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, check the AWS Region. It
should be the same as the one where you created your EC2 instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. On the Create database page, choose Standard create.
6. For Engine options, choose MySQL.
7. For Templates, choose Free tier.

Your DB instance configuration should look similar to the following image.


8. In the Availability and durability section, keep the defaults.


9. In the Settings section, set these values:
 DB instance identifier – Type tutorial-db-instance.
 Master username – Type tutorial_user.
 Auto generate a password – Leave the option turned off.
 Master password – Type a password.

 Confirm password – Retype the password.

10. In the Instance configuration section, set these values:


 Burstable classes (includes t classes)
 db.t3.micro


11. In the Storage section, keep the defaults.


12. In the Connectivity section, set these values and keep the other values as their defaults:
 For Compute resource, choose Connect to an EC2 compute resource.
 For EC2 instance, choose the EC2 instance you created previously, such as
tutorial-ec2-instance-web-server.

13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database
name. Keep the default settings for the other options.
15. To create your MySQL DB instance, choose Create database.

Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the
DB instance name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.


Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance.
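The console steps above correspond to a single API call. The sketch below shows a hedged boto3 parameter set for `create_db_instance` that mirrors the walkthrough; the password is a placeholder, the storage size assumes the free-tier default, and the call itself is left commented out because it would create billable resources in your account.

```python
# Parameters mirroring the console walkthrough (values in comments refer
# to the numbered steps above). The password is a placeholder.
params = {
    "DBInstanceIdentifier": "tutorial-db-instance",   # step 9
    "Engine": "mysql",                                # step 6
    "DBInstanceClass": "db.t3.micro",                 # step 10 (free tier)
    "MasterUsername": "tutorial_user",                # step 9
    "MasterUserPassword": "CHANGE_ME",                # placeholder, never commit real secrets
    "AllocatedStorage": 20,                           # assumed free-tier default (GiB)
    "DBName": "sample",                               # step 14 (initial database name)
}

# To actually create the instance (requires credentials and incurs cost):
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance(**params)
```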

4.3 Understanding data backup and recovery strategies


Backup and restore is a suitable approach for mitigating against data loss or corruption. This
approach can also be used to mitigate against a regional disaster by replicating data to other AWS
Regions, or to mitigate lack of redundancy for workloads deployed to a single Availability Zone.
In addition to data, you must redeploy the infrastructure, configuration, and application code in
the recovery Region. To enable infrastructure to be redeployed quickly without errors, you should
always deploy using infrastructure as code (IaC) using services such as AWS CloudFormation or
the AWS Cloud Development Kit (AWS CDK). Without IaC, it may be complex to restore
workloads in the recovery Region, which will lead to increased recovery times and possibly
exceed your RTO. In addition to user data, be sure to also back up code and configuration,
including Amazon Machine Images (AMIs) you use to create Amazon EC2 instances. You can
use AWS CodePipeline to automate redeployment of application code and configuration.


4.4 Introduction to Amazon EBS and storage options


Amazon Elastic Block Store (Amazon EBS) is a block-level storage service provided by Amazon
Web Services (AWS). EBS is designed to provide highly available and durable block storage that
can be attached to Amazon EC2 instances. It plays a crucial role in the storage infrastructure of
many AWS applications. Here's an introduction to Amazon EBS and its storage options:
Key Characteristics of Amazon EBS:
Block-Level Storage: EBS provides block-level storage, which means that it allows you to create
and manage storage volumes that are attached to EC2 instances as block devices. These volumes
can be formatted with a file system and used for various purposes, such as data storage and
operating system boot volumes.
Durable and Redundant: EBS volumes are designed for durability and availability. They
automatically replicate data within an Availability Zone (AZ) to protect against component
failures. Additionally, you can create snapshots of your EBS volumes to back up your data, and
these snapshots are stored in Amazon S3.
Elasticity: You can easily resize EBS volumes, both increasing and decreasing their size, without
significant downtime. This elasticity allows you to adapt your storage capacity to the changing
needs of your applications.
Low Latency: EBS volumes provide low-latency access to data, making them suitable for
applications that require high-speed access to storage, such as databases.
Types of Amazon EBS Volumes:


Amazon EBS offers several types of storage volumes, each designed for specific use cases and
performance requirements:
Amazon EBS General Purpose (SSD):
These volumes, known as gp2, provide a balance of price and performance for a wide range of
workloads. They are suitable for most applications and offer low-latency and consistent
performance.
Amazon EBS Provisioned IOPS (SSD):
These volumes, known as io1, are designed for I/O-intensive workloads that require high
performance and low-latency access. You can specify the number of IOPS (Input/Output
Operations Per Second) when provisioning these volumes.
Amazon EBS Throughput Optimized (HDD):
These volumes, known as st1, are optimized for applications that require high throughput for
large, sequential read/write workloads. They are often used for data warehouses and big data
processing.
Amazon EBS Cold HDD:
These volumes, known as sc1, are designed for less frequently accessed data that can tolerate
lower performance. They offer cost-effective storage for infrequently used data.
Amazon EBS Magnetic (HDD):
These volumes, known as standard, are the original EBS volume type and are suitable for
applications with modest I/O requirements.
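The volume types above can be summarized as a quick-reference lookup. The workload labels below are our own shorthand, not AWS terminology; only the volume type names (gp2, io1, st1, sc1, standard) come from the list above.

```python
# Toy helper summarizing the EBS volume types described above.
# The workload labels are illustrative shorthand, not an AWS API.

def suggest_volume_type(workload):
    table = {
        "general":      "gp2",       # balanced price/performance SSD
        "io-intensive": "io1",       # provisioned-IOPS SSD for low-latency I/O
        "throughput":   "st1",       # large sequential HDD workloads
        "cold":         "sc1",       # infrequently accessed, cost-sensitive data
        "legacy":       "standard",  # original magnetic volumes, modest I/O
    }
    return table[workload]
```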
Use Cases for Amazon EBS:
o Amazon EBS is used in a variety of scenarios, including:
o Storing data files, databases, and application code.
o Running operating systems on EC2 instances (boot volumes).
o Providing storage for big data processing and analytics.
o Hosting web applications and content management systems.


5. Serverless Computing
5.1 Introduction to serverless architecture using AWS Lambda.
Serverless architecture is a cloud computing paradigm that abstracts away server management
tasks, allowing developers to focus solely on writing code and building applications without the
need to provision or manage servers. AWS Lambda is a key service provided by Amazon Web
Services (AWS) that enables serverless computing. Here's an introduction to serverless
architecture using AWS Lambda:

Key Concepts in Serverless Architecture:

No Server Management:

Serverless architecture eliminates the need to manage servers, virtual machines, or containers.
AWS Lambda takes care of server provisioning, scaling, and maintenance.

Event-Driven:

Serverless applications are event-driven, meaning they respond to events or triggers. These events
can be HTTP requests, database changes, file uploads, scheduled tasks, or custom events.

Pay-Per-Use Pricing:

Serverless services like AWS Lambda are billed based on actual usage (e.g., compute time and
memory). You only pay for the resources consumed during the execution of your functions.

Auto Scaling:

Serverless platforms automatically scale your applications in response to changes in workload.


You don't need to manually configure scaling policies.

Stateless Functions:

Serverless functions are typically stateless, meaning they don't store persistent data between
invocations. Data is often stored externally, such as in databases or object storage.

AWS Lambda:

AWS Lambda is a serverless compute service that allows you to run code in response to events.
Here are key aspects of AWS Lambda:

Functions: In AWS Lambda, you define functions, which are pieces of code that perform specific
tasks. Functions are designed to be small, focused, and stateless.
Event Sources: Lambda functions are triggered by event sources, which can be various AWS
services (e.g., S3, DynamoDB, SNS) or custom events. When an event occurs, Lambda executes
the associated function.

Runtime Environment: AWS Lambda supports multiple programming languages, including


Node.js, Python, Java, Ruby, and more. You can write your functions in your preferred language.

Scalability: AWS Lambda automatically scales your functions in response to incoming events. It
can run multiple instances of your function concurrently to handle high loads.

Integration: Lambda can be integrated with other AWS services, making it a central part of many
serverless applications. It can also be used with API Gateway to create RESTful APIs.

Use Cases for AWS Lambda and Serverless:

Web APIs: Create RESTful APIs and microservices using Lambda and API Gateway.

Data Processing: Perform data transformation, filtering, and enrichment in response to data events.

Real-time File Processing: Process file uploads, generate thumbnails, and perform content
moderation.

IoT Applications: Handle data from IoT devices and trigger actions based on sensor data.

Automated Backups: Schedule and automate backup tasks, database snapshots, and log archiving.

Event-Driven Automation: Implement event-driven workflows for DevOps and CI/CD pipelines.

Chatbots and Voice Assistants: Develop chatbots and voice-based applications.

Image and Video Analysis: Analyze and process images and videos, including object detection
and recognition.

AWS Lambda, combined with other AWS services, offers a powerful and scalable platform for
building serverless applications. Developers can focus on writing code that solves business
problems while AWS takes care of the underlying infrastructure and scaling. It's a cost-effective
and efficient way to build modern cloud-native applications.
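A stateless, event-driven Lambda function in Python looks like the sketch below. The event shape is a made-up example for illustration; real event payloads depend on the trigger (API Gateway, S3, SQS, and so on). Because the handler is an ordinary function, it can be exercised locally before deployment.

```python
# Minimal Lambda-style handler: stateless, reads its input from the event,
# and returns a result. The event fields here are illustrative.
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; 'context' carries runtime metadata."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Calling `lambda_handler({"name": "AWS"}, None)` locally returns the same response structure the deployed function would produce, which is what makes small, stateless functions easy to test.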

5.2 Building and deploying serverless functions.


Building and deploying serverless functions on AWS involves several steps, and it typically
revolves around using AWS Lambda, the core service for serverless computing on AWS. Below
are the steps to build and deploy serverless functions using AWS Lambda:

Step 1: Develop Your Lambda Function

Choose a Runtime: AWS Lambda supports various runtime environments such as Node.js,
Python, Java, Ruby, Go, .NET Core, and custom runtimes. Select the runtime that suits your
application.

Write Code: Write the code for your Lambda function. The code should be stateless and focus on
a specific task or function.

Dependencies: If your function has dependencies, include them in your deployment package. For
Node.js, you can use npm; for Python, you can use pip; and so on.

Handler Function: Define a handler function within your code. This function will be the entry
point for your Lambda execution.

Step 2: Package Your Lambda Function

Create a Deployment Package: Package your code along with its dependencies into a zip file.
Ensure that the handler function is correctly specified in your AWS Lambda configuration.

Step 3: Create a Lambda Function

Access AWS Lambda Console:

Sign in to the AWS Management Console and navigate to the AWS Lambda service.

Create a Function:

Click the "Create function" button.

Choose the "Author from scratch" option.

Configure Function:

Provide a function name, runtime, and an optional description.

Define the execution role that specifies the permissions your Lambda function will have.

Upload Code Package:

In the "Function code" section, select the "Upload a .zip file" option.

Upload the deployment package you created in Step 2.

Set Handler: Specify the handler function in the format filename.handler, where filename is the
name of your code file and handler is the name of the handler function.

Configure Memory and Timeout:

Set the memory allocated to your Lambda function and the function timeout.

Create Function:

Click the "Create function" button.

Step 4: Test Your Lambda Function

Configure Test Event: In the Lambda function's configuration, you can create a test event with
sample input data to test your function.

Test Execution: Execute the test event to verify that your Lambda function behaves as expected.

Step 5: Deploy Your Lambda Function



Deploy with Serverless Framework (Optional):

You can use the Serverless Framework or other deployment tools to automate the deployment
process. The Serverless Framework simplifies AWS Lambda deployments and integrates with
other AWS services.

Step 6: Configure Triggers

Add Triggers: Lambda functions are often triggered by various events, such as HTTP requests via
Amazon API Gateway, file uploads to Amazon S3, database changes using AWS DynamoDB
Streams, or custom events.

Step 7: Monitor and Debug

CloudWatch Logs: AWS Lambda automatically logs function execution details to CloudWatch
Logs. Use CloudWatch Logs to monitor and troubleshoot your Lambda functions.

Step 8: Scaling and Cost Optimization

Configure Concurrency: Adjust the concurrency settings to control how many instances of your
Lambda function run concurrently.

Optimize Costs: Monitor your Lambda function usage and costs, and adjust memory allocation

and function execution times to optimize costs.

5.3 Exploring Amazon API Gateway for creating RESTful APIs.


In Amazon API Gateway, you build a REST API as a collection of programmable entities known


as API Gateway resources. For example, you use a RestApi resource to represent an API that can
contain a collection of Resource entities. Each Resource entity can in turn have one or more
Method resources. Expressed in the request parameters and body, a Method defines the
application programming interface for the client to access the exposed Resource and represents an
incoming request submitted by the client. You then create an Integration resource to integrate the
Method with a backend endpoint, also known as the integration endpoint, by forwarding the
incoming request to a specified integration endpoint URI. If necessary, you transform request
parameters or body to meet the backend requirements. For responses, you can create a
MethodResponse resource to represent a request response received by the client and you create an
IntegrationResponse resource to represent the request response that is returned by the backend.
You can configure the integration response to transform the backend response data before
returning the data to the client or to pass the backend response as-is to the client.

To help your customers understand your API, you can also provide documentation for the API, as
part of the API creation or after the API is created. To enable this, add a DocumentationPart
resource for a supported API entity.

To control how clients call an API, use IAM permissions, a Lambda authorizer, or an Amazon
Cognito user pool. To meter the use of your API, set up usage plans to throttle API requests. You
can enable these when creating or updating the API.

You can perform these and other tasks by using the API Gateway console, the API Gateway
REST API, the AWS CLI, or one of the AWS SDKs.
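The Method-to-Integration flow described above amounts to transforming the client's request parameters into the shape the backend expects and forwarding them to the integration endpoint URI. The sketch below models that transformation; the function and field names are illustrative, not the API Gateway data model.

```python
# Toy model of an API Gateway Method/Integration pair: map the client's
# request parameters onto a backend request aimed at the integration
# endpoint URI. Names are illustrative.

def integrate(method_request, param_mapping, endpoint_uri):
    """Transform client parameter names to backend parameter names."""
    backend_params = {
        backend_name: method_request["params"][client_name]
        for client_name, backend_name in param_mapping.items()
    }
    return {"uri": endpoint_uri, "params": backend_params}

# A client calls the Method with "petId"; the backend expects "id".
method_request = {"params": {"petId": "17"}}
backend = integrate(method_request, {"petId": "id"},
                    "https://backend.example.com/pets")
```

The reverse direction, mapping the backend's response into the MethodResponse returned to the client, follows the same pattern.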

5.4 Integration of Lambda with other AWS services


AWS Lambda integrates with other AWS services to invoke functions or take other actions. These
are some common use cases:

Invoke a function in response to resource lifecycle events, such as with Amazon Simple Storage
Service (Amazon S3). For more information, see Using AWS Lambda with Amazon S3.

Respond to incoming HTTP requests. For more information, see Tutorial: Using Lambda with API
Gateway.

Consume events from a queue. For more information, see Using Lambda with Amazon SQS.

Run a function on a schedule. For more information, see Using AWS Lambda with Amazon
EventBridge (CloudWatch Events).

Depending on which service you're using with Lambda, the invocation generally works in one of
two ways. An event drives the invocation or Lambda polls a queue or data stream and invokes the
function in response to activity in the queue or data stream. Lambda integrates with Amazon Elastic

File System and AWS X-Ray in a way that doesn't involve invoking functions.

For more information, see Event-driven invocation and Lambda polling. Or, look up the service
that you want to work with in the following section to find a link to information about using that
service with Lambda.

You can also use Lambda functions to interact programmatically with other AWS services using
one of the AWS Software Development Kits (SDKs). For example, you can have a Lambda
function create an Amazon S3 bucket or write data to a DynamoDB table using an API call from
within your function.
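Event-driven invocation from Amazon S3, the first use case above, can be sketched as a handler that pulls the bucket and key out of each notification record. The sample event below is a trimmed illustration of the S3 notification shape, not a complete payload.

```python
# Sketch of an S3-triggered Lambda handler: extract the bucket and object
# key from each record. The sample event is a trimmed illustration.

def handle_s3_event(event, context=None):
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append((bucket, key))   # e.g. resize the image, index the file, ...
    return results

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
```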

6. IAM and Security


6.1 Detail your work on Identity and Access Management (IAM).
1. IAM Concepts:

Users: Individuals or entities who interact with your AWS resources.

Groups: A collection of users, which makes it easier to manage permissions.


Roles: IAM entities that define permissions for services or resources within AWS.

Policies: Documents that define permissions (what users, groups, and roles are allowed to do).

2. IAM Best Practices:

Use Roles for AWS Resources: Instead of using long-term credentials, use IAM roles for EC2
instances, Lambda functions, and other resources to enhance security.

Apply Least Privilege: Only grant the permissions necessary for a user or resource to perform its
tasks.

Use IAM Groups: Assign permissions to groups rather than individual users for
easier management.

Enable MFA (Multi-Factor Authentication): Require MFA for users who have access to sensitive
resources.

Regularly Review Permissions: Periodically review and audit IAM policies and permissions to
ensure they remain appropriate.

Use IAM Policy Conditions: Apply conditions to IAM policies to further restrict access (e.g.,
based on IP address or time of day).

Rotate Access Keys: Regularly rotate access keys to enhance security.

3. IAM Setup:

Creating IAM Users: Create users and assign them to groups with appropriate permissions.

Creating IAM Roles: Define roles with specific permissions and trust relationships with services
like EC2 or Lambda.

Creating IAM Policies: Write policies that specify permissions and attach them to users, groups,
or roles.

Enabling MFA: Enable multi-factor authentication for users who have access to sensitive
resources.

Configuring Password Policies: Define password complexity requirements for users.

4. IAM Security Measures:

Access Key Rotation: Regularly rotate access keys to minimize the impact of key compromise.

Monitoring and Logging: Use CloudWatch Logs and CloudTrail to monitor IAM activity and
detect unauthorized access.

Credential Report: Use the IAM credential report to check for unused credentials and identify
potential security risks.

IAM Access Analyzer: Use IAM Access Analyzer to analyze access policies for unintended
resource access.

5. IAM for AWS Services:

Use IAM roles to grant permissions to AWS services like Lambda, EC2, and Glue.

Configure cross-account access and trust relationships for sharing resources securely between
AWS accounts.

IAM is a fundamental component of AWS security, enabling you to control and manage access to
your AWS resources. Properly configuring and managing IAM is crucial for maintaining a secure
AWS environment. Always follow best practices and regularly review and update your IAM
policies to align with your organization's security requirements.
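Least privilege, the central best practice above, is expressed in an IAM policy document. The sketch below builds a read-only policy for a single S3 bucket; the bucket name is a placeholder, and in practice you would attach the resulting JSON to a user, group, or role.

```python
# Build a least-privilege IAM policy document: read-only access to one
# bucket. The bucket name is a placeholder.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # ListBucket targets the bucket
                "arn:aws:s3:::example-bucket/*",   # GetObject targets the objects
            ],
        }
    ],
}

policy_json = json.dumps(policy)   # attach as an inline or managed policy
```

Anything not explicitly allowed, such as `s3:PutObject` or access to other buckets, is denied by default, which is exactly the least-privilege posture the best practices recommend.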

6.2 Discuss how you configured roles, policies,and permissions


Several of the previously listed policies grant the ability to configure AWS services with roles
that enable those services to perform operations on your behalf. The job function policies either
specify exact role names that you must use or at least include a prefix that specifies the first part
of the name that can be used. To create one of these roles, perform the steps in the following
procedure.


To create a role for an AWS service (IAM console)


1. Sign in to the AWS Management Console and open the IAM
console at https://console.aws.amazon.com/iam/.
2. In the navigation pane of the IAM console, choose Roles, and then choose Create role.
3. Choose the AWS service role type.
4. Choose the use case for your service. Use cases are defined by the service to include
the trust policy that the service requires.
5. Choose Next.
6. If possible, select the policy to use for the permissions policy. Otherwise, choose
Create policy to open a new browser tab and create a new policy from scratch. For
more information, see Creating IAM policies in the IAM User Guide.
7. After you create the policy, close that tab and return to your original tab. Select the
check box next to the permissions policies that you want the service to have.
Depending on the use case that you selected, the service might let you do any of the
following:
o Nothing, because the service defines the permissions for the role.
o Choose from a limited set of permissions.
o Choose from any permissions.
o Select no policies at this time. However, you can create the policies later, and then attach them to the role.
8. (Optional) Set a permissions boundary. This is an advanced feature that is available
for service roles, but not for service-linked roles.
Expand the Permissions boundary section and choose Use a permissions boundary to
control the maximum role permissions. IAM includes a list of the AWS managed and
customer managed policies in your account. Select the policy to use for the permissions
boundary or choose Create policy to open a new browser tab and create a new policy
from scratch. For more information, see Creating IAM policies in the IAM User Guide.
After you create the policy, close that tab and return to your original tab to select the
policy to use for the permissions boundary.
9. Choose Next.
10. For Role name, the degree of role name customization is defined by the service. If the
service defines the role's name, you can't edit this option. In other cases, the service
might define a prefix for the role and you can enter an optional suffix. For some services,
you can specify the entire name of your role.
If possible, enter a role name or role name suffix to help you identify the purpose of this
role. Role names must be unique within your AWS account, so don't create roles named
both PRODROLE and prodrole. When a role name is used in a policy or as part of an
ARN, the role name is case sensitive. When a role name appears to customers in the
console, such as during the sign-in process, the role name is case insensitive. Because
various entities might reference the role, you can't edit the name of the role after it is
created.
11. (Optional) For Description, enter a description for the new role.
12. Choose Edit in the Step 1: Select trusted entities or Step 2: Select permissions
sections to edit the use cases and permissions for the role.
13. (Optional) Add metadata to the role by attaching tags as key-value pairs. For more
information about using tags in IAM, see Tagging IAM resources in the IAM User
Guide.
14. Review the role, and then choose Create role.
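A service role like the one created in the console steps above is defined by its trust policy, which names the AWS service allowed to assume it. The sketch below builds an EC2 trust policy as a dict; the role name and attached policy in the CLI comments are assumptions:

```python
import json

# Trust policy that lets the EC2 service assume the role (steps 3-4 above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))

# The console procedure maps roughly to two CLI calls (role name invented):
#   aws iam create-role --role-name app-ec2-role \
#       --assume-role-policy-document file://trust.json
#   aws iam attach-role-policy --role-name app-ec2-role \
#       --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```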

6.3 Emphasize the importance of security in AWS


Data Protection: AWS hosts vast amounts of sensitive data for organizations across the world.
Ensuring the security of this data is not only a legal requirement but also vital for maintaining
customer trust.

Financial Implications: Security breaches can lead to financial losses, including direct costs
related to the incident response, regulatory fines, legal fees, and potential loss of business due to
reputational damage.

Compliance Requirements: Many industries and jurisdictions have strict compliance
requirements for data protection (e.g., GDPR, HIPAA). Failing to meet these standards can result
in severe penalties.

Business Continuity: Security incidents, such as data breaches or service disruptions, can disrupt
normal business operations. Robust security measures are necessary to maintain business
continuity and minimize downtime.


Reputation Management: A security breach can tarnish an organization's reputation, potentially
leading to the loss of customers, partners, and investors. Rebuilding trust is often difficult and
time-consuming.

Data Privacy: AWS customers entrust the platform with their data. Ensuring data privacy and
confidentiality is a fundamental ethical responsibility.

Resource Protection: AWS provides various cloud resources, including compute instances,
storage, and networking. Security measures are essential to protect these resources from misuse or
unauthorized access.

Secure Development: Security should be integrated into the development lifecycle. Neglecting
security during application development can lead to vulnerabilities that may be exploited.

User Identity and Access Management: AWS IAM ensures that only authorized users and
services have access to resources. Misconfigured IAM can lead to data breaches.

Emerging Threat Landscape: The threat landscape is continuously evolving, with new attack
vectors and techniques emerging regularly. Staying vigilant and adapting security measures is
crucial.

Cloud-Native Security: Cloud environments like AWS introduce unique security challenges.
Organizations must understand these challenges and implement cloud-native security controls.

Shared Responsibility Model: AWS operates on a shared responsibility model, where AWS is
responsible for the security of the cloud infrastructure, while customers are responsible for
securing their data, applications, and configurations. Understanding and fulfilling this shared
responsibility is key to a secure environment.

Incident Response: Having a well-defined incident response plan is essential for swiftly
addressing security incidents and minimizing their impact.

Security Awareness: Promoting a culture of security awareness among employees and
stakeholders is critical. People are often the weakest link in the security chain, and education
helps reduce human error.

Continuous Monitoring and Improvement: Security is not a one-time task but an ongoing
process. Continuous monitoring, vulnerability assessments, and regular security audits are
essential for maintaining a high level of security in AWS.

7. Big Data and Analytics


7.1 Introduction to AWS data analytics services like Amazon Redshift and Athena
Amazon Web Services (AWS) offers a comprehensive suite of data analytics services that enable
organizations to process, analyze, and gain valuable insights from large volumes of data. Two key
services in the AWS data analytics ecosystem are Amazon Redshift and Amazon Athena.

1. Amazon Redshift:

Overview:

Amazon Redshift is a fully managed data warehousing service designed for high-performance
analytics. It is optimized for handling large datasets, making it ideal for data warehousing,
business intelligence, and reporting applications.

Key Features:

Columnar Storage: Redshift stores data in a columnar format, which enables efficient
compression and query performance, especially for analytical workloads.

Massively Parallel Processing (MPP): Redshift distributes query execution across multiple nodes,
allowing for parallel processing and fast query performance.

Integration: It integrates seamlessly with popular BI tools like Tableau, Looker, and Power BI.

Scalability: You can easily scale Redshift clusters up or down as needed to accommodate
changing data volumes and query loads.

Security: Redshift offers robust security features, including encryption, VPC support, IAM
integration, and more.

Data Lake Integration: You can use Redshift Spectrum to query data in your data lake (stored in
Amazon S3) without the need to load it into the data warehouse.

Use Cases:

o Business intelligence and reporting

o Data warehousing for large datasets

o Advanced analytics and data exploration

o Log and clickstream analysis
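Redshift's columnar storage and MPP engine are aimed at exactly the kind of aggregate reporting query sketched below. The table, columns, and cluster name are invented; with boto3 the parameters could be passed to the Redshift Data API's `execute_statement`:

```python
# A typical reporting query for a Redshift warehouse; "sales" and its
# columns are illustrative, not from a real schema.
sql = """
SELECT region,
       DATE_TRUNC('month', order_date) AS month,
       SUM(amount) AS revenue
FROM sales
GROUP BY region, month
ORDER BY month, region;
"""

# Parameters as they might be given to the Redshift Data API
# (boto3 client "redshift-data", execute_statement); shown as a dict
# rather than a live call.
execute_params = {
    "ClusterIdentifier": "analytics-cluster",  # assumed cluster name
    "Database": "warehouse",                   # assumed database name
    "Sql": sql,
}

print(execute_params["Sql"])
```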

2. Amazon Athena:

Overview:


Amazon Athena is an interactive query service that allows you to analyze data in Amazon S3
using standard SQL queries. It is a serverless service, meaning you don't need to manage
infrastructure or provision capacity.

Key Features:

Serverless: No need to set up or manage clusters; you pay only for the queries you run.

SQL Query Language: Athena supports standard SQL queries, making it accessible to users
familiar with SQL.

Schema-on-Read: You can define the schema of your data on the fly, making it easy to analyze
structured and semi-structured data.

Integration: Athena integrates with AWS Glue Data Catalog, making it easier to discover and
access your data.

Security: It integrates with AWS Identity and Access Management (IAM) for fine-grained access
control and supports encryption for data at rest and in transit.

Use Cases:

o Ad-hoc querying and analysis of data in S3

o Log analysis

o Data exploration and discovery

o ETL (Extract, Transform, Load) processing
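Athena's schema-on-read model means a table is just a definition laid over files already in S3. The DDL and query below are a sketch with an invented bucket and columns; with boto3 they would be submitted via `athena.start_query_execution` together with an S3 `OutputLocation` for results:

```python
# Schema-on-read: the table points at log files already in S3; nothing
# is loaded or copied. Bucket path and columns are illustrative.
create_table_sql = """
CREATE EXTERNAL TABLE IF NOT EXISTS access_logs (
    ts        string,
    client_ip string,
    status    int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
LOCATION 's3://example-log-bucket/access/';
"""

# Once defined, the data is queryable with ordinary SQL.
query_sql = "SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status;"

print(create_table_sql)
print(query_sql)
```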

7.2 Working with data lakes and AWS Glue for ETL
Working with data lakes and AWS Glue for ETL (Extract, Transform, Load) is a common approach
for organizations looking to manage and analyze large volumes of diverse data. AWS Glue is a
fully managed ETL service that simplifies the process of preparing and loading data into data
lakes and data warehouses. Here's how you can work with data lakes and AWS Glue for ETL:

1. Data Lake Architecture:

Before using AWS Glue, you need to have a data lake architecture in place. A data lake is a
central repository that allows you to store data in its raw, native format. AWS offers Amazon S3
as a highly scalable and cost-effective storage solution for building data lakes. Data lakes can
include structured, semi-structured, and unstructured data.

2. AWS Glue Components:

AWS Glue consists of several key components:

Data Catalog: The AWS Glue Data Catalog is a metadata repository that stores metadata about
data sources, transformations, and targets. It helps Glue understand the structure of your data.

ETL Jobs: AWS Glue ETL jobs are defined using Python or Scala code. These jobs extract data
from sources, transform it as needed, and load it into target destinations.

Crawlers: Crawlers in AWS Glue automatically discover and catalog metadata from your data
sources. They can traverse through data in S3, RDS, Redshift, and other sources to build the Data
Catalog.

3. ETL with AWS Glue:

Here's a typical process for performing ETL using AWS Glue:

a. Data Discovery: Use AWS Glue Crawlers to automatically discover data in your data lake.
Crawlers analyze your data to create metadata tables in the Data Catalog.

b. Data Transformation: Define ETL jobs in AWS Glue. These jobs use the metadata from the
Data Catalog to transform the data. You can use built-in transformations or custom code.

c. Data Loading: Load the transformed data into your target destinations, which can be a data
warehouse (e.g., Amazon Redshift), databases, or other storage solutions.

d. Scheduling: You can schedule ETL jobs to run at specific intervals or in response to events.
AWS Glue handles job execution and scaling automatically.

4. Benefits:

Using AWS Glue for ETL in a data lake architecture offers several benefits:
Scalability: AWS Glue scales resources based on job complexity and data volume, ensuring
efficient processing.

Managed Service: AWS Glue is fully managed, reducing the operational burden on your team.

Cost-Effective: You only pay for the resources used during ETL job execution, making it cost-
effective for various workloads.

Data Catalog: The Data Catalog simplifies data discovery and makes it easier to manage metadata.

Integration: AWS Glue integrates seamlessly with other AWS services like S3, Redshift, and
Athena, enabling a powerful data analytics ecosystem.

5. Use Cases:

o Common use cases for working with data lakes and AWS Glue include:

o Data preparation for analytics and machine learning.

o Data integration and consolidation from various sources.

o Log and clickstream analysis.

o Real-time data processing.

o Building data warehouses and data marts.

7.3 Building data pipelines and performing analysis and visualization with Amazon QuickSight


Building data pipelines and performing data analysis in the context of big data and analytics is a complex
but crucial task for organizations aiming to extract actionable insights from vast and diverse datasets.
Here's a step-by-step guide to building data pipelines and conducting data analysis in the big data realm:

1. Define Data Sources:

Identify the data sources that are relevant to your analytics goals. These sources can include databases, IoT
devices, log files, social media feeds, and more.

Consider both structured and unstructured data sources, as big data often involves a variety of data types.

2. Data Ingestion:

Set up data ingestion mechanisms to collect data from the defined sources. This can include batch
processing, real-time streaming, or a hybrid approach.

Use technologies like Apache Kafka, AWS Kinesis, or Flume for real-time streaming, and tools like
Apache Nifi or AWS DataSync for batch processing.

3. Data Storage:

Choose appropriate data storage solutions to handle the volume and variety of data. Common choices
include data lakes (e.g., Hadoop HDFS, Amazon S3) and distributed databases (e.g., Apache Cassandra,
HBase).

Optimize storage for both cost-effectiveness and performance.

4. Data Processing:

Implement data processing steps to clean, transform, and enrich the raw data. Big data processing
frameworks like Apache Spark and Apache Flink are commonly used for these tasks.

Parallelize processing to take advantage of distributed computing power.

5. Data Integration:

Integrate data from different sources and formats into a unified data model.

Leverage technologies like Apache Hive, Apache Pig, or Apache Beam for data integration.

6. Data Analysis:

Use big data analytics tools and libraries to perform analysis. Apache Hadoop, Apache Spark, and cloud-
based platforms like AWS EMR and Google Dataprep are popular choices.

Implement machine learning algorithms for predictive modeling, clustering, classification, and anomaly
detection.

7. Data Visualization:

Create data visualizations and dashboards to communicate insights effectively. Tools like Tableau, Power
BI, and open-source options like Matplotlib and D3.js can help with visualization.

8. Automate and Orchestrate:


Automate the data pipeline and analysis workflows using tools like Apache Airflow, AWS Step Functions,
or Kubernetes for container orchestration.

Schedule and monitor pipeline activities to ensure consistent and reliable operation.

9. Data Governance and Security:

Implement data governance practices to ensure data quality, consistency, and compliance with regulations.

Apply encryption, access controls, and auditing to secure sensitive data.

10. Scaling and Performance Optimization:

Continuously optimize the performance of your data pipeline and analytics processes as data volumes and
complexity grow.

Monitor resource usage and adjust the infrastructure as needed to maintain scalability.

11. Documentation and Collaboration:

Document your data pipeline architecture, data lineage, and analysis methodologies to facilitate
collaboration among data engineers, data scientists, and analysts.

12. Continuous Improvement:

Continuously assess the effectiveness of your data pipeline and analytics efforts. Seek feedback and make
adjustments to meet evolving business goals.
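Steps 4 through 7 above boil down to aggregating raw events into a shape a dashboard can plot. As a minimal illustration with invented clickstream data, the aggregation step might look like:

```python
from collections import Counter

# Tiny stand-in for the processing/analysis steps: count page views per
# page from raw clickstream events (data invented), producing a result
# ready for a visualization tool.
events = [
    {"user": "u1", "page": "/home"},
    {"user": "u2", "page": "/pricing"},
    {"user": "u1", "page": "/home"},
    {"user": "u3", "page": "/docs"},
]

page_views = Counter(e["page"] for e in events)
print(page_views.most_common(2))  # most-viewed pages first
```

At scale the same aggregation would run in Spark or Flink rather than in-memory Python, but the shape of the computation is identical.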

8. Application Monitoring and Management


8.1 Implementing application monitoring with Amazon CloudWatch
Implementing application monitoring with Amazon CloudWatch is crucial for ensuring the health,
performance, and availability of your AWS resources and applications. Amazon CloudWatch
provides a comprehensive set of monitoring and observability tools that help you collect and
analyze data from various AWS services, custom applications, and third-party integrations. Here
are the steps to implement application monitoring with Amazon CloudWatch:
1. AWS Resource Integration:
Ensure that your AWS resources (e.g., EC2 instances, Lambda functions, RDS databases) are
integrated with Amazon CloudWatch. Many AWS services have CloudWatch integration by
default, but for others, you may need to enable it.
2. Install CloudWatch Agent (Optional):
For EC2 instances, you can install the CloudWatch Agent, which allows you to collect system-
level metrics, logs, and custom metrics from your instances. This step is optional but highly
recommended for deeper insights into your instances' performance.
3. Define Custom Metrics (Optional):
Create custom CloudWatch Metrics to monitor application-specific performance and behavior.
You can publish custom metrics using the AWS SDK or AWS Command Line Interface (CLI).
4. Set Up Alarms:
Create CloudWatch Alarms to get notified when specific metric conditions are met. Alarms can
trigger actions like sending notifications via Amazon SNS or auto-scaling your resources.
5. Enable Logging:
Enable CloudWatch Logs for your application's logs. You can create log groups and log streams
to collect and store logs from various sources, including EC2 instances, Lambda functions, and
custom applications.
6. Create Log Metric Filters:
Define CloudWatch Log Metric Filters to extract structured data from your logs. These filters help
you create custom metrics or trigger alarms based on log data patterns.
7. Create Dashboards:
Build CloudWatch Dashboards to create customized views of your application's key performance
indicators (KPIs) and metrics. Dashboards allow you to visualize data from multiple sources on a
single page.
8. Implement Distributed Tracing (Optional):
For microservices architectures, consider implementing distributed tracing using AWS X-Ray. X-
Ray provides insights into how requests flow through your application, helping you identify
bottlenecks and performance issues.
9. Enable Container Insights (For ECS and EKS):

If you're using Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service
(EKS), enable Container Insights to monitor the performance of your containerized applications.
10. Integrate Third-Party Services:
Integrate CloudWatch with third-party services and applications by using CloudWatch Agents,
custom scripts, or third-party monitoring tools that support CloudWatch integration.

11. Implement Anomaly Detection (Optional):


Utilize CloudWatch Anomaly Detection to automatically detect abnormal behavior in your
metrics and trigger alarms when anomalies are detected.
12. Set Up Event Rules (Optional):
Create CloudWatch Event Rules to automate responses to events within your application, such as
scaling resources, triggering Lambda functions, or invoking AWS Step Functions.
13. Continuous Improvement:
Regularly review your CloudWatch metrics, alarms, and logs to identify performance bottlenecks
and areas for optimization. Adjust alarms and resource configurations as needed.
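Steps 3 and 4 above pair a custom metric with an alarm on it. The payloads below show the shape of that configuration with invented names and thresholds; with boto3 they would be passed to the CloudWatch `put_metric_data` and `put_metric_alarm` calls:

```python
# Custom metric publication payload (step 3); namespace, metric name,
# and value are illustrative.
metric_data = {
    "Namespace": "MyApp",
    "MetricData": [
        {"MetricName": "CheckoutLatencyMs", "Value": 412.0,
         "Unit": "Milliseconds"}
    ],
}

# Alarm on that metric (step 4): fire if the 5-minute average stays
# above 500 ms for two consecutive periods.
alarm = {
    "AlarmName": "checkout-latency-high",
    "Namespace": "MyApp",
    "MetricName": "CheckoutLatencyMs",
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 500.0,
    "ComparisonOperator": "GreaterThanThreshold",
}

print(alarm["AlarmName"], alarm["Threshold"])
```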

8.2 Learning about AWS CloudTrail for auditing and compliance.


AWS CloudTrail is a service that provides logging and auditing capabilities for your AWS
resources and accounts. It helps you track user activity and resource changes in your AWS
environment, making it a valuable tool for auditing and compliance, as well as application
monitoring and management. Here's how AWS CloudTrail can be used for auditing and
compliance in the context of application monitoring and management:
1. Enable CloudTrail:
To get started, enable AWS CloudTrail in your AWS account. You can do this through the AWS
Management Console, AWS Command Line Interface (CLI), or AWS CloudFormation.
2. Define Trails:
Create CloudTrail trails to specify which AWS services and regions you want to monitor. Trails
define where log files are stored (Amazon S3 bucket) and whether they should be encrypted.

3. Collect and Store Logs:


CloudTrail collects events and API activity from the AWS Management Console, AWS CLI,
AWS SDKs, and other AWS services you specify. These logs are then stored in the configured
Amazon S3 bucket.
4. Real-Time and Near-Real-Time Monitoring:
CloudTrail provides near-real-time event notifications using Amazon CloudWatch Events. This
allows you to set up automated responses to specific events or patterns, enhancing your ability to
monitor and manage your applications proactively.
5. Auditing and Compliance:
CloudTrail logs provide detailed information about who did what in your AWS environment. You
can use these logs to:
Track changes to AWS resources, including configuration changes.
Monitor user and application activity.
Investigate security incidents and unauthorized access.
Ensure compliance with regulatory requirements (e.g., GDPR, HIPAA, PCI DSS).
6. Analysis and Reporting:
You can use tools like Amazon Athena, Amazon QuickSight, or other log analysis solutions to
query and analyze CloudTrail logs for insights into your application's behavior and security posture.
7. Multi-Account and Multi-Region Support:
CloudTrail can be configured to work across multiple AWS accounts and regions, allowing you to
centralize your logging and monitoring efforts.
8. Integration with AWS Config:
AWS Config can be integrated with CloudTrail to provide a comprehensive view of your AWS
resources and their configuration changes.
9. Event History:
CloudTrail keeps a record of all events for 90 days by default. You can extend this retention
period by archiving logs to Amazon Glacier for long-term storage.
10. Customization:
- You can customize the data that CloudTrail logs by specifying which API calls and AWS
resources to monitor and whether to log read and write events.


8.3 Troubleshooting and optimizing AWS resources.


Troubleshooting and optimizing AWS resources in the context of application monitoring and
management is essential for maintaining the health, performance, and cost-efficiency of your
cloud-based applications. Here's a step-by-step guide to effectively address issues and optimize
AWS resources for application monitoring and management:
1. Define Metrics and Alerts:
Begin by defining the key performance metrics and error thresholds that are critical to your
application's health. Set up CloudWatch alarms to trigger notifications when these thresholds are
breached.
2. Monitor Application Logs:
Use Amazon CloudWatch Logs to collect and centralize logs generated by your application and
AWS services. Set up custom log metrics and filters to extract meaningful information from your
logs.
3. Enable Enhanced Monitoring (EC2 Instances):
If you're using Amazon EC2 instances, enable Enhanced Monitoring to collect detailed OS-level
metrics. This can help diagnose issues related to resource utilization.
4. Distributed Tracing (Optional):
Implement distributed tracing using AWS X-Ray or third-party tools like Jaeger or Zipkin.
Distributed tracing helps you trace requests across microservices and identify bottlenecks.
5. Set Up AWS CloudTrail:
Enable AWS CloudTrail to capture API activity and changes to your AWS resources. This
provides a trail of actions taken on your AWS infrastructure and helps with auditing and
troubleshooting.
6. Continuous Integration/Continuous Deployment (CI/CD):
Implement CI/CD pipelines to automate application deployments. This reduces the risk of
deployment-related issues and streamlines the release process.
7. Implement Auto Scaling:
Use Auto Scaling groups to dynamically adjust the number of application instances based on
traffic. This ensures optimal resource utilization and high availability.
8. Optimize Database Resources:
Review and optimize your database configurations, queries, and indexes to improve performance.
Consider using Amazon RDS Performance Insights for database monitoring.

9. Use Content Delivery Networks (CDNs):


Leverage AWS services like Amazon CloudFront to distribute content closer to your users,
reducing latency and improving application performance.
10. Rightsize Instances:
- Regularly review the sizing of your EC2 instances and adjust them based on resource utilization
patterns. Utilize AWS Trusted Advisor for rightsizing recommendations.


11. Utilize Spot Instances (Optional):
- Consider using Amazon EC2 Spot Instances for non-critical workloads that can tolerate
interruptions. Spot Instances offer cost savings compared to On-Demand instances.
12. Monitor Billing and Cost Explorer:
- Keep an eye on your AWS billing and use AWS Cost Explorer to analyze your spending
patterns. Set up billing alerts to be notified of unexpected cost increases.
13. Review Security Groups and Network ACLs:
- Ensure that your security groups and network ACLs are correctly configured to allow necessary
traffic while maintaining security. Adjust rules as needed.
14. Application-Level Optimization:
- Continuously optimize your application code and architecture for performance and resource
efficiency. Explore serverless computing, containerization, and microservices.
15. Continuous Improvement:
- Establish a culture of continuous improvement by regularly reviewing application performance
and resource utilization. Identify areas for optimization and implement changes accordingly.
16. Incident Response Plan:
- Have a well-defined incident response plan in place to quickly address and resolve issues as they
arise. Document common troubleshooting steps for your team.
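The rightsizing review in step 10 can be sketched as a simple check over CloudWatch CPU datapoints: any instance whose average utilization sits well below a cutoff is a candidate for a smaller instance type. Instance IDs, values, and the threshold below are all invented:

```python
# Average CPU utilization datapoints per instance, as they might come
# back from CloudWatch get_metric_statistics (values invented).
cpu_datapoints = {
    "i-0aaa": [4.1, 3.8, 5.0, 4.4],     # barely used
    "i-0bbb": [61.2, 58.9, 72.3, 65.0], # healthy utilization
}

UNDERUSED_THRESHOLD = 10.0  # percent; an illustrative cutoff

underused = [
    instance
    for instance, points in cpu_datapoints.items()
    if sum(points) / len(points) < UNDERUSED_THRESHOLD
]
print(underused)  # instances worth downsizing
```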

9. Security Best Practices


9.1 Advanced IAM concepts and roles.
Advanced IAM (Identity and Access Management) concepts and roles play a crucial role in
security best practices in AWS. These concepts help organizations fine-tune access control and
enforce the principle of least privilege, which restricts users' and resources' permissions to the
minimum necessary for their tasks. Here are some advanced IAM concepts and roles, along with
security best practices:

1. Least Privilege Principle:

Implement the principle of least privilege by granting users, roles, and services only the
permissions they require to perform their specific tasks. Avoid giving overly broad permissions.

2. IAM Policy Conditions:

Use IAM policy conditions to further restrict access based on factors like IP address, MFA (Multi-
Factor Authentication) status, request source, and more. For example, you can require MFA for
certain API actions.
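As a concrete sketch of such a condition, the policy statement below allows a destructive action only when the caller authenticated with MFA, using the `aws:MultiFactorAuthPresent` condition key; the bucket name is a placeholder:

```python
# Policy statement gated on MFA: the delete is allowed only when the
# request was made with multi-factor authentication.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "Bool": {"aws:MultiFactorAuthPresent": "true"}
            },
        }
    ],
}
print(mfa_policy["Statement"][0]["Condition"])
```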

3. IAM Roles for EC2 Instances:

Assign IAM roles to Amazon EC2 instances instead of using static credentials. This eliminates the
need to manage access keys within your application code and enhances security.

4. Cross-Account Access:

Use IAM roles to enable cross-account access, allowing trusted AWS accounts to assume roles in
your account. This is helpful for third-party services or multiple AWS accounts within an
organization.

5. Identity Federation:

Implement identity federation using services like AWS Single Sign-On (SSO), AWS Identity
Federation, or third-party identity providers (e.g., Active Directory, Okta). This allows users to
access AWS resources using their existing corporate credentials.

6. IAM Access Analyzer:

Leverage IAM Access Analyzer to identify and manage unintended access to your resources. It
helps you detect and remove overly permissive policies.

7. Permission Boundaries:

Set permission boundaries on IAM roles to limit the permissions that can be delegated by users
and roles. This provides an additional layer of access control.

8. Service Control Policies (SCPs):

Use AWS Organizations to create and attach Service Control Policies to organizational units
(OUs) to set fine-grained access controls and restrict actions across multiple AWS accounts.

9. Resource-Based Policies:

Implement resource-based policies for AWS services that allow you to specify who can access a
particular resource (e.g., S3 bucket policy, Lambda function policy).

10. Custom Managed Policies:

- Create custom managed policies tailored to specific roles or groups within your organization.
Avoid attaching overly permissive policies directly to users.

11. IAM Access Advisor:

- Utilize IAM Access Advisor to view service-last-accessed information, helping you identify
unused permissions that can be revoked.

12. AWS Organizations:

- Use AWS Organizations to centrally manage and consolidate AWS accounts and apply
organization-level policies for access control and billing.

13. Role Chaining:

- Be cautious with role chaining, where one role assumes another. Ensure that permissions are
carefully managed to prevent unintended escalation of privileges.

14. Regular Review and Audit:

- Conduct regular reviews and audits of IAM policies, roles, and user permissions to ensure that
they align with your organization's security policies and business needs.

15. Multi-Factor Authentication (MFA):

- Enforce MFA for privileged users and root accounts to add an additional layer of security.

16. Continuous Monitoring:

- Implement continuous monitoring solutions like AWS CloudTrail and AWS Config to track and
log changes to IAM policies and roles.

17. Security Automation:

- Use AWS Lambda functions to automate security tasks, such as rotating IAM access keys,
enforcing security best practices, and responding to security events.


Advanced IAM concepts and roles are critical for enforcing robust security practices in AWS. By
following these best practices, you can achieve granular control over permissions, reduce the
attack surface, and ensure that your AWS environment remains secure and compliant.

9.2 Implementing encryption and data protection.


Implementing encryption and data protection is a fundamental security practice to safeguard
sensitive information and ensure the confidentiality and integrity of data. Here are key steps and
best practices for implementing encryption and data protection in your organization:

1. Data Classification:

Begin by classifying your data to identify what needs to be protected. Categorize data as public,
internal, confidential, or sensitive based on its sensitivity and regulatory requirements.

2. Data Encryption at Rest:

Use encryption to protect data when it is stored in databases, file systems, object storage, or
backup solutions.

In AWS, leverage services like Amazon S3 Server-Side Encryption, Amazon RDS encryption,
and AWS Key Management Service (KMS) for managing encryption keys.
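As a minimal sketch (bucket, object key, and KMS alias below are illustrative), requesting SSE-KMS on an S3 upload comes down to two extra parameters on the put_object call; the helper just assembles them:

```python
# Hedged sketch: build the parameters for a boto3 s3.put_object call that
# asks S3 to encrypt the object at rest with a customer-managed KMS key.
def sse_kms_put_params(bucket: str, key: str, body: bytes, kms_key_id: str) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # KMS-managed rather than S3-managed keys
        "SSEKMSKeyId": kms_key_id,
    }

params = sse_kms_put_params(
    "example-bucket", "reports/q1.csv", b"sample data", "alias/example-data-key"
)
# A real upload would then be: boto3.client("s3").put_object(**params)
```

A default-encryption rule on the bucket itself is the more robust option, since it applies even when a caller forgets these parameters.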

3. Data Encryption in Transit:

Encrypt data when it's in transit between clients and servers or between services. Use protocols like
HTTPS/TLS for web traffic and VPNs or Direct Connect for network connections.
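On the client side, enforcing TLS can be illustrated with Python's standard library: the default SSL context verifies the server certificate and hostname, and a minimum protocol version can be pinned so older TLS versions are refused:

```python
import ssl

# The default context enables certificate and hostname verification.
ctx = ssl.create_default_context()

# Additionally refuse anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any socket wrapped with this context will fail the handshake unless the server presents a valid certificate over TLS 1.2 or newer.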


4. Encryption Key Management:

Properly manage encryption keys to ensure their security. Use a centralized key management
service like AWS KMS to generate, store, and rotate encryption keys.

5. Identity and Access Management (IAM):

Implement strict access controls and IAM policies to limit who can access and manage encryption
keys and sensitive data.
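A least-privilege identity policy for key access can be sketched as follows (the key ARN is illustrative): the principal is allowed to decrypt with one specific KMS key and nothing else, rather than being granted broad kms:* permissions:

```python
import json

# Hedged sketch of a narrowly scoped IAM identity policy: permission to use
# kms:Decrypt on a single named key only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        }
    ],
}
policy_document = json.dumps(policy)
```

Statements scoped this tightly limit both who can use an encryption key and, by extension, who can reach the data it protects.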

6. Data Masking and Redaction:

Implement data masking and redaction techniques to conceal sensitive information from
unauthorized users in logs, reports, and user interfaces.
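A simple masking pass can be sketched with regular expressions (the patterns below are illustrative and deliberately not exhaustive): redact anything that looks like an email address or a 13-16 digit card number before a string reaches a log or user interface:

```python
import re

# Hedged sketch of log/report masking: the patterns cover common email
# addresses and bare 13-16 digit card-like numbers, not every PII format.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b\d{13,16}\b")

def mask(text: str) -> str:
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return CARD.sub("[CARD REDACTED]", text)
```

Production systems typically combine rules like these with managed detection services rather than relying on hand-written patterns alone.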

7. Data Loss Prevention (DLP):

Use DLP solutions to automatically detect and prevent the transmission of sensitive data outside
the organization's network or systems.

8. Secure Coding Practices:

Ensure that application developers follow secure coding practices to prevent data exposure
vulnerabilities like SQL injection or improper data handling.
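The SQL injection point can be illustrated with a parameterized query: user input is passed as a bound parameter rather than concatenated into the SQL string, so a value like "x' OR '1'='1" cannot change the query's structure (the table and data here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# The injection attempt is treated as a literal name, not as SQL.
user_input = "x' OR '1'='1"
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
# rows is empty: no user is literally named "x' OR '1'='1"
```

Had the input been concatenated into the query string, the same value would have matched every row.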

9. Endpoint Security:

Secure endpoints (e.g., laptops, mobile devices) with encryption, device management, and
security policies to protect data stored on these devices.

10. Database Security:

- Apply strong authentication and access controls to databases. Use encryption for data at rest and
in transit. Regularly patch and update database software.

11. Backup and Disaster Recovery:

- Encrypt data in backup solutions and ensure that encryption keys are securely managed.
Implement disaster recovery plans to prevent data loss during disasters.

12. Audit and Monitoring:

- Implement continuous monitoring and auditing of data access and encryption status. Use
services like AWS CloudTrail and AWS Config to track changes and access to resources.

13. Security Awareness and Training:

- Train employees on data protection best practices, including secure handling, sharing, and
storage of sensitive data.


14. Regular Audits and Compliance:

- Conduct regular security audits and assessments to verify compliance with encryption and data
protection policies and regulations (e.g., GDPR, HIPAA).

15. Incident Response Plan:

- Develop an incident response plan that includes procedures for responding to data breaches and
incidents involving sensitive data.

16. Secure File Sharing:

- Use secure file sharing solutions that offer encryption, access controls, and audit trails. Avoid
sharing sensitive data through unsecured channels.

17. Third-Party Vendors and Cloud Services:

- If using third-party vendors or cloud services, ensure they have robust encryption and data
protection mechanisms in place. Review their security practices and agreements.

18. Regulatory Compliance:

- Understand the data protection requirements specific to your industry and geographic region.
Comply with relevant regulations and standards.

19. Encryption for Mobile Applications:

- If you have mobile applications, implement encryption for data stored on mobile devices and
during data transmission.


10. Conclusion
The knowledge and experience I have gained during this internship will be highly valuable for my career in cloud computing and AWS-related roles. As I conclude the internship, here are the key takeaways:

Hands-On AWS Experience: I had the opportunity to work with various AWS services, gaining practical experience in deploying virtual machines, managing storage, configuring networking, working with databases, and building serverless applications. This practical knowledge is a strong foundation for future AWS projects.

Cloud Computing Skills: Cloud computing skills, particularly in AWS, are in high demand in the tech industry. The internship has equipped me with essential cloud computing skills, making me more marketable to potential employers.

IAM and Security: Understanding AWS Identity and Access Management (IAM) and security best practices is crucial for securing cloud resources. I learned how to set up secure access and control permissions within AWS.

Scalability and Cost Optimization: AWS provides tools and services for optimizing costs and scaling resources as needed. I explored auto scaling, monitoring, and cost management strategies to keep cloud expenses in check.

Serverless Architecture: Building serverless applications with AWS Lambda and API Gateway is a modern approach to application development. My experience in serverless computing will be valuable for creating efficient and scalable applications.

Database Management: Understanding AWS database services such as RDS, DynamoDB, and Aurora is essential for managing data in the cloud. I gained insight into how to create and manage databases in AWS.

Networking and VPC: Knowledge of Amazon Virtual Private Cloud (VPC) is vital for configuring secure and isolated network environments. I learned how to set up subnets, security groups, and network access control lists (NACLs) within a VPC.

Data Backup and Recovery: AWS offers robust data backup and recovery solutions. I explored strategies for data backup, snapshot management, and disaster recovery.

API Gateway and RESTful APIs: Building RESTful APIs with Amazon API Gateway is a key skill for creating web services and microservices. I learned how to design and deploy APIs using API Gateway.

Best Practices: Throughout the internship, I encountered best practices for cloud architecture, security, and optimization that are valuable for building reliable and cost-effective cloud solutions.

As I conclude this AWS Cloud virtual internship, I plan to build further on the knowledge and skills I have gained. AWS offers certification programs that can validate my expertise and enhance my career prospects, and staying up to date with new AWS services and features will help me remain competitive in the cloud computing field.


ACTIVITY LOG
WEEK       BRIEF DESCRIPTION OF THE DAILY ACTIVITY / LEARNING OUTCOME                   PERSON IN-CHARGE SIGNATURE

Week 1     Orientation and Introduction: created an AWS account and learned the
           basics of the AWS cloud and how to access the AWS console.

Week 2     AWS Fundamentals: learned the fundamentals of AWS services such as
           EC2, S3, and RDS.

Week 3     Networking and Security: created VPCs and subnets and implemented
           security best practices.

Week 4     Databases and Storage: explored AWS databases such as RDS, DynamoDB,
           and Aurora with hands-on practice; understood backup and recovery and EBS.

Week 5     Serverless Computing: introduction to serverless; built, deployed, and
           explored APIs with Lambda integration.

Week 6     IAM and Security: configured IAM roles, policies, and permissions and
           studied the importance of IAM.

Week 7     Big Data and Analytics: introduction to AWS analytics; worked with ETL
           and built data pipelines.

Week 8     Application Monitoring and Management: monitored applications with
           CloudWatch and CloudTrail; troubleshooting and billing.

Week 9     Security Best Practices: advanced IAM concepts; implemented encryption
           and data protection.

Week 10    Conclusion: concluding overview of AWS cloud computing.


Student Self Evaluation of the Summer Internship

Student Name: ANUMANEDI NAGA SAI KIRAN    Registration No: 20NG1A1205


Term of Internship: SUMMER From: MAY 2023 To: JULY 2023

Date of Evaluation:
Organisation Name & Address: AMAZON WEB SERVICES

Please rate your performance in the following areas:

Rating Scale: Letter grade of CGPA calculation to be provided

1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5

Date: Signature of the Student


Evaluation by the Supervisor of the Intern Organization

Student Name: ANUMANEDI NAGA SAI KIRAN    Registration No: 20NG1A1205

Term of Internship: SUMMER From: MAY 2023 To: JULY 2023

Date of Evaluation:
Organisation Name & Address: AMAZON WEB SERVICES

Name & Address of the Supervisor with Mobile Number:

Please rate the student’s performance in the following areas:


Please note that your evaluation shall be done independently of the Student's self-evaluation

Rating Scale: 1 is the lowest and 5 is the highest rating

1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5

Date: Signature of faculty Supervisor


INTERNAL ASSESSMENT STATEMENT

Name Of the Student : ANUMANEDI NAGA SAI KIRAN

Programme of Study : B.Tech

Year of Study : IV

Group : INFORMATION TECHNOLOGY

Register No/H.T. No : 20NG1A1205

Name of the College : USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY

University: JNTUK

Sl.No   Evaluation Criterion        Maximum Marks   Marks Awarded

1.      Activity Log                10

2.      Internship Evaluation       30

3.      Oral Presentation           10

        GRAND TOTAL                 50

Date: Signature of the Faculty supervisor


EXTERNAL ASSESSMENT STATEMENT


Name Of the Student : ANUMANEDI NAGA SAI KIRAN

Programme of Study : B.Tech

Year of Study : IV

Group : INFORMATION TECHNOLOGY

Register No/H.T. No : 20NG1A1205

Name of the College : USHA RAMA COLLEGE OF ENGINEERING AND TECHNOLOGY

University: JNTUK

Sl.No   Evaluation Criterion                                         Maximum Marks   Marks Awarded

1.      Internship Evaluation                                        10

2.      Grade given by the Supervisor of the Intern Organization     20

3.      Viva-Voce                                                    20

        TOTAL                                                        50

Signature of the Faculty supervisor Signature of the External Expert

Signature of the HOD with Seal

