AWS CLOUD VIRTUAL INTERNSHIP
Submitted to
Jawaharlal Nehru Technological University Kakinada
Department of
INFORMATION TECHNOLOGY
Submitted by:
Reg.No:22NG1A1263
I hereby declare that I have completed the mandatory internship from May to July.
TABLE OF CONTENTS
Chapter-1 Orientation and Introduction (Week 1) 1-6
1.1 Introduction to the AWS Cloud platform
1.2 Setting up AWS accounts and accessing the AWS Management Console
1.3 Overview of basic AWS services and terminology
Chapter-2 AWS Fundamentals (Week 2) 6-12
2.1 Deep dive into AWS core services such as EC2, S3, and RDS
2.2 Hands-on experience with launching virtual machines (EC2 instances) and storing data in S3 buckets
2.3 Introduction to AWS Identity and Access Management (IAM)
Chapter-3 Networking and Security (Week 3) 13-19
3.1 Understanding AWS Virtual Private Cloud (VPC)
3.2 Configuring VPC, subnets, and security groups
3.3 Implementing network security best practices
3.4 Learning about AWS security services like AWS WAF and AWS Shield
Chapter-4 Databases and Storage (Week 4) 19-29
4.1 Exploring AWS database services like RDS, DynamoDB, and Aurora
4.2 Hands-on experience with creating and managing databases
4.3 Understanding data backup and recovery strategies
4.4 Introduction to Amazon EBS and storage options
Chapter-5 Serverless Computing (Week 5) 30-36
5.1 Introduction to serverless architecture using AWS Lambda
5.2 Building and deploying serverless functions
5.3 Exploring Amazon API Gateway for creating RESTful APIs
1.1.4 Managed Services: AWS offers many managed services that handle operational tasks like
patching, provisioning, and scaling, reducing the administrative burden on users.
1.1.5 DevOps and Automation: AWS supports automation and DevOps practices through
services like AWS Elastic Beanstalk, AWS CodePipeline, and AWS CodeDeploy.
1.1.6 Analytics and Machine Learning: AWS provides services like Amazon Redshift for data
warehousing, Amazon EMR for big data processing, and Amazon SageMaker for machine
learning.
1.1.7 IoT (Internet of Things): AWS IoT Core and related services enable the management of
IoT devices, data, and applications.
1.1.8 Serverless Computing: AWS Lambda allows you to run code without provisioning or
managing servers, making it easy to build highly scalable and cost-efficient applications.
1.1.9 Containers: AWS offers services like Amazon ECS (Elastic Container Service) and Amazon
EKS (Elastic Kubernetes Service) for container management and orchestration.
1.1.10 Content Delivery and Edge Computing: Amazon CloudFront is a content delivery
network (CDN) for fast and secure content delivery, while AWS Wavelength brings AWS services
to the edge for low-latency applications.
1.1.11 Cost Management: AWS provides tools like AWS Cost Explorer and AWS Budgets to
help users monitor and control their cloud costs.
1.1.12 Security and Compliance: AWS takes security seriously, offering features like VPC,
security groups, and identity and access management. AWS also complies with various industry
standards and certifications.
1.1.13 Support and Ecosystem: AWS has a vast ecosystem of partners, a supportive community,
and various support plans to help users with their AWS deployments.
1.1.14 Hybrid and Multi-Cloud: AWS supports hybrid and multi-cloud architectures, allowing
organizations to integrate their on-premises data centers with the cloud or use multiple cloud
providers.
1.1.15 AWS Marketplace: A marketplace for third-party software and services that can be easily
integrated into your AWS environment.
1.2 Setting up AWS accounts and accessing the AWS Management Console
Setting up an AWS account and accessing the AWS Management Console is the first step in using
Amazon Web Services. Here's a step-by-step guide on how to do this:
Setting Up an AWS Account:
Step 1: Open the AWS Registration Page:
Open the AWS sign-up page in your web browser.
Step 2: Create an AWS Account:
Click the "Create an AWS Account" button to start the account creation process.
Step 3: Provide Account Information:
Fill in the required information, including your email address, password, and account name. Ensure
that your password meets AWS security requirements.
Step 4: Contact Information:
Enter your contact information, including your name, address, and phone number.
Step 5: Payment Information:
Enter your payment information. AWS offers a Free Tier with limited resources for the first 12
months, but you'll need a valid payment method to create an account.
Step 6: Choose a Support Plan:
AWS offers different support plans, including a free basic plan. Choose the one that best suits your
needs.
Step 7: Identity Verification:
AWS may require identity verification to prevent misuse of its services. You can
choose between receiving a phone call or using a text message for verification.
Step 8: Accept the AWS Customer Agreement:
Review the AWS Customer Agreement and the AWS Service Terms, and then click "Create
Account and Continue" if you agree with the terms.
2. AWS FUNDAMENTALS
2.1 Deep dive into AWS core services such as EC2, S3, and RDS
Amazon EC2 (Elastic Compute Cloud) provides resizable virtual servers in the AWS cloud.
Key Concepts:
Instances: These are the virtual servers you launch in the AWS cloud. You can choose from
various instance types, each optimized for different use cases, such as compute-intensive, memory-
intensive, or GPU-accelerated workloads.
AMI (Amazon Machine Image): An AMI is a pre-configured virtual machine image used to
create instances. AWS provides many publicly available AMIs, and you can create your custom
AMIs.
Security Groups: Security groups act as virtual firewalls for your instances. You can define
inbound and outbound traffic rules to control access to your instances.
Key Pairs: Key pairs are used to securely access your EC2 instances. You create a key pair, and
AWS stores the public key while you keep the private key.
Elastic IP Addresses: Elastic IP addresses are static IP addresses that you can allocate to your
instances. They're useful when you want to ensure your instance has a consistent IP address.
Instance Storage: EC2 instances can have instance storage (ephemeral storage) attached. This
storage is temporary and typically used for caching and scratch space.
Elastic Load Balancers: You can use Elastic Load Balancers to distribute incoming traffic across
multiple EC2 instances for high availability and fault tolerance.
Use Cases:
Web hosting, application hosting, scalable and on-demand computing resources, batch processing,
machine learning, and more…
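As a brief illustration of these concepts, the following minimal sketch uses the AWS SDK for Python (boto3) to launch a single EC2 instance. The AMI ID, key pair name, and security group ID are placeholder values and would need to be replaced with real identifiers from your account.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one t2.micro instance from a (placeholder) Amazon Machine Image.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",                      # placeholder key pair name
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "internship-demo"}],
        }],
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])

Running this sketch requires valid AWS credentials with EC2 permissions configured on the machine that executes it.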
Amazon S3 (Simple Storage Service) is an object storage service for storing and retrieving any amount of data.
Key Concepts:
Buckets: S3 uses containers called buckets to store objects. Each bucket has a globally unique name.
Objects: Objects are the data files stored in S3 buckets. They can be of any file type, and S3
provides features for versioning, lifecycle management, and access control.
Data Consistency: S3 provides strong read-after-write consistency for all objects, ensuring that
once an object is written, it can be read immediately.
Data Encryption: S3 supports encryption in transit (SSL/TLS) and at rest (server-side and client-
side encryption).
Storage Classes: S3 offers multiple storage classes, including Standard, Intelligent-Tiering, Glacier,
and others. Each class is designed for different use cases and has different costs associated with it.
Access Control: S3 allows you to control access to your objects using bucket policies, IAM
policies, and Access Control Lists (ACLs).
Event Notifications: You can configure S3 to trigger events (e.g., Lambda functions) based on
changes to objects in a bucket.
Use Cases:
Data storage for web applications, backup and archiving, content distribution, big data analytics,
data lakes, and serving static assets for websites.
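To make the bucket and object concepts concrete, here is a minimal boto3 sketch that uploads a file to a bucket and reads it back. The bucket name and object key are placeholders; bucket names must be globally unique and the bucket is assumed to exist already.

    import boto3

    s3 = boto3.client("s3")
    bucket = "doc-example-bucket1"        # placeholder bucket name (must already exist)
    key = "photos/mygarden.jpg"

    # Upload a local file as an object, then download it again.
    s3.upload_file("mygarden.jpg", bucket, key)
    obj = s3.get_object(Bucket=bucket, Key=key)
    print("Downloaded", len(obj["Body"].read()), "bytes from", key)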
Amazon RDS is a managed relational database service that simplifies the setup, operation, and
scaling of relational databases such as MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.
Key Concepts:
DB Instances: RDS provides fully managed database instances running on your choice of database
engine.
Automated Backups: RDS automatically takes daily backups and allows you to create manual
snapshots of your databases.
Multi-AZ Deployments: RDS supports high availability through Multi-AZ deployments, where a
standby instance is automatically provisioned in a separate Availability Zone for failover.
Read Replicas: RDS allows you to create read replicas of your database for read scalability and
redundancy.
Security: RDS offers various security features, including encryption at rest and in transit, VPC
isolation, and IAM database authentication.
Scalability: You can vertically scale (resize) your RDS instances or horizontally scale by adding
read replicas.
Database Engines: RDS supports several popular database engines, each with its own
configuration options and features.
Use Cases:
Hosting web applications, managing e-commerce databases, data warehousing, business
intelligence, and disaster recovery for relational databases.
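As a rough sketch (not an exact recipe), the snippet below creates a small MySQL DB instance with boto3 and waits until it becomes available. The identifier, instance class, and credentials are placeholder values; in practice you would also choose a VPC, subnet group, and security groups.

    import boto3

    rds = boto3.client("rds")

    # Create a small MySQL instance (placeholder identifier and credentials).
    rds.create_db_instance(
        DBInstanceIdentifier="sample-mysql-db",
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_WITH_A_STRONG_PASSWORD",
        AllocatedStorage=20,
        PubliclyAccessible=False,
    )

    # Block until the instance reaches the "available" state.
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="sample-mysql-db"
    )
    print("DB instance is available")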
Amazon EC2 uses Amazon S3 for storing Amazon Machine Images (AMIs). You use AMIs for
launching EC2 instances. In case of instance failure, you can use the stored AMI to
immediately launch another instance, thereby allowing for fast recovery and business
continuity.
Amazon EC2 also uses Amazon S3 to store snapshots (backup copies) of the data volumes. You
can use snapshots for recovering data quickly and reliably in case of application or system failures.
You can also use snapshots as a baseline to create multiple new data volumes, expand the size of
an existing data volume, or move data volumes across multiple Availability Zones, thereby making
your data usage highly scalable. For more information about using data volumes and snapshots,
see Amazon Elastic Block Store.
Objects are the fundamental entities stored in Amazon S3. Every object stored in Amazon S3 is
contained in a bucket. Buckets organize the Amazon S3 namespace at the highest level and identify
the account responsible for that storage. Amazon S3 buckets are similar to internet domain names.
Objects stored in the buckets have a unique key value and are retrieved using a URL. For example,
if an object with a key value /photos/mygarden.jpg is stored in the DOC-EXAMPLE-
BUCKET1 bucket, then it is addressable using the URL https://DOC-EXAMPLE-
BUCKET1.s3.amazonaws.com/photos/mygarden.jpg.
IAM Features
IAM gives you the following features:
Shared access to your AWS account
You can grant other people permission to administer and use resources in your AWS account
without having to share your password or access key.
Granular permissions
You can grant different permissions to different people for different resources. For example, you
might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2),
Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and
other AWS services. For other users, you can allow read-only access to just some S3 buckets, or
permission to administer just some EC2 instances, or to access your billing information but
nothing else.
Secure access to AWS resources for applications that run on Amazon EC2
You can use IAM features to securely provide credentials for applications that run on EC2
instances. These credentials provide permissions for your application to access other AWS
resources. Examples include S3 buckets and DynamoDB tables.
ACCESSING IAM
You can work with AWS Identity and Access Management in any of the following ways.
AWS Management Console
The console is a browser-based interface to manage IAM and AWS resources. For more
information about accessing IAM through the console, see How to sign in to AWS in the AWS
Sign-In User Guide.
AWS Command Line Tools
You can use the AWS command line tools to issue commands at your system's command line
to perform IAM and AWS tasks. Using the command line can be faster and more convenient
than the console. The command line tools are also useful if you want to build scripts that
perform AWS tasks.
The following diagram shows an example VPC. The VPC has one subnet in each of the
Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow
communication between the resources in your VPC and the internet.
Features
The following features help you configure a VPC to provide the connectivity that your
applications need:
Virtual private clouds (VPC)
A VPC is a virtual network that closely resembles a traditional network that you'd operate in your
own data center. After you create a VPC, you can add subnets.
Subnets
A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability
Zone. After you add subnets, you can deploy AWS resources in your VPC.
IP addressing
You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also
bring your public IPv4 and IPv6 GUA addresses to AWS and allocate them to resources in your
VPC, such as EC2 instances, NAT gateways, and Network Load Balancers.
Routing
Use route tables to determine where network traffic from your subnet or gateway is directed.
Gateways and endpoints
A gateway connects your VPC to another network. For example, use an internet gateway to
connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately,
without the use of an internet gateway or NAT device.
Peering connections
Use a VPC peering connection to route traffic between the resources in two VPCs.
Traffic Mirroring
Copy network traffic from network interfaces and send it to security and monitoring appliances
for deep packet inspection.
Transit gateways
Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN
connections, and AWS Direct Connect connections.
VPC Flow Logs
A flow log captures information about the IP traffic going to and from network interfaces in your
VPC.
VPN connections
Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS
VPN).
In the AWS Management Console, navigate to the VPC service by clicking on "Services"
and selecting "VPC" under the "Networking & Content Delivery" section.
Create a VPC:
Click on "Your VPCs" in the VPC dashboard, and then click the "Create VPC" button.
Enter a name for your VPC, specify the IPv4 CIDR block (IP address range), and
configure any additional settings as needed.
Create Subnets:
Within your VPC, you can create one or more subnets by specifying a unique CIDR block
for each. Subnets can be public or private, depending on their routing configuration.
Go to the "Subnets" section in the VPC dashboard and click the "Create Subnet" button.
Specify the VPC you created earlier, provide a name, and choose an Availability Zone
(AZ) for the subnet.
Define the CIDR block for the subnet and click "Create."
Create additional subnets for different purposes, such as public-facing subnets for web
servers and private subnets for databases.
Step 3: Configure Route Tables
In the VPC dashboard, navigate to "Route Tables" and click the "Create Route Table"
button.
Give the route table a name and associate it with your VPC.
Edit the route table's routes to control traffic flow. For example, create a public route table
with a route to an Internet Gateway (IGW) for public subnets and a private route table
with routes to a Network Address Translation (NAT) Gateway for private subnets.
Associate Subnets:
Associate each subnet with the appropriate route table, ensuring that public subnets use the
public route table and private subnets use the private route table.
In the VPC dashboard, go to "Security Groups" and click the "Create Security Group"
button.
Configure inbound and outbound rules for the security group to control traffic. Rules are
stateful, meaning that if you allow inbound traffic from a specific IP address, outbound
traffic is automatically allowed in response.
Associate the security group with the relevant EC2 instances, RDS databases, or other
resources. You can do this when launching or modifying resources.
Launch Resources:
Create EC2 instances, RDS databases, or other resources within your subnets and associate
them with the appropriate security groups.
Test Connectivity:
Verify that your resources can communicate as expected. Ensure that security group rules
and routing tables are correctly configured.
Refine as Needed:
Adjust security group rules, subnet configurations, and routing tables based on your testing
and specific requirements.
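The console steps above can also be scripted. The sketch below uses boto3 to create a VPC, a public subnet, an internet gateway, a route table, and a security group that allows HTTP; the CIDR blocks and names are example values only.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the VPC and a subnet inside it (example CIDR blocks).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
    )["Subnet"]["SubnetId"]

    # Attach an internet gateway and route 0.0.0.0/0 through it (public subnet).
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

    # Security group that allows inbound HTTP from anywhere.
    sg_id = ec2.create_security_group(
        GroupName="web-sg", Description="Allow HTTP", VpcId=vpc_id
    )["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=sg_id, IpProtocol="tcp", FromPort=80, ToPort=80, CidrIp="0.0.0.0/0"
    )
    print("VPC:", vpc_id, "Subnet:", subnet_id, "Security group:", sg_id)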
When you add subnets to your VPC to host your application, create them in
multiple Availability Zones. An Availability Zone is one or more discrete data
centers with redundant power, networking, and connectivity in an AWS Region.
Using multiple Availability Zones makes your production applications highly
available, fault tolerant, and scalable. For more information, see Amazon VPC on
AWS.
Use security groups to control traffic to EC2 instances in your subnets. For more
information, see Security groups.
Use network ACLs to control inbound and outbound traffic at the subnet level. For
more information, see Control traffic to subnets using network ACLs.
Manage access to AWS resources in your VPC using AWS Identity and Access
Management (IAM) identity federation, users, and roles. For more information,
see Identity and access management for Amazon VPC.
Use VPC Flow Logs to monitor the IP traffic going to and from a VPC, subnet, or
network interface. For more information, see VPC Flow Logs.
Use Network Access Analyzer to identify unintended network access to resources
in our VPCs. For more information, see the Network Access Analyzer Guide.
Use AWS Network Firewall to monitor and protect your VPC by filtering inbound
and outbound traffic. For more information, see the AWS Network Firewall Guide.
3.4 Learning about AWS security services like AWS WAF and AWS Shield.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that
are forwarded to your protected web application resources. You can protect the following
resource types:
Amazon CloudFront distribution
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API
4.1 Exploring AWS database services like RDS, DynamoDB, and Aurora.
The relational database is the most widely used type of database. SQL databases store data in
interconnected tables, and all the data is organized in a logically structured manner that ensures
data integrity. In simple terms, a relational database lets you store and work with pieces of
information that are related to one another.
Amazon RDS is a managed relational database service that simplifies database administration tasks
like provisioning, patching, backup, recovery, and scaling. It supports popular relational database
engines like MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.
Now we will look at some popular database services offered by AWS.
1. Amazon RDS (Relational database service)
Amazon RDS is the most popular database service offered by AWS in the relational database
category. Amazon RDS supports several commercial and open-source database engines. With
RDS, customers can easily set up, manage, and scale their databases in the AWS cloud, and most
tasks can be performed from the Amazon console with a few clicks. One of the main advantages of
Amazon RDS is that it is cost-effective compared to other database services. You can connect
various applications to databases launched in RDS. Here are some use cases of Amazon RDS.
Key Features:
Managed Service: AWS handles database maintenance tasks, allowing you to focus on your
application and data.
Multi-AZ Deployment: RDS provides high availability by replicating your database to a standby instance in a separate Availability Zone.
Use cases
Amazon RDS is an ideal database service for small and medium-scale eCommerce businesses.
Amazon RDS can provide highly affordable and scalable database solutions to the apps of these
businesses.
You can easily set up and scale your database for your web and mobile applications.
Amazon RDS is good for the online gaming business. It provides a good database infrastructure
that can automatically scale up as per demand.
A useful feature of Amazon RDS is that it can automate various tasks such as backup and restore,
database setup, automatic upgrades, maintenance, and hardware provisioning.
2. AWS DynamoDB
AWS DynamoDB is an easy-to-manage and fast NoSQL database service offered by AWS.
DynamoDB can work with multiple servers located in different regions, can handle more than 10
trillion requests per day, and comes with built-in backup and restore features. DynamoDB has
three components: tables, items, and attributes.
Unlike relational databases, a DynamoDB table is not structured with a fixed set of columns.
Attributes are similar to data values in relational databases, and a collection of related attributes
forms an item.
Amazon DynamoDB is a fully managed NoSQL database service designed for seamless
scalability, high performance, and low latency. It is ideal for applications that require fast and
flexible data storage with automatic scaling.
Key Features:
Serverless and On-Demand: DynamoDB is serverless, so you don't need to manage infrastructure.
You pay only for the resources you use.
Scalability: It can handle massive workloads and automatically scales to accommodate increased
traffic.
Flexible Schema: DynamoDB is schema-less, allowing you to store and retrieve data without
predefined schemas.
Security: It offers fine-grained access control, encryption, and integrates with AWS Identity and
Access Management (IAM).
Streams: You can use DynamoDB Streams to capture changes in your data and trigger event-
driven processing.
Use Cases:
Real-time applications, gaming, IoT, mobile apps, and any application requiring a fast and highly
available NoSQL database.
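To illustrate tables, items, and attributes, here is a minimal boto3 sketch that writes and reads one item. The table name GameScores and its key attributes are hypothetical; the table must already exist with a matching key schema.

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("GameScores")   # hypothetical existing table

    # Write one item (attributes beyond the key can vary per item).
    table.put_item(Item={"PlayerId": "player-123", "GameTitle": "Galaxy", "Score": 9500})

    # Read the same item back by its key.
    response = table.get_item(Key={"PlayerId": "player-123", "GameTitle": "Galaxy"})
    print(response.get("Item"))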
3. Amazon Aurora
It is a relational database engine that is fully managed through Amazon RDS. Amazon Aurora is a
database engine created for the cloud that is secure, scalable, and high performing.
The storage infrastructure offered by Amazon Aurora uses cloud-native technology that makes it
very efficient and fast; AWS states it can be up to five times faster than MySQL. Amazon Aurora
also lets users configure the storage framework to suit their workload, and its storage grows
automatically in 10 GB increments up to an upper limit of 64 TB as required.
One more crucial benefit of Amazon Aurora is that you can use your existing software, drivers,
and programs with it, because it is compatible with popular relational databases such as MySQL
and PostgreSQL.
Apart from that, Amazon Aurora offers features such as backup and recovery, security, monitoring,
compliance, and automatic data recovery.
Amazon Aurora is a fully managed, highly available, and high-performance relational database
engine that is compatible with MySQL and PostgreSQL. It provides the performance and
availability of commercial databases at a fraction of the cost.
Key Features:
Performance: Aurora is designed for high performance, with fast read and write operations.
Compatibility: It is compatible with MySQL and PostgreSQL, allowing you to use familiar tools
and drivers.
Replication: Aurora provides automated replication for high availability and failover.
Automatic Backups: It takes continuous backups with no performance impact and offers point-in-
time recovery.
Security: Aurora offers encryption at rest and in transit, IAM database authentication, and VPC
isolation.
Use cases
Amazon Aurora offers robust, fully managed database services, so businesses can focus on core
areas such as building high-quality software and good SaaS offerings.
Amazon Aurora offers large-scale storage for online gaming applications.
Amazon Aurora can also help businesses cut costs when running large existing databases.
13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database
name. Keep the default settings for the other options.
15. To create your MySQL DB instance, choose Create database.
Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the
DB instance name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.
Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance.
Amazon EBS offers several types of storage volumes, each designed for specific use cases and
performance requirements:
Amazon EBS General Purpose (SSD):
These volumes, known as gp2, provide a balance of price and performance for a wide range of
workloads. They are suitable for most applications and offer low-latency and consistent
performance.
Amazon EBS Provisioned IOPS (SSD):
These volumes, known as io1, are designed for I/O-intensive workloads that require high
performance and low-latency access. You can specify the number of IOPS (Input/Output
Operations Per Second) when provisioning these volumes.
Amazon EBS Throughput Optimized (HDD):
These volumes, known as st1, are optimized for applications that require high throughput for
large, sequential read/write workloads. They are often used for data warehouses and big data
processing.
Amazon EBS Cold HDD:
These volumes, known as sc1, are designed for less frequently accessed data that can tolerate
lower performance. They offer cost-effective storage for infrequently used data.
Amazon EBS Magnetic (HDD):
These volumes, known as standard, are the original EBS volume type and are suitable for
applications with modest I/O requirements.
Use Cases for Amazon EBS:
o Amazon EBS is used in a variety of scenarios, including:
o Storing data files, databases, and application code.
o Running operating systems on EC2 instances (boot volumes).
o Providing storage for big data processing and analytics.
o Hosting web applications and content management systems.
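As a small sketch of how EBS volumes are handled programmatically, the boto3 code below creates a 20 GiB gp2 volume and attaches it to an instance; the instance ID and device name are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a 20 GiB General Purpose (gp2) volume in one Availability Zone.
    volume_id = ec2.create_volume(
        AvailabilityZone="us-east-1a", Size=20, VolumeType="gp2"
    )["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

    # Attach the volume to an existing instance (placeholder instance ID).
    ec2.attach_volume(
        VolumeId=volume_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf"
    )
    print("Attached volume", volume_id)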
5. Serverless Computing
5.1 Introduction to serverless architecture using AWS Lambda.
Serverless architecture is a cloud computing paradigm that abstracts away server management
tasks, allowing developers to focus solely on writing code and building applications without the
need to provision or manage servers. AWS Lambda is a key service provided by Amazon Web
Services (AWS) that enables serverless computing. Here's an introduction to serverless
architecture using AWS Lambda:
No Server Management:
Serverless architecture eliminates the need to manage servers, virtual machines, or containers.
AWS Lambda takes care of server provisioning, scaling, and maintenance.
Event-Driven:
Serverless applications are event-driven, meaning they respond to events or triggers. These events
can be HTTP requests, database changes, file uploads, scheduled tasks, or custom events.
Pay-Per-Use Pricing:
Serverless services like AWS Lambda are billed based on actual usage (e.g., compute time and
memory). You only pay for the resources consumed during the execution of your functions.
Auto Scaling:
Serverless functions scale automatically with demand; the platform runs as many concurrent copies of a function as needed to handle incoming events, with no capacity planning required.
Stateless Functions:
Serverless functions are typically stateless, meaning they don't store persistent data between
invocations. Data is often stored externally, such as in databases or object storage.
AWS Lambda:
AWS Lambda is a serverless compute service that allows you to run code in response to events.
Here are key aspects of AWS Lambda:
Functions: In AWS Lambda, you define functions, which are pieces of code that perform specific
tasks. Functions are designed to be small, focused, and stateless.
Event Sources: Lambda functions are triggered by event sources, which can be various AWS
services (e.g., S3, DynamoDB, SNS) or custom events. When an event occurs, Lambda executes
the associated function.
Scalability: AWS Lambda automatically scales your functions in response to incoming events. It
can run multiple instances of your function concurrently to handle high loads.
Integration: Lambda can be integrated with other AWS services, making it a central part of many
serverless applications. It can also be used with API Gateway to create RESTful APIs.
Web APIs: Create RESTful APIs and microservices using Lambda and API Gateway.
Data Processing: Perform data transformation, filtering, and enrichment in response to data events.
Real-time File Processing: Process file uploads, generate thumbnails, and perform content
moderation.
IoT Applications: Handle data from IoT devices and trigger actions based on sensor data.
Automated Backups: Schedule and automate backup tasks, database snapshots, and log archiving.
Event-Driven Automation: Implement event-driven workflows for DevOps and CI/CD pipelines.
Chatbots and Voice Assistants: Develop chatbots and voice-based applications.
Image and Video Analysis: Analyze and process images and videos, including object detection
and recognition.
AWS Lambda, combined with other AWS services, offers a powerful and scalable platform for
building serverless applications. Developers can focus on writing code that solves business
problems while AWS takes care of the underlying infrastructure and scaling. It's a cost-effective
and efficient way to build modern cloud-native applications.
Choose a Runtime: AWS Lambda supports various runtime environments such as Node.js,
Python, Java, Ruby, Go, .NET Core, and custom runtimes. Select the runtime that suits your
application.
Write Code: Write the code for your Lambda function. The code should be stateless and focus on a single, well-defined task.
Dependencies: If your function has dependencies, include them in your deployment package. For
Node.js, you can use npm; for Python, you can use pip; and so on.
Handler Function: Define a handler function within your code. This function will be the entry
point for your Lambda execution.
Create a Deployment Package: Package your code along with its dependencies into a zip file.
Ensure that the handler function is correctly specified in your AWS Lambda configuration.
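For reference, a Lambda handler in Python can be as small as the sketch below; the file would typically be named lambda_function.py so that the handler setting becomes lambda_function.handler. The event fields used here are illustrative.

    import json

    def handler(event, context):
        # "event" carries the trigger payload; "context" exposes runtime metadata.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

Zipping this file (plus any dependencies) produces the deployment package referred to above.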
Sign in to the AWS Management Console and navigate to the AWS Lambda service.
Create a Function:
Choose "Create function", give the function a name, and select a runtime.
Configure Function:
Define the execution role that specifies the permissions your Lambda function will have.
In the "Function code" section, select the "Upload a .zip file" option.
Set Handler: Specify the handler function in the format filename.handler, where filename is the
name of your code file and handler is the name of the handler function.
Set the memory allocated to your Lambda function and the function timeout.
Configure Test Event: In the Lambda function's configuration, you can create a test event with
sample input data to test your function.
Test Execution: Execute the test event to verify that your Lambda function behaves as expected.
You can use the Serverless Framework or other deployment tools to automate the deployment
process. The Serverless Framework simplifies AWS Lambda deployments and integrates with
other AWS services.
Add Triggers: Lambda functions are often triggered by various events, such as HTTP requests via
Amazon API Gateway, file uploads to Amazon S3, database changes using AWS DynamoDB
Streams, or custom events.
CloudWatch Logs: AWS Lambda automatically logs function execution details to CloudWatch
Logs. Use CloudWatch Logs to monitor and troubleshoot your Lambda functions.
Configure Concurrency: Adjust the concurrency settings to control how many instances of your
Lambda function run concurrently.
Optimize Costs: Monitor your Lambda function usage and costs, and adjust memory allocation and timeout settings as needed.
5.3 Exploring Amazon API Gateway for creating RESTful APIs
In Amazon API Gateway, you build a REST API as a collection of API Gateway resources. For
example, you use a RestApi resource to represent an API that can
contain a collection of Resource entities. Each Resource entity can in turn have one or more
Method resources. Expressed in the request parameters and body, a Method defines the
application programming interface for the client to access the exposed Resource and represents an
incoming request submitted by the client. You then create an Integration resource to integrate the
Method with a backend endpoint, also known as the integration endpoint, by forwarding the
incoming request to a specified integration endpoint URI. If necessary, you transform request
parameters or body to meet the backend requirements. For responses, you can create a
MethodResponse resource to represent a request response received by the client and you create an
IntegrationResponse resource to represent the request response that is returned by the backend.
You can configure the integration response to transform the backend response data before
returning the data to the client or to pass the backend response as-is to the client.
To help your customers understand your API, you can also provide documentation for the API, as
part of the API creation or after the API is created. To enable this, add a DocumentationPart
resource for a supported API entity.
To control how clients call an API, use IAM permissions, a Lambda authorizer, or an Amazon
Cognito user pool. To meter the use of your API, set up usage plans to throttle API requests. You
can enable these when creating or updating the API.
You can perform these and other tasks by using the API Gateway console, the API Gateway
REST API, the AWS CLI, or one of the AWS SDKs
AWS Lambda integrates with other AWS services to invoke functions or take other actions. These
are some common use cases:
Invoke a function in response to resource lifecycle events, such as with Amazon Simple Storage
Service (Amazon S3). For more information, see Using AWS Lambda with Amazon S3.
Respond to incoming HTTP requests. For more information, see Tutorial: Using Lambda with API
Gateway.
Consume events from a queue. For more information, see Using Lambda with Amazon SQS.
Run a function on a schedule. For more information, see Using AWS Lambda with Amazon
EventBridge (CloudWatch Events).
Depending on which service you're using with Lambda, the invocation generally works in one of
two ways. An event drives the invocation or Lambda polls a queue or data stream and invokes the
function in response to activity in the queue or data stream. Lambda integrates with Amazon Elastic
File System and AWS X-Ray in a way that doesn't involve invoking functions.
For more information, see Event-driven invocation and Lambda polling. Or, look up the service
that you want to work with in the following section to find a link to information about using that
service with Lambda.
You can also use Lambda functions to interact programmatically with other AWS services using
one of the AWS Software Development Kits (SDKs). For example, you can have a Lambda
function create an Amazon S3 bucket or write data to a DynamoDB table using an API call from
within your function.
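As a hedged example of this pattern, the sketch below shows a Lambda function that is assumed to be triggered by S3 object-created events and records each uploaded object in a hypothetical DynamoDB table named UploadedFiles.

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("UploadedFiles")   # hypothetical table

    def handler(event, context):
        # Each S3 event can contain several records; store key, bucket, and size.
        records = event.get("Records", [])
        for record in records:
            s3_info = record["s3"]
            table.put_item(Item={
                "ObjectKey": s3_info["object"]["key"],
                "Bucket": s3_info["bucket"]["name"],
                "Size": s3_info["object"].get("size", 0),
            })
        return {"processed": len(records)}

The function's execution role would need dynamodb:PutItem permission on the table.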
Policies: Documents that define permissions (what users, groups, and roles are allowed to do).
Use Roles for AWS Resources: Instead of using long-term credentials, use IAM roles for EC2
instances, Lambda functions, and other resources to enhance security.
Apply Least Privilege: Only grant the permissions necessary for a user or resource to perform its
tasks.
Use IAM Groups: Assign permissions to groups rather than individual users for
easier management.
Enable MFA (Multi-Factor Authentication): Require MFA for users who have access to sensitive
resources.
Regularly Review Permissions: Periodically review and audit IAM policies and permissions to
ensure they remain appropriate.
Use IAM Policy Conditions: Apply conditions to IAM policies to further restrict access (e.g.,
based on IP address or time of day).
3. IAM Setup:
Creating IAM Users: Create users and assign them to groups with appropriate permissions.
Creating IAM Roles: Define roles with specific permissions and trust relationships with services
like EC2 or Lambda.
Creating IAM Policies: Write policies that specify permissions and attach them to users, groups,
or roles.
Enabling MFA: Enable multi-factor authentication for users who have access to sensitive
resources.
Access Key Rotation: Regularly rotate access keys to minimize the impact of key compromise.
Monitoring and Logging: Use CloudWatch Logs and CloudTrail to monitor IAM activity and
detect unauthorized access.
Credential Report: Use the IAM credential report to check for unused credentials and identify
potential security risks.
IAM Access Analyzer: Use IAM Access Analyzer to analyze access policies for unintended
resource access.
Use IAM roles to grant permissions to AWS services like Lambda, EC2, and Glue.
Configure cross-account access and trust relationships for sharing resources securely between
AWS accounts.
IAM is a fundamental component of AWS security, enabling you to control and manage access to
your AWS resources. Properly configuring and managing IAM is crucial for maintaining a secure
AWS environment. Always follow best practices and regularly review and update your IAM
policies to align with your organization's security requirements.
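To make the idea of a least-privilege policy concrete, the sketch below creates a customer managed policy with boto3 that grants read-only access to a single placeholder S3 bucket.

    import json
    import boto3

    iam = boto3.client("iam")

    # Read-only access to one bucket and its objects (placeholder bucket name).
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::doc-example-bucket1",
                "arn:aws:s3:::doc-example-bucket1/*",
            ],
        }],
    }

    response = iam.create_policy(
        PolicyName="S3ReadOnlyExample",
        PolicyDocument=json.dumps(policy_document),
    )
    print("Created policy:", response["Policy"]["Arn"])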
When the role name appears in the console, such as during the sign-in process, it is case
insensitive. Because various entities might reference the role, you cannot edit the name of the
role after it is created.
11. (Optional) For Description, enter a description for the new role.
12. Choose Edit in the Step 1: Select trusted entities or Step 2: Select permissions
sections to edit the use cases and permissions for the role.
13. (Optional) Add metadata to the role by attaching tags as key-value pairs. For more
information about using tags in IAM, see Tagging IAM resources in the IAM User
Guide.
14. Review the role, and then choose Create role.
Financial Implications: Security breaches can lead to financial losses, including direct costs
related to the incident response, regulatory fines, legal fees, and potential loss of business due to
reputational damage.
Business Continuity: Security incidents, such as data breaches or service disruptions, can disrupt
normal business operations. Robust security measures are necessary to maintain business continuity.
Data Privacy: AWS customers entrust the platform with their data. Ensuring data privacy and
confidentiality is a fundamental ethical responsibility.
Resource Protection: AWS provides various cloud resources, including compute instances,
storage, and networking. Security measures are essential to protect these resources from misuse or
unauthorized access.
Secure Development: Security should be integrated into the development lifecycle. Neglecting
security during application development can lead to vulnerabilities that may be exploited.
User Identity and Access Management: AWS IAM ensures that only authorized users and
services have access to resources. Misconfigured IAM can lead to data breaches.
Emerging Threat Landscape: The threat landscape is continuously evolving, with new attack
vectors and techniques emerging regularly. Staying vigilant and adapting security measures is
crucial.
Cloud-Native Security: Cloud environments like AWS introduce unique security challenges.
Organizations must understand these challenges and implement cloud-native security controls.
Shared Responsibility Model: AWS operates on a shared responsibility model, where AWS is
responsible for the security of the cloud infrastructure, while customers are responsible for
securing their data, applications, and configurations. Understanding and fulfilling this shared
responsibility is key to a secure environment.
Incident Response: Having a well-defined incident response plan is essential for swiftly
addressing security incidents and minimizing their impact.
Continuous Monitoring and Improvement: Security is not a one-time task but an ongoing
process. Continuous monitoring, vulnerability assessments, and regular security audits are
essential for maintaining a high level of security in AWS.
7.1 Exploring AWS analytics services like Amazon Redshift and Amazon Athena.
Amazon Web Services (AWS) offers a comprehensive suite of data analytics services that enable
organizations to process, analyze, and gain valuable insights from large volumes of data. Two key
services in the AWS data analytics ecosystem are Amazon Redshift and Amazon Athena.
1. Amazon Redshift:
Overview:
Amazon Redshift is a fully managed, data warehousing service designed for high-performance
analytics. It is optimized for handling large datasets, making it ideal for data warehousing,
business intelligence, and reporting applications.
Key Features:
Columnar Storage: Redshift stores data in a columnar format, which enables efficient
compression and query performance, especially for analytical workloads.
Massively Parallel Processing (MPP): Redshift distributes query execution across multiple nodes,
allowing for parallel processing and fast query performance.
Integration: It integrates seamlessly with popular BI tools like Tableau, Looker, and Power BI.
Scalability: You can easily scale Redshift clusters up or down as needed to accommodate
changing data volumes and query loads.
Security: Redshift offers robust security features, including encryption, VPC support, IAM
integration, and more.
Data Lake Integration: You can use Redshift Spectrum to query data in your data lake (stored in
Amazon S3) without the need to load it into the data warehouse.
Use Cases:
Data warehousing, business intelligence dashboards, reporting, and analytics on large structured datasets.
2. Amazon Athena:
Overview:
Amazon Athena is an interactive query service that allows you to analyze data in Amazon S3
using standard SQL queries. It is a serverless service, meaning you don't need to manage
infrastructure or provision capacity.
Key Features:
Serverless: No need to set up or manage clusters; you pay only for the queries you run.
SQL Query Language: Athena supports standard SQL queries, making it accessible to users
familiar with SQL.
Schema-on-Read: You can define the schema of your data on the fly, making it easy to analyze
structured and semi-structured data.
Integration: Athena integrates with AWS Glue Data Catalog, making it easier to discover and
access your data.
Security: It integrates with AWS Identity and Access Management (IAM) for fine-grained access
control and supports encryption for data at rest and in transit.
Use Cases:
o Log analysis
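As a short sketch of how a query is submitted programmatically, the boto3 call below starts an Athena query; the database, table, and S3 output location are hypothetical and must exist in your account.

    import boto3

    athena = boto3.client("athena")

    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},   # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://doc-example-bucket1/athena-results/"},
    )
    print("Query execution id:", response["QueryExecutionId"])

Athena writes the results to the S3 output location, from where they can be fetched or queried again.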
7.2 Working with data lakes and AWS Glue for ETL
Working with data lakes and AWS Glue for ETL (Extract, Transform, Load) is a common approach
for organizations looking to manage and analyze large volumes of diverse data. AWS Glue is a
fully managed ETL service that simplifies the process of preparing and loading data into data
lakes and data warehouses. Here's how you can work with data lakes and AWS Glue for ETL:
Before using AWS Glue, you need to have a data lake architecture in place. A data lake is a
central repository that allows you to store data in its raw, native format. AWS offers Amazon S3
as a highly scalable and cost-effective storage solution for building data lakes. Data lakes can
include structured, semi-structured, and unstructured data.
Data Catalog: The AWS Glue Data Catalog is a metadata repository that stores metadata about
data sources, transformations, and targets. It helps Glue understand the structure of your data.
ETL Jobs: AWS Glue ETL jobs are defined using Python or Scala code. These jobs extract data
from sources, transform it as needed, and load it into target destinations.
Crawlers: Crawlers in AWS Glue automatically discover and catalog metadata from your data
sources. They can traverse through data in S3, RDS, Redshift, and other sources to build the Data
Catalog.
a. Data Discovery: Use AWS Glue Crawlers to automatically discover data in your data lake.
Crawlers analyze your data to create metadata tables in the Data Catalog.
b. Data Transformation: Define ETL jobs in AWS Glue. These jobs use the metadata from the
Data Catalog to transform the data. You can use built-in transformations or custom code.
c. Data Loading: Load the transformed data into your target destinations, which can be a data
warehouse (e.g., Amazon Redshift), databases, or other storage solutions.
d. Scheduling: You can schedule ETL jobs to run at specific intervals or in response to events.
AWS Glue handles job execution and scaling automatically.
4. Benefits:
Using AWS Glue for ETL in a data lake architecture offers several benefits:
Scalability: AWS Glue scales resources based on job complexity and data volume, ensuring
efficient processing.
Managed Service: AWS Glue is fully managed, reducing the operational burden on your team.
Cost-Effective: You only pay for the resources used during ETL job execution, making it cost-
effective for various workloads.
Data Catalog: The Data Catalog simplifies data discovery and makes it easier to manage metadata.
Integration: AWS Glue integrates seamlessly with other AWS services like S3, Redshift, and
Athena, enabling a powerful data analytics ecosystem.
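As an illustrative sketch of the crawler workflow described above, the snippet below defines and starts a Glue crawler over an S3 prefix; the crawler name, IAM role ARN, database, and S3 path are placeholders.

    import boto3

    glue = boto3.client("glue")

    # Create a crawler that catalogs objects under an S3 prefix (placeholder values).
    glue.create_crawler(
        Name="raw-data-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
        DatabaseName="datalake_raw",
        Targets={"S3Targets": [{"Path": "s3://doc-example-bucket1/raw/"}]},
    )

    # Run the crawler to populate the Data Catalog.
    glue.start_crawler(Name="raw-data-crawler")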
5. Use Cases:
o Common use cases for working with data lakes and AWS Glue include building data pipelines, discovering and cataloging data sources, and preparing data for analytics and reporting.
Building data pipelines and performing data analysis in the context of big data and analytics is a complex
but crucial task for organizations aiming to extract actionable insights from vast and diverse datasets.
Here's a step-by-step guide to building data pipelines and conducting data analysis in the big data realm:
1. Define Data Sources:
Identify the data sources that are relevant to your analytics goals. These sources can include databases, IoT
devices, log files, social media feeds, and more.
Consider both structured and unstructured data sources, as big data often involves a variety of data types.
2. Data Ingestion:
Set up data ingestion mechanisms to collect data from the defined sources. This can include batch
processing, real-time streaming, or a hybrid approach.
Use technologies like Apache Kafka, AWS Kinesis, or Flume for real-time streaming, and tools like
Apache Nifi or AWS DataSync for batch processing.
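To sketch what real-time ingestion looks like in code, the snippet below sends one JSON record to an assumed Kinesis data stream named sensor-stream using boto3.

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    record = {"sensor_id": "device-42", "temperature": 23.7}

    # Records with the same partition key go to the same shard.
    kinesis.put_record(
        StreamName="sensor-stream",                  # hypothetical stream
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=record["sensor_id"],
    )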
3. Data Storage:
Choose appropriate data storage solutions to handle the volume and variety of data. Common choices
include data lakes (e.g., Hadoop HDFS, Amazon S3) and distributed databases (e.g., Apache Cassandra,
HBase).
4. Data Processing:
Implement data processing steps to clean, transform, and enrich the raw data. Big data processing
frameworks like Apache Spark and Apache Flink are commonly used for these tasks.
5. Data Integration:
Integrate data from different sources and formats into a unified data model.
Leverage technologies like Apache Hive, Apache Pig, or Apache Beam for data integration.
6. Data Analysis:
Use big data analytics tools and libraries to perform analysis. Apache Hadoop, Apache Spark, and cloud-
based platforms like AWS EMR and Google Dataprep are popular choices.
Implement machine learning algorithms for predictive modeling, clustering, classification, and anomaly
detection.
7. Data Visualization:
Create data visualizations and dashboards to communicate insights effectively. Tools like Tableau, Power
BI, and open-source options like Matplotlib and D3.js can help with visualization.
Schedule and monitor pipeline activities to ensure consistent and reliable operation.
Implement data governance practices to ensure data quality, consistency, and compliance with regulations.
Continuously optimize the performance of your data pipeline and analytics processes as data volumes and
complexity grow.
Monitor resource usage and adjust the infrastructure as needed to maintain scalability.
Document your data pipeline architecture, data lineage, and analysis methodologies to facilitate
collaboration among data engineers, data scientists, and analysts.
Continuously assess the effectiveness of your data pipeline and analytics efforts. Seek feedback and make
adjustments to meet evolving business goals.
1. Enable CloudWatch Integration:
Make sure the AWS services your application relies on are integrated with Amazon CloudWatch. Many AWS services have CloudWatch integration by
default, but for others, you may need to enable it.
2. Install CloudWatch Agent (Optional):
For EC2 instances, you can install the CloudWatch Agent, which allows you to collect system-
level metrics, logs, and custom metrics from your instances. This step is optional but highly
recommended for deeper insights into your instances' performance.
3. Define Custom Metrics (Optional):
Create custom CloudWatch Metrics to monitor application-specific performance and behavior.
You can publish custom metrics using the AWS SDK or AWS Command Line Interface (CLI).
4. Set Up Alarms:
Create CloudWatch Alarms to get notified when specific metric conditions are met. Alarms can
trigger actions like sending notifications via Amazon SNS or auto-scaling your resources.
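The preceding two steps (custom metrics and alarms) can be combined in a short boto3 sketch that publishes a custom metric and creates an alarm on it; the namespace, metric name, and SNS topic ARN are illustrative.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish one data point for a custom application metric.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{"MetricName": "FailedLogins", "Value": 3, "Unit": "Count"}],
    )

    # Alarm when failed logins exceed 10 within a 5-minute period.
    cloudwatch.put_metric_alarm(
        AlarmName="HighFailedLogins",
        Namespace="MyApp",
        MetricName="FailedLogins",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
    )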
5. Enable Logging:
Enable CloudWatch Logs for your application's logs. You can create log groups and log streams
to collect and store logs from various sources, including EC2 instances, Lambda functions, and
custom applications.
6. Create Log Metric Filters:
Define CloudWatch Log Metric Filters to extract structured data from your logs. These filters help
you create custom metrics or trigger alarms based on log data patterns.
7. Create Dashboards:
Build CloudWatch Dashboards to create customized views of your application's key performance
indicators (KPIs) and metrics. Dashboards allow you to visualize data from multiple sources on a
single page.
8. Implement Distributed Tracing (Optional):
For microservices architectures, consider implementing distributed tracing using AWS X-Ray. X-
Ray provides insights into how requests flow through your application, helping you identify
bottlenecks and performance issues.
9. Enable Container Insights (For ECS and EKS):
If you're using Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service
(EKS), enable Container Insights to monitor the performance of your containerized applications.
10. Integrate Third-Party Services:
Integrate CloudWatch with third-party services and applications by using CloudWatch Agents,
custom scripts, or third-party monitoring tools that support CloudWatch integration.
Troubleshooting and optimizing cloud-based applications is an ongoing task. Here's a step-by-step guide to effectively address issues and optimize AWS
resources for application monitoring and management:
1. Define Metrics and Alerts:
Begin by defining the key performance metrics and error thresholds that are critical to your
application's health. Set up CloudWatch alarms to trigger notifications when these thresholds are
breached.
2. Monitor Application Logs:
Use Amazon CloudWatch Logs to collect and centralize logs generated by your application and
AWS services. Set up custom log metrics and filters to extract meaningful information from your
logs.
3. Enable Enhanced Monitoring (EC2 Instances):
If you're using Amazon EC2 instances, enable Enhanced Monitoring to collect detailed OS-level
metrics. This can help diagnose issues related to resource utilization.
4. Distributed Tracing (Optional):
Implement distributed tracing using AWS X-Ray or third-party tools like Jaeger or Zipkin.
Distributed tracing helps you trace requests across microservices and identify bottlenecks.
5. Set Up AWS CloudTrail:
Enable AWS CloudTrail to capture API activity and changes to your AWS resources. This
provides a trail of actions taken on your AWS infrastructure and helps with auditing and
troubleshooting.
6. Continuous Integration/Continuous Deployment (CI/CD):
Implement CI/CD pipelines to automate application deployments. This reduces the risk of
deployment-related issues and streamlines the release process.
7. Implement Auto Scaling:
Use Auto Scaling groups to dynamically adjust the number of application instances based on
traffic. This ensures optimal resource utilization and high availability.
8. Optimize Database Resources:
Review and optimize your database configurations, queries, and indexes to improve performance.
Consider using Amazon RDS Performance Insights for database monitoring.
- Regularly review the sizing of your EC2 instances and adjust them based on resource utilization.
Advanced IAM concepts and roles help you enforce the principle of least privilege, which restricts
users' and resources' permissions to the minimum necessary for their tasks. Here are some
advanced IAM concepts and roles, along with security best practices:
1. Least Privilege:
Implement the principle of least privilege by granting users, roles, and services only the
permissions they require to perform their specific tasks. Avoid giving overly broad permissions.
2. IAM Policy Conditions:
Use IAM policy conditions to further restrict access based on factors like IP address, MFA (Multi-
Factor Authentication) status, request source, and more. For example, you can require MFA for
certain API actions.
3. IAM Roles for EC2 Instances:
Assign IAM roles to Amazon EC2 instances instead of using static credentials. This eliminates the
need to manage access keys within your application code and enhances security.
4. Cross-Account Access:
Use IAM roles to enable cross-account access, allowing trusted AWS accounts to assume roles in
your account. This is helpful for third-party services or multiple AWS accounts within an
organization.
5. Identity Federation:
Implement identity federation using services like AWS Single Sign-On (SSO), AWS Identity
Federation, or third-party identity providers (e.g., Active Directory, Okta). This allows users to
access AWS resources using their existing corporate credentials.
6. IAM Access Analyzer:
Leverage IAM Access Analyzer to identify and manage unintended access to your resources. It
helps you detect and remove overly permissive policies.
7. Permission Boundaries:
Set permission boundaries on IAM roles to limit the permissions that can be delegated by users
and roles. This provides an additional layer of access control.
8. Service Control Policies (SCPs):
Use AWS Organizations to create and attach Service Control Policies to organizational units
(OUs) to set fine-grained access controls and restrict actions across multiple AWS accounts.
9. Resource-Based Policies:
Implement resource-based policies for AWS services that allow you to specify who can access a
particular resource (e.g., S3 bucket policy, Lambda function policy).
- Create custom managed policies tailored to specific roles or groups within your organization.
Avoid attaching overly permissive policies directly to users.
- Utilize IAM Access Advisor to view service-last-accessed information, helping you identify
unused permissions that can be revoked.
- Use AWS Organizations to centrally manage and consolidate AWS accounts and apply
organization-level policies for access control and billing.
- Be cautious with role chaining, where one role assumes another. Ensure that permissions are
carefully managed to prevent unintended escalation of privileges.
- Conduct regular reviews and audits of IAM policies, roles, and user permissions to ensure that
they align with your organization's security policies and business needs.
- Enforce MFA for privileged users and root accounts to add an additional layer of security.
- Implement continuous monitoring solutions like AWS CloudTrail and AWS Config to track and
log changes to IAM policies and roles.
- Use AWS Lambda functions to automate security tasks, such as rotating IAM access keys,
enforcing security best practices, and responding to security events.
Advanced IAM concepts and roles are critical for enforcing robust security practices in AWS. By
following these best practices, you can achieve granular control over permissions, reduce the
attack surface, and ensure that your AWS environment remains secure and compliant.
1. Data Classification:
Begin by classifying your data to identify what needs to be protected. Categorize data as public,
internal, confidential, or sensitive based on its sensitivity and regulatory requirements.
2. Encryption at Rest:
Use encryption to protect data when it is stored in databases, file systems, object storage, or
backup solutions.
In AWS, leverage services like Amazon S3 Server-Side Encryption, Amazon RDS encryption,
and AWS Key Management Service (KMS) for managing encryption keys.
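As a small example of encryption at rest in practice, the upload below asks S3 to encrypt the object server-side with a KMS key; the bucket name and key alias are placeholders.

    import boto3

    s3 = boto3.client("s3")

    with open("report.csv", "rb") as body:
        s3.put_object(
            Bucket="doc-example-bucket1",          # placeholder bucket
            Key="confidential/report.csv",
            Body=body,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId="alias/my-app-key",        # placeholder KMS key alias
        )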
3. Encryption in Transit:
Encrypt data when it's in transit between clients and servers or between services. Use protocols like
HTTPS/TLS for web traffic and VPNs or Direct Connect for network connections.
4. Key Management:
Properly manage encryption keys to ensure their security. Use a centralized key management
service like AWS KMS to generate, store, and rotate encryption keys.
5. Access Controls:
Implement strict access controls and IAM policies to limit who can access and manage encryption
keys and sensitive data.
6. Data Masking and Redaction:
Implement data masking and redaction techniques to conceal sensitive information from
unauthorized users in logs, reports, and user interfaces.
7. Data Loss Prevention (DLP):
Use DLP solutions to automatically detect and prevent the transmission of sensitive data outside
the organization's network or systems.
8. Secure Development:
Ensure that application developers follow secure coding practices to prevent data exposure
vulnerabilities like SQL injection or improper data handling.
9. Endpoint Security:
Secure endpoints (e.g., laptops, mobile devices) with encryption, device management, and
security policies to protect data stored on these devices.
- Apply strong authentication and access controls to databases. Use encryption for data at rest and
in transit. Regularly patch and update database software.
- Encrypt data in backup solutions and ensure that encryption keys are securely managed.
Implement disaster recovery plans to prevent data loss during disasters.
- Implement continuous monitoring and auditing of data access and encryption status. Use
services like AWS CloudTrail and AWS Config to track changes and access to resources.
- Train employees on data protection best practices, including secure handling, sharing, and
storage of sensitive data.
- Conduct regular security audits and assessments to verify compliance with encryption and data
protection policies and regulations (e.g., GDPR, HIPAA).
- Develop an incident response plan that includes procedures for responding to data breaches and
incidents involving sensitive data.
- Use secure file sharing solutions that offer encryption, access controls, and audit trails. Avoid
sharing sensitive data through unsecured channels.
- If using third-party vendors or cloud services, ensure they have robust encryption and data
protection mechanisms in place. Review their security practices and agreements.
- Understand the data protection requirements specific to your industry and geographic region.
Comply with relevant regulations and standards.
- If you have mobile applications, implement encryption for data stored on mobile devices and
during data transmission.
10. Conclusion
The knowledge and experience I have gained during this internship can be highly valuable
for my career in cloud computing and AWS-related roles. As I conclude my internship,
here are some key takeaways:
Hands-On AWS Experience: I had the opportunity to work with various AWS services, gaining
practical experience in deploying virtual machines, managing storage, configuring networking,
working with databases, and building serverless applications. This practical knowledge is a
strong foundation for future AWS projects.
Cloud Computing Skills: Cloud computing skills, particularly in AWS, are in high demand in
the tech industry. The internship has equipped me with essential cloud computing skills,
making me more marketable to potential employers.
IAM and Security: Understanding AWS Identity and Access Management (IAM) and security
best practices is crucial for ensuring the security of cloud resources. I learned how to set up
secure access and control permissions within AWS.
Scalability and Cost Optimization: AWS provides tools and services for optimizing costs and
scaling resources as needed. I explored auto-scaling, monitoring, and cost management
strategies to keep cloud expenses in check.
Serverless Architecture: Building serverless applications with AWS Lambda and API Gateway
is a modern approach to application development. My experience in serverless computing can
be valuable for creating efficient and scalable applications.
Database Management: Understanding AWS database services like RDS, DynamoDB, and
Aurora is essential for managing data in the cloud. I gained insight into how to create and
manage databases in AWS.
Networking and VPC: Knowledge of AWS Virtual Private Cloud (VPC) is vital for configuring
secure and isolated network environments. I learned how to set up subnets, security groups,
and network access control lists (NACLs) within a VPC.
Data Backup and Recovery: AWS offers robust data backup and recovery solutions. I explored
strategies for data backup, snapshot management, and disaster recovery.
API Gateway and RESTful APIs: Building RESTful APIs with Amazon API Gateway is a key
skill for creating web services and microservices. I learned how to design and deploy APIs
using API Gateway.
Best Practices: Throughout the internship, I encountered best practices for cloud architecture,
security, and optimization. These best practices are valuable for building reliable and
cost-effective cloud solutions.
As I conclude my AWS Cloud virtual internship, I will consider how I can further build upon
the knowledge and skills I have gained. AWS offers certification programs that can validate
my expertise and enhance my career prospects. Additionally, staying up to date with AWS
developments and continuing to explore new AWS services and features will help me remain
competitive in the cloud computing field.
ACTIVITY LOG
Week | Brief description of the daily activity | Learning Outcome | Person In-Charge Signature
Week 1 | Orientation and Introduction | I created an AWS account and learned the basics of the AWS cloud and how to access the AWS Management Console. |
Week 2 | AWS Fundamentals | This week I learned the fundamentals of AWS core services such as EC2, S3, and RDS. |
Week 3 | Networking and Security | I created a VPC and subnets and implemented security best practices. |
Date of Evaluation:
Organisation Name & Address: AMAZON WEB SERVICES
1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5
Date of Evaluation:
Organisation Name & Address: AMAZON WEB SERVICES
1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5
Year of Study : IV
University: JNTUK
1. Activity Log 10
2. Internship Evaluation 30
3. Oral Presentation 10
GRAND TOTAL 50
Year of Study: IV
University: JNTUK
Sl.No Evaluation Criterion Maximum Marks Marks Awarded
1. Internship Evaluation 10
3. Viva-Voce 20
TOTAL 50