E-Book AWS+DevOps

This document provides an overview of an e-book covering AWS and DevOps. It describes 8 modules that will be covered including cloud computing, AWS services, EC2 instances, EBS, IAM, VPCs, ELB, and auto scaling.


AWS + DevOps: A comprehensive e-book for your basic understanding


Course material: e-book
Copyright © 2023 TEKS Academy

Contact Us:
TEKS Academy
Flat No: 501, 5th Floor, Amsri Faust Building,
SD Road, near Reliance Digital Mall,
Regimental bazaar,
Shivaji Nagar,
Secunderabad,
Telangana - 500025

Support: 1800-120-4748
Email: support@teksacdemy.com
Amazon Web Services

Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of services, including
computing, storage, databases, analytics, machine learning, and more. AWS enables businesses and individuals to
build, run, and manage applications and services in the cloud without having to own, operate, or maintain the
underlying infrastructure. AWS offers a range of deployment models, including public, private, and hybrid clouds, and
works with organizations of all sizes and in all industries.

The AWS course covers the following modules:


1. Cloud Computing
What is Cloud?
The term "cloud" refers to a network or the internet. Cloud computing uses remote servers on the internet to store, manage, and access data online rather than on local drives. The data can be anything, such as files, images, documents, audio, video, and more.

These are some of the operations you can perform using cloud computing:

● Developing new applications and services.
● Storage, backup, and recovery of data.
● Hosting blogs and websites.
● Delivery of software on demand.
● Analysis of data.
● Streaming video and audio.

Evolution of Cloud Computing:


Cloud computing has evolved from early distributed systems into the technology in use today. It is now used by businesses of all sizes and across all industries.

Cloud Service Models (IaaS, PaaS, SaaS):


There are the following three types of cloud service models.
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Difference between IaaS, PaaS, and SaaS


The below table shows the difference between IaaS, PaaS, and SaaS.

IaaS
● Provides a virtual data center to store information and create platforms for app development, testing, and deployment.
● Provides access to resources such as virtual machines, virtual storage, etc.
● Used by network architects.
● Provides only infrastructure.

PaaS
● Provides virtual platforms and tools to create, test, and deploy apps.
● Provides runtime environments and deployment tools for applications.
● Used by developers.
● Provides infrastructure + platform.

SaaS
● Provides web software and apps to complete business tasks.
● Provides software as a service to the end users.
● Used by end users.
● Provides infrastructure + platform + software.
2. Amazon Web Services (AWS):
AWS stands for Amazon Web Services. It uses distributed IT infrastructure to provide different IT resources on demand. This module covers topics such as the history of AWS, its global infrastructure, key features, IAM, storage services, and database services.
Advantages of AWS:

 Flexibility.
 Cost-effectiveness.
 Scalability/Elasticity.
 Security.
 Global Infrastructure.

Key concepts you will learn from Amazon Web Services:

 Global Infrastructure.
 Free tier overview & limitations.
 AWS account creation.
 Create a billing alarm to catch unexpected charges.
 Create an IAM user.
 Assign Multi-Factor Authentication to a user.
 Log in to the AWS Management Console.

3. Elastic Compute Cloud (EC2) Instance:


EC2 stands for Amazon Elastic Compute Cloud. Amazon EC2 is a web service that provides resizable compute capacity in the cloud. Amazon EC2 reduces the time required to obtain and boot new instances to minutes. In the past, if you needed a server you had to raise a purchase order and wait for the hardware to be delivered and cabled, a very time-consuming process. EC2 replaces that with a virtual machine in the cloud, which completely changed the industry.

Shared AMIs:

In simple language, a shared AMI is one that an AWS developer creates and shares with other users. Shared AMIs enable any new AWS user to get started on the AWS platform easily. On the flip side, shared AMIs carry no guarantee of security or integrity and must be treated like any other foreign code.

You can locate the following shared AMIs from the Amazon EC2 console by selecting the “AMIs” option from the
navigation pane:

 Private images: that contain all the shared AMIs that are private and only for your use.
 Public images: that contain all the shared AMIs that have been created for public use.

Elastic IPs:
An advantage of using Amazon Elastic Compute Cloud (EC2) is the ability to start, stop, create, and terminate instances at any time. However, this flexibility creates a potential challenge with IP addresses: restarting a stopped instance (or re-creating an instance after another is terminated) results in a new IP address. How do you reliably reference a machine when its IP address keeps changing? An Elastic IP solves this by providing a static public address that you can remap from one instance to another.
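The idea can be sketched in a few lines of Python. This is a toy model, not the EC2 API; the address and instance IDs below are invented for illustration:

```python
# Hypothetical sketch: an Elastic IP keeps a stable public address by
# remapping it to whichever instance is currently serving traffic.
class ElasticIp:
    def __init__(self, address):
        self.address = address      # the stable public IP clients use
        self.instance_id = None     # currently associated instance

    def associate(self, instance_id):
        """Point the stable address at a (possibly new) instance."""
        self.instance_id = instance_id

eip = ElasticIp("203.0.113.10")
eip.associate("i-0abc")             # original instance
eip.associate("i-0def")             # replacement after a restart
# Clients keep using 203.0.113.10; only the mapping behind it changed.
print(eip.address, eip.instance_id)
```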
Key concepts you will learn from EC2 Instance:
 Instance Types
 Pricing options
 Working with AMIs
 EC2 Instance Creation
 Windows & Linux instances
 Login access to the Instance
 Security Groups
 Elastic IPs
 Placement Groups
 Key Pairs

4. Elastic Block Store (EBS):


An EBS (Elastic Block Store) volume is a block storage device attached to an EBS-backed instance. These block storage devices can be accessed on the instance just like a hard drive. Amazon provides three categories of storage drives, and each category includes different types of EBS volumes, each offering a different balance of throughput, IOPS, and cost. The following is the list of EBS volume types provided by Amazon.
Solid State Drives (SSD)

 General Purpose SSD


 Provisioned IOPS SSD

Hard Disk Drives (HDD)

 Throughput Optimized HDD


 Cold HDD

Previous Generation

 Magnetic

Key concepts you will learn from EBS:


 EBS Volume Types.
 Instance Store Volumes.
 Optimizing Disk Performance.
 Creating and Deleting Volumes.
 Attach and Detach Volumes.
 Mount and Un-mounting Volumes.

5. Identity and Access Management (IAM):


Identity and Access Management (IAM) in AWS is a service that allows you to create identities and control access to resources. With IAM, you can make sure that only authorized users have the necessary access to AWS resources, most notably AWS applications and data. IAM enables you to limit access to various AWS resources to specific users or groups, and it also provides tools for creating and managing access keys and passwords. Additionally, IAM identity federation comes into play when you need to manage and secure access for identities managed outside of IAM.
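The way IAM combines Allow and Deny statements can be sketched as follows. This is a simplified toy evaluator, not the real IAM engine (which also matches resources, conditions, and wildcards); the action names are standard S3/EC2 actions:

```python
# Minimal sketch of IAM-style policy evaluation: an explicit Deny
# always wins, an Allow grants access, and anything not mentioned
# is implicitly denied.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:ListBucket"]},
        {"Effect": "Deny",  "Action": ["s3:DeleteObject"]},
    ]
}

def is_allowed(policy, action):
    effects = [s["Effect"] for s in policy["Statement"] if action in s["Action"]]
    if "Deny" in effects:
        return False            # explicit deny overrides everything
    return "Allow" in effects   # otherwise an explicit allow is required

print(is_allowed(policy, "s3:GetObject"))      # True
print(is_allowed(policy, "s3:DeleteObject"))   # False
print(is_allowed(policy, "ec2:RunInstances"))  # False (implicit deny)
```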
Key concepts you will learn from IAM:
 Creation of Users Accounts.
 Roles in IAM.
 Groups in IAM.
 Account Settings.
 Creating Permissions for Users.
 Deleting Permissions for Users.

6. Virtual Private Cloud (VPC):


An Amazon Web Services (AWS) Virtual Private Cloud (VPC) is a virtual network service that lets you configure isolated subnets within an AWS region and launch resources such as Amazon Elastic Compute Cloud (EC2) instances into them. You can specify the subnets where instances should run and control traffic to those subnets. VPC enables you to launch instances with IP addresses in a range you specify, providing more control and flexibility over network configuration.
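Subnet planning for a VPC is ordinary CIDR arithmetic, which Python's standard ipaddress module can illustrate. The address ranges below are examples, not a recommendation:

```python
import ipaddress

# Sketch: carving a VPC CIDR block into smaller subnets, the way you
# would when planning public/private subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into /24 subnets and take the first two.
public, private = list(vpc.subnets(new_prefix=24))[:2]

print(public)    # 10.0.0.0/24
print(private)   # 10.0.1.0/24
print(ipaddress.ip_address("10.0.1.55") in private)  # True
```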
Key concepts you will learn from VPC:
 Creating a Custom VPC.
 Security Groups.
 Creating Internet Gateway (IGW).
 Connecting Instances in the Gateway.
 Subnets.
 Route Tables.
 VPC Peering.
 NAT Gateway.

7. Elastic Load Balancer (ELB):


Elastic Load Balancing (ELB) is a load-balancing service for Amazon Web Services (AWS) deployments. ELB automatically distributes incoming application traffic and scales resources to meet traffic demands, helping an IT team adjust capacity according to incoming application and network traffic. Users can enable ELB within a single availability zone or across multiple availability zones to maintain consistent application performance. ELB provides features such as:

 Detection of unhealthy Elastic Compute Cloud (EC2) instances.
 Routing traffic to healthy instances only.
 Flexible cipher support.
 Centralized management of Secure Sockets Layer (SSL) certificates.
 Optional public key authentication.
 Support for both IPv4 and IPv6.
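The unhealthy-instance detection mentioned above typically works on consecutive health-check thresholds. A minimal sketch, with illustrative threshold values:

```python
# Sketch of load-balancer health checking: a target is marked unhealthy
# only after several consecutive failed checks, and healthy again after
# several consecutive successes, so one noisy check does not flap state.
def final_state(checks, unhealthy_threshold=2, healthy_threshold=2):
    state = "healthy"
    fails = passes = 0
    for ok in checks:
        if ok:
            passes += 1
            fails = 0
            if passes >= healthy_threshold:
                state = "healthy"
        else:
            fails += 1
            passes = 0
            if fails >= unhealthy_threshold:
                state = "unhealthy"
    return state

print(final_state([True, False, False]))         # unhealthy
print(final_state([False, False, True, True]))   # healthy again
```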

Key concepts you will learn from ELB:


 How Elastic Load Balancing Works.
 Types of ELB.
 Target Groups.
 Creating Load Balancer.

8. Auto Scaling:
Auto-scaling is a scaling technique you can apply to workloads hosted in a cloud environment. One of the
major benefits of cloud-based hosting is that you can readily scale capacity to whatever extent is needed to support
the demand for your service. Auto-scaling takes that advantage a step further. With auto-scaling, as the demand for a
given workload changes over time, the amount of resources allocated to support that workload adapts automatically
to meet your performance requirements.
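The core arithmetic of auto-scaling can be sketched as follows. This is a simplified model of target-tracking scaling, not AWS's actual algorithm; the metric values are invented:

```python
import math

# Sketch: scale the fleet in proportion to observed load relative to a
# target, clamped to the group's minimum and maximum size.
def desired_capacity(current, observed_cpu, target_cpu, min_size, max_size):
    raw = math.ceil(current * observed_cpu / target_cpu)
    return max(min_size, min(max_size, raw))

# 4 instances at 80% CPU against a 50% target -> scale out to 7.
print(desired_capacity(4, observed_cpu=80, target_cpu=50, min_size=2, max_size=10))  # 7
# Light load scales in, but never below the minimum of 2.
print(desired_capacity(4, observed_cpu=20, target_cpu=50, min_size=2, max_size=10))  # 2
```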

Auto Scaling Groups (ASG):


An Amazon Auto Scaling Group (ASG) is a logical group of Amazon EC2 instances with identical configuration. An ASG allows computational resources to be allocated appropriately: every Amazon EC2 instance in the group adheres to the group's auto-scaling policies, and the number of instances in the group determines its size.

Key concepts you will learn from Auto Scaling:


 Auto Scaling Components.
 Advantages of Auto Scaling.
 Auto Scaling Groups (ASG).
 Launch Templates.
 Create Auto Scaling Group.
9. Simple Storage Services (S3):
Simple Storage Service (S3) is a cloud-based storage service provided by Amazon Web Services (AWS). It
allows users to store and retrieve any amount of data from anywhere on the web. S3 is designed to be highly
scalable, durable, and secure, making it a popular choice for storing and managing data in the cloud. With S3, users
can easily upload, download, and access their data using various tools and programming interfaces.

Life Cycle Management:


Amazon Web Services (AWS) Simple Storage Service (S3) Life Cycle Management allows you to efficiently
manage your data throughout its lifecycle and reduce costs. It automates the process of moving data between
different S3 storage classes based on the rules you define.

The lifecycle defines two types of actions:


Transition actions: Define when objects move to another storage class. For example, you can transition objects to the Standard-IA storage class 30 days after creating them, or archive them to the Glacier storage class 60 days after creating them.

Expiration actions: Define when objects expire; Amazon S3 then deletes the expired objects on your behalf. For example, a business that generates test files, images, audio, or video that is relevant for only 30 days can have S3 delete that data automatically once the period has passed.
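The two rule types above can be sketched as a small state function. The 30- and 60-day thresholds mirror the example; the 365-day expiration is a hypothetical rule added for illustration:

```python
# Sketch of lifecycle rule evaluation: given an object's age in days,
# apply the transitions and the expiration rule in order.
RULES = [
    (30, "STANDARD_IA"),   # transition after 30 days
    (60, "GLACIER"),       # archive after 60 days
]
EXPIRE_AFTER = 365         # hypothetical expiration rule

def storage_state(age_days):
    if age_days >= EXPIRE_AFTER:
        return "EXPIRED"
    state = "STANDARD"
    for threshold, storage_class in RULES:
        if age_days >= threshold:
            state = storage_class
    return state

print(storage_state(10))    # STANDARD
print(storage_state(45))    # STANDARD_IA
print(storage_state(90))    # GLACIER
print(storage_state(400))   # EXPIRED
```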

Key concepts you will learn from S3:


 Classes & Life Cycle.
 Creating and Deleting Buckets.
 Uploading & Deleting Objects.
 Hosting Static Website using S3.

10. Elastic File System (EFS):


The AWS-managed Elastic File System (EFS) is a highly reliable and scalable file storage service. EFS exposes a shared file system over the NFS protocol, which lets many clients access the same data simultaneously, wherever those clients are running. EFS can be attached to an EC2 instance or a container for use as a file system and can be used to store and access files of varying sizes.
Key concepts you will learn from EFS:
 Creating & Deleting EFS.
 Attaching & Detaching EFS to Instance.

11. Route 53:


Amazon Web Services (AWS) offers Route 53, a highly available and scalable Domain Name System (DNS) web service. It allows you to manage hosted zones, route domain names, and translate domain names to IP addresses. With Route 53, you can easily manage DNS records for your AWS-hosted websites, applications, and services, and the service can handle millions of queries per second.

Route 53 can be used with:

● Public domain names you own (or buy), or
● Private domain names that can be resolved by your instances in your VPCs.

Route 53 offers many features, such as DNS-based load balancing, health checks, and routing policies: simple, failover, geolocation, latency-based, weighted, and multi-value. You pay around Rs. 400 per month per hosted zone.
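Weighted routing, one of the policies listed above, can be sketched as follows. Record names and weights are invented; real Route 53 answers probabilistically in proportion to weight, while this demo uses a fixed selection point to stay deterministic:

```python
# Sketch of weighted routing: each record is chosen in proportion to
# its weight out of the total (here 70% / 20% / 10%).
records = [("srv-a", 70), ("srv-b", 20), ("srv-c", 10)]

def pick(records, point):
    total = sum(w for _, w in records)
    point = point % total
    for name, weight in records:
        if point < weight:
            return name
        point -= weight

print(pick(records, 5))    # srv-a (points 0-69)
print(pick(records, 75))   # srv-b (points 70-89)
print(pick(records, 95))   # srv-c (points 90-99)
```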

Key concepts you will learn from Route 53:


 DNS Records overview.
 Routing Policies.
 Hosting sample Website and configuring Policies.

12. Relational Database Services (RDS):


Relational Database Service (RDS) is a managed database service offered by AWS that makes it easier to set up, operate, and scale a relational database such as MySQL, Oracle, or SQL Server. With RDS, you can easily launch a database instance inside a VPC without having to worry about its availability, maintenance, or security.

RDS allows users to create databases in seconds, clone or copy databases within the same VPC for disaster
recovery, and monitor database performance with built-in monitoring tools. RDS offers a range of instance sizes,
plans, and storage options, making it a scalable solution for businesses of all sizes.

Key concepts you will learn from RDS:


 Database Instances.
 Database Engines.
 Launching RDS Instances (MySQL, MSSQL & Aurora).
 Multi-AZ & Read Replicas for RDS instances.

13. Elastic Beanstalk:


AWS Elastic Beanstalk is a service offered by Amazon Web Services (AWS) that makes it easy and cost-effective to deploy, run, and manage web applications, with automatic scaling and high availability. It provides a ready-to-use service for building, testing, and deploying web applications in the cloud without requiring infrastructure-management skills.

Workflow of Elastic Beanstalk:


You can create an application with Elastic Beanstalk, upload an application version in the form of an application source bundle (for instance, a Java .war file), and then provide some information about the application. Elastic Beanstalk automatically creates and configures the AWS resources required to run your code. Once your environment has launched, you can manage it and roll out new application versions.

Elastic Beanstalk supports the DevOps practice named "rolling deployments." When enabled, deployments work hand in hand with Auto Scaling to ensure a defined number of instances is always available while configuration changes are made. This gives you control over how Amazon EC2 instances are updated.
Key concepts you will learn from Elastic Beanstalk:
 Deploy, manage, scale an application
 Workflow of Elastic Beanstalk
 Create Application
 Launch Environment
 Manage Environment
 Creating application source bundle
 Modifying the properties of the deployment

14. AWS Monitoring and Notification Services:


AWS Monitoring and Notification Services refers to a variety of tools and features provided by Amazon Web Services (AWS) to help users monitor their cloud infrastructure and receive notifications about events or issues. These tools include CloudWatch, CloudTrail, Compute Optimizer, Trusted Advisor, and AWS Config, among others. They enable users to monitor their AWS resources, automate performance optimization, and set up custom alerts and notifications for various events or conditions.
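The alarm behaviour you will configure in CloudWatch can be sketched as follows. This is a simplified model: a real alarm also handles OK/INSUFFICIENT_DATA states and different comparison operators, but the consecutive-periods idea is the same. The metric values are invented:

```python
# Sketch of alarm evaluation: the alarm fires only when the metric
# breaches the threshold for N consecutive evaluation periods, which
# avoids alerting on a single noisy datapoint.
def alarm_state(datapoints, threshold, periods):
    recent = datapoints[-periods:]
    if len(recent) == periods and all(d > threshold for d in recent):
        return "ALARM"
    return "OK"

cpu = [40, 85, 90, 92]
print(alarm_state(cpu, threshold=80, periods=3))  # ALARM
print(alarm_state(cpu, threshold=80, periods=4))  # OK (40 breaks the run)
```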

Key concepts you will learn from AWS Monitoring and Notification Services:
 Amazon CloudWatch – Create Topics & Set Alarms
 Simple Notification Service (SNS)
 Simple Queue Service (SQS)
 Simple Email Service (SES)

DevOps

The DevOps course covers the following modules:


1. Introduction:
DevOps is a software development method that emphasizes collaboration and communication between
development and operations teams to streamline development and deployment processes. It aims to improve the
speed and reliability of software delivery by automating and standardising development, testing, and deployment
practices. DevOps involves tools such as continuous integration and continuous delivery, which automate the build,
test, and deployment processes. The goal of DevOps is to improve the quality of software and deliver it more
frequently and reliably to users.

Core DevOps principles

The DevOps methodology comprises four key principles that guide the effectiveness and efficiency of application
development and deployment. These principles, listed below, center on the best aspects of modern software
development.

1. Automation of the software development lifecycle. This includes automating testing, builds, releases,
the provisioning of development environments, and other manual tasks that can slow down or introduce
human error into the software delivery process.
2. Collaboration and communication. A good DevOps team has automation, but a great DevOps team also
has effective collaboration and communication.
3. Continuous improvement and minimization of waste. From automating repetitive tasks to watching
performance metrics for ways to reduce release times or mean-time-to-recovery, high performing
DevOps teams are regularly looking for areas that could be improved.
4. Hyperfocus on user needs with short feedback loops. Through automation, improved communication
and collaboration, and continuous improvement, DevOps teams can take a moment and focus on what
real users really want, and how to give it to them.

By adopting these principles, organizations can improve code quality, achieve a faster time to market, and
engage in better application planning.

2. HA-Proxy (High Availability Proxy):


HAProxy is a popular open-source TCP/HTTP load balancer that can distribute traffic across multiple servers. In DevOps, HAProxy is used as a load balancer in agile environments, allowing applications to scale easily as traffic demand increases. HAProxy can also implement health checks and failover mechanisms to keep applications and services available. By integrating HAProxy with tools such as Docker, Kubernetes, and Jenkins, DevOps teams can further automate their deployment and delivery pipelines.

Key concepts you will learn from HA-Proxy:

 HA Proxy Installation
 HA Proxy Configuration (haproxy.cfg)
 Backend Servers & Ports
 Load Balancing Algorithm
 Roundrobin
 Leastconn
 Multiple HA Proxy Configuration
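The roundrobin and leastconn algorithms listed above can be sketched in a few lines. Backend names and connection counts are invented:

```python
import itertools

# "roundrobin" cycles through backends in order; "leastconn" picks the
# backend with the fewest active connections.
backends = ["web1", "web2", "web3"]

rr = itertools.cycle(backends)                   # roundrobin
order = [next(rr) for _ in range(4)]
print(order)                                     # ['web1', 'web2', 'web3', 'web1']

connections = {"web1": 12, "web2": 3, "web3": 7}
least = min(connections, key=connections.get)    # leastconn
print(least)                                     # web2
```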

3. Version Control – GIT:


Git is a version control system that allows multiple people to work on the same codebase simultaneously. Multiple versions of the codebase can exist at the same time, so each person can work on their own version without affecting the work of others. Version control also lets developers track changes to the codebase and revert to previous versions if needed. Git stores the entire history of the codebase, including who made each change and when, which makes it easy to collaborate and track the progress of a project.
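Why Git can track the entire history reliably comes down to hashing: each commit's ID covers its content and its parent's ID, so the newest ID pins every ancestor. A toy model (not Git's real object format):

```python
import hashlib

# Sketch: a commit ID is a hash over the commit's message and its
# parent's ID, chaining the whole history together.
def commit(parent_hash, message):
    data = f"{parent_hash}:{message}".encode()
    return hashlib.sha1(data).hexdigest()

c1 = commit("", "initial commit")
c2 = commit(c1, "add login page")

# Rewriting an earlier commit changes every descendant's hash.
tampered = commit(commit("", "initial commit (edited)"), "add login page")
print(c2 != tampered)  # True
```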
Key concepts you will learn from Version Control – GIT:
 Centralized and Distributed Systems
 Differences between SVN & GIT
 GIT
 GitHub Remote Repository
 Sign up on GitHub
 GIT - Clone / Commit / Push
 GIT Rebase & Merge
 GIT Stash, Reset, Checkout
 GIT Clone, Fetch, Pull
 GIT Branch Strategy
 GIT Branch Management
 GIT Hard & Soft reset

4. GIT Lab:
GitLab is a web-based Git repository manager that provides a complete DevOps platform. It offers features for
source code management, continuous integration, monitoring, and more. GitLab can be used for hosting Git
repositories, managing projects, and facilitating collaboration among developers.

Key concepts you will learn from GIT Lab:


 GIT Lab Installation.
 GIT Lab Configuration.
 Managing Projects in GIT Lab.
 Creating Private Repository.
 Repository Maintenance.
 Set up key for Repository.
 Deleting Repository.

5. Build Tools:
Build tools are software programs used in the DevOps process to automate the stages of software development and testing. They compile, package, test, and deploy code changes, and are designed to improve the efficiency and reliability of software production. Common build tools include ANT and Maven, which are typically driven by CI servers such as Jenkins, Travis CI, and Circle CI.

Key concepts you will learn from Build Tools:


 Java Compiler
 Difference between ANT & MAVEN
 Configure Build.xml
 MAVEN
 Maven Installation
 Maven Build requirements
 Maven POM Builds (pom.xml)
 Maven Build Life Cycle
 Maven Local Repository (.m2)
 Maven Global Repository
 Group ID, Artifact ID, Snapshot
 Maven Dependencies
 Maven Plugins

6. Nexus:
In DevOps environments, Nexus is a tool that Sonatype created for package management and deployment. It
is designed to automate the artefact supply chain, from development to deployment, and facilitate the sharing of
packages among teams and projects. Nexus is capable of serving as both a private and public Maven/Gradle repository,
enabling teams to store and share artefacts securely and efficiently.

Key concepts you will learn from Nexus:


 Sonatype nexus download
 Nexus Configuration
 Configure settings.xml & pom.xml files
 Managing Nexus Releases and Snapshots
 Repository Maintenance
 Nexus user management

7. CI/CD – Jenkins:
Jenkins is a software that allows continuous integration. Jenkins will be installed on a server where
the central build will take place. The following flowchart demonstrates a very simple workflow of how Jenkins
works.
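The basic workflow can be sketched as a sequence of stages that stops at the first failure, which is how a Jenkins build behaves. Stage names and results here are illustrative:

```python
# Sketch of a CI pipeline: stages execute in order and the build stops
# at the first failing stage.
def run_pipeline(stages):
    for name, step in stages:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

stages = [
    ("checkout", lambda: True),
    ("build",    lambda: True),
    ("test",     lambda: False),   # a failing test run
    ("deploy",   lambda: True),    # never reached
]
print(run_pipeline(stages))  # FAILED at test
```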

Key concepts you will learn from CI/CD – Jenkins:


 Artifact Repository Management.
 Dependency Management.
 Build Automation Integration.
 Security and Access Control.
 Repository Health and Monitoring.
 Support for Multiple Repository Formats.
 Continuous Integration and Deployment (CI/CD) Support.
 Collaboration and Workflow Efficiency.
 Policy Enforcement and Governance.

8. NAGIOS:
Nagios® Core™ is an open-source system and network monitoring application. It watches hosts and services that you specify, alerting you when things go bad and again when they recover. Nagios Core was originally designed to run under Linux, although it should work under most other Unix-like systems as well.
Key concepts you will learn from NAGIOS:

 Monitoring NAGIOS.
 Hosts and Services.
 States and Status information.
 Escalation and Dependencies.
 Web Interface.
 Configuration Files.
 Performance Graphs.

9. TERRAFORM:
Terraform is an open-source infrastructure as code (IaC) tool used in DevOps and cloud computing
environments. It is developed by HashiCorp and enables users to define and provision infrastructure in a declarative
configuration language. The primary goal of Terraform is to automate the provisioning and management of
infrastructure resources, such as virtual machines, storage accounts, network configurations, and more, across various
cloud providers and on-premises environments.
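The declarative model can be sketched as a diff between current state and desired configuration, which is roughly what `terraform plan` computes. Resource names and attributes are invented:

```python
# Sketch: compare desired configuration with current state and derive
# the create/update/destroy actions needed to converge them.
def plan(current, desired):
    actions = []
    for name in desired:
        if name not in current:
            actions.append(("create", name))
        elif current[name] != desired[name]:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

current = {"web": {"size": "t2.micro"}}
desired = {"web": {"size": "t2.small"}, "db": {"size": "t2.medium"}}
print(plan(current, desired))  # [('update', 'web'), ('create', 'db')]
```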

Key concepts you will learn from TF:


 Install Terraform.
 Deploy a single server.
 Deploy a single web server.
 Deploy a configurable web server.
 Deploy a cluster of web servers.
 Deploy a load balancer.
 Clean up.

10. ANSIBLE:
Ansible is a simple open-source IT automation engine that automates application deployment, intra-service orchestration, cloud provisioning, and many other IT tasks. Ansible is easy to deploy because it does not use agents or custom security infrastructure. Ansible uses playbooks to describe automation jobs; playbooks are written in YAML, a human-readable data-serialization language commonly used for configuration files, which is very easy for humans to understand, read, and write.
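Idempotence, the property Ansible modules aim for, can be sketched with a task that reports "changed" only when it actually alters state. A toy lineinfile-style task (the option strings are just sample sshd settings):

```python
# Sketch of an idempotent task: running it twice changes the system
# once and reports "changed" only on the first run.
def ensure_line(lines, wanted):
    """Make sure `wanted` is present; report whether anything changed."""
    if wanted in lines:
        return lines, False                # already in desired state
    return lines + [wanted], True

config = ["PermitRootLogin no"]
config, changed1 = ensure_line(config, "PasswordAuthentication no")
config, changed2 = ensure_line(config, "PasswordAuthentication no")
print(changed1, changed2)  # True False
```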

Key concepts you will learn from ANSIBLE:


 Infrastructure as Code (IaC).
 Automation.
 Playbooks.
 Modules.
 Inventory.
 Roles.
 Idempotence.
 Integration with Version Control Systems.
 Dynamic Inventories.

11. DOCKER:
Docker is a platform that enables developers to automate the deployment of applications inside lightweight,
portable containers. These containers can run on any machine that has the Docker software installed, providing a
consistent and reliable environment across different environments, such as development, testing, and production.

To find applications that run with Docker, you need to look for Docker images, also sometimes called
container images. Images can be published to a registry for sharing. The biggest registry is run by Docker,
and is called Docker Hub. You can search and browse for images on Docker Hub in a web browser.
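The layering behind Docker images can be sketched as a stack of dictionaries where the topmost layer containing a file wins, loosely modelling a union filesystem. Paths and contents are made up:

```python
# Sketch: an image is a stack of layers; a file lookup returns the
# version from the topmost layer that has it.
layers = [
    {"/etc/app.conf": "defaults", "/bin/app": "v1"},   # base layer
    {"/etc/app.conf": "tuned"},                        # config layer on top
]

def read_file(layers, path):
    for layer in reversed(layers):        # search top layer first
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

print(read_file(layers, "/etc/app.conf"))  # tuned (top layer wins)
print(read_file(layers, "/bin/app"))       # v1 (from the base layer)
```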
Key concepts you will learn from Docker:
 Containerization Basics.
 Docker Architecture.
 Working with Docker images.
 Container Orchestration.
 Integration with CI/CD.
 Networking and Volumes.
 Scalability and Load Balancing.

12. KUBERNETES:
Kubernetes, also known as K8s, is an open-source container orchestration system that automates the
deployment, scaling, and management of containerized applications. It was originally designed by Google and is now
maintained by the Cloud Native Computing Foundation. Kubernetes provides a platform-agnostic way to manage
containers across different cloud providers and on-premises environments.
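The reconciliation loop at the heart of Kubernetes can be sketched as: compare desired state with observed state and emit corrective actions. A toy replica controller (pod names are invented):

```python
# Sketch: compare the desired replica count with the pods actually
# running and emit the start/stop actions needed to converge.
def reconcile(desired_replicas, running_pods):
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return [("start", i) for i in range(diff)]
    if diff < 0:
        return [("stop", pod) for pod in running_pods[:-diff]]
    return []

print(reconcile(3, ["pod-a"]))                    # start two pods
print(reconcile(1, ["pod-a", "pod-b", "pod-c"]))  # stop two pods
print(reconcile(2, ["pod-a", "pod-b"]))           # [] (steady state)
```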

Key concepts you will learn from Kubernetes:


 Declarative Configuration.
 Pods and containers.
 Scalability and Auto-scaling.
 Rollouts and Rollbacks.
 Storage Orchestration.

13. LINUX ADMIN COMMANDS:


Linux admin commands are used to manage and maintain Linux systems. These commands allow
administrators to perform tasks such as managing user accounts, configuring system settings, monitoring system
performance, and installing and managing software.
Concepts you will learn from Linux Admin Commands:
 File and Directory Management.
 Process Management.
 User and Group Administration.
 System Information.
 Package Management.
 File system Management.
 Remote Administration.
 Shell Scripting.
 System Boot process.
 Performance Monitoring.
