============================
Course Name : 14-DEVOPS-AWS
============================
=============
Course Info
=============
Duration : 4 Months
Daily Class Notes will be shared (soft copy materials) - Lifetime access
#### We will create one WhatsApp group for enrolled students for discussion ####
Note: After this course completion, you can attend interviews with 3 to 4 years of
experience.
=================
Operating System
================
1) Linux OS
2) Shell Scripting
=============
DevOps Tools
=============
3) Nexus & JFrog : Artifactory Servers (to store jar or war files)
10) EFK Stack : To monitor application logs -> (Elasticsearch + FluentD + Kibana)
==========
AWS Cloud
=========
=================
Software Project
=================
=> Collection of programs is called as Software Project
================================
Why we need Software Project ?
================================
======================
Project Architecture
======================
======================
Frontend Technologies
======================
=====================
Backend Technologies
=====================
1) Java
2) .Net
3) Python
4) PHP
5) Node JS
===========
Databases
===========
1) Oracle
2) MySQL
3) PostGres
4) SQL Server
5) Mongo DB
6) Cassandra etc....
===================
What is DevOps ?
==================
=> DevOps process is used to establish collaboration between Development team &
Operations team.
=> The main aim of DevOps is to deliver projects to clients with high quality in
less time.
==========================
What is DevOps Life Cycle
=========================
1) Continuous Planning
2) Continuous Development
3) Continuous Integration
4) Continuous Testing
5) Continuous Deployment
6) Continuous Monitoring
7) Continuous Feedback
=============================================
Roles & Responsibilities of DevOps Engineer
=============================================
9) Monitor CI CD pipelines
================
What is SDLC ?
================
=> SDLC represents Software Project Development process from starting to ending
1) Requirements Gathering
2) Analyze Requirements
3) Planning
4) Implementation (Development)
5) Testing
6) Deployment
7) Monitoring
1) Waterfall Model
2) Agile Model
===============
Waterfall Model
===============
Note: If the client doesn't like our delivery, then the total money and time invested are wasted.
=============
Agile Model
=============
=> Agile is an iterative approach to develop and deliver the application
==========================
What is Infrastructure ?
==========================
The resources that are required to run our business are called as Infrastructure
Ex: Machines, Servers, Databases, Power, Network, Storage, Security & Monitoring
1) Security
2) Scalability
3) Availability
4) Network Issues
5) Disaster Recovery
========================
What is Cloud Computing
=========================
=> It is the process of delivering IT resources on demand basis over the internet
with Pay as you go model.
=> Pay as you go model means use the resources and pay bill for using them just
like our post paid bill / electricity bill/ credit bills etc...
=================
Cloud Providers
================
=> Companies which are providing IT resources over internet are called as Cloud
Providers
============================
Cloud Computing Advantages
===========================
1) Pay as you go
2) Low cost
3) Scalability
4) Availability
5) Reliability
6) Unlimited Storage
7) Security
8) Backup
=======================
What is AWS Cloud ?
=======================
=> 190+ countries are using AWS cloud infrastructure to run their businesses.
==========================
AWS Global Infrastructure
==========================
=> AWS maintains its infrastructure using Regions & Availability Zones
34 - Regions (Geographical Location)
==========
Revision
==========
======================================
Module-2 : Linux OS + Shell Scripting
======================================
1) Windows OS
2) Linux OS
3) Mac OS
============
Windows OS
============
-> Developed by Microsoft Company
Ex: Watching Movies, Playing Games, Browsing, Storing personal data ....
-> Security features are very limited (we need to install anti-virus software)
==========
Linux OS
==========
-> Multi User Based Operating System (multiple users can access at a time)
=================
History of Linux
=================
-> Earlier "Linus Torvalds" used Unix OS and he faced some challenges with Unix OS.
He reported to the company to change Unix OS, but the company rejected his suggestions.
-> Linus Torvalds started developing his own operating system by using "Minix OS"
-> After developing Linux OS, "Linus Torvalds" released Linux OS into the market for
free of cost along with the source code.
-> Several companies downloaded the Linux OS source code, modified it according to
their requirements and released it into the market with their own brand names. Those are
called as Linux Distributions / Flavours.
1) Amazon Linux
2) Ubuntu Linux
3) CentOS Linux
4) Debian Linux
5) Fedora Linux
6) SUSE Linux
7) Kali Linux
8) RED HAT Linux etc..
2) What is Windows OS
3) Linux OS
4) Linux History
5) Linux Distributions
==================================
Environment Setup to Learn Linux
=================================
===============
Linux Commands
===============
cd : change directory
rm -r <dirname>
rm : Remove file
cat : To create files with data, append data to file and display file content
tac : display file content in reverse order (Ex: tac f1.txt)
cp : copy one file's content to another file (Ex: cp f1.txt f2.txt)
Note: How to copy more than one file data to another file
cat m1.txt m2.txt > m3.txt : Copy two files data to 3rd file
wc : word count (it will display no.oflines, no.of words and no.of characters in
given file)
wc m3.txt
=======================
Text Editors in Linux
=======================
vi : visual editor
$ vi linux.txt
Save and Close : Press :wq to save the changes and close the file.
Close without Saving : press :q! for closing file without saving the changes
============
SED command
===========
=> We can replace one word with another word using SED command without opening file
Note: by default sed command will give output to terminal but it won't make changes
to original file. To make changes to original file we need to use -i in sed command
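Ex (assuming f1.txt contains the word 'linux'):
$ sed 's/linux/unix/g' f1.txt        # prints replaced output to terminal only
$ sed -i 's/linux/unix/g' f1.txt     # replaces the word in the file itself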
=============
grep command
=============
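=> grep command is used to search for a given word/pattern in files
Ex (file names below are just examples):
$ grep "error" app.log           # print lines containing 'error'
$ grep -i "error" app.log        # case-insensitive search
$ grep -r "timeout" /var/log/    # search recursively inside a directory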
================================
Working with Zip files in Linux
================================
# Create zip file (all .txt files will be added to notes.zip file)
$ zip notes *.txt
###### Note: man command is used to get documentation about linux commands #####
$ man vi
$ man ls
$ man zip
##########################
02 - Aug - 2023 (Wednesday)
###########################
===========================
locate and find commands
==========================
=> locate and find commands are used for file search
=> locate command will search for the files in locate database.
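Ex:
$ locate f1.txt                   # search using the locate database
$ sudo updatedb                   # refresh the locate database
$ find /home -name "*.txt"        # search for files under /home by name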
========================
Users & Groups in Linux
========================
=> In Every Linux Machine by default one 'root' user account will be available
Note: Root user will have all privileges to perform any operation.
=> When we launch EC2 instance using "Amazon Linux AMI" we will get 'ec2-user' by
default.
Note: For every user one home directory will be created in Linux machine under
'home' directory
=> To get user account information we can use 'id' command in linux machine
$ id <uname>
====================================
Working with User Accounts in Linux
===================================
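Ex (commonly used commands; 'ashok' is just a sample user name):
$ sudo useradd ashok      # create user account
$ sudo passwd ashok       # set password for the user
$ id ashok                # display user account information
$ sudo userdel ashok      # delete user account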
=====================================
Working with User Groups in Linux
=====================================
$ cat /etc/group
Note: When we create user in linux machine, it will create user account + user-
group
# Rename user-group
$ groupmod -n <new-name> <existing-name>
# Rename user
$ usermod -l <new-name> <existing-name>
################
03-Aug-2023(Thu)
################
==========================
File Permissions In Linux
==========================
=> In Linux VM everything is treated as file (Ex: normal files, directory files)
1) READ (r)
2) WRITE (w)
3) EXECUTE (x)
Ex: rwxrwxrwx
Note: "+" is used to add permission and " - " is used to remove permission
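Ex:
$ chmod u+x script.sh     # add execute permission for the file owner
$ chmod o-w f1.txt        # remove write permission for others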
===================================
File Permissions in Numeric Format
===================================
0 : No Permission
1 : Execute
2 : Write
4 : Read
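Ex: 7 = 4+2+1 (rwx), 6 = 4+2 (rw-), 5 = 4+1 (r-x)
$ chmod 755 script.sh     # owner: rwx, group: r-x, others: r-x
$ chmod 644 f1.txt        # owner: rw-, group: r--, others: r--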
============================
Working with chown command
===========================
=> chown command is used to change file owner and file group
Note: We can use userid and group id also in chown command instead of names.
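Ex (assuming user 'ashok' and group 'devops' already exist):
$ sudo chown ashok f1.txt           # change file owner
$ sudo chown ashok:devops f1.txt    # change file owner and group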
#################
04-Aug-2023
###############
===========================
Hard Links and Soft Links
===========================
1) Hard Link
2) Soft Link
---------------------------
Syntax To create Hard Link
--------------------------
$ ln <original-file> <link-file>
$ touch m1.txt
$ ln m1.txt m10.txt
$ cat m10.txt
$ ls -li
Note-1: original file and link file will have same inode number
Note-2: When we write data to original file then it will reflect in link file also.
Note-3: When we delete the original file, the link file is not affected (hard link)
---------------------------
Syntax To create soft Link
---------------------------
$ ln -s <original-file> <link-file>
$ touch f1.txt
$ ln -s f1.txt f10.txt
$ ls -li
$ rm f1.txt
$ ls -li
Note: When we remove original file then soft link file will become dangling. We
can't access it.
=============================
Process Management in Linux
============================
=> In Windows machine we can see running processes using "Task Manager"
$ ps
$ top
=> To kill process in linux machine we will use kill command
$ kill -9 <process-id>
=====================
Networking Commands
=====================
# To check connectivity
$ ping <ip>
$ ping www.google.com
$ ping www.facebook.com
========================
Utility Commands
========================
# Display OS information
$ cat /etc/os-release
===========================
Package Managers in Linux
===========================
Ex: Ubuntu
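Ex (Ubuntu/Debian use apt, Amazon Linux/CentOS use yum):
$ sudo apt update && sudo apt install -y git      # Ubuntu
$ sudo yum install -y git                         # Amazon Linux / CentOS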
=========================
What is a Sudoers File?
========================
=> A Sudoers file is just like any other file on a Linux Operating System.
=> It plays a vital role in managing what a “User” or “Users in a Group” can do on
the system.
======================
Why the name Sudoers?
======================
=> The word Sudo is the abbreviation of the term “Super User Do”.
=> The users with Sudo permission are called as Sudo Users.
=> Management of all these sudo users is done through a file called as Sudoers
File.
=> This file also lives in the /etc directory just like any other configuration
file.
Note: ‘#’ indicates comment in the file, OS will ignore this line while executing.
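Ex (typical default entry present in /etc/sudoers):
root    ALL=(ALL:ALL) ALL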
Read it as — User Root can Run Any Command as Any User from Any Group on Any Host.
→ The first ALL is used to define HOSTS. We can define Hostname/Ip-Address instead
of ALL. ALL means any host.
→ The second and third ALL (ALL:ALL) is User:Group. Instead of ALL we can define a User, or a User
with a group like User:Group. ALL:ALL means all users and all groups.
→ Last ALL is the Command. Instead of ALL, we can define a command or set of
commands. ALL means all commands.
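Ex (illustrative entry, matching the explanation below):
ashok   ALL=(root) /usr/bin/cat /etc/shadow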
Read this as — User “ashok” can Run the command “/usr/bin/cat /etc/shadow” as ROOT
user on all the HOSTS.
=================================
What is .bashrc file in linux ?
=================================
=> For every user account one .bashrc file will be available under user home
directory
=> .bashrc file is hidden file. To see this we need to execute 'ls -la'
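Ex (a minimal sketch -- add an alias to .bashrc and reload it):
$ echo "alias ll='ls -la'" >> ~/.bashrc
$ source ~/.bashrc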
=========================
What is .ssh in linux ?
==========================
=> This is hidden directory which contains authorized_keys (pem file public key)
$ cat .ssh/authorized_keys
====================
Linux Architecture
====================
1) Applications / Commands
4) Hardware
=================
Shell Scripting
=================
=> Scripting means writing set of commands in a file and executing them.
=> Scripting is used to automate our daily routine tasks in the project.
Ex: delete temp files, take files backup, archive log files, monitor servers
etc..
$ sh <file-name>
=========
Script-1
========
whoami
pwd
date
ls -l
touch f2.txt
ls -l
=========
Script-2
========
=========
Script-3
========
read name
echo "Hello $name"
=========
Script-4
========
read a
read b
let sum=$(($a+$b))
echo "Sum : $sum"
========
3.1) if - elif
3.2) for loop
3.3) while loop
===========
Variables
===========
a=10
b=20
name=ashok
$ echo $USER
$ echo $SHELL
===================
Variable Rules
===================
=======================
Command Line Arguments
=======================
=> The arguments which we will pass to script file at the time of execution
ex:
$ sh script5.sh ashok it
=> We can access command line arguments in script file like below
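Ex (script5.sh -- name taken from the example above):
echo "First argument  : $1"
echo "Second argument : $2"
echo "Total arguments : $#"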
============
Script-6
============
==========
Script-7
=========
let result=$(($1*$2))
echo "Result : $result"
====================
Control Statements
====================
=> Script will execute from top to bottom in forward direction (this is default
behaviour)
=> If we want to control script execution flow then we have to use "Control
Statements"
=======================
Conditional statements
=======================
=> Conditional statements are used to execute commands based on given condition
Syntax:
if [ condition ]
then
stmts;
else
stmts;
fi
=> As per the above syntax, if the given condition is satisfied then the if statements will execute,
otherwise the else statements will execute.
Note: If we want to check more than one conditon then "if - elif - else - fi"
============
script - 8
============
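# a minimal if-elif-else sketch (the original script body is not shown)
read num
if [ $num -gt 0 ]
then
echo "positive number"
elif [ $num -lt 0 ]
then
echo "negative number"
else
echo "zero"
fi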
====================
Looping Statements
====================
Ex: print "hello" message for 1000 times (we can't write echo "hello" for 1000
times).
===================================
for loop example (1 to 10 numbers)
===================================
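for((i=1; i<=10; i++))
do
echo "$i"
done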
============================
print numbers from 10 to 1
============================
for((i=10; i>=1; i--))
do
echo "$i"
done
===========
while loop
===========
i=10
while [ $i -ge 1 ]
do
echo "$i"
let i--;
done
i=1
while [ $i -le 10 ]
do
echo "$i"
let i++
done
===========
Functions
===========
=> The big task can be divided into smaller tasks using functions.
-------
Syntax
-------
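function functionName(){
   commands;
}
# calling function
functionName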
===================================================================================
Q) Write shell script to take file name as input and print content of that file
===================================================================================
---------------------------
approach-1 (read command)
---------------------------
function readFileData(){
echo "enter file name"
read filename
echo "#### file reading start #####"
cat $filename
echo "##### file reading end #####"
}
#calling function
readFileData
----------------------
approach-2 (cmd arg)
----------------------
filename=$1
function readFileData(){
echo "#### file reading start #####"
cat $filename
echo "##### file reading end #####"
}
#calling function
readFileData
===========================================================================
Q) Check presence of given file, if it is not available create that file
============================================================================
# file checking
read filename
if [ -f "$filename" ] ; then
echo "File already exists"
else
echo "File not available, hence creating"
touch $filename
fi
===================================================================================
Q) Check presence of given directory, if it is not available create that directory
===================================================================================
# file checking
read dirname
if [ -d "$dirname" ] ; then
echo "Directory already exists"
else
echo "Directory not available, hence creating"
mkdir $dirname
fi
===========================
Shell Script Assignments
===========================
Task-1 : Take a number from user and check given number is +ve or -ve
Task-2 : Take a number from user and check given number is even or odd
Task-3 : Take a number from user and check given number is prime number or not
Task-4 : Take a number from user and print multiplication table of given number
like below
5 * 1 = 5
5 * 2 = 10
5 * 3 = 15
..
5 * 10 = 50
=========
Summary
=========
1) What is Shell
2) What is Kernel
3) What is Scripting
4) Why we need Scripting
5) Writing Shell Script files (.sh extension)
6) Executing shell script files ($ sh filename)
7) Variables
8) Taking input using shell script
9) Commandline arguments
10) Conditional Statements (if-elif-else-fi)
11) Working with Loops (for and while)
====================
CRON JOBS in Linux
====================
=> CROND will check for cron schedules every minute. If any job is scheduled then
crond will execute that job based on given schedule.
=================
CRON JOB Syntax
=================
* * * * * <command/script-file>
=> First * represents minutes (0 - 59)
=> Second * represents hours (0 - 23)
=> Third * represents day of month (1 - 31)
=> Fourth * represents month (1 - 12)
=> Fifth * represents day of week (0 - 6, Sun to Sat, where 0 is Sunday)
========================
Sample CRON Expressions
=======================
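Ex (script path below is just an example):
* * * * * sh /home/ubuntu/task.sh        -> every minute
0 * * * * sh /home/ubuntu/task.sh        -> every hour
30 2 * * * sh /home/ubuntu/task.sh       -> every day at 02:30 AM
0 9 * * 1 sh /home/ubuntu/task.sh        -> every Monday at 09:00 AM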
=========================
What is crontab file ?
=========================
=> In Linux machine, for every user account one crontab file will be available.
$ crontab -e
$ crontab -l
# Remove Crontab file
$ crontab -r
==================
CRON Job practice
==================
$ vi task.sh
touch /home/ubuntu/f1.txt
touch /home/ubuntu/f2.txt
$ chmod +x task.sh
$ crontab -e
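# add below entry in the crontab (runs task.sh every minute -- path assumed)
* * * * * sh /home/ubuntu/task.sh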
$ ls -l
========================
Apache Maven
========================
1) What is Java ?
2) Java Project Types
3) Java Project Execution Flow
4) Java Projects Build Process
5) What is Build Tool
6) Why we need tools
7) Maven Introduction
8) Maven Setup
9) Maven Dependencies
10) Maven Goals
11) Maven Repositories
12) Working with Maven
==================
Java Introduction
==================
1) Stand-alone applications
2) Web applications
=> The application which is accessible only in one computer is called as stand-
alone application.
=> The application which can be accessed by multiple users at a time is called as
web application.
============================
Java Project Execution Flow
============================
2) Compile source code (java compiler) => It generates byte code (.class)
5) If project is packaged as war then we need to deploy war file in server (Ex:
tomcat)
=============================
Java Project Build Process
=============================
=> To avoid manual build process, build tools came into picture
=================
What is Maven ?
=================
====================
What Maven can do ?
====================
===============================
Maven Setup in Windows Machine
===============================
===================
Maven Terminology
==================
=> Archetype
=> Group ID
=> Artifact ID
=> Version
=> Packaging Type
=> Maven Dependencies
=> Maven Goals
=> Maven Repositories
=> Maven Dependencies are nothing but libraries required for project development
=========================================
Creating Maven Stand-alone application
=========================================
=> Open command prompt and execute below command to create maven project
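Ex (a typical archetype command; groupId/artifactId below are just sample values):
$ mvn archetype:generate -DgroupId=in.ashokit -DartifactId=01-maven-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false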
=> Inside the project we can see 'src' folder and pom.xml file
=> From cmd, Go inside project directory and execute maven goals
$ cd <project-directory>
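$ mvn clean package       (example goal to build the project)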
=> Once build got success we can see 'target' directory inside project directory.
It contains byte code and our project packaged file (jar)
====================
Maven Dependencies
====================
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>6.0.11</version>
</dependency>
=> Add above dependency in project pom.xml file under <dependencies/> section and
execute maven goals.
=============
Maven Goals
=============
compile : Compile project source code ( convert .java files to .class files)
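test : Run unit test cases of the project
package : Package the project as jar/war (based on packaging type in pom.xml)
install : Package the project and copy it into the local repository (.m2)
clean : Delete the 'target' directory
deploy : Upload the packaged artifact to the configured remote/artifactory repository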
=========================================
Creating Maven Web application
=========================================
=> Open command prompt and execute below command to create maven web project
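Ex (a typical archetype command for a web app; groupId/artifactId are sample values):
$ mvn archetype:generate -DgroupId=in.ashokit -DartifactId=01-maven-web-app -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false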
=> From cmd, Go inside project directory and execute maven goals
$ cd <project-directory>
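$ mvn clean package       (example goal to build the project)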
=> Once build got success we can see 'target' directory inside project directory.
It contains byte code and our project packaged file (war).
===================
Maven Repositories
===================
1) Local Repository
2) Central Repository
3) Remote Repository
=> Local Repository will be created in our machine (.m2 folder)
=> Remote Repository will be maintained by our company to store shared libraries.
Note: Only First time maven will download dependencies from central / remote
repository to local repository.
============================
Working with maven in linux
============================
$ mvn -v
$ java -version
$ mvn -version
====================
Maven - Summary
===================
1) What is Java
2) Stand-alone vs Web Application
3) Java Project Execution Flow
4) Java Project Build Process
5) Build Tools
6) What is Maven
7) What maven can do
8) Maven Setup in Windows
9) Maven Terminology
10) Maven Project Creation
11) What is pom.xml
12) Maven Dependencies
13) Maven Goals
14) Maven Repositories
15) Working with Maven in Linux VM
=========
Git Hub
=========
-> Git Hub is a platform which is used to store project related files/code
-> All the developers can connect to project repository to store all the source
code (Code Integration will become easy)
- who modified
- when modified
- what modified
- why modified
===================
Environment Setup
===================
===============================
What is Git Hub Repository ?
===============================
=> Repository is a place where we can store project source code / files
=> Project team members will connect with git repository using its URL.
1) Public Repo (anybody can see & you choose who can commit)
====================
Git Bash Commands
====================
git restore : To unstage the files & to discard changes made in files
git pull : To take latest changes from remote repo to local repo
===============
Git Branches
===============
=> Branches are used to maintain multiple code bases in the single repository
1) main (default)
2) develop
3) feature
4) qa
5) uat
6) release
=> If we have branches in the git repo then multiple teams can work in parallel without
affecting other teams' code.
Note: When we execute 'git clone' command then we will get code from default branch
which is 'main'.
=> In git bash we can switch from one branch to another branch using the 'git checkout'
command
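Ex:
$ git branch                    # list branches
$ git checkout develop          # switch to the 'develop' branch
$ git checkout -b feature-1     # create a new branch and switch to it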
===========================
What is Branch Merging ?
===========================
=> Merging changes from one branch to another branch is called as Branch merging.
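Ex (merge 'develop' branch changes into 'main'):
$ git checkout main
$ git merge develop
$ git push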
====================
Git Branches Task
====================
==========================
What is .gitignore file ?
==========================
=> .gitignore is used to exclude files & folders from our commits
Ex: In a maven project, we shouldn't commit the target folder to the git repository, hence
we can give this info to git using .gitignore file.
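Ex (a typical .gitignore for a maven project):
target/
*.log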
========================
What is git Conflict ?
========================
=> When two people make changes in the same file on the same line, then we will get a
conflict problem.
============================================
1) How to remove git local commits ?
Note: After executing git revert we have to execute git push also
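Ex (assumed usage):
$ git reset --hard HEAD~1     # remove the last local commit (not yet pushed)
$ git revert <commit-id>      # create a new commit that undoes an already pushed commit
$ git push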
=> It is used to save working tree changes to temporary area and make working tree
clean.
$ git stash
=> We can get stashed changes back using 'git stash apply'
# Creating tag
$ git tag <tag-name>
================================
1) git config
2) git init
3) git status
4) git add
5) git commit
6) git push
7) git log
8) git rm
9) git clone
10) git branch
11) git checkout
12) git pull
13) git restore
14) git reset
15) git revert
16) git merge
17) git stash
18) git stash apply
==========================
===========================
Realtime Work Process
===========================
1) Developers will send request for DevOps team to create git repository for the
project with manager approval.
2) DevOps team will provide Repo Access for team members (RBAC)
3) DevOps team will create git repo and will share that repo URL with Dev Team.
4) Dev team will push their code into git repo and Dev Team will create required
branches also
Note: DevOps team will decide Branching strategy for the project.
5) DevOps team will clone git repo for build and deployment process
===========
WebServers
===========
=> Users can access our web application by sending request to server
=> We have several servers in the market to run our web applications
a) Tomcat
b) JBoss
c) GlassFish
d) Weblogic
e) WebSphere
==============
Tomcat Server
=============
=> Tomcat is a web server developed by Apache Organization
=> Tomcat server runs on 8080 port number (we can change it)
================================
Tomcat Server folder structure
================================
bin: It contains files to start & stop server (windows : bat , Linux : sh)
temp : Temporary files will be created here (We can delete them)
=============
Tomcat Setup
=============
URL : https://tomcat.apache.org/download-90.cgi
$ wget <url>
=> Go inside tomcat server bin directory and execute below command
$ sh startup.sh
=> Enable tomcat server port in security group inbound rules (8080)
URL : http://ec2-public-ip:8080/war-file-name
==================================
*Lab Task To Perform On Linux VM*
==================================
3) Copy war file into tomcat server webapps folder from target
============================
Tomcat Admin Console Access
==============================
=> By default the Host Manager is only accessible from a browser running on the
same machine as Tomcat. If you wish to modify this restriction, you'll need to edit
the Host Manager's context.xml file.
=> In Manager context.xml file, change <Valve> section like below (allow attribute
value changed)
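Ex (a typical change -- allow all IPs to access the manager app; this is an assumed sketch):
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow=".*" />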
============================================================================
Add tomcat users in "<tomcat-folder>/conf/tomcat-users.xml" file like below
============================================================================
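Ex (username/password below are just sample values):
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="admin" roles="manager-gui,manager-script"/>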
================================================================================
We can change tomcat server default port in tomcat/conf/server.xml file
================================================================================
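Ex (change the port attribute value in the <Connector> element, e.g. 8080 -> 9090):
<Connector port="9090" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />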
-> When we change tomcat port number in server.xml file then we have to enable that
port in Security Group which is associated with our EC2 instance.
===========
Sonar Qube
===========
=> Using SonarQube we can perform code review to identify developers mistakes in
the code
=> DevOps team is responsible to generate Project Code Review Report and share it
with Development team.
====================
Sonar Issues
====================
=> SonarQube server will identify below types of issues in the project
========================
Sonar Quality Profiles
========================
=> In SonarQube, for every language one Quality Profile is available with a set of
rules to perform code review.
=> When we perform code review using Sonar, it will identify which language our project was
developed in, and based on that it will execute that language-specific quality profile to
perform code review.
Note: We can create our own Quality Profiles to customize code review for our projects.
====================
Sonar Quality Gate
====================
=> Quality Gate Represents overall project code quality is Passed or Failed
Note: If Code Quality Gate is Failed, we should not deploy that code.
=======================
SonarQube Server Setup
=======================
===============================
SonarServer Setup in Linux
===============================
-> Create EC2 instance with 4 GB RAM (t2.medium) (Amazon Linux AMI)
$ sudo su
$ cd /opt
$ wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.8.zip
$ unzip sonarqube-7.8.zip
**** Note: SonarQube server will not run with root user **************************
$ useradd sonar
$ visudo
-> Go to sonar bin directory then goto linux directory and run sonar server
$ cd /opt/sonarqube-7.8/bin/linux-x86-64
$ sh sonar.sh start
$ sh sonar.sh status
Ex: To change the default port, update sonar.web.port in the conf/sonar.properties file (Ex: sonar.web.port=6000)
URL : http://EC2-VM-IP:9000/
-> After login, we can go to Security and we can enable Force Authentication.
Note: Once your work is completed, stop your EC2 instance, because we are using
t2.medium and a bill will be generated.
$ sh sonar.sh status
$ cd ../bin/
$ sh sonar.sh start
$ sh sonar.sh status
================================================
Integrate Sonar server with Java Maven App
=================================================
<properties>
<sonar.host.url>http://15.207.221.244:9000/</sonar.host.url>
<sonar.login>admin</sonar.login>
<sonar.password>admin</sonar.password>
</properties>
$ mvn sonar:sonar
-> After build success, goto sonar dashboard and verify the results
Note: Instead of username and pwd we can configure sonar token in pom.xml
==========================
Working with Sonar Token
==========================
-> Go to Sonar Server Dashboard -> Login -> Click on profile -> My Account ->
Security -> Generate Token
-> Copy the token and configure that token in pom.xml file like below
<sonar.host.url>http://15.207.221.244:9000/</sonar.host.url>
<sonar.login>8114ea8a4a594824e1ff08aa192b59befbbae96e</sonar.login>
=================
Quality Profile
=================
-> For each programming language sonar qube provided one quality profile with set
of rules
-> We can create our own quality profile based on project requirement
- Name : SBI_Project_QP
- Language: Java
- Parent : None
Note: We can make our quality profile as default one then it will be applicable for
all the projects which gets reviewed under this sonar server.
Note: If we have any common ruleset for all projects then we can create one quality
profile and we can use that as parent quality profile for other projects.
-> We can configure custom quality profile to a specific project using below steps
==============
Quality Gate
==============
-> Quality Gate represents set of metrics to identify project quality is Passed or
Failed
===========
Conclusion
===========
-> If project quality gate is failed then we should not accept that code for
deployment.
-> If project is having Sonar issues then development team is responsible to fix
those issues
-> As a DevOps engineer, we will perform Code Review and we will send Code Review
report to Development team (we will send sonar server URL to development team)
=========
Summary
=========
1) What is SonarQube
7) Quality Profiles
8) Quality Gates
##################
Sonatype Nexus
##################
-> Github is a SCM software which is used to store source code of the project
-> Nexus is Artifact Repository which is used to store build artifacts (jar / war)
##################
Nexus Setup
##################
$ sudo su -
$ cd /opt
Links to download :
https://help.sonatype.com/repomanager3/product-information/download
# latest version
$ wget https://download.sonatype.com/nexus/3/nexus-3.40.1-01-unix.tar.gz
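# extract the downloaded archive (creates /opt/nexus-3.40.1-01)
$ tar -xvf nexus-3.40.1-01-unix.tar.gz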
$ mv /opt/nexus-3.40.1-01 /opt/nexus
# As a good security practice, it is not advised to run the nexus service as a root
user, so create a new user called nexus and grant sudo access to manage nexus
services as follows.
$ useradd nexus
# Add below line in sudoers file (just below root user details we can add it)
nexus ALL=(ALL) NOPASSWD: ALL
$ vi /opt/nexus/bin/nexus.rc
run_as_user="nexus"
$ ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
URL : http://IPAddress:8081/
# Default Username
User Name: admin
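Password : the initial admin password is generated in the admin.password file (typically under the sonatype-work/nexus3 directory; the exact path depends on the install location)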
# To change the Nexus default port (8081), edit /opt/nexus/etc/nexus-default.properties
#################################
Integrate Maven App with Nexus
#################################
1) snapshot
2) release
-> If project is under development then that project build artifacts will be stored
into snapshot repository
-> If project development completed and released to production then that project
build artifacts will be stored to release repository
Note: Based on <version/> name available in project pom.xml file it will decide
artifacts should be stored to which repository
-> Nexus Repository details we will configure in project pom.xml file like below
<distributionManagement>
<repository>
<id>nexus</id>
<name>Ashok IT Releases Nexus Repo</name>
<url>http://15.206.128.43:8081/repository/ashokit-release/</url>
</repository>
<snapshotRepository>
<id>nexus</id>
<name>Ashok IT Snapshots Nexus Repo</name>
<url>http://15.206.128.43:8081/repository/ashokit-snapshot/</url>
</snapshotRepository>
</distributionManagement>
-> In settings.xml file, under <servers> tag add below <server> tag
<server>
<id>nexus</id>
<username>admin</username>
<password>admin</password>
</server>
-> Once these details are configured then we can run below maven goal to upload
build artifacts to Nexus Server
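Ex:
$ mvn clean deploy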
Note: When we execute maven deploy goal, internally it will execute 'compile + test
+ package + install + deploy' goals.
##################
Remote Repository
##################
-> Remote repository used for shared libraries (common jars required for multiple
projects)
-> If we want to use few jar files in multiple projects in the company then we will
use Remote Repository to store those jars (libraries).
-> Go to Repositories
-> Create New Repository
-> Choose Maven (Hosted) Repository
-> Give a name for Repository (Ex: ashokit-remote-repository) & Complete the
process
-> Go to BrowseSection
-> Select Remote Repository (By default it is empty)
-> Click on Upload Component
-> Upload Jar file and give groupId, artifactId and Version
groupId : in.ashokit
artifactId : pwd-utils
version : 1.0
-> Take dependency details of uploaded jar file and add in project pom.xml as a
dependency like below
<dependency>
<groupId>in.ashokit</groupId>
<artifactId>pwd-utils</artifactId>
<version>1.0</version>
</dependency>
-> We need to add Remote Repository Details in pom.xml above <dependencies/> tag
<repositories>
<repository>
<id>nexus</id>
<url>http://15.206.128.43:8081/repository/ashokit-remote-repo/</url>
</repository>
</repositories>
-> After adding the remote repository details in pom.xml then execute maven package
goal and see dependency is downloading from nexus repo or not.
=========================================
How to resolve HTTP Mirror Block Issue ?
=========================================
<mirror>
<id>maven-default-http-blocker</id>
<mirrorOf>dummy</mirrorOf>
<name>Pseudo repository to mirror external repositories initially using
HTTP.</name>
<url>http://0.0.0.0/</url>
<blocked>false</blocked>
</mirror>
===============
Nexus Summary
===============
=======================
CI CD Server (Jenkins)
=======================
CI : Continuous Integration
CD : Continuous Delivery
===========================
What is Build & Deployment
===========================
=> In a single day, code will be committed to the GitHub repository multiple times by the
Development team, so we have to perform the build and deployment process multiple times.
Note: If we do the build and deployment process manually, then it is a time-consuming
and error-prone process.
=> To overcome above problems, we need to automate Project Build and Deployment
process.
=> To automate project build and deployment process we will use JENKINS.
===================
What is Jenkins ?
===================
=> Using Jenkins we can deploy any type of project (ex: java, python, dot net,
react, angular).
================
Jenkins Setup
================
URL : http://ec2-public-ip:8080/
pwd : 5fe6ddcc9db244cab6aca5ccf2d6a83a
-> Select "Install Suggested Plugins" card (it will install those plugins)
===============================
Creating First Job in Jenkins
===============================
6) Click on 'Build Number' and then click on 'Console Output' to see job execution
details.
$ cd /var/lib/jenkins/workspace/
7) Go to Jenkins home directory and check for the job name --> check the file
created inside the job
=========================================================
Jenkins Job with GIT Hub Repo + Maven - Integration
=========================================================
==================================
Maven Installation In Jenkins:
==================================
Jenkins Dashboard -> Manage Jenkins --> Global Tools Configuration -> Add maven
==================================
Sample Git Repo URLs For Practice
==================================
============================================================
JOB-2 :: Steps To Create Jenkins Job with Git Repo + Maven
============================================================
3) Access Jenkins Server Dashboard and Login with your jenkins credentials
6) Click on 'Build Number' and then click on 'Console Output' to see job execution
details.
=> Go to jenkins workspace and then go to job folder then goto target folder there
we see war file created.
-----------------------------------------------------------------------------------
URL : http://EC2-VM-IP:8080/exit/
1) Go to Jenkins Dashboard -> Manage Jenkins --> Manage Plugins -> Goto Available
Tab -> Search For
"Deploy To Container" Plugin -> Install without restart.
4) Run the job now using 'Build Now' option and see 'Console Output' of the job
===================================================
How to Create Jenkins Jobs with Build Parameters
===================================================
=> Build Parameters are used to supply dynamic inputs to run the Job. Using Build
Parameters we can avoid hard coding.
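Ex (a minimal sketch -- assuming a string parameter named BRANCH was added to the job; inside
an 'Execute shell' build step the parameter is available as an environment variable):
echo "Building branch : $BRANCH"
git clone -b $BRANCH https://github.com/ashokitschool/maven-web-app.git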
====================================
User & Roles Management In Jenkins
====================================
=> In Our Project multiple teams will be available
=> For every Team member Jenkins login access will be provided.
Note: Every team member will have their own user account to login into Jenkins.
=> Operations team members are responsible to create / edit / delete / run jenkins
jobs
=> Dev and Testing team members are only responsible to run the jenkins job.
================================================
How to create users and manage user permissions
================================================
Note: By default admin role will be available and we can create custom role based
on requirement
-> In Role we can configure what that Role assigned user can do in jenkins
=====================================
Working with User Groups in Jenkins
=====================================
=> This plugin allows you to define roles and assign them to users or groups.
=> Click "Add" to create a new role, and specify the permissions for that role.
=> After creating roles, go to "Manage Jenkins" > "Manage Users & Roles."
=> Select a user and click "Assign Roles" to add them to one or more roles.
========================================
Jenkins - Master & Slave Architecture
========================================
=> If we use single machine to run Jenkins server then burden will be increased if
we run multiple jobs at a time.
=> To execute multiple jobs in parallel we will use Master & Slave Configuration
=> Master & Slave configuration is used to reduce burden on Jenkins Server by
distributing tasks/load.
================
Jenkins Master
===============
=> The machine which contains Jenkins Server is called as Jenkins Master machine.
Note: We can run jobs on Jenkins Master machine directly, but it is not recommended.
==============
Jenkins Slave
==============
=> The machine which is connected with 'Jenkins Master' machine is called as
'Jenkins-Slave' machine.
=> Slave Machine will receive tasks from 'Master Machine' for job execution.
===============================
Step-1 : Create Jenkins Master
==============================
===============================
Step-2 : Create Jenkins Slave
===============================
$ mkdir slavenode
=====================================================
Step-3: Configure Slave Node in Jenkins Master Node
=====================================================
1) Go to Jenkins Dashboard
2) Go to Manage Jenkins
3) Go to Manage Nodes & Clouds
4) Click on 'New Node' -> Enter Node Name -> Select Permanent Agent
5) Enter Remote Root Directory ( /home/ubuntu/slavenode )
6) Enter Label name as Slave-1
7) Select Launch Method as 'Launch Agent Via SSH'
8) Give Host as 'Slave VM DNS URL'
9) Add Credentials ( Select Kind as : SSH Username with private key )
10) Enter Username as : ubuntu
11) Select Private Key as 'Enter Directly' and add private key
12) Select Host Key Strategy as 'Manually Trusted Key Verification Strategy'
13) Click on Apply and Save (We can see configured slave)
Note: Under the General Section of the job creation process, select "Restrict Where This
Project Can Run", enter the Slave Node Label name and finish job creation.
Note: Job will be executed on Slave Node (Go to Job Console Output and verify
execution details)
=> Build & Deployment
=> Challenges in Manual Build & Deployment
=> Automated Build & Deployment
=> CI & CD
=> Jenkins Introduction
=> Jenkins Setup in Linux
=> Jenkins Job Creation
=> Jenkins Build Parameters
=> User & Role Management in Jenkins (RBAC)
=> Git + Maven + Tomcat + Jenkins
=> Master & Slave Configuration
=================
Jenkins Pipeline
=================
=> Pipeline means set of steps to automate build & deployment process.
=> Using Pipelines we can handle complex build & deployment tasks.
a) Scripted Pipeline
b) Declarative Pipeline
==============================
Jenkins Declarative Pipeline
==============================
pipeline {
agent any
stages {
stage('Git Clone') {
steps {
echo 'Cloning Repository....'
}
}
stage('Maven Build'){
steps{
echo 'Maven Build....'
}
}
stage('Tomcat Deploy'){
steps{
echo "War deployed to tomcat"
}
}
}
}
====================================
Declarative Pipeline (Git + Maven)
====================================
pipeline {
agent any
tools{
maven "Maven-3.9.4"
}
stages {
stage('Clone') {
steps {
git 'https://github.com/ashokitschool/maven-web-app.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
}
}
==================
Scripted Pipeline
==================
node {
stage('Git Clone') {
git credentialsId: 'GIT-Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
}
stage('Maven Build'){
def mavenHome = tool name:"Maven-3.9.4", type: "maven";
def mavenPath = "${mavenHome}/bin/mvn";
sh "${mavenPath} clean package"
}
}
##########################################
DevOps Project Setup with CI CD Pipeline
##########################################
1) Git Hub
2) Maven
3) SonarQube
4) Nexus Repo
5) Tomcat
6) Jenkins
##################
Pipeline creation
##################
stage('clone repo') {
=========================================================
2) Create Maven Build Stage (Add maven in global tools)
=========================================================
============================
3) Create SonarQube stage
===========================
Ex: 4b10dc9d10194f31f15d0233bdf8acc619c5ec96
-> Manage Jenkins -> Plugins -> Available -> Sonar Qube Scanner Plugin -> Install
it
-> Manage Jenkins -> Configure System -> Sonar Qube Servers -> Add Sonar Qube
Server
- Name : Sonar-Server-7.8
- Server URL : http://52.66.247.11:9000/ (Give your sonar server url here)
- Add Sonar Server Token
-> Once above steps are completed, then add below stage in the pipeline
stage('SonarQube analysis') {
withSonarQubeEnv('Sonar-Server-7.8') {
def mavenHome = tool name: "Maven-3.8.6", type: "maven"
def mavenCMD = "${mavenHome}/bin/mvn"
sh "${mavenCMD} sonar:sonar"
}
}
=======================
4) Create Nexus Stage
======================
-> Run nexus VM and create nexus repository
-> Install Nexus Repository Plugin using Manage Plugins ( Plugin Name : Nexus
Artifact Uploader)
=========================
5) Create Deploy Stage
=========================
stage ('Deploy'){
sshagent(['Tomcat-Server-Agent']) {
sh 'scp -o StrictHostKeyChecking=no target/01-maven-web-app.war ec2-user@43.204.115.248:/home/ec2-user/apache-tomcat-9.0.80/webapps'
}
}
################
Final Pipeline
################
node {
stage('Git Clone'){
git credentialsId: 'GIT-CREDENTIALS', url: 'https://github.com/ashokitschool/maven-web-app.git'
}
stage('Maven Build'){
def mavenHome = tool name: "Maven-3.9.4", type: "maven"
def mavenPath = "${mavenHome}/bin/mvn"
sh "${mavenPath} clean package"
}
stage('Code Review') {
withSonarQubeEnv('Sonar-Server-7.8') {
def mavenHome = tool name: "Maven-3.9.4", type: "maven"
def mavenCMD = "${mavenHome}/bin/mvn"
sh "${mavenCMD} sonar:sonar"
}
}
stage('Quality Gate') {
timeout(time: 1, unit: 'HOURS') {
def qg = waitForQualityGate()
if (qg.status != 'OK') {
error "Quality Gate failed: ${qg.status}"
}
}
}
stage('Nexus Upload'){
nexusArtifactUploader artifacts: [[artifactId: '01-Maven-Web-App',
classifier: '', file: 'target/01-maven-web-app.war', type: 'war']], credentialsId:
'Nexus-Credentials', groupId: 'in.ashokit', nexusUrl: '3.108.217.159:8081',
nexusVersion: 'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repo',
version: '1.0-SNAPSHOT'
}
stage ('Deploy'){
sshagent(['Tomcat-Server-Agent']) {
sh 'scp -o StrictHostKeyChecking=no target/01-maven-web-app.war ec2-user@43.204.115.248:/home/ec2-user/apache-tomcat-9.0.80/webapps'
}
}
}
===========================
Pipeline Conditions
===========================
pipeline {
agent any
stages {
stage('Build') {
steps {
// Your build steps here
}
}
stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('YourSonarQubeServer') {
sh 'mvn clean package sonar:sonar'
}
}
}
stage('Quality Gate') {
steps {
timeout(time: 1, unit: 'HOURS') {
script {
def qg = waitForQualityGate()
if (qg.status != 'OK') {
error "Quality Gate failed: ${qg.status}"
}
}
}
}
}
stage('Deploy') {
when {
expression { currentBuild.resultIsBetterOrEqualTo('SUCCESS') }
}
steps {
// Your deployment steps here
}
}
}
}
################################
Email Notifications In Jenkins
################################
-> With this option we can send email notification to team members after jenkins
job execution completed
Note: Under Advanced section add your gmail account credential for authentication
purpose.
DL : sbiteam@tcs.com
======================================
Scripted Pipeline Email Notification
=======================================
node {
stage('Demo'){
echo 'Hello world'
}
// send to email
emailext (
subject: "STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
body: """<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at "<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>"</p>""",
to: 'ashokitschool@gmail.com',
attachLog: true
)
}
==========================================
Declarative Pipeline Email Notification
==========================================
pipeline {
agent any
tools{
maven "Maven-3.9.4"
}
stages {
stage('Clone') {
steps {
git 'https://github.com/ashokitschool/maven-web-app.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
}
post {
failure {
emailext(
subject: "Build Failed: ${currentBuild.fullDisplayName}",
body: "The build ${currentBuild.fullDisplayName} failed. Please check the console output for more details.",
to: 'ashokitschool@gmail.com',
from: 'ashokit.classes@gmail.com',
attachLog: true
)
}
success {
emailext(
subject: "Build Successful: ${currentBuild.fullDisplayName}",
body: "The build ${currentBuild.fullDisplayName} was successful.",
to: 'ashokitschool@gmail.com',
from: 'ashokit.classes@gmail.com',
attachLog: true
)
}
}
}
====================================
Jenkins Job with Parallel Stages
====================================
pipeline {
agent any
stages {
stage('Clone') {
steps {
echo 'Cloning...'
}
}
stage('Build') {
steps {
echo 'Building......'
}
}
stage('Parallel Stage') {
parallel {
stage('Test') {
steps {
echo 'Testing......'
}
}
stage('Code Review') {
steps {
echo 'Running tasks in parallel - code review'
}
}
stage('Nexus Upload') {
steps {
echo 'Running tasks in parallel - nexus upload'
}
}
}
}
stage('Deploy') {
steps {
echo 'Deploying...'
}
}
}
}
===========================================
Working with Shared Libraries in Jenkins
===========================================
=> Create git repo and push shared libraries related groovy files
=> Configure Shared Libraries in Jenkins (Manage Jenkins -> System -> Global
Pipeline Libraries)
@Library('ashokit-shared-lib') _
pipeline {
agent any
stages{
stage('one'){
steps{
welcome( )
}
}
stage('two'){
steps{
script{
calculator.add(10,10)
calculator.add(20,20)
}
}
}
}
}
=======================================
Jenkins Pipeline with Shared Library
=======================================
@Library('ashokit_shared_lib') _
pipeline{
agent any
tools{
maven "Maven-3.9.4"
}
stages{
stage('Git Clone'){
steps{
gitClone('https://github.com/ashokitschool/maven-web-app.git')
}
}
stage('Build'){
steps{
mavenBuild()
}
}
stage('Code Review'){
steps{
sonarQube()
}
}
}
}
=====================
What is Jenkinsfile
=====================
======================================
Jenkins Pipeline with try-catch blocks
======================================
pipeline {
agent any
stages {
stage('Build') {
steps {
script {
try {
// Code that might throw an exception
sh 'make -B'
} catch (Exception e) {
// Handle the exception
echo "Build failed: ${e.message}"
currentBuild.result = 'FAILURE'
}
}
}
}
stage('Test') {
steps {
script {
try {
// Code that might throw an exception
sh 'make test'
} catch (Exception e) {
// Handle the exception
echo "Tests failed: ${e.message}"
currentBuild.result = 'FAILURE'
}
}
}
}
}
post {
always {
echo "Always i will execute"
}
success {
echo "Pipeline succeeded!"
}
failure {
echo "Pipeline failed!"
}
}
}
===========================================
What is Multi Branch Pipeline in Jenkins ?
===========================================
=> In realtime, we will have multiple branches in git repo like below
a) main
b) develop
c) feature
d) release
=> We can create one pipeline and we can build the code from multiple branches at a
time using "Multi Branch Pipeline" concept.
=> When we create Multi Branch Pipeline, it will scan all the branches in given git
repo and it will execute pipeline for all branches.
Note: When we run the multi branch pipeline for the second time, it will verify which branches
have code changes and it will execute the pipeline only for those branches.
================
Jenkins Summary
================
===========
AWS Cloud
===========
================
Course Content
================
=============
Course Info
=============
Duration : 45 Days
=====================
Cloud Service Models
=====================
=============
What is AWS
============
=================
Regions & AZ's
=================
Availability Zone -> Data Center (server room) -> 102 Az's
==================
AWS Account Setup
==================
======================
AWS Services Overview
======================
11) EFS : Elastic File System (share files with multiple EC2 instances)
==========================================
How to create infrastructure in aws cloud
==========================================
1) Web console (gui)
2) AWS CLI
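Ex (AWS CLI commands, assuming credentials are already configured with 'aws configure'):
$ aws s3 ls                      # list S3 buckets
$ aws ec2 describe-instances     # describe EC2 instances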
=> A software project will have 3 layers
1) Frontend
2) Backend
3) Database
-> End users will communicate with our application using frontend (user interface).
-> When end-user perform some operation in the front-end then it will send request
to backend. Backend contains business logic of the application.
###########################################
Challenges with Database Setup on our own
###########################################
-> If we use AWS cloud, then AWS will take care of all the above works which are
required to setup Database for our application.
-> In AWS, we have RDS service to create and setup database required for our
applications.
-> We just need to pay the money and use Database using AWS RDS service. DB setup
and maintenence will be taken care by AWS people.
** Using RDS we can Easily set up, operate, and scale a relational database in the
cloud ***
######################################
Steps to create MYSQL DB using AWS RDS
######################################
4) Click on 'Create Database' (It will take few minutes of time to create)
###################
MySQL DB Properties
###################
Endpoint : database-1.cbair9bd9y7d.ap-south-1.rds.amazonaws.com
Uname : admin
Pwd : ashokitdevdb
Port : 3306 (it is default port for mysql db)
#############################
Steps to test MYSQL DB Setup
#############################
Link : https://aka.ms/vs/17/release/vc_redist.x64.exe
Link : https://dev.mysql.com/downloads/workbench/
4) Once we are able to connect with Database then we can execute below queries in
Workbench
###############
MySQL Queries
###############
show databases;
use sbidb;
show tables;
####################################
Working with MySQL client in Ubuntu
####################################
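# install mysql client (Ubuntu)
$ sudo apt update
$ sudo apt install -y mysql-client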
##########################################
Working with MySQL client in AMAZON Linux
##########################################
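# install mysql client (Amazon Linux 2 -- package name may differ on newer versions)
$ sudo yum install -y mysql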
-u : It represents DB username
-p : It represents DB password
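Ex (endpoint/username taken from the RDS properties above):
$ mysql -h database-1.cbair9bd9y7d.ap-south-1.rds.amazonaws.com -u admin -p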
=> We can execute below queries to see the data which we have inserted previously
using Workbench.
show databases;
use sbidb;
show tables;
#################################
Spring Boot App with AWS RDS DB
#################################
URL : https://youtu.be/GSu1g9jvFhY
=================================
AWS S3 (Simple Storage Service)
=================================
-> We can upload & download objects (files) at any point of time using S3
-> When we create a bucket end-point url will be generated to access bucket.
-> When we upload object into bucket, every object will get its own end-point url.
Note: By default, buckets and objects are private (we can make them public).
https://ashokitbucket101.s3.ap-south-1.amazonaws.com/SB_NG_Docker_K8S_Project_Setup.pdf
=================================
Static Website Hosting using S3
=================================
1) Static Website
2) Dynamic Website
-> The website which gives same response/content for all users is called as static
website.
-> The website which gives response based on user is called as Dynamic website.
Step-2) Upload website files & folders into bucket with public read-access
index-document : index.html
error-document : error.html
Note: After enabling static website hosting it generates end-point URL for our
website
===================
S3 Storage Classes
===================
=> Storage classes are used to specify how frequently we want to access our objects
from S3.
5) Glacier Instant Retrieval : Long Lived Archive data (once in quarter) (MS)
============
Versioning
============
=> If we don't want to replace old objects from bucket then we can enable
Versioning.
=> Versioning we will enable at bucket level and it is applicable at object level
Note: Once we enable versioning, we can't disable it, but we can suspend it.
=======================
What is Object Locking
=======================
-> It is used to enable the feature WORM (Write once read many times)
-> At the time of bucket creation only we can enable object lock.
-> Object Lock will be enabled at bucket level and it is applicable at object
level.
=================================
What is Transfer Acceleration
=================================
====================================
IAM : Identity & Access Management
====================================
1) Root Account (Super account) - will have access for everything in AWS
Note: Every team member will get IAM account to perform daily activities.
=> In IAM account we can give permissions to access particular services in AWS
cloud.
===================================
Multi Factor Authentication (MFA)
===================================
-> Enable MFA for root account using Google Authenticator app
-> After enabling MFA, logout and login into root account and check behaviour
===============
Best Practices
===============
- When we login AWS using 'email' and 'password', that has complete access to all
AWS services and resources in the account (Root account).
- Strongly recommended that you do not use the "root user" for your everyday tasks,
even the administrative ones.
- Instead, adhere to the best practice of using the root user only to create your
first IAM user. Then securely lock away the root user credentials and use them to
perform only a few account and service management tasks.
- IAM user is truly global, i.e., once an IAM user is created it can be accessed in
all the regions in AWS.
3. In MNCs, permissions will not be provided for individual users. Create the
Groups and add the users into them.
Users & Groups are for the users.
Roles are for the AWS Services.
============
IAM Account
============
2) Login into IAM account and EC2 service (can't access because no permission)
=================
IAM User Group
=================
=====================
Create Custom Policy
=====================
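As a small sketch (not from the original notes; the policy name, actions and bucket are illustrative), a custom policy that allows read-only access to one S3 bucket could be created from the CLI like this:
$ cat > readonly-s3-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::ashokitbucket101", "arn:aws:s3:::ashokitbucket101/*"] }
  ]
}
EOF
$ aws iam create-policy --policy-name ashokit-s3-readonly --policy-document file://readonly-s3-policy.json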
================
Create IAM role
=================
1) Create IAM role and attach to ec2 vm
===============
IAM Summary
===============
1) What is IAM ?
2) What is Root Account ?
3) How to enable MFA for root account
#########################
EC2 Service
#########################
Note: To encourage learners, AWS provides EC2 instances (t2.micro type) free for 1
year (750 hours per month).
-> When we create EC2 instance, AWS will provide default storage using EBS.
-> To create EC2 instance, we need to choose AMI (Amazon machine image)
-> Key-Pair is used to secure EC2 instance.
Note: One Key-Pair we can use for Multiple EC2 instances irrespective of OS
-> For EC2 instance we will attach firewall Security rules using Security Group
########### Lab Task : Launch Windows VM and Connect to it using RDP Client ############
==========================
Launch Linux VM using EC2
==========================
1) Create Linux VM
======================
Types of IP's in AWS
======================
Note: When we re-start EC2 VM then Private IP will not change. It is fixed and used
for internal purpose.
Note: When we create and attach Elastic IP to ec2 instance then it will become
fixed public ip for that instance. Even if we re-start the machine that IP will
remain same.
3) Stop the EC2 instance and Observe IPs associated with that Machine
5) Observe IPs associated with that Machine (Elastic IP will become Public IP)
6) Re-Start the EC2 instance and Observe IPs associated with that Machine
7) Dis-associate Elastic IP from EC2 (Actions -> Networking -> Disassociate Elastic IP)
8) Release Elastic IP back to AWS cloud (Elastic IP -> Actions -> Release)
****** Note: Elastic IPs are chargeable. If we are not using them, we have to release
them back to AWS ***********
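For reference, the same Elastic IP lifecycle can be driven from the AWS CLI (a hedged sketch; the ids are placeholders):
$ aws ec2 allocate-address                                            # allocate a new Elastic IP
$ aws ec2 associate-address --instance-id <instance-id> --allocation-id <alloc-id>
$ aws ec2 disassociate-address --association-id <assoc-id>
$ aws ec2 release-address --allocation-id <alloc-id>                  # release it back to AWS to stop charges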
================================================================================
==========================
EC2 Instances Categories
==========================
1) On-Demand Instances
2) Reserved Instances
3) Spot instances
=======================
On-Demand Instances
=======================
=> No Commitment
====================
Reserved Instances
====================
===============
Spot Instances
==============
=> AWS will offer High Capacity systems for Low Price
==========================
Dedicated Host Instances
==========================
===================
EC2 Instance Types
===================
2) Compute Optimized
3) Memory Optimized
4) Storage Optimized
5) Accelerated Computing
========================
What is AMI in AWS ?
==========================
a) Windows AMIs
b) Linux AMIs
c) Mac AMIs
=> AWS provided several Quick Start AMIs to launch Virtual Machines.
=> By Default our AMI visibility is Private (We can make it as public)
############# AMI Lab Task ##############
3) Change AMI visibility to Public (It will be available for all AWS users) (Optional)
########################
EBS Volumes & Snapshots
########################
=> EBS is a block level storage device that can be associated with EC2 instance
=> EBS providing both Primary and Secondary Storage for EC2 instances
1) Root Volumes
2) Additional Volumes
=> When we launch EC2 VM then by default EBS Root Volume will be attached to our
EC2 instance
=> If we detach EBS root volume from EC2 instance then we can't start that
Instance.
=> We can create Additional EBS volumes and we can attach / detach with EC2
instances.
*** One EBS volume can be attached with only one EC2 instance at a time.
*** EBS Volumes are Availability Zone specific ***
=================
EBS Volume Types
==================
###################
Today's Lab Task
###################
2) Create Additional EBS volume with 12 GB (check AZ for VMs and create EBS Vol in
same AZ)
4) Connect to EC2 VM-1 instance and verify volumes (both EBS volumes should display)
$ lsblk
5) Create a directory and mount EBS Additional volume to created directory & create
files
$ mkdir dir1
$ cd dir1
8) Connect to EC2 VM-2 instance and verify volumes (EBS volume should display)
9) Create a directory and mount EBS Additional volume to created directory
$ mkdir dir2
$ ls -l dir2
10) Verify that the data stored previously in the additional EBS volume is available
(it should be present)
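The mount steps above are only outlined; a minimal command sketch (the device name /dev/xvdf is an assumption, check the lsblk output; mkfs is needed only on the first attach):
$ sudo mkfs -t xfs /dev/xvdf        # format the additional volume (first attach only)
$ sudo mount /dev/xvdf dir1         # mount it on the directory created above
$ df -h                             # verify the mount
$ sudo umount dir1                  # unmount before detaching the volume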
=================
EBS - Snapshots
=================
=> From volume we can create snapshot and from snapshot we can create volume
=> Snapshots can't be attached with EC2 instance (Only Volumes we can attach with
EC2)
##########################################
Static WebSite Hosting using EC2 instance
##########################################
===========================
Install Web Server (Httpd)
===========================
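The install commands themselves are not listed above; a minimal sketch for Amazon Linux:
$ sudo yum install httpd -y
$ sudo service httpd start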
=> Access our website using EC2 vm public ip (we should be able to see Test web
page)
$ cd /var/www/html
$ sudo vi index.html
=> Access our website using EC2 vm public ip (we should be able to see our content
in web page)
=========================
What is User Data in EC2
=========================
=> User Data is used to execute the script when ec2 instance launched for first
time
=> We can say user data is default script to execute when EC2 instance is launched.
Note: If we stop and start ec2 machine then User data will not execute.
#!/bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 1</h1></html>" > index.html
service httpd start
####################################
=> When we run our application in Single Server then we may get below issues
1) Burden on Server
2) Slow Responses
4) App downtime
6) Revenue Loss
=> To overcome these issues we will go for Load Balancing.
######################
Load Balancing in EC2
######################
-> Instead of deploying our application in one server, we will deploy in multiple
servers
-> LBR will distribute traffic to all connected servers using Round Robin algorithm
==================================
Process to Setup Load Balancer
==================================
SSH - 22
HTTP - 80
SGName : ashokit_Security_group
Note: Configure below script as 'User Data' at the time of launching the instance
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 1</h1></html>" > index.html
service httpd start
Note: Configure below script as 'User Data' at the time of launching the instance
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 2</h1></html>" > index.html
service httpd start
4) Create Load Balancer ( ALB ) and attach both EC2 instances as one target group
Listener : HTTP : 80
Security Group : ashokit_Security_Group
Note: Once the Load Balancer is created, a DNS name will be generated (it can take up to 3 mins)
5) Send a request to Load Balancer DNS URL and see the response
Note
++++++++++
1) DELETE Load balancer
2) Delete EC2 instances
###############################
Types of Load Balancers in AWS
###############################
-> When you want to deal with HTTP and HTTPS requests, the Application Load
Balancer is recommended. Application Load Balancer supports path-based routing.
-> When you have a static IP address and want to process millions of requests per
second, go for the Network Load Balancer (this gives ultra performance).
-> When your application is dealing with third-party (virtual appliance) services, go for the
Gateway Load Balancer.
######################
Microservices
#####################
-> Monolith Architecture means all the functionalities are developed in a single
project.
-> If we change some code then it may affect another functionality.
-> Maintenance becomes very difficult when we follow monolith architecture.
-> To overcome the problems of Monolith Architecture, nowadays people are using
Microservices Architecture in real-time projects.
######################################
Setup LBR with multiple Target Groups
######################################
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Flights Server - 1</h1></html>" > index.html
service httpd start
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Flights Server - 2</h1></html>" > index.html
service httpd start
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Trains Server - 1</h1></html>" > index.html
service httpd start
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Trains Server - 2</h1></html>" > index.html
service httpd start
-> Create "TrainsTargetGroup" for Trains EC2 instances
-> Configure Routing in the Application Load Balancer to route the request to a "Target Group"
based on the URL
(At the time of LBR creation choose the existing target group as Flights-TargetGroup)
(Go to LBR -> Listeners -> Click on View/Edit Rules -> Click on + Symbol -> Click
on Insert Rule -> Add the rule)
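For reference, the same path-based rule can also be added from the CLI (a hedged sketch; the ARNs are placeholders):
$ aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 \
      --conditions Field=path-pattern,Values='/trains*' \
      --actions Type=forward,TargetGroupArn=<TrainsTargetGroup-arn>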
=========================
URLS with Query String
========================
URL-1 : http://makemytripalb-1456632845.ap-south-1.elb.amazonaws.com?type=trains
URL-2 : http://makemytripalb-1456632845.ap-south-1.elb.amazonaws.com?type=flights
======
Note
=====
-> After practice, delete Target Groups, LBR and EC2 instances to avoid billing
================
Auto Scaling
=================
=> AWS Auto Scaling monitors your applications and automatically adjusts capacity
to maintain steady, predictable performance at the lowest possible cost.
=> Using AWS Auto Scaling, it’s easy to setup application scaling for multiple
resources across multiple services in minutes.
=> Amazon EC2 Auto Scaling helps you ensure that you have the correct number of
Amazon EC2 instances available to handle the load for your application.
=> The process of increasing and decreasing no.of ec2 instances based on the load
is called as Auto Scaling.
=============================
Advantages with Auto Scaling
=============================
2) Availability : Ensure that your application always has the right amount of
capacity to handle the current traffic demand.
3) Cost Management : Save money by dynamically launching instances when they are
needed and terminate them when they aren't needed.
=================================
How to setup Auto Scaling Group
=================================
=======================
Summary For Revision
======================
1) What is EC2
16) How to host static website using EC2 instance (httpd webserver)
==================
Amazon Cloud Watch
==================
CloudWatch automatically collects and provides metrics for CPU utilization,
latency and request counts.
Users can also stipulate additional metrics to be monitored, such as memory usage,
transaction volumes or error rates.
==============================
Cloud Watch & SNS - Lab Task
==============================
Note: When Alarm got triggered, its status will be changed to 'In Alarm'
=> We can Monitor our Alarm history (how many times it got triggered)
(Goto Cloud Watch -> Select Alarms -> Click Alarm -> Click on History)
==========
AWS CLI
==========
===========================
Using the AWS web console:
===========================
============================
AWS Command Line Interface:
============================
Usually, the script provides you with the flexibility to manage multiple AWS
resources, infrastructures effectively.
For example, we can use the script to deploy multiple resources without the need to
go through a complete configuration wizard each time.
============================
Configuring AWS CLI
=============================
3) Install AWS CLI: AWS CLI is available for Windows, MAC and Linux distribution of
OS.
b) MAC and Linux: Please follow these steps (execute below commands)
Note: AWS configure command will ask for access key, secret access key, region and
output format.
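A quick sketch of what the configure step looks like (the keys shown are placeholders):
$ aws configure
AWS Access Key ID [None]: <your-access-key>
AWS Secret Access Key [None]: <your-secret-key>
Default region name [None]: ap-south-1
Default output format [None]: json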
================================================
Working with AWS S3 Service using AWS CLI
================================================
Step-1: In this case, we will be using AWS S3 (Simple Storage Service) as an
example.
Step-2: Next, we are going to run "aws s3 ls" (to display the bucket list)
$ aws s3 ls
Step-3: After listing out the content of the existing bucket, let us try to create
a new s3 bucket using AWS CLI
$ aws s3 mb s3://ashokitbucket1004
Step-5: After the command has been executed, let us check, if the bucket has been
created and what is the region of the bucket.
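The region check can be done with s3api (this call is not shown in the original notes; the bucket name follows the example above):
$ aws s3api get-bucket-location --bucket ashokitbucket1004
# the rb (remove bucket) command below deletes the bucket once practice is done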
$ aws s3 rb s3://ashokitbucket1004
===========================
List out all ec2 instances
===========================
$ aws ec2 describe-instances
We can also filter the output by instance id or by a tag key.
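A hedged example of both filters (values are placeholders):
$ aws ec2 describe-instances --instance-ids <instance-id>
$ aws ec2 describe-instances --filters "Name=tag:Name,Values=ashokit-vm"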
=========================================
Create a New Key Pair for EC2 Instances
=========================================
-> Before launching a new EC2 instance we’ll need an SSH key pair that we’ll use to
connect to it securely.
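The command itself is not shown in the notes; the standard CLI call that matches the description below is:
$ aws ec2 create-key-pair --key-name ashokitkey --query 'KeyMaterial' --output text > ashokitkey.pem
$ chmod 400 ashokitkey.pem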
The above command will create a new key in the AWS named ashokitkey and pipe the
secret key directly to the location we specify, in this case, ashokitkey.pem.
==========================
Launch New EC2 Instances
==========================
$ aws ec2 run-instances --image-id ami-0a5ac53f63249fba0 --instance-type t2.micro
--key-name ashokitkeypair
=================================
Stop and Start an EC2 Instance
=================================
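The section above lists no commands; the standard calls are (instance id is a placeholder):
$ aws ec2 stop-instances --instance-ids <instance-id>
$ aws ec2 start-instances --instance-ids <instance-id>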
========================
Terminate an Instance
========================
$ aws ec2 terminate-instances --instance-ids <instance-id>
================================
Creating Security Group in EC2
================================
$ aws ec2 create-security-group --group-name MySecurityGroup --description "My
security group"
=============================
Creating RDS instance (MySQL)
=============================
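No command is listed in the notes for this step; a minimal sketch that matches the delete command below (values such as the class, username and password are placeholders):
$ aws rds create-db-instance ^
    --db-instance-identifier test-mysql-instance ^
    --db-instance-class db.t3.micro ^
    --engine mysql ^
    --master-username admin ^
    --master-user-password <password> ^
    --allocated-storage 20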
========================
Delete RDS Instance (MySQL)
========================
$ aws rds delete-db-instance ^
--db-instance-identifier test-mysql-instance ^
--skip-final-snapshot ^
--no-delete-automated-backups
################################
VPC : Virtual Private Cloud
################################
=> A VPC allows users to create and manage their own isolated virtual networks
within the cloud.
=> In a VPC, users can define their own IP address range, subnets, route tables,
and network gateways. It provides control over network configuration, such as
setting up access control policies, firewall rules, and network traffic routing.
=> Overall, a VPC provides a flexible and secure network environment that enables
users to build and manage their cloud-based applications and infrastructure with
greater control and customization.
###################
VPC Terminology
##################
1) VPC
2) CIDR Blocks
3) Subnets (public and private)
4) Route Tables
5) Internet Gateway
6) NAT Gateway
7) VPC Peering
8) Security Groups
9) NACL
###################
Types of IP's
###################
=> There are several types of IP (Internet Protocol) addresses used in computer
networks. Here are the most common types:
1) IPV4
2) IPV6
3) Public IP
4) Private IP
5) Dynamic IP
6) Static IP Address
Note: These CIDR classes are required for only public ips and not required for
private ips
-> Billions of new devices are launched every day and they use the internet.
-> If a device wants to use the internet then an IP is mandatory (we are running out of
IPv4 addresses).
=========
IPV4
=========
=> IPv4 addresses are 32-bit numeric addresses written in four sets of numbers
separated by periods (e.g : 192.168.0.1)
=> It is the most widely used IP version and supports approximately 4.3 billion
unique addresses
=> However, due to the increasing number of devices connected to the internet, IPv4
addresses are running out, leading to the adoption of IPv6.
===========
IPV6
===========
=> IPv6 addresses are 128-bit alphanumeric addresses written in eight sets of four
hexadecimal digits separated by colons (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334)
=> IPv6 provides a significantly larger address space than IPv4, with approximately
340 undecillion unique addresses.
=> It was introduced to overcome the IPv4 address exhaustion issue and support the
growing number of internet-connected devices.
==========
Public IP
===========
===========
Private IP
===========
============
Dynamic IP
===========
===========
Static IP
===========
==================
VPC Sizing
==================
Note: With /24 we get 256 IPs, which are sufficient for most real-time use cases.
##########################
VPC Lab Task For Today
##########################
1) Create VPC
CIDR : 10.0.0.0/16
6) Attach IGW to Public Route Table so that associated subnet will become public
7) Create One EC2 VM in public subnet and another EC2 vm in private subnet.
=====================
Step-1 : Create VPC
=====================
Note: One Route Table will be created for VPC by default. Rename it as "Ashokit-
Private-Route-Table"
===========================
Step-2 : Create 2 Subnets
===========================
-----------------------
Create Subnet-1
----------------------
-> Create Subnet
Name : public-subnet-az-1
------------------------
Create Subnet-2
-------------------------
-> Create Subnet
Name : private-subnet-az-1b
-> AWS will reserve 5 ips in every subnet (we will get only 251)
=================================
Step-3 : Create Internet Gateway
=================================
Note: By default one IGW will be available and it will be attached to default VPC
-> Attach this IGW to VPC (we can attach IGW to only one VPC)
=================================
Step-4 : Create Route Table
=================================
Note: When we create a VPC, we get only one route table by default. It is called the
Main route table.
-> Goto route table and attach route tables to subnets (Subnets association for
Route Tables)
==========================================
Step-5 : Making Subnet as public
==========================================
-> Subnet Associations -> Edit SNET -> Select Public Subnet
======================================
Step - 6 : Create EC2 (Public EC2)
======================================
-> Choose AMI
-> Select VPC
-> Select Public Subnet
-> Assign Public IP as Enable
-> Add SSH and Http Protocols
-> Download KeyPair
-> Launch Instance
========================================
Step - 7 : Create EC2 (Private EC2)
========================================
-> Choose AMI
-> Select VPC
-> Select Private Subnet
-> Assign Public IP as Enable
-> Add SSH (source : custom, Range : 10.0.0.0/16)
-> Download KeyPair
-> Launch Instance
=================================
Step - 8 : Test EC2 Connections
=================================
-> Connect to Public EC2 using MobaXterm (It should allow to connect)
-> Connect to Private EC2 using MobaXterm (It shouldn't allow to connect)
=================================================================================
Step - 9 : Connect with 'private-ec2' from 'public-ec2' using 'ssh' connection
=================================================================================
Note: As both Ec2 instances are available under same VPC, we should be able to
access one machine from another machine.
----------------------
Procedure to access
----------------------
-> Upload pem file into public-ec2 machine (in mobaxterm we have upload option)
-> Execute below command to make ssh connection from public-ec2 to private-ec2
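A minimal sketch of that command (the key file name and the ec2-user login are assumptions for Amazon Linux):
$ chmod 400 ashokit.pem
$ ssh -i ashokit.pem ec2-user@<private-ec2-private-ip>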
-> Try to ping google from private ec2 (it should not allow because igw is not
available)
=============================
VPC with NAT Gateway Lab Task
=============================
3) After NAT Gateway, we should be able to ping google from 'private-ec2' also
#######################
What is VPC Peering
#######################
VPC Peering: IPV4 or IPV6 traffic routes between VPCs created to establish
communication between one or more multiple VPCs.
=======================
AWS definition:
=======================
=> “A VPC peering connection is a networking connection between two VPCs that
enables you to route traffic between them using private IPv4 addresses or IPv6
addresses.
=> Instances in either VPC can communicate with each other as if they are within
the same network. “
1) Through VPC Peering, traffic stays within the AWS network and does not go over the
internet.
2) Non-overlapping CIDRs – The 2 VPCs you are trying to peer, must have a mutually
exclusive set of IP ranges.
(If VPC A & B have peered and VPC A & C have peered, VPC B & C cannot share
contents until there is an exclusive peering done between VPC B & C)
===========================
Will VPC Peering Cost me?
===========================
No. VPC itself won’t cost you, however, the resources deployed inside the VPC and
data transfers are done will cost you.
==================================================
Let’s create VPC Peering to enable communication
==================================================
=> On the left navigation panel under VPC -> Peering Connections:
=> Now you would see the status Pending Acceptance which means, Requestor has sent
a request to the peer now target VPC needs to accept the request.
Now navigate to Route Tables, in Default VPC RT(Route Table) -> Edit routes
172.31.0.0/16 - local
0.0.0.0/0 - Internet-gateway
10.0.0.0/16 - vpc peering (We need to add this)
10.0.0.0/16 - local
0.0.0.0/0 - Internet-gateway
172.31.0.0/16 - vpc (We need to add this)
Edit Security Group of Default and Custom VPC to allow traffic from each other
Default VPC Security Group looks like
SSH - 22 - all
All Traffic
===============================================================
Q ) What is the difference between NACL and Security Groups ?
===============================================================
================
Security Group
================
-> Security Group supports only Allow rules (by default all rules are denied)
Ex : 172.32.31.90 ----> don't accept request from this IP (we can't do this
in SG)
-> Security Groups are applicable at the resource level (manually we have to attach
SG to resource)
-> Multiple Security Groups can be attached to single instance & one instance can
have 5 security groups
-> Security Group acts as First Level of defense for Outgoing traffic
======
NACL
======
-> NACL rules are applicable for all the resources which are part of that Subnet
===========================================================================
#########################
Terraform (IAAC s/w)
#########################
-> Terraform is an open source s/w created by HashiCorp and written in the Go
programming language.
-> Terraform code is written in the HashiCorp Configuration Language (HCL) in files
with the extension .tf
-> Terraform allows users to use HCL to create the files containing definitions of
their desired resources.
-> Terraform supports almost all cloud providers (AWS, Azure, GCP, OpenStack
etc.).
===============================
Terraform vs Cloud Formation
==============================
-> Terraform uses the HashiCorp Configuration Language (HCL), which is built by HashiCorp.
It is fully compatible with JSON.
-> AWS Cloud Formation utilizes either JSON or YAML. Cloud formation has a limit of
51,000 bytes for the template body itself.
==========================
Terraform Vs Ansible
==========================
====================================
Terraform Setup - Pre-Requisites
====================================
$ terraform -v
###########################################
Working with EC2 Instance using Terraform
###########################################
1) Create IAM user with Programmatic Access (IAM user should have EC2FullAccess)
$ mkdir terraformscripts
$ cd terraformscripts
$ vi FirstTFScript.tf
provider "aws" {
region = "ap-south-1"
access_key = "AKIAJK"
secret_key = "CWSCLb1WRB2Xrdufy6/Lp"
}
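The notes show only the provider block here; a minimal resource block to go with it might look like this (the AMI id is reused from the CLI example earlier in the notes, the resource label is illustrative):
resource "aws_instance" "ashokit_vm" {
  ami           = "ami-0a5ac53f63249fba0"
  instance_type = "t2.micro"
}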
$ terraform init
$ terraform fmt
12) Validate Your Script
$ terraform validate
$ terraform plan
Note: When the script is executed, Terraform stores the resulting state in a file. If we
execute the script again it will not re-create the resources. If you delete that state file
and execute the script again, it will create them again.
-> In the first script we kept the provider and resource info in a single script file. We
can also keep the provider and resource information in separate files.
#########################################
Script to create multiple Ec2 instances
#########################################
provider "aws" {
region = "ap-south-1"
access_key = "AKIA4MGQ5UW757KVKECC"
secret_key = "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"
}
========================
Variables in Terraform
========================
$ vi vars.tf
variable "ami" {
description="Amazon Machine Image value"
default = "ami-057752b3f1d6c4d6c"
}
variable "instance_type"{
description="Amazon Instance Type"
default = "t2.micro"
}
variable "instances_count"{
description="Total No.of Instances"
default = "2"
}
$ vi main.tf
provider "aws" {
region = "ap-south-1"
}
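A sketch of the resource block that consumes these variables (the resource label is illustrative):
resource "aws_instance" "ashokit_vms" {
  ami           = var.ami
  instance_type = var.instance_type
  count         = var.instances_count
}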
-> Remove instances_count variable from var.tf file and pass like below
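For example, the count can then be supplied at run time (a hedged sketch):
$ terraform apply -var="instances_count=3"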
=============================
Comments in Terraform Script
=============================
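Terraform (HCL) supports these comment styles:
# single-line comment
// also a single-line comment
/* multi-line
   comment */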
========================================
Dealing with Secret Key and Access Key
========================================
-> We have configured secret_key and access_key in terraform script file. Instead
of that we can configure them as environment variables.
$ export AWS_ACCESS_KEY_ID="AKIAW4SBVKGXJK"
$ export AWS_SECRET_ACCESS_KEY="CWSCbZOQMkLb1WRB2Xrdufy6/Lp"
$ echo $AWS_ACCESS_KEY_ID
$ echo $AWS_SECRET_ACCESS_KEY
-> Now remove credentials from terraform script and execute it.
=============================
Working with User Data
=============================
-> It is used to execute script when instance launched for first time.
$ vi installHttpd.sh
#!/bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Welcome to Ashok IT...!!</h1></html>" > index.html
service httpd start
-> vi main.tf
provider "aws" {
region = "ap-south-1"
}
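A sketch of the instance resource that wires in the script above (the resource label and AMI id are illustrative):
resource "aws_instance" "web_vm" {
  ami           = "ami-0a5ac53f63249fba0"
  instance_type = "t2.micro"
  user_data     = file("installHttpd.sh")
}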
provider "aws"{
region = "ap-south-1"
}
bucket = "s3bucketashokit"
acl="private"
versioning{
enabled = true
}
tags = {
Name = "S3 Bucket By Ashok"
}
}
========================================
Create MySQL DB in AWS using Terraform
========================================
provider "aws"{
region = "ap-south-1"
}
=======================================================================
========================================================================
provider "aws" {
region = "ap-south-1"
access_key = "AKIAW4SOHOQRVBVKGXJK"
secret_key = "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"
}
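The outputs below reference resources that are not shown in the notes; minimal sketches of what they would look like (the labels match the output references, other values are illustrative):
resource "aws_iam_user" "my_iam_user" {
  name = "my_iam_user"
}
resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "ashokit-output-demo-bucket"
  versioning {
    enabled = true
  }
}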
output "my_iam_user_complete_details" {
value = aws_iam_user.my_iam_user
}
output "my_s3_bucket_complete_details" {
value = aws_s3_bucket.my_s3_bucket
}
output "my_s3_bucket_versioning" {
value = aws_s3_bucket.my_s3_bucket.versioning[0].enabled
}
=====================================
IAM User creation with Variables
=====================================
variable "iam_user_name_prefix" {
type = string #any, number, bool, list, map, set, object, tuple
default = "my_iam_user"
}
provider "aws" {
region = "us-east-1"
}
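A sketch of the resource that would use this variable (the count value is illustrative):
resource "aws_iam_user" "my_iam_user" {
  count = 2
  name  = "${var.iam_user_name_prefix}_${count.index}"
}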
===================================================================================
Change a particular resource : $ terraform apply -target=aws_iam_user.my_iam_user
===================================================================================
==================
Elastic Beanstalk
==================
-> We can easily run our web applications on AWS cloud using the Elastic Beanstalk
service.
-> We just need to upload our project code to Elastic Beanstalk it will take care
of deployment.
-> Elastic Beanstalk will take care of the software and servers which are required to
run our application.
-> Elastic Beanstalk will take care of deployment, capacity provisioning, load
balancer and auto scaling etc..
-> To deploy one java web application manually we need to perform below operations
2) Create Network
8) Create LBR
-> Here AWS provides the infrastructure and we build the platform on top of it
to run our Java application (IaaS model).
=> Instead of preparing the platform ourselves, we can use the Elastic
Beanstalk service to run our web applications.
==================================
Advantages with Elastic Beanstalk
==================================
3) Impossible to outgrow
===========================
Create Sample Application
===========================
=======================
Below Beanstalk events
=======================
Env Creation started
S3 bucket created to store the code
Security Group
VPC
EC2 instances
Webserver installation
Load Balancer
Autoscaling
Cloud watch
URL : http://webapp1-env.eba-rc8b64vg.ap-south-1.elasticbeanstalk.com/
===========================
Elastic Beanstalk Pricing
===========================
=> You pay for the Amazon Web Services resources that are created to store and run your
web application, like Amazon S3 buckets, Amazon EC2 instances, LBR and ASG.
==================================================
Procedure to deploy java-spring-boot-application
==================================================
-> Go to our application environment -> Configuration -> Edit Updates, monitoring,
and logging -> Configure below Environment Property
SERVER_PORT = 5000
URL : <Beanstalk-domain-url>
======================
Application Versions
======================
sb-rest-api-v1.jar
sb-rest-api-v2.jar
sb-rest-api-v3.jar
-> Serverless computing means run the application without thinking about servers
-> AWS will take care of servers required to run our application
=============
AWS Lambdas
=============
AWS Lambda is a way to run code without creating, managing, or paying for servers.
You supply AWS with the code required to run your function, and you pay for the
time AWS runs it, and nothing more.
Your code can access any other AWS service, or it can run on its own. While there are some
rules about how long a function can take to respond to a request, there's almost no
limit to what your Lambda can do.
The real power, though, comes from the scalability that Lambda offers you.
AWS will scale your code for you, depending on the number of requests it receives.
Not having to build and pay for servers is nice. Not having to build and pay for
them when your application suddenly goes viral can mean the difference between
survival and virtual death.
==================================
Running Java Code with AWS Lambda
==================================
Ex: in.ashokit.LambdaHandler::handleRequest
===============
Cloud Formation
===============
=> If we design a CloudFormation template to create infrastructure then we can re-use
that template.
=> Terraform works with almost all cloud platforms available in the market.
==============================================
Creating EC2 instance using Cloud Formation
==============================================
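The notes do not include the template itself; a minimal hedged YAML template for this step (the AMI id is reused from the CLI example earlier and is illustrative):
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal EC2 instance example
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0a5ac53f63249fba0
      InstanceType: t2.micro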
=========================================================================================
===================
Ansible Tutorial
==================
-> Ansible is one among the DevOps configuration management tools which is famous
for its simplicity.
-> It is open source software developed by Michael DeHaan and is now owned by
Red Hat.
-> This tool is very simple to use yet powerful enough to automate complex multi-
tier IT application environments.
-> The best part is that you don’t even need to know the commands used to
accomplish a particular task.
-> You just need to specify what state you want the system to be in and Ansible
will take care of it.
-> The main components of Ansible are playbooks, configuration management and
deployment.
-> Ansible uses playbooks to automate deploy, manage, build, test and configure
anything
=================
Ansible Features
==================
-> Built on top of Python and hence provides a lot of Python's functionality
-> Follows push based architecture for sending configuration related notifications
===========================
Push Based Vs Pull Based
===========================
-> In pull-based tools, agents on the servers periodically check for configuration information
from the central server (master); Ansible is push-based, so the control node pushes changes out.
==========================
How Ansible works ?
=======================
Ansible works by connecting to your nodes and pushing out a small program called
Ansible modules to them.
Then Ansible executes these modules and removes them when finished. The library of
modules can reside on any machine, and there are no daemons, servers, or databases
required.
The Management Node is the controlling node that controls the entire execution of
the playbook.
The inventory file provides the list of hosts where the Ansible modules need to be
run.
The Management Node makes an SSH connection, executes the small modules on the
host machines and installs the software.
1) Controlling Nodes
2) Managed Nodes
3) Ansible Playbook
====================
Controlling Nodes
===================
Controlling Nodes are usually Linux servers that are used to access the switches/routers
and other network devices.
These network devices are referred to as the Managed Nodes.
==================
Managed Nodes
==================
Managed Nodes are stored in the hosts file for Ansible automation.
==================
Ansible Playbook
==================
Ansible Playbooks are expressed in YAML format and serve as the repository for the
various tasks that will be executed on the Managed Nodes (hosts).
Playbooks are a collection of tasks that will be run on one or more hosts.
=================
Inventory file
=================
Ansible's inventory hosts file is used to list and group your servers.
===========================================
Few Important Points About Inventory File
===========================================
Ansible's inventory hosts file is used to list and group your servers. Its default
location is /etc/ansible/hosts
=======================
Sample Inventory File
========================
# Blank lines are ignored
# Ungrouped hosts are specified before any group headers, like below
192.168.122.1
192.168.122.2
192.168.122.3
[webservers]
192.168.122.1
#192.168.122.2
192.168.122.3
[dbserver]
192.168.122.1
192.168.122.2
ashokit-db1.com
ashokit-db2.com
================
Ansible Setup
================
==========================================================================
Step-1 :: Create 3 Amazon Linux VMs in AWS (Free Tier Eligible - t2.micro)
==========================================================================
1 - Control Node
2 - Managed Nodes
==================================================================================
Step-2: Connect to all the 3 VMs and execute below commands in all 3 machines
==================================================================================
1) Create an 'ansible' user and set its password (see the command sketch after this step list)
$ sudo visudo
ansible ALL=(ALL) NOPASSWD: ALL
$ sudo vi /etc/ssh/sshd_config
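A hedged sketch of the commands behind the steps above (user name 'ansible' as in the notes; the sshd_config change enables password login so that ssh-copy-id works later):
$ sudo useradd ansible
$ sudo passwd ansible            # set and confirm the password
# in /etc/ssh/sshd_config set: PasswordAuthentication yes
$ sudo systemctl restart sshd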
============================================
Step-3 :: Install Ansible in Control Node
============================================
$ sudo su ansible
$ cd ~
$ python3 --version
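# the install command itself is not listed in the notes; on Amazon Linux 2 one common way (an assumption, adjust for your OS) is:
$ sudo amazon-linux-extras install ansible2 -y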
$ ansible --version
=================================================================================
Step-4 : Generate SSH Key In Control Node and Copy SSH key into Managed Nodes
==================================================================================
$ sudo su ansible
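# generate the key pair for the ansible user first (accept the defaults); this step is implied by the heading above
$ ssh-keygen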
$ ssh-copy-id ansible@<ManagedNode-Private-IP>
Ex : $ ssh-copy-id ansible@172.31.44.90
Note: Repeat above command by updating HOST IP for all the managed Servers.
3) Update Host Inventory in Ansible Server to add managed node servers details.
$ sudo vi /etc/ansible/hosts
[webservers]
172.31.47.247
[dbservers]
172.31.44.90
4) Use ping module to test Ansible and after successful run you can see the below
output.
=========================
Ansible AD-HOC Commands
=========================
Example:
There are two default groups, all and ungrouped. all contains every host. ungrouped
contains all hosts that don’t have another group
=============
Ping Module
==============
# It will ping all the servers which you have mentioned in inventory file
(/etc/ansible/hosts)
$ ansible all -m ping
=======================
Shell Modules
======================
# Here it will check the disk space usage for all the nodes in the dbservers
group
$ ansible dbservers -a "df -h"
# Here it will check the free memory for all the nodes in the webservers
group
$ ansible webservers -a "free -m"
================
Yum Module
===============
# It will install the vim package on all node machines which you have mentioned in the host
inventory file.
$ ansible all -b -m yum -a "name=vim"
present : install
latest : update to latest
absent : un-install
# To install software on an Ubuntu server we would have to use the apt package manager instead of yum.
Ans) Ansible provides the generic "package" module, which works with the underlying package manager of the target host.
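For example, the generic module can be used in an ad-hoc command that works on both yum- and apt-based hosts (a hedged sketch):
$ ansible all -b -m package -a "name=git state=present"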
======================================
YAML ( Yet Another Markup Language )
=====================================
-> We can make use of this language to store data and configuration in a human-
readable format.
===================
Sample YML File Data
===================
Fruit: Apple
Vegetable: Carrot
Liquid: Water
Meat: Chicken
Array/List
++++++++++++
Fruits:
- Orange
- Apple
- Banana
- Guava
Vegetables:
- Carrot
- Cauliflower
- Tomoto
name: Ashok
age: 29
phno: 123456
email: ashokitschool@gmail.com
hobbies:
- cricket
- dance
- singing
person:
  id: 101
  name: Raju
  email: raju@gmail.com
  address:
    city: Hyd
    state: TG
    country: India
  job:
    companyName: IBM
    role: Tech Lead
    pkg: 25 LPA
  hobbies:
    - cricket
    - chess
    - singing
    - dance
---
person:
  id: 101
  name: Raju
  email: raju@gmail.com
  address:
    city: Hyd
    state: TG
    country: India
  job:
    companyName: IBM
    role: Tech Lead
    pkg: 25 LPA
  hobbies:
    - cricket
    - chess
    - singing
    - dance
---
movie:
  name: Bahubali
  hero: Prabhas
  heroine: Anushka
  villain: Rana
  director: SS Rajamouli
  budget: 100cr
...
==============
Playbooks
============
-> Playbook is a single YAML file, containing one or more ‘plays’ in a list.
-> Plays are ordered sets of tasks to execute against host servers from your
inventory file.
Examples are
a) Execute a command
b) Run a shell script
c) Install a package
d) Shutdown / Restart the hosts
Note : Playbooks YML / YAML starts with the three hyphens ( --- ) and ends with
three dots ( … )
2) Host section – Defines the target machines on which the playbook should run.
This is based on the Ansible host inventory file.
3) Variable section – This is optional and can declare all the variables needed in
the playbook. We will look at some examples as well.
4) Tasks section – This section lists out all the tasks that should be executed on
the target machine. It specifies the use of Modules. Every task has a name which is
a small description of what the task will do and will be listed while the playbook
is run.
=============================
Playbook To Ping All Host Nodes
=============================
---
- hosts: all
  gather_facts: no
  remote_user: ansible
  tasks:
    - name: Ping
      ping:
      remote_user: ansible
...
#name: which is the task name that will appear in your terminal when you run the
playbook.
#remote_user: This parameter was formerly called just user. It was renamed in
Ansible 1.4 to make it more distinguishable from the user module (used to create
users on remote systems).
$ ansible-playbook playbook.yml -v
$ ansible-playbook playbook.yml -vv
$ ansible-playbook playbook.yml -vvv
# It will display the which hosts would be effected by a playbook before run
$ ansible-playbook playbook.yml --list-hosts
================================================
Install HTTPD + copy index.html + Start Service
=================================================
-> Create the index.html file in the same location as our playbook.
---
- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: httpd
        state: present
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Httpd Server
      service:
        name: httpd
        state: started
...
===========
Variables
===========
1) Runtime Variables
2) Playbook Variables
3) Group Variables
4) Host Variables
---
- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: "{{package_name}}"
        state: present
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: "{{package_name}}"
        state: started
...
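This playbook expects package_name to be supplied at run time, for example (a hedged sketch):
$ ansible-playbook playbook.yml -e "package_name=httpd"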
---
- hosts: all
  become: true
  vars:
    package_name: httpd
  tasks:
    - name: Install Httpd
      yum:
        name: "{{package_name}}"
        state: present
    - name: Copy index.html
      template:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: "{{package_name}}"
        state: started
...
---
- hosts: all
  become: true
  tasks:
    - name: install software
      yum:
        name: "{{package_name}}"
        state: present
...
group_vars/all.yml
group_vars/<groupName>.yml
Ex:
$ mkdir /etc/ansible/group_vars
$ sudo vi /etc/ansible/group_vars/webservers.yml
package: git
$ sudo vi /etc/ansible/group_vars/dbservers.yml
package: mysql
============
Host vars
=============
-> If we want separate variables for every host then we should go for host vars
-> vi /etc/ansible/host_vars/172.138.1.1.yml
===================================================
Variable values can also be declared within the playbook
====================
Handlers and Tags
====================
-> Using Handlers we can execute tasks based on another task's status.
Note: Only if the second task's status is 'changed' do I want to execute the third task in the
playbook.
-> Using tag names we can execute a particular task, and we can also skip a particular
task.
---
- hosts: all
  become: true
  gather_facts: yes
  vars:
    package_name: httpd
  tasks:
    - name: install httpd
      yum:
        name: "{{package_name}}"
        state: present
      tags:
        - install
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/
      tags:
        - copy
      notify:
        - Start Httpd Server
  handlers:
    - name: Start Httpd Server
      service:
        name: "{{package_name}}"
        state: started
...
# Execute the tasks whose tags names are install and copy
$ ansible-playbook handlers_tags.yml --tags "install,copy"
===============
Ansible Vault
===============
-> When we configure the username and password in variables files, everybody can see them, which
is not a good practice.
-> When we are dealing with sensitive data then we should secure that data
-> Using Ansible vault we can encrypt and we can decrypt data
=======================
Ansible Vault Commands
=======================
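Commonly used vault commands (file names are placeholders):
$ ansible-vault create secrets.yml        # create a new encrypted file
$ ansible-vault encrypt vars.yml          # encrypt an existing file
$ ansible-vault view vars.yml             # view decrypted content
$ ansible-vault edit vars.yml             # edit in place
$ ansible-vault decrypt vars.yml          # remove encryption
$ ansible-playbook playbook.yml --ask-vault-pass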
-> You can store vault password in a file and you can give that file as input to
execute playbook
$ vi vaultpass
$ ansible-playbook filename.yml --vault-password-file=~/vaultpass
==========================================================================================
==> Amazon Linux / CentOS / Red Hat OS -----------> yum as package manager
==> Ubuntu / Debian OS ----------------------------> apt as package manager
---
- hosts: webservers
  tasks:
    - name: install maven
      yum:
        name: maven
        state: present
      when: ansible_os_family == 'RedHat'
    - name: install maven
      apt:
        name: maven
        state: present
      when: ansible_os_family == 'Debian'
...
================
Ansible Roles
================
-> Roles are a level of abstraction for Ansible configuration in a modular and
reusable format
-> As you add more and more functionality to your playbooks, they can become
difficult to maintain
-> Roles allow you to break down a complex playbook into separate, smaller chunks
that can be coordinated by a central entry point.
---
- hosts: all
  become: true
  roles:
    - apache
...
2. Roles are a way to group multiple tasks together into one container to do the
automation in very effective manner with clean directory structure.
3. Roles are set of tasks and additional files for a certain role which allow you
to break up the configurations.
4. The code can easily be reused by anyone if the role suits their needs.
==============================
How do we create Ansible Roles?
==============================
-> To create an Ansible role, use "ansible-galaxy" command which has the templates
to create it.
$ sudo su ansible
$ cd ~
$ mkdir roles
$ cd roles
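# the role-creation command referred to in the note below (role name 'apache' as in the notes)
$ ansible-galaxy init apache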
Note: In the above command apache is the role name (we can give any name for the
role)
> where ansible-galaxy is the command to create the roles using the templates.
We have got the clean directory structure with the ansible-galaxy command. Each
directory must contain a main.yml file, which contains the relevant content.
============================
Role Directory Structure:
===========================
tasks –> contains the main list of tasks to be executed by the role.
vars –> other variables for the role. Vars has the higher priority than defaults.
files –> contains files required to transfer or deployed to the target machines via
this role.
templates –> contains templates which can be deployed via this role.
meta –> defines some data / information about this role (author, dependency,
versions, examples, etc,.)
=============================================================
Lets take an example to create a role for Apache Web server
=============================================================
Below is a sample playbook to deploy Apache web server. Lets convert this playbook
code into Ansible role.
- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: httpd
        state: present
    - name: Copy index.html
      template:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: httpd
        state: started
First, move on to the Ansible roles directory and start editing the yml files.
$ cd roles/apache
==============
1. Tasks
==============
-> Edit main.yml available in the tasks folder to define the tasks to be executed.
$ vi tasks/main.yml
---
# tasks file for roles/apache
- name: install httpd
  yum:
    name: httpd
    state: present
- name: Copy index.html
  copy:
    src: index.html
    dest: /var/www/html/
  notify:
    - restart apache
=========
2. Files
==========
-> Copy required files into files directory or create index.html file with content
$ vi files/index.html
==========
3. Handlers
==========
Edit handlers main.yml to restart the server when there is a change. Because we
have already defined it in the tasks with notify option. Use the same name “restart
apache” within the main.yml file as below.
$ vi handlers/main.yml
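The handler content is not shown in the notes; a minimal sketch matching the "restart apache" notify used in the tasks above:
---
# handlers file for roles/apache
- name: restart apache
  service:
    name: httpd
    state: restarted
...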
We have got all the required files for Apache role. Lets apply this role into the
ansible playbook “runsetup.yml” as below to deploy it on the client nodes.
$ vi /home/ansible/runsetup.yml
---
- hosts: all
  become: true
  roles:
    - apache
...
$ ansible-playbook runsetup.yml
If you have created multiple roles, you can use the below format to add them in the
playbook
---
- hosts: all
  become: true
  roles:
    - apache
    - jenkins
    - java
    - maven
    - sonar
...
======================================
💥 *What We Learnt in Ansible* 💥
======================================
1) What is Configuration Management
2) What is Ansible
3) Advantages of Ansible
4) Push Based Vs Pull Based Mechanism
5) Ansible Installation
6) Ansible Architecture
7) Host Inventory File
8) Host Groups in Inventory
9) Ansible Ad Hoc Commands
10) YAML
11) Working with YAML in VS Code IDE
12) Playbook Introduction
13) Playbook Commands
14) Writing Playbooks
15) Ansible Modules
16) Variables (local + runtime + group + host)
17) Handlers
18) Tags
19) Ansible Roles
20) Ansible Vault
21) Playbook for Multiple OS Family Based Hosts
22) Ansible Tower (Theory)
===========
Terraform
===========
=> Instead of creating infrastructure manually using GUI, we can write the code to
create infrastructure
=> We will use HCL (Hashicorp Configuration Language) to write the infrastructure
code.
==============================
Terraform Vs Cloud Formation
==============================
=> Terraform supports almost all cloud platforms available in the market.
======================
Terraform Vs Ansible
======================
Ex :
1) Create EC2 VM
2) Create S3 Bucket
3) Create RDS instance
4) Create IAM user etc....
Ex:
========================
Terraform Installation
========================
$ terraform -v
=======================
Terraform Architecture
=======================
=> Write terraform script using HCL and save it with .tf extension.
terraform destroy : It is used to delete the resources created with our terraform
script.
============================
Terraform Script Syntax
============================
provider "aws" {
region = "ap-south-1"
access_key = "AKIASAEUF6C7IADBKTM3"
secret_key = "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"
}
====================================
Creating Multiple EC2 Instances
====================================
provider "aws" {
region = "ap-south-1"
access_key = "AKIASAEUF6C7B54ZV3WH"
secret_key = "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"
}
======================================
Dealing with Access Key & Secret key
======================================
=> Instead of configuring access key & secret in terraform script file we can
configure them as environment variables.
$ export AWS_ACCESS_KEY_ID="AKIASAEUF6C7IADBKTM3"
$ export AWS_SECRET_ACCESS_KEY="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"
$ echo $AWS_ACCESS_KEY_ID
$ echo $AWS_SECRET_ACCESS_KEY
==================================
Working with User Data in EC2 VM
==================================
#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<h1>Welcome To Ashok IT</h1>" > index.html
service httpd start
========================
Variables in Terraform
=======================
Ex:
id = 101
name = ashok
=> We can remove hard coded values from terraform resource script using Variables.
$ vi vars.tf
variable "ami" {
description = "Amazon Machine AMI Value"
default = "ami-02e94b011299ef128"
}
variable "instance_type"{
description = "Represents Instance Type"
default = "t2.micro"
}
$ vi main.tf
==========================================
Create S3 Bucket Using Terraform Script
==========================================
provider "aws"{
region = "ap-south-1"
}
resource "aws_s3_bucket" "ashokits3bucket" {
bucket = "ashokits3001"
acl = "private"
versioning {
enabled = true
}
}
=====================================================
Create RDS Instance (MySQL) Using Terraform Script
======================================================
provider "aws"{
region = "ap-south-1"
access_key = "AKIASAEUF6C7IADBKTM3"
secret_key = "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"
}
# resource block header reconstructed (the label is illustrative); the arguments are as in the original notes
resource "aws_db_instance" "ashokit_mysql" {
  allocated_storage    = 100
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  identifier           = "mydb"
  username             = "ashokit"
  password             = "ashokit123"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}
==========================
Ex - 1 : Output Variables
==========================
==========================
Ex - 2 : Output Variables
==========================
variable "instance_type"{
description="Amazon Instance Type"
default = "t2.micro"
}
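The output below refers to an aws_instance named ec2_vm that is not shown in the notes; a minimal sketch (the AMI id is reused from the variables example above):
resource "aws_instance" "ec2_vm" {
  ami           = "ami-02e94b011299ef128"
  instance_type = var.instance_type
}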
output "ec2_vm_public_ip" {
value = aws_instance.ec2_vm.public_ip
}
=====================================
Input Variables & Output Variables
=====================================
Output Variables are used to get values from terraform after script got executed.
Ex-1: After IAM user got created, print user details as output
===================
Terraform Modules
===================
=> If we use modules concept then managing terraform scripts will become easy
=========
main.tf
========
module "my_ec2_vm" {
source = "/modules/ec2"
module "my_iam_user" {
source = "/modules/iam_user"
}
=============
Summary
=============
========
Agile
=======
=> Agile is one of the most popular SDLC methodologies in the market.
===================
Agile Terminology
===================
1) Backlog Grooming
2) Story
3) Story Points
4) Backlog
5) Sprint Planning
6) Sprint
8) Retrospective
=> Backlog grooming is a meeting where all the team members discuss the
pending work items in the project.
3 points - 1 day
5 points - 2 days
8 points - 3 days
=> Backlog means the stories which are pending in the project
=> Sprint Planning is a meeting in which the team discusses the priority stories to
complete.
=> Scrum is a meeting in which all team members will give work updates to scrum
master.
=> Retrospective is a meeting in which team members review how the sprint went.
=======
JIRA
=======
1) Story creation
2) Story allocation
3) Reports generation etc...
=> JIRA is a licensed s/w. We can use trial version for practice.
==========================
Application Architecture
==========================
===========================
Tech Stack Of Application
===========================
==========================
Application Environments
==========================
1) DEV
2) SIT
3) UAT
4) PILOT
5) PROD
=> We need to install all required softwares (dependencies) to run our application
==================
What is Docker ?
==================
=> With the help of containerization we can run our app in any machine.
=> Docker will take care of dependencies installation required for app execution.
=====================
Docker Architecture
=====================
1) Dockerfile
2) Docker Image
3) Docker Registry
4) Docker Container
=> A Dockerfile is used to specify where the app code is and what dependencies are
required for our application to execute.
Note: When we run a docker image, a docker container will start. A docker container
behaves like a lightweight Linux machine, but it shares the host kernel rather than being a full virtual machine.
=============================
Install Docker in Linux VM
=============================
# Install Docker
$ sudo yum update -y
$ sudo yum install docker -y
$ sudo service docker start
$ docker -v
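# optional step (an assumption, not in the notes): allow the default user to run docker without sudo, then log out and back in
$ sudo usermod -aG docker ec2-user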
==================
Docker Commands
==================
############################################################
# Getting container logs
$ docker logs <container-id>
===========
Dockerfile
===========
1) FROM
2) MAINTAINER
3) RUN
4) CMD
5) COPY
6) ADD
7) WORKDIR
8) EXPOSE
9) ENTRYPOINT
10) USER
======
FROM
======
Ex:
FROM openjdk:17
FROM python:3.3
FROM node:16.0
FROM mysql:8.5
============
MAINTAINER
============
EX:
=====
RUN
=====
=> RUN keyword is used to specify instructions to execute at the time of docker
image creation.
EX:
Note: We can write multiple RUN instructions in single docker file and all those
instructions will be processed in the order.
=====
CMD
=====
=> CMD keyword is used to specify instructions to execute at the time of docker
container creation.
EX:
Note: We can write multiple CMD instructions in single docker file but docker will
process only last CMD instruction.
========
COPY
=========
EX:
=============
ADD Keyword
=============
EX:
==========
WORKDIR
==========
WORKDIR /usr/app
Note: Once WORKDIR is executed the remaining dockerfile keywords will execute from
workdir path.
======
USER
======
========
EXPOSE
========
=> It is used to specify on which port number our application will run in container
Ex:
EXPOSE 8080
===========
ENTRYPOINT
===========
Note:
Ex:
ENTRYPOINT ["python", "app.py"]
====================
sample Dockerfile
====================
FROM ubuntu
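The sample above is truncated; a fuller sketch showing how several of the keywords combine (package names and paths are illustrative, not from the notes):
FROM ubuntu
MAINTAINER Ashok IT
RUN apt-get update -y && apt-get install -y curl
WORKDIR /usr/app
COPY . .
EXPOSE 8080
CMD ["bash"]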
===========================
========================================
How to push docker image to docker hub
========================================
$ docker images
$ docker login
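The tag and push commands themselves are not listed; a hedged sketch (Docker Hub username, image name and tag are placeholders):
$ docker tag <image-name> <dockerhub-username>/<image-name>:v1
$ docker push <dockerhub-username>/<image-name>:v1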
===================================
Can we change Dockerfile name ?
==================================
Note: We need to pass modified file name as input to build docker image
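For example (file and image names are placeholders):
$ docker build -f MyDockerfile -t myapp-image .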
====================================
Dockerizing Java Web Application
====================================
FROM tomcat:8.0.20-jre8
EXPOSE 8080
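# COPY step assumed (not shown in the notes): place the built war into Tomcat's webapps folder
COPY target/maven-web-app.war /usr/local/tomcat/webapps/maven-web-app.war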
==========================================================
$ cd maven-web-app
$ mvn clean package
$ ls -l target
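# build and run steps assumed (not listed in the notes); image name and host port are placeholders
$ docker build -t maven-web-app .
$ docker run -d -p 8080:8080 maven-web-app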
$ docker images
=> Enable host port number in security group and access our application
URL : http://public-ip:host-port/maven-web-app/
============================================
Can we get into docker container machine ?
===========================================
$ docker ps
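# attach a shell inside the running container (the container id comes from 'docker ps' above)
$ docker exec -it <container-id> /bin/bash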
$ pwd
$ ls -l webapps
$ exit
==========================================
Dockerizing Java Spring Boot Application
==========================================
Note: Spring Boot uses Tomcat as embedded server with 8080 as port number.
FROM openjdk:17
WORKDIR /usr/app/
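# COPY step assumed (not shown in the notes): copy the built jar into the workdir with the name used by ENTRYPOINT below
COPY target/*.jar sbapp.jar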
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "sbapp.jar"]
======================================================================
$ cd spring-boot-docker-app
$ ls -l target
$ docker images
$ docker ps
URL : http://public-ip:host-port/
=====================================
Dockerize Python Flask Application
=====================================
FROM python:3.6
COPY . /app
WORKDIR /app
EXPOSE 5000
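# remaining steps assumed (not shown in the notes): install dependencies (assumes the project has a requirements.txt) and start the app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]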
======================================================================================
$ cd python-flask-docker-app
$ docker ps
URL : http://public-ip:host-port/
=======================================================================================
Docker - Summary
=======================================================================================
1) Application Architecture
2) Tech Stack of application
3) Challenges in Application Deployments
4) App Environments
5) Containerization
6) Docker
7) Docker Architecture
8) What is Dockerfile
9) Dockerfile Keywords
10) How to build Docker images
11) How to push docker img to docker registry
12) How to pull docker images
13) How to run docker images
14) Port mapping & detached mode
15) Container Logs
16) Java web app with Docker
17) Spring Boot App with Docker
18) Python App with Docker
================
Docker Network
===============
=> If we run 2 containers under the same network then one container can communicate
with another container.
1) bridge
2) host
3) none
=> The bridge network is used to run standalone containers. It assigns one IP per
container and is the default network driver for our containers.
=> The host network is used to run standalone containers. It does not assign a
separate IP to the container; the container shares the host machine's network.
1) Overlay
2) MacvLan
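A minimal sketch of creating a user-defined bridge network and running two containers
on it so they can reach each other by name (network, container and image names are
assumed examples):
$ docker network ls
$ docker network create ashokit-nw
$ docker run -d --name app1 --network ashokit-nw nginx
$ docker run -d --name app2 --network ashokit-nw nginx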
1) Docker Network
2) Docker Compose
3) Docker Volumes
4) Docker Swarm
===============
Docker Compose
===============
=> It is used to manage multi-container based applications
Ex:
HOTELS-API
TRAINS-API
CABS-API
FLIGHTS-API
Note: When we have multiple containers like this, management (create / stop / start)
becomes very difficult.
=> In docker compose, using single command we can create / stop / start multiple
containers at a time.
=> For Docker Compose we need to provide containers information in YML file i.e
docker-compose.yml
=> The default file name is docker-compose.yml (we can change it).
===========================docker-compose.yml==========================
=======================================================================
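A minimal generic sketch of a docker-compose.yml (service name, image and ports are
assumed examples; a full Spring Boot + MySQL example is given below):
version: "3"
services:
  webapp:
    image: ashokit/javawebapp
    ports:
      - "8080:8080"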
======================
Docker Compose Setup
======================
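Before the permission step below, the docker-compose binary has to be downloaded; a
sketch assuming the standard GitHub release URL (the version number is just an example):
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose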
# Give permission
$ sudo chmod +x /usr/local/bin/docker-compose
================================================
Spring Boot with MySQL DB Using Docker Compose
================================================
version: "3"
services:
  application:
    image: sbapp
    ports:
      - 8080:8080
    networks:
      - ashokit-sbapp-nw
    depends_on:
      - mysqldb
    volumes:
      - /data/sb-app
  mysqldb:
    image: mysql:5.7
    networks:
      - ashokit-sbapp-nw
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=sbms
    volumes:
      - /data/mysql
networks:
  ashokit-sbapp-nw:
================================
Application Execution Process
================================
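Assuming the docker-compose.yml shown above, the containers are started with (sketch):
$ docker-compose up -d
$ docker-compose ps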
URL : http://public-ip:8080/
# go inside db container
$ docker exec -it <db-container-name> /bin/bash
# connect to mysql-db
$ mysql -h localhost -u root -proot
# check databases
$ show databases;
# select database
$ use sbms;
# show tables
$ show tables;
====================================
Stateful Vs Stateless Containers
====================================
Note: In the above Spring Boot application we are using MySQL DB to store the data.
When we re-create the containers we lose our data (this is not acceptable in real time).
=> Even if we deploy the latest code or re-create the containers, we should not lose
our data.
================
Docker Volumes
================
=> Volumes are used to persist the data which is generated by Docker container
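A minimal sketch of the named-volume commands (volume name is an assumed example):
$ docker volume create ashokit-data
$ docker volume ls
$ docker run -d -v ashokit-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:5.7
$ docker volume inspect ashokit-data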
====================================================
Making Docker Container Stateful using Bind Mount
====================================================
$ mkdir app
version: "3"
services:
  application:
    image: spring-boot-mysql-app
    ports:
      - "8080:8080"
    networks:
      - springboot-db-net
    depends_on:
      - mysqldb
    volumes:
      - /data/springboot-app
  mysqldb:
    image: mysql:5.7
    networks:
      - springboot-db-net
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=sbms
    volumes:
      - ./app:/var/lib/mysql
networks:
  springboot-db-net:
$ docker-compose up -d
$ docker-compose down
==================
Docker Swarm
==================
-> We will set up Master and Worker nodes using a Docker Swarm cluster
-> The Master Node will schedule the tasks (containers) and manage the nodes and node
failures
-> Worker nodes will perform the actions (containers will run here) based on the
master node's instructions
-> Docker Swarm is embedded in the Docker engine (no need to install Docker Swarm
separately)
==================
Swarm Features
==================
1) Cluster Management
2) Decentralized design
3) Declarative service model
4) Scaling
5) Load Balancing
============================
Docker Swarm Cluster Setup
============================
-> Create 3 EC2 instances (ubuntu) & install docker in all 3 instances using below
2 commands
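The 2 install commands are not shown here; a sketch assuming the same convenience
script used elsewhere in these notes (the usermod step is an assumption, to let the
ubuntu user run docker without sudo):
$ curl -fsSL get.docker.com | /bin/bash
$ sudo usermod -aG docker ubuntu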
Note: Enable 2377 port in security group for Swarm Cluster Communications
1 - Master Node
2 - Worker Nodes
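Before getting the join token, the swarm has to be initialized on the master node (sketch):
$ sudo docker swarm init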
# Get Join token from master (this token is used by workers to join with master)
$ sudo docker swarm join-token worker
Note: Copy the token and execute in all worker nodes with sudo permission
====================
Docker Swarm Service
====================
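A minimal sketch of creating the swarm service behind the URL below (image name taken
from the rest of these notes; service name, replica count and port mapping are assumed):
$ sudo docker service create --name java-web-app -p 8080:8080 --replicas 2 ashokit/javawebapp
$ sudo docker service ls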
URL : http://master-node-public-ip:8080/java-web-app/
1) What is Docker
2) Docker Advantages
3) Docker Architecture
4) Dockerfile & keywords
5) Docker Images
6) Docker Registry
7) Docker Containers
8) Docker Network
9) Docker Volumes
10) Docker Compose
11) Docker Swarm
12) Java Web App + Docker
13) Spring Boot + Docker
14) Python Flask + Docker
15) Angular + Docker
16) React JS + Docker
17) DOT Net + Docker
18) Spring Boot + MySQL + Docker Compose
=======================
Docker Vs Kubernetes
=======================
=> Packaging our app code and dependencies as single unit for execution is called
as Containerization.
============
Kubernetes
============
=> Kubernetes (K8S) is a container orchestration platform used to manage containers
(create/stop/start/remove/scale-up/scale-down)
1) Auto Scaling
2) Self Healing
3) Load Balancing
==================
K8S Architecture
==================
=> In K8S cluster we will have Master Node & Worker Nodes
=> Master Node will receive the request and will assign the task to worker nodes.
=> Worker Nodes will perform the action based on task given by Master node.
=======================
K8S Cluster Components
=======================
1) Master Node
- API Server
- Schedular
- Controller Manager
- ETCD
2) Worker Node
- Kubelet
- Kube Proxy
- Docker Runtime
- POD
=> API Server will receive the request given by kubectl and will store the request
in etcd.
=> Scheduler will identify pending requests in etcd and will identify a worker node
to schedule the task.
Note: Scheduler will communicate with the worker node using Kubelet.
=> Kubelet is called the Node Agent. It will maintain all the info related to the
worker node.
=> Controller-manager will verify whether all the tasks are working as expected or not.
=> In Every Worker Node, Docker Engine will be available to run Docker Container.
=> POD is a smallest building block that we can deploy in k8s cluster.
===================
K8S Cluster Setup
===================
3) Provider Managed Cluster (Ex : AWS EKS, Azure AKS, GCP GKE....) - Realtime.
================
AWS EKS Cluster
================
=> EKS is a highly scalable and robust solution to run k8s components.
==========================
Kubernetes Resources
==========================
1) PODS
2) Services
3) Namespaces
4) ReplicationController (RC) - Outdated
5) ReplicaSet (RS)
6) Deployment
7) DaemonSet
8) StatefulSet
9) IngressController
10) HPA
11) Helm Charts
12) K8S Monitoring (Grafana & Prometheus)
13) EFK Stack (Log Monitoring)
============
What is POD
============
=> When POD is damaged/crashed K8S will replace that pod (Self Healing)
=> When we create multiple PODS for one application, the load will be balanced by k8s.
=> To provide public access for our PODS we need to expose our PODS using K8S
Service concept.
=============
K8S Services
==============
1) Cluster IP
2) NodePort
3) Load Balancer
=====================
What is Cluster IP ?
=====================
=> When pod is crashed/damaged k8s will replace that with new pod
=> Using Cluster IP we can access pods only with in the cluster
=====================
What is NodePort ?
=====================
=> NodePort service is used to expose our pods outside the cluster.
=> Using NodePort we can access our application with Worker Node Public IP address.
=> When we use the Node Public IP to access our pod, then all requests will go to the
same worker node (the burden on that node will increase).
Note : To distribute load to multiple worker nodes we will use LBR service.
=================================
What is Load Balancer Service ?
================================
=> It is used to expose our pods outside cluster using AWS Load Balancer
=> When we access load balancer url, requests will be distributed to all pods
running in all worker nodes.
=================
K8s Namespaces
================
============
Summary
=============
1) What is Containerization
2) What is Orchestration
3) K8S Introduction
4) K8S Advantages
5) K8S Architecture
6) AWS EKS Cluster Setup
7) What is POD
8) What is Service (Cluster IP, Node Port & LBR)
9) What is Namespaces
=============================================================================
-------------------------
K8S Manifest YML Syntax
-------------------------
---
apiVersion :
kind:
metadata:
spec:
...
=====================
K8S POD Manifest YML
======================
---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
...
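A sketch of creating the pod from the manifest above and checking it (the manifest
file name is an assumed example):
$ kubectl apply -f pod.yml
$ kubectl get pods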
# Describe pod
$ kubectl describe pod <pod-name>
=============
K8S Service
=============
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30070
...
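A sketch of applying the service and checking it (file name assumed):
$ kubectl apply -f service.yml
$ kubectl get svc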
Note: The POD label is used as the 'selector' in the service (to identify the pods)
=> Enable NodePort number in worker node security group inbound rules.
URL : http://node-public-ip:nodeport/java-web-app/
============================================================================
Q) Is it mandatory to specify Node Port Number in service manifest yml ?
============================================================================
Ans) No, if we don't specify it, k8s will assign one random port number between
30000 and 32767.
======================================
POD & Service in single manifest YML
=======================================
---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...
Note: In the above manifest yml we have not configured the Node Port number. K8S will
assign one random number.
=> We need to enable that Node Port number in security group inbound rules.
===============
K8S Namespaces
===============
Note: When we delete a namespace, all the resources which were created under that
namespace also get deleted.
# check namespaces
$ kubectl get ns
Note: When we don't specify any namespace then it will use 'default' namespace
---------------------------
k8s namespace manifest yml
---------------------------
---
apiVersion: v1
kind: Namespace
metadata:
  name: ashokit-ns2
...
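A sketch of creating the namespace from the manifest above and listing resources
inside it (file name assumed):
$ kubectl apply -f namespace.yml
$ kubectl get ns
$ kubectl get pods -n ashokit-ns2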
=====================================
Namespace + pod + service - manifest
=====================================
---
apiVersion: v1
kind: Namespace
metadata:
  name: ashokit-ns3
---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  namespace: ashokit-ns3
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
  namespace: ashokit-ns3
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...
$ kubectl get ns
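Since the pod and service above are created in ashokit-ns3, they are listed with the
-n flag (sketch):
$ kubectl get pods -n ashokit-ns3
$ kubectl get svc -n ashokit-ns3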
================================================================
=> As of now, we have created POD manually using POD Manifest YML
=> If POD is damaged/crashed/deleted then k8s will not create new POD.
=> If we create pod using k8s resources, then pod life cycle will be managed by
k8s.
1) ReplicationController (Outdated)
2) ReplicaSet
3) Deployment
4) DaemonSet
5) StatefulSet
===========
ReplicaSet
===========
Note: When POD is damaged/crashed/deleted then ReplicaSet will create new POD.
=> It will always maintain the given number of pods for our application.
=> With this approach we can achieve high availability for our application.
=> By using RS, we can scale up and scale down our PODS count.
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: javawebrs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: ashokit/javawebapp
          ports:
            - containerPort: 8080
...
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...
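A sketch of applying the manifest above and scaling the pod count (file name assumed):
$ kubectl apply -f replicaset.yml
$ kubectl scale rs javawebrs --replicas=3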
$ kubectl get rs
Note: If we configure the service type as LoadBalancer, then an AWS Load Balancer (LBR)
will be created.
=> Using the AWS LBR DNS URL we can access our application.
=> K8S supports Auto Scaling when we use 'Deployment' to create pods.
================
K8S Deployment
================
=> Advantages:
1) Zero downtime
2) Auto Scaling
=> Deployment strategies:
1) ReCreate
2) RollingUpdate
3) Canary
=> ReCreate means it will delete all existing pods and will create new pods
=> RollingUpdate means it will delete and create new pods one by one ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebdeploy
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: ashokit/javawebapp
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...
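A sketch of applying the deployment above and watching the rolling update (file name
assumed):
$ kubectl apply -f deployment.yml
$ kubectl get deploy
$ kubectl rollout status deployment javawebdeploy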
==============================
Blue - Green Deployment Model
==============================
===================
Working Process
===================
URl : https://github.com/ashokitschool/kubernetes_manifest_yml_files.git
Step-7: Make Green Pods Live (update selector and apply the yml)
=================
Package Managers
=================
amazon linux vm : yum
ubuntu linux vm : apt / apt-get
===============
What is HELM ?
================
=> HELM is a package manager which is used to install the required software in a k8s
cluster
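The get_helm.sh script used below has to be downloaded first; a sketch assuming the
standard Helm 3 install script:
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh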
$ ./get_helm.sh
$ helm
# Before you can install the chart you will need to add the metrics-server repo to
helm
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
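After adding the repo, the chart itself is installed with something like (release name
assumed):
$ helm upgrade --install metrics-server metrics-server/metrics-server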
$ helm list
=========================================
Metric Server Unavailability issue fix
=========================================
URL : https://www.linuxsysadmins.com/service-unavailable-kubernetes-metrics/
=> Edit the below file and add new properties which are given below
------------------------------------------------------------------
#######################
Kubernetes Monitoring
#######################
=> We can monitor our k8s cluster and cluster components using below softwares
1) Prometheus
2) Grafana
=============
Prometheus
=============
-> Prometheus collects and stores its metrics as time series data
=============
Grafana
=============
-> It provides charts, graphs, and alerts for the web when connected to supported
data sources.
-> Grafana allows you to query, visualize, alert on and understand your metrics no
matter where they are stored. Create, explore and share dashboards.
#########################################
How to deploy Grafana & Prometheus in K8S
##########################################
-> Using HELM charts we can easily deploy Prometheus and Grafana
##################################################
Install Prometheus & Grafana In K8S Cluster using HELM
#################################################
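Before searching, the prometheus-community chart repo has to be added (standard chart
repo URL):
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update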
# Search Repo
$ helm search repo prometheus-community
# install prometheus
$ helm install stable prometheus-community/kube-prometheus-stack
# By default the Prometheus and Grafana services are available only within the cluster
as ClusterIP; to access them from outside, let's change the service type to LoadBalancer.
# Edit the Prometheus service & change the service type to LoadBalancer, then save and
close that file
$ kubectl edit svc stable-kube-prometheus-sta-prometheus
# Now edit the grafana service & change service type to LoadBalancer then save and
close that file
$ kubectl edit svc stable-grafana
# Check in which nodes our Prometheus and grafana pods are running
$ kubectl get pods -o wide
URL : http://LBR-DNS:9090/
URL : http://LBR-DNS/
UserName: admin
Password: prom-operator
=> Once we log in to Grafana, we can monitor our k8s cluster. Grafana will provide all
the data in chart format.
===========================================
Q-1) Self Introduction Of DevOps Engineer
===========================================
=> I am Ashok, having 4+ years of experience as a DevOps engineer with AWS cloud
Frontend : Angular
Database : Oracle
=> We are using the AWS cloud platform to maintain the required infrastructure for the
application
=> We are using several devops tools to automate project build and deployment
process
Jenkins : CI CD server
===================
Project Name : IES
===================
Note: Intranet means we can't access this app over the public network (internet)
=> This application is used to provide insurance plans for New Jersey state citizens.
=> Citizens should visit the nearest DHS office to apply for a plan
Note: Every plan will have some business conditions. If the citizen's data matches the
business conditions, the citizen will be approved for the plan; otherwise the citizen
will be denied.
==========================
Roles & Responsibilities
==========================
====================
Projects to mention
====================
2) e-Learning platform
===================
Resume Building
===================
Note: Create a new email id and take a new phone number to apply for jobs
- Technologies
- Project Name
- Project Description (Optional)
- Project Duration
- Project Tech Stack
- Your Roles & Responsibilities
- Your Gender
- DOB
- Marriage Status
====================
How to cover gap ?
====================
###### Note: To prove our experience we need experience documents from the consultancy
========================================================
What are the day to day activities of DevOps Engineer ?
========================================================
=> Ticket assignment will happen in the stand-up call / after the stand-up call
========================================================================
- CI Job
- CD Job
Project-2 : Git Hub + Maven + Sonar + Nexus + Docker + K8S + Jenkins + Ansible
(springboot)
Project-3 : Angular/React App
################################
Project Setup using below tools
################################
1) Maven
2) Git Hub
3) Jenkins
4) Docker
5) Kubernetes
URL : http://public-ip:8080/
# install docker
$ curl -fsSL get.docker.com | /bin/bash
# Restart Jenkins
$ sudo systemctl restart jenkins
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ eksctl version
###### Step-5 :: Create IAM role & attach to EKS Management Host ######
usecase - ec2
2) Add permissions
IAM - fullaccess
VPC - fullaccess
ec2 - fullaccess
cloudformation - fullaccess
administrator - access
(Select EC2 => Click on -> Security -> attach IAM role we have created)
Syntax:
example:
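A sketch of an eksctl cluster-creation command (cluster name, region, instance type
and node count are assumed examples):
$ eksctl create cluster --name ashokit-cluster --region ap-south-1 --node-type t2.medium --nodes 3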
Note: Cluster creation will take 5 to 10 mins of time (we have to wait)
URL : https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
###### Step-9 :: Update EKS Cluster Config File in Jenkins Server ######
Note: Execute below command in Eks Management host & copy kube config file data
$ cat .kube/config
=> Execute below commands in Jenkins Server and paste kube config file
$ cd /var/lib/jenkins
$ sudo mkdir .kube
$ sudo vi .kube/config
#####################################################
######## Step-10 : Create Jenkins Pipeline ######
#####################################################
#########################
stage-1 : git clone
#########################
stage('Clone Repo') {
    steps {
        git credentialsId: 'GIT-Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
    }
}
#########################
stage-2 : mvn clean build
#########################
stage('Build') {
    steps {
        sh 'mvn clean package'
    }
}
##################################################
stage-3 : build docker image
##################################################
##################################################
Stage-4 : Push docker image into docker hub
##################################################
-> push docker image into docker hub using secret text
-> Use pipeline syntax to generate secret for docker hub account
##########################
Step-5 : Deploy in k8s
#########################
stage('Deploy') {
    steps {
        sh 'kubectl delete deployment mavenwebappdeployment'
        sh 'kubectl apply -f maven-web-app-deploy.yml'
    }
}
stages {
    stage('Git Clone') {
        steps {
            git credentialsId: 'GIT_Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
        }
    }
    stage('Maven Build') {
        steps {
            sh 'mvn clean package'
        }
    }
    stage('Create Image') {
        steps {
            sh "docker build -t ashokit/mavenwebapp ."
        }
    }
======================================
Create SonarQube stage
======================================
Ex: sqa_a7e9c43d3a8649618b53d79e203013c25dbe3e3e
-> Manage Jenkins -> Plugins -> Available -> SonarQube Scanner Plugin -> Install it
-> Manage Jenkins -> Configure System -> SonarQube Servers -> Add SonarQube Server
- Name : Sonar-Server-7.8
- Server URL : http://52.66.247.11:9000/ (give your Sonar server URL here)
- Add Sonar Server Token
-> Once above steps are completed, then add below stage in the pipeline
stage('SonarQube analysis') {
    steps {
        // 'def' variables need a script block inside a declarative pipeline stage
        script {
            withSonarQubeEnv('sonar-9.9.3') {
                def mavenHome = tool name: "Maven-3.9.6", type: "maven"
                def mavenCMD = "${mavenHome}/bin/mvn"
                sh "${mavenCMD} sonar:sonar"
            }
        }
    }
}
====================================
# Step-4 : Create Nexus Stage
====================================
# Run nexus using docker container
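A sketch of running Nexus as a container (the official sonatype/nexus3 image; 8081 is
its default UI port):
$ docker run -d -p 8081:8081 --name nexus sonatype/nexus3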
Repo : http://3.108.63.133:8081/repository/ashokit-snapshot-repo/
-> Install Nexus Repository Plugin using Manage Plugins (Plugin Name : Nexus Artifact Uploader)
# install docker
$ curl -fsSL get.docker.com | /bin/bash
==================
Final Pipeline
==================
pipeline {
    agent any
    tools {
        maven "Maven-3.9.6"
    }
    stages {
        stage('Clone Repo') {
            steps {
                git 'https://github.com/ashokitschool/maven-web-app.git'
            }
        }
        stage('Maven Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Code Review') {
            steps {
                withSonarQubeEnv('sonar-9.9.3') {
                    sh "mvn sonar:sonar"
                }
            }
        }
        stage("Nexus Upload") {
            steps {
                nexusArtifactUploader artifacts: [[artifactId: '01-maven-web-app', classifier: '', file: 'target/maven-web-app.war', type: 'war']], credentialsId: 'nexus-server', groupId: 'in.ashokit', nexusUrl: '3.108.63.133:8081', nexusVersion: 'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repo', version: '3.0-SNAPSHOT'
            }
        }
        stage('Docker Image') {
            steps {
                sh 'docker build -t ashokit/mavenwebapp .'
            }
        }
        stage('Image Push') {
            steps {
                withCredentials([string(credentialsId: 'docker-acc-pwd-id', variable: 'dockerpwd')]) {
                    sh "docker login -u ashokit -p ${dockerpwd}"
                    sh "docker push ashokit/mavenwebapp"
                }
            }
        }
        stage('K8S Deploy') {
            steps {
                sh 'kubectl apply -f maven-web-app-deploy.yml'
            }
        }
    }
}
1) Monolithic Architecture
2) Microservices Architecture
-------------------------
Monolithic Architecture
-------------------------
----------------------------
Microservices Architecture
----------------------------
hotels-booking
trains-booking
flights-booking
cabs-booking
1) Loose coupling
4) Easy maintenance
------------------------
Microservices Project
------------------------
-----------
Procedure
----------