
Whatsapp Group For Class Updates : https://chat.whatsapp.com/KcTMyeLPRSr1IVmKvkEMvn

============================
Course Name : 14-DEVOPS-AWS
============================

=> Pre-Requisites : Your time + Your Interest

=> Course Content : 6 Modules

Module-1 : DevOps Introduction

Module-2 : Linux OS + Shell Scripting

Module-3 : DevOps Tools (20 Tools)

Module-4 : AWS Cloud (15+ Services)

Module-5 : Projects Setup

Module-6 : Interview Guide

=============
Course Info
=============

Course Code : 14-DEVOPS-AWS

Duration : 4 Months

Class Timings: 6:00 PM - 7:30 PM IST (Mon - Sat)

Class Mode : Online Classes ( zoom )

Daily class notes will be shared (soft copy materials) - lifetime access

Daily class recording videos will be shared (videos - 1 year access)

Course Fee : 16,000 INR

#### We will create one WhatsApp group for enrolled students for discussions ####

Note: Every day the last 15 mins are reserved for doubt clarification

Note: After completing this course, you can attend interviews as a candidate with 3 to 4 years of
experience.

=================
Operating System
================

1) Linux OS
2) Shell Scripting
=============
DevOps Tools
=============

1) Maven, Gradle & NPM : Build Tools

2) Git Hub & BitBucket : Source Code Repository Servers

3) Nexus & JFrog : Artifact Repository Servers (to store jar or war files)

4) SonarQube : Code Review Software

5) Tomcat & JBoss : WebServer For Deployment

6) Docker : Containerization Software ( package code + dependencies )

7) Kubernetes : Orchestration Software ( to manage containers )

8) Jenkins : CI CD Software ( To automate build + deployment process)

9) Grafana + Prometheus & Nagios : Monitoring Tools

10) EFK Stack : To monitor application logs -> (Elasticsearch + FluentD + Kibana)

11) JIRA : Project Management Software

12) Terraform : To automate Infrastructure creation

13) Ansible : Configuration Management

==========
AWS Cloud
=========

1) What is Cloud Computing


2) Cloud Advantages
3) AWS Introduction
4) AWS Account Creation

5) EC2 (EBS + LBR + ASG) : To create Virtual machines


6) RDS : To create Relational Databases
7) S3 : Storage Service (Unlimited)
8) Route 53 : Domain Mapping Service
9) VPC : Virtual Private Cloud (Network)
10) IAM : Identity and Access Management
11) Elastic Beanstalk : PaaS Platform
12) ECS : To run Docker containers
13) EKS : To run k8s cluster
14) Cloud Watch : To monitor cloud resources
15) SNS : Notifications
16) Lambdas : Serverless computing
17) Cloud Front : Edge Locations

=================
Software Project
=================
=> A collection of programs is called a Software Project

================================
Why we need Software Project ?
================================

=> To reduce human efforts

Ex: Net Banking, Ticket Booking, Online Training platforms etc....

======================
Project Architecture
======================

=> Software Project Contains 3 Layers

1) Frontend : User interface (Presentation layer)

2) Backend : Business Layer (Business logic)

3) Database : To store data

======================
Frontend Technologies
======================

1) HTML & CSS


2) Java Script
3) BootStrap
4) Angular / React JS

=====================
Backend Technologies
=====================

1) Java
2) .Net
3) Python
4) PHP
5) Node JS

===========
Databases
===========

1) Oracle
2) MySQL
3) PostGres
4) SQL Server
5) Mongo DB
6) Cassandra etc....

===================
What is DevOps ?
==================

DevOps = Development + Operations


=> DevOps is a culture

=> DevOps is a process

=> DevOps is set of practices

=> DevOps process is used to establish collaboration between the Development team &
Operations team.

=> The main aim of DevOps is to deliver projects to clients with high quality in
less time.

==========================
What is DevOps Life Cycle
=========================

=> We have 7 C's in DevOps life cycle

1) Continuous Planning

2) Continuous Development

3) Continuous Integration

4) Continuous Testing

5) Continuous Deployment

6) Continuous Monitoring

7) Continuous Feedback

=============================================
Roles & Responsibilities of DevOps Engineer
=============================================

1) Setup Infrastructure (Ex: machines, servers, storage, security & network) ::


Terraform

2) Configuration Management :: Ansible

3) Setup Source Code Repositories to integrate developers code :: Git Hub /


BitBucket

4) Setup Aritifact Repositories to store build artifacts (jar/war) : Nexus / JFrog

5) Manage Role Based Access (RBAC)

6) Create CI CD Pipelines :: Jenkins

7) Perform Build & Deployment :: Maven, Docker and Kubernetes

8) Generate Code Review Report :: SonarQube

9) Monitor CI CD pipelines

10) Monitor Infrastructure : Prometheus and Grafana

11) Centralize application Logs :: EFK Stack


12) Issues fixing

================
What is SDLC ?
================

=> SDLC stands for Software Development LifeCycle

=> SDLC represents Software Project Development process from starting to ending

=> We have several stages in SDLC process

1) Requirements Gathering

2) Analyze Requirements

3) Planning

4) Implementation (Development)

5) Testing

6) Deployment

7) Monitoring

=> We have several SDLC methodologies

1) Waterfall Model

2) Agile Model

===============
Waterfall Model
===============

=> It is a linear process to develop and deliver the application

=> All stages will happen one after other

=> Requirements are fixed

=> Budget is fixed

=> Client involvement is very less

=> Client will see the project only at the end

Note: If the client doesn't like our delivery, then the entire money and time is wasted.

=============
Agile Model
=============
=> Agile is an iterative approach to develop and deliver the application

=> Planning + Development + Testing + Deployment + Feedback is continuous process


in Agile.

=> We will deliver project to client in Multiple Releases (Sprints)

=> Client involvement is very high in the project

=> Client feedback is very important to proceed further

=> Requirements are not fixed

=> Budget is not fixed

#### Ex: Client provided 100 requirements for development ####

Sprint - 1 :: 30 requirements : plan + develop + test + deliver + take feedback

Sprint - 2 :: 30 requirements : plan + develop + test + deliver + take feedback

Sprint - 3 :: 30 requirements : plan + develop + test + deliver + take feedback

Sprint - 4 :: 10 requirements : plan + develop + test + deliver + take feedback

==========================
What is Infrastructure ?
==========================

The resources that are required to run our business are called Infrastructure

Ex: Machines, Servers, Databases, Power, Network, Storage, Security & Monitoring

=> Challenges with IT infrastructure

1) Security

2) Scalability

3) Availability

4) Network Issues

5) Disaster Recovery

=> Purchasing Infrastructure is costly

=> Managing Infrastructure is risky

=> To overcome the above problems of infrastructure management, nowadays companies are preferring Cloud Computing.

========================
What is Cloud Computing
=========================
=> It is the process of delivering IT resources on demand basis over the internet
with Pay as you go model.

=> Pay as you go model means use the resources and pay bill for using them just
like our post paid bill / electricity bill/ credit bills etc...

=================
Cloud Providers
================

=> Companies which provide IT resources over the internet are called Cloud
Providers

1) Amazon => AWS Cloud

2) Microsoft => Azure Cloud

3) Google => GCP Cloud

4) IBM => IBM Cloud

5) Oracle => Oracle cloud

============================
Cloud Computing Advantages
===========================

1) Pay as you go

2) Low cost

3) Scalability

4) Availability

5) Reliability

6) Unlimited Storage

7) Security

8) Backup

=======================
What is AWS Cloud ?
=======================

=> AWS stands for "Amazon Web Services"

=> AWS started in the year 2006

=> 190+ countries are using AWS cloud infrastructure to run their businesses.

==========================
AWS Global Infrastructure
==========================

=> AWS maintains its infrastructure using Regions & Availability Zones

34 - Regions (Geographical Locations)

104 - AZs (Data Centers)

=> Datacenter means Server Room

##### AWS account Setup : https://youtu.be/xi-JDeceLeI ######

==========
Revision
==========

1) DevOps with AWS Course Introduction

2) Pre-Requisites to learn DevOps with AWS

3) Course Content (Route map)

4) DevOps Tools Overview

5) AWS Services Overview

6) What is Software Project & Why

7) Software Project Architecture

8) What is Frontend & Frontend Technologies

9) What is Backend & Backend Technologies

10) What is Database

11) Development Team Roles & Responsibilities

12) Testing Team Roles & Responsibilities

13) DevOps Team Roles & Responsibilities

14) What is DevOps & Why DevOps ?

15) DevOps life cycle (7C's of DevOps)

16) What is SDLC

17) What is Waterfall Model

18) What is Agile Model

19) What is infrastructure

20) What are the challenges with On-Prem infrastructure

21) What is Cloud Computing

22) Cloud Advantages


23) AWS Cloud Introduction

24) AWS Account Setup

=> Module-1 : DevOps Introduction (Completed)

======================================
Module-2 : Linux OS + Shell Scripting
======================================

=> Operating System is a platform which is used to communicate with computers

=> Operating System acts as mediator between users and computers

=> Operating System is mandatory to use a computer

=> A computer without an Operating System is of no use

=> Using Operating System we can run applications in Computer

Ex: Notepad, Calc, Paint, Browser, VLC Player

=> There are several Operating Systems available in the market

1) Windows OS

2) Linux OS

3) Mac OS

============
Windows OS
============
-> Developed by Microsoft Company

-> It is GUI Based (Graphical User interface)

-> Easy to use

-> Highly recommended for personal use

Ex: Watching Movies, Playing Games, Browsing, Storing personal data ....

-> Security features are limited (we need to install Anti-Virus software)

-> Windows is Licensed Operating System.

-> Windows is single user based OS

==========
Linux OS
==========

-> It is free and open source operating system

-> Multi User Based Operating System (multiple users can access at a time)

-> Anti Virus S/w not required for Linux Machines

-> Highly recommended for Server Environments

Ex: Database, Jenkins, SonarQube, Nexus, Docker, k8s.....

-> Linux is CLI based (Command Line Interface)

=================
History of Linux
=================

-> Developed by Linus Torvalds

-> Earlier "Linus Torvalds" used Unix OS and he faced some challenges with Unix OS
and he reported to company to change Unix OS but Company rejected his suggetions.

-> Linus Torvalds started developing his own operating by using "Minux OS"

(Li) nus + Mi (nux) ===> Linux

-> After developing Linux OS, "Linus Torvalds" released Linux OS into the market free of
cost along with its source code.

-> Several companies downloaded the Linux OS source code, modified it according to
their requirements and released it into the market with their own brand names. Those are
called Linux Distributions / Flavours.

1) Amazon Linux
2) Ubuntu Linux
3) CentOS Linux
4) Debian Linux
5) Fedora Linux
6) SUSE Linux
7) Kali Linux
8) RED HAT Linux etc..

1) What is Operating System

2) What is Windows OS

3) Linux OS

4) Linux History

5) Linux Distributions

==================================
Environment Setup to Learn Linux
=================================

1) Create Account in AWS Cloud & Login into that

2) Create Linux Virtual Machine using EC2 service

3) Connect with Linux VM using MobaXterm software (Windows Users)

4) Practice Linux Commands

===============
Linux Commands
===============

whoami : Displays logged in username

date : displays current date

cal : displays calendar

clear : Clear screen

pwd : Display present working directory

cd : change directory

mkdir : To create directory (folder)

rmdir : To remove empty directory

Note: To remove non-empty directory we will use below command

rm -r <dirname>

ls : List files & directories available in present working directory

ls -l : Long listing of the files & directories in alphabetical order

ls -lr : Long listing of the files & directories in reverse alphabetical order

ls -lt : Display latest files on top

ls -ltr : Display old files on top


ls -la : To display hidden files

ls -li : To display files with inode numbers

mv : To rename/move files & directories

touch : To create empty files

rm : Remove file

cat : To create files with data, append data to file and display file content

cat > f1.txt : Create new file with data

cat f1.txt : Print file content from top to bottom

cat -n f1.txt : Print file content with line numbers

cat >> f1.txt : Append data to existing file

tac : To print file content from bottom to top

tac f1.txt

cp : To copy data from one file to another file

cp f1.txt f2.txt

Note: How to copy more than one file data to another file

cat m1.txt m2.txt > m3.txt : Copy two files data to 3rd file

wc : word count (it will display no. of lines, no. of words and no. of characters in the
given file)

wc m3.txt

head : Display first 10 lines of the file

head devops.txt (print first 10 lines)

head -n 15 devops.txt (print first 15 lines)

head -n 30 devops.txt (print first 30 lines)

tail : Display last 10 lines of the file

tail devops.txt (print last 10 lines)

tail -n 15 devops.txt (print last 15 lines)

tail -n 30 devops.txt (print last 30 lines)


tail -n +5 devops.txt (print from 5th line to last line)

=======================
Text Editors in Linux
=======================

vi : visual editor

=> Using vi we can edit files in linux

$ vi linux.txt

Insert mode : To insert/modify data we should go to insert mode (press i)

Esc Mode : After modifications completed we should press Esc key

Save and Close : Press :wq to save the changes and close the file.

Close without Saving : press :q! for closing file without saving the changes

============
SED command
===========

=> SED stands for Stream Editor

=> SED command is used for replacement or substitution

=> We can replace one word with another word using SED command without opening file

$ sed 's/linux/unix/' linux.txt

Note: by default sed command will give output to terminal but it won't make changes
to original file. To make changes to original file we need to use -i in sed command

$ sed -i 's/linux/unix/' linux.txt

# delete last line of the file


$ sed -i '$d' linux.txt

# Delete second line of the file


$ sed -i '2d' linux.txt

# Delete from second line to till last line


$ sed -i '2, $d' linux.txt

=============
grep command
=============

=> Grep stands for global regular expression print

=> It is used for file content search based on given pattern

# Pattern Matching print


$ grep "keyword" filename

# Pattern Matching by ignoring case sensitive


$ grep -i 'devops' mydata.txt

# Print matching pattern lines along with line numbers


$ grep -n 'DEVOPS' mydata.txt

# invert match (print lines which doesn't have given pattern)


$ grep -v 'DEVOPS' mydata.txt

================================
Working with Zip files in Linux
================================

Syntax to create zip file : $ zip <zip-file-name> <files-to-add>

# Create zip file (all .txt files will be added to notes.zip file)
$ zip notes *.txt

# Print content of zip file


$ zip -sf notes.zip

# Add new file to existing zip


$ zip notes f1.txt

# Delete file from zip


$ zip -d notes f1.txt

# Unzip the files from zip file


$ unzip <zip-file-name>

# Create zip file with encryption (set password)


$ zip -e <filename> <files-to-add>

###### Note: man command is used to get documentation about linux commands #####

$ man vi

$ man ls

$ man zip

##########################
02 - Aug - 2023 (Wednesday)
###########################

===========================
locate and find commands
==========================

=> locate and find commands are used for file search

=> locate command will search for the files in locate database.

$ sudo yum install locate


$ locate 'apache'
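
=> Note: locate searches a pre-built file database, so results can be stale; as a hedged tip, refresh the database before searching (on yum-based distros the package providing locate is usually named mlocate)

$ sudo updatedb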

=> find command is used for more advanced search in linux os

# search for the file under home directory


$ sudo find /home -name f1.txt

# find empty files under home directory


$ sudo find /home -type f -empty

# find empty directories under home directory


$ sudo find /home -type d -empty

========================
Users & Groups in Linux
========================

=> Linux is Multi User Based OS

=> We can create Multiple User accounts in Linux Machine

=> In Every Linux Machine by default one 'root' user account will be available

root user => super user / Administrator

Note: Root user will have all privileges to perform any operation.

=> When we launch EC2 instance using "Amazon Linux AMI" we will get 'ec2-user' by
default.

Note: For every user one home directory will be created in Linux machine under
'home' directory

ec2-user => /home/ec2-user

ashok => /home/ashok

raju => /home/raju

rani => /home/rani

=> To get user account information we can use 'id' command in linux machine

$ id <uname>

# switch to root user account


$ sudo su -

# logout from root user account


$ exit

====================================
Working with User Accounts in Linux
===================================

# switch to root user


$ sudo su -

# Create New User Account


$ useradd <uname>

# Set Password for user account


$ passwd <uname>

# Display users we have created


$ cat /etc/passwd

# Delete User (it won't delete user home directory)


$ userdel <uname>

# Delete user along with user home directory


$ userdel <uname> --remove

# switch to user account


$ su <uname>

=====================================
Working with User Groups in Linux
=====================================

# Display all user-groups available in Linux

$ cat /etc/group

Note: When we create user in linux machine, it will create user account + user-
group

# Create New User-Group


$ groupadd <group-name>

# Add user to user-group


$ usermod -aG <group-name> <user-name>

# Remove user from user-group


$ gpasswd -d <user-name> <group-name>

# Display users belongs to user-group


$ sudo lid -g <group-name>

# Rename user-group
$ groupmod -n <new-name> <existing-name>

# Rename user
$ usermod -l <new-name> <existing-name>

# Delete User Group


$ groupdel <group-name>

################
03-Aug-2023(Thu)
################

==========================
File Permissions In Linux
==========================

=> In Linux VM everything is treated as file (Ex: normal files, directory files)

=> We can secure our files using File Permissions

=> We have 3 types of file permissions

1) READ (r)

2) WRITE (w)

3) EXECUTE (x)

=> File Permissions are divided into 3 sections

1) USER/Owner Permissions (rwx)

2) GROUP Permissions (rwx)

3) OTHERS Permissions (rwx)

Ex: rwxrwxrwx

=> To change file permissions in linux we can use 'chmod' command

Note: "+" is used to add permission and " - " is used to remove permission

# Adding write permission for user on f1.txt file


$ chmod u+w f1.txt

# Adding execute permission for group


$ chmod g+x f1.txt

# Remove write permission for group


$ chmod g-w f1.txt

# Add write and execute permission for others


$ chmod o+wx f1.txt

# Remove all permissions for others


$ chmod o-rwx f1.txt

===================================
File Permissions in Numeric Format
===================================

0 : No Permission

1 : Execute
2 : Write

3 : Write + Execute => 2 + 1

4 : Read

5 : Read + Execute => 4 + 1

6 : Read + Write => 4 + 2

7 : Read + Write + Execute => 4 + 2 + 1

$ chmod 777 f1.txt


$ chmod 445 f1.txt
$ chmod 655 f1.txt
$ chmod 444 f1.txt
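
# Worked example (a quick sketch of how one numeric mode decomposes)

$ chmod 754 f1.txt

7 => user   : read + write + execute (4+2+1) => rwx
5 => group  : read + execute (4+1)           => r-x
4 => others : read only (4)                  => r--

$ ls -l f1.txt    (should show -rwxr-xr-- for f1.txt)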

Q-1) What are default file permissions in Linux in numeric format ?

Q-2) What are default directory permissions in Linux in numeric format ?

============================
Working with chown command
===========================

=> chown command is used to change file owner and file group

# making ashok as owner for f1.txt file


$ sudo chown ashok f1.txt

# making dbteam as owner group for f1.txt file


$ sudo chown :dbteam f1.txt

# Changing owner and group at a time


$ sudo chown ashok:ashok f1.txt

Note: We can use userid and group id also in chown command instead of names.

#################
04-Aug-2023
###############

===========================
Hard Links and Soft Links
===========================

=> In Linux we can create Link files (similar to shortcuts in windows)

=> We have 2 types of links in Linux

1) Hard Link

2) Soft Link
---------------------------
Syntax To create Hard Link
--------------------------

$ ln <original-file> <link-file>

$ touch m1.txt

$ ln m1.txt m10.txt

$ cat > m1.txt

$ cat m10.txt

$ ls -li

Note-1: original file and link file will have same inode number

Note-2: When we write data to original file then it will reflect in link file also.

Note-3: When we delete the original file, the link file is not affected (hard link)

---------------------------
Syntax To create soft Link
---------------------------

$ ln -s <original-file> <link-file>

$ touch f1.txt

$ ln -s f1.txt f10.txt

$ ls -li

$ cat > f1.txt

$ cat > f10.txt

$ rm f1.txt

$ ls -li

Note: When we remove original file then soft link file will become dangling. We
can't access it.

=============================
Process Management in Linux
============================

=> Process represents running application

=> In Windows machine we can see running processes using "Task Manager"

=> In Linux machine we can see running process using ps command

$ ps

$ top
=> To kill process in linux machine we will use kill command

$ kill -9 <process-id>
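
=> Example (a small sketch; the 'java' process name below is only an assumption for illustration)

# find the process and note its PID (2nd column of ps output)
$ ps -ef | grep java

# force kill that process using its PID
$ kill -9 <process-id>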

=====================
Networking Commands
=====================

# To get ip address of machine


$ ifconfig

# To check connectivity
$ ping <ip>
$ ping www.google.com
$ ping www.facebook.com

# To download files using url


$ wget <url>
$ wget https://dlcdn.apache.org/tomcat/tomcat-9/v9.0.78/bin/apache-tomcat-9.0.78.zip

# To send HTTP Request to URL


$ curl <url>

========================
Utility Commands
========================

# Display memory details


$ free

# Install htop command


$ sudo yum install htop

# memory details with dynamic view


$ htop

# Display OS information
$ cat /etc/os-release

# Display Kernel Version


$ uname -r

# Check all Users Login history In Linux


$ lastlog

# Check Specific User login history


$ last <uname>

# Get last 3 login details


$ last -3

===========================
Package Managers in Linux
===========================

=> Package Managers are used to install softwares in Linux Machines


=> We have several package managers in linux

1) yum (Yellowdog Updater, Modified)

Ex: Amazon Linux, RED HAT Linux, CentOS

2) apt (Advanced Package Tool)

Ex: Ubuntu

3) dnf (Dandified YUM)

Ex: It is replacing yum in the latest versions of CentOS

# Install Java in Amazon Linux


$ sudo yum install java

# Check Java version


$ java -version

# Install Maven in Amazon Linux


$ sudo yum install maven

# Check Maven version


$ mvn -v

# Install git client in Amazon Linux


$ sudo yum install git

# Check Git version


$ git --version

# check path of packages we installed


$ whereis java
$ whereis mvn
$ whereis git

=========================
What is a Sudoers File?
========================

=> A Sudoers file is just like any other file on a Linux Operating System.

=> It plays a vital role in managing what a “User” or “Users in a Group” can do on
the system.

======================
Why the name Sudoers?
======================

=> The word Sudo is the abbreviation of the term “Super User Do”.

=> The users with Sudo permission are called as Sudo Users.

=> Management of all these sudo users is done through a file called as Sudoers
File.
=> This file also lives in the /etc directory just like any other configuration
file.

$ sudo cat /etc/sudoers

##### User Privilege Specification #####

root ALL = (ALL:ALL) ALL

Note: ‘#’ indicates comment in the file, OS will ignore this line while executing.

Syntax : User <space> OnHost = (Runas-User:Group) <space> Commands

Example: root ALL = (ALL:ALL) ALL

Read it as — User Root can Run Any Command as Any User from Any Group on Any Host.

→ The first ALL is used to define HOSTS. We can define Hostname/Ip-Address instead
of ALL. ALL means any host.

→ The second and third ALL (inside the parentheses) form User:Group. Instead of ALL we can define a
User or a User with a group like User:Group. ALL:ALL means all users and all groups.

→ Last ALL is the Command. Instead of ALL, we can define a command or set of
commands. ALL means all commands.

#### To put it very simple, it is “who where = (as_whom) what”. ####

=> Another example to clearly understand the fields in the syntax:

Example : ashok ALL = (root) /usr/bin/cat /etc/shadow

Read this as — User “ashok” can Run the command “/usr/bin/cat /etc/shadow” as ROOT
user on all the HOSTS.

## Edit sudoers file


$ sudo visudo

## close sudoers file


$ ctrl + x

=================================
What is .bashrc file in linux ?
=================================

=> It is used to configure environment variables in Linux Machines

Ex: Java Path, Maven Path etc....

=> For every user account one .bashrc file will be available under user home
directory

=> .bashrc file is hidden file. To see this we need to execute 'ls -la'
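
=> Example: a minimal sketch of configuring JAVA_HOME in .bashrc (the install path below is an assumption, verify it with 'whereis java')

$ vi ~/.bashrc

export JAVA_HOME=/usr/lib/jvm/java     # assumed path, adjust for your machine
export PATH=$PATH:$JAVA_HOME/bin

$ source ~/.bashrc     # reload .bashrc so the change applies to the current session
$ echo $JAVA_HOME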
=========================
What is .ssh in linux ?
==========================

=> This is hidden directory which contains authorized_keys (pem file public key)

=> We can get our authorized_key using below command

$ cat .ssh/authorized_keys

1) What is Operating System


2) Windows OS
3) Linux OS
4) Linux Distributions (200+)
5) Linux Virtual Machine Setup in AWS
6) Connect with Linux VM using MobaXterm
7) Connect with Linux VM using Putty
8) Linux commands
9) Working with Directories
10) Working with Files
11) Text Editors (vi & sed)
12) Pattern Matching (grep)
13) head & tail
15) User Management
16) Group Management
17) File Permissions (chmod)
18) Ownership (chown)
19) find and locate
20) Working with zip files
21) Network commands
22) Package Managers
23) Sudoers file
24) .bashrc file
25) .ssh Authorized keys

====================
Linux Architecture
====================

1) Applications / Commands

2) Shell : It is a mediator between applications/users and the kernel. It will verify command syntax.

3) Kernel : It is mediator between Shell and Linux Hardware components

4) Hardware

=================
Shell Scripting
=================

=> Scripting means writing set of commands in a file and executing them.

=> Scripting is used to automate our daily routine tasks in the project.

Ex: delete temp files, take files backup, archive log files, monitor servers
etc..

=> Shell script file will have .sh extension

=> We can execute shell script file like below

$ sh <file-name>

=========
Script-1
========

whoami
pwd
date
ls -l
touch f2.txt
ls -l

=========
Script-2
========

echo "Welcome to Ashok IT";

echo "DevOps Course is Running"

=========
Script-3
========

echo "Enter Your Name";

read name

echo "Good Evening $name"

=========
Script-4
========

echo "Enter first number"

read a

echo "Enter second number"

read b

let sum=$(($a+$b))

echo "Sum is $sum"

========

1) Variables : To store the data

2) Command Line Arguments : To supply input for script file


3) Control Statements : To control execution of script file

3.1) if - elif
3.2) for loop
3.3) while loop

4) functions : To divide large task into smaller tasks

===========
Variables
===========

=> Variables are used to store the data

=> Variables are key-value pairs

a=10
b=20
name=ashok

=> Variables are divided into 2 types

1) Environment Variables / System Variables

$ echo $USER
$ echo $SHELL

2) User Defined Variables

a=10
b=20
name=ashok

===================
Variable Rules
===================

1) We should not use special symbols like -, @, # etc in variable names

2) Variable name shouldn't start with digit

Note: It is recommended to use UPPERCASE characters for variable names in scripting.

=======================
Command Line Arguments
=======================

=> The arguments which we will pass to script file at the time of execution

ex:
$ sh script5.sh ashok it

=> We can access command line arguments in script file like below

$# => No.of Args supplied


$0 => Script file name

$1 => First Cmd Argument

$2 => Second CMD Argument

$3 => Third cmd argument

$* => All Cmd arguments

============
Script-6
============

echo "No.of Args Supplied : $#"

echo "First Arg : $1"

echo "Second Arg: $2"

echo "All Args : $*"

##### Execution :: $ sh script6.sh ashok it

==========
Script-7
=========

echo "First Arg : $1"

echo "Second Arg: $2"

let result=$(($1*$2))

echo "Multiply Result : $result"

##### Execution :: $ sh script7.sh 2 4

====================
Control Statements
====================

=> Script will execute from top to bottom in forward direction (this is default
behaviour)

=> If we want to control script execution flow then we have to use "Control
Statements"

=======================
Conditional statements
=======================

=> Conditional statements are used to execute commands based on given condition

Syntax:

if [ condition ]
then
stmts;
else
stmts;
fi

=> As per the above syntax, if the given condition is satisfied then the if statements will execute,
otherwise the else statements will execute.

Note: If we want to check more than one condition then we use "if - elif - else - fi"

============
script - 8
============

echo "Enter your name"


read name

if [ $name == 'raju' ]; then


echo "Hello, $name"

elif [ $name == 'rani' ]; then


echo "Hi, $name"
else
echo "Bye, $name"
fi

====================
Looping Statements
====================

=> Loops are used to execute statements / commands multiple times

Ex: print "hello" message for 1000 times (we can't write echo "hello" for 1000
times).

1) for loop (Range based loop)

2) while loop (Conditional Based loop)

===================================
for loop example (1 to 10 numbers)
===================================

for((i=1; i<=10; i++))
do
  echo "$i"
done

============================
print numbers from 10 to 1
============================
for((i=10; i>=1; i--))
do
echo "$i"
done
===========
while loop
===========

=> Print numbers from 10 to 1 using while loop

i=10

while [ $i -ge 1 ]
do
echo "$i"
let i--;
done

=> Print numbers from 1 to 10 using while loop

i=1

while [ $i -le 10 ]
do
echo "$i"

let i++

done

===========
Functions
===========

=> Functions are used to perform some action / task.

=> The big task can be divided into smaller tasks using functions.

=> Functions are re-usable

-------
Syntax
-------

# write the function


function functionName ( ) {
// body
}

# call the function


functionName

===================================================================================
Q) Write shell script to take file name as input and print content of that file
===================================================================================

---------------------------
approach-1 (read command)
---------------------------

function readFileData(){
echo "enter file name"
read filename
echo "#### file reading start #####"
cat $filename
echo "##### file reading end #####"
}

#calling function
readFileData

----------------------
approach-2 (cmd arg)
----------------------

filename=$1

function readFileData(){
echo "#### file reading start #####"
cat $filename
echo "##### file reading end #####"
}

#calling function
readFileData

===========================================================================
Q) Check presence of given file, if it is not available create that file
============================================================================

# file checking

echo "Enter file name"

read filename

if [ -f "$filename" ] ; then
echo "File alredy exist"

else
echo "File not available, hence creating"
touch $filename

echo "File created...."


fi

===================================================================================
Q) Check presence of given directory, if it is not available create that directory
===================================================================================

# file checking

echo "Enter directory name"

read dirname
if [ -d "$dirname" ] ; then
echo "Directory alredy exist"

else
echo "Directory not available, hence creating"
mkdir $dirname

echo "directory created...."


fi

===========================
Shell Script Assignments
===========================

Task-1 : Take a number from user and check given number is +ve or -ve
Task-2 : Take a number from user and check given number is even or odd
Task-3 : Take a number from user and check given number is prime number or not
Task-4 : Take a number from user and print multiplication table of given number
like below

5 * 1 = 5
5 * 2 = 10
5 * 3 = 15
..
5 * 10 = 50

=========
Summary
=========

1) What is Shell
2) What is Kernel
3) What is Scripting
4) Why we need Scripting
5) Writing Shell Script files (.sh extension)
6) Executing shell script files ($ sh filename)
7) Variables
8) Taking input using shell script
9) Commandline arguments
10) Conditional Statements (if-elif-else-fi)
11) Working with Loops (for and while)

====================
CRON JOBS in Linux
====================

=> CRON is a utility in linux which is used to schedule jobs execution

Ex: Run shell script (sysbackup.sh) file for every 1 hour

=> In Realtime we will have several jobs for execution on hourly/daily/weekly/monthly/yearly basis

1) Take files backup


2) Delete temp files
3) Monitor system resources
4) Send alerts etc...
==========================
What is CROND in Linux?
==========================

=> CROND is a background process in Linux (Daemon)

=> CROND will check for cron schedules every minute. If any job is scheduled then
crond will execute that job based on given schedule.

=================
CRON JOB Syntax
=================

* * * * * <command/script-file>

Note: Read CRON expression from left to right

=> First * will represent Minutes ( 0 - 59 )

=> Second * will represent hour ( 0 - 23 )

=> Third * will represent day of month ( 1 - 31 )

=> Fourth * will represent month of year ( 1 - 12)

=> Fifth * will represent day of week ( 0 - 6 or Mon, Tues, Wed... Sun)

========================
Sample CRON Expressions
=======================

Run for every 15 mins :: */15 * * * * <script1.sh>

Run every day @5:00 PM :: 0 17 * * * <script1.sh>

Run every month first day @9:00 AM :: 0 9 1 * * <script1.sh>

#### Cron Expression Generator :: https://crontab.cronhub.io/ ####

=========================
What is crontab file ?
=========================

=> crontab file is used to configure cronjobs for execution

=> In Linux machine, for every user account one crontab file will be available.

# Open crontab file

$ crontab -e

# Display cronjobs schedule

$ crontab -l
# Remove Crontab file

$ crontab -r

==================
CRON Job practice
==================

1) Launch Linux Machine with Ubuntu AMI

2) Connect with Ubuntu Machine using MobaXterm

3) Create shell script file (Ex: task.sh)

$ vi task.sh

touch /home/ubuntu/f1.txt
touch /home/ubuntu/f2.txt

4) Give Execute permission for shell script file

$ chmod +x task.sh

5) Open crontab file and configure job schedule

$ crontab -e

6) Configure cronjob like below and close that file

*/1 * * * * /bin/bash /home/ubuntu/task.sh

7) After one minute check present working directory

$ ls -l

Note: files should be created.

========================
Apache Maven
========================

1) What is Java ?
2) Java Project Types
3) Java Project Execution Flow
4) Java Projects Build Process
5) What is Build Tool
6) Why we need tools
7) Maven Introduction
8) Maven Setup
9) Maven Dependencies
10) Maven Goals
11) Maven Repositories
12) Working with Maven

==================
Java Introduction
==================

=> Java is a computer programming language

=> Developed by Sun Microsystems in 1991

=> Using Java we can develop software projects

1) Stand-alone applications
2) Web applications

=> The application which is accessible only in one computer is called as stand-
alone application.

Ex: Calculator, Notepad++, Sublime text, VS Code IDE etc..

=> The application which can be accessed by multiple users at a time is called as
web application.

Ex: Gmail, youtube, facebook, naukri, linkedin etc...

============================
Java Project Execution Flow
============================

1) Developers will write source code (.java files)

2) Compile source code (java compiler) => It generates byte code (.class)

3) Package byte code into jar / war file for execution

=> Stand-alone app will be packaged as jar file (Java Archive)

=> Web app will be packaged as war file (Web Archive)

4) If project is packaged as jar then we can execute jar file directly

5) If project is packaged as war then we need to deploy war file in server (Ex:
tomcat)

=============================
Java Project Build Process
=============================

1) Download Required Dependencies


2) Add dependencies to build path

3) Compile source code (.java -> .class)

4) Execute Unit Test cases

5) Package Project as Jar or War

=> Earlier developers used to perform above build process manually.

=> To avoid manual build process, build tools came into picture

=> Using Build Tools we can automate project build process.

############# Maven is a build tool for java applications ###########

=================
What is Maven ?
=================

=> Maven software developed by apache org

=> Maven is free & open source software

=> Maven s/w developed using Java Language

=> Maven is called as Build Tool for Java projects

=> Using Maven we can automate Java projects build process

====================
What Maven can do ?
====================

=> Create Project structure

=> Download required libraries

=> Add libraries to build path

=> Compile project source code

=> Execute Unit Test cases (Junits)

=> Package project as jar / war file

===============================
Maven Setup in Windows Machine
===============================

1) Download and Install Java

2) Set JAVA_HOME and Java Path

3) Verify Java Installation


4) Download Maven Software

5) Set MAVEN_HOME and Maven Path

6) Verify Maven Setup

===================
Maven Terminology
==================

=> ArcheType
=> Group ID
=> Artifact ID
=> Version
=> Packaging Type
=> Maven Dependencies
=> Maven Goals
=> Maven Repositories

=> ArcheType represents type of the project

quickstart => standalone application


web-app => web application

=> Group ID represents company name

Ex: com.tcs, com.ibm, in.ashokit etc...

=> Artifact ID represents Project name

Ex: sbi-app, flipkart-app, insta-app

=> Version represents project version number

Ex: 0.0.1-SNAPSHOT, 1.0-RELEASE

=> Packaging type represents packaging format

Ex: jar , war

=> Maven Dependencies are nothing but libraries required for project development

Ex: Junit, Spring-core, Hibernate etc....

Note: We can get maven dependencies from www.mvnrepository.com

=> Maven Goals are used to perform project build process

Ex: clean, compile, test, package etc...


=> Maven Repositories are used to store maven dependencies/libraries

Ex: central repo, remote repo, local repo....

=========================================
Creating Maven Stand-alone application
=========================================

=> Open command prompt and execute below command to create maven project

$ mvn archetype:generate -DgroupId=in.ashokit -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false

=> Inside the project we can see 'src' folder and pom.xml file

src : It is used to store project source code (.java files)

pom.xml : Project Object Model (Maven configuration file)

=> From cmd, Go inside project directory and execute maven goals

$ cd <project-directory>

$ mvn clean package

=> Once build got success we can see 'target' directory inside project directory.
It contains byte code and our project packaged file (jar)
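
=> Quick sketch to run the packaged jar (assuming the default quickstart archetype generated an App class with a main method; the version in the jar file name may differ)

$ java -cp target/my-app-1.0-SNAPSHOT.jar in.ashokit.App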

====================
Maven Dependencies
====================

=> Maven dependencies means libraries required for the project

Ex: spring-core, junit, hibernate etc..

=> We can find maven dependencies in www.mvnrepository.com

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>6.0.11</version>
</dependency>

=> Add above dependency in project pom.xml file under <dependencies/> section and
execute maven goals.

$ mvn clean package

=============
Maven Goals
=============

=> Maven Goals are used to perform Project Build Process

=> We have several maven goals like below

clean : To delete target folder

compile : Compile project source code ( convert .java files to .class files)

test : To execute project unit test cases (Junits)

test = compile + test

package : To package our project as jar or war file

package = compile + test + package

install : To publish our project into maven repository

install = compile + test + package + install
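
=> Usage sketch: goals are executed from the directory containing pom.xml and can be combined, for example

$ mvn clean package     (delete target folder, then compile + test + package)
$ mvn clean install     (same as above, plus copy the artifact to the local repository)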

=========================================
Creating Maven Web application
=========================================

=> Open command prompt and execute below command to create maven web project

$ mvn archetype:generate -DgroupId=in.ashokit -DartifactId=my-web-app -DarchetypeArtifactId=maven-archetype-webapp -DarchetypeVersion=1.4 -DinteractiveMode=false

=> From cmd, Go inside project directory and execute maven goals

$ cd <project-directory>

$ mvn clean package

=> Once build got success we can see 'target' directory inside project directory.
It contains byte code and our project packaged file (war).

===================
Maven Repositories
===================

Repository : It is a place where maven dependencies will be stored

=> We have 3 types of repositories in maven

1) Local Repository

2) Central Repository

3) Remote Repository
=> Local Repository will be created in our machine (.m2 folder)

=> Central Repository will be maintained by apache organization

=> Remote Repository will be maintained by our company to store shared libraries.

Note: Only First time maven will download dependencies from central / remote
repository to local repository.

=> Maven project will take dependencies from local repository.
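
=> A small sketch to verify the local repository after a build (the spring-core path below is just the example dependency used earlier)

$ ls ~/.m2/repository/org/springframework/spring-core/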

============================
Working with maven in linux
============================

=> Launch Linux VM in AWS cloud (Amazon Linux)

=> Connect with Linux VM using MobaXterm

=> Check Maven version

$ mvn -v

=> Install maven using below command

$ sudo yum install maven

=> Check Java & Maven versions

$ java -version
$ mvn -v

====================
Maven - Summary
===================

1) What is Java
2) Stand-alone vs Web Application
3) Java Project Execution Flow
4) Java Project Build Process
5) Build Tools
6) What is Maven
7) What maven can do
8) Maven Setup in Windows
9) Maven Terminology
10) Maven Project Creation
11) What is pom.xml
12) Maven Dependencies
13) Maven Goals
14) Maven Repositories
15) Working with Maven in Linux VM
=========
Git Hub
=========

-> Git Hub is a platform which is used to store project related files/code

-> In git hub we can create source code repository


-> source code repository is used to store project source code

Note: For every project one repository will be available

-> All the developers can connect to project repository to store all the source
code (Code Integration will become easy)

-> Git Hub repository will monitor all code changes

- who modified
- when modified
- what modified
- why modified

===================
Environment Setup
===================

1) Create account in www.github.com

2) Install git client software

3) Open Git bash and configure your name and email

$ git config --global user.name "your-name"

$ git config --global user.email "your-email"

Note: Configuring name and email is just one time process.

===============================
What is Git Hub Repository ?
===============================

=> Repository is a place where we can store project source code / files

=> For every project one repository will be created

=> Every repository will have an URL like below

### Git Repo URL : https://github.com/ashokitschool/devops_14_app.git

=> Project team members will connect with git repository using its URL.

=> We can create 2 types of repositories

1) Public Repo (anybody can see & you choose who can commit)

2) Private Repo (you choose who can see & commit)

====================
Git Bash Commands
====================

git config : To configure name & email


git init : To initialize working tree

git add : To add files to staging area

git status : To check working tree status

git commit : To commit files from staging to local repo

git remote add : To add remote repo to working tree

git push : To send files from local repo to remote repo

git restore : To unstage the files & to discard changes made in files

git log : To see commit history

git rm : To remove files from local & remote

git clone : To clone remote repo to local machine

git pull : To take latest changes from remote repo to local repo
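
=> A minimal first-push sketch using the above commands (the repo URL is the sample one used in this course; replace it with your own, and if your local default branch is 'master' push that instead of 'main')

$ git init
$ git add .
$ git status
$ git commit -m "first commit"
$ git remote add origin https://github.com/ashokitschool/devops_14_app.git
$ git push origin main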

===============
Git Branches
===============

=> Branches are used to maintain multiple code bases in the single repository

=> In general people will create below branches in git repository

1) main (default)
2) develop
3) feature
4) qa
5) uat
6) release

Note: We can create any no.of branches in single repository.

=> If we have branches in the git repo then multiple teams can work in parallel without
affecting other teams' code.

Note: When we execute 'git clone' command then we will get code from default branch
which is 'main'.

=> In git bash we can switch from one branch to another branch using the 'git checkout'
command

===========================
What is Branch Merging ?
===========================

=> Merging changes from one branch to another branch is called as Branch merging.

=> We will use 'Pull Request' to perform branch merging.
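
=> For reference, the same merge can also be done from git bash (rough sketch; Pull Request from the GitHub UI is the usual way)

$ git checkout main        (switch to the target branch)
$ git pull origin main     (make sure it is up to date)
$ git merge develop        (merge develop branch changes into main)
$ git push origin main     (publish the merge to the remote repo)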

====================
Git Branches Task
====================

1) Create branches in git hub repository

2) Clone git hub repo

3) Switch from main branch to develop branch using 'git checkout'

4) Create file + Add to staging + commit + push that file

5) Create pull request to merge develop branch changes to main branch

==========================
What is .gitignore file ?
==========================

=> .gitignore is used to exclude files & folders from our commits

Ex: In a maven project, we shouldn't commit the target folder to the git repository, hence
we can give this info to git using the .gitignore file.
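
=> A minimal .gitignore sketch for a maven project (add more patterns as needed)

$ cat .gitignore
target/
*.class
*.log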

========================
What is git Conflict ?
========================

=> When two people make changes in the same file and on the same line, then we will get a
conflict.

=> When conflict occurs we have to resolve them.

=> Conflict can occur in two scenarios

1) When we execute git pull command


2) When we are merging branches

============================================
1) How to remove git local commits ?

$ git reset HEAD~1

2) How to revert git commits from remote repo?

$ git revert <commit-id>

Note: After executing git revert we have to execute git push also

3) what is git stash ?

=> It is used to save working tree changes to temporary area and make working tree
clean.

$ git stash

=> We can get stashed changes back using 'git stash apply'

4) Git merge vs Git rebase ?


=> merge & rebase commands are used for branch merging from CLI

merge : It preserve commit history

rebase : It won't preserve commit history

5) What is tag in git ?

=> tags are used to make stage level commits

# Creating tag
$ git tag <tag-name>

# Display all tags


$ git tag

# Push tags to remote repo


$ git push origin --tags

================================

1) git config
2) git init
3) git status
4) git add
5) git commit
6) git push
7) git log
8) git rm
9) git clone
10) git branch
11) git checkout
12) git pull
13) git restore
14) git reset
15) git revert
16) git merge
17) git stash
18) git stash apply

==========================

1) What is Git Hub


2) Git Hub Account Creation
3) Git Client installation
4) Git Repo creation
5) Git Architecture
6) Git Bash Commands
7) Pushing Maven project to git repo
8) Git Branches
9) Branch Merging
10) Pull Request
11) .gitignore file
12) Git Conflicts
13) Working with Bitbucket
14) Working with tags
===================================

🔥 *Git Hub Lab Task* 🔥

1) Create Maven Web Application


2) Package maven project as war file
3) Create repository in github / bitbucket (public repo)
4) Push maven project into repo using gitbash
(target folder shouldn't be commited, add this in .gitignore file)
5) Make changes in pom.xml and push changes to repo using git bash
6) Make code changes in index.jsp file and push to central repo
7) Create 'feature' branch in git repo from main branch
8) Switch to feature branch from git bash
9) Make changes in 'feature' branch pom.xml file
10) Push changes to feature branch
11) Create pull request and merge 'feature' branch changes to 'main' branch

===========================
Realtime Work Process
===========================

1) Developers will send request for DevOps team to create git repository for the
project with manager approval.

2) DevOps team will provide Repo Access for team members (RBAC)

3) DevOps team will create git repo and will share that repo URL with Dev Team.

4) Dev team will push their code into git repo and Dev Team will create required
branches also

Note: DevOps team will decide Branching strategy for the project.

5) DevOps team will clone git repo for build and deployment process

===========
WebServers
===========

=> Server is a software which is used to run Web Applications

=> Server is responsible to handle user requests & responses

=> Users can access our web application by sending request to server

=> We have several servers in the market to run our web applications

a) Tomcat
b) JBoss
c) GlassFish
d) Weblogic
e) WebSphere

==============
Tomcat Server
=============
=> Tomcat is a web server developed by Apache Organization

=> Tomcat server developed using Java language

Note: To run tomcat server, java should be installed.

=> Tomcat server is used to run Java Web Applications

=> Tomcat is free and open source software

=> Tomcat supports multiple operating systems

=> Tomcat server runs on 8080 port number (we can change it)

================================
Tomcat Server folder structure
================================

bin : It contains files to start & stop the server (Windows : bat , Linux : sh)

Windows : startup.bat & shutdown.bat

Linux : startup.sh & shutdown.sh

conf : It contains configuration files

Ex: server.xml , tomcat-users.xml

webapps : It is called as deployment folder. We will keep war files here.

lib : It contains libraries required for server (jars)

logs : Server log messages will be stored here

temp : Temporary files will be created here (We can delete them)

=============
Tomcat Setup
=============

=> Create Linux VM using Amazon Linux AMI in AWS Cloud

=> Connect to Linux VM using mobaxterm

=> Install Java software

$ sudo yum install java

=> We can download tomcat software from its official website

URL : https://tomcat.apache.org/download-90.cgi

=> Download tomcat server tar file

$ wget <url>

=> Extract tar file


$ tar -xvf <tar-file-name>

=> Go inside tomcat server bin directory and execute below command

$ sh startup.sh

=> Upload project war file in webapps folder

=> Enable tomcat server port in security group inbound rules (8080)

=> Access our web application in browser

URL : http://ec2-public-ip:8080/war-file-name

==================================
*Lab Task To Perform On Linux VM*
==================================

1) Clone Git Repository which contains Maven Web Application

2) Perform Maven Build

3) Copy war file into tomcat server webapps folder from target

4) Access application url in windows machine browser

============================
Tomcat Admin Console Access
==============================

=> By default the Host Manager is only accessible from a browser running on the
same machine as Tomcat. If you wish to modify this restriction, you'll need to edit
the Host Manager's context.xml file.

=> File Location : <tomcat-folder>/webapps/manager/META-INF/context.xml

=> In Manager context.xml file, change <Valve> section like below (allow attribute
value changed)

<Context antiResourceLocking="false" privileged="true" >
  <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow=".*" />
</Context>

============================================================================
Add tomcat users in "<tomcat-folder>/conf/tomcat-users.xml" file like below
============================================================================

<role rolename="manager-gui" />


<role rolename="manager-script" />
<role rolename="admin-gui" />

<user username="tomcat" password="tomcat" roles="manager-gui" />


<user username="admin" password="admin" roles="manager-gui,admin-gui,manager-
script"/>

-> Stop the tomcat server and start it again

================================================================================
We can change tomcat server default port in tomcat/conf/server.xml file
================================================================================

-> When we change the tomcat port number in the server.xml file, we have to enable that
port in the Security Group which is associated with our EC2 instance.
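
-> For reference, the port is defined on the <Connector> element in server.xml; a default entry looks roughly like below (attributes can differ by tomcat version), change port="8080" to the desired value

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />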

===========
Sonar Qube
===========

=> Code Quality Checking Tool

=> Using SonarQube we can perform code review to identify developers mistakes in
the code

=> SonarQube s/w developed by using Java Language

=> SonarQube supports 30+ Programming Languages to perform Code Review

=> Using SonarQube we can generate Code Review Report

=> DevOps team is responsible to generate Project Code Review Report and share it
with Development team.

Note: The Development team is responsible for fixing sonar issues.

Note: Code Review is part of project build process.

====================
Sonar Issues
====================

=> SonarQube server will identify below types of issues in the project

=> Bugs (It will harm our code execution)

=> Vulnerabilities (security hot spots)

=> Code Smells (not dangerous, but weak design in the program)

=> Duplicate Code Blocks (Repeated code)


=> Code Coverage (how many lines of code is tested in unit testing)

========================
Sonar Quality Profiles
========================

Quality Profile : Set of rules to perform code review

=> In SonarQube, for every language one Quality Profile is available with a set of
rules to perform code review.

=> When we perform code review using sonar, it will identify which language our project is
developed in, and based on that it will execute that language-specific quality profile to
perform the code review.

Java Project --------> Java Quality Profile ---> Java Rules

Python Project -------> Python Quality Profile ---> Python Rules

PHP Project ----------> PHP Quality Profile ---> PHP Rules

Note: We can create our Quality Profiles to customize code review for our projects.

====================
Sonar Quality Gate
====================

=> Quality Gate represents whether the overall project code quality is Passed or Failed

Ex - 1 : Duplicate Code > 10 % :: Failed

Ex - 2 : Bugs > 5 % :: Failed

Note: If Code Quality Gate is Failed, we should not deploy that code.

=======================
SonarQube Server Setup
=======================

Minimum RAM required for SonarQube setup is 2 GB

So take the t2.medium instance type, which provides 4 GB RAM

===============================
SonarServer Setup in Linux
===============================

-> Create EC2 instance with 4 GB RAM (t2.medium) (Amazon Linux AMI)

-> Connect with EC2 instance using MobaXterm

-> Install Java software

$ sudo yum install java-1.8.0-amazon-corretto


$ java -version
-> Execute below commands to run sonar

$ sudo su
$ cd /opt
$ wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.8.zip
$ unzip sonarqube-7.8.zip

**** Note: SonarQube server will not run with root user **************************

-> Create new user

$ useradd sonar

-> open sudoers file with below command

$ visudo

-> Configure sonar user without pwd in sudoers file

sonar ALL=(ALL) NOPASSWD: ALL

# Change ownership & permissions for sonar directory

$ chown -R sonar:sonar /opt/sonarqube-7.8/


$ chmod -R 775 /opt/sonarqube-7.8

# Switch to sonar user


$ su - sonar

-> Go to sonar bin directory then goto linux directory and run sonar server

$ cd /opt/sonarqube-7.8/bin/linux-x86-64

$ sh sonar.sh start

-> Check sonar server status

$ sh sonar.sh status

Note: Sonar Server runs on 9000 port number by default

Note: We can change default port of sonar server ( conf/sonar.properties)

Ex: sonar.web.port=6000

-> Enable Sonar port number in EC2 VM - Security Group

-> Access Sonar Server in Browser

URL : http://EC2-VM-IP:9000/

-> Default Credentials of Sonar User is admin & admin

-> After login, we can go to Security and we can enable Force Authentication.

Note: Once your work is completed, stop your EC2 instance, because t2.medium is not
free tier and a bill will be generated.

$ sh sonar.sh status

Note: If sonar is not started, then go to the log file and check the error
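
=> For example (assuming the install location used above):

$ cat /opt/sonarqube-7.8/logs/sonar.log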

$ sudo rm -rf /opt/sonar-folder/temp/

$ cd ../bin/

$ sh sonar.sh start

$ sh sonar.sh status

-> Access sonar server in browser and login into that

================================================
Integrate Sonar server with Java Maven App
=================================================

-> Clone git repository : https://github.com/ashokitschool/SB-REST-H2-DB-APP

-> Configure Sonar Properties under <properties/> tag in "pom.xml"

<properties>
<sonar.host.url>http://15.207.221.244:9000/</sonar.host.url>
<sonar.login>admin</sonar.login>
<sonar.password>admin</sonar.password>
</properties>

-> Go to project pom.xml file location and execute below goal

$ mvn sonar:sonar

-> After build success, goto sonar dashboard and verify the results

Note: Instead of username and pwd we can configure sonar token in pom.xml

==========================
Working with Sonar Token
==========================

-> Go to Sonar Server Dashboard -> Login -> Click on profile -> My Account ->
Security -> Generate Token

-> Copy the token and configure that token in pom.xml file like below

<sonar.host.url>http://15.207.221.244:9000/</sonar.host.url>
<sonar.login>8114ea8a4a594824e1ff08aa192b59befbbae96e</sonar.login>

-> Then build the project using "mvn sonar:sonar" goal

=================
Quality Profile
=================
-> For each programming language sonar qube provided one quality profile with set
of rules

-> Quality Profile means set of rules to perform code review

-> We can create our own quality profile based on project requirement

-> Create One Quality Profile

- Name : SBI_Project_QP
- Language: Java
- Parent : None

Note: We can make our quality profile as default one then it will be applicable for
all the projects which gets reviewed under this sonar server.

Note: If we have any common ruleset for all projects then we can create one quality
profile and we can use that as parent quality profile for other projects.

-> We can configure custom quality profile to a specific project using below steps

- Click on project name


- Go to administration
- Click on quality profile
- Select profile required

==============
Quality Gate
==============

-> Quality Gate represents set of metrics to identify project quality is Passed or
Failed

-> Every Project Quality Gate should be passed

-> In Sonar We have default Quality Gate

-> If required, we can create our own Quality Gate also

===========
Conclusion
===========

-> If project quality gate is failed then we should not accept that code for
deployment.

-> If project is having Sonar issues then development team is responsible to fix
those issues

-> As a DevOps engineer, we will perform Code Review and we will send Code Review
report to Development team (we will send sonar server URL to development team)

=========
Summary
=========
1) What is SonarQube

2) What is Code Review

3) Sonar Server setup in Linux Machine

4) Sonar Server Integration in Maven Project

5) Sonar Token Generation

6) Sonar Server Issue Types

7) Quality Profiles

8) Quality Gates

##################
Sonatype Nexus
##################

-> Nexus is an Open Source Software (OSS) & It is free

-> It is an Artifact Repository Server

-> It is used to store and retrieve build artifacts

-> Nexus software developed using Java Language

Note: To install Nexus s/w we need to install java first

-> Currently people are using Nexus 3.x

Java Build Artifacts: jar and war

Docker : Docker images

Node JS : NPM packages

Q) What is difference between Nexus and GitHub ?

-> Github is a SCM software which is used to store source code of the project

-> Nexus is Artifact Repository which is used to store build artifacts (jar / war)

Q) When we should store project artifact into nexus ?

-> After build and before deployment

##################
Nexus Setup
##################

-> Take t2.medium instance in AWS EC2 service


-> Java s/w is required to install Nexus

-> Connect to t2.medium instance using mobaxterm

########## Nexus S/w Installation Process in Amazon Linux OS #############

$ sudo su -

$ cd /opt

# install java 1.8v


$ sudo yum install java-1.8.0

Links to download :
https://help.sonatype.com/repomanager3/product-information/download

# latest version
$ wget https://download.sonatype.com/nexus/3/nexus-3.40.1-01-unix.tar.gz

$ tar -zxvf nexus-3.40.1-01-unix.tar.gz

$ mv /opt/nexus-3.40.1-01 /opt/nexus

#As a good security practice, Nexus is not advised to run nexus service as a root
user, so create a new user called nexus and grant sudo access to manage nexus
services as follows.

$ useradd nexus

# Give the sudo access to nexus user

# execute below command to open sudoers file


$ visudo

# Add below line in sudoers file (just below root user details we can add it)
nexus ALL=(ALL) NOPASSWD: ALL

# Change the owner and group permissions to /opt/nexus and /opt/sonatype-work directories

$ chown -R nexus:nexus /opt/nexus


$ chown -R nexus:nexus /opt/sonatype-work
$ chmod -R 775 /opt/nexus
$ chmod -R 775 /opt/sonatype-work

# Open /opt/nexus/bin/nexus.rc file, uncomment the run_as_user parameter and set it as nexus user

$ vi /opt/nexus/bin/nexus.rc
run_as_user="nexus"

# Create nexus as a service

$ ln -s /opt/nexus/bin/nexus /etc/init.d/nexus

# Switch as a nexus user and start the nexus service as follows.


$ su - nexus

# Enable the nexus services


$ sudo systemctl enable nexus

# Start the nexus service


$ sudo systemctl start nexus

# Access the Nexus server from browser.

URL : http://IPAddess:8081/

Note: Enable this 8081 port number in Security Group

# Default Username
User Name: admin

# we can copy nexus password using below command


$ sudo cat /opt/sonatype-work/nexus3/admin.password

-> We can change nexus default properties

/opt/nexus/etc/nexus-default.properties
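
Ex: The default HTTP port (8081) can be changed in that file like below (a small sketch; restart nexus after the change):

application-port=9081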

#################################
Integrate Maven App with Nexus
#################################

-> Create Repositories in Nexus to store build artifacts

-> We will create 2 types of repositories in Nexus

1) snapshot

2) release

-> If project is under development then that project build artifacts will be stored
into snapshot repository

-> If project development completed and released to production then that project
build artifacts will be stored to release repository

-> Create Repositories by selecting "Maven 2 (Hosted)"

Snapshot Repo URL : http://15.206.128.43:8081/repository/ashokit-snapshot/

Release Repo URL : http://15.206.128.43:8081/repository/ashokit-release/

Note: Based on the <version/> value available in the project pom.xml file, Maven decides
to which repository the artifacts should be uploaded (see the example below)
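
Ex: (a small illustration, assuming the repositories configured below)

<version>1.0-SNAPSHOT</version>  --> artifact will be uploaded to the snapshot repository
<version>1.0</version>           --> artifact will be uploaded to the release repository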

-> Nexus Repository details we will configure in project pom.xml file like below

<distributionManagement>

<repository>
<id>nexus</id>
<name>Ashok IT Releases Nexus Repo</name>
<url>http://15.206.128.43:8081/repository/ashokit-release/</url>
</repository>

<snapshotRepository>
<id>nexus</id>
<name>Ashok IT Snapshots Nexus Repo</name>
<url>http://15.206.128.43:8081/repository/ashokit-snapshot/</url>
</snapshotRepository>

</distributionManagement>

-> Nexus Server Credentials will be configured in Maven "settings.xml" file

-> Maven Location : C:\apache-maven-3.8.5\conf

-> In settings.xml file, under <servers> tag add below <server> tag

<server>
<id>nexus</id>
<username>admin</username>
<password>admin</password>
</server>

-> Once these details are configured then we can run below maven goal to upload
build artifacts to Nexus Server

$ mvn clean deploy

Note: When we execute maven deploy goal, internally it will execute 'compile + test
+ package + install + deploy' goals.

##################
Remote Repository
##################

-> Remote repository used for shared libraries (common jars required for multiple
projects)

-> If we want to use few jar files in multiple projects in the company then we will
use Remote Repository to store those jars (libraries).

-> Remote repository is specific to our company projects

-> Create remote repo in nexus and upload a jar file

-> Go to Repositories
-> Create New Repository
-> Choose Maven (Hosted) Repository
-> Give a name for Repository (Ex: ashokit-remote-repository) & Complete the
process

Note: With above steps Remote Repository got created.


Remote Repo URL : http://13.126.20.221:8081/repository/ashokit-remote-repository/

-> Go to BrowseSection
-> Select Remote Repository (By default it is empty)
-> Click on Upload Component
-> Upload Jar file and give groupId, artifactId and Version

groupId : in.ashokit
artifactId : pwd-utils
version : 1.0

-> Take dependency details of uploaded jar file and add in project pom.xml as a
dependency like below

<dependency>
<groupId>in.ashokit</groupId>
<artifactId>pwd-utils</artifactId>
<version>1.0</version>
</dependency>

-> We need to add Remote Repository Details in pom.xml above <dependencies/> tag

<repositories>
	<repository>
		<id>nexus</id>
		<url>http://15.206.128.43:8081/repository/ashokit-remote-repo/</url>
	</repository>
</repositories>

-> After adding the remote repository details in pom.xml then execute maven package
goal and see dependency is downloading from nexus repo or not.

$ mvn clean package

=========================================
How to resolve HTTP Mirror Block Issue ?
=========================================

=> Make below change in Maven/conf/settings.xml

<mirror>
	<id>maven-default-http-blocker</id>
	<mirrorOf>dummy</mirrorOf>
	<name>Pseudo repository to mirror external repositories initially using HTTP.</name>
	<url>http://0.0.0.0/</url>
	<blocked>false</blocked>
</mirror>

===============
Nexus Summary
===============

1) What is Nexus and Why we need to go for Nexus ?


2) How to setup Nexus Server in Linux

3) How to create Repositories in Nexus (snapshot & release)

4) How to upload build artifacts into Nexus Repositories

5) What are Shared Libraries ?

6) How to create Remote Repository ?

7) How to upload Shared Libraries into remote repository

8) How to configure remote repository in pom.xml file

=======================
CI CD Server (Jenkins)
=======================

CI : Continuous Integration

CD : Continuous Delivery

=> CI CD is an approach to automate the project Build & Deployment process

===========================
What is Build & Deployment
===========================

=> Take latest code from Git Hub Repo

=> Build Source code using Maven

=> Perform Code Review Using Sonar

=> Upload Project Artifact into Nexus

=> Deploy code into server.

=> In a single day, code will be committed to the git hub repository multiple times by
the Development team, so we have to perform the build and deployment process multiple times.

Note: If we do the build and deployment process manually then it is time consuming
and error prone.

=> To overcome above problems, we need to automate Project Build and Deployment
process.

=> To automate project build and deployment process we will use JENKINS.

===================
What is Jenkins ?
===================

=> Open source Software & free

=> Developed by using Java Language

=> It is called as CI CD Server

=> It is used to automate project Build and Deployment process.

=> Using Jenkins we can deploy any type of project (ex: java, python, dot net,
react, angular).

================
Jenkins Setup
================

1) Create an EC2 instance with Ubuntu AMI (t2.micro instance)

2) Connect to your EC2 instance using MobaXterm

3) Install Java In Ubuntu VM with below commands

$ sudo apt-get update

$ sudo apt-get install default-jre

4) Install Jenkins in Ubuntu VM with below commands

$ curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null

$ echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null

$ sudo apt-get update

$ sudo apt-get install jenkins

$ sudo systemctl status jenkins

5) Access Jenkins Server in browser using below URL

URL : http://ec2-public-ip:8080/

Note: Enable 8080 port in security group


6) Get the initial administrative password

$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

pwd : 5fe6ddcc9db244cab6aca5ccf2d6a83a

-> Provide pwd which we have copied to unlock jenkins

-> Select "Install Suggested Plugins" card (it will install those plugins)

-> Create Admin account

===============================
Creating First Job in Jenkins
===============================

1) Goto Jenkins Dashboard

2) Click on New Item

-> Enter Item Name (Job Name)


-> Select Free Style Project & Click OK
-> Enter some description
-> Click on 'Build' tab
-> Click on 'Add Build Step' and select 'Execute Shell'

3) Enter below shellscript

echo "Hello Guys,"


touch ashokit.txt
echo "Hello Guys, Welcome to Jenkins Classes" >> ashokit.txt
echo "Done..!!"

4) Apply and Save

Note: With above steps we have created JENKINS Job

5) Click on 'Build Now' to start Job execution

6) Click on 'Build Number' and then click on 'Console Output' to see job execution
details.

=> Jenkins Home Directory in EC2 : /var/lib/jenkins/workspace/

$ cd /var/lib/jenkins/workspace/

7) Go to Jenkins home directory and check for the job name --> check the file
created inside the job

=========================================================
Jenkins Job with GIT Hub Repo + Maven - Integration
=========================================================

Pre-Requisites : Java, Maven and Git client

# Git installation In EC2 VM


$ sudo apt install git -y

==================================
Maven Installation In Jenkins:
==================================

Jenkins Dashboard -> Manage Jenkins --> Global Tools Configuration -> Add maven

==================================
Sample Git Repo URLs For Practice
==================================

Git Hub Repo URL : https://github.com/ashokitschool/maven-web-app.git

============================================================
JOB-2 :: Steps To Create Jenkins Job with Git Repo + Maven
============================================================

1) Connect to EC2 instance in which jenkins server got installed

2) Start Jenkins Server

3) Access Jenkins Server Dashboard and Login with your jenkins credentials

4) Create Jenkins Job with Git Hub Repo

-> New Item


-> Enter Item Name (Job Name)
-> Select 'Free Style Project' & Click OK
-> Enter some description
-> Go to "Source Code Management" Tab and Select "Git"
-> Enter Project "Git Repo URL"
-> Go to "Build tab"
-> Click on Add Build Step and Select 'Invoke Top Level Maven Targets'
-> Select Maven and enter goals 'clean package'
-> Click on Apply and Save

Note: With above steps we have created JENKINS Job

5) Click on 'Build Now' to start Job execution

6) Click on 'Build Number' and then click on 'Console Output' to see job execution
details.

=> Jenkins Home Directory in EC2 : /var/lib/jenkins/workspace/

=> Go to jenkins workspace and then go to job folder then goto target folder there
we see war file created.

-------------------------------------------------------------------------------------------------

=> Access below URL in browser to stop Jenkins Server

URL : http://EC2-VM-IP:8080/exit/

(Click on Retry using Post button)


=============================================================================
Job-3 :: Steps To Create Jenkins Job with Git Repo + Maven + Tomcat Server
============================================================================

1) Go to Jenkins Dashboard -> Manage Jenkins --> Manage Plugins -> Goto Available
Tab -> Search For
"Deploy To Container" Plugin -> Install without restart.

2) Create Jenkins Job

-> New Item


-> Enter Item Name (Job Name)
-> Select Free Style Project & Click OK
-> Enter some description
-> Go to "Source Code Management" Tab and Select "Git"
-> Enter Project "Git Repo URL"
-> Go to "Build tab"
-> Click on Add Build Step and Select 'Invoke Top Level Maven Targets'
-> Select Maven and enter goals 'clean package'
-> Click on 'Post Build Action' and Select 'Deploy war/ear to
container' option
-> Give path of war file (You can give like this also : **/*.war )
-> Enter Context Path (give project name Ex: java_web_app)
-> Click on 'Add Container' and select Tomcat version 9.x
-> Add Tomcat server credentials (give the username & pwd which is
having manager-script role)
-> Enter Tomact Server URL (http://ec2-vm-ip:tomcat-server-port)
-> Click on Apply and Save

4) Run the job now using 'Build Now' option and see the 'Console Output' of the job

5) Once Job Executed successfully, go to tomcat server dashboard and see


application should be displayed.

6) Click on the application name (it should display our application)

===================================================
How to Create Jenkins Jobs with Build Parameters
===================================================

=> Build Parameters are used to supply dynamic inputs to run the Job. Using Build
Parameters we can avoid hard coding.

Ex : We can pass branch name as build parameter.

-> Create New Item


-> Enter Item Name & Select Free Style Project
-> Select "This Project is parameterized" in General Section
-> Select Choice Parameter
-> Name : BranchName
-> Choices : Enter every branch name in nextline
-> Branches to Build : */${BranchName}
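
=> Example: the selected value can then be used inside the job, e.g. in an 'Execute Shell' build step (a small illustration):

echo "Building branch : ${BranchName}"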

====================================
User & Roles Management In Jenkins
====================================
=> In Our Project multiple teams will be available

a) Development team (10)


b) Testing team (5)
c) DevOps Team (3)

=> For every Team member Jenkins login access will be provided.

Note: Every team members will have their own user account to login into jenkins.

=> Operations team members are responsible to create / edit / delete / run jenkins
jobs

=> Dev and Testing team members are only responsible to run the jenkins job.

================================================
How to create users and manage user permissions
================================================

-> Go to Jenkins Dashboard

-> Manage Jenkins -> Manage Users

-> Create Users

-> Go to Configure Global Security

-> Manage Roles & Assign Roles

Note: By default admin role will be available and we can create custom role based
on requirement

-> In Role we can configure what that Role assigned user can do in jenkins

-> In Assign Roles we can add users to particular role

=====================================
Working with User Groups in Jenkins
=====================================

## Step-1 : Install Required Plugins

=> Install Role-based Authorization Strategy Plugin

=> This plugin allows you to define roles and assign them to users or groups.

## Step-2 : Configure Security

=> Go to "Manage Jenkins" > "Configure Security."

=> Select Authorization as "Role-Based Strategy"

=> Click "Save" to apply the changes

## Step-3 : Create User Roles

=> Go to "Manage Jenkins" > "Manage and Assign Roles."


=> Click "Manage Roles" and define new roles based on your requirements (e.g.,
admin, developer, tester).

=> Click "Add" to create a new role, and specify the permissions for that role.

## Step-4 : Assign Users to Roles

=> After creating roles, go to "Manage Jenkins" > "Manage Users & Roles."

=> Select a user and click "Assign Roles" to add them to one or more roles.

## Step-5 : Test the user login functionality

========================================
Jenkins - Master & Slave Architecture
========================================

=> If we use single machine to run Jenkins server then burden will be increased if
we run multiple jobs at a time.

=> If burden increased then system can crash.

=> To execute multiple jobs in parallel we will use Master & Slave Configuration

=> Master & Slave configuration is used to reduce burden on Jenkins Server by
distributing tasks/load.

================
Jenkins Master
===============

=> The machine which contains Jenkins Server is called as Jenkins Master machine.

=> It is used to create the jobs

=> It is used to schedule the jobs

=> It is responsible to distribute Jobs execution to slave machines

Note: We can run jobs on the Jenkins Master machine directly, but it is not recommended.

==============
Jenkins Slave
==============

=> The machine which is connected with 'Jenkins Master' machine is called as
'Jenkins-Slave' machine.

=> Slave Machine will receive tasks from the 'Master Machine' for job execution.

===============================
Step-1 : Create Jenkins Master
==============================

1) Create EC2 instance


2) Connect EC2 using Mobaxterm
3) Install Git client
4) Install Java Software
5) Install jenkins server
6) Add Git, JDK and Maven Plugins
7) Enable Jenkins Port Number in Security Group
8) Access Jenkins Server in Browser and Login

===============================
Step-2 : Create Jenkins Slave
===============================

1) Create EC2 instance (Ubuntu with t2.micro)


2) Connect to EC2 using Mobaxterm
3) Install Git client
4) Install Java Software
5) Create one directory in /home/ubuntu (ex: slavenode)

$ mkdir slavenode

=====================================================
Step-3: Configure Slave Node in Jenkins Master Node
=====================================================

1) Go to Jenkins Dashboard
2) Go to Manage Jenkins
3) Go to Manage Nodes & Clouds
4) Click on 'New Node' -> Enter Node Name -> Select Permanent Agent
5) Enter Remote Root Directory ( /home/ubuntu/slavenode )
6) Enter Label name as Slave-1
7) Select Launch Method as 'Launch Agent Via SSH'
8) Give Host as 'Slave VM DNS URL'
9) Add Credentials ( Select Kind as : SSH Username with private key )
10) Enter Username as : ubuntu
11) Select Private Key as 'Enter Directly' and add the private key

Note: Open the pem file, copy the content and add it

12) Select Host Key Strategy as 'Manually Trusted Key Verification Strategy'

13) Click on Apply and Save (We can see configured slave)

******* With above steps Master and Slave Configuration Completed**************

-> Go to Jenkins Server and Create Jenkins Job

Note: Under the General Section of the Job creation process, select "Restrict Where This
Project Can Run", enter the Slave Node Label name and finish the job creation.

-> Execute the Job using 'Build Now' option

Note: Job will be executed on the Slave Node (Go to the Job Console Output and verify
execution details)

=> Build & Deployment
=> Challenges in Manual Build & Deployment
=> Automated Build & Deployment
=> CI & CD
=> Jenkins Introduction
=> Jenkins Setup in Linux
=> Jenkins Job Creation
=> Jenkins Build Parameters
=> User & Role Management in Jenkins (RBAC)
=> Git + Maven + Tomcat + Jenkins
=> Master & Slave Configuration

=================
Jenkins Pipeline
=================

=> In Jenkins we can create JOBS in 2 ways

a) Free Style Project (GUI)


b) Pipeline (Using in Realtime)

=> Pipeline means set of steps to automate build & deployment process.

=> Using Pipelines we can handle complex build & deployment tasks.

=> We can create pipelines in 2 ways

a) Scripted Pipeline
b) Declarative Pipeline

==============================
Jenkins Declarative Pipeline
==============================

pipeline {
agent any

stages {
stage('Git Clone') {
steps {
echo 'Cloning Repository....'
}
}
stage('Maven Build'){
steps{
echo 'Maven Build....'
}
}
stage('Tomcat Deploy'){
steps{
echo "War deployed to tomcat"
}
}
}
}
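
=> Note: 'agent any' means the pipeline can run on any available node. To run it on a specific slave machine we can use its label instead (a small illustration, using the 'Slave-1' label created earlier):

agent { label 'Slave-1' }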

====================================
Declarative Pipeline (Git + Maven)
====================================
pipeline {
agent any

tools{
maven "Maven-3.9.4"
}

stages {
stage('Clone') {
steps {
git 'https://github.com/ashokitschool/maven-web-app.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
}
}

==================
Scripted Pipeline
==================

node {
stage('Git Clone') {
        git credentialsId: 'GIT-Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
}

stage('Maven Build'){
def mavenHome = tool name:"Maven-3.9.4", type: "maven";
def mavenPath = "${mavenHome}/bin/mvn";
sh "${mavenPath} clean package"
}
}

##########################################
DevOps Project Setup with CI CD Pipeline
##########################################

1) Git Hub
2) Maven
3) SonarQube
4) Nexus Repo
5) Tomcat
6) Jenkins

##################
Pipeline creation
##################

=> Create CI CD Pipeline with below stages


========================
1) Create github stage
========================

Git Repo : https://github.com/ashokitschool/maven-web-app.git

stage('clone repo') {
    git credentialsId: 'GIT-Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
}

=========================================================
2) Create Maven Build Stage (Add maven in global tools)
=========================================================

stage ('Maven Build'){
    def mavenHome = tool name: "Maven-3.9.4", type: "maven"
    def mavenCMD = "${mavenHome}/bin/mvn"
    sh "${mavenCMD} clean package"
}

============================
3) Create SonarQube stage
===========================

-> Start Sonar Server

-> Login into Sonar Server & Generate Sonar Token

Ex: 4b10dc9d10194f31f15d0233bdf8acc619c5ec96

-> Add Sonar Token in 'Jenkins Credentials' as Secret Text

-> Manager Jenkins


-> Credentials
-> Add Credentials
-> Select Secret text
-> Enter Sonar Token as secret text

-> Manage Jenkins -> Plugins -> Available -> Sonar Qube Scanner Plugin -> Install
it

-> Manage Jenkins -> Configure System -> Sonar Qube Servers -> Add Sonar Qube
Server

- Name : Sonar-Server-7.8
- Server URL : http://52.66.247.11:9000/ (Give your sonar
server url here)
- Add Sonar Server Token

-> Once above steps are completed, then add below stage in the pipeline

stage('SonarQube analysis') {
withSonarQubeEnv('Sonar-Server-7.8') {
def mavenHome = tool name: "Maven-3.8.6", type: "maven"
def mavenCMD = "${mavenHome}/bin/mvn"
sh "${mavenCMD} sonar:sonar"
}
}

=======================
4) Create Nexus Stage
======================
-> Run nexus VM and create nexus repository

-> Create Nexus Repository

-> Install Nexus Repository Plugin using Manage Plugins ( Plugin Name : Nexus
Artifact Uploader)

-> Generate Nexus Pipeline Syntax

stage ('Nexus Upload'){
    nexusArtifactUploader artifacts: [[artifactId: '01-Maven-Web-App', classifier: '', file: 'target/01-maven-web-app.war', type: 'war']], credentialsId: 'Nexus-Credentials', groupId: 'in.ashokit', nexusUrl: '13.127.185.241:8081', nexusVersion: 'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repository', version: '1.0-SNAPSHOT'
}

=========================
5) Create Deploy Stage
=========================

-> Start Tomcat Server

-> Install SSH Agent plugin using Manage Plugins

-> Generate SSH Agent and configure stage

-> Add Tomcat Server as 'Uname with Secret Text'

stage ('Deploy'){
    sshagent(['Tomcat-Server-Agent']) {
        sh 'scp -o StrictHostKeyChecking=no target/01-maven-web-app.war ec2-user@43.204.115.248:/home/ec2-user/apache-tomcat-9.0.80/webapps'
    }
}

################
Final Pipeline
################

node {

stage('Git Clone'){
        git credentialsId: 'GIT-CREDENTIALS', url: 'https://github.com/ashokitschool/maven-web-app.git'
}

stage('Maven Build'){
def mavenHome = tool name: "Maven-3.9.4", type: "maven"
def mavenPath = "${mavenHome}/bin/mvn"
sh "${mavenPath} clean package"
}

stage('Code Review') {
withSonarQubeEnv('Sonar-Server-7.8') {
def mavenHome = tool name: "Maven-3.9.4", type: "maven"
def mavenCMD = "${mavenHome}/bin/mvn"
sh "${mavenCMD} sonar:sonar"
}
}

    stage('Quality Gate') {
        timeout(time: 1, unit: 'HOURS') {
            def qg = waitForQualityGate()
            if (qg.status != 'OK') {
                error "Quality Gate failed: ${qg.status}"
            }
        }
    }

    stage('Nexus Upload'){
        nexusArtifactUploader artifacts: [[artifactId: '01-Maven-Web-App', classifier: '', file: 'target/01-maven-web-app.war', type: 'war']], credentialsId: 'Nexus-Credentials', groupId: 'in.ashokit', nexusUrl: '3.108.217.159:8081', nexusVersion: 'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repo', version: '1.0-SNAPSHOT'
    }

    stage ('Deploy'){
        sshagent(['Tomcat-Server-Agent']) {
            sh 'scp -o StrictHostKeyChecking=no target/01-maven-web-app.war ec2-user@43.204.115.248:/home/ec2-user/apache-tomcat-9.0.80/webapps'
        }
    }
}

===========================
Pipeline Conditions
===========================

pipeline {
agent any

stages {
stage('Build') {
steps {
// Your build steps here
}
}

stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('YourSonarQubeServer') {
sh 'mvn clean package sonar:sonar'
}
}
}

        stage('Quality Gate') {
            steps {
                timeout(time: 1, unit: 'HOURS') {
                    script {
                        def qg = waitForQualityGate()
                        if (qg.status != 'OK') {
                            error "Quality Gate failed: ${qg.status}"
                        }
                    }
                }
            }
        }

stage('Deploy') {
when {
expression { currentBuild.resultIsBetterOrEqualTo('SUCCESS') }
}
steps {
// Your deployment steps here
}
}
}
}

################################
Email Notifications In Jenkins
################################

-> We can configure Email notifications in Jenkins

-> With this option we can send email notification to team members after jenkins
job execution completed

-> We need to configure SMTP properties to send emails

-> Go To Manage Jenkins


-> Go To System
-> Add Email Extension Server
-> We will add company provided SMTP server details to send emails.

Note: For practise we can use GMAIL SMTP Properties

SMTP Server : smtp.gmail.com


SMTP Port : 465

Note: Under Advanced section add your gmail account credential for authentication
purpose.

DL : sbiteam@tcs.com

======================================
Scripted Pipeline Email Notification
=======================================
node {
stage('Demo'){
echo 'Hello world'
}

// send to email
emailext (
subject: "STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
body: """<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>""",
to: 'ashokitschool@gmail.com',
attachLog: true
)
}

==========================================
Declarative Pipeline Email Notification
==========================================

pipeline {
agent any

tools{
maven "Maven-3.9.4"
}

stages {
stage('Clone') {
steps {
git 'https://github.com/ashokitschool/maven-web-app.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
}

    post {
        failure {
            emailext(
                subject: "Build Failed: ${currentBuild.fullDisplayName}",
                body: "The build ${currentBuild.fullDisplayName} failed. Please check the console output for more details.",
                to: 'ashokitschool@gmail.com',
                from: 'ashokit.classes@gmail.com',
                attachLog: true
            )
        }
        success {
            emailext(
                subject: "Build Successful: ${currentBuild.fullDisplayName}",
                body: "The build ${currentBuild.fullDisplayName} was successful.",
                to: 'ashokitschool@gmail.com',
                from: 'ashokit.classes@gmail.com',
                attachLog: true
            )
        }
    }
}

====================================
Jenkins Job with Parallel Stages
====================================

pipeline {
agent any

stages {
stage('Clone') {
steps {
echo 'Cloning...'
}
}

stage('Build') {
steps {
echo 'Building......'
}
}

stage('Parallel Stage') {
parallel {
stage('Test') {
steps {
echo 'Testing......'
}
}
stage('Code Review') {
steps {
echo 'Running tasks in parallel - code review'
}
}
stage('Nexus Upload') {
steps {
echo 'Running tasks in parallel - nexus upload'
}
}
}
}

stage('Deploy') {
steps {
echo 'Deploying...'
}
}
}
}

===========================================
Working with Shared Libraries in Jenkins
===========================================
=> Create git repo and push shared libraries related groovy files

Git Repo : https://github.com/ashokitschool/my_shared_libraries.git
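
=> For reference, a shared library step is a groovy file kept under the vars/ folder of the repo with a call() method. A minimal sketch of what such a file might look like (the actual content of the above repo may differ):

// vars/welcome.groovy (hypothetical example)
def call() {
    echo 'Welcome to Ashok IT Jenkins Shared Library'
}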

=> Configure Shared Libraries in Jenkins (Manage Jenkins -> System -> Global
Pipeline Libraries)

=> Create Pipeline and use Shared Libraries

@Library('ashokit-shared-lib') _

pipeline {
agent any
stages{
stage('one'){
steps{
welcome( )
}
}
stage('two'){
steps{
script{
calculator.add(10,10)
calculator.add(20,20)
}
}
}
}
}

=======================================
Jenkins Pipeline with Shared Library
=======================================

@Library('ashokit_shared_lib') _

pipeline{
agent any

tools{
maven "Maven-3.9.4"
}

stages{

stage('Git Clone'){
steps{
gitClone('https://github.com/ashokitschool/maven-web-app.git')
}
}

stage('Build'){
steps{
mavenBuild()
}
}
stage('Code Review'){
steps{
sonarQube()
}
}
}
}

=====================
What is Jenkinsfile
=====================

=> Jenkinsfile is a text file, kept in the project git repository, which contains the project pipeline code
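
=> A minimal Jenkinsfile sketch (kept in the root of the project git repo; the Jenkins job is then created with 'Pipeline script from SCM' so that Jenkins reads this file from the repository):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
    }
}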

======================================
Jenkins Pipeline with try-catch blocks
======================================

pipeline {
agent any

stages {
stage('Build') {
steps {
script {
try {
// Code that might throw an exception
sh 'make -B'
} catch (Exception e) {
// Handle the exception
echo "Build failed: ${e.message}"
currentBuild.result = 'FAILURE'
}
}
}
}

stage('Test') {
steps {
script {
try {
// Code that might throw an exception
sh 'make test'
} catch (Exception e) {
// Handle the exception
echo "Tests failed: ${e.message}"
currentBuild.result = 'FAILURE'
}
}
}
}
}

post {
always {
echo "Always i will execute"
}
success {
echo "Pipeline succeeded!"
}
failure {
echo "Pipeline failed!"
}
}
}

===========================================
What is Multi Branch Pipeline in Jenkins ?
===========================================

=> In realtime, we will have multiple branches in git repo like below

a) main
b) develop
c) feature
d) release

=> Creating a separate jenkins pipeline for every branch is difficult.

=> We can create one pipeline and we can build the code from multiple branches at a
time using "Multi Branch Pipeline" concept.

=> When we create Multi Branch Pipeline, it will scan all the branches in given git
repo and it will execute pipeline for all branches.

Note: When we run the multi branch pipeline for the second time, it will verify in which
branches code changes happened and it will execute the pipeline only for those branches.

================
Jenkins Summary
================

1) Build & Deployment process


2) Challenges in Manual build & deployment
3) Automated Build & Deployment
4) What is CI CD
5) What is Jenkins
6) Jenkins Setup in Linux VM
7) Freestyle Job creation (GUI)
8) Job Parameters
9) Jenkins User & Roles Management
10) Master & Slave Configuration
11) Jenkins Pipeline
12) Declarative Vs Scripted Pipeline
13) Jenkins Multi Stage Pipeline
14) Jenkins Plugins
15) Jenkins Global Tools
16) Post Build Actions
17) Email Notifications
18) Jenkins Shared Libraries
19) Parallel Stages
20) try-catch blocks in pipeline
21) Multi Branch Pipeline
22) What is Jenkinsfile
23) How to take Jenkins Backup (restore is pending)
AWS Whatsapp Group : https://chat.whatsapp.com/CL52YW830eTBob1nqVJv9x

===========
AWS Cloud
===========

Pre-Requisites : Linux Basics

================
Course Content
================

=> What is Infrastructure


=> On-Premesis Infrastructure
=> Challenges with On-Prem infra
=> What is Cloud Computing
=> Advantages with Cloud
=> Cloud Service Models (IaaS vs PaaS vs SaaS)

=> AWS Introduction

=> AWS Account Setup

=> AWS Services Overview

=> EC2 (Elastic Compute Cloud)

=> EBS (Elastic Block Store) (Volumes & Snapshots)

=> LBR (Load Balancer)

=> ASG (Auto Scaling)

=> S3 (Simple Storage Service) - Unlimited storage

=> RDS (Relational Databases)

=> Dynamo DB (No-SQL Database)

=> EFS (Elastic File System)

=> IAM (Identity & Access Management)

=> AWS CLI

=> VPC (Virtual Private Cloud)

=> Cloud Formation (IAAC)

=> Cloud Front (Content Delivery Network)

=> Route 53 (DNS Mapping)

=> AWS Lambdas (Serverless Computing)

=> Cloud Watch


=> SNS (Notification)

=> Terraform (Infrastructure provisioning)

=> Ansible (Configuration management)

=============
Course Info
=============

Course Name : AWS Cloud + Terraform + Ansible

Duration : 45 Days

Timings : 8:00 PM - 10:00 PM (IST)

Notes: Softcopy material (topic wise)

Videos : Backup Videos will be provided (1 year)

Course Fee : 10,000 INR

=====================
Cloud Service Models
=====================

a) IAAS (Infrastructure as a service)

b) PAAS (Platform as a service)

c) SAAS (Software as a service)

=============
What is AWS
============

-> Since 2006, AWS has been providing IT resources over the internet

-> Pay as you go model

-> Used in 190+ countries

-> Has a global infrastructure

=================
Regions & AZ's
=================

Region -> Geographical location (32 regions)

Availability Zone -> Data Center (server room) -> 102 Az's

==================
AWS Account Setup
==================

-> We can create free account in aws cloud (free tier)

-> 1 year free access (limited services access)

-> Credit Card required (2 rs for account verification)

Note: No Auto Debits

======================
AWS Services Overview
======================

-> AWS providing 200+ services

1) IAM : Identity & Access management (RBAC)

2) EC2 : Elastic Compute Cloud (Virtual Machines)

3) S3 : Simple Storage Service (Unlimited Storage)

4) RDS : Relational Database Service (Ex: Oracle, MySQL)

5) Route 53 : Domain Name System (DNS) (ex: www.ashokit.in)

6) Elastic Beanstalk : PaaS (To run web applications)

7) VPC : Virtual Private Cloud (isolated network for the resources)

8) Cloud Watch : Monitoring resources in AWS

9) SNS : Simple Notification Service

10) CloudFormation : Infrastructure as a code

11) EFS : Elastic File System (share files with multiple EC2 instances)

12) ECS : Elastic Container Service (to run docker containers)

13) ECR : Elastic Container Registry (to store our images)

14) EKS : Elastic K8S Service (K8s control plane)

15) Cloud Front : To configure Edge locations

16) AWS Lambdas : To implement serverless computing

17) AWS CLI : Command line interface

18) Dynamo DB : No-SQL database

==========================================
How to create infrastructure in aws cloud
==========================================
1) Web console (gui)

2) AWS CLI

3) Cloud Formation (YML / JSON)

4) Terraform (supports multiple clouds)


#################################
RDS (Relational Database Service)
#################################

-> Every application will contain 3 layers

1) Front End (User interface)

2) Back End (Business Logic)

3) Database

-> End users will communicate with our application using frontend (user interface).

-> When end-user perform some operation in the front-end then it will send request
to backend. Backend contains business logic of the application.

-> Backend logic will communicate with Database to perform DB operations.

(insert / update / retrieve / delete)

###########################################
Challenges with Database Setup on our own
###########################################

1) Setup Machine to install Database Server


2) Purchase Database Server License
3) Install Database Server in our Machine
4) Setup Network for our machine
5) Setup power for machine
6) Setup a server room to keep our machines
7) Setup AC for room for cool temperature
8) Setup Security for room
9) Setup Database backups
10) Disaster Recovery

-> If we use AWS cloud, then AWS will take care of all the above works which are
required to setup Database for our application.

-> In AWS, we have RDS service to create and setup database required for our
applications.

-> We just need to pay the money and use the Database using AWS RDS service. DB setup
and maintenance will be taken care of by AWS.

##### RDS is fully managed by AWS #####

** Using RDS we can Easily set up, operate, and scale a relational database in the
cloud ***
######################################
Steps to create MYSQL DB using AWS RDS
######################################

1) Login into AWS management console

2) Goto RDS Service

3) Click on 'Create Database'

Choose a database creation method : Easy Create


Engine Option : MySQL
Template : Free Tier
DB instance Identifier : ihis (Note : you can give anything)
Username : admin
Password : Choose a password

4) Click on 'Create Database' (It will take few minutes of time to create)

Note: Note down the username and password of the database

5) Once Database created, it will provide database Endpoint URL to access

6) Change Database to Public Access

7) Enable All Traffic in Security Group attached to Database.

###################
MySQL DB Properties
###################

Endpoint : database-1.cbair9bd9y7d.ap-south-1.rds.amazonaws.com
Uname : admin
Pwd : ashokitdevdb
Port : 3306 (it is default port for mysql db)

Note: We need to provide DB properties to project development team / testing team

#############################
Steps to test MYSQL DB Setup
#############################

1) Download and install Visual Studio using below link

Link : https://aka.ms/vs/17/release/vc_redist.x64.exe

2) Download and install MySQL Workbench using below link

Link : https://dev.mysql.com/downloads/workbench/

3) Create Database Connection in MySQL workbench using Database properties

4) Once we are able to connect with Database then we can execute below queries in
Workbench

###############
MySQL Queries
###############

show databases;

create database sbidb;

use sbidb;

show tables;

create table emp_dtls(emp_id integer(10), emp_name varchar(20), emp_salary integer(10));

insert into emp_dtls values(101, 'Raju', 5000);
insert into emp_dtls values(102, 'Rani', 6000);
insert into emp_dtls values(103, 'Ashok', 7000);

select * from emp_dtls;

####################################
Working with MySQL client in Ubuntu
####################################

$ sudo apt-get update

$ sudo apt-get install mysql-client

$ mysql -h <endpoint> -u <username> -p (click enter and give password)

##########################################
Working with MySQL client in AMAZON Linux
##########################################

$ sudo yum update

$ sudo yum install mysql

$ mysql -h <endpoint-url> -u <username> -p (click enter and give password)

-h : It represents host (DB edpoint URL)

-u : It represents DB username

-p : It represents DB password

Ex: mysql -h database-1.cbair9bd9y7d.ap-south-1.rds.amazonaws.com -u admin -p

=> We can execute below queries to see the data which we have inserted previously
using Workbench.

show databases;

use sbidb;
show tables;

select * from emp_dtls;

#################################
Spring Boot App with AWS RDS DB
#################################

URL : https://youtu.be/GSu1g9jvFhY

=================================
AWS S3 (Simple Storage Service)
=================================

-> S3 is a storage service in AWS cloud

-> S3 supports unlimited storage

-> Using S3 we can store any amount of data from anywhere

-> S3 supports object based storage (files)

-> We can upload & download objects (files) at any point of time using S3

-> In S3, we need to create buckets to store data

Note: Every bucket should have unique name

-> When we create a bucket end-point url will be generated to access bucket.

-> When we upload object into bucket, every object will get its own end-point url.

Note: By default, buckets and objects are private (we can make them public).

-> We can create multiple buckets in s3.

https://ashokitbucket101.s3.ap-south-1.amazonaws.com/
SB_NG_Docker_K8S_Project_Setup.pdf

-> S3 is a global service, whereas buckets are region specific.

=================================
Static Website Hosting using S3
=================================

-> Website nothing but collection of web pages

-> Websites are divided into 2 types

1) Static Website
2) Dynamic Website
-> The website which gives same response/content for all users is called as static
website.

-> The website which gives response based on user is called as Dynamic website.

## Git Hub Repo : https://github.com/ashokitschool/s3_static_website_hosting.git

Step-1) Create s3 bucket with unique name

Step-2) Upload website files & folders into bucket with public read-access

Step-3) Enable Static website hosting (in bucket properties)

index-document : index.html
error-document : error.html

Note: After enabling static website hosting it generates end-point URL for our
website

Step-4) Access our website using website endpoint url.
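
=> The same steps can also be performed using the AWS CLI (a rough sketch, assuming the CLI is installed & configured and public access is allowed on the bucket; the bucket name is just an example):

$ aws s3 mb s3://ashokit-demo-bucket-101
$ aws s3 cp . s3://ashokit-demo-bucket-101/ --recursive --acl public-read
$ aws s3 website s3://ashokit-demo-bucket-101/ --index-document index.html --error-document error.html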

===================
S3 Storage Classes
===================

=> Storage classes are used to specify how frequently we want to access our objects
from S3.

=> We have several storage classes in s3 like below..

1) Standard (default) : To access object more than once in a month

2) Intelligent-Tiering : Unknown access patterns

3) Standard-IA : Infrequent - Access (Once in a month)

4) One Zone-IA : Stored in a single Availability Zone (Once in a month)

5) Glacier Instant Retrieval : Long Lived Archive data (once in quarter) (MS)

6) Glacier Flexible Retrieval (formerly Glacier) : Once in year (M - H)

7) Glacier Deep Archive : Less than once in year (Hours)

8) Reduced Redundancy : Not Cost Effective (not recommended)

============
Versioning
============

Object Name : devops-material.pdf (01-Sep-2023)

Object name : devops-material.pdf (10-sep-2023) --> It will override old one

=> If we don't want to replace old objects from bucket then we can enable
Versioning.

=> Versioning we will enable at bucket level and it is applicable at object level

Note: Once we enable versioning, we can't disable it, but we can suspend it.

=======================
What is Object Locking
=======================

-> It is used to enable the feature WORM (Write once read many times)

-> At the time of bucket creation only we can enable object lock.

-> Object Lock will be enabled at bucket level and it is applicable at object
level.

=================================
What is Transfer Acceleration
=================================

-> It is used to speed up data transfer process in S3 bucket.

====================================
IAM : Identity & Access Management
====================================

=> 2 Types of accounts we can see in AWS

1) Root Account (Super account) - will have access for everything in AWS

2) IAM Account (we can manage permissions)

=> We shouldn't use root account for our daily activities.

=> For daily activities we are going to use IAM accounts.

Note: Every team member will get IAM account to perform daily activities.

=> In IAM account we can give permissions to access particular services in AWS
cloud.

Note: IAM is free service.

===================================
Multi Factor Authentication (MFA)
===================================

-> It is used to provide additional security for root account.

-> Enable MFA for root account using Google Authenticator app

-> After enabling MFA, logout and login into root account and check behaviour

===============
Best Practices
===============
- When we login AWS using 'email' and 'password', that has complete access to all
AWS services and resources in the account (Root account).
- Strongly recommended that you do not use the "root user" for your everyday tasks,
even the administrative ones.

- Instead, adhere to the best practice of using the root user only to create your
first IAM user. Then securely lock away the root user credentials and use them to
perform only a few account and service management tasks.

- IAM user is truly global, i.e., once an IAM user is created it can be accessed in
all the regions in AWS.

1. Main things in IAM is


-Roles
-Users
-Policies / Permissions
-Groups

2. IAM users can be accessible by the following 3 ways.


-through AWS console
-CLI (Command Line Interface)
-API

3. In MNCs, permissions will not be provided to individual users. Create Groups and
add the users into them.
Users & Groups are for the users.
Roles are for the AWS Services.

============
IAM Account
============

1) Create IAM account and attach policies (RDSFullAcces, S3FullAccess)

2) Login into IAM account and EC2 service (can't access because no permission)

=================
IAM User Group
=================

1) Create User Group

2) Attach Policies to group

3) Add Users to group

=====================
Create Custom Policy
=====================

1) Create policy and add to user group
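
=> For reference, a custom policy is written in JSON. A minimal sketch that allows read-only access to a single S3 bucket (the bucket name is just an example):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::ashokit-demo-bucket-101",
        "arn:aws:s3:::ashokit-demo-bucket-101/*"
      ]
    }
  ]
}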

================
Create IAM role
=================
1) Create IAM role and attach to ec2 vm

===============
IAM Summary
===============

1) What is IAM ?
2) What is Root Account ?
3) How to enable MFA for root account

4) What is IAM account


5) How to create IAM account
6) Programmatic Access Vs Console Access
7) Attaching Policies to User
8) Creating Custom Policy
9) Creating User Group
10) Adding Users to Group
11) Adding Policies to User Group
12) Working with IAM Role

#########################
EC2 Service
#########################

-> EC2 stands for "Elastic Compute Cloud"

-> EC2 is most demanded service in AWS cloud

-> EC2 is used to create Virtual Machines in AWS cloud

-> EC2 VM is called as EC2 instance

EC2 Instance = Computer / Server / VM / Virtual machine

-> EC2 instances are re-sizable (we can change configuration)

-> EC2 is a regional service

-> EC2 is a paid service

-> EC2 instances minimum billing period is 1 hour

Note: To encourage learners, AWS provided EC2 instances (t2.micro type) free for 1
year
(Monthly 750 hours)

-> When we create EC2 instance, AWS will provide default storage using EBS.

EBS Max Size : 16 TB (16000 GB)

Windows VM : 30 GB (free of cost, default)


Linux VM : 8 GB (free of cost, default)

-> For EC2 instance, AWS will provide default VPC.

-> To create EC2 instance, we need to choose AMI (Amazon machine image)
-> Key-Pair is used to secure EC2 instance.

-> Key Pair contains public key and private key

Note: One Key-Pair we can use for Multiple EC2 instances irrespective of OS

-> For EC2 instance we will attach firewall Security rules using Security Group

Windows VM => RDP : 3389

Linux VM => SSH : 22

HTTP => HTTP : 80

HTTPS => HTTPS : 443

Note: One security group we can attach to multiple EC2 instances.

########### Lab Task : Launch Windows VM and Connect to it using RDP Client
############

==========================
Launch Linux VM using EC2
==========================

1) Create Linux VM

2) Connect to Linux VM using MobaXterm

======================
Types of IP's in AWS
======================

1) Public IP ( It is dynamic IP, it is used to connect with EC2 VM from outside)

Note: When we re-start EC2 VM then Public IP gets changed.

2) Private IP ( It is fixed IP, it is used internally in AWS cloud )

Note: When we re-start EC2 VM then Private IP will not change. It is fixed and used
for internal purpose.

3) Elastic IP ( It is fixed Public IP, It is chargeable )

Note: When we create and attach Elastic IP to ec2 instance then it will become
fixed public ip for that instance. Even if we re-start the machine that IP will
remain same.

######################### Lab Task on EC2 IP addresses ###################

1) Create EC2 Instance (Amazon Linux)

2) Observe IPs associated with that Machine


=> Public IP
=> Private IP

3) Stop the EC2 instance and Observe IPs associated with that Machine

=> Public IP will be removed


=> Private IP remains same

4) Create Elastic IP and associate that Elastic IP to ec2 instance

5) Observe IPs associated with that Machine (Elastic IP will become Public IP)

6) Re-Start the EC2 instance and Observe IPs associated with that Machine

=> Public IP remains same


=> Private IP remains same

7) De-Associate Elastic IP from EC2 (Action -> Networking -> De-Assocate Elastic
IP)

8) Release Elastic IP back to AWS cloud (Elastic IP -> Actions -> Release)

****** Note: Elastic IPs are chargeable. If we are not using them, we have to release
them back to AWS ***********

================================================================================

==========================
EC2 Instances Categories
==========================

1) On-Demand Instances

2) Reserved Instances

3) Spot instances

4) Dedicated Host Instances

=======================
On-Demand Instances
=======================

=> Whenever we want then we can create it

=> Fixed Price (Hourly Based)

=> Pay for use

=> No Prior Payments

=> No Commitment

====================
Reserved Instances
====================

=> It is like advanced booking for Ec2 instances


=> Longterm Commitment

=> Commitement 1 year or 3 years

=> AWS will provide discount when we go for longterm commitment

=> Prior Payment option is available (Partial Payment also available)

===============
Spot Instances
==============

=> It is like bidding or auction

=> AWS will offer High Capacity systems for Low Price

==========================
Dedicated Host Instances
==========================

=> It is physical instance that AWS will provide to customers

=> These are very costly instances

===================
EC2 Instance Types
===================

1) General Purpose Instances

2) Compute Optimized

3) Memory Optimized

4) Storage Optimized

5) Accelerated Computing

========================
What is AMI in AWS ?
==========================

=> AMI stands for Amazon Machine Image

=> AMI contains set of configurations to launch virtual machines

a) Windows AMIs
b) Linux AMIs
c) Mac AMIs

=> AWS provided several Quick Start AMIs to launch Virtual Machines.

=> In AWS, we can create our own AMIs also.

=> By Default our AMI visibility is Private (We can make it as public)
############# AMI Lab Task ##############

1) Create AMI from existing EC2 VM

2) Check AMI display in AMI Dashboard

3) Change AMI visibility to Public (It will be available for all AWS users)
(Optional)

4) Launch New EC2 instance using our AMI

5) Connect to new ec2 vm using MobaXterm

########################
EBS Volumes & Snapshots
########################

=> EBS stands for Elastic Block Store

=> EBS is a block level storage device that can be associated with EC2 instance

=> EBS providing both Primary and Secondary Storage for EC2 instances

=> EBS volumes are divided into 2 categories

1) Root Volumes
2) Additional Volumes

=> When we launch EC2 VM then by default EBS Root Volume will be attached to our
EC2 instance

For Windows by default : 30 GB space in Root Volume

For Linux by default : 8 GB space in Root Volume

######## Note: EBS Volume max size is 16 TiB (16,384 GiB) ######

=> EBS root volume is mandatory to Launch any EC2 instance.

=> If we detach EBS root volume from EC2 instance then we can't start that
Instance.

=> We can create Additional EBS volumes and we can attach / detach with EC2
instances.

=> Additional EBS volumes are optional Volumes.

*** One EC2 instance can have multiple EBS volumes

*** One EBS volume can be attached with only one EC2 instance at a time.

*** EBS Volumes are Availability Zone specific ***

=================
EBS Volume Types
==================

=> We have 5 types of EBS volumes

1) General Purpose (Min: 1 GiB, Max: 16384 GiB)

2) Provisioned IOPS (Min: 4 GiB, Max: 16384 GiB)

Note: Provisioned IOPS (io2 Block Express) supports volumes up to 64 TiB

3) Cold HDD (Min: 125 GiB, Max: 16384 GiB)

4) Throughput Optimized (Min: 125 GiB, Max: 16384 GiB)

5) Magnetic (standard) ( Min: 1 GiB, Max: 1024 GiB )

###################
Today's Lab Task
###################

1) Create 2 EC2 Linux instances with Amazon Linux 2 AMI


(They will have EBS root volumes with 8 GB)

2) Create Additional EBS volume with 12 GB (check AZ for VMs and create EBS Vol in
same AZ)

3) Attach EBS Additional volume of 12 GB to EC2 VM-1 for storing data

4) Connect to EC2 VM-1 instance and verify volumes (both EBS volumes should display)

$ lsblk

Root Vol : /dev/xvda


Addn Vol : /dev/xvdf

5) Create a directory and mount EBS Additional volume to created directory & create
files

$ sudo mkfs -t ext4 /dev/xvdf

$ mkdir dir1

$ sudo mount /dev/xvdf dir1

$ cd dir1

$ touch f1.txt f2.txt

6) Detach EBS Additional volume from EC2 VM-1

7) Attach previously created EBS Additional volume to EC2 VM-2

8) Connect to EC2 VM-2 instance and verify volumes (EBS volume should display)

9) Create a directory and mount EBS Additional volume to the created directory

Note: Do NOT run mkfs again on VM-2; formatting the volume again would erase the
data written from VM-1.

$ mkdir dir2

$ sudo mount /dev/xvdf dir2

$ ls -l dir2

10) Verify that the data previously stored on the additional EBS volume is available
(it should be present)

11) Stop Instances and delete volumes

=================
EBS - Snapshots
=================

=> Snapshots are used for Volume Backups

=> Snapshots are Region-specific (Volumes are AZ specific)

=> From volume we can create snapshot and from snapshot we can create volume

Volume ====> Snapshot =====> Volume

=> Snapshots can't be attached with EC2 instance (Only Volumes we can attach with
EC2)
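
=> A quick sketch of the same flow with the AWS CLI (the volume ID, snapshot ID and AZ
below are placeholder values):

# Take a snapshot (backup) of an EBS volume
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup of additional volume"

# Create a new volume from that snapshot, in any AZ of the same region
$ aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone ap-south-1b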

##########################################
Static WebSite Hosting using EC2 instance
##########################################

=> Website : Collection of web pages

=> We have 2 types of websites

1) Static Website (Same content for all users)

2) Dynamic Website (Content based on user)

=> In order to run a website we need a webserver

=> A webserver is software which provides the runtime environment (platform) to run
web applications.

===========================
Install Web Server (Httpd)
===========================

$ sudo yum update -y


$ sudo yum install httpd

$ sudo systemctl start httpd

Note: Enable HTTP in Security Group Inbound rules

=> Access our website using EC2 vm public ip (we should be able to see Test web
page)

=> We can change content of our website using below commands

$ cd /var/www/html

$ sudo vi index.html

<h1> Welcome to Ashok IT </h1>

<h2> Contact Us : + 91 - 9985296677 </h2>

=> Access our website using EC2 vm public ip (we should be able to see our content
in web page)

=========================
What is User Data in EC2
=========================

=> User Data is used to execute a script when the EC2 instance is launched for the
first time

=> We can say User Data is the default script executed when an EC2 instance is launched.

Note: If we stop and start the EC2 machine, the User Data will not execute again.

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 1</h1></html>" > index.html
service httpd start

####################################

=> When we run our application in Single Server then we may get below issues

1) Burden on Server

2) Slow Responses

3) Server can crash

4) App downtime

5) Effect on Brand Value

6) Revenue Loss
=> To overcome these issues we will go for Load Balancing.

######################
Load Balancing in EC2
######################

-> Load Balancing is the process of distributing application load to multiple servers

-> Instead of deploying our application in one server, we will deploy it in multiple
servers

-> All the servers will be connected to Load Balancer

-> LBR will distribute traffic to all connected servers using Round Robin algorithm

==================================
Process to Setup Load Balancer
==================================

1) Create Security Group with below Protocols in Inbound Rules

SSH - 22
HTTP - 80

SGName : ashokit_Security_group

2) Create first EC2 instance (EC2-1) and Host Web Application

Note: Configure below script as 'User Data' at the time of launching the instance

#! /bin/bash

sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 1</h1></html>" > index.html
service httpd start

3) Create second EC2 instance (EC2-2) and host web application

Note: Configure below script as 'User Data' at the time of launching the instance

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Life Insurance Server - 2</h1></html>" > index.html
service httpd start

4) Create Load Balancer ( ALB ) and attach both EC2 instances as one target group

Scheme : Internet facing

Select : New Target Group (Add both EC2 instances)

Listner : HTTP : 80
Security Group : ashokit_Security_Group

Note: Once Load Balancer created, DNS will be generated (it will take upto 3 mins)

5) Send a request to Load Balancer DNS URL and see the response

(it should distribute traffic to both servers)
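
=> For reference, the same ALB setup can be scripted with the AWS CLI. This is a minimal
sketch; the VPC, subnet, security group, instance IDs and ARNs are placeholder values.

# Create a target group and register both web servers
$ aws elbv2 create-target-group --name ashokit-tg --protocol HTTP --port 80 --vpc-id vpc-0abc1234 --target-type instance
$ aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=i-0aaa1111 Id=i-0bbb2222

# Create an internet-facing ALB across two subnets and add an HTTP listener
$ aws elbv2 create-load-balancer --name ashokit-alb --subnets subnet-0aaa1111 subnet-0bbb2222 --security-groups sg-0abc1234
$ aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=<tg-arn>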

Note
++++++++++
1) DELETE Load balancer
2) Delete EC2 instances

###############################
Types of Load Balancers in AWS
###############################

-> We have 4 types of Load balancers in AWS

1) Application Load Balancer

2) Network Load Balancer

3) Gateway Load Balancer

4) Classic Load Balancer (Will be retiring on Aug 15, 2022)

-> When you want to deal with HTTP and HTTPS requests then Application Load
Balancer is recommended. Application Load Balancer supports path based routing.

-> When you have static ip address and wants to process millions of requests per
second then go for Network Load Balancer (This gives ultra performance)

-> When your application needs to work with third-party network/security appliances
(e.g. firewalls, inspection systems), go for Gateway Load Balancer

######################
Microservices
#####################

-> In realtime, project will be developed in 2 ways

1) Monolith Architecture based project

2) Microservices Architecture based project

-> Monolith Architecture means all the functionalities are developed in a single
project
-> If we change some code then it may affect another functionality
-> Maintenance becomes very difficult when we follow monolith architecture

-> To overcome the problems of Monolith Architecture, nowadays people are using
Microservices Architecture in real time.

-> In Microservices architecture, the project functionality is divided into
multiple services
-> Every service is a separate project
-> When we change the functionality of one service, it will not affect another
service
-> Every service runs as a separate project on a separate server
-> Easy to maintain

######################################
Setup LBR with multiple Target Groups
######################################

-> Create Security Group with SSH-22 and HTTP-80 Protocols

1) Create EC2 Instance-1 For Flights (Flights-Server-1)

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Flights Server - 1</h1></html>" > index.html
service httpd start

2) Create EC2 Instance-2 for Flights (Flights-Server-2)

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Flights Server - 2</h1></html>" > index.html
service httpd start

-> Create "Flights Target Group" with Flights EC2 instances

3) Create EC2 instance-1 for Trains (Trains-Server-1)

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Trains Server - 1</h1></html>" > index.html
service httpd start

4) Create EC2 Instance-2 for Trains (Trains-Server-2)

#! /bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Trains Server - 2</h1></html>" > index.html
service httpd start
-> Create "TrainsTargetGroup" for Trains EC2 instances

-> Create Application Load Balancer

-> Configure Routing in Application Balancer to route the request to "Target Group"
based on URL

(At the time of LBR creation choose existing target group as Flights-
TargetGroup)

####### After the Load Balancer is created, we need to configure Routing Rules #######

(Go to LBR -> Listeners -> Click on View/Edit Rules -> Click on + Symbol -> Click
on Insert Rule -> Add the rule)

Note: In this scenario we have added query string rules

Rule-1 ==> Key : type, value = flights ==> Forward it to "Flights-TargetGroup"

Rule-2 ==> Key : type, value = trains ==> Forward it to "Trains-TargetGroup"

Default ALB URL : http://makemytriplbr-1672601111.ap-south-1.elb.amazonaws.com/

=========================
URLS with Query String
========================

URL-1 : http://makemytripalb-1456632845.ap-south-1.elb.amazonaws.com?type=trains

URL-2 : http://makemytripalb-1456632845.ap-south-1.elb.amazonaws.com?type=flights

======
Note
=====

-> After practice, delete Target Groups, LBR and EC2 instances to avoid billing

================
Auto Scaling
=================

=> AWS Auto Scaling monitors your applications and automatically adjusts capacity
to maintain steady, predictable performance at the lowest possible cost.

=> Using AWS Auto Scaling, it’s easy to setup application scaling for multiple
resources across multiple services in minutes.

=> Amazon EC2 Auto Scaling helps you ensure that you have the correct number of
Amazon EC2 instances available to handle the load for your application.

=> The process of increasing and decreasing no.of ec2 instances based on the load
is called as Auto Scaling.
=============================
Advantages with Auto Scaling
=============================

1) Fault Tolerance : Detect when an instance is unhealthy, terminate it, and
launch a new instance to replace it.

2) Availability : Ensure that your application always has the right amount of
capacity to handle the current traffic demand.

3) Cost Management : Save money by dynamically launching instances when they are
needed and terminate them when they aren't needed.

=================================
How to setup Auto Scaling Group
=================================

1) Create Launch Template

2) Create AutoScaling Group with Launch Template

3) Configure Desired, Min and Max Capacity

4) Attach AutoScaling Group to particular Target Group

5) Configure Scaling Policy
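
=> A minimal AWS CLI sketch of the same Auto Scaling setup (the AMI ID, subnet IDs and
target group ARN below are placeholder values):

# 1) Create a launch template
$ aws ec2 create-launch-template --launch-template-name ashokit-lt --launch-template-data '{"ImageId":"ami-0abc1234example","InstanceType":"t2.micro"}'

# 2) Create the Auto Scaling group with min/desired/max capacity and attach a target group
$ aws autoscaling create-auto-scaling-group --auto-scaling-group-name ashokit-asg --launch-template LaunchTemplateName=ashokit-lt --min-size 1 --desired-capacity 2 --max-size 4 --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222" --target-group-arns <tg-arn>

# 3) Add a target-tracking scaling policy (keep average CPU around 50%)
$ aws autoscaling put-scaling-policy --auto-scaling-group-name ashokit-asg --policy-name cpu-50 --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'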

=======================
Summary For Revision
======================

1) What is EC2

2) What is Key Pair

3) What is Security Group (Firewalls Protocol)

4) Inbound Rules & Outbound Rules

5) Launching Windows VM using EC2

6) Launching Linux VM using EC2

7) Connecting to EC2 VM using MobaXterm and Putty

8) Instance Types & Families

9) Types of IPs in AWS (Public, Private and Elastic IP)

10) What is AMI & How to create our own AMIs

11) What is EBS (Elastic Block Storage)

12) EBS Volumes (Root & Additional Volumes)

13) EBS Volume Types

14) Snapshots & Life Cycle Manager


15) How to copy the data from one EC2 instance to another EC2 instance ?

16) How to host static website using EC2 instance (httpd webserver)

17) What is User Data in EC2 ?

18) What is Load Balancer ?

19) What is Target Group ?

20) Monolith Vs Microservices

21) LBR with Multiple Target Groups ( Routing Rules )

22) Types of Load Balancer

23) Auto Scaling

==================
Amazon Cloud Watch
==================

Amazon CloudWatch is a component of Amazon Web Services that provides monitoring
for AWS resources and the customer applications running on the Amazon
infrastructure.

CloudWatch enables real-time monitoring of AWS resources such as Amazon Elastic
Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, Elastic
Load Balancing and Amazon Relational Database Service (RDS) instances.

CloudWatch automatically collects and provides metrics for CPU utilization,
latency and request counts.

Users can also stipulate additional metrics to be monitored, such as memory usage,
transaction volumes or error rate.

==============================
Cloud Watch & SNS - Lab Task
==============================

1) Create SNS Topic with Email Notification (Standard Create)

2) Configure Email Subscription in SNS Topic (confirm the subscription received in
email)

3) Select EC2 Instance -> Actions -> Monitor and troubleshoot -> Manage CloudWatch
Alarms -> Create CloudWatch Alarm

Alarm Notification : Select the SNS Topic which we have created

Alarm Threshold : Avg CPU >= 5%

4) Connect to EC2 instance

5) Increase load on the EC2 instance using "stress" software

$ sudo yum install stress -y


$ sudo stress --cpu 8 -v --timeout 180s

6) Observe the behaviour of CloudWatch / Alarm / SNS
(We should get an Email Notification)

Note: When the Alarm gets triggered, its status will be changed to 'In Alarm'

=> We can monitor our Alarm history (how many times it got triggered)

(Go to CloudWatch -> Select Alarms -> Click Alarm -> Click on History)
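
=> The same alarm flow can also be scripted with the AWS CLI. A minimal sketch (the topic
ARN, email address and instance ID below are placeholder values):

# Create an SNS topic and subscribe an email address to it
$ aws sns create-topic --name ashokit-alerts
$ aws sns subscribe --topic-arn <topic-arn> --protocol email --notification-endpoint you@example.com

# Create a CloudWatch alarm on average CPU of one EC2 instance
$ aws cloudwatch put-metric-alarm --alarm-name ec2-high-cpu --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistic Average --period 300 --evaluation-periods 1 --threshold 5 --comparison-operator GreaterThanOrEqualToThreshold --alarm-actions <topic-arn>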

==========
AWS CLI
==========

AWS provides two ways of infrastructure configurations

1) AWS Management Web Console

2) AWS CLI (Command Line Interface)

===========================
Using the AWS web console:
===========================

It is a graphical method to connect to various AWS resources, their configuration,
modification, etc. It is simple to use and does not require knowledge of scripting.

============================
AWS Command Line Interface:
============================

Usually, scripts provide the flexibility to manage multiple AWS resources and
infrastructure effectively.

For example, we can use the script to deploy multiple resources without the need to
go through a complete configuration wizard each time.

============================
Configuring AWS CLI
=============================

1) Create an AWS Account (required in order to configure the AWS CLI).


2) Create IAM user with Security Credentials and note down Access Keys

Access Key : AKIASAEUF6C7FBPQIPUU


Secret Key : hFtfeoxpkamUFK9yVLIR0rtJj7xXSlONccNDzydN

3) Install AWS CLI: AWS CLI is available for Windows, Mac and Linux distributions.

a) For Windows : https://awscli.amazonaws.com/AWSCLIV2.msi (download and install)

b) MAC and Linux: Please follow these steps (execute below commands)

$ sudo apt-get install -y python-dev python-pip


$ sudo pip install awscli

4) Once installation is completed then execute below commands

$ aws --version (It should give AWS CLI version)


$ aws configure (To connect with AWS Cloud)

Note: AWS configure command will ask for access key, secret access key, region and
output format.

######### CLI Documentation : https://docs.aws.amazon.com/cli/latest/reference/ ##########

================================================
Working with AWS S3 Service using AWS CLI
================================================
Step-1: In this case, we will be using AWS S3 (Simple Storage Service) as an
example.

In brief, AWS S3 is an object storage service.

Step-2: Next, we are going to run "aws s3 ls" (to display the bucket list)

$ aws s3 ls

Step-3: After listing out the content of the existing bucket, let us try to create
a new s3 bucket using AWS CLI

$ aws s3 mb s3://ashokitbucket1004

Step-4: As a result of the command execution, the bucket should be created

Step-5: After the command has been executed, let us check, if the bucket has been
created and what is the region of the bucket.

Step-6: Delete bucket

$ aws s3 rb s3://ashokitbucket1004

===========================
List out all ec2 instances
===========================
$ aws ec2 describe-instances

Note : It will list down all the data in JSON format

# For example, we can search for instances with a given type


$ aws ec2 describe-instances --filters Name=instance-type,Values=t2.micro

or a tag key:

$ aws ec2 describe-instances --filters "Name=tag-key,Values=EC2VM-2"
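
=> As a small addition to these notes, the describe-instances output can also be trimmed
with the --query option (JMESPath expressions):

$ aws ec2 describe-instances --query "Reservations[].Instances[].{Id:InstanceId,Type:InstanceType,State:State.Name,PublicIP:PublicIpAddress}" --output table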

=========================================
Create a New Key Pair for EC2 Instances
=========================================

-> Before launching a new EC2 instance we’ll need an SSH key pair that we’ll use to
connect to it securely.

$ aws ec2 create-key-pair --key-name ashokitkey --query 'KeyMaterial' --output text > ashokitkey.pem

The above command will create a new key pair in AWS named ashokitkey and write the
private key material directly to the file we specify, in this case, ashokitkey.pem.

==========================
Launch New EC2 Instances
==========================
$ aws ec2 run-instances --image-id ami-0a5ac53f63249fba0 --instance-type t2.micro
--key-name ashokitkey

=================================
Stop and Start an EC2 Instance
=================================

$ aws ec2 stop-instances --instance-ids <instance-id>

Ex: aws ec2 stop-instances --instance-ids i-003788b5ba504825f

And start again:

$ aws ec2 start-instances --instance-ids <instance-id>

Ex: $ aws ec2 start-instances --instance-ids i-003788b5ba504825f

========================
Terminate an Instance
========================
$ aws ec2 terminate-instances --instance-ids <instance-id>

Ex: $ aws ec2 terminate-instances --instance-ids i-003788b5ba504825f

================================
Creating Security Group in EC2
================================
$ aws ec2 create-security-group --group-name MySecurityGroup --description "My
security group"
=============================
Creating RDS instance (MySQL)
=============================

$ aws rds create-db-instance \
    --db-instance-identifier test-mysql-instance \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password secret99 \
    --allocated-storage 20

========================
Delete RDS Instance (MySQL)
========================
$ aws rds delete-db-instance \
    --db-instance-identifier test-mysql-instance \
    --skip-final-snapshot \
    --no-delete-automated-backups

################################
VPC : Virtual Private Cloud
################################

=> VPC stands for Virtual Private Cloud.

=> It is a virtual network environment provided by cloud computing platforms like
Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

=> A VPC allows users to create and manage their own isolated virtual networks
within the cloud.

=> In a VPC, users can define their own IP address range, subnets, route tables,
and network gateways. It provides control over network configuration, such as
setting up access control policies, firewall rules, and network traffic routing.

=> Overall, a VPC provides a flexible and secure network environment that enables
users to build and manage their cloud-based applications and infrastructure with
greater control and customization.

###################
VPC Terminology
##################

1) VPC
2) CIDR Blocks
3) Subnets (public and private)
4) Route Tables
5) Internet Gateway
6) NAT Gateway
7) VPC Peering
8) Security Groups
9) NACL

###################
Types of IP's
###################

=> There are several types of IP (Internet Protocol) addresses used in computer
networks. Here are the most common types:

1) IPV4
2) IPV6
3) Public IP
4) Private IP
5) Dynamic IP
6) Static IP Address

Note: Earlier we had only IPv4

-> IP ranges are expressed using CIDR notation

-> CIDR stands for Classless Inter-Domain Routing

Note: These CIDR ranges matter mainly for public IPs; private IP ranges are reserved
for internal use

-> Daily, billions of new devices are launched and they use the internet

-> If a device wants to use the internet then an IP is mandatory (we are running out of
IPs)

-> To overcome this IPv4 problem, IPv6 came into the picture

=========
IPV4
=========

=> IPv4 addresses are 32-bit numeric addresses written in four sets of numbers
separated by periods (e.g : 192.168.0.1)

=> It is the most widely used IP version and supports approximately 4.3 billion
unique addresses

=> However, due to the increasing number of devices connected to the internet, IPv4
addresses are running out, leading to the adoption of IPv6.

===========
IPV6
===========

=> IPv6 addresses are 128-bit alphanumeric addresses written in eight sets of four
hexadecimal digits separated by colons (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334)

=> IPv6 provides a significantly larger address space than IPv4, with approximately
340 undecillion unique addresses.

=> It was introduced to overcome the IPv4 address exhaustion issue and support the
growing number of internet-connected devices.

==========
Public IP
==========

A public IP address is a unique address assigned to a device connected to the
internet. It allows the device to communicate with other devices and services on
the internet. Public IP addresses are routable across the internet and are
typically assigned by Internet Service Providers (ISPs) or cloud providers.

===========
Private IP
===========

A private IP address is an address used within a private network, such as a home or
office network. These addresses are not routable on the internet and are reserved
for local use. Private IP addresses provide internal communication within a
network, and devices with private IP addresses can access the internet through a
network address translation (NAT) mechanism.

============
Dynamic IP
===========

Dynamic IP Address: A dynamic IP address is an IP address that is assigned to a
device temporarily by a DHCP (Dynamic Host Configuration Protocol) server. The
assignment of dynamic IP addresses is typically managed by ISPs, and the addresses
can change over time when the device reconnects to the network. This type of
address is commonly used in home internet connections.

===========
Static IP
===========

Static IP Address: A static IP address is an IP address that is manually assigned
to a device and remains fixed, unlike dynamic IP addresses. Static IP addresses are
often used for servers, network devices, or services that require a consistent
address for easy access. They are usually configured manually on the device or
assigned by the network administrator.

==================
VPC Sizing
==================

-> Sizing is calculated as a power of 2: number of IPs = 2 ^ (32 - prefix length)

-> If we give the range as /16

10.0.0.0/16 => 2 ^ (32-16) => 2 ^ 16 => 65,536

10.0.0.0/32 => 2 ^ (32-32) => 2 ^ 0 => 1
10.0.0.0/31 => 2 ^ (32-31) => 2 ^ 1 => 2
10.0.0.0/30 => 2 ^ (32-30) => 2 ^ 2 => 4
10.0.0.0/29 => 2 ^ (32-29) => 2 ^ 3 => 8
10.0.0.0/28 => 2 ^ (32-28) => 2 ^ 4 => 16 (AWS supports from /28)

=> AWS won't support /29 to /32

-> We can give VPC/subnet ranges from /16 to /28

-> Recommended to use /24

10.0.0.0/24 => 2 ^ (32-24) => 2 ^ 8 => 256

Note: With /24 we get 256 IPs, which are sufficient for most real-time use cases
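
=> A quick way to check these values from the shell (simple arithmetic, nothing
AWS-specific):

$ echo $(( 2 ** (32 - 24) ))
256

$ echo $(( 2 ** (32 - 16) ))
65536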

##########################
VPC Lab Task For Today
##########################

1) Create VPC

(It will create one Route Table by default name it as ashokit-private-rt)

CIDR : 10.0.0.0/16

2) Create 2 Subnets (Public and Private Subnets)

public Subnet CIDR : 10.0.0.0/24

private Subnet CIDR : 10.0.1.0/24

3) Create Internet Gateway and Attach to our VPC

4) Create one new Route Table (Name it as public Route Table)

5) Perform Subnet Association with Route Tables

=> public-rt => public-sn

=> private-rt => private-sn

6) Attach IGW to Public Route Table so that associated subnet will become public

7) Create One EC2 VM in public subnet and another EC2 vm in private subnet.

8) Test connectivity of both vms using MobaXterm.

Note: We should be able to connect to the EC2 VM created in the public subnet and we
shouldn't be able to connect to the EC2 VM created in the private subnet.

=====================
Step-1 : Create VPC
=====================

-> Create a VPC

-> CIDR Block : 10.0.0.0/16

-> Select No IPv6 CIDR block

-> Select Default Tenancy

-> Create VPC

Note: After creating VPC verify its details

(DNS hostnames -> Disabled)

Note: One Route Table will be created for VPC by default. Rename it as "Ashokit-
Private-Route-Table"

===========================
Step-2 : Create 2 Subnets
===========================

-----------------------
Create Subnet-1
----------------------
-> Create Subnet

Name : public-subnet-az-1

-> select availability zone as 1a

-> CIDR Block : 10.0.0.0/24 (It will take 256 ips)

------------------------
Create Subnet-2
-------------------------
-> Create Subnet

Name : private-subnet-az-1b

-> select availability zone as 1b

-> CIDR Block : 10.0.1.0/24 (It will take 256 ips)

Note: Every subnet will have Route Table and NACL

-> AWS will reserve 5 ips in every subnet (we will get only 251)

=================================
Step-3 : Create Internet Gateway
=================================
Note: By default one IGW will be available and it will be attached to default VPC

-> Create custom Internet Gateway (ashokit-vpc-igw)

-> Attach this IGW to VPC (we can attach IGW to only one VPC)

=================================
Step-4 : Create Route Table
=================================

Note: When we create a VPC, we get only one route table by default. It is called the
Main route table.

Note : Change existing route table name 'ashokit-vpc-private-rt'

-> Create one new Route Table (ashokit-vpc-public-rt)


-> Choose vpc and create it

Now We have 2 route tables

-> Goto route table and attach route tables to subnets (Subnets association for
Route Tables)

Private Route Table should have Private Subnet


Public Route Table should have Public Subnet

==========================================
Step-5 : Making Subnet as public
==========================================

-> Goto public Route Table -> Edit Routes

-> Add Destination as 0.0.0.0/0 and Target as IGW -> Save

-> Subnet Associations -> Edit SNET -> Select Public Subnet

======================================
Step - 6 : Create EC2 (Public EC2)
======================================
-> Choose AMI
-> Select VPC
-> Select Public Subnet
-> Assign Public IP as Enable
-> Add SSH and Http Protocols
-> Download KeyPair
-> Launch Instance

Note: Go to the VPC and enable 'DNS hostnames'

========================================
Step - 7 : Create EC2 (Private EC2)
========================================
-> Choose AMI
-> Select VPC
-> Select Private Subnet
-> Assign Public IP as Disable (this VM stays private)
-> Add SSH (source : custom, Range : 10.0.0.0/16)
-> Download KeyPair
-> Launch Instance

=================================
Step - 8 : Test EC2 Connections
=================================

-> Connect to Public EC2 using MobaXterm (It should allow to connect)

-> Connect to Private EC2 using MobaXterm (It shouldn't allow to connect)
=================================================================================
Step - 9 : Connect with 'private-ec2' from 'public-ec2' using 'ssh' connection
=================================================================================

Note: As both Ec2 instances are available under same VPC, we should be able to
access one machine from another machine.

----------------------
Procedure to access
----------------------

-> Upload pem file into public-ec2 machine (in mobaxterm we have upload option)

-> Execute below command to provide permission to access pem file

$ chmod 400 <pem-file-name>

-> Execute below command to make ssh connection from public-ec2 to private-ec2

$ ssh -i "pem-file-name" ec2-user@private-ec2-vm-private-ip

Ex: ssh -i "ashokitnewkey.pem" ec2-user@10.0.1.25 (here 10.0.1.25 is the private IP of private-ec2)

Note: It should establish connection (this is internal connection)

-> Try to ping google from the private EC2 (it should not work because the private
subnet has no route to an Internet Gateway)

=============================
VPC with NAT Gateway Lab Task
=============================

1) Create NAT gateway in public subnet

2) Add a route to the NAT gateway in 'private-subnet-route-table'

3) After adding the NAT Gateway, we should be able to ping google from 'private-ec2' also

Note: Delete Elastic IP and NAT Gateway after practise
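
=> A minimal AWS CLI sketch of the NAT Gateway lab (the subnet, allocation and route
table IDs below are placeholder values):

# Allocate an Elastic IP and create the NAT Gateway in the PUBLIC subnet
$ aws ec2 allocate-address --domain vpc
$ aws ec2 create-nat-gateway --subnet-id subnet-0aaa1111 --allocation-id eipalloc-0abc1234

# Route all internet-bound traffic from the private route table through the NAT Gateway
$ aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234example

# Clean up after practice to avoid billing
$ aws ec2 delete-nat-gateway --nat-gateway-id nat-0abc1234example
$ aws ec2 release-address --allocation-id eipalloc-0abc1234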

#######################
What is VPC Peering
#######################

VPC Peering: routes IPv4 or IPv6 traffic between VPCs and is created to establish
communication between two or more VPCs.

=======================
AWS definition:
=======================

=> “A VPC peering connection is a networking connection between two VPCs that
enables you to route traffic between them using private IPv4 addresses or IPv6
addresses.

=> Instances in either VPC can communicate with each other as if they are within
the same network. “

1) Through VPC Peering, traffic stays within the AWS network and does not go over the
internet.

2) Non-overlapping CIDRs – The 2 VPCs you are trying to peer, must have a mutually
exclusive set of IP ranges.

3) Transitive VPC Peering – not allowed i.e

(If VPC A & B have peered and VPC A & C have peered, VPC B & C cannot share
contents until there is an exclusive peering done between VPC B & C)

===========================
Will VPC Peering Cost me?
===========================

No. VPC peering itself won’t cost you; however, the resources deployed inside the VPCs
and the data transfers between them will cost you.

==================================================
Let’s create VPC Peering to enable communication
==================================================

To establish the connection, lets create VPC peering

=> On the left navigation panel under VPC -> Peering Connections:

VPC (Requester) = ashokit_aws_custom_vpc

VPC (Accepter) = default_vpc

=> Now you will see the status 'Pending Acceptance', which means the Requester has sent
a request to the peer; the target VPC now needs to accept the request.

=> Go to VPC Peering -> Click on Actions -> Accept Request

=> Now we need to make entries in Route Tables

Now navigate to Route Tables, in Default VPC RT(Route Table) -> Edit routes

########## Default VPC Route Table should have 3 routes #########

(Local + all traffic with IGW + ashokit_aws_custom_vpc IP Range)

172.31.0.0/16 - local
0.0.0.0/0 - Internet-gateway
10.0.0.0/16 - vpc peering (We need to add this)

########### Custom VPC Route Table should have 3 routes #########

(Local + All traffic with IGW + Default VPC IP Range)

10.0.0.0/16 - local
0.0.0.0/0 - Internet-gateway
172.31.0.0/16 - vpc (We need to add this)

########### Allow Traffic in VPC Security Groups ###########

Edit the Security Groups of the Default and Custom VPCs to allow traffic from each other.

Default VPC Security Group looks like:
SSH - 22 - all
All Traffic

Custom VPC Security Group would look like:
SSH - 22 - all
All Traffic

==================================
Test VPC Peering Connectivity
==================================

# Ping default-vpc EC2-VM private IP from ashokit-custom-vpc vm
$ ping <private-ip>

# Ping ashokit-vpc EC2-VM private IP from default-vpc vm
$ ping <private-ip>

===============================================================
Q ) What is the difference between NACL and Security Groups ?
===============================================================

================
Security Group
================

-> Security Group acts as a Firewall to secure our resources

-> Security Group contains Inbound Rules & Outbound Rules

inbound rules ---> incoming traffic


outbound rules ---> outgoing traffic

-> In One security group we can add 50 Rules

-> Security Group supports only Allow rules (by default all rules are denied)

-> We can't configure deny rule in security group

Ex : 172.32.31.90 ----> don't accept request from this IP (we can't do this
in SG)

-> Security Groups are applicable at the resource level (manually we have to attach
SG to resource)

-> Multiple Security Groups can be attached to single instance & one instance can
have 5 security groups

-> Security Groups are stateful

(If an incoming request is allowed, the response traffic is automatically allowed to go
out, even if there is no matching outbound rule)

-> In a Security Group we can configure the rule source/destination as a CIDR range or an IP

-> Security Group acts as First Level of defense for Outgoing traffic
======
NACL
======

-> NACL stands for Network Access Control List

-> NACL acts as a firewall for our Subnets in VPC

-> NACL applicable at the subnet level

-> NACL rules are applicable for all the resources which are part of that Subnet

-> NACL rules are stateless

(Return traffic is not automatically allowed; outbound rules must explicitly allow it,
so we need to configure that manually)

-> In NACL we can configure both Allow & Deny rules

Ex: We can block a particular IP address (192.168.2.4) from connecting to an EC2
instance
-> One subnet can have only one NACL

Note: One NACL can be added to multiple subnets

-> NACL supports rule destination as only CIDR

-> NACL acts as first level of Defense for Incoming Traffic

( Security Group acts as First Level of defense for Outgoing traffic )

===========================================================================

#########################
Terraform (IAAC s/w)
#########################
-> Terraform is an open source s/w created by HashiCorp and written in Go
programming language

-> Terraform is an infrastructure as code (IaaC) software tool

-> To automate infrastructure creation in cloud platforms we will use Terraform.

-> Infrastructure as code is the process of managing infrastructure in a file or
files rather than manually configuring resources using a user interface (UI)

-> Terraform code is written in the HashiCorp Configuration Language (HCL) in files
with the extension .tf

-> Terraform allows users to use HashiCorp Configuration Language (HCL) to create
the files containing definitions of their desired resources.

-> Terraform supports almost all cloud providers (AWS, Azure, GCP, OpenStack
etc..).

===============================
Terraform vs Cloud Formation
==============================

-> Terraform was developed by HashiCorp

-> CloudFormation was developed by AWS

-> Terraform supports many cloud providers

-> CloudFormation supports only AWS

-> Terraform uses the HashiCorp Configuration Language (HCL), which was built by
HashiCorp. It is fully compatible with JSON.

-> AWS CloudFormation utilizes either JSON or YAML. CloudFormation has a limit of
51,200 bytes for the template body itself.

==========================
Terraform Vs Ansible
==========================

-> Terraform was developed by HashiCorp

-> Ansible is also open source software

-> Terraform is an Infrastructure as Code tool, which means it is designed to
provision the servers themselves.

-> Ansible is a configuration management tool, which means Ansible is designed to
install and manage software on existing servers.

-> Terraform is ideal for creating, managing and improving infrastructure.

-> Ansible is ideal for software provisioning, application deployment and
configuration management.

====================================
Terraform Setup - Pre-Requisites
====================================

1) Cloud Platform Account (AWS, Azure, GCP, OpenStack etc..)

2) IAM User account (Secret Key and Access Key)

3) IAM User should have access to the required resources

###############################
Terraform Installation
#############################

1) Create EC2 instance ( Amazon Linux )

2) Connect to EC2 VM using Mobaxterm and execute below commands

$ sudo yum install -y yum-utils shadow-utils

$ sudo yum-config-manager --add-repo


https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

$ sudo yum -y install terraform

$ terraform -v

###########################################
Working with EC2 Instance using Terraform
###########################################

1) Create IAM user with Programmatic Access (IAM user should have the AmazonEC2FullAccess policy)

2) Download Secret Key and Access Key

3) Write First Terraform Script

$ mkdir terraformscripts
$ cd terraformscripts
$ vi FirstTFScript.tf

provider "aws" {
  region     = "ap-south-1"
  access_key = "AKIAJK"
  secret_key = "CWSCLb1WRB2Xrdufy6/Lp"
}

resource "aws_instance" "AWSServer" {
  ami             = "ami-057752b3f1d6c4d6c"
  instance_type   = "t2.micro"
  key_name        = "ashokitkey"
  security_groups = ["default"]

  tags = {
    Name = "MyEC2-VM"
  }
}

4) Initialize Terraform using init command

$ terraform init

5) Format your script (indent spaces)

$ terraform fmt

6) Validate Your Script

$ terraform validate

7) Create Execution Plan For Your Script

$ terraform plan

8) Create Infrastructure

$ terraform apply -auto-approve

Note: When the script is executed, Terraform stores the resulting state in a state
file. If we execute the script again it will not re-create the resources. If we delete
that state file and execute the script again, it will create them again.

9) Destroy Infrastructure

$ terraform destroy -auto-approve

-> In the first script we kept the provider and resources info in a single script file.
We can keep the provider and resources information in separate files

Ex : provider.tf & main.tf

#########################################
Script to create multiple Ec2 instances
#########################################

provider "aws" {
  region     = "ap-south-1"
  access_key = "AKIA4MGQ5UW757KVKECC"
  secret_key = "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"
}

resource "aws_instance" "AWSVM_Server" {
  count           = "2"
  ami             = "ami-05c8ca4485f8b138a"
  instance_type   = "t2.micro"
  key_name        = "linux"
  security_groups = ["ashokit_security_group"]

  tags = {
    Name = "REDHAT-EC2-VM"
  }
}

Note: Once it is created, then destroy the infrastructure using below command

$ terraform destroy -auto-approve

========================
Variables in Terraform
========================

-> Variables are used to store data in key-value format


Ex:
id = 101
name = Raju

-> We can maintain variables in seperate file

$ vi vars.tf

variable "ami" {
  description = "Amazon Machine Image value"
  default     = "ami-057752b3f1d6c4d6c"
}

variable "instance_type" {
  description = "Amazon Instance Type"
  default     = "t2.micro"
}

variable "instances_count" {
  description = "Total No.of Instances"
  default     = "2"
}

-> Create main tf file using variables

$ vi main.tf

provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "AWSServer" {
  count           = "${var.instances_count}"
  ami             = "${var.ami}"
  instance_type   = "${var.instance_type}"
  key_name        = "ashokitnewkey"
  security_groups = ["default"]

  tags = {
    Name = "EC2 VM - ${count.index}"
  }
}

Note: We can supply variables at runtime also

-> Remove the default value of the instances_count variable from vars.tf and pass it like below

$ terraform apply -var instances_count="4" -auto-approve

=============================
Comments in Terraform Script
=============================

# - single line comment

// - single line comment (java style)

/* and */ - Multi line comments

========================================
Dealing with Secret Key and Access Key
========================================

-> We have configured secret_key and access_key in terraform script file. Instead
of that we can configure them as environment variables.

$ export AWS_ACCESS_KEY_ID="AKIAW4SBVKGXJK"
$ export AWS_SECRET_ACCESS_KEY="CWSCbZOQMkLb1WRB2Xrdufy6/Lp"

-> To verify environment variables we can use echo command

$ echo $AWS_ACCESS_KEY_ID
$ echo $AWS_SECRET_ACCESS_KEY

-> Now remove credentials from terraform script and execute it.

Note: We are setting the provider credentials in the terminal, so these variables will
be available only for the current session. If we want to set them permanently, add them
to the .bashrc file

=============================
Working with User Data
=============================

-> It is used to execute script when instance launched for first time.

-> Create Userdata in one file

$ vi installHttpd.sh

#!/bin/bash
sudo su
yum install httpd -y
cd /var/www/html
echo "<html><h1>Welcome to Ashok IT...!!</h1></html>" > index.html
service httpd start

$ chmod u+x installHttpd.sh

-> Create the main script in main.tf file

$ vi main.tf

provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "AWSServer" {
  ami             = "ami-057752b3f1d6c4d6c"
  instance_type   = "t2.micro"
  key_name        = "ashokitnewkey"
  security_groups = ["default"]
  user_data       = "${file("installHttpd.sh")}"

  tags = {
    Name = "AshokIT-Web-Server"
  }
}

==========================================
Creating S3 bucket using Terraform script
==========================================

-> Add S3 policy for IAM user

-> Execute below terraform script to create s3 bucket in AWS

provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "s3bucketashokit" {
  bucket = "s3bucketashokit"
  acl    = "private"

  versioning {
    enabled = true
  }

  tags = {
    Name = "S3 Bucket By Ashok"
  }
}

========================================
Create MySQL DB in AWS using Terraform
========================================

-> Provider RDS access for IAM user

-> Execute below script to create MySQL DB in AWS cloud

provider "aws" {
  region = "ap-south-1"
}

resource "aws_db_instance" "default" {
  allocated_storage    = 100
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

=======================================================================

Assignment - 1 : Write Terraform Script to create VPC in AWS cloud.

Assignment - 2 : Write Terraform Script to create IAM user account.

========================================================================
provider "aws" {
  region     = "ap-south-1"
  access_key = "AKIAW4SOHOQRVBVKGXJK"
  secret_key = "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"
}

resource "aws_iam_user" "my_iam_user" {
  name = "my_iam_user_abc_updated"
}

resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "my-s3-bucket-ashokit-001"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_versioning" "versioning_example" {
  bucket = aws_s3_bucket.my_s3_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

output "my_iam_user_complete_details" {
  value = aws_iam_user.my_iam_user
}

output "my_s3_bucket_complete_details" {
  value = aws_s3_bucket.my_s3_bucket
}

output "my_s3_bucket_versioning" {
  value = aws_s3_bucket.my_s3_bucket.versioning[0].enabled
}

=====================================
IAM User creation with Variables
=====================================

variable "iam_user_name_prefix" {
  type    = string # any, number, bool, list, map, set, object, tuple
  default = "my_iam_user"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_user" "my_iam_users" {
  count = 1
  name  = "${var.iam_user_name_prefix}_${count.index}"
}

===================================================================================
Change a particular resource : $ terraform apply -target=aws_iam_user.my_iam_user
===================================================================================

==================
Elastic Beanstalk
==================

-> End-to-end web application management service in AWS cloud

-> In AWS, Elastic Beanstalk provides Platform As a Service (PaaS)

-> We can easily run our web applications on AWS cloud using the Elastic Beanstalk
service.

-> We just need to upload our project code to Elastic Beanstalk and it will take care
of the deployment.

-> Elastic Beanstalk will take care of the software and servers which are required to
run our application.

-> Elastic Beanstalk will take care of deployment, capacity provisioning, load
balancer and auto scaling etc..

-> To deploy one java web application manually we need to perform below operations

1) Create Security Group

2) Create Network

3) Create Virtual Machine (s)

4) Install Java software in Virtual machine

5) Install Webserver to run java web application

6) Deploy application to server

7) Re-start the server

8) Create LBR

9) Create AutoScaling etc...

-> AWS provides the infrastructure; we create the platform using AWS infrastructure
to run our Java application (IaaS Model)

=> Instead of preparing the platform ourselves to run our application, we can use the
Elastic Beanstalk service to run our web applications.

=> Elastic Beanstalk is providing Platform as a service.

==================================
Advantages with Elastic Beanstalk
==================================

1) Fast and simple to begin


2) Developer productivity

3) Impossible to outgrow

4) Complete resource control

===========================
Create Sample Application
===========================

-> Create Application

-> Choose the name

-> Select the platform

-> Create Elastic Beanstalk with EC2 role

-> Create Instance Profile Role with Required Policies (AWSElasticBeanstalkWebTier)

=======================
Below Beanstalk events
=======================
Env Creation started
S3 bucket created to store the code
Security Group
VPC
EC2 instances
Webserver installation
Load Balancer
Autoscaling
Cloud watch

URL : http://webapp1-env.eba-rc8b64vg.ap-south-1.elasticbeanstalk.com/

===========================
Elastic Beanstalk Pricing
===========================

=> There’s no additional charge for Elastic Beanstalk.

=> You pay for Amazon Web Services resources that we create to store and run your
web application, like Amazon S3 buckets and Amazon EC2 instances, LBR and ASG.

==================================================
Procedure to deploy java-spring-boot-application
==================================================

1) Create one application in Elastic Beanstalk

2) Choose Platform as Java

3) Select Upload Your Code option and upload spring-boot-jar file

4) Select Required role

5) Select VPC and AZ's


6) Create Environment

-> Go to our application environment -> Configuration -> Edit Updates, monitoring,
and logging -> Configure below Environment Property

SERVER_PORT = 5000

Note: After changing the port Environment will be re-started

URL : <Beanstalk-domain-url>
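
=> As an optional alternative to the console steps above, the same deployment can be
done with the Elastic Beanstalk CLI (eb). A minimal sketch, assuming the jar file is in
the current directory; the environment name is a placeholder.

# Install the EB CLI (one-time)
$ pip install awsebcli --user

# Initialize the application (choose the region and the Java platform when prompted)
$ eb init

# Create an environment and deploy the packaged application
$ eb create ashokit-env
$ eb deploy

# Open the environment URL in the browser, and terminate after practice
$ eb open
$ eb terminate ashokit-env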

======================
Application Versions
======================

-> In Beanstalk we can maintain multiple versions of our application

sb-rest-api-v1.jar
sb-rest-api-v2.jar
sb-rest-api-v3.jar

-> We can deploy particular version of jar file based on demand.


========================
Serverless Computing
========================

-> Serverless computing means running the application without thinking about servers

-> AWS will take care of servers required to run our application

-> AWS lambdas are used to implement serverless computing

=============
AWS Lambdas
=============
AWS Lambda is a way to run code without creating, managing, or paying for servers.

You supply AWS with the code required to run your function, and you pay for the
time AWS runs it, and nothing more.

Your code can access any other AWS service, or it can run on its own. While there are
some rules about how long a function has to respond to a request, there’s almost no
limit to what your Lambda can do.

The real power, though, comes from the scalability that Lambda offers you.

AWS will scale your code for you, depending on the number of requests it receives.
Not having to build and pay for servers is nice. Not having to build and pay for
them when your application suddenly goes viral can mean the difference between
survival and virtual death.

==================================
Running Java Code with AWS Lambda
==================================

1) Create Lambda Function with 'java 11' runtime

2) Upload jar file in 'Code Source'


3) Configure Handler in Runtime

Class Name : in.ashokit.LambdaHandler

Method Name : handleRequest

Handler Syntax : className :: methodName

Ex: in.ashokit.LambdaHandler::handleRequest
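
=> For reference, the same function can also be created and tested from the AWS CLI. A
minimal sketch; the role ARN, jar file name and function name below are placeholder
values.

# Create the function with the Java 11 runtime and the handler mentioned above
$ aws lambda create-function --function-name ashokit-java-fn --runtime java11 --handler in.ashokit.LambdaHandler::handleRequest --role arn:aws:iam::<account-id>:role/<lambda-exec-role> --zip-file fileb://sb-lambda.jar

# Invoke the function and inspect the response
$ aws lambda invoke --function-name ashokit-java-fn response.json
$ cat response.json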

===============
Cloud Formation
===============

=> Cloud Formation service is used to provision infrastructure in AWS Cloud.

=> Cloud Formation works based on 'Infrastructure as a code' (IAAC)

=> Cloud Formation supports JSON and YML configurations

Note: If we create infrastructure manually, it takes a lot of time and it is error
prone.

=> If we design a CloudFormation template to create the infrastructure, then we can
re-use that template.

Note: Cloud Formation service works only in AWS Cloud.

Note: The alternate for 'Cloud Formation' service is 'TERRAFORM' tool.

=> Terraform works with almost all cloud platforms available in the market.

==============================================
Creating EC2 instance using Cloud Formation
==============================================

=> Goto AWS Management Console and Navigate to 'Cloud Formation'

=> Click on Create Stack and upload below Template File

------------------- EC2 creation using YAML file -------------------

Description: Ashok IT - Build Linux Web Server

Parameters:
  LatestAmiId:
    Description: AMI Linux EC2
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

Resources:
  webserver1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: "t2.micro"
      ImageId: !Ref LatestAmiId
      SecurityGroupIds:
        - !Ref WebserverSecurityGroup
      Tags:
        - Key: Name
          Value: webserver1
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum update -y
          yum install httpd -y
          service httpd start
          chkconfig httpd on
          cd /var/www/html
          echo "<br>" >> index.html
          echo "<h2><b>Ashok IT EC2 Linux Demo</b></h2>" >> index.html

  WebserverSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable Port 80
      Tags:
        - Key: Name
          Value: webserver-sg
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
------------------------------------------------------------------------

=> Verify EC2 dashboard, we can see EC2 instance created

=> Access EC2 VM public in browser.
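
=> The same template can also be deployed from the AWS CLI (assuming the template above
is saved locally as webserver.yml; the stack name is a placeholder):

$ aws cloudformation create-stack --stack-name ashokit-webserver --template-body file://webserver.yml

# Check the stack status and clean up after practice
$ aws cloudformation describe-stacks --stack-name ashokit-webserver
$ aws cloudformation delete-stack --stack-name ashokit-webserver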

===================================================================================

===================
Ansible Tutorial
==================

-> Ansible is one among the DevOps configuration management tools, famous for its
simplicity.

-> It is open source software originally developed by Michael DeHaan and is now owned
by Red Hat.

-> Ansible is an open source IT Configuration Management, Deployment &
Orchestration tool.

-> This tool is very simple to use yet powerful enough to automate complex multi-
tier IT application environments.

-> Ansible is an automation tool that provides a way to define infrastructure as
code.

-> Infrastructure as code (IaC) simply means managing infrastructure by
writing code rather than using manual processes.

-> The best part is that you don’t even need to know the commands used to
accomplish a particular task.

-> You just need to specify what state you want the system to be in and Ansible
will take care of it.

-> The main components of Ansible are playbooks, configuration management and
deployment.

-> Ansible uses playbooks to automate deploy, manage, build, test and configure
anything

-> Ansible was written in Python

=================
Ansible Features
==================

-> Ansible manages machines in an agent-less manner using SSH

-> Built on top of Python and hence provides a lot of Python's functionality

-> YAML based playbooks

-> Uses SSH for secure connections

-> Follows push based architecture for sending configuration related notifications

===========================
Push Based Vs Pull Based
===========================

-> Tools like Puppet and Chef are pull based

-> Agents on the server periodically checks for the configuration information from
central server (Master)

-> Ansible is push based

-> Central server pushes the configuration information on target servers.


=====================
What Ansible can do ?
=====================
1) Configuration Management
2) App Deployment
3) Continous Delivery

==========================
How Ansible works ?
=======================

Ansible works by connecting to your nodes and pushing out a small program called
Ansible modules to them.

Then Ansible executes these modules and removes them when finished. The library of
modules can reside on any machine, and there are no daemons, servers, or databases
required.

The Management Node is the controlling node that controls the entire execution of
the playbook.

The inventory file provides the list of hosts where the Ansible modules need to be
run.

The Management Node makes an SSH connection and executes the small modules on the
host’s machine and install the software.

Ansible removes the modules once their job is finished.

It connects to the host machine, executes the instructions, and if the task is
completed successfully, removes the module code that was copied onto the host
machine.

Ansible basically consists of three components. Ansible requires the following
components in order to automate network infrastructure:

1) Controlling Nodes
2) Managed Nodes
3) Ansible Playbook

====================
Controlling Nodes
===================
Controlling Nodes are usually Linux servers that are used to access the switches,
routers and other network devices. These network devices are referred to as the
Managed Nodes.

==================
Managed Nodes
==================
Managed Nodes are stored in the hosts file for Ansible automation.

==================
Ansible Playbook
==================
Ansible Playbooks are expressed in YAML format and serve as the repository for the
various tasks that will be executed on the Managed Nodes (hosts).

Playbooks are a collection of tasks that will be run on one or more hosts.

=================
Inventory file
=================

Ansible's inventory hosts file is used to list and group your servers.

Its default location is /etc/ansible/hosts

Note: In inventory file we can mention IP address or Hostnames also

===========================================
Few Important Points About Inventory File
===========================================

-> Comments begin with the '#' character

-> Blank lines are ignored
-> Groups of hosts are delimited by '[header]' elements
-> You can enter hostnames or IP addresses
-> A hostname/IP can be a member of multiple groups
-> Ungrouped hosts are specified before any group headers, like below


=======================
Sample Inventory File
========================
# Blank lines are ignored

# Ungrouped hosts are specified before any group headers like below

192.168.122.1
192.168.122.2
192.168.122.3

[webservers]
192.168.122.1
#192.168.122.2
192.168.122.3

[dbserver]
192.168.122.1
192.168.122.2
ashokit-db1.com
ashokit-db2.com

================
Ansible Setup
================
==========================================================================
Step-1 :: Create 3 Amazon Linux VMs in AWS (Free Tier Eligible - t2.micro)
==========================================================================

1 - Control Node
2 - Managed Nodes

==================================================================================
Step-2: Connect to all the 3 VMs and execute below commands in all 3 machines
==================================================================================

1) Create user

$ sudo useradd ansible


$ sudo passwd ansible

(enter the password and confirm it)

2) Configure user in sudoers file

$ sudo visudo
ansible ALL=(ALL) NOPASSWD: ALL

3) Update sshd config file

$ sudo vi /etc/ssh/sshd_config

-> Comment out the line "PasswordAuthentication no"

-> Set "PermitEmptyPasswords yes"

4) Restart the server

$ sudo service sshd restart

Note: Do the above steps in all the 3 machines

============================================
Step-3 :: Install Ansible in Control Node
============================================

-> Switch to Ansible user

$ sudo su ansible
$ cd ~

-> Install Python

$ sudo yum install python3 -y

-> Check python version

$ python3 --version

-> Install PIP (It is a python package manager)


$ sudo yum -y install python3-pip

-> Install Ansible using Python PIP

$ pip3 install ansible --user

-> Verify ansible version

$ ansible --version

-> Create ansible folder under /etc

$ sudo mkdir /etc/ansible

=================================================================================
Step-4 : Generate SSH Key In Control Node and Copy SSH key into Managed Nodes
==================================================================================

1) Now generate SSH key in Ansible Server (Control Node):

$ sudo su ansible

# Generate ssh key using below command


$ ssh-keygen

2) Copy it to Managed Nodes as ansible user

$ ssh-copy-id ansible@<ManagedNode-Private-IP>

Ex : $ ssh-copy-id ansible@172.31.44.90

Note: Repeat above command by updating HOST IP for all the managed Servers.

3) Update Host Inventory in Ansible Server to add managed node servers details.

$ sudo vi /etc/ansible/hosts

[webservers]
172.31.47.247

[dbservers]
172.31.44.90

4) Use the ping module to test Ansible connectivity; a successful run returns "pong" from
each managed node.

# ping all managed nodes listed in host inventory file


$ ansible all -m ping

#ping only webservers listed in host inventory file


$ ansible webservers -m ping

#ping only dbservers listed in host inventory file


$ ansible dbservers -m ping

=========================
Ansible AD-HOC Commands
=========================

=> To run any ansible command we will follow below syntax

# ansible [ all / groupName / HostName / IP ] -m <Module Name> -a <args>

Note: Here -m is the module name and -a is the arguments to module.

Example:

# It will display date from all host machines.


$ ansible all -m shell -a date

# It will display uptime from all host machines.


$ ansible all -m shell -a uptime

There are two default groups: all and ungrouped. all contains every host; ungrouped
contains all hosts that don't belong to any other group.

=============
Ping Module
==============

# It will ping all the servers which you have mentioned in inventory file
(/etc/ansible/hosts)
$ ansible all -m ping

# It will display the output in single line.


$ ansible all -m ping -o

=======================
Shell Modules
======================

# Date of all machines


$ ansible all -m shell -a 'date'

# Redhat release of all the machines


$ ansible all -m shell -a 'cat /etc/*release'

# Kind of mount on all the machines


$ ansible all -m shell -a 'mount'

# Check the service status on all the machines


$ ansible all -b -m shell -a 'service sshd status'

# Here it will check the disk space use for all the nodes which are from dbservers
group
$ ansible dbservers -a "df -h"

# Here it will check the free memory for all the nodes which are from the webservers
group
$ ansible webservers -a "free -m"

# Here it will display date from webservers group


$ ansible webservers -a "date"

================
Yum Module
===============

# It will install vim package in all node machines which you have mentioned in host
inventory file.
$ ansible all -b -m yum -a "name=vim"

# Check git version in all machines


$ ansible all -m shell -a "git --version"

# to install git client in all node machines


$ ansible all -m shell -b -a "yum install git -y"

# To install git only in webserver nodes


$ ansible webservers -m shell -b -a "yum install git -y"

$ ansible all -m shell -b -a "name=git state=present"


$ ansible all -m shell -b -a "name=git state=latest"
$ ansible all -m shell -b -a "name=git state=absent"

present : install
latest : update to latest
absent : un-install

# to install any software in an ubuntu server we should use the apt package manager

$ ansible all -b -m apt -a "name=git state=present"

# To install httpd package in all node machines


$ ansible all -b -m yum -a "name=httpd state=present"

Note: Here state=present is not mandatory; it is the default.

# To update httpd package in all node machines.


$ ansible all -b -m yum -a "name=httpd state=latest"

# To remove httpd package in all node machines.


$ ansible all -b -m yum -a "name=httpd state=absent"

$ ansible all -b -m copy -a "src=index.html dest=/var/www/html/index.html"

# Start httpd service

$ ansible all -b -m service -a "name=httpd state=started"

$ ansible all -b -m shell -a "service httpd start"

Note: For privilege escalation we can use the -b option

Q) Irrespective of the underlying OS, which module can we use to manage packages (softwares) in Ansible ?

Ans) Ansible provides the "package" module, which works with the underlying OS package manager (yum, apt, etc.)
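# Example (a minimal sketch): the package module resolves to yum on RedHat-based nodes
# and apt on Debian-based nodes

$ ansible all -b -m package -a "name=git state=present"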

======================================
YAML ( YAML Ain't Markup Language )
=====================================

-> YAML is a human-friendly data serialization language (it is not a markup language)

-> We can make use of this language to store data and configuration in a human-
readable format.

-> YAML files will have .yml (or .yaml) as an extension

-> Official Website : https://yaml.org/

===================
Sample YML File Data
===================
Fruit: Apple
Vegetable: Carrot
Liquid: Water
Meat: Chicken

Array/List
++++++++++++
Fruits:
- Orange
- Apple
- Banana
- Guava

Vegetables:
- Carrot
- Cauliflower
- Tomato

Here the dash (-) indicates an element of the array/list.

name: Ashok
age: 29
phno: 123456
email: ashokitschool@gmail.com
hobbies:
- cricket
- dance
- singing

# person data in yml

person:
  id: 101
  name: Raju
  email: raju@gmail.com
  address:
    city: Hyd
    state: TG
    country: India
  job:
    companyName: IBM
    role: Tech Lead
    pkg: 25 LPA
  hobbies:
    - cricket
    - chess
    - singing
    - dance

# using --- hyphens to separate the documents

---
person:
  id: 101
  name: Raju
  email: raju@gmail.com
  address:
    city: Hyd
    state: TG
    country: India
  job:
    companyName: IBM
    role: Tech Lead
    pkg: 25 LPA
  hobbies:
    - cricket
    - chess
    - singing
    - dance
---
movie:
  name: Bahubali
  hero: Prabhas
  heroine: Anushka
  villain: Rana
  director: SS Rajamouli
  budget: 100cr
...

==============
Playbooks
============

-> Playbook is a single YAML file, containing one or more ‘plays’ in a list.

-> Plays are ordered sets of tasks to execute against host servers from your
inventory file.

-> Play defines a set of activities (tasks) to run on managed nodes.

-> Task is an action to be performed on the managed node

Examples are

a) Execute a command
b) Run a shell script
c) Install a package
d) Shutdown / Restart the hosts
Note : Playbooks YML / YAML starts with the three hyphens ( --- ) and ends with
three dots ( … )

Playbook contains the following sections:

1) Every playbook starts with 3 hyphens ‘---‘

2) Host section – Defines the target machines on which the playbook should run.
This is based on the Ansible host inventory file.

3) Variable section – This is optional and can declare all the variables needed in
the playbook. We will look at some examples as well.

4) Tasks section – This section lists out all the tasks that should be executed on
the target machine. It specifies the use of Modules. Every task has a name which is
a small description of what the task will do and will be listed while the playbook
is run.

=============================
Playbook To Ping All Host Nodes
=============================
---
- hosts: all
  gather_facts: no
  remote_user: ansible
  tasks:
    - name: Ping
      ping:
      remote_user: ansible
...

#hosts: The tasks will be executed on the specified group of servers.

#name: which is the task name that will appear in your terminal when you run the
playbook.

#remote_user: This parameter was formerly called just user. It was renamed in
Ansible 1.4 to make it more distinguishable from the user module (used to create
users on remote systems).

Note : Remote users can also be defined per task.

# Run the playbook Using below command


$ ansible-playbook <playbook-file-name>

# It will run the playbook.yml playbook in verbose

$ ansible-playbook playbook.yml -v
$ ansible-playbook playbook.yml -vv
$ ansible-playbook playbook.yml -vvv

# It will provide help on ansible-playbook command


$ ansible-playbook --help

# It will check the syntax of a playbook


$ ansible-playbook playbook.yml --syntax-check

# It will do in dry run.


$ ansible-playbook playbook.yml --check

# It will display which hosts would be affected by a playbook before running it
$ ansible-playbook playbook.yml --list-hosts

# It executes one-step-at-a-time, confirming each task before running with
# (N)o/(y)es/(c)ontinue

$ ansible-playbook playbook.yml --step

================================================
Install HTTPD + copy index.html + Start Service
=================================================

-> Create index.html file in the location where our playbook is created

-> Create yml file with below content

---
- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: httpd
        state: present
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Httpd Server
      service:
        name: httpd
        state: started
...

-> Execute the playbook yml using ansible-playbook command

===========
Variables
===========

-> Variables are used to store the data

ex: id = 101, name = ashok, age = 30

-> We can use 4 types of variables in Ansible

1) Runtime Variables
2) Playbook Variables
3) Group Variables
4) Host Variables

---
- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: "{{package_name}}"
        state: present
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: "{{package_name}}"
        state: started
...

=> We can pass variable value in run time like below

$ ansible-playbook <filename.yml> --extra-vars package_name=httpd

-> We can define variables with in the playbook also

---
- hosts: all
  become: true
  vars:
    package_name: httpd
  tasks:
    - name: Install Httpd
      yum:
        name: "{{package_name}}"
        state: present
    - name: Copy index.html
      template:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: "{{package_name}}"
        state: started
...

---
- hosts: all
  become: true
  tasks:
    - name: install software
      yum:
        name: "{{package_name}}"
        state: present
...

$ ansible-playbook filename --extra-vars package_name=mysql


=================
Group Variables
=================

-> In webservers i want to install git software

-> In dbservers i want to install mysql software

-> We can achieve this using group variables

-> group vars files should be created at host inventory location

-> host-inventory location : /etc/ansible

group_vars/all.yml
group_vars/<groupName>.yml

Ex:

$ sudo mkdir /etc/ansible/group_vars

$ sudo vi /etc/ansible/group_vars/webservers.yml
package: git

$ sudo vi /etc/ansible/group_vars/dbservers.yml
package: mysql
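=> A minimal playbook sketch that uses the "package" variable defined above (the playbook
file name install_sw.yml is only an example):

---
- hosts: all
  become: true
  tasks:
    - name: install group specific software
      yum:
        name: "{{package}}"
        state: present
...

$ ansible-playbook install_sw.yml

=> With the group_vars above, webservers hosts get git installed and dbservers hosts get mysql installed.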

============
Host vars
=============

-> Server specific variables

-> In one group we can have multiple servers

-> For every host if we want separate variables then we should go for host vars

$ sudo mkdir /etc/ansible/host_vars

-> create a file with the host name or ip

$ sudo vi /etc/ansible/host_vars/172.138.1.1.yml
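-> A small sketch of what such a host_vars file could contain (the variable name and value
here are only examples):

# /etc/ansible/host_vars/172.138.1.1.yml
package: tomcat

-> When a playbook refers to "{{package}}", this host picks the value from its host_vars
file (host_vars has higher precedence than group_vars).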

===================================================
Variable Value we can declare with in playbook

Variable value we can supply in runtime

Variable value we can declare in host_vars

Variable value we can declare in group_vars


==================================================

====================
Handlers and Tags
====================

-> In playbook all tasks will be executed by default.

-> Handlers are used to notify the tasks to execute.

-> Using Handlers we can execute tasks based on other tasks status

Note: If second task status is changed then only i want to execute third task in
playbook

-> To inform the handler to execute we will use 'notify' keyword

-> Using Tag we can map task to a tag-name

-> Using tag name we can execute particular task and we can skip particular task
also.

---
- hosts: all
  become: true
  gather_facts: yes
  vars:
    package_name: httpd
  tasks:
    - name: install httpd
      yum:
        name: "{{package_name}}"
        state: present
      tags:
        - install
    - name: Copy index.html
      copy:
        src: index.html
        dest: /var/www/html/
      tags:
        - copy
      notify:
        - Start Httpd Server
  handlers:
    - name: Start Httpd Server
      service:
        name: "{{package_name}}"
        state: started
...

# to display all tags available in playbook


$ ansible-playbook handlers_tags.yml --list-tags

# Execute a task whose tag name is install


$ ansible-playbook handlers_tags.yml --tags "install"

# Execute the tasks whose tags names are install and copy
$ ansible-playbook handlers_tags.yml --tags "install,copy"

# Execute all the tasks in playbook by skipping install task


$ ansible-playbook handlers_tags.yml --skip-tags "install"

===============
Ansible Vault
===============

-> It is used to secure our data

-> When we configure uname and pwd in variables files everybody can see them which
is not a good practice

-> When we are dealing with sensitive data then we should secure that data

-> Using Ansible Vault we can protect or secure our data

-> Using Ansible vault we can encrypt and we can decrypt data

En-cryption ---> Converting data from readable format to un-readable format

De-Cryption ---> Converting data from un-readable format to readable format

=======================
Ansible Vault Commands
=======================

$ ansible-vault encrypt <playbook-yml> : To encrypt our yml file

$ ansible-vault view <playbook-yml> : To see original data from encrypted file

$ ansible-vault edit <playbook-yml> : To edit data in original format

$ ansible-vault decrypt <playbook-yml> : To decrypt the file

$ ansible-vault rekey <playbook-yml> : To change vault password

-> To encrypt a playbook we need to set one vault password

-> while executing playbook we need to pass vault password

$ ansible-playbook <filename>.yml --ask-vault-pass

-> You can store vault password in a file and you can give that file as input to
execute playbook

$ vi vaultpass
$ ansible-playbook filename.yml --vault-password-file=~/vaultpass

# We can see encrypted file in human readable format


$ ansible-vault view /etc/ansible/group_vars/all.yml

# We can edit encrypted file in human readable format


$ ansible-vault edit /etc/ansible/group_vars/all.yml

# We can decrypt the file


$ ansible-vault decrypt /etc/ansible/group_vars/all.yml

# To update vault password we can use rekey


$ ansible-vault rekey /etc/ansible/group_vars/all.yml

===================================================================================
=======================

Q) Write a playbook to install maven in Web servers. In the webservers group few
machines are RED HAT based and few machines are Ubuntu based.

==> Amazon Linux / Cent OS / RED HAT OS -----------> yum as package manager

==> Ubuntu / Debian OS -------> apt as package manager

---
- hosts: webservers
  become: true
  tasks:
    - name: install maven
      yum:
        name: maven
        state: present
      when: ansible_os_family == 'RedHat'
    - name: install maven
      apt:
        name: maven
        state: present
      when: ansible_os_family == 'Debian'
...

================
Ansible Roles
================

-> Roles are a level of abstraction for Ansible configuration in a modular and
reusable format

-> As you add more and more functionality to your playbooks, they can become
difficult to maintain

-> Roles allow you to break down a complex playbook into separate, smaller chunks
that can be coordinated by a central entry point.

# Sample playbook with Role

---
- hosts: all
  become: true
  roles:
    - apache
...

1. Ansible roles consist of many playbooks; they are similar to modules in Puppet and
cookbooks in Chef. In Ansible we call them roles.

2. Roles are a way to group multiple tasks together into one container to do the
automation in a very effective manner with a clean directory structure.

3. Roles are a set of tasks and additional files for a certain role which allow you
to break up the configurations.

4. Roles can be easily reused by anyone if the role suits their requirement.

5. Roles are easy to modify and will reduce syntax errors.

==============================
How do we create Ansible Roles?
==============================

-> To create an Ansible role, use "ansible-galaxy" command which has the templates
to create it.

$ sudo su ansible

$ cd ~

$ mkdir roles

$ cd roles

$ ansible-galaxy init apache

Note: In the above command apache is the role name (we can give any name for the
role)

> where, ansible-galaxy is the command to create the roles using the templates.

> init is to initialize the role.

> apache is the name of the role

List out the directory created under apache

$ sudo yum install tree


$ tree apache

We have got the clean directory structure with the ansible-galaxy command. Each
directory must contain a main.yml file, which contains the relevant content.

============================
Role Directory Structure:
===========================

tasks –> contains the main list of tasks to be executed by the role.

handlers –> contains handlers, which will execute based on notify

defaults –> default variables for the role.

vars –> other variables for the role. Vars has the higher priority than defaults.

files –> contains files required to transfer or deployed to the target machines via
this role.
templates –> contains templates which can be deployed via this role.

meta –> defines some data / information about this role (author, dependency,
versions, examples, etc,.)

=============================================================
Lets take an example to create a role for Apache Web server
=============================================================

Below is a sample playbook to deploy Apache web server. Lets convert this playbook
code into Ansible role.

- hosts: all
  become: true
  tasks:
    - name: Install Httpd
      yum:
        name: httpd
        state: present
    - name: Copy index.html
      template:
        src: index.html
        dest: /var/www/html/index.html
    - name: Start Http Server
      service:
        name: httpd
        state: started

First, move on to the Ansible roles directory and start editing the yml files.

$ cd roles/apache

==============
1. Tasks
==============

-> Edit main.yml available in the tasks folder to define the tasks to be executed.

$ vi tasks/main.yml

---
# tasks file for roles/apache
- name: install httpd
  yum:
    name: httpd
    state: present
- name: Copy index.html
  copy:
    src: index.html
    dest: /var/www/html/
  notify:
    - restart apache

=========
2. Files
==========
-> Copy required files into files directory or create index.html file with content

$ vi files/index.html

<write content here>

==========
3. Handlers
==========

Edit handlers main.yml to restart the server when there is a change. Because we
have already defined it in the tasks with notify option. Use the same name “restart
apache” within the main.yml file as below.

$ vi handlers/main.yml

- name: restart apache
  service:
    name: httpd
    state: restarted

We have got all the required files for Apache role. Lets apply this role into the
ansible playbook “runsetup.yml” as below to deploy it on the client nodes.

$ vi /home/ansible/runsetup.yml

---
- hosts: all
  become: true
  roles:
    - apache
...

-> Execute playbook which contains apache role

$ ansible-playbook runsetup.yml

If you have created multiple roles, you can use the below format to add them in the
playbook

---
- hosts: all
  become: true
  roles:
    - apache
    - jenkins
    - java
    - maven
    - sonar
...

======================================
💥 *What We Learnt in Ansible* 💥
======================================
1) What is Configuration Management
2) What is Ansible
3) Advantages of Ansible
4) Push Based Vs Pull Based Mechanism
5) Ansible Installation
6) Ansible Architecture
7) Host Inventory File
8) Host Groups in Inventory
9) Ansible Ad Hoc Commands
10) YAML
11) Working with YAML in VS Code IDE
12) Playbook Introduction
13) Playbook Commands
14) Writing Playbooks
15) Ansible Modules
16) Variables (local + runtime + group + host)
17) Handlers
18) Tags
19) Ansible Roles
20) Ansible Vault
21) Playbook for Multiple OS Family Based Hosts
22) Ansible Tower (Theory)

===========
Terraform
===========

=> It is free and open source software developed by HashiCorp.

=> Terraform is used to create/provision infrastructure in cloud platform.


=> Terraform is called as IAAC software

============ IAAC : Infrastructure as a code ================

=> Instead of creating infrastructure manually using GUI, we can write the code to
create infrastructure

=> Terraform supports almost all cloud platforms.

Ex: AWS, Azure, GCP etc...

=> We will use HCL (Hashicorp Configuration Language) to write the infrastructure
code.

==============================
Terraform Vs Cloud Formation
==============================

=> Cloud Formation is used to create infrastructure only in AWS cloud.

=> Terraform supports almost all cloud platforms available in the market.

======================
Terraform Vs Ansible
======================

=> Terraform is used for infrastructure provisioning.

Ex :

1) Create EC2 VM
2) Create S3 Bucket
3) Create RDS instance
4) Create IAM user etc....

=> Ansible is used for configuration management.

Ex:

1) Install a software in multiple machines


2) Restart Services
3) Copy Files etc...

========================
Terraform Installation
========================

1) Create Linux VM in AWs Cloud

2) Connect with Linux VM using MobaXterm

3) Execute below commands to setup Terraform

$ sudo yum install -y yum-utils shadow-utils

$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

$ sudo yum -y install terraform

4) Verify terraform installation

$ terraform -v

=======================
Terraform Architecture
=======================

=> Write terraform script using HCL and save it with .tf extension.

=> Execute Terraform commands

terraform init : Initialize terraform

terraform fmt : Format terraform script indent spacing (optional)

terraform validate: Verify terraform script syntax is valid or not

terraform plan : Create Execution plan for the terraform script

terraform apply : Create actual resource in cloud based on given plan

terraform destroy : It is used to delete the resources created with our terraform
script.
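=> A typical end-to-end run looks like below (a sketch; main.tf is just an example file name):

$ vi main.tf            # write the terraform script
$ terraform init        # download the required provider plugins
$ terraform fmt         # (optional) fix indentation/formatting
$ terraform validate    # check the syntax
$ terraform plan        # preview what will be created/changed
$ terraform apply       # create the resources (add -auto-approve to skip the confirmation prompt)
$ terraform destroy     # delete the resources when they are no longer needed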

============================
Terraform Script Syntax
============================

provider "aws" {
region = "ap-south-1"
access_key = "AKIASAEUF6C7IADBKTM3"
secret_key = "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"
}

resource "aws_instance" "linux-vm"{


ami = "ami-02e94b011299ef128"
instance_type = "t2.micro"
key_name = "ashokitkeypair"
security_groups = ["default"]
tags = {
Name = "AshokIT-Linux-VM"
}
}

====================================
Creating Multiple EC2 Instances
====================================
provider "aws" {
region = "ap-south-1"
access_key = "AKIASAEUF6C7B54ZV3WH"
secret_key = "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"
}

resource "aws_instance" "linux-vm"{


count = "2"
ami = "ami-02e94b011299ef128"
instance_type = "t2.micro"
key_name = "ashokitkeypair"
security_groups = ["default"]
tags = {
Name = "AshokIT-Linux-VM"
}

======================================
Dealing with Access Key & Secret key
======================================

=> Instead of configuring access key & secret in terraform script file we can
configure them as environment variables.

$ export AWS_ACCESS_KEY_ID="<your-access-key>"

$ export AWS_SECRET_ACCESS_KEY="<your-secret-key>"

=> Verify environment variable values

$ echo $AWS_ACCESS_KEY_ID

$ echo $AWS_SECRET_ACCESS_KEY

==================================
Working with User Data in EC2 VM
==================================

// create script file


$ vi installHttpd.sh

#! /bin/bash

sudo su
yum install httpd -y
cd /var/www/html
echo "<h1>Welcome To Ashok IT</h1>" > index.html
service httpd start

// Give execution permission


$ chmod u+x installHttpd.sh

// create resource script


resource "aws_instance" "linux-vm" {
ami = "ami-02e94b011299ef128"
instance_type = "t2.micro"
key_name = "ashokitkeypair"
security_groups = ["default"]
user_data = file("installHttpd.sh")
tags = {
Name = "AshokIT-Linux-VM"
}
}

========================
Variables in Terraform
=======================

=> Variables are used to store the data in key-value format

Ex:
id = 101
name = ashok

=> We can remove hard coded values from terraform resource script using Variables.

=> We will maintain variables in a separate tf file

$ vi vars.tf

variable "ami" {
description = "Amazon Machine AMI Value"
default = "ami-02e94b011299ef128"
}

variable "instance_type"{
description = "Represents Instance Type"
default = "t2.micro"
}

$ vi main.tf

resource "aws_instance" "linux-vm" {


ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "ashokitkeypair"
security_groups = ["default"]
user_data = file("installHttpd.sh")
tags = {
Name = "AshokIT-Linux-VM"
}
}

==========================================
Create S3 Bucket Using Terraform Script
==========================================

provider "aws"{
region = "ap-south-1"
}
resource "aws_s3_bucket" "ashokits3bucket" {

bucket = "ashokits3001"
acl = "private"

versioning {
enabled = true
}
}

=====================================================
Create RDS Instance (MySQL) Using Terraform Script
======================================================

provider "aws"{
region = "ap-south-1"
access_key = "AKIASAEUF6C7IADBKTM3"
secret_key = "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"
}

resource "aws_db_instance" "ashokitrds" {

allocated_storage = 100
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
identifier = "mydb"
username="ashokit"
password="ashokit123"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true

==========================
Ex - 1 : Output Variables
==========================

// create provider info in provider.tf file

provider "aws" {
  region = "ap-south-1"
}

// create variable in inputs.tf file

variable "iam_user_name" {
  description = "IAM User name declared here"
  default     = "my_iam_user_raju"
}

// create resource in main.tf file

resource "aws_iam_user" "my_iam_user" {
  name = "${var.iam_user_name}"
}

// create output in outputs.tf file

output "my_iam_user_complete_details" {
  value = aws_iam_user.my_iam_user
}

==========================
Ex - 2 : Output Variables
==========================

// create provider info in provider.tf file

provider "aws" {
  region = "ap-south-1"
}

// create variable in inputs.tf file

variable "ami" {
  description = "Amazon Machine Image value"
  default     = "ami-057752b3f1d6c4d6c"
}

variable "instance_type" {
  description = "Amazon Instance Type"
  default     = "t2.micro"
}

// create resource in main.tf file

resource "aws_instance" "ec2_vm" {
  ami             = "${var.ami}"
  instance_type   = "${var.instance_type}"
  key_name        = "ashokitkeypair"
  security_groups = ["default"]
  tags = {
    Name = "AshokIT-EC2-VM"
  }
}

// create output in outputs.tf file

output "ec2_vm_info_complete_info" {
  value = aws_instance.ec2_vm
}

output "ec2_vm_public_ip" {
  value = aws_instance.ec2_vm.public_ip
}

=====================================
Input Variables & Output Variables
=====================================

Input Variables are used to supply values to terraform script

Output Variables are used to get values from terraform after script got executed.

Ex-1: After IAM user got created, print user details as output

Ex-2: After EC2 VM got created, print EC2 VM public ip


Ex-3: After S3 bucket got created, print S3 bucket info

Ex-4: After RDS DB got created, print DB endpoint

===================
Terraform Modules
===================

=> A Terraform module is a set of Terraform configuration files in a single directory

=> Terraform modules are used to separate resource-wise configuration

=> If we use modules concept then managing terraform scripts will become easy

=========
main.tf
========

module "my_ec2_vm" {

source = "/modules/ec2"

module "my_iam_user" {

source = "/modules/iam_user"
}
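=> A rough sketch of what the ec2 module directory could contain (the file names and
variable names below are only illustrative, not from the course repo):

modules/ec2/variables.tf
  variable "ami" {}
  variable "instance_type" {}

modules/ec2/main.tf
  resource "aws_instance" "vm" {
    ami           = var.ami
    instance_type = var.instance_type
  }

=> The root module can then pass values while calling it:

module "my_ec2_vm" {
  source        = "./modules/ec2"
  ami           = "ami-02e94b011299ef128"
  instance_type = "t2.micro"
}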

=============
Summary
=============

1) What is Infrastructure as a code (IAAC)


2) What is Terraform
3) Terraform Vs Cloud Formation
4) Terraform Vs Ansible
5) Terraform Setup
6) Terraform Architecture
7) Terraform Commands (init, validate, fmt, plan, apply, destroy)
8) Terraform State File
9) How to seperate provider & resource (provider.tf & main.tf)
10) Configure Access Keys as Environment Variables
11) Variables in Terraform
12) Create EC2 instance using Terraform Script
13) Working With User Data in Terraform
14) Create S3 bucket using Terraform
15) Create RDS Instance using Terraform
16) IAM User creation
17) VPC with EC2
18) Output Variables
19) Terraform Modules
Git Repo :: https://github.com/ashokitschool/Terraform_Projects.git

========
Agile
=======

=> Agile Methodology is one of the most famous SDLC methodologies in the market

-> Planning + Design + Development + Testing + Delivery is a continuous process.

=> In Agile methodology we deliver the project to the client in multiple releases.
They are called Sprints.

Ex: Sprint-1, Sprint-2, Sprint-3....

=> In Agile methodology every day we will have a Scrum meeting.

Note: Scrum meeting is also called a Status call / standup call.

===================
Agile Terminology
===================

1) Backlog Grooming
2) Story
3) Story Points
4) Backlog
5) Sprint Planning
6) Sprint
7) Scrum
8) Retrospective

=> Backlog grooming is a meeting where all the team members will discuss about
pending works/items in the project.

=> Story means a task which we need to work on.

=> Story points will represent duration to complete the story

3 points - 1 day

5 points - 2 days

8 points - 3 days

=> Backlog means the stories which are pending in the project

=> Sprint Planning is a meeting in which the team will discuss priority stories to
complete

=> Sprint represents set of stories to complete in given duration

Note: Sprint duration will be 2 weeks (industry standard)

=> Scrum is a meeting in which all team members will give work updates to scrum
master.

=> Retrospective is a meeting in which team members will discuss the sprint
review.

1) what went well


2) what went wrong
3) any lessons learnt
4) any improvements required
5) new ideas....

=======
JIRA
=======

=> JIRA s/w is developed by Atlassian company

=> JIRA is called as Project Management software

=> JIRA is used to manage project work.

1) Story creation
2) Story allocation
3) Reports generation etc...

=> JIRA is also used for Bug Reporting....

=> JIRA is a licensed s/w. We can use trial version for practice.

==========================
Application Architecture
==========================

1) Frontend : User Interface

2) Backend : Business Logic

3) Database : Persistence Store (data)

===========================
Tech Stack Of Application
===========================

Frontend Technology : Angular 13

Backend Technology : Java 17

Database : MySQL 8.5

WebServer : Tomcat 9.0

==========================
Application Environments
==========================
1) DEV
2) SIT
3) UAT
4) PILOT
5) PROD

=> As a DevOps engineer we are responsible for setting up the infrastructure to run our
application

=> We need to install all required softwares (dependencies) to run our application

Note: We need to setup dependencies in all environments to run our application.

Note: There is a chance to get environmental issues.

=> To simplify application execution in any machine we can use Docker

==================
What is Docker ?
==================

=> Docker is a free software

=> Docker is used for containerization

=> With the help of containerization we can run our app in any machine.

Container = App Code + Required Dependencies

=> Docker will take care of dependencies installation required for app execution.

=> We can make our application portable using Docker.

=====================
Docker Architecture
=====================

1) Dockerfile
2) Docker Image
3) Docker Registry
4) Docker Container

=> Dockerfile is used to specify where is app code and what dependencies are
required for our application execution.

Note: Dockerfile is required to build docker image.

=> Docker Image is a package which contains code + dependencies

=> Docker Registry is used to store Docker Images.

=> Docker Container is used to run our application.

Note: When we run a docker image a docker container will start. A docker container is a
lightweight, isolated Linux runtime environment (it shares the host kernel, so it is not a full virtual machine).
=============================
Install Docker in Linux VM
=============================

Step-1 : Create EC2 VM (amazon linux)

Step-2 : Execute below commands

# Install Docker
$ sudo yum update -y
$ sudo yum install docker -y
$ sudo service docker start

# Add ec2-user user to docker group


$ sudo usermod -aG docker ec2-user

# Exit from terminal and Connect again (press R)


$ exit

# Verify Docker installation

$ docker -v

==================
Docker Commands
==================

docker images : To display docker images available in our system

docker ps : To display running docker containers in our system

docker ps -a : To display running & stopped container

docker pull <image-name> : To download docker image from docker registry

docker rmi <image-id> : To delete docker image

docker run <image-id> : To create docker container

docker stop <container-id> : To stop running container

docker rm <container-id> : To remove stopped container

######## Running hello-world docker image ########

$ docker run hello-world

########## Running spring boot app docker image ##########

$ docker pull ashokit/spring-boot-rest-api

$ docker run -d -p 9090:9090 ashokit/spring-boot-rest-api

Note: Enable 9090 in security group inbound rules

App URL : http://public-ip:9090/welcome/ashok

############################################################
# Getting container logs
$ docker logs <container-id>

# Remove stopped containers and un-used images


$ docker system prune -a

===========
Dockerfile
===========

=> Dockerfile contains set of instructions to build docker image

=> Dockerfile contains below keywords

1) FROM
2) MAINTAINER
3) RUN
4) CMD
5) COPY
6) ADD
7) WORKDIR
8) EXPOSE
9) ENTRYPOINT
10) USER

======
FROM
======

=> It is used to specify base image for our application

Ex:

FROM openjdk:17

FROM python:3.3

FROM node:16.0

FROM mysql:8.5

============
MAINTAINER
============

=> MAINTAINER is used to specify who is author of this Dockerfile

EX:

MAINTAINER Ashok <ashok.b@oracle.com>

=====
RUN
=====

=> RUN keyword is used to specify instructions to execute at the time of docker
image creation.
EX:

RUN 'git clone <repo>'


RUN 'mvn clean package'

Note: We can write multiple RUN instructions in single docker file and all those
instructions will be processed in the order.

=====
CMD
=====

=> CMD keyword is used to specify instructions to execute at the time of docker
container creation.

EX:

CMD 'java -jar app.jar'

Note: We can write multiple CMD instructions in single docker file but docker will
process only last CMD instruction.

========
COPY
=========

=> It is used to copy files from host machine to container machine

EX:

COPY target/app.jar /usr/app/app.jar

COPY target/webapp.war /usr/app/tomcat/webapp/

COPY app.py /usr/app/

=============
ADD Keyword
=============

=> It is also used to copy files from source to destination.

EX:

ADD target/app.jar /usr/app/app.jar

ADD <http-url> /usr/app/app.jar

Note: ADD keyword supports a URL as the source.

==========
WORKDIR
==========

=> It is used to set working directory (directory navigation)


EX:

WORKDIR /usr/app

Note: Once WORKDIR is executed the remaining dockerfile keywords will execute from
workdir path.

======
USER
======

=> It is used to set USER to run commands

========
EXPOSE
========

=> It is used to specify on which port number our application will run in container

Ex:

EXPOSE 8080

Note: It is only to provide information. We can't change the container port using
EXPOSE.

===========
ENTRYPOINT
===========

=> ENTRYPOINT is used to execute instructions when the docker container is being created.

Ex:

ENTRYPOINT["java", "-jar", "app.jar"]

ENTRYPOINT["python", "app.py"]

====================
sample Dockerfile
====================

FROM ubuntu

MAINTAINER Ashok <ashokit@gmail.com>

RUN echo 'hello-run-1'


RUN echo 'hello-run-2'

CMD echo 'Hi-cmd-1'


CMD echo 'Hi-cmd-2'

===========================

# Create Docker Image


$ docker build -t <image-name> .
# List Docker Images
$ docker images

# Run docker image


$ docker run <image-id>

========================================
How to push docker image to docker hub
========================================

$ docker build -t <image-name> .

$ docker tag <img-name> <tag-name>

$ docker images

$ docker login

$ docker push <tag-name>
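# Example (a sketch; replace 'ashokit' with your own Docker Hub username and 'mywebapp'
# with your image name)

$ docker build -t mywebapp .
$ docker tag mywebapp ashokit/mywebapp:1.0
$ docker login
$ docker push ashokit/mywebapp:1.0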

===================================
Can we change Dockerfile name ?
==================================

Yes, we can do that.

Note: We need to pass modified file name as input to build docker image

$ docker build -t <img-name> -f <file-name> .

====================================
Dockerizing Java Web Application
====================================

=> Java web app will be packaged as a 'war' file

=> To run war file we need a webserver (Ex: Tomcat)

=> War file we need to deploy in tomcat server 'webapps' folder

============= Dockerfile for Java web app ================

FROM tomcat:8.0.20-jre8

MAINTAINER Ashok <ashok@oracle.com>

EXPOSE 8080

COPY target/maven-web-app.war /usr/local/tomcat/webapps/

==========================================================

Java Web App Git Repo : https://github.com/ashokitschool/maven-web-app.git

$ git clone https://github.com/ashokitschool/maven-web-app.git

$ cd maven-web-app
$ mvn clean package

$ ls -l target

$ docker build -t <img-name> .

$ docker images

$ docker run -d -p <host-port:container-port> <img-name>

=> Enable host port number in security group and access our application

URL : http://public-ip:host-port/maven-web-app/

============================================
Can we get into docker container machine ?
===========================================

Yes, using below commands

$ docker ps

$ docker exec -it <container-id> /bin/bash

$ pwd

$ ls -l webapps

$ exit

==========================================
Dockerizing Java Spring Boot Application
==========================================

=> Spring Boot is a java based framework to develop applications

=> Spring boot applications will be packaged as a jar file

=> To run spring boot application we need to run Jar file.

$ java -jar <jar-file-name>

Note: Spring Boot uses Tomcat as embedded server with 8080 as port number.

============== Dockerfile For Spring Boot App ======================

FROM openjdk:17

MAINTAINER Ashok <ashok@oracle.com>

COPY target/sbapp.jar /usr/app/

WORKDIR /usr/app/

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "sbapp.jar"]

======================================================================

Spring Boot App Git Repo : https://github.com/ashokitschool/spring-boot-docker-app.git

$ git clone https://github.com/ashokitschool/spring-boot-docker-app.git

$ cd spring-boot-docker-app

$ mvn clean package

$ ls -l target

$ docker build -t sbapp .

$ docker images

$ docker run -d -p 9090:8080 sbapp

$ docker ps

Note: Enable 9090 in security group

=> Access application with URL

URL : http://public-ip:host-port/

=====================================
Dockerize Python Flask Application
=====================================

=> Python is a scripting language

=> We don't need any build tool for python app

=> Directly we can run python programs

=> Flask is a python library to develop rest apis in python.

=> To download flask library we will use 'python pip software'

Note: We will configure dependencies in requirements.txt

=============== Dockerfile for Python Flask App =================

FROM python:3.6

MAINTAINER Ashok Bollepalli "ashokitschool@gmail.com"

COPY . /app

WORKDIR /app
EXPOSE 5000

RUN pip install -r requirements.txt

ENTRYPOINT ["python", "app.py"]

===================================================================================
===

Python App Git Repo : https://github.com/ashokitschool/python-flask-docker-app.git

$ git clone https://github.com/ashokitschool/python-flask-docker-app.git

$ cd python-flask-docker-app

$ docker build -t <img-name> .

$ docker run -d -p 5000:5000 <img-name>

$ docker ps

Note: Enable 5000 in security group

=> Access application with URL

URL : http://public-ip:host-port/

===================================================================================
====

Task : Dockerize Angular & React Applications

===================================================================================
====

1) Application Architecture
2) Tech Stack of application
3) Challenges in Application Deployments
4) App Environments
5) Containerization
6) Docker
7) Docker Architecture
8) What is Dockerfile
9) Dockerfile Keywords
10) How to build Docker images
11) How to push docker img to docker registry
12) How to pull docker images
13) How to run docker images
14) Port mapping & detached mode
15) Container Logs
16) Java web app with Docker
17) Spring Boot App with Docker
18) Python App with Docker

================
Docker Network
===============

=> Network is all about communication

=> Docker network is used to provide isolated network for containers

=> If we run 2 containers under the same network then one container can communicate
with another container.

=> By default we have 3 networks in Docker

1) bridge
2) host
3) none

=> Bridge network is used to run standalone containers. This will assign one IP for
container. This is default network driver for our container.

=> Host network is used to run standalone containers. The container shares the host
machine's network, so no separate ip is assigned to the container.

=> None means no network will be available.

=> We can use 2 other networks also in docker

1) Overlay
2) MacvLan

=> Overlay network is used for Orchestration purpose (Docker Swarm)

=> Macvlan network will assign a physical IP (with its own MAC address on the physical network) for our container.

# Display docker networks


$ docker network ls

# create docker network


$ docker network create ashokit-nw

# remove docker network


$ docker network rm ashokit-nw

# inspect docker network


$ docker network inspect ashokit-nw

# Create Docker container with custom network


$ docker run -d --network ashokit-nw -p 9090:8080 sbapp
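# Example (a sketch): containers on the same custom network can reach each other by
# container name (the names 'mydb' and 'myapp' below are only examples)

$ docker run -d --name mydb --network ashokit-nw -e MYSQL_ROOT_PASSWORD=root mysql:5.7

$ docker run -d --name myapp --network ashokit-nw -p 9090:8080 sbapp

# inside 'myapp' the database is reachable with hostname 'mydb' (ex: jdbc:mysql://mydb:3306/sbms)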

1) Docker Network
2) Docker Compose
3) Docker Volumes
4) Docker Swarm

===============
Docker Compose
===============
=> It is used to manage Multi - Container Based applications

=> Now a days projects are developing based on Microservices architecture.

=> Microservices means multiple backend apis will be available

Ex:

HOTELS-API
TRAINS-API
CABS-API
FLIGHTS-API

=> For every API we need to create seperate container.

Note: When we have multiple containers like this management will become very
difficult (create / stop / start)

=> To overcome these problems we will use Docker Compose.

=> In docker compose, using single command we can create / stop / start multiple
containers at a time.

=> For Docker Compose we need to provide containers information in YML file i.e
docker-compose.yml

=> docker-compose.yml file contains all containers information.

=> The default file name is docker-compose.yml (we can change it).

===========================docker-compose.yml==========================

version : It represents compose yml version

services: It represents containers info (image-name, port mapping etc..)

networks: Represents docker network to run our containers

volumes : Represents containers storage location

=======================================================================

======================
Docker Compose Setup
======================

# Download docker compose


$ sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Give permission
$ sudo chmod +x /usr/local/bin/docker-compose

# Check docker compose is installed or not


$ docker-compose --version

================================================
Spring Boot with MySQL DB Using Docker Compose
================================================
version: "3"
services:
application:
image: sbapp
ports:
- 8080:8080
networks:
- ashokit-sbapp-nw
depends_on:
- mysqldb
volumes:
- /data/sb-app
mysqldb:
image: mysql:5.7
networks:
- ashokit-sbapp-nw
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=sbms
volumes:
- /data/mysql
networks:
ashokit-sbapp-nw:

================================
Application Execution Process
================================

# clone git repo


$ git clone https://github.com/ashokitschool/spring-boot-mysql-docker-compose.git

# go inside project directory


$ cd spring-boot-mysql-docker-compose

# build project using maven


$ mvn clean package

# build docker image


$ docker build -t spring-boot-mysql-app .

# check docker images


$ docker images

# create containers using docker-compose


$ docker-compose up -d

# check docker containers


$ docker-compose ps

### Access application & insert few records for testing.

URL : http://public-ip:8080/

# go inside db container
$ docker exec -it <db-container-name> /bin/bash

# connect to mysql-db
$ mysql -h localhost -u root -proot
# check databases
mysql> show databases;

# select database
mysql> use sbms;

# show tables
mysql> show tables;

# select table data
mysql> select * from book;

# comeout from mysql-db & from container


$ exit
$ exit

====================================
Stateful Vs Stateless Containers
====================================

Stateless container : Data will be deleted after container got deleted

Stateful Container : Data will be maintained permanently

Note: Docker Containers are stateless container (by default)

Note: In the above springboot application we are using mysql db to store the data. When
we re-create containers we lose our data (this is not acceptable in realtime).

=> Even if we deploy the latest code or if we re-create containers we should not lose
our data.

=> To maintain data permanently we need to make our container a Stateful Container.

=> To make a container stateful, we need to use the Docker Volumes concept.

================
Docker Volumes
================

=> Volumes are used to persist the data which is generated by Docker container

=> Volumes are used to avoid data loss

=> Using Volumes we can make a container a stateful container

=> We have 3 types of volumes in Docker

1) Anonymous Volume ( No Name )


2) Named Volume
3) Bind Mounts

# Display docker volumes


$ docker volume ls
# Create Docker Volume
$ docker volume create <vol-name>

# Inspect Docker Volume


$ docker volume inspect <vol-name>

# Remove Docker Volume


$ docker volume rm <vol-name>

# Remove all volumes


$ docker system prune --volumes
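# Example (a sketch): attaching a named volume to a mysql container so the data survives
# container re-creation (the volume and container names below are only examples)

$ docker volume create ashokit-mysql-data

$ docker run -d --name mysqldb -e MYSQL_ROOT_PASSWORD=root -v ashokit-mysql-data:/var/lib/mysql mysql:5.7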

====================================================
Making Docker Container Statefull using Bind Mount
====================================================

=> Create a directory on host machine

$ mkdir app

=> Map 'app' directory to container in docker-compose.yml file like below

version: "3"
services:
application:
image: spring-boot-mysql-app
ports:
- "8080:8080"
networks:
- springboot-db-net
depends_on:
- mysqldb
volumes:
- /data/springboot-app

mysqldb:
image: mysql:5.7
networks:
- springboot-db-net
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=sbms
volumes:
- ./app:/var/lib/mysql
networks:
springboot-db-net:

=> Start Docker Compose Service

$ docker-compose up -d

=> Access the application and insert data

=> Delete Docker Compose service using below command

$ docker-compose down

=> Again start Docker Compose service


$ docker-compose up -d

=> Access application and see data (it should be available)

==================
Docker Swarm
==================

-> It is a container orchestration platform

-> Orchestration means managing processes

-> Docker Swarm is used to setup Docker Cluster

-> Cluster means group of servers

-> We will setup Master and Worker nodes using Docker Swarm cluster

-> Master Node will schedule the tasks (containers) and manage the nodes and node
failures

-> Worker nodes will perform the action (containers will run here) based on master
node instructions

-> Docker swarm is embedded in the Docker engine ( No need to install Docker Swarm
separately )

==================
Swarm Features
==================
1) Cluster Management
2) Decentralize design
3) Declarative service model
4) Scaling
5) Load Balancing

============================
Docker Swarm Cluster Setup
============================

-> Create 3 EC2 instances (ubuntu) & install docker in all 3 instances using below
2 commands

$ curl -fsSL https://get.docker.com -o get-docker.sh


$ sudo sh get-docker.sh

Note: Enable 2377 port in security group for Swarm Cluster Communications

1 - Master Node
2 - Worker Nodes

-> Connect to Master Machine and execute below command


# Initialize docker swarm cluster
$ sudo docker swarm init --advertise-addr <private-ip-of-master-node>

Ex : $ sudo docker swarm init --advertise-addr 172.31.41.217

# Get Join token from master (this token is used by workers to join with master)
$ sudo docker swarm join-token worker

Note: Copy the token and execute in all worker nodes with sudo permission

Ex: sudo docker swarm join --token SWMTKN-1-4pkn4fiwm09haue0v633s6snitq693p1h7d1774c8y0hfl9yz9-8l7vptikm0x29shtkhn0ki8wz 172.31.37.100:2377

-> In Docker swarm we need to deploy our application as a service.

====================
Docker Swarm Service
====================

-> Service is collection of one or more containers of same image

-> There are 2 types of services in docker swarm

1) Replica (default mode)
2) Global

$ sudo docker service create --name <serviceName> -p <hostPort>:<containerPort> <imageName>

Ex : $ sudo docker service create --name java-web-app -p 8080:8080 ashokit/javawebapp

Note: By default 1 replica will be created

Note: We can access our application using below URL pattern

URL : http://master-node-public-ip:8080/java-web-app/
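# For the 'global' mode mentioned above, swarm schedules one container on every node in
# the cluster (a sketch; the service name 'node-monitor' is only an example)

$ sudo docker service create --name node-monitor --mode global <imageName>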

# check the services created


$ sudo docker service ls

# we can scale docker service


$ docker service scale <serviceName>=<no.of.replicas>

# inspect docker service


$ sudo docker service inspect --pretty <service-name>

# see service details


$ sudo docker service ps <service-name>

# Remove one node from swarm cluster


$ sudo docker swarm leave

# remove docker service


$ sudo docker service rm <service-name>

*Docker summary*

1) What is Docker
2) Docker Advantages
3) Docker Architecture
4) Dockerfile & keywords
5) Docker Images
6) Docker Registry
7) Docker Containers
8) Docker Network
9) Docker Volumes
10) Docker Compose
11) Docker Swarm
12) Java Web App + Docker
13) Spring Boot + Docker
14) Python Flask + Docker
15) Angular + Docker
16) React JS + Docker
17) DOT Net + Docker
18) Spring Boot + MySQL + Docker Compose

=======================
Docker Vs Kubernetes
=======================

1) Containerization : Running our application inside container

Ex: Docker Containers

=> Packaging our app code and dependencies as single unit for execution is called
as Containerization.

2) Orchestration : Managing Containers

Ex: Docker Compose, Docker Swarm, Kubernetes, Open Shift...

============
Kubernetes
============

=> It is an open source s/w

=> K8S is called as Orchestration Tool

=> K8S is used to manage our containers

(create/stop/start/remove/scale-up/scale-down)

=> K8S software developed by Google company

=> K8S software developed using Go Language.

=> Kubernetes is also called as K8S


=====================
Advantages with K8S
=====================

1) Auto Scaling

2) Self Healing

3) Load Balancing

==================
K8S Architecture
==================

=> K8S will follow cluster architecture

=> Cluster means group of servers

=> In K8S cluster we will have Master Node & Worker Nodes

Note: Master Node is called as Control Plane.

=> Master Node will receive the request and will assign tasks to worker nodes.

=> Worker Nodes will perform the action based on task given by Master node.

=======================
K8S Cluster Components
=======================

1) Control Plane (Master Node)

- API Server
- Scheduler
- Controller Manager
- ETCD

2) Worker Node

- Kubelet
- Kube Proxy
- Docker Runtime
- POD

Note: Kubectl is used to communicate with K8S cluster.

=> API Server will receive the request given by Kubectl and will store the request
into etcd.

=> ETCD is an internal db of the k8s cluster.

=> Scheduler will identify pending requests in ETCD and will identify a worker node
to schedule the task.

Note: Scheduler will communicate with the worker node using Kubelet.

=> Kubelet is called as Node Agent. It will maintain all the info related to the worker
node.

=> Kube Proxy will provide the network for cluster communication.

=> Controller-manager will verify whether all the tasks are working as expected or not.

=> In every Worker Node, Docker Engine will be available to run Docker Containers.

=> In K8S docker container will be created inside POD.

=> POD is a smallest building block that we can deploy in k8s cluster.

Note: Inside one POD we can create multiple containers.

Note: In K8S, everything will be represented as POD.

===================
K8S Cluster Setup
===================

1) Mini Kube (Single Node Cluster) - Only For Practice

2) Kubeadm Cluster (Self Managed Cluster)

3) Provider Managed Cluster (Ex : AWS EKS, Azure AKS, GCP GKE....) - Realtime.

================
AWS EKS Cluster
================

EKS - Elastic Kubernetes Service

=> EKS provides readymade Kubernetes Control Plane.

=> EKS is highly scalable and robust solution to run k8s components.

Note: EKS is not free.

EKS Setup : https://github.com/ashokitschool/DevOps-Documents/blob/main/EKS-Setup.md

==========================
Kubernetes Resources
==========================

1) PODS
2) Services
3) Namespaces
4) ReplicationController (RC) - Outdated
5) ReplicaSet (RS)
6) Deployment
7) DaemonSet
8) StatefulSet
9) IngressController
10) HPA
11) Helm Charts
12) K8S Monitoring (Grafana & Prometheus)
13) EFK Stack (Log Monitoring)

============
What is POD
============

=> POD is the smallest building block in the k8s cluster

=> Application will be deployed in K8S as a POD

=> We can create multiple pods for one application

=> To create POD we need docker image

=> For Every POD k8s will assign one IP address

=> When POD is damaged/crashed K8S will replace that pod (Self Healing)

=> When we create multiple PODS for one application, load will be balanced by k8s.

Note: By default we can access PODS only with in the cluster.

=> To provide public access for our PODS we need to expose our PODS using K8S
Service concept.

=============
K8S Services
==============

=> Service is used to expose PODS.

=> We have 3 types of Services in k8s.

1) Cluster IP
2) NodePort
3) Load Balancer

=====================
What is Cluster IP ?
=====================

=> POD is a short lived object

=> When pod is crashed/damaged k8s will replace that with new pod

=> When POD is re-created IP will be changed.

Note: It is not recommended to access pods using POD IP

=> Cluster IP service is used to link all PODS to single ip.

=> Cluster IP is a static ip to access pods

=> Using Cluster IP we can access pods only with in the cluster

Ex: Database Pods, Cache Pods, Kafka Pods etc...
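=> A minimal ClusterIP service manifest sketch for reference (manifest syntax is covered in
detail below; the names/labels here follow the javawebapp example used later):

---
apiVersion: v1
kind: Service
metadata:
  name: javawebclusterip
spec:
  type: ClusterIP
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...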


============================
What is NodePort service ?
============================

=> NodePort service is used to expose our pods outside the cluster.

=> Using NodePort we can access our application with Worker Node Public IP address.

=> When we use the Node Public IP to access our pod then all requests will go to the same
worker node (burden will be increased on that node).

Note : To distribute load to multiple worker nodes we will use LBR service.

=================================
What is Load Balancer Service ?
================================

=> It is used to expose our pods outside cluster using AWS Load Balancer

=> When we access load balancer url, requests will be distributed to all pods
running in all worker nodes.
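=> A minimal LoadBalancer service manifest sketch (again only illustrative; on AWS EKS this
creates an AWS load balancer in front of the pods):

---
apiVersion: v1
kind: Service
metadata:
  name: javaweblb
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...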

=================
K8s Namespaces
================

=> Namespaces are used to group the resources.

Ex: frontend app pods under one namespace

backend app pods under one namespace

db pods under one namespace

============
Summary
=============

1) What is Containerization
2) What is Orchestration
3) K8S Introduction
4) K8S Advantages
5) K8S Architecture
6) AWS EKS Cluster Setup
7) What is POD
8) What is Service (Cluster IP, Node Port & LBR)
9) What is Namespaces

=============================================================================

=> In K8S we will use manifest yml to deploy our applications

-------------------------
K8S Manifest YML Syntax
-------------------------
---
apiVersion:
kind:
metadata:
spec:
...

# Execute manifest yml using kubectl


$ kubectl apply -f <manifest-yml>

=====================
K8S POD Manifest YML
======================

---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
...

# check pods running in cluster


$ kubectl get pods

# Create PODS using manifest YML


$ kubectl apply -f <pod-manifest-yml>

# Check POD running in which worker node


$ kubectl get pods -o wide

# Describe pod
$ kubectl describe pod <pod-name>

# Check POD Logs

$ kubectl logs <pod-name>
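=> It was mentioned earlier that one POD can contain multiple containers; a minimal sketch
of such a manifest is below (the second container, a simple log agent, is only an example):

---
apiVersion: v1
kind: Pod
metadata:
  name: multicontainerpod
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
    - name: log-agent
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
...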

=============
K8S Service
=============

=> Service is used to expose our pods

---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30070
...

Note: The POD label is used as the 'selector' in the service (to identify the pods)

# Check k8s services running


$ kubectl get service

# create service using manifest yml


$ kubectl apply -f <service-manifest-yml>

=> Enable NodePort number in worker node security group inbound rules.

=> Access our application

URL : http://node-public-ip:nodeport/java-web-app/

============================================================================
Q) Is it mandatory to specify Node Port Number in service manifest yml ?
============================================================================

Ans) No. If we don't specify it, k8s will assign one random port number between
30000 and 32767.

======================================
POD & Service in single manifest YML
=======================================

---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...

$ kubectl apply -f <manifest-yml>

$ kubectl get pods

$ kubectl get svc

Note: In the above manifest yml we have not configured the Node Port number. K8S will
assign one random port number.

=> We need to enable that Node Port number in security group inbound rules.

===============
K8S Namespaces
===============

=> Namespaces are used to group k8s resources logically

frontend-app-pods ===> create under one namespace

backend-app-pods ===> create under one namespace

database-pods ===> create under one namespace

=> We can create multiple namespaces in k8s cluster

Ex: ashokit-frontend-ns, ashokit-backend-ns, ashokit-db-ns etc...

=> Each namespace is isolated with other namespaces.

Note: When we delete a namespace, all the resources which were created under that
namespace also get deleted.
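
# Delete a namespace (example command; this also deletes everything inside it)

$ kubectl delete ns <namespace-name>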

# check namespaces
$ kubectl get ns

Note: When we don't specify any namespace then it will use 'default' namespace

# Get all resources available under default namespace


$ kubectl get all

# Retrieve all resources available under particular namespace


$ kubectl get all -n <namespace-name>

# get pods of default namespace


$ kubectl get pods

# get pods of kube-system namespace


$ kubectl get pods -n kube-system

=> We can create k8s namespace in 2 ways

1) Using kubectl command directly

2) Using k8s manifest yml

Ex:-1 : $ kubectl create ns <namespace-name>

---------------------------
k8s namespace manifest yml
---------------------------

---
apiVersion: v1
kind: Namespace
metadata:
  name: ashokit-ns2
...

$ kubectl apply -f <ns-manifest-yml>

=====================================
Namespace + pod + service - manifest
=====================================

---
apiVersion: v1
kind: Namespace
metadata:
  name: ashokit-ns3
---
apiVersion: v1
kind: Pod
metadata:
  name: javawebapppod
  namespace: ashokit-ns3
  labels:
    app: javawebapp
spec:
  containers:
    - name: javawebappcontainer
      image: ashokit/javawebapp
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
  namespace: ashokit-ns3
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...

$ kubectl get ns

$ kubectl apply -f <manifest-yml>

$ kubectl get all

$ kubectl get all -n ashokit-ns3

================================================================

=> As of now, we have created POD manually using POD Manifest YML

=> If we create POD manually then we don't get self-healing capability

=> If POD is damaged/crashed/deleted then k8s will not create new POD.

=> If pod damaged then our application will be down.

Note: We shouldn't create a POD directly to deploy our application.

Note: We need to use k8s resources to create pods.

=> If we create pod using k8s resources, then pod life cycle will be managed by
k8s.

1) ReplicationController (Outdated)
2) ReplicaSet
3) Deployment
4) DaemonSet
5) StatefulSet

===========
ReplicaSet
===========

=> It is a k8s resource

=> It is used to create pods in k8s

=> It will take care of POD lifecycle.

Note: When POD is damaged/crashed/deleted then ReplicaSet will create new POD.

=> It will always maintain the given number of pods for our application.

=> With this approach we can achieve high availability for our application.

=> By using RS, we can scale up and scale down our PODS count.

---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: javawebrs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: ashokit/javawebapp
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...

$ kubectl get pods

$ kubectl get svc

$ kubectl get rs

$ kubectl apply -f <yml>

$ kubectl get all

$ kubectl delete pod <pod-name>

$ kubectl get pods

Note: If we configure the service type as LoadBalancer, then an LBR will be created in AWS.

=> Using AWS LBR DNS url we can access our application.

$ kubectl scale rs javawebrs --replicas 4

$ kubectl get pods

$ kubectl scale rs javawebrs --replicas 3


Note: In ReplicaSet, scale up & scale down is manual process.

=> K8S supports Auto Scaling when we use 'Deployment' to create pods.

================
K8S Deployment
================

=> It is one of the k8s resource/component

=> It is the most recommended approach to deploy our applications in k8s.

=> Deployment will manage pod life cycle.

=> We have below advantages with K8s Deployment

1) Zero downtime

2) Auto Scaling

3) Rolling Update & Rollback

=> We have below deployment strategies

1) ReCreate
2) RollingUpdate
3) Canary

=> ReCreate means it will delete all existing pods and will create new pods

=> RollingUpdate means it will delete old pods and create new pods one by one.
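
=> As an illustration (the values below are examples, not from the original notes), the RollingUpdate strategy can optionally control how many pods are replaced at a time, and a rollout can be checked or reverted with kubectl:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1           # how many extra pods may be created during the update
    maxUnavailable: 1     # how many pods may be unavailable during the update

# check rollout status / rollback to the previous version (example commands)

$ kubectl rollout status deployment <deployment-name>

$ kubectl rollout undo deployment <deployment-name>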

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebdeploy
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
        - name: javawebappcontainer
          image: ashokit/javawebapp
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebsvc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
  ports:
    - port: 80
      targetPort: 8080
...

$ kubectl delete all --all

$ kubectl get all

$ kubectl apply -f <yml>

$ kubectl get all

Note: Access app using LBR URL

$ kubectl scale deployment javawebdeploy --replicas 4

$ kubectl get pods

$ kubectl scale deployment javawebdeploy --replicas 3
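
=> For auto scaling, a HorizontalPodAutoscaler can be attached to the deployment (this requires the metrics server; the numbers below are just sample values):

$ kubectl autoscale deployment javawebdeploy --cpu-percent=70 --min=2 --max=5

# check hpa status

$ kubectl get hpa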

==============================
Blue - Green Deployment Model
==============================

Blue Env Docker image : ashokit/javawebapp

Green Env Docker image : ashokit/javawebapp:v2

===================
Working Process
===================

Step-0 : Clone below git repo

URl : https://github.com/ashokitschool/kubernetes_manifest_yml_files.git

Step-1 : Navigate into blue-green directory

Step-2 : Create Blue Pods deployment

Step-3 : Create Live Service to expose blue pods

Step-4 : Access App using Live Service LBR DNS Url

(we should see the blue pods response in the browser)

Step-5 : Create Green Pods deployment

Step-6 : Check all pods running

Step-7: Make Green Pods Live (update selector and apply the yml)

Step-8 : Access App using Live Service LBR DNS Url

(we should see the green pods response in the browser)
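
=> A minimal sketch of the idea behind Step-7 (the label values below are assumptions; the actual manifest files are in the repo above): the live Service selects pods by label, so switching the selector from blue to green makes the green pods live.

# live service (sketch)
spec:
  type: LoadBalancer
  selector:
    app: javawebapp
    version: blue      # change this value to 'green' and re-apply to make green pods live

$ kubectl apply -f <live-service-yml>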

=================
Package Managers
=================

=> In Linux OS we have package managers to install required software/packages

Ex: yum, apt ...

amazon linux vm :

- sudo yum install java

- sudo yum install git

- sudo yum install maven

ubuntu linux vm :

- sudo apt install java

- sudo apt install git

- sudo apt install maven

===============
What is HELM ?
================

=> HELM is a package manager which is used to install required software in the k8s
cluster

=> HELM will use charts to install required packages

=> Chart means collection of configuration files (manifest ymls)
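
=> A typical chart directory structure looks like below (generic example, not a specific chart from this course):

mychart/
  Chart.yaml       # chart name & version
  values.yaml      # default configuration values
  templates/       # manifest yml templates (deployment, service etc...)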

####### HELM Charts ###########

=> Using HELM chart we can install Prometheus server

=> Using HELM chart we can install Grafana server


##################
Helm Installation
##################

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh

$ helm

-> check whether the metrics server is available on the cluster

$ kubectl top pods

$ kubectl top nodes

# check helm repos


$ helm repo ls

# Before you can install the chart you will need to add the metrics-server repo to
helm
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/

# Install the chart


$ helm upgrade --install metrics-server metrics-server/metrics-server

$ kubectl top pods

$ kubectl top nodes

$ helm list

$ helm delete <release-name>

=========================================
Metric Server Unavailability issue fix
=========================================

URL : https://www.linuxsysadmins.com/service-unavailable-kubernetes-metrics/

$ kubectl edit deployments.apps -n kube-system metrics-server

=> Edit the below file and add new properties which are given below

--------------------------- Existing File --------------------

spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443

--------------------------- New File --------------------

spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-insecure-tls=true
    - --kubelet-preferred-address-types=InternalIP

------------------------------------------------------------------

$ kubectl top pods

$ kubectl top nodes

#######################
Kubernetes Monitoring
#######################

=> We can monitor our k8s cluster and cluster components using below softwares

1) Prometheus
2) Grafana

=============
Prometheus
=============

-> Prometheus is an open-source systems monitoring and alerting toolkit

-> Prometheus collects and stores its metrics as time series data

-> It provides out-of-the-box monitoring capabilities for the k8s container
orchestration platform.

=============
Grafana
=============

-> Grafana is an analysis and monitoring tool

-> Grafana is a multi-platform open source analytics and interactive visualization
web application.

-> It provides charts, graphs, and alerts for the web when connected to supported
data sources.

-> Grafana allows you to query, visualize, alert on and understand your metrics no
matter where they are stored. Create, explore and share dashboards.

Note: Grafana will connect to Prometheus as its data source.

#########################################
How to deploy Grafana & Prometheus in K8S
##########################################

-> Using HELM charts we can easily deploy Prometheus and Grafana
##################################################
Install Prometheus & Grafana In K8S Cluster using HELM
#################################################

# Add the latest helm repository in Kubernetes


$ helm repo add stable https://charts.helm.sh/stable

# Add prometheus repo to helm


$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Update Helm Repo


$ helm repo update

# Search Repo
$ helm search repo prometheus-community

# install prometheus
$ helm install stable prometheus-community/kube-prometheus-stack

# Get all pods


$ kubectl get pods

Note: You should see Prometheus pods running

# Check the services


$ kubectl get svc

# By default prometheus and grafana services are available within the cluster as
ClusterIP; to access them from outside, let's change them to LoadBalancer.

# Edit Prometheus Service & change service type to LoadBalancer, then save and close
that file
$ kubectl edit svc stable-kube-prometheus-sta-prometheus

# Now edit the grafana service & change service type to LoadBalancer then save and
close that file
$ kubectl edit svc stable-grafana

# Verify the service if changed to LoadBalancer


$ kubectl get svc

# Check in which nodes our Prometheus and grafana pods are running
$ kubectl get pods -o wide

=> Access Prometheus server using below URL

URL : http://LBR-DNS:9090/

=> Access Grafana server using below URL

URL : http://LBR-DNS/

=> Use below credentials to login into grafana server

UserName: admin
Password: prom-operator

=> Once we login into Grafana then we can monitor our k8s cluster. Grafana will
provide all the data in charts format.
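
=> If the default password does not work, the Grafana admin password can usually be read from the secret created by the chart (the secret name 'stable-grafana' assumes the release name used above):

$ kubectl get secret stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo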

===========================================
Q-1) Self Introduction Of DevOps Engineer
===========================================

=> I am Ashok, having 4+ years of experience as a DevOps engineer with AWS cloud

=> Currently I am working for XXX company

=> Coming to my project, I am working on a full-stack insurance application

Frontend : Angular

Backend : Spring Boot with microservices

Database : Oracle

=> We are using the AWS cloud platform to maintain the required infrastructure for the
application

EC2 : Virtual machines

LBR : To distribute load

ASG : To scale up & scale down

S3 : To store application files

RDS : To create database

VPC : For isolated network

Cloud Watch : To monitor resources in cloud

Route 53 : Domain mapping

IAM : Identity & Access Mgmt

=> We are using several devops tools to automate project build and deployment
process

Git Hub : Source code repo server

Maven : Build tool for spring boot app (backend)

NPM : Build tool for angular app

SonarQube : To perform code review

Nexus : Artifactory Servers

Docker : For containerization


K8S : For Orchestration

Jenkins : CI CD server

Prometheus & Grafana : for k8s resources monitoring

EFK : Logs monitoring

JIRA : For project management

Terraform : To provision infrastructure in AWS

Ansible : For configuration mgmt

===================
Project Name : IES
===================

=> IES stands for Integrated Eligibility System

=> Client : New Jersey State Gov (USA)

=> It is an intranet application that will be accessed only in DHS offices

Note: Intranet means we can't access this app over a public network

=> This application is used to provide insurance plans for new jersey state
citizens.

=> Citizens should visit nearest DHS office to apply for the plan

=> IES supports below insurance plans

1) SNAP : Food Plan

2) CCAP : Child Care

3) Medicare : Health Plan (age >= 65)

4) Medicaid : Health Plan

5) QHP : Commercial Health Plan

Note: Every plan will have some business conditions. If the citizen's data matches the
business conditions, the citizen will be approved for the plan; otherwise the citizen
will be denied.

==========================
Roles & Responsibilities
==========================

1) Creating Git Hub Repositories based on dev team requests

2) Role Based Access for Git Repositories

3) Setup Shared Libraries in Nexus Repo


4) Writing Docker files

5) Creating k8s manifest ymls

6) Creating Jenkins Pipelines

7) Monitoring Jenkins Pipelines

8) Env Creation (DEV, SIT, UAT and PROD)

9) Infrastructure provisioning using Terraform

10) Writing playbooks for configuration mgmt

11) Working with cloud services

12) Cluster Monitoring & application monitoring

====================
Projects to mention
====================

1) Job Portal (Name : monster -> foundit)

2) e-Learning platform

===================
Resume Building
===================

Note: Create a new email id and take a new phone number to apply for jobs

Section-1) Name + Email + Phno

Section-2) Professional Summary

- Your experience with devops tools


- Your experience with cloud services
- Your experience with operating systems

Section-3) Technical Skills

- Technologies

Section-4) Educational Qualification (Highest degree)

Section-5) Project(s)

- Project Name
- Project Description (Optional)
- Project Duration
- Project Tech Stack
- Your Roles & Responsibilities

Note: To keep more than one project we need to repeat Section-5

Section-6) Personal Details

- Your Gender
- DOB
- Marital Status

Section-7) Declaration Note & Thank-you Note

====================
How to cover gap ?
====================

If you have 3 years gap

1 Project : Nov-2020 to till date (project duration)

If you have 3 years gap

1 project : May-2020 to Dec-2022

2 project : Jan-2023 to till date

If you have 4 years gap

Mar-2019 to Feb - 2020 : Attended trainings (upskill)

Mar-2020 to till date : IT exp

1 project : July-2020 to Sep-2022

2 project : Oct-2022 to till date

If you have 5 years gap

Mar-2018 to April - 2019 : Attended trainings (upskill)

May-2020 to till date : IT Experience

1 project : July-2020 to Sep-2022

2 project : Oct-2022 to till date

If you have more than 5 years gap

- I ran a business (restaurant) for 3 years and closed it due to loss

- For the last 3 years I have been working in IT

- For ladies (marriage & kids) - maternity break

###### Note: To prove our experience we need to get experience documents from a consultancy

========================================================
What are the day to day activities of DevOps Engineer ?
========================================================

=> First Thing we need to check is our email inbox

=> If required we need to provide replies to those mails

=> Check JIRA dashboard (any tickets assigned)

=> Join stand-up call to discuss action plan for today

=> Tickets assignment will happen in stand-up call / after stand-up call

=> We will be working with below types of tasks as a devops engineer

- Repo Creation / Branch creations/ RBACs


- Build new Env (Dev/SIT/UAT/PILOT/PROD)
- Access Permissions
- Infrastructure provisioning
- Pipeline creation/execution
- Monitoring
- Trouble Shooting
- Infra Backup Plan
- Automation through scripts
- KT Documentation
- R & D
- In House Trainings
- Conducting KT sessions

Note: We will store our documents at below locations

-> Git Hub Repo


-> Sharepoint
-> Confluence pages

=> Check calendar for meetings schedule

- Meeting with our team


- Meeting with Onshore team
- Meeting with Dev Team

=> Join status call to give work updates to scrum master

- What we have done for today


- What we are working on
- When our task will be completed
- Any challenges to complete the task

========================================================================

Project-1 : Git Hub + Maven + Docker + K8S + Jenkins (springboot)

- CI Job
- CD Job

Project-2 : Git Hub + Maven + Sonar + Nexus + Docker + K8S + Jenkins + Ansible
(springboot)
Project-3 : Angular/React App

Project-4 : Microservices Deployment (fullstack)

################################
Project Setup using below tools
################################

1) Maven
2) Git Hub
3) Jenkins
4) Docker
5) Kubernetes

######## Step-1 : Jenkins Server Setup ########

1) Create Ubuntu VM using AWS EC2

2) Install Java & Jenkins using below commands

$ sudo apt-get update

$ sudo apt-get install default-jdk

$ wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -

$ sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'

$ sudo apt-get update

$ sudo apt-get install jenkins

$ sudo systemctl status jenkins

# Copy jenkins admin pwd


$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

# Open jenkins server in browser using VM public ip

URL : http://public-ip:8080/

-> Create Admin Account & Install Required Plugins in Jenkins

####### Step - 2 : Install Maven & Git in Jenkins ######

$ sudo apt install maven -y

$ sudo apt install git -y


####### Step - 3 : Setup Docker in Jenkins ######

# install docker
$ curl -fsSL get.docker.com | /bin/bash

# Add Jenkins user to docker group


$ sudo usermod -aG docker jenkins

# Restart Jenkins
$ sudo systemctl restart jenkins

###### Step-4 :: Create EKS Management Host in AWS ######

1) Launch new EC2 VM ( Ubuntu )

2) Connect to machine and install kubectl using below commands

$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl

$ chmod +x ./kubectl

$ sudo mv ./kubectl /usr/local/bin

$ kubectl version --short --client

3) Install AWS CLI latest version using below commands

$ sudo apt install unzip


$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version

4) Install eksctl using below commands

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

$ sudo mv /tmp/eksctl /usr/local/bin

$ eksctl version

###### Step-5 :: Create IAM role & attach to EKS Management Host ######

1) Create New Role using IAM service

usecase - ec2

2) Add permissions

IAM - fullaccess
VPC - fullaccess
EC2 - fullaccess
CloudFormation - fullaccess
Administrator - access

3) Enter Role Name (eksroleec2)

4) Attach created role to EKS Management Host

(Select EC2 => Click on -> Security -> attach IAM role we have created)

###### Step-6 :: Create EKS Cluster using eksctl ######

Syntax:

eksctl create cluster --name cluster-name \
    --region region-name \
    --node-type instance-type \
    --nodes-min 2 \
    --nodes-max 2 \
    --zones <AZ-1>,<AZ-2>

example:

$ eksctl create cluster --name ashokit-cluster1 --region us-east-1 --node-type t2.medium --zones us-east-1a,us-east-1b

Note: Cluster creation will take 5 to 10 mins of time (we have to wait)

$ kubectl get nodes

###### Step-7 :: Install AWS CLI in JENKINS Server ######

URL : https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"


$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version

###### Step-8 :: Install Kubectl in JENKINS Server ######

$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl

$ chmod +x ./kubectl

$ sudo mv ./kubectl /usr/local/bin

$ kubectl version --short --client

###### Step-9 :: Update EKS Cluster Config File in Jenkins Server ######

Note: Execute below command in EKS Management host & copy kube config file data

$ cat .kube/config

=> Execute below commands in Jenkins Server and paste the kube config file data

$ cd /var/lib/jenkins
$ sudo mkdir .kube
$ sudo vi .kube/config

# check eks nodes


$ kubectl get nodes

Note: We should be able to see EKS cluster nodes here.
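
Note: Alternatively, a kube config file can be generated directly using the AWS CLI (it writes to the current user's ~/.kube/config; the cluster name & region below are the ones used in Step-6, adjust if yours differ):

$ aws eks update-kubeconfig --region us-east-1 --name ashokit-cluster1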

#####################################################
######## Step-10 : Create Jenkins Pipeline ######
#####################################################

-> Create jenkins declarative pipeline script

#########################
stage-1 : git clone
#########################

-> generate pipeline syntax for git clone with credentials

git credentialsId: 'c87aff7e-f5f1-4756-978f-3379694978e6', url: 'https://github.com/ashokitschool/maven-web-app.git'

stage('Clone Repo') {
    steps {
        git credentialsId: 'GIT-Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
    }
}

#########################
stage-2 : mvn clean build
#########################

stage('Build') {
    steps {
        sh 'mvn clean package'
    }
}

##################################################
stage-3 : build docker image
##################################################

stage('Build Docker Image') {
    steps {
        sh "docker build -t ashokit/app1 ."
    }
}

##################################################
Stage-4 : Push docker image into docker hub
##################################################
-> push docker image into docker hub using secret text

-> Use pipeline syntax to generate secret for docker hub account

stage('Docker Push') {
    steps {
        withCredentials([string(credentialsId: 'Docker-Acc-Pwd', variable: 'dockerpwd')]) {
            sh "docker login -u ashokit -p ${dockerpwd}"
            sh "docker push ashokit/mavenwebapp"
        }
    }
}

##########################
Step-5 : Deploy in k8s
#########################

stage('Deploy') {
    steps {
        sh 'kubectl delete deployment mavenwebappdeployment'
        sh 'kubectl apply -f maven-web-app-deploy.yml'
    }
}
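
Note: Deleting the deployment before applying causes a short downtime. As an alternative sketch (assuming the deployment already exists with the name from the manifest above), the manifest can be re-applied and the deployment restarted to pick up the newly pushed image:

stage('Deploy') {
    steps {
        sh 'kubectl apply -f maven-web-app-deploy.yml'
        sh 'kubectl rollout restart deployment mavenwebappdeployment'
    }
}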

######################### Final Script ##############################


pipeline {
    agent any

    stages {
        stage('Git Clone') {
            steps {
                git credentialsId: 'GIT_Credentials', url: 'https://github.com/ashokitschool/maven-web-app.git'
            }
        }
        stage('Maven Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Create Image') {
            steps {
                sh "docker build -t ashokit/mavenwebapp ."
            }
        }
        stage('Docker Push') {
            steps {
                withCredentials([string(credentialsId: 'Docker-Acc-Pwd', variable: 'dockerpwd')]) {
                    sh "docker login -u ashokit -p ${dockerpwd}"
                    sh "docker push ashokit/mavenwebapp"
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl delete deployment mavenwebappdeployment'
                sh 'kubectl apply -f maven-web-app-deploy.yml'
            }
        }
    }
}

======================================
Create SonarQube stage
======================================

### Run Sonar Using Docker

$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube:lts-community

## Enable 9000 port number in security group

-> Login into Sonar Server & Generate Sonar Token

Ex: sqa_a7e9c43d3a8649618b53d79e203013c25dbe3e3e

-> Add Sonar Token in 'Jenkins Credentials' as Secret Text

-> Manager Jenkins


-> Credentials
-> Add Credentials
-> Select Secret text
-> Enter Sonar Token as secret text

-> Manage Jenkins -> Plugins -> Available -> Sonar Qube Scanner Plugin -> Install
it

-> Manage Jenkins -> Configure System -> Sonar Qube Servers -> Add Sonar Qube
Server

- Name : Sonar-Server-7.8
- Server URL : http://52.66.247.11:9000/ (Give your sonar
server url here)
- Add Sonar Server Token

-> Once above steps are completed, then add below stage in the pipeline

stage('SonarQube analysis') {
    steps {
        script {
            withSonarQubeEnv('sonar-9.9.3') {
                def mavenHome = tool name: "Maven-3.9.6", type: "maven"
                def mavenCMD = "${mavenHome}/bin/mvn"
                sh "${mavenCMD} sonar:sonar"
            }
        }
    }
}
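
=> Optionally (not part of the original notes), the pipeline can wait for the SonarQube quality gate result before continuing; this requires a webhook configured in SonarQube pointing back to the Jenkins server:

stage('Quality Gate') {
    steps {
        timeout(time: 5, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true
        }
    }
}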

====================================
# Step-4 : Create Nexus Stage
====================================
# Run nexus using docker container

$ docker run -d -p 8081:8081 --name nexus sonatype/nexus3

## Enable 8081 port number in security group inbound rule

## login into nexus server

-> Create Nexus Repository

Repo : http://3.108.63.133:8081/repository/ashokit-snapshot-repo/

-> Install Nexus Repository Plugin using Manage Plugins ( Plugin Name : Nexus
Artifact Uploader)

-> Generate Nexus Pipeline Syntax

stage('Nexus Upload') {
    steps {
        nexusArtifactUploader artifacts: [[artifactId: 'Maven-Web-App', classifier: '', file: 'target/maven-web-app.war', type: 'war']], credentialsId: 'Nexus-Credentials', groupId: 'in.ashokit', nexusUrl: '13.127.185.241:8081', nexusVersion: 'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repository', version: '1.0-SNAPSHOT'
    }
}

# install docker
$ curl -fsSL get.docker.com | /bin/bash

# Add Jenkins user to docker group


$ sudo usermod -aG docker ubuntu

==================
Final Pipeline
==================

pipeline {
agent any

tools{
maven "Maven-3.9.6"
}

stages {
stage('Clone Repo') {
steps {
git 'https://github.com/ashokitschool/maven-web-app.git'
}
}
stage('Maven Build') {
steps {
sh 'mvn clean package'
}
}

stage('Code Review') {
steps{
withSonarQubeEnv('sonar-9.9.3') {
sh "mvn sonar:sonar"
}
}
}

stage("Nexus Upload"){
steps{
nexusArtifactUploader artifacts: [[artifactId: '01-maven-web-app',
classifier: '', file: 'target/maven-web-app.war', type: 'war']], credentialsId:
'nexus-server', groupId: 'in.ashokit', nexusUrl: '3.108.63.133:8081', nexusVersion:
'nexus3', protocol: 'http', repository: 'ashokit-snapshot-repo', version: '3.0-
SNAPSHOT'
}
}

stage('Docker Image') {
steps {
sh 'docker build -t ashokit/mavenwebapp .'
}
}
stage('Image Push') {
steps {
withCredentials([string(credentialsId: 'docker-acc-pwd-id',
variable: 'dockerpwd')]) {
sh "docker login -u ashokit -p ${dockerpwd}"
sh "docker push ashokit/mavenwebapp"
}
}
}

stage('K8S Deploy') {
steps {
sh 'kubectl apply -f maven-web-app-deploy.yml'
}
}
}
}

A project can be developed in 2 ways

1) Monolithic Architecture

2) Microservices Architecture

-------------------------
Monolithic Architecture
-------------------------

-> Monolithic Architecture means everything will be developed as a single application

1) Burden will be increased on the server

2) Single point of failure

3) Entire application must be re-deployed for every change

Note: It is an outdated approach in the industry

----------------------------
Microservices Architecture
----------------------------

-> Microservices architecture is nothing but a collection of services which are
independently deployable, executable and deliverable.

Ex: makemytrip application

hotels-booking
trains-booking
flights-booking
cabs-booking

-> Every functionality will be developed as a separate service.

1) Loose coupling

2) No need to re-deploy the entire application

3) There is no single point of failure

4) Easy maintenance

------------------------
Microservices Project
------------------------

Backend app : Springboot app

Git Repo : https://github.com/ashokitschool/contact_backend_app.git

Frontend app : Angular App

Git Repo : https://github.com/ashokitschool/contact_ui_ng_app.git

-----------
Procedure
----------

1) Create CI job for backend app

2) Create CD job for backend app

Note: Backend CI job should trigger backend CD job (a trigger-stage sketch is shown after this procedure)

3) Run Backend CI job and test backend app functionality

4) Configure backend app url in frontend app (src/app/contact.service.ts)

5) Create CI job for frontend app

6) Create CD job for frontend app


Note: Frontend CI job should trigger frontend CD job

7) Trigger frontend CI job and test frontend app functionality

Note: Frontend app should communicate backend app
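
=> A sketch of how a CI pipeline can trigger its CD job as the last stage (the job name 'backend-cd-job' is just an example; use the actual CD job name created above):

stage('Trigger CD Job') {
    steps {
        build job: 'backend-cd-job'
    }
}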
