Unit-I - Introduction To Devops

The document provides an overview of DevOps, defining it as a methodology that integrates development and operations teams to enhance collaboration and efficiency in software delivery. It outlines the DevOps lifecycle, which includes phases such as continuous integration, continuous delivery, and continuous deployment, emphasizing practices that improve deployment frequency and reduce failure rates. Additionally, it discusses tools like Jenkins and concepts like Agile and CI/CD that support the DevOps culture.


DevOps

Introduction to DevOps
What Is DevOps
History of DevOps
DevOps Definition
DevOps Main Objectives
DevOps and Software Development Life Cycle
Waterfall Model
Agile Model
Continuous Integration & Deployment
Jenkins
Containers and Virtual Development
Docker
Vagrant
Configuration Management Tools
Ansible
• Agile is a software development methodology
that focuses on iterative, incremental, small, and
rapid releases of software, along with customer
feedback. It addresses gaps and conflicts
between the customer and developers.

What exactly is DevOps?
• It is the practice, or methodology, of making the ‘Developers’ and ‘Operations’ teams work together.

Gaps Between Customers and Developers

• In the case of the Waterfall model, there was a gap between customers’ software requirements and the developers, which was overcome by Agile.
• While in the case of the Agile method, there was still a gap between the development and operations folks.
• It was in this scenario that DevOps was introduced, in order to close the gap between developers and the operations team.

DevOps is a set of practices intended to reduce
the time between committing a change to a
system and the change being placed into normal
production, while ensuring high quality.

DevOps is a culture that's different from traditional corporate cultures
and requires a change in mindset, processes, and tools.

It is often associated with continuous integration (CI) and continuous delivery (CD) practices, which are software engineering practices, but also with Infrastructure as Code (IaC), which consists of codifying the structure and configuration of infrastructure.

The term DevOps was introduced in 2007-2009 by Patrick Debois, Gene Kim, and John Willis, and it represents the combination of Development (Dev) and Operations (Ops).

It has given rise to a movement that advocates bringing developers and operations together within teams. This delivers added business value to users more quickly, which makes the organization more competitive in the market.
DevOps culture is a set of practices that reduce the barriers
between developers, who want to innovate and deliver
faster, and operations, who want to guarantee the stability
of production systems and the quality of the system changes
they make.

DevOps culture is also the extension of agile processes (Scrum, XP, and so on), which makes it possible to reduce delivery times and already involves developers and business teams.
To facilitate this collaboration and to improve communication
between Dev and Ops, there are several key elements in the processes
that must be put in place, as shown here:
• More frequent application deployments with continuous integration and continuous delivery (called CI/CD).
• The implementation and automation of unit and integration tests, with a process focused on behavior-driven development (BDD) or test-driven development (TDD).
• The implementation of a means of collecting feedback from users.
• Monitoring of applications and infrastructure.
The following diagram illustrates the three axes of DevOps culture
– the collaboration between Dev and Ops, the processes, and the use
of tools:
"DevOps is the union of people, processes, and products to enable continuous delivery of value to our end users."
As a software development practice, DevOps has some
distinctive objectives:
• Improve deployment frequency
• Allow the software product to achieve a faster time to
market
• Decrease the failure rate of new releases
• Shorten the lead time between fixes
• Enable infrastructure to move as quickly as developers
need it to
What is the DevOps Lifecycle?
The DevOps lifecycle is a set of automated processes organized into a development workflow. It was introduced to ensure continuous product delivery and is usually symbolized as an infinity loop. It is a combination of DevOps phases that work together for better and more effective product development.
What are the different phases in DevOps?
• The various phases of the DevOps lifecycle are as
follows:
• Plan - Initially, there should be a plan for the type of
application that needs to be developed. Getting a rough
picture of the development process is always a good
idea.
• Code - The application is coded as per the end-user
requirements.
• Build - Build the application by integrating various
codes formed in the previous steps.
• Test - This is the most crucial step of application development. Test the application and rebuild, if necessary.
• Integrate - Multiple codes from different programmers
are integrated into one.
• Deploy - Code is deployed into a cloud environment for further usage. Care is taken to ensure that any new changes do not affect the functioning of a high-traffic website.
• Operate - Operations are performed on the code if
required.
• Monitor - Application performance is monitored.
Changes are made to meet the end-user requirements.

Central to DevOps practice is the process of continuous integration and continuous delivery, also known as CI/CD. In fact, behind the acronym CI/CD, there are three practices:
• Continuous integration (CI)
• Continuous delivery (CD)
• Continuous deployment
Continuous integration (CI)
In the following definition given by Martin Fowler, three key things are mentioned: members of a team, integrate, and as quickly as possible:

"Continuous integration is a software development practice where members of a team integrate their work frequently... Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible."

That is, CI is an automatic process that allows you to check the completeness of an application's code every time a team member makes a change. This verification must be done as quickly as possible.
Implementing CI
• Therefore, to set up CI, it is necessary to have a Source Code Manager (SCM) that will centralize the code of all members. This code manager can be of any type: Git, SVN (Subversion), or Team Foundation Version Control (TFVC).
• It's also important to have an automatic build manager (CI
server) that supports continuous integration, such as Jenkins,
GitLab CI, TeamCity, Azure Pipelines, GitHub Actions, Travis
CI, and Circle CI.
• Each team member will work on the application code daily,
iteratively, and incrementally (such as in agile and scrum
methods). Each task or feature must be partitioned from other
developments with the use of branches.
• Regularly, even several times a day, members archive or commit their code, preferably as small commits that can easily be fixed in the event of an error. These will be integrated with the rest of the application's code, along with the commits of the other members.
Integrating all the commits is the starting point of the CI
process.
This process, which is executed by the CI server, needs to be
automated and triggered at each commit. The server will retrieve the
code and then do the following:
• Build the application package – compilation, file transformation,
and so on.
• Perform unit tests (with code coverage)
• This CI process must be optimized as soon as possible so that
it can run fast, and so that developers can gather quick feedback
on the integration of their code. For example, code that has been
archived and does not compile or whose test execution fails can
impact and block the entire team.
• Sometimes, failing tests during CI lead to bad practices: the test’s execution is deactivated, with arguments such as "it is not serious," "we need to deliver quickly," or "the code compiles, and that is what matters."

• With an optimized and complete CI process, the developer can quickly fix their problems and improve their code, or discuss it with the rest of the team, and commit their code for a new integration. Let's look at the following diagram:
• This diagram shows the cyclical steps of continuous
integration. This includes the code being pushed into the SCM
by the team members and the build and test being executed by
the CI server. The purpose of this process is to provide rapid
feedback to members.
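As an illustration, a minimal pipeline definition for one of the CI servers listed above (GitHub Actions here) might look as follows; the `make build` and `make test` commands are assumptions standing in for a real project's build and test steps:

```yaml
# Hypothetical CI workflow sketch: every push triggers the build and unit
# tests, so the team gets rapid feedback on each integration.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # retrieve the committed code from the SCM
      - run: make build             # build the application package
      - run: make test              # run unit tests (with code coverage)
```

The same shape (trigger, checkout, build, test) applies on Jenkins, GitLab CI, or any of the other CI servers mentioned.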
CONTINUOUS DELIVERY (CD)
• Once continuous integration has been completed, the
next step is to deploy the application automatically in
one or more non-production environments, which is
called staging. This process is called continuous
delivery (CD).
• CD often starts with an application package being prepared by
CI, which will be installed based on a list of automated tasks.
• These tasks can be of any type: unzip, stop and restart service,
copy files, replace configuration, and so on.
• The execution of functional and acceptance tests can also be
performed during the CD process.
• It is very common to link CI to CD in an integration environment; that is, each CI build also deploys the application to an integration environment at the same time.
• This is necessary so that developers can not only execute unit
tests but also verify the application as a whole (UI and
functional) at each commit, along with the integration of the
developments of the other team members.
• If changes (improvements or bug fixes) are to be made to the
code following verification in one of these environments, once
done, the modifications will have to go through the CI and CD
cycle again.
The tools that are set up for CI/CD are often used with
other solutions, as follows:
• A package manager:
This constitutes the storage space of the packages
generated by CI and recovered by CD. These managers
must support feeds, versioning, and different types of
packages. There are several on the market, such as Nexus,
ProGet, Artifactory, and Azure Artifacts.
• A configuration manager:
This allows you to manage configuration changes during
CD; most CD tools include a configuration mechanism
with a system of variables.
What is important in a CD process is that the deployment to the
production environment – that is, to the end user – is triggered
manually by approved users.
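The flow described above (automatic delivery to staging, manual approval for production) can be sketched in a few lines of Python; the environment names and approval flag are illustrative assumptions, not a real CD tool's API:

```python
# Minimal sketch of a CD flow: the same immutable package built by CI is
# promoted through environments. Staging deploys are automatic, while
# production requires manual approval by authorized users.

def deliver(package, environments, approved_for_production=False):
    """Deploy one package through a chain of environments, in order."""
    deployed = []
    for env in environments:
        if env == "production" and not approved_for_production:
            break  # production deployment is triggered manually
        deployed.append((package, env))
    return deployed

# Automatic part of the pipeline stops before production...
assert deliver("app-1.4.2", ["staging", "production"]) == [
    ("app-1.4.2", "staging")]
# ...and the very same package reaches production only after approval.
assert deliver("app-1.4.2", ["staging", "production"],
               approved_for_production=True) == [
    ("app-1.4.2", "staging"), ("app-1.4.2", "production")]
```

Note that the package is built once and reused in every environment, matching the package-manager workflow described above.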

The continuous delivery workflow

• The preceding diagram clearly shows that the CD process is a continuation of the CI process. It represents the chain of CD steps, which are automatic for staging environments but manual for production deployments. It also shows that the package is generated by CI and stored in a package manager, and that it is the same package that is deployed in different environments.

• A staging environment (stage) is a nearly exact replica of a production environment for software testing. Staging environments are used to test code, builds, and updates to ensure quality under production-like conditions before application deployment.
Continuous deployment
• Continuous deployment is an extension of CD, but this
time, with a process that automates the entire CI/CD
pipeline from the moment the developer commits their
code to deployment in production through all of the
verification steps.
• The continuous deployment process must also take into account
all of the steps to restore the application in the event of a
production problem.
• Continuous deployment can be implemented by using and
implementing feature toggle techniques (or feature flags), which
involves encapsulating the application's functionalities in
features and activating its features on demand, directly in
production, without having to redeploy the code of the
application.
• Another technique is to use a blue-green production infrastructure, which consists of two production environments, one blue and one green. The new version is deployed to the idle environment while the other continues to serve users, and traffic is then switched over; this ensures that no downtime is required.
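The feature toggle technique above can be sketched as follows; the flag store and the discount logic are hypothetical, and real systems typically back flags with a configuration service or database rather than an in-memory dict:

```python
# Sketch of a feature toggle (feature flag): new functionality ships to
# production disabled, and is activated on demand without redeploying code.

FLAGS = {"new_checkout": False}  # hypothetical flag store

def checkout(cart_total):
    if FLAGS.get("new_checkout"):
        return round(cart_total * 0.9, 2)  # hypothetical new discount logic
    return cart_total                      # existing behavior stays the default

assert checkout(100.0) == 100.0   # flag off: old code path
FLAGS["new_checkout"] = True      # activate the feature, no redeploy needed
assert checkout(100.0) == 90.0    # flag on: new code path in production
```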

The continuous deployment workflow

• The preceding diagram is almost the same as that of CD, but with the difference that it depicts automated end-to-end deployment.
• CI/CD processes are therefore an essential part of DevOps culture, with CI allowing teams to integrate and test the coherence of their code and to obtain quick feedback regularly.
• CD automatically deploys on one or more staging
environments and hence offers the possibility to test the
entire application until it is deployed in production.
• Finally, continuous deployment automates the ability to
deploy the application from commit to the production
environment.
1. Continuous Development –
• This step is crucial in defining the vision for the entire
software development process. It focuses mostly on project
planning and coding.
• At this phase, stakeholders and project needs are gathered
and discussed.
• In addition, the product backlog is maintained based on customer feedback and is broken down into smaller releases and milestones to facilitate continuous software development.
• Once the team reaches consensus on the business
requirements, the development team begins coding to meet
those objectives.
• It is an ongoing procedure in which developers are obliged to
code whenever there are modifications to the project
requirements or performance difficulties.
2. Continuous Integration –
• Continuous integration is the most important stage of
the DevOps lifecycle.
• At this phase, updated code or new functionality and
features are developed and incorporated into the
existing code.
• In addition, defects are spotted and recognized in the
code at each level of unit testing during this phase, and
the source code is updated accordingly.
• This stage transforms integration into a continuous
process in which code is tested before each commit.
• In addition, the necessary tests are planned during this
period.
3. Continuous Testing –
• Some teams conduct the continuous testing phase prior
to integration, whereas others conduct it after integration.
• Using Docker containers, quality analysts regularly test
the software for defects and issues during this phase. In
the event of a bug or error, the code is returned to the
integration phase for correction.
• Moreover, automation testing minimises the time and effort required to get reliable results.
• During this stage, teams use tools such as Selenium.
• In addition, continuous testing improves the test
assessment report and reduces the cost of delivering and
maintaining test environments.
4. Continuous Deployment –
• This is the most important and active step of the
DevOps lifecycle, during which the finished code
is released to production servers.
• Continuous deployment involves configuration
management to ensure the proper and smooth
deployment of code on servers.
• Throughout the production phase, development
teams deliver code to servers and schedule upgrades
for servers, maintaining consistent configurations.
• This methodology enables the constant release of new features in production.
5. Continuous Feedback –
• Constant feedback is implemented to assess and enhance the application’s source code.
• During this phase, client behaviour is routinely examined
for each release in an effort to enhance future releases and
deployments.
• Companies can collect feedback using either a structured or
unstructured strategy.
• Under the structured approach, input is gathered using questionnaires and surveys.
• In contrast, feedback is received in an unstructured manner
via social media platforms.
• This phase is critical for making continuous delivery
possible in order to release a better version of the program.
6. Continuous Monitoring –
• During this phase, the functioning and features of the
application are regularly monitored to detect system
faults such as low memory or a non-reachable server.
• This procedure enables the IT staff to swiftly detect app
performance issues and their underlying causes.
Whenever IT teams discover a serious issue, the
application goes through the complete DevOps cycle
again to determine a solution.
• During this phase, however, security vulnerabilities can
be recognized and corrected automatically.
7. Continuous Operations –
• The final phase of the DevOps lifecycle is essential for
minimizing scheduled maintenance and other planned
downtime.
• Typically, developers are forced to take the server
offline in order to perform updates, which increases the
downtime and could cost the organisation a large
amount of money.
• Eventually, continuous operation automates the app’s
startup and subsequent upgrades. It eliminates
downtime using container management platforms such
as Kubernetes and Docker.
JENKINS
• Jenkins is an open-source automation tool written in Java that
helps automate various tasks in software development, primarily
in the domain of continuous integration and continuous
delivery (CI/CD).
• It provides a platform for building, testing, and deploying
software projects, allowing developers to automate repetitive
tasks and streamline the software delivery process.
• Jenkins is a continuous integration and build server.
• It is used to manually, periodically, or automatically build software development projects.
• Jenkins is used by teams of all different sizes, for projects in various languages.
• It is a server-based system that runs in servlet containers such as Apache Tomcat.
Jenkins - History
• 2005 - Hudson was first released by Kohsuke Kawaguchi of Sun Microsystems
• 2010 - Oracle bought Sun Microsystems
• 2011 - Due to a naming dispute, Hudson was renamed Jenkins
• Oracle continued development of Hudson (as a branch of the original)
What can Jenkins do?
• Generate test reports
• Integrate with many different Version Control Systems
• Push to various artifact repositories
• Deploy directly to production or test environments
• Notify stakeholders of build status
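As a sketch, a declarative Jenkinsfile for such a pipeline might look like this; the stage commands, report path, and mail recipient are assumptions for illustration, not prescriptions:

```groovy
// Hypothetical declarative pipeline: build, test, publish reports, and
// notify stakeholders of the build status on failure.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }       // compile / package the project
        }
        stage('Test') {
            steps { sh './run-tests.sh' }   // run the automated test suite
        }
    }
    post {
        always  { junit 'reports/**/*.xml' }  // generate test reports
        failure {
            mail to: 'team@example.com',      // assumed recipient
                 subject: 'Build failed',
                 body: 'See Jenkins for details.'
        }
    }
}
```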
• Containers are a streamlined way to build, test, deploy, and redeploy applications on multiple environments, from a developer’s local laptop to an on-premises data center and even the cloud.
• Two popular tools and platforms used to build and manage containers are Docker and Kubernetes.
• Containers are a form of operating system virtualization. A single container might be used to run anything from a small microservice or software process to a larger application.
• Inside a container are all the necessary executables, binary code, libraries, and configuration files.
How is a container different from a Virtual Machine?
• A complete operating system, as well as the application, is included when using virtual machine technology.
• A physical device running two virtual machines consists of a hypervisor and two separate top-layer operating systems.
• For Docker (containers), a physical machine running a single OS can run two containerized programs, and all containers use the same operating system kernel.
CONTAINERS VS. VIRTUAL MACHINES (VMS)
• People sometimes confuse container technology with virtual
machines (VMs) or server virtualization technology. Although
there are some basic similarities, containers are very different
from VMs.
• Virtual machines run in a hypervisor environment where each
virtual machine must include its own guest operating system
inside it, along with its related binaries, libraries, and
application files. This consumes a large amount of system
resources and overhead, especially when multiple VMs are
running on the same physical server, each with its own guest
OS.
• In contrast, each container shares the same host OS or system
kernel and is much lighter in size, often only megabytes. This
often means a container might take just seconds to start
(versus the gigabytes and minutes required for a typical VM).
Ansible
• Ansible is an IT automation engine written in Python. With Ansible it is possible to automate provisioning, orchestration (planning), configuration management, and deployment of applications.
• Ansible Playbooks are written using YAML syntax, so that you have them in a human-readable format and no specialized knowledge is required to understand what they do. In practice, you can pass your Ansible Playbooks to a third person and in a couple of minutes they will have an idea of how you manage provisioning for your product.
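A minimal playbook sketch illustrates this readability; the host group, package, and service names are assumptions:

```yaml
# Hypothetical Ansible playbook: install and start a web server.
# Each task describes a desired state in plain, human-readable YAML.
- name: Provision web servers
  hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running the playbook repeatedly is safe: tasks only change what is not already in the declared state.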
Docker
• Docker is a tool to build and deploy any kind of application into lightweight Linux containers.
• It’s important to understand that Docker is not a VM.
• Unlike VMs, Docker is based on a union filesystem such as AUFS.
• It shares the same kernel and filesystem of the
machine where it is hosted.
• It comes with a great CLI which makes interaction with
Docker engine really easy and supports versioning of
the images.
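A minimal Dockerfile sketch shows how an image bundles the executables, libraries, and configuration an application needs; the base image and file names are assumptions:

```dockerfile
# Hypothetical Dockerfile: each instruction adds a versioned image layer.
FROM python:3.12-slim                        # assumed base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .                                     # application code and config
CMD ["python", "app.py"]                     # assumed entry point
```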
Vagrant
• Vagrant is a virtual machine manager.
• It is easy to configure and, by default, comes with support for providers such as Docker, VirtualBox, and VMware.
• The great thing about Vagrant is that you can use all modern provisioning tools (e.g., Chef, Puppet, Ansible) to install and configure software on the virtual machine.
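A minimal Vagrantfile sketch, assuming a hypothetical Ubuntu box and playbook path, shows how the VM and its provisioning are declared together:

```ruby
# Hypothetical Vagrantfile: boot a VirtualBox VM and provision it with
# Ansible. The box name and playbook path are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"           # assumed base box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                         # resources for the VM
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"        # assumed playbook
  end
end
```

A single `vagrant up` then creates, boots, and provisions the machine.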
Configuration Management Tools
Configuration management is an automated method for
maintaining computer systems and software in a known,
consistent state.
• There are several components in a configuration management
system.
• Managed systems can include servers, storage, networking,
and software.
• These are the targets of the configuration management system.
The goal is to maintain these systems in known, determined
states.
• Another aspect of a configuration management system is the
description of the desired state for the systems.
• The third major aspect of a configuration management system is
automation software, which is responsible for making sure that
the target systems and software are maintained in the desired
state.
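The desired-state idea can be sketched in a few lines of Python; states here are plain dictionaries standing in for the packages, files, and services a real configuration management tool manages:

```python
# Sketch of desired-state reconciliation: the automation software compares
# each target's actual state with the declared desired state and applies
# only the changes needed to reach it.

def reconcile(actual, desired):
    """Return the changes needed to bring `actual` to `desired`."""
    return {key: value for key, value in desired.items()
            if actual.get(key) != value}

desired = {"nginx": "installed", "firewall": "enabled"}
actual = {"nginx": "installed", "firewall": "disabled"}

changes = reconcile(actual, desired)
assert changes == {"firewall": "enabled"}   # only the drifted item changes
actual.update(changes)                      # "apply" the changes
assert reconcile(actual, desired) == {}     # system is now in the known state
```

Running reconciliation repeatedly is harmless once the target matches the desired state, which is what keeps systems in a known, consistent state over time.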
Benefits
When combined with automation, configuration management
can improve efficiency because manual configuration processes
are replaced with automated processes. This also makes it
possible to manage more targets with the same or even fewer
resources.
Ten of the most popular (in no particular order) configuration management tools for DevOps:
i. Ansible
ii. Terraform
iii. Chef Infra
iv. Vagrant
v. TeamCity
vi. Puppet Enterprise
vii. Octopus Deploy
viii. SaltStack
ix. AWS Config
x. Microsoft Endpoint Manager
• Server provisioning
Server provisioning is the process of setting up physical or virtual hardware; installing and configuring software, such as the operating system and applications; and connecting it to middleware, network, and storage components.
• Provisioning can encompass all of the operations needed to create a new machine and bring it to the desired state, which is defined according to business requirements.
