DevOps
‘Development’ and ‘Operations’
Development
• Development phase is where all the core software development work
happens.
• As input, it takes in plans for the current iteration, usually in the form
of task assignments.
• Then it produces software artifacts that express the updated
functionality.
• Development requires not only the tools that are used to write code,
but also supporting services like version control, issue management,
and automated testing.
Operations
• The processes, practices, and IT services necessary to meet the
business needs of internal and external users.
• ITOps professionals ensure IT service delivery by supporting
appropriate processes to run a successful IT-enabled business.
• Functions covered by IT operations teams:
-Network infrastructure
-Server management
-Service desk support
-Incident & security management
DevOps
• DevOps combines development and operations to increase the
efficiency, speed, and security of software development and delivery
compared to traditional processes.
• A more nimble software development lifecycle results in a
competitive advantage for businesses and their customers.
• DevOps practices enable software development (dev) and operations
(ops) teams to accelerate delivery through automation, collaboration,
fast feedback, and iterative improvement.
• DevOps helps increase the organization’s speed to deliver software
applications and services.
How Does DevOps Work?
• Under a DevOps model, development and operations teams
are no longer “siloed.”
• These two teams are sometimes merged into a single team
where the engineers work across the entire application
lifecycle, from development and test to deployment to
operations, and develop a range of skills not limited to a
single function.
• In some DevOps models, quality assurance and security
teams may also become more tightly integrated with
development and operations and throughout the application
lifecycle.
How Does DevOps Work?
• When security is the focus of everyone on a DevOps team,
then it is referred to as DevSecOps.
• These teams use practices to automate processes that
historically have been manual and slow.
• They use a technology stack and tooling which help them
operate and evolve applications quickly and reliably.
Benefits of DevOps
• Speed
-Move at high velocity so you can innovate for customers
faster, adapt to changing markets better, and grow more
efficient at driving business results.
-The DevOps model enables developers and operations teams
to achieve these results.
-For example, microservices and continuous delivery let teams
take ownership of services and then release updates to them
quicker.
Benefits of DevOps
• Rapid Delivery
-Increase the frequency and pace of releases so you can innovate and
improve your product faster.
-The quicker you can release new features and fix bugs, the faster
you can respond to your customers’ needs and build competitive
advantage.
-Continuous integration and continuous delivery are practices that
automate the software release process, from build to deploy.
Benefits of DevOps
• Reliability
-Ensure the quality of application updates and infrastructure changes to
reliably deliver at a more rapid pace while maintaining a positive
experience for end users.
-Use practices like continuous integration and continuous delivery to
test that each change is functional and safe.
-Monitoring and logging practices help you stay informed about
performance in real time.
Benefits of DevOps
• Scale
-Operate and manage infrastructure and development processes at scale.
-Automation and consistency help you manage complex or changing
systems efficiently and with reduced risk.
-For example, infrastructure as code helps you manage development, testing,
and production environments in a repeatable and more efficient manner.
Benefits of DevOps
• Improved Collaboration
-Build more effective teams under a DevOps cultural model, which
emphasizes values such as ownership and accountability. Developers
and operations teams collaborate closely, share many responsibilities,
and combine their workflows.
-This reduces inefficiencies and saves time.
Benefits of DevOps
• Security
-Move quickly while retaining control and preserving compliance.
-Adopt a DevOps model without sacrificing security by using
automated compliance policies, fine-grained controls, and configuration
management techniques.
-For example, using infrastructure as code and policy as code, it is possible
to define and then track compliance at scale.
DevOps Practices
• Continuous Integration
• Continuous Delivery
• Microservices
• Infrastructure as Code
• Monitoring and Logging
• Communication and Collaboration
Continuous Integration
• A software development practice where developers regularly merge
their code changes into a central repository, after which automated
builds and tests are run.
• The key goals of continuous integration are to find and address bugs
quicker, improve software quality, and reduce the time it takes to
validate and release new software updates.
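As a minimal sketch, such a pipeline can be declared in a workflow file; the example below assumes a GitHub Actions-style setup, and the make test target is a placeholder:

name: ci
on: [push]                          # run on every push to the central repository
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the merged code
      - run: make test              # automated build and tests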
Continuous Delivery
• A software development practice where code changes are
automatically built, tested, and prepared for a release to production.
• It expands upon continuous integration by deploying all code changes
to a testing environment and/or a production environment after the
build stage.
• When continuous delivery is implemented properly, developers will
always have a deployment-ready build artifact that has passed through
a standardized test process.
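Building on the continuous-integration sketch earlier, a delivery stage can produce a deployment-ready artifact once tests pass; the make targets and the dist/ path are placeholders:

name: cd
on:
  push:
    branches: [main]
jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                        # re-run the CI checks
      - run: make package                     # build the release artifact
      - uses: actions/upload-artifact@v4      # keep a deployment-ready build
        with:
          name: release-build
          path: dist/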
Microservices
• The microservices architecture is a design approach to build a single
application as a set of small services.
• Each service runs in its own process and communicates with other
services through a well-defined interface using a lightweight mechanism,
typically an HTTP-based application programming interface (API).
• Microservices are built around business capabilities; each service is
scoped to a single purpose.
• Teams can use different frameworks or programming languages to write
microservices and deploy them independently, as a single service or as a
group of services.
Infrastructure as Code
• Infrastructure as code is a practice in which infrastructure is
provisioned and managed using code and software development
techniques, such as version control and continuous integration.
• The cloud’s API-driven model enables developers and system
administrators to interact with infrastructure programmatically, and at
scale, instead of needing to manually set up and configure resources.
• Thus, engineers can interface with infrastructure using code-based
tools and treat infrastructure in a manner similar to how they treat
application code.
• Because they are defined by code, infrastructure and servers can
quickly be deployed using standardized patterns, updated with the
latest patches and versions, or duplicated in repeatable ways.
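As a minimal sketch, a piece of infrastructure can be declared in an AWS CloudFormation template (the resource is illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket          # an S3 bucket declared as code
    Properties:
      VersioningConfiguration:
        Status: Enabled            # enabled by declaration, not by hand

Checking such templates into version control gives infrastructure the same review and history workflow as application code.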
Infrastructure as Code
Configuration Management
• Developers and system administrators use code to automate operating
system and host configuration, operational tasks, and more. The use of
code makes configuration changes repeatable and standardized.
• It frees developers and systems administrators from manually
configuring operating systems, system applications, or server
software.
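A minimal sketch using Ansible, one common configuration-management tool (named here as an illustration, not in the slides; the host group is a placeholder):

- hosts: webservers             # configure every host in this group the same way
  become: true                  # run tasks with elevated privileges
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present          # idempotent: installs only if missing

Running the playbook repeatedly converges hosts to the same state, which is what makes changes repeatable and standardized.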
Infrastructure as Code
Policy as Code
• With infrastructure and its configuration codified with the cloud,
organizations can monitor and enforce compliance dynamically and at
scale.
• Infrastructure that is described by code can thus be tracked, validated,
and reconfigured in an automated way.
• This makes it easier for organizations to govern changes over
resources and ensure that security measures are properly enforced in a
distributed manner.
• This allows teams within an organization to move at higher velocity
since non-compliant resources can be automatically flagged for further
investigation or even automatically brought back into compliance.
Monitoring and Logging
• Organizations monitor metrics and logs to see how application and
infrastructure performance impacts the experience of their product’s
end user.
• By capturing, categorizing, and then analyzing data and logs generated
by applications and infrastructure, organizations understand how
changes or updates impact users, gaining insight into the root causes
of problems or unexpected changes.
• Active monitoring becomes increasingly important as services must be
available 24/7 and as application and infrastructure update frequency
increases.
• Creating alerts or performing real-time analysis of this data also helps
organizations more proactively monitor their services.
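At the container level, these practices can be sketched with Docker's own tooling (the container name web is illustrative):

docker logs -f --since 10m web           # stream the last 10 minutes of logs
docker stats --no-stream web             # point-in-time CPU, memory, and network metrics
docker events --filter 'container=web'   # watch lifecycle events in real time

In practice these feeds are shipped to a central monitoring system where alerts and real-time analysis are configured.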
Communication and Collaboration
• Increased communication and collaboration in an organization is one
of the key cultural aspects of DevOps.
• The use of DevOps tooling and automation of the software delivery
process establishes collaboration by physically bringing together the
workflows and responsibilities of development and operations.
• Building on top of that, these teams set strong cultural norms around
information sharing and facilitating communication through the use of
chat applications, issue or project tracking systems, and wikis.
• This helps speed up communication across developers, operations, and
even other teams like marketing or sales, allowing all parts of the
organization to align more closely on goals and projects.
DevOps tools
DevOps lifecycle
• Plan
-Organize the work that needs to be done, prioritize it, and track its
completion.
• Create
-Write, design, develop and securely manage code and project data with
your team.
• Verify
-Ensure that your code works correctly and adheres to your quality
standards — ideally with automated testing.
• Package
-Package your applications and dependencies, manage containers, and
build artifacts.
• Secure
-Check for vulnerabilities through static and dynamic tests, fuzz testing,
and dependency scanning.
DevOps lifecycle
• Release
-Deploy the software to end users.
• Configure
-Manage and configure the infrastructure required to support your applications.
• Monitor
-Track performance metrics and errors to help reduce the severity and frequency
of incidents.
• Govern
-Manage security vulnerabilities, policies, and compliance across your
organization.
Docker
• Docker is an open platform for developing, shipping, and running
applications.
• A software platform that allows you to build, test, and deploy applications
quickly.
• Docker enables you to separate applications from infrastructure so you can
deliver software quickly.
• Docker helps you quickly deploy and scale applications into any
environment, knowing your code will run.
• Docker packages software into standardized units called containers
that have everything the software needs to run, including libraries,
system tools, code, and runtime.
• Use Docker containers as a core building block when creating modern
applications and platforms.
How Does Docker Work?
• Docker works by providing a standard
way to run code.
• Docker is an operating system for
containers.
• Similar to how a virtual machine
virtualizes server hardware, containers
virtualize the operating system of a
server.
• Docker is installed on each server and
provides simple commands you can use
to build, start, or stop containers.
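For example, that command set looks like this (the image and container names are illustrative):

docker build -t myapp .            # build an image from the Dockerfile in this directory
docker run -d --name web myapp     # start a container from that image
docker stop web                    # stop the running container
docker start web                   # start it again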
Container
• A container is a loosely isolated environment.
• The isolation and security allow you to run many containers
simultaneously on a given host.
• Containers are lightweight and contain everything needed to run the
application, so no need to rely on what is currently installed on the
host.
• Easily share containers while you work, and be sure that everyone you
share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of the containers:
1.Develop an application and its supporting components using
containers.
2.The container becomes the unit for distributing and testing the
application.
3.When ready, deploy the application into the production environment, as a
container or an orchestrated service.
This works the same whether the production environment is a local data
center, a cloud provider, or a hybrid of the two.
Docker architecture
• Docker uses a client-server architecture.
• The Docker client talks to the Docker daemon, which does the heavy
lifting of building, running, and distributing your Docker containers.
• The Docker client and daemon can run on the same system, or you can
connect a Docker client to a remote Docker daemon.
• The Docker client and daemon communicate using a REST API, over
UNIX sockets or a network interface.
• Another Docker client is Docker Compose, which lets you work with
applications consisting of a set of containers.
Docker architecture
The Docker daemon
• The Docker daemon (dockerd) listens for Docker API requests and manages
Docker objects such as images, containers, networks, and volumes.
• A daemon can also communicate with other daemons to manage Docker
services.
The Docker client
• The Docker client (docker) is the primary way that many Docker users
interact with Docker.
• When you use commands such as docker run, the client sends these
commands to dockerd, which carries them out.
• The docker command uses the Docker API.
• The Docker client can communicate with more than one daemon.
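For example, the same client binary can be pointed at a remote daemon (the host address is illustrative):

docker ps                                       # talks to the local daemon by default
DOCKER_HOST=ssh://user@remote-host docker ps    # same command, sent to a remote daemon over SSH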
Docker architecture
Docker Desktop
• Docker Desktop is an easy-to-install application for your Mac, Windows or Linux
environment that enables you to build and share containerized applications and
microservices.
• Docker Desktop includes the Docker daemon (dockerd), the Docker client
(docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential
Helper.
Docker architecture
Docker registries
• A Docker registry stores Docker images.
• Docker Hub is a public registry that anyone can use, and Docker is
configured to look for images on Docker Hub by default.
• You can even run your own private registry.
• When you use the docker pull or docker run commands, the required
images are pulled from your configured registry.
• When you use the docker push command, your image is pushed to
your configured registry.
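A typical round trip looks like this (the private registry address and repository name are illustrative):

docker pull nginx:latest                               # pull from Docker Hub, the default registry
docker tag nginx:latest registry.example.com/team/nginx:latest
docker push registry.example.com/team/nginx:latest     # push to your configured private registry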
Docker objects
• When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects.
Images
• An image is a read-only template with instructions for creating a Docker container.
• Often, an image is based on another image, with some additional customization.
• You can create your own images or use only those created by others and
published in a registry.
• To build your own image, you create a Dockerfile with a simple syntax for
defining the steps needed to create the image and run it.
• Each instruction in a Dockerfile creates a layer in the image.
• When you change the Dockerfile and rebuild the image, only those layers which have
changed are rebuilt.
• This is part of what makes images so lightweight, small, and fast, when compared to
other virtualization technologies.
When to use Docker?
MICROSERVICES
• Build and scale distributed application architectures by taking advantage of standardized code
deployments using Docker containers.
CONTINUOUS INTEGRATION & DELIVERY
• Accelerate application delivery by standardizing environments and removing conflicts between
language stacks and versions.
DATA PROCESSING
• Provide big data processing as a service. Package data and analytics packages into portable
containers that can be executed by non-technical users.
CONTAINERS AS A SERVICE
• Build and ship distributed applications with content and infrastructure that is IT-managed and
secured.
Dockerfile
• The very basic building block of a Docker
image is a Dockerfile.
• A Dockerfile is a simple text file with
instructions and arguments.
• Docker can build images automatically by
reading the instructions given in a
Dockerfile.
• In a Dockerfile, everything on the left is an
INSTRUCTION, and everything on the right
is an ARGUMENT to that instruction.
• The file name is "Dockerfile", without any
extension.
Dockerfile
• FROM: specifies the base image, which can be pulled from a container registry.
• RUN: executes commands during the image build process.
• ENV: sets environment variables inside the image.
• COPY: copies local files and directories to the image.
• EXPOSE: specifies the port to be exposed for the Docker container.
• ADD: a more feature-rich version of the COPY instruction.
• WORKDIR: sets the current working directory.
• VOLUME: creates or mounts a volume for the Docker container.
Dockerfile
• USER: sets the user name and UID used when running the container.
• LABEL: specifies metadata for the Docker image.
• ARG: sets build-time variables with a key and value.
• CMD: provides the default command to run when the container starts; it can be overridden at run time.
• ENTRYPOINT: specifies the command that executes when the Docker container starts. If no ENTRYPOINT is given, shell-form commands run via /bin/sh -c.
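Putting several of these instructions together, a minimal sketch of a Dockerfile for a static site served by NGINX (file names and the variable are illustrative):

FROM nginx:alpine                          # base image pulled from a registry
ENV SITE_ENV=production                    # environment variable baked into the image
COPY index.html /usr/share/nginx/html/     # copy local content into the image
EXPOSE 80                                  # document the port the server listens on
CMD ["nginx", "-g", "daemon off;"]         # default command when the container starts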
Build Docker Image Using Dockerfile
• Step 1: Create the required files and folders
• Step 2: Create a sample HTML file and config file
• Step 3: Choose a base image
• Step 4: Create the Dockerfile
• Step 5: Build your first Docker image
• Step 6: Test the Docker image
Build Docker Image Using Dockerfile
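For example, steps 5 and 6 might look like this, assuming a Dockerfile like the sketch above sits next to index.html:

docker build -t my-site:v1 .            # Step 5: build the image from the Dockerfile
docker run -d -p 8080:80 my-site:v1     # Step 6: run it, mapping host port 8080 to container port 80
curl http://localhost:8080              # confirm the page is served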
Docker Networks
• Docker networking enables a user to link a Docker container to as
many networks as required.
• Docker networks are used to provide isolation between Docker
containers.
• A user can add containers to more than one network.
Advantages of Docker Networking:
• Containers share a single operating system while being kept in
isolated environments.
• It requires fewer OS instances to run the workload.
• It helps in the fast delivery of software.
• It helps in application portability.
How Does Docker Networking Work?
• A Dockerfile is responsible for building a
Docker image via the build command.
• The Docker image contains all the project's
code.
• Using the Docker image, any user can run the
code to create Docker containers.
• Docker has its own cloud-based registry
called Docker Hub, where users store and
distribute container images.
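Concretely, attaching containers to user-defined networks looks like this (names are illustrative):

docker network create app-net                        # create an isolated bridge network
docker run -d --name db --network app-net redis
docker run -d --name web --network app-net nginx     # web can now reach db by name
docker network create other-net
docker network connect other-net web                 # a container can join more than one network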
Docker Compose
• A tool that assists in defining and sharing multi-container applications.
• By using Compose, you define the services in a YAML file and can spin
them up or tear them down with a single command.
• Docker Compose is used for running multiple containers as a single
service.
• Each of the containers runs in isolation but can interact with the
others when required.
• Docker Compose files are easy to write in YAML, a simple
human-readable data-serialization language.
• Another great thing about Docker Compose is that users can activate
all the services (containers) using a single command.
Docker Compose
• Example:
Consider an application that requires an NGINX server and a Redis data store.
You can create a Docker Compose file that runs both containers as a service,
without the need to start each one separately, as sketched below.
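A minimal sketch of such a Compose file (image tags and the host port are illustrative):

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"          # expose NGINX on host port 8080
  redis:
    image: redis:latest    # reachable from web under the host name redis

A single docker-compose up then starts (and docker-compose down stops) both containers together.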
Benefits of Docker Compose
• Single host deployment - This means you can run everything on a single piece
of hardware
• Quick and easy configuration - Due to YAML scripts
• High productivity - Docker Compose reduces the time it takes to perform tasks
• Security - All the containers are isolated from each other, reducing the threat
landscape
Basic Commands in Docker Compose
• Start all services: docker-compose up
• Stop all services: docker-compose down
• Install Docker Compose using pip: pip install -U docker-compose
• Check the version of Docker Compose: docker-compose --version
• Run a Compose file in detached mode: docker-compose up -d
• List running containers: docker ps
• Scale a service: docker-compose up -d --scale <service>=<count>
• Configure application services in a YAML file: docker-compose.yml
Run Docker on AWS
• AWS provides support for both Docker open-source and commercial
solutions.
• There are a number of ways to run containers on AWS such as:
-Amazon Elastic Container Service (ECS)
-Amazon Elastic Kubernetes Service (EKS)
-AWS Fargate
-Amazon Elastic Container Registry (ECR)
Amazon ECS
• A highly scalable, high-performance container orchestration service to
run Docker containers on the AWS cloud.
• Launch thousands of containers across the cloud using your preferred
continuous integration and delivery (CI/CD) and automation tools.
• Optimize your time with AWS Fargate serverless compute for
containers, which eliminates the need to configure and manage control
plane, nodes, and instances.
• Save up to 50 percent on compute costs with autonomous
provisioning, auto-scaling, and pay-as-you-go pricing.
• Integrate seamlessly with AWS management and governance
solutions, standardized for compliance with virtually every regulatory
agency around the globe.
Amazon ECS: How It Works
Use cases:
• Deploy in a hybrid environment
• Support batch processing
• Scale web applications
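As a minimal sketch, a cluster can be created and inspected from the AWS CLI (the cluster name is illustrative):

aws ecs create-cluster --cluster-name demo-cluster    # provision a new ECS cluster
aws ecs list-clusters                                 # confirm the cluster exists
aws ecs list-tasks --cluster demo-cluster             # see tasks running in it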
AWS Fargate
• Serverless compute for containers.
• Deploy and manage applications, not infrastructure.
• Fargate removes the operational overhead of scaling, patching, securing, and
managing servers.
• Monitor applications via built-in integrations with AWS services like
Amazon CloudWatch Container Insights.
• Gather metrics and logs with third-party tools.
• Improve security through workload isolation by design.
• Amazon ECS tasks and Amazon EKS pods run in their own dedicated
runtime environment.
• Fargate scales the compute to closely match your specified resource
requirements.
• With Fargate, there is no over-provisioning or paying for additional servers.
AWS Fargate: How It Works
Use cases
• Web apps, APIs, and microservices
• Run and scale container workloads
• Support AI and ML training applications
• Optimize Costs
Amazon EKS
• The most trusted way to start, run, and scale Kubernetes.
• Reduce costs with efficient compute resource provisioning and
automatic Kubernetes application scaling.
• Ensure a more secure Kubernetes environment with security patches
automatically applied to your cluster’s control plane.
Amazon EKS: How It Works
• Amazon EKS is a managed Kubernetes service to run Kubernetes in
the AWS cloud and on-premises data centers.
• In the cloud, Amazon EKS automatically manages the availability and
scalability of the Kubernetes control plane nodes responsible for
scheduling containers, managing application availability, storing
cluster data, and other key tasks.
• With Amazon EKS, you can take advantage of all the performance,
scale, reliability, and availability of AWS infrastructure, as well as
integrations with AWS networking and security services.
• On-premises, EKS provides a consistent, fully-supported Kubernetes
solution with integrated tooling and simple deployment to AWS
Outposts, virtual machines, or bare metal servers.
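As a minimal sketch, a cluster can be created with eksctl, a commonly used EKS command-line tool (name, region, and node count are illustrative):

eksctl create cluster --name demo --region us-east-1 --nodes 2
kubectl get nodes     # the managed control plane answers standard kubectl requests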
Use cases
• Deploy across hybrid environments
• Model machine learning (ML) workflows
• Build and run web applications
Amazon ECR
• Easily store, share, and deploy your container software anywhere.
• Push container images to Amazon ECR without installing or scaling
infrastructure, and pull images using any management tool.
• Share and download images securely over Hypertext Transfer Protocol
Secure (HTTPS) with automatic encryption and access controls.
• Access and distribute your images faster, reduce download times, and
improve availability using a scalable, durable architecture.
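A typical push to ECR from the Docker CLI (the account ID, region, and repository name are illustrative):

aws ecr create-repository --repository-name myapp
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest     # transferred over HTTPS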
Amazon ECR: How It Works
Use cases
• Manage software vulnerabilities
• Streamline your deployment workloads
• Manage image lifecycle policies