Dockerizing Your Applications
•Definition: The process of
encapsulating an application
along with its environment into
a Docker container.
• Steps:
• Write a Dockerfile with the
necessary instructions.
• Build the Docker image from
the Dockerfile.
• Run the Docker container from
the built image.
FROM node:14
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
Docker And IBM Cloud
• Overview: IBM Cloud allows users to deploy, manage, and
scale Docker containers easily.
• Key Features:
• Integration with Kubernetes for orchestration.
• Managed container services for easier deployment.
• Easy scaling and resource management.
• Use Case Scenario: Deploying microservices on IBM Cloud
using Docker.
Modernizing Traditional Applications
•Objective: Transitioning legacy applications to Docker containers to
leverage modern cloud infrastructure.
• Benefits:
• Simplified application deployment and management.
• Increased scalability and resilience.
• Access to advanced features in the cloud.
• Process:
• Assessment of legacy applications.
• Containerization using Docker.
• Deployment to cloud environments.
Moving To Cloud And Beyond
• Task: Transitioning applications and databases from on-premises
infrastructure to cloud environments.
• Key Steps:
• Assess current infrastructure.
• Choose the right cloud model (IaaS, PaaS, SaaS).
• Migrate applications and test post-migration functionality.
• Challenges:
• Data transfer speeds.
• Compatibility issues.
• Staff training on new technologies.
Moving To Cloud And Beyond
• IaaS (Infrastructure as a Service)
• Provides storage, networking, and virtualization services
• A pay-as-you-go service that offers a lot of control over operating systems
• Can be used for high-performance computing, software development, and web hosting
• PaaS (Platform as a Service)
• Provides a platform for software development
• Hosts the hardware and software on its own infrastructure
• Allows users to build apps without having to host them on-premise
• Primarily useful for developers and programmers
• SaaS (Software as a Service)
• Provides full software solutions that are ready-to-use
• Offloads all infrastructure and application management to the SaaS vendor
• Users only need to create an account, pay a fee, and start using the application
Moving To Cloud And Beyond
• PaaS
• AWS Elastic Beanstalk
• Google App Engine
• Adobe Commerce
• Heroku
• Force.com
• Apache Stratos
• Red Hat OpenShift
• IaaS
• Amazon Web Services
• Microsoft Azure
• Google Compute Engine
(GCE)
• DigitalOcean
• Linode
• Rackspace
• Cisco Metapod
Containerization Vs Virtualization
•Definitions:
• Containerization: Encapsulating applications and their dependencies in containers,
sharing the OS kernel.
• Virtualization: Running multiple operating systems on a single physical machine
using hypervisors.
• Differences:
• Resource Efficiency: Containers are lightweight; VMs consume more resources.
• Startup Time: Containers start in seconds; VMs take minutes.
• Isolation: VMs provide stronger isolation at the OS level; containers are less isolated.
• Use Cases:
• Use containers for microservices; use virtualization for legacy apps.
Choosing Between Containerization Vs Virtualization
• Criteria for Choosing:
• Resource Efficiency:
• Containerization: More efficient in terms of resource usage; share the host
OS kernel.
• Virtualization: Higher resource overhead as each VM runs a full OS.
• Isolation Needs:
• Containerization: Less isolation; suitable for lightweight
applications.
• Virtualization: Greater isolation; ideal for running different
operating systems or secure environments.
Choosing Between Containerization Vs Virtualization
• Speed of Deployment:
• Containerization: Starts in seconds; quick to deploy and scale.
• Virtualization: Takes minutes for VM boot-up; slower startup
process.
• Development and Testing Environments:
• Containerization: Excellent for microservices and CI/CD pipelines
due to quick setup.
• Virtualization: Suitable for simulating entire multi-OS
environments for testing.
Containerization Vs Virtualization: Scenarios
•When to Use Containerization:
• Scenario: A development team working on a new microservices-
based application.
• Reason: Containers can be quickly built, tested, and deployed.
They allow the team to isolate services and use the same
environment across development, testing, and production.
Containerization Vs Virtualization: Scenarios
•When to Use Virtualization:
• Scenario: An enterprise running multiple legacy applications that
require different operating systems.
• Reason: Virtualization provides strong isolation and compatibility
with existing software, as each application can run in its own VM
with a separate OS.
Containerization Vs Virtualization
Aspect              | Containerization      | Virtualization
Resource Efficiency | High (lightweight)    | Moderate to Low (heavyweight)
Isolation           | Less (shared kernel)  | More (separate OS for each VM)
Startup Time        | Seconds               | Minutes
Ideal Use Case      | Microservices, CI/CD  | Legacy apps, full OS environments
Containerization Vs Virtualization: Summary
•Containerization is generally preferred for cloud-native
applications and microservices due to its efficiency and
speed.
• Virtualization is the choice for scenarios requiring strict
isolation, multiple OS needs, or legacy application support.
Pros and Cons of Application Containerization
• Advantages:
• Portability across environments.
• Consistent application performance.
• Resource efficiency and faster deployment.
• Disadvantages:
• Security risks due to shared kernel.
• Complexity in orchestration at scale.
• Dependence on orchestration tools for managing containers.
Running Your Own Docker Containers
• Refer to Labs 3 to 5 to learn how to run your own Docker containers
Docker Desktop For Windows
• Overview: Docker Desktop provides a graphical interface
for managing Docker containers and images on Windows.
• Installation Steps:
• Refer to Windows section in Lab 1
• Key Features:
• Easy-to-use UI for managing images and containers.
• Built-in Kubernetes support.
• Integration with Windows filesystem.
Finding Your Docker Version
• Keeping an eye on your Docker version is essential
• This helps you set up compliant versions when working across projects
• Newer features and their availability also depend on the version
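With the standard Docker CLI, the version can be checked as follows:
docker --version     # short client version string
docker version       # detailed client and server (engine) versions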
Running Your First NGINX
• NGINX is open-source web server software used for load balancing, caching, and reverse proxying. It is also used as a proxy server for email protocols such as IMAP, POP3, and SMTP.
• What it does:
• Load balancing: Distributes traffic across multiple servers
• Caching: Stores and retrieves content to improve performance
• Reverse proxying: Acts as an intermediary that directs client requests to the appropriate backend server
• Media streaming: Supports streaming of audio and video
• HTTPS server: Provides HTTPS server capabilities
Running Your First NGINX
• Why it's used:
• It's designed for high performance and stability
• It can handle many concurrent connections without using extra
resources
• It's well-suited for high-traffic websites and applications
Running Your First NGINX
• How it works:
• It uses an event-driven, asynchronous architecture to manage
multiple connections efficiently
• It can compress responses (for example with gzip) to save bandwidth
• It can send responses in chunks instead of the entire file at once
• Who created it:
• Russian developer Igor Sysoev created NGINX and released it in
2004
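The run step itself appears only as screenshots on the slides; a minimal sketch using the standard Docker CLI (my-nginx and host port 8080 are arbitrary choices) is:
docker run -d --name my-nginx -p 8080:80 nginx   # publish container port 80 on host port 8080
Browsing to http://localhost:8080 should then show the NGINX welcome page.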
Docker Repository
• Definition: A Docker repository is a collection of related Docker images, typically hosted on Docker Hub.
• Key Concepts:
• Public Repositories: Open to everyone (default on Docker Hub).
• Private Repositories: Restricted access for teams/projects.
• Example: The official NGINX repository can be found at:
https://hub.docker.com/_/nginx.
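For illustration, pulling from the public NGINX repository and pushing to a hypothetical private repository myorg/private-app might look like:
docker pull nginx                    # pull from the official public repository
docker login                         # authenticate before using a private repository
docker push myorg/private-app:1.0    # push an image to the private repository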
Docker Tags
• Definition:
•Tags are identifiers attached to Docker images to differentiate
between versions.
• Format: repository:tag (e.g., nginx:latest, myapp:v1.0).
• Default Tag:
• If no tag is specified, Docker defaults to the latest tag.
• Purpose:
• Facilitate version control and management of image libraries.
Docker Tag: Examples
•Common Tags:
• nginx:latest: The most
recent version of Nginx.
• mysql:5.7: Specific version
of MySQL.
• myapp:v1.2.3: Custom
application with a specific
versioning scheme.
Docker Tag Scenario
•Scenario: Managing Application Releases
• Use tagging to mark the different stages of application
development:
• myapp:dev: Development version.
• myapp:staging: Pre-production version.
• myapp:prod: Production version.
• Benefits:
• Simplifies rollback processes by allowing developers to revert to
previous versions.
Tagging Scheme
• Versioning Strategies:
• Semantic Versioning: Major.Minor.Patch (e.g., 1.0.0, 2.1.4).
• Date-Based Tags: Tags based on date (e.g., 2025-01-30).
• Best Practices:
• Maintain a consistent tagging pattern.
• Avoid using "latest" indiscriminately for production environments.
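As a sketch of semantic versioning in practice (myorg/myapp is a hypothetical repository name):
docker build -t myorg/myapp:1.2.3 .                # build with an explicit version tag
docker tag myorg/myapp:1.2.3 myorg/myapp:latest    # optionally alias it as latest
docker push myorg/myapp:1.2.3                      # push the pinned version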
Docker Images
• Definition:
•Immutable snapshots
containing the files and
settings needed to run
applications.
• Building Docker Images:
• Created from Dockerfiles using the command: docker build -t <name>:<tag> .
What Are Docker Layers?
• Definition:
• Docker layers are the building blocks of Docker images.
• Each layer represents a change made to the image, such as adding
files or installing software.
• Layering Concept:
• Layers in Docker are stacked on top of one another to form a
complete image, where the top layer is writeable, and all
underlying layers are read-only.
Layering Structure
• How Layers Work:
• When a Dockerfile is
processed, each command
generates a new layer.
• Layers are cached, allowing
unchanged layers to be
reused in subsequent
builds.
Benefits Of Layering
•Efficiency:
• Layers allow Docker to reuse existing images when building new ones, making builds
faster.
• Reduced Disk Space:
• Shared layers among multiple images save disk space effectively.
• Cache Mechanism:
• Docker caches layers to improve build times. If a layer hasn’t changed, it uses the cached
version rather than re-creating it.
• Example:
• If you change a single line in your Dockerfile, only that layer and the ones above it will
need to be rebuilt.
• This scenario is illustrated on the next slides.
Layering Inspections
• Viewing Layer Details:
• Use the docker history command to view the layers of a specific image.
• Command: docker history <image>:<tag>
• The output will show each layer, its size, and the command that created it.
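For example, inspecting the layers of the official NGINX image:
docker history nginx:latest
Each row of the output corresponds to one layer and includes the instruction that created it and its size.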
Layer Caching In Action
Example Dockerfile: breakdown of layers
• Layers:
• Layer 1: Base image (ubuntu:20.04)
• Layer 2: Result of RUN apt-get update
• Layer 3: Result of RUN apt-get install -y curl
• Layer 4: Result of COPY . /app
• Layer 5: Result of CMD ["bash", "/app/start.sh"]
• Impact:
• If you edit the content of /app, only Layer 4 and Layer 5 will change; the other layers will be reused.
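The example Dockerfile itself is shown only as an image on the slide; a sketch consistent with the layer breakdown above would be:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y curl
COPY . /app
CMD ["bash", "/app/start.sh"]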
Layer Modification And What Happens
• What Happens on
Change:
• Modifying an instruction in
a Dockerfile results in a
rebuild of that layer and all
subsequent layers.
• Given Example:
• Changing Layer 2 will
require rebuilding Layer 3,
Layer 4, and Layer 5.
Layer Management: Best Practices
• Combine Commands:
• Reduce the number of layers by combining RUN commands.
• For example,
• RUN apt-get update && apt-get install -y curl wget
• Minimize Layers:
• Remove unnecessary files and do not create intermediate files unless
needed.
• Order Matters:
• Place frequently changing commands (e.g., COPY) towards the bottom
to take advantage of caching.
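A sketch applying these practices to the earlier node:14 example, with the dependency install kept above the frequently changing source copy so its layer stays cached:
FROM node:14
WORKDIR /app
COPY package*.json ./                         # changes rarely, so the install layer is usually cached
RUN npm install && npm cache clean --force    # chained commands keep this to a single layer
COPY . .                                      # changes often; placed last so only this layer rebuilds
CMD ["node", "server.js"]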
Layers: Summary
• Docker layers optimize image creation and help in effective
caching.
• Understanding layers can significantly improve efficiency
and speed in the Docker build process.
• Managing layers thoughtfully can lead to smaller, faster, and
cleaner Docker images.
Dockerfiles
• Definition:
• A Dockerfile is a text file that contains a series of instructions used
to build a Docker image.
• Purpose:
• Automates the process of creating Docker images that package
applications and their dependencies.
• Simple Structure:
• Each line in a Dockerfile is a command that is executed in the
order defined in the file.
Dockerfile: Basic Structure
•Format:
• A Dockerfile consists of a
set of instructions followed
by key parameters.
• Explanation of the
Example:
• FROM: Base image.
• LABEL: Metadata about the
image (optional).
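The example referenced on the slide is image-based; a minimal illustration of this structure (the metadata values are placeholders) might be:
FROM nginx:latest
LABEL maintainer="team@example.com"
LABEL description="Example image with metadata"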
Dockerfile: Instructions
• FROM: Defines the base image.
• RUN: Executes commands on top of the previous layer.
• COPY: Copies files from the host filesystem into the image.
• ADD: Similar to COPY but can also extract TAR files and fetch remote
URLs.
• CMD: Specifies the default command to run when the container
starts.
• ENTRYPOINT: Configures the container to run as an executable.
• ENV: Sets environment variables.
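A sketch combining several of these instructions, reusing the node:14 example from earlier (server.js is assumed to exist in the build context):
FROM node:14                  # base image
ENV NODE_ENV=production       # environment variable baked into the image
WORKDIR /app
COPY package*.json ./         # copy files from the host into the image
RUN npm install               # executed on top of the previous layer
COPY . .
ENTRYPOINT ["node"]           # the container runs node as its executable
CMD ["server.js"]             # default argument passed to the entrypoint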
Dockerfile: Best Practices
•Use Official Base Images:
• Start from recognized base images when creating Dockerfiles for better
security and maintenance.
• Minimize Layers:
• Combine commands whenever possible to reduce the number of layers
and image size.
• Optimize RUN Commands:
• Use && to chain commands together, creating fewer layers.
• Use .dockerignore:
• To exclude unnecessary files from the build context, use a .dockerignore file.
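A hypothetical .dockerignore for the Node.js example could exclude files that are not needed inside the image:
node_modules
npm-debug.log
.git
*.md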
Managing Containers And Images
• Containers: Containers are lightweight, portable units that
package an application and all its dependencies (libraries,
configurations, etc.) into a single, unified package
• Images:
• A Docker image is a snapshot of a container's filesystem at a specific
point in time, which can be used to create containers
• Images are immutable and stored in a repository like Docker Hub
• Difference:
• Images are read-only templates used to create containers.
• A container is a running instance of an image and can be modified while it runs.
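A few standard CLI commands for day-to-day management (names in angle brackets are placeholders):
docker images                 # list local images
docker ps                     # list running containers
docker ps -a                  # include stopped containers
docker stop <container>       # stop a running container
docker rm <container>         # remove a stopped container
docker rmi <image>            # remove an image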
Docker Volumes
• What are Docker Volumes?:
• Volumes are persistent storage mechanisms for Docker
containers.
• They are stored outside the container filesystem, ensuring data
persistence even when containers are removed or recreated.
• Difference Between Volumes and Bind Mounts:
• Volumes are managed by Docker and offer better isolation
• Bind Mounts link specific files/directories from the host to the
container.
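A short sketch of both approaches (volume and path names are hypothetical):
docker volume create my_data                               # Docker-managed volume
docker run -d -v my_data:/usr/share/nginx/html nginx       # volume mounted into the container
docker run -d -v /host/site:/usr/share/nginx/html nginx    # bind mount linking a host directory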
Docker Volume Drivers
•What are Volume Drivers?:
• Volume drivers allow Docker to use external storage backends
for volumes.
• This enables the use of cloud storage, network storage (e.g.,
NFS), or custom storage solutions.
• Default Driver: Docker uses the local driver by default,
which stores volumes on the host filesystem.
Volume Driver Types
•local (Default Volume Driver)
• Stores data on the local filesystem of the Docker host
• Supports options like specific mount points, file system types, and
performance tuning
• docker volume create --driver local my_local_volume
• nfs (Network File System)
• Mounts a shared network file system as a volume
• Useful for sharing data across multiple containers and hosts
• Requires an NFS server
• docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.100,rw --opt device=:/path/to/nfs my_nfs_volume
Volume Driver Types
• tmpfs
• Stores data in memory instead of disk. Data is lost when the container stops
• Improves performance for temporary data storage
• docker run --tmpfs /app:rw,size=100m,mode=1777 my_container
• Rexray
• Provides storage management across multiple platforms like AWS EBS,
Azure Disk, and Google Persistent Disk.
• Supports dynamic volume provisioning.
• docker plugin install rexray/ebs #(elastic block store)
• docker volume create --driver rexray/ebs --name my_ebs_volume
• docker run -v my_ebs_volume:/data my_container
Volume Driver Types
•portworx
• Software-defined storage solution for Kubernetes and Docker
Swarm.
• Provides highly available storage with replication and snapshots.
• longhorn
• Lightweight, cloud-native distributed block storage for
Kubernetes.
• Provides snapshots, backups, and disaster recovery.
Volume Driver Types
•cinder
• Cinder is OpenStack’s block storage service, which allows containers to use
persistent storage across different hosts.
• It provides dynamic volume provisioning and high availability.
• Best suited for OpenStack cloud environments where applications require
persistent storage.
• cloudstor
• Cloudstor is a Docker volume plugin designed for AWS, Azure, and other cloud
platforms
• It automatically provisions cloud storage resources like EBS (AWS) or Azure Disk
Storage
• Optimized for Docker Swarm environments
Volume Driver Types: Cloud Services
• AWS EBS (Amazon Elastic Block Store)
• Driver: rexray/ebs
• Use Case: Persistent block storage for Docker containers on AWS
• Azure Disk Storage
• Driver: cloudstor:azure
• Use Case: Persistent storage for Azure virtual machines running
Docker.
Volume Driver Types: Cloud Services
• Google Persistent Disk (GCP)
• Driver: rexray/gcepd
• Use Case: Persistent storage for containers running in Google
Cloud
• DigitalOcean Block Storage
• Driver: rexray/do
• Use Case: Attach block storage volumes to Docker containers in
DigitalOcean.
Networking In Docker
• Why is Docker Networking Important?:
• Docker containers need to communicate with each other and
the outside world.
• Networking in Docker is used to manage how containers
connect to each other and the host system.
• Try:
• docker inspect <container_id>
• The above command shows detailed networking settings of the
container
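Two other standard commands worth trying alongside docker inspect:
docker network ls               # list the networks Docker knows about
docker network inspect bridge   # show details of the default bridge network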
Networking In Docker: Bridge Network
• Concept
• The bridge network is the default network that Docker assigns to
containers when no specific network is mentioned
• It is an internal virtual network that allows containers to
communicate with each other within the same Docker host
• The host system (or external network) can only access containers
through published ports
Networking In Docker: Bridge Network
• How It Works
• Each container gets an internal IP address and can
communicate with other containers on the same bridge network
using their container name
• By default, containers on a bridge network cannot communicate
with other bridge networks or the host unless explicitly configured
Networking In Docker: Bridge Network
• Implementation
• Creating the bridge network with a name
• docker network create my_bridge_network
• Attaching two containers to the created network
• docker run -d --name container1 --network my_bridge_network nginx
• docker run -d --name container2 --network my_bridge_network alpine ping container1
• Both containers can now reach each other by name, because DNS resolution is handled automatically within the same user-defined bridge network
• ping container1 (from the shell of container2)
Networking In Docker: Bridge Network
• Use Cases
• Suitable for multi-container applications running on the same
host
• Useful when you need internal container-to-container
communication
• Ideal for simple setups like Docker Compose applications
Networking In Docker: Host Network
• Concept
• The host network removes network isolation between the
container and the host
• The container directly shares the network stack of the host
machine
• There is no separate internal IP assigned to the container
Networking In Docker: Host Network
• How It Works
• Containers on a host network don’t get their own private IPs
• They use the host’s network interfaces instead
• This means the container can bind directly to the host’s ports
without port mapping
Networking In Docker: Host Network
• Implementation
• Run a container with host networking
• docker run --network host nginx
• You can now reach the web server directly on the host’s IP and
port 80
• Access the NGINX container
• http://localhost:80
• Remember that we usually need to expose and map ports to access a web service running in a container; host networking removes that need
Networking In Docker: Host Network
• Use Cases
• Best for performance-sensitive applications like media streaming
or VoIP
• Useful for monitoring agents that need access to host network
details
• Simplifies network configurations by removing the need for port
mappings
Networking In Docker: Overlay Network
• Concept
• An overlay network is used to enable container communication
across multiple hosts in a Docker Swarm cluster
• It abstracts networking to allow containers to talk to each other
regardless of which host they are running on
Networking In Docker: Overlay Network
• How It Works
• Overlay networks use VXLAN encapsulation to allow
communication between containers running on different
physical or virtual machines
• You must be in Swarm mode to create an overlay network
Networking In Docker: Overlay Network
• Implementation
• Initialize a Docker Swarm
• docker swarm init
• Create an overlay network
• docker network create -d overlay my_overlay_network
• Deploy a service using the overlay network
• docker service create --name my_service --network my_overlay_network
nginx
• Run another service that can communicate across hosts
• docker service create --name db --network my_overlay_network mysql
Networking In Docker: Overlay Network
• Use Cases
• Best for multi-host applications running on Docker Swarm
• Used for distributed microservices
• Ideal for high availability applications where containers run on
different machines
• Context
• If you’re deploying a microservices application with frontend,
backend, and database containers across multiple cloud nodes,
an overlay network ensures they can communicate seamlessly