
Getting Started with Docker

Unit 1
Docker is a very powerful containerization platform – not to mention
very popular. It is used extensively and provides a large number of
integrations, including GitHub, Jenkins, Kubernetes and Terraform. We
will be exploring the concepts and how best we can leverage Docker.

Presented by – Seshagiri Sriram


Why do we
need
Docker?

2 www.guvi.in
Shipping Transportation Challenges
Problem
• When goods are transported, they have to pass through a variety of different means of transport, e.g. trucks, forklifts, cranes, trains, and ships.
• These means have to be able to handle a wide variety of goods of different sizes and with different requirements (e.g. sacks of coffee, drums of hazardous chemicals, boxes of electronic goods, fleets of luxury cars, and racks of refrigerated lamb).
• This was a cumbersome and costly process, requiring manual labor, such as dock workers, to load and unload items by hand at each transit point

3 www.guvi.in
Shipping Transportation Challenges
Solution
• The transport industry was revolutionized by the introduction of the intermodal container.
• These containers come in standard sizes and are designed to be moved between modes
of transport with a minimum of manual labor.
Advantages :
• All transport machinery is designed to handle these containers, from the forklifts and
cranes to the trucks, trains, and ships.
• For example, refrigerated and insulated containers are available for transporting temperature-sensitive goods, such as food and pharmaceuticals.
• The benefits of standardization also extend to other supporting systems, such as the
labeling and sealing of containers.
• This means the transport industry can let the producers of goods worry about the
contents of the containers so that it can focus on the movement and storage of the
containers themselves.

4 www.guvi.in
Relevance in IT Industry

Current IT Industry Problem :


• We have a similar issue to the one seen by the transport industry—we have to
continually invest substantial manual effort to move code between
environments.
• A typical modern system may include JavaScript frameworks, NoSQL databases, message queues, REST APIs, and backends, all written in a variety of programming languages
• This stack has to run partly or completely on top of a variety of hardware—from
the developer’s laptop and the in-house testing cluster to the production cloud
provider.

5 www.guvi.in
Relevance in IT Industry
Solution
Much as the intermodal containers simplified the transportation of goods, Docker
containers simplify the transportation of software applications.
Developers can concentrate on building the application and shipping it through
testing and production without worrying about differences in environment and
dependencies.
Operations can focus on the core issues of running containers, such as allocating
resources, starting and stopping containers, and migrating them between servers.

6 www.guvi.in
How Does Docker Fit Into Devops Eco-System
Docker forms a powerful combination with Jenkins:
The combination of Jenkins and Docker is proving to be very valuable for DevOps teams.
By leveraging the tight integration with source code control mechanisms such as Git, Jenkins can initiate a build process each time a developer commits code. This process results in a new Docker image which is instantly available across environments.
Organizations are deploying private Docker registries to publish and maintain their internal Docker images

Docker makes it easier to test exactly what you deploy:


Docker containers encourage a central tenet of continuous delivery – reuse the same binaries at each step of the pipeline to ensure no errors are introduced in the build process itself. Configuration management tools are complex and complicated to manage, and most of the problems stem from their integrations.
Example: You have to make sure that you are using the same libraries and other important elements every time.
Making sure that all those environments are perfectly aligned can be a lot of work.
Docker solves that because you are starting with a given base image, and it’s guaranteed to be the same image
everywhere.
7 www.guvi.in
How Does Docker Fit Into Devops Eco-System
Docker has revolutionized software packaging and deployment:
Instead of deploying the final set of artifacts such as EXE and JAR files to the target environment, ops teams are now
packaging the entire application as a Docker Image.
This image shares the same build version before getting published to a central registry.
It is then picked up by various environments – development, testing, staging, and production – for final deployment

Docker containers provide the basis for immutable infrastructure:


Applications can be added, removed, cloned, and/or have their constituents changed without leaving any residue behind.
Whatever mess a failed deployment can cause is constrained within the container.
Deleting and adding become so much easier that you stop thinking about how to update the state of a running
application.

8 www.guvi.in
What is
Docker ?

9 www.guvi.in
Docker
Docker is a containerization platform which packages your application and all its dependencies together in the form of containers, so as to ensure that your application works seamlessly in any environment, be it development, test, or production.

(Diagram: Container 1 with App 1 + Bins/Libs and Container 2 with App 2 + Bins/Libs both run on the Docker Engine, which runs on the Host OS.)

10 www.guvi.in
Containers v/s Virtual Machines

11 www.guvi.in
Virtualization; Adopted By VMs
Virtualization Technique: each VM runs an App with its own Bins/Libs on a Guest OS; the Guest OSes run on a Hypervisor, which runs on the Host OS.

Advantages
▪ Multiple OS On The Same Machine
▪ Easy Maintenance & Recovery
▪ Lower Total Cost Of Ownership

Disadvantages
▪ Multiple VMs Lead To Unstable Performance
▪ Hypervisors Are Not As Efficient As The Host OS
▪ Long Boot-Up Process ( Approx. 1 Minute )

12 www.guvi.in
Containerization
Note: Containerization Is Just Virtualization At The OS Level

Containerization Technique: each container runs an App with its own Bins/Libs on a Container Engine, which runs directly on the Host OS.

Advantages Over Virtualization
▪ Containers On The Same OS Kernel Are Lighter & Smaller
▪ Better Resource Utilization Compared To VMs
▪ Short Boot-Up Process ( About 1/20th Of A Second )

13 www.guvi.in
Benefits of Docker over VM’s

14 www.guvi.in
VM vs. Docker

15 www.guvi.in
Resource/Memory Utilization
In case of Virtual Machines (total memory: 16 GB)

VM 1: 6 GB allotted, 4 GB used, 2 GB wasted
VM 2: 4 GB allotted, 3 GB used, 1 GB wasted
VM 3: 6 GB allotted, 2 GB used, 4 GB wasted

🡪 Memory used: 9 GB
🡪 Memory wasted: 7 GB

7 GB of memory is blocked and cannot be allotted to a new VM

16 www.guvi.in
Resource/Memory Utilization
In case of Virtual Machines (total memory: 16 GB)
VM 1: 6 GB allotted, 4 GB used, 2 GB wasted
VM 2: 4 GB allotted, 3 GB used, 1 GB wasted
VM 3: 6 GB allotted, 2 GB used, 4 GB wasted
🡪 Memory used: 9 GB
🡪 Memory wasted: 7 GB
7 GB of memory is blocked and cannot be allotted to a new VM

In case of Docker (total memory: 16 GB)
App 1: 4 GB allotted
App 2: 3 GB allotted
App 3: 2 GB allotted
🡪 Memory used: 9 GB
Only 9 GB of memory is utilized; the remaining 7 GB can be allotted to a new container

17 www.guvi.in
Building and Deployment
In case of Virtual Machines: each build (Build 1 with Ruby v1, Build 2 with Ruby v2) needs its own Bins/Libs and its own Guest OS on top of the Host OS kernel.
New Builds 🡪 Multiple OS 🡪 Separate Libraries 🡪 Heavy 🡪 More Time

In case of Docker: both builds (Ruby v1 and Ruby v2) run as containers sharing the Host OS kernel, each carrying only its own Bins/Libs.
New Builds 🡪 Same OS 🡪 Separate Libraries 🡪 Lightweight 🡪 Less Time

18 www.guvi.in
Integration in VMs

Integration In Virtual Machines Is Possible, But:


▪ Costly Due To Infrastructure Requirements
▪ Not Easily Scalable

(Diagram: Jenkins running inside virtual machines)

19 www.guvi.in
When to use what?

•Containers allow you to run more applications on a physical machine than VMs.
•If resources are a constraint, containers may be a better choice.
•With containers you can create a portable, consistent operating environment for development, testing, and deployment.
•From a security viewpoint, important subsystems like SELinux are outside the scope of containers;
•therefore a person with super-user privileges in a container can, in theory, undermine the underlying operating system.
•There is no easy way to make a packaged box handle all upstream dependencies.
•Incorrect packaging is harder to debug and resolve, thus increasing time spent in QA activities.

20 www.guvi.in
When to use what?

•What is the scope of your work? In our example, we have only Jenkins.
•If we needed to run other applications, it makes more sense to use a VM over
a container.
•Do you plan to use this instance of Jenkins only on one OS? If yes, proceed
with a container over a VM.
•The rules of thumb are:
•If planning to use multiple instances of an application over a minimum number of servers, containers are the best option.
•If planning to run multiple applications with greater requirement for security,
then use VM.
•In all cases, a proper Cost Benefit Analysis process is required to
decide which option is best for your organization.

21 www.guvi.in
Docker – A DevOps Ally

Integration in Docker is Faster, Cheap & Easily Scalable

(Diagram: Jenkins running inside Docker containers)

22 www.guvi.in
Architecture

23 www.guvi.in
Docker Architecture

The Client (docker build, docker pull, docker run) talks to the Docker daemon running on the Docker host; the daemon manages Containers and Images on the host and pulls images from, and pushes them to, a Registry.

24 www.guvi.in
Docker Daemon

▪ The Docker daemon runs on a host machine.


▪ The user uses the Docker client to interact with the daemon.

Why Use Docker Daemon?


▪ Responsible for creating, running, and monitoring containers
▪ Building and storing images

25 www.guvi.in
Docker Client

▪ The Docker client is the primary user interface to Docker.


▪ It accepts commands and configuration flags and communicates with a Docker
daemon via HTTP.
▪ One client can even communicate with multiple unrelated daemons.

Why Use The Docker Client?


▪ Since all communication is done over HTTP, it is easy to connect to a remote Docker daemon
▪ The API used for communication with the daemon allows developers to write programs that interface directly with the daemon, without using the Docker CLI

26 www.guvi.in
Docker Registry

▪ Docker Registry is a storage component for Docker Images


▪ We can store the Images in either Public / Private repositories
▪ Docker Hub is Docker’s very own cloud repository

Why Use Docker Registries?


▪ Control where your images are being stored
▪ Integrate image storage with your in-house development workflow

27 www.guvi.in
Private or Public Registry?

▪ When choosing between these, some points to consider are:


•Performance, depending mainly on roll-out frequency and cluster size.
•Security issues such as access control and digitally signing Docker images.
▪ With a private registry, you
▪Are in full control.
▪Have no external dependencies in the CD pipeline, so your builds really do run faster (or appear to be faster).
▪Have to manage storage yourself – and this can increase drastically as the adoption of DevOps and the number of builds increases.

▪ Before making any decisions,

▪Evaluate how many builds/images are being pushed and the average increase in size.
▪Also factor in growth rates for your applications/builds.
▪These will give you metrics for determining network bandwidth and storage requirements.

28 www.guvi.in
Which Public Registry?

▪ Factors for making a selection include:


▪ Costs for storage and support for number of images
▪ Workflow support including integration with your version control
system
▪ Some of them, like CoreOS's Quay.io, fully support integration with Git.
▪ You should not have to drastically change your workflow to
support external registries.

29 www.guvi.in
Docker Images & Containers

An image is run to produce a container.

Docker Images
▪ Read-Only Template Used To Create Containers
▪ Built By Docker Users
▪ Stored In Docker Hub Or Your Local Registry

Docker Containers
▪ Isolated Application Platform
▪ Contains Everything Needed To Run The Application
▪ Built From One Or More Images

30 www.guvi.in
Docker Architecture In Action

Step 1 – build: the client issues docker build; the Docker daemon builds an image and stores it on the Docker host.

31 www.guvi.in
Docker Architecture In Action

Step 2 – pull: the client issues docker pull; the daemon pulls the requested image from the registry onto the Docker host.

32 www.guvi.in
Docker Architecture In Action

Step 3 – run: the client issues docker run; the daemon creates and starts a container from the image on the Docker host.

33 www.guvi.in
Docker’s Plugins
And Plumbing

34 www.guvi.in
Docker - Plugin’s And Plumbing
The Docker engine and the Docker Hub do not in-and-of themselves constitute a complete solution for working with
containers

Docker API
✔ API level allowing components to hook into the Docker Engine

Docker Compose
✔ Tool for building and running applications composed of multiple Docker containers
✔ Used in development and testing rather than production

Docker Machine
✔ Installs and configures Docker hosts on local or remote resources
✔ Machine also configures the Docker client, making it easy to swap between environments

Docker Kitematic
✔ Kitematic is a Mac OS and Windows GUI for running and managing Docker containers

35 www.guvi.in
Docker - Plugin’s And Plumbing
Docker Trusted Registry
✔ Docker’s on premise solution for storing and managing Docker images
✔ A local version of Docker Hub that can integrate with an existing security infrastructure and helps
organizations comply with regulations regarding the storage and security of data
✔ Only non–open source product from Docker Inc.

Docker Swarm – Docker’s Clustering Solution


✔ Used to group several Docker hosts, allowing the user to treat them as a unified resource

Orchestration and cluster management


✔ In large container deployments, tooling is essential in order to monitor and manage the system
✔ Each new container needs to be placed on a host, monitored, and updated
✔ The system needs to respond to failures or changes in load by moving, starting, or stopping containers
appropriately.
✔ Several competing solutions in the area, including Kubernetes from Google

36 www.guvi.in
Docker Topics

37 www.guvi.in
Docker Topics

38 www.guvi.in
Working with
Docker Images

39 www.guvi.in
What is an Image?
✔ An image is built from a text file containing a set of instructions, usually called a Dockerfile

✔ Docker images are made up of multiple layers, each of which is a read-only filesystem

✔ A layer is created for each instruction in a Dockerfile and sits on top of the previous layers

✔ When an image is turned into a container, the Docker engine takes the image and adds a read-write filesystem on top (as well as initializing various settings such as the IP address, name, ID, and resource limits)

Dockerfile 🡪 Image 🡪 Container

40 www.guvi.in
Few Basic Commands
✔ Few Basic Commands:

• $ docker help :- Displays all the useful commands for Docker and other general help commands
• $ docker images :- Displays a list of existing images in Docker system. It also displays the following details:

• REPOSITORY: Name of the repository

• TAG: Every image has an attached tag.

• IMAGE ID: Each image is assigned a unique ID

• CREATED: The date when the image was created

• SIZE: The size of the image

41 www.guvi.in
Few Basic Commands
✔ $ docker ps :- Displays the list of active containers. It also displays the following details:

• CONTAINER ID: Each container is assigned a unique ID

• IMAGE: The image the container was created from

• COMMAND: The command the container is running

• CREATED: The date when the container was created

• STATUS: This shows whether the container is active or not

• PORTS: The exposed ports (needed for networking)

• NAMES: Name of the container, automatically assigned by Docker in the form adjective_surname (e.g. pedantic_jepsen)

✔ $ docker ps -a :- Displays the list of all containers, whether currently running or exited
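A quick illustration of both commands (the image ID below is made up; the container ID matches the hello-world example later in the deck):

$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
hello-world   latest   d1165f221234   2 weeks ago   13.3kB

$ docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED         STATUS                     PORTS   NAMES
7a45c308c3fa   hello-world   "/hello"   3 minutes ago   Exited (0) 3 minutes ago           pedantic_jepsen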

42 www.guvi.in
Create Our First Image: Hello-World
✔ The following actions will be performed to pull the image:

• Search for images which start with “hello” word from Docker hub

• Select the image which we are going to pull

• Pull the selected image from Docker Hub

• Search for the copy of the image in our local repository

• Initiate a container based on “Hello-world” image

• Examine container Details

43 www.guvi.in
Search – Hello World
✔ Command: $ docker search hello

✔ Command: $ docker pull hello-world

44 www.guvi.in
Image Pull – Hello World
✔ Command: $ docker images
Result Details:

• REPOSITORY: This is our local repository

• TAG: Since we did not specify the Tag in our command, we got the latest version

• IMAGE ID: This is the ID of the image

• CREATED: This is the time when the image was created

• SIZE: This is the size of the image

45 www.guvi.in
Image History
✔ Image History:
• Can get the history of the base image to understand how the base image was built
• Can see the different layers that were used during the build of image
• Gives ImageID, command that created particular layer and the size of the layer

✔ Command: $ docker history ImageName

46 www.guvi.in
Image History – Why
• Psychologically users tend to use the latest images. But sometimes we may need to use older
images for following reasons.
• Supporting clients who have not yet been migrated to newer technologies. Assume Client
A is requiring Library Version 1.0 which was released around January 2013. An image built
around this time is more likely to be supporting Library Version 1.0
• History also allows us to see if a particular image (esp. from a 3rd party) is even under active development/support.
• As an example, if an image was last built in January 2012, it is unlikely we will consider
this as a base image for our development.

47 www.guvi.in
Tagging
✔ Tagging:
• Each image has a default tag associated with it
• The default tag is set by the image maintainer
• Command: $ docker tag <Image ID or Image Name> <Name of the desired tag>
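For example (the repository and tag names here are only placeholders):

$ docker tag centos:latest myrepo/centos:v1
$ docker images    # the same IMAGE ID now appears under both names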

48 www.guvi.in
Tagging – Types and best Practices
• Docker uses the term tagging to refer either to a label applied to an image (e.g. with -t) or to the string appended to the image name (e.g. jenkins:latest). The latter is usually referred to as a version tag.
• For version tags, there are really no clear-cut best practices
• Usually, practices for versioning in Git are applied as-is to Docker tagging also.
• Docker's automated builds let a user link a "version tag" to either a branch or a tag in the Git history.
• A "branch" in this case can refer either to a different Git branch or merely a different sub-directory.
• Matching to a Git tag provides the most clear-cut use of the Docker version tag, providing a relatively static, version-stable link. (Food for thought: Is this a good practice or not?)
• Using the version tag to indicate any other difference is a widespread practice but with no clear use case, except for supporting multiple Dockerfiles in the same repository.

49 www.guvi.in
Image Distribution - Repositories
✔ What is a Repository?
• A collection of images.
• There can be three kinds of repositories
• Local: Saved on the system. All the images which are pulled from a Public or Private repository get saved in the local repository
• Private:
• Can get one free from Docker
• If you need other private repositories then you need to pay
• Requires Username and Password
• Public (Docker Hub):
• Need to sign up. We will be discussing this in detail

50 www.guvi.in
Hierarchy For Image Storing
✔ There is a hierarchical system for storing images, where the following terminology is used:

Registry
A service responsible for hosting and distributing images. The default registry is the Docker Hub.

Repository
A collection of related images (usually providing different versions of the same application or service).

Tag
An alphanumeric identifier attached to images within a repository (e.g., 14.04 or stable).

51 www.guvi.in
Best practices for Image Storing
✔ As far as possible, use namespaces (We will be dealing with namespaces
later on)
✔ Version tags, as far as possible, should be mapped to Git branching tags and not used for indicating other differences.
✔ Clearly indicate, for personal images, the use case for the image, e.g. dev, qa, production.
✔ Assume you are the person responsible for pushing images to the repository. Automate the process of pushing images to the repository, which involves:
✔ Push images
✔ Remove images locally
✔ Re-pull images (Sounds close enough to Git best practices?)

52 www.guvi.in
Pushing Images to Docker Hub
✔ Step 1: To push images to Docker Hub, first log in to Docker Hub

✔ Step 2: Create a new public repository with your name

✔ Step 3: Make sure your local image's name is the same as the Docker Hub repo's name

✔ Step 4: If it is not the same, tag your local image to give it the same name as the repository you created on Docker Hub using the command: $ docker tag <Local image name> <Docker Hub repo name>

✔ Step 5: Now, to push the image to Docker Hub, use the command: $ docker push <Docker Hub repo name>

53 www.guvi.in
Pushing Images to Docker Hub(Contd..)

54 www.guvi.in
Pushing Images to Docker Hub(Contd..)

55 www.guvi.in
Pushing Images to Docker Hub(Contd..)
✔ Image is pushed to Docker hub using command: $ docker push
seshagirisriram(username)/sriram_hello_world(Name of image)

56 www.guvi.in
Image Namespaces
✔ Namespaces:
• Namespacing ensures users cannot be confused about where images have come
from
• Example: If using the Centos image, it is the official image from Docker Hub and not
some other registry’s version of Centos image
✔ Following are three namespaces pushed Docker images, which can be identified from the
image name:
• Names Prefixed With A String:
• Names prefixed with a string and a /, such as docker/nginx, belong to the "user" namespace
• These are images on Docker Hub that have been uploaded by a given user
• Example: docker/nginx is the nginx image uploaded by the user docker
57 www.guvi.in
Image Namespaces
• Simple Names:
• Names such as Debian and Ubuntu, with no prefixes or /s, belong to “root” namespace
• There are official images for most common software packages, which should be your first port
of call when looking for an image to use

• Names Prefixed With Hostname or IP:


• Names prefixed with a hostname or IP are images hosted on third-party registries (not the
Docker Hub)
• These include self - hosted registries for organizations, as well as competitors to the Hub, such
as quay.io
• Example: localhost:5000/wordpress refers to a WordPress image hosted on a local registry

58 www.guvi.in
Image Selection – Base Image
✔ Base Images
• When creating your own images, you will need to decide which base
image to start from
• The best-case scenario is that you don’t need to create an image at all: you can just use an existing one and mount your configuration files and/or data into it
• This is likely to be the case for common application software, such as databases
and web servers, where there are official images available
• In general, you are better off using an official image than rolling your own

59 www.guvi.in
Image Selection – Base Image (Contd..)
✔ Benefits of Using Base Images
• You get the benefit of other people’s work and experience in figuring out
how best to run the software inside a container

✔ If Base Image Is Not Available


• If there is a particular reason an official image doesn’t work for you,
consider opening an issue on the parent project, as it is likely others are
facing similar problems or know of workarounds.

60 www.guvi.in
Containers

62 www.guvi.in
What we will cover
✔ Attaching to a container

✔ Container Life Cycle:

• Initiating a container

• Examining existing containers

• Naming a container

• Attaching to a container

• Stopping a container

• Restarting a container

• Removing a container

✔ Inspect our existing containers and look for important information

63 www.guvi.in
Connection Modes
✔ Container can be connected in the following two modes:

Connection Modes

Detached Mode Root User Mode

64 www.guvi.in
Detached Mode Vs Root Mode
Detached Mode
✔ Command: $ docker run -itd ubuntu:xenial
• i – interactive
• t – connected to a terminal
• d – detached mode

Root User Mode
✔ Command: $ docker run -it ubuntu:xenial
• i – interactive
• t – connected to a terminal

65 www.guvi.in
Detached Mode Vs Root Mode(Contd..)
Detached Mode
• User manages from the daemon
• Container does not exit after the process within the container is over
• Container could be stopped at a later stage though
• Using the $ docker attach command the user can attach as a root user
• Allows control over other containers

Root User Mode
• User manages from the root
• Container exits after the process within the container is over
• Container can be restarted at a later stage though
• Using the exit command the user can detach and return to the daemon
• Allows control over the container to which the user is attached
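A quick sketch of both modes, using the ubuntu:xenial image from the previous slide (the prompt's container ID is illustrative):

$ docker run -itd ubuntu:xenial      # detached: prints the new container ID and returns to your shell
$ docker run -it ubuntu:xenial       # root user mode: drops you into a shell inside the container
root@3f2b1c0a9d8e:/# exit            # exiting the shell stops this container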

66 www.guvi.in
Outputs in connected modes
✔ Command: $ docker ps -a

67 www.guvi.in
Selection between Connection Modes
✔ Containers started in detached mode exit when the root process used to run the container exits.
✔ A container in detached mode cannot be automatically removed when it stops.
✔ If this is not your use case and you do wish to automatically remove containers, then you would not use the detached mode option.
✔ Two examples do not use detached mode. One is the use of the Docker plugins in Jenkins, where we want to remove the containers after a Jenkins job is executed. This runs in root user mode, attaching to the foreground and pretending to be a pseudo-terminal, and on completion closes the container.
✔ Another example is the starting of a service. Using "service nginx start" with the -d option starts the nginx server, but this cannot be used as-is, since the container stops after the command has executed.
✔ Detached mode is used when you want to run one-off commands and (possibly) have the commands send their output to some shared data volume for processing later.
68 www.guvi.in
Docker File System - Initiate A Container
✔ Command: docker run hello-world:latest

Container can be initiated using both the “Image name” as well as “Image ID” along with the
required tag

69 www.guvi.in
Examine Existing Containers
✔ Command: docker ps

✔ Question: What happened to the container we initiated from our image “hello-world”?

70 www.guvi.in
Examine Existing Containers
✔ Command: docker ps

Question: What happened to the container we initiated from our image “hello-world”?

✔ Command: docker ps -a

Display inference:
The container named “pedantic_jepsen”, with CONTAINER ID 7a45c308c3fa, was created 3 minutes ago from the hello-world image and exited 3 minutes ago, as soon as its processes had executed.

71 www.guvi.in
Naming Containers
✔ Docker assigns default names to containers. The usual format is adjective_surname (e.g. pedantic_jepsen).
✔ However, Docker gives us the privilege to name our containers. This can be done with the --name option.
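For example (the container name here is just a placeholder):

$ docker run -itd --name my-ubuntu ubuntu:xenial
$ docker ps    # the NAMES column now shows my-ubuntu instead of an auto-generated name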

72 www.guvi.in
Getting Attached To a Container
✔ When we are running containers in Detached mode (Daemonised Mode) we
can still attach to the container if required.
✔ Use the docker attach command to do so.

73 www.guvi.in
Stopping a Container: Default Mode
✔ When a container is initiated in detached mode, it keeps running. It can be stopped in the following two ways:
• Get attached to the root and exit
• Stop the container using the "stop" command

By default, a container exits as soon as its processes have executed

74 www.guvi.in
Restarting A Container
✔ Start a stopped container by using the "start" command: $ docker start <container name or ID>
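A minimal life-cycle sketch, assuming the container name from the earlier example:

$ docker stop my-ubuntu      # stop the running container
$ docker start my-ubuntu     # start it again
$ docker attach my-ubuntu    # re-attach to the running container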

75 www.guvi.in
Inspect Containers
✔ Inspect containers to view important information regarding config, IP address, etc. Both running and stopped containers can be inspected with the command: $ docker inspect <container name or ID>
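For example, assuming the container name from the earlier slides, the full JSON can be filtered down to a single field:

$ docker inspect my-ubuntu                                                # full JSON output
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-ubuntu   # just the IP address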

76 www.guvi.in
Why would we want to inspect containers?
✔ The basic usage of inspecting containers is
✔ to verify that everything is fine
✔ debug issues (if any)
✔ The inspect command is used to inspect
✔ Containers
✔ Networks
✔ Nodes
✔ ……..
✔ A classical usage of this command is in the Docker Pipeline plugin of Jenkins. This is used with Docker Swarm to inspect all the nodes in a Swarm that can be used to run the jobs
✔ As a very advanced use case, the output of the docker inspect is used as an input to
data visualizers to present your organization with real time data on containers, networks
and nodes used in your docker infrastructure.
77 www.guvi.in
Removing Images: Use Cases

Use Case 1: Without Associated Containers
Use Case 2: With Associated Containers
Use Case 3: Removing by Force

78 www.guvi.in
Removing Images: With Associated Containers
✔ Removing images with attached containers:

• Docker will not allow you to remove an image if there are containers attached to the image

• The running containers will have to be removed first, and then the image can be removed

• Using the -f flag, the image can be removed forcefully, making the associated containers orphans, which is not advisable though

79 www.guvi.in
Removing Images: Without Associated Containers
✔ Can use the command: $ docker rmi <Image Name/ID>

80 www.guvi.in
Removing Images: Removing by force
✔ Docker allows you to remove images with associated containers forcefully. This is not recommended though.

✔ Can use the -f flag to perform this action.
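A short sketch of all three cases (the image name is only an example):

$ docker rmi hello-world:latest      # fails if a container still references the image
$ docker rm <container ID>           # remove the container first, then retry the rmi
$ docker rmi -f hello-world:latest   # force removal (not recommended)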

81 www.guvi.in
When to use remove with –f option?

• Use this very rarely and only for images that have containers.
• As a best practice, containers should be disposed of when they are done.
• If this is not possible, you will need to script the disposal of the containers before removing the image.
• REITERATED WARNING – use of the -f option is never a good option, unless you know that the image itself warranted removal because of
•Poor performance
•Security issues

82 www.guvi.in
Docker
Networking

83 www.guvi.in
Default Networks
✔ When Docker is installed, it creates three networks automatically, which can be listed using the command: $ docker network ls

84 www.guvi.in
Default Networks - Types
✔ Bridge Network:
• The bridge network represents the docker0 network present in all Docker installations
• Docker daemon connects containers to this network by default.

✔ Host Network:
• The host network adds a container to the host's network stack.
• You'll find the network configuration inside the container is identical to the host's.

✔ None Network:
• The none network adds a container to a container-specific network stack. That
container lacks a network interface.

85 www.guvi.in
Default Bridge Network Details
✔ With the exception of the bridge network, you really don’t need to interact with
these default networks.
✔ While you can list and inspect them, you cannot remove them. They are
required by your Docker installation.
✔ Command : $ docker network inspect bridge
✔ The Engine automatically creates a Subnet and Gateway to the network.
✔ Any new containers get added to this network

86 www.guvi.in
Default Bridge Network Details

87 www.guvi.in
When to use the bridge network?

•If you have a set of containers,


•each providing micro-services to the
other (and)
•Should not be exposed to the external
world
•The bridge network is the best
choice.
•This is typically used with layered
architectures.
•Be Aware that services in these
containers are not exposed to
outside networks.

88 www.guvi.in
When to use the user defined bridge network?

•Taking the same example from above, if you have an application that needs to
expose part of the network and not all, then we use custom bridge networks that
expose and publish custom ports.

•Again a tiered architecture, where you want to connect to a web application,


which in turn connects to backend application or database services is an
example of same – which is standard web application deployment practice

•A bridge network is useful in cases where you want to run a relatively small
network on a single host.

•You can, however, create significantly larger networks by creating an overlay


network.
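A minimal sketch of such a user-defined bridge (network, container, and image names are placeholders):

$ docker network create --driver bridge my-app-net
$ docker run -d --name backend --network my-app-net redis:latest
$ docker run -d --name web --network my-app-net -p 80:80 nginx:latest   # only the web tier is published to the outside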

89 www.guvi.in
When to use the user defined bridge network?

90 www.guvi.in
Exposing Ports
✔ Ports are exposed in the container so that the container can be reached using the container IP. We can connect to the HTTP address of that port.
✔ We are going to direct the port that is listening for HTTP on a container to an underlying port on our host
✔ We can redirect the exposed ports to the host ports.
✔ There are two ways of exposing the ports – with the flags -P or -p
✔ -P: For any ports that are exposed by the container, a random port between 32768 and 65000 will be made available on the host machine. These are the ports available for Docker to pick randomly
✔ The available ports can be seen in different ways
✔ Command to connect to a container: $ docker run -d --name nginx-demo -P nginx:latest

91 www.guvi.in
Exposing Ports(Contd..)
✔ Two ways to connect to a container:
• Connect if the IP address is known and I am on the host machine
• I am on the network of my host machine and know the IP address and port of the container.
✔ It can also be obtained through the docker port command: $ docker port nginx-demo $CONTAINERPORT

92 www.guvi.in
Exposing Ports: Use Case

Exposing Ports

Free Ports Binding Ports

93 www.guvi.in
Connection Through Host Port

94 www.guvi.in
Getting Ports – Binding Ports

✔ What if I want to bind it to a specific port on my server rather than any value which is picked from a range? $ docker run -d -p 8080:80 nginx:latest
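Putting -P and -p side by side (the host port shown for -P is illustrative, since Docker picks it at random):

$ docker run -d --name nginx-demo -P nginx:latest        # Docker picks a random high host port
$ docker port nginx-demo 80                               # e.g. 0.0.0.0:32768
$ docker run -d --name nginx-web -p 8080:80 nginx:latest  # bind container port 80 to host port 8080
$ curl http://localhost:8080                              # reaches the nginx container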

95 www.guvi.in
Free or Binding ports?

•It is generally preferred to use binding ports, esp. for services that are external facing and do not need to change, e.g. a web server needs to be exposed on port 80.

•It is also used (indirectly) as a proxy so that end users do not need to remember non-standard ports like 8081 (technically, firewall rules do not need to be tweaked much).

96 www.guvi.in
Docker
Volumes

97 www.guvi.in
Docker - Volumes
✔ Volumes are files or directories that are directly mounted on the host and not part of the normal union file system
✔ Docker filesystems are temporary by default
✔ You can create, modify, and delete files as you wish
✔ If the container is removed and a new one started from the image, all the changes will be lost: any files you previously deleted will now be back, and any new files or edits you made won't be present
✔ Docker data volumes allow you to store data in a separate place which can then be used as a normal folder - all changes are persistent
✔ Command: docker run -ti -v /hostLog:/log ubuntu
✔ Run a second container, where the volume can be shared: docker run -ti --volumes-from firstContainerName ubuntu
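A small sketch of sharing data between the two containers above (the container name and file are just examples):

$ docker run -ti -v /hostLog:/log --name first ubuntu
# inside the container: echo hello > /log/test.txt, then exit
$ docker run -ti --volumes-from first ubuntu cat /log/test.txt   # prints hello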

98 www.guvi.in
Volumes – Use Cases

Docker Data Volumes: Use Cases

Use Case 1: To keep data around, even through container restarts
Use Case 2: To share data between the host filesystem and the Docker container
Use Case 3: To share data with other Docker containers

99 www.guvi.in
Volumes – Use Cases
✔ There's no way to directly create a "data volume" in Docker - Instead
create a data volume container with a volume attached to it. For any
containers to connect to the data volume container, need to use the
Docker's –volumes from option to grab the volume from this
container and apply them to the current container
✔ Commands:
To Keep Data Persistent
• # Create the data volume Container
Share Data Between Host
and Container docker create -v /tmp --name datacontainer Ubuntu
Share Data With Other
• # The above creates a container called data container in the
Containers
directory /tmp.
• # use it..
docker run -t -i --volumes-from datacontainer ubuntu
/bin/bash
• # from now on any data written to /tmp is persisted
100 www.guvi.in
Volumes – Use Cases

✔ Example:
docker run -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 -i nginx
✔ In the above example, the folder ~/nginxlogs on the host machine is mapped to /var/log/nginx in the container
✔ Writes to /var/log/nginx in the container will be reflected in ~/nginxlogs on the host
✔ Similarly, writes to ~/nginxlogs on the host will be reflected back in /var/log/nginx in the container

101 www.guvi.in
Volumes – Use Cases
• In addition to mounting a host directory in your container, some Docker volume plugins allow you to provision and mount shared storage, such as iSCSI, NFS, or FC.
• A benefit of using shared volumes is that they are host-independent.
• A volume can be made available on any host that a container is started on, as long as it has access to the shared storage backend and has the plugin installed.
• The --volumes-from option is used to share information between services.

102 www.guvi.in
Volumes – Use Cases
✔ Other Commands:

# Example of using flocker
docker run -d -P --volume-driver=flocker -v my-named-volume:/webapp --name web training/webapp python app.py

# (or) Create a volume
docker volume create -d flocker -o size=20GB my-named-volume

# use it..
docker run -d -P -v my-named-volume:/webapp --name web training/webapp python app.py

103 www.guvi.in
Which is the best way?

• Mounting host directories is usually used.


• Any write inconsistencies have to be managed by applications themselves – not
DOCKER.
• For larger data volumes, docker has plugins to manage NFS or other shared
storage volumes.
• The factors determining which ones to choose will depend on
• Your volume requirements
• Cost of solution

104 www.guvi.in
Docker Files

105 www.guvi.in
Docker File
✔ The Dockerfile is the basic building block of Docker containers
✔ A Dockerfile is a file with a set of instructions written in it. It forms the basis for any image in Docker.
✔ Almost every time, an image is going to be based upon another image. You are going to pick a base image and build upon that image.

106 www.guvi.in
Docker File – Creation Steps
✔ Step 1: Create a directory
✔ Step 2: Create a file named 'Dockerfile' inside that directory
✔ Step 3: Input the set of instructions
✔ Step 4: Save the file
✔ Step 5: Build the image

107 www.guvi.in
Docker File Main Sections
✔ Following are the main sections of a Dockerfile:
• FROM and MAINTAINER

• How to RUN commands

• How to set up ENV (environment variables)

• Difference between CMD vs RUN

• How to EXPOSE ports

108 www.guvi.in
Docker File – FROM and MAINTAINER
FROM:
✔ Every Dockerfile starts with this command
✔ It shows where the base image is coming from
✔ We will pick up an image from Docker Hub or some other repository, make some changes, for ex: environmental changes, or exposing ports etc., and then save the file.
✔ Example: FROM debian:stable

109 www.guvi.in
Docker File – FROM and MAINTAINER
MAINTAINER:
✔ This section of the Dockerfile shows the maintainer or the owner of the Dockerfile
✔ It requires a certain format – the name and the email id
✔ Following is the format
• MAINTAINER name <emailid>
• FROM debian:stable
• MAINTAINER docker <sriram@gmail.com>

110 www.guvi.in
Docker File – RUN
RUN:
✔ The set of actions you want to perform on the base image; this is where the modification of the base image starts.
✔ These actions are performed with root privileges
✔ Commands will be executed exactly the way you write them

Example:

FROM debian:stable
MAINTAINER docker <sriram@gmail.com>
RUN apt-get update
RUN apt-get upgrade

111 www.guvi.in
Docker File – RUN(Contd..)
RUN:
✔ The following steps will happen when you save this file and build an image
• Step 1: It is going to pull the Debian base image
• Step 2: It will set up the MAINTAINER in the image
• Step 3: It will update the packages
• Step 4: It will upgrade the packages

112 www.guvi.in
Docker File – ENV
ENV:
✔ Can set up environment variables
✔ By setting this up we can pass a variable that we need inside the container that runs on the base image
✔ The following format is required to set up this directive: ENV MYVALUE -test
✔ When you run the container this value can be read using "echo $MYVALUE"

Example:

FROM debian:stable
MAINTAINER docker <sriram@gmail.com>
RUN apt-get update
RUN apt-get upgrade
ENV MYVALUE -test

113 www.guvi.in
Docker File – EXPOSE
EXPOSE:
✔ Command to declare any ports you want to expose from your container to the underlying host operating system, with mapping/redirection. We can then get to the containers through the containers' IPs.
✔ Ports are set up in the Dockerfile to be exposed.
✔ The following format is required to set up this directive: EXPOSE 80 (port number which you want to expose)
✔ When you run docker ps for this container you will see the information for the ports which are exposed

Example:
FROM debian:stable
MAINTAINER docker <sriram@gmail.com>
RUN apt-get update
RUN apt-get upgrade
ENV MYVALUE -test
EXPOSE 80
EXPOSE 24
114 www.guvi.in
Docker File – CMD
CMD:
✔ Command for starting up a service of some kind
✔ Anything after the command is a list of things to run within any container that is initiated from this image
✔ All the actions to run when the containers are initiated are described in this section
✔ The following format is required to set up this directive: CMD ["/usr/sbin/apache2ctl"]

Example:
FROM debian:stable
MAINTAINER docker <sriram@gmail.com>
RUN apt-get update && apt-get upgrade
ENV MYVALUE -test
EXPOSE 80
EXPOSE 24
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

115 www.guvi.in
Create Docker File - Demo
✔ Step 1: Create a directory called 'custom' and change into it. In this directory, create an empty file called "Dockerfile"
✔ Step 2: Edit the 'Dockerfile' created in Step #1. This configuration file should be written to perform the following actions:
• Use the base CentOS 6 latest version image from the public repository
• Identify your email address as the author and maintainer of this image
• Update the base OS after the initial import of the image
• Install the OpenSSH server
• Install the Apache web server
• Expose ports 22 and 80 to support the services installed
✔ Step 3: Build a custom image from the 'Dockerfile' created above, as sketched below. Name/tag this new image as "customimg/test:v1". Once the image is built, verify the image appears in your list.
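A minimal sketch of such a Dockerfile (the package names and email address are assumptions, not part of the original demo):

# custom/Dockerfile
FROM centos:6
MAINTAINER yourname <you@example.com>
RUN yum -y update                           # update the base OS
RUN yum -y install openssh-server httpd     # OpenSSH server and Apache web server
EXPOSE 22 80                                # ports for SSH and HTTP

Then build and verify:

$ cd custom
$ docker build -t customimg/test:v1 .
$ docker images | grep customimg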

116 www.guvi.in
Questions
Should I include my code with COPY/ADD or a volume?
❑ You can add your code to the image using COPY or ADD directive in a Docker file.
❑ This is useful if you need to relocate your code along with the Docker image, for example when
you’re sending code to another environment (production, CI, etc.).
❑ Prefer COPY over ADD as image size is reduced with COPY over ADD.
❑ You should use a volume if you want to make changes to your code and see them reflected
immediately, for example when you’re developing code and your server supports hot code
reloading or live-reload.
❑ There may be cases where you’ll want to use both. You can have the image include the code
using a COPY, and use a volume in your Compose file to include the code from the host during
development. The volume overrides the directory contents of the image.
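A small sketch of the "use both" approach (service and file names are placeholders; the Compose snippet mirrors the web service shown later in this unit):

# Dockerfile – bake the code into the image with COPY
FROM python:3
COPY . /code
WORKDIR /code
CMD ["python", "app.py"]

# docker-compose.yml – during development, a volume overrides the baked-in code
services:
  web:
    build: .
    volumes:
      - .:/code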

117 www.guvi.in
Docker Compose

118 www.guvi.in
119 www.guvi.in
Docker Compose: Multi Container Applications
Without using Compose:
❑ Build and run one container at a time
❑ Manually connect containers
❑ Manual dependency management

With Compose:
❑ Define multi-container apps in a single file
❑ Single command to deploy entire apps
❑ Handles dependencies
❑ Works with Networking, Volumes, Swarm

120 www.guvi.in
Docker Compose: Multi Container Applications
Without using Compose:

docker run -e MYSQL_ROOT_PASSWORD=<pass> -e MYSQL_DATABASE=wordpress --name wordpressdb -v "$PWD/database":/var/lib/mysql -d mysql:latest

docker run -e WORDPRESS_DB_PASSWORD=<pass> --name wordpress --link wordpressdb:mysql -p 80:80 -v "$PWD/html":/var/www/html -d wordpress

With Compose:

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: <pass>
      MYSQL_DATABASE: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_PASSWORD: <pass>

121 www.guvi.in
Docker Compose

A tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services.

Compose works in all environments: production, staging, development, and testing, as well as CI workflows. With a single command, you create and start all the services from your configuration.

122 www.guvi.in
Docker Compose

Define your app's environment with a Dockerfile. Define the services that make up your app in a Docker Compose file.

Run the CLI:

docker-compose up

123 www.guvi.in
Docker Compose: Multi Container Applications

A. Services   B. Volumes   C. Networking

124 www.guvi.in
Docker Compose: Multi Container Applications

services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      - PYTHONUNBUFFERED=1
  redis:
    image: redis:latest
    command: redis-server --appendonly yes

125 www.guvi.in
Docker Compose: Multi Container Applications

126 www.guvi.in
Docker Compose: Multi Container Applications

Using .env file

127 www.guvi.in
Docker Compose: Multi Container Applications

Commands:

docker compose up
docker compose down
docker compose run -e DEBUG=1 <service>

128 www.guvi.in
Docker Compose: Multi Container Applications

- A network called myapp_default is created. The name is based on the directory name.
- A container is created using web's configuration. It joins the network myapp_default under the name web.
- A container is created using db's configuration. It joins the network myapp_default under the name db.

129 www.guvi.in
Docker Compose: Multi Container Applications

130 www.guvi.in
Q &A

131 www.guvi.in
We're done!
Thank you for your time and
participation.

132 www.guvi.in
