Docker: Comprehensive Guide to Containers
Docker is a platform that allows developers to package applications into containers that can run on any infrastructure. Containers provide standardized environments and isolate applications from each other and the underlying infrastructure. The document discusses Docker components like images, containers, client, daemon and registry and how they work together.


Docker

Docker is a Platform as a Service (PaaS) product intended to deliver software in the form of
packages called containers. It relies on OS-level virtualization, in which the kernel allows
multiple isolated user-space instances such as containers, partitions, zones, and virtual
kernels.

Evolution of Docker

Docker was launched as an open-source project in 2013. Docker Inc. developed it further for
cloud-native adoption, which drove a broader trend towards containerization and microservices
in the software industry. Docker released its Enterprise Edition in 2017.

Modern software development faces the challenge of managing many applications on a common
host or cluster. Applications need to be separated from one another so that they do not
interfere with each other's operation or maintenance. Managing the packages, libraries,
binaries, and other software components an application needs in order to run is therefore
crucial to application development.

The following are some of the benefits of Docker containers:

 Environment standardization – the production environment can be shared collaboratively
for development, testing, and maintenance.

 Faster, consistent configuration – image-based configuration lets even unprivileged users
get up and running quickly.

 Faster adoption of DevOps – supports the key automation phases: Deploy, Operate,
and Optimize.

 Safer disaster recovery – reduced drag during DR, with minimal recovery time.

How Docker Works

Docker uses a client-server architecture. The Docker client talks to the Docker daemon,
which builds, runs, and distributes Docker containers. The client can run on the same system
as the daemon, or it can connect to a remote daemon. Client and daemon communicate via a
REST API, over a UNIX socket or a network interface. To learn more about how Docker works,
refer to the Architecture of Docker.
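As a rough sketch of this interaction, a client command such as docker ps becomes a plain HTTP request written to the daemon's socket. The snippet below only builds the request text rather than connecting to a daemon; the socket path and API version are illustrative assumptions, not guaranteed values for your installation.

```python
# Sketch: what a Docker client sends to the daemon over its UNIX socket.
# The daemon conventionally listens on /var/run/docker.sock; "docker ps"
# becomes an HTTP GET against the Engine API.

SOCKET_PATH = "/var/run/docker.sock"  # conventional daemon socket (assumption)
API_VERSION = "v1.41"                 # illustrative API version

def build_request(method: str, endpoint: str) -> str:
    """Build the raw HTTP request the client would write to the socket."""
    return (
        f"{method} /{API_VERSION}{endpoint} HTTP/1.1\r\n"
        "Host: localhost\r\n"       # required HTTP header; ignored for UNIX sockets
        "Connection: close\r\n"
        "\r\n"
    )

# "docker ps" maps to listing containers:
request = build_request("GET", "/containers/json")
print(request.splitlines()[0])  # → GET /v1.41/containers/json HTTP/1.1
```

The daemon answers with an ordinary HTTP response carrying JSON, which the client formats into the familiar tabular output.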

Components of Docker

 Docker Client: The client is the primary way users communicate with Docker. Because Docker
uses a client-server architecture, the client can connect to a host either locally or remotely.
Whenever a user issues a command, the client sends it to the host, which carries it out using
the Docker API. If there are multiple hosts, communicating with them is not a problem, as a
single client can interact with multiple hosts.

 Docker Image: Docker images are used to build containers and hold the metadata that
describes the container's capabilities. An image is a read-only template made up of multiple
layers, each layer depending on the layer below it. The first layer is called the base layer
and contains the base operating system. Layers holding dependencies sit above the base layer,
and the instructions for assembling these read-only layers are written in a Dockerfile. A
container can be built from an image, and images can be shared with different teams in an
organization through a private container registry, or outside the organization through a
public registry.
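The layered structure can be sketched with plain dictionaries, where each layer maps file paths to contents and upper layers shadow lower ones. This is a deliberate simplification: real images use union filesystems such as overlay2, and the paths and contents below are illustrative.

```python
# Sketch of image layering: each layer maps paths to file contents, and the
# merged view is built bottom-up so upper layers shadow lower ones.
# Simplifies real union filesystems (e.g. overlay2) to plain dicts.

base_layer = {"/etc/os-release": "Ubuntu 22.04", "/bin/sh": "<shell binary>"}
deps_layer = {"/usr/lib/libssl.so": "<openssl>"}
app_layer  = {"/app/main.py": "print('hello')", "/etc/os-release": "patched"}

def merged_view(*layers: dict) -> dict:
    """Merge layers bottom-up; later (upper) layers override earlier ones."""
    view = {}
    for layer in layers:          # apply the base first, the app layer last
        view.update(layer)
    return view

fs = merged_view(base_layer, deps_layer, app_layer)
print(fs["/etc/os-release"])  # → patched (app layer shadows the base layer)
```

Because lower layers never change, several images can share the same base layer on disk, which is what keeps image storage and transfer efficient.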
 Docker Daemon: The Docker daemon is among the most essential components of Docker, as it is
directly responsible for carrying out actions on containers. It runs as a background process
that manages Docker networks, storage volumes, containers, and images. When a container
start-up command such as docker run is issued, the client translates it into an HTTP API call
and sends it to the daemon. The daemon then analyses the request and communicates with the
operating system. The daemon responds only to Docker API requests, and it can also manage
other Docker services by interacting with other daemons.

 Docker Networking: As the name suggests, Docker networking is the component that
establishes communication between containers. Docker ships with five main network drivers:
 None: disables networking entirely, preventing the container from connecting to
any other container.
 Bridge: the default network driver, used when multiple containers on the same
Docker host need to communicate.
 Host: removes the isolation between a container and the host, for cases where
that isolation is not required.
 Overlay: allows communication between swarm services whose containers run on
different hosts.
 macvlan: makes a container appear as a physical device by assigning it a MAC
address and routing traffic between containers through that address.

 Docker Registry: Docker images need a location where they can be stored, and the Docker
registry is that location. Registries can be either public or private; Docker Hub is the
default public registry. Every docker pull request fetches an image from the registry where
it is stored, while docker push stores an image in the designated registry.

 Docker Container: A Docker container is an instance of an image that can be created,
started, moved, or deleted through the Docker API. Containers are a lightweight,
self-contained way of running applications. They can be connected to one or more networks,
and a new image can be created from a container's current state. Containers are volatile:
any application state or data inside a container is lost the moment the container is deleted
or removed. Containers are largely isolated from one another and have defined resource limits.

Docker Advanced Components

 Docker Compose: Sometimes you want to run multiple containers as a single
service. Docker Compose is designed for exactly this: it preserves isolation between
containers while still letting them interact with each other. Compose environments
are defined in YAML files.
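A minimal Compose file for two cooperating containers might look like the following. The service names, image tags, and port numbers are illustrative assumptions, not part of the original document:

```yaml
# Illustrative docker-compose.yml: two containers run as one service stack.
# The web container can reach the database by its service name ("db").
version: "3.8"
services:
  web:
    image: nginx:alpine        # illustrative image
    ports:
      - "8080:80"              # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres:15         # illustrative image
    environment:
      POSTGRES_PASSWORD: example
```

Running docker compose up starts both containers on a shared network, isolated from each other's filesystems but able to communicate by service name.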

 Docker Swarm: Developers and IT admins who want to create or manage a cluster of
nodes on the Docker platform can use the Docker Swarm service. There are two types of
swarm nodes: managers and workers. The manager node is responsible for all cluster-management
tasks, whereas worker nodes receive and execute the tasks sent by the manager node. Every
swarm node, regardless of type, runs a Docker daemon and communicates through the Docker API.

Docker Container

A Docker container is a unit that a developer creates to deploy an application in an
isolated environment. Resources can be assigned to a container according to its usage,
making it highly efficient.
Docker Containers: Another Form of Virtualization

Think of a container as another form of virtualization. VMs, one such form, allow a single
piece of hardware to host multiple operating systems as software. VMs are added to the host
machine so that its hardware can be shared among different users while appearing as separate
servers or machines. Containers instead virtualize the OS, splitting it into virtualized
compartments in which container applications run.

The Docker daemon is responsible for assembling and running code as well as distributing the
finished containers. It takes the commands a developer enters into the Docker client terminal
and executes them.

This approach lets code be split into smaller, easily transportable pieces that can run
anywhere Linux or Windows is running. It is a way to make applications even more
distributed, stripping them down into specific functions.
Docker Images

A Docker image is an executable package of software that includes everything needed to run
an application. The image specifies how a container should be instantiated, determining
which software components run and how. A Docker container is a virtual environment that
bundles application code with all the dependencies required to run it, so the application
runs quickly and reliably from one computing environment to another.
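The image-to-container relationship can be sketched as an immutable template and the mutable instances created from it. This is a simplification of the actual Docker Engine objects, and the names and layers below are illustrative:

```python
# Sketch: an image as an immutable template, a container as a mutable
# instance created from it. Deleting a container leaves the image intact.
# This greatly simplifies the actual Docker Engine objects.
from dataclasses import dataclass, field

@dataclass(frozen=True)           # frozen: images are read-only templates
class Image:
    name: str
    layers: tuple

@dataclass
class Container:
    image: Image
    state: str = "created"
    writable_layer: dict = field(default_factory=dict)  # container-local changes

image = Image("app:1.0", layers=("base", "deps", "app"))
c1 = Container(image)
c2 = Container(image)             # many containers can share one image
c1.writable_layer["/tmp/scratch"] = "data"   # lost when the container is removed

print(c1.state, c2.state)  # → created created
```

Note that both containers reference the same image object but keep separate writable layers, which mirrors why deleting a container discards its data while the image survives.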

Uses of Docker Images

1. We can easily and effectively run containers with the aid of Docker images.
2. All the code, configuration settings, environment variables, libraries, and runtime are
included in a Docker image.
3. Docker images are platform-independent.
4. Layers are the building blocks of an image.
5. Using the build command, the user can either start entirely from scratch or use an
existing image as the first layer.

Difference between Docker Image and Docker Container

 A Docker image is the source code of a Docker container; a Docker container is a running
instance of a Docker image.
 A Dockerfile is a prerequisite for a Docker image; a Docker image is a prerequisite for a
Docker container.
 Docker images can be shared between users through a Docker registry; Docker containers
cannot be shared between users.
 To change a Docker image, you must edit its Dockerfile; a container can be interacted
with directly to make the required changes.
Structure of Docker Image

The layers of software that make up a Docker image make it easier to configure the
dependencies needed to execute the container.

 Base Image: The base image is the starting point for the majority of Dockerfiles;
it can also be built from scratch.
 Parent Image: The parent image is the image that our image is based on. It is referenced
in the Dockerfile with the FROM instruction, and each declaration after that modifies the
parent image.
 Layers: Docker images consist of numerous layers. Each layer is created on top of the one
before it, forming a sequence of intermediate images.
 Docker Registry: See the section on the Docker Registry above for further information.
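How FROM selects the parent image and how subsequent instructions stack layers on top of it can be illustrated with a short Dockerfile. The base image tag, file names, and command are illustrative assumptions:

```dockerfile
# Illustrative Dockerfile: FROM selects the parent image; each
# subsequent instruction adds a layer on top of it.

# Parent image (illustrative tag)
FROM python:3.11-slim

# Each instruction below creates a new read-only layer.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Default command when a container starts from this image
CMD ["python", "main.py"]
```

Ordering the dependency installation before copying the application code is a common layer-caching choice: code changes then only rebuild the final layers.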

Docker Cloud repositories

Repositories in Docker Cloud store your Docker images. You can create repositories and
manually push images using docker push, or you can link to a source code provider and
use automated builds to build the images for you. These repositories can be either public or
private.

Create a new repository in Docker Cloud

To store your images in Docker Cloud, you create a repository. All individual users can create
one private repository for free, and can create unlimited public repositories.

1. Click Repositories in the left navigation.

2. Click Create.

3. Enter a name and an optional description.

4. Choose a visibility setting for the repository.

5. Optionally, click a linked source code provider to set up automated builds.

   1. Select a namespace from that source code provider.

   2. From that namespace, select a repository to build.

   3. Optionally, expand the build settings section to set up build rules and enable or
disable Auto builds.

6. Click Create.

Edit an existing repository in Docker Cloud

You can edit your repositories in Docker Cloud to change the description and build
configuration.

From the General page, you can edit the repository’s short description, or click to edit the version
of the ReadMe displayed on the repository page.

Note: Edits to the Docker Cloud ReadMe are not reflected in the source code linked to a
repository.

To run a build, or to set up or change automated build settings, click the Builds tab, and
click Configure Automated Builds. See the documentation on configuring automated build
settings for more information.

Link to a repository from a third party registry


You can link to repositories hosted on a third party registry. This allows you to deploy images
from the third party registry to your nodes in Docker Cloud, and also allows you to enable
automated builds which push built images back to the registry.

Note: To link to a repository that you want to share with an organization, contact a member of
the organization’s Owners team. Only the Owners team can import new external registry
repositories for an organization.

1. Click Repositories in the side menu.

2. Click the down arrow menu next to the Create button.

3. Select Import.

4. Enter the name of the repository that you want to add.

For example, registry.com/namespace/repo, where registry.com is the hostname of
the registry.

5. Enter credentials for the registry.

Note: These credentials must have push permission in order to push built images back to
the repository. If you provide read-only credentials, you will be able to run automated
tests and deploy from the repository to your nodes, but you will not be able to push built
images to it.

6. Click Import.

7. Confirm that the repository on the third-party registry now appears in
your Repositories dropdown list.
