CPNT217 - 12. Using Containers in Virtualization

Containers are lightweight bundles of applications and their dependencies that share the host operating system's kernel, allowing for faster startup times and more efficient resource utilization compared to virtual machines. The document traces the history of container technology from Unix's chroot to modern solutions like Docker and Kubernetes, highlighting their benefits in application development, resource efficiency, and workload portability. It also contrasts the pros and cons of containers versus virtual machines, emphasizing the advantages of containers in modern application architectures and DevOps practices.

Uploaded by

Pragunya Wadhwa
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
35 views54 pages

CPNT217 - 12. Using Containers in Virtualization

Containers are lightweight bundles of applications and their dependencies that share the host operating system's kernel, allowing for faster startup times and more efficient resource utilization compared to virtual machines. The document traces the history of container technology from Unix's chroot to modern solutions like Docker and Kubernetes, highlighting their benefits in application development, resource efficiency, and workload portability. It also contrasts the pros and cons of containers versus virtual machines, emphasizing the advantages of containers in modern application architectures and DevOps practices.

Uploaded by

Pragunya Wadhwa
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 54

CPNT217 – Introduction to Network Systems
Module 5: Using Containers in Virtualization
What Is a Container, Exactly?
• A container is a small, lightweight bundle of one or more
applications and the dependencies needed for that code to run.
That means a container has code, a runtime environment, system
tools, and libraries.
• Containers sit on top of a physical server and its host OS—typically
Linux or Windows. Each container shares the host OS kernel and,
usually, the binaries and libraries, too. Shared components are
read-only.
• A container can also house infrastructure services such as storage,
or a hybrid of apps and storage.
• Containers are light: only megabytes in size, they take just seconds
to start. (VMs are measured in gigabytes rather than megabytes and
take minutes to boot.) So, roughly, you can put two to three times as
many applications on a single server with containers as you can
with VMs.
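As a rough illustration (assuming Docker is installed and its daemon is running; the image name and tag are just examples), the size and startup contrast is easy to see from the command line:

```shell
# Pull a minimal Linux image; the download is measured in megabytes.
docker pull alpine:3.19

# The SIZE column shows single-digit megabytes, not gigabytes.
docker image ls alpine:3.19

# Create, start, run a command, and remove a container; typically sub-second.
time docker run --rm alpine:3.19 echo "container up"
```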
More Definitions: Binaries, Libraries, and Kernels
• Binaries: In general, binaries are non-text files made
up of ones and zeros that tell a processor how to
execute a program.
• Libraries: Libraries are sets of prewritten code that a
program can use to do either common or specialized
things. They allow developers to avoid rewriting the
same code over and over again.
• Kernels: Kernels are the ringleaders of the OS. They’re
the core programming at the center that controls all
other parts of the operating system.
Some History …
• Container services can be traced way back to the chroot operation of
Unix Version 7, which was introduced in 1979.
Chroot provided an isolated operating environment in which
applications and services could run. Processes that operated inside
these chroot subsystems were not able to access files outside their
confines, making these isolated containers very useful for, for example,
running test processes on production systems.

• In the early 2000s, FreeBSD Jails were introduced into Unix-like OSes. Jails
took chroot to the next level by providing isolation for users, files,
networking, and much more. In fact, individual jails were able to have
their own IP addresses, so they were logically isolated at the
networking level as well.

• In 2004, the Solaris Containers service was introduced into the Solaris
OS. It provided isolation into individual segments called zones.
• In 2008, Linux Containers (LXC) came into the open-source
OS. It later formed the real foundation for the original
Docker.

• In 2013, Docker replaced LXC with libcontainer and
brought containers to the mainstream, making them an
industry standard for application and software
development.

• Docker has brought some services to containers,
including a powerful application programming interface
(API), a command-line interface (CLI), an efficient image
model, and cluster management tools that help
container-based services to scale.
• Containers solve the problem of environment
inconsistency. Developers generally write code locally,
say on their laptop, then deploy that code on a server.
Any differences between those environments – software
versions, permissions, database access, etc. – can lead
to bugs. With containers, developers can create a
portable, packaged unit that contains all the
dependencies needed for that unit to run in any
environment whether it’s a local copy, development,
testing, or production.

• Docker containers are supported in Windows Server
2016 and Windows 10, as well as Linux distributions.
The container ecosystem
Container platform: The goal of a container platform is to
automate the packing and loading of containers for greater
efficiency, in addition to providing governance for the overall app
development process.
Examples:
Red Hat OpenShift Container Platform: It enables developers to build,
host, deploy, manage, and scale multi‐container applications
Linux Containers (LXC): Commonly known as LXC, these are the
original Linux container technology. LXC is a Linux operating system-
level virtualization method for running multiple isolated Linux
systems on a single host.
Docker: Docker started as a project to build single-application LXC
containers, introducing several changes to LXC that make containers
more portable and flexible to use. It later morphed into its own
container runtime environment. At a high level, Docker is a Linux
utility that can efficiently create, ship, and run containers.
Orchestrator: An orchestrator is a piece of software that manages
application containers across a cluster, ensuring that each container
keeps running regardless of conditions.
Examples:
Kubernetes: Kubernetes is Greek for the “helmsman” of a ship, as in
a ship holding many containers. But Kubernetes can manage more
than just apps. It can help orchestrate storage in containers, too.
Kubernetes is included in Red Hat OpenShift Container Platform.
Kubernetes is not container software, per se, but rather a container
orchestrator. In this cloud-native, microservices world, where some
apps run hundreds or even thousands of containers, Kubernetes helps
automate the management of all those containers. It can’t function
without a container runtime such as Docker working in tandem.

Docker Swarm and Universal Control Plane (UCP)

Mesosphere DC/OS
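To make "keeps running regardless of conditions" concrete, here is a minimal, illustrative Kubernetes Deployment manifest (the names, image, and replica count are placeholders, not from the course material). Kubernetes keeps three copies of this container running and replaces any that fail:

```shell
# Write an example Deployment manifest; all names here are illustrative.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# On a real cluster, this would be applied with:
# kubectl apply -f deployment.yaml
```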
• Storage cluster: This is a network of storage nodes that
provides high availability and scale to applications that need
to persist data. The storage cluster providing storage
services could reside outside or within the container.
VM vs. Containers
Bare-metal vs. Hosted Hypervisors
Deploying new virtual machines is
simple, and you can share underlying
hardware resources to do so. You get
much higher levels of workload density
because you’re running many more
workloads on far fewer servers, rather
than having multiple servers each ran
at about 5% to 15% utilization. By
turning servers into software,
workloads could be manipulated which
led to a rise in application availability
capabilities, new disaster recovery
opportunities, and new ways to
programmatically manage workloads.
Inherited issues with virtual machines

• Resource waste: Each individual virtual machine requires a separate
OS installation, and each OS requires access to RAM, storage, and
processing resources. As you scale your environment and deploy more
and more virtual machines, you end up dedicating a whole lot of
resources to nothing more than running operating systems.
• Workload portability issues: To migrate an application from one
location to another, you have to drag an entire OS along with it.
• Proprietary virtual machine formats make it difficult to move
workloads between vendors. You can’t just drag a vSphere-based
virtual machine over to Amazon EC2 or to Hyper-V, for instance.
• The container engine is effectively an abstraction
layer that enables libraries and application
bundles to operate independently without the need for
a complete OS for each bundle.
• The container engine handles all the hooks to the
underlying single instance OS.
• To maintain the benefits of virtualization, container
engines can run on top of an existing virtualization stack:
multiple container engines, and therefore multiple
isolated workloads, on top of a single virtual machine.
• With the container engine, multiple isolated workloads
can run on the same hardware, even on bare metal,
without a full hypervisor in place and without all the
overhead of a bunch of operating systems.
• Containers can run on bare metal, in a virtualized
environment, or in the cloud.
What’s the Difference: VMs vs. Containers
• Virtualization provides abstraction at the hardware level, while containers
operate at the operating system (OS) level. What is the benefit of this?

• HPE tested consolidating eight MySQL VM workloads on a single host
against eight containers running on an identical bare-metal host, and
measured up to a 73 percent performance improvement (in
transactions per second).

• With containers, workload portability takes on new meaning: shifting
workloads between hosts, and between local hosts and cloud-based
ones, became trivial with the introduction of solutions like Docker to
the market. Such solutions also addressed portability and management
problems by introducing a level of abstraction in order to achieve
efficiency and security in workload management.
• Without workloads, you wouldn’t need containers,
virtualization, or even bare-metal servers. Therefore, the
discussion will be based on the workloads/applications.
• Containers work not only on brand-new cloudlike applications
but also work with traditional applications.
• Traditional applications are often one huge piece of software,
whereas modern applications are broken up into far smaller
chunks to enable scaling of individual components.
• Also, the way releases are delivered has changed. What used
to be an occasional big event has now transformed into a
series of ongoing updates that take place on a very regular
basis. The continuous improvement approach is becoming far
more common in software circles.
• With containers, tools are available that can help you
convert traditional applications to containerized ones,
making it far easier to advance in your containerization
journey.
• Deployment of applications is getting easier as more
images are added to the Docker Store and as more
traditional applications are containerized for easy
consumption.
• With just a few simple commands, you can pull a Linux-
centric version of SQL Server from the Docker Store and
be up and running in minutes.
• It’s easier to deploy SQL Server on Docker than it is to
deploy it as a legacy application on Windows!
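Those "few simple commands" look roughly like the following. This is a sketch based on Microsoft's published container image; check the current documentation for the exact tag and required environment variables before relying on it:

```shell
# Pull the Linux-based SQL Server image from Microsoft's registry.
docker pull mcr.microsoft.com/mssql/server:2022-latest

# Run it detached; the EULA flag and SA password are required by the image.
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest
```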
What’s the Difference: VMs vs. Containers - Pros & Cons
VMs:
• Heavyweight.
• Limited performance.
• Each VM runs in its own OS.
• Hardware-level virtualization.
• Startup time in minutes.
• Allocates required memory.
• Fully isolated and hence more secure.
• Can only issue hypercalls to the host hypervisor.
• Fewer parts to manage.
Containers:
• Lightweight.
• Native performance.
• All containers share the host OS.
• OS-level virtualization.
• Startup time in milliseconds.
• Requires less memory space.
• Process-level isolation, possibly less secure (containers share the same kernel).
• Can make syscalls to the host kernel.
• More parts to manage.
What’s the Diff: VMs vs. Containers - When to use
VMs:
• If you need to run apps that require all of the operating system’s
resources and functionality.
• When you have a wide variety of operating systems to manage.
• If you have an existing monolithic application that you don’t plan to,
or need to, refactor into microservices.
• When security is particularly important, since VMs are isolated by
abstracted hardware.
• VMs help companies make the most of their infrastructure resources
by expanding the number of machines you can squeeze out of a finite
amount of hardware and software.
Containers:
• When the biggest priority is maximizing the number of applications or
services running on a minimal number of servers.
• When you need maximum portability.
• When developing a new app and you want to use a microservices
architecture for scalability and portability.
• When developing cloud-native applications based on a microservices
architecture.
• If you need to run many *copies* of a single application.
• Containers help companies make the most of development resources
by enabling microservices and DevOps practices.
Containers can run on a virtual machine, which makes the question less of an
either/or and more of an exercise in understanding which technology makes the
most sense for your workloads.
Benefits of Containers
• Reduced IT management resources.
• Faster spin ups.
• Smaller size means one physical machine can host
many containers.
• Reduced & simplified security updates.
• Less code to transfer, migrate, and upload workloads.
Microservices

• Microservices architectures for application development evolved out of
this container boom.

• With containers, applications could be broken down into their smallest
component parts, or “services,” that serve a single purpose, and those
services could be developed and deployed independently of each
other instead of in one monolithic unit.

• For example, let’s say you have an app that allows customers to buy
anything in the world. You might have a search bar, a shopping cart, a
buy button, etc. Each of those “services” can exist in their own
container, so that if, say, the search bar fails due to high load, it
doesn’t bring the whole thing down.
The Benefits of Application Containerization:
• Application Development Processes and Capabilities.
• Modernized Applications for Improved Operations
• Resource Efficiency
• Workload Portability
• Infrastructure Agnosticism
Application Development Processes and Capabilities

Continuous deployment, integration, and innovation
• Continuous deployment processes are adopted rather than spending
years to develop the next version of a product. With the help of SaaS
tools, these upgrades can be far more regular: much smaller
enhancements and bug fixes that are more digestible.
• Containerization, coupled with microservices, assisted the move
towards an agile software development process, so rather than
developing one large software product, development efforts are now
more distributed and focused on smaller individual units.
• Continuous development also calls for continuous integration (CI)
and continuous deployment (CD). Traditional deployment processes
typically assume that an infrastructure environment is already fully
provisioned and awaiting application code to be deployed and tested.
With a CI/CD approach using containers, the entire deployment
process, including both infrastructure and application, is deployed
within the overall CI/CD pipeline. The CI/CD process involves the
whole team to capture functional requirements defined by the
business, nonfunctional requirements, and regulatory requirements. …
For this, DevOps is used. (DevOps is not covered in this course.)

Closer integration between development and operations
• Developers and IT have sometimes had a contentious relationship.
Container-based solutions can help to close this gap by enabling a
build → ship → run methodology, and this allows far more integration
between applications and infrastructure. This is DevOps.
• DevOps is a mind-set in which developers and operators work closely
together to ensure that software runs well on production systems.
Modernized Applications for Improved Operations
• APIs are used for everything and anything in the data center. And with the power of
computing increasing, we can move previously hardware-centric capabilities into a
far more flexible and programmable software layer and change the relationship
between hardware and software.
• Imagine an application that has wildly variable infrastructure needs depending on
the time of day or time of year. Each of these kinds of applications places demand on
infrastructure that is highly variable.
• With traditional infrastructure approaches, you’d need to build a data center that
could handle the peak demand from these applications, but that capacity might
simply sit idle for the majority of the time.
• With containers, you can employ new resources only when you need them: for
example, a small container-based application is launched to handle a request,
after which it’s shut down and removed from use. Because containers boot very
quickly, this is possible.
• With containers, the application can sense the point at which current resources
are becoming slim and simply launch new resources in the cloud for you.
• With a fully programmable infrastructure, this kind of capability is at your fingertips.
Resource Efficiency
• Containers can improve overall hardware utilization
• Oversizing virtual machines uses up resources that
could go to other workloads that need it and can also
result in the business overspending on hardware.
• Every virtual machine needs its own operating system.
All this redundancy leads to a lot of inefficiency.
• By moving abstraction from the hardware level (virtual
machine) to the software level (operating system),
containers make it possible to put far more workloads
onto far less hardware.
• Less hardware means less cost overall.
Workload Portability
• With VMs, moving workloads from server to server requires
some time because you’re also carrying a full operating
system (OS) along with it. Also, the same hypervisor has to be
running on both sides of a migration process.
• By abstracting inside the OS and moving exactly what the
workload needs to operate, the hypervisor type doesn’t
really matter as you shift workloads from place to place.
• Therefore, you can move workloads far more quickly than
before.
• With the abstraction you can have a “package once, run
anywhere” deployment process.
• This portability enables developers to move workloads from dev to
test to staging and into production.
• IT operations teams can move workloads across environments.
Infrastructure Agnosticism
• Containers take hardware agnosticism to the next level.
• Moving VMs is subject to limitations, such as processor-family
incompatibilities, that can limit virtual machine portability.
• With Docker, workloads can shift from bare metal to a
virtual machine and then to the cloud.
• You don’t even really have to give up virtualization to go
the container route.
Introduction to Docker
Fundamental Docker Concepts
Docker Engine
• Docker engine is the layer on which Docker runs. It’s a
lightweight runtime and tooling that manages
containers, images, builds, and more. It runs natively on
Linux systems and is made up of 3 parts:

1. A Docker Daemon that runs on the host computer.
2. A Docker Client that communicates with the Docker Daemon to
execute commands.
3. A REST API for interacting with the Docker Daemon
remotely.
Fundamental Docker Concepts
Docker Daemon
• The Docker daemon is what actually executes commands sent
from the Docker Client — like building, running, and distributing
your containers.
• The Docker Daemon runs on the host machine, but as a user,
you never communicate directly with the Daemon. The Docker
Client can run on the host machine as well, but it’s not required
to. It can run on a different machine and communicate with the
Docker Daemon that’s running on the host machine.
Docker Client
• The Docker Client is what you, as the end-user of Docker,
communicate with. Think of it as the UI for Docker. When you issue
commands to the Docker Client, it communicates your instructions
to the Docker Daemon.
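All three parts can be seen in one session: the CLI client sends requests to the daemon, and the same information is available over the REST API directly (this sketch assumes a local daemon listening on its default Unix socket):

```shell
# Via the client: the docker CLI forwards this request to the daemon.
docker version

# Via the REST API: query the daemon directly over its Unix socket,
# which is what the CLI does under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/version
```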
Fundamental Docker Concepts
Dockerfile
• A Dockerfile is where you write the instructions to build
a Docker image. These instructions can be:

• RUN apt-get install -y some-package: to install a software
package
• EXPOSE 8000: to expose a port
• ENV ANT_HOME /usr/local/apache-ant: to pass an
environment variable

Once you’ve got your Dockerfile set up, you can use the docker
build command to build an image from it.
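Putting those instructions together, a complete Dockerfile might look like the following. Every name here (base image, package, port, file) is a hypothetical placeholder; the file is written via a shell heredoc so the build command can follow in the same script:

```shell
# Write an example Dockerfile; the contents are illustrative only.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
RUN pip install --no-cache-dir flask
EXPOSE 8000
ENV APP_ENV=production
CMD ["python", "app.py"]
EOF

# With Docker installed, build and tag an image from it:
# docker build -t myapp:1.0 .
```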
Fundamental Docker Concepts
Docker Image
• Images are read-only templates that you build from a set
of instructions written in your Dockerfile. Images define
both what you want your packaged application and its
dependencies to look like and what processes to run when
it’s launched.
• The Docker image is built using a Dockerfile. Each
instruction in the Dockerfile adds a new “layer” to the
image, with layers representing a portion of the image’s
file system that either adds to or replaces the layer below
it. Layers are key to Docker’s lightweight yet powerful
structure.
Fundamental Docker Concepts
Union File Systems
• Docker uses Union File Systems to build up an image. You can
think of a Union File System as a stackable file system,
meaning files and directories of separate file systems (known
as branches) can be transparently overlaid to form a single
file system.
• The contents of directories which have the same path within
the overlaid branches are seen as a single merged directory,
which avoids the need to create separate copies of each
layer. Instead, they can all be given pointers to the same
resource; when a layer needs to be modified, Docker creates
and modifies a local copy, leaving the original
unchanged. That’s how file systems can appear writable
without actually allowing writes.
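The merge rule can be mimicked with plain directories. Real Docker uses kernel drivers such as overlay2; this sketch only demonstrates the "upper branch shadows lower branch" semantics:

```shell
# Two "branches" and a directory to hold the merged view.
mkdir -p lower upper merged

echo "from the lower branch" > lower/config.txt
echo "lower only"            > lower/base.txt
echo "from the upper branch" > upper/config.txt   # same path: shadows lower

cp -r lower/. merged/   # lay down the lower branch first
cp -r upper/. merged/   # overlay the upper branch; same-path files replaced

cat merged/config.txt   # the upper branch's version wins
cat merged/base.txt     # files unique to lower branches remain visible
```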
Fundamental Docker Concepts

Volumes
• Volumes are the “data” part of a container, initialized when
a container is created. Volumes allow you to persist and
share a container’s data. Data volumes are separate from
the default Union File System and exist as normal
directories and files on the host filesystem. So, even if you
destroy, update, or rebuild your container, the data
volumes will remain untouched. When you want to update
a volume, you make changes to it directly.
• As an added bonus, data volumes can be shared and
reused among multiple containers, which is pretty efficient.
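A sketch of that lifecycle, assuming a running Docker daemon (the volume and file names are illustrative):

```shell
# Create a named volume; it lives on the host, outside any container's
# union file system.
docker volume create appdata

# Write data from one container...
docker run --rm -v appdata:/data alpine:3.19 sh -c 'echo persisted > /data/msg'

# ...then read it back from a brand-new container: the volume survived
# the first container's removal.
docker run --rm -v appdata:/data alpine:3.19 cat /data/msg
```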
Fundamental Docker Concepts
The Registry
• A registry is a centralized repository that holds Docker
images.
• Administrators can deploy their own private registry
service, which operates in a highly scalable way.
• If you want to fully control where your images are stored or
tightly integrate images into your local development
processes, consider deploying a private registry, such as
Docker Trusted Registry. Docker Trusted Registry is an
enterprise-class containerized application that provides
fine-grained access control, as well as security and
compliance capabilities.
Fundamental Docker Concepts
Docker Containers
• A Docker container, as discussed above, wraps an application’s
software into an invisible box with everything the application needs
to run. That includes the operating system, application code,
runtime environment, system tools, system libraries, etc. Docker
containers are built off Docker images. Since images are read-only,
Docker adds a read-write file system over the read-only file system
of the image to create a container.
• When creating the container, Docker creates a network interface so
that the container can talk to the local host, attaches an available
IP address to the container, and executes the process that you
specified to run your application when defining the image.
• Once you’ve successfully created a container, you can then run it in
any environment without having to make changes.
More Docker terms:
• Images: A Docker container running in your environment is the
instantiation of an image. An image is a static file that contains
all the elements that allow a container to operate. These
elements include a file system and other parameters
necessary for whatever workload will run in the container.
Images never change and they don’t carry with them any
state. Images can be as simple as a single command or as
complex as housing an entire application.
• Pull: A pull is the act of downloading an image from a registry,
such as the one provided by the Docker Hub.
• Docker Store: The Docker Store is a free service that provides a
marketplace for enterprise developers and IT operations teams
to access third-party containers, plugins, and infrastructure.
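A typical pull-retag-push round trip between a public registry and a private one might look like this (the private registry hostname is a hypothetical placeholder):

```shell
# Pull an image from the default public registry (Docker Hub).
docker pull nginx:1.25

# Retag it so it points at a private registry (hostname is hypothetical).
docker tag nginx:1.25 registry.example.com/web/nginx:1.25

# Push the retagged image into that private registry.
docker push registry.example.com/web/nginx:1.25
```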
Docker is broken up into two products:
• Docker Community Edition (CE) is the freely available version of
Docker. Docker CE is often used for development purposes, as it
doesn’t offer the support options available with Docker EE. In this
way, Docker CE is like many other open-source projects. There is
a free tier, but as you need support, you can either turn to the
community or choose from among vendor-provided options.
• Docker Enterprise Edition (EE) is the enterprise container
platform designed for enterprise development and IT teams who
build, ship, and run business-critical applications in production at
scale. Docker EE comes in three tiers: Basic, Standard, and
Advanced. Basic is what ships by default on HPE ProLiant servers.
Docker EE Standard adds image management and Docker
Datacenter. Docker EE Advanced wraps it all up by providing the
additional Docker Security Scanning technology.
Docker Enterprise Edition (EE) can get certified containers via the
Docker Store. Multiple categories of Docker Certified technology are
available:

 Certified Infrastructure: This includes operating systems and cloud
providers that the Docker platform is integrated with, optimized for,
and tested against for certification.
 Certified Container: Software providers can package and distribute their
software as containers directly to the end user. Better yet, these
containers are tested, built with Docker recommended best practices,
scanned for vulnerabilities, and reviewed before posting on Docker Store.
 Certified Plugin: Networking and Volume plugins for Docker Enterprise
Edition (EE) are now available to be packaged and distributed to end
users as containers. These plugin containers, built with Docker-
recommended best practices, are scanned for vulnerabilities and must
pass an additional suite of API compliance testing before they are
reviewed and before posting on Docker Store.
Docker OS support
How is a container actually implemented?
How is a container actually implemented, especially since there isn’t
any abstract infrastructure boundary around a container?
The term “container” is really just an abstract concept to describe how
a few different features work together to realize a “container.”
1) Namespaces
Namespaces provide containers with their own view of the underlying Linux
system, limiting what the container can see and access. When you run a
container, Docker creates namespaces that the specific container will use.
2) Control groups
Control groups (also called cgroups) are a Linux kernel feature that isolates,
prioritizes, and accounts for the resource usage (CPU, memory, disk I/O,
network, etc.) of a set of processes. In this sense, a cgroup ensures that Docker
containers use only the resources they need and, where necessary, enforces
limits on the resources a container can use. Cgroups also ensure that a single
container doesn’t exhaust one of those resources and bring the entire system down.
3) Isolated Union file system
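Docker exposes the cgroup controls described above directly as run-time flags. A hedged example, assuming a Linux host with a running daemon:

```shell
# Cap the container at 256 MB of RAM and half a CPU core; under the hood,
# Docker writes these limits into the container's cgroup.
docker run --rm --memory=256m --cpus=0.5 alpine:3.19 echo "limits applied"

# The configured memory limit can be read back from a running container
# (the container name is illustrative):
# docker inspect --format '{{.HostConfig.Memory}}' some-container
```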
Considerations with Windows-based containers:
Orchestration: Container orchestration systems that are
common to Linux-based Docker implementations aren’t
always available on Windows. Docker’s own Swarm mode,
however, is available. Over time, as Docker on Windows
becomes more popular, expect to see more orchestration
systems become available.

 Command line: PowerShell is one of the unsung heroes of the Windows
world. In addition to the same CLI used with Linux-based Docker containers,
Docker on Windows can also be managed using the PowerShell CLI, easing
the transition for Windows users. Of course, if you’re a Linux Docker admin
making the jump to Docker for Windows, you’ll be able to bring over all of
your command line knowledge and it will work for you.
Considerations with Windows-based containers
(continued):
OS version: Whereas Docker can run on almost all modern Linux variants, only
Windows 10 builds after 1903 and Windows Server after 2016 can run Docker.
Check Docker’s compatibility list to make sure your operating system
version is supported.

 Linux and Windows take fundamentally different approaches to
kernel design. As such, Docker has provided the Docker Desktop
software, which allows the Docker engine to operate on Windows
and Mac. As of Windows Server 2016, Microsoft has added native
container support.

*Linux-based containers won’t run on Windows. You need to re-create
new Windows-based images starting with new Dockerfiles.
Considerations with Windows-based containers
(continued):
You can’t bring containers that were running on older versions of
Windows to your native Windows Server 2016 environment,
because a pre–Windows Server 2016 container deployment would
be running in a Linux VM on VirtualBox. You can’t simply bring
Linux containers to native Docker on Windows Server 2016.

 Docker for Windows (a product offering from Docker, Inc. designed
for Windows 10) can switch modes between Windows containers
and Linux containers.

 If you’ve got a lot invested in your Linux-based Docker
environment, but need to move to Windows anyway, you can still
choose to run those Linux containers under a Linux virtual
machine in Hyper-V rather than using the native Docker for
Windows implementation.
Considerations with respect to storage:

 Few applications are useful without some way to store data.
Back in the day, Linux containers had no features for persisting
data, and container engines and orchestrators couldn’t support
or manage storage, either.

 A container, by nature, is a transient object. It might live on one
server for a period of time and then head over to another if the
orchestrator tells it to. While a container keeps its bundle of
software and dependencies wherever it goes, it doesn’t store
data so it can maintain a light footprint. If a process stops or the
container is rebooted, all the data associated with any
applications within is lost.
Considerations with respect to storage
(continued):
Some applications may need to persist their state, data, and
configuration. For example, a database container needs
persistent storage for its data store (where the actual database
lives).
Local storage isn’t sufficient because if the container moves to
another host, it loses access to the data.
When it comes to developers building containerized
applications, they have two primary concerns. First, they need
to provision the storage their application will consume, and
second, they need to ensure that their containerized application
can mount and use the storage they provisioned in order to get
the persistence they need.
Considerations with respect to storage
(continued):
What about Metadata?
Metadata is essentially arbitrary key‐value pairs in a container
image. There’s all kinds of metadata, such as name, release,
vendor, architecture, and so on.

 Metadata is as important as containers themselves. It describes
the contents within each container, without which management
across a cluster becomes a nightmare.
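In Docker, such key-value metadata is typically attached at build time with LABEL instructions. A hypothetical fragment (all values are placeholders), written via a heredoc:

```shell
# Write an example Dockerfile fragment carrying metadata as LABELs.
cat > Dockerfile.labels <<'EOF'
FROM alpine:3.19
LABEL name="inventory-api" \
      release="2.4.1" \
      vendor="Example Corp" \
      architecture="x86_64"
EOF

# After building, the labels travel with the image and can be read back:
# docker inspect --format '{{json .Config.Labels}}' inventory-api:2.4.1
```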
Considerations with respect to storage
(continued):
Developers can store metadata inside containers, along with
other contents. But metadata is too important to the whole process
to risk it disappearing due to some failure or disaster.

 The best approach is to distribute metadata across multiple
containers. The information could be stored in a container of its
own — something called container‐native storage — or in
container‐ready storage. Linux containers offer flexibility and
agility, plus packaging and distribution for applications, data,
and metadata.
Considerations with respect to storage
(continued):
Storage appliances weren’t designed for the agility, speed, and scalability
that enterprises demand today.

 Software‐defined storage (SDS) separates storage hardware from storage
controller software, enabling seamless portability across multiple forms of
storage hardware.

 Broadly speaking, container storage comes into play in two ways:
• Storage for containers
• Storage in containers
Considerations with respect to storage
(continued):
Storage for containers:

 Also known as container‐ready storage, this is essentially a setup where
storage is exposed to a container or a group of containers from an external
mount point over the network. Most storage solutions, including Software
Defined Storage, storage area networks (SANs), or network‐attached
storage (NAS) can be set up this way using standard interfaces. However,
this may not offer any additional value to a container environment from a
storage perspective. For example, few traditional storage platforms have
external application programming interfaces (APIs), which can be
leveraged by Kubernetes for Dynamic Provisioning.
Considerations with respect to storage
(continued):
Storage in containers:

 Storage deployed inside containers, alongside applications running in
containers, is an important innovation that benefits both developers and
administrators. By containerizing storage services and managing them
under a single management plane such as Kubernetes, administrators
have fewer housekeeping tasks to deal with, allowing them to focus on
more value‐added tasks. In addition, they can run their applications and
their storage platform on the same set of infrastructure, which reduces
infrastructure expenditure. Developers benefit by being able to provision
application storage that’s both highly elastic and developer‐friendly.
References:
• Lowe, Scott D. Containers: HPE and Docker Special Edition (For Dummies).
Wiley, 2017.
• Tittel, Ed; Saha, Sayan; Watt, Steve; Adam, Michael; and Raihan, Irshad.
Container Storage: Red Hat Special Edition (For Dummies). Wiley, 2017.
• “What’s the Diff: VMs vs. Containers.”
• “A Beginner-Friendly Introduction to Containers, VMs and Docker.”
Copyright 2015, EMC Corporation.
