
How Docker Works in a VM-Based IT World: What You Need to Know


Docker is the most popular and most widely used container platform.

Virtual machines (VMs) are increasingly used by businesses. A VM is an operating system or application environment that runs on software rather than directly on physical hardware. It gives the user the same experience as a physical machine, with several advantages.

In particular, it is possible to run multiple OS environments on the same machine while isolating them from each other. Virtualization can also reduce costs within a business by reducing the number of physical machines required.

Energy needs are reduced as well, and backups and restores are simplified.

However, virtual machine hypervisors rely on hardware emulation and therefore require a lot of computing power. To remedy this problem, many firms are turning to containers, and by extension to Docker.

What is a container?
Before approaching Docker, it is essential to recall what a container image is. It is a lightweight, standalone package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. Container images can be used to run Linux or Windows applications.

Containers are therefore close to virtual machines, but they have a significant advantage. While virtualization involves running many operating systems on a single system, containers share the same operating system kernel and isolate the application processes from the rest of the system.
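
To see this kernel sharing in practice, you can compare the kernel version reported by the host with the one reported inside a container. A minimal sketch, assuming a Linux host with Docker installed (the alpine image is just an illustrative choice):

    # Kernel version reported by the host
    uname -r

    # Kernel version reported inside a container: it is the same, because
    # the container shares the host's kernel instead of booting its own
    docker run --rm alpine uname -r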

To put it simply, rather than virtualizing the hardware as a hypervisor does, the container virtualizes the operating system. It is therefore significantly more efficient than a hypervisor in terms of system resource consumption. Concretely, on the same hardware it is possible to run roughly 4 to 6 times more application instances with containers than with virtual machines such as Xen or KVM.

Docker: what is it?
It is an open source software platform for creating, deploying, and managing virtualized application containers on an operating system.

The services or functions of the application, along with its various libraries, configuration files, dependencies, and other components, are grouped within the container. Each running container shares the services of the operating system.

Originally created for the Linux platform, Docker now works with other operating systems such as Microsoft Windows and Apple macOS. There are also platform versions designed for Amazon Web Services and Microsoft Azure.

Docker: what are the features?
The containerization platform is based on seven main components. The Docker Engine is a client-server tool on which the container technology is built; it supports the tasks involved in creating container-based applications.

The engine creates a server-side daemon process that hosts images, containers, networks, and storage volumes. It also provides a client-side CLI that allows users to interact with the daemon through the platform's API.
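
As a quick illustration of this client-server split, the same daemon can be reached through the CLI or directly through its REST API over the local Unix socket. A sketch, assuming a Linux host (the API version segment, v1.41 here, varies with the Docker release, so treat it as a placeholder):

    # 'docker version' prints both Client and Server (daemon) sections,
    # showing that the CLI talks to the daemon over the platform API
    docker version

    # Querying the daemon's REST API directly over its Unix socket lists
    # running containers (may require root or docker group membership)
    curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json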

Images are built from Dockerfiles, text files listing the instructions needed to assemble them; containers are then started from those images. The Docker Compose component allows you to define and run applications composed of multiple containers. The Docker Hub is a SaaS tool that allows users to publish and share container-based applications through a common library.
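
To make the distinction concrete: a Dockerfile describes how to build an image, and containers are then started from that image. A minimal sketch (the myapp name and the nginx base image are illustrative assumptions, not taken from this article):

    # A Dockerfile lists the instructions used to assemble an image
    cat > Dockerfile <<'EOF'
    FROM nginx:alpine
    COPY index.html /usr/share/nginx/html/index.html
    EOF

    # Build an image from the Dockerfile, then start a container from it
    docker build -t myapp:1.0 .
    docker run -d -p 8080:80 myapp:1.0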

The Docker Engine's Swarm mode supports load balancing across clusters: the resources of multiple hosts can be pooled to act as a single set, so users can quickly scale their container deployments.
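
For illustration, the commands below sketch how Swarm mode pools hosts and scales a service (the service name web and the replica counts are arbitrary examples):

    # Turn the current host into a Swarm manager; other hosts can then
    # run 'docker swarm join' to add their resources to the cluster
    docker swarm init

    # Run a service with three replicas, load balanced across the cluster
    docker service create --name web --replicas 3 -p 8080:80 nginx:alpine

    # Scaling the deployment up is a single command
    docker service scale web=10
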
The pros and cons

The Docker platform has many advantages. It allows you to quickly compose, create, deploy, and scale containers on Docker hosts. It also offers a high degree of portability, allowing users to register and share containers across a wide variety of hosts in public and private environments.

Compared to virtual machines, Docker also has several advantages. It makes it possible to develop applications more efficiently, using fewer resources, and to deploy those applications faster.

However, it also has several disadvantages. It can be difficult to efficiently manage a large number of containers simultaneously. In addition, security is a concern.

Containers are isolated but share the same operating system, so an attack or a security breach on the OS can compromise all containers. To minimize this risk, some companies run their containers inside a virtual machine.

The alternatives

Docker is not the only container platform on the market, but it remains the most used. Its main competitor is CoreOS rkt, a tool known mainly for its security features, including SELinux support. Other major platforms include Canonical LXD and Virtuozzo OpenVZ, the oldest container platform.

We can also mention the ecosystem of tools that work with the platform for tasks such as clustering or container management. One example is Kubernetes, the open source container orchestration tool created by Google.

The numbers behind the success
Version 1.0 of Docker was launched in June 2014 to make containers easier to use. The platform very quickly found success with many companies.
Today, according to Docker's creators, more than 3.5 million applications have been containerized using this technology, and more than 37 billion containerized applications have been downloaded.

Similarly, according to the Datadog cloud monitoring service, 18.8% of its users had adopted the platform by 2017.

For its part, RightScale estimates that adoption of the platform in the cloud industry rose from 35% in 2017 to 49% in 2018.

Giants like Oracle and Microsoft have adopted it, as have almost all cloud companies.

According to 451 Research, the rise of Docker is not about to stop. Its analysts estimate that the container market will literally explode by 2021.

Revenues are expected to more than quadruple, with an annual growth rate of 35%, from $749 million in 2016 to $3.4 billion in 2021.

Docker: the technology that revolutionizes the cloud.

What is the difference with traditional virtualization?
Traditional virtualization uses a hypervisor to simulate one or more physical machines, which run as virtual machines (VMs) on a server or terminal.

These VMs themselves embed an OS on which their applications are executed. This is not the case for the container, which calls directly on the OS of its host machine to make its system calls and execute its applications.

Docker containers in Linux format rely on LXC (Linux Containers), a userspace interface to the Linux kernel's isolation features. In Windows Server format, they rely on an equivalent building block called Windows Server Containers.
The Docker engine standardizes these building blocks through APIs so that applications run in standard containers, which are then portable from one server to another.

What are the advantages of Docker compared to virtualization?
Because the container does not ship its own OS, unlike the virtual machine, it is much lighter. It does not need to boot a second system to run its applications.

This results in a much faster launch, and also in the ability to migrate a container more easily from one physical machine to another, thanks to its small footprint.
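
The difference in launch time is easy to observe for yourself. A rough sketch, assuming the alpine image has already been pulled (the first run would include download time):

    # A container typically starts in well under a second, versus the
    # tens of seconds a VM needs to boot a full operating system
    time docker run --rm alpine echo "container started"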

Another advantage: because of their lightness, Docker containers are portable from cloud to cloud. The only condition is that the clouds involved are optimized to host them. And this is now the case for the main ones: Amazon Web Services, Microsoft Azure, Google Cloud Platform, OVH ... What does that mean? A Docker container, with its applications, can easily move from one cloud to another.
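
In practice, moving a container between clouds usually means moving its image, either through a registry reachable from both sides or as a plain archive. A hedged sketch (the registry address and image names are placeholders):

    # Option 1: push the image to a shared registry...
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    # ...then pull it from a host in the other cloud
    docker pull registry.example.com/myapp:1.0

    # Option 2: ship the image as a self-contained archive
    docker save myapp:1.0 -o myapp.tar
    docker load -i myapp.tar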

What are the big scenarios in which Docker adds value for developers?
First, Docker speeds up deployments. Why? Because Docker containers are light. Switching from a development or test environment to a production environment can be done almost in one click, which is not the case for heavier VMs.

With the intermediate VM gone, developers also benefit from an application stack closer to that of the production environment, which automatically means fewer unpleasant surprises when going into production.

Docker also allows you to design a more agile test architecture, with each test container integrating one building block of the application (database, languages, components ...). To test a new version of a block, simply swap the container. Finally, on the continuous deployment side, Docker is of interest because it makes it possible to limit updates to just the containers that have changed.

What about production-side benefits?
Thanks to Docker, it is possible to containerize an application, with each layer of containers isolating a component. This is the concept of microservice architecture.
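
As a hedged sketch of such a decomposition, a Compose file can declare each building block of the application as its own isolated container; the images and settings below are illustrative assumptions:

    # Each service runs in its own container; together they form the app
    cat > docker-compose.yml <<'EOF'
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
    EOF

    # Start the whole stack; each component can be replaced independently
    docker compose up -d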

Because of their lightness, these component containers can each draw on machine resources as they are needed. To achieve the same result, virtualization tools need a pool of idle VMs provisioned in advance.

With Docker, no pool is needed, since a container boots in a few seconds. But the promise of Docker goes further. Because Docker containers are portable from one infrastructure to another, it becomes possible to imagine application mirroring and load balancing between clouds, and why not disaster recovery or business continuity plans between clouds ... or even to move a project from one cloud provider to another.

Is Docker technology capable of handling complex architectures?
To facilitate the management of complex architectures, Docker has built a Containers-as-a-Service platform. Called Docker Enterprise Edition (Docker EE), it includes the main tools needed for the deployment, management, security, and monitoring of such environments.

In late 2015, Docker also acquired the Tutum cloud platform: a SaaS environment designed to drive the deployment of containerized applications on various public clouds (Microsoft Azure, DigitalOcean, Amazon Web Services, and IBM SoftLayer).

On the cluster management side, Docker EE integrates both Swarm, its home-grown orchestration engine, and Kubernetes, which is the main alternative to it. Kubernetes comes from an open source project initiated by Google.
But on the infrastructure automation front, the San Francisco company intends to go even further. With this in mind, in early 2016 it acquired the start-up Conductant, which made a name for itself developing Apache Aurora, a clustering system designed to manage applications reaching hundreds of millions of users.

In early 2017, it also acquired the French startup Infinit, which publishes a distributed, cross-platform storage technology.

Are there benchmarks comparing Docker with traditional virtualization technologies, and what do they teach us?
Yes. In particular, IBM published in 2014 a performance comparison between Docker and KVM. Its conclusion is unequivocal: Docker equals or exceeds the performance of this open source virtualization technology in every case tested.

For Big Blue, the speed of Docker containers is similar to that of bare-metal servers. By eliminating the resource-intensive virtualization layer, Docker reduces RAM consumption by a factor of 4 to 30, according to the study.
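
Those figures come from the studies themselves; for your own workloads, the overhead of a container is easy to measure directly. A minimal sketch (the nginx image is just an example):

    # Start a throwaway container and snapshot its resource usage
    docker run -d --name bench nginx:alpine
    docker stats --no-stream bench   # CPU, RAM, network and disk I/O
    docker rm -f bench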

Another study, published in August 2017 by the IT research department of Lund University in Sweden, compares the performance of containers with that of VMware virtual machines. It too reaches a conclusion in favor of Docker.

What are the limitations of Docker technology?
Initially limited to Linux, Docker has since been ported to Windows Server. The fact remains that containers created on Linux are not natively portable to the Microsoft server, and vice versa.

This is the major limitation of Docker, and its main difference from traditional virtualization: a virtual machine running Linux can indeed run on a Windows server, and vice versa.

Beyond servers, can Docker containers be used on connected objects and devices?
The answer is yes. Docker has a long history of providing tools for developers to manipulate containers and test container architectures on their own computers.

In April 2017, the publisher also announced an open source toolkit, called LinuxKit, designed to assemble a Linux distribution from system components packaged in containers.
The idea is to propose a modular architecture for building a custom distribution limited to only the system processes the applications actually need. The advantage? Being containerized, each component can be maintained independently.

The architecture also makes it possible to minimize the number of processes, which optimizes both the weight of the OS and its attack surface. Docker presents this solution as ideal for connected objects. The LinuxKit project was launched in connection with the Linux Foundation and several market players (ARM, HPE, IBM, Intel, and Microsoft).
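
As a heavily hedged sketch of the approach: a LinuxKit configuration lists the kernel and the system services, each delivered as a container image, and the linuxkit build command assembles them into a bootable image. The YAML below follows the shape of the project's published examples, but the image tags are placeholders to check against the LinuxKit documentation:

    # A LinuxKit config: every piece of the OS is a container image
    cat > minimal.yml <<'EOF'
    kernel:
      image: linuxkit/kernel:5.10.104
    init:
      - linuxkit/init:latest
    services:
      - name: nginx
        image: nginx:alpine
    EOF
    linuxkit build minimal.yml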

What are the components offered by Docker in open source?
Docker has released a dozen components under the Apache license, covering the main functionality needed to drive a containerized architecture: network management, storage, security ... Among them is containerd, a central building block of Docker technology, since it handles the execution of containers.
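
For illustration, containerd can be driven directly through ctr, the low-level test client it ships with, independently of the Docker CLI. A sketch, assuming containerd is installed and running:

    # Pull an image and run a container through containerd itself
    sudo ctr images pull docker.io/library/alpine:latest
    sudo ctr run --rm docker.io/library/alpine:latest demo echo "run by containerd"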

Given how critical this building block is to the standardization of container offerings, Docker has transferred its rights to an independent organization, the Cloud Native Computing Foundation.

In April 2017, the publisher went a step further by launching Moby: an open source framework designed for building container systems. It includes 80 open source components: Docker's own (containerd, LinuxKit, swarmkit ...) but also other projects (Redis, Nginx ...).
Moby is intended as a participatory project through which all stakeholders in the Docker community who build container-based solutions can share building blocks.

Is the unikernel the same as a container?
No. The unikernel sits halfway between classic server virtualization and the container. While traditional virtualization embeds the entire server OS in the virtual machine, the unikernel embeds in the VM only the system libraries necessary to run the application it contains.

The core of the OS remains outside the machine. Unlike the Docker container, the unikernel therefore does embed part of the OS in the VM.

Among its main advantages over the container, the unikernel makes it possible to tailor the system layer embedded in the VM to the specific needs of the application to be executed.

In January 2016, Docker acquired Unikernel Systems, a start-up specializing in unikernels, with the aim of offering an alternative to its containers.

Where to start a Docker deployment project?
The Docker user community is becoming sizeable, although its exact size remains difficult to evaluate. Plenty of documentation on this open source technology is available on the web. On Stack Overflow alone, several thousand pages are devoted to it.


The company also provides users with a service called the Docker Hub, designed to allow the exchange and building of pre-configured Docker containers.
Hosting more than 460,000 container images (Ubuntu, WordPress, MySQL, NodeJS ...), this space is also integrated with GitHub.
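
Day to day, the Docker Hub is what docker pull and docker push talk to by default. A short sketch (the myuser namespace is a placeholder for your own Hub account):

    # Fetch a pre-built image from the Docker Hub
    docker pull wordpress:latest

    # Share your own image: log in, tag it under your namespace, push it
    docker login
    docker tag myapp:1.0 myuser/myapp:1.0
    docker push myuser/myapp:1.0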

Docker also markets an on-premises version of the Docker Hub (the Docker Hub Enterprise).
Finally, Docker has launched an online application store. The objective: to offer publishers a commercial channel for distributing their applications in the form of containers.

Conclusion
If you need to run as many applications as possible on a minimum of servers, then containers are the way to go, keeping in mind that you will need to watch the systems running those containers closely as long as container security is not fully locked down.

If you need to run multiple applications on servers and/or support a wide variety of operating systems, it is best to turn to virtual machines. And if security is the number one priority for your business, then virtual machines should also be preferred.

In the real world, you will likely use both containers and virtual machines, in the cloud and in your data centers. The economies of scale that containers provide cannot be ignored. At the same time, virtual machines still retain their benefits.

As container technology matures, "the VM/container combination will be the nirvana of cloud portability," says Thorsten von Eicken, CTO of RightScale, a company specializing in cloud platform management. We are not there yet, but that is where we are headed.
