
Container concept and history.

Operating-system-level virtualization is a server virtualization method in which the kernel
of an operating system allows the existence of multiple isolated user-space instances,
instead of just one. Such instances, which are sometimes called containers,[1] software
containers, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look and
feel like a real server from the point of view of its owners and users.
On Unix-like operating systems, this technology can be seen as an advanced
implementation of the standard chroot mechanism. In addition to isolation mechanisms, the
kernel often provides resource-management features to limit the impact of one container's
activities on other containers.
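On Linux, cgroups implement these kernel resource controls for whole groups of processes. As a rough single-process analogue, POSIX rlimits can be adjusted from Python on a Unix system; the sketch below is illustrative only, not container tooling:

```python
# Rough single-process analogue (illustrative only): POSIX rlimits let
# the kernel cap what one process may consume, much as cgroups cap the
# resources of all processes in a container.
import resource

# Current limit on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit; a container runtime applies the analogous
# cgroup limits to every process placed inside the container.
new_soft = min(soft, 256)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

The point of the analogy is that enforcement happens in the kernel, so a contained workload cannot simply opt out of its limits.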
Operating-system-level virtualization is commonly used in virtual hosting environments,
where it is useful for securely allocating finite hardware resources amongst a large number
of mutually-distrusting users. System administrators may also use it, to a lesser extent, for
consolidating server hardware by moving services on separate hosts into containers on the
one server.
Other typical scenarios include isolating several applications in separate containers for
improved security, hardware independence, and added resource-management features. The
improved security provided by the use of a chroot mechanism, however, is nowhere near
ironclad.[2] Operating-system-level virtualization implementations capable of live migration
can also be used for dynamic load balancing of containers between nodes in a cluster.

Overhead
Operating-system-level virtualization usually imposes little to no overhead, because
programs in virtual partitions use the operating system's normal system call interface and do
not need to be subjected to emulation or be run in an intermediate virtual machine, as is the
case with whole-system virtualizers (such as VMware ESXi, QEMU or Hyper-V) and
paravirtualizers (such as Xen or UML). This form of virtualization also does not require
support in hardware to perform efficiently.

Flexibility
Operating-system-level virtualization is not as flexible as other virtualization approaches
since it cannot host a guest operating system different from the host one, or a different guest
kernel. For example, with Linux, different distributions are fine, but other operating systems
such as Windows cannot be hosted.
Solaris partially overcomes the limitation described above with its branded zones feature,
which provides the ability to run an environment within a container that emulates an older
Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded
zones) are also available on x86-based Solaris systems, providing a complete Linux
userspace and support for the execution of Linux applications; additionally, Solaris provides
utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions
inside "lx" zones.[3][4] However, in 2010 Linux branded zones were removed from Solaris; in
2014 they were reintroduced in Illumos, which is the open source Solaris fork, supporting
32-bit Linux kernels.[5]

Storage
Some operating-system-level virtualization implementations provide file-level copy-on-write
(CoW) mechanisms. (Most commonly, a standard file system is shared between partitions,
and those partitions that change the files automatically create their own copies.) This is
easier to back up, more space-efficient and simpler to cache than the block-level
copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers,
however, can work with non-native file systems and create and roll back snapshots of the
entire system state.
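The file-level copy-on-write idea can be sketched in a few lines of Python (a toy model with hypothetical names, not any real implementation): containers share one read-only base layer, and a container gets a private copy of a file only at the moment it writes to it.

```python
# Toy model (hypothetical names) of file-level copy-on-write: every
# container shares one read-only base layer and gets a private copy of
# a file only when it writes to it.
class CowFilesystem:
    def __init__(self, base):
        self.base = base   # shared, read-only layer: path -> content
        self.overlay = {}  # this container's private copies

    def read(self, path):
        # A private copy wins; otherwise fall through to the shared base.
        if path in self.overlay:
            return self.overlay[path]
        return self.base[path]

    def write(self, path, content):
        # Copy-on-write: the shared base layer is never modified.
        self.overlay[path] = content

base = {"/etc/hosts": "127.0.0.1 localhost"}
c1 = CowFilesystem(base)
c2 = CowFilesystem(base)
c1.write("/etc/hosts", "127.0.0.1 web01")  # only c1 sees this change

print(c1.read("/etc/hosts"))  # 127.0.0.1 web01
print(c2.read("/etc/hosts"))  # 127.0.0.1 localhost
```

Because unmodified files are stored once and shared, backups and caching can operate on individual files rather than on opaque disk-image blocks, which is the space and simplicity advantage the paragraph describes.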
In 2014, Docker teamed with Canonical, Google, Red Hat, and Parallels to create
libcontainer, a standardized open-source library that allows containers to work with Linux
namespaces and control groups without needing administrator access, and that offers a
consistent interface across all major Linux versions.
This allowed many containers to run within a single virtual machine. Previously,
administrators usually walled applications off from each other by putting one application per
virtual machine; now a VM does not need to be spun up for every application, since multiple
applications can run in one VM environment. This means no longer needing hundreds of
VMs on one machine.
"The problem with VMs is they have a lot more overhead," said Kelsey Hightower, senior
engineer and chief advocate at CoreOS, which makes its own container product called
Rocket. "There is a huge slowdown depending on the workload. That VM has a high cost
because you have a middleman to get to your hardware. You're probably wasting 15% to
20% of resources for VMs when all they wanted was app containment."
You know an idea is a convincing one when Microsoft joins the party early. It is partnering
with Docker to support containers on Azure, and it is also talking about integrating Docker
containers into Windows Server. That speaks to the level of interest around containers.
Lyman believes Microsoft will do its own work internally to create its own container spec, but
with everything it is doing with Docker, that spec will be able to interface with Docker
containers without vendor lock-in.
The Big Weakness
The other shoe to drop when it comes to containers is security. Containers don't have much
of it, which is a huge problem in a public cloud environment.
"Containers have critical limitations in areas like OS support, visibility, risk mitigation,
administration, and orchestration. This is especially true for the newer brands of
containerization which do not (yet) have a significant management and security ecosystem,
in contrast to more mature solutions like Solaris containers," said Andi Mann, vice president
and a member of the Office of the CTO at CA.
The problem is that containers share the same hooks into the kernel, said Hightower. "That's
a problem because if there are any vulnerabilities in that kernel when doing multitenancy,
and if I know of an exploit the others don't know about, I have a way to get into your containers.
Containers have not yet demonstrated that they can deliver the same secure boundaries that
a VM still has."
Hightower said someone showed Docker just a few months ago how to escape the system
and gain full access to the server, so no one is ready to say you can get rid of virtual
machines just yet, especially in a public cloud environment like Amazon or Google.

"You don't want to have untrusted actors on the same server. A single organization can still
get benefit, but a cloud provider might not want to spin their whole business on containers
with no VMs to isolate their business," he said.
The second area of weakness for containers is that they are not yet proven at scale. "Containers
face many challenges to scale. It's one thing to do a Web app but it's another thing to do a
multitenant, complex enterprise app with a lot of data of interest," said Lyman.
But that can be turned into a positive as well. "Containers are a really good match for
microservices, where you chop up the app into chunks for different teams, so everyone
works on their specialty area," said Hightower. "Containers are good for that use case."
Co-Existence
Because they have different strengths, weaknesses and functions, virtual machines and
containers should not be viewed as competitors but as complementary technologies.
"Containers and VMs are destined to be close companions in the cloud of clouds. Just as
one cloud is not enough, and so too, one virtualization technology is not enough. Each
technology provides a different response to different use cases, and in many cases work
together to solve those challenges," said Mann.
"Containers are especially good for early development, for example, because the speed of
manual provisioning/deprovisioning greatly outweighs the improved manageability of a
virtual machine in an environment where everything is new and rapidly changing," he added.
The two are very much complementary to each other, adds Hightower. "Now you need fewer
VMs and probably can go back to a bare metal server with no virtualization. If you are good
at VMs, you can use containers for everything."
The big challenge, then, is to address the security problem. Hightower notes there are
already some secure products out there, such as Red Hat's SELinux, with its
government-level security, and Ubuntu's AppArmor, which binds access control attributes to
applications rather than users. But more is needed to secure the kernel and keep unwanted
intruders out of other VMs in a multitenant environment.
The next step is large scale orchestration and scale, said Lyman, but he thinks that will
happen in time. He bases that on discussions with vendors, investors, and end users, which
is unusual because end users are usually laggards when it comes to new technologies. "I
saw it with DevOps and PaaS. With containers, end-users are asking questions alongside
with developers and investors," he said.
Containers have drawn interest not only from startups like Docker, CoreOS and Shippable
but also from big names: Google with Kubernetes, IBM supporting Docker on its BlueMix
PaaS service, Amazon supporting containers on EC2 (Elastic Compute Cloud), HP and
Microsoft with their own efforts, and Red Hat with its OpenShift PaaS.
"It's similar to how OpenStack brought together startup specialists and megavendors. You're
only going to hear more about this from bigger vendors and see more different container
technologies, but right now everyone is putting most bets on Docker and Docker containers,"
said Lyman.

Docker vs LXC/LXD: What's the best for your website?

by Reeshma Mathews | 04 August, 2016

Due to their light-weight reputation, container technologies such as Docker and LXC get a lot
of attention from online businesses these days.
Docker and LXC are suited for different purposes. But in the flurry of information floating
around the internet, these differences often get overlooked.
Today we'll discuss the major differences between Docker and LXC, and where to use them.

1. Full system virtualization Vs App virtualization


Containers can be broadly classified into two types, based on the depth of virtualization they
provide: containers such as LXC, which enable full system virtualization, and those like
Docker, which provide application virtualization.

Docker Vs LXC architecture


In full system virtualization, users get their preferred OS flavor and can install their required
applications, such as web or mail servers, in the container. That makes LXC containers
similar to VMs.
Application virtualization is focused on a single application. A Docker container, when
started, runs a single process, which is the application for which it is intended.
In short, LXC containers can execute multiple applications and processes, while Docker
containers are restricted to a single application or a service.
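As a hypothetical illustration of the single-application model, a minimal Dockerfile typically names exactly one foreground process for the container to run (the base image and file paths here are assumptions for the example, not taken from the article):

```dockerfile
# Hypothetical minimal image: the container exists to run exactly one
# foreground process.
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/
# CMD names the single process that becomes the container's main
# process; when it exits, the container stops.
CMD ["nginx", "-g", "daemon off;"]
```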
Read: Rapid application deployment using Docker

2. Data is not saved in Docker


LXC containers are complete virtualization entities with their own file systems, so any data
updated in an LXC container will always be retrievable.
In a Docker container, on the other hand, changes made to the data do not persist beyond a
restart. (However, note that with the help of Docker volumes, it is possible to retain data
changes on the host.)
So, if you want a single container solution to manage data belonging to multiple applications
and services, LXC is the ideal choice over Docker.
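The distinction above can be sketched as a toy Python model (hypothetical names, not the Docker API): writes to a container's own layer vanish with the container, while writes through a named volume survive it.

```python
# Toy model (hypothetical names, not the Docker API): a container's
# writable layer is discarded with the container, while a named volume
# managed by the host outlives any single container.
volumes = {}  # host-managed named volumes: volume name -> {path: data}

class Container:
    def __init__(self, volume=None):
        self.layer = {}       # ephemeral writable layer
        self.volume = volume  # optional named volume this container mounts

    def write(self, path, data):
        if self.volume is not None:
            # Writes to the mounted volume persist on the host.
            volumes.setdefault(self.volume, {})[path] = data
        else:
            # Writes to the container's own layer are ephemeral.
            self.layer[path] = data

# A container writes through a named volume, then is removed.
c1 = Container(volume="dbdata")
c1.write("/var/lib/mysql/users.db", "alice,bob")
del c1

# A fresh container mounting the same volume still sees the data.
c2 = Container(volume="dbdata")
print(volumes[c2.volume]["/var/lib/mysql/users.db"])  # alice,bob
```

With real Docker, the equivalent is attaching a named volume at run time, for example `docker run -v dbdata:/var/lib/mysql mysql`.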

3. Single purpose Vs Multi-purpose

LXC containers are multi-purpose as they allow multiple applications to execute in them.
Designing a Docker system to support multiple applications requires complex setup and
coding, which is a waste of time when LXC readily provides that feature.
Docker containers are suitable for developers who want to develop and ship immutable
images of their applications across different platforms for use.
For small and medium businesses which require multiple applications and services such as
WordPress, Email, MySQL, Apache, etc., LXC containers are apt.
Read: Building a WordPress virtualization solution using LXD/LXC containers

4. Platform independence
LXC, as the name suggests, provides Linux containers and cannot run on other operating
systems. Docker containers, on the other hand, can run on any system that supports Docker
Engine.
As Docker Engine is supported on almost all major operating systems, including Linux,
Windows and macOS, Docker containers running an application can be ported easily to any
of these platforms.
While Docker is suited for deploying applications in different OS versions, LXC containers
are suited for setting up a complete set of business services in Linux OS.
Read: How we configured Container as a Service in oVirt

5. Security by isolation
A popular way by which malware spreads is via cross-site contamination.
In LXC, there are multiple applications running in the same environment, so malware
uploaded via one compromised application can possibly spread to other applications or
cause downtime to other services.
In contrast, Docker has each application running in its own isolated environment. Using
SELinux and namespaces, a Docker instance can be fully secured to prevent any cross-site
contamination.
Read: How application isolation in Docker improves security

In short..
Container virtualization is ideal for users who prefer light-weight and easily-manageable
instances. Here we've broadly covered the scope of Docker and LXC containers.
Choosing the technology that suits each business purpose, securing each instance,
customizing the OS images and managing the deployments, etc. are also critical aspects to
be taken care of.

Virtual Machines and Virtual Environments (Docker)?


What is Docker? How does it work? Why Docker?
Case studies.
Cloud Examples
Issues
