What is a Linux container and a Linux hypervisor?

A Virtual Machine (VM) is quite a generic term for many virtualisation technologies.

There are many variations on virtualisation technologies, but the main ones are:

  • Hardware Level Virtualisation
  • Operating System Level Virtualisation

qemu-kvm and VMware are examples of the first kind. They employ a hypervisor to manage the virtual environments in which a full operating system runs. For example, on a qemu-kvm system you can have one VM running FreeBSD, another running Windows, and another running Linux.

The virtual machines created by these technologies behave like isolated individual computers to the guest. Each has a virtual CPU, RAM, NIC, graphics card, etc., which the guest believes are the genuine article. Because of this, many different operating systems can be installed in the VMs and they work "out of the box" with no modification needed.

While this is very convenient, in that many OSes will install without much effort, it has a drawback: the hypervisor has to emulate all the hardware, which can slow things down. An alternative is para-virtualised hardware, in which a new virtual device and driver is developed for the guest, designed for performance in a virtual environment rather than to mimic real hardware. qemu-kvm provides the virtio range of devices and drivers for this. A downside is that the guest OS must support these drivers; but where it does, the performance benefits are great.
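To make the hypervisor side a little more concrete, here is a minimal C sketch (assuming a Linux host with the kvm module loaded and read/write access to /dev/kvm) that does nothing more than open the KVM device and ask which version of its ioctl API the kernel provides. qemu-kvm begins with this same handshake before it goes on to create virtual CPUs and memory for a guest:

    #include <fcntl.h>      /* open, O_RDWR */
    #include <stdio.h>      /* printf, perror */
    #include <sys/ioctl.h>  /* ioctl */
    #include <unistd.h>     /* close */
    #include <linux/kvm.h>  /* KVM_GET_API_VERSION */

    int main(void)
    {
        /* The in-kernel KVM hypervisor is exposed to userspace as a device node. */
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("open /dev/kvm");   /* kvm module not loaded, or no permission */
            return 1;
        }

        /* Ask the kernel which version of the KVM API it speaks. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, NULL);
        printf("KVM API version: %d\n", version);

        close(kvm);
        return 0;
    }

This only checks that the hypervisor interface is present; creating an actual VM and vCPUs involves several further ioctls on top of this file descriptor.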


lxc is an example of Operating System Level Virtualisation, or containers. Under this system, there is only one kernel installed - the host kernel. Each container is simply an isolation of userland processes. For example, a web server (for instance Apache) is installed in a container. As far as that web server is concerned, the only installed server is itself. Another container may be running an FTP server. That FTP server isn't aware of the web server installation - only its own. Another container can hold the full userland installation of a Linux distro (as long as that distro is capable of running with the host system's kernel).

However, there are no separate operating system installations when using containers - only isolated instances of userland services. Because of this, you cannot install different platforms in a container - no Windows on Linux.

Containers are usually created by using a chroot. This creates a separate private root (/) for a process to work with. By creating many individual private roots, processes (web servers, a Linux distro's userland, etc.) run in their own isolated filesystems. More advanced techniques build on this: namespaces give each container its own view of resources such as the network, hostname and process table, while cgroups limit how much RAM and CPU it may use.
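The following C sketch (run as root) shows the bare bones of that idea - a private root via chroot plus a couple of namespaces. The directory /srv/container0 and the hostname are made-up names for the example; real lxc sets up many more namespaces, plus cgroups, device restrictions and so on:

    #define _GNU_SOURCE
    #include <sched.h>      /* unshare, CLONE_* flags */
    #include <stdio.h>      /* perror */
    #include <unistd.h>     /* sethostname, chroot, chdir, execl */

    int main(void)
    {
        /* Give this process its own mount and UTS (hostname) namespaces,
         * so the hostname set below is private to this "container". */
        if (unshare(CLONE_NEWNS | CLONE_NEWUTS) != 0) {
            perror("unshare");
            return 1;
        }
        if (sethostname("container0", 10) != 0)
            perror("sethostname");          /* non-fatal in this sketch */

        /* Point the process at its own private root filesystem;
         * /srv/container0 is assumed to hold a minimal userland. */
        if (chroot("/srv/container0") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }

        /* Everything exec'd from here on sees only that private root. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }

Everything running under that shell still uses the host kernel; the isolation is purely at the userland level.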


There are pros and cons to both, and many long-running debates as to which is best.

  • Containers are lighter, in that a full OS isn't installed for each one, as it is with hypervisors. They can therefore run on lower-specced hardware. However, they can only run Linux guests (on Linux hosts). Also, because they share the kernel, there is the possibility that a compromised container may affect another.
  • Hypervisors are more secure and can run different OSes because a full OS is installed in each VM and guests are not aware of other VMs. However, this utilises more resources on the host, which has to be relatively powerful.

A container is a bit like a chroot environment, except it achieves a more complete isolation of userspace. It does not provide a real VM, but a virtual operating system. VMs create the illusion of multiple machines, within each of which a real, complete operating system may run as if on bare metal. "Complete operating system" here includes a kernel. Some VMs (e.g. QEMU) even allow for simulating different kinds of "bare metal" architectures.

Containers instead create the illusion of multiple kernels, each of which is running a complete userland. You could, e.g., run Debian in one container and Arch in another, so the perspective from within the container is much the same as with a VM. However, you can only run an OS userland compatible with the one actual kernel - in this case, Linux. This is different from real VMs, where you can run an independent kernel and hence any kind of operating system.
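A quick way to see this for yourself: compile and run the small C program below in each container (say one Debian and one Arch). Both print the same kernel release string as the host, because uname() is answered by the single shared kernel. A full VM, by contrast, reports whatever kernel was installed inside the guest:

    #include <stdio.h>
    #include <sys/utsname.h>   /* uname, struct utsname */

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0)
            return 1;

        /* Inside a Debian container, an Arch container, or on the host
         * itself, this prints the same release: there is only one kernel. */
        printf("%s %s\n", u.sysname, u.release);
        return 0;
    }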

So true VMs are more expensive, resource-wise, than containers; if you don't need a different kernel in each VM, you might as well use a container.

There are other virtualization systems that do something similar to LXC, such as OpenVZ, widely used by VPS vendors. An OpenVZ VPS is an independent userland that uses the kernel of its host OS. This is why such VPSs come in a bunch of Linux flavours but nothing else; they must be compatible with the host kernel.

OpenVZ- and LXC-style virtualization is called operating-system-level virtualization.

A hypervisor is a system that manages virtual machines, such as VirtualBox, QEMU, or Xen. Some hypervisors, such as Xen, run on bare metal and do not require a host OS (although they may require a hosted OS to serve as a control interface). Others, such as VirtualBox and QEMU, run inside a host OS. Some, such as QEMU, allow for simulating different machine architectures; others, such as VirtualBox, do not (i.e., the VM architecture is always the same as the real host's). Simulating a foreign architecture requires more resources, just as real VMs require more resources than containers.

Hypervisor style virtualization is called platform level virtualization.