Container technology existed long before the Docker hype started in 2013. Now that Docker containers have reached mainstream usage, it is easy to get confused about the available container types like Docker, LXC, LXD and CoreOS rkt. In this "LXD vs Docker" blog post, we will explain why LXD is actually not competing with Docker.

We will show in a few steps how to install and run LXC containers using LXD container management functions. For that, we will make use of an automated installation process based on Vagrant and VirtualBox.

LXD vs Docker

tl;dr

LXD containers offer features that make them suitable as "pets" instead of "cattle".

LXD is supported on Ubuntu only.

Important commands:

# initialize:
sudo lxd init
# list remote image repos:
sudo lxc remote list
# launch container:
lxc launch images:ubuntu/trusty myTrustyContainer
# list running containers:
lxc list
# run command 'ls /' in a container:
sudo lxc exec myTrustyContainer ls /
# enter command-line interface of a container:
sudo lxc exec myTrustyContainer bash
# stop container:
sudo lxc stop myTrustyContainer

LXD vs Docker – or: What is LXD, and why not use it as a Docker Replacement?

After working with Docker for quite some time, I have stumbled upon another container technology: Ubuntu's LXD (say "lex-dee"). What is the difference from Docker, and do the two really compete with each other, as an article in the German "Linux Magazin" (May 2015) states?

As the developers of LXD point out, the main difference between Docker and LXD is that Docker focuses on application delivery from development to production, while LXD focuses on system containers. This is why LXD is more likely to compete with classical hypervisors like Xen and KVM than with Docker.

Ubuntu's web page points out that LXD's main goal is to provide a user experience similar to that of virtual machines, but using Linux containers rather than hardware virtualization.

To provide that virtual-machine-like user experience, Ubuntu integrates LXD with OpenStack through its REST API. Although there are attempts to integrate Docker with OpenStack (project Magnum), Ubuntu comes much closer to feature parity with real hypervisors like Xen and KVM by offering features like snapshots and live migration. Like any container technology, LXD has a much lower resource footprint than virtual machines; this is why LXD is sometimes called a "lightervisor".
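
To give an impression of these hypervisor-like features, here is a minimal sketch (assuming the container myTrustyContainer from the tl;dr above, and a second LXD host that has been added as a remote named lxd2; live migration of a running container additionally requires CRIU on both hosts):

# create a snapshot of the container:
lxc snapshot myTrustyContainer snap0
# roll the container back to the snapshot:
lxc restore myTrustyContainer snap0
# migrate the container to the second LXD host:
lxc move myTrustyContainer lxd2:myTrustyContainer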

One of the main remaining concerns of IT operations teams regarding container technology is that containers present a "larger attack surface" than virtual machines. Canonical, the creator of Ubuntu and LXD, is tackling these security concerns by making LXD-based containers secure by default. Still, any low-level security feature developed for LXC is potentially available to both Docker and LXD, since both have their roots in LXC technology.

What does this mean for us?

We have learned that Docker offers a great way to deliver applications, while LXD offers a great way to reduce the footprint of virtual-machine-like containers. What if you want to leverage the best of both worlds? One way is to run Docker containers within LXD containers. This, and its current restrictions, are described in this blog post by Stéphane Graber, as sketched below.
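
As a teaser, the key ingredient is LXD's nesting support. A minimal sketch (assuming an Ubuntu 16.04 container named myDockerHost; Stéphane Graber's post describes further restrictions, e.g. regarding storage backends):

# allow nested containers inside the LXD container:
lxc config set myDockerHost security.nesting true
# restart the container so the setting takes effect:
lxc restart myDockerHost
# install Docker inside the LXD container:
lxc exec myDockerHost -- apt-get update
lxc exec myDockerHost -- apt-get install -y docker.io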

Okay, one step at a time: let us postpone the Docker-in-LXD discussion and get started with LXD now.

Getting Started with LXD: a Step by Step Guide

This chapter largely follows this getting-started web page. However, instead of trying to be complete, we will go through a simple end-to-end example. Moreover, we will add some important commands found on this nice LXD cheat sheet. In addition, we will explicitly record the example output of the commands.

Prerequisites:

  • You will need administration rights on your computer.
  • I have performed my tests with direct access to the Internet: through a firewall, but without an HTTP proxy. If you cannot get rid of your HTTP proxy, read this blog post, though.

Step 1: Install VirtualBox

If not already done, you need to install VirtualBox, found here. See Appendix A if you encounter installation problems on Windows with the error message "Setup Wizard ended prematurely". For my tests, I am using the already installed VirtualBox 5.0.20 r106931 on Windows 10.

Step 2: Install Vagrant

If not already done, you need to install Vagrant, found here. For my tests, I am using the already installed Vagrant version 1.8.1 on my Windows 10 machine.

Step 3: Initialize and download an Ubuntu 16.04 Vagrant Box

In a future blog post, we want to test Docker in LXD containers. This is supported on Ubuntu 16.04 and higher. Therefore, we download the latest daily build of the corresponding Vagrant box. As a preparation, we create a Vagrantfile in a separate directory by issuing the following command:

vagrant init ubuntu/xenial64

You can skip the next command and directly run the vagrant up command if you wish, since the box will be downloaded automatically if no current version of the Vagrant box is found locally. However, I prefer to download the box first and boot it later, since it is easier to observe what happens during the boot.

vagrant box add ubuntu/xenial64

Depending on the speed of your Internet connection, you can take a break here.

Step 4: Boot the Vagrant Box as VirtualBox Image and connect to it

Then, we will boot the box with:

vagrant up

Note: if you encounter an error message like "VT-x is not available", this may be caused by booting Windows 10 with Hyper-V enabled or by nested virtualization. According to this StackOverflow Q&A, running VirtualBox without VT-x is possible if you make sure that the number of CPUs is one. For that, try to set vb.cpus = 1 in the Vagrantfile and remove any statement like vb.customize ["modifyvm", :id, "--cpus", "2"]. If you prefer to use VT-x on your Windows 10 machine, you need to disable Hyper-V instead. Appendix B: "Vagrant VirtualBox Error message: 'VT-x is not available'" describes how to add a boot menu item that allows booting without Hyper-V enabled.
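
For reference, the corresponding provider block in the Vagrantfile could look like this minimal sketch (adapt it to your existing Vagrantfile):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    # a single CPU allows VirtualBox to run without VT-x:
    vb.cpus = 1
  end
end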

Now let us connect to the machine:

vagrant ssh

Step 5: Install and initialize LXD

Now we need to install LXD on the Vagrant image by issuing the commands

sudo apt-get update
sudo apt-get install -y lxd
newgrp lxd

Now we need to initialize LXD with the interactive 'lxd init' command:

ubuntu@ubuntu-xenial:~$ sudo lxd init
sudo: unable to resolve host ubuntu-xenial
Name of the storage backend to use (dir or zfs) [default=zfs]: dir
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=0.0.0.0]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
LXD has been successfully configured.

I have decided to use dir as the storage backend (since ZFS was not enabled), I have configured the LXD server to be available via the default network port 8443, and I have chosen to start without the LXD bridge, since this article points out that the LXD bridge does not allow SSH connections by default.

The configuration is written to a key-value store, which can be read with the lxc config get command, e.g.

ubuntu@ubuntu-xenial:~$ lxc config get core.https_address
0.0.0.0:8443

The list of available system config keys can be found in this Git-hosted document. However, I have not found the storage backend type "dir" that I have configured; I guess the system assumes "dir" as long as the zfs and lvm variables are not set. Also, it is a little bit confusing that we configure LXD, but the config is read out via lxc commands.
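
Config keys can also be written with the corresponding lxc config set command. For example, the trust password chosen during lxd init can be changed later like this (a sketch; replace the placeholder with your own password):

# change the server's trust password:
sudo lxc config set core.trust_password MySecretPassword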

Step 6: Download and start an LXC Image

Step 6.1 (optional): List remote LXC Repository Servers:

The images are stored in image repositories. Apart from the local repository, the default remote repositories have the aliases images, ubuntu and ubuntu-daily:

ubuntu@ubuntu-xenial:~$ sudo lxc remote list
sudo: unable to resolve host ubuntu-xenial
+-----------------+------------------------------------------+---------------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | YES    | NO     |
+-----------------+------------------------------------------+---------------+--------+--------+
| local (default) | unix://                                  | lxd           | NO     | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
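
Since we have made our LXD server available on port 8443 during lxd init, a second machine running an LXD client could add our server as a remote. A hypothetical sketch, assuming our server is reachable at 192.168.33.10:

# on the second machine: add our LXD server as a remote
# (you will be prompted to accept the certificate and to enter the trust password):
lxc remote add myVagrantServer 192.168.33.10:8443
# launch a container on the remote server:
lxc launch images:ubuntu/trusty myVagrantServer:myRemoteContainer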

Step 6.2 (optional): List remote LXC Images:

List all available Ubuntu images for amd64 systems in the images repository:

ubuntu@ubuntu-xenial:~$ sudo lxc image list images: amd64 ubuntu
sudo: unable to resolve host ubuntu-xenial
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
|          ALIAS          | FINGERPRINT  | PUBLIC |              DESCRIPTION              |  ARCH  |  SIZE   |         UPLOAD DATE          |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/precise (3 more) | adb92b46d8fc | yes    | Ubuntu precise amd64 (20160906_03:49) | x86_64 | 77.47MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/trusty (3 more)  | 844bbb45f440 | yes    | Ubuntu trusty amd64 (20160906_03:49)  | x86_64 | 77.29MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/wily (3 more)    | 478624089403 | yes    | Ubuntu wily amd64 (20160906_03:49)    | x86_64 | 85.37MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/xenial (3 more)  | c4804e00842e | yes    | Ubuntu xenial amd64 (20160906_03:49)  | x86_64 | 80.93MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+
| ubuntu/yakkety (3 more) | c8155713ecdf | yes    | Ubuntu yakkety amd64 (20160906_03:49) | x86_64 | 79.16MB | Sep 6, 2016 at 12:00am (UTC) |
+-------------------------+--------------+--------+---------------------------------------+--------+---------+------------------------------+

Instead of the "ubuntu" filter keyword in the image list command above, you can use any filter expression. E.g. sudo lxc image list images: amd64 suse will find openSUSE images available for x86_64.

Step 6.3 (optional): Copy remote LXC Image to local Repository:

The next command is optional. It downloads the image without running it yet. It can be skipped, since the download will be performed automatically by the 'lxc launch' command further below if the image is not found in the local repository already.

lxc image copy images:ubuntu/trusty local:
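
As we will see in the next step, the copied image has no alias in the local repository. If you prefer a handy name, the copy command accepts an --alias option (a sketch):

# copy the image and assign a local alias at the same time:
lxc image copy images:ubuntu/trusty local: --alias trusty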

Step 6.4 (optional): List local LXC Images:

We can list the locally stored images with the following image list command. If you have not skipped the previous step, you will find the following output:

ubuntu@ubuntu-xenial:~$ sudo lxc image list
sudo: unable to resolve host ubuntu-xenial
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                DESCRIPTION                |  ARCH  |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+
|       | 844bbb45f440 | no     | Ubuntu trusty amd64 (20160906_03:49)      | x86_64 | 77.29MB  | Sep 6, 2016 at 5:04pm (UTC)  |
+-------+--------------+--------+-------------------------------------------+--------+----------+------------------------------+

Step 6.5 (mandatory): Launch LXC Container from Image

With the lxc launch command, we create a container from the image. If the image is not available in the local repository, the command automatically downloads it first.

ubuntu@ubuntu-xenial:~$ lxc launch images:ubuntu/trusty myTrustyContainer
Creating myTrustyContainer
Retrieving image: 100%
Starting myTrustyContainer

If the image is already in the local repository, the "Retrieving image" line is missing and the container starts within seconds (~6-7 sec in my case).

Step 7 (optional): List running Containers

We can list the running containers with the 'lxc list' command, similar to a docker ps -a for those who know Docker:

ubuntu@ubuntu-xenial:~$ lxc list
+-------------------+---------+------+------+------------+-----------+
|       NAME        |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+------+------+------------+-----------+
| myTrustyContainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------------+---------+------+------+------------+-----------+

Step 8: Run a Command on the LXC Container

Now we are ready to run our first command on the container:

ubuntu@ubuntu-xenial:~$ sudo lxc exec myTrustyContainer ls /
sudo: unable to resolve host ubuntu-xenial
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Step 9: Log into and exit the LXC Container

We can log into the container just by running the shell with the 'lxc exec' command:

ubuntu@ubuntu-xenial:~$ sudo lxc exec myTrustyContainer bash
sudo: unable to resolve host ubuntu-xenial
root@myTrustyContainer:~# exit
exit
ubuntu@ubuntu-xenial:~$

We can exit the container by simply issuing the "exit" command. Unlike with Docker containers, this does not stop the container.

Step 10: Stop the LXC Container

Finally, we stop the LXC container with the 'lxc stop' command:

ubuntu@ubuntu-xenial:~$ sudo lxc stop myTrustyContainer
sudo: unable to resolve host ubuntu-xenial
ubuntu@ubuntu-xenial:~$ lxc list
+-------------------+---------+------+------+------------+-----------+
|       NAME        |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+------+------+------------+-----------+
| myTrustyContainer | STOPPED |      |      | PERSISTENT | 0         |
+-------------------+---------+------+------+------------+-----------+

Summary

In this blog post, we have discussed the differences between Docker and LXD. One of the main differences between Docker and LXD is that Docker focuses on application delivery, while LXD seeks to offer virtual-machine-like Linux system environments.

After discussing the differences between Docker and LXD, we have performed a hands-on LXD session, showing how to

  • install the software,
  • download images,
  • start containers from the images and
  • run simple Linux commands in the containers.

Next steps:

Here is a list of possible next steps on the path to Docker in LXD:

  • Networking
  • Docker in LXD containers
  • LXD: Integration into OpenStack
  • Put it all together

Appendix A: VirtualBox Installation Problems: "Setup Wizard ended prematurely"

  • Download the VirtualBox installer
  • When I start the installer, everything seems to be on track until I see "rolling back action" and I finally get this:
    "Oracle VM VirtualBox x.x.x Setup Wizard ended prematurely"

Resolution of the “Setup Wizard ended prematurely” Problem

Let us try to resolve the problem: the VirtualBox installer downloaded from Oracle shows the exact same error: "…ended prematurely". This is not a Docker bug. Playing with conversion tools from VirtualBox to VMware did not lead to the desired results.

The solution: Google is your friend, and the winner is https://forums.virtualbox.org/viewtopic.php?f=6&t=61785. After backing up the registry, changing the registry entry

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Network -> MaxFilters from 8 to 20 (decimal)

and rebooting the laptop, the installation of VirtualBox is successful.

Note: while this workaround has worked on my Windows 7 notebook, it has not worked on my new Windows 10 machine. However, I have managed to install VirtualBox on Windows 10 by de-selecting the USB support module during the VirtualBox installation process. I remember having seen a forum post pointing to that workaround, with the additional information that the USB drivers are installed automatically the first time a USB device is attached to the host (not yet tested on my side).

Appendix B: Vagrant VirtualBox Error message: “VT-x is not available”

Error:

If you get an error message during vagrant up telling you that VT-x is not available, a likely reason is that you have enabled Hyper-V on your Windows 10 machine: VirtualBox and Hyper-V cannot share the VT-x extensions of the CPU:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'thesteve0/openshift-origin' is up to date...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
 default: Adapter 1: nat
 default: Adapter 2: hostonly
==> default: Forwarding ports...
 default: 8443 (guest) => 8443 (host) (adapter 1)
 default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "8ec20c4c-d017-4dcf-8224-6cf530ee530e", "--type", "headless"]

Stderr: VBoxManage.exe: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole

Resolution:

Step 1: Prepare your Windows machine for dual boot with and without Hyper-V

As Administrator, open a CMD and issue the commands

bcdedit /copy "{current}" /d "Hyper-V" 
bcdedit /set "{current}" hypervisorlaunchtype off
bcdedit /set "{current}" description "non Hyper-V"

Step 2: Reboot the machine and choose the "non Hyper-V" option.

Now, the vagrant up command should no longer show the "VT-x is not available" error message.
