This blog post shows a quick way to install Kubernetes, a Docker orchestration framework published by Google, as a set of Docker containers. With that, you can circumvent the hassle you may run into while trying to install Kubernetes natively.
Note: previous blogs in the DDDocker series can be found on LinkedIn; more recent blog posts on Docker are found here on WordPress.
Versions
v0.1 (draft) 2015-08-12: described 3 ways of installing Kubernetes, but they either failed or led to problems later on, because "make" and other commands were missing on the machine running the kubectl client (a Windows-related problem). This installation version has been moved to the appendix, since it might still be needed later on when testing other procedures on Linux.
v0.2 (draft) 2015-08-14: added a coarse outline of my (now successful) 4th attempt
v1.0 2015-08-17: full description of the successful attempt; I have moved the old, problematic attempt into the appendix.
v1.1 2015-08-17: added a subchapter "Networking Challenges", which shows how to route from the Windows host to the service.
v1.2 2015-08-18: moved the page to wordpress.com, since the LinkedIn blog was not available to all of my colleagues.
v1.3 2016-07-11: moved the documentation of the unsuccessful attempts to the end of the document
Introduction
What is Kubernetes all about? You might love this intro: The Illustrated Children’s Guide to Kubernetes.
In the last blog, I investigated some low-level container orchestration using fleet, which calls itself a "simple distributed init system", but we could show that it offers possibilities to
- define container-based services,
- monitor the health of Docker hosts, and
- automatically restart containers on other hosts if a Docker host fails (a minimal fleet unit file is sketched below as a reminder).
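As a reminder from that blog, such a container-based service is defined in fleet via a systemd-style unit file. The following is a minimal, hypothetical example (the unit name, container name and the echo loop are made up for illustration); if the machine running the unit fails, fleet will reschedule the unit on another host:
[Unit]
Description=Hello World service (hypothetical example)
After=docker.service
Requires=docker.service
[Service]
# remove a possibly existing old container, then run the container in the foreground
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello
The unit would be submitted and started with "fleetctl start hello.service"; "fleetctl list-units" then shows on which host it is running.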
For those looking for support of more complex scheduling requirements or for a first-class container orchestration system, Google's Kubernetes is recommended. Let us explore what Kubernetes adds to fleet's capabilities, how to install it, and how to test its core features. Kubernetes is a core element of other, more complete container frameworks, e.g. Red Hat's OpenShift Container Platform (a.k.a. OpenShift Enterprise 3.x).
Kubernetes Architecture
The architecture consists of a master Docker node and one or more minion nodes. In our example, the master node offers:
- kubectl, i.e. the kube client
- the REST API with authentication, Replication Controller and Scheduler
- the kubelet info service, i.e. the service which talks to the other Docker hosts
Depending on the size of the solution, the functions can be spread over different docker hosts.
The minion Docker hosts that are hosting pods offer the following functions:
- kubelet, i.e. the kube agent the kubelet info service talks to
- cAdvisor, which is used to monitor containers
- a proxy, which offers an abstraction layer for the communication with pods; see the description of pods below.
Pods:
- are a set of containers on a single Docker host
- each pod is assigned an IP address
- communication between pods is performed via a proxy, which is the abstraction layer that offers the pod's IP address to the outside (a minimal pod definition is sketched below)
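To make the pod concept more concrete, here is a minimal pod definition for the v1 API. It can be fed to kubectl from stdin once kubectl is installed and connected to the API server (see the installation below); the pod and container names are only illustrative, and I assume this kubectl version accepts "-f -" for stdin, as current versions do:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod           # illustrative pod name
spec:
  containers:
  - name: nginx             # all containers of a pod share the pod's IP address
    image: nginx
    ports:
    - containerPort: 80
EOF
"kubectl describe pod nginx-pod" then shows, among other things, the IP address the pod has been assigned.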
kubectl is the client talking to a REST API, which in turn talks to the kubelet info service, which in turn talks to the pods via the local kubelet agents.
etcd is used as a distributed key-value store. I assume that host clustering is done via the etcd discovery service (t.b.v.).
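Since kubectl only talks to the REST API, the same information can also be retrieved with plain HTTP calls once the master is up. The paths below assume the v1 API and an API server listening on localhost:8080, as in the setup described later:
# roughly the equivalent of 'kubectl get nodes':
curl http://localhost:8080/api/v1/nodes
# roughly the equivalent of 'kubectl get pods' in the default namespace:
curl http://localhost:8080/api/v1/namespaces/default/pods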
Installation of Kubernetes
…on an Ubuntu Docker host (works well)…
We are following the instructions: Running Kubernetes locally via Docker on an Ubuntu VM created via Vagrant.
Installation of Ubuntu using Vagrant
Prerequisites:
- Vagrant and VirtualBox are installed
If you are operating behind an HTTP proxy, set the http_proxy and https_proxy variables accordingly (please replace the name/IP address and port so that they match your environment):
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
Create and initialize a Vagrant working directory:
mkdir ubuntu-trusty64-docker; cd ubuntu-trusty64-docker
vagrant init williamyeh/ubuntu-trusty64-docker
Start and connect to the VM:
vagrant up
vagrant ssh
Start Kubernetes Docker Containers
If you are operating behind a proxy, set the http_proxy and https_proxy variables accordingly and add those variables also to the docker environment:
# perform this section if you are behind an HTTP proxy;
# replace the name/IP address and port to match your environment:
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
# add the two export commands above to the file /etc/default/docker:
sudo vi /etc/default/docker
# and restart the docker service:
sudo service docker restart
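For reference, after editing, /etc/default/docker should contain (at least) the two proxy lines, so that the Docker daemon itself can pull images through the proxy:
# /etc/default/docker (adapt the name/IP address and port to your environment)
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"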
Install kubectl:
sudo wget https://storage.googleapis.com/kubernetes-release/release/v1.0.3/bin/linux/amd64/kubectl -O /tmp/kubectl
sudo cp /tmp/kubectl /usr/local/bin/; sudo chmod +x /usr/local/bin/kubectl
Stop cAdvisor, since it occupies port 8080 and would clash with the port assignment of the Kubernetes API server:
docker ps | grep -i cadvisor | grep ':8080->' | awk '{print $1}' | xargs --no-run-if-empty docker stop
With docker ps, make sure that no docker container is running at this point:
Now you can follow the instructions on https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md, as follows:
Step One: Run etcd
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
Step Two: Run the master
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
Step Three: Run the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
Test it out
kubectl get nodes
Run an application
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
Now run
docker ps
you should see nginx running. You might need to wait a few minutes for the image to get pulled:
Expose it as a service
kubectl expose rc nginx --port=80
If CLUSTER_IP is blank (known Kubernetes issue #10836), run the following command to obtain it:
kubectl get svc nginx
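As an alternative (not part of the original instructions), kubectl describe also shows the assigned cluster IP together with the endpoints behind the service:
kubectl describe svc nginx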
Test the web server:
export no_proxy="<insert-ip-from-above-here>"
curl <insert-ip-from-above-here>
Bingo! The NginX web server is up and running!
Accessing the Service from remote
The service is reachable from the Vagrant Linux host. However, the service cannot be reached from my Windows machine yet.
The problem can be described as follows:
- Kubernetes automatically picks an IP address from a pool (defined where?) for each service. In the case of the Nginx service, this was the IP address 10.0.0.146.
- The address is not owned by the Vagrant Linux VM, as can be seen with an ifconfig.
- By default, a Vagrant VM has no public IP address. Vagrant offers the possibility to map a VM's IP address and port to a port of the host (the Windows host in my case). However, I have not found any possibility to map a host's IP:port pair to an IP address and port that is not owned by the VM.
We have two possibilities to resolve the problem:
1) Chained port mapping (only theory; not tested yet):
- In Vagrant map the host’s port to an IP:port pair owned by the VM
- In the VM, use e.g. the iptables NAT function to map that IP:port pair to the service's IP:port pair (see the untested sketch at the end of this section)
2) Create an additional, reachable interface for the VM and route the service IP address to this public interface
- In the Vagrantfile, add e.g. the line
config.vm.network "private_network", ip: "192.168.33.10"
This will automatically create the interface eth1 in a new host-only network. You need to issue "vagrant reload --provision" to activate this setting.
- On the (Windows) host, add a route to the network that matches the pool from which Kubernetes chooses the service IP addresses. In my case, I have added the route using the command
route add 10.0.0.0 mask 255.255.255.0 192.168.33.10
With this, the Nginx service becomes reachable (now on another, randomly chosen IP address 10.0.0.19, since I have restarted the host):
Perfect! The service is now available also from the Windows host.
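For completeness, option 1 (chained port mapping) could look roughly as follows. This is an untested sketch; the host/guest port 8081 as well as the service IP 10.0.0.146 are only examples:
# in the Vagrantfile: forward a host port to a port owned by the VM
config.vm.network "forwarded_port", guest: 8081, host: 8081
# inside the VM: enable forwarding and DNAT the forwarded port to the service's IP:port pair
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 8081 -j DNAT --to-destination 10.0.0.146:80
sudo iptables -t nat -A POSTROUTING -d 10.0.0.146 -p tcp --dport 80 -j MASQUERADE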
In a real-world setup, external load balancers will map the externally visible IP:port pair to the service's IP:port pair, and the IP address and port will be chosen statically. Routing needs to make sure that the service is reached, no matter on which host it is located. This is something I need to explore in more detail in another post: how can we connect several minions to the same network, and how can we make sure that the IP packets are routed to the right minion? Gratuitous ARP?
Appendix: Installation of Kubernetes on CoreOS (removed)
This was a log of my efforts, but it led to a dead end (installation of kubectl on CoreOS is not supported and running kubectl in a Docker container did not lead to the desired results), so I have removed it; it is still available on request as revision 18 …
Appendix: Attempts to install Kubernetes (including unsuccessful attempts)
Installing Kubernetes on Windows seems to be hard, as can be seen from the first three unsuccessful installation attempts below. However, I have found a fairly automated way of installing Kubernetes as a set of Docker containers by using Vagrant and a base image that already has Docker installed on an Ubuntu VM. This is listed as successful attempt 4) below and is described in more detail in the main part of this blog.
UNSUCCESSFUL ATTEMPTS:
1) Multi-node CoreOS cluster installation on the Getting Started CoreOS page
I had to try 3 times until the kubectl client was downloaded and installed correctly. And when trying to start my first Nginx example, I found myself in a dead end: the example(s) require normal Linux commands like "make", but CoreOS neither supports those commands nor allows installing them.
2) Running Kubernetes locally via Docker
This is supposed to be the quick way for an evaluation installation, since we only need to download and run pre-installed Docker images. Not so this time: here, I ran into the problem that the kubectl client cannot be installed on my boot2docker host. When I try to use one of the kubectl Docker images, I always get an error that 127.0.0.1:8080 cannot be reached.
3) Running Kubernetes locally via Docker within an Ubuntu Docker container on Windows boot2docker does not work either: if I install kubectl in that container, kubectl cluster-info always returns a success message, no matter which server IP address I specify and no matter whether the Kubernetes containers are up and running or not. Shoot.
SUCCESSFUL ATTEMPT:
4) SUCCESS: Running Kubernetes locally via Docker within an Ubuntu VM finally succeeded: I created an Ubuntu VM using Vagrant with the image ubuntu-trusty64-docker from the Vagrant boxes repository and downloaded kubectl v1.0.1 into /usr/local/bin of that image. First, I had the problem that kubectl always returned an error I could not find on Google, saying that it had received the string "Supported versions: [v1.0,…]" or similar. Then I found out that kubectl connects to http://localhost:8080, which was already occupied by a Docker container of the image google/cadvisor that was up and running in the Ubuntu Vagrant image I had used. After finding this container with "docker ps" and stopping it with "docker stop <container-id>", kubectl worked as expected. Now "wget -qO- http://localhost:8080/ | less" returns a list of paths in JSON format and all kubectl commands on the instruction page work as expected. That was hard work. My assumption that it would work on Ubuntu was correct. I will troubleshoot in more detail why it had not worked in one of the other ways.