In this blog post tutorial, we will learn how to install a single-node Kubernetes cluster via minikube. We will install minikube on CentOS 7 natively, without the need for any virtual machine layer.

minikube @ CentOS - Install Minikube on CentOS

This post is inspired by the Katacoda course "Launch Single Node Kubernetes Cluster". However, we will start with instructions on how to install minikube on a fresh Linux system. In my case, I have tested the installation procedure on a small CentOS Hetzner Cloud machine.

Step 1: Install Docker

# If sudo is not installed (e.g. because we are already root in a minimal image),
# make the sudo calls below no-ops:
sudo echo nothing 2>/dev/null 1>/dev/null || alias sudo='$@'

sudo tee /etc/yum.repos.d/docker.repo <<-'EOF' 
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF

Now let us install version 18.06 on the host machine:

yum install -y docker-ce-18.06.1.ce-3.el7.x86_64 \
  && sudo systemctl start docker \
  && sudo systemctl status docker \
  && sudo systemctl enable docker
...
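To double-check that the pinned release is the one that actually got installed, the version string can be kept in one place (a small sketch; the DOCKER_VERSION variable is my own addition, not part of the original commands):

```shell
# Keep the pinned Docker version in one variable so the install command and
# the later verification cannot drift apart (DOCKER_VERSION is an assumption):
DOCKER_VERSION="18.06.1.ce-3.el7"
sudo yum install -y "docker-ce-${DOCKER_VERSION}.x86_64"

# rpm -q prints the installed package's version, which should match the pin:
rpm -q docker-ce
```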

If you run your tests inside of a Docker container, the systemctl commands are not needed. They will not work anyway:

# sudo systemctl start docker
Failed to get D-Bus connection: Operation not permitted

Instead, with Docker out of Docker (DooD, covered in Appendix B), the container relies on the Docker service running on its Docker host. Docker will still work:

# docker search hello
NAME                                       DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
hello-world                                Hello World! (an example of minimal Dockeriz…   730                 [OK]
tutum/hello-world                          Image to test docker deployments. Has Apache…   56                                      [OK]
...

Step 2: Install Kubernetes Client (kubectl)

As root, run the following commands (inspired by https://github.com/mstrzele/docker-minikube/blob/master/Dockerfile and https://hub.docker.com/r/mstrzele/minikube/):

# Install kubectl:
KUBE_VERSION=v1.12.2
# or if latest stable: 
# KUBE_VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl \
  && chmod +x kubectl \
  && mv -f kubectl /usr/local/bin/ \
  && kubectl version

# Output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 54.6M  100 54.6M    0     0  43.9M      0  0:00:01  0:00:01 --:--:-- 43.9M
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Let us ignore the connection error for now.
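The error appears because kubectl falls back to localhost:8080 when no kubeconfig exists yet. A small guard (a sketch of my own, not from the original commands) only queries the server side once a kubeconfig is present:

```shell
# kubectl contacts localhost:8080 when no kubeconfig is configured, which is
# exactly the refused connection seen above. Until a kubeconfig exists, query
# only the client side with the --client flag:
if [ -f "${KUBECONFIG:-$HOME/.kube/config}" ]; then
  kubectl version
else
  kubectl version --client
fi
```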

Step 3: Install Minikube

# Install minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64  \
  && install minikube-linux-amd64 /usr/local/bin/minikube \
  && minikube version

# Output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40.3M  100 40.3M    0     0  34.6M      0  0:00:01  0:00:01 --:--:-- 34.6M
minikube version: v0.30.0

Step 4: Start Minikube

Finally, minikube starts on the server successfully:

# minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
        The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

        sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.kube
        sudo chgrp -R $USER $HOME/.kube

        sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
        sudo chown -R $USER $HOME/.minikube
        sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
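The manual mv/chown/chgrp steps from the warning can be skipped by setting the environment variable before starting minikube (a sketch; sudo -E preserves the variable for the root process):

```shell
# Let minikube chown the generated kubeconfig and .minikube files to the
# invoking user itself, instead of moving them by hand afterwards:
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none
```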

Step 5: Explore the Status

We can see that the version command still throws a connection problem message:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Nevertheless, kubectl seems to be working fine:

# kubectl cluster-info
Kubernetes master is running at https://159.69.221.89:8443
CoreDNS is running at https://159.69.221.89:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

With minikube, only a single all-in-one node is started, which also acts as the master:

# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   1h    v1.10.0
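In scripts that continue with kubectl right after minikube start, it can help to wait until the node actually reports Ready (a sketch; the grep pattern matches the STATUS column of the output above):

```shell
# Poll `kubectl get nodes` until the single minikube node reports Ready:
until kubectl get nodes 2>/dev/null | grep -q ' Ready '; do
  echo "waiting for node to become Ready..."
  sleep 2
done
```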

Step 6: Access the Kubernetes Dashboard

Last but not least, we will access the graphical user interface of minikube: the Kubernetes dashboard. We first search for the kubernetes-dashboard deployment:

# kubectl get deployment --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       first-deployment       1         1         1            1           1d
kube-system   coredns                1         1         1            1           2d
kube-system   kube-dns               1         1         1            1           2d
kube-system   kubernetes-dashboard   1         1         1            1           2d

We can see above that the kubernetes-dashboard is located in the namespace ‘kube-system’.

The kubernetes-dashboard deployment is already exposed on a Cluster IP, as we can see with the following kubectl get svc command:

# kubectl get svc -n=kube-system kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.101.34.253                 80/TCP    2d

However, the service cannot be reached from outside, since it was created with type=ClusterIP, as can be verified with the command kubectl describe svc -n=kube-system kubernetes-dashboard. Nevertheless, we can retrieve the dashboard locally by accessing the ClusterIP on port 80:

# curl 10.101.34.253:80
 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.ef45991b.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
    <![endif]--> <kd-login layout="column" layout-fill ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.58f1fb61.js"></script> </body> </html>
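Instead of copying the ClusterIP off the table, it can also be read with a jsonpath query (a sketch; this only works from the host itself, since the ClusterIP is not routable from outside):

```shell
# Extract the dashboard service's ClusterIP and curl it directly:
DASHBOARD_IP=$(kubectl get svc -n kube-system kubernetes-dashboard \
  -o jsonpath='{.spec.clusterIP}')
curl "http://${DASHBOARD_IP}:80/"
```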

If we create an SSH tunnel that maps the local port 30000 to the CLUSTER-IP and port 80, we can access the dashboard via a local browser:

(Screenshot: PuTTY SSH tunnel configuration for accessing the minikube dashboard)
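On Linux or macOS, the same tunnel can be built without PuTTY (a sketch; the IPs are taken from the cluster-info and kubectl get svc outputs above, and the root login is an assumption):

```shell
# Forward local port 30000 via the cloud host to the dashboard's ClusterIP
# and port 80, then browse to http://localhost:30000 on the local machine:
ssh -L 30000:10.101.34.253:80 root@159.69.221.89
```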

To be honest, an SSH tunnel is not the same as accessing such a dashboard directly. However, we have not yet touched the topic of creating services in Kubernetes. This will be a topic of the next blog post.

Summary

In this blog post, we have learned how to install minikube on a CentOS machine. We have tested the installation on a CentOS cloud machine on Hetzner. However, the installation should work on any native or cloud CentOS 7 system.

In the next blog post(s), we will learn how to create deployments, pods, and services. I.e., we will learn how to create applications and how to make them accessible from the Internet.

Appendix A: Error: ‘minikube start’ fails with ‘docker service is not active’

Symptom: when Docker is not installed and you try to start minikube, you get the error 'docker service is not active'.

How to reproduce: Run the following command on a system with no Docker installed:

minikube start --vm-driver=none

Error log:

[preflight] Some fatal errors occurred:
        [ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
        [ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

Solution: install Docker in a supported version. Currently, Kubernetes requires a Docker version that does not exceed v18.06. See Step 1 above on how to install a specific Docker version on CentOS. For other Linux distros, refer to this blog post by Gaurav Joshi.
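A small preflight guard (a sketch of my own, not part of minikube) avoids the error by checking the Docker service before invoking minikube:

```shell
# Only start minikube when the docker service is active; otherwise print the
# hint from the error message above instead of failing mid-preflight:
if systemctl is-active --quiet docker; then
  minikube start --vm-driver=none
else
  echo "docker service is not active, please run 'systemctl start docker.service'" >&2
fi
```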

Appendix B: Installing minikube within a Docker Container (unresolved)

Before installing minikube on a native cloud system, I had tried to install it within a Docker container, as shown below. With limited success, as you will see.

Note: the procedure will fail in Step 5, even if we run the Docker container in privileged mode. I have not found a solution to this yet, other than performing the whole installation on a native cloud machine. I have decided to publish the results nevertheless.

Step 1: Install Docker on the Docker Host

(host)# sudo echo nothing 2>/dev/null 1>/dev/null || alias sudo='$@'

(host)# sudo tee /etc/yum.repos.d/docker.repo <<-'EOF' 
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF

Now let us install version 18.06 on the host machine:

(host)# yum install -y docker-ce-18.06.1.ce-3.el7.x86_64 \
  && sudo systemctl start docker \
  && sudo systemctl status docker \
  && sudo systemctl enable docker
...

Step 2: Start a privileged Docker Container

mkdir minikube
cd minikube
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/app centos bash -c 'cd /app; bash'

Because we will run Docker out of Docker (see this nice blog post about that topic), we need to start the Docker container in privileged mode, and we also need to mount the Docker socket (docker.sock).

Note: Beware of the security issues that come with privileged mode and docker socket connections. Better perform such tests only on machines you do not need to rely on and never give any other person access to the docker daemon of that machine.

Step 3: Install Docker in Docker (or Docker out of Docker)

We now need to install Docker inside of the Docker container. We repeat step 1, but without the systemctl commands:

(container)# sudo echo nothing 2>/dev/null 1>/dev/null || alias sudo='$@'

(container)# sudo tee /etc/yum.repos.d/docker.repo <<-'EOF' 
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF

Now let us install version 18.06 inside the container:

(container)# yum install -y docker-ce-18.06.1.ce-3.el7.x86_64
...

Inside of the Docker container, the systemctl commands are not needed. They will not work, anyway:

# sudo systemctl start docker
Failed to get D-Bus connection: Operation not permitted

Instead, we have mapped the outer /var/run/docker.sock socket into the container. This allows us to access the host's Docker API from inside the Docker container; this setup is sometimes called Docker out of Docker. I recommend reading the discussion of this topic on Teracy's blog. Long story short: the docker commands will also work inside the Docker container, as you can see with this example:

# docker search hello
NAME                                       DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
hello-world                                Hello World! (an example of minimal Dockeriz…   730                 [OK]
tutum/hello-world                          Image to test docker deployments. Has Apache…   56                                      [OK]
...

Step 4: Install Kubernetes Client (kubectl) and minikube

Within the container, perform steps 2 and 3 of the main section. This will yield the same results as outside of the container.

# install kubernetes:
KUBE_VERSION=v1.12.2
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl \
  && chmod +x kubectl \
  && mv -f kubectl /usr/local/bin/ \
  && kubectl version

# Install minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64  \
  && install minikube-linux-amd64 /usr/local/bin/minikube \
  && minikube version

Step 5: Start minikube

Here is where the failure occurs: when trying to start minikube, we get the following error:

# yum install -y sudo
# minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
E1119 19:10:39.818633     170 start.go:254] Error updating cluster:  starting kubelet: running command:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
: exit status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]

Before starting minikube, we made sure that sudo is installed, since minikube makes use of sudo. Nevertheless, minikube start fails with exit status 1. The reason seems to be that the systemctl commands do not succeed, which can be verified within the container:

# sudo systemctl daemon-reload
Failed to get D-Bus connection: Operation not permitted

The problem with starting minikube inside the container seems to be that the minikube start command performs sudo systemctl commands, which are not supported within the container. In general, running systemctl commands in a Docker container is not recommended anyway.

A promising solution to the problem is kind (Kubernetes in Docker), which we have not tested yet. Running minikube on a native cloud CentOS system was perfectly okay for our purposes.
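kind runs each Kubernetes node as a Docker container and therefore targets exactly this container-based scenario. A minimal, untested sketch (the commands exist in the kind CLI, but we have not verified them in this setup; the cluster name is an assumption):

```shell
# Create a named kind cluster and list the clusters kind knows about:
CLUSTER_NAME="minikube-test"
kind create cluster --name "${CLUSTER_NAME}"
kind get clusters
```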
