CKA (15): Kubernetes Ingress

We use NGINX-based Kubernetes Ingress Controllers to make Kubernetes Services available to the outside world. In our example, three separate applications share the same IP address and port. We show how to retrieve the NGINX configuration from the Ingress Controller. Moreover, we show how to install a newer NGINX version provided by NGINX Inc.
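As an illustration only (not taken verbatim from the lab), an Ingress resource along the following lines lets three hypothetical hosts share one IP address and port, each forwarded to its own Service; the host names, Service names and the networking.k8s.io/v1 API version are assumptions that depend on your cluster:

```yaml
# Sketch: one Ingress, three hypothetical applications behind the same IP and port
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                # illustrative name
spec:
  ingressClassName: nginx
  rules:
  - host: app1.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1              # hypothetical Service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80
  - host: app3.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app3
            port:
              number: 80
```

The NGINX configuration that the controller generates from such rules can then be inspected inside the controller POD, e.g. by running kubectl exec against the controller POD and printing /etc/nginx/nginx.conf.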

CKA Labs (11) — Kubernetes Services

Kubernetes Services provide us with a means to load-balance between many instances of an application running in a data center. Moreover, they help make the service accessible from the Internet. Here, we will show how PODs, endpoints, container ports, and node ports are bound together by means of Kubernetes Services.
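A minimal sketch of that binding (names and port numbers are illustrative): the selector decides which PODs become endpoints, targetPort points at the containerPort inside those PODs, and nodePort opens the service on every node of the cluster:

```yaml
# Sketch: a NodePort Service tying labels, container port and node port together
apiVersion: v1
kind: Service
metadata:
  name: my-web-service        # illustrative name
spec:
  type: NodePort
  selector:
    app: my-web               # PODs carrying this label become the Service's endpoints
  ports:
  - port: 80                  # cluster-internal Service port
    targetPort: 8080          # containerPort inside the selected PODs
    nodePort: 30080           # port opened on every node (default range 30000-32767)
```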

CKA Labs (10) — Kubernetes DaemonSets

In this blog post, we have created a Kubernetes DaemonSet. We have observed that POD template changes are not propagated to existing PODs if we choose the OnDelete update strategy. However, if we choose the RollingUpdate strategy, POD renewal is triggered by any update of the DaemonSet’s POD template.
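As a hedged sketch (name, labels and image are placeholders), the switch between the two behaviours lives in the DaemonSet’s updateStrategy field:

```yaml
# Sketch: DaemonSet whose update behaviour is controlled by spec.updateStrategy
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent                 # illustrative name
spec:
  updateStrategy:
    type: RollingUpdate               # with OnDelete, existing PODs keep the old template until deleted manually
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: agent
        image: busybox                # placeholder image
        command: ["sh", "-c", "sleep 3600"]
```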

CKA Labs (8) — Kubernetes ReplicaSets

In this lab, we will have a closer look at Kubernetes ReplicaSets. First, we will learn how ReplicaSets control how many POD replicas are up and running at any time. We will learn how ReplicaSets and PODs are connected: via labels. We will show that manually creating PODs with matching labels can have weird cuckoo’s-egg effects. Moreover, a POD can be detached from a ReplicaSet without stopping it by manipulating its labels.
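A minimal, illustrative ReplicaSet showing that label coupling (all names are hypothetical):

```yaml
# Sketch: any POD matching the selector labels counts towards the replica count
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web                  # a manually created POD with app=web is adopted: the "cuckoo's egg"
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx            # placeholder image
```

Relabelling a running POD, e.g. with kubectl label pod <pod-name> app=detached --overwrite, removes it from the selector’s scope without stopping it, and the ReplicaSet starts a replacement.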

CKA Labs (7) — Kubernetes Jobs and CronJobs

In this Kubernetes lab, we will explore Kubernetes Jobs and CronJobs. Unlike Kubernetes Deployments, Kubernetes Jobs are designed to quit after they have accomplished their task (successfully or not). Kubernetes CronJobs are Jobs that are repeated according to a schedule pattern Linux administrators know from crontab.
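As an illustration, a CronJob like the following (name, image and schedule are arbitrary; older clusters use batch/v1beta1 instead of batch/v1) launches a short-lived Job every five minutes:

```yaml
# Sketch: CronJob whose schedule field uses the familiar crontab format
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron                  # illustrative name
spec:
  schedule: "*/5 * * * *"           # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Jobs must not use the default Always policy
          containers:
          - name: hello
            image: busybox          # placeholder image
            command: ["sh", "-c", "date; echo hello"]
```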

CKA Labs (3) — Deploy a simple Kubernetes Application

In part 3 of the Certified Kubernetes Administrator Labs Challenge, we will deploy a simple application by file and by command. Then we will expose the service and access it from within the Kubernetes Cluster. After that, we will explore how Kubernetes Deployments help us maintain the service by automatically restarting failed PODs through ReplicaSets. Last, but not least, we will discuss how to access the service from outside the Kubernetes cluster.
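A minimal sketch of the “by file” variant (name and image are placeholders); the Deployment’s embedded ReplicaSet is what restarts failed PODs:

```yaml
# Sketch: simple Deployment; the ReplicaSet it creates keeps two PODs running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-app
  template:
    metadata:
      labels:
        app: simple-app
    spec:
      containers:
      - name: web
        image: nginx          # placeholder image
        ports:
        - containerPort: 80
```

The “by command” route would be along the lines of kubectl create deployment simple-app --image=nginx followed by kubectl expose deployment simple-app --port=80, with simple-app being an arbitrary name.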

Metricbeat on Kubernetes – Kubernetes Series (11)

Three ways of installing Metricbeat (a performance monitoring solution) on Kubernetes are compared: native vs. Helm with set options vs. Helm with values options. Metricbeat helps us monitor performance indicators like CPU, memory, disk and many more on the Kubernetes nodes. We will show that (and why) installing Metricbeat via Helm based on a values file is the quickest way of installing Metricbeat. You just need to copy the Metricbeat chart’s values file, adapt it to your needs, and run a helm command in order to roll out the Metricbeat agent to all nodes of your Kubernetes cluster.
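As a rough sketch only: the part of the copied values file you typically adapt is the embedded metricbeat.yml configuration. The surrounding chart keys differ between chart versions, so the excerpt below shows just that embedded configuration, with assumed module settings and an assumed Elasticsearch endpoint:

```yaml
# Sketch: metricbeat.yml excerpt as it might appear inside the chart's values file
metricbeat.modules:
- module: system
  metricsets: ["cpu", "memory", "filesystem"]   # the performance indicators mentioned above
  period: 10s
output.elasticsearch:
  hosts: ["elasticsearch:9200"]                 # hypothetical Elasticsearch endpoint
```

The adapted file is then handed to helm install via the -f flag, which rolls Metricbeat out as a DaemonSet across the cluster nodes.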