This article shows how to use NginX-based Kubernetes Ingress Controllers to make Kubernetes services available to the outside world. We will demonstrate how three applications can share the same IP address and port, while ingress rules decide which URL pattern is routed to which application. We will also show how to retrieve the NginX configuration from the POD.
In the Appendix, we will repeat the same task, but using the installation instructions by NginX INC, which install a later version of NginX (v1.17.4, as opposed to openresty/1.15.8.2 for the official Kubernetes ingress).
Deployment Options
Kubernetes officially supports NginX- and GCE-based Ingress Controllers, in the sense that the corresponding projects are maintained within the kubernetes GitHub organization.
The deployment instructions for the ingress controller options are found in the following resources:
- NginX ingress officially supported by kubernetes.io:
- kubernetes.io (only for minikube, though)
- kubernetes.github.io (documentation) and kubernetes/ingress-nginx (repo; currently installs openresty/1.15.8.2)
- NginX INC ingress offered by the NginX corporation:
- nginxinc/kubernetes-ingress (repo; currently installs v1.17.4)
- GCE: Google Cloud docs
Many more options are listed on this kubernetes.io page.
Step 0: Preparations
Step 0.1: Access the Kubernetes Playground
As always, we start by accessing the Katacoda Kubernetes Playground.
Step 0.2 (optional): Configure auto-completion
The Katacoda Kubernetes Playground has the alias and auto-completion defined already. Only if you are running your tests in another environment do we recommend issuing the following two commands:
alias k=kubectl
source <(kubectl completion bash)
However, even in the case of the Katacoda Kubernetes Playground, auto-completion does not work for the alias k yet. Therefore, we need to type the following command:
source <(kubectl completion bash | sed 's/kubectl/k/g')
Once this is done, k g<tab> will be auto-completed to k get, and k get pod <tab> will reveal the name(s) of the available POD(s).
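To make the alias and the completion survive a new shell session, you can also append both commands to your bash profile. A small convenience sketch (assuming your shell sources the default ~/.bashrc):

# persist the alias and the alias-aware completion
echo "alias k=kubectl" >> ~/.bashrc
echo "source <(kubectl completion bash | sed 's/kubectl/k/g')" >> ~/.bashrc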
Task 1: Install NginX Controller
Here, we will install the NginX controller by following the corresponding kubernetes GitHub repo kubernetes/ingress-nginx, and we will test the installation by following its installation guide.
Step 1.1: Install Mandatory Elements
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
The corresponding POD of the deployment is up and running on node01:
kubectl get all -n ingress-nginx -o wide
NAME                                          READY   STATUS    RESTARTS   AGE    IP          NODE     NOMINATED NODE   READINESS GATES
pod/nginx-ingress-controller-8544567f-t7b9f   1/1     Running   0          100s   10.44.0.1   node01

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                 IMAGES                                                                   SELECTOR
deployment.apps/nginx-ingress-controller   1/1     1            1           100s   nginx-ingress-controller   quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1   app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx

NAME                                                 DESIRED   CURRENT   READY   AGE    CONTAINERS                 IMAGES                                                                   SELECTOR
replicaset.apps/nginx-ingress-controller-8544567f    1         1         1       100s   nginx-ingress-controller   quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1   app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx,pod-template-hash=8544567f
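Instead of polling with kubectl get, you can also block until the controller POD reports ready. A convenience sketch (the label selector is taken from the SELECTOR column above):

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/name=ingress-nginx \
  --timeout=120s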
Step 1.2: Install an Ingress Service
Then we apply the bare-metal installation, which will create a NodePort service for us:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
service/ingress-nginx created
Let us have a look at the NodePort service we have just created:
kubectl get svc -n ingress-nginx -o wide
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
ingress-nginx   NodePort   10.109.187.65   <none>        80:31385/TCP,443:32591/TCP   23s   app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
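Since no ingress rule exists yet, a request to the NodePort should be answered by the controller's default backend. A quick sanity check (the HTTP node port 31385 is the one assigned in our session and will differ in yours):

curl node01:31385
# expected: a 404 answer from the controller's default backend, e.g. "default backend - 404"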
Task 2: Create and Reach Apps via NginX Ingress
In this task, we will create three deployments and services, and we will show how the ingress controller allows us to access all of those applications on the same IP address and port, via different resource URLs.
Step 2.1: Create three Deployments and Services
for i in 1 2 3; do
cat <<EOF | kubectl apply -f -
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp${i}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp${i}
    spec:
      containers:
      - name: webapp${i}
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp${i}-svc
  labels:
    app: webapp${i}
spec:
  ports:
  - port: 80
  selector:
    app: webapp${i}
EOF
done

# output:
deployment.extensions/webapp1 created
service/webapp1-svc created
deployment.extensions/webapp2 created
service/webapp2-svc created
deployment.extensions/webapp3 created
service/webapp3-svc created
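We can quickly verify that all three deployments and services have been created (a simple check; AGE and READY values depend on your timing):

kubectl get deployments,services | grep webapp
# should list webapp1-3 and webapp1-svc, webapp2-svc, webapp3-svc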
Step 2.2: Create Ingress Rules
We now create ingress rules as follows:
cat <<EOF | kubectl apply -f -
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /webapp1
        backend:
          serviceName: webapp1-svc
          servicePort: 80
      - path: /webapp2
        backend:
          serviceName: webapp2-svc
          servicePort: 80
      - backend:
          serviceName: webapp3-svc
          servicePort: 80
EOF
In the above example, we reach webapp1 and webapp2 via the resources /webapp1 and /webapp2, respectively. All other resources will be routed to webapp3.
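Before testing with curl, we can check that the ingress resource has been admitted (the exact output columns depend on your kubectl version):

kubectl get ingress webapp-ingress
# the HOSTS column should show my.kubernetes.example and the PORTS column should show 80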
We can now reach the three applications with the following curl commands:
CLUSTER_IP=$(kubectl get svc -n ingress-nginx | grep ingress-nginx | awk '{print $3}')
curl -H "Host: my.kubernetes.example" $CLUSTER_IP/webapp1
# output: <h1>This request was processed by host: webapp1-6d7df9f8d-4xtdg</h1>
curl -H "Host: my.kubernetes.example" $CLUSTER_IP/webapp2
# output: <h1>This request was processed by host: webapp2-6d48b8ff76-qjr56</h1>
curl -H "Host: my.kubernetes.example" $CLUSTER_IP
# output: <h1>This request was processed by host: webapp3-7df59dc67b-dr76f</h1>
Note: here, we are using the service cluster IP address of the NginX ingress controller to reach the application. The cluster IP is a private IP address and is not reachable from the Internet. However, we have installed a NodePort service above, which also allows us to reach the services on the public IP address of the Kubernetes node, on the ports shown in the service:
NODE_IP=node01
NODE_PORT=$(kubectl get svc -n ingress-nginx | grep ingress-nginx | awk '{print $5}' | awk -F '[:\/]' '{print $2}')
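With these two variables, the same curl tests as above also work from outside the cluster. A sketch (the node port is randomly assigned, so your port value will differ):

curl -H "Host: my.kubernetes.example" $NODE_IP:$NODE_PORT/webapp1
curl -H "Host: my.kubernetes.example" $NODE_IP:$NODE_PORT/webapp2
curl -H "Host: my.kubernetes.example" $NODE_IP:$NODE_PORT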
Task 3: Retrieve NginX Configuration
For troubleshooting, it makes sense to know how to retrieve the current NginX configuration. This is how it is done:
POD=$(k get pod -n ingress-nginx | grep nginx | awk '{print $1}')
kubectl exec -it $POD -n ingress-nginx -- cat /etc/nginx/nginx.conf
We retrieve an output like the following (shown here after the ingress rules of Task 2 have been applied; without any ingress rule, the server block for my.kubernetes.example is missing):
kubectl exec -it $POD -n ingress-nginx -- cat /etc/nginx/nginx.conf

# Configuration checksum: 16298321219370231097

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 4;

worker_rlimit_nofile 261120;

worker_shutdown_timeout 240s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    lua_package_path "/etc/nginx/lua/?.lua;;";

    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;

    init_by_lua_block {
        collectgarbage("collect")

        local lua_resty_waf = require("resty.waf")
        lua_resty_waf.init()

        -- init modules
        local ok, res

        ok, res = pcall(require, "lua_ingress")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            lua_ingress = res
            lua_ingress.set_config({
                use_forwarded_headers = false,
                is_ssl_passthrough_enabled = false,
                http_redirect_code = 308,
                listen_ports = { ssl_proxy = "442", https = "443" },
                hsts = true,
                hsts_max_age = 15724800,
                hsts_include_subdomains = true,
                hsts_preload = false,
            })
        end

        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
        end

        ok, res = pcall(require, "balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            balancer = res
        end

        ok, res = pcall(require, "monitor")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            monitor = res
        end

        ok, res = pcall(require, "certificate")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            certificate = res
        end

        ok, res = pcall(require, "plugins")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            plugins = res
        end

        -- load all plugins that'll be used here
        plugins.init({})
    }

    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()
        monitor.init_worker()
        plugins.run()
    }

    geoip_country         /etc/nginx/geoip/GeoIP.dat;
    geoip_city            /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org             /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_body_temp_path       /tmp/client-body;
    fastcgi_temp_path           /tmp/fastcgi-temp;
    proxy_temp_path             /tmp/proxy-temp;
    ajp_temp_path               /tmp/ajp-temp;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;
    http2_max_requests              1000;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      128;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    limit_req_status                503;
    limit_conn_status               503;

    include /etc/nginx/mime.types;
    default_type text/html;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 10.96.0.10 valid=30s ipv6=off;

    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
        default upgrade;

        # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
        ''      '';
    }

    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default   $http_x_request_id;
        ""        $request_id;
    }

    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
        default "$";
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1.2;

    ssl_early_data off;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    # PEM sha: 0e23a7fe97dcedda0126205d2356f5f79347cd66
    ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

    proxy_ssl_session_reuse on;

    upstream upstream_balancer {
        ### Attention!!!
        #
        # We no longer create "upstream" section for every backend.
        # Backends are handled dynamically using Lua. If you would like to debug
        # and see what backends ingress-nginx has in its memory you can
        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
        # inspect current backends.
        #
        ###

        server 0.0.0.1; # placeholder

        balancer_by_lua_block {
            balancer.balance()
        }

        keepalive 32;

        keepalive_timeout  60s;
        keepalive_requests 100;
    }

    # Cache for internal auth checks
    proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;

    # Global filters

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511 ;
        listen 443 default_server reuseport backlog=511 ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location / {
            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
            set $service_port   "";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = false,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {
                plugins.run()
            }
            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
                plugins.run()
            }

            access_log off;

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "upstream-default-backend";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme     $scheme;
            set $pass_server_port       $server_port;
            set $best_http_host         $http_host;
            set $pass_port              $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size        1m;

            proxy_set_header Host       $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $remote_addr;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";

            # Custom headers to proxied server

            proxy_connect_timeout       5s;
            proxy_send_timeout          60s;
            proxy_read_timeout          60s;

            proxy_buffering             off;
            proxy_buffer_size           4k;
            proxy_buffers               4 4k;

            proxy_max_temp_file_size    1024m;

            proxy_request_buffering     on;
            proxy_http_version          1.1;

            proxy_cookie_domain         off;
            proxy_cookie_path           off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream         error timeout;
            proxy_next_upstream_timeout 0;
            proxy_next_upstream_tries   3;

            proxy_pass http://upstream_balancer;

            proxy_redirect off;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            deny all;

            access_log off;
            stub_status on;
        }
    }
    ## end server _

    ## start server my.kubernetes.example
    server {
        server_name my.kubernetes.example ;

        listen 80 ;
        listen 443 ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location /webapp2 {
            set $namespace      "default";
            set $ingress_name   "webapp-ingress";
            set $service_name   "webapp2-svc";
            set $service_port   "80";
            set $location_path  "/webapp2";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {
                plugins.run()
            }
            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "default-webapp2-svc-80";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme     $scheme;
            set $pass_server_port       $server_port;
            set $best_http_host         $http_host;
            set $pass_port              $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size        1m;

            proxy_set_header Host       $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $remote_addr;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";

            # Custom headers to proxied server

            proxy_connect_timeout       5s;
            proxy_send_timeout          60s;
            proxy_read_timeout          60s;

            proxy_buffering             off;
            proxy_buffer_size           4k;
            proxy_buffers               4 4k;

            proxy_max_temp_file_size    1024m;

            proxy_request_buffering     on;
            proxy_http_version          1.1;

            proxy_cookie_domain         off;
            proxy_cookie_path           off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream         error timeout;
            proxy_next_upstream_timeout 0;
            proxy_next_upstream_tries   3;

            proxy_pass http://upstream_balancer;

            proxy_redirect off;
        }

        location /webapp1 {
            set $namespace      "default";
            set $ingress_name   "webapp-ingress";
            set $service_name   "webapp1-svc";
            set $service_port   "80";
            set $location_path  "/webapp1";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {
                plugins.run()
            }
            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "default-webapp1-svc-80";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme     $scheme;
            set $pass_server_port       $server_port;
            set $best_http_host         $http_host;
            set $pass_port              $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size        1m;

            proxy_set_header Host       $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $remote_addr;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";

            # Custom headers to proxied server

            proxy_connect_timeout       5s;
            proxy_send_timeout          60s;
            proxy_read_timeout          60s;

            proxy_buffering             off;
            proxy_buffer_size           4k;
            proxy_buffers               4 4k;

            proxy_max_temp_file_size    1024m;

            proxy_request_buffering     on;
            proxy_http_version          1.1;

            proxy_cookie_domain         off;
            proxy_cookie_path           off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream         error timeout;
            proxy_next_upstream_timeout 0;
            proxy_next_upstream_tries   3;

            proxy_pass http://upstream_balancer;

            proxy_redirect off;
        }

        location / {
            set $namespace      "default";
            set $ingress_name   "webapp-ingress";
            set $service_name   "webapp3-svc";
            set $service_port   "80";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            header_filter_by_lua_block {
                plugins.run()
            }
            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name    "default-webapp3-svc-80";
            set $proxy_host             $proxy_upstream_name;
            set $pass_access_scheme     $scheme;
            set $pass_server_port       $server_port;
            set $best_http_host         $http_host;
            set $pass_port              $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size        1m;

            proxy_set_header Host       $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $remote_addr;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";

            # Custom headers to proxied server

            proxy_connect_timeout       5s;
            proxy_send_timeout          60s;
            proxy_read_timeout          60s;

            proxy_buffering             off;
            proxy_buffer_size           4k;
            proxy_buffers               4 4k;

            proxy_max_temp_file_size    1024m;

            proxy_request_buffering     on;
            proxy_http_version          1.1;

            proxy_cookie_domain         off;
            proxy_cookie_path           off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream         error timeout;
            proxy_next_upstream_timeout 0;
            proxy_next_upstream_tries   3;

            proxy_pass http://upstream_balancer;

            proxy_redirect off;
        }
    }
    ## end server my.kubernetes.example

    # backend for when default-backend-service is not configured or it does not have endpoints
    server {
        listen 8181 default_server reuseport backlog=511;

        set $proxy_upstream_name "internal";

        access_log off;

        location / {
            return 404;
        }
    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        listen 127.0.0.1:10246;
        set $proxy_upstream_name "internal";

        keepalive_timeout 0;
        gzip off;

        access_log off;

        location /healthz {
            return 200;
        }

        location /is-dynamic-lb-initialized {
            content_by_lua_block {
                local configuration = require("configuration")
                local backend_data = configuration.get_backends_data()
                if not backend_data then
                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                    return
                end

                ngx.say("OK")
                ngx.exit(ngx.HTTP_OK)
            }
        }

        location /nginx_status {
            stub_status on;
        }

        location /configuration {
            client_max_body_size    21m;
            client_body_buffer_size 21m;
            proxy_buffering         off;

            content_by_lua_block {
                configuration.call()
            }
        }

        location / {
            content_by_lua_block {
                ngx.exit(ngx.HTTP_NOT_FOUND)
            }
        }
    }
}

stream {
    lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
    lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";

    lua_shared_dict tcp_udp_configuration_data 5M;

    init_by_lua_block {
        collectgarbage("collect")

        -- init modules
        local ok, res

        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
        end

        ok, res = pcall(require, "tcp_udp_configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            tcp_udp_configuration = res
        end

        ok, res = pcall(require, "tcp_udp_balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            tcp_udp_balancer = res
        end
    }

    init_worker_by_lua_block {
        tcp_udp_balancer.init_worker()
    }

    lua_add_variable $proxy_upstream_name;

    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream ;

    error_log  /var/log/nginx/error.log;

    upstream upstream_balancer {
        server 0.0.0.1:1234; # placeholder

        balancer_by_lua_block {
            tcp_udp_balancer.balance()
        }
    }

    server {
        listen 127.0.0.1:10247;

        access_log off;

        content_by_lua_block {
            tcp_udp_configuration.call()
        }
    }

    # TCP services

    # UDP services
}
Note: this was the output for the following NginX version:
kubectl exec -it $POD -n ingress-nginx -- nginx -v
nginx version: openresty/1.15.8.2

For later versions of NginX, the file with the dynamic ingress configuration is separated from the nginx.conf file into the /etc/nginx/conf.d/ folder. In this case, the configuration needs to be retrieved with the following command:
kubectl exec -it $POD -n ingress-nginx -- find /etc/nginx/conf.d/ -name '*.conf' -exec cat {} \;
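As the comment in the upstream_balancer section of the configuration above points out, the dynamically configured backends are no longer visible in nginx.conf at all. If you need to inspect them, the ingress-nginx project provides a kubectl plugin for this purpose (installation via krew is assumed; see kubernetes.github.io/ingress-nginx/kubectl-plugin):

kubectl ingress-nginx backends -n ingress-nginx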
Appendix: NginX Ingress Controller via Nginx Inc
Task A.1: Install an NginX Ingress Controller
In Tasks A.1 to A.3 of this appendix, we will install and review an NginX-based ingress controller using the scripts provided by the NginX corporation, published on nginxinc/kubernetes-ingress. This will install a more recent version (currently nginx/1.17.4) than the one found on kubernetes/ingress-nginx (currently openresty/1.15.8.2).
If you prefer to use the scripts provided by the kubernetes GitHub organization, please have a look at Tasks 1 to 3 above.
Step A.1.0 (optional): Quick&Tidy: all steps on a single cut&paste panel
If you want to skip the steps of Task A.1 (e.g. because you are repeating the task after the Katacoda session has expired), you can just cut&paste the following code to the console:
#
# do not apply this, if you want to follow the A.1.* step-by-step instructions below:
#
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/ns-and-sa.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/nginx-config.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/default-server-secret.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/daemon-set/nginx-ingress.yaml
Step A.1.1: Create Namespace and Service Account
We will install the namespace and the service account by applying the kubernetes-ingress scripts of the GitHub user 'nginxinc':
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/ns-and-sa.yaml
namespace/nginx-ingress created
serviceaccount/nginx-ingress created
Step A.1.2: Create NginX ConfigMap
A config map is required by Step A.1.4 below. Let us create it:
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/nginx-config.yaml
configmap/nginx-config created
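The nginx-config ConfigMap is also the place where global NginX settings can be customized later on. A sketch (proxy-connect-timeout is one of the ConfigMap keys documented by nginxinc; the value 30s is just an example):

kubectl patch configmap nginx-config -n nginx-ingress \
  --type merge -p '{"data": {"proxy-connect-timeout": "30s"}}'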
Step A.1.3: Create TLS Secret
Even if you do not plan to use TLS, a TLS secret is needed, since the next command depends on it:
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/common/default-server-secret.yaml
secret/default-server-secret created
Note that a pre-defined certificate is installed this way. Anyone in the data path can decrypt the messages using the publicly available TLS "secrets", so this is suitable for test purposes only.
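For anything beyond testing, you should replace this secret with your own certificate and key. A minimal sketch using a self-signed certificate (the file names are ours; a real deployment would use a CA-signed certificate):

# generate a self-signed certificate for the test host name
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nginx-ingress.key -out nginx-ingress.crt \
  -subj "/CN=my.kubernetes.example"
# replace the pre-defined secret with our own one
kubectl create secret tls default-server-secret -n nginx-ingress \
  --cert=nginx-ingress.crt --key=nginx-ingress.key \
  --dry-run -o yaml | kubectl apply -f -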
Step A.1.4: Create NginX Controller as DaemonSet
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/daemon-set/nginx-ingress.yaml
daemonset.apps/nginx-ingress created
Note: other deployment options are available on GitHub: as a Kubernetes Deployment, or via helm, which offers both options, Kubernetes Deployment as well as Kubernetes DaemonSet.
Step A.1.5: Verify that the Nginx Controller POD is up and running
kubectl get pods -n nginx-ingress
NAME                  READY   STATUS    RESTARTS   AGE
nginx-ingress-m7lnk   1/1     Running   0          21s
Task A.2: Create and Reach Apps via NginX Ingress
In this task, we will create three deployments and services, and we will show how the ingress controller allows us to access all of those applications on the same IP address and port, via different resource URLs.
Step A.2.1: Create three Deployments and Services
for i in 1 2 3; do
cat <<EOF | kubectl apply -f -
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp${i}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp${i}
    spec:
      containers:
      - name: webapp${i}
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp${i}-svc
  labels:
    app: webapp${i}
spec:
  ports:
  - port: 80
  selector:
    app: webapp${i}
EOF
done

# output:
deployment.extensions/webapp1 created
service/webapp1-svc created
deployment.extensions/webapp2 created
service/webapp2-svc created
deployment.extensions/webapp3 created
service/webapp3-svc created
Step A.2.2: Create Ingress Rules
We now create ingress rules as follows:
cat <<EOF | kubectl apply -f -
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /webapp1
        backend:
          serviceName: webapp1-svc
          servicePort: 80
      - path: /webapp2
        backend:
          serviceName: webapp2-svc
          servicePort: 80
      - backend:
          serviceName: webapp3-svc
          servicePort: 80
EOF
In the above example, we reach webapp1 and webapp2 via the resources /webapp1 and /webapp2, respectively. All other resources will be routed to webapp3.
We can now reach the three applications with the following curl commands:
curl -H "Host: my.kubernetes.example" node01/webapp1 # output: <h1>This request was processed by host: webapp1-6d7df9f8d-7f25j</h1> curl -H "Host: my.kubernetes.example" node01/webapp2 # output: <h1>This request was processed by host: webapp2-6d48b8ff76-qjr56</h1> curl -H "Host: my.kubernetes.example" node01 # output: <h1>This request was processed by host: webapp3-7df59dc67b-dr76f</h1>
Note: we can access the services directly via the agent's IP address or name (node01) here, because the specification of the DaemonSet has chosen to specify a hostPort:
curl -s https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/daemon-set/nginx-ingress.yaml | grep -A 6 ports:
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
Note also that the best practices document discourages the usage of hostPorts. It is better to create a service instead, as we have done above when following the instructions on kubernetes/ingress-nginx: there, we accessed the services via the NginX controller's cluster IP or a random NodePort, which is much less likely to collide with other applications or services running on the same node.
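If you want to follow that best practice with the NginX INC controller as well, the nginxinc/kubernetes-ingress repo also ships a NodePort service manifest (the path is taken from the repo layout at the time of writing and may change):

kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/deployments/service/nodeport.yaml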
Task A.3: Retrieve NginX Configuration
For troubleshooting, it makes sense to know how to retrieve the current NginX configuration. This is how it is done:
POD=$(k get pod -n nginx-ingress | grep nginx | awk '{print $1}')
kubectl exec -it $POD -n nginx-ingress -- cat /etc/nginx/nginx.conf
We retrieve the following output if no ingress rule is present:
user  nginx;
worker_processes  auto;

daemon off;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout 65s;
    keepalive_requests 100;

    #gzip  on;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 256;

    variables_hash_bucket_size 256;
    variables_hash_max_size 1024;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    map $http_upgrade $vs_connection_header {
        default upgrade;
        ''      $default_connection_header;
    }

    server {
        # required to support the Websocket protocol in VirtualServer/VirtualServerRoutes
        set $default_connection_header "";

        listen 80 default_server;
        listen 443 ssl default_server;

        ssl_certificate /etc/nginx/secrets/default;
        ssl_certificate_key /etc/nginx/secrets/default;

        server_name _;
        server_tokens "on";
        access_log off;

        location / {
            return 404;
        }
    }

    # stub_status
    server {
        listen 8080;

        allow 127.0.0.1;
        deny all;

        location /stub_status {
            stub_status;
        }
    }

    include /etc/nginx/config-version.conf;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen unix:/var/run/nginx-502-server.sock;
        access_log off;

        location / {
            return 502;
        }
    }
}

stream {
    log_format  stream-main  '$remote_addr [$time_local] '
                      '$protocol $status $bytes_sent $bytes_received '
                      '$session_time';

    access_log  /var/log/nginx/stream-access.log  stream-main;
}
The output does not change after an ingress rule is created. Instead, a new configuration file is added to the /etc/nginx/conf.d folder:
kubectl exec -it $POD -n nginx-ingress -- find /etc/nginx/conf.d/ -name '*.conf' -exec cat {} \;
The output is as follows (some empty lines removed):
# configuration for default/webapp-ingress

upstream default-webapp-ingress-my.kubernetes.example-webapp1-svc-80 {
    zone default-webapp-ingress-my.kubernetes.example-webapp1-svc-80 256k;
    random two least_conn;
    server 127.0.0.1:8181 max_fails=1 fail_timeout=10s max_conns=0;
}

upstream default-webapp-ingress-my.kubernetes.example-webapp2-svc-80 {
    zone default-webapp-ingress-my.kubernetes.example-webapp2-svc-80 256k;
    random two least_conn;
    server 127.0.0.1:8181 max_fails=1 fail_timeout=10s max_conns=0;
}

upstream default-webapp-ingress-my.kubernetes.example-webapp3-svc-80 {
    zone default-webapp-ingress-my.kubernetes.example-webapp3-svc-80 256k;
    random two least_conn;
    server 127.0.0.1:8181 max_fails=1 fail_timeout=10s max_conns=0;
}

server {
    listen 80;

    server_tokens on;

    server_name my.kubernetes.example;

    location /webapp1 {
        proxy_http_version 1.1;

        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-webapp-ingress-my.kubernetes.example-webapp1-svc-80;
    }

    location /webapp2 {
        proxy_http_version 1.1;

        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-webapp-ingress-my.kubernetes.example-webapp2-svc-80;
    }

    location / {
        proxy_http_version 1.1;

        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-webapp-ingress-my.kubernetes.example-webapp3-svc-80;
    }
}
This is more readable than the all-in-one configuration file we observed in the case of the official kubernetes GIT repo.
Summary
We have installed an NginX ingress controller following the instructions on kubernetes.github.io. Currently, this installs an openresty/1.15.8.2 version. We have shown how to access three independent services on the same IP address and port. Moreover, we have shown how to retrieve the NginX configuration from the controller POD.
NginX INC offers an alternative NginX-based ingress controller via nginxinc/kubernetes-ingress (currently version nginx/1.17.4). This is demonstrated in the Appendix.