Nginx Ingress Controller - Failed Calling Webhook - kubernetes

I set up a k8s cluster using kubeadm (v1.18) on an Ubuntu virtual machine.
Now I need to add an Ingress Controller. I decided on nginx (but I'm open to other solutions). I installed it according to the docs, section "bare-metal":
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.31.1/deploy/static/provider/baremetal/deploy.yaml
The installation seems fine to me:
kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-b8smg 0/1 Completed 0 8m21s
pod/ingress-nginx-admission-patch-6nbjb 0/1 Completed 1 8m21s
pod/ingress-nginx-controller-78f6c57f64-m89n8 1/1 Running 0 8m31s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.107.152.204 <none> 80:32367/TCP,443:31480/TCP 8m31s
service/ingress-nginx-controller-admission ClusterIP 10.110.191.169 <none> 443/TCP 8m31s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 8m31s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-78f6c57f64 1 1 1 8m31s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 2s 8m31s
job.batch/ingress-nginx-admission-patch 1/1 3s 8m31s
However, when trying to apply a custom Ingress, I get the following error:
Error from server (InternalError): error when creating "yaml/xxx/xxx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: Temporary Redirect
Any idea what could be wrong?
I suspected DNS, but other NodePort services are working as expected and DNS works within the cluster.
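One way to narrow it down is to call the admission endpoint directly from a throwaway pod and see what it answers with. This is only a sketch that is not part of the original setup: curlimages/curl is just a convenient small image, and -k skips TLS verification since the webhook uses a self-signed certificate:
kubectl run webhook-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -kv https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses
If this also returns a redirect, something in front of the controller (a proxy, routing) is interfering; if it does not, the problem is between the API server and the service.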
The only thing I can see is that I don't have a default-http-backend which is mentioned in the docs here. However, this seems normal in my case, according to this thread.
Last but not least, I also tried the installation with manifests (after removing the ingress-nginx namespace from the previous installation) and the installation via the Helm chart. The result is the same.
I'm pretty much a beginner on k8s and this is my playground-cluster. So I'm open to alternative solutions as well, as long as I don't need to set up the whole cluster from scratch.
Update:
With "applying custom Ingress", I mean:
kubectl apply -f <myIngress.yaml>
Content of myIngress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /someroute/fittingmyneeds
        pathType: Prefix
        backend:
          serviceName: some-service
          servicePort: 5000

Another option you have is to remove the Validating Webhook entirely:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
I found I had to do that on another issue, but the workaround/solution works here as well.
This isn't the best answer; the best answer is to figure out why this doesn't work. But at some point, you live with workarounds.
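If you go this route, it is worth confirming the configuration is actually gone before re-applying your Ingress. A quick check (the resource name assumes the stock manifests):
kubectl get validatingwebhookconfigurations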
I'm installing on Docker for Mac, so I used the cloud rather than baremetal version:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml

In my case I'd mixed the installations up.
I resolved the issue by executing the following steps:
$ kubectl get validatingwebhookconfigurations
I iterated through the list of configurations returned by the above command and deleted each one using
$ kubectl delete validatingwebhookconfigurations [configuration-name]
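If you prefer a one-liner, something along these lines should do the same (just a sketch - it deletes every configuration whose name contains ingress-nginx, so review the list from the previous command first):
kubectl get validatingwebhookconfigurations -o name | grep ingress-nginx | xargs kubectl delete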

In my case I didn't need to delete the ValidatingWebhookConfiguration. The issue was that I was using a private cluster on GCP, version 1.17.14-gke.1600. If I got it correctly, on a default Kubernetes installation the validating-webhook API (which of course runs on the master node) is exposed on port 443. But GCP changed the port to 8443 for security reasons, because in order to bind port 443 the service would need root access to the node; since they didn't want that, they changed it to 8443. Now, since a private cluster only has ports 80/443 allowed for ingress on the nodes (that is, all the nodes will only accept requests to these ports), when Kubernetes tries to validate your Ingress against validatingwebhook-address:8443 it fails - it would not fail if it ran on 443. This thread contains more detailed information.
So the current workaround for that, as recommended by Google itself (but very poorly documented), is adding a firewall rule on GCP that allows inbound (Ingress) TCP traffic on port 8443 from the control plane to your nodes, so that the API server on the master can reach the validating-webhook API that the controller exposes on that very port.
As to how to create the rule, this is how I did it:
Went to Firewall Rules and added a new one.
At the field Network I selected the VPC my cluster is in.
Direction of traffic I set as Ingress
Action on match to Allow
Targets to Specified target tags
The Target tags can be found on the master node details in a property called Network tags. To find it, I opened a new window, went to my cluster node pools, found the master node pool. Then entered one of the nodes to look for the Virtual Machine details. There I found Network Tags. Copied its value and went back to the Firewall Rule form.
Pasted the copied network tag to the tag field
At Protocols and ports, checked the Specified protocols and ports
Then checked TCP and placed 8443
Saved the rule and applied the manifest again.
NOTE: Most threads out there will say it's the port 9443. It may work. But I first attempted 8443 since it was reported to work on this thread. It worked for me so I didn't even try 9443.
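For reference, the same rule can be created from the command line. This is only a sketch of the console steps above, with placeholders (YOUR_VPC, MASTER_CIDR, NODE_NETWORK_TAG) that you need to fill in for your cluster:
gcloud compute firewall-rules create allow-master-to-webhook \
  --network=YOUR_VPC \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=MASTER_CIDR \
  --target-tags=NODE_NETWORK_TAG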

Might be because of a previous nginx-ingress-controller configuration.
You can try to run the following command -
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

I've solved this issue. The problem is that you are on Kubernetes 1.18, but the ValidatingWebhookConfiguration in the current ingress-nginx manifests uses the older API version; see the docs:
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
Ensure that the Kubernetes cluster is at least as new as v1.16 (to use admissionregistration.k8s.io/v1), or v1.9 (to use admissionregistration.k8s.io/v1beta1).
And in the current yaml:
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1beta1
and in the rules:
apiVersions:
- v1beta1
So you need to change it to v1:
apiVersion: admissionregistration.k8s.io/v1
and add - v1 to the rules:
apiVersions:
- v1beta1
- v1
After you change it and redeploy, your custom Ingress will deploy successfully.
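Before editing the manifest you can check which admissionregistration versions your cluster actually serves, e.g.:
kubectl api-versions | grep admissionregistration
On a v1.18 cluster this should list both admissionregistration.k8s.io/v1 and admissionregistration.k8s.io/v1beta1.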

Finally, I managed to run Ingress Nginx properly by changing the way of installation. I still don't understand why the previous installation didn't work, but I'll nevertheless share the solution along with some more insights into the original problem.
Solution
Uninstall ingress nginx: Delete the ingress-nginx namespace. This does not remove the validating webhook configuration - delete this one manually. Then install MetalLB and install ingress nginx again. I now used the version from the Helm stable repo. Now everything works as expected. Thanks to Long on the kubernetes slack channel!
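The stable Helm repo used here has since been deprecated; today the roughly equivalent install would come from the project's own chart repo (a sketch, not the exact commands used at the time):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace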
Some more insights into the original problem
The yamls provided by the installation guide contain a ValidatingWebhookConfiguration:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - extensions
    - networking.k8s.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: ingress-nginx
      name: ingress-nginx-controller-admission
      path: /extensions/v1beta1/ingresses
Validation is performed whenever I create or update an ingress (the content of my ingress.yaml doesn't matter). The validation failed, because when calling the service, the response is a Temporary Redirect. I don't know why.
The corresponding service is:
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
The pod matching the selector comes from this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
Something in this validation chain goes wrong. It would be interesting to know what and why, but I can continue working with my MetalLB solution. Note that this solution does not contain a validating webhook at all.

I am not sure if this helps this late, but might it be that your cluster is behind a proxy? In that case you have to have no_proxy configured correctly. Specifically, it has to include .svc,.cluster.local, otherwise validation webhook requests such as https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s will be routed via the proxy server (note the .svc in the URL).
I had exactly this issue, and adding .svc to the no_proxy variable helped. You can try this out quickly by modifying the /etc/kubernetes/manifests/kube-apiserver.yaml file, which will in turn automatically recreate your Kubernetes API server pod.
This applies not just to ingress validation, but also to anything else that refers to in-cluster URLs ending with .svc or .namespace.svc.cluster.local (e.g. see this bug).
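On a kubeadm cluster you can quickly check whether the API server actually has proxy variables set before editing the manifest; this is just a sketch, relying on the component label kubeadm puts on its static pods:
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -i -A 2 proxy
If HTTP_PROXY/HTTPS_PROXY show up without a matching no_proxy entry for .svc,.cluster.local, that is the situation described above.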

On a baremetal cluster, I disabled the admissionWebhooks during the Helm3 install:
kubectl create ns ingress-nginx
helm install [RELEASE_NAME] ingress-nginx/ingress-nginx -n ingress-nginx --set controller.admissionWebhooks.enabled=false

In my case it was the AWS EKS Terraform module, which now ships with hardened security groups. But nginx-ingress requires the cluster (control plane) to communicate with the ingress controller, so I had to allow the port below in the node security group:
node_security_group_additional_rules = {
  cluster_to_node = {
    description                   = "Cluster to ingress-nginx webhook"
    protocol                      = "-1"
    from_port                     = 8443
    to_port                       = 8443
    type                          = "ingress"
    source_cluster_security_group = true
  }
}
input_node_security_group_additional_rules

I had this error. Basically I have a script installing the nginx controller with helm; the script then immediately installs an application that uses ingress, also with helm. That app install failed, just the ingress part.
The solution was to wait 60s after the nginx install, to give the admission webhook time to come up and be ready.
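Instead of a fixed sleep, a readiness wait tends to be more robust in scripts. A sketch, assuming the community ingress-nginx chart's labels and namespace:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s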

If using Terraform and Helm, disable the Validating Webhook:
resource "helm_release" "nginx_ingress" {
...
set {
name = "controller.admissionWebhooks.enabled"
value = "false"
}
...
}

What worked for me was to increase the timeout while waiting for the ingress to come up.

I was bringing up a cluster with a known-good configuration; another one had been created just last week in essentially the same way. And my error message was a little more specific about what failed in the webhook:
│ Error: Failed to create Ingress
'auth-system/alertmanager-oauth2-proxy'
because: Internal error occurred: failed calling webhook
"validate.nginx.ingress.kubernetes.io": Post
"https://nginx-nginx-ingress-controller-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s":
x509: certificate signed by unknown authority
It turns out that among my many configs, one of them had a typo in the DNS names input to the nginx creation. So nginx thought it had one domain name, but it got a certificate for a slightly different DNS name, which caused the validating webhook to fail.
The solution was not to delete the hook, but to address the underlying config problem in the nginx DNS names so that they matched its X.509 certificate domain.
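If you hit the same x509 error, one way to see what the webhook is actually serving is to inspect the admission certificate itself. This is only a sketch assuming the community chart's default secret name and data keys (both vary by chart and release name):
kubectl -n ingress-nginx get secret ingress-nginx-admission -o jsonpath='{.data.cert}' \
  | base64 -d | openssl x509 -noout -subject -ext subjectAltName
Compare the subject and subject alternative names with the DNS name the API server uses to call the webhook.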

Just use v1 instead of v1beta1 in deploy.yaml.

This is a solution for those using a GKE cluster.
I tested two ways to fix this issue:
Terraform
GCP Console
Terraform
resource "google_compute_firewall" "validate-nginx" {
project = "${YOUR_PROJECT_ID}"
name = "access-master-to-validatenginx"
network = "${YOUR_NETWORK}"
allow {
protocol = "tcp"
ports = ["8443"]
}
target_tags = ["${NODE_NETWORK_TAG}"]
source_ranges = ["${CONTROL_PLANE_ADDRESS_RANGE}"]
}
GCP Console

To add a Terraform example for GCP, extending @mauricio's answer:
resource "google_container_cluster" "primary" {
...
}
resource "google_compute_firewall" "validate_nginx" {
project = local.project
name = "validate-nginx"
network = google_compute_network.vpc.name
allow {
protocol = "tcp"
ports = ["8443"]
}
direction = "INGRESS"
source_ranges = [google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block]
}

Related

How to expose a service to outside Kubernetes cluster via ingress?

I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.
First, I've created a deployment which should work without any configuration. Based on this article, I did
kubectl create namespace tests
created file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
created ingress.yaml and applied it (kubectl apply -f .\probes\ingress.yaml -n tests)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
spec:
  rules:
  - host: test.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  - host: test2.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test2"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  ingressClassName: nginx
Second, I can see that DNS actually points to the cluster and the ingress rules are applied:
if I open http://test.projectname.org/test or any irrelevant path (http://test.projectname.org/test3), I'm shown NET::ERR_CERT_AUTHORITY_INVALID, but
if I use "open anyway" in browser, irrelevant paths give ERR_TOO_MANY_REDIRECTS while http://test.projectname.org/test gives Cannot GET /test
Now, TLS issues aside (those deserve a separate question), why do I get Cannot GET /test? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn't discriminate between paths; that's why I don't show DNS settings, although they are described in the previous question), but instead of showing the simple hello-kubernetes page at /test it returns this simple 404 message. Why is that? What could possibly go wrong? How do I debug this?
Some debug info:
kubectl version --short tells Kubernetes Client Version is v1.21.5 and Server Version is v1.20.7-eks-d88609
kubectl get ingress -n tests shows that hello-kubernetes-ingress exists indeed, with nginx class, 2 expected hosts, address equal to that shown for load balancer in AWS console
kubectl get all -n tests shows
NAME READY STATUS RESTARTS AGE
pod/hello-kubernetes-first-6f77d8ff99-gjw5d 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-ptwsn 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-x8w87 1/1 Running 0 5h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-kubernetes-first ClusterIP 10.100.18.189 <none> 80/TCP 5h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-kubernetes-first 3/3 3 3 5h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-kubernetes-first-6f77d8ff99 3 3 3 5h4m
ingress-nginx was installed before me via the following chart:
apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
- name: ingress-nginx
  version: 4.0.6
  repository: https://kubernetes.github.io/ingress-nginx
and the values overrides applied with the chart differ from the original ones (well, those got updated since the installation) mostly in that extraArgs: default-ssl-certificate: "nginx-ingress/dragon-family-com" is uncommented
PS To answer Andrew, I indeed tried to setup HTTPS but it seemingly didn't help, so I haven't included what I tried into the initial question. Yet, here's what I did:
installed cert-manager, currently without a custom chart: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
based on cert-manager's tutorial and SO question created a ClusterIssuer with the following config:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-backoffice
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
    privateKeySecretRef: # this secret will be created in the namespace of cert-manager
      name: letsencrypt-backoffice-private-key
    # email: <will be used for urgent alerts about expiration etc>
    solvers:
    # TODO: add for each domain/second-level domain/*.projectname.org
    - selector:
        dnsZones:
        - test.projectname.org
        - test2.projectname.org
      # haven't made it to work yet, so switched to the simpler to configure http01 challenge
      # dns01:
      #   route53:
      #     region: ... # that of load balancer (but we also have ...)
      #     accessKeyID: <of IAM user with access to Route53>
      #     secretAccessKeySecretRef: # created that
      #       name: route53-credentials-secret
      #       key: secret-access-key
      #     role: arn:aws:iam::645730347045:role/cert-manager
      http01:
        ingress:
          class: nginx
and applied it via kubectl apply -f issuer.yaml
created 2 certificates in the same file and applied it again:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test.projectname.org
  dnsNames:
  - test.projectname.org
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate-2
spec:
  secretName: tls-secret-2
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test2.projectname.org
  dnsNames:
  - test2.projectname.org
made sure that the certificates are issued correctly (skipping the pain part, the result is: kubectl get certificates shows that both certificates have READY = true and both tls secrets are created)
figured out that my ingress is in another namespace and TLS secrets referenced in an ingress spec can only be in the same namespace (haven't tried the wildcard certificate and --default-ssl-certificate option yet), so I copied each of them to the tests namespace:
opened existing secret, like kubectl edit secret tls-secret-2, copied data and annotations
created an empty (Opaque) secret in tests: kubectl create secret generic tls-secret-2-copy -n tests
opened it (kubectl edit secret tls-secret-2-copy -n tests) and inserted the data and annotations
in ingress spec, added the tls bit:
tls:
- hosts:
  - test.projectname.org
  secretName: tls-secret-copy
- hosts:
  - test2.projectname.org
  secretName: tls-secret-2-copy
I hoped that this would help, but actually it made no difference (I get ERR_TOO_MANY_REDIRECTS for irrelevant paths, a redirect from http to https, NET::ERR_CERT_AUTHORITY_INVALID at https, and Cannot GET /test if I insist on getting to the page)
Since you've used your own answer to complement the question, I'll kind of answer all the things you asked, while providing a divide and conquer strategy to troubleshooting kubernetes networking.
At the end I'll give you some nginx and IP answers
This is correct
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
Breaking down troubleshooting with Ingress
DNS
Ingress
Service
Pod
Certificate
1. DNS
you can use the command dig to query the DNS
dig google.com
2. Ingress
the ingress controller doesn't look for the IP, it just looks for the headers
you can force a host using any tool that lets you change the headers, like curl
curl --header 'Host: test3.projectname.com' http://123.123.123.123 (your public IP)
3. Service
you can be sure that your service is working by creating an ubuntu/centos pod, using kubectl exec -it podname -- bash, and trying to curl your service from within the cluster
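For example, for the service in this question, a throwaway curl pod would look roughly like this (just a sketch; curlimages/curl is simply a small image with curl preinstalled):
kubectl run curl-test -n tests --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sv http://hello-kubernetes-first.tests.svc.cluster.local:80/
A 200 response here tells you the Service and Pod side is fine and the problem is in the ingress layer or above.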
4. Pod
You're getting this
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
This part GET /test2 means that the request got the address from the DNS, went all the way from the internet, found your cluster, found your ingress controller, got through the service and reached your pod. Congratz! Your ingress is working!
But why is it returning 404?
The path that was passed to the service and from the service to the pod is /test2
Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?
That's why you're getting a 404 from nginx, not from the ingress controller.
Those IPs are internal, remember, the internet traffic ended at the cluster border, now you're in an internal network. Here's a rough sketch of what's happening
Let's say that you're accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home has the address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.com.
The cluster looks for the ingress controller, which finds a suitable configuration and passed the request down to the service, which passes the request down to the pod.
So,
your router can see your private IP (192.168.123.123)
Your cluster(ingress) can see your router's IP (7.8.9.1)
Your service can see the ingress's IP (192.168.?.?)
Your pod can see the service's IP (192.168.14.57)
It's a game of pass around.
If you want to see the public IP in your nginx logs, you need to customize it to get the X-Real-IP header, which is usually where load-balancers/ingresses/ambassador/proxies put the actual requester public IP.
Well, I haven't figured this out for ArgoCD yet (edit: figured, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as "alias" in the AWS console), and then added to the ingress a new rule with the / path, like this:
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
I've finally got the hello kubernetes thing on http://test3.projectname.org. I have succeeded with TLS after a number of attempts/research and some help in a separate question.
But I haven't succeeded with actual debugging: looking at kubectl logs -n nginx <pod name, see kubectl get pod -n nginx> doesn't really help me understand what path was passed to the service, and the output is rather difficult to read (I can't even find where those IPs come from: they are not mine, the LB's, or the cluster IP of the service; nor do I understand what tests-hello-kubernetes-first-80 stands for – it's just a concatenation of namespace, service name and port; no object has such a name, including the ingress):
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
[tests-hello-kubernetes-first-80] [] 192.168.49.95:8080 144 0.000 404 <some hash>
Any more pointers on debugging will be helpful; also, suggestions regarding correct path rewriting for nginx-ingress are welcome.
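One thing that sometimes helps more than the access log is reading the nginx configuration the controller actually generated for the host. A sketch, where <controller-pod> is a placeholder for the pod name from kubectl get pod -n nginx:
kubectl -n nginx exec <controller-pod> -- sh -c 'grep -A 20 "server_name test3.projectname.org" /etc/nginx/nginx.conf'
That shows which location blocks and upstreams the ingress rule was translated into.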

Nginx-Ingress Helm Deployment --tcp-services-configmap Argument not found

I'm trying to do TCP/UDP port-forwarding with an ingress.
Following the docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
It says to set: --tcp-services-configmap but doesn't tell you where to set it. I assume it is command line arguments. I then googled the list of command line arguments for nginx-ingress
https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/
Here you can clearly see its an argument of the controller:
--tcp-services-configmap Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.
First question: how do I dynamically add container arguments via the nginx-ingress Helm chart? I don't see that documented anywhere.
Second question: what is the proper way to set this with the current version of nginx-ingress, given that setting the command-line argument fails the container startup because the binary doesn't have that argument option?
In the default Helm chart values.yaml there are some options for setting the namespace of the tcp-services ConfigMap, but given that the docs say I have to set it as an argument, and that argument fails the startup, I'm not sure how you actually set this.
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml
I manually edited the deployment and set the flag on the container args:
- args:
- -nginx-plus=false
- -nginx-reload-timeout=60000
- -enable-app-protect=false
- -nginx-configmaps=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress
- -default-server-tls-secret=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress-default-server-tls
- -ingress-class=emoney-ingress
- -health-status=false
- -health-status-uri=/nginx-health
- -tcp-services-configmap=emoney-node/tcp-services-configmap
- -nginx-debug=false
- -v=1
- -nginx-status=true
- -nginx-status-port=8080
- -nginx-status-allow-cidrs=127.0.0.1
- -report-ingress-status
- -external-service=emoney-nginx-controller-nginx-ingress
- -enable-leader-election=true
- -leader-election-lock-name=emoney-nginx-controller-nginx-ingress-leader-election
- -enable-prometheus-metrics=true
- -prometheus-metrics-listen-port=9113
- -prometheus-tls-secret=
- -enable-custom-resources=true
- -enable-tls-passthrough=false
- -enable-snippets=false
- -enable-preview-policies=false
- -ready-status=true
- -ready-status-port=8081
- -enable-latency-metrics=false
env:
When I set this the way the docs say should be possible, the pod fails to start up: it errors out saying that argument isn't an option of the binary.
kubectl logs emoney-nginx-controller-nginx-ingress-5769565cc7-vmgrf -n emoney-node
flag provided but not defined: -tcp-services-configmap
Usage of /nginx-ingress:
-alsologtostderr
log to standard error as well as files
-default-server-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the default server. Format: <namespace>/<name>.
If not set, than the certificate and key in the file "/etc/nginx/secrets/default" are used.
If "/etc/nginx/secrets/default" doesn't exist, the Ingress Controller will configure NGINX to reject TLS connections to the default server.
If a secret is set, but the Ingress controller is not able to fetch it from Kubernetes API or it is not set and the Ingress Controller
fails to read the file "/etc/nginx/secrets/default", the Ingress controller will fail to start.
-enable-app-protect
Enable support for NGINX App Protect. Requires -nginx-plus.
-enable-custom-resources
Enable custom resources (default true)
-enable-internal-routes
Enable support for internal routes with NGINX Service Mesh. Requires -spire-agent-address and -nginx-plus. Is for use with NGINX Service Mesh only.
-enable-latency-metrics
Enable collection of latency metrics for upstreams. Requires -enable-prometheus-metrics
-enable-leader-election
Enable Leader election to avoid multiple replicas of the controller reporting the status of Ingress, VirtualServer and VirtualServerRoute resources -- only one replica will report status (default true). See -report-ingress-status flag. (default true)
-enable-preview-policies
Enable preview policies
-enable-prometheus-metrics
Enable exposing NGINX or NGINX Plus metrics in the Prometheus format
-enable-snippets
Enable custom NGINX configuration snippets in Ingress, VirtualServer, VirtualServerRoute and TransportServer resources.
-enable-tls-passthrough
Enable TLS Passthrough on port 443. Requires -enable-custom-resources
-external-service string
Specifies the name of the service with the type LoadBalancer through which the Ingress controller pods are exposed externally.
The external address of the service is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-global-configuration string
The namespace/name of the GlobalConfiguration resource for global configuration of the Ingress Controller. Requires -enable-custom-resources. Format: <namespace>/<name>
-health-status
Add a location based on the value of health-status-uri to the default server. The location responds with the 200 status code for any request.
Useful for external health-checking of the Ingress controller
-health-status-uri string
Sets the URI of health status location in the default server. Requires -health-status (default "/nginx-health")
-ingress-class string
A class of the Ingress controller.
An IngressClass resource with the name equal to the class must be deployed. Otherwise, the Ingress Controller will fail to start.
The Ingress controller only processes resources that belong to its class - i.e. have the "ingressClassName" field resource equal to the class.
The Ingress Controller processes all the VirtualServer/VirtualServerRoute/TransportServer resources that do not have the "ingressClassName" field for all versions of kubernetes. (default "nginx")
-ingress-template-path string
Path to the ingress NGINX configuration template for an ingress resource.
(default for NGINX "nginx.ingress.tmpl"; default for NGINX Plus "nginx-plus.ingress.tmpl")
-ingresslink string
Specifies the name of the IngressLink resource, which exposes the Ingress Controller pods via a BIG-IP system.
The IP of the BIG-IP system is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-leader-election-lock-name string
Specifies the name of the ConfigMap, within the same namespace as the controller, used as the lock for leader election. Requires -enable-leader-election. (default "nginx-ingress-leader-election")
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-logtostderr
log to standard error instead of files
-main-template-path string
Path to the main NGINX configuration template. (default for NGINX "nginx.tmpl"; default for NGINX Plus "nginx-plus.tmpl")
-nginx-configmaps string
A ConfigMap resource for customizing NGINX configuration. If a ConfigMap is set,
but the Ingress controller is not able to fetch it from Kubernetes API, the Ingress controller will fail to start.
Format: <namespace>/<name>
-nginx-debug
Enable debugging for NGINX. Uses the nginx-debug binary. Requires 'error-log-level: debug' in the ConfigMap.
-nginx-plus
Enable support for NGINX Plus
-nginx-reload-timeout int
The timeout in milliseconds which the Ingress Controller will wait for a successful NGINX reload after a change or at the initial start. (default 60000) (default 60000)
-nginx-status
Enable the NGINX stub_status, or the NGINX Plus API. (default true)
-nginx-status-allow-cidrs string
Add IPv4 IP/CIDR blocks to the allow list for NGINX stub_status or the NGINX Plus API. Separate multiple IP/CIDR by commas. (default "127.0.0.1")
-nginx-status-port int
Set the port where the NGINX stub_status or the NGINX Plus API is exposed. [1024 - 65535] (default 8080)
-prometheus-metrics-listen-port int
Set the port where the Prometheus metrics are exposed. [1024 - 65535] (default 9113)
-prometheus-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the prometheus endpoint.
-proxy string
Use a proxy server to connect to Kubernetes API started by "kubectl proxy" command. For testing purposes only.
The Ingress controller does not start NGINX and does not write any generated NGINX configuration files to disk
-ready-status
Enables the readiness endpoint '/nginx-ready'. The endpoint returns a success code when NGINX has loaded all the config after the startup (default true)
-ready-status-port int
Set the port where the readiness endpoint is exposed. [1024 - 65535] (default 8081)
-report-ingress-status
Updates the address field in the status of Ingress resources. Requires the -external-service or -ingresslink flag, or the 'external-status-address' key in the ConfigMap.
-spire-agent-address string
Specifies the address of the running Spire agent. Requires -nginx-plus and is for use with NGINX Service Mesh only. If the flag is set,
but the Ingress Controller is not able to connect with the Spire Agent, the Ingress Controller will fail to start.
-stderrthreshold value
logs at or above this threshold go to stderr
-transportserver-template-path string
Path to the TransportServer NGINX configuration template for a TransportServer resource.
(default for NGINX "nginx.transportserver.tmpl"; default for NGINX Plus "nginx-plus.transportserver.tmpl")
-v value
log level for V logs
-version
Print the version, git-commit hash and build date and exit
-virtualserver-template-path string
Path to the VirtualServer NGINX configuration template for a VirtualServer resource.
(default for NGINX "nginx.virtualserver.tmpl"; default for NGINX Plus "nginx-plus.virtualserver.tmpl")
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
-watch-namespace string
Namespace to watch for Ingress resources. By default the Ingress controller watches all namespaces
-wildcard-tls-secret string
A Secret with a TLS certificate and key for TLS termination of every Ingress host for which TLS termination is enabled but the Secret is not specified.
Format: <namespace>/<name>. If the argument is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
If the argument is set, but the Ingress controller is not able to fetch the Secret from Kubernetes API, the Ingress controller will fail to start.
Config Map
apiVersion: v1
data:
  "1317": emoney-node/emoney-api:1317
  "9090": emoney-node/emoney-grpc:9090
  "26656": emoney-node/emoney:26656
  "26657": emoney-node/emoney-rpc:26657
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: emoney
    meta.helm.sh/release-namespace: emoney-node
  creationTimestamp: "2021-11-01T18:06:49Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:1317: {}
        f:9090: {}
        f:26656: {}
        f:26657: {}
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
    manager: helm
    operation: Update
    time: "2021-11-01T18:06:49Z"
  name: tcp-services-configmap
  namespace: emoney-node
  resourceVersion: "2056146"
  selfLink: /api/v1/namespaces/emoney-node/configmaps/tcp-services-configmap
  uid: 188f5dc8-02f9-4ee5-a5e3-819d00ff8b67
Name: emoney
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.30.240
Port: p2p 26656/TCP
TargetPort: 26656/TCP
Endpoints: 10.0.36.192:26656
Session Affinity: None
Events: <none>
Name: emoney-api
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.166.97
Port: api 1317/TCP
TargetPort: 1317/TCP
Endpoints: 10.0.36.192:1317
Session Affinity: None
Events: <none>
Name: emoney-grpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.136.177
Port: grpc 9090/TCP
TargetPort: 9090/TCP
Endpoints: 10.0.36.192:9090
Session Affinity: None
Events: <none>
Name: emoney-nginx-controller-nginx-ingress
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney-nginx-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=emoney-nginx-controller-nginx-ingress
helm.sh/chart=nginx-ingress-0.11.3
Annotations: meta.helm.sh/release-name: emoney-nginx-controller
meta.helm.sh/release-namespace: emoney-node
Selector: app=emoney-nginx-controller-nginx-ingress
Type: LoadBalancer
IP: 172.20.16.202
LoadBalancer Ingress: lb removed
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32250/TCP
Endpoints: 10.0.43.32:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32375/TCP
Endpoints: 10.0.43.32:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30904
Events: <none>
Name: emoney-rpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.42.163
Port: rpc 26657/TCP
TargetPort: 26657/TCP
Endpoints: 10.0.36.192:26657
Session Affinity: None
Events: <none>
helm repo add nginx-stable https://helm.nginx.com/stable --kubeconfig=./kubeconfig || echo "helm repo already added"
helm repo update --kubeconfig=./kubeconfig || echo "helm repo already updated"
helm upgrade ${app_name}-nginx-controller -n ${app_namespace} nginx-stable/nginx-ingress \
--install \
--kubeconfig=./kubeconfig \
--create-namespace \
--set controller.service.type=LoadBalancer \
--set controller.tcp.configMapNamespace=${app_namespace} \
--set controller.ingressClass="${app_name}-ingress"
kubectl rollout status -w deployment/${app_name} --kubeconfig=./kubeconfig -n ${app_namespace}
#- --tcp-services-configmap=emoney-node/tcp-services-configmap
You could say the helm chart is biased in that it doesn't expose the option to set those args as a chart value. It will set them by itself, based on conditional logic, when required according to the values.
When I check the nginx template in the repo, I see that additional args are passed from the template in the params helper file. Those seem to be generated dynamically, i.e.:
{{- if .Values.tcp }}
- --tcp-services-configmap={{ default "$(POD_NAMESPACE)" .Values.controller.tcp.configMapNamespace }}/{{ include "ingress-nginx.fullname" . }}-tcp
{{- end }}
So, it seems it will use this flag only if the tcp value isn't empty. On the same condition, it will create the configmap.
Further, the tcp value allows you to set a key configMapNamespace. So if you were to set this key only, then the flag would be used as per the parameters helpers. Now you need to create your configmap only in the provided namespace and let it match the name {{ include "ingress-nginx.fullname" . }}-tcp.
So you could create the configmap in the default namespace and name it ingress-nginx-tcp or similar, depending on how you set the release name.
kubectl create configmap ingress-nginx-tcp --from-literal 1883=mqtt/emqx:1883 -n default
helm install --set controller.tcp.configMapNamespace=default ingress-nginx ingress-nginx/ingress-nginx
I think the only problem with that is that you cannot create it in the .Release.Namespace, since when tcp isn't empty it will attempt to create a configmap there by itself, which would result in conflicts. At least that's how I interpret the templates in the chart repo.
I personally have configured TCP via a values file that I pass to helm with -f.
helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
# configure the tcp configmap
tcp:
  1883: mqtt/emqx:1883
  8883: mqtt/emqx:8883
# enable the service and expose the tcp ports.
# be careful as this will potentially make them
# available on the public web
controller:
  service:
    enabled: true
    ports:
      http: 80
      https: 443
      mqtt: 1883
      mqttssl: 8883
    targetPorts:
      http: http
      https: https
      mqtt: mqtt
      mqttssl: mqttssl

What does the default helm create chart do?

Does the default helm chart actually run and do something I can observe?
I've tried running it (the default helm chart) and it does run; so, what does it do?
To recreate the problem I'm asking about, do the following:
helm create helm-it # Create a helm chart (the default)
helm install helm-it ./helm-it # Run it
helm list # See it running
helm get manifest helm-it # See the manifest (YAML) that is running (I think)
By examining the manifest (using helm get manifest helm-it), I can see how it's configured. The important bit is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-it
  labels:
    helm.sh/chart: helm-it-0.1.0
    app.kubernetes.io/name: helm-it
    app.kubernetes.io/instance: helm-it
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: helm-it
      app.kubernetes.io/instance: helm-it
  template:
    metadata:
      labels:
        app.kubernetes.io/name: helm-it
        app.kubernetes.io/instance: helm-it
    spec:
      serviceAccountName: helm-it
      securityContext:
        {}
      containers:
        - name: helm-it
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
It seems to be running nginx on port 80 but when trying to access it using curl, I got an error (see output below).
curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
I looked for a solution
When looking for a solution I started with the helm create documentation at https://helm.sh/docs/helm/helm_create/
Searches for "does helm create run" brought up several links on how to create a helm chart in 5 minutes (too many to review), but the answer might be in one of those web pages.
https://phoenixnap.com/kb/create-helm-chart was one of the results, but it did not answer my question.
Why I care
The reason I'm asking is because I'm trying to convert a k8s yaml file into a helm chart and want to know what I'm starting with to know what I can delete and what I need to add. I found this link:
How to convert k8s yaml to helm chart - and it said I could
just drop that file under templates/ and add a Chart.yml,
which I tried but it didn't work.
The templates created by the helm create command run Nginx as a stateless application. I found this in the book Learning Helm, by Matt Butcher, Matt Farina, and Josh Dolitsky, on page 67. Available in O'Reilly online books and Google Books.
To access the NGINX application, you might need to forward data from your host to the K8S cluster.
When performing the helm install it gives this output:
helm install myapp anvil
NAME: myapp
LAST DEPLOYED: Fri Apr 23 12:23:46 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=anvil,app.kubernetes.io/instance=myapp" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
The complicated looking commands simply get the name of the pod and the port that is being used by NGINX to create a command that will forward data from your localhost to the kubernetes cluster. That command is:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
After port-forwarding is running, you can then access http://localhost:8080/ and get this display that says NGINX is running. You'll know it's working if the port forwarding displays more logging information. Mine displayed the following:
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080
Each time you hit the URL, an additional logging line Handling connection for 8080 is displayed (and the web page displays the Welcome to NGINX page).
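As a side note, you can also forward straight to the Service the chart created instead of an individual pod. The Service name follows the chart's fullname convention (typically <release>-<chart>, so likely myapp-anvil here; check with kubectl get svc first, since this is an assumption rather than something from the output above):
kubectl get svc
kubectl --namespace default port-forward svc/myapp-anvil 8080:80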

Nginx Ingress returns 502 Bad Gateway on Kubernetes

I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the “eksctl” command line tool. I’m trying to deploy a Dash python app on the cluster without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
    - name: dashboard
      image: my_image_from_ecr
      ports:
        - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
    - port: 8050 # exposed port
      targetPort: 8050
ingress-configuration-file.yml (host based routing)
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - backend:
              serviceName: dashboard-service
              servicePort: 8050
            path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-confguration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host based routing using my domain. For this to work, I have also deployed an nginx-ingress. I have also created an “A” record set using Route53
that maps the “dashboard.my_domain” to the nginx-ingress:
kubectl get ingress
and the output is:
NAME HOSTS ADDRESS PORTS AGE
dashboard-ingress dashboard.my_domain nginx-ingress.elb.aws-region.amazonaws.com 80 93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name: dashboard-ingress
Namespace: default
Address: nginx-ingress.elb.aws-region.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
host.my-domain
/ dashboard-service:8050 (192.168.36.42:8050)
Annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: false
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
Unfortunately, when I try to access the Dash app on the browser, I get a
502 Bad Gateway error from the nginx. Could you please help me because my Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick!
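If you want to confirm the fix before going back through the ingress, a quick check through the Service from inside the cluster avoids the browser and TLS layers entirely. This is only a sketch using the names from the question (the describe output above shows the default namespace):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -I http://dashboard-service.default.svc.cluster.local:8050/
A 200 here means the pod and service are fine and any remaining problem is in the ingress.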
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, can you please update your question with the output of kubectl describe ingress <ingress_name>?
I would also suggest using the backend-protocol annotation with the value HTTP (the default value; you can avoid using this if the dashboard application is not SSL-configured and only this application is served at the said host). But you may need to add it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for non-SSL services, and another with backend-protocol: HTTPS to serve traffic to SSL-enabled services.
For more information on backend-protocol annotation, kindly refer this link.
I have often faced this issue in my Ingress Setup and these steps have helped me resolve it.

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via apache on port 80. I'm unable to configure a service + ingress for accessing it from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
        - image: asia.gcr.io/my-app/my-app:latest
          name: webapp
          ports:
            - containerPort: 80
              name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
get the following output and keep track of the output:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep the service and pod clusterIPs
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
if the pod is working but the service is not, there is an issue with the routes in your iptables which is managed by kube-proxy and would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the Load balancer health checks and also an issue that Google needs to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
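If you want the health check to follow the application rather than the root path, adding a readinessProbe to the container is usually enough. A sketch for the Deployment from this question, using a strategic merge patch keyed by container name (point path at an endpoint that really returns 200; / is only the assumption here):
kubectl -n my-app patch deployment webapp --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name":"webapp","readinessProbe":{"httpGet":{"path":"/","port":80}}}]}}}}'
GKE derives the backend health check from the readiness probe, though for an already-created load balancer you may need to recreate the ingress for the existing health check to pick up the change.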