Kubernetes ingress 502 Bad Gateway

I installed a Kubernetes cluster on bare metal (using VMware virtual machines) with the following nodes:
NAME        STATUS   ROLES                  AGE    VERSION
master-01   Ready    control-plane,master   5d3h   v1.21.3
master-02   Ready    control-plane,master   5d3h   v1.21.3
master-03   Ready    control-plane,master   5d3h   v1.21.3
worker-01   Ready    <none>                 5d2h   v1.21.3
worker-02   Ready    <none>                 5d2h   v1.21.3
worker-03   Ready    <none>                 5d2h   v1.21.3
MetalLB is installed as the load balancer for the cluster, and Calico as the CNI.
I also installed the NGINX ingress controller with Helm:
$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install ingress-controller nginx-stable/nginx-ingress
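To confirm the release came up, a quick check (the pod label is taken from the chart's service selector, shown further down in this post):
$ kubectl get pods -l app=ingress-controller-nginx-ingress
$ kubectl get svc ingress-controller-nginx-ingress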
I deployed a simple nginx server for testing:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx-app
  #type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
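To confirm the Ingress was admitted and the Service picked up the pod endpoints, these checks can be run (resource names from the manifests above):
$ kubectl get ingress ingress-myapp
$ kubectl describe ingress ingress-myapp
$ kubectl get endpoints nginx-service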
My deployments with LoadBalancer-type services get their IPs from MetalLB and work fine, but when I add an ingress, although an IP is assigned, I get a 502 Bad Gateway error, as shown below.
The firewall is enabled, but the required ports are open:
6443/tcp 2379-2380/tcp 10250-10252/tcp 179/tcp 7946/tcp 7946/udp 8443/tcp on the master nodes
10250/tcp 30000-32767/tcp 7946/tcp 7946/udp 8443/tcp 179/tcp on the worker nodes
My services and pods work fine:
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-controller-nginx-ingress LoadBalancer 10.101.17.180 10.1.210.100 80:31509/TCP,443:30004/TCP 33m app=ingress-controller-nginx-ingress
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d <none>
nginx-service ClusterIP 10.101.48.198 <none> 80/TCP 31m app=nginx-app
My ingress logs show a "no route to host" error for the pod's internal IP:
2021/07/29 07:46:24 [error] 42#42: *8 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:07:46:24 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
W0729 07:50:16.416830 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
192.168.2.131 - - [29/Jul/2021:07:51:03 +0000] "GET / HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "-"
192.168.2.131 - - [29/Jul/2021:07:51:03 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://10.1.210.100/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "-"
W0729 07:56:43.420282 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0729 08:05:28.422594 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0729 08:10:45.425329 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
2021/07/29 08:13:59 [error] 42#42: *12 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:08:13:59 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
2021/07/29 08:14:09 [error] 42#42: *14 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:08:14:09 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
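Since every 502 maps to a "No route to host" for the pod IP, the same hop can be tested directly from a worker node's shell, bypassing the ingress (IPs taken from the output above):
# from any worker node: can the node reach the pod network?
curl -m 5 http://192.168.171.17:80
# and the ClusterIP path:
curl -m 5 http://10.101.48.198:80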
Any ideas, please?
EDIT: As requested, here are the descriptions of the services and pods.
$ kubectl describe pod nginx-deployment-6f7d8d4d55-sncdr
Name: nginx-deployment-6f7d8d4d55-sncdr
Namespace: default
Priority: 0
Node: worker-01/10.1.210.63
Start Time: Thu, 29 Jul 2021 08:43:59 +0100
Labels: app=nginx-app
pod-template-hash=6f7d8d4d55
Annotations: cni.projectcalico.org/podIP: 192.168.171.17/32
cni.projectcalico.org/podIPs: 192.168.171.17/32
Status: Running
IP: 192.168.171.17
IPs:
IP: 192.168.171.17
Controlled By: ReplicaSet/nginx-deployment-6f7d8d4d55
Containers:
nginx:
Container ID: docker://fc61b73f8a833ad13b8956d8ce151b221b75a58a9a2fbae928464f3b0a77cca2
Image: nginx
Image ID: docker-pullable://nginx@sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 29 Jul 2021 08:44:01 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkc48 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-wkc48:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned default/nginx-deployment-6f7d8d4d55-sncdr to worker-01
Normal Pulling 16m kubelet Pulling image "nginx"
Normal Pulled 16m kubelet Successfully pulled image "nginx" in 1.51808376s
Normal Created 16m kubelet Created container nginx
Normal Started 16m kubelet Started container nginx
$ kubectl describe svc ingress-controller-nginx-ingress
Name: ingress-controller-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=ingress-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-controller-nginx-ingress
helm.sh/chart=nginx-ingress-0.10.0
Annotations: meta.helm.sh/release-name: ingress-controller
meta.helm.sh/release-namespace: default
Selector: app=ingress-controller-nginx-ingress
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.17.180
IPs: 10.101.17.180
LoadBalancer Ingress: 10.1.210.100
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31509/TCP
Endpoints: 192.168.37.202:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 30004/TCP
Endpoints: 192.168.37.202:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31108
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 18m metallb-controller Assigned IP "10.1.210.100"
Normal nodeAssigned 3m21s (x182 over 18m) metallb-speaker announcing from node "worker-02"
$ kubectl describe svc nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx-app
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.48.198
IPs: 10.101.48.198
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.171.17:80
Session Affinity: None
Events: <none>
$ kubectl exec -it ingress-controller-nginx-ingress-dd5db86dc-gqdpm -- /bin/bash
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 192.168.171.17:80
curl: (7) Failed to connect to 192.168.171.17 port 80: No route to host
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 192.168.171.17
curl: (7) Failed to connect to 192.168.171.17 port 80: No route to host
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 10.101.48.198
curl: (7) Failed to connect to 10.101.48.198 port 80: Connection timed out
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl nginx-deployment-6f7d8d4d55-sncdr
curl: (6) Could not resolve host: nginx-deployment-6f7d8d4d55-sncdr
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$
To be honest, I don't understand why curl against the service IP no longer works; yesterday it did.

The problem was a firewall issue: I disabled firewalld and it works now. I thought I only had to open port 8443, but it seems to be another port; if anyone can tell me which one, please do.
Thank you
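For reference, a sketch of the firewalld rules that would let Calico's pod network flow without disabling the firewall entirely. This assumes Calico's default IP-in-IP encapsulation (IP protocol 4); if the cluster uses VXLAN instead, it would be UDP port 4789:
# allow IP-in-IP traffic between nodes (Calico's default encapsulation)
firewall-cmd --permanent --add-rich-rule='rule protocol value="4" accept'
# NAT pod traffic leaving the node
firewall-cmd --permanent --add-masquerade
firewall-cmd --reload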

I had a similar issue with a Traefik ingress in k3s. I enabled masquerade in firewalld:
firewall-cmd --permanent --add-masquerade && firewall-cmd --reload
Credit to this post for the idea: https://github.com/k3s-io/k3s/issues/1646#issuecomment-881191877
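To confirm the change took effect, firewalld can be queried afterwards:
firewall-cmd --query-masquerade
firewall-cmd --list-all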

Related

How to fix http 502 from external reverse proxy with upstream to ingress-nginx

Currently I have a cluster with a single controller and a single worker, plus an nginx reverse proxy (HTTP only) outside the cluster.
The controller is at 192.168.1.65.
The worker is at 192.168.1.61.
The reverse proxy is at 192.168.1.93 and has a public IP.
Here are my ingress-nginx services:
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n ingress-nginx get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.102.58.7 192.168.1.186 80:31097/TCP,443:31116/TCP 56m
ingress-nginx-controller-admission ClusterIP 10.108.233.49 <none> 443/TCP 56m
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.58.7
IPs: 10.102.58.7
LoadBalancer Ingress: 192.168.1.186
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31097/TCP
Endpoints: 10.244.0.23:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31116/TCP
Endpoints: 10.244.0.23:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
That 192.168.1.186 is assigned by MetalLB.
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl get IPAddressPools -A
NAMESPACE NAME AGE
metallb-system pool01 99m
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n metallb-system describe IPAddressPool pool01
Name: pool01
Namespace: metallb-system
Labels: <none>
Annotations: <none>
API Version: metallb.io/v1beta1
Kind: IPAddressPool
Metadata:
Creation Timestamp: 2022-07-26T09:08:10Z
Generation: 1
Managed Fields:
API Version: metallb.io/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:addresses:
f:autoAssign:
f:avoidBuggyIPs:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-07-26T09:08:10Z
Resource Version: 41021
UID: 2a0dcfb2-bf8f-4b1a-b459-380e78959586
Spec:
Addresses:
192.168.1.186 - 192.168.1.191
Auto Assign: true
Avoid Buggy I Ps: false
Events: <none>
I deployed hello-app in the namespace 'dev':
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n dev get all
NAME READY STATUS RESTARTS AGE
pod/hello-app-5c554f556c-v2gx9 1/1 Running 1 (20m ago) 63m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-service ClusterIP 10.111.161.2 <none> 8081/TCP 62m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-app 1/1 1 1 63m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-app-5c554f556c 1 1 1 63m
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n dev describe service hello-service
Name: hello-service
Namespace: dev
Labels: app=hello
Annotations: <none>
Selector: app=hello
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.161.2
IPs: 10.111.161.2
Port: <unset> 8081/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.0.22:8080
Session Affinity: None
Events: <none>
Local tests of that service:
bino@k8s-worker-1:~$ curl http://10.111.161.2:8081
Hello, world!
Version: 2.0.0
Hostname: hello-app-5c554f556c-v2gx9
bino@k8s-worker-1:~$ curl http://10.244.0.22:8080
Hello, world!
Version: 2.0.0
Hostname: hello-app-5c554f556c-v2gx9
And the ingress resource for that service:
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n dev describe ingress hello-app-ingress
Name: hello-app-ingress
Labels: <none>
Namespace: dev
Address: 192.168.1.61
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
bino.k8s.jcamp.cloud
/ hello-service:8081 (10.244.0.22:8080)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 23m (x3 over 24m) nginx-ingress-controller Scheduled for sync
When I open http://bino.k8s.jcamp.cloud I get a 502.
My nginx reverse proxy conf:
server {
    listen 80 default_server;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://192.168.1.186;
    }
}
The nginx error log says:
2022/07/26 06:24:21 [error] 1593#1593: *6 connect() failed (113: No route to host) while connecting to upstream, client: 203.161.185.210, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.1.186:80/favicon.ico", host: "bino.k8s.jcamp.cloud", referrer: "http://bino.k8s.jcamp.cloud/"
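To separate the reverse-proxy hop from the MetalLB/ingress hop, a direct request from the proxy host to the LoadBalancer IP with the expected Host header is a useful check (values taken from this question):
curl -v -H "Host: bino.k8s.jcamp.cloud" http://192.168.1.186/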
From describing the ingress-nginx-controller pod:
bino@corobalap  ~/k0s-sriwijaya/ingress-nginx/testapp  kubectl -n ingress-nginx describe pod ingress-nginx-controller-6dc865cd86-9fmsk
Name: ingress-nginx-controller-6dc865cd86-9fmsk
Namespace: ingress-nginx
Priority: 0
Node: k8s-worker-1/192.168.1.61
Start Time: Tue, 26 Jul 2022 16:11:05 +0700
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=6dc865cd86
Annotations: kubernetes.io/psp: 00-k0s-privileged
Status: Running
IP: 10.244.0.23
IPs:
IP: 10.244.0.23
Controlled By: ReplicaSet/ingress-nginx-controller-6dc865cd86
Containers:
controller:
Container ID: containerd://541446c98b55312376aba4744891baa325dca26410abe5f94707d270d378d881
Image: registry.k8s.io/ingress-nginx/controller:v1.3.0@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Tue, 26 Jul 2022 16:56:40 +0700
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Tue, 26 Jul 2022 16:11:09 +0700
Finished: Tue, 26 Jul 2022 16:56:26 +0700
Ready: True
Restart Count: 1
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-6dc865cd86-9fmsk (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nfmrc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-nfmrc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning NodeNotReady 44m node-controller Node is not ready
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "webhook-cert" : object "ingress-nginx"/"ingress-nginx-admission" not registered
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "kube-api-access-nfmrc" : failed to sync configmap cache: timed out waiting for the condition
Normal SandboxChanged 43m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 43m kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.3.0@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5" already present on machine
Normal Created 43m kubelet Created container controller
Normal Started 43m kubelet Started container controller
Warning Unhealthy 42m (x2 over 42m) kubelet Liveness probe failed: Get "http://10.244.0.23:10254/healthz": dial tcp 10.244.0.23:10254: connect: connection refused
Warning Unhealthy 42m (x3 over 43m) kubelet Readiness probe failed: Get "http://10.244.0.23:10254/healthz": dial tcp 10.244.0.23:10254: connect: connection refused
Normal RELOAD 42m nginx-ingress-controller NGINX reload triggered due to a change in configuration
And here is the nft ruleset:
bino@k8s-worker-1:~$ su -
Password:
root@k8s-worker-1:~# systemctl status nftables.service
● nftables.service - nftables
Loaded: loaded (/lib/systemd/system/nftables.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2022-07-26 05:56:17 EDT; 46min ago
Docs: man:nft(8)
http://wiki.nftables.org
Process: 186 ExecStart=/usr/sbin/nft -f /etc/nftables.conf (code=exited, status=0/SUCCESS)
Main PID: 186 (code=exited, status=0/SUCCESS)
CPU: 34ms
Warning: journal has been rotated since unit was started, output may be incomplete.
Complete ruleset is at https://pastebin.com/xd58rcQp
Kindly tell me what to do, what to check, or what to learn to fix this problem.
Sincerely
-bino-
My bad ...
There was a name mismatch between the IP pool definition YAML and the L2 advertisement YAML.
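For anyone hitting the same mismatch: the L2Advertisement must reference the IPAddressPool by its exact name. A minimal matching pair might look like this (pool name and address range from this question; the L2Advertisement name itself is arbitrary):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool01
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.186-192.168.1.191
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-pool01
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool01   # must match the IPAddressPool name exactly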

404 Not Found error after configuring the Nginx Ingress Controller

UPDATE:
The issue persists, but I used another way (a sub-domain name instead of a path) to bypass it:
ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
This error has bothered me for some time, and I hope with your help I can get to the bottom of it.
I have one K8S cluster (single node so far, to avoid any network related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I installed the nginx ingress controller.
Here is the ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is defined in above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I do a curl to the cluster ip of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I do a curl to the hostname with the path to the service, I get a 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname is resolved to the cluster ip of the nginx ingress service (nodeport):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana and this path does not exist in the grafana application, hence the 404. You need to configure grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: you are not adding the /grafana route. But most likely that path is already used by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the context root explicitly, as shown below.
You may put your grafana.ini in a ConfigMap, mount it to the original grafana.ini location, and recreate the deployment.
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
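A sketch of the ConfigMap approach described above (resource names are illustrative; /etc/grafana/grafana.ini is the default config location in the Grafana image, but verify it for your chart):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
---
# Deployment fragment: mount the ConfigMap over the default config file
# spec.template.spec:
#   volumes:
#   - name: grafana-ini
#     configMap:
#       name: grafana-ini
#   containers:
#   - name: grafana
#     volumeMounts:
#     - name: grafana-ini
#       mountPath: /etc/grafana/grafana.ini
#       subPath: grafana.ini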
I can also see there is no ingressClassName specified for your ingress. It should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...

How to make my first ingress work on bare-metal NodeIP?

I have a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: dev
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
Make the service:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: dev
  labels:
    app: hello
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
To check it, I also made a NodePort service:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
$ kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node-service NodePort 10.233.3.50 <none> 80:31263/TCP 9h
hello-service ClusterIP 10.233.45.159 <none> 80/TCP 44h
$ curl -I http://cluster.local:31263
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:31:28 GMT
Content-Length: 66
Content-Type: text/plain; charset=utf-8
I have verified that the service is working.
Install the bare-metal ingress controller (https://kubernetes.github.io/ingress-nginx/deploy/):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7gsft 0/1 Completed 0 10h
ingress-nginx-admission-patch-qj57b 0/1 Completed 1 10h
ingress-nginx-controller-8cf5559f8-mh6fr 1/1 Running 0 10h
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.233.52.118 <none> 80:30377/TCP,443:31682/TCP 10h
ingress-nginx-controller-admission ClusterIP 10.233.51.175 <none> 443/TCP 10h
Check it:
$ curl -I http://cluster.local:30377/healthz
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:39:04 GMT
Content-Type: text/html
Content-Length: 0
Connection: keep-alive
Make ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: "/hello"
        pathType: Prefix
Check it:
$ curl -I http://cluster.local:30377/hello
HTTP/1.1 404 Not Found
Date: Sat, 11 Sep 2021 07:40:43 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
It doesn't work. I spent a few days on this and tried adding an ExternalIP to the ingress controller.
Can anyone with experience setting up ingress tell me what I am doing wrong?
=(((
INFO about cluster:
$ kubectl get ingress -n dev
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-hello <none> cluster.local 80 10h
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kuber-ingress-01 Ready worker 10d v1.21.3
kuber-master1 Ready control-plane,master 10d v1.21.3
kuber-master2 Ready control-plane,master 10d v1.21.3
kuber-master3 Ready control-plane,master 10d v1.21.3
kuber-node-01 Ready worker 10d v1.21.3
kuber-node-02 Ready worker 10d v1.21.3
kuber-node-03 Ready worker 10d v1.21.3
Inventory:
kuber-master1 10.0.57.31
kuber-master2 10.0.57.32
kuber-master3 10.0.57.33
kuber-node-01 10.0.57.34
kuber-node-02 10.0.57.35
kuber-node-03 10.0.57.36
kuber-ingress-01 10.0.57.30
$ ping cluster.local
PING cluster.local (10.0.57.30) 56(84) bytes of data.
64 bytes from ingress.example.com (10.0.57.30): icmp_seq=1 ttl=62 time=0.603 ms
The solution is to add the following annotations to the ingress; then the ingress controller starts to see the DNS addresses.
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/use-regex: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /$1
Also, for convenience, I changed path: / to a regular expression:
- path: /v1(/|$)(.*)
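For completeness, the whole ingress with those annotations and the regex path, assembled from this answer and the resources in the question (an untested sketch):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80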

Kubernetes : Cannot interconnect pod in microservice application

I am working on a microservice application and I am unable to connect my React pod to my backend API pod.
The request is internal because I am using server-side rendering, so when the page first loads, the client pod connects directly to the backend pod. I am using ingress-nginx to connect them internally as well.
Endpoint (from React pod --> Express pod):
http://ingress-nginx-controller.ingress-nginx.svc.cluster.local
Ingress details:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.245.81.11 149.69.37.110 80:31702/TCP,443:31028/TCP 2d1h
Ingress-Config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: cultor.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3000
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 3000
Ingress log:
[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
Also, I am unable to ping ingress-nginx-controller.ingress-nginx.svc.cluster.local from inside the client pod.
EXTRA LOGS
$ kubectl get ns
NAME STATUS AGE
default Active 2d3h
ingress-nginx Active 2d1h
kube-node-lease Active 2d3h
kube-public Active 2d3h
kube-system Active 2d3h
#####
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-mongo-srv ClusterIP 10.245.155.193 <none> 27017/TCP 6h8m
auth-srv ClusterIP 10.245.1.179 <none> 3000/TCP 6h8m
client-srv ClusterIP 10.245.100.11 <none> 3000/TCP 6h8m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d3h
UPDATE:
Ingress logs:
[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
host: cultor.dev
x-request-id: 5cfd15996dc8481114b39a16f0be5f06
x-real-ip: 45.248.29.8
x-forwarded-for: 45.248.29.8
x-forwarded-proto: https
x-forwarded-host: cultor.dev
x-forwarded-port: 443
x-scheme: https
cache-control: max-age=0
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36
sec-fetch-site: none
sec-fetch-mode: navigate
sec-fetch-user: ?1
sec-fetch-dest: document
accept-encoding: gzip, deflate, br
accept-language: en-US,en-IN;q=0.9,en;q=0.8,la;q=0.7
This is a bug when using an Ingress load balancer with DigitalOcean as a proxy to connect pods internally via the load balancer:
Workaround:
DNS record for a custom hostname (at a provider of your choice) must be set up that points to the external IP address of the load-balancer. Afterwards, digitalocean-cloud-controller-manager must be instructed to return the custom hostname (instead of the external LB IP address) in the service ingress status field status.Hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation. Clients may then connect to the hostname to reach the load-balancer from inside the cluster.
Full official explanation of this bug
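In practice, the workaround is a single annotation on the ingress-nginx controller Service; for example (the hostname is assumed to be a DNS record that resolves to the LB's external IP, like the cultor.dev record from this question):
kubectl -n ingress-nginx annotate service ingress-nginx-controller \
  service.beta.kubernetes.io/do-loadbalancer-hostname=cultor.dev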

Google Kubernetes Engine Ingress doesn't work

I created an ingress following the guide in the 'Kubernetes in Action' book on GKE, but the ingress doesn't work: it can't be accessed from the ingress's public IP address.
Create the ReplicaSet to create the pods.
Create the Service (following the NodePort method in 'Kubernetes in Action').
Create the ingress.
The ReplicaSet, Service, and Ingress are created successfully, the NodePort can be accessed from the public IP address, and there is no UNHEALTHY backend in the ingress.
replicaset:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: sonyfaye/kubia
Service:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
The nodeport itself can be accessed from public IP addresses.
C:\kube>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 8d
kubia-nodeport NodePort 10.59.253.10 <none> 80:30123/TCP 20h
C:\kube>kubectl get node
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6
C:\kube>kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-kubia-default-pool-08dd2133-qbz6 Ready <none> 8d v1.12.8-gke.6 10.140.0.17 35.201.224.238 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-183639fa-18vr Ready <none> 8d v1.12.8-gke.6 10.140.0.18 35.229.152.12 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
gke-kubia-default-pool-42725220-43q8 Ready <none> 8d v1.12.8-gke.6 10.140.0.16 34.80.225.64 Container-Optimized OS from Google 4.14.119+ docker://17.3.2
C:\kube>curl http://34.80.225.64:30123
You've hit kubia-j2lnr
But the ingress can't be accessed from outside.
hosts file:
34.98.92.110 kubia.example.com
C:\kube>kubectl describe ingress
Name: kubia
Namespace: default
Address: 34.98.92.110
Default backend: default-http-backend:80 (10.56.0.7:8080)
Rules:
Host Path Backends
---- ---- --------
kubia.example.com
/ kubia-nodeport:80 (10.56.0.14:8080,10.56.1.6:8080,10.56.3.4:8080)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-30123--c4addd497b1e0a6d":"HEALTHY","k8s-be-30594--c4addd497b1e0a6d":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/target-proxy: k8s-tp-default-kubia--c4addd497b1e0a6d
ingress.kubernetes.io/url-map: k8s-um-default-kubia--c4addd497b1e0a6d
Events:
<none>
C:\kube>curl http://kubia.example.com
curl: (7) Failed to connect to kubia.example.com port 80: Timed out
C:\kube>telnet kubia.example.com 80
Connecting To kubia.example.com...
C:\kube>telnet 34.98.92.110 80
Connecting To 34.98.92.110...Could not open connection to the host, on port 80: Connect failed
Tried from the intranet:
curl 34.98.92.110 gets some result, and port 80 of 34.98.92.110 is accessible from the intranet.
C:\kube>kubectl exec -it kubia-lrt9x bash
root@kubia-lrt9x:/# curl http://kubia.example.com
curl: (6) Could not resolve host: kubia.example.com
root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404root@kubia-lrt9x:/# curl http://34.98.92.110
default backend - 404root@kubia-lrt9x:/#
root@kubia-lrt9x:/# curl http://10.56.0.7:8080
default backend - 404root@kubia-lrt9x:/#
Does anybody know how to debug this?
The NodePort has been added to the firewall, otherwise it would not be accessible. The Ingress IP does not seem to need to be added to the firewall.
Try exposing the ReplicaSet to be able to connect from the outside:
$ kubectl expose rs kubia --type=NodePort --name=my-service
Remember to first delete the kubia-nodeport service, remove the old service reference from the Ingress configuration file, and then apply the changes using kubectl apply.
You can find more information here: exposing-externalip.
Useful doc: kubectl-expose.
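For illustration, the Ingress backend would then point at the new service; a sketch in the same extensions/v1beta1 form as the question (my-service comes from the expose command above; kubectl expose copies the container port, so the service port is assumed to be 8080):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 8080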