Kubernetes: Cannot interconnect pods in a microservice application

I am working on a microservice application and I am unable to connect my React pod to my backend API pod.
The request is internal because I am using server-side rendering: when the page first loads, the client pod connects directly to the backend pod. I am using ingress-nginx to connect them internally as well.
Endpoint (from React pod --> Express pod):
http://ingress-nginx-controller.ingress-nginx.svc.cluster.local
Ingress details:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.245.81.11 149.69.37.110 80:31702/TCP,443:31028/TCP 2d1h
Ingress-Config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: cultor.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
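For reference, extensions/v1beta1 Ingress was removed in Kubernetes 1.22; a sketch of the same rules in the networking.k8s.io/v1 schema already used elsewhere on this page (pathType: ImplementationSpecific is an assumption here, chosen because the paths are regexes):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: cultor.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /?(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: client-srv
                port:
                  number: 3000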
Ingress log:
[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
Also, I am unable to ping ingress-nginx-controller.ingress-nginx.svc.cluster.local from inside the client pod.
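Worth noting as an aside: ClusterIP addresses are virtual and generally do not answer ICMP, so a failed ping does not by itself mean the service is unreachable. A hedged sketch of a more telling check (the pod name is a placeholder, and it assumes nslookup/curl are available in the client image; the Host header matters because the ingress rule above only matches cultor.dev):
# Check DNS resolution of the internal ingress service name from the client pod:
kubectl exec -it <client-pod-name> -- nslookup ingress-nginx-controller.ingress-nginx.svc.cluster.local
# Check HTTP reachability, setting the Host header expected by the ingress rule:
kubectl exec -it <client-pod-name> -- curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Host: cultor.dev" \
  http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser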
EXTRA LOGS
$ kubectl get ns
NAME STATUS AGE
default Active 2d3h
ingress-nginx Active 2d1h
kube-node-lease Active 2d3h
kube-public Active 2d3h
kube-system Active 2d3h
#####
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-mongo-srv ClusterIP 10.245.155.193 <none> 27017/TCP 6h8m
auth-srv ClusterIP 10.245.1.179 <none> 3000/TCP 6h8m
client-srv ClusterIP 10.245.100.11 <none> 3000/TCP 6h8m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d3h
UPDATE:
Ingress logs:
[error] 1230#1230: *1253654 broken header: "GET /api/users/currentuser HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
host: cultor.dev
x-request-id: 5cfd15996dc8481114b39a16f0be5f06
x-real-ip: 45.248.29.8
x-forwarded-for: 45.248.29.8
x-forwarded-proto: https
x-forwarded-host: cultor.dev
x-forwarded-port: 443
x-scheme: https
cache-control: max-age=0
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36
sec-fetch-site: none
sec-fetch-mode: navigate
sec-fetch-user: ?1
sec-fetch-dest: document
accept-encoding: gzip, deflate, br
accept-language: en-US,en-IN;q=0.9,en;q=0.8,la;q=0.7

This is a known issue when using the ingress load balancer on DigitalOcean as a proxy to connect pods internally via the load balancer:
Workaround:
A DNS record for a custom hostname (at a provider of your choice) must be set up that points to the external IP address of the load balancer. Afterwards, digitalocean-cloud-controller-manager must be instructed to return the custom hostname (instead of the external LB IP address) in the service ingress status field status.Hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation. Clients may then connect to the hostname to reach the load balancer from inside the cluster.
Full official explanation of this bug
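A sketch of what that workaround looks like in practice (hedged: it assumes a DNS A record for cultor.dev already points at the load balancer's external IP, 149.69.37.110 in the output above):
# Annotate the ingress-nginx controller Service so digitalocean-cloud-controller-manager
# reports the hostname instead of the LB IP in the Service status:
kubectl annotate service ingress-nginx-controller -n ingress-nginx \
  service.beta.kubernetes.io/do-loadbalancer-hostname=cultor.dev
# Confirm the hostname now appears in the load balancer status:
kubectl get service ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'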

Related

Kubernetes ingress redirects to 504

I'm trying to learn Kubernetes with a couple of Raspberry Pis at home. I'm running Pi-hole in the cluster, which has worked; the issue I'm facing now is a redirect issue with the ingress.
My ingress.yaml file:
## pihole.ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: pihole
  name: pihole-ingress
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: pihole.192.168.1.230.nip.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: pihole-web
                port:
                  number: 80
output of kubectl describe ingress:
Name:             pihole-ingress
Namespace:        pihole
Address:          192.168.1.230
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  pihole.192.168.1.230.nip.io
                               /     pihole-web:80 (10.42.2.7:80)
Annotations:                   kubernetes.io/ingress.class: nginx
                               nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    18s (x12 over 11h)  nginx-ingress-controller  Scheduled for sync
Output of get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.43.240.186 <none> 443/TCP 22h
ingress-nginx-controller LoadBalancer 10.43.64.54 192.168.1.230 80:31093/TCP,443:30179/TCP 22h
I'm able to get into the pod and curl the cluster IP to get the output I expect, but when I try to visit pihole.192.168.1.230, I get a 504 error. I'm hoping someone can assist with getting my ingress to route to the pihole-web service. Please let me know if there's any additional information I can provide.
EDIT:
kubectl get po -n pihole -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pihole-7d4dc6b8d8-vclxz 1/1 Running 0 9h 10.42.2.8 node02.iad <none> <none>
kubectl get svc -n pihole
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pihole-web ClusterIP 10.43.102.198 <none> 80/TCP,443/TCP 9h
pihole-dhcp NodePort 10.43.191.110 <none> 67:32021/UDP 9h
pihole-dns-udp NodePort 10.43.214.15 <none> 53:31153/UDP 9h
pihole-dns-tcp NodePort 10.43.168.6 <none> 53:32754/TCP 9h
Another edit: since this question was originally posted and the above edit was made, the Pi-hole pod IP changed from 10.42.2.7 to 10.42.2.8.
I checked the logs for the ingress controller and saw the following. Hoping someone can help me decipher this:
2021/09/03 17:52:35 [error] 1938#1938: *3132346 upstream timed out (110: Operation timed out) while connecting to upstream, client: 10.42.1.1, server: pihole.192.168.1.230.nip.io, request: "GET / HTTP/1.1", upstream: "http://10.42.2.8:80/", host: "pihole.192.168.1.230.nip.io", referrer: "http://pihole.192.168.1.230.nip.io/"
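A troubleshooting sketch for this symptom (the controller pod name is a placeholder, and it assumes curl is available in the controller image): an upstream timeout while the endpoint list is correct usually points at inter-node pod networking rather than the Ingress object itself, so checking reachability from the controller pod is a reasonable first step.
# Confirm the Service still points at the current pod IP:
kubectl get endpoints -n pihole pihole-web
# Try to reach the pod IP and the ClusterIP from inside the ingress controller pod:
kubectl exec -n ingress-nginx <ingress-nginx-controller-pod> -- curl -sv -m 5 http://10.42.2.8:80/
kubectl exec -n ingress-nginx <ingress-nginx-controller-pod> -- curl -sv -m 5 http://10.43.102.198/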

kubernetes ingress 502 bad gateway

I installed a Kubernetes Cluster on bare metal (using VMware virtual machines) with the following nodes
master-01 Ready control-plane,master 5d3h v1.21.3
master-02 Ready control-plane,master 5d3h v1.21.3
master-03 Ready control-plane,master 5d3h v1.21.3
worker-01 Ready <none> 5d2h v1.21.3
worker-02 Ready <none> 5d2h v1.21.3
worker-03 Ready <none> 5d2h v1.21.3
MetalLB is installed as the load balancer for the cluster and Calico as the CNI.
I also installed the NGINX ingress controller with Helm:
$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install ingress-controller nginx-stable/nginx-ingress
I deployed a simple nginx server for testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx-app
  #type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
My deployments with LoadBalancer-type services get their IPs from MetalLB and work fine, but when I add an ingress, although an IP is assigned, I get a 502 Bad Gateway error, as shown in the logs below.
The firewall is enabled but the required ports are opened:
6443/tcp 2379-2380/tcp 10250-10252/tcp 179/tcp 7946/tcp 7946/udp 8443/tcp on master nodes
10250/tcp 30000-32767/tcp 7946/tcp 7946/udp 8443/tcp 179/tcp on worker nodes
My services and pods work fine:
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-controller-nginx-ingress LoadBalancer 10.101.17.180 10.1.210.100 80:31509/TCP,443:30004/TCP 33m app=ingress-controller-nginx-ingress
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d <none>
nginx-service ClusterIP 10.101.48.198 <none> 80/TCP 31m app=nginx-app
My ingress logs show an error with no route to the internal pod IP:
2021/07/29 07:46:24 [error] 42#42: *8 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:07:46:24 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
W0729 07:50:16.416830 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
192.168.2.131 - - [29/Jul/2021:07:51:03 +0000] "GET / HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "-"
192.168.2.131 - - [29/Jul/2021:07:51:03 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://10.1.210.100/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" "-"
W0729 07:56:43.420282 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0729 08:05:28.422594 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0729 08:10:45.425329 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
2021/07/29 08:13:59 [error] 42#42: *12 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:08:13:59 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
2021/07/29 08:14:09 [error] 42#42: *14 connect() failed (113: No route to host) while connecting to upstream, client: 10.1.210.5, server: myapp.com, request: "GET / HTTP/1.1", upstream: "http://192.168.171.17:80/", host: "myapp.com"
10.1.210.5 - - [29/Jul/2021:08:14:09 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.68.0" "-"
Any ideas, please?
EDIT: As asked, here is the description of the services and pods.
$ kubectl describe pod nginx-deployment-6f7d8d4d55-sncdr
Name: nginx-deployment-6f7d8d4d55-sncdr
Namespace: default
Priority: 0
Node: worker-01/10.1.210.63
Start Time: Thu, 29 Jul 2021 08:43:59 +0100
Labels: app=nginx-app
pod-template-hash=6f7d8d4d55
Annotations: cni.projectcalico.org/podIP: 192.168.171.17/32
cni.projectcalico.org/podIPs: 192.168.171.17/32
Status: Running
IP: 192.168.171.17
IPs:
IP: 192.168.171.17
Controlled By: ReplicaSet/nginx-deployment-6f7d8d4d55
Containers:
nginx:
Container ID: docker://fc61b73f8a833ad13b8956d8ce151b221b75a58a9a2fbae928464f3b0a77cca2
Image: nginx
Image ID: docker-pullable://nginx@sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 29 Jul 2021 08:44:01 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkc48 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-wkc48:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned default/nginx-deployment-6f7d8d4d55-sncdr to worker-01
Normal Pulling 16m kubelet Pulling image "nginx"
Normal Pulled 16m kubelet Successfully pulled image "nginx" in 1.51808376s
Normal Created 16m kubelet Created container nginx
Normal Started 16m kubelet Started container nginx
$ kubectl describe svc ingress-controller-nginx-ingress
Name: ingress-controller-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=ingress-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-controller-nginx-ingress
helm.sh/chart=nginx-ingress-0.10.0
Annotations: meta.helm.sh/release-name: ingress-controller
meta.helm.sh/release-namespace: default
Selector: app=ingress-controller-nginx-ingress
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.17.180
IPs: 10.101.17.180
LoadBalancer Ingress: 10.1.210.100
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31509/TCP
Endpoints: 192.168.37.202:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 30004/TCP
Endpoints: 192.168.37.202:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31108
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 18m metallb-controller Assigned IP "10.1.210.100"
Normal nodeAssigned 3m21s (x182 over 18m) metallb-speaker announcing from node "worker-02"
$ kubectl describe svc nginx-service
Name: nginx-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx-app
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.48.198
IPs: 10.101.48.198
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.171.17:80
Session Affinity: None
Events: <none>
$ kubectl exec -it ingress-controller-nginx-ingress-dd5db86dc-gqdpm -- /bin/bash
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 192.168.171.17:80
curl: (7) Failed to connect to 192.168.171.17 port 80: No route to host
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 192.168.171.17
curl: (7) Failed to connect to 192.168.171.17 port 80: No route to host
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl 10.101.48.198
curl: (7) Failed to connect to 10.101.48.198 port 80: Connection timed out
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$ curl nginx-deployment-6f7d8d4d55-sncdr
curl: (6) Could not resolve host: nginx-deployment-6f7d8d4d55-sncdr
nginx@ingress-controller-nginx-ingress-dd5db86dc-gqdpm:/$
To be honest, I don't understand why curl to the service IP doesn't work any more; yesterday it worked.
The problem was a firewall issue: I disabled firewalld and it works now. I thought I had to open port 8443, but it seems to be another port; if anyone can tell me which one, that would help.
Thank you
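For what it's worth, a hedged guess at the specific rule (an assumption about the Calico encapsulation mode, not something confirmed in the question): Calico's VXLAN mode uses UDP 4789, while its default IP-in-IP mode uses IP protocol 4, which is not a TCP/UDP port at all, so simply opening ports may not be enough and masquerade (see the following answer) or disabling firewalld works instead.
# Sketch, assuming Calico runs in VXLAN mode; not applicable if IP-in-IP is in use:
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload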
I had a similar issue with a Traefik ingress in k3s. I enabled masquerade in firewalld:
firewall-cmd --permanent --add-masquerade && firewall-cmd --reload
Credit to this post for the idea: https://github.com/k3s-io/k3s/issues/1646#issuecomment-881191877

Communication Between Two Services in Kubernetes Cluster Using Ingress as API Gateway

I am having problems getting communication between two services in a Kubernetes cluster. We are using a Kong ingress object as an 'API gateway' to reroute HTTP calls from a simple Angular frontend to a .NET Core 3.1 API controller backend.
In front of these two ClusterIP services sits an ingress controller that takes external HTTP(S) calls into our Kubernetes cluster and serves the frontend service. This ingress is shown here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.***.*******.com   # << Obfuscated
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              servicePort: 80
The first service is called 'frontend-service', a simple Angular 9 frontend that allows me to type in http strings and submit those strings to the backend.
The manifest yaml file for this is shown below. Note that the image name is obfuscated for various reasons.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: kong
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: frontend
          image: ***********/*******************:****   # << Obfuscated
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: kong
  name: frontend-service
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
The second service is a simple .NET Core 3.1 API interface that prints back some text when the controller is reached. The backend service is called 'dataapi' and has one simple Controller in it called ValuesController.
The manifest yaml file for this is shown below.
  replicas: 1
  selector:
    matchLabels:
      app: dataapi
  template:
    metadata:
      labels:
        app: dataapi
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: dataapi
          image: ***********/*******************:****   # << Obfuscated
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: dataapi
We are using a kong ingress as a proxy to redirect incoming http calls to the dataapi service. This manifest file is shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /dataapi
            pathType: Prefix
            backend:
              service:
                name: dataapi
                port:
                  number: 80
Performing a 'kubectl get all' produces the following output:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dataapi-dbc8bbb69-mzmdc 1/1 Running 0 2d2h
pod/frontend-5d5ffcdfb7-kqxq9 1/1 Running 0 65m
pod/ingress-kong-56f8f44fd5-rwr9j 2/2 Running 0 6d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dataapi ClusterIP 10.128.72.137 <none> 80/TCP,443/TCP 2d2h
service/frontend-service ClusterIP 10.128.44.109 <none> 80/TCP 2d
service/kong-proxy LoadBalancer 10.128.246.165 XX.XX.XX.XX 80:31289/TCP,443:31202/TCP 6d
service/kong-validation-webhook ClusterIP 10.128.138.44 <none> 443/TCP 6d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dataapi 1/1 1 1 2d2h
deployment.apps/frontend 1/1 1 1 2d
deployment.apps/ingress-kong 1/1 1 1 6d
NAME DESIRED CURRENT READY AGE
replicaset.apps/dataapi-dbc8bbb69 1 1 1 2d2h
replicaset.apps/frontend-59bf9c75dc 0 0 0 25h
replicaset.apps/ingress-kong-56f8f44fd5 1 1 1 6d
and 'kubectl get ingresses' gives:
NAME            CLASS    HOSTS (obfuscated)                                                     ADDRESS        PORTS   AGE
ingress-nginx   <none>   ***.******.com,**.********.com,**.****.com,**.******.com + 1 more...   xx.xx.xxx.xx   80      6d
kong-gateway    kong     *                                                                      xx.xx.xxx.xx   80      2d2h
From the frontend, the expectation is that constructing the http string:
http://kong-proxy/dataapi/api/values
will enter our 'values' controller in the backend and return the text string from that controller.
Both services are running on the same kubernetes cluster, here using Linode. Our thinking is that it is a 'within cluster' communication between two services both of type ClusterIP.
The error reported in the Chrome console is:
zone-evergreen.js:2828 GET http://kong-proxy/dataapi/api/values net::ERR_NAME_NOT_RESOLVED
Note that we found a StackOverflow issue similar to ours, and the suggestion there was to add 'default.svc.cluster.local' to the http string as follows:
http://kong-proxy.default.svc.cluster.local/dataapi/api/values
This did not work. We also substituted kong, which is the namespace of the service, for default like this:
http://kong-proxy.kong.svc.cluster.local/dataapi/api/values
yielding the same errors as above.
Is there a critical step I am missing? Any advice is greatly appreciated!
*************** UPDATE From Eric Gagnon's Response(s) **************
Again, thank you Eric for responding. Here is what my colleague and I have tried per your suggestions.
Pod dns misconfiguration: check if the pod's first nameserver equals the 'kube-dns' svc ip and if the search list starts with kong.svc.cluster.local:
kubectl exec -i -t -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- cat /etc/resolv.conf
nameserver 10.128.0.10
search kong.svc.cluster.local svc.cluster.local cluster.local members.linode.com
options ndots:5
kubectl get -n kube-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.128.0.10 <none> 53/UDP,53/TCP,9153/TCP 55d
kubectl describe -n kube-system svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: lke.linode.com/caplke-version: v1.19.9-001
prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.128.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.2.4.10:9153,10.2.4.14:9153
Session Affinity: None
Events: <none>
App not using pod DNS: in Node, output dns.getServers() to the console.
I do not understand where and how to do this. We tried to add DNS handling directly inside our Angular frontend app, but found out it is not possible.
Kong-proxy doesn't like something: set logging debug, hit the app a bunch of times, and grep logs.
We have tried two tests here. First, our kong-proxy service is reachable from an ingress controller. Note that this is not our simple frontend app; it is nothing more than a proxy that passes an http string to a public gateway we have set up. This does work. We have exposed it as:
http://gateway.cwg.stratbore.com/test/api/test
["Successfully pinged Test controller!!"]
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
So this works.
But when we try to do it from a simple frontend interface running in the same cluster as our backend, it does not work with the text shown in the text box (screenshot omitted). This command does not add anything new:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
The front end comes back with an error.
But if we do add this http text (screenshot omitted), the kong-ingress pod is hit:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
10.2.4.11 - - [17/Apr/2021:16:55:50 +0000] "GET /test/api/test HTTP/1.1" 200 52 "http://app-basic.cwg.stratbore.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
but the frontend gets an error back.
So at this point we have tried a lot of things to get our frontend app to send an HTTP request to our backend and get a response back, without success. I have also tried various configurations of the nginx.conf file that is packaged with our frontend app, but no luck there either.
I am about to package all of this up in a GitHub project. Thanks.
Chris,
I haven't used linode or kong and don't know what your frontend actually does, so I'll just point out what I can see:
The simplest dns check is to curl (or ping, dig, etc.):
http://[dataapi's pod ip]:80 from a host node
http://[kong-proxy svc's internal ip]/dataapi/api/values from a host node (or another pod - see below)
default path matching on nginx ingress controller is pathPrefix, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites to /. This may not be an issue if you properly specify all your ingresses so they take priority over "/".
you said 'using a kong ingress as a proxy to redirect incoming', just want to make sure you're proxying (not redirecting the client).
Is chrome just relaying its upstream error from frontend-service? An external client shouldn't be able to resolve the cluster's urls (unless you've joined your local machine to the cluster's network or done some other fancy trick). By default, dns only works within the cluster.
cluster dns generally follows [service name].[namespace name].svc.cluster.local. If cluster dns is working, then using curl, ping, wget, etc. from a pod in the cluster and pointing it to that svc will send it to the cluster svc ip, not an external ip.
is your dataapi service configured to respond to /dataapi/api/values or does it not care what the uri is?
If you don't have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace, and curl the service dns and the pod ip's directly:
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: kong
spec:
  containers:
    - name: curl-test
      image: buildpack-deps
      imagePullPolicy: Always
      command:
        - "curl"
        - "-v"
        - "http://dataapi:80/dataapi/api/values"
  #nodeSelector:
  #  kubernetes.io/hostname: [a more different node's hostname]
The pod should attempt dns resolution from the cluster. So it should find dataapi's svc ip and curl port 80 path /dataapi/api/values. Service IPs are virtual so they aren't actually 'reachable'. Instead, iptables routes them to the pod ip, which has an actual network endpoint and IS addressable.
once it completes, just check the logs: kubectl logs curl-test, and then delete it.
If this fails, the nature of the failure in the logs should tell you if it's a dns or link issue. If it works, then you probably don't have a cluster dns issue. But it's possible you have an inter-node communication issue. To test this, you can run the same manifest as above, but uncomment the node selector field to force it to run on a different node than your kong-proxy pod. It's a manual method, but it's quick for troubleshooting. Just rinse and repeat as needed for other nodes.
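A complementary quick check (a sketch, not something from the original answer; it assumes busybox's nslookup is sufficient here) is to resolve the service name from a throwaway pod and list the endpoints behind the service:
# Resolve the service DNS name from inside the cluster:
kubectl run -n kong dns-check --rm -it --restart=Never --image=busybox:1.35 -- \
  nslookup dataapi.kong.svc.cluster.local
# An empty ENDPOINTS column would mean the Service selector matches no pods:
kubectl get endpoints -n kong dataapi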
Of course, it may not be any of this, but hopefully this helps troubleshoot.
After a lot of help from Eric G (thank you!) on this, and reading this previous StackOverflow post, I finally solved the issue. As the answer in that link illustrates, our frontend pod was serving up our application in a web browser, which knows NOTHING about Kubernetes clusters.
As the link suggests, we added another rule to our nginx ingress to successfully route our HTTP requests to the proper service:
- host: gateway.*******.com
  http:
    paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gateway-service
            port:
              number: 80
Then from our Angular frontend, we sent our HTTP requests as follows:
...
http.get<string>("http://gateway.*******.com/api/name_of_controller");
...
And we were finally able to communicate with our backend service the way we wanted. Both frontend and backend in the same Kubernetes Cluster.

Can not access ingress service from within cluster

I am new to Kubernetes and have Minikube set up on Linux Mint 20.
I am trying to implement server-side rendering with Next.js, and I have installed ingress-nginx using Helm.
ingress-service.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: example.dev
      http:
        paths:
          - backend:
              serviceName: users-srv
              servicePort: 4000
            path: /api/users/?(.*)
          - backend:
              serviceName: ui-srv
              servicePort: 3000
            path: /?(.*)
In the Next.js app UI I want to access the ingress controller in order to make API calls from the server side. I tried:
axios.get('http://ingress-nginx-controller-admission/api/users/currentuser')
axios.get('http://ingress-nginx-controller/api/users/currentuser')
axios.get('http://ingress-service/api/users/currentuser')
but nothing is working.
kubectl get services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.45.123 172.42.42.100 80:31205/TCP,443:32568/TCP 80m
ingress-nginx-controller-admission ClusterIP 10.111.229.112 <none> 443/TCP 80m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d1h
ui-srv ClusterIP 10.99.20.51 <none> 3000/TCP 89s
users-mongo-srv ClusterIP 10.103.187.200 <none> 27017/TCP 89s
users-srv ClusterIP 10.99.15.244 <none> 4000/TCP 89s
Can anyone help me out?
Thanks in advance.
The ingress is designed to handle external traffic to the cluster, and as such it expects the request to arrive at the domain you specified (i.e. example.dev).
To access your APIs from inside a Pod, you should most definitely call the services behind the Ingress directly, such as users-srv or ui-srv.
If you really want to contact the ingress instead of the Service, you could try a couple of things:
Make example.dev point to the LoadBalancer IP address; for example, adding it to /etc/hosts on the cluster's nodes should work (or even internally in the Pod). But take into consideration that this means reaching the services by a long route when you could just access them with the service name.
Remove the host parameter from your rules, so the services are served generally at the IP address of the nginx controller; this should make using ingress-nginx-controller work as expected. This is not supported by all Ingress Controllers, but it could work.
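To make the first suggestion concrete, a minimal sketch (the pod name is a placeholder, and it assumes curl exists in the image) of calling the backing service directly by its service DNS name from inside the cluster:
kubectl exec -it <ui-pod-name> -- curl http://users-srv:4000/api/users/currentuser
# The fully qualified form also works and is unambiguous across namespaces:
kubectl exec -it <ui-pod-name> -- curl http://users-srv.default.svc.cluster.local:4000/api/users/currentuser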

Kubernetes - ingress-nginx "no active endpoint" error

I am building a microservice application. I am unable to send a request to one of the services using Postman:
Endpoint I am sending POST request to using postman:
http://cultor.dev/api/project
Error: "project-srv" does not have any active Endpoint (ingress-nginx returns a 503 error).
Note
All the other microservices, which use the exact same config, are running fine.
ingress-nginx config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: cultor.dev
      http:
        paths:
          - path: /api/project/?(.*)
            backend:
              serviceName: project-srv
              servicePort: 3000
          - path: /api/profile/?(.*)
            backend:
              serviceName: profile-srv
              servicePort: 3000
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
ClusterIP services:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.245.52.208 <none> 3000/TCP 40m
client-srv ClusterIP 10.245.199.94 <none> 3000/TCP 39m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 24d
nats-srv ClusterIP 10.245.1.58 <none> 4222/TCP,8222/TCP 39m
profile-srv ClusterIP 10.245.208.174 <none> 3000/TCP 39m
project-srv ClusterIP 10.245.131.56 <none> 3000/TCP 39m
LOGS
ingress-nginx:
45.248.29.8 - - [02/Oct/2020:15:16:52 +0000] "POST /api/project/507f1f77bcf86cd799439011 HTTP/1.1" 503 197 "-" "PostmanRuntime/7.26.5" 591 0.000 [default-project-srv-3000] [] - - - - e1ae0615f49091786d56cab2bb9c94c6
W1002 15:17:59.712320 8 controller.go:916] Service "default/project-srv" does not have any active Endpoint.
I1002 15:17:59.814364 8 main.go:115] successfully validated configuration, accepting ingress ingress-service in namespace default
W1002 15:17:59.827616 8 controller.go:916] Service "default/project-srv" does not have any active Endpoint.
It was an error with the labels, as suggested by @Kamol Hasan.
The pod selector label in the Deployment config was not matching the selector in its Service config.
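For anyone hitting the same "does not have any active Endpoint" error, a quick way to confirm a selector/label mismatch (the names follow the question; adjust to your own):
# No addresses listed here means the Service selector matches no running pods:
kubectl get endpoints project-srv
# Compare the Service selector with the labels actually set on the pods:
kubectl get svc project-srv -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels | grep project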