502 Bad Gateway using Kubernetes with Ingress controller

I have a Kubernetes setup on Minikube with this configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: custom-docker-image:latest
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 3000
    # Port to forward to inside the pod
    targetPort: 3000
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 3000
I took a look at the solution to this Stack Overflow post and it seems my configuration is... fine?
What am I doing wrong to get 502 Bad Gateway when accessing http://192.xxx.xx.x, my Minikube address? The nginx-ingress-controller logs say:
...
Connection refused while connecting to upstream
...
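A quick way to confirm whether the 502 comes from the Service having no ready pods behind it (which is what "connection refused while connecting to upstream" usually means) is to check the pods and the endpoints; the names below match the manifests above:
kubectl get pods -l app=myapp
kubectl get endpoints example-service
# If ENDPOINTS is empty or the pods are not Running/Ready,
# the ingress controller has nothing healthy to forward to.
kubectl describe ingress example-ingress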
Another odd piece of info: when I follow this guide to setting up a basic node service on Kubernetes, everything works and I see a "Hello world" page when I open up the Minikube address.

Steps taken:
I ran
kubectl port-forward pods/myappdeployment-5dff57cfb4-6c6bd 3000:3000
Then I visited localhost:3000 and saw
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I figured the reason is more obvious in the logs so I ran
kubectl logs pods/myappdeployment-5dff57cfb4-6c6bd
and got
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
...
Thus I reasoned that I was originally getting a 502 because none of the pods were actually serving requests, since MySQL was never set up.
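If the app genuinely blocks on MySQL at startup, a readinessProbe would keep those pods out of the Service's endpoints until they can actually answer on port 3000, which makes the failure visible in kubectl get pods instead of as a 502. A minimal sketch for the container from the Deployment above; the probe path "/" is an assumption about the app:
# Sketch only: readinessProbe for the myapp container
containers:
- name: myapp
  image: custom-docker-image:latest
  ports:
  - containerPort: 3000
  readinessProbe:
    httpGet:
      path: /      # assumed to return 2xx once MySQL is reachable
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10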

Related

Localhost kubernetes ingress not exposing services to local machine

I'm running Kubernetes on localhost, the pod is running, and I can access the service when I port-forward:
kubectl port-forward svc/my-service 8080:8080
I can GET/POST etc. to the service on localhost.
I'm trying to use an ingress to access it; here is the yml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
I've also installed the ingress controller, but it isn't working as expected. Anything wrong with this?
EDIT: the deployment and service that I'm trying to connect to with the ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: test/my-service:0.0.1-SNAPSHOT
        name: my-service
        ports:
        - containerPort: 8080
        # ... other spring boot override properties
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
The service is working by itself, though.
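Before suspecting the manifests, it may help to confirm that the controller actually admitted the Ingress and that the Service has endpoints; a quick check using the names from the manifests above (the address placeholder is whatever kubectl get ingress reports):
kubectl get ingress my-ingress        # should eventually show an ADDRESS
kubectl describe ingress my-ingress   # events reveal admission or class problems
kubectl get endpoints my-service      # should list the pod IP on port 8080
curl -v http://<ingress-address>/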
EDIT:
It worked when I used https instead of http
Is the ingress resource in the same namespace as the service? Can you share the manifest of the service? Also, what do the logs of the nginx ingress-controller show, and what sort of error do you face when hitting the endpoint in the browser?
The Ingress's YAML file looks OK to me, BTW.
I was being stupid. It worked when I used https instead of http

Minikube NGINX Ingress returns 404 Not Found

I created a deployment, a service and an Ingress to be able to access an NGINX webserver from my host, but I keep getting 404 Not Found. After many hours of troubleshooting, I'm at a point where some help would be very welcome.
The steps and related yaml files are below.
Enable Minikube NGINX Ingress controller
minikube addons enable ingress
Create NGINX web server deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-white
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-webserver-white
  template:
    metadata:
      labels:
        app: nginx-webserver-white
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
Create ClusterIP Service to manage the access to the pods
apiVersion: v1
kind: Service
metadata:
  name: webserver-white-svc
  labels:
    run: webserver-white-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-webserver-white
Create Ingress to access service from outside the Cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: webserver-white-svc
      port:
        number: 80
  rules:
  - host: white.example.com # This is pointing to the control plane IP
    http:
      paths:
      - backend:
          service:
            name: webserver-white-svc
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
Tests
When connecting to one pod and executing curl http://localhost, it returns the NGINX homepage HTML, so the pod looks good.
When creating a testing pod and executing curl http://<service-cluster-ip>, it returns the NGINX homepage HTML, so the service looks good.
When connecting to the ingress nginx controller pod and executing curl http://<service-cluster-ip>, it also returns the NGINX homepage HTML, so the connection between the ingress controller and the service looks good.
When connecting to the control plane with minikube ssh and executing ping <nginx-controller-ip> I see that it reaches the nginx controller.
I tested the same, but with a NodePort Service instead of ClusterIP and noticed that I could access the NGINX homepage using the node port, but not the Ingress port.
Any idea what I could be doing wrong and/or what else I could do to better troubleshoot this issue?
Other notes
minikube version: v1.23.0
kubectl version on the client and server: v1.22.1
OS: Ubuntu 18.04.5 LTS (Bionic Beaver)
UPDATE/SOLUTION:
The solution was to add the missing annotation kubernetes.io/ingress.class: "nginx" on the Ingress.
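On newer clusters the same fix is usually expressed with spec.ingressClassName instead of the deprecated annotation; a sketch of the equivalent Ingress header, assuming the Minikube addon registers an IngressClass named nginx (kubectl get ingressclass will confirm):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  # defaultBackend and rules unchanged from the manifest above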

cannot hit pod in kubernetes cluster from other pod but can from ingress

I'm able to hit a pod from outside my k8s cluster using an ingress but cannot from within the cluster and am getting a "connection refused" error. I tried to shell into the pod that's refusing connections and run the following curls which work just fine when running in my local/host environment:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
to no avail. The cluster IP is 10.99.224.173, but that times out, and I'd prefer not to bypass DNS anyway since the IP is dynamically assigned by k8s. The service is a Node.js one. I can add more information, but figured I'd err on the side of too little rather than too much. To isolate the issue as a k8s problem, I've run the two services locally outside of k8s with no issues. I think a good starting point would be to identify why I can't curl the server from within the same pod. Thanks!
EDIT 2: closing the cluster from skaffold and re-running skaffold dev resolved this issue and I'm now able to run the following just fine:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
I found that the tchannel-node library does not accept 0.0.0.0 as a valid IP address to listen on, and the closest I can pass is 127.0.0.1. Unfortunately, this means that calls to the cluster IP 10.99.224.173:9090 will never be registered by the server, because it is bound to 127.0.0.1:9090 rather than 0.0.0.0:9090. I'm wondering how I can fix my understanding to pass the correct IP address.
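One way to see this directly is to check which address the process is actually bound to inside the pod; this sketch assumes the image ships ss or netstat (many slim images do not) and the pod name is a placeholder:
kubectl exec -it <auth-pod-name> -- sh
# inside the container:
ss -tlnp        # or: netstat -tlnp
# 127.0.0.1:9090 means only loopback traffic is accepted;
# 0.0.0.0:9090 (or the pod IP) is what the Service needs to reach it.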
EDIT (requested yaml files):
client
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: mine/tickets-go
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-svc
spec:
  selector:
    app: tickets
  ports:
  - name: tickets
    protocol: TCP
    port: 4004
    targetPort: 4004
server that refuses connections
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: mine/auth
        env:
        - name: PORT
          value: "4000"
        - name: TCHANNEL_PORT
          value: "9090"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 4000
    targetPort: 4000
  - name: auth-thrift
    protocol: TCP
    port: 9090
    targetPort: 9090
ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /api/v1/users/?(.*)
        backend:
          service:
            name: auth-svc
            port:
              number: 4000
        pathType: Prefix
      - path: /api/v1/tickets/?(.*)
        backend:
          service:
            name: tickets-svc
            port:
              number: 4004
        pathType: Prefix
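For completeness, from any other pod in the cluster the auth service would normally be reached through its Service DNS name rather than localhost; a quick check, where the busybox test image and the default namespace are assumptions:
kubectl get endpoints auth-svc
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://auth-svc.default.svc.cluster.local:4000/api/v1/users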

Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress

Getting an error message when I try to access it with the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
Looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: dockersamples/examplevotingapp_vote
        ports:
        - containerPort: 80
  replicas: 2
  selector:
    matchLabels:
      app: voting-app
There could be one of many things wrong here, but essentially your ingress cannot get to your backend.
Is your backend up and running?
Check that the backend pods are "Running":
kubectl get pods
Check that the backend deployment has all replicas up:
kubectl get deploy
Connect to the app pod and run a localhost:80 request
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some insight as to why you can't reach the backend service. What HTTP error code are you seeing?
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
Looks like the Kong ingress is not able to resolve DNS with headless DNS. We need to mention the FQDN in the ingress YAML.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
Try this, I think it will work.
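The namespace detail is worth spelling out: an Ingress backend can only reference a Service in the same namespace as the Ingress itself, so with the Ingress in kong and the app in default, the backend effectively did not exist from Kong's point of view. Two consistent layouts, reusing the file names from above (the deployment and service manifests carry no namespace, so -n selects it):
# Option A: deploy the app into the kong namespace alongside the Ingress
kubectl apply -n kong -f deployment.yaml -f service.yaml
# Option B: keep the app in default and set metadata.namespace: default
# on the Ingress instead, then apply it as-is
kubectl apply -f ingress.yaml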

Routing troubleshooting with Kubernetes Ingress

I tried to set up a GKE environment with a frontend pod (cup-fe) and a backend one, used to authenticate the user upon login (cup-auth), but I can't get my ingress to work.
The following is the frontend deployment (cup-fe), running nginx with an Angular app. I also created a static IP address that the "cup.xxx.it" and "cup-auth.xxx.it" DNS names resolve to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-fe
  namespace: default
  labels:
    app: cup-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "cup-fe"
  template:
    metadata:
      labels:
        app: "cup-fe"
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-fe:latest"
        name: "cup-fe"
      dnsPolicy: ClusterFirst
Then there is the auth deployment (cup-auth):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-auth
  namespace: default
  labels:
    app: cup-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cup-auth
  template:
    metadata:
      labels:
        app: cup-auth
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-auth:latest"
        imagePullPolicy: Always
        name: cup-auth
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        - containerPort: 8888
          name: management
          protocol: TCP
      dnsPolicy: ClusterFirst
Then I created two NodePort Services to expose the above deployments:
kubectl expose deployment cup-fe --type=NodePort --port=80
kubectl expose deployment cup-auth --type=NodePort --port=8080
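Before wiring up the Ingress, it may help to confirm what ports the generated Services actually got and that they have endpoints:
kubectl get svc cup-fe cup-auth
kubectl get endpoints cup-fe cup-auth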
Last, I created an Ingress to route external HTTP requests to the services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  namespace: default
  labels:
    app: http-ingress
spec:
  rules:
  - host: cup.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-fe
          servicePort: 80
  - host: cup-auth.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-auth
So, I can reach the frontend pod at http://cup.xxx.it, and the Angular app redirects me to http://cup-auth.xxx.it/login, but I only get 502 Bad Gateway. With the kubectl describe ingress command, I can see an unhealthy backend for cup-auth.
Here is a successful output, requesting cup.xxx.it/login from inside a cup-fe pod:
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup.xxx.it/login
Connecting to cup.xxx.it
login 100% |********************************| 1646 0:00:00 ETA
And here is the failing output:
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup-auth.xxx.it/login
Connecting to cup-auth.xxx.it
wget: server returned error: HTTP/1.1 502 Bad Gateway
command terminated with exit code 1
I tried and replicated your setup as much as I could, but did not have any issues.
I can call the cup-auth.testdomain.internal/login normally within and outside the pods.
Usually, 502 errors occur when a request received by the LB can't be forwarded to a backend. Since you mention that you are seeing an unhealthy backend, this could be the reason.
This could be due to a wrong configuration of the health checks or a problem with your application.
First I would look at the logs to see why the request is failing, and rule out any issue with the health checks or with the application itself.
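On GKE specifically, the load balancer's health check expects an HTTP 200 by default, and login/auth backends often answer / with a redirect, which marks the backend unhealthy. One common way to steer the generated health check is a readinessProbe on a path that really does return 200; a sketch for the cup-auth container, where /login answering 200 without authentication is an assumption:
# Sketch only: add to the cup-auth container in the Deployment above
readinessProbe:
  httpGet:
    path: /login   # assumed to return 200 unauthenticated
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10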