getting 502 Bad Gateway on eks aws-alb-ingress - kubernetes

I created an EKS cluster in AWS via Terraform, using terraform-aws-modules/eks/aws as the module. The cluster runs one pod (a Go app) exposed through a NodePort service and an Ingress. The problem I have is that I get 502 Bad Gateway when I hit the endpoint.
My config:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-deployment
  labels:
    app: golang-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: golang-app
  template:
    metadata:
      labels:
        name: golang-app
    spec:
      containers:
      - name: golang-app
        image: 019496914213.dkr.ecr.eu-north-1.amazonaws.com/goland:1.0
        ports:
        - containerPort: 9000
Service:
kind: Service
apiVersion: v1
metadata:
  name: golang-service
spec:
  type: NodePort
  selector:
    app: golang-app
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
  labels:
    app: app
spec:
  rules:
  - http:
      paths:
      - path: /api/v2
        pathType: ImplementationSpecific
        backend:
          service:
            name: golang-service
            port:
              number: 9000
kubectl get service
golang-service NodePort 172.20.44.34 <none> 9000:32184/TCP 106m
The security groups for the cluster and nodes were created by the terraform-aws-modules/eks/aws module.
I checked several things:
kubectl port-forward golang-deployment-5894d8d6fc-ktmmb 9000:9000
WORKS! I can see the golang app at localhost:9000 on my computer.
kubectl exec curl -i --tty nslookup golang-app
Server: 172.20.0.10
Address 1: 172.20.0.10 kube-dns.kube-system.svc.cluster.local
Name: golang-app
Address 1: 172.20.130.130 golang-app.default.svc.cluster.local
WORKS!
kubectl exec curl -i --tty curl golang-app:9000
curl: (7) Failed to connect to golang-app port 9000: Connection refused
DOES NOT WORK
Any idea?

You should be calling the service, not the deployment.
golang-service is the Service name; golang-app is the Deployment name:
kubectl exec curl -i --tty curl golang-service:9000
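If it still refuses the connection after switching to the service name, a generic follow-up check (not part of the original answer) is to look at the Service's endpoints; an empty list means the Service selector does not match the pod labels:
kubectl get endpoints golang-service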

Related

How to deploy Ingress to expose MinIO cluster outside

I have set up MinIO in Kubernetes (k3s), a one-node installation.
Services
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio
  labels:
    app: minio
spec:
  clusterIP: None
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
  labels:
    app: minio
spec:
  type: LoadBalancer
  selector:
    app: minio
  ports:
  - port: 9012
    targetPort: 9011
    protocol: TCP
StatefulSet
[. . .]
containers:
- name: ches
  image: minio/minio
  args:
  - server
  - /data
[. . .]
  - containerPort: 9000
    hostPort: 9011
[. . .]
Command kubectl logs minio-0 -n minio returns the following:
API: http://10.42.0.14:9000 http://127.0.0.1:9000
Console: http://10.42.0.14:41989 http://127.0.0.1:41989
I am trying to set up Ingress. The steps I followed are:
Setup an Ingress Controller
From here: https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
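Before wiring the Ingress, it can help to confirm the controller actually came up (assuming the default names used by the chart above):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller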
Create an Internal Service
apiVersion: v1
kind: Service
metadata:
  name: minio-service-ingress
  namespace: minio
  labels:
    app: minio
spec:
  selector:
    app: minio
  ports:
  - port: 9011
    name: minio
Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio
spec:
  rules:
  - host: minio.com
    http:
      paths:
      - backend:
          service:
            name: minio-service-ingress
            port:
              number: 9011
        path: /
        pathType: Prefix
When executing kubectl get ing -n minio:
NAME CLASS HOSTS ADDRESS PORTS AGE
minio-ingress <none> minio.com 192.168.1.14 80 43m
In /etc/hosts I added the entry:
192.168.1.14 minio.com
However, when I try to open http://minio.com/ in a browser I get:
Bad Gateway
Am I missing something here?
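For anyone hitting a similar Bad Gateway from ingress-nginx, a generic first step (not from the original question) is to read the controller logs, which show which upstream each request was proxied to and why it failed (assuming the default deployment name from the chart above):
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller | grep minio.com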

cannot hit pod in kubernetes cluster from other pod but can from ingress

I'm able to hit a pod from outside my k8s cluster using an ingress, but cannot from within the cluster, and I'm getting a "connection refused" error. I tried to shell into the pod that's refusing connections and run the following curls, which work just fine in my local/host environment:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
to no avail. The cluster IP is 10.99.224.173, but that times out, and I'd prefer not to bypass DNS anyway since the IP is dynamically assigned by k8s. The service is a Node.js one. I can add more information, but figured I'd err on the side of too little information rather than too much. To isolate the issue as a k8s problem, I've run the two services locally outside of k8s with no issues. I think a good starting point would be to identify why I can't curl the server from within the same pod. Thanks!
EDIT 2: stopping the cluster from skaffold and re-running skaffold dev resolved this issue, and I'm now able to run the following just fine:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
I found that the tchannel-node library does not accept 0.0.0.0 as a valid IP address to listen on, and the closest I can pass is 127.0.0.1. Unfortunately, this means that calls to the cluster IP 10.99.224.173:9090 will never be registered by a server listening on 127.0.0.1:9090 the way they would with 0.0.0.0:9090. I'm wondering how I can fix my understanding to pass the correct IP address.
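One common Kubernetes pattern for this (a sketch, not taken from the original post) is to inject the pod's own IP through the Downward API and have the server listen on that address instead of 127.0.0.1 or 0.0.0.0. The variable name HOST_IP below is hypothetical; the app would have to read it at startup and pass it as the listen host:
env:
  - name: HOST_IP              # hypothetical: the app reads this and binds to it
    valueFrom:
      fieldRef:
        fieldPath: status.podIP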
EDIT (requested yaml files):
client
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: mine/tickets-go
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-svc
spec:
  selector:
    app: tickets
  ports:
  - name: tickets
    protocol: TCP
    port: 4004
    targetPort: 4004
server that refuses connections
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: mine/auth
        env:
        - name: PORT
          value: "4000"
        - name: TCHANNEL_PORT
          value: "9090"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 4000
    targetPort: 4000
  - name: auth-thrift
    protocol: TCP
    port: 9090
    targetPort: 9090
ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /api/v1/users/?(.*)
        backend:
          service:
            name: auth-svc
            port:
              number: 4000
        pathType: Prefix
      - path: /api/v1/tickets/?(.*)
        backend:
          service:
            name: tickets-svc
            port:
              number: 4004
        pathType: Prefix

Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress

I am getting this error message when I try to access the app with the public IP:
{"message":"failure to get a peer from the ring-balancer"}
Looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: dockersamples/examplevotingapp_vote
        ports:
        - containerPort: 80
  replicas: 2
  selector:
    matchLabels:
      app: voting-app
There could be one of many things wrong here. But essentially your ingress cannot get to your backend.
Is your backend up and running?
Check backend pods are "Running"
kubectl get pods
Check backend deployment has all replicas up
kubectl get deploy
Connect to the app pod and run a localhost:80 request
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some insight into why you can't reach the backend service. What HTTP error code are you seeing?
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve the DNS of a headless service. We need to mention the FQDN in the ingress yaml:
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
Try this, I think it will work.
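As a side note, extensions/v1beta1 Ingress was removed in Kubernetes 1.22, so on newer clusters the same rule would have to be written against networking.k8s.io/v1, roughly like this (a sketch, not from the original answer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: voting-service
            port:
              number: 80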

Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
      - name: node-hello-world
        image: node-hello-world:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the service selector wrong:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and seeing that the endpoints list is empty, which means no pods are mapped to your service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
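For completeness, the Service from the question with the corrected selector would look like this (only the selector key changes):
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30002
  selector:
    app: node-hello-world    # matches the pod label set by the Deployment
After re-applying it, the Endpoints line of kubectl describe svc should show the pod IP instead of <none>.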

Routing troubleshooting with Kubernetes Ingress

I tried to set up a GKE environment with a frontend pod (cup-fe) and a backend one, used to authenticate the user upon login (cup-auth), but I can't get my ingress to work.
Following is the frontend pod (cup-fe), running nginx with an Angular app. I also created a static IP address resolved by the "cup.xxx.it" and "cup-auth.xxx.it" DNS names:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-fe
  namespace: default
  labels:
    app: cup-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "cup-fe"
  template:
    metadata:
      labels:
        app: "cup-fe"
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-fe:latest"
        name: "cup-fe"
      dnsPolicy: ClusterFirst
Next is the auth pod (cup-auth):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-auth
  namespace: default
  labels:
    app: cup-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cup-auth
  template:
    metadata:
      labels:
        app: cup-auth
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-auth:latest"
        imagePullPolicy: Always
        name: cup-auth
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        - containerPort: 8888
          name: management
          protocol: TCP
      dnsPolicy: ClusterFirst
Then I created two NodePorts to expose the above pods:
kubectl expose deployment cup-fe --type=NodePort --port=80
kubectl expose deployment cup-auth --type=NodePort --port=8080
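A quick sanity check at this point (not in the original post) is to confirm both services exist and got node ports assigned:
kubectl get svc cup-fe cup-auth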
Last, I created an ingress to route external http requests towards services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  namespace: default
  labels:
    app: http-ingress
spec:
  rules:
  - host: cup.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-fe
          servicePort: 80
  - host: cup-auth.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-auth
So, I can reach the frontend pod at http://cup.xxx.it and the Angular app redirects me to http://cup-auth.xxx.it/login, but I only get 502 Bad Gateway. With the kubectl describe ingress command, I can see an unhealthy backend for the cup-auth pod.
Here is a successful output (wget from inside a cup-fe pod):
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup.xxx.it/login
Connecting to cup.xxx.it
login 100% |********************************| 1646 0:00:00 ETA
And then the not working output:
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup-auth.xxx.it/login
Connecting to cup-auth.xxx.it
wget: server returned error: HTTP/1.1 502 Bad Gateway
command terminated with exit code 1
I tried to replicate your setup as closely as I could, but did not have any issues.
I can call cup-auth.testdomain.internal/login normally both from within and outside the pods.
Usually, 502 errors occur when a request received by the load balancer couldn't be forwarded to a backend. Since you mention that you are seeing an unhealthy backend, this is likely the reason.
This could be due to a wrong configuration of the health checks or a problem with your application.
First, I would look at the logs to see why the request is failing, and rule out an issue with the health checks or with the application itself.
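On GKE, the GCE ingress controller derives its load-balancer health check from the pod's HTTP readinessProbe; without one it probes "/" and expects a 200, which an app that only answers on /login will fail, leaving the backend unhealthy. A readinessProbe along these lines on the cup-auth container (a sketch, assuming GET /login returns 200) is a common fix:
readinessProbe:
  httpGet:
    path: /login
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10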