Routing troubleshooting with Kubernetes Ingress

I tried to set up a GKE environment with a frontend pod (cup-fe) and a backend one used to authenticate the user upon login (cup-auth), but I can't get my ingress to work.
Below is the frontend deployment (cup-fe), running nginx with an Angular app. I also created a static IP address that the "cup.xxx.it" and "cup-auth.xxx.it" DNS records resolve to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-fe
  namespace: default
  labels:
    app: cup-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "cup-fe"
  template:
    metadata:
      labels:
        app: "cup-fe"
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-fe:latest"
        name: "cup-fe"
      dnsPolicy: ClusterFirst
Next is the auth deployment (cup-auth):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-auth
  namespace: default
  labels:
    app: cup-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cup-auth
  template:
    metadata:
      labels:
        app: cup-auth
    spec:
      containers:
      - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-auth:latest"
        imagePullPolicy: Always
        name: cup-auth
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        - containerPort: 8888
          name: management
          protocol: TCP
      dnsPolicy: ClusterFirst
Then I created two NodePort services to expose the above pods:
kubectl expose deployment cup-fe --type=NodePort --port=80
kubectl expose deployment cup-auth --type=NodePort --port=8080
Last, I created an ingress to route external HTTP requests to the services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  namespace: default
  labels:
    app: http-ingress
spec:
  rules:
  - host: cup.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-fe
          servicePort: 80
  - host: cup-auth.xxx.it
    http:
      paths:
      - path: /*
        backend:
          serviceName: cup-auth
          servicePort: 8080
So, I can reach the frontend pod at http://cup.xxx.it, and the Angular app redirects me to http://cup-auth.xxx.it/login, but there I only get 502 Bad Gateway. With the kubectl describe ingress command, I can see an unhealthy backend for the cup-auth pod.
Here is a successful request against the frontend host:
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup.xxx.it/login
Connecting to cup.xxx.it
login 100% |********************************| 1646 0:00:00 ETA
And here is the failing one:
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup-auth.xxx.it/login
Connecting to cup-auth.xxx.it
wget: server returned error: HTTP/1.1 502 Bad Gateway
command terminated with exit code 1

I replicated your setup as closely as I could, but did not run into any issues.
I can call cup-auth.testdomain.internal/login normally, both from within the pods and from outside.
Usually, 502 errors occur when a request that reaches the load balancer cannot be forwarded to a healthy backend. Since you mention that you are seeing an unhealthy backend, this is likely the reason.
This could be due to a misconfiguration of the health checks or a problem with your application.
First, I would look at the logs to see why the requests are failing, and rule out an issue with the health checks or with the application itself.
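The health-check angle is worth spelling out: by default, a GKE HTTP(S) load balancer health-checks GET / and expects an HTTP 200. If cup-auth only answers on /login, the backend stays unhealthy and every request gets a 502. GKE derives the load balancer health check from the container's readinessProbe, so a sketch along these lines may fix it (assuming GET /login returns 200 without credentials; adjust the path otherwise):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-auth
spec:
  template:
    spec:
      containers:
      - name: cup-auth
        image: "eu.gcr.io/xxx-cup-yyyyyy/cup-auth:latest"
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /login   # must return 200 for the probe (and the LB check) to pass
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
```

Re-creating the ingress after adding the probe lets GKE pick up the new health-check path.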

Related

cannot hit pod in kubernetes cluster from other pod but can from ingress

I'm able to hit a pod from outside my k8s cluster using an ingress but cannot from within the cluster and am getting a "connection refused" error. I tried to shell into the pod that's refusing connections and run the following curls which work just fine when running in my local/host environment:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
to no avail. The cluster ip is 10.99.224.173 but that times out and I'd prefer not to bypass dns since this is dynamically assigned by k8s. And it's not working anyway. The service is a nodejs based one. I can add more information but figured I'd try to err on the side of too little information than too much. To isolate the issue as being a k8s problem, I've run the two services locally outside of k8s with no issues. I think a good starting point would be to identify why I can't curl to the server from within the same pod. Thanks!
EDIT 2: shutting the cluster down from skaffold and re-running skaffold dev resolved this issue, and I'm now able to run the following just fine:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
I found that the tchannel-node library does not accept 0.0.0.0 as a valid ip address to listen to, and the closest I can pass is 127.0.0.1. Unfortunately, this means that calling to the cluster ip 10.99.224.173:9090 will never be registered by the server as 127.0.0.1:9090 the way 0.0.0.0:9090 will. I'm wondering how I can fix my understanding to pass the correct ip address.
EDIT (requested yaml files):
client
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: mine/tickets-go
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-svc
spec:
  selector:
    app: tickets
  ports:
  - name: tickets
    protocol: TCP
    port: 4004
    targetPort: 4004
server that refuses connections
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: mine/auth
        env:
        - name: PORT
          value: "4000"
        - name: TCHANNEL_PORT
          value: "9090"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 4000
    targetPort: 4000
  - name: auth-thrift
    protocol: TCP
    port: 9090
    targetPort: 9090
ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /api/v1/users/?(.*)
        backend:
          service:
            name: auth-svc
            port:
              number: 4000
        pathType: Prefix
      - path: /api/v1/tickets/?(.*)
        backend:
          service:
            name: tickets-svc
            port:
              number: 4004
        pathType: Prefix

Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress

I get this error message when I try to access the app via the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
It looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: voting-app
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: dockersamples/examplevotingapp_vote
        ports:
        - containerPort: 80
There could be one of many things wrong here. But essentially your ingress cannot get to your backend.
Is your backend up and running?
Check backend pods are "Running"
kubectl get pods
Check backend deployment has all replicas up
kubectl get deploy
Connect to the app pod and run a localhost:80 request
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some insights as to why you can't reach the backend service. What http error code are you seeing?
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve DNS for the headless service; we would need to mention the FQDN in the ingress yaml.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
Try this, I think it will work.
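For reference, the namespace move that resolved the issue only requires setting metadata.namespace on each object; here is a sketch for the service (the same fields as above, just relocated into the ingress's namespace):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  namespace: kong   # same namespace as telehealth-ingress
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
```

The deployment needs the same change, since a service selector only matches pods in its own namespace.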

Can't connect with my frontend with kubectl

With Kubernetes, I created an ingress with a service like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: syntaxmap2
spec:
  backend:
    serviceName: testsvc
    servicePort: 3000
The service testsvc is already created.
I created a frontend service like this:
apiVersion: v1
kind: Service
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    app: syntaxmap
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 7000
    targetPort: 7000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    matchLabels:
      app: syntaxmap
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: syntaxmap
        tier: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "gcr.io/google-samples/hello-frontend:1.0"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
When I run this command:
kubectl describe ingress syntaxmap2
I get an IP address that I can put in my browser, and I get a response.
But when I run this command:
kubectl describe service syntaxmapfrontend
I get an IP address with a port, and when I try to connect to it with curl, I get a timeout.
How can I connect to my Kubernetes frontend with curl?
The service is accessible only from within the k8s cluster. You either need to change the Service type from ClusterIP to NodePort, or use something like kubectl port-forward or kubefwd.
If you need more detailed advice, you'll need to post the output of those commands, or even better, show us how you created the objects.
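For illustration, a NodePort variant of the syntaxmapfrontend service might look like this (the nodePort value is an assumption; leaving it out lets Kubernetes pick one in the allowed range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: syntaxmapfrontend
spec:
  type: NodePort
  selector:
    app: syntaxmap
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 7000
    targetPort: 7000
    nodePort: 30700   # assumed value; must fall in 30000-32767
```

On minikube, the node IP comes from `minikube ip`, so the app would then be reachable at http://&lt;minikube-ip&gt;:30700.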
I have found a way. I ran:
minikube service syntaxmapfrontend
and it opens a browser with the right URL.

502 bad gateway using Kubernetes with Ingress controller

I have a Kubernetes setup on Minikube with this configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: custom-docker-image:latest
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 3000
    # Port to forward to inside the pod
    targetPort: 3000
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 3000
I took a look at the solution to this Stack Overflow post and it seems my configuration is... fine?
What am I doing wrong to get 502 Bad Gateway when accessing http://192.xxx.xx.x, my Minikube address? The nginx-ingress-controller logs say:
...
Connection refused while connecting to upstream
...
Another odd piece of info: when I follow this guide to setting up a basic node service on Kubernetes, everything works and I see a "Hello world" page when I open up the Minikube address.
Steps taken:
I ran
kubectl port-forward pods/myappdeployment-5dff57cfb4-6c6bd 3000:3000
Then I visited localhost:3000 and saw
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I figured the reason is more obvious in the logs so I ran
kubectl logs pods/myappdeployment-5dff57cfb4-6c6bd
and got
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
...
Thus I reasoned that I was originally getting a 502 because none of the pods were ready, since MySQL was never set up.
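A readinessProbe would have made this failure mode visible and kept the ingress from routing traffic to pods stuck waiting for MySQL. A sketch, assuming the app exposes some HTTP endpoint on port 3000 (the /healthz path is an assumption; use whatever the app actually serves):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: custom-docker-image:latest
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /healthz   # assumed endpoint
            port: 3000
          periodSeconds: 10
```

With the probe in place, kubectl get pods shows 0/1 READY while the app waits for MySQL, which points at the root cause much faster than a 502 at the edge.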

How to set up https on kubernetes bare metal using traefik ingress controller

I'm running a Kubernetes cluster which consists of three nodes and works brilliantly, but it's time to make my web application secure, so I deployed an ingress controller (Traefik). However, I was unable to find instructions for setting up HTTPS with it. I know most of the things I will have to do, like setting up a Secret (with the certs) etc., but I was wondering how to configure my ingress controller and all files related to it so I would be able to use a secure connection.
I have already configured the ingress controller and created some frontends and backends. I have also configured the nginx server (it's actually the web application I'm running) to listen on port 443.
My web application deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ilchub/my-nginx
        ports:
        - containerPort: 443
      tolerations:
      - key: "primary"
        operator: Equal
        value: "true"
        effect: "NoSchedule"
Traefik ingress controller deployment code
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
Ingress for traefik dashboard
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
External expose related config
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30036
    name: web
  - protocol: TCP
    port: 443
    nodePort: 30035
    name: secure
  - protocol: TCP
    port: 8080
    nodePort: 30034
    name: admin
  type: NodePort
What I want to do is securing my application which is already running. Final result has to be a webpage running over https
Actually you have 3 ways to configure Traefik to use https to communicate with backend pods:
If the service port defined in the ingress spec is 443 (note that you can still use targetPort to use a different port on your pod).
If the service port defined in the ingress spec has a name that starts with https (such as https-api, https-web or just https).
If the ingress spec includes the annotation ingress.kubernetes.io/protocol: https.
If any of those configuration options is present, the backend communication protocol is assumed to be TLS, and Traefik will connect via TLS automatically.
Client-certificate authentication annotations can also be added to the Ingress object, like:
ingress.kubernetes.io/auth-tls-secret: secret
And of course, add a TLS certificate to the Ingress.
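Putting the pieces together for the nginx app above, an Ingress using the third option (the protocol annotation) plus a tls section for the client-facing certificate might look like this. The Secret name nginx-tls and the Service name nginx are assumptions; the question doesn't show a Service for the nginx deployment, so one exposing port 443 would have to exist:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-https
  annotations:
    ingress.kubernetes.io/protocol: https   # Traefik connects to the backend over TLS
spec:
  tls:
  - hosts:
    - cluster.aws.ctrlok.dev
    secretName: nginx-tls                   # assumed Secret holding tls.crt/tls.key
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx                # assumed Service in front of the nginx pods
          servicePort: 443
```

Because the service port is 443, this setup would satisfy the first condition as well; either one alone is enough for Traefik to switch to TLS.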