Service routing with Ingress on GKE

We are running several services in a Kubernetes cluster on GKE (Google Kubernetes Engine) and are having trouble configuring routing with Ingress.
Let's say that we have auth-service and user-service and would like to access them by the following URLs: http://www.example.com/auth and http://www.example.com/user. All requests to these URLs should be redirected to the correct services and routed internally (http://www.example.com/user/people -> http://user-service/people).
This is our configuration for the auth service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: api-auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api-auth
        tier: backend
        track: stable
    spec:
      containers:
        - name: api-auth
          image: "<our-image>"
          ports:
            - name: http
              containerPort: 9000
          livenessProbe:
            httpGet:
              path: /health
              port: 9000
            initialDelaySeconds: 180
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /health
              port: 9000
            initialDelaySeconds: 180
            timeoutSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: auth-service
  labels:
    app: api-auth
spec:
  type: NodePort
  selector:
    app: api-auth
    tier: backend
  ports:
    - port: 80
      targetPort: 9000
Internally, the service runs on Tomcat on port 9000; this part is working fine.
The problem is with our Ingress configuration:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: auth-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <our-static-api>
    kubernetes.io/ingress.class: "gce"
  labels:
    app: api-auth
spec:
  rules:
    - http:
        paths:
          - path: /auth
            backend:
              serviceName: auth-service
              servicePort: 80
          - path: /auth/*
            backend:
              serviceName: auth-service
              servicePort: 80
          - path: /user
            backend:
              serviceName: user-service
              servicePort: 80
          - path: /user/*
            backend:
              serviceName: user-service
              servicePort: 80
Whenever I access our static API (let's call it example.com for now) at http://www.example.com/auth, I get 502 - Bad Gateway. Running kubectl describe ingress shows that our services' health is unknown.
I am running out of ideas about what might be causing this strange behavior. Could someone point me in the right direction?

You mentioned on Slack that the services are Spring Boot apps. It's probably not related to that, but you need to make sure the ingress path matches the context path of your Spring Boot app, i.e. if your ingress path is /user, your app context must be configured with server.context-path=/user. The service would then be reachable at http://user-service/user.
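For illustration, a minimal application.yml sketch for such an app (the file layout and the port are assumptions based on the manifests in the question; note that on Spring Boot 2.x the equivalent property is server.servlet.context-path):
# application.yml (sketch): align the app's context path with the ingress path
server:
  context-path: /user   # must match the ingress path for user-service
  port: 9000            # assumed to match the container's serving port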

Your health check will reflect your readiness probes. The health check needs to use your service's nodePort, because the request is coming from the load balancer. If your health check is targeting port 9000, the request will not get through, because that port on the node is not active.
Make sure your LB health check is targeting the correct port (in the 30000-32767 NodePort range) and that the target path responds with 200; otherwise your health checks will continue to fail and you will continue to get 502 errors.
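For example, you can look up the assigned nodePort and probe the health path on it directly (a sketch; the node IP and port are placeholders):
# Print the nodePort the cluster assigned to the service
kubectl get service auth-service -o jsonpath='{.spec.ports[0].nodePort}'
# From a machine that can reach the node, verify the path returns 200
curl -i http://NODE_IP:NODE_PORT/health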

Related

GKE Ingress with container-native load balancing does not detect health check (Invalid value for field 'resource.httpHealthCheck')

I am running a cluster on Google Kubernetes Engine and I am currently trying to switch from using an Ingress with external load balancing (and NodePort services) to an ingress with container-native load balancing (and ClusterIP services) following this documentation: Container native load balancing
To communicate with my services I am using the following ingress configuration that used to work just fine when using NodePort services instead of ClusterIP:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mw-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: mw-cluster-ip
    networking.gke.io/managed-certificates: mw-certificate
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: billing-frontend-service
              servicePort: 80
          - path: /auth/api/*
            backend:
              serviceName: auth-service
              servicePort: 8083
Now, following the documentation, instead of using a readinessProbe in the container deployment as a health check, I switched to ClusterIP services in combination with a BackendConfig. For each deployment I am using a service like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: auth
  name: auth-service
  namespace: default
  annotations:
    cloud.google.com/backend-config: '{"default": "auth-hc-config"}'
spec:
  type: ClusterIP
  selector:
    app: auth
  ports:
    - port: 8083
      protocol: TCP
      targetPort: 8083
And a BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: auth-hc-config
spec:
  healthCheck:
    checkIntervalSec: 10
    port: 8083
    type: http
    requestPath: /auth/health
As a reference, this is what the readinessProbe used to look like before:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /auth/health
    port: 8083
    scheme: HTTP
  periodSeconds: 10
Now to the actual problem. I deploy the containers and services first, and they seem to start up just fine. The ingress, however, does not seem to pick up the health checks properly and shows this in the Cloud console:
Error during sync: error running backend syncing routine: error ensuring health check: googleapi: Error 400: Invalid value for field 'resource.httpHealthCheck': ''. HTTP healthCheck missing., invalid
The cluster as well as the node pool are running GKE version 1.17.6-gke.11, so the annotation cloud.google.com/neg: '{"ingress": true}' is not necessary. I have checked, and the service is annotated correctly:
Annotations: cloud.google.com/backend-config: {"default": "auth-hc-config"}
             cloud.google.com/neg: {"ingress":true}
             cloud.google.com/neg-status: {"network_endpoint_groups":{"8083":"k8s1-2078beeb-default-auth-service-8083-16a14039"},"zones":["europe-west3-b"]}
I have already tried to re-create the cluster and the node-pool with no effect. Any ideas on how to resolve this? Am I missing an additional health check somewhere?
I found my issue. Apparently the BackendConfig's type attribute is case-sensitive. Once I changed it from http to HTTP, it worked after I recreated the ingress.
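For reference, the corrected BackendConfig differs only in the type value:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: auth-hc-config
spec:
  healthCheck:
    checkIntervalSec: 10
    port: 8083
    type: HTTP            # uppercase; lowercase "http" fails the backend sync
    requestPath: /auth/health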

GCP HTTP(S) load balancer ignoring GKE readinessProbe specification

I’ve already seen this question; AFAIK I’m doing everything in the answers there.
Using GKE, I’ve deployed a GCP HTTP(S) load balancer-based ingress for a kubernetes cluster containing two almost identical deployments: production and development instances of the same application.
I set up a dedicated port on each pod template to use for health checks by the load balancer so that they are not impacted by redirects from the root path on the primary HTTP port. However, the health checks are consistently failing.
From these docs I added a readinessProbe parameter to my deployments, which the load balancer seems to be ignoring completely.
I’ve verified that the server on :p-ready (9292; the dedicated health check port) is running correctly using the following (in separate terminals):
➜ kubectl port-forward deployment/d-an-server p-ready
➜ curl http://localhost:9292/ -D -
HTTP/1.1 200 OK
content-length: 0
date: Wed, 26 Feb 2020 01:21:55 GMT
What have I missed?
A couple notes on the below configs:
The ${...} variables below are filled by the build script as part of deployment.
The second service (s-an-server-dev) is almost an exact duplicate of the first (with its own deployment), just with -dev suffixes on the names and labels.
Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "d-an-server"
namespace: "default"
labels:
app: "a-an-server"
spec:
replicas: 1
selector:
matchLabels:
app: "a-an-server"
template:
metadata:
labels:
app: "a-an-server"
spec:
containers:
- name: "c-an-server-app"
image: "gcr.io/${PROJECT_ID}/an-server-app:${SHORT_SHA}"
ports:
- name: "p-http"
containerPort: 8080
- name: "p-ready"
containerPort: 9292
readinessProbe:
httpGet:
path: "/"
port: "p-ready"
initialDelaySeconds: 30
Service
apiVersion: "v1"
kind: "Service"
metadata:
name: "s-an-server"
namespace: "default"
spec:
ports:
- port: 8080
targetPort: "p-http"
protocol: "TCP"
name: "sp-http"
selector:
app: "a-an-server"
type: "NodePort"
Ingress
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "primary-ingress"
annotations:
kubernetes.io/ingress.global-static-ip-name: "primary-static-ipv4"
networking.gke.io/managed-certificates: "appname-production-cert,appname-development-cert"
spec:
rules:
- host: "appname.example.com"
http:
paths:
- backend:
serviceName: "s-an-server"
servicePort: "sp-http"
- host: "dev.appname.example.com"
http:
paths:
- backend:
serviceName: "s-an-server-dev"
servicePort: "sp-http-dev"
I think what's happening here is that the GKE ingress is not informed of port 9292 at all. You are referencing sp-http in the Ingress, which refers to port 8080.
You need to make sure of the following (see the sketch below):
1. The service's targetPort field must point to the pod port's containerPort value or name.
2. The readiness probe must be exposed on the port matching the servicePort specified in the Ingress.
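A minimal sketch of that alignment, assuming your app can answer health checks on its main serving port (the path and delay are carried over from the question):
readinessProbe:
  httpGet:
    path: "/"
    port: "p-http"        # the same named port the Service's targetPort references
  initialDelaySeconds: 30
With this, the chain the Ingress sees (sp-http -> p-http -> 8080) is the one carrying the readiness probe, so the generated load balancer health check targets a port that is actually being served.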

Not able to access multiple services from ingress controller hosted on different ports

I have two services hosted on different ports, and I have created an ingress resource which looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /svc1/
            backend:
              serviceName: app1-svc
              servicePort: 3000
          - path: /svc2/
            backend:
              serviceName: app2-svc
              servicePort: 8080
On top of this, I have created a NodePort-type service for the nginx ingress controller:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
  name: controller-nginx-ingress-controller
spec:
  clusterIP: 10.88.18.191
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31442
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: controller
And finally, I set up a cloud load balancer to access the applications running on my K8S cluster.
Problem:
I am not able to access any of my applications using URL routing:
http://load-balancer-ip/svc1/
http://load-balancer-ip/svc2/
Can anyone please let me know what I'm doing incorrectly? And how can I resolve this issue?
From what you mentioned in the comments, I am pretty sure the problem can be solved with path rewrites.
Currently, when you send a request to /svc1/ with path: /svc1/, the request is forwarded to app1-svc with its path set to /svc1/, and you receive 404 errors because there is no such path in app1. You can most probably solve the issue with a rewrite, using the nginx.ingress.kubernetes.io/rewrite-target annotation, so your ingress would look like the following:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: mynamespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /svc1(/|$)(.*)
            backend:
              serviceName: app1-svc
              servicePort: 3000
          - path: /svc2(/|$)(.*)
            backend:
              serviceName: app2-svc
              servicePort: 8080
In this case, when sending a request with the path set to /svc1/something, the request will be forwarded to app1 with the path rewritten to /something.
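As a quick usage check (the load balancer IP is a placeholder), you can send a request and confirm the rewritten path in the application's access logs:
# The controller receives /svc1/something and forwards it to app1-svc as /something
curl -i http://load-balancer-ip/svc1/something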
Also take a look at the ingress docs for more explanation.
Let me know if it solved your issue.

Botpress behind an Nginx reverse proxy

I am thinking of setting up multiple chatbots on a containerized platform, let's say Docker or Kubernetes, and I would want to be able to access these chatbots through a reverse proxy such as Nginx. Any help is appreciated.
My example scenario
I have multiple chatbots; let's call them Bravo, Charlie, and Delta.
Bravo's IP address and port: 10.0.0.2:8080
Charlie's IP: 10.0.0.3:8080
Delta's IP: 10.0.0.4:8080
All of these bots live in containers behind an nginx proxy.
Right now, I can open a browser at 10.0.0.2:8080 and use the chatbots.
If I set up a domain (alpha.com) and want to access these chatbots as alpha.com/bravo, alpha.com/charlie, and alpha.com/delta, how would I be able to achieve this?
The proxy_pass directive works only for the index.html, and the chatbot application seems to have some kind of base URL path that I am unable to figure out. Nginx returns a blank page if I inspect the traffic. Help me debug this.
You can use the nginx-ingress controller with the ingress definition below. (But first you need to deploy the nginx-ingress controller on your cluster; you can use this link.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alpha-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: alpha.com
      http:
        paths:
          - path: /bravo
            backend:
              serviceName: bravo-service   # object names must be lowercase DNS-1123 names
              servicePort: 80
          - path: /charlie
            backend:
              serviceName: charlie-service
              servicePort: 80
          - path: /delta
            backend:
              serviceName: delta-service
              servicePort: 80 # You could also use a named port if you already named the port in the service, like bravo-http-port
This assumes that you have already defined and deployed your services with their associated deployments, for example:
apiVersion: v1
kind: Service
metadata:
  name: bravo-service
  labels:
    app: bravo
spec:
  type: NodePort
  selector:
    app: bravo
  ports:
    - name: bravo-http-port
      protocol: TCP
      port: 80
      targetPort: bravo-port
      nodePort: 30080   # must fall within the cluster's NodePort range (default 30000-32767)
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: bravo-deployment
  labels:
    app: bravo
spec:
  # start with 1 replica
  replicas: 1
  selector:
    matchLabels:
      app: bravo
  template:
    metadata:
      labels:
        app: bravo
    spec:
      containers:
        - name: bravo-container
          image: my-docker-repo/project:1.0
          ports:
            - name: bravo-port
              containerPort: 8080
If you have more questions on this, please don't hesitate to ask.

GCE Ingress not picking up health check from readiness probe

When I create a GCE ingress, Google Load Balancer does not set the health check from the readiness probe. According to the docs (Ingress GCE health checks) it should pick it up.
Expose an arbitrary URL as a readiness probe on the pods backing the Service.
Any ideas why?
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  selector:
    matchLabels:
      app: frontend-prod
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: frontend-prod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - image: app:latest
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 5
          name: frontend-prod-app
        - env:
            - name: PASSWORD_PROTECT
              value: "1"
          image: nginx:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          name: frontend-prod-nginx
Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: frontend-prod
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-prod-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: frontend-prod-ip
spec:
  tls:
    - secretName: testsecret
  backend:
    serviceName: frontend-prod
    servicePort: 80
So apparently, you need to include the container port in the PodSpec.
This does not seem to be documented anywhere.
E.g.:
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
Thanks, Brian! https://github.com/kubernetes/ingress-gce/issues/241
This is now possible in the latest GKE (I am on 1.14.10-gke.27, not sure if that matters):
1. Define a readinessProbe on your container in your Deployment.
2. Recreate your Ingress.
The health check will point to the path in readinessProbe.httpGet.path of the Deployment yaml config.
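For example, assuming the Ingress manifest lives in a file called ingress.yaml (the file name is a placeholder), recreating it looks like this:
kubectl delete -f ingress.yaml
kubectl apply -f ingress.yaml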
Update by Jonathan Lin below: This has been fixed very recently. Define a readinessProbe on the Deployment. Recreate your Ingress. It will pick up the health check path from the readinessProbe.
GKE Ingress health check path is currently not configurable. You can go to http://console.cloud.google.com (UI) and visit the Load Balancers list to see the health check it uses.
Currently the health check for an Ingress is GET / on each backend: specified on the Ingress. So all your apps behind a GKE Ingress must return HTTP 200 OK to GET / requests.
That said, the health checks you specified on your Pods are still being used by the kubelet to make sure your Pod is actually functioning and healthy.
Google has recently added support for a CRD that can configure your Backend Services along with health checks:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 8080
    type: HTTP # case-sensitive
    requestPath: /healthcheck
See here.
Another reason why Google Cloud Load Balancer does not pick up the GCE health check configuration from the Kubernetes Pod readiness probe could be that the service is configured as "selectorless" (the selector attribute is empty and you manage endpoints directly).
This is the case with e.g. kube-lego: see https://github.com/jetstack/kube-lego/issues/68#issuecomment-303748457 and https://github.com/jetstack/kube-lego/issues/68#issuecomment-327457982.
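For reference, a selectorless Service pairs a Service without a selector with manually managed Endpoints, along these lines (a minimal sketch with made-up names and addresses):
apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
---
# Without a selector, Kubernetes creates no Endpoints automatically;
# they are maintained by hand or by a controller such as kube-lego.
apiVersion: v1
kind: Endpoints
metadata:
  name: external-backend   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.10      # placeholder address
    ports:
      - port: 8080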
The original question does have a selector specified in the service, so this hint doesn't apply. This hint serves visitors who have the same problem with a different cause.