Kubernetes EKS Ingress and TLS

I'm trying to accomplish a VERY common task for an application:
Assign a certificate and secure it with TLS/HTTPS.
I've spent nearly a day scouring the documentation and trying multiple tactics, but nothing has worked for me.
Initially I set up nginx-ingress on EKS using Helm by following the docs here: https://github.com/nginxinc/kubernetes-ingress. I tried to get the sample app (cafe) working with the following config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
The Ingress and all supporting services/deployments worked fine, but one major thing is missing: the Ingress doesn't have an associated address/ELB:
NAME           HOSTS              ADDRESS   PORTS     AGE
cafe-ingress   cafe.example.com             80, 443   12h
Service LoadBalancers create ELB resources, e.g.:
testnodeapp   LoadBalancer   172.20.4.161   a64b46f3588fe...   80:32107/TCP   13h
However, the Ingress is not creating an address. How do I get an Ingress controller exposed externally on EKS to handle TLS/HTTPS?

I've replicated every step necessary to get up and running on EKS with a secure ingress. I hope this helps anybody else who wants to get their application running on EKS quickly and securely.
To get up and running on EKS:
Deploy EKS using the CloudFormation template here. Keep in mind that I've restricted access with CidrIp: 193.22.12.32/32; change this to suit your needs.
Install the client tools. Follow the guide here.
Configure the client. Follow the guide here.
Enable the worker nodes. Follow the guide here.
You can verify that the cluster is up and running and you are pointing to it by running:
kubectl get svc
Now launch a test application with the nginx ingress.
NOTE: Everything is placed under the ingress-nginx namespace. Ideally this would be templated to build under different namespaces, but for the purposes of this example it works.
Deploy nginx-ingress:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
Fetch rbac.yml from here. Run:
kubectl apply -f rbac.yml
Have a certificate and key ready for testing. Create the necessary secret like so:
kubectl create secret tls cafe-secret --key mycert.key --cert mycert.crt -n ingress-nginx
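If you don't have a certificate and key handy, a throwaway self-signed pair is enough for this test (a hedged sketch; the CN should match the domain you plan to use):
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout mycert.key -out mycert.crt \
  -subj "/CN=cafe.example.com"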
Copy coffee.yaml from here. Copy coffee-ingress.yaml from here. Update the domain you want to run this under, then apply them like so:
kubectl apply -f coffee.yaml
kubectl apply -f coffee-ingress.yaml
Update the CNAME for your domain to point to the ADDRESS shown by:
kubectl get ing -n ingress-nginx -o wide
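Once the controller's load balancer is provisioned, the ADDRESS column should show an ELB hostname; illustrative output only (your names and ELB hostname will differ):
NAME             HOSTS                ADDRESS                                                    PORTS     AGE
coffee-ingress   coffee.example.com   a1b2c3d4e5f6a7b8-1234567890.us-west-2.elb.amazonaws.com   80, 443   10m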
Refresh the DNS cache and test the domain. You should get a secure page with request stats. I've replicated this multiple times, so if it fails to work for you, check the steps, config, and certificate. Also, check the logs on the nginx-ingress-controller* pod:
kubectl logs pod/nginx-ingress-controller-*********** -n ingress-nginx
That should give you some indication of what's wrong.

To make an Ingress resource work, the cluster must have an Ingress controller configured.
This is unlike other types of controllers, which typically run as part of the kube-controller-manager binary,
and which are typically started automatically as part of cluster creation.
For EKS with helm, you may try:
helm registry install quay.io/coreos/alb-ingress-controller-helm
Next, configure the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: YOUR_DOMAIN
    http:
      paths:
      - path: /
        backend:
          serviceName: ingress-example-test
          servicePort: 80
  tls:
  - secretName: custom-tls-cert
    hosts:
    - YOUR_DOMAIN
Apply the config:
kubectl create -f ingress.yaml
Next, create the secret with TLS certificates:
kubectl create secret tls custom-tls-cert --key /path/to/tls.key --cert /path/to/tls.crt
and reference it in the Ingress definition:
tls:
- secretName: custom-tls-cert
  hosts:
  - YOUR_DOMAIN
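Once DNS resolves to the controller, you can verify the certificate is actually being served; a hedged sketch (YOUR_DOMAIN is the placeholder above, and --insecure is only needed with a self-signed certificate):
# -v prints the TLS handshake, including the certificate subject
curl -v --insecure https://YOUR_DOMAIN/
# or inspect the served certificate directly
openssl s_client -connect YOUR_DOMAIN:443 -servername YOUR_DOMAIN </dev/null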
The following example configuration shows how to deploy the Ingress controller:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
Next, apply the above configuration; then you can check the service for an exposed external IP:
kubectl get service nginx-controller -n kube-system
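If the cloud load balancer was provisioned, the output should look roughly like this (illustrative values only):
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
nginx-controller   LoadBalancer   10.0.45.12   35.X.X.100    80:31000/TCP,443:31443/TCP   3m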
An external IP is an address that terminates at one of the Kubernetes nodes, as configured by externally configured routing mechanisms.
When configured within a service definition, once a request reaches a node, traffic is redirected to a service endpoint.
The Kubernetes documentation provides more examples.

Related

Correct way to expose ingress service using baremetal Kubernetes Cluster

I have the following topology in my kubernetes cluster:
So, I have 2 Nodes: 1 Master and 1 Worker Node.
Now I created an application with my deployment.yml and my service.yml, using a NodePort configuration:
apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Now I need to access this API using my DNS, something like myapi.localdns, so I followed these steps to install an nginx-based Ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
After an hour, the pods in the ingress-nginx namespace were up and running.
And finally, this is my Ingress yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: administrativo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapi.localdns
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: administrativo-service
            port:
              number: 80
Well, my idea is to create an entry in my company DNS pointing to myapi.localdns, but to do that I need the Ingress address, which doesn't show up in my Ingress resource:
I solved the problem using these steps:
First, create the CNAMEs in my company DNS pointing to my Kubernetes worker node IP.
Reinstall the ingress-nginx controller using the bare-metal configuration: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/baremetal/deploy.yaml.
Change deploy.yaml to use NodePort before running kubectl apply.
Use externalIPs to expose the service on port 80, as sketched below.
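A hedged sketch of that last step on the ingress-nginx controller Service (the name, namespace, and selector follow the upstream bare-metal manifest; the IP is a placeholder for the worker node address the CNAMEs point to):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
  - 10.0.0.12   # placeholder: the worker node IP the CNAMEs resolve to
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx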

BackendConfig with Ingress gives UNHEALTHY backend

I am learning how to use an Ingress to expose my application on GKE v1.19.
I followed the tutorial in the GKE docs for Service, Ingress, and BackendConfig to get to the following setup. However, my backend services still become UNHEALTHY after some time. My aim is to overwrite the default "/" health check path for the ingress controller.
I have the same health checks defined in my deployment.yaml file under livenessProbe and readinessProbe, and they seem to work fine since the Pod enters the Running state. I have also tried to curl the endpoint and it returns a 200 status.
I have no clue why my services are marked as unhealthy despite being accessible directly through the NodePort service I defined. Any advice or help would be appreciated. Thank you.
I will add my yaml files below:
deployment.yaml
....
livenessProbe:
  httpGet:
    path: /api
    port: 3100
  initialDelaySeconds: 180
readinessProbe:
  httpGet:
    path: /api
    port: 3100
  initialDelaySeconds: 180
.....
backendconfig.yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config
  namespace: ns1
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 3100
    type: HTTP # case-sensitive
    requestPath: /api
service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "backend-config"}'
  name: service-ns1
  namespace: ns1
  labels:
    app: service-ns1
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 3100
    targetPort: 3100
  selector:
    app: service-ns1
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ns1-ingress
  namespace: ns1
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ns1-ip
    networking.gke.io/managed-certificates: ns1-cert
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: service-ns1
          servicePort: 3100
The ideal time to use a BackendConfig is when the serving pods for your service contain multiple containers, when you're using the Anthos Ingress controller, or when you need control over the port used for the load balancer's health checks; in those cases you should use a BackendConfig CRD to define the health check parameters. Refer to [1].
When a backend service's health check parameters are inferred from a serving Pod's readiness probe, GKE does not keep the readiness probe and health check synchronized. Hence any changes you make to the readiness probe will not be copied to the health check of the corresponding backend service on the load balancer, as per [2].
In your scenario, the backend is healthy when it uses the path '/' but unhealthy when it uses the path '/api', so there might be some misconfiguration in your ingress.
I would suggest adding the annotation ingress.kubernetes.io/rewrite-target: /api
so the path mentioned in spec.path is rewritten to /api before the request is sent to the backend service.
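For clarity, a sketch of where that annotation would go in the ingress above (hedged: rewrite-target is documented for the nginx ingress controller, so verify it is honored by the ingress class you are actually using):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ns1-ingress
  namespace: ns1
  annotations:
    # rewrite the matched path to /api before proxying to the backend
    ingress.kubernetes.io/rewrite-target: /api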

Find why i am getting 502 Bad gateway error on kubernetes

I am using Kubernetes. I have an Ingress service which talks to my container service. We have exposed a web API, which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and I have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you get this 502 gateway error because you don't have the Ingress controller configured correctly.
Please try to do it with an installed Ingress controller as in the example below. It will do everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.39.240.1     <none>        443/TCP                      29d
nginx-ingress-controller        LoadBalancer   10.39.243.140   35.X.X.15     80:32324/TCP,443:31425/TCP   19m
nginx-ingress-default-backend   ClusterIP      10.39.252.175   <none>        80/TCP                       19m
nginx-ingress-controller - in short, it deals with requests to the Ingress and directs them
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
and append an additional line to the index page so we can recognize it when we connect later:
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This will automatically create a Service, which will look like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have 2 services; each service specification starts with - host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check Ingress and hosts
$ kubectl get ingress
NAME              HOSTS                       ADDRESS        PORTS   AGE
two-svc-ingress   my.pod.svc,nginx.test.svc   35.228.230.6   80      57m
8) Explanation of why I installed Ingress.
Connect to the ingress controller pod
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I installed the nginx ingress earlier, after deploying Ingress.yaml the nginx-ingress-controller noticed the changes and automatically added the necessary code.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers:
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
$ kubectl get svc   # to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind the Ingress needs to be in the same namespace as the services. If you have services in several namespaces you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your yml files.
Just to help you debug, follow these steps:
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it directly.
3- Make sure the service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If all three things work, there is a problem with your ingress configuration. A sketch of these checks follows below.
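A hedged sketch of those three checks (the pod name, port, and paths are placeholders for your own):
# 1. check container logs for startup or runtime errors
kubectl logs my-pod-69d4b8b7f-abcde

# 2. port-forward straight to the pod, bypassing the service and ingress
kubectl port-forward my-pod-69d4b8b7f-abcde 8086:8086
curl http://localhost:8086/

# 3. temporarily expose the service externally to rule it out
kubectl patch svc my-pod-serv -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc my-pod-serv   # wait for EXTERNAL-IP, then curl it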
I am not sure if I explained it in enough detail; let me know if something is not clear.

EKS to integrate Kubernetes ingress

Can anybody point me to the workflow that I can direct traffic to my domain through Ingress on EKS?
I have this:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  labels:
    app: hello-world
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
  rules:
  - host: DOMAIN-I-OWN.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - port: 80
    targetPort: 32000
    protocol: TCP
    name: http
  selector:
    app: hello-world
And I am able to hit DOMAIN-I-OWN.com using minikube:
kubectl config use-context minikube
echo "$(minikube ip) DOMAIN-I-OWN.com" | sudo tee -a /etc/hosts
But I can't find a tutorial for how to do the same thing on AWS EKS.
I have set up EKS cluster and have 3 nodes running.
And have pods deployed with those Ingress and Service spec.
And let's say I own "DOMAIN-I-OWN.com" through Google domains or GoDaddy.
What would be the next step to set up the DNS?
Do I need an ingress controller? Do I need to install it separately to make this work?
Any help would be appreciated! Got stuck on this several days...
You need to wire up something like https://github.com/kubernetes-incubator/external-dns to automatically point DNS names to your cluster's published services' IPs.
Take a look at https://github.com/kubernetes-sigs/aws-alb-ingress-controller. It provides a controller that watches for ingress events from the API server; when it finds Ingress resources that satisfy its requirements, it begins creating AWS resources (subnets, security groups, ELBs).
You can create a hosted zone on AWS Route 53 and add the records to GoDaddy. If you use https://github.com/kubernetes-sigs/aws-alb-ingress-controller, after the ingress is set up you will get a CNAME; add it to your Route 53 record.
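For the external-dns route mentioned above, a hedged sketch (it assumes external-dns is deployed with access to the hosted zone for DOMAIN-I-OWN.com; the service mirrors the one in the question):
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # external-dns watches this annotation and creates the matching
    # Route 53 record pointing at the service's load balancer
    external-dns.alpha.kubernetes.io/hostname: DOMAIN-I-OWN.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 32000
    protocol: TCP
    name: http
  selector:
    app: hello-world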

Kubernetes Ingress (GCE) keeps returning 502 error

I am trying to set up an Ingress in GCE Kubernetes. But when I visit the IP address and path combination defined in the Ingress, I keep getting the following 502 error:
Here is what I get when I run: kubectl describe ing --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Address: 35.186.221.153
Default backend: default-http-backend:80 (10.0.8.5:8080)
TLS:
dpl-identity terminates
Rules:
Host Path Backends
---- ---- --------
*
/api/identity/* dpl-identity:4000 (<none>)
Annotations:
https-forwarding-rule: k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
https-target-proxy: k8s-tps-dpl-staging-dpl-identity--5fc40252fadea594
url-map: k8s-um-dpl-staging-dpl-identity--5fc40252fadea594
backends: {"k8s-be-31962--5fc40252fadea594":"HEALTHY","k8s-be-32396--5fc40252fadea594":"UNHEALTHY"}
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 {loadbalancer-controller } Normal ADD dpl-staging/dpl-identity
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 35.186.221.153
15m 6m 4 {loadbalancer-controller } Normal Service no user specified default backend, using system default
I think the problem is dpl-identity:4000 (<none>). Shouldn't I see the IP address of the dpl-identity service instead of <none>?
Here is my service description: kubectl describe svc --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Selector: app=dpl-identity
Type: NodePort
IP: 10.3.254.194
Port: http 4000/TCP
NodePort: http 32396/TCP
Endpoints: 10.0.2.29:8000,10.0.2.30:8000
Session Affinity: None
No events.
Also, here is the result of executing: kubectl describe ep -n dpl-staging dpl-identity
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Subsets:
Addresses: 10.0.2.29,10.0.2.30
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
http 8000 TCP
No events.
Here is my deployment.yaml:
apiVersion: v1
kind: Secret
metadata:
  namespace: dpl-staging
  name: dpl-identity
type: Opaque
data:
  tls.key: <base64 key>
  tls.crt: <base64 crt>
---
apiVersion: v1
kind: Service
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
spec:
  type: NodePort
  ports:
  - port: 4000
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: dpl-identity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: dpl-identity
  rules:
  - http:
      paths:
      - path: /api/identity/*
        backend:
          serviceName: dpl-identity
          servicePort: 4000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dpl-identity
    spec:
      containers:
      - image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
        name: dpl-identity
        ports:
        - containerPort: 8000
          name: http
        volumeMounts:
        - name: dpl-identity
          mountPath: /data
      volumes:
      - name: dpl-identity
        secret:
          secretName: dpl-identity
Your backend k8s-be-32396--5fc40252fadea594 is showing as "UNHEALTHY".
Ingress will not forward traffic if the backend is UNHEALTHY; this results in the 502 error you are seeing.
It is being marked as UNHEALTHY because it is not passing its health check. Check the health check settings for k8s-be-32396--5fc40252fadea594 to see if they are appropriate for your pod; it may be polling a URI or port that is not returning a 200 response. You can find these settings under Compute Engine > Health Checks.
If they are correct, then there are many steps between your browser and the container that could be passing traffic incorrectly. You could try kubectl exec -it PODID -- bash (or ash if you are using Alpine) and then curl localhost to see if the container is responding as expected. If it is, and the health checks are also configured correctly, that narrows the issue down to your service; you could then try changing the service from a NodePort type to a LoadBalancer and see if hitting the service IP directly from your browser works.
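A hedged sketch of that in-pod check (PODID is a placeholder, and 8000 is the container port from the manifests above):
# open a shell inside the pod (use ash instead of bash on Alpine)
kubectl exec -it PODID -n dpl-staging -- bash

# inside the pod: hit the app on its container port; the GCE health
# check polls / by default, so this should return a 200
curl -v http://localhost:8000/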
I was having the same issue. It turns out I had to wait a few minutes for the ingress to validate the service health. If someone is facing the same thing and has done all the steps like readinessProbe and livenessProbe, just ensure your ingress is pointing to a service of type NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs on StackDriver to get a better idea of what's going on. My readinessProbe and livenessProbe are on /login, for the gce class, so I don't think it has to be on /healthz.
The issue is indeed the health check, and it seemed "random" for my apps where I used name-based virtual hosts to reverse-proxy requests from the ingress via domains to two separate backend services. Both were secured using Let's Encrypt and kube-lego. My solution was to standardize the path used for health checks across all services sharing an ingress, and to declare the readinessProbe and livenessProbe configs in my deployment.yml file.
I faced this issue with Google Cloud cluster node version 1.7.8 and found this issue that closely resembled what I experienced:
* https://github.com/jetstack/kube-lego/issues/27
I'm using GCE and kube-lego, and my backend service health checks were on / while kube-lego's is on /healthz. It appears differing paths for health checks with GCE ingress might be the cause, so it may be worth updating backend services to match the /healthz pattern so all use the same path (or, as one commenter in the GitHub issue stated, updating kube-lego to pass on /).
I had the same problem, and it persisted after I enabled livenessProbe and readinessProbe.
It turned out this was due to basic auth. I had added basic auth to the livenessProbe and the readinessProbe, but it turns out the GCE HTTP(S) load balancer doesn't have a configuration option for that.
There seem to be a few other kinds of issues too, e.g. setting the container port to 8080 and the service port to 80 didn't work with the GKE ingress controller (though I couldn't clearly identify what the problem was). Broadly, it looks to me like there is very little visibility, and running your own ingress container is a better option in that respect.
I picked Traefik for my project; it worked out of the box, and I'd like to enable Let's Encrypt integration. The only change I had to make to the Traefik manifests was tweaking the service object to disable access to the UI from outside the cluster and to expose my app through an external load balancer (GCE TCP LB). Also, Traefik is more native to Kubernetes. I tried Heptio Contour, but something didn't work out of the box (I'll give it another go when the new version comes out).
I had the same issue. It turned out that the pod itself was running OK, which I tested via port-forwarding and accessing the health-check URL.
Port-forward can be activated in the console as follows:
$ kubectl port-forward <pod-name> local-port:pod-port
So if the pod is running OK and the ingress still shows an unhealthy state, there might be an issue with your service configuration. In my case my app selector was incorrect, causing the selection of a nonexistent pod. Interestingly, this isn't shown as an error or alert in the Google console. (A quick endpoint check is sketched after the manifests below.)
Definition of the pods:
# pod-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <pod-name>
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app: <pod-name>    # must match the service selector below
  template:
    metadata:
      labels:
        app: <pod-name>
    spec:
      # spec definition follows

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: <name-of-service-here>
  namespace: <namespace>
spec:
  type: NodePort
  selector:
    app: <pod-name>    # must match the pod label above
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: <port-name-here>
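As referenced above, a hedged way to spot this kind of selector mismatch (the names are placeholders): if the service selects no pods, its endpoints list is empty.
# an empty ENDPOINTS column means the selector matched no pods
kubectl get endpoints <name-of-service-here> -n <namespace>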
The "Limitations" section of the kubernetes documentation states that:
All Kubernetes services must serve a 200 page on '/', or whatever custom value you've specified through GLBC's --health-check-path argument.
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#limitations
I solved the problem by:
Removing the service from the ingress definition
Deploying the ingress: kubectl apply -f ingress.yaml
Adding the service back to the ingress definition
Deploying the ingress again
Essentially, I followed Roy's advice and tried to turn it off and on again.
The logs can be read from Stackdriver Logging; in my case it was a backend_timeout error. After increasing the default timeout (30s) via BackendConfig, it stopped returning 502s even under load.
More on:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#creating_a_backendconfig
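A hedged sketch of that change (the name and timeout value are illustrative; the BackendConfig must also be referenced from the service via the cloud.google.com/backend-config annotation):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-config
spec:
  # raise the backend service timeout from the 30s default so slow
  # responses under load are not cut off with a 502
  timeoutSec: 120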
I fixed this issue after adding the following readiness and liveness probes with successThreshold: 1 and failureThreshold: 3. I also kept initialDelaySeconds at 70 because sometimes an application responds a bit late; it may vary per application.
NOTE: Also ensure that the path in httpGet exists in your application (in my case /api/books); otherwise GCP pings the /healthz path, which is not guaranteed to return 200 OK.
readinessProbe:
  httpGet:
    path: /api/books
    port: 80
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  initialDelaySeconds: 70
  timeoutSeconds: 60
livenessProbe:
  httpGet:
    path: /api/books
    port: 80
  initialDelaySeconds: 70
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  timeoutSeconds: 60
I was able to sort this out after struggling a lot and trying many things.
I had the same issue when I was using the wrong image; the request couldn't be satisfied because the configurations were different.