I'm playing around with k8s services. I have created a simple Spring Boot app that displays its version number and pod name when I curl its endpoint:
curl localhost:9000/version
1.3_car-registry-deployment-66684dd8c4-r274b
Then I dockerized it, loaded it into my local Kind cluster, and deployed it with 5 replicas. Next I created a service targeting all 5 pods. Lastly, I exposed the service like so:
kubectl port-forward svc/car-registry-service 9000:9000
Now when curling my endpoint I expected to see randomly picked pod names, but instead I only get responses from a single pod. Moreover, if I kill that one pod, my service stops working, i.e. I get ERR_EMPTY_RESPONSE, even though there are 4 more pods available. What am I missing? Here are my deployment and service YAMLs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-registry-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: car-registry
  template:
    metadata:
      name: car-registry
      labels:
        app: car-registry
    spec:
      containers:
        - name: car-registry
          image: car-registry-database:v1.3
          ports:
            - containerPort: 9000
              protocol: TCP
              name: rest
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - curl http://localhost:9000/healthz | grep "OK"
            initialDelaySeconds: 15
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: car-registry-service
spec:
  type: ClusterIP
  selector:
    app: car-registry
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
You're using TCP, so you're probably hitting connection keep-alive. Try hitting it from your browser or a new tty.
Try:
curl -H "Connection: close" http://your-service:port/path
Otherwise, check the kube-proxy logs to see if there's any additional info. Your initial question doesn't provide much detail.
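As an additional sanity check, it is also worth verifying that the service actually has all five pods registered as endpoints; if only one shows up, the problem is the selector or the readiness probe rather than connection reuse. For example:
kubectl get endpoints car-registry-service
kubectl describe svc car-registry-service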
I am trying to create a scalable varnish cluster on some managed Kubernetes services (azure's, google's, or amazon's Kubernetes service) but I'm having trouble getting started. Any advice or references are helpful, thanks!
We (Varnish Software) are working on official Helm charts to make k8s deployments a lot easier. For the time being we only have an official Docker Image.
You can find install instructions on https://www.varnish-software.com/developers/tutorials/running-varnish-docker/.
However, I have some standalone k8s files that can be a good way to get started.
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish
  labels:
    name: varnish
data:
  default.vcl: |+
    vcl 4.1;

    backend default none;

    sub vcl_recv {
        if (req.url == "/varnish-ping") {
            return(synth(200));
        }
        if (req.url == "/varnish-ready") {
            return(synth(200));
        }
        return(synth(200,"Welcome"));
    }
This config map contains the VCL file. The VCL doesn't do anything useful besides exposing the /varnish-ping & /varnish-ready endpoints. Please customize it to your needs.
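For example, a minimal customization of the config map's data section that actually proxies to an application could look like the sketch below; the backend host my-app-service and port 8080 are placeholders for illustration, not part of the original config:
data:
  default.vcl: |+
    vcl 4.1;

    # forward cache misses to an upstream service (placeholder values)
    backend default {
        .host = "my-app-service";
        .port = "8080";
    }

    sub vcl_recv {
        if (req.url == "/varnish-ping") {
            return(synth(200));
        }
        if (req.url == "/varnish-ready") {
            return(synth(200));
        }
        # all other requests fall through to the default backend and get cached
    }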
Service definition
Here's a basic service definition that exposes port 80
apiVersion: v1
kind: Service
metadata:
  name: varnish
  labels:
    name: varnish
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: varnish-http
  selector:
    name: varnish
Deployment
And finally here's the deployment. It uses the official Varnish Docker image and more specifically the 6.0 LTS version.
It uses the synthetic /varnish-ping & /varnish-ready endpoints and mounts the config map under /etc/varnish to load the VCL file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
  labels:
    name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      name: varnish
  template:
    metadata:
      labels:
        name: varnish
    spec:
      containers:
        - name: varnish
          image: "varnish:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /varnish-ping
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /varnish-ready
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 5
          volumeMounts:
            - name: varnish
              mountPath: /etc/varnish
      volumes:
        - name: varnish
          configMap:
            name: varnish
Deploying the config
Run kubectl apply -f . in the folder with the various k8s files (config map, service definition & deployment). This is the output you'll get:
$ kubectl apply -f .
configmap/varnish created
deployment.apps/varnish created
service/varnish created
By running kubectl get all you'll see the status of the deployment.
When running this on your local computer, just call kubectl port-forward service/varnish 8080:80 to port forward the Varnish service to localhost:8080. This allows you to test Varnish on k8s locally by accessing http://localhost:8080.
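Assuming the VCL from the config map above, a quick check of the forwarded service is to curl the synthetic endpoints; both should return an HTTP 200:
curl -i http://localhost:8080/varnish-ping
curl -i http://localhost:8080/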
Run kubectl delete -f . to tear it down again.
Disclaimer
Although these configs were featured in my Varnish 6 by Example book, this is not an official tutorial. These scripts can probably be improved; however, they are a simple way to get started.
Try this Varnish on Kubernetes operator.
I have a Kubernetes deployment on GCP and a ClusterIP service to discover pods in this deployment. The deployment contains multiple replica set pods, which come and go based on our horizontal pod autoscaler configuration (based on CPU utilization).
Now, when a new replica set pod is created, it takes some time before the application starts serving. But the ClusterIP service already starts distributing requests to the new pod before the application is ready, which causes those requests to fail.
ClusterIP service yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: service-name
    tier: backend
    environment: "dev"
    creator: internal
  name: service-name
spec:
  clusterIP: None
  ports:
    - name: https
      protocol: TCP
      port: 7070
      targetPort: 7070
  selector:
    app: dep-name
    tier: "backend"
    environment: "dev"
    creator: "ME"
  type: ClusterIP
How can the ClusterIP be told to start distributing requests to the new pod after the application starts? Can there be any initial delay or liveness probe set for this purpose?
Kubernetes provides readiness probes for this. With readiness probes, Kubernetes will not send traffic to a pod until the probe succeeds. When updating a deployment, it will also leave the old replica(s) running until the probes have succeeded on the new replica. That means that if your new pods are broken in some way, they'll never see traffic; your old pods will continue to serve all traffic for the deployment.
You need to update the deployment file with the following readiness probe:
readinessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
If your application exposes an HTTP health endpoint, you can configure the readiness probe in HTTP mode as well.
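For example, a minimal HTTP-mode sketch, assuming the application serves a health endpoint at /healthz on port 7070 (both values are assumptions; adjust them to your app):
readinessProbe:
  httpGet:
    path: /healthz    # assumed health endpoint exposed by the application
    port: 7070        # matches the targetPort of the service above
  initialDelaySeconds: 5
  periodSeconds: 5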
For more information on how to use readiness probes, refer to:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes
You should have a readiness probe as defined in the documentation at
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes.
As defined in the documentation, you should be able to configure this using initialDelaySeconds and periodSeconds.
Your current behavior is probably because the service load balancer sees that all the containers in the pod have started. You can define your readiness checks like the example below, taken from the documentation.
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
    - name: goproxy
      image: k8s.gcr.io/goproxy:0.1
      ports:
        - containerPort: 8080
      readinessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
I have a situation where I have zero endpoints available for one service. To test this, I specially crafted a yaml descriptor that uses a simple node server to set and retrieve the ready/live status for a pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
        - name: nodejs
          image: nodejs_server
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /is_alive
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /is_ready
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 3
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: nodejs
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  backend:
    serviceName: nodejs-service
    servicePort: 80
The node server has methods to set and retrieve the liveness and readiness.
When the app starts I can see that 3 replicas are created and their status is ready. Then I manually set their readiness status to false [from outside the ingress]. One pod is correctly removed from the endpoints, so no traffic is routed to it [that's OK, as this is the expected behavior]. When I set the readiness status to false for all pods, the endpoints list is empty [still the expected behavior].
At that point I cannot set ready=true from outside the ingress, as traffic is no longer routed to any pod. Is there a way here, for example, to trigger a restart of the pod when readiness is not achieved after n tries or n seconds? Or when the endpoints list is empty?
Well, that is perfectly normal and expected behaviour. What you can do, on the side, is forward traffic from localhost to a particular pod with kubectl port-forward. That way you can access the pod directly, without ingresses etc., and set its readiness back to OK. If you want the pod to restart when it is not ready for too long, just reuse the same endpoint for the liveness probe, but let it trip only after more failed attempts.
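As a rough sketch of both suggestions (the pod name is a placeholder, and the failureThreshold of 10 is an arbitrary example): forward a local port straight to one pod so you can reach it without going through the ingress,
kubectl port-forward <pod-name> 8080:8080
and reuse the readiness endpoint from the question for the liveness probe, with a higher failure threshold so the kubelet restarts a pod that stays not-ready for too long:
livenessProbe:
  httpGet:
    path: /is_ready      # same endpoint the readiness probe uses
    port: 8080
  periodSeconds: 10
  failureThreshold: 10   # example: restart only after ~100s of failures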
I am trying to set up an Ingress in GCE Kubernetes, but when I visit the IP address and path combination defined in the Ingress, I keep getting a 502 error:
Here is what I get when I run: kubectl describe ing --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Address: 35.186.221.153
Default backend: default-http-backend:80 (10.0.8.5:8080)
TLS:
dpl-identity terminates
Rules:
Host Path Backends
---- ---- --------
*
/api/identity/* dpl-identity:4000 (<none>)
Annotations:
https-forwarding-rule: k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
https-target-proxy: k8s-tps-dpl-staging-dpl-identity--5fc40252fadea594
url-map: k8s-um-dpl-staging-dpl-identity--5fc40252fadea594
backends: {"k8s-be-31962--5fc40252fadea594":"HEALTHY","k8s-be-32396--5fc40252fadea594":"UNHEALTHY"}
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 {loadbalancer-controller } Normal ADD dpl-staging/dpl-identity
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 35.186.221.153
15m 6m 4 {loadbalancer-controller } Normal Service no user specified default backend, using system default
I think the problem is dpl-identity:4000 (<none>). Shouldn't I see the IP address of the dpl-identity service instead of <none>?
Here is my service description: kubectl describe svc --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Selector: app=dpl-identity
Type: NodePort
IP: 10.3.254.194
Port: http 4000/TCP
NodePort: http 32396/TCP
Endpoints: 10.0.2.29:8000,10.0.2.30:8000
Session Affinity: None
No events.
Also, here is the result of executing: kubectl describe ep -n dpl-staging dpl-identity
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Subsets:
Addresses: 10.0.2.29,10.0.2.30
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
http 8000 TCP
No events.
Here is my deployment.yaml:
apiVersion: v1
kind: Secret
metadata:
  namespace: dpl-staging
  name: dpl-identity
type: Opaque
data:
  tls.key: <base64 key>
  tls.crt: <base64 crt>
---
apiVersion: v1
kind: Service
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
spec:
  type: NodePort
  ports:
    - port: 4000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: dpl-identity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: dpl-identity
  rules:
    - http:
        paths:
          - path: /api/identity/*
            backend:
              serviceName: dpl-identity
              servicePort: 4000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: dpl-staging
  name: dpl-identity
  labels:
    app: dpl-identity
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dpl-identity
    spec:
      containers:
        - image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
          name: dpl-identity
          ports:
            - containerPort: 8000
              name: http
          volumeMounts:
            - name: dpl-identity
              mountPath: /data
      volumes:
        - name: dpl-identity
          secret:
            secretName: dpl-identity
Your backend k8s-be-32396--5fc40252fadea594 is showing as "UNHEALTHY".
Ingress will not forward traffic if the backend is UNHEALTHY; this results in the 502 error you are seeing.
It is being marked as UNHEALTHY because it is not passing its health check. You can check the health check settings for k8s-be-32396--5fc40252fadea594 to see whether they are appropriate for your pod; they may be polling a URI or port that is not returning a 200 response. You can find these settings under Compute Engine > Health Checks.
If they are correct, then there are many steps between your browser and the container that could be passing traffic incorrectly. You could try kubectl exec -it PODID -- bash (or ash if you are using Alpine) and then curl localhost to see if the container is responding as expected. If it is, and the health checks are also configured correctly, that narrows the issue down to your service; you could then try changing the service from a NodePort type to a LoadBalancer and see if hitting the service IP directly from your browser works.
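As an illustrative version of that check (the pod name is a placeholder, port 8000 comes from the endpoints output above, and the exact path depends on what your app serves):
kubectl exec -it <pod-name> -n dpl-staging -- bash
# inside the container:
curl -i http://localhost:8000/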
I was having the same issue. It turned out I had to wait a few minutes for the ingress to validate the service health. If someone is running into the same thing and has done all the steps like readinessProbe and livenessProbe, just ensure your ingress is pointing to a NodePort service, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs on Stackdriver to get a better idea of what's going on. My readinessProbe and livenessProbe are on /login, for the gce class, so I don't think it has to be on /healthz.
The issue was indeed the health check, and it seemed "random" for my apps, where I used name-based virtual hosts to reverse-proxy requests from the ingress via domains to two separate backend services. Both were secured using Let's Encrypt and kube-lego. My solution was to standardize the path used for health checks across all services sharing an ingress, and to declare the readinessProbe and livenessProbe configs in my deployment.yml file.
I faced this with Google Cloud cluster node version 1.7.8 and found this issue, which closely resembled what I experienced:
* https://github.com/jetstack/kube-lego/issues/27
I'm using gce and kube-lego, and my backend service health checks were on / while kube-lego's is on /healthz. It appears that differing health-check paths with the gce ingress might be the cause, so it may be worth updating the backend services to match the /healthz pattern so they all use the same path (or, as one commenter in the GitHub issue stated, updating kube-lego to pass on /).
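As a rough sketch of that standardization (the port here is an assumption; the key point is that every service behind the shared ingress probes the same path), each deployment would declare something like:
readinessProbe:
  httpGet:
    path: /healthz   # same path for every service behind the shared ingress
    port: 8080       # assumed container port
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080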
I had the same problem, and it persisted after I enabled the livenessProbe as well as the readinessProbe.
It turned out this was to do with basic auth. I had added basic auth to the livenessProbe and the readinessProbe, but it turns out the GCE HTTP(S) load balancer doesn't have a configuration option for that.
There seem to be a few other kinds of issues too; e.g., setting the container port to 8080 and the service port to 80 didn't work with the GKE ingress controller (though I couldn't clearly pin down what the problem was). Broadly, it looks to me like there is very little visibility, and running your own ingress controller is a better option in that respect.
I picked Traefik for my project; it worked out of the box, and I'd like to enable its Let's Encrypt integration. The only change I had to make to the Traefik manifests was tweaking the service object to disable access to the UI from outside the cluster and to expose my app through an external load balancer (GCE TCP LB). Also, Traefik is more native to Kubernetes. I tried Heptio Contour, but something didn't work out of the box (I'll give it another go when the next version comes out).
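For reference, a rough sketch of that kind of service split (labels, ports, and names here are generic placeholders, not the actual Traefik manifests): keep the dashboard on a ClusterIP service so it is only reachable inside the cluster, and expose the web entrypoint through a LoadBalancer service, which GKE backs with a GCE TCP load balancer.
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
spec:
  type: ClusterIP          # dashboard stays internal to the cluster
  selector:
    app: traefik
  ports:
    - name: dashboard
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer       # provisions a GCE TCP load balancer on GKE
  selector:
    app: traefik
  ports:
    - name: web
      port: 80
      targetPort: 80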
I had the same issue. It turned out that the pod itself was running OK, which I tested via port-forwarding and accessing the health-check URL.
Port-forwarding can be started from the console as follows:
$ kubectl port-forward <pod-name> local-port:pod-port
So if the pod is running OK and the ingress still shows an unhealthy state, there might be an issue with your service configuration. In my case my app selector was incorrect, causing it to select a non-existent pod. Interestingly, this isn't shown as an error or alert in the Google console.
Definition of the pods:
# pod-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <pod-name>
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app: <pod-name>   # must match the selector in the service below
  template:
    metadata:
      labels:
        app: <pod-name>
    spec:
      # spec definition follows

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: <name-of-service-here>
  namespace: <namespace>
spec:
  type: NodePort
  selector:
    app: <pod-name>     # must match the pod label in the deployment above
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: <port-name-here>
The "Limitations" section of the kubernetes documentation states that:
All Kubernetes services must serve a 200 page on '/', or whatever custom value you've specified through GLBC's --health-check-path argument.
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#limitations
I solved the problem by:
1. Removing the service from the ingress definition
2. Deploying the ingress: kubectl apply -f ingress.yaml
3. Adding the service back to the ingress definition
4. Deploying the ingress again
Essentially, I followed Roy's advice and tried to turn it off and on again.
The logs can be read from Stackdriver Logging; in my case it was a backend_timeout error. After increasing the default timeout (30s) via a BackendConfig, it stopped returning 502, even under load.
More on:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#creating_a_backendconfig
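A sketch of such a BackendConfig, following the linked documentation; the name my-backendconfig, the 120-second timeout, and the service it is attached to are illustrative values:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 120          # raise the default 30s backend service timeout
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080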
I fixed this issue after adding the following readiness and liveness probes with successThreshold: 1 and failureThreshold: 3. I also set initialDelaySeconds to 70 because sometimes an application responds a bit late; it may vary per application.
NOTE: Also ensure that the path in httpGet exists in your application (in my case /api/books); otherwise GCP probes the /healthz path, which is not guaranteed to return 200 OK.
readinessProbe:
  httpGet:
    path: /api/books
    port: 80
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  initialDelaySeconds: 70
  timeoutSeconds: 60
livenessProbe:
  httpGet:
    path: /api/books
    port: 80
  initialDelaySeconds: 70
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  timeoutSeconds: 60
I was able to sort this out after struggling a lot and trying many things.
Keep learning and sharing!
I had the same issue when I was using the wrong image; requests couldn't be served because the configurations were different.