Kubernetes health check: look for a string in the response

I have a container that has a ping endpoint (it returns pong), and I want to probe the ping endpoint and see if I get a pong back. If it were just a matter of checking for a 200, I could have added a liveness probe to my pod like this:
livenessProbe:
  initialDelaySeconds: 2
  periodSeconds: 5
  httpGet:
    path: /ping
    port: 9876
How do I modify this to check whether I get a pong response back?

As the HTTP probe only checks the status code of the response, you need to use the exec probe to run a command in the container. Something like this, which requires curl to be installed in the container:
livenessProbe:
  initialDelaySeconds: 2
  periodSeconds: 5
  exec:
    command:
    - sh
    - -c
    - curl -s http://localhost:9876/ping | grep pong

The httpGet livenessProbe and readinessProbe only care about the HTTP response code:
Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
It is better to change your pong endpoint to set the appropriate HTTP status code on the response.
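As a minimal sketch of that approach, assuming a Go service (the healthy() function here is a hypothetical stand-in for your real health logic, not part of the original question):
package main

import "net/http"

// healthy is a hypothetical stand-in for the real health logic.
func healthy() bool { return true }

func main() {
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		if !healthy() {
			// Any code outside 200-399 makes an httpGet probe fail.
			http.Error(w, "down", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("pong")) // 200 OK is written implicitly
	})
	http.ListenAndServe(":9876", nil)
}
With this, the plain httpGet probe from the question works unchanged, and no curl or grep is needed in the container.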

Related

Kubernetes - How to read response body in livenessProbe of a container?

Below is the current configuration for livenessProbe:
livenessProbe:
  httpGet:
    path: /heartbeat
    port: 8000
  initialDelaySeconds: 2
  timeoutSeconds: 2
  periodSeconds: 8
  failureThreshold: 2
But the response body for the URL .well-known/heartbeat shows status: "DOWN", while the HTTP return status is 200.
So the kubelet does not restart the container, because the HTTP response status is 200.
How can I make the kubelet read the response body instead of the HTTP return status, using the livenessProbe configuration?
You can interpret the body in your probe using a shell command, for example:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - curl -s http://localhost:8000/heartbeat | grep 'status: "UP"'
grep returns non-zero if the body contains status: "DOWN", which makes the livenessProbe fail. You can of course adjust the script to match your actual response body.
How can I make the kubelet read the response body instead of the HTTP return status, using the livenessProbe configuration?
That is not part of the "contract" provided by Kubernetes. You would need to implement a custom endpoint that follows the contract for HTTP liveness probes, as below.
From Define an HTTP liveness probe:
If the handler returns a failure code, the kubelet kills the container and restarts it.
Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
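As a sketch of such a custom endpoint, assuming a Go service (heartbeatStatus() is a hypothetical stand-in for the existing heartbeat logic; the real application may differ):
package main

import "net/http"

// heartbeatStatus is a hypothetical stand-in for the existing check
// that currently reports "UP" or "DOWN" in the response body.
func heartbeatStatus() string { return "UP" }

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if heartbeatStatus() == "DOWN" {
			// 503 is outside 200-399, so the kubelet restarts the container.
			http.Error(w, `status: "DOWN"`, http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte(`status: "UP"`)) // 200 OK: probe passes
	})
	http.ListenAndServe(":8000", nil)
}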

Kubernetes external probes

Is it possible to define an external path, for example another web server, as the target for the web probes?
Or a TCP probe with a different IP?
livenessProbe:
  httpGet:
    path: external.de/test
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
I know that's not how you should use probes, but I need it for testing.
Does someone know how to define probes that are not applied to the pod itself?
You can use the following command in your liveness probe:
livenessProbe:
  exec:
    command:
    - curl
    - external.de/test:8080
  initialDelaySeconds: 10
  periodSeconds: 10
In this case, if the curl external.de/test:8080 command returns with an exit code of 0, the container is considered healthy; any other exit code is deemed unhealthy.
Also keep in mind that once the probe fails, the pod running this probe will be restarted, not the one running the external.de/test:8080 web server.
More details on how to use a command within a liveness probe are described here.
If you want to achieve that, you cannot use the http probe.
You have to use the exec one, pointing at a simple shell script that executes cURL on your behalf; you can mount the script via a ConfigMap or a hostPath volume to perform your testing, as sketched below.
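A minimal sketch of the ConfigMap variant; the ConfigMap name, mount path, and script contents are assumptions, not a definitive setup:
apiVersion: v1
kind: ConfigMap
metadata:
  name: probe-script        # hypothetical name
data:
  check.sh: |
    #!/bin/sh
    # -f makes curl exit non-zero on HTTP errors, which fails the probe
    curl -sf http://external.de/test:8080
Then in the container spec:
    volumeMounts:
    - name: probe-script
      mountPath: /probes
    livenessProbe:
      exec:
        command: ["sh", "/probes/check.sh"]
      initialDelaySeconds: 10
      periodSeconds: 10
And at pod level:
  volumes:
  - name: probe-script
    configMap:
      name: probe-script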

exec probe in GKE

I'm trying to use exec probes for readiness and liveness in GKE, because that is part of Kubernetes' recommended way to do health checks on gRPC back ends. However, when I put the exec probe config into my deployment YAML and apply it, it doesn't take effect in GCP. This is my container YAML:
- name: rev79-uac-sandbox
  image: gcr.io/rev79-232812/uac:latest
  imagePullPolicy: Always
  ports:
  - containerPort: 3011
  readinessProbe:
    exec:
      command: ["bin/grpc_health_probe", "-addr=:3011"]
    initialDelaySeconds: 5
  livenessProbe:
    exec:
      command: ["bin/grpc_health_probe", "-addr=:3011"]
    initialDelaySeconds: 10
But the health checks still fail, and when I look at the health check configuration in the GCP console I see a plain HTTP health check directed at '/'.
When I edit a health check in the GCP console there doesn't seem to be any way to choose an exec type. I also can't see any mention of liveness checks as opposed to readiness checks, even though these are separate concepts in Kubernetes.
Does Google cloud support using exec for health checks?
If so, how do I do it?
If not, how can I health check a gRPC server?
TCP probes are useful for gRPC services, where plain HTTP probes do not apply:
ports:
- containerPort: 3011
readinessProbe:
  tcpSocket:
    port: 3011
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 3011
  initialDelaySeconds: 15
  periodSeconds: 20
The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
See Define a TCP liveness probe.
Exec probes work in GKE just the same way they work everywhere else. You can view the liveness probe result with kubectl describe pod, or you can simply exec into the pod, run the command, and check its exit code.
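For example, a quick manual check (the pod name is a placeholder):
kubectl exec -it <pod-name> -- bin/grpc_health_probe -addr=:3011
echo $?   # 0 means the probe command passed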
The server has to implement the gRPC health checking protocol, as indicated in this article.
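A minimal sketch of that protocol in Go, using the standard health service from google.golang.org/grpc/health (the port matches the probe config above; everything else is illustrative):
package main

import (
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	lis, err := net.Listen("tcp", ":3011")
	if err != nil {
		panic(err)
	}
	s := grpc.NewServer()

	// Register the standard health service that grpc_health_probe queries.
	hs := health.NewServer()
	hs.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)
	healthpb.RegisterHealthServer(s, hs)

	// ...register your application services here...

	s.Serve(lis)
}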
Both answers, from Vasily Angapov and Suresh Vishnoi, should in theory work; in my experience, however, they did not.
So my solution was to start another server in my backend container: an HTTP server whose only job is to execute the health check whenever it gets a request, returning a 200 status if the check passes and a 503 if it fails.
I also had to open a second port on my container for that server to listen on; a sketch follows.
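A minimal sketch of such a shim, assuming Go and the standard gRPC health client; port 8080 for the HTTP side and the /healthz path are assumptions, not details from the answer:
package main

import (
	"context"
	"net/http"
	"time"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		// Ask the gRPC server on the main port whether it is serving.
		conn, err := grpc.DialContext(ctx, "localhost:3011", grpc.WithInsecure())
		if err != nil {
			http.Error(w, err.Error(), http.StatusServiceUnavailable)
			return
		}
		defer conn.Close()
		resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
		if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
			http.Error(w, "unhealthy", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK) // 200: probe passes
	})
	http.ListenAndServe(":8080", nil)
}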

Should I create an API for the readinessProbe to work in Kubernetes?

I am trying to create a RollingUpdate and use the code below to see whether the pod came up or not. Should I create an explicit API path like /healthz in my application, so that Kubernetes pings it and gets a 200 status back, or is it an internal URL that Kubernetes provides?
spec:
  containers:
  - name: liveness
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
As Thomas answered regarding the HTTP probe: if the application does not provide an endpoint that returns a success response, you can use a TCP probe.
The kubelet tries to establish a TCP connection on the container's port. If it can establish a connection, the container is considered healthy; if it can't, it is considered unhealthy.
For example, in your case it would look like this:
ports:
- containerPort: 80
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
You can find further information in configure-liveness-readiness-probes.
Kubernetes will make a request to the container on port 80 and path /healthz, and expects a status code in the range 200-399 for the check to be considered successful.
If your application does not provide a mapping for the path and returns a 404, Kubernetes assumes that the health check fails.
Depending on your application, you need to provide the API manually if your framework does not do it for you. (You can check with curl or wget against the path from another pod and verify the result.)
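For example, a one-off check from a throwaway pod (the pod IP is a placeholder; curlimages/curl is just a convenient image choice):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://<pod-ip>/healthz
A 200 here means the readinessProbe above will pass; a 404 means the path is not mapped in your application.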

OpenShift readiness probe not executed

I am running a Spring Boot application inside an OpenShift pod. To execute the readiness and liveness probes, I created an appropriate YAML file. However, the pod fails and reports that it was not able to pass the readiness check (after approximately 5 minutes).
My goal is to execute the readiness probe every 20 minutes. I assume it is failing because initialDelaySeconds and periodSeconds are added together, so the first check after the pod starts would only run after 22 minutes.
The relevant readiness probe configuration follows:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 120
  periodSeconds: 1200
  successThreshold: 1
  timeoutSeconds: 60
Is my assumption right? How can I avoid this (maybe by increasing a kubelet timeout)?
Your configuration is correct; initialDelaySeconds and periodSeconds are not added together. The first readinessProbe HTTP call will happen exactly 2 minutes after you start your pod.
I would look for an issue in the app itself. The first thing that comes to mind is that your path is /actuator/health; shouldn't it be just /health? (That was the default path in Spring Boot 1.x; in Spring Boot 2 the Actuator default is /actuator/health.)
If that doesn't help, the best option is to debug it: exec into your container and use curl to check whether your health endpoint works correctly (it should return HTTP code 200).
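For instance (the pod name is a placeholder):
kubectl exec -it <pod-name> -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/actuator/health
Anything other than a code in the 200-399 range explains why the readiness check keeps failing.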