How to set basic auth to Kubernetes' readinessProbe correctly?
I set this config for Kubernetes' readinessProbe in a Deployment kind:
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 8080
    httpHeaders:
    - name: Authorization
      value: Basic <real base64 encoded data>
After deploying it to GKE, GCP's health check can't pass; it fails to reach the application behind basic authentication.
But from here, it seems this syntax should work. Why doesn't the check pass?
The server side returns a JSON response at the /healthcheck endpoint. Is it also necessary to set Accept or Content-Type in the httpHeaders?
Also, is it better to set this health check as a livenessProbe or a readinessProbe?
According to the Kubernetes docs:
If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's restartPolicy. If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.
Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-liveness-probe
If you'd like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding. If your container needs to work on loading large data, configuration files, or migrations during startup, specify a readiness probe.
Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-readiness-probe
So, you can use a health check according to your needs. In the Kubernetes docs, they give an example of a health check as a liveness probe.
Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request
And it is best practice to set Content-Type when you send a request from something other than a browser or another typical client. I believe setting Content-Type: application/json will solve the issue if everything else is right on the server side.
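As a minimal sketch of that suggestion (reusing the /healthcheck path and the placeholder credentials from the question), the probe could send both headers:

readinessProbe:
  httpGet:
    path: /healthcheck
    port: 8080
    httpHeaders:
    - name: Authorization
      value: Basic <real base64 encoded data>
    - name: Content-Type
      value: application/json
    # an Accept: application/json header could be added the same way if the server requires it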
Related
I'm looking for an approach where we can turn the liveness probe on/off using the ConfigMap UI on Rancher, and at the same time it does not require redeployment of the pod.
In detail: assume the liveness probe is already configured on the pod. Next, I have a boolean flag; if it is set to false, the liveness probe will stop probing the pod. If the flag is later set to true, the liveness probe will resume probing. All of this needs to work without redeploying the pod. Below is my draft idea:
There's a ConfigMap UI (the ConfigMap listed on Rancher) which holds the boolean flag to turn the liveness probe on/off,
something like this:
app.liveness.probe.mode=false
Next, I want to pull the above boolean flag into deployment.yaml with a conditional check, as below:
{{ if app.liveness.probe.mode }}
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 70
  periodSeconds: 10
{{ end }}
I'm not sure how to reference the ConfigMap boolean flag on Rancher in the deployment.yaml file.
Or is there any other approach to toggle the liveness probe on/off without redeploying the pod?
Sounds to me as if you want to temporarily switch off your Liveness probe in order to keep your Pod running, even if the Pod is technically not healthy.
Have you considered using a Readiness probe for this?
So, you remove your Liveness probe, which basically keeps your Pod running as long as your application is running.
You define a Readiness probe that executes HTTP checks against an application endpoint. While that endpoint returns 200, your Pod receives traffic. If you want to take traffic off the Pod while keeping it running, your readiness probe has to return an error code (>=400).
To accomplish this dynamically with ConfigMaps, you could mount a ConfigMap as a volume into your application container, putting a config file onto your container's file system. The config file contains a flag like readinessProbe.active=true. This file is read by your Readiness probe endpoint, which returns a 200 or 500 depending on the value of the boolean.
Naturally, a similar approach could be used to control your Liveness probe endpoint, but the K8s mechanism for keeping Pods running while taking traffic off them is the Readiness probe.
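A rough sketch of that ConfigMap-volume approach (the names probe-toggle, flags.properties, and /etc/config are made up for illustration, not Rancher-specific):

apiVersion: v1
kind: ConfigMap
metadata:
  name: probe-toggle              # hypothetical ConfigMap editable in the Rancher UI
data:
  flags.properties: |
    readinessProbe.active=true
---
# pod template fragment of the Deployment
spec:
  containers:
  - name: app
    image: my-app:latest          # placeholder image
    volumeMounts:
    - name: probe-toggle
      mountPath: /etc/config      # the readiness endpoint reads the flag from this file
  volumes:
  - name: probe-toggle
    configMap:
      name: probe-toggle

Editing the ConfigMap updates the mounted file in place after the kubelet's next sync, so the endpoint can change its answer without redeploying the pod (note that this does not work if the file is mounted via subPath).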
After deploying with Helm charts, I got a CrashLoopBackOff error.
NAME READY STATUS RESTARTS AGE
myproject-myproject-54ff57477d-h5fng 0/1 CrashLoopBackOff 10 24m
Then I described the pod to see the events, and I saw something like the below:
Liveness probe failed: Get http://10.16.26.26:8080/status:
dial tcp 10.16.26.26:8080: connect: connection refused
Readiness probe failed: Get http://10.16.26.26:8080/status:
dial tcp 10.16.26.26:8080: connect: connection refused
Lastly, I saw an invalid_grant error for my GCP cloud proxy in the logs, as below:
time="2020-01-15T15:30:46Z" level=fatal msg=application_main error="Post https://www.googleapis.com/{....blabla.....}: oauth2: cannot fetch token: 400 Bad Request\nResponse: {\n \"error\": \"invalid_grant\",\n \"error_description\": \"Not a valid email or user ID.\"\n}"
However, I checked my service account in IAM, and it has access to the cloud proxy. Furthermore, I tested with the same credentials locally, and the endpoint for the readiness probe worked successfully.
Does anyone have any suggestion about my problem?
You can disable the liveness probe to stop the CrashLoopBackOff, then exec into the container and test from there.
Ideally you should not keep the same config for the liveness and readiness probes. It is not advisable for the liveness probe to depend on anything external; it should just check if the pod is live or not.
Referring to the problem with granting access on GCP: fix this by using the Email Address (the string that ends with ...@developer.gserviceaccount.com) instead of the Client ID for the client_id parameter value. The naming set by Google is confusing.
You can find more information and troubleshooting here: google-oauth-grant.
Referring to the problem with the probes:
Check if the URL is healthy. Your probes may be too sensitive; your application may take a while to start or respond.
Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.
The liveness probe checks if your application is in a healthy state in your already running pod.
The readiness probe will actually check if your pod is ready to receive traffic. Thus, if the probed endpoint does not exist, the pod will never become Ready.
e.g.:
livenessProbe:
  httpGet:
    path: /your-path
    port: 5000
  failureThreshold: 1
  periodSeconds: 2
  initialDelaySeconds: 2
ports:
- name: http
  containerPort: 5000
If the endpoint /your-path does not exist, the pod will never become Ready.
Make sure that you properly set up liveness and readiness probe.
For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the pod's IP address, unless the address is overridden by the optional host field in httpGet. If the scheme field is set to HTTPS, the kubelet sends an HTTPS request, skipping certificate verification. In most scenarios, you do not want to set the host field. Here's one scenario where you would set it: suppose the Container listens on 127.0.0.1 and the Pod's hostNetwork field is true. Then host, under httpGet, should be set to 127.0.0.1. Make sure you did that. If your pod relies on virtual hosts, which is probably the more common case, you should not use host, but rather set the Host header in httpHeaders.

For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which means that you cannot use a service name in the host parameter since the kubelet is unable to resolve it.
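For example, a sketch of the virtual-host case (myapp.example.com is a made-up host name, not from the question):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: myapp.example.com   # hypothetical virtual host the application expects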
The most important thing you need to configure when using liveness probes is the initialDelaySeconds setting.
Make sure that the port the probe targets is actually open on the container.
A liveness probe failure causes the pod to restart, so you need to make sure the probe doesn't start until the app is ready. Otherwise, the app will constantly restart and never become ready!
I recommend using the p99 startup time for initialDelaySeconds.
Take a look here: probes-kubernetes, most-common-fails-kubernetes-deployments.
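As a rough sketch of the timing advice (reusing the /status endpoint and port 8080 from the question; the 90-second delay is only an assumed p99 startup time), the probes could look like this:

readinessProbe:
  httpGet:
    path: /status
    port: 8080
  initialDelaySeconds: 90   # assumed p99 startup time; measure your own application
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /status
    port: 8080
  initialDelaySeconds: 90
  periodSeconds: 10
  failureThreshold: 3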
Consider a pod which has a health check set up via an HTTP endpoint /health on port 80, and which takes almost 60 seconds to actually become ready and serve traffic.
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 60
livenessProbe:
  httpGet:
    path: /health
    port: 80
Questions:
Is my above config correct for the given requirement?
Does the liveness probe start working only after the pod becomes ready? In other words, I assume the readiness probe's job is complete once the pod is ready, and after that the livenessProbe takes care of the health check. In that case, I can ignore initialDelaySeconds for the livenessProbe. But if they are independent, what is the point of doing the livenessProbe check when the pod itself is not ready?
Check this documentation. What do they mean by
If you want your Container to be able to take itself down for
maintenance, you can specify a readiness probe that checks an endpoint
specific to readiness that is different from the liveness probe.
I was assuming the running pod would take itself down only if the livenessProbe fails, not the readinessProbe. The doc says otherwise.
Please clarify!
I'll start by answering the second question:
Does liveness probe start working only after the pod becomes ready?
In other words, I assume readiness probe job is complete once the POD
is ready. After that livenessProbe takes care of health check.
The initial understanding is that the liveness probe starts checking only after the readiness probe has succeeded, but it turns out not to work like that. An issue was opened for this challenge; you can look it up here. The problem was then solved by adding startup probes.
To sum up:
livenessProbe
livenessProbe: Indicates whether the Container is running. If the
liveness probe fails, the kubelet kills the Container, and the
Container is subjected to its restart policy. If a Container does not
provide a liveness probe, the default state is Success.
readinessProbe
readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
startupProbe
startupProbe: Indicates whether the application within the Container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a startup probe, the default state is Success.
Look it up here.
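A small sketch of how a startup probe removes the need to guess initialDelaySeconds (the /healthz path and the 30 x 10 s budget are assumptions, not values from the question):

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allows up to 30 * 10 = 300 seconds for startup
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10      # only starts running after the startup probe has succeeded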
The liveness probes check if the container has started and is alive. If this isn't the case, Kubernetes will eventually restart the container.
The readiness probes in turn also check dependencies like database connections or other services your container depends on to fulfill its work. As a developer you have to invest more time into the implementation here than for the liveness probes alone: you have to expose an endpoint which also checks the mentioned dependencies when queried.
Your current configuration uses a health endpoint which is usually used by liveness probes. It probably doesn't check whether your service is really ready to take traffic.
Kubernetes relies on the readiness probes: during a rolling update, it will keep the old container up and running until the new service declares that it is ready to take traffic. Therefore the readiness probes have to be implemented correctly.
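A sketch of that separation (the /healthz and /ready paths are assumed names, using port 80 from the question):

livenessProbe:
  httpGet:
    path: /healthz   # only checks that the process is alive
    port: 80
readinessProbe:
  httpGet:
    path: /ready     # also checks dependencies such as the database connection
    port: 80
  initialDelaySeconds: 60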
I will show the difference between them in a couple of simple points:
livenessProbe
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
It is used to indicate whether the container has started and is alive or not, i.e. proof of being available.
In the given example, if the request fails, it will restart the container.
If not provided the default state is Success.
readinessProbe
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
It is used to indicate whether the container is ready to serve traffic or not, i.e. proof of being ready to use.
It checks dependencies like database connections or other services your container depends on to fulfill its work.
In the given example, until the request returns Success, the container won't serve any traffic (the Pod's IP address is removed from the endpoints of all Services that match the Pod).
Kubernetes relies on the readiness probes during rolling updates, it keeps the old container up and running until the new service declares that it is ready to take traffic.
If not provided the default state is Success.
Summary
Liveness Probes: Used to check if the container is available and alive.
Readiness Probes: Used to check if the application is ready to be used and serve the traffic.
Both the readiness probe and the liveness probe seem to have the same behavior: they do the same type of checks. But the action they take in case of failure is different.
The readiness probe shuts off traffic from the Service, so that the Service only sends requests to healthy pods, whereas the liveness probe restarts the pod in case of failure. It does not do anything for the Service; the Service continues to send requests to the pod as usual while it is in 'available' status.
It is recommended to use both probes!
Check here for a detailed explanation with code samples.
The Kubernetes platform has capabilities for validating container applications, called health checks. Liveness is proof that the container is available and alive, and readiness is proof that the pod is ready to use.
These features are designed to prevent service downtime and inconsistent images by enabling restarts when needed. Kubernetes uses liveness to know when to restart the container, so it can solve most problems. Kubernetes uses readiness to know when the container is available to accept requests. The pod is considered ready when all of its containers are ready. Therefore, when the pod takes too long to initialize (cache loading, DB schema, etc.), it is recommended to increase initialDelaySeconds.
I'd post it as a comment but it's too long, so let's make it a full answer.
Is my above config correct for the given requirement?
IMHO no: you are missing initialDelaySeconds for both probes, and the liveness and readiness probes probably should not call the same endpoint. I'd use the suggestions from #fgul.
Does liveness probe start working only after the pod becomes ready ?
In other words, I assume readiness probe job is complete once the POD
is ready. After that livenessProbe takes care of health check. In this
case, I can ignore the initialDelaySeconds for livenessProbe. If they
are independent, what is the point of doing livenessProbe check when
the pod itself is not ready! ?
I think you were thinking about startupProbe; again, #fgul described what does what, so there is no point in me repeating it.
I was assuming, the running pod will take itself down only if the
livenessProbe fails. not the readinessProbe. The doc says other way.
The pod can be restarted only based on the livenessProbe, not the readinessProbe.
I'd think twice before binding a readiness probe to external services (keep it to "being alive", as #randy advised), especially in high-load services:
Let's assume you have defined a deployment with lots of pods that connect to a database and process lots of requests.
Now the database goes down.
The readiness probe also checks the db connection, and it marks all of the pods as "out of service".
Now the db comes back up.
The pods' readiness probes will start to pass, but not instantly and not on all pods at once; the pods will be marked as "Ready" one after another.
But it might be too slow: the second the first pod is marked as ready, ALL of the traffic will be sent to this one pod alone. It might end in a situation where the "waking up" pods are killed by the traffic one after another.
For that kind of situation I'd say the readiness probe should check only pod-internal things and not care about external services. The Kubernetes endpoint will return an error, and either the clients can tolerate the failing service (it's called "designed for failure") or the load balancer/ingress can cover it.
I think the below image describes the use-cases for each.
Liveness probes are a relatively specialized tool, and you probably don't want one at all. However, they run totally independently AFAIK.
On OpenShift/Kubernetes, when a readiness check is configured with an HTTP GET on a path, for example for a Spring Boot app with a service and a route, is the HTTP GET request calling the OpenShift service, the route, or something else, and expecting 200-399?
Thanks,
B.
The kubernetes documentation on readiness and liveness probes states that
For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the pod’s IP address, unless the address is overridden by the optional host field in httpGet. [...] In most scenarios, you do not want to set the host field. Here’s one scenario where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod’s hostNetwork field is true. Then host, under httpGet, should be set to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common case, you should not use host, but rather set the Host header in httpHeaders.
So it uses the Pod's IP unless you set the host field on the probe. The service and ingress route are not used here, because the readiness and liveness probes are what decide whether the service or ingress route should send traffic to the Pod.
The HTTP request comes from the Kubelet. Each kubernetes node runs the Kubelet process, which is responsible for node registration, and management of pods. The Kubelet is also responsible for watching the set of Pods that are bound to its node and making sure those Pods are running. It then reports back status as things change with respect to those Pods.
When talking about the HTTP probe, the docs say that
Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
Correct, it is using an HTTP check to determine whether the container is ready to serve requests or not. By default the request is made directly to the Pod IP; when the probe fails, the container's IP is removed from the endpoints of all services. This can be overridden by the host field in the probe definition.
Any response code from 200-399 is considered a success as you have mentioned.
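As a small sketch (the /actuator/health path is an assumption for a typical Spring Boot app, not something given in the question), the kubelet sends this GET straight to the pod's IP on the container port and treats any 200-399 response as success:

readinessProbe:
  httpGet:
    path: /actuator/health   # assumed Spring Boot health endpoint
    port: 8080               # container port, not the service or route
    scheme: HTTP
  periodSeconds: 10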
Sometimes a pod needs some time to "warm up" (for example, to load some data into a cache). During that time it should not be exposed.
How do I prevent a pod from being added to a kube-service until initialization is complete?
You should use health checks. More specifically, in Kubernetes you need a ReadinessProbe.
ReadinessProbe: indicates whether the container is ready to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod’s IP address from the endpoints of all services that match the pod. The default state of Readiness before the initial delay is Failure. The state of Readiness for a container when no probe is provided is assumed to be Success.
Also, the difference from a LivenessProbe:
If you’d like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe. In this case, the ReadinessProbe may be the same as the LivenessProbe, but the existence of the ReadinessProbe in the spec means that the pod will start without receiving any traffic and only start receiving traffic once the probe starts succeeding.
http://kubernetes.io/docs/user-guide/pod-states/
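A minimal sketch of such a probe (the /ready path and the 30-second delay are assumed values for the warm-up scenario, not from the question):

readinessProbe:
  httpGet:
    path: /ready            # should only return 200 once the cache is loaded
    port: 8080
  initialDelaySeconds: 30   # assumed warm-up time
  periodSeconds: 5
  failureThreshold: 3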