Should I create an API for the readinessProbe to work in Kubernetes?

I am trying to set up a RollingUpdate and am using the code below to check whether the pod came up or not. Should I create an explicit API path like /healthz in my application so that Kubernetes pings it and gets a 200 status back, or is it an internal URL handled by Kubernetes itself?
spec:
  containers:
  - name: liveness
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80

As Thomas answered regarding the HTTP probe: if your application does not provide an endpoint that returns a success response, you can use a TCP probe instead.
The kubelet tries to establish a TCP connection on the container's port. If it can establish a connection, the container is considered healthy; if it can't, it is considered unhealthy.
For example, in your case it would look like this:
ports:
- containerPort: 80
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
You can find further information in the Kubernetes docs: configure-liveness-readiness-probes.

Kubernetes will make a request to the container on port 80 and path /healthz and expects a status code in the 2xx-3xx range for the probe to be considered successful.
If your application does not provide a handler for that path and returns a 404, Kubernetes assumes that the health check failed.
Depending on your application, you may need to implement the endpoint yourself if your framework does not provide it. (You can verify it by running curl or wget against the path from another pod and checking the result.)
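For the rolling-update scenario from the original question, a minimal sketch of how such a readiness probe plugs into a Deployment could look like the following (the name, image, and replica count are placeholders, not taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest   # placeholder image
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /healthz          # your application must answer 2xx/3xx here
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
During the rolling update, a new pod only starts receiving traffic (and the old pod is only taken down) once its readiness probe succeeds, so a /healthz endpoint that only returns 200 when the application is actually ready is what makes the RollingUpdate safe.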

Related

Does Cadence Service provide any health check API endpoint

Does the Cadence service provide any health check API endpoint to monitor and ensure its availability?
For a Kubernetes deployment, I've seen this:
livenessProbe:
  initialDelaySeconds: 150
  tcpSocket:
    port: rpc
readinessProbe:
  initialDelaySeconds: 10
  tcpSocket:
    port: rpc
But it's not HTTP.

exec probe in GKE

I'm trying to use exec probes for readiness and liveness in GKE, because this is part of Kubernetes' recommended way to do health checks on gRPC back ends. However, when I put the exec probe config into my deployment YAML and apply it, it doesn't take effect in GCP. This is my container YAML:
- name: rev79-uac-sandbox
  image: gcr.io/rev79-232812/uac:latest
  imagePullPolicy: Always
  ports:
  - containerPort: 3011
  readinessProbe:
    exec:
      command: ["bin/grpc_health_probe", "-addr=:3011"]
    initialDelaySeconds: 5
  livenessProbe:
    exec:
      command: ["bin/grpc_health_probe", "-addr=:3011"]
    initialDelaySeconds: 10
But the health checks still fail, and when I look at the health check configuration in the GCP console I see a plain HTTP health check directed at '/'.
When I edit a health check in the GCP console there doesn't seem to be any way to choose an exec type. I also can't see any mention of liveness checks as distinct from readiness checks, even though these are separate things in Kubernetes.
Does Google Cloud support using exec for health checks?
If so, how do I do it?
If not, how can I health check a gRPC server?
TCP probes are useful when you are running gRPC services, where HTTP probes do not apply.
ports:
- containerPort: 3011
readinessProbe:
  tcpSocket:
    port: 3011
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 3011
  initialDelaySeconds: 15
  periodSeconds: 20
The kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
See the Kubernetes docs: define-a-tcp-liveness-probe.
Exec probes work in GKE just the same way they work everywhere. You can view the liveness probe result in kubectl describe pod, or you can simply log in to the pod, execute the command, and check its return code.
The server has to implement the gRPC health checking protocol, as indicated in this article.
Both answers from Vasily Angapov and Suresh Vishnoi should in theory work; however, in practice they didn't (at least in my practice).
So my solution was to start another server in my backend container: an HTTP server whose only job is to execute the health check whenever it receives a request, returning a 200 status if it passes and a 503 if it fails.
I also had to open a second port on my container for that server to listen on.
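On the Kubernetes side, the container then exposes both ports and points its probes at that extra HTTP server; a minimal sketch, assuming the health server listens on port 8080 and serves /healthz (both values are made up here, not taken from the answer):
- name: rev79-uac-sandbox
  image: gcr.io/rev79-232812/uac:latest
  ports:
  - containerPort: 3011     # gRPC application port
  - containerPort: 8080     # hypothetical port for the extra HTTP health server
  readinessProbe:
    httpGet:
      path: /healthz        # hypothetical path; returns 200 if the check passes, 503 if not
      port: 8080
    initialDelaySeconds: 5
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10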

Custom Health check with GCP

Hi, I am trying to use a custom health check with a GCP load balancer.
I have added a readinessProbe and a livenessProbe like this:
readinessProbe:
  httpGet:
    path: /health
    port: dash
  initialDelaySeconds: 5
  periodSeconds: 1
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
livenessProbe:
  httpGet:
    path: /health
    port: dash
  initialDelaySeconds: 5
  periodSeconds: 1
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
But when I create my Ingress, my custom health check is not picked up (screenshot: load balancer path).
I finally found the answer: what I was trying to do was impossible. My GCE Ingress used a backend on port 80, but in my readinessProbe I told it to check port 8080 and the /health path. That cannot work.
The port of the service declared in the Ingress backend must be the same as the one declared in the readinessProbe; only the path can be different. If this pattern is not respected, / is used as the GCP health check path.
From a network point of view this is logical: the GCP health check lives outside the Kubernetes cluster, so if we tell it to route to port 80 while our readinessProbe is on another port, there is no guarantee that a pod answering on the probe port also answers on port 80, which is the port the load balancer must actually route traffic to.
In summary, the backend port declared in the Ingress must have a readinessProbe on that same port; the only thing we can customize is the path.
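A minimal sketch of that pattern, with hypothetical resource names; the only thing that matters is that the readinessProbe, the Service target, and the Ingress backend all agree on the port:
# container spec fragment (inside the Deployment's pod template):
readinessProbe:
  httpGet:
    path: /health           # a custom path is fine
    port: 80                # must be the same port the Ingress backend routes to
---
apiVersion: v1
kind: Service
metadata:
  name: dash-service        # hypothetical name
spec:
  type: NodePort
  selector:
    app: dash
  ports:
  - port: 80
    targetPort: 80          # the port the readinessProbe checks
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dash-ingress        # hypothetical name
spec:
  backend:
    serviceName: dash-service
    servicePort: 80         # same port again; only the path may differ
With this alignment, the GCE Ingress controller derives the health check from the readinessProbe and uses /health instead of the default /.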
I think you are confusing GCP resources.
The code you posted is not related to a load balancer resource at all; it is a Kubernetes health check that drives pod state. If you want to know whether the probes are working, check your pod's state; if it is not Running, describe the pod and look at the events and logs, which should indicate a problem with the probes.
I'm going to guess that you have an Ingress resource somewhere in your Kubernetes config which creates the load balancer and all the resources around it, such as the health check (still guessing that the image you posted relates to that).
If you are using GKE, you should leave the Google-managed resources created from your Kubernetes config as they are, because you may break things that Google is maintaining for you.

Openshift - liveness probe not working for http

I have configured a liveness probe using httpGet, but it fails with the error *Client.Timeout exceeded while awaiting headers*.
The same API works fine inside the container (using curl) and outside the container (Postman).
I have tried adding the host attribute to the liveness probe, but no luck.
Any idea what's going wrong?
Liveness Probe:
livenessProbe:
  initialDelaySeconds: 45
  periodSeconds: 10
  httpGet:
    path: /health
    port: xxxxx
  timeoutSeconds: 5
Version details:
OpenShift Master->v3.9.0+ba7faec-1
Kubernetes Master->v1.9.1+a0ce1bc657
OpenShift Web Console->v3.9.0+b600d46-dirty
Try increasing initialDelaySeconds, and check for any dependencies (such as a PVC) that make the application slow to start:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 200
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 5
PS: For the probe to be considered successful, the HTTP status code must be greater than or equal to 200 and less than 400.
Hope this helps.

Accessing service health checks ports after configuring istio

So we're deploying Istio 1.0.2 with global mTLS, and so far it's gone well.
For health checks we've added separate ports to the services and configured them as per the docs:
https://istio.io/docs/tasks/traffic-management/app-health-check/#mutual-tls-is-enabled
Our application ports are now on 8080 and the health check ports are on 8081.
After doing this Kubernetes is able to do health checks and the services appear to be running normally.
However, our monitoring solution cannot hit the health check port.
The monitoring application also sits in Kubernetes and is currently outside the mesh. The doc above says the following:
Because the Istio proxy only intercepts ports that are explicitly declared in the containerPort field, traffic to the 8002 port bypasses the Istio proxy regardless of whether Istio mutual TLS is enabled.
This is how we have it configured, so in our case 8081 should be outside the mesh:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 180
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: <our-service>
ports:
- containerPort: 8080
  name: http
  protocol: TCP
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
However, we can't access 8081 from another pod that is outside the mesh.
For example:
curl http://<our-service>:8081/manage/health
curl: (7) Failed connect to <our-service>:8081; Connection timed out
If we try from another pod inside the mesh, Istio throws back a 404, which is perhaps expected.
I tried to play around with destination rules like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <our-service>-health
spec:
  host: <our-service>.namespace.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8081
      tls:
        mode: DISABLE
But that just kills all connectivity to the service, both internally and through the ingress gateway.
According to the official Istio documentation, port 8081 will not go through the Istio Envoy proxy, and hence won't be accessible to the other pods outside your service mesh, because the Istio proxy only considers the ports declared in the pod's containerPort field.
If you build your Istio service mesh without mutual TLS authentication between pods, there is an option to use the same port for the pod's regular service traffic and for the readiness/liveness probes (see the sketch below).
However, if you use port 8001 for both regular traffic and liveness probes, health check will fail when mutual TLS is enabled because the HTTP request is sent from Kubelet, which does not send client certificate to the liveness-http service.
Since Istio Mixer provides three Prometheus endpoints, you can also consider using Prometheus as the main monitoring tool in order to collect and analyze the mesh metrics.
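For completeness, a minimal sketch of the same-port option mentioned above, applicable only when mutual TLS is disabled; the 8080 port and /manage/health path are taken from the question:
ports:
- containerPort: 8080
  name: http
  protocol: TCP
readinessProbe:
  httpGet:
    path: /manage/health
    port: 8080              # same port as regular traffic; only works without mutual TLS
  initialDelaySeconds: 10
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /manage/health
    port: 8080
  initialDelaySeconds: 180
  periodSeconds: 10
With mutual TLS enabled this fails for the reason quoted above, so the separate health check port (or Prometheus-based monitoring) remains the safer choice.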