OpenShift - liveness probe not working for HTTP - Kubernetes

I have configured a liveness probe using httpGet, but it is failing with the error *Client.Timeout exceeded while awaiting headers*.
However, the same API works fine both inside the container (using curl) and outside the container (using Postman).
I have tried adding the host attribute to the liveness probe, but no luck.
Any idea what's going wrong?
Liveness Probe:
livenessProbe:
  initialDelaySeconds: 45
  periodSeconds: 10
  httpGet:
    path: /health
    port: xxxxx
  timeoutSeconds: 5
Version details:
OpenShift Master->v3.9.0+ba7faec-1
Kubernetes Master->v1.9.1+a0ce1bc657
OpenShift Web Console->v3.9.0+b600d46-dirty

Try increasing the initialDelaySeconds, double-check the probe port, and check for anything transient (such as PVC mounts) causing slow loading:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 200
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 5
PS: For the probe to be considered successful, your HTTP response status must be greater than or equal to 200 and less than 400.
Hope this helps.

Related

Liveness/Readiness probe failure for bitnami/zookeeper and bitnami/kafka image

I am trying to add liveness and readiness probes for ZooKeeper using the bitnami/zookeeper image, but the pod creation is failing. Please let me know what values need to be added to the liveness and readiness probes.
Below are the values I have tried.
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
I am getting the below error.
[spec.containers[0].livenessProbe: Required value: must specify a handler type, spec.containers[0].readinessProbe: Required value: must specify a handler type]
Kubernetes probes such as the livenessProbe and readinessProbe require a handler, which performs the actual check. Kubernetes supports multiple handler types, e.g. an HTTP request probe (httpGet), a command probe (exec), or a TCP socket probe (tcpSocket).
You can find all supported handler types in the documentation.
Please note that the handler configuration is required and there is no default handler type.
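For example, a minimal sketch of a working configuration with a tcpSocket handler (assuming ZooKeeper's default client port 2181; adjust to your image) might look like this:
livenessProbe:
  tcpSocket:
    port: 2181  # assumed ZooKeeper client port
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  tcpSocket:
    port: 2181  # assumed ZooKeeper client port
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
Note that the enabled: true flag in the values you posted is a Helm chart convention, not a pod-spec field; if you are deploying through the Bitnami chart, the chart itself should render the handler for you.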

Remove Kubernetes Readiness Probe

My deployment had a readinessProbe configured like:
readinessProbe:
  port: 8080
  path: /ready
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 10
  timeoutSeconds: 15
I want to remove the probe for some reason. However, after removing it from my YAML file, my deployment is not successful because it looks like the pod is never considered ready. Checking in GCP, I discovered that the resulting YAML file has a readiness probe pointing to some "default values" that I haven't set anywhere:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /ready
    port: 80
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
Is there a way to actually remove a ReadinessProbe for good?
You need to set readinessProbe to a null value, like this:
readinessProbe: null
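For example, a minimal sketch of applying this as a strategic merge patch (the deployment and container names here are hypothetical; in a strategic merge patch a null value deletes the field):
# patch.yaml
spec:
  template:
    spec:
      containers:
      - name: my-container  # hypothetical container name
        readinessProbe: null
applied with something like `kubectl patch deployment my-deployment --patch-file patch.yaml`.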
I've experienced the same problem when trying to remove a readinessProbe. The only way I have found to successfully do this is by first deleting the deployment and then applying the deployment YAML to the cluster. This causes some downtime but removes the probe for good.
`kubectl delete deployment <deployment-name>`
`kubectl apply -f deployment.yaml`
One possible way to remove the probe is to modify the configuration file (YAML file) and disable the readiness probe.
You can do this by adding enabled: false to the configuration file (note that an enabled flag is a Helm chart values convention, not a field in the core pod spec). For example,
readinessProbe:
  enabled: false
  path: /ready
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 10
  timeoutSeconds: 15
If you are unable to modify the configuration file, then, as mentioned by @G. Rafael, try re-creating the deployment and applying the deployment configuration file to the cluster.

OpenShift readiness and liveness probes never failing, even with an incorrect HTTP URL

I am running a Spring Boot 2.0.0 application inside an OpenShift pod. To execute the readiness and liveness probes, I am relying on Spring Boot Actuator health checks. My application properties file has the following properties:
server.address=127.0.0.1
management.server.address=0.0.0.0
server.port=8080
management.server.port=8081
management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true
management.endpoint.health.enabled=true
management.endpoints.web.exposure.include=health,info
management.endpoint.health.show-details=always
management.security.enabled=false
Below is the related configuration of the readiness and liveness probes.
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /mmcc
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 35
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /jjkjk
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
My expectation is that my readiness and liveness probes should fail with these random URLs, but they are succeeding.
Not sure what I am missing here. Kindly help.
The answer by Simon gave me a starting point, and I looked at the curl -vvv localhost:8081/jjkjk output.
The URL was redirecting me to the login URL, so I figured this is because I have Spring Security on my classpath. Since an HTTP probe treats any status code from 200 up to (but not including) 400 as success, the redirect to the login page made the probes pass.
So I added to my properties file:
management.endpoints.web.exposure.include=health,info
and added:
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class ActuatorSecurity extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Permit unauthenticated access to all actuator endpoints
        http.requestMatcher(EndpointRequest.toAnyEndpoint()).authorizeRequests()
            .anyRequest().permitAll();
    }
}
and this resolved my problem by enabling access to the URLs without credentials. (Note that management.security.enabled=false from the original properties has no effect in Spring Boot 2.x, which is why the explicit security configuration is needed.)

How to put two ports in "livenessProbe"?

My legacy server listens on two TCP ports. I want to put a livenessProbe and readinessProbe on both ports. For a single port, it looks like the following. How can I do it for two ports?
livenessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5
readinessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 5
The Kubernetes Pod object does not allow more than one liveness probe and one readiness probe per container. If the pod contains multiple containers, you can define a liveness/readiness probe for each container.
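If both ports belong to the same container, one possible workaround (not mentioned in the answer above; it assumes a shell and nc are available in the image, and 15773 stands in for your hypothetical second port) is a single exec probe that checks both ports:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    # succeeds only if both ports accept TCP connections
    - nc -z localhost 15772 && nc -z localhost 15773
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5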

Should I create an API for the readiness probe to work in Kubernetes?

I am trying to create a RollingUpdate and am using the code below to see whether the pod came up or not. Should I create an explicit API path like /healthz in my application so that Kubernetes pings it and gets a 200 status back, or is it an internal URL that Kubernetes provides?
spec:
  containers:
  - name: liveness
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
As @Thomas answered regarding the HTTP probe: if the application does not provide an endpoint that returns a success response, you can use a TCP probe instead.
The kubelet tries to establish a TCP connection on the container's port. If it can establish a connection, the container is considered healthy; if it can't, it is considered unhealthy.
For example, in your case it would look like this:
ports:
- containerPort: 80
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
You can find further information here: configure-liveness-readiness-probes/
Kubernetes will make a request to the container on port 80 and path /healthz, and it expects a status code in the 2xx-3xx range for the check to be considered successful.
If your application does not provide a mapping for the path and returns a 404, Kubernetes assumes that the health check fails.
Depending on your application, you may need to provide the API manually if it is not already done by your framework. (You can check using curl or wget on the path from another pod and verify the result.)
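If your framework does not provide one out of the box, a minimal hypothetical /healthz endpoint, sketched here with Spring Boot to match the earlier question (the class name is made up), could look like this:
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    // Returns 200 OK so the httpGet probe on /healthz succeeds.
    @GetMapping("/healthz")
    public ResponseEntity<String> healthz() {
        // Real dependency checks (database, downstream services, ...) could go here.
        return ResponseEntity.ok("OK");
    }
}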