How to put two ports in "livenessProbe"? - kubernetes

My legacy server listens on two TCP ports. I want to put a livenessProbe and a readinessProbe on both ports. For a single port it looks like the following. How do I do it for two ports?
livenessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5
readinessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 5

The Kubernetes Pod object does not allow more than one livenessProbe and one readinessProbe per container. If the Pod contains multiple containers, you can define a liveness/readiness probe for each container.
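If both ports belong to the same container, one workaround is a single exec probe whose command checks both ports. The sketch below assumes the image ships bash (for the /dev/tcp redirection) and uses 15773 as a placeholder for the second port, which the question does not name:

livenessProbe:
  exec:
    command:
      - bash
      - -c
      # fail the probe if either TCP connection cannot be opened
      - "</dev/tcp/127.0.0.1/15772 && </dev/tcp/127.0.0.1/15773"
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5

A readinessProbe can use the same exec handler. The trade-off is that a failure on either port affects the whole container, since Kubernetes cannot attribute the failure to a single port.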

Related

Liveness/Readiness probe failure for bitnami/zookeeper and bitnami/kafka image

I am trying to add liveness and readiness probes for ZooKeeper using the bitnami/zookeeper image, but the pod creation is failing. Please let me know what values need to be added to the liveness and readiness probes.
Below are the values that I have tried with.
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
I am getting the below error.
[spec.containers[0].livenessProbe: Required value: must specify a handler type, spec.containers[0].readinessProbe: Required value: must specify a handler type]
Kubernetes probes such as livenessProbe and readinessProbe require a handler, which performs the actual check. Kubernetes supports multiple handler types, e.g. an HTTP request probe or a command (exec) probe; there are additional handler types such as TCP probes.
You can find all supported handler types in the documentation.
Please note that the handler configuration is required and there is no default handler type.
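A minimal sketch of probes with explicit handlers, assuming a plain container spec, ZooKeeper's default client port 2181, nc available in the image, and the ruok four-letter command being whitelisted:

livenessProbe:
  tcpSocket:
    port: 2181            # TCP handler: succeeds if the port accepts a connection
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  exec:                   # exec handler: succeeds if the command exits 0
    command: ["/bin/bash", "-c", "echo ruok | nc -w 2 localhost 2181 | grep imok"]
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

If the values above are meant for the Bitnami Helm chart (as the enabled: true keys suggest), the chart itself is typically responsible for supplying the handler, so it is worth checking the chart's values reference for the expected keys.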

Remove Kubernetes Readiness Probe

My deployment had a readinessProbe configured like:
readinessProbe:
  port: 8080
  path: /ready
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 10
  timeoutSeconds: 15
I want to remove the probe for some reason. However, after removing it from my YAML file, my deployment is not successful because it looks like the pod is never considered ready. Checking in GCP, I discovered that the resulting YAML has a readiness probe pointing to some "default values" that I never set anywhere:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /ready
    port: 80
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
Is there a way to actually remove a ReadinessProbe for good?
You need to set readinessProbe to a null value, like this:
readinessProbe: null
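In the container spec of the applied manifest, that looks like the sketch below (container name and image are placeholders); the explicit null is what tells kubectl apply to clear the field on the live object instead of leaving the existing value in place:

spec:
  template:
    spec:
      containers:
        - name: my-app        # placeholder
          image: my-app:1.0   # placeholder
          readinessProbe: null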
I've experienced the same problem when trying to remove a readinessProbe. The only way I have found to successfully do this is by first deleting the deployment and then applying the deployment YAML to the cluster. This causes some downtime but removes the probe for good.
`kubectl delete deployment <deployment-name>`
`kubectl apply -f deployment.yaml`
One possible way to remove the probe is to modify the configuration (YAML) file and disable the readiness probe.
You can do this by adding enabled: false to the configuration file. For example,
readinessProbe:
  enabled: false
  path: /ready
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 10
  timeoutSeconds: 15
If you are unable to modify the configuration file then, as mentioned by @G. Rafael, try re-creating the deployment and applying the deployment configuration file to the cluster.

Openshift readiness and liveness probe never failing even with incorrect http url

I am running a Spring Boot 2.0.0 application inside an OpenShift pod. To execute the readiness and liveness probes, I am relying on Spring Boot Actuator health checks. My application properties file has the following properties:
server.address=127.0.0.1
management.server.address=0.0.0.0
server.port=8080
management.server.port=8081
management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true
management.endpoint.health.enabled=true
management.endpoints.web.exposure.include=health,info
management.endpoint.health.show-details=always
management.security.enabled=false
The following is the related configuration of the readiness and liveness probes.
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /mmcc
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 35
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /jjkjk
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
My expectation is that my readiness and liveness probes should fail with these random URLs, but they are succeeding.
Not sure what I am missing here. Kindly help.
The answer by Simon gave me a starting point, and I looked at the curl -vvv localhost:8081/jjkjk output.
The URL was redirecting me to the login URL (a redirect is a 3xx status, which still counts as a successful probe), so I figured this was because I have Spring Security on my classpath.
So I added to my properties file:
management.endpoints.web.exposure.include=health,info
and added
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class ActuatorSecurity extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Permit unauthenticated access to all actuator endpoints so the probes can reach them.
        http.requestMatcher(EndpointRequest.toAnyEndpoint()).authorizeRequests()
            .anyRequest().permitAll();
    }
}
and this resolved my problem by allowing access to the URL without credentials.
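With the actuator endpoints opened up, the probes can point at the real health endpoint instead of random paths. A sketch, assuming Spring Boot 2's default /actuator base path on the management port from the question:

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 35
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15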

Kubernetes readinessProbe configure to change pinging time

My setting for readinessProbe is the following:
readinessProbe:
  httpGet:
    path: /up
    port: *status-port
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
I want to change periodSeconds to a larger value once my pod is running OK. Is it possible to achieve this? During startup it makes sense to probe the pod once every 5 seconds, but once it is running fine, it would be a more efficient use of resources to probe it only once every, say, 30 seconds.
Such a feature doesn't exist. You can look here for available options.
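A hedged aside: on clusters running Kubernetes 1.18 or newer, a startupProbe can approximate this pattern, because the readiness and liveness probes are held off until the startup probe succeeds. A sketch, reusing the path and port alias from the question:

startupProbe:
  httpGet:
    path: /up
    port: *status-port
  periodSeconds: 5        # probe frequently while the pod is starting
  failureThreshold: 30    # allow up to 30 * 5s = 150s of startup time
readinessProbe:
  httpGet:
    path: /up
    port: *status-port
  periodSeconds: 30       # probe less often once startup has succeeded
  successThreshold: 1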

Openshift - liveness probe not working for http

I have configured a liveness probe using httpGet, but it's failing with the error "Client.Timeout exceeded while awaiting headers".
But the same API works fine from inside the container (using curl) and from outside the container (Postman).
I have tried adding the host attribute to the liveness probe, but no luck.
Any idea what's going wrong?
Liveness Probe:
livenessProbe:
  initialDelaySeconds: 45
  periodSeconds: 10
  httpGet:
    path: /health
    port: xxxxx
  timeoutSeconds: 5
Version details:
OpenShift Master->v3.9.0+ba7faec-1
Kubernetes Master->v1.9.1+a0ce1bc657
OpenShift Web Console->v3.9.0+b600d46-dirty
Try increasing initialDelaySeconds and timeoutSeconds, double-check the port, and look for anything (such as a PVC mount) that slows down startup:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 200
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 5
PS: For the probe to count as a success, the HTTP response status must be greater than or equal to 200 and less than 400.
Hope this helps