Can you define probes in Helm? - kubernetes-helm

I'm trying to figure out whether you can configure Kubernetes probes in Helm charts. I've seen some GitHub issues about this, but they concerned specific projects. Is there a standard way in which Helm allows for the configuration of probes?

Helm has no probe-specific feature: probes are ordinary Kubernetes fields, so you define them in your chart's templates using the livenessProbe and readinessProbe fields of the pod's container spec, like so:
containers:
  - name: my-app
    image: my-app-image
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /readiness
        port: http
      initialDelaySeconds: 5
      periodSeconds: 5
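Beyond hard-coding them, a common chart pattern is to drive the whole probe from values.yaml and render it with toYaml, so it can be overridden at install time. A minimal sketch, with illustrative key names (a livenessProbe key in values.yaml is a convention, not something Helm mandates):

# values.yaml (illustrative key names)
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 30
  periodSeconds: 10

# templates/deployment.yaml, inside the container spec
# (adjust the nindent value to match your template's indentation)
{{- with .Values.livenessProbe }}
livenessProbe:
  {{- toYaml . | nindent 2 }}
{{- end }}

Probe timings can then be tuned per environment with an extra values file or on the command line, e.g. helm install myapp ./chart --set livenessProbe.initialDelaySeconds=60.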

Related

K8S ReadinessProbe failed, but not from the pods

When doing kubectl describe pod xxxx, I got this in the Events:
Warning Unhealthy 7m5s (x2 over 7m15s) kubelet Readiness probe failed: Get "http://192.168.13.66:8080/manage/info": dial tcp 192.168.13.66:8080: connect: connection refused
But when I log into the pod with kubectl exec -ti xxx -- /bin/bash, it seems to work:
curl localhost:8080/manage/info
{"build":{"version":"yyyy","branch":"UNKNOWN","ciNumber":"yyy","revision":"25ce837","artifact":"artichaud","name":"xxx","time":"2022-06-07T07:25:19.440Z","user":"root", ...."}}
As far as I understand, the ReadinessProbe is performed by the kubelet, which runs on each node. So how could the kubelet agent not be able to perform the probe, while I can from the pod (or any pod, for that matter)?
Here are the probes:
livenessProbe:
  httpGet:
    path: /manage/info
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
readinessProbe:
  httpGet:
    path: /manage/info
    port: http
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 5
The port (http) is already defined:
containers:
  - name: service
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
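One possibility worth checking (an assumption from the symptoms above, not a confirmed diagnosis): "connection refused" from the kubelet, while curl succeeds once you exec in, usually means the probe fired before the application had bound port 8080; by the time you exec into the pod, the app is up. If startup is simply slow, a startupProbe (Kubernetes 1.16+) holds off the liveness and readiness probes until the app responds for the first time. A sketch:

startupProbe:
  httpGet:
    path: /manage/info
    port: http
  periodSeconds: 10
  failureThreshold: 30   # tolerates up to 30 x 10s = 300s of startup before a restart

Until the startup probe succeeds, the liveness and readiness probes are not run, so early connection-refused failures no longer count against the container.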

Liveness and readiness probes failing in Kubernetes cluster - Istio proxy sidecar injection is enabled in application

Below is the config for the probes in my application's Helm chart:
{{- if .Values.endpoint.liveness }}
livenessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.liveness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
  periodSeconds: 5
{{- end }}
{{- if .Values.endpoint.readiness }}
readinessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.readiness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
  periodSeconds: 60
{{- end }}
{{- end }}
When I deploy, the rendered deployment.yaml contains:
livenessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
But in the pod's YAML, it is:
livenessProbe:
  httpGet:
    path: /app-health/app-name/livez
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /app-health/app-name/readyz
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
and then the pod reports the following warning events:
Warning  spec.containers{istio-proxy}  Readiness probe failed: Get http://IP:15021/healthz/ready: dial tcp IP:15021: connect: connection refused
Warning  spec.containers{app-name}     Liveness probe failed: Get http://localhost:15020/app-health/app-name/livez: dial tcp 127.0.0.1:15020: connect: connection refused
Warning  spec.containers{app-name}     Readiness probe failed: Get http://localhost:15020/app-health/app-name/readyz: dial tcp 127.0.0.1:15020: connect: connection refused
Why is the pod using a different path and port for the probes, and why is it failing with the above error?
Can someone please help me with what I am missing?
You're getting those different paths because probe rewriting is configured globally across the mesh in Istio's control plane, specifically in the istio-sidecar-injector ConfigMap.
This comes in via the sidecar's webhook injection.
Look for the following property in the istio-sidecar-injector ConfigMap:
sidecarInjectorWebhook.rewriteAppHTTPProbe=true
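If a single workload needs to keep its original probes, Istio also supports a per-pod opt-out through the sidecar.istio.io/rewriteAppHTTPProbers annotation. A sketch (verify the annotation against your Istio version, and note that under strict mTLS the un-rewritten probe may then fail, since the kubelet sits outside the mesh, which is why the rewrite exists in the first place):

# Pod template metadata in the Deployment (sketch)
template:
  metadata:
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "false"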

Kubernetes does not respect initialDelaySeconds when starting up

I have configured a pod as follows:
livenessProbe:
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3
The readiness and liveness probes are TCP-socket based:
readinessProbe:
  tcpSocket:
    port: {{ .Values.port }}
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
  failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
However, when I deploy, the pod is marked as failed almost immediately and goes into Error --> CrashLoopBackOff.
The pod checks a Redis connection (which does indeed need a little time to become ready), and of course I see the connection errors in my pod's logs:
File "/usr/local/lib/python3.9/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 111 connecting to redis-master:6379. Connection refused.
Why is the pod being marked in Error / CLBO so eagerly, way before initialDelaySeconds: 60?
Here is the pod's YAML dump for the probes (I increased initialDelaySeconds on both probes to 100, but still the same...):
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 100
  periodSeconds: 10
  successThreshold: 1
  tcpSocket:
    port: 9898
  timeoutSeconds: 1
name: mycontainer
ports:
  - containerPort: 9898
    protocol: TCP
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 100
  periodSeconds: 10
  successThreshold: 1
  tcpSocket:
    port: 9898
  timeoutSeconds: 1
initialDelaySeconds is the number of seconds after the container has started before liveness or readiness probes are initiated. It delays the probes, not crash handling: a pod can fail and go into CrashLoopBackOff before any probe runs, because the container process itself exits, as it does here when the Redis connection is refused and the application crashes. The kubelet then restarts the container with an increasing back-off delay, regardless of the probe settings. (With restartPolicy: Never the pod would go to Error/Failed instead of CrashLoopBackOff.)
You can see the pod logs or events to get the reason for the pod failure (use kubectl describe pod <pod>).
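Since the crash is caused by the app starting before Redis is reachable, one common workaround (a sketch, not part of the original answer; the redis-master host and port 6379 are taken from the error message above) is an init container that blocks until Redis accepts TCP connections:

initContainers:
  - name: wait-for-redis
    image: busybox:1.36
    # Retry a TCP connection to Redis every 2s; the app container
    # is only started once this init container exits successfully.
    command:
      - sh
      - -c
      - until nc -z redis-master 6379; do echo waiting for redis; sleep 2; done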

Liveness probe does not allow HTTP headers

I am unable to find what the issue is when I introduce the httpHeaders component of the liveness check in the deployment file:
---
apiVersion: apps/v1
kind: Deployment
.....
....
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
  initialDelaySeconds: 5
  httpHeaders:
    - name: x-b3-sampled
      value: 0
  timeoutSeconds: 2
  periodSeconds: 10
I am trying to suppress the tracing on the healthcheck. The moment I remove the httpHeaders component, the deployment is successful. Otherwise it gives the following error:
[ValidationError(Deployment.spec.template.spec.containers[0].livenessProbe): unknown field "httpHeaders" in io.k8s.api.core.v1.Probe
Crap!!
Realized httpHeaders should be inside the httpGet.
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
    httpHeaders:
      - name: x-b3-sampled
        value: "0"   # HTTPHeader.value is a string, so the 0 must be quoted
  initialDelaySeconds: 5
  timeoutSeconds: 2
  periodSeconds: 10
Yet this does not suppress the tracing!! I forgot to mention that we have Istio 1.6, and I thought this would suppress the tracing of the healthcheck... Any help please.

kubernetes liveness port from environment

Here is my deployment:
...
envFrom:
  - configMapRef:
      name: myapp-cmap-l
livenessProbe:
  httpGet:
    path: /
    port: ???
  initialDelaySeconds: 5
  periodSeconds: 5
...
myapp-cmap-l contains APP_PORT=5000.
How can I reference that value in the probe's port? I tried ${APP_PORT} and $(APP_PORT).
Riccardo
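A note on why neither form works: Kubernetes does not expand environment variables in httpGet probe fields, so the port must be a literal number or the name of a containerPort. Since this is a Helm chart, a common workaround is to drive the ConfigMap and the probe from the same chart value. A sketch, with an illustrative appPort key (not from the original deployment):

# values.yaml (illustrative)
appPort: 5000

# templates/deployment.yaml
ports:
  - name: http
    containerPort: {{ .Values.appPort }}
livenessProbe:
  httpGet:
    path: /
    port: http   # named port, resolves to the templated containerPort
  initialDelaySeconds: 5
  periodSeconds: 5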