Liveness probe does not allow HTTP headers - Kubernetes

I am unable to figure out what the issue is when I add the httpHeaders field to the liveness check in my deployment file:
---
apiVersion: apps/v1
kind: Deployment
.....
....
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
  initialDelaySeconds: 5
  httpHeaders:
    - name: x-b3-sampled
      value: 0
  timeoutSeconds: 2
  periodSeconds: 10
I am trying to suppress tracing on the healthcheck. The moment I remove the httpHeaders section, the deployment succeeds. Otherwise it gives the following error:
[ValidationError(Deployment.spec.template.spec.containers[0].livenessProbe): unknown field "httpHeaders" in io.k8s.api.core.v1.Probe

Crap!!
Realized httpHeaders should be inside the httpGet.
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
    httpHeaders:
      - name: x-b3-sampled
        value: "0"   # header values must be strings
  initialDelaySeconds: 5
  timeoutSeconds: 2
  periodSeconds: 10
Yet this does not suppress the tracing! I forgot to mention that we are on Istio 1.6, and I thought this header would suppress tracing of the healthcheck... Any help please.

Related

Can you define probes in Helm?

I'm trying to figure out if you can configure Kubernetes probes in Helm charts. I've seen some Git issues about this, but they concerned specific projects. Is there a standard way in which Helm allows for the configuration of probes?
Kubernetes probes can be defined in Helm charts by using the 'livenessProbe' and 'readinessProbe' fields in the pod's container spec like so:
containers:
  - name: my-app
    image: my-app-image
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /readiness
        port: http
      initialDelaySeconds: 5
      periodSeconds: 5
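If the probe parameters should be configurable per release rather than hard-coded, a common pattern is to put the whole probe block into values.yaml and render it with toYaml. A minimal sketch, assuming hypothetical value keys:

# values.yaml (hypothetical keys)
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 30
  periodSeconds: 10

# templates/deployment.yaml (fragment of the container spec)
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    {{- with .Values.livenessProbe }}
    livenessProbe:
      {{- toYaml . | nindent 6 }}
    {{- end }}

This keeps the probes tunable per environment without changing the template.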

Liveness and readiness probes failing in Kubernetes cluster - Istio proxy sidecar injection is enabled in the application

Below is the probe config in my application's Helm chart:
{{- if .Values.endpoint.liveness }}
livenessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.liveness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
  periodSeconds: 5
{{- end }}
{{- if .Values.endpoint.readiness }}
readinessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.readiness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
  periodSeconds: 60
{{- end }}
{{- end }}
When I deploy, the rendered Deployment YAML shows:
livenessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
But in the Pod's YAML, it is:
livenessProbe:
  httpGet:
    path: /app-health/app-name/livez
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /app-health/app-name/readyz
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
and then the pod gives the following errors:
Readiness probe failed: Get http://IP:15021/healthz/ready: dial tcp IP:15021: connect: connection refused  (spec.containers{istio-proxy}, warning)
Liveness probe failed: Get http://localhost:15020/app-health/app-name/livez: dial tcp 127.0.0.1:15020: connect: connection refused  (spec.containers{app-name}, warning)
Readiness probe failed: Get http://localhost:15020/app-health/app-name/readyz: dial tcp 127.0.0.1:15020: connect: connection refused  (spec.containers{app-name})
Why is the pod using a different path and port for the probes, and why is it failing with the above error?
Can someone please help me understand what I am missing?
You're getting those different paths because probe rewriting is configured globally across the mesh in Istio's control plane component, i.e. the istio-sidecar-injector configmap.
The rewrite is applied by the sidecar injection webhook.
Look for the following property in the istio-sidecar-injector configmap:
sidecarInjectorWebhook.rewriteAppHTTPProbe=true
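If you want to keep the rewrite enabled mesh-wide but opt a single workload out of it, Istio also supports a per-pod annotation. A minimal sketch (a hypothetical pod template fragment, not taken from the chart above):

# Disable probe rewriting for this workload only
template:
  metadata:
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "false"

Note that the rewrite exists because the kubelet's plain-HTTP probes can otherwise fail once the sidecar intercepts traffic, so disable it with care.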

Kubernetes does not respect initialDelaySeconds when starting up

I have configured a pod as follows:
livenessProbe:
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3
The readiness and liveness probes are tcp-socket based:
readinessProbe:
  tcpSocket:
    port: {{ .Values.port }}
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
  failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
However, when I deploy, the pod is marked as failed almost immediately and goes into Error --> CrashLoopBackOff.
The pod checks a redis connection (which does indeed need a little time to become ready).
And of course I see the connection errors in my pod's logs
File "/usr/local/lib/python3.9/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 111 connecting to redis-master:6379. Connection refused.
Why is the pod being marked in Error / CLBO so eagerly, way before initialDelaySeconds: 60?
Here is the pod's YAML dump regarding the probes (I increased initialDelaySeconds for both probes to 100, but it is still the same...):
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 100
  periodSeconds: 10
  successThreshold: 1
  tcpSocket:
    port: 9898
  timeoutSeconds: 1
name: mycontainer
ports:
  - containerPort: 9898
    protocol: TCP
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 100
  periodSeconds: 10
  successThreshold: 1
  tcpSocket:
    port: 9898
  timeoutSeconds: 1
initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated.
A pod can fail before the probes are ever started; this is the concept here. The probes only apply while the container process is running, so if the process itself exits (here, because the redis connection fails), Kubernetes restarts it and you end up in CrashLoopBackOff regardless of initialDelaySeconds (with restartPolicy set to Never, the pod would go to Error instead).
You can check the pod's logs or events to find the reason for the failure (e.g. kubectl describe pod <pod> or kubectl logs <pod>).
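Since the container here exits because redis is not ready yet, one way to keep it from crashing before the probes even matter is to gate the app on redis with an init container. A hedged sketch; the host and port are taken from the error message, everything else is hypothetical:

initContainers:
  - name: wait-for-redis
    image: busybox:1.36
    # Block until redis-master:6379 accepts TCP connections, then let the app container start.
    command: ["sh", "-c", "until nc -z redis-master 6379; do echo waiting for redis; sleep 2; done"]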

I always get a 502 in GKE

Hello, I am writing an application which I want to run in GKE. Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chopcast-dev
spec:
  selector:
    matchLabels:
      app: chopcast-dev
  replicas: 3
  template:
    metadata:
      labels:
        app: chopcast-dev
    spec:
      containers:
        - name: chopcast-dev
          image: eu.gcr.io/valued-amp-998877/chopcast:latest
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/v1/healthz
              port: 5000
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/v1/healthz
              port: 5000
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 10
          ports:
            - containerPort: 5000
Here is the service
apiVersion: v1
kind: Service
metadata:
  name: chopcast-dev-service
spec:
  selector:
    app: chopcast-dev
  ports:
    - protocol: "TCP"
      port: 5000
      targetPort: 5000
  type: LoadBalancer
and here is the ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: chopcast-dev-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: gke-tribetacttics-static-ip
    networking.gke.io/managed-certificates: chopcast-dev-certificate
    #kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "chopcast-dev-service"
    servicePort: 5000
  rules:
    - http:
        paths:
          - backend:
              serviceName: "chopcast-dev-service"
              servicePort: 5000
      host: "gke.abcd.com"
Every time, the service itself comes up successfully, and when I hit the exposed service on its assigned IP it works fine, i.e.
x.x.x.x:5000/api/v1/healthz
But whenever I go through the ingress static IP, I always get:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I tried with a BackendConfig too.
As per the documentation, the health check should be picked up from the readinessProbe, but it does not seem to be: in the load balancer section I can see that 0/1 backends are up. Unfortunately, I don't want to serve 200 on the / route.
Please help.
Finally, I figured out what was going wrong: the readinessProbe path needs a trailing slash, because my application's router is designed that way.
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /api/v1/healthz/
    port: 5000
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 60
  successThreshold: 1
  timeoutSeconds: 10
Here is a short checklist:
Check your application name, namespace, and labels.
Use selectors and check that all selectors are correct.
Use trailing slashes if your app's router is designed that way.
Also check the container port in your deployment YAML.
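Since a BackendConfig was mentioned: for completeness, a hedged sketch of pointing the GKE load balancer health check at the same trailing-slash path (the BackendConfig name is hypothetical):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: chopcast-dev-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /api/v1/healthz/
    port: 5000
---
# Attach it to the Service with an annotation:
apiVersion: v1
kind: Service
metadata:
  name: chopcast-dev-service
  annotations:
    cloud.google.com/backend-config: '{"default": "chopcast-dev-backendconfig"}'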
Cheers
Which version of Kubernetes are you using? On current versions the Ingress should use the networking.k8s.io/v1 API:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chopcast-dev-ingress
spec:
  rules:
    - host: "gke.abcd.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: chopcast-dev-service
                port:
                  number: 5000

kubernetes liveness port from environment

Here is my deployment:
...
envFrom:
  - configMapRef:
      name: myapp-cmap-l
livenessProbe:
  httpGet:
    path: /
    port: ???
  initialDelaySeconds: 5
  periodSeconds: 5
...
myapp-cmap-l contains APP_PORT=5000.
How can I reference that value in the probe? I tried ${APP_PORT} and $(APP_PORT).
Riccardo
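For what it's worth, Kubernetes does not expand environment variables inside probe fields (the $(VAR) syntax is only expanded in fields such as command, args, and env), so the usual options are to template the port at deploy time (e.g. with Helm) or to reference a named container port. A minimal sketch of the named-port approach, with hypothetical container and port names:

containers:
  - name: myapp
    image: myapp:latest
    envFrom:
      - configMapRef:
          name: myapp-cmap-l
    ports:
      - name: http               # the probe refers to this name
        containerPort: 5000      # still has to be a literal number here
    livenessProbe:
      httpGet:
        path: /
        port: http               # named port instead of $(APP_PORT)
      initialDelaySeconds: 5
      periodSeconds: 5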