Kubernetes liveness probe port from an environment variable

Here is my deployment:
...
envFrom:
- configMapRef:
    name: myapp-cmap-l
livenessProbe:
  httpGet:
    path: /
    port: ???
  initialDelaySeconds: 5
  periodSeconds: 5
...
The ConfigMap myapp-cmap-l contains APP_PORT=5000.
How can I reference that value in the probe? I tried ${APP_PORT} and $(APP_PORT), without success.
Riccardo
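Probe fields don't go through Kubernetes' $(VAR) expansion (that only applies to env values, command and args), so $(APP_PORT) is treated as a literal port name rather than substituted. One commonly used workaround is to give the containerPort a name and point the probe at that name; a minimal sketch, assuming the app listens on 5000 (the container and port names here are made up):
containers:
- name: myapp
  ports:
  - name: app-port        # named port; the probe refers to this name
    containerPort: 5000   # must match APP_PORT from the ConfigMap
  envFrom:
  - configMapRef:
      name: myapp-cmap-l
  livenessProbe:
    httpGet:
      path: /
      port: app-port      # reference by port name instead of a variable
    initialDelaySeconds: 5
    periodSeconds: 5
The drawback is that the port value now lives in two places (the ConfigMap and the pod spec); templating the manifest, for example with Helm or kustomize, is the usual way to keep them in sync.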

Related

Can you define probes in Helm?

I'm trying to figure out whether you can configure Kubernetes probes in Helm charts. I've seen some GitHub issues about this, but they concerned specific projects. Is there a standard way in which Helm allows probes to be configured?
Kubernetes probes can be defined in Helm charts by using the livenessProbe and readinessProbe fields in the pod's container spec, like so:
containers:
- name: my-app
  image: my-app-image
  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /readiness
      port: http
    initialDelaySeconds: 5
    periodSeconds: 5
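Since the question was about configuring probes through Helm rather than hard-coding them, a common pattern is to expose the probe blocks in values.yaml and render them in the template with toYaml; a sketch, assuming a standard chart layout (the values shown are only illustrative):
# values.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readiness
    port: http
  initialDelaySeconds: 5
  periodSeconds: 5

# templates/deployment.yaml, inside the container spec
# (the nindent value must match the indentation depth at this point in the template)
{{- with .Values.livenessProbe }}
livenessProbe:
  {{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.readinessProbe }}
readinessProbe:
  {{- toYaml . | nindent 12 }}
{{- end }}
This keeps the probe definitions entirely in values.yaml, so users of the chart can override paths, ports and timings without editing the templates.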

K8s readinessProbe failed, but not from inside the pods

When I run kubectl describe pod xxxx, I get this in the Events:
Warning Unhealthy 7m5s (x2 over 7m15s) kubelet Readiness probe failed: Get "http://192.168.13.66:8080/manage/info": dial tcp 192.168.13.66:8080: connect: connection refused
But when I log into the pod with kubectl exec -ti xxx -- /bin/bash, it seems to work:
curl localhost:8080/manage/info
{"build":{"version":"yyyy","branch":"UNKNOWN","ciNumber":"yyy","revision":"25ce837","artifact":"artichaud","name":"xxx","time":"2022-06-07T07:25:19.440Z","user":"root", ...."}}
As far as I understand, the readinessProbe is performed by the kubelet, which runs on each node. So how could the kubelet not be able to perform the probe, while I can from inside the pod (or any other pod, for that matter)?
Here are the probes:
livenessProbe:
  httpGet:
    path: /manage/info
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
readinessProbe:
  httpGet:
    path: /manage/info
    port: http
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 5
The port (http) is already defined:
containers:
- name: service
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  ports:
  - name: http
    containerPort: 8080
    protocol: TCP
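A connection-refused readiness failure that later clears up on its own usually just means the probe ran before the application had bound port 8080; by the time curl was run inside the pod, the app was already up. If that is what is happening here, a startupProbe is one way to give the app more time without inflating initialDelaySeconds; a minimal sketch reusing the same endpoint:
startupProbe:
  httpGet:
    path: /manage/info
    port: http
  periodSeconds: 5
  failureThreshold: 30   # allows up to ~150s of startup before liveness/readiness take over
The liveness and readiness probes are not run until the startupProbe has succeeded once, so the early connection-refused events disappear from the pod's history.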

Liveness and readiness probes failing in Kubernetes cluster (Istio proxy sidecar injection is enabled in the application)

Below is the probe config in my application's Helm chart:
{{- if .Values.endpoint.liveness }}
livenessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.liveness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
  periodSeconds: 5
{{- end }}
{{- if .Values.endpoint.readiness }}
readinessProbe:
  httpGet:
    host: localhost
    path: {{ .Values.endpoint.readiness | quote }}
    port: 9080
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
  periodSeconds: 60
{{- end }}
{{- end }}
When I deploy, the rendered deployment.yaml contains:
livenessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /my/app/path/health
    port: 9080
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
But in the pod spec (pod.yaml), it is:
livenessProbe:
  httpGet:
    path: /app-health/app-name/livez
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 8
  timeoutSeconds: 1
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /app-health/app-name/readyz
    port: 15020
    host: localhost
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 60
  successThreshold: 1
  failureThreshold: 3
and the pod then reports the following errors:
Readiness probe failed: Get http://IP:15021/healthz/ready: dial tcp IP:15021: connect: connection refused
  spec.containers{istio-proxy}  warning
Liveness probe failed: Get http://localhost:15020/app-health/app-name/livez: dial tcp 127.0.0.1:15020: connect: connection refused
  spec.containers{app-name}  warning
Readiness probe failed: Get http://localhost:15020/app-health/app-name/readyz: dial tcp 127.0.0.1:15020: connect: connection refused
  spec.containers{app-name}  warning
Why is the pod using a different path and port for the probes, and why is it failing with the above errors?
Can someone please tell me what I am missing?
You're getting those different paths because probe rewriting is configured globally across the mesh in Istio's control plane, in the istio-sidecar-injector ConfigMap. The rewrite is applied by the sidecar injection webhook.
Look for the following property in the istio-sidecar-injector ConfigMap:
sidecarInjectorWebhook.rewriteAppHTTPProbe=true
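If a single workload needs to opt out of this rewriting instead of changing the mesh-wide setting, Istio also supports a per-pod annotation (worth checking against your Istio version); a sketch showing only where it would go in the Deployment:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # disable probe rewriting for this pod only
        sidecar.istio.io/rewriteAppHTTPProbers: "false"
With the annotation in place, the probes in the pod keep the path and port you defined in the chart instead of being redirected to the sidecar agent on port 15020.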

Liveness probe does not allow HTTP headers

I am unable to figure out what the issue is when I introduce the httpHeaders component of the liveness check in the deployment file:
---
apiVersion: apps/v1
kind: Deployment
.....
....
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
  initialDelaySeconds: 5
  httpHeaders:
  - name: x-b3-sampled
    value: 0
  timeoutSeconds: 2
  periodSeconds: 10
I am trying to suppress tracing on the health check. The moment I remove the httpHeaders component, the deployment is successful. Otherwise it gives the following error:
ValidationError(Deployment.spec.template.spec.containers[0].livenessProbe): unknown field "httpHeaders" in io.k8s.api.core.v1.Probe
Crap!! I realized httpHeaders should go inside httpGet:
livenessProbe:
  httpGet:
    path: "/healthcheck"
    port: 9082
    httpHeaders:
    - name: x-b3-sampled
      value: "0"
  initialDelaySeconds: 5
  timeoutSeconds: 2
  periodSeconds: 10
Yet this does not suppress the tracing! I forgot to mention that we are on Istio 1.6, and I thought this header would suppress tracing of the health check... Any help, please?
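One way to narrow this down is to confirm whether the header actually reaches the application at all, independently of the kubelet; a quick check from inside the pod, assuming curl is available in the image:
# send the same request the kubelet would, with the sampling header set
curl -v -H "x-b3-sampled: 0" http://localhost:9082/healthcheck
If the application still emits a trace for this request, the problem is in how the header is interpreted on the Istio/tracer side rather than in the probe definition itself.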

Kubernetes Keycloak high availability cluster

I'm trying to deploy Keycloak in Kubernetes with multiple replicas. I am using Helm 3.0 charts with the latest Kubernetes. It deploys fine when I have one replica in my StatefulSet, but I need high availability and thus at least two replicas. With two replicas, I can't log in as either an admin or a regular user.
Can someone provide me with a working version of a Keycloak deployment (preferably Helm) that supports multiple replicas?
I have tried the following JGroups discovery protocol settings:
jgroups:
  discoveryProtocol: dns.DNS_PING
jgroups:
  discoveryProtocol: Kubernetes.KUBE_PING
jgroups:
  discoveryProtocol: JDBC_PING
StatefulSet snippet:
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    ...
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: keycloak
      helm.sh/chart: keycloak-7.5.0
    name: ...
    namespace: default
  spec:
    podManagementPolicy: Parallel
    replicas: 2
    revisionHistoryLimit: 10
    ...
      containers:
      - command:
        - /scripts/keycloak.sh
        env:
        ...
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /auth/
            port: http
            scheme: HTTP
          initialDelaySeconds: 300
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: keycloak
        ports:
        ...
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /auth/realms/master
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      ...
      - env:
        - name: POSTGRES_DB
          value: keycloak
        - name: POSTGRESQL_ENABLE_LDAP
          value: "no"
        image: docker.io/bitnami/postgresql:12.2.0-debian-10-r91
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - exec pg_isready -U "keycloak" -d "keycloak" -h 127.0.0.1 -p 5432
          failureThreshold: 6
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: bizmall-postgresql
        ports:
        - containerPort: 5432
          name: tcp-postgresql
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - -e
            - |
              exec pg_isready -U "keycloak" -d "keycloak" -h 127.0.0.1 -p 5432
              [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
          failureThreshold: 6
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          runAsUser: 1001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        - mountPath: /bitnami/postgresql
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
      terminationGracePeriodSeconds: 30
Here is the Helm chart for Keycloak: https://github.com/codecentric/helm-charts/tree/master/charts/keycloak. We are using it to deploy Keycloak in HA mode with 3 replicas.
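For reference, a minimal sketch of the kind of values that chart's HA setup typically needs: a replica count plus JGroups discovery and cache-owner settings passed to the Keycloak container. The exact value keys and the headless-service DNS name below are assumptions, so verify them against the chart's README:
# values.yaml (illustrative only; check key names in the codecentric chart docs)
replicas: 3
extraEnv: |
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: dns.DNS_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    # hypothetical headless-service name; use the one created by your release
    value: dns_query=keycloak-headless.default.svc.cluster.local
  - name: CACHE_OWNERS_COUNT
    value: "3"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "3"
Without cache owners greater than 1 and a working discovery protocol, each replica keeps sessions only locally, which matches the symptom of logins failing as soon as a second replica is added.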