How to define a liveness command - kubernetes

The livenessProbe below (extracted from an example) works well:
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
But my livenessProbe is not working (the pod is continually restarted).
The YAML is below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-test
  name: liveness
spec:
  containers:
  - name: liveness
    args:
    - /bin/bash
    - -c
    - /home/my_home/run_myprogram.sh; sleep 20
    image: liveness:v0.6
    securityContext:
      privileged: true
    livenessProbe:
      exec:
        command:
        - /home/my_home/check.sh
      initialDelaySeconds: 10
      periodSeconds: 5
/home/my_home/check.sh (to restart the pod when the number of running my-program processes is 1 or 0) is below and is pre-tested:
#!/bin/sh
if [ $(ps -ef | grep -v grep | grep my-program | wc -l) -lt 2 ]; then
  exit 1
else
  exit 0
fi

This problem is related to the Golang Command API that Kubernetes uses for exec probes: the command is executed directly, not through a shell.
I changed the livenessProbe as below:
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /home/test/check.sh
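
To confirm the fix (a quick sanity check, assuming the pod is named liveness as in the YAML above), the probe results show up in the pod's events:

kubectl describe pod liveness   # failed exec probes appear as "Unhealthy" events
kubectl get pod liveness -w     # the RESTARTS counter should stop climbing once the probe passes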

Kubernetes: How to refer environment variable in config file

As per the Kubernetes docs, to reference an environment variable in a config, use the $(ENV_VAR) expression.
But this is not working for me.
In the readiness and liveness probe httpHeaders, I am getting the token value as the literal $(KUBERNETES_AUTH_TOKEN) instead of abcdefg.
containers:
- env:
  - name: KUBERNETES_AUTH_TOKEN
    value: "abcdefg"
  lifecycle:
    preStop:
      exec:
        command:
        - "sh"
        - "-c"
        - |
          curl -H "Authorization: $(KUBERNETES_AUTH_TOKEN)" -H "Content-Type: application/json" -X GET http://localhost:9443/pre-stop
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
There is an issue open about a very similar question here.
Anyway, I want to share some YAML lines.
This is not intended for a production environment; it's just to play around with the commands.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ngtest
  name: ngtest
spec:
  volumes:
  - name: stackoverflow-volume
    hostPath:
      path: /k8s/stackoverflow-eliminare-vl
  initContainers:
  - name: initial-container
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    image: bash
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    command:
    # this only writes a text file to show the purpose of initContainers (here you could create a bash script)
    - "sh"
    - "-c"
    - "echo $(KUBERNETES_AUTH_TOKEN) > /status/$(KUBERNETES_AUTH_TOKEN).txt" # with this line, substitution occurs with the $() form
  containers:
  - image: nginx
    name: ngtest
    ports:
    - containerPort: 80
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    lifecycle:
      preStop:
        exec:
          command:
          - "sh"
          - "-c"
          - |
            echo $KUBERNETES_AUTH_TOKEN > /status/$KUBERNETES_AUTH_TOKEN.txt &&
            echo 'echo $KUBERNETES_AUTH_TOKEN > /status/anotherFile2.txt' > /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh && . /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh &&
            echo 'echo $(KUBERNETES_AUTH_TOKEN) > /status/anotherFile3.txt' > /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh && . /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh &&
            echo 'curl -H "Authorization: $KUBERNETES_AUTH_TOKEN" -k https://www.google.com/search?q=$KUBERNETES_AUTH_TOKEN > /status/googleresult.txt && exit 0' > /status/exec-a-query-on-google.sh && . /status/exec-a-query-on-google.sh
            # the first two lines work
            # the third one, with $(KUBERNETES_AUTH_TOKEN), does not
            # the last one, which creates a bash script, works, and this could be a solution
    resources: {}
    # so, instead of using an http liveness probe:
    # livenessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 80
    #     scheme: HTTP
    #     httpHeaders:
    #     - name: Authorization
    #       value: $(KUBERNETES_AUTH_TOKEN)
    # you can use exec to call the endpoint from a bash script.
    # this one reads the file of the initContainer (optional);
    # here I omitted the url call, but you can copy it from the examples above
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - "cat /status/$KUBERNETES_AUTH_TOKEN.txt && echo -$KUBERNETES_AUTH_TOKEN- `date` top! >> /status/resultliveness.txt && exit 0"
      initialDelaySeconds: 15
      periodSeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
This creates a pod with a hostPath volume (only so you can see the output files); the files are created by the commands in the YAML.
More details are in the comments in the YAML.
If you log into your cluster node, you can view the files produced.
Anyway, you should use ConfigMaps, Secrets and/or https://helm.sh/docs/chart_template_guide/values_files/, which let you create your own charts and separate your config values from the YAML templates.
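
For example, a minimal sketch of the Secret-based approach (resource names here are illustrative, not from the question): store the token in a Secret, surface it as an environment variable, and let an exec-style probe expand it through the shell, since httpGet headers do not expand $(VAR):

apiVersion: v1
kind: Secret
metadata:
  name: auth-token          # illustrative name
type: Opaque
stringData:
  token: abcdefg
---
# in the container spec:
env:
- name: KUBERNETES_AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-token
      key: token
livenessProbe:
  exec:
    command:
    - sh
    - -c
    # the shell expands $KUBERNETES_AUTH_TOKEN at probe time; -f makes curl exit non-zero on HTTP errors
    - curl -f -H "Authorization: Bearer $KUBERNETES_AUTH_TOKEN" http://localhost:9853/status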
Hopefully, it helps.
P.S. This is my first answer on Stack Overflow, please don't be too rude with me!
I don't think that's possible (at least, not that way; kiggyttass' answer has an alternative that may work), and really your readiness and liveness endpoints shouldn't require authentication.
However, there is a hacky way around this. I don't know if it's available to you, but it works for me. First, you'll need to have the KUBERNETES_AUTH_TOKEN variable set in the environment in which you're running kubectl; then you need to use the envsubst tool, like so:
cat k8s.yaml | envsubst | kubectl apply -f -
If you use this approach, you'll have to remove the parentheses around the env var.
One final note: the header you're using requires a hint as to the nature of the token, so it should maybe be:
value: Bearer $KUBERNETES_AUTH_TOKEN
Or Basic, or whatever. Also, I believe the token itself must be base64 encoded.
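
Putting that together, a minimal sketch (assuming the manifest references the variable without the $() parentheses, so envsubst can substitute it):

# k8s.yaml fragment; envsubst replaces $KUBERNETES_AUTH_TOKEN before kubectl ever sees it
httpHeaders:
- name: Authorization
  value: Bearer $KUBERNETES_AUTH_TOKEN

export KUBERNETES_AUTH_TOKEN=abcdefg
envsubst < k8s.yaml | kubectl apply -f -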

When should I use commands or args in readinessProbes

I am working my way through killer.sh for the CKAD. I encountered a pod definition file that has a command field under the readiness probe, while the container executes another command using args.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - args:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:          # add
      exec:                  # add
        command:             # add
        - sh                 # add
        - -c                 # add
        - cat /tmp/ready     # add
      initialDelaySeconds: 5 # add
      periodSeconds: 10      # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
If the readiness probe weren't used and this pod were created imperatively, args wouldn't be utilized.
kubectl run pod6 --image=busybox:1.31.0 --dry-run=client --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
The output YAML would look like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod69
  name: pod69
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.9
    name: pod69
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Why is command not used on both the readinessProbe and the container?
When do commands become args?
Is there a way to tell?
I've read through this document: https://kubernetes.io/docs/tasks/inject-data-application/_print/
but I still haven't had much luck understanding this situation and when to switch to args.
The reason why you have both command and args in Kubernetes is that they give you the option to override the default ENTRYPOINT and CMD of the image you are trying to run.
In your specific case, the busybox image does not have a default command baked in, so specifying the starting command in either command or args in the Pod YAML is essentially the same.
To your question of when commands become args: they don't. When a container is spun up from your image, it simply executes command + args, and if command is empty (in both the image and the YAML) then only the args are executed, as illustrated below.
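
A short illustration of the mapping (my sketch, not from the question): command overrides the image's ENTRYPOINT and args overrides the image's CMD:

containers:
- name: demo                              # arbitrary name
  image: busybox:1.31.0
  command: ["sh", "-c"]                   # replaces the image ENTRYPOINT
  args: ["touch /tmp/ready && sleep 1d"]  # replaces the image CMD
# - neither set: the image runs its own ENTRYPOINT + CMD
# - only args set: image ENTRYPOINT + your args
# - only command set: your command alone (the image CMD is ignored)
# - both set: your command + your args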
The thread here may give you some more explanation

livenessProbe seems not to be executed

A container defined inside a deployment has a livenessProbe set up: by definition, it calls a remote endpoint and checks whether the response contains useful information or is empty (an empty response should trigger the pod's restart).
The whole definition is as follows (I removed the further checks for better clarity of the markup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fc-backend-deployment
  labels:
    name: fc-backend-deployment
    app: fc-test
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fc-backend-pod
      app: fc-test
  template:
    metadata:
      name: fc-backend-pod
      labels:
        name: fc-backend-pod
        app: fc-test
    spec:
      containers:
      - name: fc-backend
        image: localhost:5000/backend:1.3
        ports:
        - containerPort: 4044
        env:
        - name: NODE_ENV
          value: "dev"
        - name: REDIS_HOST
          value: "redis"
        livenessProbe:
          exec:
            command:
            - curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
          initialDelaySeconds: 20
          failureThreshold: 12
          periodSeconds: 10
I also tried putting the command into an array:
command: ["sh", "-c", "curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats", "|", "head", "-c", "30", ">", "/app/out.log"]
and splitting into separate lines:
- /bin/bash
- -c
- curl
- -X
- GET
- $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats
- |
- head
- -c
- "30"
- >
- /app/out.log
and even like this:
command:
- |
  curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
All attempts were made with and without (/bin/ba)sh -c, with the same result.
But, as you're reading this, you already know that none of these worked.
I know because I exec'd into the running container and looked for the /app/out.log file: it was never present, no matter when I checked the directory contents. It looks like the probe never gets executed.
The same command run inside the running container works just fine: data gets fetched and written to the specified file.
What might be causing the probe not to get executed?
When using the exec type of probes, Kubernetes will not run a shell to process the command; it will just run the command directly. This means that you can only use a single command and that the | character is considered just another argument to curl.
To solve the problem, you need to use sh -c to execute shell code, something like the following:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - >-
      curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats |
      head -c 30 > /app/out.log
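
For what it's worth, the >- folded scalar joins the two lines with a single space, so the kubelet effectively runs a three-element command, roughly equivalent to:

sh -c "curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log"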

Incorrect liveness probe for Redis not failing

I have configured a liveness probe for my Redis instances that makes sure Redis is able to retrieve keys before it can be called 'alive'.
livenessProbe:
  initialDelaySeconds: 20
  periodSeconds: 10
  exec:
    command:
      {{- include "liveness_probe" . | nindent 16 }}
_liveness.tpl
{{/* Liveness probe script. */}}
{{- define "liveness_probe" -}}
- "redis-cli"
- "set"
- "liveness_test_key"
- "\"SUCCESS\""
- "&&"
- "redis-cli"
- "get"
- "liveness_test_key"
- "|"
- "awk"
- "'$1 != \"SUCCESS\" {exit 1}'"
{{- end }}
The pod is able to start after making the change. However, I would like to make sure that the probe is working as expected. For that, I just added a delete command before the get command.
{{/* Liveness probe script. */}}
{{- define "liveness_probe" -}}
- "redis-cli"
- "set"
- "liveness_test_key"
- "\"SUCCESS\""
- "&&"
- "redis-cli"
- "del"
- "liveness_test_key"
- "&&"
- "redis-cli"
- "get"
- "liveness_test_key"
- "|"
- "awk"
- "'$1 != \"SUCCESS\" {exit 1}'"
{{- end }}
I get the expected exit codes when I execute this command directly in my command prompt.
But the thing is that my pod is still able to start.
Is the liveness probe command I am using okay? If so, how do I verify this?
Try this for your liveness probe; it is working fine, and you can try the same in a readinessProbe:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        name: redis
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              #export REDISCLI_AUTH="$REDIS_PASSWORD"
              set_response=$(
                redis-cli set liveness_test_key "SUCCESS"
              )
              del_response=$(
                redis-cli del liveness_test_key
              )
              response=$(
                redis-cli get liveness_test_key
              )
              if [ "$response" != "SUCCESS" ] ; then
                echo "Unable to get keys, something is wrong"
                exit 1
              fi
          initialDelaySeconds: 5
          periodSeconds: 5
status: {}
You will need to edit these values in your template.
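
To verify the probe actually fires (a sketch; the exact pod name depends on your Deployment), watch the restart counter and the events. With the del version of the script, the get returns nothing, so the probe should fail every period and the container should restart once failureThreshold is reached:

kubectl get pods -l app=redis -w        # RESTARTS should climb while the probe fails
kubectl describe pod <redis-pod-name>   # look for "Liveness probe failed" / "Unhealthy" events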
I think you're confusing livenessProbe with readinessProbe. livenessProbe tells Kubernetes to restart your pod if your command returns a non-zero exit code; the first check is executed after the delay specified in initialDelaySeconds: 20.
Whereas readinessProbe is what decides whether a pod is in the Ready state to accept traffic or not.
readinessProbe:
  initialDelaySeconds: 20
  periodSeconds: 10
  exec:
    command:
      {{- include "liveness_probe" . | nindent 16 }}
They can also be used together if you need to.
Please check this page from the Kubernetes documentation, where they explain livenessProbe, readinessProbe and startupProbe.
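
For instance, a minimal sketch combining both on one Redis container (using redis-cli ping as a simpler stand-in check; the values are illustrative):

livenessProbe:                 # the container is restarted when this fails
  exec:
    command: ["redis-cli", "ping"]
  initialDelaySeconds: 20
  periodSeconds: 10
readinessProbe:                # the pod is removed from Service endpoints when this fails
  exec:
    command: ["redis-cli", "ping"]
  initialDelaySeconds: 5
  periodSeconds: 10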

Kubernetes Liveness probe & environment variables

I'm trying to figure out a problem where I need to attach a liveness probe to the container and restart it after checking whether the environment variable USER is null or undefined.
Any advice on how to set this condition on a busybox container, please?
Thanks for helping out a beginner.
Sincerely,
V
[[ ! -z "$YOURVAR" ]] will return false (a non-zero exit code) if $YOURVAR is not defined, which is exactly what makes a liveness probe fail. Since the busybox image ships sh rather than bash, the POSIX equivalent [ -n "$YOURVAR" ] is used below.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -c
        - '[ -n "$USER" ]'   # exits non-zero (probe fails) when USER is unset or empty
      initialDelaySeconds: 5
      periodSeconds: 5
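
A quick way to see it in action (assuming the manifest is saved as liveness-exec.yaml): USER is not set anywhere in this pod spec, so the probe should keep failing and the pod should keep restarting:

kubectl apply -f liveness-exec.yaml
kubectl get pod liveness-exec -w    # the RESTARTS counter increases on each probe failure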