livenessProbe seems not to be executed - kubernetes

A container defined inside a deployment has a livenessProbe set up: it calls a remote endpoint and checks whether the response contains useful information or is empty (an empty response should trigger the pod's restart).
The whole definition is as follows (I removed the other checks for clarity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fc-backend-deployment
  labels:
    name: fc-backend-deployment
    app: fc-test
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fc-backend-pod
      app: fc-test
  template:
    metadata:
      name: fc-backend-pod
      labels:
        name: fc-backend-pod
        app: fc-test
    spec:
      containers:
      - name: fc-backend
        image: localhost:5000/backend:1.3
        ports:
        - containerPort: 4044
        env:
        - name: NODE_ENV
          value: "dev"
        - name: REDIS_HOST
          value: "redis"
        livenessProbe:
          exec:
            command:
            - curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
          initialDelaySeconds: 20
          failureThreshold: 12
          periodSeconds: 10
I also tried putting the command into an array:
command: ["sh", "-c", "curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats", "|", "head", "-c", "30", ">", "/app/out.log"]
and splitting into separate lines:
- /bin/bash
- -c
- curl
- -X
- GET
- $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats
- |
- head
- -c
- "30"
- >
- /app/out.log
and even like this:
command:
- |
  curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
All attempts were made with and without (/bin/ba)sh -c - with the same result.
But, as you're reading this, you already know that none of these worked.
I know this from exec'ing into the running container and looking for the /app/out.log file - it was never present, whenever I checked the directory contents. It looks like the probe never gets executed.
The same command run inside the running container works just fine: data gets fetched and written to the specified file.
What might be causing the probe not to get executed?

When using the exec type of probe, Kubernetes does not run a shell to process the command; it runs the command directly. This means you can only use a single command, and the | character is treated as just another argument to your curl.
To solve the problem, you need to use sh -c to exec shell code, something like the following:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - >-
      curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats |
      head -c 30 > /app/out.log
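If you want to confirm the probe is actually being run, the kubelet surfaces failing exec probes as pod events. A quick check (the event text below is paraphrased, not exact output):
kubectl describe pod <pod-name>
# look under Events for lines like:
#   Warning  Unhealthy  ...  Liveness probe failed: ...
kubectl get events --field-selector involvedObject.name=<pod-name>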

Related

Kubernetes: How to refer environment variable in config file

As per the Kubernetes docs, to reference an environment variable in config you use the $(ENV_VAR) expression.
But this is not working for me.
In the readiness and liveness probe APIs, I am getting the token value as the literal $(KUBERNETES_AUTH_TOKEN) instead of abcdefg:
containers:
  env:
  - name: KUBERNETES_AUTH_TOKEN
    value: "abcdefg"
  lifecycle:
    preStop:
      exec:
        command:
        - "sh"
        - "-c"
        - |
          curl -H "Authorization: $(KUBERNETES_AUTH_TOKEN)" -H "Content-Type: application/json" -X GET http://localhost:9443/pre-stop
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
There is an issue open about a very similar question here.
Anyway, I want to share some YAML lines.
This is not intended for a production environment; obviously, it's just to play around with the commands.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ngtest
  name: ngtest
spec:
  volumes:
  - name: stackoverflow-volume
    hostPath:
      path: /k8s/stackoverflow-eliminare-vl
  initContainers:
  - name: initial-container
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    image: bash
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    command:
    # this only writes a text file, just to show the purpose of initContainers (here you could create a bash script)
    - "sh"
    - "-c"
    - "echo $(KUBERNETES_AUTH_TOKEN) > /status/$(KUBERNETES_AUTH_TOKEN).txt" # with this line, substitution occurs in the $() form
  containers:
  - image: nginx
    name: ngtest
    ports:
    - containerPort: 80
    env:
    - name: KUBERNETES_AUTH_TOKEN
      value: "abcdefg"
    volumeMounts:
    - mountPath: /status/
      name: stackoverflow-volume
    lifecycle:
      preStop:
        exec:
          command:
          - "sh"
          - "-c"
          - |
            echo $KUBERNETES_AUTH_TOKEN > /status/$KUBERNETES_AUTH_TOKEN.txt &&
            echo 'echo $KUBERNETES_AUTH_TOKEN > /status/anotherFile2.txt' > /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh && . /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh &&
            echo 'echo $(KUBERNETES_AUTH_TOKEN) > /status/anotherFile3.txt' > /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh && . /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh &&
            echo 'curl -H "Authorization: $KUBERNETES_AUTH_TOKEN" -k https://www.google.com/search?q=$KUBERNETES_AUTH_TOKEN > /status/googleresult.txt && exit 0' > /status/exec-a-query-on-google.sh && . /status/exec-a-query-on-google.sh
          # the first two lines work
          # the third one, with $(KUBERNETES_AUTH_TOKEN), does not
          # the last one, which creates a bash script, works, and this could be a solution
    resources: {}
    # so, instead of using an http livenessProbe:
    # livenessProbe:
    #   failureThreshold: 3
    #   httpGet:
    #     path: /status
    #     port: 80
    #     scheme: HTTP
    #     httpHeaders:
    #     - name: Authorization
    #       value: $(KUBERNETES_AUTH_TOKEN)
    # you can use exec to call the endpoint from a bash script.
    # it reads the file of the initContainer (optional)
    # here I omitted the url call, but you can copy from the above examples
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - "cat /status/$KUBERNETES_AUTH_TOKEN.txt && echo -$KUBERNETES_AUTH_TOKEN- `date` top! >> /status/resultliveness.txt && exit 0"
      initialDelaySeconds: 15
      periodSeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
This creates a pod with a hostPath volume (used only to show you the output files), where files will be created based on the commands in the YAML.
More details are in the YAML comments above.
If you go onto your cluster machine, you can view the files produced.
Anyway, you should use ConfigMaps, Secrets and/or https://helm.sh/docs/chart_template_guide/values_files/, which let you create your own charts and keep your config values separate from your YAML templates.
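For example, a minimal sketch of sourcing the token from a Secret instead of a hard-coded value (the Secret name my-auth is made up here):
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
stringData:
  token: abcdefg
# then, in the container spec, reference it:
# env:
# - name: KUBERNETES_AUTH_TOKEN
#   valueFrom:
#     secretKeyRef:
#       name: my-auth
#       key: token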
Hopefully, it helps.
Ps. This is my first answer on StackOverflow, please don't be too rude with me!
I don't think that's possible (at least, not that way -- kiggyttass' answer has an alternative that may work), and really your readiness and liveness endpoints shouldn't require authentication.
However, there is a hacky way around this. I don't know if it's available to you, but it works for me. First, you'll need to have the KUBERNETES_AUTH_TOKEN variable set in the environment in which you're running kubectl; then you need to use the bash tool envsubst, like so:
cat k8s.yaml | envsubst | kubectl apply -f -
If you use this approach, you'll have to remove the parentheses around the env var.
One final note, the header you're using requires a hint as to the nature of the token, so it should maybe be:
value: Bearer $KUBERNETES_AUTH_TOKEN
Or Basic, or whatever. Also, I believe the token itself must be base64 encoded.
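A minimal sketch of that whole flow, assuming envsubst is available (it ships with gettext) and that base64 encoding is what your endpoint actually expects:
# encode the raw token (adjust to whatever your endpoint expects)
export KUBERNETES_AUTH_TOKEN=$(echo -n 'abcdefg' | base64)
# substitute $KUBERNETES_AUTH_TOKEN in the manifest and apply it
cat k8s.yaml | envsubst | kubectl apply -f -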

Script in a pod is not getting executed

I have an EKS cluster and an RDS (MariaDB). I am trying to make a backup of given databases through a script in a CronJob. The CronJob object looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqldump
  namespace: mysqldump
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            envFrom:
            - configMapRef:
                name: mysqldump-config
            args:
            - /bin/bash
            - -c
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
            resources:
              limits:
                cpu: "0.5"
                memory: "0.5Gi"
          restartPolicy: OnFailure
The script is called mysqldump.sh and gets all necessary details from a ConfigMap object. It dumps the databases listed in the MYSQLDUMP_DATABASES environment variable and moves the dump to an S3 bucket.
Note: I am going to move some variables to a Secret, but first I need this to work.
What happens is NOTHING. The script never gets executed. I tried putting an "echo starting the backup" before the script and an "echo backup ended" after it, but I don't see either of them. If I access the container and execute the same exact command manually, it works:
root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
root@mysqldump-27550908-sjwfm:/#
Can anyone point out a possible issue?
Try changing args to command:
...
command:
- /bin/bash
- -c
- /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
...
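For context: args without command are appended to the image's default ENTRYPOINT, so /bin/bash, -c, and the script string only reach a shell if the image's entrypoint already is one; command replaces the entrypoint outright. A sketch of the difference (the image's actual entrypoint is not shown in the question, so this is an assumption):
# with only args, the container effectively runs:
#   <image ENTRYPOINT> /bin/bash -c '/root/mysqldump.sh ...'
# with command, it runs exactly:
#   /bin/bash -c '/root/mysqldump.sh ...'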

When should I use commands or args in readinessProbes

I am working my way through killer.sh for the CKAD. I encountered a pod definition file that has a command field under the readiness probe, while the container executes another command via args.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - args:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:          # add
      exec:                  # add
        command:             # add
        - sh                 # add
        - -c                 # add
        - cat /tmp/ready     # add
      initialDelaySeconds: 5 # add
      periodSeconds: 10      # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
If the readiness probe weren't used and this pod were created imperatively, args wouldn't be utilized:
kubectl run pod6 --image=busybox:1.31.0 --dry-run=client --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
The output YAML would look like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Why is command not used on both the readinessProbe and the container?
When do commands become args?
Is there a way to tell?
I've read through this document: https://kubernetes.io/docs/tasks/inject-data-application/_print/
but I still haven't had much luck understanding this situation and when to switch to args.
The reason you have both command and args in Kubernetes is that they give you the option to override the default ENTRYPOINT and CMD of the image you are running.
In your specific case, the busybox image does not ship a default command, so specifying the starting command in either command or args in the pod YAML is essentially the same.
To your question of when commands become args: they don't. When a container is spun up from your image, it simply executes command + args. And if command is empty (in both the image and the YAML file), then only the args are executed.
The thread here may give you some more explanation
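As a rough sketch of the override rules (the image and names here are just illustrative):
containers:
- name: demo
  image: busybox:1.31.0
  command: ["sh", "-c"]                   # replaces the image's ENTRYPOINT
  args: ["touch /tmp/ready && sleep 1d"]  # replaces the image's CMD
# omit command -> the image's ENTRYPOINT is kept and args replace its CMD
# omit both    -> the image's ENTRYPOINT and CMD run unchanged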

cronjob yml file with wget command

Hi, I'm new to Kubernetes. I'm trying to run a wget command in a cronjob.yml file to fetch data from a URL each day. For now I'm testing it and passing a schedule of 1 minute. I also added some echo commands just to get some response from the job. Below is my YAML file. I'm changing directory to the folder where I want to save the data and passing the URL of the site I'm fetching from. I tried the URL in a terminal with wget url and it works, downloading the JSON file hidden behind the URL.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            - cd /mnt/c/Users/path_to_folder
            - wget {url}
          restartPolicy: OnFailure
When I create the job and watch the pod logs, nothing happens with the URL; I don't get any response.
Commands I run are:
kubectl create -f cronjob.yml
kubectl get pods
kubectl logs <pod_name>
In return I only get the output of the date command.
When I leave just the command with wget, nothing happens. In the pod's STATUS I see CrashLoopBackOff, so the command has a problem running:
command:
- cd /mnt/c/Users/path_to_folder
- wget {url}
How should the wget command in cronjob.yml look?
The command field in Kubernetes is the equivalent of entrypoint in Docker. For any container, there should be only one process as the entry point: either the default entry point in the image, or the one supplied via command.
Here you are using /bin/sh as that single process and everything else as its arguments. With /bin/sh -c, only date; echo Hello from the Kubernetes cluster is provided as the command string - NOT the cd and wget commands. Change your manifest to the following to feed everything as one block to /bin/sh. Note that all the commands fit into a single argument:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; cd /mnt/c/Users/path_to_folder; wget {url}
          restartPolicy: OnFailure
To illustrate the problem, check the following examples. Note that only the first argument after -c is executed as the command string:
$ /bin/sh -c date
Tue 24 Aug 2021 12:28:30 PM CDT
$ /bin/sh -c echo hi

$ /bin/sh -c 'echo hi'
hi
$ /bin/sh -c 'echo hi && date'
hi
Tue 24 Aug 2021 12:28:45 PM CDT
$ /bin/sh -c 'echo hi' date   # <----- your case is similar to this; no date printed
hi
-c  Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.

Kubernetes - processing an unlimited number of work-items

I need to get a work-item from a work-queue and then sequentially run a series of containers to process each work-item. This can be done using initContainers (https://stackoverflow.com/a/46880653/94078)
What would be the recommended way of restarting the process to get the next work-item?
Jobs seem ideal but don't seem to support an infinite/indefinite number of completions.
Using a single Pod doesn't work because initContainers aren't restarted (https://github.com/kubernetes/kubernetes/issues/52345).
I would prefer to avoid the maintenance/learning overhead of a system like argo or brigade.
Thanks!
Jobs should be used for working with work queues. When using work queues you should not set .spec.completions (or set it to null). In that case Pods will keep getting created until one of the Pods exits successfully. It is a little awkward to exit the (main) container with a failure state on purpose, but this is the specification. You may set .spec.parallelism to your liking irrespective of this setting; I've set it to 1 as it appears you do not want any parallelism.
In your question you did not specify what you want to do if the work queue gets empty, so I will give two solutions, one if you want to wait for new items (infinite) and one if want to end the job if the work queue gets empty (finite, but indefinite number of items).
Both examples use redis, but you can apply this pattern to your favorite queue. Note that the part that pops an item from the queue is not safe; if your Pod dies for some reason after having popped an item, that item will remain unprocessed or not fully processed. See the reliable-queue pattern for a proper solution.
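For reference, a minimal sketch of that reliable-queue idea in redis (the list names here are arbitrary): atomically move each item to an in-flight list, and remove it only once processing succeeds:
# atomically pop from 'job' and park the item on 'processing'
redis-cli -h redis rpoplpush job processing >/shared/item.txt
# ... run the pipeline steps on /shared/item.txt ...
# on success, drop the item from 'processing'
redis-cli -h redis lrem processing 1 "$(cat /shared/item.txt)"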
To implement the sequential steps on each work item I've used init containers. Note that this really is a primitive solution, but you have limited options if you don't want to use a framework to implement a proper pipeline.
There is an asciinema if anyone would like to see this at work without deploying redis, etc.
Redis
To test this you'll need to create, at a minimum, a redis Pod and a Service. I am using the example from fine parallel processing work queue. You can deploy that with:
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml
kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml
The rest of this solution expects that you have a Service named redis in the same namespace as your Job, that it does not require authentication, and that there is a Pod called redis-master.
Inserting items
To insert some items in the work queue use this command (you will need bash for this to work):
echo -ne "rpush job "{1..10}"\n" | kubectl exec -it redis-master -- redis-cli
Infinite version
This version waits if the queue is empty thus it will never complete.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-infinite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-infinite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
      - name: pop-from-queue-unsafe
        image: redis
        command: ["sh","-c","redis-cli -h redis blpop job 0 >/shared/item.txt"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-1
        image: busybox
        command: ["sh","-c","echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-2
        image: busybox
        command: ["sh","-c","echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-3
        image: busybox
        command: ["sh","-c","echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
      - name: done
        image: busybox
        command: ["sh","-c","echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
Finite version
This version stops the job if the queue is empty. Note the trick: the pop init container checks whether the queue is empty, and all subsequent init containers and the main container exit immediately if it is indeed empty - this is the mechanism that signals Kubernetes that the Job is complete and there is no need to create new Pods for it.
apiVersion: batch/v1
kind: Job
metadata:
  name: primitive-pipeline-finite
spec:
  parallelism: 1
  completions: null
  template:
    metadata:
      name: primitive-pipeline-finite
    spec:
      volumes: [{name: shared, emptyDir: {}}]
      initContainers:
      - name: pop-from-queue-unsafe
        image: redis
        command: ["sh","-c","redis-cli -h redis lpop job >/shared/item.txt; grep -q . /shared/item.txt || :>/shared/done.txt"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-1
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-1 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-2
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-2 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      - name: step-3
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo step-3 working on `cat /shared/item.txt` ...; sleep 5"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      containers:
      - name: done
        image: busybox
        command: ["sh","-c","[ -f /shared/done.txt ] && exit 0; echo all done with `cat /shared/item.txt`; sleep 1; exit 1"]
        volumeMounts: [{name: shared, mountPath: /shared}]
      restartPolicy: Never
The easiest way in this case is to use a CronJob. A CronJob runs Jobs according to a schedule. For more information, go through the documentation.
Here is an example (I took it from here and modified it)
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sequential-jobs
spec:
  schedule: "*/1 * * * *" # here is the schedule in Linux cron format
  jobTemplate:
    spec:
      template:
        metadata:
          name: sequential-job
        spec:
          initContainers:
          - name: job-1
            image: busybox
            command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;']
          - name: job-2
            image: busybox
            command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" && sleep 5s; done;']
          containers:
          - name: job-done
            image: busybox
            command: ['sh', '-c', 'echo "job-1 and job-2 completed"']
          restartPolicy: Never
This solution, however, has some limitations:
It cannot run more often than once per minute
If you need to process your work-items one-by-one, you need to add a check for already-running Jobs in an initContainer (or see the concurrencyPolicy sketch after this list)
CronJobs are available only in Kubernetes 1.8 and higher
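For the one-by-one point above, a possible shortcut (a sketch, not a tested replacement for the initContainer check) is the CronJob's own concurrencyPolicy, which can forbid starting a new Job while the previous one is still running:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sequential-jobs
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid  # skip this run if the previous Job is still active
  jobTemplate:
    # ... same jobTemplate as in the example above ...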