Hi, I'm new to Kubernetes. I'm trying to run a wget command in a cronjob.yml file to fetch data from a URL each day. For now I'm testing it and have set the schedule to every minute. I also added an echo command just to get some response from the job. Below is my yml file. I change directory to the folder where I want to save the data and pass the URL of the site I'm fetching from. I tried the URL in a terminal with wget url and it works, downloading the JSON file behind the URL.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            - cd /mnt/c/Users/path_to_folder
            - wget {url}
          restartPolicy: OnFailure
When I create the job and watch the pod logs, nothing happens with the URL; I don't get any response.
Commands I run are:
kubectl create -f cronjob.yml
kubectl get pods
kubectl logs <pod_name>
In return I only get the output of the date command.
When I leave just the wget command, nothing happens. In the pod list I can see CrashLoopBackOff in STATUS, so the command has a problem running.
command:
- cd /mnt/c/Users/path_to_folder
- wget {url}
What should the wget command in cronjob.yml look like?
The command in Kubernetes is the equivalent of the entrypoint in Docker. For any container, there should be only one process as the entry point: either the default entry point of the image or the one supplied via command.
Here you are using /bin/sh as that single process and everything else as its arguments. The way you were executing /bin/sh -c, only date; echo Hello from the Kubernetes cluster was provided as the command string, NOT the cd and wget commands. Change your manifest to the following to feed everything as one block to /bin/sh. Note that all of the commands fit into a single argument.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; cd /mnt/c/Users/path_to_folder; wget {url}
          restartPolicy: OnFailure
To illustrate the problem, check the following examples. Note that only the first argument after -c is executed as the command string; anything after it only becomes a positional parameter.
/bin/sh -c date
Tue 24 Aug 2021 12:28:30 PM CDT
/bin/sh -c echo hi
/bin/sh -c 'echo hi'
hi
/bin/sh -c 'echo hi && date'
hi
Tue 24 Aug 2021 12:28:45 PM CDT
/bin/sh -c 'echo hi' date #<-----your case is similar to this, no date printed.
hi
-c      Read commands from the command_string operand instead of from the standard input. Special parameter 0
        will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the
        remaining argument operands.
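If that one long line gets hard to read, the same idea can also be written as a YAML literal block scalar; it is still a single argument to /bin/sh -c, just spread over several lines (a sketch only, with the asker's path and {url} placeholders left as-is):
command:
- /bin/sh
- -c
- |
  date
  echo Hello from the Kubernetes cluster
  cd /mnt/c/Users/path_to_folder
  wget {url}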
I have an EKS cluster and an RDS instance (MariaDB). I am trying to make a backup of given databases through a script in a CronJob. The CronJob object looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqldump
  namespace: mysqldump
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            envFrom:
            - configMapRef:
                name: mysqldump-config
            args:
            - /bin/bash
            - -c
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
            resources:
              limits:
                cpu: "0.5"
                memory: "0.5Gi"
          restartPolicy: OnFailure
The script is called mysqldump.sh; it gets all the necessary details from a ConfigMap object, dumps the databases listed in the environment variable MYSQLDUMP_DATABASES, and moves the dump to an S3 bucket.
Note: I am going to move some variables to a Secret, but first I need this to work.
What happens is NOTHING. The script is never executed. I tried putting an "echo starting the backup" before the script and an "echo backup ended" after it, but I don't see either of them. If I access the container and execute the exact same command manually, it works:
root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
root@mysqldump-27550908-sjwfm:/#
Can anyone point out a possible issue?
Try changing args to command:
...
command:
- /bin/bash
- -c
- /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
...
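The reason this helps: in Kubernetes, command overrides the image's ENTRYPOINT, while args only replaces CMD and is handed to the existing entrypoint. A generic sketch of the difference (not specific to the viejo/debian-mysqldump image, whose entrypoint I am not certain of):
# args alone: the three strings are passed to the image's own entrypoint,
# which may not be a shell, so $(...) and > are never interpreted
args: ["/bin/bash", "-c", "echo hello"]
# command: /bin/bash itself becomes the container process and interprets
# the -c string, including command substitution and redirections
command: ["/bin/bash", "-c", "echo hello"]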
Creating a Cron Job
What does the flag '-c' do in the Kubernetes Cronjob?
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Community wiki answer for clarity and further searches.
@François is completely correct. /bin/sh -c comes directly from Unix and simply introduces the command string you issue to the shell. It is NOT a parameter of the Kubernetes CronJob:
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You can also check What is /bin/sh -c?
The "-c" flag does not belong to the Cronjob, it is used by unix sh executing the command:
/bin/sh -c date; echo Hello from the Kubernetes cluster
So you need to read the documentation for the Unix sh, not for Kubernetes.
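A quick local demonstration of what -c does with the remaining operands: the first operand is executed as the command string, and anything after it only fills $0, $1, and so on:
$ /bin/sh -c 'echo $0 $1' foo bar
foo bar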
I'm looking for a way to quickly run/restart a Job/Pod from the command line and override the command to be executed in the created container.
For context, I have a Kubernetes Job that gets executed as a part of our deploy process. Sometimes that Job crashes and I need to run certain commands inside the container the Job creates to debug and fix the problem (subsequent Jobs then succeed).
The way I have done this so far is:
Copy the YAML of the Job, save into a file
Clean up the YAML (delete Kubernetes-managed fields)
Change the command: field to tail -f /dev/null (so that the container stays alive)
kubectl apply -f job.yaml && kubectl get all && kubectl exec -ti pod/foobar bash
Run commands inside the container
kubectl delete job/foobar when I am done
This is very tedious. I am looking for a way to do something like the following
kubectl restart job/foobar --command "tail -f /dev/null"
# or even better
kubectl run job/foobar --exec --interactive bash
I cannot use the run command to create a Pod:
kubectl run --image xxx -ti
because the Job I am trying to restart has certain volumeMounts and other configuration I need to reuse. So I would need something like kubectl run --from-config job/foobar.
Is there a way to achieve this or am I stuck with juggling the YAML definition file?
Edit: the Job YAML looks approx. like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migrations
  labels:
    app: myapp
    service: myapp-database-migrations
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app: myapp
        service: myapp-database-migrations
    spec:
      restartPolicy: Never
      containers:
      - name: migrations
        image: registry.example.com/myapp:977b44c9
        command:
        - "bash"
        - "-c"
        - |
          set -e -E
          echo "Running database migrations..."
          do-migration-stuff-here
          echo "Migrations finished at $(date)"
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /home/example/myapp/app/config/conf.yml
          name: myapp-config-volume
          subPath: conf.yml
        - mountPath: /home/example/myapp/.env
          name: myapp-config-volume
          subPath: .env
      volumes:
      - name: myapp-config-volume
        configMap:
          name: myapp
      imagePullSecrets:
      - name: k8s-pull-project
The commands you suggested don't exist. Take a look at this reference where you can find all available commands.
Based on that documentation, the task of a Job is to create one or more Pods and keep retrying them until the specified number of successful completions is reached; the Job then tracks those successful completions. You cannot simply update the Job, because these fields are not updatable. To do what you want, you should delete the current Job and create it again.
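A minimal delete-and-recreate cycle, assuming the manifest above is saved locally as job.yaml, could look like this:
kubectl delete job database-migrations --ignore-not-found
kubectl apply -f job.yaml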
I recommend keeping all your configuration in files. If you have a problem with configuring job commands, common practice is to modify these settings in the YAML and apply it to the cluster; by storing the configuration in files you also have a backup if your deployment crashes.
If you are interested in improving this workflow, you can try the two approaches described below.
First, I've created several files:
example job (job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
  name: test1
spec:
  template:
    spec:
      containers:
      - name: test1
        image: busybox
        command: ["/bin/sh", "-c", "sleep 300"]
        volumeMounts:
        - name: foo
          mountPath: "/script/foo"
      volumes:
      - name: foo
        configMap:
          name: my-conf
          defaultMode: 0755
      restartPolicy: OnFailure
patch-job.yaml:
spec:
  template:
    spec:
      containers:
      - name: test1
        image: busybox
        command: ["/bin/sh", "-c", "echo 'patching test' && sleep 500"]
and configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-conf
data:
  test: |
    #!/bin/sh
    echo "script test"
If you want to automate this process, you can use a plugin.
A plugin is a standalone executable file, whose name begins with kubectl-. To install a plugin, move its executable file to anywhere on your PATH.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name.
Here is the file that can replace your job (a plugin determines the command path it implements based on its filename, hence the name kubectl-job).
kubectl-job:
#!/bin/bash
kubectl patch -f job.yaml -p "$(cat patch-job.yaml)" --dry-run=client -o yaml | kubectl replace --force -f - && kubectl wait --for=condition=ready pod -l job-name=test1 && kubectl exec -it $(kubectl get pod -l job-name=test1 --no-headers -o custom-columns=":metadata.name") -- /bin/sh
This command uses an additional file (patch-job.yaml, shown above) in which we put our changes for the job.
Then change the permissions of this file and move it:
sudo chmod +x ./kubectl-job
sudo mv ./kubectl-job /usr/local/bin
That's all. Now you can use it:
$ kubectl job
job.batch "test1" deleted
job.batch/test1 replaced
pod/test1-bdxtm condition met
pod/test1-nh2pv condition met
/ #
As you can see, the Job has been replaced (deleted and re-created).
You can also use a single-line command; here is an example:
kubectl get job test1 -o json | jq "del(.spec.selector)" | jq "del(.spec.template.metadata.labels)" | kubectl patch -f - --patch '{"spec": {"template": {"spec": {"containers": [{"name": "test1", "image": "busybox", "command": ["/bin/sh", "-c", "sleep 200"]}]}}}}' --dry-run=client -o yaml | kubectl replace --force -f -
With this command you can change your Job by entering the parameters "by hand". Here is the output:
job.batch "test1" deleted
job.batch/test1 replaced
As you can see this solution works as well.
I have a CronJob that runs a process in a container in Kubernetes.
This process takes a time window defined by --since and --until flags. The time window needs to be computed at container start time (when the cron is triggered) and is a function of the current time. An example run of this process would be:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
So for the example above, I would like the time window to be from 1 hour ago to 1 hour in the future. Is there a way in Kubernetes to pass in a formatted datetime as a command argument to a process?
An example of what I am trying to do would be the following config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-process
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-process
            image: my-image
            args:
            - my-process
            - --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")
            - --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
When doing this, the literal string "$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")" would be passed in as the --since flag.
Is something like this possible? If so, how would I do it?
Note that in your CronJob you don't run bash or any other shell, and command substitution is a shell feature; without a shell it will not work. In your example only one command, my-process, is started in the container, and since it is not a shell, it is unable to perform command substitution.
This one:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
will work properly because it is started from a shell, so it can take advantage of shell features such as the command substitution mentioned above.
One thing: date -v -1H +"%Y-%m-%dT%H:%M:%SZ" doesn't expand properly in a bash shell with the default GNU/Linux date implementation. Among other things, the -v option is not recognized, so I guess you're using it on macOS or some kind of BSD system. In my examples below I will use a date invocation that works on Debian.
So for testing it on GNU/Linux it will be something like this:
date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"
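To check both ends of the window locally (the output obviously depends on the current time, so none is shown here):
date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"   # value for --since
date --date='+1 hour' +"%Y-%m-%dT%H:%M:%SZ"   # value for --until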
For testing purposes I've tried it with the simple CronJob from this example, with some modifications:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            env:
            - name: FROM
              value: $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            - name: TILL
              value: $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            args:
            - /bin/sh
            - -c
            - date; echo from $(FROM) till $(TILL)
          restartPolicy: OnFailure
It works properly. Below you can see the result of the CronJob execution:
$ kubectl logs hello-1569947100-xmglq
Tue Oct 1 16:25:11 UTC 2019
from 2019-10-01T15:25:11Z till 2019-10-01T17:25:11Z
Apart from the example using environment variables, I tested it with the following code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/sh
            - -c
            - date; echo from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and as you can see, command substitution also works properly here:
$ kubectl logs hello-1569949680-fk782
Tue Oct 1 17:08:09 UTC 2019
from 2019-10-01T16:08:09Z till 2019-10-01T18:08:09Z
It works properly because in both examples we first spawn a shell in the container, which then runs the other commands, such as the simple echo, provided as its argument. You can use your my-process command instead of echo, but you'll need to provide it on one line with all its arguments, like this:
args:
- /bin/sh
- -c
- my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
The following example, however, will not work, as there is no shell involved. The echo command, not being a shell, is unable to perform command substitution, which is a shell feature:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/echo
            - from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and the result will be a literal string:
$ kubectl logs hello-1569951180-fvghz
from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
which is similar to your case: your command, like echo, isn't a shell, so it cannot perform command substitution.
To sum up: the solution is to wrap your command as a shell argument. In the first two examples, the echo command is passed along with the other commands as a shell argument.
Maybe it is more clearly visible in the following example, which uses a slightly different syntax:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            command: ["/bin/sh","-c"]
            args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
          restartPolicy: OnFailure
man bash says:
-c If the -c option is present, then commands are read from the first non-option argument command_string.
so command: ["/bin/sh","-c"] basically means: run a shell and execute the commands that we then pass to it via args. In the shell, commands should be separated with a semicolon ; so that they run independently (a subsequent command is executed no matter what the result of the previous command was).
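You can verify that behavior locally; the second command fails, but the third one still runs:
$ sh -c 'echo one; false; echo two'
one
two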
In the following fragment:
args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
we provide three separate commands to /bin/sh -c:
FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the FROM variable to the result of executing the date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the TILL variable to the result of executing the date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
and finally we run
echo from $FROM till $TILL
which uses both variables.
Exactly the same can be done with any other command.
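Applied to the my-process example from the question (my-image and my-process are the question's placeholders; GNU date syntax is used here, so switch back to -v on a BSD/macOS userland):
command: ["/bin/sh", "-c"]
args: ["my-process --since=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ') --until=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')"]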
I am trying to delete temporary pods and other artifacts using helm delete, and I want this helm delete to run on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args: ["delete", "--purge", "$(helm ls -a -q temppods.*)"]
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig
I ran:
oc create -f ./mycron.yaml
This created the CronJob.
Every 5 minutes a pod gets created and the helm command that is part of the cron job runs.
I expect the artifacts/pods with names beginning with temppods* to be deleted.
What I see in the logs of the pod is:
Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53
The CronJob container spec is trying to delete a release named (literally):
$(helm ls -a -q temppods.*)
This release doesn't exist and fails helm's expected naming conventions.
Why
The alpine/helm:2.9.1 container image has an entrypoint of helm. This means any arguments are passed directly to the helm binary via exec. No shell expansion ($()) occurs because there is no shell running.
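You can reproduce the difference outside Kubernetes (purely illustrative, assuming a working helm 2 / tiller setup):
# what the CronJob does today: the string reaches helm verbatim and fails name validation
helm delete --purge '$(helm ls -a -q temppods.*)'
# what you actually want: a shell expands the substitution before helm runs
sh -c 'helm delete --purge $(helm ls -a -q temppods.*)'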
Fix
To do what you are expecting, you can use sh, which is available in alpine-based images.
sh -uexc 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases'
In a Pod spec this translates to:
spec:
  containers:
  - name: cronbox
    command: ['sh']
    args:
    - '-uexc'
    - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
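Merged back into the original CronJob container, it could look like this (the empty-result guard is my addition, not part of the answer above, to avoid calling helm delete with no release names):
containers:
- name: cronbox
  image: alpine/helm:2.9.1
  command: ['sh']
  args:
  - '-uexc'
  - 'releases=$(helm ls -a -q temppods.*); [ -n "$releases" ] && helm delete --purge $releases || echo "nothing to delete"'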
Helm
As a side note, helm is not the most reliable tool when clusters or releases get into vague states. Running multiple helm commands that interact with the same release at the same time usually spells disaster, and on the surface that seems likely here. Perhaps it is worth asking whether there are other ways to achieve the process you are implementing.