Is there a way to inject the job ID as an env variable into a container in Kubernetes?

The YAML below injects the pod's name into the container as RUN_ID. If this cron job spins up 10 pods (parallelism = 10), each of the 10 pods will have a different run ID, but I want all 10 pods to share the same run ID. The Downward API doesn't seem to support retrieving the job ID. Is there any other way to do it?
In my case it doesn't strictly need to be the job ID. Any random ID that is set identically in all 10 pods when a new job is spun up will do, so ideas along those lines would also help.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ${CRONJOB_NAME}
  namespace: ${NAMESPACE_NAME}
spec:
  schedule: "0 8 * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 4
      parallelism: ${PARALLEL_JOBS_COUNT}
      completions: ${PARALLEL_JOBS_COUNT}
      template:
        spec:
          containers:
          - name: ${CONTAINER_NAME}
            image: ${DOCKER_IMAGE_NAME}
            imagePullPolicy: IfNotPresent
            env:
            - name: RUN_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name   # this gets the pod's name
.
.

I used RUN_ID=${POD_NAME%\-*} in the container command to strip the trailing pod suffix and extract the job name from the pod name. This solved my use case.
spec:
  containers:
  - name: ${CONTAINER_NAME}
    image: ${ACR_DNS}/${JMETER_DOCKER_IMAGE}
    imagePullPolicy: IfNotPresent
    command:
    - /bin/sh
    - -c
    - 'export RUN_ID=${POD_NAME%\-*}; cd /config; /entrypoint.sh -n -Jserver.rmi.ssl.disable=true -Ljmeter.engine=debug -Jjmeterengine.force.system.exit=true -t \$(JMETER_JMX_FILE)'
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
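An alternative sketch that avoids the shell parsing entirely: pods created by a Job carry a job-name label (recent Kubernetes versions also add batch.kubernetes.io/job-name), and the Downward API can expose labels, so every pod of the same job run would get an identical value. Assuming that label is present on your pods, the env entry could look like:

env:
- name: RUN_ID
  valueFrom:
    fieldRef:
      # all pods created by the same Job share this label value
      fieldPath: metadata.labels['job-name']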

Related

How to execute script shell in Kubernetes cronjob

I would like to run a shell script inside Kubernetes using a CronJob. Here is my CronJob.yaml file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - /home/admin_/test.sh
          restartPolicy: OnFailure
The CronJob has been created (kubectl apply -f CronJob.yaml).
When I list the cron jobs (kubectl get cj) I can see the cron job, and when I run "kubectl get pods" I can see the pod being created, but the pod crashes.
Can anyone help me learn how to create a CronJob in Kubernetes, please?
As correctly pointed out in the comments, you need to provide the script file in order to execute it via your CronJob. You can do that by mounting the file within a volume. For example, your CronJob could look like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - /myscript/test.sh
            volumeMounts:
            - name: script-dir
              mountPath: /myscript
          restartPolicy: OnFailure
          volumes:
          - name: script-dir
            hostPath:
              path: /path/to/my/script/dir
              type: Directory
The example above uses the hostPath volume type to mount the script file.
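If you would rather not depend on a directory existing on the node, a ConfigMap-backed volume is another option. A minimal sketch, assuming a hypothetical ConfigMap named test-script that carries the script; defaultMode makes the mounted file executable:

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-script   # hypothetical name, for illustration
data:
  test.sh: |
    #!/bin/sh
    echo "hello from the cron job"

Then, in the CronJob above, swap the hostPath volume for the ConfigMap:

volumes:
- name: script-dir
  configMap:
    name: test-script
    defaultMode: 0755   # mount test.sh as executable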

subPathExpr for parallel pods

We have parallel jobs on EKS and we would like the jobs to write to a hostPath.
We are using subPathExpr with an environment variable, as per the documentation. However, after the run, the hostPath contains only one folder, probably due to a race condition between the parallel jobs, with whichever pod got hold of the hostPath winning.
We are on Kubernetes 1.17. Is subPathExpr meant for this use case of allowing parallel jobs to write to the same hostPath? What are other options to allow parallel jobs to write to a host volume?
apiVersion: batch/v1
kind: Job
metadata:
  name: gatling-job
spec:
  ttlSecondsAfterFinished: 300 # delete after 5 minutes
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: gatling
        image: GATLING_IMAGE_NAME
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: perftest-results
          mountPath: /opt/gatling/results
          subPathExpr: $(POD_NAME)
      volumes:
      - name: perftest-results
        hostPath:
          path: /data/perftest-results
I tested with a simple job template as below; files were created in their respective folders and it worked as expected.
I will investigate the actual project. Closing for now.
apiVersion: batch/v1
kind: Job
metadata:
  name: subpath-jobs
  labels:
    name: subpath-jobs
spec:
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: busybox
        image: busybox
        workingDir: /outputs
        command: [ "touch" ]
        args: [ "a_file.txt" ]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: job-output
          mountPath: /outputs
          subPathExpr: $(POD_NAME)
      volumes:
      - name: job-output
        hostPath:
          path: /data/outputs
          type: DirectoryOrCreate
# ls -R /data
/data:
outputs
/data/outputs:
subpath-jobs-6968q subpath-jobs-6zp4x subpath-jobs-nhh96 subpath-jobs-tl8fx subpath-jobs-w2h9f
/data/outputs/subpath-jobs-6968q:
a_file.txt
/data/outputs/subpath-jobs-6zp4x:
a_file.txt
/data/outputs/subpath-jobs-nhh96:
a_file.txt
/data/outputs/subpath-jobs-tl8fx:
a_file.txt
/data/outputs/subpath-jobs-w2h9f:
a_file.txt

How to run consul via kubernetes?

I tried running the pod (https://www.consul.io/docs/platform/k8s/run.html)
It failed with... containers with unready status: [consul]
kubectl create -f consul-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
  - name: example
    image: "consul:latest"
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command:
    - "/bin/sh"
    - "-ec"
    - |
      export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
      consul kv put hello world
  restartPolicy: Never
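To see why the container never becomes ready, the usual first step (generic pod troubleshooting, nothing Consul-specific) is to check the pod's events and container output:

kubectl describe pod consul-example   # see the Events section for the failure reason
kubectl logs consul-example           # stdout/stderr of the container's command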

How to set the result of shell script into arguments of Kubernetes Cronjob regularly

I have trouble setting the result of a shell script as an argument for a Kubernetes CronJob on a regular basis.
Is there any good way to have the value refreshed every day?
I use a Kubernetes CronJob to perform a daily task.
The cron job launches a Rust application that executes a batch process.
As one of the arguments for the Rust app, I pass a target date (a yyyy-MM-dd formatted string) on the command line.
Therefore, I tried to pass the date value into the YAML definition for the CronJob as follows.
I set the ${TARGET_DATE} value with the following command.
In sample.sh, the value for TARGET_DATE is exported.
cat sample.yml | envsubst | kubectl apply -f sample.sh
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"]
          restartPolicy: Never
I expected this to produce a fresh TARGET_DATE value every day, but it never changes from the date that was set the first time.
Is there any good way to regularly set the result of a shell script as an argument in a CronJob YAML?
Thanks.
You can use init containers for that: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The idea is the following: you run the script that sets up this value inside an init container and write the value into a shared emptyDir volume. Then you read the value from the main container. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init-script
            image: my-init-image
            volumeMounts:
            - name: date
              mountPath: /date
            command:
            - sh
            - -c
            - "/my-script > /date/target-date.txt"
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"] # adjust this part to read from file
            volumeMounts:
            - name: date
              mountPath: /date
          restartPolicy: Never
          volumes:
          - name: date
            emptyDir: {}
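One way to consume that file in the main container, as the comment above hints, is to wrap the command in a shell. A sketch, assuming the image provides /bin/sh and using the target-date.txt file written by the init container:

containers:
- name: some-container
  image: sample/some-image
  command: ["/bin/sh"]
  args:
  - -c
  # read the date produced by the init container and pass it as the argument
  - './run "$(cat /date/target-date.txt)"'
  volumeMounts:
  - name: date
    mountPath: /date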
You can override your Docker entrypoint / Kubernetes container command and do this in one shot:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["/bin/sh"]
            args:
            - -c
            - "./run ${TARGET_DATE}"
          restartPolicy: Never
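Since this shell runs every time the job fires, the date can also be computed at run time rather than substituted at apply time. A sketch, assuming a date binary is available in the image:

args:
- -c
# today's date in yyyy-MM-dd, evaluated when the pod starts
- './run "$(date +%Y-%m-%d)"'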

access logs in cron jobs kubernetes

I'm running a cron job in Kubernetes. The job completes successfully and I log output to a log file inside the container (path: storage/logs), but I cannot access that file because the container is in the Completed state. Here is my job YAML.
apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    labels:
      chart: cronjobs-0.1.0
    name: cron-cronjob1
    namespace: default
  spec:
    concurrencyPolicy: Forbid
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              app: cron
              cron: cronjob1
          spec:
            containers:
            - args:
              - /usr/local/bin/php
              - -c
              - /var/www/html/artisan bulk:import
              env:
              - name: DB_CONNECTION
                value: postgres
              - name: DB_HOST
                value: postgres
              - name: DB_PORT
                value: "5432"
              - name: DB_DATABASE
                value: xxx
              - name: DB_USERNAME
                value: xxx
              - name: DB_PASSWORD
                value: xxxx
              - name: APP_KEY
                value: xxxxx
              image: registry.xxxxx.com/xxxx:2ecb785-e927977
              imagePullPolicy: IfNotPresent
              name: cronjob1
              ports:
              - containerPort: 80
                name: http
                protocol: TCP
            imagePullSecrets:
            - name: xxxxx
            restartPolicy: OnFailure
            terminationGracePeriodSeconds: 30
    schedule: '* * * * *'
    successfulJobsHistoryLimit: 3
Is there any way I can get my log file content to show up in the kubectl logs command, or is there another alternative?
The CronJob runs pods according to spec.schedule. After completing the task, a pod's status is set to Completed, but the CronJob controller doesn't delete the pod afterwards, and the log file content is still there in the pod's container filesystem. So you need to do:
# you can get the pod name from the output of `kubectl get pods`
$ kubectl logs -f -n default <pod_name>
I guess you know that the pod is kept around because you have successfulJobsHistoryLimit: 3. Presumably your point is that your logging is going to a file rather than stdout, so you don't see it with kubectl logs. If so, maybe you could also log to stdout, or put something into the job to log the content of the file at the end, for example in a PreStop hook.
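For example, one minimal sketch of the stdout route, assuming a shell is available in the image and that the log file lives under /var/www/html/storage/logs (a guess based on the path in the question; adjust the php invocation and flags to match your current command):

- args:
  - /bin/sh
  - -c
  # run the import, then dump the log file to stdout so `kubectl logs` can show it
  - '/usr/local/bin/php /var/www/html/artisan bulk:import; cat /var/www/html/storage/logs/*.log'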