Dockerfile and Kubernetes Jobs (assistance needed) - kubernetes

I have a Dockerfile based on the postgres:12 image, which I modify with some DDL scripts. I can build this image and run the container with the docker run command, but how can I use a Kubernetes Job to run the built image? I don't have much experience with Kubernetes.
This is my Dockerfile, which I build with:
docker build . -t dockerdb
FROM postgres:12
ENV POSTGRES_PASSWORD xyz#123123!233
ENV POSTGRES_DB test
ENV POSTGRES_USER test
COPY ./Scripts /docker-entrypoint-initdb.d/
How can I customize the Job manifest below for this requirement?
apiVersion: batch/v1
kind: Job
metadata:
  name: job-1
spec:
  template:
    metadata:
      name: job-1
    spec:
      containers:
      - name: postgres
        image: gcr.io/project/pg_12:dev
        command:
        - /bin/sh
        - -c
        - "not sure what command should i give in last line"

I'm not sure how you are running the Docker image. If you run your image without passing any command, i.e.
docker run <imagename>
then once your Docker image is built you can run it directly in a Job as well. The Job will execute without you passing any command:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-1
spec:
  template:
    metadata:
      name: job-1
    spec:
      containers:
      - name: postgres
        image: gcr.io/project/pg_12:dev
      # Jobs require an explicit restartPolicy of Never or OnFailure
      restartPolicy: Never
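Once the image is built and pushed to a registry your cluster can pull from (gcr.io/project/pg_12:dev in this example), applying and inspecting the Job would look roughly like this (job.yaml is just an assumed filename for the manifest above):
docker build . -t gcr.io/project/pg_12:dev
docker push gcr.io/project/pg_12:dev
kubectl apply -f job.yaml
kubectl get jobs
kubectl logs job/job-1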
If you want to pass an argument or command, you can do so like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: <CHANGE IMAGE URL>
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Just to note: the template above is for a CronJob; a CronJob runs on a schedule at specific times.

Related

Run scheduled task inside Pod in Kubernetes

I have a small instance of influxdb running in my kubernetes cluster.
The data of that instance is stored in a persistent storage.
But I also want to run the backup command from influx at a scheduled interval.
influxd backup -portable /backuppath
What I do now is exec into the pod and run it manually.
Is there a way that I can do this automatically?
You can consider running a CronJob with the bitnami/kubectl image, which will execute the backup command. This is the same as exec-ing into the pod and running the command manually, except that the CronJob automates it.
CronJob is the way to go here. It acts more or less like a crontab, but for Kubernetes.
As an example, you could use this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "0 8 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: influxdb-backup
            image: influxdb
            imagePullPolicy: IfNotPresent
            command: ["/bin/sh"]
            args:
            - "-c"
            - "influxd backup -portable /backuppath"
          restartPolicy: Never
This will create a Job every day at 08:00, executing influxd backup -portable /backuppath. Of course, you have to edit it accordingly to work in your environment.
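If you don't want to wait for the schedule while testing, you can trigger a one-off run from the CronJob (assuming the manifest above has already been applied):
kubectl create job --from=cronjob/backup backup-manual
kubectl logs job/backup-manual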
This is the solution I ended up using for this question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-backupscript
  namespace: influx
data:
  backupscript.sh: |
    #!/bin/bash
    echo 'getting pod name'
    podName=$(kubectl get pods -n influx --field-selector=status.phase==Running --output=jsonpath={.items..metadata.name})
    echo $podName
    #echo 'create backup'
    kubectl exec -it $podName -n influx -- /mnt/influxBackupScript/influxbackup.sh
    echo 'done'
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-cron
  namespace: influx
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                    - amd64
          volumes:
          - name: backup-script
            configMap:
              name: cm-backupscript
              defaultMode: 0777
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - /mnt/scripts/backupscript.sh
            volumeMounts:
            - name: backup-script
              mountPath: "/mnt/scripts"
          restartPolicy: Never
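One thing to keep in mind with this approach: the CronJob pod runs kubectl, so its service account needs RBAC permissions to list pods and create pod exec sessions in the influx namespace. A minimal sketch of such a Role and RoleBinding (bound here to the default service account, which is an assumption; adjust to whatever service account the CronJob actually uses):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup-exec
  namespace: influx
rules:
# Needed for "kubectl get pods"
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# Needed for "kubectl exec"
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backup-exec
  namespace: influx
subjects:
- kind: ServiceAccount
  name: default
  namespace: influx
roleRef:
  kind: Role
  name: backup-exec
  apiGroup: rbac.authorization.k8s.io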
You can either run it as a CronJob and set up the image so it can connect to the DB, or you can sidecar it alongside your DB pod and set it to run the cron image (i.e. it will run as a mostly-idle container in the same pod as your DB); a rough sketch of the sidecar variant follows.
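A minimal sketch of that sidecar approach, assuming the influxdb Deployment already has a volume for the backups; the container, volume, and claim names here are illustrative:
spec:
  template:
    spec:
      containers:
      - name: influxdb
        image: influxdb
        volumeMounts:
        - name: backup
          mountPath: /backuppath
      # Sidecar sharing the pod's network namespace, so influxd backup
      # talks to the influxdb container on localhost.
      - name: backup-sidecar
        image: influxdb
        command: ["/bin/sh", "-c"]
        args:
        - while true; do influxd backup -portable /backuppath; sleep 86400; done
        volumeMounts:
        - name: backup
          mountPath: /backuppath
      volumes:
      - name: backup
        persistentVolumeClaim:
          claimName: influxdb-backup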

Create or update existing postgres db container through kubernetes job

I have a Postgres DB container running in a Kubernetes cluster. I need to write a Kubernetes Job to connect to the Postgres DB container and run the scripts from an SQL file. I need to understand two things here:
the command to run the SQL script
how to load the SQL file in the Job YAML file
Here is my sample YAML file for the Kubernetes Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: init-db
spec:
  template:
    metadata:
      name: init-db
      labels:
        app: init-postgresdb
    spec:
      containers:
      - image: "docker.io/bitnami/postgresql:11.5.0-debian-9-r60"
        name: init-db
        command:
        - psql -U postgres
        env:
        - name: DB_HOST
          value: "knotted-iguana-postgresql"
        - name: DB_DATABASE
          value: "postgres"
      restartPolicy: OnFailure
You have to mount the SQL file as a volume from a ConfigMap and use the psql CLI to execute the commands from the mounted file.
To execute commands from a file, you can change the command parameter in the YAML to this:
psql -a -f sqlCommand.sql
The ConfigMap needs to be created from the file you intend to mount (more info here):
kubectl create configmap sqlcommands --from-file=sqlCommands.sql
Then you have to add the ConfigMap and the mount statement to your Job YAML and modify the command to use the mounted file.
apiVersion: batch/v1
kind: Job
metadata:
  name: init-db
spec:
  template:
    metadata:
      name: init-db
      labels:
        app: init-postgresdb
    spec:
      containers:
      - image: "docker.io/bitnami/postgresql:11.5.0-debian-9-r60"
        name: init-db
        # Run the mounted SQL file against the remote database defined by the env vars
        command: [ "/bin/sh", "-c", "psql -h $DB_HOST -U postgres -d $DB_DATABASE -a -f /sql/sqlCommands.sql" ]
        volumeMounts:
        - name: sqlcommand
          mountPath: /sql
        env:
        - name: DB_HOST
          value: "knotted-iguana-postgresql"
        - name: DB_DATABASE
          value: "postgres"
      volumes:
      - name: sqlcommand
        configMap:
          # Provide the name of the ConfigMap containing the files you want
          # to add to the container
          name: sqlcommands
      restartPolicy: OnFailure
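Note that psql usually also needs credentials for the target database. One hedged way to supply them is the standard PGPASSWORD environment variable, read here from a Secret whose name and key are only illustrative, as an extra entry under the container's env list:
        env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret        # assumed Secret name
              key: postgresql-password     # assumed key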
Alternatively, you could make a Dockerfile for this first, verify that the image works, and then reference that working image in the Kubernetes Job YAML.
You can add an entrypoint.sh in the Dockerfile, where you place the scripts to be executed; a sketch follows.
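A minimal sketch of that approach, assuming the SQL scripts live in a local Scripts/ directory and the connection details are provided as environment variables (the file and directory names here are illustrative):
FROM postgres:12
COPY Scripts/ /scripts/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
And entrypoint.sh could look like this:
#!/bin/sh
# Run every SQL file against the target database; DB_HOST and DB_DATABASE
# are expected to be set in the environment (as in the Job above).
for f in /scripts/*.sql; do
  psql -h "$DB_HOST" -U postgres -d "$DB_DATABASE" -a -f "$f"
done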

How to run shell script using CronJobs in Kubernetes?

I am trying to run a shell script at a regular interval of 1 minute using a CronJob.
I have created the following CronJob in my OpenShift template:
- kind: CronJob
  apiVersion: batch/v2alpha1
  metadata:
    name: "${APPLICATION_NAME}"
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: Replace
    jobTemplate:
      spec:
        template:
          spec:
            containers:
            - name: mycron-container
              image: alpine:3
              imagePullPolicy: IfNotPresent
              command: [ "/bin/sh" ]
              args: [ "/var/httpd-init/croyscript.sh" ]
              volumeMounts:
              - name: script
                mountPath: "/var/httpd-init/"
            volumes:
            - name: script
              configMap:
                name: ${APPLICATION_NAME}-croyscript
            restartPolicy: OnFailure
            terminationGracePeriodSeconds: 0
The following is the ConfigMap mounted as a volume in this job:
- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: ${APPLICATION_NAME}-croyscript
    labels:
      app: "${APPLICATION_NAME}"
  data:
    croyscript.sh: |
      #!/bin/sh
      if [ "${APPLICATION_PATH}" != "" ]; then
        mkdir -p /var/httpd-resources/${APPLICATION_PATH}
      fi
      mkdir temp
      cd temp
      ###### SOME CODE ######
This CronJob is running: I can see the name of the Job getting replaced every minute (as scheduled). But it is not executing the shell script croyscript.sh.
Am I doing anything wrong here? (Maybe I have mounted the ConfigMap in a wrong way, so the Job is not able to access the shell script.)
Try the approach below.
Update the permissions on the ConfigMap volume:
volumes:
- name: script
  configMap:
    name: ${APPLICATION_NAME}-croyscript
    defaultMode: 0777
If this doesn't work, the script in the mounted volume most likely has read-only permissions.
In that case, use an initContainer to copy the script to a different location, set appropriate permissions there, and use that location in the command parameter, as in the sketch below.
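A minimal sketch of that initContainer approach for the pod template above; the emptyDir volume name and target path are illustrative:
          spec:
            initContainers:
            # Copy the script out of the read-only ConfigMap mount and make it executable
            - name: copy-script
              image: alpine:3
              command: [ "/bin/sh", "-c", "cp /var/httpd-init/croyscript.sh /scripts/ && chmod +x /scripts/croyscript.sh" ]
              volumeMounts:
              - name: script
                mountPath: "/var/httpd-init/"
              - name: scripts-rw
                mountPath: /scripts
            containers:
            - name: mycron-container
              image: alpine:3
              imagePullPolicy: IfNotPresent
              command: [ "/bin/sh" ]
              args: [ "/scripts/croyscript.sh" ]
              volumeMounts:
              - name: scripts-rw
                mountPath: /scripts
            volumes:
            - name: script
              configMap:
                name: ${APPLICATION_NAME}-croyscript
            - name: scripts-rw
              emptyDir: {}
            restartPolicy: OnFailure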

How to set the result of shell script into arguments of Kubernetes Cronjob regularly

I have trouble setting the result of a shell script as an argument of a Kubernetes CronJob on a regular basis.
Is there a good way to have the value refreshed every day?
I use a Kubernetes CronJob to perform a daily task.
With the CronJob, a Rust application is launched and executes a batch process.
As one of the arguments for the Rust app, I pass the target date (a yyyy-MM-dd formatted string) on the command line.
Therefore, I tried to pass the date value into the CronJob definition YAML file as follows,
and I try setting the ${TARGET_DATE} value with the following command.
In sample.sh, the value for TARGET_DATE is exported.
cat sample.yml | envsubst | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"]
          restartPolicy: Never
I expected that this would refresh the TARGET_DATE value every day, but it does not change from the date I set the first time.
Is there a good way to regularly set the result of a shell script into the args of a CronJob YAML?
Thanks.
You can use init containers for that: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The idea is the following: you run the script that sets up this value inside an init container and write the value into a shared emptyDir volume. Then you read the value from the main container. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init-script
            image: my-init-image
            volumeMounts:
            - name: date
              mountPath: /date
            command:
            - sh
            - -c
            - "/my-script > /date/target-date.txt"
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"] # adjust this part to read from file
            volumeMounts:
            - name: date
              mountPath: /date
          restartPolicy: Never
          volumes:
          - name: date
            emptyDir: {}
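One hedged way to do the "adjust to read from file" part, assuming ./run takes the date as its first argument, is to wrap the entrypoint in a shell so the file written by the init container is read at run time:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["/bin/sh", "-c"]
            # The $(...) here is shell command substitution evaluated inside the
            # container; Kubernetes leaves it untouched because it is not a valid
            # env var reference.
            args: ['./run "$(cat /date/target-date.txt)"']
            volumeMounts:
            - name: date
              mountPath: /date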
You can override your Docker entrypoint / Kubernetes container command and do this in one shot:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["/bin/sh"]
            args:
            - -c
            - "./run ${TARGET_DATE}"
          restartPolicy: Never
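For example, assuming the target date is simply the current date in yyyy-MM-dd format, the shell can compute it at run time, so it is fresh on every scheduled run instead of being baked in at apply time:
            args:
            - -c
            # date is evaluated inside the container each time the Job runs
            - './run "$(date +%Y-%m-%d)"'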

How does argument passing in Kubernetes work?

A problem:
Docker arguments pass through from the command line:
docker run -it -p 8080:8080 joethecoder2/spring-boot-web -Dcassandra_ip=127.0.0.1 -Dcassandra_port=9042
However, the Kubernetes Pod arguments from the singlePod.yaml file do not:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    env: ["name": "-Dcassandra_ip", "value": "127.0.0.1"]
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar", "-D","cassandra_ip=127.0.0.1", "-D","cassandra_port=9042"]
    args: ["-Dcassandra_ip=127.0.0.1", "-Dcassandra_port=9042"]
  restartPolicy: OnFailure
when I do:
kubectl create -f ./singlePod.yaml
Why don't you pass the arguments as environment variables? It looks like you're using Spring Boot, so this shouldn't even require code changes, since Spring Boot picks up environment variables.
The following should work:
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-web-demo
  labels:
    purpose: demonstrate-spring-boot-web
spec:
  containers:
  - name: spring-boot-web
    image: docker.io/joethecoder2/spring-boot-web
    command: ["java","-jar", "spring-boot-web-0.0.1-SNAPSHOT.jar"]
    env:
    - name: cassandra_ip
      value: "127.0.0.1"
    - name: cassandra_port
      value: "9042"