K8S CronJob scheduling on an existing pod?

I have my application running in K8S pods. The application writes logs to a particular path, for which I already have a volume mounted on the pod. My requirement is to schedule a CronJob that triggers once a week, reads the logs from that pod's volume, generates a report based on my script (which basically filters the logs on some keywords), and sends the report via mail.
Unfortunately I am not sure how to proceed, as I couldn't find any doc or blog that talks about integrating a CronJob with an existing pod or volume.
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: sidecar-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "discovery-cronjob"
  labels:
    app.kubernetes.io/name: discovery
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: log-report
              image: busybox
              command: ['/bin/sh']
              args: ['-c', 'cat /var/log/nginx/access.log > nginx.log']
              volumeMounts:
                - mountPath: /log
                  name: shared-logs
          restartPolicy: Never

There are two things you need to know here:
Unfortunately, it is not possible to schedule a CronJob on an existing pod. Pods are ephemeral, and a Job has to run to completion in its own pod; otherwise it would be impossible to tell whether the job completed or not. This is by design.
Also, in order for one pod to see files written by another pod, you must use a PVC. The logs created by your app have to be persisted if your job wants to access them. Here you can find some examples of how to Create ReadWriteMany PersistentVolumeClaims on your Kubernetes Cluster:
Kubernetes allows us to provision our PersistentVolumes dynamically using PersistentVolumeClaims. Pods treat these claims as volumes. The access mode of the PVC determines how many nodes can establish a connection to it. We can refer to the resource provider's docs for their supported access modes.
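To make that concrete for the setup above, here is a minimal sketch, assuming an RWX-capable storage class is available; the claim name, storage class, size and the grep-based report command are assumptions, not part of the original answer. The application pod would mount this claim at its log path instead of the emptyDir, and the weekly CronJob mounts the same claim (batch/v1 is the current CronJob API; older clusters may need batch/v1beta1):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs-pvc            # assumed name
spec:
  accessModes:
    - ReadWriteMany                # so the app pod and the job pod can mount it at the same time
  storageClassName: nfs-client     # assumption: any RWX-capable provisioner (NFS, EFS, CephFS, ...)
  resources:
    requests:
      storage: 1Gi                 # size is an assumption
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-report
spec:
  schedule: "0 0 * * 0"            # weekly, Sunday at midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: log-report
              image: busybox
              command: ['/bin/sh', '-c']
              # placeholder keyword filter; swap in your own report/mail script
              args: ['grep -i "error" /logs/access.log /logs/error.log > /logs/weekly-report.txt']
              volumeMounts:
                - name: shared-logs
                  mountPath: /logs
          restartPolicy: Never
          volumes:
            - name: shared-logs
              persistentVolumeClaim:
                claimName: shared-logs-pvc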

Related

How to mount PVC to a (katib) Job specification?

I'd like to mount a PVC in a (Katib) Job specification, but I can't find anything in the documentation, nor any example.
I'm pretty sure this should be possible, since a Job orchestrates pods and pods can mount PVCs. Or am I missing something?
Please find the respective (Katib) Job specification below:
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      containers:
        - env:
            - name: training-container
          image: docker.io/romeokienzler/claimed-train-mobilenet_v2:0.3
          command:
            - "ipython"
            - "/train-mobilenet_v2.ipynb"
            - "optimizer=${trialParameters.optimizer}"
      restartPolicy: Never
You can add the volume and volume mount to your Katib Job template so that all the HPO jobs on Katib can share the same volumes, e.g.:
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      containers:
        - name: training-container
          image: docker.io/romeokienzler/claimed-train-mobilenet_v2:0.4
          command:
            - "ipython"
            - "/train-mobilenet_v2.ipynb"
            - "optimizer=${trialParameters.optimizer}"
          volumeMounts:
            - mountPath: /data/
              name: data-volume
      restartPolicy: Never
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: data-pvc
Also make sure your PVC uses the ReadWriteMany access mode, so that pods on different nodes can mount the same volume at the same time.
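For reference, a minimal sketch of what the referenced data-pvc could look like; the storage class and size are assumptions, and ReadWriteMany is only honored if the underlying provisioner supports it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteMany                # requires an RWX-capable provisioner (NFS, EFS, CephFS, ...)
  storageClassName: nfs-client     # assumption; use whatever RWX class your cluster offers
  resources:
    requests:
      storage: 10Gi                # size is an assumption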

Executing a Script using a CronJob in a Kubernetes Cluster

I have a 3-node K8s v1.21 cluster in AWS and am looking for a solid config to run a script using a CronJob. I have seen many documents on here and on Google using a CronJob with everything from hostPath to PersistentVolumes/Claims to ConfigMaps; the list goes on.
I keep getting "Back-off restarting failed container/CrashLoopBackOff" errors.
Any help is much appreciated.
cronjob.yaml
The script I am trying to run is basic, for testing only:
#! /bin/
kubectl create deployment nginx --image=nginx
Still getting the same error.
kubectl describe pod/xxxx
This hostPath Pod works in an AWS cluster created using eksctl:
apiVersion: v1
kind: Pod
metadata:
  name: redis-hostpath
spec:
  containers:
    - image: redis
      name: redis-container
      volumeMounts:
        - mountPath: /test-mnt
          name: test-vol
  volumes:
    - name: test-vol
      hostPath:
        path: /test-vol
UPDATE
I tried running your config in GCP on a fresh cluster. The only thing I changed was /home/script.sh to /home/admin/script.sh.
Did you test this on your cluster?
Warning FailedPostStartHook 5m27s kubelet Exec lifecycle hook ([/home/mchung/script.sh]) for Container "busybox" in Pod "dumb-job-1635012900-qphqr_default(305c4ed4-08d1-4585-83e0-37a2bc008487)" failed - error: rpc error: code = Unknown desc = failed to exec in container: failed to create exec "0f9f72ccc6279542f18ebe77f497e8c2a8fd52f8dfad118c723a1ba025b05771": cannot exec in a deleted state: unknown, message: ""
Normal Killing 5m27s kubelet FailedPostStartHook
Assuming you're running it in a remote multi-node cluster (since you mentioned AWS in your question), hostPath is NOT an option for the volume mount there. Your best choice would be to put the script in a ConfigMap and mount it as a volume:
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-script
data:
  script.sh: |
    # write down your script here
And then:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-job
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: redis-container
              image: redis
              args:
                - /bin/sh
                - -c
                - /home/user/script.sh
              volumeMounts:
                - name: redis-data
                  mountPath: /home/user/script.sh
                  subPath: script.sh
          volumes:
            - name: redis-data
              configMap:
                name: redis-script
          restartPolicy: OnFailure   # required for Job/CronJob pod templates
Hope this helps. Let me know if you face any difficulties.
Update:
I think you're doing something wrong. kubectl isn't something you should run from another container / pod, because it requires the kubectl binary to exist inside that container and an appropriate context (kubeconfig and RBAC permissions) to be set. I'm putting a working manifest below so you can see the whole concept of running a script as part of a CronJob:
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-config
data:
  script.sh: |-
    name=StackOverflow
    echo "I love $name <3"
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dumb-job
spec:
  schedule: '*/1 * * * *' # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: busybox
              image: busybox:stable
              lifecycle:
                postStart:
                  exec:
                    command:
                      - /home/script.sh
              volumeMounts:
                - name: some-volume
                  mountPath: /home/script.sh
          volumes:
            - name: some-volume
              configMap:
                name: script-config
          restartPolicy: OnFailure
What it'll do is print some text to STDOUT every minute. Please note that I have only put in commands the container is capable of executing, and kubectl is certainly not something that exists in that container out of the box. I hope that is enough to answer your question.
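One detail worth hedging on: files mounted from a ConfigMap default to mode 0644, so directly exec'ing a mounted script (as the postStart hook above does) can fail with permission or exec errors. If you hit that, setting defaultMode on the configMap volume (and keeping a shebang line in the script) is the usual fix; the earlier redis-job example shows the subPath-style mount if you want the file at an exact path. A minimal sketch of just the volume part:
volumes:
  - name: some-volume
    configMap:
      name: script-config
      defaultMode: 0755   # make the mounted script.sh executable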

Is there an efficient way to create a mechanism for automatically updating OSRM map data in Kubernetes?

We have created a .yaml file to deploy osrm/osrm-backend (https://hub.docker.com/r/osrm/osrm-backend/tags) in a Kubernetes cluster.
We initially download the pbf file into the node's volume, then we create the necessary files for the service, and finally the service starts.
You may find the yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: osrm-albania
  labels:
    app: osrm-albania
spec:
  ports:
    - port: 5000
      targetPort: 5000
      name: http
  selector:
    app: osrm-albania
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: osrm-albania
spec:
  replicas: 1
  selector:
    matchLabels:
      app: osrm-albania
  template:
    metadata:
      labels:
        app: osrm-albania
    spec:
      containers:
        - name: osrm-albania
          image: osrm/osrm-backend:latest
          command: ["/bin/sh", "-c"]
          args: ["osrm-extract -p /opt/car.lua /data/albania-latest.osm.pbf && osrm-partition /data/albania-latest.osrm && osrm-customize /data/albania-latest.osrm && osrm-routed --algorithm mld /data/albania-latest.osrm"]
          ports:
            - containerPort: 5000
              name: osrm-port
          volumeMounts:
            - name: albania
              readOnly: false
              mountPath: /data
      initContainers:
        - name: get-osrm-file
          image: busybox
          command: ['wget', 'http://download.geofabrik.de/europe/albania-latest.osm.pbf', '--directory-prefix=/data']
          volumeMounts:
            - name: albania
              readOnly: false
              mountPath: /data
      volumes:
        - name: albania
          emptyDir: {}
The problem is that we need to update the map data used by the OSRM service regularly, which means being able to re-download the pbf file and recreate the files used by the service.
This might be achieved via Kubernetes CronJobs, which would probably have to use persistent volumes instead (Cron Jobs in Kubernetes - connect to existing Pod, execute script).
Is this the only way to get new map data and refresh the data used by the OSRM service?
How exactly?
Is there a better or easier way to achieve this?
This is a tricky situation. I had the same problem in my cluster and I fixed it by splitting the job into more pods:
1. wget into a volume mount ('volume A')
2. extract, partition and customize in 'volume A'
3. copy 'volume A' to volume mount 'volume B'
4. run osrm-routed with 'volume B'
This way I set steps 1, 2 and 3 up as a CronJob, and each run performs all the operations without breaking the service. The issue was the large amount of time needed by the first 3 operations (2 to 3 hours); see the sketch below for how such a CronJob could look.
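A minimal sketch of steps 1-3 as a single CronJob, under stated assumptions: the PVC names osrm-build-pvc and osrm-live-pvc, the weekly schedule, and the plain cp handover are all assumptions, and the osrm-routed Deployment would mount osrm-live-pvc instead of the emptyDir:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: osrm-albania-refresh
spec:
  schedule: "0 2 * * 0"              # assumed: weekly, Sunday 02:00
  concurrencyPolicy: Forbid          # don't start a new rebuild while one is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          initContainers:
            - name: get-osrm-file
              image: busybox
              command: ['wget', 'http://download.geofabrik.de/europe/albania-latest.osm.pbf', '--directory-prefix=/build']
              volumeMounts:
                - name: build
                  mountPath: /build
          containers:
            - name: rebuild
              image: osrm/osrm-backend:latest
              command: ["/bin/sh", "-c"]
              args: ["osrm-extract -p /opt/car.lua /build/albania-latest.osm.pbf && osrm-partition /build/albania-latest.osrm && osrm-customize /build/albania-latest.osrm && cp /build/albania-latest.osrm* /live/"]
              volumeMounts:
                - name: build
                  mountPath: /build
                - name: live
                  mountPath: /live
          volumes:
            - name: build
              persistentVolumeClaim:
                claimName: osrm-build-pvc   # assumed PVC name
            - name: live
              persistentVolumeClaim:
                claimName: osrm-live-pvc    # assumed PVC name, shared with the osrm-routed Deployment
Note that osrm-routed keeps its data files open, so the Deployment typically needs a rollout restart (or a similar handover step) after the copy to pick up the new map.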

Does a Kubernetes pod restart on failure with ImagePullBackOff?

I have a Kubernetes cluster working perfectly fine, with 10 worker nodes and 1 master. I have the deployment.yaml file below, of type DaemonSet, for the pods and containers.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: deployment
  namespace: mynamespace
spec:
  replicas: 2
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      labels:
        name: deployment
    spec:
      # List of all the containers
      containers:
        - name: container1
          image: CRname/container1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /share
              name: share-files
          securityContext:
            privileged: true
        - name: container2
          image: CRname/container2
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /share
              name: share-files
          securityContext:
            privileged: true
      volumes:
        - name: share-files
          hostPath:
            path: /home/user/shared-folder
      imagePullSecrets:
        - name: Mysecret
      nodeSelector:
        NodeType: ALL
On starting the above, the two containers start running on all the worker nodes and run perfectly fine. But I have observed that sometimes a few nodes show an ImagePullBackOff error, which means that due to a network or other issue the node wasn't able to download the image. I used the describe command to check which image failed. The problem is that it didn't try to automatically re-download the image; I had to delete the pod, so that it was recreated automatically, and then it worked fine.
I just want to know why the pod shows this error and does not try to re-download the image. Is there anything I can add to the YAML file so that it deletes and recreates the pod automatically on any type of error?
EDIT: I would also like to ask: when the deployment is created, the nodes pull the image from the container registry for the first time. Once the images are downloaded locally on the nodes, why do they have to pull the image again when it is already present locally?
Please suggest some good options. Thanks.
imagePullPolicy: Always causes the image to be downloaded every time the pod restarts. And the pod is always restarted by the DaemonSet, so just wait a bit longer; ImagePullBackOff means the kubelet is backing off between pull retries, not that it has given up.
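If re-pulling on every restart is not what you want, switching the pull policy makes the kubelet reuse the locally cached image. A minimal sketch of just the relevant container fields (everything else stays as in the DaemonSet above):
containers:
  - name: container1
    image: CRname/container1
    imagePullPolicy: IfNotPresent   # only pull when the image is not already present on the node
Note that IfNotPresent only helps with a stable tag; with a :latest-style tag the node may keep running a stale image.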

How do I retain/access the custom log files from a completed cronjob?

I have a cronjob that is completing and outputting several log files.
I want to persist these files and be able to access them after the pod has succeeded.
I've found I can access the stdout with oc logs -f <pod>, but I really need to access the log files.
I'm aware Openshift 2 apparently had an environment variable location OPENSHIFT_LOG_DIR that log files were written to, but Openshift 3.5 doesn't appear to have this.
What's my best way of logging and accessing the logs from the CronJob after the pod has succeeded and finished?
After a Job runs to completion, the Pod terminates, but it is not automatically deleted. Since it has completed, you need to use -a to see it. Once you have the Pod name, kubectl logs works as you would expect.
$ kubectl get pods -a
NAME                       READY     STATUS      RESTARTS   AGE
curator-1499817660-6rzmf   0/1       Completed   0          28d
$ kubectl logs curator-1499817660-6rzmf
2017-07-12 00:01:10,409 INFO ...
A bit late, but I hope this answer helps someone facing a similar situation. In my case I needed to access some files generated by a CronJob, and because the pod (and its logs) are no longer accessible once the job completes, I could not do so and was getting an error:
kubectl logs mongodump-backup-29087654-hj89
Error from server (NotFound): pods "mongodump-backup-27640120-n8p7z" not found
My solution was to deploy a Pod that could access the PVC. The Pod runs a busybox image as below :
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
  namespace: demos
spec:
  containers:
    - image: busybox
      imagePullPolicy: "IfNotPresent"
      name: pvc-inspector
      command: ["tail"]
      args: ["-f", "/dev/null"]
      volumeMounts:
        - mountPath: "/tmp"
          name: pvc-mount
  volumes:
    - name: pvc-mount
      persistentVolumeClaim:
        claimName: mongo-backup-toolset-claim
After deploying this Pod alongside the CronJob, I can exec into the pvc-inspector pod and then access the files generated by the CronJob:
kubectl exec -it pvc-inspector -n demos -- sh
cd tmp
ls
The pvc-inspector has to use the same persistentVolumeClaim as the CronJob, and it also has to mount the same directory path as the CronJob.
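For completeness, a minimal sketch of what mongo-backup-toolset-claim might look like; the access mode, size and (default) storage class are assumptions. With ReadWriteOnce the inspector pod must land on the same node as the job pods, unless your provisioner supports ReadWriteMany:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-backup-toolset-claim
  namespace: demos
spec:
  accessModes:
    - ReadWriteOnce     # assumption; use ReadWriteMany if your storage class supports it
  resources:
    requests:
      storage: 5Gi      # size is an assumption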
The CronJob is a simple utility that does database backups of Mongo instances:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongodump-backup
spec:
  schedule: "*/5 * * * *"   # cron job every 5 minutes
  startingDeadlineSeconds: 60
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mongodump-backup
              image: golide/mongo-backup-toolset:0.0.2
              imagePullPolicy: "IfNotPresent"
              env:
                - name: DATABASE_NAME
                  value: "claims"
                - name: MONGODB_URI
                  value: mongodb://root:mypasswordhere@mongodb-af1/claims
              volumeMounts:
                - mountPath: "/tmp"
                  name: mongodump-volume
              command: ['sh', '-c', "./dumpp.sh"]
          restartPolicy: OnFailure
          volumes:
            - name: mongodump-volume
              persistentVolumeClaim:
                claimName: mongo-backup-toolset-claim