Is there a way to run a job only once upon machine reboot in Kubernetes?
I thought of running a CronJob as a static pod, but it seems kubelet does not like it.
Edit: Based on the replies, I'd like to clarify: I'm only looking at doing this through native Kubernetes. I'm OK with writing a CronJob in Kubernetes, but I need it to run only once, upon node reboot.
I'm not sure which platform you are on.
For example, in AWS an EC2 instance has user_data (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html).
It will run commands on your Linux/Windows instance at launch.
You should be able to find similar solutions for other cloud providers or on-premises servers.
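For illustration only, a minimal cloud-init sketch of what such user data could look like, assuming your instances use cloud-init (the command and log path are placeholders; note that bootcmd runs on every boot, while runcmd runs only on the first boot):

#cloud-config
# Placeholder command, run on every boot of the instance
bootcmd:
  - [ sh, -c, "echo node booted at $(date) >> /var/log/boot-marker.log" ]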
If I understand you correctly, you should consider using a DaemonSet:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
That way you could package your job in a container and run it from a DaemonSet.
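As a rough sketch of that approach (names and the echo command are placeholders, not a tested manifest): a DaemonSet whose pods do the one-off work in an init container and then just idle, so the work should run once per node and again whenever the node's pod is recreated, e.g. after a node reboot:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: on-node-boot
spec:
  selector:
    matchLabels:
      app: on-node-boot
  template:
    metadata:
      labels:
        app: on-node-boot
    spec:
      initContainers:
        - name: run-once
          image: busybox
          # Placeholder for the actual work you want to do on the node
          command: ["sh", "-c", "echo running one-off task on $(hostname)"]
      containers:
        # Minimal long-running container so the pod stays up and the
        # init container is not run over and over again
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: 1m
              memory: 8Mi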
Alternatively you could consider DaemonJob:
This is an example CompositeController that's similar to Job, except that a pod will be scheduled to each node, similar to DaemonSet.
Also there are:
Kubebuilder
Kubebuilder is a framework for building Kubernetes APIs using custom resource definitions (CRDs).
and:
Metacontroller
Metacontroller is an add-on for Kubernetes that makes it easy to write and deploy custom controllers in the form of simple scripts.
But in my opinion the first option I provided would be easier to implement.
Please let me know if that helped.
Consider using a CronJob for this. It takes the same format as the normal cron scheduler on Linux.
Besides the usual format (minute / hour / day of month / month / day of week) that is widely used to indicate a schedule, the cron scheduler also allows the use of @reboot.
This directive, followed by the absolute path to the script, will cause it to run when the machine boots.
I am currently in doubt about how to properly run kube-bench in my cluster (an AKS cluster, to be specific).
I used the job definition file from GitHub, of course, and modified it to run as a CronJob (maybe overkill, but that isn't relevant to my question). This definition runs the pod in the default namespace, and I also changed it to use a newly created kube-bench namespace. Furthermore, it runs on only one node, and as I understand it, in that case the tests are run only on that node.
So my questions would be:
Is it good practice to run it in a separate namespace intended only for kube-bench?
If I have more nodes (and node pools) in a cluster, do I need to run the job/pod on every node (e.g. as a DaemonSet) and collect the report for every node individually?
Thanks in advance
I am new to Kubernetes and completely new to setting it up in EKS.
I am trying to share a GPU between multiple pods. Going through a few documents and articles, I found that I should update the kube-scheduler configuration with parameters that will then allow me to make the necessary changes for enabling GPU sharing between pods.
Question
How do I update the kube-scheduler configuration in EKS? If updating the configuration is not possible, is there some other way I can set up a kube-scheduler for only those pods which require a GPU?
I think you need a custom kube-scheduler, and for your pods to be able to specify whether they want the default or the custom scheduler.
Kubernetes supports this: https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/ -- basically, create a .yaml file, run kubectl create -f on it, and you should see your scheduler. You'll want to run it in the kube-system namespace, and give it a unique name (so your pods have a way of saying which scheduler they want).
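For illustration, the pod side is just the schedulerName field in the pod spec; the scheduler name, image, and GPU resource below are placeholders/assumptions (the exact resource name depends on the device plugin you deploy):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  # Must match the name your custom scheduler registers itself with
  schedulerName: my-gpu-scheduler
  containers:
    - name: app
      image: my-gpu-image:latest   # placeholder
      resources:
        limits:
          # Resource name depends on the (virtual) GPU device plugin you install
          nvidia.com/gpu: 1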
I haven't done this in EKS myself, but I would be very surprised if you couldn't run a custom scheduler in EKS. Moreover, this AWS blog post, https://aws.amazon.com/blogs/opensource/virtual-gpu-device-plugin-for-inference-workload-in-kubernetes/, which sounds similar to what you're looking for, seems like it would require a custom scheduler.
I have a problem statement wherein there is a Kubernetes cluster and I have some pods running on it.
Now, I want some functions/processes to run once per deployment, independent of the number of replicas.
These processes use the same image as the one in the deployment YAML.
I cannot use init containers or sidecars, because they would run along with the main container in the pod for each replica.
I tried to create a new image and then a pod out of it, but this pod keeps running, which is not good for cluster resources, as it should be destroyed after it has done its job. Also, the main container depends on the completion of this process in order to run the "command" part of the K8s spec.
Looking for suggestions on how to tackle this?
Theoretically, you could write an admission controller webhook that intercepts deployment create/update requests and triggers your functions as you want. If your functions need to be checked, use a ValidatingWebhookConfiguration to validate the process and then accept or deny the request.
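A rough sketch of the registration side, assuming you already run a webhook server behind a Service (the names, namespace, and path are placeholders, and the caBundle and the server implementation are up to you):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-hooks
webhooks:
  - name: deployments.example.com        # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        name: my-webhook                 # placeholder Service in front of your webhook server
        namespace: default
        path: /validate-deployments
      # caBundle: <base64-encoded CA that signed the webhook's serving certificate>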
I have a console application which performs some operations when run, and I generate an image of it using Docker. Now I would like to deploy it to Kubernetes and run it every hour; is it possible to do that in K8s?
I have read about CronJobs, but those are only offered from version 1.4 onwards.
The short answer: sure, you can do it with a CronJob, and yes, it does create a Pod. You can configure the job history limits to control how many failed and completed pods you want to keep before Kubernetes deletes them.
Note that a CronJob builds on the Job resource: each scheduled run creates a Job, which in turn creates the Pod.
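For example, a minimal hourly CronJob sketch (image and command are placeholders; on current clusters the API group is batch/v1, older ones used batch/v1beta1 or batch/v2alpha1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-task
spec:
  schedule: "0 * * * *"           # top of every hour
  successfulJobsHistoryLimit: 3   # keep the last 3 completed jobs/pods
  failedJobsHistoryLimit: 1       # keep the last failed one for debugging
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: my-console-app:latest   # placeholder
              command: ["/app/run"]          # placeholder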
I wonder how one would implement a colocated auxiliary container in a Pod within a Deployment, where the auxiliary container does not provide a service but rather runs a job/batch workload.
The background of my question is that I want to deploy a scalable service in which each instance needs configuration after it starts. This configuration is done via an HTTP POST to the local, colocated service instance. I've implemented an auxiliary container for this in order to benefit from colocation, so the auxiliary container always knows which instance needs to be configured.
The problem is that the restartPolicy needs to be defined at the Pod level. I am looking for something like restart policy Always for the service and a different restart policy OnFailure for the configuration job.
I know that k8s provides the Job resource for such workloads, but is there an option to colocate those Jobs with Pods?
Furthermore, I've stumbled across the so-called init containers, which could be defined via annotations. But these have the drawback that k8s ensures the actual Pod containers only start after the init container has run, so for my particular scenario they seem unsuitable.
As I understand it, you need your service running in order to configure it.
Your solution is workable: you can set restartPolicy: Always, you just need a way to tell your one-off configuration container that it has already run. You could attach an emptyDir volume to your configuration container, create a file on it to mark the configuration as successful, and check for this file from your process. After initialization you enter a sleep loop. The downside is that some resources will be taken up by that container too.
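A sketch of that layout, with image names as placeholders and the paths taken from the script further down: both containers mount the same emptyDir, and the configurator touches a stamp file once it has done its work, then sleeps in a loop:

apiVersion: v1
kind: Pod
metadata:
  name: service-with-configurator
spec:
  restartPolicy: Always
  volumes:
    - name: guard-vol
      emptyDir: {}
  containers:
    - name: service
      image: my-service:latest           # placeholder: the instance to configure
    - name: configurator
      image: my-configurator:latest      # placeholder: contains /opt/my-config-process
      volumeMounts:
        - name: guard-vol
          mountPath: /mnt/guard-vol
      command:
        - sh
        - -c
        - |
          # Configure the local service once, then idle so the container stays up
          [ -f /mnt/guard-vol/stamp ] || { /opt/my-config-process && touch /mnt/guard-vol/stamp; }
          while true; do sleep 3600; done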
Or you can just add an extra process in the same container and do the configuration (maybe with the file mentioned above as a guard to avoid configuring twice). So write a simple shell script like this and run it instead of your main process:
#!/bin/sh
# Run the one-off configuration in the background, guarded by a stamp file
(
  [ -f /mnt/guard-vol/stamp ] && exit 0
  /opt/my-config-process parameters && touch /mnt/guard-vol/stamp
) &
# Replace the shell with the main process, passing through all arguments
exec /opt/my-main-process "$@"
Alternatively, you could implement a separate pod that queries the Kubernetes API for pods of your service with the label configured=false, configures them, and then updates the label via the API. You should also modify your Service to select only configured=true pods.
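The Service side of that approach could look like this (name, labels, and ports are placeholders); the configurator would then set the label via the API, e.g. kubectl label pod <pod-name> configured=true:

apiVersion: v1
kind: Service
metadata:
  name: my-service              # placeholder
spec:
  selector:
    app: my-service
    configured: "true"          # only pods labelled configured=true receive traffic
  ports:
    - port: 80                  # placeholder ports
      targetPort: 8080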