Kubernetes: schedule a Job after a CronJob

I have a CronJob named "cronX" and a Job named "JobY".
How can I configure Kubernetes to run "JobY" after "cronX" finishes?
I know I could make an API call from "cronX" to start "JobY", but I don't want to do it that way.
Is there any Kubernetes configuration to schedule this?

Is it possible for this pod to contain 2 containers, where one of them runs only after the other container finishes?
Negative, more details here. If you only have 2 containers to run, you can place the first one under initContainers and the other under containers, then schedule the pod.
There is no built-in Kubernetes configuration for workflow orchestration. You can try Argo Workflows to do this.
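A minimal sketch of the initContainers approach described above; the images and commands are placeholders, the point is only the ordering guarantee (the init container must succeed before the main container starts):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: run-in-order
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: first-step            # runs to completion first ("cronX" work)
          image: busybox
          command: ["sh", "-c", "echo running first step"]
      containers:
        - name: second-step           # starts only after first-step succeeds ("JobY" work)
          image: busybox
          command: ["sh", "-c", "echo running second step"]
```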

Related

Designing K8 pods and processes for initialization

I have a problem statement wherein there is a Kubernetes cluster and I have some pods running on it.
Now, I want some functions/processes to run once per deployment, independent of the number of replicas.
These processes use the same image as the one in the deployment YAML.
I cannot use initContainers or sidecars, because they would run alongside the main container in the pod for each replica.
I tried to create a new image and then a pod out of it, but this pod keeps running, which wastes cluster resources; it should be destroyed after it has done its job. Also, the main container depends on the completion of this process in order to run the "command" part of the K8s spec.
Looking for suggestions on how to tackle this.
Theoretically, you could write an admission controller webhook that intercepts create/update operations on Deployments and triggers your functions as needed. If your functions need to be checked, use a ValidatingWebhookConfiguration to validate the process and then deny or accept the request.
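A rough sketch of what such a webhook registration might look like; the service name, namespace, and path are placeholders, and the webhook server itself still has to be written separately and served over TLS:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-init-check
webhooks:
  - name: deployment-init-check.example.com
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]   # intercept create/update of Deployments
        resources: ["deployments"]
    clientConfig:
      service:
        name: init-check-svc              # placeholder service running the webhook
        namespace: default
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```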

Kubernetes: run a job on node reboot only once

Is there a way to run a job only once upon machine reboot in Kubernetes?
I thought of running a CronJob as a static pod, but it seems the kubelet does not like it.
Edit: Based on the replies, I'd like to clarify. I'm only looking at doing this through native Kubernetes. I'm ok writing a cronjob in Kubernetes but I need this to run only once and upon node reboot.
Not sure what your platform is.
For example, in AWS, an EC2 instance has user_data (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html).
It will run commands on your Linux/Windows instance at launch.
You should be able to find similar solutions for other cloud providers or on-premises servers.
If I understand you correctly you should consider using DaemonSet:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
That way you could create a container with a job that would be run from a DaemonSet.
Alternatively you could consider DaemonJob:
This is an example CompositeController that's similar to Job, except that a pod will be scheduled to each node, similar to DaemonSet.
Also there are:
Kubebuilder
Kubebuilder is a framework for building Kubernetes APIs using custom resource definitions (CRDs).
and:
Metacontroller
Metacontroller is an add-on for Kubernetes that makes it easy to write and deploy custom controllers in the form of simple scripts.
But the first option that I have provided would be easier to implement in my opinion.
Please let me know if that helped.
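A minimal sketch of the DaemonSet option from the first answer; the image and command are placeholders, and the container does its one-time boot work and then sleeps so the pod is not restarted (a fresh pod, and hence a fresh run, only happens when the node comes back and the pod is recreated):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: on-boot-task
spec:
  selector:
    matchLabels:
      app: on-boot-task
  template:
    metadata:
      labels:
        app: on-boot-task
    spec:
      containers:
        - name: boot-task
          image: busybox              # placeholder image
          # run the one-time task, then block so the pod stays Running
          command: ["sh", "-c", "echo boot task done && sleep infinity"]
```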
Consider using a CronJob for this. It takes the same format as the normal cron scheduler on Linux.
Besides the usual format (minute / hour / day of month / month / day of week) that is widely used to indicate a schedule, the Linux cron scheduler also allows the use of @reboot.
This directive, followed by the absolute path to the script, will cause it to run when the machine boots. (Note, though, that @reboot is a Linux crontab feature; the Kubernetes CronJob schedule field does not support it.)

Using initcontainers in a job to do some stuff after pod initialization

I'm currently trying to create a Job in Kubernetes in a project I've just joined.
This Job should wait until 2 other pods are up and running, then run a .sh script to create some data for the testers in the application.
I've figured out I should use initContainers. However, there is one point I don't understand.
Why should I include some environment values under the env tag under initContainers in the Job description .yaml file?
I thought I was just waiting for the pods to be initialised, not creating them again. Am I missing something?
Thanks in advance.
initContainers are like the containers running in the Pod, but they are executed before the ones defined in the spec key containers.
Like containers, they share some of the Pod's namespaces and IPC, and they take the same fields (image, command, env, and so on); the kubelet runs the declared initContainers to completion, one at a time, and only then starts the containers.
Keep in mind that when you create a Pod you're basically creating an empty container named pause that provides a namespace base for the following containers: so, in the end, an initContainer is not creating a new Pod again, as its name might suggest; it's an initializer.
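A sketch of the waiting pattern being described, assuming the two pods are reachable through Services named svc-a and svc-b (placeholder names). The env values under initContainers are typically there to tell the wait script what to wait for, not to recreate anything:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: seed-test-data
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: wait-for-deps
          image: busybox
          env:
            - name: TARGETS           # which services to wait for (assumed names)
              value: "svc-a svc-b"
          command:
            - sh
            - -c
            - |
              for t in $TARGETS; do
                until nslookup "$t"; do echo "waiting for $t"; sleep 2; done
              done
      containers:
        - name: seed
          image: busybox              # placeholder; would run the .sh seeding script
          command: ["sh", "-c", "echo seeding test data"]
```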

Run a container on a pod failure Kubernetes

I have a cronjob that runs and does things regularly. I want to send a slack message with the technosophos/slack-notify container when that cronjob fails.
Is it possible to have a container run when a pod fails?
There is nothing built in for this that I am aware of. You could use a webhook to get notified when a pod changes and look for state information there, but you would have to build the plumbing yourself or look for an existing third-party tool.
Pods and Jobs are different things. If you want to wait for a job to fail and send a notification after it has, you can do something like this in bash (note that kubectl run needs a pod name, and --restart=Never makes it a one-off pod):
while true
do
  kubectl wait --for=condition=failed job/myjob
  kubectl run slack-notify --image=technosophos/slack-notify --env="EMAIL=failure@yourdomain.com" --restart=Never
done
To the question: Is it possible to have a container run when a pod fails?
Yes. Although there is nothing out of the box right now, you can define a health check.
Then you can write a cron job, a Jenkins job, or a custom Kubernetes cluster service/controller that checks/probes that health check regularly; if the health check fails, you can run a container based on that.
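For the "define a health check" part, a liveness probe on the workload is the usual shape; this is only a fragment of a container spec, and the path and port are assumptions about the application:

```yaml
containers:
  - name: myapp
    image: myapp:latest             # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz              # assumed health endpoint
        port: 8080
      periodSeconds: 30
      failureThreshold: 3           # pod is considered failed after 3 misses
```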

Scheduling a controller to run every one hour in Kubernetes

I have a console application that does some operations when run, and I have built a Docker image of it. Now I would like to deploy it to Kubernetes and run it every hour; is that possible in K8s?
I have read about CronJobs, but those are only offered from version 1.4 onwards.
The short answer: sure, you can do it with a CronJob, and yes, it does create a Pod. You can configure the job history limits to control how many failed and completed pods you want to keep before Kubernetes deletes them.
Note that a CronJob builds on the Job resource: on each tick of the schedule it creates a Job, which in turn creates a Pod.
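An hourly CronJob for the image described above might look like this; the image name is a placeholder, and the history limits show the cleanup knobs mentioned in the answer:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-task
spec:
  schedule: "0 * * * *"               # top of every hour
  successfulJobsHistoryLimit: 3       # keep the last 3 completed jobs
  failedJobsHistoryLimit: 1           # keep only the most recent failure
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: console-app
              image: my-console-app:latest   # placeholder image
```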