Run kubectl command after CronJob is scheduled but before it runs - kubernetes

I have multiple CronJobs scheduled to run on certain days. I deploy them with Helm charts, and I need to test them, so I'm looking for a way to run a CronJob once, right after it is deployed with Helm.
I have already tried using Helm post-install hooks to create a Job from the CronJob:
kubectl create job --from=cronjob/mycronjobname mycronjob-run-0
but for this I have to run a separate container with a kubectl Docker image, which is not a good option for me. (On another attempt, the hook waited until the CronJob had been executed on its schedule before running the Job; that may have been a mistake on my part.)
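For reference, this is roughly the hook Job that attempt describes; a minimal sketch, assuming placeholder names, a bitnami/kubectl image, and a pre-existing service account with permission to create Jobs:
apiVersion: batch/v1
kind: Job
metadata:
  name: mycronjob-initial-run
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: mycronjob-trigger       # placeholder; needs RBAC rights to create Jobs
      restartPolicy: Never
      containers:
      - name: trigger
        image: bitnami/kubectl:latest             # the extra kubectl image the question wants to avoid
        command: ["kubectl", "create", "job", "--from=cronjob/mycronjobname", "mycronjob-run-0"]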
I also tried creating a separate Job to execute this command, but Helm deploys the Job before the CronJob, so that's not an option either.
I also tried adding a postStart lifecycle hook to the CronJob's container, but that also waits until the CronJob is executed according to its schedule.
Is there any way to do this?

Related

kubernetes schedule job after cron

I have a CronJob named "cronX" and a Job named "JobY".
How can I configure Kubernetes to run "JobY" after "cronX" finishes?
I know I can do it with an API call from "cronX" to start "JobY", but I don't want to do that with an API call.
Is there any Kubernetes configuration to schedule this?
Is it possible for this pod to contain 2 containers, with one of them running only after the other container finishes?
Negative, more details here. If you only have 2 containers to run, you can place the first one under initContainers and the other under containers, then schedule the pod.
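For example, a minimal sketch of that layout (images and commands here are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: cronx-then-joby
spec:
  restartPolicy: Never
  initContainers:
  - name: cronx                # runs first and must complete successfully
    image: busybox
    command: ["sh", "-c", "echo doing cronX work"]
  containers:
  - name: joby                 # starts only after the init container has finished
    image: busybox
    command: ["sh", "-c", "echo doing jobY work"]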
There is no built-in Kubernetes configuration for this kind of workflow orchestration. You can try Argo Workflows to do it.
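As an illustration, a rough two-step Argo Workflow sketch (step names and images are placeholders); Argo also offers a CronWorkflow resource if the first step needs to run on a schedule:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cronx-then-joby-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: run-cronx            # first step
        template: cronx
    - - name: run-joby             # runs only after the previous step succeeds
        template: joby
  - name: cronx
    container:
      image: busybox
      command: ["sh", "-c", "echo cronX work"]
  - name: joby
    container:
      image: busybox
      command: ["sh", "-c", "echo jobY work"]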

How to make sure that the script is run from only one of the PODs of all replicas?

Our product is running on Kubernetes/Docker.
I have a Pod that will run with multiple replicas. There is a script in the Pod that is run after the Pod starts, and the script needs the main process in the Pod to be running.
The problem is that the script should only be run from one Pod; with multiple replicas the script will be run multiple times.
How can I make sure that the script is run from only one of the Pods across all replicas?
Thanks
Chandra
It's not possible to run a script from only one replica of a Pod. Use a CronJob for such a use case.
If you are using Helm, a Helm post-install hook can be used.
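As a sketch of the Helm option, a one-shot Job marked as a post-install hook; the image name and script path below are placeholders for your own:
apiVersion: batch/v1
kind: Job
metadata:
  name: run-startup-script-once
  annotations:
    "helm.sh/hook": post-install              # created once, after the release is installed
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: script
        image: myproduct:latest               # placeholder for the image that contains the script
        command: ["/bin/sh", "-c", "/scripts/run-once.sh"]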

How to create a CronJob in Spinnaker

I created a CronJob using kubectl. I would like to manage this job using Spinnaker, but I can't find my created job in Spinnaker.
I created it by running "kubectl create -f https://k8s.io/examples/application/job/cronjob.yaml".
The CronJob looks like this: https://k8s.io/examples/application/job/cronjob.yaml
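For reference, that example manifest is roughly the following (newer docs use apiVersion batch/v1; older clusters used batch/v1beta1):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure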
There was an issue open on GitHub, and based on a recent comment on How do I deploy a CronJob? #2863:
...
At this time, there is no support for showing CronJobs on the cluster screen, which is intended to focus more on "server-like" resources rather than jobs.
...
As for deploying CronJobs in Spinnaker:
... was not possible prior to Spinnaker 1.8. It has been possible to deploy cron jobs since Spinnaker 1.8
As @Amityo mentioned, you can show deployed CronJobs using kubectl get cronjob, and if you would like to list the Jobs scheduled by this CronJob, you can use kubectl get jobs.
This is explained really well in the official Kubernetes documentation, Running Automated Tasks with a CronJob.

Scheduling a controller to run every one hour in Kubernetes

I have a console application that performs some operations when run, and I build a Docker image of it. Now I would like to deploy it to Kubernetes and run it every hour; is it possible to do that in K8s?
I have read about CronJobs, but those are only offered from version 1.4 onward.
The short answer: sure, you can do it with a CronJob, and yes, it does create a Pod for each run. You can configure the job history limits (successfulJobsHistoryLimit and failedJobsHistoryLimit) to control how many completed and failed Jobs, and their Pods, you want to keep before Kubernetes deletes them.
Note that a CronJob builds on the Job resource: each scheduled run creates a Job, which in turn creates the Pod.
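For example, a rough sketch of an hourly CronJob with history limits; the image is a placeholder for the console application's image:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-console-app
spec:
  schedule: "0 * * * *"                 # top of every hour
  successfulJobsHistoryLimit: 3         # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1             # keep the last failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: console-app
            image: myregistry/console-app:latest   # placeholder image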

How to "deploy" in kubernetes without any changes, just to get pods to cycle

What I am trying to do:
The app that runs in the Pod does some refreshing of its data files on start.
I need to restart the container each time I want to refresh the data.
(A refresh can take a few minutes, so I have a Probe checking for readiness.)
What I think is a solution:
I will run a scheduled job to do a rolling-update style deploy, which will take the old Pods out one at a time and replace them, without downtime.
Where I'm stuck:
How do I trigger a deploy if I haven't changed anything?
Also, I need to be able to do this from the scheduled job, obviously, so no manual editing.
Any other ways of doing this?
As of kubectl 1.15, you can run:
kubectl rollout restart deployment <deploymentname>
What this does internally is patch the Deployment's Pod template with a kubectl.kubernetes.io/restartedAt annotation, so the Deployment controller performs a rollout according to the Deployment's update strategy.
For previous versions of Kubernetes, you can simulate a similar thing:
kubectl set env deployment <deploymentname> --env="LAST_MANUAL_RESTART=$(date +%s)"
And you can even do this for all Deployments in a single namespace:
kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=...
According to documentation:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
You can just use kubectl patch to update, for example, a label inside .spec.template.
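For instance, a hedged one-liner along those lines (the label key redeploy is arbitrary, and <deploymentname> is a placeholder as above):
kubectl patch deployment <deploymentname> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"redeploy\":\"$(date +%s)\"}}}}}"
Because the label value changes on every run, the Pod template changes and the Deployment controller rolls the Pods; the same command could be issued from a scheduled CronJob that runs a kubectl image under a service account allowed to patch Deployments.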