How to create a CronJob in Spinnaker - Kubernetes

I created a CronJob using kubectl. I would like to manage this job using Spinnaker, but I can't find my created job in Spinnaker.
I created the CronJob by running "kubectl create -f https://k8s.io/examples/application/job/cronjob.yaml"
The CronJob looks like this: https://k8s.io/examples/application/job/cronjob.yaml

There is an open issue on GitHub, How do I deploy a CronJob? #2863, and based on a recent comment there:
...
At this time, there is no support for showing CronJobs on the cluster screen, which is intended to focus more on "server-like" resources rather than jobs.
...
As for deploying CronJobs in Spinnaker:
... was not possible prior to Spinnaker 1.8. It has been possible to deploy cron jobs since Spinnaker 1.8
As @Amityo mentioned, you can list deployed CronJobs using kubectl get cronjob, and if you would like to list the Jobs scheduled by this CronJob you can use kubectl get jobs.
This is explained well in the official Kubernetes documentation: Running Automated Tasks with a CronJob.
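To check this from the command line, the two commands look like this (a quick sketch; the example manifest above names its CronJob hello, so adjust the name if yours differs):
kubectl get cronjob hello     # shows the CronJob itself
kubectl get jobs --watch      # shows the Jobs the CronJob spawns on its schedule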

Related

How to create an Argo Workflow from a CronWorkflow, similar to how you can create a Kubernetes Job from a CronJob?

I can create a Job based on a CronJob like this:
kubectl create job --from=cronjob/name-of-my-cron-job name-of-the-manually-triggered-job
But how can I create a Workflow based on a CronWorkflow without using the argo command-line program? I only have kubectl on the machine where I need to run this.
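One kubectl-only approach is sketched below (this is not an official Argo recipe; the CronWorkflow name my-cron-wf, the argo namespace, and the jq dependency are all assumptions): pull the workflowSpec out of the CronWorkflow and wrap it in a Workflow resource.
# Read the CronWorkflow, rewrap its workflowSpec as a Workflow, and submit it with kubectl.
# Requires jq; "my-cron-wf" and the "argo" namespace are placeholders.
kubectl get cronworkflow my-cron-wf -n argo -o json \
  | jq '{apiVersion: "argoproj.io/v1alpha1",
         kind: "Workflow",
         metadata: {generateName: (.metadata.name + "-manual-")},
         spec: .spec.workflowSpec}' \
  | kubectl create -n argo -f -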

Run kubectl command after CronJob is scheduled but before it runs

I have multiple CronJobs scheduled to run on certain days. I deploy them with Helm charts, and I need to test them - therefore, I'm looking for a way to run a CronJob once, right after it gets deployed with Helm.
I have already tried using Helm post-hooks to create a Job from the CronJob,
kubectl create job --from=cronjob/mycronjobname mycronjob-run-0
but for this I have to run a separate container with a kubectl Docker image, which is not a good option for me. (At one point it also waited until the CronJob had been executed before running the Job, though that may have been a mistake on my part.)
I also tried creating a separate Job to execute this command, but Helm deploys the Job first and only then the CronJob, so that's not an option either.
I also tried adding a postStart lifecycle hook to the CronJob container, however that too waits for the CronJob to be executed according to its schedule.
Is there any way to do this?
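One pattern that avoids a kubectl container is sketched below (only a sketch, not a confirmed solution; the chart name mychart, the image value, and the command are placeholders): factor the CronJob's pod spec into a shared Helm named template and reuse it in a post-install hook Job, so the first run happens as an ordinary Job at install time.
{{- /* templates/_helpers.tpl: pod spec shared by the CronJob and the hook Job */ -}}
{{- define "mychart.taskPodSpec" -}}
restartPolicy: Never
containers:
  - name: task
    image: {{ .Values.image }}
    command: ["/app/run-task"]   # placeholder command
{{- end }}

# templates/cronjob.yaml
apiVersion: batch/v1             # batch/v1beta1 on Kubernetes older than 1.21
kind: CronJob
metadata:
  name: mycronjob
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
{{ include "mychart.taskPodSpec" . | indent 10 }}

# templates/initial-run-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mycronjob-initial-run
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
{{ include "mychart.taskPodSpec" . | indent 6 }}
Because the hook Job reuses the same pod spec, it always matches what the CronJob will run later, and no extra kubectl image is needed.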

Stopping all pods in Kubernetes cluster before running database migration job

I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations with a Kubernetes Job object using a Helm "pre-upgrade" hook.
The problem is that when the migration Job starts, the old version's pods are still working with the database. They can lock objects in the database, and because of that the migration Job may fail.
So I want to automatically stop all the pods in the cluster before the migration Job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all the answers.
There are two ways I can see that you can do this.
The first option is to scale down the pods before the deployment (for example, via Jenkins, CircleCI, GitLab CI, etc.):
kubectl scale --replicas=0 -n {namespace} deployment/{deployment-name}
helm install .....
The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a higher priority than the migrations hook (in Helm, a lower helm.sh/hook-weight runs first), so it runs before the migration Job, and have that hook do the kubectl scale-down.
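A sketch of what that extra hook could look like (the names, namespace, and service account are placeholders, and it assumes the migration hook uses the default weight of 0):
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-before-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-10"          # lower weight than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: scale-down-sa   # assumed to exist with RBAC rights to scale the Deployment
      containers:
        - name: scale-down
          image: bitnami/kubectl:latest
          command:
            - kubectl
            - scale
            - deployment/myapp            # placeholder deployment name
            - --replicas=0
            - -n
            - myapp-namespace             # placeholder namespace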

No YAML Files in K8s Deployment

TLDR: My understanding from learning all about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with 0 YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not have any YAML file settings?
Longer:
I have a dockerized Spring app that I deployed to a Google Cloud cluster I created via the UI.
There were 0 YAML files in it, so my expectation was that deploying with kubectl would fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or had CI push it) to the cluster and from there you just did a few clicks, right? You can do the same on an OpenShift environment, but in the background a YAML file gets generated. If you edit the pod in your UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the deployment YAML file which you created from the UI by doing
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml #This will output your yaml file into name.yaml file
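If you would rather write the manifest yourself than export it, a trimmed-down declarative Deployment for a containerized Spring app might look like this (the name, image, and port are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app                                     # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-spring-app
  template:
    metadata:
      labels:
        app: my-spring-app
    spec:
      containers:
        - name: my-spring-app
          image: gcr.io/my-project/my-spring-app:latest   # placeholder image
          ports:
            - containerPort: 8080
Applying it with kubectl apply -f deployment.yaml gives you the same kind of Deployment the UI generated for you, but now under source control.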
You can run your containers/pods using plain commands.
kubectl run podname --image=name
As you said, 0 YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, and someone else will be able to create the same infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml

Delete AKS deployment's running pod on regular basis (Job)

I have been struggling for some time to figure out how to accomplish the following:
I want to delete a running pod on an Azure Kubernetes Service cluster on a scheduled basis, so that it respawns from the deployment. This is required so that the application re-reads configuration files stored on shared storage and shared with another application.
I have found out that Kubernetes Jobs might be handy for accomplishing this, but there is a catch.
I can't figure out how to select the pod corresponding to my deployment, since Kubernetes appends a random string to the deployment name, i.e.
deployment-name-546fcbf44f-wckh4
Using field selectors to get my pod doesn't succeed, as there is no LIKE-style operator:
kubectl get pods --field-selector metadata.name=deployment-name
No resources found
Looking at the official docs, one way of doing this would be like so:
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
You'd need to modify the job-name=pi selector to match the labels on your pods.
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job
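For a Deployment rather than a Job, the same idea works with the Deployment's own labels instead of job-name (a sketch; app=deployment-name and mynamespace are assumptions, so take the actual label from your Deployment's .spec.selector):
# List the pods that belong to the Deployment via its label selector
kubectl get pods -n mynamespace -l app=deployment-name -o jsonpath='{.items[*].metadata.name}'
# Delete them so the Deployment recreates fresh pods that re-read the configuration
kubectl delete pods -n mynamespace -l app=deployment-name
Either command can then be wrapped in a CronJob running a kubectl image with a service account that is allowed to delete pods, which gives you the scheduled restart you are after.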