Kubernetes - Handle cronjobs like crontab

I have a lot of cronjobs I need to set on Kubernetes.
I want a single file to manage them all and apply them to Kubernetes on deployment. If I remove a cron from that file, it should be removed from Kubernetes too.
Basically, I want to handle the crons like I'm handling them today on the machine (from a cron file that I would deploy): add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Is there a library or framework I can use, like Helm? Or any other solution?

I highly recommend using GitOps with Argo CD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea because it is hard to monitor your job results (CronJob results can be collected by the kube-state-metrics exporter).
The idea is to package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.), push them to Git, and let Argo CD make sure your configuration is deployed correctly.
The advantages of GitOps include:
centralized configuration
versioned configuration
Git authentication & authorization
traceability
multi-cluster deployment with Argo CD
automated deployment & sync
...
GitOps is not difficult and is the modern way to manage Kubernetes configuration. Give it a try.
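As a rough sketch of what that looks like in practice (the repository URL, paths, and names below are assumptions for illustration, not part of the original answer), an Argo CD Application with automated sync and pruning keeps the cluster's CronJobs in step with the files in Git:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cronjobs
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-config.git   # hypothetical repo holding the CronJob manifests
    targetRevision: main
    path: cronjobs                                        # directory containing one YAML file per cron
  destination:
    server: https://kubernetes.default.svc
    namespace: cron
  syncPolicy:
    automated:
      prune: true      # deleting a CronJob file from Git removes it from the cluster
      selfHeal: true

With prune enabled, removing a cron's manifest from the repository removes the corresponding CronJob on the next sync, which is the crontab-file workflow the question asks for.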

I used Helm to do this. I built a template that goes over all the crons, which I insert as values into the Helm template (very similar to crontab, but more structured) - see the example.
Then, all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also add a namespace to your CronJobs to make them more encapsulated.
Here is a very good and easy-to-understand example I used, and its Git repo.
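For reference, a minimal sketch of such a chart (the values layout, names, and images below are assumptions, not the exact example linked above): values.yaml holds the cron definitions, and a single template renders one CronJob per entry.

# values.yaml
crons:
  - name: cleanup
    schedule: "0 3 * * *"
    image: example/cleanup:1.0.0
    command: ["/bin/sh", "-c", "run-cleanup"]
  - name: report
    schedule: "*/30 * * * *"
    image: example/report:1.0.0
    command: ["/bin/sh", "-c", "generate-report"]

# templates/cronjobs.yaml
{{- range .Values.crons }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
spec:
  schedule: {{ .schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ .name }}
              image: {{ .image }}
              command: {{ toJson .command }}
{{- end }}

Running helm upgrade --install with an updated values file then adds, changes, or deletes CronJobs so they match the list, since removed entries drop out of the release.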

Related

Will helmfile sync redeploy all existing helm charts?

I have a few services running on a Kubernetes cluster, and I use a Helm chart in which I placed all my services. However, I was asked to move the Helm charts into a Helmfile.
If I use
helmfile import myrepo/mychart
helmfile sync
will it redeploy and replace the existing running pods, or will it only deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
Keep a directory of chart value files and maintain changes in version control.
Apply CI/CD to configuration changes.
Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
As @DavidMaze suggested, run the helmfile diff command first to see what would change, and then run helmfile sync to apply the changes.
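To make that concrete, here is a minimal helmfile.yaml sketch (the repository, chart paths, release names, and value files are placeholders, not from the original question). helmfile diff previews the changes; helmfile sync then runs helm upgrade --install for the listed releases, and Kubernetes only restarts pods whose rendered manifests actually changed, so untouched services keep their existing pods.

# helmfile.yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: my-api                 # existing release; unchanged values leave its pods alone
    namespace: default
    chart: ./charts/my-api       # local chart path (placeholder)
    values:
      - ./values/my-api.yaml
  - name: redis                  # newly added release; gets installed on the next sync
    namespace: default
    chart: bitnami/redis

helmfile apply is also worth knowing: it runs the diff first and only syncs releases that actually differ.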

How to start/trigger a job when a new version of deployment is released (image updated) on Kubernetes?

I have two environments (clusters): Production and Staging with two independent databases. They are both deployed on Kubernetes and production doesn't have a fixed schedule for new deployments but it happens on a weekly basis (roughly).
I would like to sync the production database with the staging database every time a new release is deployed to production (i.e. the Kubernetes Deployment is updated with a new image).
Is there a way to set up a Job/CronJob to be triggered every time this event happens?
The deployments are done using ArgoCD to pull the changes in the deployment manifest from a github repository.
I don't think this functionality is inherent to Kubernetes; you are asking about something custom that can be implemented in a variety of ways (depending on your tool stack).
e.g.
If you are using Helm to install to production, you can use a post-install (or post-upgrade) hook that triggers a Job that does what you want (see the sketch after this list).
Perhaps ArgoCD has some post-sync hook functionality that can also create a Job resource doing what you want.
I think you can also use a tool like Kyverno and write a policy that generates a Kubernetes Job whenever a matching resource is created or updated.
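As an illustration of the Helm hook option (the Job name, image, and command are placeholders), a Job annotated as a post-install/post-upgrade hook runs once on every install or upgrade of the release:

apiVersion: batch/v1
kind: Job
metadata:
  name: sync-staging-db
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-sync
          image: example/db-sync:latest                          # placeholder image
          command: ["/bin/sh", "-c", "sync-prod-to-staging"]     # placeholder command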
This is exactly the kind of case Argo Events is for.
https://argoproj.github.io/argo-events/
There are many ways to implement this; which one is best depends on your exact situation.
E.g. if you can use a Git tag event's webhook, you could go with an HTTP trigger to initiate a Job or an Argo Workflow.
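For a very rough sketch of that route (field names follow the webhook EventSource and Kubernetes-object trigger examples in the Argo Events documentation and may differ between versions; the webhook name, service account, and Job spec are all made up):

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: release-webhook
spec:
  webhook:
    release:                 # exposes POST /release on port 12000
      port: "12000"
      endpoint: /release
      method: POST
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: db-sync-on-release
spec:
  template:
    serviceAccountName: job-creator-sa   # needs RBAC permission to create Jobs
  dependencies:
    - name: release-dep
      eventSourceName: release-webhook
      eventName: release
  triggers:
    - template:
        name: db-sync-job
        k8s:
          operation: create
          source:
            resource:
              apiVersion: batch/v1
              kind: Job
              metadata:
                generateName: db-sync-
              spec:
                template:
                  spec:
                    restartPolicy: Never
                    containers:
                      - name: db-sync
                        image: example/db-sync:latest   # placeholder image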

Best practices for a helm kubernetes deployment pipeline in a development environment?

These are the best practices for a Helm deployment that I have figured out so far:
Use versioned images, because deploying via the latest tag is not sufficient, as this may not trigger a pod recreate (see When does kubernetes helm trigger a pod recreate?).
Use hashed ConfigMap metadata to restart pods on ConfigMap changes, as sketched below
(see https://helm.sh/docs/howto/charts_tips_and_tricks/).
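The second point is the checksum-annotation trick from the linked tips & tricks page; in a chart's Deployment template it looks roughly like this (the names and image are placeholders):

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
      annotations:
        # any change in the rendered ConfigMap changes this hash and forces a rolling restart
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: example/app:1.0.0   # placeholder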
In a development environment new images are created often. Because I don't want to clutter my container registry, I'd prefer using latest tags.
The only solution I can think of is to use versioned images plus a cleanup job that removes old images from the registry. But this is quite complicated.
So what are your best practices for Helm deployments in a development environment?
Indeed, using :latest will mean that your deployments will be mutable.
AWS ECR lifecycle policies allow you to keep a limited number of the most recent images matching a certain tag prefix or regex. So you can use a dev- prefix for your non-production images (for example, builds triggered outside of the master branch) and keep only the 10 most recent of them.

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry run. Using kubectl --dry-run or kubectl diff doesn't really work; it's only a client-side diff. Server-side diff functions are currently in alpha.
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scaleable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question, so I'll answer with an opinion. If you like managing configuration this way, you can try some of these tools:
If you want to keep using your existing YAML files (configurations) and add something at a higher level, try Kustomize (see the kustomization.yaml sketch after this list).
If you want to manage Kubernetes configurations using Jsonnet, take a look at Ksonnet. Keep in mind that Ksonnet will not be supported in the future.
If you just want to run helm upgrade in an automated way, there is no tool for that yet. You will have to build something yourself to orchestrate everything; for example, we ended up creating an in-house tool that does this.
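To illustrate the Kustomize option (the file names, namespace, and image are placeholders), a kustomization.yaml layered on top of your existing YAML files gives you declarative, Git-trackable overrides and works with kubectl apply -k and kubectl diff -k:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - deployment.yaml        # your existing, unmodified manifests
  - service.yaml
  - ingress.yaml
commonLabels:
  env: staging
images:
  - name: example/app      # image name as written in deployment.yaml (placeholder)
    newTag: v1.2.3         # overridden per environment

Note that, unlike Terraform, deletions still need to be handled explicitly (for example with kubectl apply --prune or a GitOps controller).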

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm-client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
.git/
k8s/
postgres/
Deployment.yaml
Service.yaml
Secret.mustache.yaml # Needs to be rendered by the dev before use
express/
Deployment.yaml
Service.yaml
nginx/
Deployment.yaml
Service.yaml
updates/
0.1__0.2/
Job.yaml # postgres schema migration
update.sh # k8s API server scripts to patch/replace existing k8s objects, and runs the state change job
The usual git stuff can apply now. Every time I make a change, I make changes to the spec files, test them, write the update scripts to help move from the last version to this current version, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Checkout this answer to see details on why it needs tiller in Helm v2 and why it's removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is the right way (performing releases from a Git repo). However, if you want to push it further and make it more standard, you could aim for a few more goals:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize. Both are purely client-side.
Avoid a custom release process by replacing update.sh with standard tools like kubectl apply or helm install.
Drive change delivery from Git tags/branches by using a CI/CD engine like Argo CD, Travis CI, or GitHub Actions (see the workflow sketch below).
Use a branching strategy so that you can try changes in a test/staging environment before delivering them to production.
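As one hedged example of tag-driven delivery (the workflow file, secret name, kustomize directory, and tag pattern are all assumptions, not from the original answer), a minimal GitHub Actions workflow could look like:

# .github/workflows/deploy.yaml
name: deploy-on-tag
on:
  push:
    tags:
      - "v*"
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-kubectl@v3
      - name: Configure cluster access
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" > ~/.kube/config    # hypothetical secret holding a kubeconfig
      - name: Apply manifests
        run: kubectl apply -k k8s/                              # assumes a kustomize directory of manifests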