Best practices for a Helm Kubernetes deployment pipeline in a development environment?

These are the best practices for a Helm deployment that I have figured out so far:
Use versioned images, because deploying via the latest tag is not sufficient, as this may not trigger a pod recreate (see: When does kubernetes helm trigger a pod recreate?).
Use hashed ConfigMap metadata (a checksum annotation) to restart pods on ConfigMap changes, as sketched below
(see https://helm.sh/docs/howto/charts_tips_and_tricks/)
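A minimal sketch of that checksum trick from the Helm docs linked above, assuming the chart has a templates/configmap.yaml (the annotation name is illustrative):

    kind: Deployment
    spec:
      template:
        metadata:
          annotations:
            # Re-rendering the chart changes this hash whenever the ConfigMap changes,
            # which changes the pod template and therefore forces a rollout.
            checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}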
In a development environment, new images are created often. Because I don't want to clutter my container registry, I'd prefer using latest tags.
The only solution I can think of is to use versioned images plus a cleanup job that removes old images from the registry. But this is quite complicated.
So what are your best practices for Helm deployments in a development environment?

Indeed, using :latest means that your deployments will be mutable.
AWS ECR allows you to keep only a limited number of the most recent images matching a given tag pattern, via lifecycle policies. So you can use a dev- prefix for your non-production images (for example, those built outside of the master branch) and keep only the 10 latest of them, as in the sketch below.
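A rough sketch of such a lifecycle policy; the dev- prefix and the count of 10 are illustrative values:

    # ECR lifecycle policies are submitted as a JSON document;
    # shown here as the equivalent YAML for readability.
    rules:
      - rulePriority: 1
        description: Keep only the 10 most recent dev- images
        selection:
          tagStatus: tagged
          tagPrefixList: ["dev-"]        # match images tagged dev-*
          countType: imageCountMoreThan
          countNumber: 10                # expire everything beyond the newest 10
        action:
          type: expire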

Related

Kubernetes - Handle cronjobs like crontab

I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons like I'm handling them today on the machine (from a cron file that I would deploy): add, remove, and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Library or framework I can use like helm? Or any other solution.
I highly recommend using GitOps with Argo CD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea because it is hard to monitor your job results (CronJob results can be collected via the kube-state-metrics exporter).
The idea is: package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.) -> put them in git -> Argo CD makes sure your configuration is deployed correctly (see the sketch at the end of this answer).
The advantages of GitOps include:
centralized configuration
versioned configuration
git authentication & authorization
traceability
multi-cluster deployment with Argo CD
automated deployment & sync
...
GitOps is not difficult and is the modern way to do Kubernetes configuration management. Give it a try.
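A minimal sketch of what such an Argo CD Application could look like; the repository URL, path, and namespaces are made up for illustration:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cronjobs
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/ops/k8s-manifests.git   # hypothetical config repo
        targetRevision: main
        path: cronjobs                 # directory holding the CronJob manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: batch-jobs
      syncPolicy:
        automated:
          prune: true      # a CronJob removed from git is also removed from the cluster
          selfHeal: true   # manual drift from git is reverted automatically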
I used Helm to do this. I built a template that iterates over all crons, which I pass in as values to the Helm chart (very similar to a crontab, but more structured) - see the example below.
Then, all I need to do is run helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also put your CronJobs in their own namespace to keep them more encapsulated.
Here is a very good and easy-to-understand example I used. And its git repo
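The approach looks roughly like this (a sketch; the values layout, names, and images are chosen for illustration): a values file listing the crons, and a single chart template that ranges over them.

    # values.yaml (hypothetical)
    crons:
      - name: db-cleanup
        schedule: "0 3 * * *"
        image: myapp:1.2.3
        command: ["bundle", "exec", "rake", "cleanup"]

    # templates/cronjobs.yaml
    {{- range .Values.crons }}
    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: {{ .name }}
    spec:
      schedule: {{ .schedule | quote }}
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: {{ .name }}
                  image: {{ .image }}
                  command: {{ .command | toJson }}
    {{- end }}

Adding or removing an entry in the values file and running helm upgrade then creates or deletes the corresponding CronJob.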

How to clone a kubernetes deployment?

I need to clone my deployment, including the volume. I couldn't find any resources on this question except for cloning a volume using a CSI driver, as explained in https://kubernetes-csi.github.io/docs/volume-cloning.html#overview
Is there a way to clone a deployment other than renaming the deployment and deploying it again through kubectl?
Is there a way to clone a deployment other than renaming the deployment and deploying it again through the kubectl?
Unfortunately, no. At the time of this writing there is no tool that will clone your Deployment for you. So you will have to use the option already mentioned in the question, or deploy it again using a Helm chart.
With Helm you can pass custom parameters such as --generate-name, or edit the values.yaml content, to quickly deploy a similar deployment.
Also worth mentioning here is Velero, a tool for backing up Kubernetes resources and persistent volumes.
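For the volume part, the CSI cloning feature linked in the question lets a new PersistentVolumeClaim use an existing claim as its dataSource. A minimal sketch, where names, storage class, and size are hypothetical and the storage class must be backed by a CSI driver that supports cloning:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cloned-data                # new claim for the cloned deployment
    spec:
      storageClassName: csi-standard   # hypothetical CSI-backed storage class
      dataSource:
        name: original-data            # existing PVC to clone (same namespace)
        kind: PersistentVolumeClaim
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi                # must be at least the size of the source PVC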

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
    .git/
    k8s/
        postgres/
            Deployment.yaml
            Service.yaml
            Secret.mustache.yaml    # Needs to be rendered by the dev before use
        express/
            Deployment.yaml
            Service.yaml
        nginx/
            Deployment.yaml
            Service.yaml
    updates/
        0.1__0.2/
            Job.yaml                # postgres schema migration
            update.sh               # k8s API server scripts that patch/replace existing k8s objects and run the state change job
The usual git workflow applies now. Every time I make a change, I update the spec files, test them, write the update scripts that help move from the last version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping YAML files in git together with the deployed application for now, I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references the container images built for our component releases).
In other words, the Helm package definitions and its dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Note: Tiller has been removed in Helm v3. Check out this FAQ entry for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
Judged by the idea of GitOps, what you did is a right approach (performing releases from a git repo). However, if you want to push it further and make it more standard, you could plan for more goals, including:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are purely client-side (see the Kustomize sketch after this list).
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install.
Drive change delivery from git tags/branches by using a CI/CD engine like Argo CD, Travis CI, or GitHub Actions.
Use a branching strategy so that you can try changes in a test/staging environment before delivering them to production.
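As a sketch of the Kustomize option (the file names follow the layout from the question; the image override is illustrative), a single kustomization.yaml can replace most of the hand-written patch scripts:

    # k8s/kustomization.yaml (hypothetical)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - postgres/Deployment.yaml
      - postgres/Service.yaml
      - express/Deployment.yaml
      - express/Service.yaml
      - nginx/Deployment.yaml
      - nginx/Service.yaml
    images:
      - name: express        # image name as referenced in the manifests
        newTag: "0.2"        # bump the tag per release instead of editing manifests by hand

The whole set is then applied with kubectl apply -k k8s/, so no custom update.sh is needed for ordinary rollouts.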

Best CD strategy for Kubernetes Deployments

Our current CI deployment phase works like this:
Build the containers.
Tag the images as "latest" and <commit hash>.
Push images to repository.
Invoke rolling update on appropriate RC(s).
This has been working great for RC based deployments, but now that the Deployment object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases.
What I'm having trouble with is finding a sane way to automate the update of a Deployment with the CI workflow. What I've been experimenting with is splitting up the git repo's and doing something like:
[App Build] Build the containers.
[App Build] Tag the images as "latest" and <commit hash>.
[App Build] Push images to repository.
[App Build] Invoke build of the app's Deployment repo, passing through the current commit hash.
[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash e.g. image: app-%%COMMIT_HASH%%)
[Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s).
Surely though there's a better way to handle this. It would be great if the Deployment monitored for hash changes of the image's "latest" tag...maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of Deployment would be appreciated :)
A Deployment only watches for pod template (.spec.template) changes. If the image name didn't change, the Deployment won't roll out an update. You can trigger the rolling update (with Deployments) by changing the pod template, for example by labeling it with the commit hash, as in the sketch below. Also, you'll need to set .spec.template.spec.containers[].imagePullPolicy to Always (it already defaults to Always when the :latest tag is specified and no policy is set), otherwise the cached image will be reused.
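A minimal sketch of that pod-template change; the registry, names, and hash value are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      selector:
        matchLabels:
          app: app
      template:
        metadata:
          labels:
            app: app
            commit: "3f5a2c1"          # CI rewrites this label on every build, changing .spec.template
        spec:
          containers:
            - name: app
              image: registry.example.com/app:latest
              imagePullPolicy: Always  # re-pull :latest instead of reusing a cached image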
We've been practising what we call GitOps for a while now.
What we have is a reconciliation operator, which connects a cluster to a configuration repository and makes sure that whatever Kubernetes resources (including CRDs) it finds in that repository are applied to the cluster. It allows for ad-hoc deployment, but any ad-hoc changes to something that is defined in git will get undone in the next reconciliation cycle.
The operator is also able to watch any image registry for new tags and update the image attributes of Deployment, DaemonSet, and StatefulSet objects. It makes the change in git first, then applies it to the cluster.
So what you need to do in CI is this:
Build the containers.
Tag the images as <commit_hash>.
Push images to repository.
The agent will take care of the rest for you, as long as you've connected it to the right config repo where the app's Deployment object can be found.
For a high-level overview, see:
Google Cloud Platform and Kubernetes
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.

Kubernetes Deployments

While working on creating a platform that will do microservice deployments using Kubernetes, we want to take a dependency on the Kubernetes Deployment object. However, we saw that the documentation at http://kubernetes.io/v1.1/docs/user-guide/deployments.html says the following: "Note that Deployment objects effectively have API version v1alpha1. Alpha objects may change or even be discontinued in future software releases."
I am wondering whether we should use the Deployment concept to do our deployments (essentially rolling updates), or, since it could be discontinued or changed, whether we should reimplement the same concepts ourselves: create an RC with new labels, create new pods with labels different from both the old and the new RC, scale down the old RC by slowly removing pods from it, and slowly add new pods into the new RC.
What is the plan or what are the proposed changes for Deployment, or is that concept going away in favor of a better one?
Also, I am wondering why OpenShift did not use the Deployment object - was it not ready at that time?
OpenShift's deployment object preceded the upstream Kube object (it was feature complete in the March 2015 time frame). Once Kube Deployments support the remaining features of OpenShift deployments, we'll automatically migrate them. Some things OpenShift deployments support that are not upstream yet:
Automatic deployment when Docker registry tags change
Custom deployments (run your own deployment logic in a pod)
Deployment hooks - execute "bundle exec rake db:migrate" before or after deploying your app
Recreate deployment strategy
Ability to pause or "hold" a deployment so it does not automatically run (so admins can choose to deploy).
Ability for deployments to "fail" and be recorded (so that end users know that the code they pushed failed to start).
It will take time to add those remaining options.
As of now, the Deployment concept has been moved to "v1beta1". The concept will most probably be continued, because it is a declarative approach (vs. the imperative approach with the older replication controller etc.).
I can't say anything about OpenShift, but on GKE it works pretty well for me!
Deployment is planned to graduate to beta in the 1.2 release. See the related issue #15313 for the changes to be made. We will also have new kubectl commands for rolling updates that use Deployment; see issue #17168 and the proposal.