How do I version control a kubernetes application? - kubernetes

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml  # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml    # postgres schema migration
      update.sh   # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual git workflow applies now. Every time I make a change, I update the spec files, test them, write the update scripts that move from the last version to the current one, and then commit and tag.
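For context, a hypothetical sketch of what an updates/0.1__0.2/update.sh could contain; the object names and relative paths are made up for illustration, and the real script talks to the k8s API server as described above:

#!/bin/sh
# Hypothetical sketch of updates/0.1__0.2/update.sh -- names and paths are illustrative.
set -e

# Apply the updated specs for the services that changed in this release.
kubectl apply -f ../../k8s/postgres/Deployment.yaml
kubectl apply -f ../../k8s/express/Deployment.yaml
kubectl apply -f ../../k8s/nginx/Deployment.yaml

# Run the schema migration job for this release and wait for it to finish.
kubectl apply -f Job.yaml
kubectl wait --for=condition=complete job/schema-migration-0-2 --timeout=300s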
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?

It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least based on my team's quick glance at it.
We'll stick with keeping YAML files in git alongside the deployed application for now, I guess.

We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which reference container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.

Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is the right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you can plan additional goals, including:
Choose a configuration management system that goes beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize. Both are purely client-side.
Avoid a custom release process by replacing update.sh with standard tools such as kubectl apply or helm install (see the sketch after this list).
Drive change delivery from git tags/branches using a CI/CD engine such as ArgoCD, Travis CI or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging environments before delivering them to production.
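As a rough sketch of the second point: once the specs are grouped under a Kustomize overlay (the directory and release names below are hypothetical), the per-release update.sh can shrink to a single declarative command; the Helm equivalent is shown as well.

# Apply a hypothetical Kustomize overlay for one environment
kubectl apply -k k8s/overlays/production

# Or, if the app is packaged as a Helm chart instead
helm upgrade --install product-release ./chart -f values-production.yaml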

Related

helm template - Helmfile, which way to go?

I am involved in the CI/CD structures of several projects that deploy to Kubernetes following GitOps principles.
Some of the projects started before I joined them, so I could not have much influence on those; in others I was involved from the start but was not really happy with the end result, so I went searching for what an ideal delivery pipeline for Kubernetes should look like.
After reading several people's proposals and designs, I arrived at a solution like the following.
I try to use best practices on which there is consensus across several sources, along with the principles of the 12 Factor App.
It starts with a Git Repository per Service principle and a Service Pipeline that produces an executable and a Docker image, pushes the image to a Docker registry, builds a Helm chart containing the Docker image id and the configuration that is valid for all environments, and pushes that chart to a Helm repository.
So with every commit to a Service Git Repository, the Service Pipeline triggers and produces a new Docker image and Helm chart (with one convention: the Helm chart version is only increased if there is an actual change to the structure of the Helm templates; placing a new Docker image id into the Helm chart alone does not bump the chart version).
A commit to the Service Git Repository also triggers the Environment Pipeline for the Dev environment (this is oversimplified to keep the size of the diagram in check; for feature and bugfix branches the Environment Pipeline can also create additional namespaces under the Dev k8s cluster).
At this point comes one big change from my previous production implementations of similar pipelines, and the reason for this question. In those implementations, the Environment Pipeline would get all Service Helm charts from the Helm repository via a Helm umbrella chart (unlike the diagram below) and execute 'helm upgrade --install appXXX -n dev -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml', which works, but the disadvantage is auditing.
We can identify what we deployed at a later point in time by inspecting the k8s cluster, but it would be painful. So my idea is to follow GitOps principles (and many sources agree with me) and render the manifests from the Helm umbrella chart with 'helm template' during the Environment Pipeline and commit them to an Environment Repository; that way, first, they can be much more easily audited, and secondly, I can deploy them with a Continuous Deployment tool like ArgoCD.
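For illustration only, the rendering step of such an Environment Pipeline could look roughly like this; the chart location, the value file names and the layout of the Environment Repository are assumptions, not a prescribed implementation:

# Render the umbrella chart into plain manifests and commit them to the
# environment repository that ArgoCD watches (all names are hypothetical).
helm template appXXX ./umbrella-chart -n dev \
  -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml \
  -f values-s4.yaml -f values-s5.yaml \
  --include-crds > ../env-repo/dev/manifests.yaml

cd ../env-repo
git add dev/manifests.yaml
git commit -m "Render dev manifests for appXXX"
git push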
Now that I have explained the preconditions, we arrive at my actual question. I also read, from the same sources I mentioned, that 'helmfile' is an awesome tool; reading the documentation, it has really nice features to prevent boilerplate code. But considering that I plan to synchronise the state in the Environment Git Repository with ArgoCD, I will not use 'helmfile sync', and 'helm template' does basically what 'helmfile template' does. Is using 'helmfile' in this workflow overkill? Additionally, I think the concept of Helmfile's 'environment.yaml' collides with what I try to achieve with the Environment Git Repository.
And secondly, if I do decide to use 'helmfile', mainly because of the awesome extra templating functions that prevent boilerplate, how should I integrate it with ArgoCD? It seems it could previously be integrated via...
data:
  configManagementPlugins: |
    - name: helmfile
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helmfile -q template --include-crds --skip-tests"]
but it seems that 'configManagementPlugins' is now deprecated, so how should I integrate it with ArgoCD?
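Not an authoritative answer, but my understanding is that the replacement for configManagementPlugins is the sidecar-based Config Management Plugin: the plugin is described in its own small manifest and mounted into an argocd-repo-server sidecar container that has helmfile installed. A rough sketch, to be checked against the current ArgoCD docs for the exact schema and mount path:

# plugin.yaml, mounted into a repo-server sidecar that has helmfile on its PATH
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helmfile
spec:
  version: v1.0
  generate:
    command: ["/bin/sh", "-c"]
    args: ["helmfile -q template --include-crds --skip-tests"]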
Thanks for any answers.

Kubernetes - Handle cronjobs like crontab

I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons like I'm handling them today on the machine (from a cron file that I would deploy): add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Library or framework I can use like helm? Or any other solution.
I highly recommend GitOps with ArgoCD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea because it is hard to monitor your job results (CronJob results can be collected via the kube-state-metrics exporter).
The idea is to package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.), put them in git, and let ArgoCD make sure your configuration is deployed correctly (a minimal Application manifest is sketched below).
The advantages of GitOps include:
centralized configuration
versioned configuration
git authentication & authorization
traceability
multi-cluster deployment with ArgoCD
automated deployment & sync
...
GitOps is not difficult, and it is the modern way to manage Kubernetes configuration. Give it a try.
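As a minimal sketch of that flow, an ArgoCD Application pointing at the git path that holds the cron job manifests could look like this; the repo URL, paths and namespaces are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-cronjobs
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/config-repo.git   # placeholder
    targetRevision: main
    path: cronjobs
  destination:
    server: https://kubernetes.default.svc
    namespace: jobs
  syncPolicy:
    automated:
      prune: true      # remove objects that were deleted from git
      selfHeal: true   # undo ad-hoc changes made directly in the cluster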
I used Helm to do this. I built a template that goes over all the crons, which I insert as values into the Helm chart (very similar to crontab but more structured) - see the example.
Then, all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also add a namespace to your cronjobs to make them more encapsulated.
Here is a very good and easy-to-understand example I used, along with its git repo.
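Not the linked example itself, but a rough sketch of the same pattern: a values file that lists the crons, and a chart template that ranges over them to emit one CronJob each. The names, images and schedules below are made up.

# values.yaml (hypothetical)
crons:
  - name: cleanup
    schedule: "0 3 * * *"
    image: example/cleanup:1.0
  - name: report
    schedule: "*/30 * * * *"
    image: example/report:1.0

# templates/cronjobs.yaml
{{- range .Values.crons }}
---
apiVersion: batch/v1   # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: {{ .name }}
spec:
  schedule: {{ .schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ .name }}
              image: {{ .image }}
{{- end }}

A 'helm upgrade --install crons ./chart -f values.yaml' then adds, updates or removes CronJobs to match the values file, which gives the crontab-like behaviour described above.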

Kubernetes "packaging" for environment do update and delete all in one batch for branch based environments?

Using Kubernetes, we make use of Helm and Kustomize to bundle our application. This helps with consistently updating something like a single application, but gets kind of bloated for a whole "environment" or cluster.
ArgoCD seems like a good solution for updating a whole cluster, as you can "mirror" your git state to the cluster. This works even when dropping resources or updating an existing complex deployment.
Now I want to build branch-based ephemeral environments, and ArgoCD seems a bit bloated for this, as for every branch environment I would have to commit to the git repository and add something.
The idea is that every branch-based environment lives in its own namespace. I am looking for a tool that manages this namespace and is able to update and drop the whole thing.
What is a good solution for this problem?

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete (a rough sketch of such a makefile follows this list)
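Purely to make the shape of that workflow concrete, the makefile lines I mean look roughly like this; the chart and file names are placeholders, and helm template arguments differ slightly between Helm 2 and 3:

# Hypothetical Makefile excerpt (recipe lines must be tab-indented)
deploy-ingress:
	kubectl apply -f k8s/ingress.yaml

deploy-myapp:
	helm template ./charts/myapp -f values.yaml | kubectl apply -f -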
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry-run. Using kubectl --dry-run or kubectl diff doesn't really work; it's only a client-side diff. Server-side diff functions are currently in alpha (the commands in question are sketched after this list)
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scalable.
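For reference, these are the commands behind the first point, as they existed around the time of the question; the server-side flag has since been renamed to --dry-run=server in newer kubectl releases:

kubectl apply --dry-run -f k8s/         # client-side only, does not consult the cluster
kubectl diff -f k8s/                    # diff the manifests against what is in the cluster
kubectl apply --server-dry-run -f k8s/  # server-side dry run, alpha at the time of writing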
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question, so I'll answer with an opinion. If you want to manage configuration, you can try some of these tools:
If you want to use your existing YAML files (configurations) and manage them from something at a higher level, you can try kustomize (a minimal kustomization.yaml is sketched after this list).
If you want to manage Kubernetes configurations using Jsonnet, you should take a look at Ksonnet. Keep in mind that Ksonnet will not be supported in the future.
If you just want to do a helm update in an automated way, there is no tool for that yet; you will have to build something yourself to orchestrate everything. For example, we ended up creating an in-house tool that does this.
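To give a feel for the first option, a minimal kustomization.yaml layered over the existing YAML files might look like this; the file paths and image names are placeholders:

# overlays/production/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../../base            # the existing plain YAML manifests
patchesStrategicMerge:
  - replica-count.yaml    # environment-specific tweaks
images:
  - name: myapp
    newTag: "1.4.2"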

Best CD strategy for Kubernetes Deployments

Our current CI deployment phase works like this:
Build the containers.
Tag the images as "latest" and < commit hash >.
Push images to repository.
Invoke rolling update on appropriate RC(s).
This has been working great for RC based deployments, but now that the Deployment object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases.
What I'm having trouble with is finding a sane way to automate the update of a Deployment with the CI workflow. What I've been experimenting with is splitting up the git repo's and doing something like:
[App Build] Build the containers.
[App Build] Tag the images as "latest" and <commit hash>.
[App Build] Push images to repository.
[App Build] Invoke build of the app's Deployment repo, passing through the current commit hash.
[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash, e.g. image: app-%%COMMIT_HASH%%; see the sketch after this list)
[Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s).
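For illustration, the interpolate-and-apply steps can be as small as this; the file and variable names are placeholders:

# Substitute the commit hash passed in from the app build into the manifest
# template, then apply the result to the cluster.
sed "s/%%COMMIT_HASH%%/${COMMIT_HASH}/g" deployment.template.yaml > deployment.yaml
kubectl apply -f deployment.yaml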
Surely there's a better way to handle this, though. It would be great if the Deployment monitored for hash changes of the image's "latest" tag... maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of Deployments would be appreciated :)
The Deployment only monitors for pod template (.spec.template) changes. If the image name didn't change, the Deployment won't do an update. You can trigger a rolling update (with Deployments) by changing the pod template, for example by labelling it with the commit hash. Also, you'll need to set .spec.template.spec.containers.imagePullPolicy to Always (it's set to Always by default if the :latest tag is specified and cannot be updated), otherwise the image will be reused.
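For example, either of the following changes the pod template and therefore triggers a rollout; the deployment, container and registry names are placeholders:

# Point the Deployment at the newly pushed image tag...
kubectl set image deployment/app app=registry.example.com/app:<commit_hash>

# ...or stamp the pod template with the commit hash as a label.
kubectl patch deployment app -p \
  '{"spec":{"template":{"metadata":{"labels":{"commit":"<commit_hash>"}}}}}'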
We've been practising what we call GitOps for a while now.
What we have is a reconciliation operator, which connects a cluster to a configuration repository and makes sure that whatever Kubernetes resources (including CRDs) it finds in that repository are applied to the cluster. It allows for ad-hoc deployments, but any ad-hoc changes to something that is defined in git will get undone in the next reconciliation cycle.
The operator is also able to watch any image registry for new tags and update the image attributes of Deployment, DaemonSet and StatefulSet objects. It makes the change in git first, then applies it to the cluster.
So what you need to do in CI is this:
Build the containers.
Tag the images as <commit_hash>.
Push images to repository.
The agent will take care of the rest for you, as long as you've connected it to the right config repo where the app's Deployment object can be found.
For a high-level overview, see:
Google Cloud Platform and Kubernetes
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.