Kubernetes config editing

Are there any CLI tools or libraries that allow updating container images (and other parameters) in Kubernetes YAML/JSON configuration files?
For example, I have this YAML:
apiVersion: apps/v1
kind: Deployment
<...>
spec:
  template:
    spec:
      containers:
      - name: dmp-reports
        image: example.com/my-image:v1
<...>
And I want to automatically update the image for this deployment in this file (basically, this is necessary for the CI/CD system).

We have the same issue on the Jenkins X project, where we have many git repositories; as we change things like libraries or base docker images, we need to change lots of versions in pom.xml, package.json, Dockerfiles, helm charts etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
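From memory, the step in that script looks roughly like this (a sketch; exact UpdateBot flags may differ between versions, and the image name and version here are illustrative):

# Ask UpdateBot to open pull requests on downstream repositories,
# bumping every reference to this docker image to the new version
updatebot push-version --kind docker jenkinsxio/builder-base 1.2.3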

You can use sed in your CI/CD pipeline to update the file and deploy. In Jenkins that's a sh 'sed ...' step.
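For instance, a minimal sketch against the deployment from the question (assuming it is saved as deployment.yaml; GNU sed syntax):

# Swap the image tag in place; image name taken from the question above
sed -i 's|image: example.com/my-image:v1|image: example.com/my-image:v2|' deployment.yaml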
You can also use Helm: create templates, and you can specify the new image names (etc.) when deploying the release.
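A sketch, assuming your chart exposes image.repository and image.tag values:

# Deploy (or upgrade) the release, overriding the image from the CI job
helm upgrade --install my-release ./my-chart \
  --set image.repository=example.com/my-image \
  --set image.tag=v2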

helm template - Helmfile, which way to go?

I am involved in several projects' CI/CD structures for deployment to Kubernetes with GitOps principles.
Some of the projects started before I joined, so I could not have much influence on those; others I was involved in from the start, but I was not really happy with the end results. So I went searching for how an ideal delivery pipeline for Kubernetes should look.
After reading several people's proposals and designs, I arrived at a solution like the following.
I try to follow best practices on which there is consensus across several sources, as well as the principles of the 12 Factor App.
It starts with the Git-repository-per-service principle: a Service Pipeline produces an executable and a Docker image, pushes the image to a Docker registry, and builds a Helm chart with the Docker image id and the configuration that is valid for all environments, pushing that chart to a Helm repository.
So with every commit to the Service Git Repositories, the Service Pipelines trigger and produce new Docker images and Helm charts (with one convention: the Helm chart version is only increased if there is an actual change to the structure of the Helm templates; placing only a new Docker image id into the Helm chart will not bump the chart version).
A commit to a Service Git Repository also triggers the Environment Pipeline for the Dev environment (this is oversimplified to keep the size of the diagram in check; for feature and bugfix branches, the Environment Pipeline can also create additional namespaces under the Dev k8s cluster).
At this point there is one big change from my previous production implementations of similar pipelines, and it is the reason for this question. In those implementations, the Environment Pipeline would get all service Helm charts from the Helm repository with the help of a Helm umbrella chart (unlike the diagram below) and execute 'helm upgrade --install appXXX -n dev -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml', which works, but with the disadvantage of poor auditability.
We can identify what we deployed at a later point in time by inspecting the K8s cluster, but it would be painful. So my idea is to follow GitOps principles (and many sources agree with me): render the manifests from the Helm umbrella chart with 'helm template' during the Environment Pipeline and commit them to the Environment Repository. That way, first, they can be much more easily audited, and secondly, I can deploy them with a Continuous Deployment tool like ArgoCD.
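Roughly, the rendering step I have in mind would look like this (a sketch; the chart path and output directory are made up):

# Render the umbrella chart into plain manifests and commit them to the
# Environment Repository; ArgoCD then syncs that repository to the cluster
helm template appXXX ./umbrella-chart -n dev \
  -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml \
  -f values-s4.yaml -f values-s5.yaml > rendered/appXXX.yaml
git add rendered/appXXX.yaml
git commit -m "Render appXXX manifests for dev"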
Now that I have explained the preconditions, we arrive at my actual question. The same sources I mentioned also say 'helmfile' is an awesome tool; reading the documentation, it has really nice features to prevent boilerplate code. But considering that I am planning to synchronise the state in the Environment Git Repository with ArgoCD, I will not use 'helmfile sync', and 'helm template' does basically what 'helmfile template' does. Is using 'helmfile' in this workflow overkill? Additionally, I think Helmfile's 'environment.yaml' concept collides with what I try to achieve with the Environment Git Repository.
And secondly, if I do decide to use 'helmfile', mainly because of the awesome extra templating functions that prevent boilerplate, how should I integrate it with ArgoCD? It seems it could previously be integrated via...
data:
  configManagementPlugins: |
    - name: helmfile
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helmfile -q template --include-crds --skip-tests"]
but it seems 'configManagementPlugins' is now deprecated. How should I integrate it with ArgoCD?
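From what I can tell, the documented replacement is the sidecar-based plugin mechanism, where the same generate command would move into a plugin.yaml mounted into an argocd-repo-server sidecar container; my untested sketch:

# plugin.yaml, packaged into a ConfigMap and mounted into the sidecar
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helmfile
spec:
  generate:
    command: ["/bin/sh", "-c"]
    args: ["helmfile -q template --include-crds --skip-tests"]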
Thx for answers.

Apply Github hosted Kubernetes file with Helm

I am trying to set up a helmfile deployment for my local kubernetes cluster which is running using 'kind' (a lightweight alternative to minikube). I have charts set up for my app, which are all deploying correctly; however, I require an nginx-ingress controller. Luckily 'kind' provides one, which I am currently applying with the command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
It seems perverse that I should have everything else set up to deploy at the touch of a button, but still have to 'remember' (and also train my colleagues to remember...) to run this additional command.
I realise I could copy and paste and create my own version, but I would like to keep up to date with any changes made at source. Is it possible to create a chart that makes a reference to an external template?
I am looking at solutions using either helm or helmfile.
Your linked YAML file seems to have been generated from the ingress-nginx chart.
Subchart
You can include ingress-nginx as a subchart by adding it as a dependency to your own chart. In Helm 3, this is done with the dependencies field in Chart.yaml, e.g.:
apiVersion: v2
name: my-chart
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: ~4.0.6
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
This may be problematic, however, if you need to install multiple versions of your own chart in the same cluster. To handle this, you'd need to consider the implications of multiple Ingress controllers.
Separate chart
Ingress controllers are capable of handling ingresses from various releases across multiple namespaces. Therefore, I would recommend maintaining ingress-nginx separately from your own releases that depend on it. This would mean installing ingress-nginx like you already are or as a separate chart (guide).
If you go this route, there are tools that help make it easier for devs to take a hands-off approach for setting up their K8s environments. Some popular ones include Skaffold, DevSpace, Tilt, and Helmfile.
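For example, since the question mentions Helmfile: a sketch of a helmfile.yaml that manages both the controller and your own chart (the release names and the local chart path are made up):

repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  # The shared ingress controller, kept up to date from the upstream chart
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: ~4.0.6
  # Your own application chart
  - name: my-app
    namespace: default
    chart: ./charts/my-app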

Helm chart usage

I'm working on a Kubernetes project where each microservice has its own Helm chart; currently the Helm chart of each microservice lives with it in the source code repository. Now I want to create a QA environment where the same code can be used, but I'm having a problem customizing the Helm chart for each environment. My question is: what is the best approach to handling a Helm chart for a microservice, and should the Helm chart be located in the repository of the source code?
Thanks in advance.
It's ok to have the chart in each microservice's repository.
Now, to deploy your system (no matter the environment), you need to helm install all those charts. How can you do this? You have two options: either you install each one individually, or, the better approach, you create a meta chart.
What's this meta chart? Just another dummy chart, with dependencies on all of your microservices. So you end up with something like:
apiVersion: v2
name: myservice
version: 1.0.0
dependencies:
  - name: microserviceA
    version: ">=1.0.0"
    repository: "path_to_microserviceA_repo"
  - name: microserviceB
    version: ">=1.0.0"
    repository: "path_to_microserviceB_repo"
Then, ideally, you would have different values files with configuration for each environment you're going to deploy to: QA, staging, production, a personal one for local development, etc.
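For instance (the values file name is made up):

# Deploy the meta chart to QA with QA-specific configuration
helm upgrade --install myservice ./myservice -f values-qa.yaml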

Parameterize Helm values yaml file in a CI pipeline

I have two projects:
Project A - Contains the Source code for my Microservice application
Project B - Contains the Kubernetes resources for Project A using Helm
Both the Projects reside in their own separate git repositories.
Project A builds using a full-blown CI pipeline that builds a Docker image with a tag version, pushes it to Docker Hub, and then writes the version number for the Docker image into Project B via a git push from the CI server. It does so by committing a simple txt file with the Docker version it just built.
So far so good! I now have Project B, which contains this Docker version for the microservice in Project A, and I now want to pass / inject this value into the values.yaml so that when I package Project B via Helm, I have the latest version.
Any ideas how I could get this implemented?
via a git push from the CI server. It does so by committing a simple txt file with the Docker version that it just built.
What I usually do here is write the value to the correct field in the YAML directly. To work with YAML on the command line, I recommend the CLI tool yq.
I usually use full Kubernetes Deployment manifest YAML, and I typically update the image field with this yq command:
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==myapp).image' <my-registry>/<my-image-repo>/<image-name>:<tag-name>
and after that, commit the YAML file to the repo with the YAML manifests.
Now, you use Helm, but it is still YAML, so you should be able to solve this in a similar way. Maybe something like:
yq write --inplace values.yaml 'app.image' <my-image>
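Note that the commands above use yq v3 syntax; if you're on yq v4 (mikefarah), the equivalent would be something like the following (the image string is illustrative):

# Update the image of the container named "myapp" in a Deployment manifest
yq -i '(.spec.template.spec.containers[] | select(.name == "myapp")).image = "example.com/my-image:v2"' deployment.yaml

# Update a simple field in a Helm values file
yq -i '.app.image = "example.com/my-image:v2"' values.yaml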

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just a helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
    .git/
    k8s/
        postgres/
            Deployment.yaml
            Service.yaml
            Secret.mustache.yaml  # Needs to be rendered by the dev before use
        express/
            Deployment.yaml
            Service.yaml
        nginx/
            Deployment.yaml
            Service.yaml
    updates/
        0.1__0.2/
            Job.yaml    # postgres schema migration
            update.sh   # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual git stuff applies now. Every time I make a change, I change the spec files, test them, write the update scripts to help move from the last version to the current version, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
Seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Helm v2 needs Tiller and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you can aim for more goals, including:
Choose a configuration management system beyond plain declarative K8s app definitions, e.g. Helm (like the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; they're purely client-side (see the sketch after this list).
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install.
Drive change delivery from git tags/branches by using a CI/CD engine like Argo CD, Travis CI or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging/production environments before delivering directly.
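For example, with Kustomize the image bump can be a one-liner in the CI job; a sketch, assuming an overlay directory with a kustomization.yaml already exists:

# Point the overlay at the newly built image tag, then apply it
cd overlays/dev
kustomize edit set image my-image=example.com/my-image:v2
kubectl apply -k .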