How could I preview the final result when applying kustomizations to a git repo with helm templates? - kubernetes-helm

I have a git repo with helm charts, and a git repo per environment with kustomizations for those charts. All of this is working great with Flux. But I don't know how to "preview" my changes to the kustomization files.
Let's say I edit the dev environment kustomization: can I preview what the final YAML will look like?

You can have a look at 2 different commands:
flux diff - "The diff command does a build, then it performs a server-side dry-run and prints the diff."
kustomize build - "It recursively builds (aka hydrates) the kustomization.yaml you point it to, resulting in a set of Kubernetes resources ready to be deployed."
As a Flux Kustomization only points to a standard kustomize file, you can use kustomize build to see the manifests.
PS: For helm, check out helm template
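
For example, assuming the dev overlay lives at ./environments/dev and the chart at ./charts/my-app (the paths and names below are placeholders, not from the question):

# Hydrate the kustomize overlay locally and inspect the resulting manifests
kustomize build ./environments/dev

# Or let Flux build it and diff against what is currently in the cluster
flux diff kustomization my-app --path ./environments/dev

# For the Helm side, render the chart templates locally
helm template my-release ./charts/my-app -f values-dev.yaml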

Related

helm template - Helmfile, which way to go?

I am involved in the CI/CD structures of several projects deploying to Kubernetes with GitOps principles.
Some of the projects started before I joined, so I could not have much influence on those; on some others I was involved from the start but was not really happy with the end results, so I went searching for what an ideal delivery pipeline for Kubernetes should look like.
After reading several people's proposals and designs, I arrived at a solution like the following.
I try to use best practices on which there is consensus across several sources, as well as the principles of the 12 Factor App.
It starts with a Git Repository per Service principle and a Service Pipeline that produces an executable and a Docker Image, pushes the image to a Docker Registry, and builds a Helm Chart with the Docker Image id and the configuration that is valid for all environments, which it pushes to a Helm Repository.
So with every commit to the Service Git Repositories, Service Pipelines will trigger to produce new Docker Images and Helm Charts (there is only one convention: the Helm Chart version is only increased if there is an actual change to the structure of the Helm templates; placing only the new Docker Image id into the Helm Chart will not bump the Helm Chart version).
A commit to the Service Git Repository would also trigger the Environment Pipeline for the Dev Environment (this is oversimplified to keep the size of the diagram in check; for Feature and Bugfix branches the Environment Pipeline can also create additional Namespaces under the Dev k8s cluster).
At this point, there is one big change from my previous production implementations of similar pipelines, and it is the reason for this question. In those implementations, the Environment Pipeline would get all Service Helm Charts from the Helm Repository with the help of a Helm Umbrella Chart (unlike the diagram below) and execute 'helm upgrade --install appXXX -n dev -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml', which works, but the disadvantage is auditing.
We could identify what we deployed at a later point in time by inspecting the k8s cluster, but it would be painful, so my idea is to follow GitOps principles (and many sources agree with me) and render the manifests from the Helm Umbrella Chart with 'helm template' during the Environment Pipeline, then commit those to the Environment Repository (sketched below). That way, first, they can be much more easily audited, and secondly, I can deploy them with the help of a Continuous Deployment tool like ArgoCD.
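
For illustration, that hydration step could look roughly like this (chart location, output path and repository layout are assumptions, not part of the original setup):

# Render the umbrella chart with the per-service values and commit the result
helm template appXXX ./umbrella-chart \
  -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml \
  > ../environment-repo/dev/manifests.yaml
cd ../environment-repo
git add dev/manifests.yaml
git commit -m "Render dev manifests for appXXX"
git push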
Now that I have explained the preconditions, we arrive at my actual question. From the same sources I mentioned, I also read that 'helmfile' is an awesome tool; reading the documentation, it has really nice features to prevent boilerplate code. But considering that I plan to synchronise the state in the Environment Git Repository with ArgoCD, I will not use 'helmfile sync', and 'helm template' does basically what 'helmfile template' does, so is using 'helmfile' in this workflow overkill? Additionally, I think the concept of Helmfile's 'environment.yaml' collides with what I try to achieve with the Environment Git Repository.
And secondly, if I do decide to use 'helmfile', mainly because of the awesome extra templating functions that prevent boilerplate, how should I integrate it with ArgoCD? It seems that previously it could be integrated via...
data:
  configManagementPlugins: |
    - name: helmfile
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helmfile -q template --include-crds --skip-tests"]
but it seems that 'configManagementPlugins' is now deprecated, so how should I integrate it with ArgoCD?
Thx for answers.

Have Helm include files in a chart but not parse them and be able to refer to them with -f

Right now my application repos have directories that look like this:
myapp
  code/
  myappChartConfig/
    myappChart/
    dev.yaml
    prod.yaml
The chart is in myappChart/ and my dev/prod settings are outside it in dev/prod yaml files. On deploy if it's dev or prod, the right config is supplied with -f.
I want instead to include my dev/prod YAML files inside the chart itself. So when I push the chart to a repo it includes the configs and when I pull it down I get the chart and its configs.
Does Helm support this? This is not the helmignore use case. I want to include these files in the chart, but I don't want Helm to process them as though they are manifests - they are values files (not the default values.yaml file, but env-specific ones).
What I want to avoid is something wonky like naming the files dev.yaml.deploy and then have scripts pull down the chart and move and rename those files before running helm upgrade. It would be nice to refer to them with -f and have them be inside the chart's folder when it's pulled down.
You can split it into 2 charts.
Chart named prod includes a values.yaml file containing the configuration values of prod.
Chart named dev includes a values.yaml file containing the configuration values of dev.
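For example, the layout could look roughly like this (directory and release names are placeholders), and each environment is then installed from its own chart:

dev/
  Chart.yaml
  values.yaml     # dev configuration baked in
  templates/
prod/
  Chart.yaml
  values.yaml     # prod configuration baked in
  templates/

helm upgrade --install myapp ./dev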
Hope it's useful for you!

How to update Helm chart / Kubernetes manifests without "latest" tags?

I think I'm about to reinvent the wheel here. I have all the parts but am thinking: somebody must have done this (properly) before me.
We have a Jenkins CI job that builds image-name:${BRANCH_NAME} and pushes it to a registry. We want to create a CD job that deploys this image-name:${BRANCH_NAME} to a Kubernetes cluster. And so now we run into the problem that if we call helm upgrade --install with the same image-name:${BRANCH_NAME}, nothing happens, even if image-name:${BRANCH_NAME} now actually refers to a different sha256 sum. We (think we) understand this.
How is this generally solved? Are there best practices about this? I see two general approaches:
1. The CI job doesn't just create image-name:${BRANCH_NAME}, it also creates a unique tag, e.g. image-name:${BRANCH_NAME}-${BUILD_NUMBER}. The CD job never deploys the generic image-name:${BRANCH_NAME}, but always the unique image-name:${BRANCH_NAME}-${BUILD_NUMBER}.
2. After the CI job has created image-name:${BRANCH_NAME}, its SHA256 sum is retrieved somehow (e.g. with docker inspect or skopeo), and helm is called with the SHA256 sum.
In both cases, we have two choices. Modify, commit and track a custom-image-tags.yaml file, or run helm with --set parameters for the image tags. If we go with option 1, we'll have to periodically remove "old tags" to save disk space.
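
For illustration, the two approaches could be wired up roughly like this (registry, image and chart names are placeholders, and the second variant assumes the chart composes the image reference as repository:tag):

# Approach 1: deploy the unique per-build tag
helm upgrade --install myapp ./my-chart \
  --set image.tag="${BRANCH_NAME}-${BUILD_NUMBER}"

# Approach 2: resolve the branch tag to its digest and deploy that
DIGEST=$(skopeo inspect docker://registry.example.com/image-name:"${BRANCH_NAME}" | jq -r '.Digest')
helm upgrade --install myapp ./my-chart \
  --set image.tag="${BRANCH_NAME}@${DIGEST}"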
And if we have a single CD job with a single helm chart that contains multiple images, this only gets more complicated.
Surely, there must be some opinionated tooling to do all this for us.
What are the ways to do this without re-inventing this particular wheel for the 4598734th time?
kbld gets me some of the way, but breaks helm
I've found kbld, which allows me to:
helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
which basically implements 2 above, but now helm is unaware that the chart has been installed so I can't helm uninstall it. :-( I'm hoping there is some better approach...
kbld can also be used "fully" with helm...
Yes, the docs suggest:
$ helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
But this also works:
$ cat kbld-stdin.sh
#!/bin/bash
kbld -f -
$ chmod +x kbld-stdin.sh
$ helm upgrade --install my-release ./my-chart --values my-vals.yml --post-renderer ./kbld-stdin.sh
With --post-renderer, helm list, helm uninstall, etc. all still work.
One approach is that every build of the Jenkins CI job should create a Docker image with a new, semantically versioned image tag.
To generate the image tag, you need to tag every git commit with a semantic version that is an increment of the previous commit's tag.
For Example :
Your first commit in a git repository master branch will be tagged as 0.0.1 and your docker image tag will be 0.0.1
Then when the CI build runs for the next git commit in master branch, that git commit in the git repository will be tagged as 0.0.2 and your docker image tag will be 0.0.2
Since you have a single helm chart for multiple images, your CI build can then download the latest version of your helm chart, change the docker image tag, and upload the helm chart with the same chart version.
If you create a new git release branch, it should be tagged with 0.1.0, and the docker image created for this new git release branch should be tagged as 0.1.0.
You can use this tag in the Maven pom.xml for Java Applications as well.
Using the docker image tag, developers can check out the corresponding git tag to find the source code corresponding to that docker image tag. It will help them with debugging and also with providing fixes.
Please also read https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699
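
A minimal sketch of that tagging flow (registry and image names are placeholders):

# Tag the commit, then build and push the image with the same version
git tag 0.0.2
git push origin 0.0.2
docker build -t registry.example.com/myapp:0.0.2 .
docker push registry.example.com/myapp:0.0.2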

Parameterize Helm values yaml file in a CI pipeline

I have two projects:
Project A - Contains the Source code for my Microservice application
Project B - Contains the Kubernetes resources for Project A using Helm
Both the Projects reside in their own separate git repositories.
Project A builds using a full-blown CI pipeline that builds a Docker image with a tag version, pushes it into Docker Hub, and then writes the version number of the Docker image into Project B via a git push from the CI server. It does so by committing a simple txt file with the Docker version that it just built.
So far so good! I now have Project B, which contains this Docker version for the Microservice Project A, and I now want to pass/inject this value into the values.yaml so that when I package Project B via Helm, I have the latest version.
Any ideas how I could get this implemented?
via a git push from the CI server. It does so by committing a simple txt file with the Docker version that it just built.
What I usually do here, is that I write the value to the correct field in the yaml directly. To work with yaml on the command line, I recommend the cli tool yq.
I usually use full Kubernetes Deployment manifest yaml files, and I typically update the image field with this yq command:
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==myapp).image' <my-registry>/<my-image-repo>/<image-name>:<tag-name>
and after that commit the yaml file to the repo with yaml manifests.
Now, you use Helm but it is still Yaml, so you should be able to solve this in a similar way. Maybe something like:
yq write --inplace values.yaml 'app.image' <my-image>
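
So the CI step on the Project B side could look roughly like this (file name, value key and image name are assumptions; yq v3 syntax, as above):

IMAGE_TAG=$(cat docker-version.txt)   # the txt file committed by Project A's pipeline
yq write --inplace values.yaml 'app.image' "my-registry/my-service:${IMAGE_TAG}"
git commit -am "Bump Project A image to ${IMAGE_TAG}"
git push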

Using Helm to manage my "app" but kubectl to manage the version

So, what I'm trying to do is use helm to install an application to my kubernetes cluster. Let's say the image tag is 1.0.0 in the chart.
Then, as part of a CI/CD build pipeline, I'd like to update the image tag using kubectl, i.e. kubectl set image deployment/myapp...
The problem is that if I subsequently make any change to the helm chart (e.g. the number of replicas) and run helm upgrade myapp, this will revert the image tag back to 1.0.0.
I've tried passing in the --reuse-values flag to the helm upgrade command but that hasn't helped.
Anyone have any ideas? Do I need to use helm to update the image tag? I'm trying to avoid this, as the chart is not available at this stage in the pipeline.
When using CI/CD to build and deploy, you should use a single source of truth, meaning a file versioned in e.g. Git, and you make all changes in that file. So if you use Helm charts, they should be stored in e.g. Git and all changes (e.g. a new image) should be done in your Git repository.
You could have a build pipeline that in the end commits the new image to a Kubernetes config repository. Then a deployment pipeline is triggered that uses Helm or Kustomize to apply your changes and possibly executes tests.
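
A rough sketch of those two stages (repository layout and names are assumptions; shown here with Kustomize, but the same idea works with Helm values files):

# Build pipeline: record the new image in the config repository
cd k8s-config-repo/overlays/dev
kustomize edit set image myapp=registry.example.com/myapp:"${NEW_TAG}"
git commit -am "dev: deploy myapp ${NEW_TAG}"
git push

# Deployment pipeline: apply whatever the config repository declares
kustomize build k8s-config-repo/overlays/dev | kubectl apply -f -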