Using Helm to manage my "app" but kubectl to manage the version - kubernetes

So, what I'm trying to do is use helm to install an application to my kubernetes cluster. Let's say the image tag is 1.0.0 in the chart.
Then, as part of a CI/CD build pipeline, I'd like to update the image tag using kubectl, i.e. kubectl set image deployment/myapp...
The problem is that if I subsequently make any change to the helm chart (e.g. the number of replicas) and run helm upgrade myapp, this reverts the image tag back to 1.0.0.
I've tried passing in the --reuse-values flag to the helm upgrade command but that hasn't helped.
Anyone have any ideas? Do I need to use helm to update the image tag? I'm trying to avoid this, as the chart is not available at this stage in the pipeline.

When using CI/CD to build and deploy, you should have a single source of truth: a file versioned in e.g. Git, with all changes made in that file. So if you use Helm charts, they should be stored in e.g. Git and all changes (e.g. a new image) should be made in your Git repository.
You could have a build pipeline that, at the end, commits the new image to a Kubernetes config repository. A deployment pipeline is then triggered that uses Helm or Kustomize to apply your changes and possibly run tests.
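A rough sketch of that build-pipeline step, assuming a hypothetical config repository layout and using yq (v4) to edit the values file (repository, paths and value names are placeholders):
$ git clone git@example.com:org/k8s-config.git && cd k8s-config
$ NEW_TAG=1.0.1 yq -i '.image.tag = strenv(NEW_TAG)' myapp/values.yaml
$ git commit -am "myapp: bump image tag" && git push
The deployment pipeline, triggered by that commit, then runs something like helm upgrade --install myapp ./charts/myapp --values myapp/values.yaml against the cluster.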

Related

Is There a Way to Detect Changes made to Resources Deployed by a Helm Chart

We have several resources deployed as part of a helm (v3) chart. Some time ago, I made changes to resources deployed by that helm chart manually, via kubectl. This caused some drift between the values in the yaml resources deployed by the helm release (as shown by helm get values <release>) and what is actually deployed in the cluster.
Example: kubectl describe deployment <deployment> shows an updated image that was manually applied via a kubectl re-apply, whereas helm show values <release> shows the original image used by helm for said deployment.
I realize that I should have performed a helm upgrade with a modified values.yaml file to execute the image change, but I am wondering if there is a way for me to sync the state of the values I manually updated with the values in the helm release. The goal is to create a new default values.yaml that reflects the current state of the cluster resources.
Thanks!
This is a community wiki answer posted for better visibility. Feel free to expand it.
According to Helm issue 2730, this feature will not be added to Helm, as it is outside the scope of the project.
It looks like there is no existing tool from Helm itself that would help to port/adapt live kubernetes resources back into existing or new helm charts/releases.
Based on this, you can use one of the following options:
As suggested by David Maze, the Helm Diff Plugin will show you the difference between the chart output and the cluster, but then you need to manually update values.yaml and the templates.
The helm-adopt plugin is a helm plugin to adopt existing k8s resources into a new generated helm chart.
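For example, with the diff plugin installed you can preview what an upgrade would change before touching values.yaml (release and chart names are placeholders):
$ helm plugin install https://github.com/databus23/helm-diff
$ helm diff upgrade <release> ./my-chart --values values.yaml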

How to update Helm chart / Kubernetes manifests without "latest" tags?

I think I'm about to reinvent the wheel here. I have all the parts but am thinking: somebody must have done this (properly) before me.
We have a jenkins CI job that builds image-name:${BRANCH_NAME} and pushes it to a registry. We want to create a CD job that deploys this image-name:${BRANCH_NAME} to a kubernetes cluster. And so now we run into the problem that if we call helm upgrade --install with the same image-name:${BRANCH_NAME}, nothing happens, even if image-name:${BRANCH_NAME} now actually refers to a different sha256 sum. We (think we) understand this.
How is this generally solved? Are there best practices about this? I see two general approaches:
The CI job doesn't just create image-name:${BRANCH_NAME}, it also creates a unique tag, e.g. image-name:${BRANCH_NAME}-${BUILD_NUMBER}. The CD job never deploys the generic image-name:${BRANCH_NAME}, but always the unique image-name:${BRANCH_NAME}-${BUILD_NUMBER}.
After the CI job has created image-name:${BRANCH_NAME}, its SHA256 sum is retrieved somehow (e.g. with docker inspect or skopeo) and helm is called with the SHA256 sum.
In both cases, we have two choices. Modify, commit and track a custom-image-tags.yaml file, or run helm with --set parameters for the image tags. If we go with option 1, we'll have to periodically remove "old tags" to save disk space.
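As a rough illustration of approach 2 combined with the --set choice (registry, chart, and value names are placeholders, and the chart would need to render the image reference as repository@digest for this to work):
$ DIGEST=$(skopeo inspect docker://registry.example.com/image-name:${BRANCH_NAME} | jq -r .Digest)
$ helm upgrade --install my-release ./my-chart --set image.digest=${DIGEST}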
And if we have a single CD job with a single helm chart that contains multiple images, this only gets more complicated.
Surely, there must be some opinionated tooling to do all this for us.
What are the ways to do this without re-inventing this particular wheel for the 4598734th time?
kbld gets me some of the way, but breaks helm
I've found kbld, which allows me to:
helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
which basically implements approach 2 above, but now helm is unaware that the chart has been installed, so I can't helm uninstall it. :-( I'm hoping there is some better approach...
kbld can also be used "fully" with helm...
Yes, the docs suggest:
$ helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
But this also works:
$ cat kbld-stdin.sh
#!/bin/bash
kbld -f -
$ helm upgrade --install my-chart ./my-chart --values my-vals.yml --post-renderer ./kbld-stdin.sh
With --post-renderer, helm list, helm uninstall, etc. all still work.
One approach is that every build of the Jenkins CI job should create a docker image with a new, semantically versioned image tag.
To generate the image tag, you tag every git commit with a semantic version that is an increment of the previous commit's tag.
For example:
Your first commit on the git repository's master branch will be tagged as 0.0.1, and your docker image tag will be 0.0.1.
Then, when the CI build runs for the next git commit on the master branch, that commit will be tagged as 0.0.2 and your docker image tag will be 0.0.2.
Since you have a single helm chart for multiple images, your CI build can then download the latest version of your helm chart, change the docker image tag and upload the helm chart with the same helm version.
If you create a new git release branch, it should be tagged as 0.1.0, and the docker image created for this new release branch should be tagged as 0.1.0.
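A minimal sketch of that flow in the CI build (image, chart, and tag names are placeholders):
$ git tag 0.0.2 && git push origin 0.0.2
$ docker build -t myorg/api:0.0.2 . && docker push myorg/api:0.0.2
$ yq -i '.image.tag = "0.0.2"' my-chart/values.yaml   # bump the tag in the downloaded chart before re-uploading it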
You can use this tag in the Maven pom.xml for Java Applications as well.
Using the docker image tag, developers can check out the corresponding git tag to find the source code that produced that docker image. This helps them with debugging and also with providing fixes.
Please also read https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699

Force pod to fetch the latest images after successful build inside CI pipeline via github actions - kubernetes

I'm currently developing my CI/CD pipeline via GitHub Actions.
My k8s deployments are managed by helm and run on GKE, and my images are stored in GCP.
I've successfully managed to build and deploy a new image via GitHub Actions, and now I would like one of the pods to fetch the latest version after the image has been pushed to GCP.
As I understand it, the current flow is to update the helm chart version after creating the new image and run helm upgrade against k8s (am I right?), but currently I would like to skip the helm versioning part and just force the pod to get the new image.
Until now, to make it work, I was simply deleting the pod after creating the new image, and because the deployment exists the pod was recreated. But my question is:
Should I do the same from my CI pipeline (deleting the pod), or is there another way of doing that?
Use kubectl rollout
If you are using the latest tag for the image and imagePullPolicy is set to Always, you can try the kubectl rollout command to fetch the latest built image.
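For example (the deployment name is a placeholder):
$ kubectl rollout restart deployment/myapp
This replaces the pods, and with imagePullPolicy: Always the new pods pull the most recently pushed image for that tag.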
But the latest image tag is not recommended for prod deployments, because you cannot ensure full control of the deployed version.
Update image tag in values.yaml file
If you have some specific reason to avoid a chart version bump, you can just update the values.yaml file and run the helm upgrade command with the new values.yaml file that has the new image tag. In this case, you have to use specific image tags, not latest.
If you have to use the latest image tag, you can use the sha256 value of the image as the tag in the values.yaml file.
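A minimal sketch of that, assuming the chart exposes the tag as image.tag (field names vary by chart):
# values.yaml
image:
  repository: gcr.io/my-project/myapp
  tag: "1.4.2"   # a specific or digest-derived tag instead of latest
$ helm upgrade my-release ./my-chart --values values.yaml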

How to manage helm image tag deployments

I have all our applications in Kubernetes Helm charts using:
# values.yaml
default:
  IMAGE_REPO: myorg
  IMAGE_NAME: api
  IMAGE_TAG: latest
I understand that in order for Helm to know it has to re-deploy the pods (i.e. pull down the latest image) I have to change the IMAGE_TAG. My question is: how is this managed? Do I manually update the values.yaml file on every deploy, git commit, git pull on master, and then run helm upgrade api --values values.yaml ./?
Or is it better to just leave values.yaml on latest and update via the command line directly like:
helm upgrade api --values values.yaml ./ --set IMAGE_TAG=ab31f452
Use git (99% of the time)
For a production deployment or anywhere that needs tracking I would want it in git and pushed from there. The helm chart will also evolve over time with your app so this also means you get checkpoints of working app versions with the helm chart.
For development or snapshot environments that don't need to be reproduced, I sometimes might go with the less formal method of helm pushing out new image tags as needed. More so if you have something like Jenkins or any job runner that tracks when and how things happen.
This is very dependent on the environment the app runs in. It can range from applications that require an audit trail all the way from dev, through testing, to production deployment, where it has to be in git, over to the other end of the spectrum of throwing stuff at production by hand (where you end up wanting it in git).
I understand that in order for Helm to know it has to re-deploy the pods (i.e. pull down the latest image) I have to change the IMAGE_TAG
This isn't entirely correct: kubernetes will reschedule pods when the resource spec changes. You could change an annotation or label on the pod spec and the pods would be replaced. imagePullPolicy: Always can then be set in the pod spec.
Still, don't use that to rely on :latest. It will bite you one day.
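As a sketch of the annotation trick in a Helm template (the annotation name is arbitrary; randAlphaNum is a standard template function):
# deployment.yaml, inside the pod template
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
This changes the pod template on every helm upgrade, so the Deployment rolls the pods even when nothing else changed.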
The recommended image tag for a production environment is an immutable tag, so that you can easily tell which version is running on the k8s cluster. Also, you have to run the command like this, because the image tag is a nested value:
helm upgrade api --values values.yaml ./ --set default.IMAGE_TAG=ab31f452

Helm Chart from different Repository

I do have multiple repositories like Project1, Project2, Project3.
I do have 1 repository where Helm charts are managed (deploy1).
I do this on Azure DevOps.
I now added a build Pipeline to Project1, which is working as expected.
Now I went into project deploy1 and wanted to create a new Release Pipeline, which is to be triggered from the Project1 build.
Now I would want to use the Helm chart from deploy1 to deploy to my kubernetes cluster, based on the published build from Project1.
Is this possible, and is this even the correct approach?
Some might suggest that I keep the Helm Chart within Project1, but isn't that counterintuitive?
I also do not want to keep a copy of the same Helm Chart in every Projectn repository.
As I understand it, a Helm chart is used to manage a set of kubernetes resources.
And if possible I would like to be able to remove my entire application stack, Project1, Project2 & Project3, with the uninstall command from 1 Helm file.
Well, I'd suggest using Azure Container Registry (ACR) to store the helm charts. That way you can use repo1 as the source for all helm charts: when you build the charts, you package them and push them to the ACR. Then in other releases you use the same ACR to pull those charts and apply them.
this can be done with az cli:
helm package --version $(build.buildId) --destination $(build.artifactStagingDirectory) %name%
az acr helm push %name%.tgz
you can pull them with az as well
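For example, roughly (registry, release, and chart names are placeholders):
$ az acr helm repo add --name myregistry
$ helm repo update
$ helm upgrade --install project1 myregistry/project1-chart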