Helm and configmap checksum annotations - kubernetes

I'm working on a Jenkins deployment using a wrapper around the standard chart (stable/jenkins). The chart includes a value flag that lets you totally replace the ConfigMap with your own, as long as you match the format of the original. But I'm running into a problem: the checksum annotation in the deployment is based on the original ConfigMap, not my replacement, so I have to manually force the deployment's pods to re-roll after updating the ConfigMap. I could use a post-upgrade hook in my own chart with a Job that does the scale-down-and-back-up dance, but that seems slightly gross.

This is not currently possible with Helm 2, but will be doable in Helm 3 more directly via chart scripts.
My eventual solution was to fork the jenkins chart and cut it down to only the parts I needed.
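For reference, once you control the deployment template (as in the fork), the usual pattern is to hash the chart's own ConfigMap template into a pod-template annotation so that any config change rolls the pods. A minimal sketch, assuming the forked chart keeps its ConfigMap in templates/config.yaml:

# templates/deployment.yaml in the forked chart (excerpt)
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}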

Related

Helmfile with additional resource without chart

I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (e.g. a ConfigMap or a Secret) without a separate chart?
For example, I'm installing nginx-ingress and would like to additionally apply a Secret that includes HTTP basic-authentication data.
I can just reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to apply the basic-auth Secret?
I have many releases which need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster), so you could in principle kubectl apply a file in a hook, as sketched below. Helmfile also has some integration with Kustomize, and it might be possible to add resources that way. As you've noted, you can also write local charts and put whatever YAML you need in those.
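A rough sketch of the hook approach (the chart and file names here are only illustrative), using a presync hook to apply a plain manifest before the release is synced:

releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    hooks:
      - events: ["presync"]
        showlogs: true
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]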
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

Multiple deployments in same Helm Chart

I have a scenario where I have to deploy two different Deployments. I have created a Helm chart in which I programmatically change a part of the deployment at run time (by passing overrides while applying the chart).
My Helm chart is very simple: it consists of a namespace and a deployment.
Now, when I apply the Helm chart the first time, the overrides set the first deployment's attribute_name to the value Var_A, as expected. It works well: it creates the namespace and creates the deployment with attribute_name having the value Var_A. So far so good...
...but when I next apply the Helm chart to deploy my second deployment, which needs attribute_name to be Var_B, it does not get applied, because Helm complains that the namespace already exists (rightly so).
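Concretely, the sequence looks roughly like this (the release names and value flag are illustrative):

# first install: creates the namespace and the deployment with attribute_name=Var_A
helm upgrade --install release-a ./my-chart --set attribute_name=Var_A
# second install: fails, because this chart also templates the namespace, which now already exists
helm upgrade --install release-b ./my-chart --set attribute_name=Var_B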
I am wondering how to implement this solution?
Would I need a separate Helm chart just for the namespace and another Helm chart for the deployments? Any recommendations?

Deploy container in K8s in case of only a ConfigMap change (ArgoCD)

I want to redeploy an application in k8s using GitOps (ArgoCD) when only a ConfigMap changes. How will ArgoCD understand that it needs to restart the container? As we all know, without restarting the container the new ConfigMap will not take effect.
Scenario: a container is running via ArgoCD, and I modify the ConfigMap YAML file in GitHub. ArgoCD will automatically pick up and sync the updated values, but the container will not restart, since we are not modifying the Deployment YAML files. So how will the ConfigMap take effect in the container?
Found a workaround for the above question: we can include a parameter (the Jenkins build number) as an environment variable in the Deployment config, and it gets updated on every build by the CI pipeline. So even when only the config changes in the Git repo, the Deployment is also rolled out, because the build-number parameter changes after the pipeline runs, and as we all know ArgoCD is automatically triggered once any change lands in the Git repo it is connected to.
ArgoCD itself doesn't handle this, but other tools can. With Helm this is generally handled inside the chart by hashing the config content into an annotation in the pod template. Kustomize offers ConfigMap and Secret generators, which put a hash in the object name and rewrite the pod template to include it. There are also operator-based solutions like Reloader, which does a similar trick to Helm's but via an operator.
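As a rough sketch of the Kustomize approach (file and resource names here are made up), a generator like this produces a ConfigMap named app-config-<hash> and rewrites the Deployment's reference to match, so a content change rolls the pods:

# kustomization.yaml
resources:
  - deployment.yaml
configMapGenerator:
  - name: app-config
    files:
      - application.properties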

Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

In my CICD pipeline, I am:
generating a new image with a unique tag (foo:dev-1339) and pushing it to my image repo (ECR),
then using a rolling update to update my deployment:
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter?
Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set: the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
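For example (a sketch; the file name is made up), a checked-in values-ci.yaml containing
tag: "20191211.01"
could be referenced at install time with
helm install myapp --name myapp -f values-ci.yaml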
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
  - name: myname/myapp
    newTag: 20191211.01
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
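One way to do that (a sketch; it assumes the kustomize CLI is available and the pipeline runs in the directory containing kustomization.yaml) is to let kustomize rewrite the file and commit the result:
kustomize edit set image myname/myapp=myname/myapp:20191211.01
git commit -am "Deploy image tag 20191211.01"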
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run one-off commands, is a good fit for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CICD workflow when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <directory>, is an example of this workflow. See Declarative Management using Config Files or, more interestingly, Declarative Management using Kustomize.
CICD for building image and deployment
Building an artifact from source code, including a container image, may be done in a CICD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CICD pipeline. You may want to automate it all, e.g. for Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository in order to trigger the configuration pipeline.
Unfortunately there is no solution, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger the Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
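For instance (a sketch; the container name is illustrative and the tag is the one from the question), the pipeline would edit the image field in the checked-in manifest
# deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: frontend
          image: foo:dev-1339
and then apply the whole file with
kubectl apply -f deployment.yaml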

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for your question.