Multiple deployments in same Helm Chart - kubernetes-helm

I have a scenario where I have to deploy two different deployments. I have created a Helm chart in which I programmatically change part of the deployment at runtime (by passing overrides while applying the chart).
My Helm chart is very simple: it consists of a namespace and a deployment.
When I apply the Helm chart the first time, the overrides set an attribute, say attribute_name, to the value Var_A for the first deployment. This works as expected: it creates the namespace and a deployment with attribute_name set to Var_A. So far so good...
...but when I next apply the Helm chart to deploy my second deployment, which needs attribute_name to be Var_B, it does not get applied: Helm complains that the namespace already exists (rightly so).
I am wondering how to implement this solution.
Would I need one Helm chart just for the namespace and another Helm chart for the deployments? Any recommendations?
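One common workaround (a sketch, not taken from the question; the flag and value names below are hypothetical): gate the Namespace manifest behind a flag so that only the first release creates it, and install each deployment as its own release.

```yaml
# templates/namespace.yaml (hypothetical) -- only rendered when the
# installing release asks for the namespace to be created
{{- if .Values.createNamespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
{{- end }}
```

You would then install twice under different release names, e.g. helm install first-release . --set attribute_name=Var_A --set createNamespace=true, followed by helm install second-release . --set attribute_name=Var_B --set createNamespace=false. Note the deployment name must incorporate the release name (e.g. {{ .Release.Name }}) so the two releases don't collide.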

Related

Update helm chart values for different environments

I have helm charts created for a microservice that I have built and everything is working as expected. Now, I have created a new k8s namespace and I want to deploy the same helm charts as in my old namespace. However, there is just one value that I need to be different, while everything else remains the same.
Do I have to create another values.yaml for the new namespace, copy everything over, and update the one field I want changed? Or is there another way? I do not want to pass the update on the command line with --set.
David suggested the right way. You can use a different values.yaml in which you specify the namespace you want to deploy the chart to:
$ helm install -f another-namespace-values.yaml <my-release> .
It's also entirely possible to install a Helm chart with multiple values files.
For more reading, please check the Values section of the Helm docs.
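Note that an override file passed with -f is merged over the chart's default values.yaml, so it only needs to contain the keys that differ; you do not have to copy everything over. A minimal sketch, with hypothetical key names:

```yaml
# another-namespace-values.yaml (hypothetical keys) -- everything not
# listed here keeps its default from the chart's values.yaml
namespace: my-new-namespace
replicaCount: 2
```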

Different deployment configurations using Helm

I would like to have a slightly different deployment configuration in different environments. That is, in Prod and Ver, I don't want all containers to be deployed.
With docker-compose we solve that by combining incremental docker-compose files, like: docker-compose -f docker-compose.yml -f docker-compose-prod.yml up
How can that be done using Helm charts?
We have a structure with Chart.yaml and values.yaml in the top, and then one yaml file per container in a subfolder. The naive solution would be to copy that structure and leave out some of the chart files, but I would prefer to have only one file (at most one file!) per service.
We deploy to AKS using CircleCI.
To summarize:
Today, each service has its own yaml file, and on every deploy, all of them get deployed. I want to configure my charts so that only a subset of the services gets deployed in certain environments.
EDIT:
kubectl has the possibility to use selectors, like kubectl apply -f cfg.yaml --selector=tier=frontend or kubectl apply -f cfg.yaml --selector=environment=prod, and I already tag my containers, so that would have been simple. But helm install does not accept a similar flag to pass through to kubectl.
just create one values file for each environment and target those:
helm install . -f values.production.yaml
helm install . -f values.development.yaml
you can use a condition to toggle deployments. Imagine you have a something.yaml which you want conditionally deployed:
{{- if .Values.something }}
something.yaml original content goes here
{{- end }}
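Combining the two suggestions, the per-environment values files might then differ only in such toggles (a sketch; the flag names are hypothetical):

```yaml
# values.production.yaml (hypothetical) -- skip rendering something.yaml in prod
something: false

# values.development.yaml (hypothetical) -- render everything in dev
# something: true
```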

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the stateful set to a previous helm release? (eg by specifying some tags in the yaml file)
As far as I know, you cannot do it.
You can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for binding an already-existing resource to a release afterwards.
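For later readers: this was true at the time, but Helm 3.2+ added an adoption mechanism. If an existing resource carries the managed-by label and the release-name/namespace annotations, a subsequent helm install or upgrade will take ownership of it instead of failing. The resource and release names below are hypothetical:

```yaml
# Metadata that lets Helm 3.2+ adopt a pre-existing StatefulSet into a release
metadata:
  name: my-statefulset
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: default
```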

How to add smtp settings to prometheus-operator using helm chart?

I am new to Kubernetes and especially to using Helm. I installed the chart and it works fine with default values. I now want to add SMTP server settings in the values.yaml file for the chart, but I am confused about how to inject those values while installing it. This is the chart that I am using: https://github.com/helm/charts/tree/master/stable/prometheus-operator.
After installing the helm chart with default values I see that there is a deployment called prometheus-operator-grafana which has the environment variables GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD, but I am not sure where these values come from.
Help with how these values work and how to inject them would be appreciated.
The interaction between parent and child chart values is summarized very well in this SO answer: helm overriding Chart and Values yaml from a base template chart
There are two separate grafana chart mechanisms that control such a thing: adminUser and adminPassword, or admin.existingSecret along with admin.userKey and admin.passwordKey.
Thus, helm ... --set grafana.adminUser=ninja --set grafana.adminPassword=hunter2 will do what you want. The fine manual even says they are using grafana as a subchart, and documents that exact setting as the first value underneath the grafana.enabled setting. Feel free to file an issue with the helm chart to spend the extra characters and document the grafana.adminUser setting, too.
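For the SMTP part of the question: the same subchart mechanism likely applies, since the grafana chart also exposes arbitrary grafana.ini settings. A sketch, assuming the key structure of the stable/grafana chart, with a hypothetical host:

```yaml
# values override for stable/prometheus-operator -- keys under `grafana:`
# are forwarded to the grafana subchart
grafana:
  adminUser: ninja
  adminPassword: hunter2
  grafana.ini:
    smtp:
      enabled: true
      host: smtp.example.com:587
      from_address: grafana@example.com
```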

Helm and configmap checksum annotations

I'm working on a Jenkins deployment using a wrapper around the standard chart (stable/jenkins). The chart includes a value flag that lets you replace the configmap entirely with your own, as long as you match the format of the original. But I'm running into a problem: the checksum annotation in the deployment is based on the original configmap, not my replacement, so I have to manually force the deployment's pods to re-roll after updating the configmap. I could use a post-upgrade hook in my own chart with a job that does the scale-down-and-back-up dance, but that seems slightly gross.
This is not currently possible with Helm 2, but will be doable in Helm 3 more directly via chart scripts.
My eventual solution was to fork the jenkins chart and cut it down to only the parts I needed.
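For reference, the standard trick the stock chart uses (documented in the Helm tips and tricks guide) looks like this: the deployment template hashes the chart's own configmap file, so editing that file rolls the pods. In a wrapper chart the hash still points at the upstream chart's configmap template, which is why a replacement configmap doesn't trigger a roll:

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # sha256 of the rendered configmap template; any change forces a pod roll
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```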