Add key value pair to helm values.yaml file - kubernetes-helm

I have a use case where I need to write a randomly generated password back to a values.yaml file.
I don't want to use the install or upgrade commands, and I'm having a hard time reading through the documentation.
Apart from the Helm route, I tried updating the values.yaml file with the yq command, but it strips the comments from the file.
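For what it's worth, a minimal sketch of the yq route, assuming the Go implementation of yq (v4, mikefarah/yq), which in my experience generally keeps comments, unlike the Python yq wrapper. The .auth.password key is a placeholder; adjust the path to your chart's layout:

# generate a random password and write it into values.yaml in place
export PASSWORD=$(openssl rand -base64 18)
# .auth.password is a placeholder path; adjust it to your chart's structure
yq -i '.auth.password = strenv(PASSWORD)' values.yaml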

Related

Use multiple value files on helm template

I ran into a scenario where I need to create a deployment template that uses multiple value files one after another, without merging or overriding them.
The value files are specified in the Argo CD application YAML file under the valueFiles parameter.
Can someone help me work around this? I need to create a Helm template that takes the value files one after another, using some range function or similar.
Thanks in advance.
What I'm looking for is a simple example of a Helm template that is passed multiple value files and processes them one after another.
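For illustration, a rough sketch of one way a chart can loop over value files bundled inside it, assuming a hypothetical valueFiles list in values.yaml. This reads the files from the chart itself with .Files.Get rather than from the Argo CD valueFiles setting, and renders one ConfigMap per file instead of merging them:

# values.yaml (hypothetical list of files shipped inside the chart)
valueFiles:
  - configs/app1.yaml
  - configs/app2.yaml

# templates/per-file-configmap.yaml: one ConfigMap per value file,
# processed one after another instead of merged
{{- range $path := .Values.valueFiles }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ base $path | trimSuffix ".yaml" }}
data:
  values.yaml: |
{{ $.Files.Get $path | indent 4 }}
{{- end }}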

Kubernetes + Helm - only restart pods if new version/change

Whenever I run my basic deploy command, everything is redeployed in my environment. Is there any way to tell Helm to only apply things if there were changes made or is this just the way it works?
I'm running:
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml
I didn't see anything in the Helm Upgrade documentation that seemed to indicate this capability.
I don't want to bounce my whole environment unless I have to.
There's no way to tell Helm to do this, but also no need. If you submit an object to the Kubernetes API server that exactly matches something that's already there, generally nothing will happen.
For example, say you have a Deployment object that specifies image: my/image:{{ .Values.tag }} and replicas: 3. You submit this once with tag: 20200904.01. Now you run the helm upgrade command you show, with that tag value unchanged in the CustomEnvironmentData.yaml file. This will in fact trigger the deployment controller inside Kubernetes. That sees that it wants 3 pods to exist with the image my/image:20200904.01. Those 3 pods already exist, so it does nothing.
(This is essentially the same as the "don't use the latest tag" advice: if you try to set image: my/image:latest, and redeploy your Deployment with this tag, since the Deployment spec is unchanged Kubernetes won't do anything, even if the version of the image in the registry has changed.)
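As a concrete sketch of that example (the names here are illustrative, not taken from the question), a Deployment template like this produces an identical object as long as tag is unchanged, so the controller leaves the existing pods alone:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app
          # same tag in values => identical pod spec => nothing is restarted
          image: my/image:{{ .Values.tag }}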
You should probably use helm diff upgrade
https://github.com/databus23/helm-diff
$ helm diff upgrade -h
Show a diff explaining what a helm upgrade would change.

This fetches the currently deployed version of a release
and compares it to a chart plus values.
This can be used to visualize what changes a helm upgrade will
perform.

Usage:
  diff upgrade [flags] [RELEASE] [CHART]

Examples:
  helm diff upgrade my-release stable/postgresql --values values.yaml

Flags:
  -h, --help                    help for upgrade
      --detailed-exitcode       return a non-zero exit code when there are changes
      --post-renderer string    the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
      --reset-values            reset the values to the ones built into the chart and merge in any new values
      --reuse-values            reuse the last release's values and merge in any new values
      --set stringArray         set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --suppress stringArray    allows suppression of the values listed in the diff output
  -q, --suppress-secrets        suppress secrets in the output
  -f, --values valueFiles       specify values in a YAML file (can specify multiple) (default [])
      --version string          specify the exact chart version to use. If this is not specified, the latest version is used

Global Flags:
      --no-color   remove colors from the output
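A sketch of using it against the release from the question, assuming the plugin installs cleanly from the URL above:

# install the plugin once
helm plugin install https://github.com/databus23/helm-diff
# preview what the upgrade from the question would change, without applying it
helm diff upgrade MyInstall . -f CustomEnvironmentData.yaml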

Choosing between alternative dependencies in helm

I have a helm chart for an application that needs some kind of database.
Both mysql or postgresql would be fine.
I would like to give the chart user the option to install one of these as a dependency like this:
dependencies:
  - name: mysql
    version: 0.10.2
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mysql.enabled
  - name: postgresql
    version: 3.11.5
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: postgresql.enabled
However this makes it possible to enable both of them.
Is there an easy way to make sure only one is selected?
I was thinking of a single variable that selects one of [mysql, postgres, manual], with the chart depending on the specific database that was selected. Is there a way to do that?
I don't think there's a straightforward way to do this. In particular, it looks like the requirements.yaml condition: field only takes a boolean value (or a list of them) and not an arbitrary expression. From the Helm documentation:
The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value. Only the first valid path found in the list is evaluated and if no paths exist then the condition has no effect.
(The tags mechanism described just below that in the Helm documentation is extremely similar and doesn't really help here.)
When it comes down to actually writing your Deployment spec you have a more normal conditional system and can test that only one of the values is set; so I don't think you can prevent having redundant databases installed, but you'll at least only use one of them. You could also put an after-the-fact warning to this effect in your NOTES.txt file.
{{ if and .Values.mysql.enabled .Values.postgresql.enabled -}}
WARNING: you have multiple databases enabled in your Helm values file.
Both MySQL and PostgreSQL are installed as part of this chart, but only
PostgreSQL is being used. You can update this chart installation with
`--set mysql.enabled=false` to remove the redundant database.
{{ end -}}
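Back in the Deployment spec, that conditional could look roughly like this. The service names follow the convention of the old stable charts and the externalDatabase.host key for the "manual" case is made up, so adjust both to your setup:

# inside the container spec of your Deployment template
env:
  - name: DATABASE_HOST
    # prefer PostgreSQL if both databases were (incorrectly) enabled
    {{- if .Values.postgresql.enabled }}
    value: {{ .Release.Name }}-postgresql
    {{- else if .Values.mysql.enabled }}
    value: {{ .Release.Name }}-mysql
    {{- else }}
    value: {{ .Values.externalDatabase.host | quote }}
    {{- end }}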

Passing long configuration file to Kubernetes

I like the Kubernetes methodology of using a self-contained image and passing the configuration in a ConfigMap, mounted as a volume.
This worked great until I tried to do the same thing with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath volume, but as I understand it, the hostPath content probably won't be present on the node.
Is there any other way to pass configuration from my Kubernetes manifest directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
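For example, sticking with create (the file name here is hypothetical):

# create does not write the last-applied-configuration annotation
kubectl create -f liquibase-configmap.yaml
# later changes have to replace the whole object instead of patching it
kubectl replace -f liquibase-configmap.yaml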
Since 1.18 you can use server-side apply to circumvent the problem.
kubectl apply --server-side=true -f foo.yml
where server-side=true runs the apply command on the server instead of the client.
This will properly surface conflicts with other actors, including client-side apply, and fail accordingly:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from, and have the init container populate a directory with them. The semantics of init containers guarantee that the Liquibase container will only be launched after the config files have been downloaded.
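A rough sketch of that pattern (the images, URL, and paths are placeholders, not anything Liquibase-specific that I have verified):

apiVersion: v1
kind: Pod
metadata:
  name: liquibase
spec:
  initContainers:
    - name: fetch-sql
      image: curlimages/curl
      # pull the changelog into a shared emptyDir before Liquibase starts
      command: ["sh", "-c", "curl -fsSL https://example.com/changelog.sql -o /sql/changelog.sql"]
      volumeMounts:
        - name: sql
          mountPath: /sql
  containers:
    - name: liquibase
      image: liquibase/liquibase
      volumeMounts:
        - name: sql
          mountPath: /liquibase/changelog
  volumes:
    - name: sql
      emptyDir: {}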

What is the best way of creating Helm Charts in a single repository for different deployment environments?

We are using Helm charts to deploy a service to several environments on a Kubernetes cluster. For each environment there is a list of variables such as the database URL, the Docker image tag, etc. What is the most obvious and correct way of defining the Helm values.yaml in a case where all the Helm template files stay the same across environments and only some parameters, like those above, differ?
One way to do this would be using multiple value files, which helm now allows. Assume you have the following values files:
values1.yaml:
image:
  repository: myimage
  tag: 1.3
values2.yaml:
image:
  pullPolicy: Always
These can both be used on command line with helm as:
$ helm install -f values1.yaml,values2.yaml <mychart>
In this case, these values will be merged into
image:
  repository: myimage
  tag: 1.3
  pullPolicy: Always
You can see the values that will be used by giving the "--dry-run --debug" options to the "helm install" command.
Order is important. If the same value appears in both files, the value from values2.yaml will take precedence, as it was specified last. Each chart also comes with a values file. Those values will be used for anything not specified in your own values files, as if it were first in the list of values files you provided.
In your case, you could specify all the common settings in values1.yaml and override them as necessary with values2.yaml.
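Concretely, for the per-environment case, a sketch along these lines (the file names are just a convention):

# values.yaml          - defaults shared by every environment (the chart's own values file)
# values-staging.yaml  - staging overrides: database URL, image tag, ...
# values-prod.yaml     - production overrides

# the chart's values.yaml is always applied first; the -f file overrides it
helm upgrade --install myservice . -f values-staging.yaml
helm upgrade --install myservice . -f values-prod.yaml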