I have a helm chart that can either use an internal database or an external database. The values are mutually exclusive. If one value is true, the other value should be false.
Is there a way to enforce mutual exclusivity so a user doesn't accidentally enable both?
Example using the built-in database (redis):
helm install foo --set redis.enabled=true --set corvus.enabled=false
Example using an external database (corvus):
helm install foo --set redis.enabled=false --set corvus.enabled=true --set corvus.location=foobar
I have considered not using two separate values (redis.enabled, corvus.enabled) and instead using a single value like database that can be set to either internal or external. However, because Helm conditionals in requirements.yaml can only test a boolean, I don't believe this is possible.
dependencies:
  - name: redis
    version: 4.2.7
    repository: https://kubernetes-charts.storage.googleapis.com
    condition: redis.enabled,global.redis.enabled
You can use some Sprig templating magic in order to force the config keys to be mutually exclusive. For your case, you can add a block of the following sort to any of your Chart's templates.
{{- if .Values.redis.enabled }}
{{- if .Values.corvus.enabled }}
{{- fail "redis and corvus are mutually exclusive!" }}
{{- end }}
{{- end }}
This will cause the Chart installation to fail when both config values are evaluated as true.
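If you prefer a single conditional, the same check can be collapsed into one block with the template and function:

{{- if and .Values.redis.enabled .Values.corvus.enabled }}
{{- fail "redis and corvus are mutually exclusive!" }}
{{- end }}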
We deploy our microservices in multiple AWS regions. I therefore want to be able to do this in a Helm chart values.yaml file.
# Default region
aws_region: us-east-1
aws_ecrs:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
image:
  name: microservice0
  repository: {{ .Values.aws_ecrs.{{ .Values.aws_region }} }} # I know this is incorrect
So now when I install the chart, I just want to do
$ helm install microservice0 myChart/ --set aws_region=eu-north-1
and the appropriate repository will be assigned to .Values.image.repository. Can I do this? If so what is the correct syntax?
NOTE: The image repository is just one value that depends on the AWS region, we have many more other values that also depend on the AWS region.
Pass the repository name as an ordinary Helm value.
# templates/deployment.yaml
image: {{ .Values.repository }}/my-image:{{ .Values.tag }}
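If the chart ships region-neutral defaults in its own values.yaml, they might look like this (a sketch; the default repository and tag here are assumptions based on the question's us-east-1 default):

# values.yaml (chart defaults)
repository: 01234567890.dkr.ecr.us-east-1.amazonaws.com
tag: latest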
Create a separate file per region. This does not necessarily need to be in the same place as the Helm chart. Provide the regional values as ordinary top-level values. You'll have multiple files that provide the same values and that's fine.
# eu-north-1.yaml
repository: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
Then when you deploy the chart, use the helm install -f option to use the correct per-region values. These values will override anything in the chart's values.yaml file, but anything you don't specifically set here will use those default values from the chart.
helm install microservice0 myChart/ \
  --set-string tag=20220201 \
  -f eu-north-1.yaml
You can in principle use the Go template index function to do the lookup as you describe; the top-level structure in Variable value as yaml key in helm chart is similar to what you show in the question. This is more complex to implement in the templating code, though, and it means you have different setups for the values that vary per region and those that don't.
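For reference, a minimal sketch of that index lookup, assuming the aws_ecrs map and aws_region value from the question:

# templates/deployment.yaml
image: {{ index .Values.aws_ecrs .Values.aws_region }}/my-image:{{ .Values.tag }}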
I have been trying to add automountServiceAccountToken: false to a deployment using Helm, but my changes are not reflected in the deployment in Kubernetes.
I tried the following in helpers.tpl:
{{- define "<chart-name>.automountserviceaccounttoken" }}
{{- default "false" .Values.automountserviceaccounttoken.name }}
{{- end }}
In app-deployment.yaml:
automountServiceAccountToken: {{- include "<chart-name>.automountserviceaccounttoken" . }}
In values.yaml:
automountServiceAccountToken: false
But I can't see the changes. Please guide me.
You can try the following troubleshooting steps:
1. In helpers.tpl you read the automountserviceaccounttoken value from values.yaml, but values.yaml defines automountServiceAccountToken: false while the template accesses automountserviceaccounttoken.name; there is no name attribute under that key in the values file. Although you use the default function, it may not fall back the way you expect, so make the key in values.yaml and the key in the template match.
2. Debug the chart with helm template <chart-name>. This renders the templates with their values, so you can check whether your desired values are reflected or not.
3. If you are redeploying the chart, try upgrading it with helm upgrade [RELEASE] [CHART] and make sure your values are reflected.
4. Before installing the chart, a dry run renders the templates with compiled values, which helps confirm them: helm install chart-name . --dry-run
For more information, refer to the official documentation.
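Putting those steps together, a minimal corrected sketch (assuming the values key is automountServiceAccountToken, matching the question's values.yaml; the exact helper name is taken from the question):

{{/* helpers.tpl: named template that reads the value from values.yaml */}}
{{- define "<chart-name>.automountserviceaccounttoken" -}}
{{- .Values.automountServiceAccountToken | default false -}}
{{- end -}}

# app-deployment.yaml: note the space after the colon; the question's
# "{{- include" trims that space away, which can produce invalid YAML
automountServiceAccountToken: {{ include "<chart-name>.automountserviceaccounttoken" . }}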
I'm trying to install a CRD present inside a helm chart.
My OpenAPI schema is working as expected but for one tiny hiccup:
I want to add a dynamic enum to the CRD, using the values that I'll pass with helm install
Something like this:
clientns:
  type: string
  enum: [{{ range .Values.rabbitmqjob.default.namespaces | split }}]
when I run the install command as:
helm install . --values values.yaml --generate-name --set "rabbitmqjob.default.namespaces={ns1,ns2}" -n ns1
I get the following error:
Error: INSTALLATION FAILED: failed to install CRD crds/crd.yaml: error parsing : error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.rabbitmqjob.default.namespaces":interface {}(nil)}
My question is:
1. Is it even possible to do this while installing a CRD?
2. If yes, then where am I going wrong?
Thanks in advance.
helm install --set has some unusual syntax. In your setup, where you specify
helm install ... --set "rabbitmqjob.default.namespaces={ns1,ns2}"
Helm turns that into the equivalent of YAML
rabbitmqjob:
  default:
    namespaces:
      - ns1
      - ns2
That is, --set key={value,value} makes the value already be a list type, so you don't need string-manipulation functions like split to find its values.
The easiest way to dump this back to YAML is to use the minimally-documented toYaml function:
clientns:
  type: string
  enum:
{{ toYaml .Values.rabbitmqjob.default.namespaces | indent 4 }}
There is a similar toJson that is also syntactically correct but fits on a single line:
enum: {{ toJson .Values.rabbitmqjob.default.namespaces }}
Or, if you do want to loop through it by hand, range will return the individual values without any extra processing:
enum:
{{- range .Values.rabbitmqjob.default.namespaces }}
  - {{ . }}
{{- end }}
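With the --set from the question, any of these variants should render to roughly:

clientns:
  type: string
  enum:
    - ns1
    - ns2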
If you get odd YAML errors like this, running helm template --debug with the same options will print out the rendered-but-invalid YAML and that can often help you see a problem.
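For example, reusing the options from the question's install command:

helm template . --values values.yaml --set "rabbitmqjob.default.namespaces={ns1,ns2}" --debug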
This isn't specific to CRDs. I'd consider it slightly unusual to have configurable elements in a custom resource definition, since this defines the schema for both the controller code that processes custom resource objects and the other services that will install those objects. You'd hit the same syntactic concerns anywhere in your Helm chart, though.
I am using Helm version 2.14.1. I have created Helm charts for an application that users will deploy to test their code on a Kubernetes cluster. I want to add a label for the username, so I can retrieve deployments by user (deployments-by-user labels). Is there a way to include the system username in Helm charts, just like System.getProperty("user.name") in Java? My Helm template is like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}--{{ .Release.Name }}
  labels:
    application: {{ include "common.name" . }}
    branch: "{{ .Release.Name }}"
    username: "{{ System.user.name }}" # need to fetch the logged-in user from the system here
spec:
  ...
Is there a standard way to achieve this, or is there any way I can allow users to input their usernames from the command line when using the helm install or helm template commands?
EDIT:
Although, the --set works for me in setting the values for my chart, I also need to set the same value in the dependencies. Something like this:
values.yaml:
username: ""
dependency1:
  username: {{ .Values.username }}
dependency2:
  username: {{ .Values.username }}
...
Of course the above implementation doesn't work. I need to reference the set value in the dependencies as well.
This is a community wiki answer based on the comments and posted for better visibility. Feel free to expand it.
You can use the helm template command with a --set option:
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
The --set parameters have the highest precedence among other methods of passing values into the charts. It means that by default values come from the values.yaml which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
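For example, assuming a hypothetical override.yaml that sets username: bob, while the chart's values.yaml has username: "" as in the question:

helm template path/to/chart -f override.yaml --set username=alice
# --set has the highest precedence, so the rendered templates see username=alice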
You can check more details and examples in the official docs.
I have resolved this. Thanks for the help @MichaelAlbers and @WytrzymałyWiktor. The solution is as below.
helm template path/to/chart --set global.username=username
And then in all the templates refer to this value as {{ .Values.global.username }}. This works for any dependency chart as well.
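For example, the deployment labels from the original template become (a sketch reusing the question's label keys):

labels:
  application: {{ include "common.name" . }}
  branch: "{{ .Release.Name }}"
  username: "{{ .Values.global.username }}"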
Is it best practice to include the installation of subcharts in the global part of values.yaml? For example:
Root-level values.yaml:
global:
  foo: bar
  subchartA:
    enabled: true
Or is the best practice to have subcharts outside the global part, as shown here?
global:
  foo: bar
subchartA:
  enabled: true
Please provide a brief explanation of why. Thank you.
Subchart configuration settings need to be at the top level, outside a global: block.
At a style level, each chart should be independently installable, whether or not it's used as a subchart. Something like the stable/mysql chart is a reasonable example: you can manually helm install mysql stable/mysql --set mysqlPassword=... without mentioning global. That means when you include it as a dependency its settings need to be under the subchart's key in the values.yaml file.
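For instance, if your chart listed stable/mysql as a dependency, its settings would sit under a top-level mysql: key in the parent's values.yaml (a sketch; mysqlPassword is one of that chart's documented values):

# parent chart's values.yaml
mysql:
  mysqlPassword: examplepassword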
At a mechanical level, when the subchart is rendered, the subchartA settings are promoted up to be .Values, and then the original global: is merged with that (see Subcharts and Globals). So the subchart itself needs to be aware of the difference:
{{/* Option 1 */}}
{{ .Values.global.subchartA.enabled }}
{{/* Option 2 (within subchartA) */}}
{{ .Values.enabled }}
and at the top level you need to use the form that's compatible with the included chart.
(If you browse through the "stable" Helm chart repository you'll see global used fairly sparingly; rabbitmq allows you to declare global.imagePullSecrets but that's close to it.)
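A shared global value in that style might look like the following (a sketch; the exact schema depends on the chart consuming it):

global:
  imagePullSecrets:
    - myRegistrySecret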