I am using helm version 2.14.1. I have created Helm charts for an application that users will deploy to test their code on a Kubernetes cluster. I want to add a label for the username value, so I can retrieve deployments by user (deployments filtered by a user label). Is there a way to include the system username in Helm charts, just like we do in Java with System.getProperty("user.name")? My Helm template looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}--{{ .Release.Name }}
  labels:
    application: {{ include "common.name" . }}
    branch: "{{ .Release.Name }}"
    username: "{{ System.user.name }}" # need to fetch the logged-in user from the system here
spec:
  ...
Is there a standard way to achieve this, or is there any way I can allow users to input their usernames from the command line when using the helm install or helm template commands?
EDIT:
Although --set works for setting values in my chart, I also need to set the same value in the dependencies. Something like this:
values.yaml
username: ""
dependency1:
  username: {{ .Values.username }}
dependency2:
  username: {{ .Values.username }}
...
Of course, the above implementation doesn't work. I need to reference the set value in the dependencies as well.
This is a community wiki answer based on the comments and posted for better visibility. Feel free to expand it.
You can use the helm template command with a --set option:
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
The --set parameters have the highest precedence among the methods of passing values into a chart: by default values come from the chart's values.yaml, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
You can check more details and examples in the official docs.
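For example, with a hypothetical chart that defines a username value, the precedence plays out like this:
# chart's values.yaml:            username: "default-user"
# user-supplied my-values.yaml:   username: "from-file"
helm template ./mychart -f my-values.yaml --set username=alice
# the rendered templates see .Values.username == "alice" (--set wins)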
I have resolved this. Thanks for the help @MichaelAlbers and @WytrzymałyWiktor. The solution is as below.
helm template path/to/chart --set global.username=username
And then in all the templates refer to this value as {{ .Values.global.username }}. This works for any dependency chart as well.
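To make the dependency part concrete, here is a minimal sketch (the chart and dependency names are placeholders) showing that the same global reference works in both the parent chart and a subchart:
# parent chart, templates/deployment.yaml
metadata:
  labels:
    username: "{{ .Values.global.username }}"

# dependency chart, e.g. charts/dependency1/templates/deployment.yaml
metadata:
  labels:
    username: "{{ .Values.global.username }}"
Values under global are the one section of values.yaml that Helm shares with all subcharts, which is why --set global.username=... reaches the dependencies without any per-dependency wiring.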
Related
We deploy our microservices in multiple AWS regions. I therefore want to be able to do this in a Helm chart values.yaml file.
# Default region
aws_region: us-east-1
aws_ecrs:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
image:
  name: microservice0
  repository: {{ .Values.aws_ecrs.{{ .Values.aws_region }} }} # I know this is incorrect
So now when I install the chart, I just want to do
$ helm install microservice0 myChart/ --set aws_region=eu-north-1
and the appropriate repository will be assigned to .Values.image.repository. Can I do this? If so what is the correct syntax?
NOTE: The image repository is just one value that depends on the AWS region, we have many more other values that also depend on the AWS region.
Pass the repository name as an ordinary Helm value.
# templates/deployment.yaml
image: {{ .Values.repository }}/my-image:{{ .Values.tag }}
Create a separate file per region. This does not necessarily need to be in the same place as the Helm chart. Provide the regional values as ordinary top-level values. You'll have multiple files that provide the same values and that's fine.
# eu-north-1.yaml
repository: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
Then when you deploy the chart, use the helm install -f option to use the correct per-region values. These values will override anything in the chart's values.yaml file, but anything you don't specifically set here will use those default values from the chart.
helm install microservice0 myChart/ \
--set-string tag=20220201 \
-f eu-north-1.yaml
You can in principle use the Go template index function to do the lookup as you describe; the top-level structure in Variable value as yaml key in helm chart is similar to what you show in the question. This is more complex to implement in the templating code, though, and it means you have different setups for the values that must vary per region and those that can't.
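For reference, a sketch of that index-based lookup, assuming the aws_region/aws_ecrs layout from the question:
# templates/deployment.yaml
image: {{ index .Values.aws_ecrs .Values.aws_region }}/{{ .Values.image.name }}
With --set aws_region=eu-north-1 this picks the eu-north-1 registry entry, at the cost of keeping every regional value inside the chart's own values.yaml.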
I have been trying to add automountServiceAccountToken: false to a Deployment using Helm, but my changes are not reflected in the Deployment in Kubernetes.
I tried the following in helpers.tpl:
{{- "<chart-name>.automountserviceaccounttoken" }}
{{- default "false" .Values.automountserviceaccounttoken.name }}
{{- end }}
in app-deployment.yaml
automountServiceAccountToken: {{- include "<chart-name>.automountserviceaccounttoken" . }}
in values.yaml
automountServiceAccountToken: false
But I can't see the changes. Please guide.
You can try the following troubleshooting steps:
1. In helpers.tpl you take the automountserviceaccounttoken value from values.yaml. In values.yaml you defined automountServiceAccountToken: false, but in the tpl file you access the value as automountserviceaccounttoken.name; there is no name attribute under that key (and the key's casing does not match), so the lookup does not find anything. Although you are using the default function, it may not cover this case, so correct the value reference so it matches what is actually in values.yaml.
2. Debug the chart by running helm template <chart-name>. It outputs the generated templates with the values filled in; check whether your desired values are reflected or not.
3. In case you are redeploying the chart, try upgrading it with helm upgrade [RELEASE] [CHART] and make sure your values are reflected.
4. Before installing the chart, a dry run renders the templates with the compiled values, which helps confirm them: helm install <chart-name> . --dry-run
For more information refer to the official documentation.
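For reference, a minimal sketch of a consistent setup (the chart name mychart is a placeholder): note the define keyword in the helper, the value reference matching the key that actually exists in values.yaml, and a space kept after the colon in the deployment so the rendered YAML stays valid.
# values.yaml
automountServiceAccountToken: false

# templates/_helpers.tpl
{{- define "mychart.automountserviceaccounttoken" -}}
{{- default false .Values.automountServiceAccountToken -}}
{{- end -}}

# templates/app-deployment.yaml (under the pod spec)
automountServiceAccountToken: {{ include "mychart.automountserviceaccounttoken" . }}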
I'm trying to install a CRD present inside a helm chart.
My OpenAPI schema is working as expected except for one tiny hiccup:
I want to add a dynamic enum to the CRD, using the values that I'll pass with helm install
Something like this:
clientns:
  type: string
  enum: [{{ range .Values.rabbitmqjob.default.namespaces | split }}]
when I run the install command as:
helm install . --values values.yaml --generate-name --set "rabbitmqjob.default.namespaces={ns1,ns2}" -n ns1
I get the following error:
Error: INSTALLATION FAILED: failed to install CRD crds/crd.yaml: error parsing : error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.rabbitmqjob.default.namespaces":interface {}(nil)}
My question is:
Is it even possible to do this while installing a CRD?
If yes, then where am I going wrong?
Thanks in advance.
helm install --set has some unusual syntax. In your setup, where you specify
helm install ... --set "rabbitmqjob.default.namespaces={ns1,ns2}"
Helm turns that into the equivalent of YAML
rabbitmqjob:
  default:
    namespaces:
      - ns1
      - ns2
That is, --set key={value,value} makes the value already be a list type, so you don't need string-manipulation functions like split to find its values.
The easiest way to dump this back to YAML is to use the minimally-documented toYaml function:
clientns:
  type: string
  enum:
{{ toYaml .Values.rabbitmqjob.default.namespaces | indent 4 }}
There is a similar toJson that will also be syntactically correct, but will fit on a single line:
enum: {{ toJson .Values.rabbitmqjob.default.namespaces }}
or if you do want to loop through it by hand, range will return the individual values without specific extra processing.
enum:
{{- range .Values.rabbitmqjob.default.namespaces }}
- {{ . }}
{{- end }}
If you get odd YAML errors like this, running helm template --debug with the same options will print out the rendered-but-invalid YAML and that can often help you see a problem.
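For the failing command in the question, that debug run would look roughly like:
helm template . --values values.yaml --set "rabbitmqjob.default.namespaces={ns1,ns2}" --debug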
This isn't specific to CRDs. I'd consider it slightly unusual to have configurable elements in a custom resource definition, since this defines the schema for both the controller code that processes custom resource objects and the other services that will install those objects. You'd hit the same syntactic concerns anywhere in your Helm chart, though.
I'm deploying a spring cloud data flow cluster on kubernetes with helm and the chart from bitnami. This works fine.
Now I need an additional template to add a route. Is there a way to somehow add this or inherit from the bitnami chart and extend it? Of course I'd like to reuse all of the variables which are already defined for the spring cloud data flow deployment.
That chart has a specific extension point for doing things like this. The list of "Common parameters" in the linked documentation includes a line
Name: extraDeploy; Description: Array of extra objects to deploy with the release; Value: []
The implementation calls through to a helper in the Bitnami Common Library Chart that calls the Helm tpl function on the value, serializing it to YAML first if it's not a string, so you can use Helm templating within that value.
So specifically for the Bitnami charts, you can include an extra object in your values.yaml file:
extraDeploy:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: '{{ include "common.names.fullname" . }}'
    ...
As a specific syntactic note, the value of extraDeploy is a list of either strings or dictionaries, but any templating is rendered after the YAML is parsed; this is different from the normal Helm template flow. In the example above I've included a YAML object, but then quoted a string value that begins with a {{ ... }} template, lest it otherwise be parsed as a YAML mapping. You could also force the whole thing to be a string, though it might be harder to work with in an IDE.
extraDeploy:
  - |-
    metadata:
      name: {{ include "common.names.fullname" . }}
You can just create the YAML template file in the templates folder and it will get deployed with the chart.
You can also edit an existing YAML template accordingly and extend it; there is no need to inherit anything or do much else.
For example, if you want to add an Ingress to your chart, add an ingress template and the corresponding values block in the values.yaml file (see the sketch at the end of this answer).
You can copy this whole YAML template into your templates folder: https://github.com/helm/charts/blob/master/stable/ghost/templates/ingress.yaml
along with the corresponding values.yaml block for the Ingress.
Or, for example, if your chart doesn't have a Deployment and you want to add one, you can write your own template or use one from the internet.
Deployment: https://github.com/helm/charts/tree/master/stable/ghost/templates
There is a deployment.yaml template there; pull the variables that template uses into your values.yaml and you have successfully extended the chart.
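Here is the rough sketch referenced above: an added Ingress template plus its values block (the mychart.fullname helper and the host are placeholders, so adapt them to your chart):
# templates/ingress.yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "mychart.fullname" . }}
                port:
                  number: 80
{{- end }}

# values.yaml
ingress:
  enabled: true
  host: myapp.example.local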
I have a helm chart that can either use an internal database or an external database. The values are mutually exclusive. If one value is true, the other value should be false.
Is there a way to enforce mutual exclusivity so a user doesn't accidentally enable both?
Example to use the built-in database (redis):
helm install foo --set redis.enabled=true --set corvus.enabled=false
Example to use an external database (corvus):
helm install foo --set redis.enabled=false --set corvus.enabled=true --set corvus.location=foobar
I have considered not using two separate values (redis.enabled and corvus.enabled) and instead using a single value like database, which could be set to either internal or external; however, because Helm conditions in requirements.yaml can only evaluate a boolean, I don't believe this is possible.
dependencies:
  - name: redis
    version: 4.2.7
    repository: https://kubernetes-charts.storage.googleapis.com
    condition: redis.enabled,global.redis.enabled
You can use some Sprig templating magic in order to force the config keys to be mutually exclusive. For your case, you can add a block of the following sort to any of your Chart's templates.
{{- if .Values.redis.enabled }}
{{- if .Values.corvus.enabled }}
{{- fail "redis and corvus are mutually exclusive!" }}
{{- end }}
{{- end }}
This will cause the Chart installation to fail when both config values are evaluated as true.
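Equivalently, the two nested if blocks can be collapsed with the built-in and function:
{{- if and .Values.redis.enabled .Values.corvus.enabled }}
{{- fail "redis and corvus are mutually exclusive!" }}
{{- end }}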