How to add automountServiceAccountToken: false using Helm - kubernetes

I have been trying to add automountServiceAccountToken: false to a deployment using Helm, but my changes are not reflected in the deployment in Kubernetes.
I tried the below in helpers.tpl:
{{- define "<chart-name>.automountserviceaccounttoken" }}
{{- default "false" .Values.automountserviceaccounttoken.name }}
{{- end }}
in app-deployment.yaml
automountServiceAccountToken: {{- include "<chart-name>.automountserviceaccounttoken" . }}
in values.yaml
automountServiceAccountToken: false
But I can't see the changes. Please guide.

You can try the following troubleshooting steps:
In helpers.tpl you are taking the automountserviceaccounttoken value from values.yaml, but the two don't line up: values.yaml defines automountServiceAccountToken: false, while the template reads .Values.automountserviceaccounttoken.name. The key spelling is different and there is no name attribute under it in the values file. Although you are using the default function, it may not give you the value you expect, so correct the reference so it points at the key that actually exists in values.yaml (see the sketch below).
Debug the chart with helm template <chart-dir>. It renders the templates locally together with the values, so you can check whether your desired values are showing up in the output.
If you are redeploying the chart, upgrade it with helm upgrade [RELEASE] [CHART] and make sure your values are reflected.
Before installing the chart, a dry run renders the templates with the compiled values, which helps to confirm them: helm install chart-name . --dry-run
For more information refer to the official documentation.
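For reference, a minimal sketch of how the three files could line up, reusing the placeholder helper name from the question (note that the values key, the helper, and the template reference all use the same spelling):

# values.yaml
automountserviceaccounttoken: false

# templates/_helpers.tpl
{{- define "<chart-name>.automountserviceaccounttoken" -}}
{{- .Values.automountserviceaccounttoken -}}
{{- end }}

# templates/app-deployment.yaml, at the pod spec level
automountServiceAccountToken: {{ include "<chart-name>.automountserviceaccounttoken" . }}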

Related

How can I do this in a Helm chart's values.yaml file?

We deploy our microservices in multiple AWS regions. I therefore want to be able to do something like this in a Helm chart's values.yaml file:
# Default region
aws_region: us-east-1
aws_ecrs:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
image:
  name: microservice0
  repository: {{ .Values.aws_ecrs.{{ .Values.aws_region }} }} # I know this is incorrect
So now when I install the chart, I just want to do
$ helm install microservice0 myChart/ --set aws_region=eu-north-1
and the appropriate repository will be assigned to .Values.image.repository. Can I do this? If so what is the correct syntax?
NOTE: The image repository is just one value that depends on the AWS region, we have many more other values that also depend on the AWS region.
Pass the repository name as an ordinary Helm value.
# templates/deployment.yaml
image: {{ .Values.repository }}/my-image:{{ .Values.tag }}
Create a separate file per region. This does not necessarily need to be in the same place as the Helm chart. Provide the regional values as ordinary top-level values. You'll have multiple files that provide the same values and that's fine.
# eu-north-1.yaml
repository: 01234567890.dkr.ecr.eu-north-1.amazonaws.com
Then when you deploy the chart, use the helm install -f option to use the correct per-region values. These values will override anything in the chart's values.yaml file, but anything you don't specifically set here will use those default values from the chart.
helm install microservice0 myChart/ \
--set-string tag=20220201 \
-f eu-north-1.yaml
You can in principle use the Go template index function to do the lookup as you describe; the top-level structure in Variable value as yaml key in helm chart is similar to what you show in the question. This is more complex to implement in the templating code, though, and it means you have different setups for the values that must vary per region and those that don't.
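If you do want to try the index approach, the lookup itself is short. A minimal sketch, reusing the aws_ecrs, aws_region and image.name values from the question:

# templates/deployment.yaml
image: {{ index .Values.aws_ecrs .Values.aws_region }}/{{ .Values.image.name }}:{{ .Values.tag }}

Keep in mind that a region key missing from aws_ecrs will typically render as an empty or placeholder value rather than failing the install, which is one reason the per-region values file shown above is easier to debug.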

Kubernetes CRD schema addition of enum from values.yaml

I'm trying to install a CRD present inside a helm chart.
My openapi schema is working as expected but for one tiny hiccup:
I want to add a dynamic enum to the CRD, using the values that I'll pass with helm install
Something like this:
clientns:
  type: string
  enum: [{{ range .Values.rabbitmqjob.default.namespaces | split }}]
when I run the install command as:
helm install . --values values.yaml --generate-name --set "rabbitmqjob.default.namespaces={ns1,ns2}" -n ns1
I get the following error:
Error: INSTALLATION FAILED: failed to install CRD crds/crd.yaml: error parsing : error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.rabbitmqjob.default.namespaces":interface {}(nil)}
My question is:
Is it even possible to do this while installing a CRD?
If yes, then where am I going wrong?
Thanks in advance.
helm install --set has some unusual syntax. In your setup, where you specify
helm install ... --set "rabbitmqjob.default.namespaces={ns1,ns2}"
Helm turns that into the equivalent of YAML
rabbitmqjob:
  default:
    namespaces:
      - ns1
      - ns2
That is, --set key={value,value} makes the value already be a list type, so you don't need string-manipulation functions like split to find its values.
The easiest way to dump this back to YAML is to use the minimally-documented toYaml function:
clientns:
  type: string
  enum:
{{ toYaml .Values.rabbitmqjob.default.namespaces | indent 4 }}
There is a similar toJson that will also be syntactically correct but will fit on a single line
enum: {{ toJson .Values.rabbitmqjob.default.namespaces }}
or if you do want to loop through it by hand, range will return the individual values without specific extra processing.
enum:
{{- range .Values.rabbitmqjob.default.namespaces }}
- {{ . }}
{{- end }}
If you get odd YAML errors like this, running helm template --debug with the same options will print out the rendered-but-invalid YAML and that can often help you see a problem.
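For example, reusing the flags from the install command in the question:
helm template . --values values.yaml --set "rabbitmqjob.default.namespaces={ns1,ns2}" --debug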
This isn't specific to CRDs. I'd consider it slightly unusual to have configurable elements in a custom resource definition, since this defines the schema for both the controller code that processes custom resource objects and the other services that will install those objects. You'd hit the same syntactic concerns anywhere in your Helm chart, though.

Include system username in helm charts in helm version 2.14.1

I am using Helm version 2.14.1. I have created Helm charts for an application that will be deployed by users to test their code on a Kubernetes cluster. I want to add labels for username values, so I can retrieve deployments by user (deployments-by-user labels). Is there a way to include the system username in Helm charts, just like we do in Java with System.getProperty("user.name")? My Helm template is like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}--{{ .Release.Name }}
  labels:
    application: {{ include "common.name" . }}
    branch: "{{ .Release.Name }}"
    username: "{{ System.user.name }}" # need to fetch the logged in user from system here
spec:
  ...
Is there a standard way to achieve this, or is there any way I can allow users to input their usernames from the command line while using the helm install or helm template commands?
EDIT:
Although the --set option works for me in setting the values for my chart, I also need to set the same value in the dependencies. Something like this:
values.yaml
username: ""
dependency1:
  username: {{ .Values.username }}
dependency2:
  username: {{ .Values.username }}
...
Of course the above implementation doesn't work. I need to reference the set value in the dependencies as well.
This is a community wiki answer based on the comments and posted for better visibility. Feel free to expand it.
You can use the helm template command with a --set option:
--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
The --set parameters have the highest precedence among other methods of passing values into the charts. It means that by default values come from the values.yaml which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
You can check more details and examples in the official docs.
I have resolved this. Thanks for the help #MichaelAlbers and #WytrzymałyWiktor. So the solution is as below.
helm template path/to/chart --set global.username=username
And then in all the templates refer to this value as {{ .Values.global.username }}. This works for any dependency chart as well.
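For example (a sketch; the label key is illustrative and $(whoami) assumes a Unix-like shell):

helm template path/to/chart --set global.username="$(whoami)"

# in the parent chart or in any dependency chart's templates
labels:
  username: "{{ .Values.global.username }}"

Values placed under global are shared with all subcharts, which is why the same reference works in the dependencies.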

Rendered manifests contain a resource that already exists. Could not get information about the resource: resource name may not be empty

I installed Helm 3 on my Windows laptop, where I have the kube config configured as well. But when I try to install my local Helm chart, I'm getting the below error.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
I tried helm ls --all --all-namespaces but I don't see anything. Please help me!
I think you have to check whether any resource was left without a - name: field.
I had the same issue. In values.yaml I had
name:
and in deployment.yaml I tried to access this "name" via {{ .Values.name }}. I found out that {{ .Values.name }} didn't work for me at all. I had to use the built-in object {{ .Chart.Name }} in deployment.yaml instead. ref: https://helm.sh/docs/chart_template_guide/builtin_objects/
If you want to access the "name", you can put it into values.yaml for example like this:
something:
  name:
and then access it from deployment.yaml (for example) as {{ .Values.something.name }}.
Had the same error message. I solved the problem by running helm lint on the folder of a dependency chart that I had just added. That pointed me to some bad assignment of values.
Beware: helm lint on the parent folder didn't highlight any problems in the dependency folders.
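For example, if the dependency is vendored as a directory (the path is illustrative):
helm lint charts/my-subchart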
I suppose the same resource already exists in the namespace where you are trying to install, or your Helm chart is trying to create the same resource twice.
Try creating a new namespace and running helm install there; if you still face the issue then there is definitely some issue with your Helm chart.
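For example (the names are illustrative):
kubectl create namespace helm-debug
helm install my-release ./my-chart -n helm-debug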
I faced the same error; the fix was to correct the sub-chart name in the values.yaml file of the main chart.
The best bet would be to run helm template . in the chart directory and verify that no name or namespace fields are empty. This was the case for me at least.
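For example, piping the rendered output through a quick check for empty fields (a rough sketch, not an exhaustive check):
helm template . | grep -nE '(name|namespace): *$'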
Most likely, one of the deployments you removed left behind a clusterrole.
Check if you have one with kubectl get clusterrole
Once you find it, you can delete it with kubectl delete clusterrole <clusterrolename>

How to convert YAML to JSON when saving files into container using Kubernetes Configmap

We are going to write a Helm chart and provide the configuration file using a ConfigMap.
For some reason our app uses a JSON-format configuration file. Currently we provide the configuration file in the Helm chart's values.yaml like this:
conffiles:
  app_conf.json:
    ...(content in YAML)...
To make it easy to modify, we use YAML format in values.yaml and convert it in the ConfigMap's template using toJson:
data:
{{- range $key, $value := .Values.conffiles }}
  {{ $key }}: |
{{ toJson $value | default "{}" | indent 4 }}
{{- end -}}
So in values.yaml it's YAML, in the ConfigMap it becomes JSON, and in the container it is stored as a JSON file.
Our question is:
Is there a way to convert YAML to JSON when saving the files into the container? That is, we hope the configuration content could be 1) YAML in values.yaml, 2) YAML in the ConfigMap, and 3) a JSON file in the container.
Thanks in advance.
I don't think there is anything out of the box but you do have options, depending upon your motivation.
Your app is looking for json and the configmap is mounted for your app to read that json. Your helm deployment isn't going to modify the container itself. But you could change your app to read yaml instead of json.
If you want to be able to easily see the yaml and json versions you could create two configmaps - one containing yaml and one with json.
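For example, the YAML-flavoured ConfigMap could reuse the same loop with toYaml instead of toJson (a sketch; the template file name is illustrative):

# templates/configmap-yaml.yaml
data:
{{- range $key, $value := .Values.conffiles }}
  {{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end }}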
Or if you're just looking to be able to see what the yaml was that was used to create the configmap then you could use helm get values <release_name> to look at the values that were used to create that release (which will include the content of the conffiles entry).