I'm getting this error when linting my helm project
$ helm lint --debug
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: render error in "myProject/templates/configmap.yaml": template: myProject/templates/configmap.yaml:26:27: executing "myProject/templates/configmap.yaml" at <.Values.fileServiceH...>: can't evaluate field fileHost in type interface {}
Error: 1 chart(s) linted, 1 chart(s) failed
This is my configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myProject-configmap
data:
  tkn.yaml: |
    iss: "{{ .Values.iss }}"
    aud: "{{ .Values.aud }}"
  db.yaml: |
    database: "{{ .Values.database }}"
    user: "{{ .Values.user }}"
    host: "{{ .Values.host }}"
    dialect: "{{ .Values.dialect }}"
    pool:
      min: "{{ .Values.pool.min }}"
      max: "{{ .Values.pool.max }}"
      acquire: "{{ .Values.pool.acquire }}"
      idle: "{{ .Values.pool.idle }}"
  fileservice.yaml: |
    fileServiceHost:
      fileHost: "{{ .Values.fileServiceHost.fileHost }}"
  notificationservice.yaml: |
    notificationServiceHost:
      notificationHost: "{{ .Values.notificationservice.notificationHost }}"
  organizationservice.yaml: |
    organizationServiceHost:
      organizationHost: "{{ .Values.organizationservice.organizationHost }}"
  organizations.yaml: |
    organizations: {{ .Values.organizations | toJson | indent 4 }}
  epic.yaml: |
    redirectUri: "{{ .Values.redirectUri }}"
This is my /vars/dev/fileservice.yaml file
fileServiceHost:
  fileHost: 'https://example.com'
What is wrong here that I'm getting this lint error?
You want to either use .Files.Get to load the yaml files, or take the yaml content that currently lives in those files and move it into values.yaml so that you can insert it directly into your configmap with toYaml.
If the values are static and you don't need the user to override them, .Files.Get is the better fit. If you want to be able to override the content easily at install time, represent it in values.yaml instead.
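For example, here is a minimal sketch of the values.yaml route; the fileservice key below is just an assumed name mirroring the content of your /vars/dev/fileservice.yaml, not something Helm requires:

# values.yaml (assumed layout)
fileservice:
  fileServiceHost:
    fileHost: "https://example.com"

# templates/configmap.yaml (fragment of the data: section)
  fileservice.yaml: |
{{ .Values.fileservice | toYaml | indent 4 }}

And a sketch of the .Files.Get route, which assumes the file lives inside the chart directory, since .Files.Get cannot read files outside the chart root:

  fileservice.yaml: |
{{ .Files.Get "vars/dev/fileservice.yaml" | indent 4 }}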
There's a similar question that alludes to the possibility of auto-generating a uuid in helm charts when used as a secret or configmap. I'm trying precisely to do that, but I'm getting a new uuid each time.
My test case:
---
{{- $config := (lookup "v1" "ConfigMap" .Release.Namespace "{{ .Release.Name }}-testcase") -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Release.Name }}-testcase"
  namespace: "{{ .Release.Namespace }}"
  labels:
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
data:
{{- if $config }}
  TEST_VALUE: {{ $config.data.TEST_VALUE | quote }}
{{- else }}
  TEST_VALUE: {{ uuidv4 | quote }}
{{ end }}
I initially deploy this with:
helm upgrade --install --namespace test mytest .
If I run it again, or run it with helm diff upgrade --namespace test mytest ., I get a new value for TEST_VALUE. When I dump the contents of $config it's an empty map {}.
I'm using Helm v3.9.0, kubectl 1.24, and kube server is 1.22.
NOTE: I couldn't ask in a comment thread on the other post because I don't have enough reputation.
Referring to my issue, in which you linked your Stack Overflow post: https://github.com/helm/helm/issues/11187
A way to make your configmap work is to assign a default value to a variable first and only then conditionally overwrite it. This means that on every upgrade you'll generate a UUID that normally won't be used, but that is not dramatic.
When assigning to an existing variable, := should become =.
Also, don't forget to b64enc your value in your manifest.
{{- $config := uuidv4 | b64enc -}}
{{- /* printf interpolates the release name; a nested "{{ }}" inside an action is treated as a literal string, so the lookup would otherwise never find the ConfigMap */ -}}
{{- $config_lookup := lookup "v1" "ConfigMap" .Release.Namespace (printf "%s-testcase" .Release.Name) -}}
{{- if $config_lookup -}}
{{- $config = $config_lookup.data.TEST_VALUE -}}
{{- end -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Release.Name }}-testcase"
  namespace: "{{ .Release.Namespace }}"
  labels:
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
data:
  TEST_VALUE: {{ $config | quote }}
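A quick way to verify the behaviour, assuming the release name and namespace from your example (so the ConfigMap is named mytest-testcase):

$ helm upgrade --install --namespace test mytest .
$ kubectl -n test get configmap mytest-testcase -o jsonpath='{.data.TEST_VALUE}'
$ helm upgrade --install --namespace test mytest .
$ kubectl -n test get configmap mytest-testcase -o jsonpath='{.data.TEST_VALUE}'

The two kubectl calls should print the same value, because the second upgrade reads TEST_VALUE back via lookup instead of generating a new UUID. Keep in mind that lookup only returns data during a real install or upgrade; during helm template or a client-side dry run it returns an empty map, which is likely why helm diff still shows a new value.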
I have written an Ansible operator using the operator-sdk that assists with our cluster onboarding: from a single ProjectDefinition resource, the operator provisions a namespace, groups, rolebindings, resourcequotas, and limitranges.
The resourcequotas and limitranges are defined in a ProjectQuota resource, and are referenced by name in the ProjectDefinition.
When the content of a ProjectQuota resource is modified, I want to ensure that those changes propagate to any managed namespaces that use the named quota. Right now, I'm doing it this way:
calculating a content hash for the ProjectQuota
looking up namespaces that reference the quota using a label selector
annotating the discovered namespaces with the content hash
The namespace update triggers a reconciliation of the associated ProjectDefinition because of the ownerReference on the namespace, and this in turn propagates the changes in the ProjectQuota.
In other words, the projectquota role does this:
---
# tasks file for ProjectQuota
- name: Calculate content_hash
  set_fact:
    content_hash: "{{ quotadef|to_json|hash('sha256') }}"
  vars:
    quotadef:
      q: "{{ resource_quota|default({}) }}"
      l: "{{ limit_range|default({}) }}"

- name: Look up affected namespaces
  kubernetes.core.k8s_info:
    api_version: v1
    kind: namespace
    label_selectors:
      - "example.com/named-quota = {{ ansible_operator_meta.name }}"
  register: namespaces

- name: Update namespaces
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ namespace.metadata.name }}"
        annotations:
          example.com/named-quota-hash: "{{ content_hash }}"
  loop: "{{ namespaces.resources }}"
  loop_control:
    loop_var: namespace
    label: "{{ namespace.metadata.name }}"
And the projectdefinition role does (among other things) this:
- name: "{{ ansible_operator_meta.name }} : handle named quota"
when: >-
quota.quota_name|default(false)
block:
- name: "{{ ansible_operator_meta.name }} : look up named quota"
kubernetes.core.k8s_info:
api_version: "{{ apiVersion }}"
kind: ProjectQuota
name: "{{ quota.quota_name }}"
register: named_quota
- name: "{{ ansible_operator_meta.name }} : label namespace"
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/named-quota: "{{ quota.quota_name }}"
- name: "{{ ansible_operator_meta.name }} : apply resourcequota"
when: >-
"resourceQuota" in named_quota.resources[0].spec
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ResourceQuota
metadata:
name: "default"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/project: "{{ ansible_operator_meta.name }}"
spec: "{{ named_quota.resources[0].spec.resourceQuota }}"
- name: "{{ ansible_operator_meta.name }} : apply limitrange"
when: >-
"limitRange" in named_quota.resources[0].spec
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: LimitRange
metadata:
name: "default"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/project: "{{ ansible_operator_meta.name }}"
spec: "{{ named_quota.resources[0].spec.limitRange }}"
This all works, but this seems like the sort of thing for which there is probably a canonical solution. Is this it? I initially tried using the generation on the ProjectQuota instead of a content hash, but this value isn't exposed to the role by the Ansible operator.
I need to write this condition in Helm chart syntax in my job.yaml file so that imagePullSecrets is rendered only when the condition is satisfied.
The condition is:
when: (network.docker.username | default('', true) | trim != '') and (network.docker.password | default('', true) | trim != '')
I want the above condition to wrap this block:
imagePullSecrets:
  - name: "{{ $.Values.image.pullSecret }}"
Ideally, the Docker username and password should come from a Secret. Here is sample Helm code for using if in a YAML template:
imagePullSecrets:
{{ if and (ne $.Values.network.docker.password "") (ne $.Values.network.docker.username "") }}
  - name: "{{ $.Values.image.pullSecret }}"
{{ end }}
And values.yaml should have:
network:
  docker:
    username: your-uname
    password: your-pwd
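If you also want to mirror the default/trim handling from the original Ansible condition, here is a rough sketch; it assumes network.docker is defined in values.yaml as above, and uses the default and trim functions that ship with Helm's template engine:

{{- $user := $.Values.network.docker.username | default "" | trim -}}
{{- $pass := $.Values.network.docker.password | default "" | trim -}}
{{- if and (ne $user "") (ne $pass "") }}
imagePullSecrets:
  - name: "{{ $.Values.image.pullSecret }}"
{{- end }}

This way missing or whitespace-only credentials are treated the same as empty ones.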
I have this Secret resource yaml:
...
stringData:
  imageTag: {{ .Values.image.tag | quote }}
...
In the values file:
image:
  tag: "6597745"
...
Running the helm template command results in a generated yaml file with the value:
...
stringData:
  imageTag: "65977\u200b45"
...
This seems like a bug in helm. To get around the issue, I have to do this:
...
stringData:
  imageTag: "{{ .Values.image.tag }}"
...
Is there a better solution? I am using helm version 2.15.2.
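As an aside, the \u200b in the rendered output is a zero-width space, so one thing worth checking (an assumption on my part, not a confirmed cause) is whether the tag string in the values file itself contains that invisible character, for example:

$ grep -n $'\u200b' values.yaml    # bash 4.2+ ANSI-C quoting; prints any line containing U+200B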
I'm trying to install Kritis using:
azureuser#Azure:~/kritis/docs/standalone$ helm install kritis https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz --set certificates.ca="$(cat ca.crt)" --set certificates.cert="$(cat kritis.crt)" --set certificates.key="$(cat kritis.key)" --debug
But I'm getting the following error:
install.go:148: [debug] Original chart version: ""
install.go:165: [debug] CHART PATH: /home/azureuser/.cache/helm/repository/kritis-charts-0.2.0.tgz
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
helm.go:76: [debug] error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
helm.sh/helm/v3/pkg/kube.scrubValidationError
/home/circleci/helm.sh/helm/pkg/kube/client.go:520
helm.sh/helm/v3/pkg/kube.(*Client).Build
/home/circleci/helm.sh/helm/pkg/kube/client.go:135
Is there a way to know exactly which file the error is being triggered in, and what exactly that error means?
The original chart files are available here: https://github.com/grafeas/kritis/blob/master/kritis-charts/templates/preinstall/clusterrolebinding.yaml
You can't tell exactly where this is coming from, but the output gives some clues.
Your error message contains some useful pieces of information:
helm.go:76: [debug] error validating "": error validating data: ValidationError(ClusterRole.metadata): unknown field "kritis.grafeas.io/install" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
error validating ""
ClusterRole
kritis.grafeas
You can download your chart and dig into it for these terms using grep as follows:
$ wget https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz
$ tar xzvf kritis-charts-0.2.0.tgz
$ cd kritis-charts/
If you grep for kritis.grafeas.io/install, you can see a "variable" being set:
$ grep -R "kritis.grafeas.io/install" *
values.yaml:kritisInstallLabel: "kritis.grafeas.io/install"
Now we can grep this variable and check what we can find:
$ grep -R "kritisInstallLabel" *
templates/rbac.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/rbac.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/kritis-server-deployment.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/pod.yaml: - {{ .Values.kritisInstallLabel }}
templates/preinstall/serviceaccount.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/preinstall/clusterrolebinding.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/postinstall/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/postinstall/pod.yaml: - {{ .Values.kritisInstallLabel }}
templates/secrets.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/predelete/pod.yaml: {{ .Values.kritisInstallLabel }}: ""
templates/kritis-server-service.yaml: {{ .Values.kritisInstallLabel }}: ""
values.yaml:kritisInstallLabel: "kritis.grafeas.io/install"
In this output we can see an rbac.yaml file, which matches one of the terms we are looking for (ClusterRole). If we read this file, we can see the ClusterRole and a line referring to kritisInstallLabel:
- apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRoleBinding
  metadata:
    name: {{ .Values.clusterRoleBindingName }}
    labels:
      {{ .Values.kritisInstallLabel }}: ""
{{ .Values.kritisInstallLabel }}: "" is rendered by helm as kritis.grafeas.io/install: "", and judging from the validation error that key ends up directly under metadata rather than under labels, which is why the API server rejects it as an unknown field in ObjectMeta. That's where your error is coming from.
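To confirm which rendered manifest contains the offending field, one option (just a sketch, assuming the chart was extracted to ./kritis-charts as above; you may need the same --set certificates.* flags you used at install time) is to render the chart locally and grep the output:

$ helm template kritis ./kritis-charts | grep -n -B 6 "kritis.grafeas.io/install"

helm template renders the manifests without submitting them to the API server, so you can inspect the generated metadata around the offending key even though the install itself fails validation.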