When trying to use the Helm lookup function, I do not get any result at all.
The Secret that I am trying to read looks like this:
apiVersion: v1
data:
  adminPassword: VG9wU2VjcmV0UGFzc3dvcmQxIQ==
  adminUser: YWRtaW4=
kind: Secret
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
  name: activemq-artemis-broker-secret
  namespace: common
type: Opaque
The Helm chart template that should load the adminUser and adminPassword data looks like this:
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: {{ .Values.labels.app }}
  namespace: common
spec:
  {{ $secret := lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret" }}
  adminUser: {{ $secret.data.adminUser }}
  adminPassword: {{ $secret.data.adminPassword }}
When deploying this using ArgoCD I get the following error:
failed exit status 1: Error: template: broker/templates/deployment.yaml:7:23:
executing "broker/templates/deployment.yaml" at <$secret.data.adminUser>:
nil pointer evaluating interface {}.adminUser Use --debug flag to render out invalid YAML
Both the Secret and the deployment are in the same namespace (common).
If I try to get the Secret with kubectl, it works, as shown below:
kubectl get secret activemq-artemis-broker-secret -n common -o json
{
  "apiVersion": "v1",
  "data": {
    "adminPassword": "VG9wU2VjcmV0UGFzc3dvcmQxIQ==",
    "adminUser": "YWRtaW4="
  },
  "kind": "Secret",
  "metadata": {
    "annotations": {
      "sealedsecrets.bitnami.com/cluster-wide": "true"
    },
    "creationTimestamp": "2022-10-10T14:40:49Z",
    "name": "activemq-artemis-broker-secret",
    "namespace": "common",
    "ownerReferences": [
      {
        "apiVersion": "bitnami.com/v1alpha1",
        "controller": true,
        "kind": "SealedSecret",
        "name": "activemq-artemis-broker-secret",
        "uid": "edff38fb-a966-47a6-a706-cb197ac1797d"
      }
    ],
    "resourceVersion": "127303988",
    "uid": "0679fc5c-7465-4fe1-9197-b483073e93c2"
  },
  "type": "Opaque"
}
What is wrong here? I use Helm version 3.8.1 and Go version 1.17.5.
This error is the result of two parts working together:
First, Helm's lookup only works against a running cluster, not when running helm template (without --validate). Run that way, it returns nil. (It is therefore usually written as lookup ... | default dict, so that a missing object yields empty values instead of a nasty nil-pointer error.)
Second, you're deploying with ArgoCD, which runs helm template internally when it deploys a Helm chart. See the open issue: https://github.com/argoproj/argo-cd/issues/5202 . The issue mentions a plugin that can be used to change this behaviour, but doing so requires reconfiguring ArgoCD itself, which is neither trivial nor free of side effects.
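A rough sketch of the guarded lookup (untested; it uses only built-in template functions and the same Secret name as above), so the template still renders when no cluster is reachable:
spec:
  {{- $secret := lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret" | default dict }}
  {{- with $secret.data }}
  adminUser: {{ .adminUser }}
  adminPassword: {{ .adminPassword }}
  {{- end }}
Note that under ArgoCD this only avoids the nil-pointer error; the rendered manifest will still be missing adminUser and adminPassword, because lookup returns nothing during helm template.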
Related
I want to read a YAML file and convert it to JSON in a Kubernetes Helm template. The template looks like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-file
data:
  sounds.json: |-
{{ .Files.Get "file/sounds.yaml" | toPrettyJson | indent 4 }}
and the file/sounds.yaml is:
animal:
  dog:
    sound: bark
  cat:
    sound: meow
  sheep:
    sound: baa
The output of the helm template command is:
$ helm template release-name chart-name
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-file
data:
  sounds.json: |-
    "animal:\n dog:\n sound: bark\n cat:\n sound: meow\n sheep:\n sound: baa\n"
but I want the result to be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-value
data:
  sounds.json: |-
    {
      "animal": {
        "cat": {
          "sound": "meow"
        },
        "dog": {
          "sound": "bark"
        },
        "sheep": {
          "sound": "baa"
        }
      }
    }
If I use .Values instead of .Files I am able to get the same result, but I need to do it with .Files only. Is there a function or some other way to achieve the expected result?
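A minimal sketch of one way to get there (untested, same file path as above): .Files.Get returns a plain string, and toPrettyJson on a string only JSON-encodes that string, so parse it first with Helm's fromYaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-file
data:
  sounds.json: |-
{{ .Files.Get "file/sounds.yaml" | fromYaml | toPrettyJson | indent 4 }}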
I am trying to deploy the Superset Helm chart with a customized image. There is no option to specify imagePullSecrets for the chart. I am using Kubernetes on DigitalOcean. I linked the repository and tested it using a basic deployment, and it "just works": the pods get the correct value for imagePullSecrets, and pulling just works.
However, when I try to install the Helm chart, the imagePullSecret that is used mysteriously gets a registry- prefix (there is already a -registry suffix, so it becomes registry-xxx-registry when it should just be xxx-registry). The values on the default service account are correct.
To illustrate, here are the default service accounts for both namespaces:
$ kubectl get sa default -n test -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:26:41Z"
  name: default
  namespace: test
  resourceVersion: "13125"
  uid: xxx-xxx
secrets:
- name: default-token-9ggrm
$ kubectl get sa default -n superset -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:19:47Z"
  name: default
  namespace: superset
  resourceVersion: "12079"
  uid: xxx-xxx
secrets:
- name: default-token-wkdhv
LGTM, but after trying to install the helm chart (which fails because of registry auth), I can see that the wrong secret is set on the pods:
$ kubectl get -n superset pods -o json | jq '.items[] | {name: .spec.containers[0].name, sa: .spec.serviceAccount, secret: .spec.imagePullSecrets}'
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "xxx-superset-postgresql",
  "sa": "default",
  "secret": [
    {
      "name": "xxx-registry"
    }
  ]
}
{
  "name": "redis",
  "sa": "xxx-superset-redis",
  "secret": null
}
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "superset-init-db",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
In the test namespace the secret name is correct. What is extra interesting is that postgres DOES have the correct secret name, and that comes in via a Helm dependency. So it seems like an issue in the Superset Helm chart is causing this, but no imagePullSecrets values are set anywhere in the templates. And as you can see above, the pods are using the default service account.
I have already tried destroying and recreating the whole cluster, but the problem recurs.
I have tried version 0.5.10 (latest) of the Helm chart and version 0.3.5; both result in the same issue.
https://github.com/apache/superset/tree/dafc841e223c0f01092a2e116888a3304142e1b8/helm/superset
https://github.com/apache/superset/tree/1.3/helm/superset
Ansible v2.9.25
I'm trying to merge two data structures with Ansible. I'm almost there but I'm not able to merge all data.
Let me explain:
I want to merge main_roles together with default_roles:
main_roles:
  - name: admin
    role_ref: admin
    subjects:
      - name: test
        kind: ServiceAccount
      - name: test2
        kind: ServiceAccount
  - name: edit
    role_ref: edit
    subjects:
      - name: test
        kind: ServiceAccount

default_roles:
  - name: edit
    role_ref: edit
    subjects:
      - name: merge_me
        kind: ServiceAccount
I'm successfully combining with the following task:
- name: "Setting var roles_managed"
set_fact:
roles_managed: "{{ roles_managed | default([]) + [ item | combine(default_roles|selectattr('name', 'match', item.name) |list)] }}"
loop: "{{ main_roles }}"
Printing the variable via a loop:
- name: "print all roles"
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ roles_managed }}"
ok: [] => (item={u'subjects': [{u'kind': u'ServiceAccount', u'name': u'test'}, {u'kind': u'ServiceAccount', u'name': u'test2'}], u'name': u'admin', u'role_ref': u'admin'}) => {
    "msg": {
        "name": "admin",
        "role_ref": "admin",
        "subjects": [
            {
                "kind": "ServiceAccount",
                "name": "test"
            },
            {
                "kind": "ServiceAccount",
                "name": "test2"
            }
        ]
    }
}
ok: [] => (item={u'subjects': [{u'kind': u'ServiceAccount', u'name': u'merge_me'}], u'name': u'edit', u'role_ref': u'edit'}) => {
    "msg": {
        "name": "edit",
        "role_ref": "edit",
        "subjects": [
            {
                "kind": "ServiceAccount",
                "name": "merge_me"
            }
        ]
    }
}
This results in a combine on item.name, but I want the result to also include a merge of the subjects. So I would need an end result that contains both merge_me and test (the subjects under name: edit):
- name: edit
  role_ref: edit
  subjects:
    - name: merge_me
      kind: ServiceAccount
    - name: test
      kind: ServiceAccount
As I understand it, Ansible does not merge recursively by default, so I would need to set recursive=true in the combine filter. See: Combining hashes/dictionaries.
But I'm not able to set this successfully in my context.
When I try, for example, {{ roles_managed | default([]) + [ item | combine(default_role_bindings, recursive=true|selectattr('name', 'match', item.name) |list)] }}, I get a 'bool' object is not iterable error...
I've tried many variations and searched many other posts, but I'm still unsuccessful after probably too many hours ;). Hoping someone has a solution!
Both main_roles and default_roles are lists. It's not possible to combine lists. Instead, you can add them and group them by name, then combine the dictionaries that share the same name, e.g.
- set_fact:
    my_roles: "{{ my_roles|d([]) + [item.1|combine(list_merge='append')] }}"
  loop: "{{ (main_roles + default_roles)|groupby('name') }}"
gives
my_roles:
  - name: admin
    role_ref: admin
    subjects:
      - kind: ServiceAccount
        name: test
      - kind: ServiceAccount
        name: test2
  - name: edit
    role_ref: edit
    subjects:
      - kind: ServiceAccount
        name: test
      - kind: ServiceAccount
        name: merge_me
Use list_merge='append', available since 2.10, to append the items of the lists.
If the append option is not available in your version, append the subjects on your own; e.g. the task below gives the same result:
- set_fact:
    my_roles: "{{ my_roles|d([]) + [item.1.0|combine({'subjects':_subj})] }}"
  loop: "{{ (main_roles + default_roles)|groupby('name') }}"
  vars:
    _subj: "{{ item.1|map(attribute='subjects')|flatten }}"
I am trying to build ConfigMap data directly from values.yaml in Helm.
My values.yaml:
myconfiguration: |-
key1: >
{ "Project" : "This is config1 test"
}
key2 : >
{
"Project" : "This is config2 test"
}
And the ConfigMap template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-{{ .Release.Namespace }}
data:
{{.Values.myconfiguration | indent 1}}
But the data is empty when I check it in the cluster:
Name:         poc-secrets-configmap-xxx
Namespace:    xxx
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: poc-secret-xxx
              meta.helm.sh/release-namespace: xxx

Data
====
Events:  <none>
Can anyone suggest a fix?
You are missing indentation in your values.yaml file; check YAML Multiline:
myconfiguration: |-
  key1: >
    { "Project" : "This is config1 test"
    }
  key2 : >
    {
      "Project" : "This is config2 test"
    }
Also, the suggested convention for YAML files is to use 2 spaces for indentation, so you may want to change your ConfigMap template to use {{ .Values.myconfiguration | indent 2 }}.
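Putting both fixes together, a minimal sketch of the template (same names as above, untested):
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-{{ .Release.Namespace }}
data:
{{ .Values.myconfiguration | indent 2 }}
With the indented values.yaml shown above, this renders key1 and key2 as the ConfigMap's data keys.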
I'm trying to upgrade a Helm chart.
I get the error function "pod" not defined, which makes sense because I really have no such function.
The "pod" comes from a JSON file that I turn into a ConfigMap, and Helm reads this value as a template function instead of as a plain string that is part of the JSON.
This is a snippet of my configmap:
# Generated from 'pods' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
# Do not change in-place! In order to change this file first read following link:
# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "services-health" | trunc 63 | trimSuffix "-" }}
  labels:
    {{- if $.Values.grafana.sidecar.dashboards.label }}
    {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
    {{- end }}
    app: {{ template "prometheus-operator.name" $ }}-grafana
{{ include "prometheus-operator.labels" $ | indent 4 }}
data:
  services-health.json: |-
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "targets": [
        {
          "expr": "{__name__=~\"kube_pod_container_status_ready\", container=\"aggregation\",kubernetes_namespace=\"default\",chart=\"\"}",
          "format": "time_series",
          "instant": false,
          "intervalFactor": 2,
          "legendFormat": "{{pod}}",
          "refId": "A"
        }
      }
{{- end }}
The error comes from this line: "legendFormat": "{{pod}}",
And this is the error I get:
helm upgrade --dry-run prometheus-operator-chart
/home/ubuntu/infra-devops/helm/vector-chart/prometheus-operator-chart/
Error: UPGRADE FAILED: parse error in "prometheus-operator/templates/grafana/dashboards/services-health.yaml":
template:
prometheus-operator/templates/grafana/dashboards/services-health.yaml:1213:
function "pod" not defined
I tried to escape it, but nothing worked.
Does anyone have an idea how I can work around this issue?
Escaping gotpl placeholders is possible using backticks. For example, in your scenario, instead of using {{ pod }} you could write {{` {{ pod }} `}}.
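Applied to the failing line, that would look something like this (sketch, with the backticks hugging the placeholder so no extra spaces end up in the legend):
"legendFormat": "{{`{{pod}}`}}",
The backtick-quoted raw string is emitted as-is, so the rendered dashboard JSON ends up with "legendFormat": "{{pod}}", again.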
Move your dashboard JSON to a separate file; let's name it dashboard.json.
Then, in your ConfigMap template, instead of inlining the JSON, reference the dashboard.json file like this:
data:
  services-health.json: |-
{{ .Files.Get "dashboard.json" | indent 4 }}
That would solve the problem!
In my experiments, I replaced
"legendFormat": "{{ pod }}",
with
"legendFormat": "{{ "{{ pod }}" }}",
and it happily returned the syntax I needed (specifically for the grafana-operator GrafanaDashboard CRD).
Keeping the JSON file out of the ConfigMap and sourcing it from within the ConfigMap works, but make sure to keep the JSON file outside the templates directory when using Helm, or else Helm will still try to parse {{ pod }} as a template function.
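As a sketch of the chart layout this implies (file names are illustrative; only the placement relative to templates/ matters):
mychart/
  Chart.yaml
  values.yaml
  dashboard.json          # read via .Files.Get "dashboard.json"; not rendered as a template
  templates/
    services-health.yaml  # the ConfigMap that references dashboard.json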