I am trying to figure out how to escape these pieces of a YAML file in order to use them with Helm.
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}]'
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
The YAML file is a DaemonSet for sending logs to Papertrail; the instructions for a standard manual Kubernetes deployment are here: https://help.papertrailapp.com/kb/configuration/configuring-centralized-logging-from-kubernetes/ . Here is a link to the full YAML file: https://help.papertrailapp.com/assets/files/papertrail-logspout-daemonset.yml .
I found some answers on how to escape the curly braces and quotes, but I still can't seem to get it to work. It would be easiest if there were some way to get Helm to simply not evaluate each entire value.
The last thing I tried was this, but it still results in an error:
value: ''"{{" index .Container.Config.Labels \"io.kubernetes.pod.namespace\" "}}"["{{" index .Container.Config.Labels \"io.kubernetes.pod.name\" "}}"]''
- name: SYSLOG_HOSTNAME
  value: ''"{{" index .Container.Config.Labels \"io.kubernetes.container.name\" "}}"''
This is the error:
Error: UPGRADE FAILED: YAML parse error on templates/papertrail-logspout-daemonset.yml: error converting YAML to JSON: yaml: line 21: did not find expected key
I can hardcode values for both of these and it works fine. I don't quite understand how these env variables work, but what happens is that logs are sent to Papertrail for each pod in a node, tagged with labels from each of those pods: namespace, pod name, and container name.
For reference, here is the env block in the DaemonSet template where these variables need to end up:
env:
  - name: ROUTE_URIS
    value: "{{ .Values.backend.log.destination }}"
{{ .Files.Get "files/syslog_vars.yaml" | indent 13 }}
Two sensible approaches come to mind.
One is to define a template that expands to the string {{, at which point you can use that in your variable expansion. You don't need to specially escape }}.
{{- define "cc" }}{{ printf "{{" }}{{ end -}}
- name: SYSLOG_HOSTNAME
value: '{{cc}} index .Container.Config.Labels "io.kubernetes.container.name" }}'
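If that renders as intended, the text that actually reaches Kubernetes should be the literal logspout template, roughly:
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'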
A second approach, longer-winded but with less escaping, is to create an external file that has these environment variable fragments.
# I am files/syslog_vars.yaml
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
Then you can include the file. This doesn't apply any templating in the file, it just reads it as literal text.
env:
{{ .Files.Get "files/syslog_vars.yaml" | indent 2 }}
The important point with this last technique (and the source of the problem in the question) is that Helm reads the template file, expands all of the templating, and then tries to interpret the resulting text as YAML. The indent 2 part of this needs to match whatever the rest of your env: block has; if this is deep inside a deployment spec it might need to be 8 or 10 spaces. helm template will render a chart to text without installing anything, which is really helpful for debugging.
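For example, rendering the chart locally (the chart path here is just a placeholder) lets you check that the included block lands at the right indentation before you install or upgrade:
helm template ./my-chart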
Related
I want to create a few Pods from the same image (I have the Dockerfile), so I want to use ReplicaSets, but the final CMD command needs to be different for each container.
For example (https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
Also, I would like to move the CMD value out of the list, so the value there would be a variable (it will be in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container:
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to copy the whole file. Most of the boilerplate is Kubernetes boilerplate that every Deployment will have; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
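As a sketch (the file name and the final "start" word are placeholders, not from the question), the second Deployment could reuse the shared container helper and change only the trailing words; if the image uses an entrypoint wrapper, args: keeps that wrapper in place:
# templates/deployment-two.yml (metadata and spec boilerplate as in deployment-one.yml)
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          args:
            - npm
            - run
            - start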
In values.yaml I have defined
data_fusion:
  tags:
    - tag1
    - tag2
    - tag3
  instances:
    - name: test
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Developer
        displayName: test
        dataprocServiceAccount: 'my#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep
          pipeline: '{"my_json":{}}'
    - name: test222
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Basic
        displayName: test222
        dataprocServiceAccount: 'my222#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep222
          pipeline: '{"my_json222":{}}'
        - name: test_rep333
          pipeline: '{"my_json333":{}}'
        - name: test_rep444
          pipeline: '{"my_json444":{}}'
As you can see, I have 3 tags and 2 instances. The first instance contains 1 pipeline and the second instance contains 3 pipelines.
I want to pass tags and instances to my yaml file:
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  tags:
    - array of tags should be here
  instances:
    - array of instances (and associated pipelines) should be here
Or just simply
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  {{ .Whole.things.should.be.here }}
Could you please help? I'm new to Helm, so I don't know how to pass a complicated array (or a whole big section of YAML).
Helm includes an underdocumented toYaml function that converts an arbitrary object to YAML syntax. Since YAML is whitespace-sensitive, it's useful to note that toYaml's output starts at the first column and ends with a newline. You can combine this with the indent function to make the output appear at the correct indentation.
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  data_fusion: |-
{{ .Values.data_fusion | toYaml | indent 4 }}
Note that the last line includes indent 4 to indent the resulting YAML block (two spaces more than the previous line), and that there is no white space before the template invocation.
In this example I've included the content as a YAML block scalar (the |- on the second-to-last line) inside a ConfigMap, but you can use this same technique anywhere you've configured complex settings in Helm values, even if it's Kubernetes settings for things like resource constraints or ingress paths.
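For example (these value names are just an illustration, not from the question), container resource settings are often passed through the same way, adjusting the indent to match wherever resources: sits in your pod spec:
# values.yaml (hypothetical)
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

# in the deployment template
resources:
{{ .Values.resources | toYaml | indent 2 }}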
While composing a Helm chart with a few sub-charts I've encountered a collision. Long story short, I'm creating a config with some value, and the config name is generated, but the subchart expects that generated name to be referenced directly in the values.yaml file.
It's actually a service with a PG database, and I'm trying to install prometheus-postgres-exporter to enable Prometheus monitoring for the DB. But that's not the point.
So, I have some config for building the DB connection string:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "myapp.fullname" . }}-secret
type: Opaque
data:
  PG_CONN_STRING: {{ printf "postgresql://%s:%s@%s:%s/%s" .Values.postgresql.postgresqlUsername .Values.postgresql.postgresqlPassword (include "postgresql.fullname" .) .Values.postgresql.service.port .Values.postgresql.postgresqlDatabase | b64enc | quote }}
Okay, that works fine. However, when I try to install prometheus-postgres-exporter, it requires naming the specific secret from which the DB connection string can be obtained. The problem is that the name is generated, so I can't provide an exact value. I'm not sure how to reference it. Obviously, passing the same template code doesn't work, since template expansion is single-pass, not recursive.
prometheus-postgres-exporter:
  serviceMonitor:
    enabled: true
  config:
    datasourceSecret:
      name: "{{ include "myapp.fullname" . }}-secret" # Doesn't work, unfortunately
      key: "PG_CONN_STRING"
Is there a known way to overcome this other than hardcoding values?
I have a helmfile:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
{{ toYaml .Values.hooks }}
and a values file:
hooks:
  - events: [ "presync" ]
    showlogs: true
    command: "bash"
    args: [ "args" ]
I want to pass the hooks from the values file. How can I do it?
I tried many ways and I got an error.
This is the command:
helmfile --file ./myhelmfile.yaml sync
failed to read myhelmfile.yaml: reading document at index 1: yaml: line 26: did not find expected '-' indicator
What you are trying to do is inline part of the values file into your template, so you need to take care of the indentation.
In your case I think it'll be something like this:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
You can find a working example of a similar case here.
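If it helps with debugging (assuming a reasonably recent helmfile), helmfile build prints the fully rendered state file, so you can verify that the hooks block ends up at the expected indentation before running sync:
helmfile --file ./myhelmfile.yaml build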
I am creating a Helm chart for my app. In the templates directory, I have a config-map.yaml with this in it:
{{- with .Values.xyz }}
xyz.abc-def: {{ .abc-def }}
{{- end }}
When I try to run helm install, I get:
Error: parse error in "config-map.yaml": template:config-map.yaml:2: unexpected bad character U+002D '-' in command.
Is there a way to use dashes in the name and variable for helm?
It might be worth trying the index function:
xyz.abc-def: {{ index .Values.xyz "abc-def" }}
It looks like Helm still doesn't allow hyphens in variable names (or in subchart names), and index is the workaround.
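Applied to the config map from the question, a minimal sketch (assuming xyz is defined in your values) would look like this:
{{- with .Values.xyz }}
xyz.abc-def: {{ index . "abc-def" }}
{{- end }}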
I faced the same issue because the resource defined had a '-' in its name:
resources:
{{- with .Values.my-value }}
After I removed the '-', the error disappeared:
resources:
{{- with .Values.myvalue }}