In values.yaml I have defined
data_fusion:
  tags:
    - tag1
    - tag2
    - tag3
  instances:
    - name: test
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Developer
        displayName: test
        dataprocServiceAccount: 'my#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep
          pipeline: '{"my_json":{}}'
    - name: test222
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Basic
        displayName: test222
        dataprocServiceAccount: 'my222#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep222
          pipeline: '{"my_json222":{}}'
        - name: test_rep333
          pipeline: '{"my_json333":{}}'
        - name: test_rep444
          pipeline: '{"my_json444":{}}'
You can see I have 3 tags and 2 instances; the first instance contains 1 pipeline and the second contains 3. I want to pass the tags and instances through to my template YAML:
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  tags:
    - array of tags should be here
  instances:
    - array of instances (and associated pipelines) should be here
Or just simply
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  {{ .Whole.things.should.be.here }}
Could you please help? I'm new to Helm, so I don't know how to pass a complicated array (or a whole big section of YAML).
Helm includes an underdocumented toYaml function that converts an arbitrary object to YAML syntax. Since YAML is whitespace-sensitive, it's useful to note that toYaml's output starts at the first column and ends with a newline. You can combine this with the indent function to make the output appear at the correct indentation.
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  data_fusion: |-
{{ .Values.data_fusion | toYaml | indent 4 }}
Note that the last line includes indent 4 to indent the resulting YAML block (two spaces more than the previous line), and that there is no white space before the template invocation.
In this example I've included the content as a YAML block scalar (the |- on the second-to-last line) inside a ConfigMap, but you can use this same technique anywhere you've configured complex settings in Helm values, even if it's Kubernetes settings for things like resource constraints or ingress paths.
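For example, with the values file from the question, the rendered ConfigMap comes out roughly like this (abridged; note that toYaml emits map keys in alphabetical order, so the ordering can differ from the values file):

apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  data_fusion: |-
    instances:
    - config:
        displayName: test
        type: Developer
      location: us-west1
      name: test
      pipelines:
      - name: test_rep
        pipeline: '{"my_json":{}}'
    - ...
    tags:
    - tag1
    - tag2
    - tag3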
Related
I want to create a few pods from the same image (I have the Dockerfile), so I want to use ReplicaSets, but the final CMD command needs to be different for each container. For example (https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container would run:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
I would also like the CMD value to come from a variable rather than a hard-coded list (this will run in a loop, so each Pod will have to be created separately). Is that possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container:
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  # apps/v1 requires a selector matching the Pod template's labels
  selector:
    matchLabels:
{{ include "myapp.labels" . | indent 6 }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
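For example, a hypothetical second Deployment for an image with an entrypoint wrapper might override only args:, leaving the wrapper as the entrypoint (the npm arguments here are illustrative, and the file is abridged to the container section):

# templates/deployment-two.yml (abridged sketch)
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          # args: replaces the image's CMD; the ENTRYPOINT wrapper still runs first
          args:
            - npm
            - run
            - prod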
How to append a list to another list inside a dictionary using Helm?
I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles to the one below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git  # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD  # For Helm, this refers to the chart version.
    path: guestbook  # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
    # helm specific config
    chart: chart-name  # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false  # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true  # ensures that value is treated as a string
      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
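With the values.yaml shown above, this renders to:

valueFiles:
  - values-prod.yaml
  - myvalues1.yaml
  - myvalues2.yaml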
I have a helmfile:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
      {{ toYaml .Values.hooks }}
and a values file:
hooks:
  - events: [ "presync" ]
    showlogs: true
    command: "bash"
    args: [ "args" ]
I want to pass the hooks from the values file. How can I do it? I tried many ways and got an error. This is the command:
helmfile --file ./myhelmfile.yaml sync
failed to read myhelmfile.yaml: reading document at index 1: yaml: line 26: did not find expected '-' indicator
What you're trying to do is inline part of the values file into your template, so you need to get the indentation right. In your case I think it'll be something like this:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
You can find a working example of a similar case here.
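For reference, with the values file above, the rendered release should come out something like this (toYaml expands the flow-style lists into block style and may reorder keys):

releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
      - events:
        - presync
        showlogs: true
        command: bash
        args:
        - args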
bases:
  - common.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
      - events: [ "presync" ]
        ....
      - events: [ "postsync" ]
        .....
common.yaml
environments:
  default:
    values:
      - values/common-values.yaml
common-values
a: b
I want to move the values of the hooks to a separate file. When I added them to common-values it worked, but I want to put them in a different file rather than the common one, so I tried adding another base:
bases:
  - common.yaml
  - hooks.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
hooks.yaml
environments:
  default:
    values:
      - values/hooks-values.yaml
hooks-values.yaml
hooks:
  - events: [ "presync" ]
    ....
  - events: [ "postsync" ]
    .....
but I got an error
parsing: template: stringTemplate:21:21: executing "stringTemplate" at <.Values.hooks>: map has no entry for key "hooks"
I also tried to change it to
hooks:
  - values/hooks-values.yaml
and I got an error
line 22: cannot unmarshal !!str values/... into event.Hook
I think the first issue is that when you specify both common.yaml and hooks.yaml under bases:, they are not merged properly. Since they provide the same keys, most probably the one included later under bases: overrides the other.
To solve that you can use a single entry in bases in helmfile:
bases:
  - common.yaml
and then add your value files to common.yaml:
environments:
  default:
    values:
      - values/common-values.yaml
      - values/hooks-values.yaml
I don't claim this is best practice, but it should work :)
The second issue is that bases is treated specially: helmfile.yaml is rendered before base layering is processed, so your values (coming from bases) are not yet available at the point where you reference them directly in the helmfile. If you embedded environments directly in the helmfile, it would be fine. But if you want to keep using bases, there seem to be a couple of workarounds, the simplest being to add --- after the bases section, as explained in the next comment on the same thread.
So, a working version of your helmfile could be:
bases:
  - common.yaml
---
releases:
  - name: controller
    chart: stable/nginx
    version: 1.24.1
    values:
      - values/controller-values.yaml
    hooks:
      {{- toYaml .Values.hooks | nindent 6 }}
PS: chart: stable/nginx was just chosen arbitrarily so that helmfile build succeeds.
I am trying to figure out how to escape these pieces of a YAML file in order to use them with Helm.
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}]'
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
The YAML file is a DaemonSet for sending logs to Papertrail; instructions for a standard manual Kubernetes deployment are at https://help.papertrailapp.com/kb/configuration/configuring-centralized-logging-from-kubernetes/ and the full YAML file is at https://help.papertrailapp.com/assets/files/papertrail-logspout-daemonset.yml
I found some answers on how to escape the curly braces and quotes, but still can't seem to get it to work. It would be easiest if there was some way to just get helm to not evaluate each entire value.
The last thing I tried was this, but it still results in an error.
  value: ''"{{" index .Container.Config.Labels \"io.kubernetes.pod.namespace\" "}}"["{{" index .Container.Config.Labels \"io.kubernetes.pod.name\" "}}"]''
- name: SYSLOG_HOSTNAME
  value: ''"{{" index .Container.Config.Labels \"io.kubernetes.container.name\" "}}"''
This is the error:
Error: UPGRADE FAILED: YAML parse error on templates/papertrail-logspout-daemonset.yml: error converting YAML to JSON: yaml: line 21: did not find expected key
I can hardcode values for both of these and it works fine. I don't quite understand how these env variables work, but what happens is that logs are sent to Papertrail for each pod on a node, tagged with the labels from each of those pods: namespace, pod name, and container name.
env:
  - name: ROUTE_URIS
    value: "{{ .Values.backend.log.destination }}"
{{ .Files.Get "files/syslog_vars.yaml" | indent 13 }}
Two sensible approaches come to mind.
One is to define a template that expands to the string {{, at which point you can use that in your variable expansion. You don't need to specially escape }}.
{{- define "cc" }}{{ printf "{{" }}{{ end -}}
- name: SYSLOG_HOSTNAME
  value: '{{ template "cc" }} index .Container.Config.Labels "io.kubernetes.container.name" }}'
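When Helm renders this, the cc template expands back into the literal {{, so the output is exactly the string logspout expects:

- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'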
A second approach, longer-winded but with less escaping, is to create an external file that has these environment variable fragments.
# I am files/syslog_vars.yaml
- name: SYSLOG_HOSTNAME
value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
Then you can include the file. This doesn't apply any templating in the file, it just reads it as literal text.
env:
{{ .Files.Get "files/syslog_vars.yaml" | indent 2 }}
The important point with this last technique, and the problem you're encountering in the question, is that Helm reads an arbitrary file, expands all of the templating, and then tries to interpret the resulting text as YAML. The indent 2 part of this needs to match whatever the rest of your env: block has; if this is deep inside a deployment spec it might need to be 8 or 10 spaces. helm template will render a chart to text without trying to do additional processing, which is really helpful for debugging.
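For example (the release and chart names here are placeholders):

# Render the whole chart locally without installing anything
helm template my-release ./my-chart
# Render only the problematic file
helm template my-release ./my-chart --show-only templates/papertrail-logspout-daemonset.yml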