bases:
  - common.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
      - events: [ "presync" ]
        ...
      - events: [ "postsync" ]
        ...
common.yaml
environments:
  default:
    values:
      - values/common-values.yaml
common-values.yaml
a: b
I want to move the hooks values into their own file. When I added them to the common values it worked, but I want to keep them in a separate file rather than in the common one, so I tried adding another base:
bases:
  - common.yaml
  - hooks.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
hooks.yaml
environments:
  default:
    values:
      - values/hooks-values.yaml
hooks-values.yaml
hooks:
  - events: [ "presync" ]
    ...
  - events: [ "postsync" ]
    ...
but I got an error
parsing: template: stringTemplate:21:21: executing "stringTemplate" at <.Values.hooks>: map has no entry for key "hooks"
I also tried changing it to
hooks:
  - values/hooks-values.yaml
and I got an error
line 22: cannot unmarshal !!str values/... into event.Hook
I think the first issue is that when you specify both common.yaml and hooks.yaml under bases:, they are not merged properly. Since they provide the same keys, most probably the one included later under bases: overrides the other.
To solve that you can use a single entry in bases in helmfile:
bases:
- common.yaml
and then add your value files to common.yaml:
environments:
  default:
    values:
      - values/common-values.yaml
      - values/hooks-values.yaml
I don't claim this is best practice, but it should work :)
The second issue is that bases is treated specially: helmfile.yaml is rendered before base layering is processed, so your values (coming from bases) are not yet available at the point where you reference them directly in the helmfile. If you embedded environments directly in the helmfile, it would be fine. But if you want to keep using bases, there seem to be a couple of workarounds, and the simplest seemed to be adding --- after bases, as explained in the next comment on the same thread.
So, a working version of your helmfile could be:
bases:
  - common.yaml
---
releases:
  - name: controller
    chart: stable/nginx
    version: 1.24.1
    values:
      - values/controller-values.yaml
    hooks:
{{ toYaml .Values.hooks | nindent 6 }}
PS: chart: stable/nginx was chosen arbitrarily, just so that helmfile build can run.
In values.yaml I have defined
data_fusion:
  tags:
    - tag1
    - tag2
    - tag3
  instances:
    - name: test
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Developer
        displayName: test
        dataprocServiceAccount: 'my#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep
          pipeline: '{"my_json":{}}'
    - name: test222
      location: us-west1
      creating_cron: #once
      removing_cron: #once
      config:
        type: Basic
        displayName: test222
        dataprocServiceAccount: 'my222#project.iam.gserviceaccount.com'
      pipelines:
        - name: test_rep222
          pipeline: '{"my_json222":{}}'
        - name: test_rep333
          pipeline: '{"my_json333":{}}'
        - name: test_rep444
          pipeline: '{"my_json444":{}}'
As you can see, I have 3 tags and 2 instances: the first instance contains 1 pipeline, the second contains 3 pipelines.
I want to pass tags and instances to my yaml file:
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  tags:
    - array of tags should be here
  instances:
    - array of instances (and associated pipelines) should be here
Or just simply
another_config: {{ .Values.blabla.blablabla }}
data_fusion:
  {{ .Whole.things.should.be.here }}
Could you please help? I'm new to Helm, so I don't know how to pass a complicated array (or a whole big section of YAML).
Helm includes an underdocumented toYaml function that converts an arbitrary object to YAML syntax. Since YAML is whitespace-sensitive, it's useful to note that toYaml's output starts at the first column and ends with a newline. You can combine this with the indent function to make the output appear at the correct indentation.
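The mechanics are easy to see outside of Helm. Here is a rough Python analogy (not Helm itself; `textwrap.indent` behaves like Helm's `indent`, prefixing every line of an already-rendered block):

```python
import textwrap

# What toYaml hands back: a YAML block starting at column 0, ending with a newline.
rendered = "tags:\n- tag1\n- tag2\n"

# Helm's `indent 4` prefixes every line with four spaces, like textwrap.indent.
indented = textwrap.indent(rendered, "    ")

print(indented)
```

Because the prefix is applied to every line, the whole block shifts right as a unit, which is exactly what nested YAML needs.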
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  data_fusion: |-
{{ .Values.data_fusion | toYaml | indent 4 }}
Note that the last line includes indent 4 to indent the resulting YAML block (two spaces more than the previous line), and that there is no white space before the template invocation.
In this example I've included the content as a YAML block scalar (the |- on the second-to-last line) inside a ConfigMap, but you can use this same technique anywhere you've configured complex settings in Helm values, even if it's Kubernetes settings for things like resource constraints or ingress paths.
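For instance, the same pattern works for resource constraints. A sketch (the `.Values.resources` key here is a conventional name I'm assuming, not something from the question):

```yaml
# deployment.yaml (fragment): resources comes straight from values.yaml
containers:
  - name: app
    resources:
{{ .Values.resources | toYaml | indent 6 }}
```

As before, the template invocation starts at the first column and `indent 6` lines the block up under `resources:`.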
I have an Argo CD ApplicationSet created, with the following merge keys set up:
generators:
  - merge:
      mergeKeys:
        - path
      generators:
        - matrix:
            generators:
              - git:
                  directories:
                    - path: aws-ebs-csi-driver
                    - path: cluster-autoscaler
                  repoURL: >-
                    ...
                  revision: master
              - clusters:
                  selector:
                    matchLabels:
                      argocd.argoproj.io/secret-type: cluster
        - list:
            elements:
              - path: aws-ebs-csi-driver
                namespace: system
              - path: cluster-autoscaler
                namespace: system
Syncing the application set however generates:
- lastTransitionTime: "2022-08-08T21:54:05Z"
  message: the parameters from a generator were not unique by the given mergeKeys,
    Merge requires all param sets to be unique. Duplicate key was {"path":"aws-ebs-csi-driver"}
  reason: ApplicationGenerationFromParamsError
  status: "True"
Any help is appreciated.
The matrix generator is producing one set of parameters for each combination of directory and cluster.
If there is more than one cluster, then there will be one parameter set with path: aws-ebs-csi-driver for each cluster.
The merge generator requires that each parameter used as a merge key be completely unique. That mode was the original design of the merge generator, but more modes may be supported in the future.
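A rough Python sketch of why the duplicates appear (the param sets below are illustrative, not what Argo CD literally produces internally):

```python
from itertools import product
from collections import Counter

# The matrix generator is roughly a cartesian product of its child generators.
directories = [{"path": "aws-ebs-csi-driver"}, {"path": "cluster-autoscaler"}]
clusters = [{"cluster": "cluster-a"}, {"cluster": "cluster-b"}]  # two matching clusters

param_sets = [{**d, **c} for d, c in product(directories, clusters)]

# Merging by the key "path" now sees each path once per cluster.
counts = Counter(p["path"] for p in param_sets)
print(counts)
```

With two clusters, every `path` value occurs twice, which is exactly the duplicate-key error the merge generator reports.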
Argo CD v2.5 will support go templated ApplicationSets, which might provide an easier way to solve your problem.
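For reference, a go-templated ApplicationSet might look roughly like this (a hedged sketch against the v2.5 `goTemplate` feature; the repo URL and destination are placeholders I've made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addons
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://example.com/repo.git  # placeholder
        revision: master
        directories:
          - path: '*'
  template:
    metadata:
      name: '{{ .path.basename }}'
    spec:
      project: default
      source:
        repoURL: https://example.com/repo.git  # placeholder
        targetRevision: master
        path: '{{ .path.path }}'
      destination:
        server: https://kubernetes.default.svc
        namespace: system
```

With go templates you get functions like `default` and `dig`, which can make per-cluster overrides easier than wiring up a merge generator.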
I have this helmfile:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
{{ toYaml .Values.hooks }}
and this values file:
hooks:
  - events: [ "presync" ]
    showlogs: true
    command: "bash"
    args: [ "args" ]
I want to pass the hooks in from the values file. How can I do it? I tried several approaches and kept getting an error.
This is the command
helmfile --file ./myhelmfile.yaml sync
failed to read myhelmfile.yaml: reading document at index 1: yaml: line 26: did not find expected '-' indicator
What you're trying to do is inline part of the values file into your template, so you need to handle the indentation properly.
In your case I think it'll be something like this:
releases:
  - name: controller
    values:
      - values/valuedata.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
You can find a working example of a similar case here.
I've been making some tests with a Kubernetes cluster and I installed the loki-promtail stack by means of the helm loki/loki-stack chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard promtail config.
According to the Promtail documentation I tried to customise the values.yaml in this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test just to get familiar with this environment).
What I see is that this configuration is correctly applied to the Loki config map but has no effect: the log lines look exactly as if this additional configuration weren't there.
The loki-stack chart version is 0.39.0 which installs loki 1.5.0.
I cannot see any error in the loki/promtails logs... Any suggestion?
I finally discovered the issue, so I'm posting what I found in case it helps anyone else with the same problem.
In order to modify the log text or to add custom labels, the correct values.yaml section to provide is pipelineStages instead of extraScrapeConfigs. Then, the previous snippet must be changed in the following way:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
I have a fairly large playbook that is capable of updating up to 10 services on a given host.
Let's say I have the services a, b, c, and d, and I'd like to be able to selectively update them by passing command-line arguments, but default to updating everything when no arguments are passed. How could you do this in Ansible without dropping into arbitrary scripting?
Right now what I have is a when check on each service, and I define whether each service is true at playbook invocation. Given I may have as many as 10 services, I can't write boolean logic to accommodate every possibility.
I was hoping there is maybe a builtin like $# in bash that lists all arguments and I can do a check along the lines of when: $#.length = 0
ansible-playbook deploy.yml -e "a=true b=true d=true"
when: a == "true"
when: b == "true"
when: c == "true"
when: d == "true"
I would suggest using tags. Let's say we have two services, for example nginx and fpm. Then tag the nginx play with nginx and the fpm play with fpm. Below is an example of task-level tagging; let's say it's named play.yml:
- name: tasks for nginx
  service: name=nginx state=reloaded
  tags:
    - nginx

- name: tasks for php-fpm
  service: name=php-fpm state=reloaded
  tags:
    - fpm
Executing ansible-playbook play.yml will by default run both tasks. But if I change the command to
ansible-playbook play.yml --tags "nginx"
then only the task with nginx tag is executed. Tags can also be applied over play level or role level.
Play level tagging would look like
- hosts: all
  remote_user: user
  tasks:
    - include: play1.yml
      tags:
        - play1
    - include: play2.yml
      tags:
        - play2
In this case, all tasks inside the playbook play1.yml will inherit the tag play1, and the same for play2. While running ansible-playbook with the tag play1, all tasks inside play1.yml are executed. If we don't specify any tag, all tasks from play1 and play2 are executed.
Note: A task is not limited to just one tag.
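For example, a single task can carry several tags (a hypothetical task; the tag names are made up):

```yaml
- name: reload nginx
  service: name=nginx state=reloaded
  tags:
    - nginx
    - web
```

Running with either --tags "nginx" or --tags "web" would execute it.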
If you have a single play that you want to loop over the services, define that list in group_vars/all or somewhere else that makes sense:
services:
  - first
  - second
  - third
  - fourth
Then your tasks in start_services.yml playbook can look like this:
- name: Ensure passed variables are in services list
  fail:
    msg: "{{ item }} not in services list"
  when: item not in services
  with_items: "{{ varlist | default(services) }}"

- name: Start services
  service:
    name: "{{ item }}"
    state: started
  with_items: "{{ varlist | default(services) }}"
Pass in varlist as a JSON array:
$ ansible-playbook start_services.yml --extra-vars='{"varlist":["first","third"]}'