How do you add multiple config files to configMap with kustomize configMapGenerator by using a pattern/regex/...?

Currently I do this:
configMapGenerator:
- name: sql-config-map
  files:
  - "someDirectory/one.sql"
  - "someDirectory/two.sql"
  - "someDirectory/three.sql"
and I would like to do something like this:
configMapGenerator:
- name: sql-config-map
  files:
  - "someDirectory/*.sql"
Is this somehow possible?

Nope.
See the discussion of that feature in the comments on "configMapGenerator should allow directories as input".
The main reason:
To move towards explicit dependency declaration, we're moving away from allowing globs in the kustomization file

This command works fine and will edit your kustomization.yaml:
kustomize edit add configmap my-configmap --from-file="$PWD/my-files/*"
The my-files directory has to be in the same folder as the kustomization.yaml file.
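If you would rather script this yourself, a shell loop can add each matching file explicitly. This is a minimal sketch, assuming repeated kustomize edit add configmap calls with the same name are merged into a single generator entry (behavior may differ between kustomize versions):

#!/bin/bash
# Run from the directory that contains kustomization.yaml.
# Adds every .sql file under someDirectory/ to the sql-config-map generator.
for f in someDirectory/*.sql; do
  kustomize edit add configmap sql-config-map --from-file="$f"
done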

Related

Helm template - Overriding values file

I am currently trying to test some changes, specifically to see if a chart picks up/inherits changes from a top level values file. This top level values file should override any settings in the values file for this chart. To test this, I am trying to use the following command:
helm template --values path/to/top/level/values.yaml path/to/chart > output.yaml
However, when viewing the output for this, the chart still retains the values defined in the chart, and not the values that have been set in the top level values file.
I have tried a number of variations of this command, such as:
helm template path/to/chart --values path/to/top/level/values.yaml > output.yaml
helm template -f path/to/chart/values.yaml --values path/to/top/level/values.yaml > output.yaml
helm template path/to/top/level/values.yaml --values path/to/chart > output.yaml
Am I using this command correctly? Is what I am trying to achieve only possible when doing a helm install or upgrade? e.g. https://all.docs.genesys.com/PrivateEdition/Current/PEGuide/HelmOverrides
Overriding values from a parent (you call it top-level) chart mychart works like a charm and exactly as described in the Helm docs.
A values.yaml in folder mychart/charts/mysubchart
dessert: cake
can be overridden by a values.yaml in folder mychart
mysubchart:
  dessert: ice cream
Any directives inside of the mysubchart section will be sent to the mysubchart chart.
Rendering the parent (top-level) chart works like that:
helm template mychart -f mychart/values.yaml
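Note that several --values/-f files can be passed in one command and are merged, with later files taking precedence over earlier ones. A minimal sketch, using a hypothetical override file name:

helm template mychart -f mychart/values.yaml -f top-level-overrides.yaml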
What if you want to combine values from 2 yaml files, is that possible?
Example:
values.yaml:
blackBoxSidecar:
  enabled: true
  targets:
  - target: esb:443
    module: tcp_connect
values-namespace.yaml:
blackBoxSidecar:
  targets:
  - target: rabbitmq-namespace:443
    module: tcp_connect
What I want to get is:
blackBoxSidecar:
  enabled: true
  targets:
  - target: esb:443
    module: tcp_connect
  - target: rabbitmq-namespace:443
    module: tcp_connect
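For reference, Helm merges values maps key by key but replaces lists wholesale rather than concatenating them, so passing both files (with values-namespace.yaml last) would actually produce something like this instead of the combined list above:

blackBoxSidecar:
  enabled: true
  targets:
  - target: rabbitmq-namespace:443
    module: tcp_connect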

Argo Events file event source does not detect file

I am building a processing pipeline for genomic data for my master thesis and I am using Argo.
Basically, I have a fully functioning processing workflow implemented in Argo Workflows and now I am trying to create an EventSource for detecting when a folder is written by the sequencer (then the folder name should be passed to the workflow through a Sensor).
The first problem is that the sequencer takes some time to write all the data, thus I cannot start the workflow as soon as the base directory is created. Therefore, the idea is to wait for a specific file inside the new run folder to be created, then start the workflow.
To simulate this, I am copying an old run folder into the watched directory.
Now, I have implemented the following EventSource, which does not listen to the specific file mentioned before but just to the run folder, and it works: the event is detected.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: directory-event-source
  namespace: tesi-fabrici
spec:
  template:
    container:
      volumeMounts:
      - mountPath: /test_dir
        name: test-dir
    volumes:
    - name: test-dir
      nfs:
        server: 10.128.2.231
        path: /tesi_fabrici
  file:
    directoryCreated:
      watchPathConfig:
        directory: "/test_dir/watched_dir/"
        path: "210818_M70903_0027_000000000-JVRB4"
        # pathRegexp: TODO with regex
      eventType: CREATE
Now, I simulated what was described before by copying all the data except for that one file, and lastly copying that file. The following script does this.
#!/bin/bash
inputDirName=$1
inputDirPath=$2
sampleSheet=$3
outputPath=$4
rsync -hr --progress "$inputDirPath$inputDirName" $outputPath --exclude $sampleSheet
rsync -hr --progress "$inputDirPath${inputDirName}/$sampleSheet" "$outputPath$inputDirName"
And I run it from a pod in the cluster (with the same nfs folder mounted) as below:
./copy_script.sh 210818_M70903_0027_000000000-JVRB4 /external_prod_dir/AREA/MiSeqDx/ SampleSheet.csv /external_test_dir/watched_dir/
The file in question is the SampleSheet.csv. Now I modified the EventSource as follows in order to listen to the creation of the sample sheet:
...
...
  file:
    directoryCreated:
      watchPathConfig:
        directory: "/test_dir/watched_dir/"
        path: "210818_M70903_0027_000000000-JVRB4/SampleSheet.csv"
        # pathRegexp: TODO with regex
      eventType: CREATE
The data gets copied correctly, but in this case, the EventSource is not detecting the creation of the SampleSheet.csv.
By doing some testing, I noticed that the field path: expects a plain file or folder name; the EventSource does not work when I use a path containing a subdirectory, as in my case.
Solving this particular case would be easy: I change the EventSource as follows
...
...
  file:
    directoryCreated:
      watchPathConfig:
        directory: "/test_dir/watched_dir/210818_M70903_0027_000000000-JVRB4/"
        path: "SampleSheet.csv"
        # pathRegexp: TODO with regex
      eventType: CREATE
and the creation of the sample sheet gets caught, but the event will only contain what's written in path:, and I would also need the run folder name.
But the problem is that, in a real scenario, the run folder names change but follow the same pattern as the folder I am using here (210818_M70903_0027_000000000-JVRB4). Therefore my plan was to use a regex to capture [path_of_new_run_folder]/SampleSheet.csv, and I don't think I can use a regex in directory:, only in pathRegexp:.
I hope my problem is clear; please let me know how I can solve it.
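For reference, a minimal sketch of a pathRegexp-based watch on the run folder itself, with a hypothetical pattern that would need to be adapted to the real naming scheme (this catches the folder creation, not the SampleSheet.csv inside it):

  file:
    runFolderCreated:
      watchPathConfig:
        directory: "/test_dir/watched_dir/"
        # hypothetical regex for run folders like 210818_M70903_0027_000000000-JVRB4
        pathRegexp: "^[0-9]{6}_M[0-9]+_[0-9]{4}_[0-9A-Za-z-]+$"
      eventType: CREATE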

Use a different name for the kustomization.yaml

For CI/CD purposes, the project maintains 2 kustomization files:
Regular deployments - kustomization_deploy.yaml
Rollback deployment - kustomization_rollback.yaml
To run kustomize build, a file with the name "kustomization.yaml" is required in the current directory.
If the project wants to use kustomization_rollback.yaml and NOT kustomization.yaml, how is this possible? Does kustomize accept a file name as an argument? The docs do not specify anything about this.
Currently there is no way to change the behavior of kustomize (using the precompiled binaries) to support file names other than:
kustomization.yaml
kustomization.yml
Kustomization
All of the below cases will produce the same error output:
kubectl kustomize dir/
kubectl apply -k dir/
kustomize build dir/
Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory 'FULL_PATH/dir'
Depending on the CI/CD platform/solution/tool, you should try to work around it the other way, for example:
split the deployment into 2 directories, kustomization_deploy and kustomization_rollback, each containing its own kustomization.yaml (see the sketch below)
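A minimal sketch of that layout (directory names follow the question; the contents of each kustomization.yaml are whatever each variant needs):

kustomization_deploy/
  kustomization.yaml      # regular deployments
kustomization_rollback/
  kustomization.yaml      # rollback deployment

kustomize build kustomization_deploy/
kustomize build kustomization_rollback/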
As a side note!
File names that kustomize uses are placed in the:
/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/constants/constants.go
// Package constants holds global constants for the kustomize tool.
package constants

// KustomizationFileNames is a list of filenames that can be recognized and consumed
// by Kustomize.
// In each directory, Kustomize searches for file with the name in this list.
// Only one match is allowed.
var KustomizationFileNames = []string{
    "kustomization.yaml",
    "kustomization.yml",
    "Kustomization",
}
The logic behind choosing the Kustomization file is placed in:
/kubernetes/vendor/sigs.k8s.io/kustomize/pkg/target/kusttarget.go
Additional reference:
Github.com: Kubernetes-sigs: Kustomize
Kubernetes.io: Docs: Tasks: Manage kubernetes objects: Kustomization

How to set a variable from another yaml file in azure-pipeline.yml

I have an environment.yml shown as follows. I would like to read out the content of the name variable (core-force) and set it as the value of a global variable in my azure-pipeline.yml file. How can I do it?
name: core-force
channels:
- conda-forge
dependencies:
- click
- Sphinx
- sphinx_rtd_theme
- numpy
- pylint
- azure-cosmos
- python=3
- flask
- pytest
- shapely
In my azure-pipeline.yml file I would like to have something like:
variables:
  tag: # <- should become the value of name from environment.yml, i.e. 'core-force'
Please check this example:
File: vars.yml
variables:
favoriteVeggie: 'brussels sprouts'
File: azure-pipelines.yml
variables:
- template: vars.yml # Template reference
steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
Please note that variables are simple strings, and if you want to use a list you may need to do some workaround in PowerShell in the place where you want to use a value from that list.
If you don't want to use the template functionality shown above, you need to do the following:
create a separate job/stage
define a step there that reads the environment.yml file and sets the variable using the REST API or Azure CLI (see the sketch below)
create another job/stage and move your current build definition there
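A minimal sketch of such a step, assuming environment.yml sits at the repository root (the step name and variable name are made up for the example):

steps:
- bash: |
    # read the "name:" entry from environment.yml
    TAG=$(grep '^name:' environment.yml | awk '{print $2}')
    # expose it as a pipeline variable for later steps/jobs
    echo "##vso[task.setvariable variable=tag;isOutput=true]$TAG"
  name: readEnvName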
I found this topic on developer community where you can read:
Yaml variables have always been string: string mappings. The doc appears to be currently correct, though we may have had a bug when last you visited.
We are preparing to release a feature in the near future to allow you to pass more complex structures. Stay tuned!
But I don't have more info about this.
Global variables should be stored in a separate template file. This file ideally would live in a separate repo that other repos can refer to.
Here is another answer for this

overriding values in kubernetes helm subcharts

I'm building a helm chart for my application, and I'm using stable/nginx-ingress as a subchart. I have a single overrides.yml file that contains (among other overrides):
nginx-ingress:
  controller:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "*.{{ .Release.Name }}.mydomain.com"
So, I'm trying to use the release name in the overrides file, and my command looks something like: helm install mychart --values overrides.yml, but the resulting annotation does not do the variable interpolation, and instead results in something like
Annotations: external-dns.alpha.kubernetes.io/hostname=*.{{ .Release.Name }}.mydomain.com
I installed the subchart by using helm fetch, and I'm under the (misguided?) impression that it would be best to leave the fetched thing as-is, and override values in it - however, if variable interpolation isn't available with that method, I will have to put my values in the subchart's values.yaml.
Is there a best practice for this? Is it ok to put my own values in the fetched subchart's values.yaml? If I someday helm fetch this subchart again, I'll have to put those values back in by hand, instead of leaving them in an untouched overrides file...
Thanks in advance for any feedback!
I found the issue on github -- it is not supported yet:
https://github.com/kubernetes/helm/issues/2133
Helm 3.x (Q4 2019) now includes more about this, but only for the chart itself, not for subcharts (see TBBle's comment).
Milan Masek adds as a comment:
Thankfully, latest Helm manual says how to achieve this.
The trick is:
enclosing the variable in " or in a YAML block |-, and
then referencing it in a template as {{ tpl .Values.variable . }}
This seems to make Helm happy.
Example:
$ cat Chart.yaml | grep appVersion
appVersion: 0.0.1-SNAPSHOT-d2e2f42
$ cat platform/shared/t/values.yaml | grep -A2 image:
image:
  tag: |-
    {{ .Chart.AppVersion }}
$ cat templates/deployment.yaml | grep image:
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
$ helm template . --values platform/shared/t/values.betradar.yaml | grep image
image: "docker-registry.default.svc:5000/namespace/service:0.0.1-SNAPSHOT-d2e2f42"
imagePullPolicy: Always
image: busybox
Otherwise an error is thrown:
$ cat platform/shared/t/values.yaml | grep -A1 image:
image:
  tag: {{ .Chart.AppVersion }}
$ helm template . --values platform/shared/t/values.yaml | grep image
Error: failed to parse platform/shared/t/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Chart.AppVersion":interface {}(nil)}
For Helm subcharts, TBBle adds in issue 2133:
#MilanMasek 's solution won't work in general for subcharts, because the context . passed into tpl will have the subchart's values, not the parent chart's values.
It happens to work in the specific example this ticket was opened for, because .Release.Name should be the same in all the subcharts.
It won't work for .Chart.AppVersion as in the tpl example.
There was a proposal to support tval in #3252 for interpolating templates in values files, but that was dropped in favour of a lua-based Hook system which has been proposed for Helm v3: #2492 (comment)
That last issue 2492 includes workarounds like this one:
You can put a placeholder in the text that you want to template and then replace that placeholder with the template that you would like to use in yaml files in the template.
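A minimal sketch of that placeholder idea, with hypothetical names and using the Sprig replace function:

# values.yaml
hostname: "*.RELEASE_NAME_PLACEHOLDER.mydomain.com"

# in a template
host: {{ .Values.hostname | replace "RELEASE_NAME_PLACEHOLDER" .Release.Name | quote }}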
For now, what I've done in the CI job is run helm template on the values.yaml file.
It works pretty well atm.
cp values.yaml templates/
helm template $CI_BUILD_REF_NAME ./ | sed -ne '/^# Source: templates\/values.yaml/,/^---/p' > values.yaml
rm templates/values.yaml
helm upgrade --install ...
This breaks if you have multiple -f values.yml files, but I'm thinking of writing a small helm wrapper that essentially runs that bash script for each values.yaml file.
fsniper illustrates the issue again:
There is a use case where you would need to pass the deployment name to dependency charts over which you have no control.
For example I am trying to set podAffinity for zookeeper. And I have an application helm chart which sets zookeeper as a dependency.
In this case, I am passing pod anti-affinity to zookeeper via values. So in my app's values.yaml file I have a zookeeper.affinity section.
If I had the ability to get the release name inside the values yaml I would just set this as default and be done with it.
But now for every deployment I have to override this value, which is a big problem.
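For illustration, a minimal sketch of that kind of per-deployment override in the parent chart's values.yaml (the value path and label key are hypothetical and depend on the zookeeper chart in use):

zookeeper:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: my-release   # would ideally be the release name
        topologyKey: kubernetes.io/hostname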
Update Oct. 2022, from issue 2133:
lazychanger proposes
I submitted a plugin to override values.yaml with additional templates.
See lazychanger/helm-viv: "Helm-variable-in-values" and its example.