Helm: How to override helm value that is an array/dict

Is it possible to redefine an array in Helm's values.yaml?
In values.yaml, among other top-level values, I have one like this:
attribute:
  component1:
    type: Type1
  component2:
    type: Type2
In the templates it is used this way:
{{- $key := "component1" }}
{{- if .Values.attribute }}
{{- $profile := .Values.attribute }}
{{- if hasKey .Values.attribute $key }}
{{- $profile = index .Values.attribute $key }}
{{- end -}}
{{ if $profile.type }}
...
I would like to achieve, via helm template --set (or any other means), setting "type" at the common level and removing all of the component-level ones together with the components themselves. The rest of the values should remain the same, but the result for "attribute" should be:
attribute:
  type: MyType
I've tried:
helm template --set attribute.type=MyType
But this just adds the new element to the "attribute" map beside the components.
helm template --set "attribute={type: MyType}"
But this fails, as attribute is now a list instead of the expected map[string].
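One approach that may work here (a sketch, not verified against this chart): Helm treats a value explicitly set to null as a request to delete that key from the merged values, so the components can be removed while the common type is added:
helm template . --set attribute.type=MyType --set attribute.component1=null --set attribute.component2=null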

I needed to update this data in a running release:
resources:
  limits:
    cpu: 200m
    memory: 450Mi
  requests:
    cpu: 50m
    memory: 300Mi
And the following command works for me:
linux#0:~$ helm -n testns upgrade test-release repo/testchart --version 0.2.4 --set resources.limits.cpu=200m --set resources.limits.memory=750Mi --set resources.requests.cpu=50m --set resources.requests.memory=700Mi --reuse-values --debug
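If the list of --set flags gets long, the same override can be expressed as a values file (the file name here is hypothetical); --reuse-values keeps the rest of the release's values and merges these on top:
# overrides.yaml
resources:
  limits:
    cpu: 200m
    memory: 750Mi
  requests:
    cpu: 50m
    memory: 700Mi

helm -n testns upgrade test-release repo/testchart --version 0.2.4 -f overrides.yaml --reuse-values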

Related

ArgoCD hooks- running a PreSync hook only when it has changed

We have some database migration jobs that we occasionally want to run before deploying a new version of an app. The common approach for this in ArgoCD seems to be PreSync hooks, which I have tested and which seem to work, but I'm finding them a little limited in functionality, and I am unsure if I'm missing something or if that's just how it is.
How I would like it to work is to run the db migration jobs only when they have changed in some way (most likely a new image). However, the way PreSync jobs seem to be designed (and understandably so) is to always run the specified job on every sync. Functionally this is fine: the migration job takes ~20 seconds to start, finish, and end up doing nothing. But it's clearly not ideal to have this happen for every single unrelated change.
I'm hoping there is some way of accomplishing this "ArgoCD natively" that I'm just missing.
The job template I'm currently using (which runs on each sync) is this:
{{- define "project.migration_job" -}}
{{- $appsettings := (get .Values.global.apps .name) }}
---
apiVersion: batch/v1
kind: Job
metadata:
generateName: {{ .name }}-
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
automountServiceAccountToken: false
containers:
- name: {{ .name }}
image: "{{ .Values.global.repo }}/{{ .name }}:{{ $appsettings.image }}"
resources:
requests:
memory: {{ $appsettings.memory | default "256Mi" | quote }}
cpu: {{ $appsettings.cpu | default "75m" | quote }}
limits:
memory: {{ $appsettings.memory | default "256Mi" | quote }}
cpu: {{ $appsettings.cpu | default "75m" | quote }}
env:
{{- include "project.environment_variables" (dict "Values" .Values "env" .env) | trim | nindent 12 -}}
{{- include "project.secret_environment_variables" (dict "Values" .Values "secrets" .secrets) | trim | nindent 12 }}
restartPolicy: Never
backoffLimit: 2
{{ end -}}
Thanks for any help.
I don't know if there's a native solution, but this could help:
In your PreSync hook:
IMAGE_TAG_TO_DEPLOY: Get the image tag that will be deployed from the source repo (e.g. curl -LSs "https://x-access-token:$GITHUB_TOKEN@raw.githubusercontent.com/company/project/master/path-to-image-tag.yaml" and parse the image tag value)
COMMIT_ID_TO_DEPLOY: Find the commit ID that corresponds to IMAGE_TAG_TO_DEPLOY
COMMIT_ID_DEPLOYED: Find the commit ID pointed to by a git tag named currently-deployed
If COMMIT_ID_TO_DEPLOY == COMMIT_ID_DEPLOYED, end the PreSync Hook
Perform the required actions for the PreSync hook.
Add the git tag currently-deployed to COMMIT_ID_TO_DEPLOY in your git repo
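A minimal shell sketch of that logic, assuming the job's image has git, curl, and yq available; run-migrations.sh, the file path, and the yq expression are placeholders:
#!/bin/sh
set -eu
# Image tag that is about to be deployed (path and yq expression are assumptions)
IMAGE_TAG_TO_DEPLOY=$(curl -LSs "https://x-access-token:$GITHUB_TOKEN@raw.githubusercontent.com/company/project/master/path-to-image-tag.yaml" | yq '.image.tag')
# Commit IDs behind the tag to deploy and the currently-deployed marker tag
COMMIT_ID_TO_DEPLOY=$(git rev-list -n 1 "$IMAGE_TAG_TO_DEPLOY")
COMMIT_ID_DEPLOYED=$(git rev-list -n 1 currently-deployed 2>/dev/null || echo none)

if [ "$COMMIT_ID_TO_DEPLOY" = "$COMMIT_ID_DEPLOYED" ]; then
  echo "No migration changes; skipping."
  exit 0
fi

./run-migrations.sh  # hypothetical: the actual PreSync work

# Move the marker tag so the next sync can short-circuit
git tag -f currently-deployed "$COMMIT_ID_TO_DEPLOY"
git push -f origin currently-deployed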

Create configmap with outside yaml files for kubernetes

I'm quite new to Kubernetes. I am trying to create a ConfigMap using a YAML file that is user defined.
helm upgrade --install test --namespace test --create-namespace . -f xxx/user-defined.yaml
The user can add any YAML file using the -f option.
For example:
cars.yaml
cars:
  - name: Mercedes
    model: E350
So the command will be:
helm upgrade --install test --namespace test --create-namespace . -f xxx/cars.yaml
My question is: I want to create a ConfigMap named 'mercedes-configmap'.
I need to read the values from cars.yaml and automatically create a ConfigMap with the name and data from cars.yaml.
Update:
I've created the ConfigMap template below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.cars.name }}-configmap
data:
{{- range .Files }}
  {{ .Files.Get . | toYaml | quote }}
{{- end }}
The only issue I faced is that I couldn't get the whole file's data.
Welcome to the community!
I have created a Helm template for the ConfigMap. It works this way: you can pass a ConfigMap name (name) and a file name (fname) where the data is stored, and/or it can read files from a specific folder.
Please find the template below (the first 3 lines are commented out: the commented condition and the active one are two working implementations of the check that the values exist):
{{/*
{{ if not (or (empty .Values.name) (empty .Values.fname)) }}
*/}}
{{ if and (not (empty .Values.name)) (not (empty .Values.fname)) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-configmap
data:
{{- ( .Files.Glob .Values.fname ).AsConfig | nindent 2 }}
---
{{ end }}
{{ $currentScope := . }}
{{ range $path, $_ := .Files.Glob "userfiles/*.yaml" }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ base $path | trimSuffix ".yaml" }}-configmap
data:
{{- with $currentScope }}
  {{ base $path }}: |
{{- ( .Files.Get $path ) | nindent 4 }}
{{- end }}
---
{{ end }}
The first part of the template checks whether the ConfigMap name and file name are set and, if so, renders the ConfigMap. The second part goes into the userfiles directory and picks up all the YAML files within.
You can find the GitHub repo where I shared the example files and the ConfigMap.
To render the template with cars2.yaml, with or without files in the userfiles directory:
helm template . --set name=cars2 --set fname=cars2.yaml
To render the same template with only files in userfiles directory:
helm template .
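For instance, assuming userfiles/cars.yaml contains the cars example from the question, the second part of the template should render roughly this ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cars-configmap
data:
  cars.yaml: |
    cars:
      - name: Mercedes
        model: E350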
P.S. helm v3.5.4 was used
Useful links:
Accessing files in helm
Flow control
File path functions

Helm function to set value based on a variable?

I'm learning Helm to set up my 3 AWS EKS clusters: sandbox, staging, and production.
How can I set up my templates so that some values are derived based on which cluster the chart is being installed in? For example, in my myapp/templates/deployment.yaml I may want:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
I may want replicas to be 1, 2, or 4 depending on whether I'm installing the chart in my sandbox, staging, or production cluster, respectively. I want to do the same trick for the CPU and memory requests and limits for my pods.
I was thinking of having something like this in my values.yaml file:
environments:
  - sandbox
  - staging
  - production
perClusterValues:
  replicas:
    - 1
    - 2
    - 4
  cpu:
    requests:
      - 256m
      - 512m
      - 1024m
    limits:
      - 512m
      - 1024m
      - 2048m
  memory:
    requests:
      - 1024Mi
      - 1024Mi
      - 2048Mi
    limits:
      - 2048Mi
      - 2048Mi
      - 3072Mi
So if I install a helm chart in the sandbox environment, I want to be able to do
$ helm install myapp myapp --set environment=sandbox
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  # In pseudo-code, in my YAML files:
  # get the index value from the .Values.environments list
  # based on the passed-in environment parameter
  {{ $myIndex = indexOf .Values.environments .Values.environment }}
  replicas: {{ .Values.perClusterValues.replicas $myIndex }}
  {{- end }}
I hope you understand my logic, but what is the correct syntax? Or is this even a good approach?
You can use the helm install -f option to pass an extra YAML values file in, and this takes precedence over the chart's own values.yaml file. So, using exactly the template structure you already have, you can provide alternate values files:
# sandbox.yaml
autoscaling:
  enabled: false
replicaCount: 1

# production.yaml
autoscaling:
  enabled: true
replicaCount: 5
And then when you go to deploy the chart, run it with
helm install myapp . -f production.yaml
(You can also helm install --set replicaCount=3 to override specific values, but the --set syntax is finicky and unusual; using a separate YAML file per environment is probably easier. Some tooling might be able to take advantage of JSON files also being valid YAML to write out additional deploy-time customizations.)
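For completeness, if you did want to keep the question's index-based values layout: Helm's template language has no indexOf for lists, but a range loop can recover the position. A hedged sketch, assuming environment is passed with --set environment=sandbox:
{{- /* Find the position of .Values.environment in .Values.environments */ -}}
{{- $idx := -1 }}
{{- range $i, $e := .Values.environments }}
{{- if eq $e $.Values.environment }}{{- $idx = $i }}{{- end }}
{{- end }}
replicas: {{ index .Values.perClusterValues.replicas $idx }}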

What is the difference between fullnameOverride and nameOverride in Helm?

I found both fullnameOverride and nameOverride in a Helm chart. Please help clarify the difference between these two, with an example.
nameOverride replaces the name of the chart in the Chart.yaml file, when this is used to construct Kubernetes object names. fullnameOverride completely replaces the generated name.
These come from the template provided by Helm for new charts. A typical object in the templates is named
name: {{ include "<CHARTNAME>.fullname" . }}
If you install a chart with a deployment with this name, and where the Chart.yaml file specifies name: chart-name...
With helm install release-name ., the Deployment will be named release-name-chart-name.
With helm install release-name . --set nameOverride=name-override, the Deployment will be named release-name-name-override.
With helm install release-name . --set fullnameOverride=fullname-override, the Deployment will be named fullname-override.
The generated ...fullname template is (one code branch omitted, still from the above link)
{{- define "<CHARTNAME>.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
So if fullnameOverride is provided, that completely replaces the rest of the logic in the template. Otherwise the name is constructed from the release name and the chart name, where nameOverride overrides the chart name.

Helm templating doesn't let me use dash in names

I am creating a helm chart for my app. In the templates directory, I have a config-map.yaml with this in it
{{- with .Values.xyz }}
xyz.abc-def: {{ .abc-def }}
{{- end }}
When I try to run helm install I get:
Error: parse error in "config-map.yaml": template:config-map.yaml:2: unexpected bad character U+002D '-' in command.
Is there a way to use dashes in the name and variable for helm?
It might be worth trying the index function:
xyz.abc-def: {{ index .Values.xyz "abc-def" }}
It looks like Helm still doesn't allow hyphens in variable names (nor in subchart names), and index is the workaround.
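Applied to the with block from the question, the workaround looks like this (inside with, the dot is rebound to .Values.xyz, so index operates on that map):
{{- with .Values.xyz }}
xyz.abc-def: {{ index . "abc-def" }}
{{- end }}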
I faced the same issue because the resource I defined referenced a value with a '-' in its name:
resources:
{{- with .Values.my-value }}
After I removed the '-', the error disappeared:
resources:
{{- with .Values.myvalue }}