Where is .Values taken from in kubernetes configuration yaml file - kubernetes

I can't seem to find the formal definition of .Values (taken from here)
image: {{ .Values.image.repo }}/rs-mysql-db:{{ .Values.image.version }}
From the docs, it is definitely related to Helm charts:
Note that all of Helm's built-in variables begin with an uppercase letter to easily distinguish them from user-defined values: .Release.Name, .Capabilities.KubeVersion.
But in the above example (robot-shop/K8s/helm/templates) I don't see any values.yaml file - what am I missing?

It's under the helm folder:
https://github.com/instana/robot-shop/blob/master/K8s/helm/values.yaml
# Registry and repository for Docker images
# Default is docker/robotshop/image:latest
image:
  repo: robotshop
  version: latest
  pullPolicy: IfNotPresent
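With those defaults, the image line from the question renders as follows (assuming no -f or --set overrides at install time):

image: robotshop/rs-mysql-db:latest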

Related

Multiple deployments using one helm chart and chart dependencies with alias

I have 10 Deployments that are currently managed by 10 unique helm releases.
But these deployments are so similar that I wish to group them together and manage it as one helm release.
All deployments use the same helm chart, the only difference is in the environment variables passed to the deployment pod which is set in values.yaml.
Can I do this using helm chart dependencies and the alias field?
dependencies:
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-1
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-2
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-3
I can now set values for individual deploys in one parent Chart's values.yaml:
deploy-1:
  environment:
    key: value-1
deploy-2:
  environment:
    key: value-2
deploy-3:
  environment:
    key: value-3
But 99% of the values that I need to set for all the deployments are the same.
How do I avoid duplicating them across the deploy-n keys in the parent chart's values.yaml file?
If you have 10 deployments that are similar, but have only a few differences in environment variables, you can group them together and manage them as one helm release using a single values.yaml file. Here's how you can achieve this:
Define a single dependency in your parent chart's Chart.yaml file that points to the helm chart for your deployments:
dependencies:
  - name: my-deployments
    version: 1.0.0
    repository: https://charts.example.com
Create a single values.yaml file at the root of your parent chart (next to Chart.yaml, not in the templates directory) that defines the values for your deployments:
my-deployments:
  environment:
    key1: value1
    key2: value2
    key3: value3
In your deployment manifests, replace any environment variable values that vary between deployments with references to the values in the parent chart's values.yaml file:
env:
  # `index` is needed here because the key "my-deployments" contains a
  # hyphen, which Go template dot notation cannot parse
  - name: ENV_VAR_1
    value: {{ index .Values "my-deployments" "environment" "key1" | quote }}
  - name: ENV_VAR_2
    value: {{ index .Values "my-deployments" "environment" "key2" | quote }}
  - name: ENV_VAR_3
    value: {{ index .Values "my-deployments" "environment" "key3" | quote }}
This way, you only need to define the environment variable values that vary between deployments in the parent chart's values.yaml file, and Helm will automatically apply these values to all the deployments.
Note that this approach assumes that all of your deployments use the same Helm chart with the same structure for their deployment manifests. If this is not the case, you may need to adjust the approach accordingly to fit the specific needs of your deployments.
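As an aside, Helm's built-in global values are the standard mechanism for the shared 99%: anything under a top-level global: key in the parent chart's values.yaml is visible to every subchart as .Values.global, so it never has to be repeated under each deploy-n alias. A minimal sketch (key names here are illustrative):

# Parent chart's values.yaml
global:
  environment:
    SHARED_KEY: same-for-all   # visible to every aliased subchart as .Values.global.*
deploy-1:
  environment:
    key: value-1
deploy-2:
  environment:
    key: value-2

Inside the subchart's templates the shared value is then read as {{ .Values.global.environment.SHARED_KEY }}, while the per-deploy value stays {{ .Values.environment.key }}.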

Helm template syntax when using subcharts?

I asked Use one Helm chart for ALL microservices? and now I'm trying to implement the answer I accepted, i.e., using subcharts. (Note: If there's a better answer for that post, please put it in that post, not here.)
Per the answer I accepted, I have the following directory structure
my-deployment-repo/
|- base-microservice/
|  |- templates/
|  |  |- deployment.yml
|  |  |- service.yml
|  |- Chart.yaml
|  |- values.yaml
|- myapp/
|  |- Chart.yaml
|  |- values.yaml
base-microservice/values.yaml file has
image:
  name: ""
  version: ""
  repository: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
service:
  type: NodePort
  port: 5000
# More key/value pairs defined
myapp/Chart.yaml has
apiVersion: v2
name: myapp
description: A Helm chart for Kubernetes
...
dependencies:
  - alias: my-microservice-1
    name: base-microservice
    version: "0.1.0"
    repository: file://../base-microservice
  - alias: my-microservice-2
    name: base-microservice
    version: "0.1.0"
    repository: file://../base-microservice
myapp/values.yaml simply has this because I want myapp to use ALL the values in base-microservice/values.yaml except for the values I provide here.
my-microservice-1:
  image:
    name: foo
    version: 1.2.3
my-microservice-2:
  image:
    name: bar
    version: 4.5.6
So now when I do a...
$ helm dependency update ./myapp
$ helm install myapp myapp/
...I want to be able to get, for example, this deployment for the my-microservice-1 alias:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: foo # image name for the my-microservice-1 alias
          # These are from the different values.yaml files:
          # <repository from base-microservice>/<image name from myapp>:<version from myapp>
          image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3
IOW, what should the base-microservice/templates/deployment.yaml syntax be to produce...
spec.template.spec.containers.name: {{ what should be here to produce "foo" }} and
spec.template.spec.containers.image: {{ what should be here to produce "01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3" }}
I hope that makes sense. TIA!
When the template file is eventually rendered, .Values will be a subset specific to this subchart. So in your template code, just use .Values the same way you would if it were a standalone chart.
containers:
  - name: foo {{- /* not templated */}}
    image: {{ with .Values.image }}{{ .repository }}/{{ .name }}:{{ .version }}{{ end }}
I've intentionally chosen to not template name in this example. A container name is only useful in a couple of very specific contexts (to review kubectl logs in a multi-container Pod, for example) and IME it's much easier to set it to a fixed name than to try to template it. You could use {{ .Values.image.name }} here as well if you wanted to.
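For the my-microservice-1 alias, with the values shown in the question (the defaults from base-microservice/values.yaml merged with the overrides from myapp/values.yaml), that template should render roughly as:

containers:
  - name: foo
    image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3

Note that .repository comes from the subchart's own defaults while .name and .version come from the parent's my-microservice-1 block; Helm merges the two before the subchart template ever sees .Values.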

Different name required to override value in Helm subchart

I have read the Helm docs and various StackOverflow questions - this is not (I hope!) a lazy question. I'm having an issue overriding a single particular value in a Helm chart, not having trouble with the concept in general.
I'm trying to install the Gitea helm chart on a k8s cluster on Raspberry Pis (that is - on arm64 architecture). Since the default memcached dependency chart is from Bitnami, who don't support arm64, I have overridden the image appropriately (to arm64v8/memcached, link).
However, this new image has a different entrypoint - /entrypoint.sh instead of /run.sh. Referencing the relevant part of the template, I believed I needed to override memcached.args, but that didn't work as expected:
$ cat values.yaml
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  args:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false
$ helm template gitea-charts/gitea --values values.yaml
[...]
# Source: gitea/charts/memcached/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-memcached
  namespace: gitea
  labels: [...]
spec:
  selector:
    matchLabels: [...]
  replicas: 1
  template:
    metadata:
      labels: [...]
    spec:
      [...]
      serviceAccountName: release-name-memcached
      containers:
        - name: memcached
          image: docker.io/arm64v8/memcached:1.6.17
          imagePullPolicy: "IfNotPresent"
          args:
            - /run.sh # <----- this should be `/entrypoint.sh`
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: memcache
              containerPort: 11211
          [...]
However, when I instead overrode memcached.arguments, the expected behaviour occurred - the contents of memcached.arguments rendered in the template's args (or, if memcached.arguments was empty, no args were rendered)
Where is this mapping from arguments to args taking place?
Note in particular that the Bitnami chart docs refer to args, so this is unexpected - though note also that the Bitnami chart's values.yaml refers to arguments in the comment (this is what prompted me to try this "obviously wrong" approach!). In the "Upgrade to 5.0.0 notes", we see "arguments has been renamed to args." - but the Gitea chart is using a >5.0.0 version of the Bitnami chart.
Your reasoning is correct. And the current parameter name is definitely called args (arguments is deprecated; someone just forgot to update the comment here).
Now, why does arguments work for you rather than args? I think you're just using the old version of the chart, from before the rename. I checked it and:
- The Gitea chart uses version 5.9.0 from the repo https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
- This corresponds to the following Helm chart: https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz (you can check it here).
- When you extract that chart archive, you can see it's the old version of the chart (with arguments not yet renamed to args).
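A quick way to confirm this yourself (assuming the 5.9.0 tarball is still published at that URL):

$ helm pull https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz
$ tar -xzf memcached-5.9.0.tgz
$ grep -n 'arguments' memcached/templates/deployment.yaml

The grep should show the deployment template still reading .Values.arguments.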

How to append a list to another list inside a dictionary using Helm?

I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles from this values file to the valueFiles list in the Application manifest below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default

  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD # For Helm, this refers to the chart version.
    path: guestbook # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.

    # helm specific config
    chart: chart-name # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true # ensures that value is treated as a string

      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json

      # Release name override (defaults to application name)
      releaseName: guestbook

      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
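Given the values.yaml from the question, that fragment renders to:

valueFiles:
  - values-prod.yaml
  - myvalues1.yaml
  - myvalues2.yaml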

How best to have files on volumes in Kubernetes using helm charts?

The plan is to move my dockerized application to Kubernetes.
The docker container uses a couple of files, which I used to mount as Docker volumes by specifying them in the docker-compose file:
volumes:
  - ./license.dat:/etc/sys0/license.dat
  - ./config.json:/etc/sys0/config.json
The config file would be different for different environments, and the license file would be the same across.
How do I define this in a helm template file (yaml) so that it is available for the running application?
What is generally the best practice for this? Is it also possible to define the configuration values in values.yaml and have the config.json file pick them up?
Since you are dealing with json a good example to follow might be the official stable/centrifugo chart. It defines a ConfigMap that contains a config.json file:
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
So it takes a config section from the values.yaml and transforms it to JSON using the toJson function. The config can be whatever you want to define in that yaml - the chart has:
config:
  web: true
  namespaces:
    - name: public
      anonymous: true
      publish: true
...
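For reference, given the keys shown above, toJson would serialize that section to a single line of JSON like this (Go's JSON encoder sorts map keys alphabetically; the elided keys are omitted here):

{"namespaces":[{"anonymous":true,"name":"public","publish":true}],"web":true}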
In the deployment.yaml it creates a volume from the configmap:
volumes:
  - name: {{ template "centrifugo.fullname" . }}-config
    configMap:
      name: {{ template "centrifugo.fullname" . }}-config
Note that {{ template "centrifugo.fullname" . }}-config matches the name of the ConfigMap.
And mounts it into the deployment's pod/s:
volumeMounts:
  - name: "{{ template "centrifugo.fullname" . }}-config"
    mountPath: "/centrifugo"
    readOnly: true
This approach would let you populate the json config file from the values.yaml so that you can set different values for different environments by supplying custom values file per env to override the default one in the chart.
To handle the license.dat you can add an extra entry to the ConfigMap that embeds the file's static content. Since that is a license you may want to switch from a ConfigMap to a Secret, which is mostly a matter of replacing kind: ConfigMap with kind: Secret and putting the content under stringData: (or base64-encoding it under data:). You could try it with a ConfigMap first, though.
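Putting it together, a minimal sketch of the two pieces for your case (all names here are illustrative, not taken from any real chart):

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
  license.dat: |-
    <static license contents embedded here>

# templates/deployment.yaml (fragments) -- mounting the ConfigMap at the
# path the application expects creates /etc/sys0/config.json and
# /etc/sys0/license.dat inside the container
volumes:
  - name: app-config
    configMap:
      name: {{ .Release.Name }}-config
...
volumeMounts:
  - name: app-config
    mountPath: /etc/sys0
    readOnly: true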