How best to have files on volumes in Kubernetes using helm charts?

The plan is to move my dockerized application to Kubernetes.
The Docker container uses a couple of files, which I currently mount as Docker volumes by specifying them in the docker-compose file:
volumes:
  - ./license.dat:/etc/sys0/license.dat
  - ./config.json:/etc/sys0/config.json
The config file would be different for different environments, while the license file would be the same across all of them.
How do I define this in a helm template file (yaml) so that it is available to the running application?
What is generally the best practice for this? Is it also possible to define the configuration values in values.yaml so that the config.json file picks them up?

Since you are dealing with JSON, a good example to follow might be the official stable/centrifugo chart. It defines a ConfigMap that contains a config.json file:
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
So it takes a config section from the values.yaml and transforms it to JSON using the toJson function. The config can be whatever you want to define in that yaml; the chart's default is:
config:
  web: true
  namespaces:
    - name: public
      anonymous: true
      publish: true
  ...
In the deployment.yaml it creates a volume from the configmap:
volumes:
  - name: {{ template "centrifugo.fullname" . }}-config
    configMap:
      name: {{ template "centrifugo.fullname" . }}-config
Note that {{ template "centrifugo.fullname" . }}-config matches the name of the ConfigMap.
And mounts it into the deployment's pod/s:
volumeMounts:
  - name: "{{ template "centrifugo.fullname" . }}-config"
    mountPath: "/centrifugo"
    readOnly: true
This approach lets you populate the JSON config file from the values.yaml, so you can set different values for different environments by supplying a custom values file per environment that overrides the default one in the chart.
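For example, a minimal sketch of a per-environment override, assuming a hypothetical values-prod.yaml in which you only change the keys that differ from the chart defaults:
values-prod.yaml:
config:
  web: true
  namespaces:
    - name: public
      anonymous: false
      publish: false
You would then install or upgrade with something like helm install my-release ./mychart -f values-prod.yaml (the release and chart names here are placeholders).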
To handle the license.dat you can add an extra entry to the ConfigMap that defines an additional file with static content embedded. Since that is a license you may want to switch the ConfigMap to a Secret instead, which is mostly a matter of replacing the word ConfigMap with Secret in the definitions (and putting the content under stringData, or base64-encoding it under data). You could try it with a ConfigMap first, though.
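As a sketch, assuming the license is shipped inside the chart under a hypothetical files/license.dat, the ConfigMap data could gain a second key:
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
  license.dat: |-
{{ .Files.Get "files/license.dat" | indent 4 }}
The .Files.Get call only works for files packaged with the chart; if the license lives elsewhere you would instead paste its content into values.yaml or create the Secret out of band.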

Related

Use fieldRef in Kubernetes configMap

I have the following environment variable in my Kubernetes template:
envFrom:
  - configMapRef:
      name: configmap
env:
  - name: MACHINENAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
I would like to use the value from 'fieldRef' in a config map instead. Would this kind of modification be possible?
In other words, I want to add the 'MACHINENAME' environment variable to the config map, so I don't have to use the 'env:' block.
You cannot do this in the way you describe.
A ConfigMap only contains fixed string-key/string-value pairs. You cannot embed a more complex structure into a ConfigMap, or say that a ConfigMap value will be resolved using the downward API when a Pod is created. The node name of the pod, and most of the other downward API information, will be different for each pod using the ConfigMap (and likely even for each replica of the same deployment) and so there is no fixed value you can put into a ConfigMap.
You tagged this question with the Helm deployment tool. If you're using Helm and simply trying to avoid repeating this boilerplate in every Deployment spec, you can write a helper template that includes this definition:
{{/* templates/_helpers.tpl */}}
{{- define "machinename" -}}
- name: MACHINENAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
{{- end -}}
Now in your Deployment spec, you can include this template rather than retyping the whole YAML block.
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
{{ include "machinename" . | indent 6 }}
(The exact indent value will depend on the context where you include it, and should be two more than the number of spaces at the start of the env: line. It is important that the line containing indent not itself be indented.)
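For reference, a sketch of what that renders to once Helm expands the include (same values as above):
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
      - name: MACHINENAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName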
Yes, using a ConfigMap would be possible. This Stack Overflow post is quite old, but has some good information in it:
Advantage of using configmaps for environment variables with Kubernetes/Helm
You would need to either mount the ConfigMap as a volume or consume it via environment variables using envFrom. This guide provides both examples:
https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html
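For reference, a minimal sketch of both approaches, using a hypothetical ConfigMap named app-config:
# consume every key of the ConfigMap as an environment variable
envFrom:
  - configMapRef:
      name: app-config
# or mount the ConfigMap as files under /etc/app
volumes:
  - name: config-volume
    configMap:
      name: app-config
volumeMounts:
  - name: config-volume
    mountPath: /etc/app
Note that envFrom and volumeMounts belong under the container spec, while volumes belongs at the pod spec level.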
You can use the projected volume option to merge different ConfigMaps, Secrets, and downward API fields into a single volume mount:
volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: mysecret
            items:
              - key: username
                path: my-group/my-username
        - downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "cpu_limit"
                resourceFieldRef:
                  containerName: container-test
                  resource: limits.cpu
        - configMap:
            name: myconfigmap
            items:
              - key: config
                path: my-group/my-config
Ref: https://kubernetes.io/docs/concepts/storage/projected-volumes/
initContainer
There is another alternative if you want to merge those values: since the node name is only known when the Pod starts, an initContainer can read the fieldRef value first and then write or merge it into the configuration that the main container consumes.
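A minimal sketch of that idea, with all names (merged-config, render-config, myapp) hypothetical: the initContainer gets the node name from the downward API and writes a properties file into an emptyDir shared with the application container.
spec:
  volumes:
    - name: merged-config
      emptyDir: {}
  initContainers:
    - name: render-config
      image: busybox:1.36
      env:
        - name: MACHINENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      # write the dynamic value where the app expects its config
      command: ["sh", "-c", "echo \"machinename=$MACHINENAME\" > /work/app.properties"]
      volumeMounts:
        - name: merged-config
          mountPath: /work
  containers:
    - name: myapp
      image: myapp:latest
      volumeMounts:
        - name: merged-config
          mountPath: /etc/myapp
A real setup would typically also copy the static ConfigMap content into the same directory so the application sees a single merged configuration.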

Can a deploy with multiple ReplicaSets run CMD different command?

I want to create a few pods from the same image (I have the Dockerfile), so I want to use ReplicaSets,
but the final CMD command needs to be different for each container.
For example
(https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
I would also like to move the CMD value out of a hard-coded list, so the value there can be a variable (it will be in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container:
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
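For instance, a sketch of the second Deployment under the same assumptions (the myapp.* helpers from above, a hypothetical .Values.two.replicas, and illustrative npm arguments), using args: so that an entrypoint wrapper in the image is still invoked:
# templates/deployment-two.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-two
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.two.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          args:
            - npm
            - run
            - start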

How to append a list to another list inside a dictionary using Helm?

I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles to the list below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD # For Helm, this refers to the chart version.
    path: guestbook # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
    # helm specific config
    chart: chart-name # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true # ensures that value is treated as a string
      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
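Alternatively, a sketch using Sprig's concat to build the whole list in one expression (the nindent value here is an assumption and has to match wherever valueFiles sits in your template):
valueFiles:
{{- concat (list "values-prod.yaml") .Values.helm.valueFiles | toYaml | nindent 2 }}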

Helm - Configmap - Read and update the file name

I have the application properties defined for each environment inside a config folder.
config/
  application-dev.yml
  application-dit.yml
  application-sit.yml
When I deploy the application in dev, I need to create the ConfigMap from application-dev.yml, but under the name application.yml.
When I deploy the application in dit, I need to create the ConfigMap from application-dit.yml. The file name inside the ConfigMap should always be application.yml.
Any suggestions?
When using helm to manage projects, different values.yaml files are generally used to distinguish between different environments (development/pre-release/online).
Suppose your configmap file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.cm.name }}
data:
  application.yml: |-
    {{- $.Files.Get $.Values.cm.path | nindent 4 }}
In dev, define a values-dev.yaml file:
cm:
  name: test
  path: config/application-dev.yml
When you install the chart in dev, you can use the following command:
helm install test . -f values-dev.yaml
In dit, define a values-dit.yaml file:
cm:
  name: test
  path: config/application-dit.yml
When you install the chart in dit, you can use the following command:
helm install test . -f values-dit.yaml
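Under these assumptions, rendering the chart in dev would produce roughly the following ConfigMap (the body is whatever config/application-dev.yml contains):
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  application.yml: |-
    # ...contents of config/application-dev.yml...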

Modifying Environmental Variables of Helm Chart Dependencies without Forking

I am creating a Helm chart that depends on several Helm charts that are not maintained by me, and I would like to make some configuration changes to these subcharts. The changes are not too complex; I just want to add several environment variables to each of the containers. However, the env fields of the containers are not already templated out in the Helm charts. I would like to avoid forking these charts and maintaining them myself since this is such a trivial change.
Is there an easy way to provide environment variables to several containers in Kubernetes in a flexible way, either through Helm or another tool?
I am currently looking into using Kustomize to do the last-mile changes after Helm fills out the templates, but I am getting hung up on setting up Kustomize patches. In my scenario, I have the environment variables being filled out by Helm in a ConfigMap. I would like to add an envFrom field that reads the ConfigMap and adds the given environment variables to the containers. I want to add the envFrom to the resource YAML files through Kustomize. The snag I am hitting is that Kustomize patch.yaml files are resource specific. Below is an example of my patch.yaml and my kustomization.yaml respectively.
patch.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: does-not-matter
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - configMapRef:
                name: my-env
kustomization.yaml:
resources:
  - all.yaml
patches:
  - path: patch.yaml
    target:
      kind: "StatefulSet"
      name: "*"
To perform the Kustomization, I run:
helm install perceptor ../ --post-renderer ./kustomize
Which basically just fills out the Helm templates and passes them to Kustomize to do the last-mile patches.
In the patch, I have to specify the name of the container ("server") to properly inject my ConfigMap. What I would really like is to be able to provide those environment variables to all containers in a given deployment (as defined by the target constraints in kustomization.yaml), regardless of their name. From what I have seen, it almost looks like I will have to write a separate patch for each container, which is suboptimal. I just started working with Kubernetes, so it is possible that I am missing something that would easily solve this problem.
I understand that you don't want to break the open/closed principle of the subchart your umbrella chart depends on by forking it, but you still have the option of proposing changes that make it more extensible and flexible. Yes, I would suggest you submit a Pull Request or feature request to the Helm chart project in question.
The following code snippet won't break current functionality, and gives users a chance to introduce custom environment variables based on existing ConfigMap(s) in the desired resource's spec.
helm_template.yaml
# helm template
...
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
{{- if .Values.envConfigs }}
{{- range $key, $config := $.Values.envConfigs }}
  - name: {{ $key }}
    valueFrom:
      configMapKeyRef:
        name: {{ $config }}
        key: {{ $key | quote }}
{{- end }}
{{- end }}
values.yaml
#
# values.yaml
#
envConfigs:
  Q3_CFG_MAP: Q3DM17
  Q3_CFG_TIMEOUT: 30
# if empty use:
# envConfigs: {}
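Given those values, a sketch of what the range loop renders for the first entry (the static POD_NAME/POD_NAMESPACE entries stay as written, and a ConfigMap named Q3DM17 containing a key Q3_CFG_MAP is assumed to exist):
env:
  # ...POD_NAME / POD_NAMESPACE entries as above...
  - name: Q3_CFG_MAP
    valueFrom:
      configMapKeyRef:
        name: Q3DM17
        key: "Q3_CFG_MAP"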