I asked "Use one Helm chart for ALL microservices?" and now I'm trying to implement the answer I accepted, i.e., using subcharts. (Note: if there's a better answer for that post, please put it there, not here.)
Per the answer I accepted, I have the following directory structure:
my-deployment-repo/
|- base-microservice/
|  |- templates/
|  |  |- deployment.yml
|  |  |- service.yml
|  |- Chart.yaml
|  |- values.yaml
|- myapp/
|  |- Chart.yaml
|  |- values.yaml
The base-microservice/values.yaml file has:
image:
  name: ""
  version: ""
  repository: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
service:
  type: NodePort
  port: 5000
# More key/value pairs defined
myapp/Chart.yaml has:
apiVersion: v2
name: myapp
description: A Helm chart for Kubernetes
...
dependencies:
  - alias: my-microservice-1
    name: base-microservice
    version: "0.1.0"
    repository: file://../base-microservice
  - alias: my-microservice-2
    name: base-microservice
    version: "0.1.0"
    repository: file://../base-microservice
myapp/values.yaml simply has the following, because I want myapp to use ALL the values in base-microservice/values.yaml except for the values I provide here:
my-microservice-1:
  image:
    name: foo
    version: 1.2.3
my-microservice-2:
  image:
    name: bar
    version: 4.5.6
So now when I do a...
$ helm dependency update ./myapp
$ helm install myapp myapp/
...I want to be able to get, for example, the deployment for the alias my-microservice-1:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: foo # image name for the my-microservice-1 alias
          # These are from the different values.yaml files:
          # <repository from base-microservice>/<image name from myapp>:<version from myapp>
          image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3
IOW, what should the base-microservice/templates/deployment.yaml syntax be to...
spec.template.spec.containers.name: {{ what should be here to produce "foo" }} and
spec.template.spec.containers.image: {{ what should be here to produce "01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3" }}
I hope that makes sense. TIA!
When the template file is eventually rendered, .Values will be a subset specific to this subchart. So in your template code, just use .Values the same way you would if it were a standalone chart.
containers:
  - name: foo {{- /* not templated */}}
    image: {{ with .Values.image }}{{ .repository }}/{{ .name }}:{{ .version }}{{ end }}
I've intentionally chosen to not template name in this example. A container name is only useful in a couple of very specific contexts (to review kubectl logs in a multi-container Pod, for example) and IME it's much easier to set it to a fixed name than to try to template it. You could use {{ .Values.image.name }} here as well if you wanted to.
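Putting it together, a minimal sketch of what base-microservice/templates/deployment.yaml could look like under this approach (the metadata fields here are illustrative assumptions, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Values.image.name }}
spec:
  template:
    spec:
      containers:
        - name: app # fixed container name, per the note above
          image: {{ with .Values.image }}{{ .repository }}/{{ .name }}:{{ .version }}{{ end }}

Because each alias gets its own scoped .Values, the my-microservice-1 instance would render the image as 01234567890.dkr.ecr.us-east-1.amazonaws.com/foo:1.2.3.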
Related
I am using Kubernetes with Helm 3.8.0 and Windows Docker Desktop configured on WSL2.
Sometimes, after running helm install and retrieving a container, the container that gets created behind the scenes is an old container that was created before (even after restarting the computer).
I.e.: the YAML is now declared with password: 12345 and database: test; before, I tried to run the container YAML with password: 11111 and database: my_database.
Now when I do helm install mychart ./mychart --namespace test-chart --create-namespace for the current folder's chart, the container runs with password: 11111 and database: my_database, instead of the new parameters provided. There is no current YAML code with the old password, so I don't understand why the container runs with the old one.
I tried several things, such as docker system prune and restarting Windows Docker Desktop, but I still get the old container, which cannot be seen even in Windows Docker Desktop (I have checked the option in Settings -> Kubernetes -> Show system containers).
After some investigation, I realized that this may be because Kubernetes has its own garbage collection handling of containers, and that is why I may be referring to an old container even though I didn't mean to.
In my case, I am creating a job template (I didn't put any line referencing this job in the _helpers.tpl file; I never changed that file, and I don't know whether that may cause a problem).
Here is my job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myChart.fullname" . }}-migration
  labels:
    name: {{ include "myChart.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-300"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: {{ template "myChart.name" . }}
        release: {{ .Release.Namespace }}
    spec:
      initContainers:
        - name: wait-mysql
          image: {{ .Values.mysql.image }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "12345"
            - name: MYSQL_DATABASE
              value: test
          command:
            - /bin/sh
            - -c
            - |
              service mysql start &
              until mysql -uroot -p12345 -e 'show databases'; do
                echo `date +%H:%M:%S`' - Waiting for mysql...'
                sleep 5
              done
      containers:
        - name: migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12 }}
      restartPolicy: Never
In the job, a database is first created, and after that its data is populated by code.
Also, are the annotations (hooks) necessary?
After running helm install myChart ./myChart --namespace my-namespace --create-namespace, I realized that I am using a very old container, which I don't really need.
I didn't understand whether writing the metadata as in the following example (from Garbage Collection) really helps, and what to put in uid if I don't know it or don't have it.
metadata:
  ...
  ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
Sometimes I really want to reference an existing pod (or container) from several templates (using the same container, which is not stateless, such as a database container: one template for the pod and another for the job). How can I do that, too?
Is there any command (on the command line, or some kind of method) that clears everything cached by garbage collection, or avoids using garbage collection at all? (What are the main benefits of Kubernetes garbage collection?)
I have the application properties defined for each environment inside a config folder.
config/
  application-dev.yml
  application-dit.yml
  application-sit.yml
When I'm deploying the application in dev, I need to create a ConfigMap from application-dev.yml under the name application.yml.
When I'm deploying the application in dit, I need to create the ConfigMap from application-dit.yml, but the file name inside the ConfigMap should always be application.yml.
Any suggestions?
When using helm to manage projects, different values.yaml files are generally used to distinguish between different environments (development/pre-release/online).
Suppose your configmap file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.cm.name }}
data:
  application.yml: |-
    {{- $.Files.Get $.Values.cm.path | nindent 4 }}
In dev, define a values-dev.yaml file:
cm:
  name: test
  path: config/application-dev.yml
When you install the chart in dev, you can use the following command:
helm install test . -f values-dev.yaml
In dit, define a values-dit.yaml file:
cm:
  name: test
  path: config/application-dit.yml
When you install the chart in dit, you can use the following command:
helm install test . -f values-dit.yaml
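For illustration (a sketch, assuming config/application-dev.yml contains typical Spring-style properties), the dev install would render a ConfigMap roughly like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  application.yml: |-
    # whatever config/application-dev.yml contains, e.g.
    spring:
      profiles: dev

So each environment supplies its own source file, but the key inside the ConfigMap is always application.yml.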
I am trying to use the module community.kubernetes.k8s – Manage Kubernetes (K8s) objects with variables from the role (e.g. role/sampleRole/vars file).
It fails when it comes to an integer value, e.g.:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: "{{ name }}"
                image: "{{ image }}"
                ports:
                  - containerPort: {{ containerPort }}
When I deploy with this format, it obviously fails, as it cannot parse the "reference" to the var.
Sample of error:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
found unacceptable key (unhashable type: 'AnsibleMapping')
The error appears to be in 'deploy.yml': line <some line>, column <some column>, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
ports:
- containerPort: {{ containerPort }}
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
When I use quotes on the variable e.g. - containerPort: "{{ containerPort }}" then I get the following error (part of it):
v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Ports: []v1.ContainerPort: v1.ContainerPort.ContainerPort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|nerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"d|..., bigger context ...|\\\\\",\\\\\"name\\\\\":\\\\\"samplegreen\\\\\",\\\\\"ports\\\\\":[{\\\\\"containerPort\\\\\":\\\\\"80\\\\\"}]}],\\\\\"dnsPolicy\\\\\":\\\\\"ClusterFirst\\\\\",\\\\\"restartPolicy\\\\\"|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
I tried to cast the string to int by using - containerPort: "{{ containerPort | int }}" but it did not work. The problem seems to come from the quotes, regardless of how I define the var in my vars file, e.g. containerPort: 80 or containerPort: "80".
I found a similar question on the forum, Ansible, k8s and variables, but the user doesn't seem to have the same problems that I am having.
I am running with the latest version of the module:
$ python3 -m pip show openshift
Name: openshift
Version: 0.11.2
Summary: OpenShift python client
Home-page: https://github.com/openshift/openshift-restclient-python
Author: OpenShift
Author-email: UNKNOWN
License: Apache License Version 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: ruamel.yaml, python-string-utils, jinja2, six, kubernetes
Is there any workaround for this problem, or is it a bug?
Update (08-01-2020): The problem is fixed in version 0.17.0.
$ python3 -m pip show k8s
Name: k8s
Version: 0.17.0
Summary: Python client library for the Kubernetes API
Home-page: https://github.com/fiaas/k8s
Author: FiaaS developers
Author-email: fiaas@googlegroups.com
License: Apache License
Location: /usr/local/lib/python3.8/dist-packages
Requires: requests, pyrfc3339, six, cachetools
You could try the following as a workaround; in this example, we're creating a text template, and then using the from_yaml filter to transform this into our desired data structure:
- name: sample
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec: "{{ spec | from_yaml }}"
  vars:
    spec: |
      replicas: 2
      selector:
        matchLabels:
          app: "{{ app }}"
      template:
        metadata:
          labels:
            app: "{{ app }}"
        spec:
          containers:
            - name: "{{ name }}"
              image: "{{ image }}"
              ports:
                - containerPort: {{ containerPort }}
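This works because the spec block is first rendered as plain text (so the unquoted {{ containerPort }} is fine there), and only afterwards parsed into a data structure by from_yaml, which preserves the integer type. For completeness, a minimal sketch of the role vars this task assumes (names come from the question and error output; the values are illustrative):

# roles/sampleRole/vars/main.yml (illustrative values)
name: samplegreen
namespace: default
app: samplegreen
image: "nginx:1.21"
containerPort: 80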
The solution provided by larsks works perfectly. However, I ran into another problem in my case, where I use templates with somewhat more complex constructs (e.g. loops), and found myself having the same problem.
The only solution I had before was to use ansible.builtin.template – Template a file out to a remote server to copy the some_file.yml.j2 to one of my master nodes, and then deploy it through ansible.builtin.shell – Execute shell commands on targets (e.g. kubectl apply -f some_file.yml).
Thanks to community.kubernetes.k8s – Manage Kubernetes (K8s) objects, I am able to do all this with a single task, e.g. (example taken from the documentation):
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
The only requirement the user needs to satisfy in advance is to have the kubeconfig file placed in the default location (~/.kube/config), or to use the kubeconfig option to point to the location of the file.
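For instance (a sketch; the path is illustrative), the module's kubeconfig option can point at a non-default file:

- name: Apply a template using an explicit kubeconfig
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
    kubeconfig: '/path/to/kubeconfig'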
As a last step, I delegate the task to localhost, e.g.:
- name: Read definition template file from the Ansible controller file system
  community.kubernetes.k8s:
    state: present
    template: '/testing/deployment.j2'
  delegate_to: localhost
The way this task works is that the user effectively connects to their own machine and runs the equivalent of kubectl apply -f some_file.yml.j2 against the LB or master node API, and the API applies the request (if the user has the permissions).
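For illustration, a minimal sketch of what a /testing/deployment.j2 template could contain (all names and values here are hypothetical); since Ansible renders the Jinja2 template before sending it to the API, the unquoted containerPort is fine here:

# /testing/deployment.j2 (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ name }}"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "{{ app }}"
  template:
    metadata:
      labels:
        app: "{{ app }}"
    spec:
      containers:
        - name: "{{ name }}"
          image: "{{ image }}"
          ports:
            - containerPort: {{ containerPort }}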
I am creating a Helm chart that depends on several Helm charts that are not maintained by me, and I would like to make some configuration changes to these subcharts. The configurations are not too complex; I just want to add several environment variables to each of the containers. However, the env fields of the containers are not already templated out in the Helm charts. I would like to avoid forking these charts and maintaining them myself, since this is such a trivial change.
Is there an easy way to provide environmental variables to several containers in Kubernetes in a flexible way, either through Helm or another tool?
I am currently looking into using Kustomize to do the last mile changes after Helm fills out the templates, but I am getting hung up on setting up Kustomize patches. In my scenario, I have the environmental variables being filled out by Helm in a ConfigMap. I would like to add an envFrom field to read the ConfigMap and add the given environment variables to the containers. I want to add the envFrom to the resource YAML files through Kustomize. The snag I am hitting is that Kustomize patch.yaml files are resource specific. Below is an example of my patch.yaml and my kustomization.yaml respectively.
patch.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: does-not-matter
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - configMapRef:
                name: my-env
kustomization.yaml:
resources:
  - all.yaml
patches:
  - path: patch.yaml
    target:
      kind: "StatefulSet"
      name: "*"
To perform the Kustomization, I run:
helm install perceptor ../ --post-renderer ./kustomize
Which basically just fills out the Helm templates and passes them to Kustomize to do the last-mile patches.
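The ./kustomize post-renderer script itself is not shown here; a typical minimal wrapper (a sketch, assuming the file names used in kustomization.yaml above) looks something like:

#!/bin/sh
# Write Helm's rendered manifests (stdin) to the file the kustomization references,
# run kustomize, then emit the patched manifests on stdout for Helm to apply.
cat > all.yaml
kustomize build . && rm all.yaml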
In the patch, I have to specify the name of the container ("server") to properly inject my ConfigMap. What I would really like to do is provide those environment variables to all containers in a given deployment (as defined by the target constraints in kustomization.yaml), regardless of their name. From what I have seen, it almost looks like I will have to write a separate patch for each container, which is suboptimal. I just started working with Kubernetes, so it is possible that I am missing something that would easily solve this problem.
I understand that you don't want to break the open/closed principle of the subchart your umbrella chart depends on by forking it, but you still have the right to propose changes that make it more extensible and flexible. So yes, I would suggest you submit a pull request / feature request to the Helm chart project in question.
The following code snippet won't break current functionality, and gives users a chance to introduce custom environment variables based on existing ConfigMap(s) in the desired resource's spec.
helm_template.yaml
# helm template
...
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
{{- if .Values.envConfigs }}
{{- range $key, $config := $.Values.envConfigs }}
  - name: {{ $key }}
    valueFrom:
      configMapKeyRef:
        name: {{ $config }}
        key: {{ $key | quote }}
{{- end }}
{{- end }}
values.yaml
#
# values.yaml
#
envConfigs:
  Q3_CFG_MAP: Q3DM17
  Q3_CFG_TIMEOUT: 30
# if empty use:
# envConfigs: {}
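With those example values, each envConfigs pair renders one entry; the first one, for instance, would come out roughly as (assuming a ConfigMap named Q3DM17 exists with a matching key):

env:
  # ...the static POD_NAME / POD_NAMESPACE entries above, then:
  - name: Q3_CFG_MAP
    valueFrom:
      configMapKeyRef:
        name: Q3DM17
        key: "Q3_CFG_MAP"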
I have my deployment.yaml file within the templates directory of Helm charts with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine Helm is run on, so I can hide the secrets that way.
How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set while installation.
Skip this part if you already know how to set up template fields.
As you don't want to expose the data, it's better to save it as a Secret in Kubernetes.
First of all, add these two lines to your values file, so that the two values can be set from outside:
username: root
password: password
Now, add a secret.yaml file inside your templates folder and copy this code snippet into it:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment YAML template's env section, like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag, you can set this using an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart
NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user
COMPUTED VALUES:
password: password
username: root-user
HOOKS:
MANIFEST:
---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
        - name: irreverant-meerkat
          image: alpine
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: irreverant-meerkat-auth
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: irreverant-meerkat-auth
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the data of username in secret has changed to root-user.
I have added this example to a GitHub repo.
There is also some discussion in the kubernetes/helm repo regarding this; you can see this issue to learn about all the other ways to use environment variables.
You can pass env key/value pairs from the values YAML by setting up the deployment YAML as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value }}
        {{- end }}
in the values.yaml (note: env must be a map of variable names to values for the range above and the --set flags below to work):
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username/password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
For those looking to use data structures instead of lists for their env variable files, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the helm chart and pass the sensitive values via helm command line.
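For example (a sketch; the release name is illustrative), the sensitive entries can then be overridden at install time:

helm install my-release . \
  --set env.USERNAME="app-username" \
  --set env.PASSWORD="28sin47dsk9ik"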
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the manifest to a script before it is actually sent to the Kubernetes API. Post rendering allows manipulating the manifest in multiple ways (e.g. using kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $<VARNAME>, which could collide with variables in the templates, such as shell scripts in liveness probes. So it is better to explicitly define the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml), use the values defined in values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc to these strings, as they get injected into the manifest after Helm has already processed all YAML files. Instead you can encode them in the post renderer if required.
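Since a Secret's data: fields must be base64, one simple approach (a sketch; an alternative to encoding inside the post renderer itself) is to export the values already encoded before running Helm:

# export the values pre-encoded so the substituted data: fields are valid base64
export USERNAME=$(printf '%s' john | base64)
export PASSWORD=$(printf '%s' my-secret | base64)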
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an env variable inside the chart by reading the environment variables themselves, not by passing them with --set.
For example: I have set a key "my_db_password" and want to change the value by looking at the value of an env variable; that is not supported.
I am not very sure about Go templates, but I guess this is disabled for the security reasons they explain in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller's environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is to just set the value directly. For example, in your values.yml you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use the environment variable, do export before that:
# set your env variable
export MYAPP_SERVICE=hello
# pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
You can see this information at the beginning of your YAML, where your "hello" was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store these kinds of sensitive values in a folder ignored by your VCS, and use the Helm .Files object to read them and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host operating the Helm chart to set any OS-specific environment variables, and it keeps the chart self-contained while not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In deployment.yaml file
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the chart needs is accessible from within the directory where you define everything else. And instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically, or copied from a committed template with dummy values. You can also make Helm fail early on install/upgrade if the file is missing (by default .Files.Get returns an empty string for a missing file; see the sketch below), as opposed to silently creating your Secret with username="" and password="", which only becomes obvious once your changes are applied to the cluster.
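A minimal sketch of that guard, using Helm's required function (the error message is illustrative):

stringData:
{{ required "secrets file is missing; create it from the committed template" (.Files.Get "<chart_base_directory>/secrets") | indent 2 }}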