Helm Environment Variables in if else - kubernetes

When building the image path, I want to fetch the Docker registry address from a ConfigMap.
I can't hardcode the registry address in the values.yaml file because the registry address differs for each customer, and I don't want to ask customers to enter it manually. These Helm charts are deployed via Argo CD, so fetching the registry IP via a shell script and then invoking the helm command is also not an option.
I tried the code below. It isn't working because the env variables are not available in the context where the image path is rendered.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "helm-guestbook.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          {{- if eq .Values.isOnPrem "true" }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
          {{- else }}
          env:
            - name: DOCKER_REGISTRY_IP
              valueFrom:
                configMapKeyRef:
                  name: docker-registry-config
                  key: DOCKER_REGISTRY_IP
Any pointers on how I can solve this using Helm itself? Thanks.

Check out the lookup function, https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
Though this can get very complicated very quickly, so be careful not to overuse it.
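As a rough sketch under the question's naming (ConfigMap docker-registry-config, key DOCKER_REGISTRY_IP), the image line could become:

{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "docker-registry-config" }}
{{- $registryIP := "" }}
{{- if $cm }}
{{- $registryIP = index $cm.data "DOCKER_REGISTRY_IP" }}
{{- end }}
image: {{ printf "%s/%s:%s" $registryIP .Values.image.repository .Values.image.tag }}

One caveat: lookup returns an empty object during helm template and --dry-run, and Argo CD renders charts with helm template by default, so test whether this works in your Argo CD setup before relying on it.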

Related

pass constant to skaffold

I am trying to use a constant in skaffold and to access it in a skaffold profile, for example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
In the dev.yaml profile I need to access it somehow, something like {{ .Template.SKAFFOLD_SOME_IP }}, which should render as 199.99.99.99.
I tried the skaffold envTemplate and setValueTemplates fields, but had no success and could not find any examples on the web.
Basically, I found a solution which I truly don't like, but it works.
In the dev profile (values.dev.yaml) I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> placeholder will be replaced with the SOME_IP constant, which becomes 199.99.99.99 at skaffold run time.
Now to run skaffold I do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
After the above sed, the dev profile (values.dev.yaml) contains the SOME_IP constant instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
To use the SKAFFOLD_SOME_IP variable that you have set in your skaffold.yaml, you can write the chart template for the Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image }}
          env:
            - name: SKAFFOLD_SOME_IP
              value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This creates the environment variable SKAFFOLD_SOME_IP in the Kubernetes pods, and you can then read it from Go, for example, like this:
os.Getenv("SKAFFOLD_SOME_IP")
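Since setValueTemplates passes SKAFFOLD_SOME_IP in as a Helm value, it can help to declare a default for it in the chart's values.yaml so the chart also renders outside of skaffold. A minimal sketch (the default IP and image value here are assumptions):

# values.yaml
replicaCount: 1
image: myapp:latest            # assumption: whatever image the chart normally uses
SKAFFOLD_SOME_IP: "127.0.0.1"  # placeholder; overridden by skaffold's setValueTemplates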

helm - run pods and dependencies with a predefined flow order

I am using K8S with helm.
I need to run pods and dependencies with a predefined flow order.
How can I create Helm dependencies that run a pod only once (i.e. populate the database for the first time) and exit after the first success?
Also, I want to run some pods only when certain conditions occur and after another pod has been created.
I need to build 2 pods, as described below:
I have a database.
1st step is to create the database.
2nd step is to populate the db.
Once I populate the db, this job needs to finish.
3rd step is another pod (not the db pod) that uses the database, and is always in listen mode (never stops).
Can I define the order in which the dependencies run (rather than always in parallel)?
I see that helm create produces templates for deployment.yaml and service.yaml; would pod.yaml be a better choice here?
What are the best chart types for this scenario?
Also, I need to know the chart hierarchy.
I.e. with a listener chart, one pod for database creation, and one pod for database population (deleted when finished), I may need a chart tree hierarchy that reflects the flow.
The main chart uses the populated data (after all the sub-charts and templates have run properly; BTW, can I have several templates in the same chart?).
What is the correct tree flow?
Thanks.
There is a fixed order in which Helm creates resources, and you cannot influence it other than with hooks.
Helm hooks can cause more problems than they solve, in my experience. This is because most often they actually rely on resources which are only available after the hooks are done: for example, configmaps, secrets, and service accounts / rolebindings. That leads you to move more and more things into the hook lifecycle, which isn't idiomatic, IMO. It also leaves them dangling when uninstalling a release.
I tend to use jobs and init containers that block until the jobs are done:
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: bitnami/kubectl
          args:
            - wait
            - pod/mysql
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: migration
          image: myapp
          args: [--migrate]
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-migration
          image: bitnami/kubectl
          args:
            - wait
            - job/migration
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: myapp
          image: myapp
          args: [--server]
Moving the migration into its own Job is beneficial if you want to scale your application horizontally. Your migration needs to run only once, so it doesn't make sense to run it for each deployed replica.
Also, in case a pod crashes and restarts, the migration doesn't need to run again, so having it in a separate one-time Job makes sense.
The main chart structure would look like this:
.
├── Chart.lock
├── charts
│   └── mysql-8.8.26.tgz
├── Chart.yaml
├── templates
│   ├── deployment.yaml    # waits for db migration job
│   └── migration-job.yaml # waits for mysql statefulset master pod
└── values.yaml
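For completeness, the mysql chart under charts/ would be declared as a dependency in Chart.yaml, roughly like this (the chart name and version come from the tree above; the chart name "myapp" and the repository URL are assumptions based on the Bitnami chart):

# Chart.yaml (sketch)
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: mysql
    version: 8.8.26
    repository: https://charts.bitnami.com/bitnami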
You can achieve this using Helm hooks and K8s Jobs. Below is the same setup defined for a Rails application.
The first step is to define a K8s Job that creates and populates the db:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "my-chart.name" . }}-db-prepare
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ template "my-chart.name" . }}-db-prepare
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["rake", "db:extensions", "db:migrate", "db:seed"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      initContainers:
        - name: init-wait-for-dependencies
          image: wshihadeh/wait_for:v1.2
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      restartPolicy: Never
Note the following:
1. The Job definition has Helm hooks so that it runs on each deployment and is the first task:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded
2. The container command takes care of preparing the db:
command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]
3. The Job will not start until the db connection is up (this is achieved via initContainers):
args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
The second step is to define the application Deployment object. This can be a regular Deployment object (make sure that you don't use Helm hooks). Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "my-chart.name" . }}-web
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.webReplicaCount }}
  selector:
    matchLabels:
      app: {{ template "my-chart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
        service: web
    spec:
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      containers:
        - name: {{ template "my-chart.name" . }}-web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["web"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: {{ .Values.restartPolicy }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
If I understand correctly, you want to build a dependency chain in your deployment strategy to ensure certain things are prepared before any of your applications start; in your case, you want a deployed and pre-populated database before your app starts.
I propose not building a dependency chain like this, because it complicates your deployment pipeline and prevents proper scaling of your deployment processes once you start to deploy more than a couple of apps. In highly dynamic environments like Kubernetes, every deployment should be able to check the prerequisites it needs to start on its own, without depending on an order of deployments.
This can be achieved with a combination of initContainers and probes. Both can be specified per deployment to prevent it from failing if certain prerequisites are not met and/or to fulfill certain prerequisites before a service starts routing traffic to your deployment (in your case, the database).
In short:
to populate a database volume before the database starts, use an initContainer
to let the database serve traffic only after its initialization and prepopulation, define probes to check for these conditions; your database will only start to serve traffic after its livenessProbe and readinessProbe have succeeded, and if it needs extra time, protect the pod from being terminated with a startupProbe
to ensure the deployment of your app does not start and fail before the database is ready, use an initContainer to check if the database is ready to serve traffic before your app starts (see the sketch after this list)
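A minimal sketch of the database side, assuming a MySQL image as in the earlier answer (adapt the mysqladmin check and any credentials to your database):

apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      # only receive traffic once the server answers pings
      readinessProbe:
        exec:
          command: ["mysqladmin", "ping", "-h", "127.0.0.1"]
        periodSeconds: 10
      # give initialization/prepopulation extra time before liveness
      # checks are allowed to restart the container
      startupProbe:
        exec:
          command: ["mysqladmin", "ping", "-h", "127.0.0.1"]
        failureThreshold: 30
        periodSeconds: 10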
Check out
https://12factor.net/ (general guideline for dynamic systems)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
for more information.

Create K8s Secret with Pre-Install hook

I'm running a migration Job as a pre-install hook, so I also created a Secret with DB values as a pre-install hook with a lower weight (it should run before the migration), and everything works fine: both the secret and the migration. The problem is that the secret is deleted afterwards, which causes the regular pods to fail because they can't find the secret, and I can't figure out why.
apiVersion: v1
kind: Secret
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.secrets.name }}
    chart: {{ .Values.secrets.name }}
  name: {{ .Values.secrets.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
type: Opaque
data:
{{- range $key, $val := .Values.secrets.values }}
  {{ $key }}: {{ $val }}
{{- end}}
This is what the migration job looks like:
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.migration.name }}
    chart: {{ .Values.migration.name }}
  name: {{ .Values.migration.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ .Values.migration.name }}
        release: {{ .Values.migration.name }}
    spec:
      containers:
        - # other container config values omitted
          env:
            - name: APP_ROLE
              value: {{ .Values.migration.role | quote }}
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}
      restartPolicy: Never
You've been caught using chart hooks in a way that's not really intended.
Have a look at the official helm docs for chart hooks here: Helm Docs
Scroll to the very bottom, to "Hook Deletion Policies", and you'll read:
If no hook deletion policy annotation is specified, the before-hook-creation behavior applies by default.
What happens is: Helm runs the hook that creates the secret, the secret is created, the hook succeeds, Helm goes on to run the next hook (your migration) and deletes the secret again before executing it.
Hooks are not intended to create resources that stay around. You could try to hack your way around it by setting a hook-delete-policy of hook-failed on the secret, but I'm not really sure what the outcome would be.
Ideally, you'd run the migration job of your app in an init container of your app. This way, you would create the secret normally, without a hook, and the init container and the app could reuse the same secret.
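A rough sketch of that shape, reusing the names from the question (the image and migration command are placeholders):

# The Secret becomes a regular resource (no hook annotations), so it persists:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.secrets.name }}
  namespace: {{ .Release.Namespace }}
type: Opaque
data:
{{- range $key, $val := .Values.secrets.values }}
  {{ $key }}: {{ $val }}
{{- end }}
---
# In the app Deployment's pod template, the migration runs as an init container:
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: my-app-image   # placeholder
          args: ["migrate"]     # placeholder migration command
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}
      containers:
        - name: app
          image: my-app-image   # placeholder
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}

Note that with several replicas each pod runs the init container, so the migration should be idempotent (or guarded by a lock in the database).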

Why doesn't helm use the name defined in the deployment template?

i.e. from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod into a plain string value like "podname1234", and it isn't used. I even tried removing the name setting entirely, and the resulting pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need to use this is if you need to kubectl logs (or, more rarely, kubectl exec) in a multi-container pod; if you are in this case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
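With a short fixed container name like that, fetching logs from the example pod above would look like (pod name taken from the question):
kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c container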

How do I load multiple templated config files into a helm chart?

So I am trying to build a Helm chart.
In my templates directory I've got a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  {{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of json files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately I'd like to be sure in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago with adding files and templates directly to a container.
Look at this sample syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
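For reference, this assumes a chart layout roughly like the following (only the relevant paths shown):

.
├── Chart.yaml
├── files
│   ├── nginx.conf
│   ├── ssl.conf
│   ├── dhparam.pem
│   ├── fastcgi.conf
│   ├── mime.types
│   └── proxy_params.conf
└── templates
    └── configmap.yaml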
This approach is still current. It covers two types of importing:
regular files without templating
configuration files with dynamic variables inside
Please do not forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
To include all files from the directory config-dir/, use {{ range ... }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
{{- end }}
{{- end }}
my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is RO anyway for configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode rx for all
I assume that a.json, b.json, c.json etc. is a defined list and you know all the contents (apart from the bits that you want to set as values through templated variables). I'm also assuming you only want to expose parts of the file content to users, not to let them configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from @hypnoglow of following the datadog chart seems a good one to me.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount them to the same directory. It would be tempting, for cleanliness, to use different configmaps, but that would then be a problem for mounting to the same directory. It would also be nice to load the files independently using .Files.Glob, so you could reference the files without putting the whole content in the configmap, but I don't think you can do that and still use templated variables in them... However, you can do it with .Files.Get to read the file content as a string and then pass that into tpl to put it through the templating engine, as @Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer, as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good, and some people may prefer the less abstract approach.
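For what it's worth, the two ideas can be combined: iterate over the files with .Files.Glob and still push each one through the templating engine with tpl. A sketch, assuming the configmaps/ directory from the question:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{- range $path, $_ := .Files.Glob "configmaps/*.json" }}
  {{ base $path }}: {{ tpl ($.Files.Get $path) $ | quote }}
{{- end }}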