Kubernetes - ConfigMap templated variables

I have two env vars in a pod (or a config map):
- TARGET_URL=http://www.example.com
- TARGET_PARAM=param
Is there any way for me to provide a third env var that is derived from both of these, something like ${TARGET_URL}/mysite/${TARGET_PARAM}?
Thanks!

For environment variables (and a couple of other fields in the pod spec, including args and command) there is a Make-like $(VARIABLE) syntax that gets expanded; see, for example, the API documentation for env[].value. This could look like:
env:
  - name: TARGET_URL
    valueFrom:
      configMapKeyRef:
        name: cm             # the ConfigMap's name
        key: TARGET_URL      # the key inside it
  - name: TARGET_PARAM
    valueFrom:
      configMapKeyRef:
        name: cm
        key: TARGET_PARAM
  - name: TARGET_DETAIL_URL
    value: $(TARGET_URL)/mysite/$(TARGET_PARAM)
If you are depending on mounting a ConfigMap into a container as files, then it can only contain static content; this trick won't work.
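For reference, a minimal sketch of the ConfigMap the snippet above assumes (the name cm and its keys are illustrative, not from the original answer):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  TARGET_URL: http://www.example.com
  TARGET_PARAM: param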

I don't think this is possible right now without a third-party tool; per the API reference, plain YAML does not support composing one variable from other variables. But I will tell you about a third-party tool: Helm.
It is possible to achieve this using Helm. Your template would look like:
containers:
  - name: {{ .Values.Backend.name }}
    image: "{{ .Values.Backend.image.repository }}:{{ .Values.Backend.image.tag }}"
    imagePullPolicy: "{{ .Values.Backend.image.pullPolicy }}"
    env:
      - name: TARGET_URL
        value: {{ .Values.URL }}
      - name: TARGET_PARAM
        value: {{ .Values.PARAM }}
      - name: URL
        value: {{ .Values.URL }}/mysite/{{ .Values.PARAM }}
and in values.yaml you add the parameters that feed TARGET_URL and TARGET_PARAM:
URL: http://www.example.com
PARAM: param
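The template above also references a Backend section, so a fuller values.yaml could look like this (the image settings are placeholders, not from the original answer):
Backend:
  name: backend                   # placeholder
  image:
    repository: myrepo/backend    # placeholder
    tag: latest                   # placeholder
    pullPolicy: IfNotPresent
URL: http://www.example.com
PARAM: param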

You can do this using an init script that you call from the Docker entrypoint.
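A minimal sketch of that approach (the image name, entrypoint path, and ConfigMap name cm are assumptions): a shell wrapper derives the third variable and then execs the real process.
containers:
  - name: app
    image: myapp:latest            # placeholder image
    command: ["/bin/sh", "-c"]
    args:
      # derive the combined URL from the two existing variables,
      # then hand control to the real entrypoint
      - export TARGET_DETAIL_URL="${TARGET_URL}/mysite/${TARGET_PARAM}"; exec /usr/local/bin/myapp
    envFrom:
      - configMapRef:
          name: cm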


Use fieldRef in Kubernetes configMap

I have the following environment variable in my Kubernetes template:
envFrom:
  - configMapRef:
      name: configmap
env:
  - name: MACHINENAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
I would like to use the value from 'fieldRef' in a config map instead. Would this kind of modification be possible?
In other words, I want to add the 'MACHINENAME' environment variable to the config map, so I don't have to use the 'env:' block.
You cannot do this in the way you describe.
A ConfigMap only contains fixed string-key/string-value pairs. You cannot embed a more complex structure into a ConfigMap, or say that a ConfigMap value will be resolved using the downward API when a Pod is created. The node name of the pod, and most of the other downward API information, will be different for each pod using the ConfigMap (and likely even for each replica of the same deployment) and so there is no fixed value you can put into a ConfigMap.
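To illustrate, a ConfigMap's data section is just literal strings; a minimal sketch (the name and key are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
data:
  SOME_KEY: "a fixed string value"   # no downward API, no per-pod substitution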
You tagged this question with the Helm deployment tool. If you're using Helm and you're simply trying to avoid repeating this boilerplate in every Deployment spec, you can write a helper template that includes this definition:
{{/* templates/_helpers.tpl */}}
{{- define "machinename" -}}
- name: MACHINENAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
{{- end -}}
Now in your Deployment spec, you can include this template rather than retyping the whole YAML block.
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
{{ include "machinename" . | indent 6 }}
(The exact indent value will depend on the context where you include it, and should be two more than the number of spaces at the start of the env: line. It is important that the line containing indent not itself be indented.)
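With indent 6 and the layout above, the rendered block would come out roughly as:
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
      - name: MACHINENAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName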
Yes, using a ConfigMap would be possible. This Stack Overflow post is quite old, but it has some good information:
Advantage of using configmaps for environment variables with Kubernetes/Helm
You would need to either mount the ConfigMap as a volume or consume it via environment variables using envFrom. This guide provides both examples (see also the sketch after the link):
https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html
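A minimal sketch of the envFrom variant, assuming a ConfigMap named configmap (the container name and image are placeholders):
containers:
  - name: app
    image: myapp:latest          # placeholder image
    envFrom:
      - configMapRef:
          name: configmap        # every key in the ConfigMap becomes an env var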
You can use the volume mount option and merge different ConfigMaps, Secrets, and downward API fields into one projected volume:
volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: mysecret
            items:
              - key: username
                path: my-group/my-username
        - downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "cpu_limit"
                resourceFieldRef:
                  containerName: container-test
                  resource: limits.cpu
        - configMap:
            name: myconfigmap
            items:
              - key: config
                path: my-group/my-config
Ref: https://kubernetes.io/docs/concepts/storage/projected-volumes/
initContainer
There is another alternative you can follow if you want to merge those values: an initContainer. Since the node name is only known per pod at runtime, you have to obtain the value first and then edit/add it to the ConfigMap (or a shared file) from the init container, as sketched below.
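A hedged sketch of that approach (image names, paths, and the application command are illustrative, not from the original answer): the init container reads the node name via the downward API and writes a derived value to a shared emptyDir volume that the main container sources before starting.
initContainers:
  - name: render-config
    image: busybox:1.36               # placeholder image
    command: ["/bin/sh", "-c"]
    args:
      # capture the node name and persist it for the main container
      - echo "export MACHINENAME=$NODE_NAME" > /work/derived.env
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
    volumeMounts:
      - name: work
        mountPath: /work
containers:
  - name: app
    image: myapp:latest               # placeholder image
    command: ["/bin/sh", "-c"]
    args:
      # load the derived value, then start the real process
      - . /work/derived.env && exec /usr/local/bin/myapp
    volumeMounts:
      - name: work
        mountPath: /work
volumes:
  - name: work
    emptyDir: {}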

Is it possible to retrieve pod name from values.yaml file of helm chart?

Quite new to Helm. Currently, I create an env variable so that when I deploy my pod, I can see the pod name in the environment variables list. This is done like so in the template file:
containers:
  - name: my_container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use this value in the .tpl? Other configurations, like ConfigMap names, depend on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!

How can I check if a k8s secret exists in a Helm chart/k8s template, or use a default value?

I have a template part like:
spec:
  containers:
    - name: webinspect-runner-{{ .Values.pipeline.sequence }}
      ...
      env:
        - name: wi_base_url
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-base-url
        - name: wi_type
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-type
The webinspect/wi-type secret key may be missing. When it is, I want the container either not to have the wi_type env var at all or (better) to get a default value, but k8s just reports CreateContainerConfigError: couldn't find key wi-type in Secret namespace/webinspect and the pod fails.
Is there a way to use a default value, or skip the block if the secret does not exist?
Two options: the first is to add optional: true to the secretKeyRef block(s), which makes Kubernetes skip the variable when the secret or key is missing. The second is a much more complex approach using the lookup template function in Helm. Probably go with the first :)
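Applied to the snippet above, the first option would look roughly like this:
env:
  - name: wi_type
    valueFrom:
      secretKeyRef:
        name: webinspect
        key: wi-type
        optional: true   # the env var is simply omitted if the secret or key is absent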

Can an OpenShift template env variable be used in the name of a secretKeyRef?

I have a template with a deployment in it. One of my deployment's env variables takes its value from a secret whose name contains the name of the namespace that just used the template.
The secret's name must contain the namespace name; I can't change that.
I know that it is possible to use env variables as the value of other env variables. It looks like this:
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: FULL_NAME
    value: ${MY_NAME}.$(MY_NAMESPACE)
This way, if my namespace's name is app and the MY_NAME parameter is dan:
echo $FULL_NAME
> dan.app
But I want to use that name as a reference to a secret name. Like this:
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: FULL_NAME_SECRET
    valueFrom:
      secretKeyRef:
        name: ${MY_NAME}.$(MY_NAMESPACE)
        key: app_name
I get an event that says:
Pod "test-0" is invalid[spec.containers[0].env[1].valueFrom.secretKeyRef.name: Invalid value: "dan.$(MY_NAMESPACE)": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start with an alphanumeric character
So basically it does not expand $(MY_NAMESPACE) into the actual value of that env variable.
The secretKeyRef: secret name must be a fixed string. Variable expansion only happens in a couple of places in the pod spec (in env: values, command:, and args:, but nowhere else).
For your immediate problem, it may help to recognize that a Pod and any matching Secret must be in the same namespace. You don't need to include the namespace name in the Secret name, and there's no way to refer to a Secret in a different namespace if you needed to.
- name: FULL_NAME_SECRET
  valueFrom:
    secretKeyRef:
      name: dan # always in the same namespace
      key: app_name
More broadly, you can use a tool like Helm to fill in some of these values from templates. These can use data known at install time to construct "fixed strings" that can be submitted to Kubernetes.
{{/* In a Helm chart's templates/deployment.yaml */}}
- name: FULL_NAME_SECRET
  valueFrom:
    secretKeyRef:
      name: {{ include "dan.name" . }}.{{ .Release.Namespace }}
      key: app_name
Here the parts in curly braces {{ ... }} are expanded by Helm's template engine. .Release.Namespace is the namespace into which you're installing, and include "dan.name" . calls a helper template that constructs the "dan" name in a way that supports multiple installations in the same namespace.
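That helper typically lives in templates/_helpers.tpl; a possible sketch following the standard chart scaffolding (the exact definition depends on your chart, so treat this as an assumption):
{{/* templates/_helpers.tpl */}}
{{- define "dan.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}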

Empty variable when using `status.hostIP` as reference field for my env variable in kubernetes

I'm deploying a Kubernetes pod using Helm v3; my kubectl client and server are above 1.7, so it should support reference fields. However, when I deploy, the value is just empty.
using
environment:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
where DD_AGENT_HOST is the env variable that should be given the host IP.
Any idea why this might be happening?
Had to add it to the container specification directly, rather than passing it in through an env value and using include from Helm, as that doesn't work.
The issue is related to the Helm app deployment template (if you use one). For instance, if you have a deployment.yaml with
env:
{{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
{{- end }}
and one of the env values is a valueFrom, you have to add it explicitly (unless there is a nicer way of doing it; see the sketch at the end):
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Otherwise the range above will not render a valueFrom, and as a result DD_AGENT_HOST will be empty.
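One possible nicer way (a sketch, assuming each entry in .Values.env carries either a value or a valueFrom; this is not from the original answer) is to branch inside the range and render the valueFrom block with toYaml:
env:
{{- range .Values.env }}
  - name: {{ .name }}
{{- if .valueFrom }}
    valueFrom:
{{ toYaml .valueFrom | indent 6 }}
{{- else }}
    value: {{ .value | quote }}
{{- end }}
{{- end }}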