String operations on env variables in Kubernetes

I have a question regarding Kubernetes YAML string operations.
I need to set an env variable based on the hostname of the container that is deployed and append a port number to this variable.
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
How do I create another env variable that uses MY_POD_NAME and makes it look like this: uri://$MY_POD_NAME:9099/
This has to be defined as an env variable. Are there string operations allowed in Kubernetes YAML files?

You can do something like:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_URI
  value: "uri://$(MY_POD_NAME):9099/"
We have been using that since Kubernetes 1.4.
The $() syntax is processed by Kubernetes itself; it does not work everywhere, but it does work for env variables.
If your container contains bash, you can also leverage bash variable expansion:
"command": ["/bin/bash"],
"args": [ "-c",
  "MY_POD_URI_BASH=uri://${MY_POD_NAME}:9099/ originalEntryPoint.sh"
],
${} is not touched by Kubernetes; it is evaluated later, inside the container, by bash. If you have the choice, prefer the first option with $().
Note: order matters in the declaration. In the example above, if MY_POD_NAME were defined later in the env array, the expansion would not work.

You can't do this directly.
You should run a startup script that uses the Pod env variables you have accessible to set any additional variables you need, and then launch your service from that script.
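A minimal sketch of such a startup script; the script and service names are illustrative, and MY_POD_NAME is assumed to be injected via the downward API as shown in the question (the fallback value is only so the script runs standalone):

```shell
#!/bin/sh
# Hypothetical wrapper entrypoint: derive the URI from the pod name,
# then hand off to the real service.
MY_POD_NAME="${MY_POD_NAME:-pod-0}"            # fallback for standalone runs
export MY_POD_URI="uri://${MY_POD_NAME}:9099/"
echo "MY_POD_URI=${MY_POD_URI}"
# exec /originalEntryPoint.sh                  # start the actual service
```

The image's command would point at this wrapper instead of the original entrypoint, so every process it execs inherits the derived variable.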

Related

How to convert an environment variable declared inside a "yaml" file to something inside the ".env" file?

I have an environment variable file that is declared like following inside a deployment.yaml file:
env:
  - name: NATS_CLIENT_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
I am wondering: if I want to use an .env file instead of the deployment.yaml file, how can I declare the above variable within the .env?
Maybe you can create a ConfigMap that contains your .env file and use that ConfigMap to inject the environment variables.
There is an example here: https://humanitec.com/blog/handling-environment-variables-with-kubernetes
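A minimal sketch of that ConfigMap approach; all names and values are illustrative. The static key/value pairs from the .env go into a ConfigMap, and envFrom injects every key as an environment variable:

```yaml
# Hypothetical ConfigMap holding the static part of the .env file
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  NATS_URL: "nats://nats-srv:4222"
  NATS_CLUSTER_ID: "ticketing"
---
# Deployment fragment: envFrom injects every ConfigMap key as an env variable
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - configMapRef:
                name: app-env
```

Note that NATS_CLIENT_ID itself cannot move into the .env: it comes from the downward API and differs per pod, so it has to stay as a valueFrom entry in the deployment.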

Is it possible to retrieve pod name from values.yaml file of helm chart?

Quite new to Helm. Currently, I create an env variable in a way that when I deploy my pod, I am able to see the pod name in the environment variables list. This can be done like so in the template file:
containers:
  - name: my_container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use this value in the .tpl? Other configurations, like configmap names, depend on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!

Using a variable within a path in Kubernetes

I have a simple StatefulSet with two containers. I just want to share a path by an emptyDir volume:
volumes:
  - name: shared-folder
    emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
    - sleep
    - "3600"
  volumeMounts:
    - mountPath: /cache
      name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
  - name: shared-folder
    mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve it either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use an env variable in a mount path, you can use subPath with expanded environment variables (k8s v1.17+).
In your case it would look like the following:
containers:
  - env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /cache
        name: shared-folder
        subPathExpr: $(MY_POD_NAME)
I tested this, and with plain Kubernetes env variables (k8s < 1.16) it isn't possible to achieve what you want: the variable only becomes accessible after the pod is deployed, and you are referencing it before that happens.
You can use Helm to define your mountPath and StatefulSet with the same value in the values.yaml file, then use that same value both for the mountPath field and the StatefulSet name. You can read about this here.
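A minimal sketch of that Helm approach, assuming a hypothetical values.yaml key cacheSubdir that is reused for both the StatefulSet name and the mount path (all names are illustrative):

```yaml
# values.yaml (key name is illustrative)
cacheSubdir: my-app-0

# templates/statefulset.yaml (fragment)
# The same value names the StatefulSet and parameterizes the mount path,
# so both stay in sync at render time.
metadata:
  name: {{ .Values.cacheSubdir }}
spec:
  template:
    spec:
      containers:
        - name: test
          volumeMounts:
            - name: shared-folder
              mountPath: /cache/{{ .Values.cacheSubdir }}
```

The substitution happens when Helm renders the template, before the manifest is POSTed to the API server, which is why this works where runtime env-variable expansion does not.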
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file, in which the referenced environment variables are replaced with concrete values before the file is submitted.
As this is a known "quirk" of Kubernetes, there already exist tools to circumvent this problem. Helm is one of those tools and is very pleasant to use.

Set Kubernetes env variable from container's image version

I was wondering if it's possible to refer to the image field in a Kubernetes deployment YAML file, as:
env:
  - name: VERSION
    valueFrom:
      containerField: spec.image
Please let me know. Thank you.
The image value in a pod definition cannot be passed as an environment variable using fieldRef.
The only supported values are metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs, plus resource fields (memory/cpu requests and limits) and the container ephemeral-storage limit/request.
As a workaround, the version can be set as a label and that label passed as an environment variable, for example:
env:
  - name: VERSION
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['version']
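For that workaround to resolve anything, the label has to actually be defined on the pod template. A minimal sketch, with illustrative names and a version value that you would keep in step with the image tag by hand or via your CI pipeline:

```yaml
# Hypothetical deployment fragment: the label carries the version,
# and fieldRef exposes it to the container as an env variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        version: "1.2.3"          # keep in step with the image tag below
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.2.3
          env:
            - name: VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['version']
```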

Can an openshift template parameter refer to the project name in which it is getting deployed?

I am trying to deploy Kong API Gateway via a template to my OpenShift project. The problem is that Kong seems to be doing some DNS stuff that causes sporadic failure of DNS resolution. The workaround is to use the FQDN (<name>.<project_name>.svc.cluster.local). So, in my template I would like to do:
- env:
    - name: KONG_DATABASE
      value: postgres
    - name: KONG_PG_HOST
      value: "${APP_NAME}.${PROJECT_NAME}.svc.cluster.local"
I am just not sure how to get the current PROJECT_NAME, or if perhaps there is a default set of available parameters...
You can read the namespace (project name) from the Kubernetes downward API into an environment variable and then use that in the value.
See the OpenShift docs here for an example.
Update based on Clayton's comment:
Tested and the following snippet from the deployment config works.
- env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: EXAMPLE
      value: example.$(MY_POD_NAMESPACE)
Inside the running container:
sh-4.2$ echo $MY_POD_NAMESPACE
testing
sh-4.2$ echo $EXAMPLE
example.testing
In the environment screen of the UI it appears as the literal string example.$(MY_POD_NAMESPACE); the $() expansion only takes effect inside the running container.