Use an environment variable as integer in yaml file - kubernetes

I have an application in a container that reads a YAML file containing data like
initializationCount=0
port=980
Now I want to remove those hard-coded values from the application and move them out of the container. Hence I created a ConfigMap with all the configuration values and used the ConfigMap keys as environment variables when deploying the pod.
My issue is that if I want to use these environment variables in my YAML file, like
initializationCount=${iCount}
port=${port}
the API which reads this YAML file throws a NumberFormatException, since env variables are always strings. I do not have control over the API which reads my YAML file.
I have tried
initializationCount=!!int ${iCount}
but it does not work.

Rather than pulling in the configmap values as environment variables, try mounting the configmap as a volume at runtime.
The configmap should have one key which is the name of your YAML file. The value for that key should be the contents of the file.
This data will be mounted to the container's filesystem when the pod initializes. That way your app will read the config YAML the same way it has been, but the values will be externalized in the configmap.
Something like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    initializationCount=0
    port=980
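To check that the file landed where the app expects it, using the Pod name and mount path from the example above:
kubectl exec my-pod -- cat /etc/config/config.yaml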
Kubernetes docs here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

Related

ConfigMaps are not being passed properly to the containers

I have a Kubernetes ConfigMap like the one below.
apiVersion: v1
data:
  server.properties: |+
    server.hostname=test.com
kind: ConfigMap
metadata:
  name: my-config
And I tried to read this config inside a container.
containers:
- name: testserver
  env:
  - name: server.hostname
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: server.properties.server.hostname
However, these configs are not being passed to the container properly. Do I need to make any changes to my configs?
What you have in there isn't the right key. ConfigMaps are strictly one level of key/value pairs. The |+ syntax is YAML for a multiline string, but the fact that the data inside that string has structure of its own is not something the system knows about. As far as Kubernetes is concerned, you have one key there, server.properties, with an opaque string value.
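A minimal sketch of one fix, assuming you can reshape the ConfigMap: promote the property to its own top-level key so it can be referenced directly. (The env variable is renamed to SERVER_HOSTNAME here, since names with dots are awkward to use from a shell.)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  server.hostname: test.com
Then, in the container spec:
containers:
- name: testserver
  env:
  - name: SERVER_HOSTNAME
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: server.hostname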

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my goal, then show what I have done to achieve it. My goal is to:
create a ConfigMap that holds the path of a properties file
create a Deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: my-client-container
        image: {{ .Values.image.client }}
        imagePullPolicy: {{ .Values.pullPolicy.client }}
        ports:
        - containerPort: 80
        env:
        - name: MY_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: my_properties_file_name
        volumeMounts:
        - name: config
          mountPath: "/etc/config"
          readOnly: true
      imagePullSecrets:
      - name: secret-private-registry
      volumes:
      # You set volumes at the Pod level, then mount them into containers inside that Pod
      - name: config
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: my-configmap
          # An array of keys from the ConfigMap to create as files
          items:
          - key: "my_properties_file_name"
            path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the file name indicated in the ConfigMap), not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Or you can also use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
In either method, you are actually passing a snapshot of the file's content on the local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
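For example, after editing the local file you could refresh the stored snapshot like this (the --dry-run/apply round trip is needed because kubectl create refuses to overwrite an existing ConfigMap):
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -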
The design of Kubernetes allows running kubectl commands against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessed in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
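For illustration only, a rough sketch of that alternative in a Pod spec, assuming an NFS export at nfs.example.com:/exports/config (both the server address and the path are made up):
volumes:
- name: shared-config
  nfs:
    # Hypothetical NFS server and export path
    server: nfs.example.com
    path: /exports/config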

Kubernetes: Define environment variables dependent on other ones using "envFrom"

I have two ConfigMap files. One is supposed to hold "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: '123'
  USING_SEKRET: $(MY_SEKRET)
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my_container
        envFrom:
        - configMapRef:
            name: secret-cm
        - configMapRef:
            name: regular-cm
I was hoping that my variable USING_SEKRET would be "SEKRET" because of the order in which the envFrom files are imported, but it just appears as "$(MY_SEKRET)" in the Pods.
I've also tried setting the dependent variable as an env entry directly in the Deployment, but it results in the same problem:
kind: Deployment
...
env:
- name: MY_SEKRET
  # Not the expected result because the variable is openly visible but should be hidden
  value: 'SEKRET'
I was trying to follow the documentation guides, based on "Define an environment dependent variable for a container", but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain my idea behind this structure, secret-cm whole file will be encrypted at the repository so not all peers will be able to see its contents.
On the other hand, I still want to be able to show everyone where its variables are used, hence the dependency on regular-cm.
With that, authorized peers can run kubectl commands and variable replacements of secret-cm would work properly but for everyone else the file is hidden.
You did not explain why you want to define two ConfigMaps (one getting a value from the other), but I am assuming that you want the parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the pod. If that is the case, then create your ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then in your deployment use the env variable from configmap
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my_container
        env:
        - name: USING_SEKRET
          valueFrom:
            configMapKeyRef:
              name: secret-cm
              key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will show the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.
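If the goal really is $(VAR) expansion, note that Kubernetes only expands $(VAR) references appearing in the value field of an env entry, and only when the referenced variable is defined earlier in the same env list; it does not expand references inside data imported via envFrom. A minimal sketch reusing secret-cm from the question:
env:
- name: MY_SEKRET
  valueFrom:
    configMapKeyRef:
      name: secret-cm
      key: MY_SEKRET
- name: USING_SEKRET
  # Should expand, because MY_SEKRET is defined earlier in this env list
  value: '$(MY_SEKRET)'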

using ConfigMap created using Generator in Kustomize/Kubernetes

I have been trying to figure out how to consume a ConfigMap created using a ConfigMap generator via Kustomize.
When created using Kustomize generators, the configMaps are named with a special suffix. See here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap-from-generator
The question is: how can this be referenced?
You don't reference it yourself. Kustomize recognizes where the configMap is used in the other resources (like a Deployment) and changes those references to use the name+hash.
The reason for this is so that if you change the configMap, Kustomize generates a new hash and updates the Deployment, causing a rolling restart of the Pods.
If you don't want this behavior, you can add the following to your kustomization.yaml file:
generatorOptions:
  disableNameSuffixHash: true
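For illustration, a minimal kustomization.yaml sketch (the file names here are assumptions). The Deployment just refers to the ConfigMap by its base name, and Kustomize rewrites that reference to the hashed name in the rendered output:
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: app-config
  files:
  - config.yaml
In deployment.yaml you would reference name: app-config as usual; after kubectl kustomize . the rendered manifest shows the suffixed name, e.g. app-config-<hash>.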
It is specified there in the doc: when you do kubectl apply -k ., a ConfigMap is created with a name like game-config-4-m9dm2f92bt.
You can check that the ConfigMap was created with kubectl get configmap. It will contain a data field holding the values you provided.
Now you can use this ConfigMap in a Pod as usual, like below.
Example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: special.how
  restartPolicy: Never
You can also use a ConfigMap as a volume, like this example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variable
    - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                 # from the key name in the ConfigMap.
      valueFrom:
        configMapKeyRef:
          name: game-demo           # The ConfigMap this value comes from.
          key: player_initial_lives # The key to fetch.
    - name: UI_PROPERTIES_FILE_NAME
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: ui_properties_file_name
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  # You set volumes at the Pod level, then mount them into containers inside that Pod
  - name: config
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"
For more, see the official Kubernetes documentation on ConfigMaps.
I was struggling with this too. I could not figure out why kustomize was not updating the configmap name for the volume in the deployment to include the hash. What solved this for me was to add namespace: <namespace> in the kustomization.yaml for both the base and overlay.
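For illustration, a hedged sketch of that fix (the namespace name is made up); the same namespace line goes in both the base and the overlay:
# kustomization.yaml (base and overlay)
namespace: my-namespace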

Is it possible to get container host path from environment variable?

I am trying to set configurable host paths in Kubernetes, but I am facing issues. I created a ConfigMap which holds the path, and then I'm trying to replace the placeholder with the ConfigMap value. Here is my configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
      - name: php
        env:
        - name: PHP_FOLDER
          valueFrom:
            configMapKeyRef:
              name: local-paths
              key: CODE_PATH
      volumes:
      - name: src-code
        hostPath:
          path: PHP_FOLDER
          type: Directory
I also tried
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
      - name: php
        env:
        - name: PHP_FOLDER
          valueFrom:
            configMapKeyRef:
              name: local-paths
              key: CODE_PATH
      volumes:
      - name: src-code
        hostPath:
          path: $(PHP_FOLDER)
          type: Directory
I either get:
Error: Error response from daemon: create $(PHP_FOLDER): "$(PHP_FOLDER)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
or:
MountVolume.SetUp failed for volume "src-code" : hostPath type check failed: PHP_FOLDER is not a directory
You just can't use environment variables in YAML files directly. ConfigMaps and Secrets are resolved at runtime, which means their values won't be available until the container starts running; YAML parsing, however, happens before the Pod or Deployment is even created, so there is nothing to substitute at that point.
In this case the best practice would be to use a shell script that templates the YAML file just before deploying, and to automate that step.
You cannot use a variable in the hostPath definition.
As other users correctly stated, you cannot use env variables in plain YAML with kubectl. I suggest you add a lean templating layer before the deployment command and render the manifest with e.g. envsubst or Ansible, as sketched below.
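A minimal sketch with envsubst (the template file name and the path value are assumptions):
# deployment.template.yaml contains the placeholder:
#   hostPath:
#     path: ${PHP_FOLDER}
#     type: Directory
export PHP_FOLDER=/var/www/html   # hypothetical host path
envsubst < deployment.template.yaml | kubectl apply -f -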