Kubernetes ConfigMap: import from file as many values instead of as one? - kubernetes

Creating a new ConfigMap from a file:
kubectl create configmap foo --from-file=foo
This is how the ConfigMap looks internally:
kubectl get configmaps foo -o yaml
apiVersion: v1
data:
  foo: |
    VAR1=value1
    VAR2=value2
Then, I use this ConfigMap to create a set of environment variables in the container:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: app
  name: app
spec:
  containers:
  - name: app-server
    image: app:latest
    ports:
    - containerPort: 3000
    envFrom:
    - configMapRef:
        name: foo
    command: ["/bin/bash", "-c", "printenv"]
When the container command runs, I see the following output for printenv:
foo=VAR1=value1
VAR2=value2
So, an echo $foo command in the pod returns:
VAR1=value1 VAR2=value2
According to the documentation for ConfigMap with --from-file, this is expected behaviour.
What would be a creative way (and the proper place) to get the values of this file into the pod as individual env variables VAR1, VAR2, VAR3, etc.?

This is not possible with the current version (1.6.x) of Kubernetes. As written in the official documentation for kubectl create configmap:
--from-file: Key file can be specified using its file path, in which case file basename will be used as configmap key, or optionally with a key and file path, in which case the given key will be used. Specifying a directory will iterate each named file in the directory whose basename is a valid configmap key.
If you want a ConfigMap that can be consumed like this, as input for the envFrom container configuration, create it with the --from-literal option instead:
kubectl create configmap foo --from-literal=var1=value1 --from-literal=var2=value2
To keep the file as the source of truth, you can transform it into such a command:
eval "kubectl create configmap foo $(sed -e 's/^/--from-literal=/' foo.txt | tr '\n' ' ')"
Along with that, it may be worth checking outstanding proposals such as the --flatten flag proposal on GitHub.
Also keep an eye on variable naming. IIRC, VAR1 and VAR2 were not valid ConfigMap keys in older versions - keys had to be lower case, which might cause some issues when passing them on.
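If you prefer not to rely on eval, the same transformation can be done with a small loop. A minimal sketch, assuming an env file with one VAR=value per line and values without spaces (the file path /tmp/foo.txt is hypothetical; the command is printed here rather than executed, since no cluster is needed to illustrate the idea). Note that newer kubectl releases also offer a --from-env-file flag that creates one key per line directly.

```shell
# Hypothetical env file, one VAR=value per line
cat > /tmp/foo.txt <<'EOF'
VAR1=value1
VAR2=value2
EOF

# Build one --from-literal flag per non-empty line
args=""
while IFS= read -r line; do
  [ -n "$line" ] && args="$args --from-literal=$line"
done < /tmp/foo.txt

# Print the resulting command instead of running it
echo "kubectl create configmap foo$args"
```

With --from-env-file the whole step collapses to `kubectl create configmap foo --from-env-file=/tmp/foo.txt`, if your kubectl version supports it.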

Related

How to reference a pod's shell env variable in the configmap data section

I have a configmap.yaml file as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: abc
  namespace: monitoring
  labels:
    app: abc
    version: 0.17.0
data:
  application.yml: |-
    myjava:
      security:
        enabled: true
    abc:
      server:
        access-log:
          enabled: ${myvar}  ## this is not working
"myvar" value is available in pod as shell environment variable from secretkeyref field in deployment file.
Now I want to replace myvar shell environment variable in configmap above i.e before application.yml file is available in pod it should have replaced myvar value. which is not working i tried ${myvar} and $(myvar) and "#{ENV['myvar']}"
Is that possible in kubernetes configmap to reference with in data section pod's environment variable if yes how or should i need to write a script to replace with sed -i application.yml etc.
Is it possible for a ConfigMap to reference a pod's environment variable within its data section?
That's not possible. A ConfigMap is not associated with a particular pod, so there's no way to perform the sort of variable substitution you're asking about. You would need to implement this logic inside your containers (fetch the ConfigMap, perform variable substitution yourself, then consume the data).
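A minimal sketch of the in-container substitution the answer suggests, using sed at startup. The paths and the value of myvar are hypothetical, and the mounted file is simulated with a local copy, since ConfigMap mounts are read-only and the rendered file must go to a writable path anyway:

```shell
# Simulate the read-only ConfigMap mount containing a ${myvar} placeholder
mkdir -p /tmp/config
printf 'enabled: ${myvar}\n' > /tmp/config/application.yml

# myvar would normally be injected via secretKeyRef; hardcoded here
myvar=true

# Render the template to a writable location before starting the app
sed "s/\${myvar}/${myvar}/g" /tmp/config/application.yml > /tmp/application.yml
```

The double quotes around the sed expression matter: they let the shell expand ${myvar} in the replacement while the escaped \$ keeps the search pattern literal.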

Duplicated env variable names in pod definition, what is the precedence rule to determine the final value?

Using Kubernetes 1.19.3, I initialize env variable values in 3 different ways:
an env field with explicit key/value pairs in the pod definition
envFrom using a configMapRef
envFrom using a secretRef
When a key name is duplicated, as shown in the example below, DUPLIK1 and DUPLIK2 are defined multiple times with different values.
What is the precedence rule that Kubernetes uses to assign the final value to the variable?
# create some test Key/Value configs and Key/Value secrets
kubectl create configmap myconfigmap --from-literal=DUPLIK1=myConfig1 --from-literal=CMKEY1=CMval1 --from-literal=DUPLIK2=FromConfigMap -n mydebugns
kubectl create secret generic mysecret --from-literal=SECRETKEY1=SECval1 --from-literal=SECRETKEY2=SECval2 --from-literal=DUPLIK2=FromSecret -n mydebugns
# create a test pod
cat <<EOF | kubectl apply -n mydebugns -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    image: busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: DUPLIK1
      value: "Key/Value defined in field env"
    envFrom:
    - configMapRef:
        name: myconfigmap
    - secretRef:
        name: mysecret
  restartPolicy: Never
EOF
Show the environment variable values. The result is deterministic: deleting the resources and recreating them always yields the same output.
kubectl logs pod/pod1 -n mydebugns
CMKEY1=CMval1
DUPLIK1=Key/Value defined in field env
DUPLIK2=FromSecret
SECRETKEY1=SECval1
SECRETKEY2=SECval2
Cleanup test resources
kubectl delete pod/pod1 -n mydebugns
kubectl delete cm/myconfigmap -n mydebugns
kubectl delete secret/mysecret -n mydebugns
From Kubernetes docs:
envVar: List of environment variables to set in the container.
Cannot be updated.
envFrom: List of sources to populate environment variables in the
container. The keys defined within a source must be a C_IDENTIFIER.
All invalid keys will be reported as an event when the container is
starting. When a key exists in multiple sources, the value associated
with the last source will take precedence. Values defined by an Env
with a duplicate key will take precedence. Cannot be updated.
The documentation quoted above clearly states that env takes precedence over envFrom, and that neither can be updated in place.
Also, when a referenced key is present in multiple resources, the value associated with the last source will override all previous values.
Based on the above, the result you are seeing is expected behavior:
DUPLIK1 is set in the env field, which takes precedence over any envFrom source
DUPLIK2 comes only from envFrom sources, so the value from the Secret wins because it is the last source listed
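The merge rule can be emulated in plain shell: apply the envFrom sources in the order they are listed (later assignments overwrite earlier ones), then overlay the explicit env entries last. The key/value lists below are hypothetical stand-ins mirroring the example above:

```shell
# Stand-ins for the ConfigMap, the Secret, and the explicit env field
configmap="DUPLIK1=FromConfigMap DUPLIK2=FromConfigMap CMKEY1=CMval1"
secret="DUPLIK2=FromSecret SECRETKEY1=SECval1"
env_field="DUPLIK1=FromEnvField"

# Apply envFrom sources in listed order, then env on top: last write wins
for kv in $configmap $secret $env_field; do
  eval "final_${kv%%=*}=${kv#*=}"
done

echo "DUPLIK1=$final_DUPLIK1"
echo "DUPLIK2=$final_DUPLIK2"
```

This reproduces the observed output: DUPLIK1 keeps the env-field value, DUPLIK2 keeps the Secret's value.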

Can kubectl delete environment variable?

I can update envs through kubectl patch as shown below; is there any way to delete envs other than re-deploying a deployment.yaml?
$ kubectl patch deployment demo-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name": "demo-deployment","env":[{"name":"foo","value":"bar"}]}]}}}}'
deployment.extensions "demo-deployment" patched
Can I delete the env "foo" through the command line without re-deploying the whole deployment?
This is coming late, but for newcomers: you can use the following kubectl command to remove an existing env variable from a deployment:
kubectl set env deployment/DEPLOYMENT_NAME VARIABLE_NAME-
Do not omit the trailing hyphen (-).
If you are fine with redeployment, then follow the steps below:
Create configmap and include your environment variables
Load env variables from configmap in the deployment
envFrom:
- configMapRef:
    name: app-config
If you want to delete an env variable, remove the corresponding key-value pairs from the configmap
This will cause a rollout; you can also delete the pod so the deployment recreates it
Consider that containers is an array inside an object. Array elements can only be addressed by their index, as opposed to object fields, which are addressed by key (see the JSON Patch reference). So the workaround is to use the index.
Here you have env that are placed into the container:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
    - name: DSADASD
      value: asdsad
Here is a command to remove the env entry using its index:
kubectl patch deployments asd --type=json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/1"}]'
And the result:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
This will, however, still restart your pod.
Hope that helps!

replace configmap contents with some environment variables

I am running a StatefulSet where I use volumeClaimTemplates; everything is fine there.
I also have a ConfigMap where I would like to replace some entries with the name of the pod, for each pod that this config file is projected onto. E.g., if the configmap data is:
ThisHost=<hostname -s>
OtherConfig1=1
OtherConfig2=2
...
then for the StatefulSet pod named mypod-0 the config file should contain ThisHost=mypod-0, and ThisHost=mypod-1 for mypod-1.
How could I do this?
The pod's hostname is available by default in an environment variable called HOSTNAME.
It is possible to achieve what you want if you first:
mount the configmap containing ThisHost=hostname (this creates a file in the pod's filesystem with that placeholder text)
pass a substitution command to the pod when starting (something like sed "s/hostname/$HOSTNAME/g" - note the double quotes so the shell expands $HOSTNAME)
Basically, you mount the configmap and then replace the placeholder with the environment variable value available inside the pod. It's just a substitution operation. Note that ConfigMap mounts are read-only, so in practice you would write the substituted file to a writable path rather than editing it in place.
Look at the example below:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh", "-c"]
    args: ["sed \"s/hostname/$HOSTNAME/g\" /path/to/config/map/mount/point > /tmp/config"]
  restartPolicy: OnFailure
The args' syntax might need some adjustments but you get the idea.
Please let me know if that helped.

K8S deployment executing shell scripts reading configuration data

In K8s, what is the best way to execute scripts in a container (pod) once at deployment, reading from configuration files that are part of the deployment, and e.g. seed MongoDB once?
My project consists of k8s manifest files + configuration files.
I would like to be able to update the config files locally and then redeploy via kubectl or helm.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands reading from the config files in the volume. How is this best done in K8s? I don't want to include the configuration files in an image via a Dockerfile, forcing me to rebuild the image before redeploying again via kubectl or helm.
How is this best done in K8S?
There are several ways to skin a cat, but my suggestion would be to do the following:
Keep configuration in a configMap and mount it as a separate volume. Such a map is kept as a k8s manifest, keeping all changes to it separate from the docker image build - no need to rebuild the image or keep sensitive data within it. You can also use a secret instead of (or together with) a configMap in the same manner.
Use initContainers to do the initialization before the main container is brought online, covering your 'once on deployment' requirement automatically. Alternatively (if the init operation is not repeatable) you can use a Job instead and start it when necessary.
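A minimal sketch of the initContainers approach for the seed-once case. All names here (images, the service host, the ConfigMap, the mongoimport arguments) are hypothetical placeholders, not part of the original answer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-seed
spec:
  initContainers:
  - name: seed-db                  # runs to completion before the main container starts
    image: mongo:4.2               # hypothetical image/tag
    command: ["/bin/sh", "-c"]
    args: ["mongoimport --host db-service --db mydb --collection seed --file /seed/data.json"]
    volumeMounts:
    - name: seed-config
      mountPath: /seed
  containers:
  - name: app
    image: my-app:latest           # hypothetical
  volumes:
  - name: seed-config
    configMap:
      name: cm-seed-data           # holds data.json from your config files
```

If the seed fails, the init container is retried per the pod's restart policy, so the main container never starts against an unseeded database.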
Here is excerpt of example we are using on gitlab runner:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ss-my-project
spec:
  ...
  template:
    ....
    spec:
      ...
      volumes:
      - name: volume-from-config-map-config-files
        configMap:
          name: cm-my-config-files
      - name: volume-from-config-map-script
        projected:
          sources:
          - configMap:
              name: cm-my-scripts
              items:
              - key: run.sh
                path: run.sh
                mode: 0755
      # if you need to run as non-root here is how it is done:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
      - image: ...
        name: ...
        command:
        - /scripts/run.sh
        ...
        volumeMounts:
        - name: volume-from-config-map-script
          mountPath: "/scripts"
          readOnly: true
        - mountPath: /usr/share/my-app-config/config.file
          name: volume-from-config-map-config-files
          subPath: config.file
        ...
You can, of course, mount several volumes from config maps or combine them into a single one, depending on the frequency of your changes and the affected parts. This example uses two separately mounted configMaps just to illustrate the principle (and to mark the script executable), but you can use only one for all required files, put several files into one, or put a single file into each - as per your need.
Example of such configMap is like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-scripts
data:
  run.sh: |
    #!/bin/bash
    echo "Doing some work here..."
And an example of the configMap covering the config file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-my-config-files
data:
  config.file: |
    ---
    # Some config.file (example name) required in project
    # in whatever format config file actually is (just example)
    ... (here is actual content like server.host: "0" or EFG=True or whatever)
Playing with single or multiple files in configMaps can yield the result you want; depending on your needs you can have as many or as few as you like.
In docker-compose i could create a volume ponting at the directory where the config files resides and then in the command part execute bash -c cmds reading from the config files in the volume.
The k8s equivalent of this would be hostPath, but then you would seriously hamper k8s's ability to schedule pods to different nodes. This might be OK on a single-node cluster (or while developing) to ease changing config files, but for an actual deployment the approach above is advised.