How to reference pod's shell env variable in configmap data section - kubernetes

I have a configmap.yaml file as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: abc
  namespace: monitoring
  labels:
    app: abc
    version: 0.17.0
data:
  application.yml: |-
    myjava:
      security:
        enabled: true
    abc:
      server:
        access-log:
          enabled: ${myvar}  # this is not working
"myvar" value is available in pod as shell environment variable from secretkeyref field in deployment file.
Now I want to replace myvar shell environment variable in configmap above i.e before application.yml file is available in pod it should have replaced myvar value. which is not working i tried ${myvar} and $(myvar) and "#{ENV['myvar']}"
Is that possible in kubernetes configmap to reference with in data section pod's environment variable if yes how or should i need to write a script to replace with sed -i application.yml etc.

Is it possible in a Kubernetes ConfigMap to reference a pod's environment variable inside the data section?
That's not possible. A ConfigMap is not associated with a particular pod, so there's no way to perform the sort of variable substitution you're asking about. You would need to implement this logic inside your containers (fetch the ConfigMap, perform variable substitution yourself, then consume the data).
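One common way to implement that is to mount the ConfigMap as a template into the pod and render it at container start, for example with envsubst (from GNU gettext, assuming it is present in the image). A minimal sketch follows; the paths, the image, the secret name my-secret and the start command /start-app.sh are illustrative, not from the question:
containers:
  - name: abc
    image: my-app-image              # illustrative image
    env:
      - name: myvar
        valueFrom:
          secretKeyRef:
            name: my-secret          # illustrative secret name
            key: myvar
    command: ["/bin/sh", "-c"]
    args:
      - envsubst < /templates/application.yml > /config/application.yml && exec /start-app.sh
    volumeMounts:
      - name: config-template
        mountPath: /templates        # ConfigMap "abc" mounted read-only
      - name: rendered-config
        mountPath: /config           # writable location for the rendered file
volumes:
  - name: config-template
    configMap:
      name: abc
  - name: rendered-config
    emptyDir: {}
The same rendering step could instead live in an initContainer that writes into a shared emptyDir before the main container starts.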

Related

Using sensitive environment variables in Kubernetes configMaps

I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the configmap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var. The configMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it is a somewhat controversial topic because the discussion was still ongoing many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasoning and maybe find another approach for your environment to get these variables.
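For illustration, the "single object" approach from that comment could look roughly like this: keep the whole properties file in a Secret and mount it as a file (the names and the password value below are placeholders, not from the question):
apiVersion: v1
kind: Secret
metadata:
  name: application-properties
  namespace: foo-bar
type: Opaque
stringData:
  application.properties: |
    secret.password=<the-real-password>
    some.other.key=some-value
and in the pod spec mount it like any other volume:
volumes:
  - name: app-config
    secret:
      secretName: application-properties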
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell in the entrypoint of the container.
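Roughly, that workaround means overriding the container's entrypoint so it renders the mounted properties template with the secret value from the environment before starting the app. A sketch under those assumptions (the paths, image and the final start command are illustrative):
containers:
  - name: app
    image: my-app-image                # illustrative
    env:
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: foo
            key: pw
    command: ["/bin/sh", "-c"]
    args:
      - 'sed "s|\$(PASSWORD)|$PASSWORD|g" /props-template/application.properties > /props/application.properties && exec node server.js'
    volumeMounts:
      - name: props-template
        mountPath: /props-template     # ConfigMap with the $(PASSWORD) placeholder
      - name: props
        mountPath: /props              # writable location for the rendered file
volumes:
  - name: props-template
    configMap:
      name: application.properties
  - name: props
    emptyDir: {}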

replace configmap contents with some environment variables

I am running a StatefulSet where I use volumeClaimTemplates. Everything's fine there.
I also have a configmap where I would like to essentially replace some entries with the name of the pod, for each pod that this config file is projected onto; e.g., if the configmap data is:
ThisHost=<hostname -s>
OtherConfig1=1
OtherConfig1=2
...
then for the StatefulSet pod named mypod-0, the config file should contain ThisHost=mypod-0, and ThisHost=mypod-1 for mypod-1.
How could I do this?
The hostname is available inside the pod by default in an environment variable called HOSTNAME.
It is possible to modify the configmap content inside the pod if you first:
mount the configmap, with the entry set to ThisHost=<hostname -s> (this will create a file in the pod's filesystem with that text)
pass a substitution command to the pod when starting (something like sed 's/hostname/$HOSTNAME/g' -i /path/to/configmapfile)
Basically, you mount the configmap and then replace the placeholder with the environment variable information that is available within the pod. It's just a substitution operation.
Look at the example below:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["sed"]
    args: ["'s/hostname/$HOSTNAME'", "-i", "/path/to/config/map/mount/point"]
  restartPolicy: OnFailure
The args' syntax might need some adjustments but you get the idea.
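Two caveats when turning this into a working pod spec: the $HOSTNAME reference only expands if the command runs through a shell, and files from a ConfigMap volume are mounted read-only, so sed -i on the mount itself will fail. A sketch that handles both by copying to a writable emptyDir first (the paths and the start command are illustrative):
    command: ["/bin/sh", "-c"]
    args:
      - cp /config-ro/myconfig /config/myconfig &&
        sed -i "s/<hostname -s>/$HOSTNAME/g" /config/myconfig &&
        exec /start-app.sh
Here /config-ro is the ConfigMap volume mount, /config is an emptyDir volume, and /start-app.sh stands in for the real entrypoint.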
Please let me know if that helped.

Conditional container declaration in k8s config

I am using v1beta2 of Kubernetes and I have a Deployment kind of configuration.
In this configuration, I have the base conf of my app and I want to conditionally add a second Docker image (container) in the same pod.
My config file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: ${MY_APP_NAME}
spec:
  containers:
  - name: my_first_container
    image: image_url
    [...]
  - name: my_second_container    <------ I want to put conditional declaration of this container
    [...]
I don't want to add the second container in a separate pod.
The condition is based on a variable like ${K8S_CONTAINER2_CONDITION}, whose value is filled in by a sed command on Linux.
That command replaces variables like ${MY_APP_NAME}.
How can I make the declaration of this container conditional?
For some applications I need to deploy both containers, and for others only the first one. But I have only one k8s configuration file (YAML).
You should look at Helm charts for customizing the deployment file at deploy time.
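For illustration, a minimal sketch of what that could look like as a Helm template, gating the second container on a hypothetical values flag secondContainer.enabled (all names here are examples, not from the question):
# templates/deployment.yaml (selector, labels and probes omitted for brevity)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  template:
    spec:
      containers:
        - name: my_first_container
          image: {{ .Values.firstImage }}
        {{- if .Values.secondContainer.enabled }}
        - name: my_second_container
          image: {{ .Values.secondContainer.image }}
        {{- end }}
The flag can then be toggled per application at deploy time, e.g. helm install myapp ./chart --set secondContainer.enabled=true.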

k8s - trigger new pod creation via config map update

I have a deployment for which the env variables for the pod are set via a config map.
envFrom:
  - configMapRef:
      name: map
My config map looks like this:
apiVersion: v1
data:
  HI: HELLO
  PASSWORD: PWD
  USERNAME: USER
kind: ConfigMap
metadata:
  name: map
All the pods have these env variables set from map. Now, if I change the config map file and apply it with kubectl apply -f map.yaml, I get the confirmation that the map is configured. However, it does not trigger the creation of new pods with the updated env variables.
Interestingly this one works
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one
kubectl set env deploy/mydeploy --from=cm/map
But I am looking for a way to get new pods created with updated env variables via the config map!
Interestingly this one works
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one
kubectl set env deploy/mydeploy --from=cm/map
This is expected behavior. Your pod manifest doesn't change with the second command (when you use the ConfigMap), which is why Kubernetes does not recreate the pods.
There are several ways to deal with that. Basically, what you can do is artificially change the pod manifest every time the ConfigMap changes, e.g. by adding an annotation to the pod with the sha256sum of the ConfigMap content. This is actually what Helm suggests you do. If you are using Helm, it can be done as:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
From here: https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
Just make sure you add the annotation to the pod (template) object, not the Deployment itself.
The simple answer is NO.
In case you are not using Helm and are looking for a hack: after updating the ConfigMap, just use a dummy env variable and keep updating its value, just to trigger the rolling update.
kubectl set env deploy/mydeploy DUMMY_ENV_FOR_ROLLING_UPDATE=dummyval
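As a side note, on kubectl 1.15+ a plain rollout restart should achieve the same rolling update without the dummy variable:
kubectl rollout restart deployment/mydeploy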

If I change my ConfigMap key value after deployment, does the deployment of the application which is using the configMap values need to be restarted?

I have a sample Node.js application which uses an envVal environment variable; I have deployed it on a Kubernetes cluster. I am passing the env variable through a config map.
Once it is deployed and the pods are all running, if I change my config map with a new value, does the deployment of my Node.js application need to be redone after this?
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '12345' # initial value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
after updating the configmap.yaml
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '56789' # changed value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
When you mount the keys from the ConfigMap as environment variables, you would need to restart your pod for the changes to take effect.
When you mount it as a volume into your system, the files in the volume will be updated automatically. The update is not immediate; there is a TTL configured in the kubelet before it checks for changes / does the update. But it is normally quite quick. However, it still depends on your application how it loads the data from the file - whether it is able to update itself on the fly when the files change or whether the data is loaded only once at startup.
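For illustration, a minimal sketch of the two consumption modes side by side, assuming the app1-config ConfigMap from the question (the image name and mount path are examples):
containers:
  - name: app1
    image: node:18                  # example image
    envFrom:
      - configMapRef:
          name: app1-config         # as env vars: a pod restart is needed to pick up changes
    volumeMounts:
      - name: config
        mountPath: /etc/app1        # as files: the kubelet refreshes them automatically
volumes:
  - name: config
    configMap:
      name: app1-config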