Kubernetes: Define environment variables dependent on other ones using "envFrom"

I have two ConfigMap files. One is supposed to hold "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: '123'
  USING_SEKRET: '$(MY_SEKRET)'
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my-container
        envFrom:
        - configMapRef:
            name: secret-cm
        - configMapRef:
            name: regular-cm
I was hoping that my variable USING_SEKRET would be "SEKRET" because of the order in which the envFrom sources are imported, but it just appears as "$(MY_SEKRET)" in the Pods.
I've also tried setting the dependent variable as an env directly in the Deployment, but it results in the same problem:
kind: Deployment
...
env:
- name: MY_SEKRET
  # Not the expected result because the value is openly visible but should be hidden
  value: 'SEKRET'
I was trying to follow the documentation guides, based on Define an environment dependent variable for a container, but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain my idea behind this structure: the whole secret-cm file will be encrypted in the repository, so not all peers will be able to see its contents.
On the other hand, I still want to be able to show everyone where its variables are used, hence the dependency in regular-cm.
With that, authorized peers can run kubectl commands and the variable replacements from secret-cm would work properly, but for everyone else the file stays hidden.

You did not explain why you want to define two ConfigMaps (one getting a value from the other), but I am assuming that you want the env parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the pod. If that is the case, then create your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then in your Deployment, reference the ConfigMap value as an env variable:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my-container
        env:
        - name: USING_SEKRET
          valueFrom:
            configMapKeyRef:
              name: secret-cm
              key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will show the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.
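Since the stated goal is to keep the value hidden from unauthorized peers, a Kubernetes Secret is arguably a better fit than a ConfigMap for secret-cm. A minimal sketch of that variant (assuming RBAC in the namespace restricts who may read Secrets):
apiVersion: v1
kind: Secret
metadata:
  name: secret-cm
stringData:
  MY_SEKRET: 'SEKRET'
---
# fragment of the Deployment's container spec:
env:
- name: USING_SEKRET
  valueFrom:
    secretKeyRef:
      name: secret-cm
      key: MY_SEKRET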

Related

configmaps are not passing properly to the containers

I have a ConfigMap like the one below.
apiVersion: v1
data:
  server.properties: |+
    server.hostname=test.com
kind: ConfigMap
metadata:
  name: my-config
And I tried to read this config inside a container.
containers:
- name: testserver
  env:
  - name: server.hostname
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: server.properties.server.hostname
However, these configs are not being passed to the container properly. Do I need to make any changes to my configs?
What you have in there isn't the right key. ConfigMaps are strictly one level of key/value pairs. The |+ syntax is YAML for a multiline string, but the fact that the data inside that string is itself structured is not something the system knows. As far as Kubernetes is concerned you have one key there, server.properties, with a string value that is opaque.
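One common fix, sketched here with the names from the question, is to flatten the ConfigMap so each property is its own top-level key, which configMapKeyRef can then address directly (the env var is renamed SERVER_HOSTNAME, since dots in variable names are awkward for most shells):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  server.hostname: test.com
---
# fragment of the container spec:
env:
- name: SERVER_HOSTNAME
  valueFrom:
    configMapKeyRef:
      name: my-config
      key: server.hostname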

Use an environment variable as integer in yaml file

I have an application in a container that reads a YAML file containing data like
initializationCount=0
port=980
Now I want to remove those hard-coded values from inside the application and get them out of the container. Hence I created a ConfigMap with all the configuration values and used the ConfigMap keys as environment variables while deploying the pod.
My issue is that if I want to use these environment variables in my YAML file like
initializationCount=${iCount}
port=${port}
The API which reads this YAML file throws a NumberFormatException, since env variables are always strings. I do not have control over the API that reads my YAML file.
I have tried
initializationCount=!!int ${iCount}
but it does not work.
Rather than pulling the ConfigMap values in as environment variables, try mounting the ConfigMap as a volume at runtime.
The ConfigMap should have one key, which is the name of your YAML file; the value for that key should be the contents of the file.
This data will be mounted into the container's filesystem when the pod initializes. That way your app will read the config YAML the same way it always has, but the values will be externalized in the ConfigMap.
Something like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    initializationCount=0
    port=980
Kubernetes docs here
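If the config file already exists on disk, the same ConfigMap can be generated from it instead of inlining the contents (assuming the file is named config.yaml):
kubectl create configmap app-config --from-file=config.yaml
The app then reads /etc/config/config.yaml inside the container exactly as before.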

Is it possible to get container host path from environment variable?

I am trying to set configurable host paths in Kubernetes, but I am facing issues. I created a ConfigMap which has the path, and then I'm trying to replace the placeholder with the ConfigMap value. Here is my configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
      - name: php
        env:
        - name: PHP_FOLDER
          valueFrom:
            configMapKeyRef:
              name: local-paths
              key: CODE_PATH
      volumes:
      - name: src-code
        hostPath:
          path: PHP_FOLDER
          type: Directory
I also tried
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
      - name: php
        env:
        - name: PHP_FOLDER
          valueFrom:
            configMapKeyRef:
              name: local-paths
              key: CODE_PATH
      volumes:
      - name: src-code
        hostPath:
          path: $(PHP_FOLDER)
          type: Directory
I either get
Error: Error response from daemon: create $(PHP_FOLDER): "$(PHP_FOLDER)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
or
MountVolume.SetUp failed for volume "src-code" : hostPath type check failed: PHP_FOLDER is not a directory
You just can't use environment values in YAML files directly, and what you did is actually worse: ConfigMaps and Secrets are resolved at runtime, which means their values won't be available until the container starts running, whereas YAML parsing happens before the pod or deployment is even created. It's worth getting this timeline clear before using Kubernetes in production or even testing.
In this case the best practice would be to use a bash script to modify the YAML file just before deploying, and to automate it.
You cannot use a variable for path definition.
As other users correctly stated, you cannot use env variables in plain YAML/kubectl. I suggest adding a lean layer before the deployment command and templating the manifest with e.g. envsubst or Ansible.
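A minimal envsubst sketch, assuming a template file deployment.template.yaml in which the hostPath line reads path: ${PHP_FOLDER}:
export PHP_FOLDER=/srv/php/src   # hypothetical host path
# substitute only ${PHP_FOLDER}, leaving any other $(...) syntax intact
envsubst '${PHP_FOLDER}' < deployment.template.yaml | kubectl apply -f -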

Kubernetes import environment variables from a different .yml file

Is it possible to import environment variables from a different .yml file into the deployment file? My container requires environment variables.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <removed>
        imagePullPolicy: Always
        env:
        - name: NODE_ENV
          value: "TEST"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
vars.yml
NODE_ENV: TEST
What I'd like is to declare my variables in a separate file and simply import them into the deployment.
What you describe sounds like a Helm use case. If your deployment were part of a Helm chart/template, then you could have different values files (which are YAML) and inject the values from them into the template based on your parameters at install time. Helm is a common choice for managing env-specific config.
But note that if you just want to inject an environment variable into your YAML, rather than taking it from another YAML file, then a popular way to do that is envsubst.
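For instance, choosing a values file at install time might look like this (chart directory and file names are hypothetical):
helm install -f values.test.yaml ./mychart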

How to dynamically populate values into Kubernetes yaml files

I would like to pass in some of the values in Kubernetes YAML files at runtime, e.g. reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead, I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: 33044  # looking to read this port from a config file
        env:
        - name: INPUT_PORT
          value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: {{ .Values.myTestService.containerPort }}
        env:
        - name: INPUT_PORT
          value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster, use helm install mychart-VERSION.tgz; that will deploy your chart to the cluster. The version number is set within the Chart.yaml file.
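In practice that sequence looks like this (the version number here is just an example; yours comes from Chart.yaml):
helm package mychart
helm install mychart-0.1.0.tgz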
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to include external configuration files such as property files.
First, create a ConfigMap artifact out of your property file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment YAML you can provide it as a volume mount or as environment variables.
Volume mount:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: test
        ports:
        - containerPort: 33044
        volumeMounts:
        - name: config-volume
          mountPath: /etc/creds  # mount path inside the container
      volumes:
      - name: config-volume
        configMap:
          name: my-config
Here, mountPath is the location inside your container where the property file should reside, and under configMap, name is the name of the ConfigMap you created.
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: test
        ports:
        - containerPort: 33044
        env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: db.properties
Here, under the configMapKeyRef section, name is the ConfigMap you created (e.g. my-config) and key is the key within it; when the ConfigMap was created with --from-file, the key is the file name (db.properties) and its value is the whole file's contents, which Kubernetes resolves into the variable.
You can find more about ConfigMaps here:
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created, and containerPort is one of them.
You can add a new container to a pod, though, and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say, with sed) and run the kubectl replace -f FILE command, or go through the kubectl edit DEPLOYMENT command, which automatically applies the changes.
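A minimal sketch of that sed approach, assuming (hypothetically) the manifest is kept as a template logstash.template.yaml with a CONTAINER_PORT placeholder and the port lives in the question's config.yaml:
# Extract the port from config.yaml (format: "logstash_port: 33044")
PORT=$(awk -F': ' '/logstash_port/ {print $2}' config.yaml)
# Substitute the placeholder and update the live object
sed "s/CONTAINER_PORT/${PORT}/" logstash.template.yaml | kubectl replace -f -
# (use kubectl apply -f - instead if the object does not exist yet)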