Can I get a ConfigMap value from an external file? - kubernetes

I have this ConfigMap defined:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
labels:
app: my-config
data:
myConfiguration.json: |
{
"configKey": [
{
"key" : "value"
},
{
"key" : "value"
}
]
}
and this is how I use it in my Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: someimage
name: someimage
spec:
selector:
matchLabels:
app: someimage
replicas: 1
template:
metadata:
labels:
app: someimage
spec:
containers:
- image: someimage
name: someimage
command:
- mb
- --configfile
- /configFolder/myConfig.json
ports:
- containerPort: 2525
volumeMounts:
- name: config-volume
mountPath: /configFolder
hostname: somehost
restartPolicy: Always
nodeSelector:
beta.kubernetes.io/os: linux
volumes:
- name: config-volume
configMap:
name: my-config
items:
- key: myConfiguration.json
path: myConfiguration.json
My question is: is it possible to keep the value of myConfiguration.json (the JSON string) in a separate file, apart from the ConfigMap, in order to keep it clean? How would I need to change the Deployment and ConfigMap YAML definitions so that I do not have to change the application?
Important: I cannot use any separate templating tool.
Thanks

Yes you can, using Kustomize.
Kustomize is a kubectl sub-command introduced in Kubernetes 1.14, and it has a lot of features that help customize your deployments.
To do that you'll have to use a ConfigMap generator. This requires an additional file, kustomization.yaml.
So if your deployment YAML file is deployment.yaml and your ConfigMap's name is my-config, then kustomization.yaml should look something like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: my-config
files:
- myConfiguration.json
- myConfiguration2.json # you can use multiple files
To run kustomize you'll have to use kubectl apply with the -k option.
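For example, assuming kustomization.yaml and the referenced files sit in the current directory:
# Preview the generated manifests without applying them
kubectl kustomize .
# Build and apply everything referenced by kustomization.yaml
kubectl apply -k .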
Edit: Kustomize appends a hash of each ConfigMap's contents to its name. That way it can track changes to your configuration and trigger a rollout for you whenever it changes.
So there is no need to delete your pods whenever your ConfigMaps are altered.
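As an illustration, if you preview the output with kubectl kustomize ., the generated ConfigMap carries a content-hash suffix in its name (the suffix below is made up; yours will differ), and the reference in deployment.yaml is rewritten to match automatically:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-5f9d8c7b2t   # hash suffix generated from the file contents (illustrative)
data:
  myConfiguration.json: |
    ...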

Related

Populating a Container's environment variables with a mounted ConfigMap in Kubernetes

I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a Container's environment variables.
Let's say I have the following simple ConfigMap:
apiVersion: v1
data:
MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
creationTimestamp: null
name: mycm
I know that a container of some deployment can consume this environment variable via:
kubectl set env deployment mydb --from=configmap/mycm
or by specifying it manually in the manifest like so:
containers:
- env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
key: MYSQL_ROOT_PASSWORD
name: mycm
However, this isn't what I am after, since I'd have to manually change the environment variables each time the ConfigMap changes.
I am aware that mounting a ConfigMap to the Pod's volume allows for the auto-updating of ConfigMap values. I'm currently trying to find a way to set a Container's environment variables to those stored in the mounted config map.
So far I have the following YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
replicas: 1
selector:
matchLabels:
app: mydb
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mydb
spec:
containers:
- image: mariadb
name: mariadb
resources: {}
args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
volumeMounts:
- name: config-volume
mountPath: /etc/config
env:
- name: MYSQL_ROOT_PASSWORD
value: temp
volumes:
- name: config-volume
configMap:
name: mycm
status: {}
I'm attempting to set MYSQL_ROOT_PASSWORD to some temporary value, and then update it to the mounted value as soon as the container starts via args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
As I somewhat expected, this didn't work, resulting in the following error:
/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory
I assume this is because the volume is mounted after the entrypoint. I tried adding a readiness probe to wait for the mount but this didn't work either:
readinessProbe:
exec:
command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
initialDelaySeconds: 5
periodSeconds: 5
Is there any easy way to achieve what I'm trying to do, or is it impossible?
So I managed to find a solution, with a lot of inspiration from this answer.
Essentially, what I did was create a sidecar container based on the alpine/k8s image that mounts the ConfigMap and constantly watches it for changes, since Kubernetes automatically updates a mounted ConfigMap whenever the ConfigMap changes. This required the following script, watch_passwd.sh, which uses inotifywait to watch for changes and then uses the Kubernetes API to roll out the changes accordingly:
#!/bin/sh
# Recreate the secret from the value currently mounted from the ConfigMap
update_passwd() {
    kubectl delete secret mysql-root-passwd > /dev/null 2>&1
    kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}

update_passwd
while true
do
    # Block until the mounted file changes, then refresh the secret and restart the deployment named in $1
    inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
    update_passwd
    kubectl rollout restart deployment "$1"
done
The Dockerfile is then:
FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
After building the image (locally in this case) as mysidecar, I create the ServiceAccount, Role, and RoleBinding outlined here, adding rules for deployments so that they can be restarted by the sidecar.
After this, I piece it all together to create the following YAML Manifest (note that imagePullPolicy is set to Never, since I created the image locally):
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
replicas: 3
selector:
matchLabels:
app: mydb
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mydb
spec:
serviceAccountName: secretmaker
containers:
- image: mysidecar
name: mysidecar
imagePullPolicy: Never
command:
- /bin/sh
- -c
- |
./watch_passwd.sh $(DEPLOYMENT_NAME)
env:
- name: DEPLOYMENT_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['app']
volumeMounts:
- name: config-volume
mountPath: /etc/config
- image: mariadb
name: mariadb
resources: {}
envFrom:
- secretRef:
name: mysql-root-passwd
volumes:
- name: config-volume
configMap:
name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: mydb
name: secretmaker
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "delete", "list"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: mydb
name: secretmaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secretmaker
subjects:
- kind: ServiceAccount
name: secretmaker
namespace: default
---
It all works as expected! Hopefully this is able to help someone out in the future. Also, if anybody comes across this and has a better solution please feel free to let me know :)

Replacing a properties file in a container using ConfigMaps in Kubernetes

I am trying to replace a properties file in a container using a ConfigMap and volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-properties
spec:
selector:
matchLabels:
app: agent-2
replicas: 2
template:
metadata:
labels:
app: agent-2
spec:
containers:
- name: agent-2
image: agent:latest
ports:
- containerPort: 8080
volumeMounts:
- mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
name: "applictaion-conf"
subPath: "application.properties"
volumes:
- name: applictaion-conf
configMap:
name: dddeagent-configproperties
items:
- key: "application.properties"
path: "application.properties"
Below is snippet from configMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-configp
data:
application.properties: |-
AGENT_HOME = /var/ddeagenthome
LIC_MAXITERATION=5
LIC_MAXDELAY=10000
After deployment, the complete folder structure is getting mounted instead of a single file, and because of that all the existing files in the folder are getting deleted.
Version - 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties" but you have defined a ConfigMap object named "agent-configp".
configMap: name: dddeagent-configproperties
Should be:
configMap: name: agent-configp
Besides that, there are a few indentation errors, so I will paste the fixed files at the end of the answer.
To the point of your question: your approach is correct, and when I tested it in my setup everything worked properly without any issues. I created a sample pod and mounted the ConfigMap the same way you are doing it (into a directory where other files exist). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
/app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double check that, on a pod without the mounted ConfigMap, the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and the other files are there. As your image is not publicly available, I checked with the tomcat image, and its /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
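For example, one way to check this on a running pod (the pod name below is only a placeholder) is:
kubectl exec <pod-name> -- ls -l /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf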
Fixed deployment and ConfigMap files with good indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-properties
spec:
selector:
matchLabels:
app: agent-2
replicas: 2
template:
metadata:
labels:
app: agent-2
spec:
containers:
- name: agent-2
image: agent:latest
ports:
- containerPort: 8080
volumeMounts:
- mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
name: "application-conf"
subPath: "application.properties"
volumes:
- name: application-conf
configMap:
name: agent-configp
items:
- key: "application.properties"
path: "application.properties"
Config file:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-configp
data:
application.properties: |-
AGENT_HOME = /var/ddeagenthome
LIC_MAXITERATION=5
LIC_MAXDELAY=10000

Use kustomize to set hostPath path

Is it possible to use kustomize to specify a volume hostPath from an env variable?
I have a Kubernetes manifest that describes my deployment consisting of a container.
During development, I use a different image (that contains dev tools) and mount code from my host into the container. This way I can make code changes without having to re-deploy.
I'm using a patchesStrategicMerge to replace the production image with the one I want to use during dev and to mount the code into the container, i.e.
kustomization.yaml
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- my-service.yaml
my-service.yaml
---
apiVersion: apps/v1
...
...
spec:
containers:
- name: myservice
image: myservice-dev-image:1.0.0
command: ['CompileDaemon', '--build=make build', '--command=./myservice']
volumeMounts:
- name: code
mountPath: /go/src/app
volumes:
- name: code
hostPath:
path: /source/mycodepath/github.com/myservice
What I'd like to do is make the path configurable via an environment variable, so that I don't have to check my specific path (/source/mycodepath/) into git, and so that other developers can easily use it in their own environment.
Is it possible to do this with kustomize ?
Create the following directory structure:
k8s
k8s/base
k8s/overlays
k8s/overlays/bob
k8s/overlays/sue
First we need to create the base. The base is the default template, and it provides the bits that apply to both people. In k8s/base create a file called app.yaml and populate it with the following (actually paste yours here; you can put other common bits in there too, separated by --- on its own line).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myservice
namespace: default
spec:
strategy:
type: RollingUpdate
replicas: 1
template:
metadata:
labels:
name: myservice
app: myservice
spec:
containers:
- name: myservice
image: myservice-dev-image:1.0.0
command: ['CompileDaemon', '--build=make build', '--command=./myservice']
volumeMounts:
- name: code
mountPath: /go/src/app
volumes:
- name: code
hostPath:
path: /xxx
Next in the same directory (k8s/base) create another file called kustomization.yaml and populate with:
resources:
- app.yaml
Next we will create two overlays: one for Bob and one for Sue.
In k8s/overlays/bob let's create Bob's custom changes as app.yaml and populate with the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myservice
namespace: default
spec:
template:
spec:
volumes:
- name: code
hostPath:
path: /users/bob/code
Now also in k8s/overlays/bob create another file called kustomization.yaml with the following:
resources:
- ../../base
patchesStrategicMerge:
- app.yaml
We can copy the two files in k8s/overlays/bob into the k8s/overlays/sue directory and just change the path in the volumes: bit.
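For example, Sue's k8s/overlays/sue/app.yaml would be identical to Bob's apart from the hostPath (the path below is just an example):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  template:
    spec:
      volumes:
      - name: code
        hostPath:
          path: /users/sue/code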
Next we need to do a kustomize build to generate the resulting versions - bob and sue.
If the k8s directory is in your code directory, open a terminal (with Kustomize installed) and run:
kustomize build k8s/overlays/bob
That should show you what Bob's kustomization will look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myservice
namespace: default
spec:
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: myservice
name: myservice
spec:
containers:
- command:
- CompileDaemon
- --build=make build
- --command=./myservice
image: myservice-dev-image:1.0.0
name: myservice
volumeMounts:
- mountPath: /go/src/app
name: code
volumes:
- hostPath:
path: /users/bob/code
name: code
To apply that you can run:
kustomize build k8s/overlays/bob | kubectl apply -f -
To apply Sue you can run:
kustomize build k8s/overlays/sue | kubectl apply -f -
YAML is sensitive about spaces and I'm not sure this will sit well in a Stack Overflow answer, so I've put it on GitHub as well: https://github.com/just1689/kustomize-local-storage

Is there a way to dynamically add values in deployment.yml files?

I have a deployment.yml file where I'm mounting the service's logging folder to a folder on the host machine.
The issue is that when I run multiple instances from the same deployment.yml file, e.g. when scaling up, all the instances log to the same file. Is there a way to solve this by dynamically creating a folder on the host machine based on the container ID or something similar? Any suggestions are appreciated.
My current deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
spec:
selector:
matchLabels:
app: logstash
replicas: 2
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: logstash:6.8.6
volumeMounts:
- mountPath: /usr/share/logstash/config/
name: config
- mountPath: /usr/share/logstash/logs/
name: logs
volumes:
- name: config
hostPath:
path: "/etc/logstash/"
- name: logs
hostPath:
path: "/var/logs/logstash"
There are some fields in Kubernetes which you can get dynamically, like the node name, pod name, pod IP, etc. Refer to this doc for examples: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Here is an example where you set the node name as an environment variable:
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
You can change your deployment in such a way that it creates the log file with the node name added to it; that way you get a different file name on each node. The recommendation is to create a DaemonSet instead of a Deployment, which will spawn one pod on each selected node (selection can be done using a node selector).
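Another option, on reasonably recent Kubernetes versions, is to expand a downward-API variable directly in the mount with subPathExpr, so that each pod writes into its own sub-directory of the host path. A minimal sketch, using the pod name purely as an example key:
containers:
- name: logstash
  image: logstash:6.8.6
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /usr/share/logstash/logs/
    name: logs
    subPathExpr: $(POD_NAME)   # each pod logs under /var/logs/logstash/<pod-name> on the host
volumes:
- name: logs
  hostPath:
    path: "/var/logs/logstash"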
You can use sed to dynamically add some values. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
spec:
selector:
matchLabels:
app: logstash
replicas: 2
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: logstash:6.8.6
volumeMounts:
- mountPath: /usr/share/logstash/config/
name: config
- mountPath: /usr/share/logstash/logs/
name: logs
volumes:
- name: config
hostPath:
path: {path}
- name: logs
hostPath:
path: "/var/logs/logstash"
Now, to dynamically set the path, I can simply run:
sed -i "s|{path}|/etc/logstash/|g" deployment.yml
In this way, you can put as many values as you want before deploying the file.

How to dynamically populate values into Kubernetes yaml files

I would like to pass in some of the values in Kubernetes YAML files at runtime, for example by reading them from a config/properties file.
What is the best way to do that?
In the below example, I do not want to hardcode the port value; instead I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: test
namespace: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: logstash
ports:
- containerPort: 33044 (looking to read this port from config file)
env:
- name: INPUT_PORT
value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
Chart.yaml
values.yaml
charts/
templates/
...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
containerPort: 33044
image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
name: test
namespace: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: logstash
ports:
- containerPort: {{ .Values.myTestService.containerPort }}
env:
- name: INPUT_PORT
value: "5044"
Once finished you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster you can use helm install mychart-VERSION.tgz. That will then deploy your chart to the cluster. The version number is set within the Chart.yaml file.
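For instance, assuming the chart directory is named mychart and the version in Chart.yaml is 0.1.0 (with Helm 3 an explicit release name, here my-release, is also required):
# Package the chart into a .tgz archive
helm package mychart
# Deploy the packaged chart to the cluster
helm install my-release mychart-0.1.0.tgz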
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to hold external configuration such as property files.
First, create a ConfigMap artifact out of your properties file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment YAML you can consume it as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
name: test
labels:
app: test
spec:
containers:
- name: test
image: test
ports:
- containerPort: 33044
volumeMounts:
- name: config-volume
mountPath: /etc/creds <mount path>
volumes:
- name: config-volume
configMap:
name: my-config
Here, under mountPath you need to provide the location in your container where your property file should reside, and under the configMap name you should give the name of the ConfigMap you created.
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
name: test
labels:
app: test
spec:
containers:
- name: test
image: test
ports:
- containerPort: 33044
env:
- name: DB_PROPERTIES
valueFrom:
configMapKeyRef:
name: my-config
key: <property name>
Here, under the configMapKeyRef section, name should be the name of the ConfigMap you created (e.g. my-config) and key should be the key in that ConfigMap whose value you want; Kubernetes will resolve the value internally and inject it into the environment variable.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created; containerPort is one of them.
You can add a new container to a pod, though, and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say with sed) and run the kubectl replace -f FILE command, or use the kubectl edit DEPLOYMENT command, which applies the changes automatically.
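For instance, a minimal sketch of the sed approach, assuming the manifest is saved as logstash.yaml and uses a hypothetical {{PORT}} placeholder:
# Substitute the placeholder and re-create the resource in one go
sed "s/{{PORT}}/33044/g" logstash.yaml | kubectl replace -f -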