How to add external yml config file into Kubernetes config map - kubernetes

I have a config file inside a config folder, i.e. config/console-service.yml. I am trying to load it at runtime using a ConfigMap. Below is my deployment YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: consoleservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: docker.example.com/app:1
        volumeMounts:
        - name: console-config-volume
          mountPath: /config/console-server.yml
          subPath: console-server.yml
          readOnly: true
      volumes:
      - name: console-config-volume
        configMap:
          name: console-config
kind: ConfigMap
apiVersion: v1
metadata:
  name: consoleservice
data:
  pool.size.core: 1
  pool.size.max: 16
I am new to ConfigMaps. How can I read the .yml configuration from the config/ location?

There are two possible solutions for your problem.
1. Embed your file directly into the ConfigMap
This could look similar to this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: some-yaml
data:
  file.yaml: |
    pool:
      size:
        core: 1
        max: 16
2. Create a ConfigMap from your YAML file
This would be done using kubectl:
kubectl create configmap some-yaml \
  --from-file=./some-yaml-file.yaml
This creates a ConfigMap containing the selected file. You can add multiple files to a single ConfigMap.
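For example, a sketch with a second, hypothetical file name:
kubectl create configmap some-yaml \
  --from-file=./some-yaml-file.yaml \
  --from-file=./another-yaml-file.yaml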
You can find more information in the Documentation.
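For completeness, a minimal sketch of how a pod could consume such a ConfigMap as a file (names follow the examples above; adjust them to your own spec):
    spec:
      containers:
      - name: consoleservice
        image: docker.example.com/app:1
        volumeMounts:
        - name: config-volume
          mountPath: /config/file.yaml   # mounts only this file, leaving the rest of /config intact
          subPath: file.yaml             # must match a key in the ConfigMap's data
      volumes:
      - name: config-volume
        configMap:
          name: some-yaml                # must match the ConfigMap's metadata.name
Note that in your original manifests the volume references a ConfigMap named console-config, while the ConfigMap you defined is named consoleservice; those names must match.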


Kubernetes pod level configuration externalization in spring boot app

I need some help from the community; I'm still new to K8s and Spring Boot. Thanks all in advance.
What I need is to have 4 K8s pods running in the K8s environment, each with a slightly different configuration. For example, I have a property called regions in one of my Java classes; it extracts its value from application.yml, like
@Value("${regions}")
private String regions;
Now, after deploying to the K8s environment, I want to have 4 pods running (I can configure that in the Helm file), and in each pod the regions field should have a different value.
Is this achievable? Can anyone please give any advice?
If you want to run 4 pods with different configurations, you have to create 4 different Deployments in Kubernetes.
You can create different ConfigMaps as needed, storing either the whole application.yaml file or individual environment variables, and inject them into the different Deployments.
How to store the whole application.yaml inside a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-first
data:
  application.yaml: |-
    data: test,
    region: first-region
In the same way, you can create the ConfigMap for the second deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-second
data:
  application.yaml: |-
    data: test,
    region: second-region
You can inject this ConfigMap into each Deployment.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-app
  name: hello-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-app
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/nginx/app.yaml
          name: yaml-file
          readOnly: true
      volumes:
      - configMap:
          name: yaml-region-second
          optional: false
        name: yaml-file
Accordingly, you can also template this in a Helm chart.
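For example, a hypothetical template (templates/configmap.yaml) that parameterizes the region via .Values.region, reusing the ConfigMap shape from above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-{{ .Values.region }}
data:
  application.yaml: |-
    data: test,
    region: {{ .Values.region }}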
If you just want to pass a single environment variable instead of storing the whole file inside the ConfigMap, you can add the value directly to the Deployment.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: REGION
      value: "one"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(REGION) $(HONORIFIC) $(NAME)"]
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For each Deployment your environment will be different, and in Helm you can also update or override it dynamically using a CLI command.
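For example, a sketch assuming a chart that exposes the region value as above (the chart path and release names are placeholders):
helm install app-first ./mychart --set region=first-region
helm install app-second ./mychart --set region=second-region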

Replacing properties file in container using configmaps in kubernetes

I am trying to replace a properties file in a container using a ConfigMap and a volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "applictaion-conf"
          subPath: "application.properties"
      volumes:
      - name: applictaion-conf
        configMap:
          name: dddeagent-configproperties
          items:
          - key: "application.properties"
            path: "application.properties"
Below is a snippet from the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
After deployment, the complete folder structure is getting mounted instead of a single file, because of which all the files in the existing folder are getting deleted.
Version - 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties", but you have defined a ConfigMap object named "agent-configp".
configMap:
  name: dddeagent-configproperties
Should be:
configMap:
  name: agent-configp
Besides that, there are a few indentation errors, so I will paste the fixed files at the end of the answer.
To the point of your question: your approach is correct, and when I tested it in my setup everything worked properly without any issues. I created a sample pod with the ConfigMap mounted the same way you are doing it (into a directory containing other files). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
  /app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double-check that on a pod without the mounted ConfigMap the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and that other files are there. As your image is not publicly available, I checked with the tomcat image, and its /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
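A quick way to check this yourself (the pod name is a placeholder):
kubectl exec <pod-name> -- ls /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf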
Fixed deployment and ConfigMap files with good indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "application-conf"
          subPath: "application.properties"
      volumes:
      - name: application-conf
        configMap:
          name: agent-configp
          items:
          - key: "application.properties"
            path: "application.properties"
Config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my target, then show what I have done to achieve it... my goal is to:
create a ConfigMap that holds a path to a properties file
create a deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: my-client-container
        image: {{ .Values.image.client }}
        imagePullPolicy: {{ .Values.pullPolicy.client }}
        ports:
        - containerPort: 80
        env:
        - name: MY_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: my_properties_file_name
        volumeMounts:
        - name: config
          mountPath: "/etc/config"
          readOnly: true
      imagePullSecrets:
      - name: secret-private-registry
      volumes:
      # You set volumes at the Pod level, then mount them into containers inside that Pod
      - name: config
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: my-configmap
          # An array of keys from the ConfigMap to create as files
          items:
          - key: "my_properties_file_name"
            path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the file name as indicated in the ConfigMap), and not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Or you can also use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
In either method, you are actually passing a snapshot of the file's content on your local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
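A common idiom to refresh it is to re-generate the manifest and apply it, for example:
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -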
The design of Kubernetes allows running kubectl commands against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
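As an illustration only, a minimal sketch of an NFS-backed PersistentVolume (the server address and export path are placeholders, not values from your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-config-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/config     # hypothetical exported directory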

Can I get a ConfigMap value from an external file?

I have this ConfigMap defined:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    app: my-config
data:
  myConfiguration.json: |
    {
      "configKey": [
        {
          "key": "value"
        },
        {
          "key": "value"
        }
      ]
    }
and this is how I use it in my pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: someimage
  name: someimage
spec:
  selector:
    matchLabels:
      app: someimage
  replicas: 1
  template:
    metadata:
      labels:
        app: someimage
    spec:
      containers:
      - image: someimage
        name: someimage
        command:
        - mb
        - --configfile
        - /configFolder/myConfig.json
        ports:
        - containerPort: 2525
        volumeMounts:
        - name: config-volume
          mountPath: /configFolder
      hostname: somehost
      restartPolicy: Always
      nodeSelector:
        beta.kubernetes.io/os: linux
      volumes:
      - name: config-volume
        configMap:
          name: my-config
          items:
          - key: myConfiguration.json
            path: myConfiguration.json
My question is: is it possible to keep the value of myConfiguration.json (the JSON string) in a separate file, apart from the ConfigMap, in order to keep it clean? How would I need to change the deployment and ConfigMap YAML definitions so I do not have to change the application?
Important: I cannot use any separate templating tool.
Thanks.
Yes you can! Using Kustomize.
Kustomize is a kubectl sub-command introduced in Kubernetes 1.14, and it has a lot of features that will help you customize your deployments.
To do that you'll have to use ConfigMap generators. This requires an additional file, kustomization.yaml.
So if your deployment YAML file is deployment.yaml and your ConfigMap's name is my-config, then kustomization.yaml should look something like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: my-config
  files:
  - myConfiguration.json
  - myConfiguration2.json # you can use multiple files
To run Kustomize, you'll have to use kubectl apply with the -k option.
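For example, assuming kustomization.yaml sits in the current directory:
kubectl apply -k .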
Edit: Kustomize will append a hash of your ConfigMaps' contents to their names. With that, it can track changes to your configuration and trigger a redeploy for you whenever it changes.
So there is no need to delete your pods whenever your ConfigMaps are altered.
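As an illustration of the generated names, the object emitted by the generator would look something like this (the hash suffix here is made up):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-7b5tf2m9k4   # <name>-<content hash> appended by the generator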

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using ConfigMaps.
The configuration file has the following key, and we are trying to replace the value of "ChildKey" without replacing the entire file:
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The ConfigMap looks like:
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the deployment, the ConfigMap is used like this:
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey
          valueFrom:
            configMapKeyRef:
              key: ParentKey
              name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner:
The ConfigMap carries a simpler structure, only the child element:
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this:
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey__ChildKey
          valueFrom:
            configMapKeyRef:
              key: ChildKey
              name: cf
Posting this for reference.
Use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom.
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
- name: $(name)
  image: $(image)
  envFrom:
  - configMapRef:
      name: common-config
  - configMapRef:
      name: specific-config
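For this to work, common-config and specific-config would be ConfigMaps whose keys are the environment variable names; a minimal sketch of one of them, reusing the key from above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-config
data:
  ParentKey__ChildKey: "456"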