Using a ConfigMap created by a generator in Kustomize/Kubernetes

I have been trying to figure out how to consume a ConfigMap created using a ConfigMap generator via Kustomize.
When created using Kustomize generators, the configMaps are named with a special suffix. See here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap-from-generator
The question is: how can this ConfigMap be referenced?

You don't reference it yourself. Kustomize recognizes where the configMap is used in the other resources (like a Deployment) and changes those references to use the name+hash.
The reason for this is so that if you change the configMap, Kustomize generates a new hash and updates the Deployment, causing a rolling restart of the Pods.
If you don't want this behavior, you can add the following to your kustomization.yaml file:
generatorOptions:
  disableNameSuffixHash: true
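For example, a minimal sketch (the file and resource names here are illustrative): given this kustomization.yaml, a Deployment listed in resources that refers to the ConfigMap by its plain name is rewritten in the rendered output.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: game-config
    files:
      - game.properties
If deployment.yaml references name: game-config, running kubectl kustomize . renders it as name: game-config-<hash>.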

It is specified in the doc: when you run kubectl apply -k ., a ConfigMap named game-config-4-m9dm2f92bt is created.
You can verify that the ConfigMap was created with kubectl get configmap. The ConfigMap has a data field containing the data you provided.
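For example, you can inspect the generated ConfigMap and its data field with (name taken from the doc above):
kubectl get configmap game-config-4-m9dm2f92bt -o yaml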
Now you can use this ConfigMap in a Pod as usual, like the example below from the Kubernetes docs:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
You can also use a ConfigMap as a volume, as in this example from the Kubernetes docs:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                     # from the key name in the ConfigMap.
          valueFrom:
            configMapKeyRef:
              name: game-demo           # The ConfigMap this value comes from.
              key: player_initial_lives # The key to fetch.
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    # You set volumes at the Pod level, then mount them into containers inside that Pod
    - name: config
      configMap:
        # Provide the name of the ConfigMap you want to mount.
        name: game-demo
        # An array of keys from the ConfigMap to create as files
        items:
          - key: "game.properties"
            path: "game.properties"
          - key: "user-interface.properties"
            path: "user-interface.properties"
See the official Kubernetes docs for more details.

I was struggling with this too. I could not figure out why kustomize was not updating the ConfigMap name for the volume in the Deployment to include the hash. What solved it for me was adding namespace: <namespace> to the kustomization.yaml of both the base and the overlay.
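A minimal sketch of that fix (the namespace value and file names are illustrative; the same namespace line goes in both the base and the overlay kustomization.yaml):
namespace: my-namespace
resources:
  - deployment.yaml
configMapGenerator:
  - name: my-config
    files:
      - app.properties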

Related

Add pod name to ConfigMap in Kubernetes

I'm trying to figure out how to change one string inside a ConfigMap in Kubernetes. I have a pretty simple ConfigMap:
apiVersion: v1
data:
  config.cfg: |-
    [authentication]
    USERNAME=user
    PASSWORD=password
    [podname]
    PODNAME=metadata.podName
kind: ConfigMap
metadata:
  name: name_here
And I need to mount the ConfigMap inside a couple of Pods, but PODNAME should match the current Pod's name. Is it possible in any other way? Thanks!
I do not think it can be done with a ConfigMap. But you can set environment variables in your Pod spec that reference Pod fields:
apiVersion: v1
kind: Pod
metadata:
  name: test-ref-pod-name
spec:
  containers:
    - name: test-container
      image: busybox
      command: [ "sh", "-c" ]
      args:
        - env | grep PODNAME
      env:
        - name: PODNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
See official documentation: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables
This doesn't answer your question exactly, but the Pod name normally ends up as the hostname inside the Pod and can be accessed as a standard environment variable:
echo $HOSTNAME

configmaps are not passing properly to the containers

I have a ConfigMap like the one below.
apiVersion: v1
data:
  server.properties: |+
    server.hostname=test.com
kind: ConfigMap
metadata:
  name: my-config
And I tried to read this config inside a container.
containers:
  - name: testserver
    env:
      - name: server.hostname
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: server.properties.server.hostname
However, these values are not passed to the container properly. Do I need to make any changes to my configs?
What you have there isn't the right key. ConfigMaps are strictly one level of key/value pairs. The |+ syntax is YAML for a multiline string, but the fact that the data inside it is itself structured is not something the system knows. As far as Kubernetes is concerned, you have one key there, server.properties, with a string value that is opaque.
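If you want to consume the hostname as a single value, one option (a sketch reusing the names from the question; the env var name SERVER_HOSTNAME is illustrative, since dots in env var names are problematic) is to promote it to a top-level key:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  server.hostname: test.com
and then reference that key in the container:
env:
  - name: SERVER_HOSTNAME
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: server.hostname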

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my goal first, then show what I have done to achieve it. My goal is to:
create a ConfigMap that holds the path of a properties file
create a Deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
        - name: my-client-container
          image: {{ .Values.image.client }}
          imagePullPolicy: {{ .Values.pullPolicy.client }}
          ports:
            - containerPort: 80
          env:
            - name: MY_PROPERTIES_FILE_NAME
              valueFrom:
                configMapKeyRef:
                  name: my-configmap
                  key: my_properties_file_name
          volumeMounts:
            - name: config
              mountPath: "/etc/config"
              readOnly: true
      imagePullSecrets:
        - name: secret-private-registry
      volumes:
        # You set volumes at the Pod level, then mount them into containers inside that Pod
        - name: config
          configMap:
            # Provide the name of the ConfigMap you want to mount.
            name: my-configmap
            # An array of keys from the ConfigMap to create as files
            items:
              - key: "my_properties_file_name"
                path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the file name indicated in the ConfigMap), not the content of the properties file as it actually exists on my local disk.
How can I mount that file using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Or you can also use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are passing a snapshot of the file's content on the local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
Kubernetes is designed so that kubectl can run against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume mounted by both your local machine and the cluster, for example using an NFS server.
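For example, a common idiom to refresh the stored snapshot from the local file (same names as above; --dry-run=client needs a reasonably recent kubectl):
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -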

Best practice for Operators: how to get a Deployment's configuration

I am working with operator-sdk. In the controller we often need to create a Deployment object, and a Deployment resource has a lot of configuration items, such as environment variables or port definitions, as in the following. I am wondering what the best way is to get these values; I don't want to hard-code them, for example variable_a or variable_b.
You could put them in the CRD spec and pass them to the Operator controller; or put them in a ConfigMap and pass the ConfigMap name to the controller, which then reads the ConfigMap; or put them in a template file that the controller reads.
What is the best way or best practice to deal with this situation? Thanks for sharing your ideas.
deployment := &appsv1.Deployment{
	ObjectMeta: metav1.ObjectMeta{
		Name:      m.Name,
		Namespace: m.Namespace,
		Labels:    ls,
	},
	Spec: appsv1.DeploymentSpec{
		Replicas: &replicas,
		Selector: &metav1.LabelSelector{
			MatchLabels: ls,
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: ls,
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Image: "....",
					Name:  m.Name,
					Ports: []corev1.ContainerPort{{
						ContainerPort: port_a,
						Name:          "tcpport",
					}},
					Env: []corev1.EnvVar{
						{
							Name:  "aaaa",
							Value: variable_a,
						},
						{
							Name:  "bbbb",
							Value: variable_b,
						},
					},
				}},
			},
		},
	},
}
Using environment variables
It can be convenient for your app to get its data as environment variables.
Environment variables from ConfigMap
For non-sensitive data, you can store your variables in a ConfigMap and then define container environment variables using the ConfigMap data.
Example from Kubernetes docs:
Create the ConfigMap first. File configmaps.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
Create the ConfigMap:
kubectl create -f ./configmaps.yaml
Then define the environment variables in the Pod specification, pod-multiple-configmap-env-variable.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: log_level
  restartPolicy: Never
Create the Pod:
kubectl create -f ./pod-multiple-configmap-env-variable.yaml
Now in your controller you can read these environment variables: SPECIAL_LEVEL_KEY (which will give you the special.how value from the special-config ConfigMap) and LOG_LEVEL (which will give you the log_level value from the env-config ConfigMap).
For example:
specialLevelKey := os.Getenv("SPECIAL_LEVEL_KEY")
logLevel := os.Getenv("LOG_LEVEL")
fmt.Println("SPECIAL_LEVEL_KEY:", specialLevelKey)
fmt.Println("LOG_LEVEL:", logLevel)
Environment variables from Secret
If your data is sensitive, you can store it in a Secret and then use the Secret as environment variables.
To create a Secret manually:
You'll first need to encode your strings using base64.
# encode username
$ echo -n 'admin' | base64
YWRtaW4=
# encode password
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
Then create a Secret with the above data:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
Create a Secret with kubectl apply:
$ kubectl apply -f ./secret.yaml
Note that there are other ways to create a Secret; pick the one that works best for you:
Creating a Secret using kubectl
Creating a Secret from a generator
Creating a Secret from files
Creating a Secret from string literals
Now you can use this created Secret for environment variables.
To use a secret in an environment variable in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in env[].valueFrom.secretKeyRef.
Modify your image and/or command line so that the program looks for values in the specified environment variables.
Here is a Pod example from Kubernetes docs that shows how to use a Secret for environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never
Finally, as stated in the docs:
Inside a container that consumes a secret in environment variables, the secret keys appear as normal environment variables containing the base64-decoded values of the secret data.
Now in your controller you can read these environment variables: SECRET_USERNAME (which will give you the username value from the mysecret Secret) and SECRET_PASSWORD (which will give you the password value from the mysecret Secret).
For example:
username := os.Getenv("SECRET_USERNAME")
password := os.Getenv("SECRET_PASSWORD")
Using volumes
You can also mount both ConfigMaps and Secrets as volumes in your Pods.
Populate a Volume with data stored in a ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
Using Secrets as files from a Pod:
To consume a Secret in a volume in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition to add a volume under .spec.volumes[]. Name the volume anything, and have a .spec.volumes[].secret.secretName field equal to the name of the Secret object.
Add a .spec.containers[].volumeMounts[] to each container that needs the secret. Specify .spec.containers[].volumeMounts[].readOnly = true and .spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
An example of a Pod that mounts a Secret in a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret

Populate ConfigMap by importing data from file in k8s

I have a requirement where I push a bunch of key-value pairs to a text/JSON file. After that, I want to import the key-value data into a ConfigMap and consume it within a Pod using the kubernetes-client APIs.
Any pointers on how to get this done would be great.
TIA
You can do it in two ways.
1. Create a ConfigMap from the file as-is.
In this case you will get a ConfigMap with the filename as the key and the file's content as the value.
For example, say you have a file your-file.json with content {key1: value1, key2: value2, keyN: valueN}, and a file your-file.txt with content:
key1: value1
key2: value2
keyN: valueN
kubectl create configmap name-of-your-configmap --from-file=your-file.json
kubectl create configmap name-of-your-configmap-2 --from-file=your-file.txt
As a result:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap
data:
  your-file.json: |
    {key1: value1, key2: value2, keyN: valueN}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-2
data:
  your-file.txt: |
    key1: value1
    key2: value2
    keyN: valueN
After this you can mount either ConfigMap into a Pod. For example, let's mount your-file.json:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "cat /etc/config/keys" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: name-of-your-configmap
        items:
          - key: your-file.json
            path: keys
  restartPolicy: Never
Now you can read your file's data from /etc/config/keys inside the Pod (the item was mounted under the path keys). Remember that the data is read-only.
2. Create a ConfigMap from an env file.
You can use a special syntax to define key=value pairs in the file.
These syntax rules apply:
Each line in a file has to be in VAR=VAL format.
Lines beginning with # (i.e. comments) are ignored.
Blank lines are ignored.
There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).
Suppose you have a file your-env-file.txt with content:
key1=value1
key2=value2
keyN=valueN
kubectl create configmap name-of-your-configmap-3 --from-env-file=your-env-file.txt
As a result:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-3
data:
  key1: value1
  key2: value2
  keyN: valueN
Now you can use ConfigMap data as Pod environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-2
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: key1
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: key2
        - name: SOME_VAR
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: keyN
  restartPolicy: Never
Now you can use these variables inside the Pod.
For more information, check the documentation.
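Since the question also mentions consuming the ConfigMap via the kubernetes-client APIs, here is a minimal client-go sketch that reads the ConfigMap directly instead of going through env vars or volumes. This assumes client-go v0.18+ (for the context argument), in-cluster config, the default namespace, and RBAC that allows get on configmaps:
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client from the Pod's service account credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the ConfigMap created above and read one of its keys.
	cm, err := clientset.CoreV1().ConfigMaps("default").Get(
		context.TODO(), "name-of-your-configmap-3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("key1 =", cm.Data["key1"]) // e.g. prints "key1 = value1"
}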
I can also recommend Kustomize for this task. You can use it as part of your deployment pipeline to generate the K8s configuration (not only ConfigMaps, but also Deployments, NetworkPolicies, Services etc.).
In kustomize you'd need a ConfigMapGenerator. There are different options. In your case env is suitable.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  # generate a ConfigMap named my-system-env-<some-hash> where each key/value pair in
  # env.txt appears as a data entry (separated by \n).
  - name: my-system-env
    env: env.txt
Other options like files will load the whole content of the file into a single value of the ConfigMap.
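For example, a sketch of what the generator emits (the hash suffix shown is illustrative and will differ):
# env.txt
key1=value1
key2=value2

# `kubectl kustomize .` then produces something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-system-env-58k8mk7tt8
data:
  key1: value1
  key2: value2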
To expose all the key-value pairs from an env or text file as environment variables in a Pod's container:
Create a ConfigMap from the file:
kubectl create configmap special-config --from-env-file=<key value pairs file>
Then update the spec of the container that needs these key-value pairs to use:
envFrom:
  - configMapRef:
      name: special-config
Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never