I have the following setup:
An Azure Kubernetes cluster with some nodes where my application (consisting of multiple pods) is running.
I'm looking for a good way to make a project-specific configuration file (a few hundred lines) available for two of the deployed containers and their replicas.
The configuration file is different between my projects but the containers are not.
I'm looking for something like a read-only file mount in the containers, but haven't found a good way to do it. I played around with persistent volume claims, but there seems to be no automatic way to place files into them apart from copying (which involves managing URIs and secrets).
The best thing would be a possibility where kubectl uses a yaml file to access a specific folder on my developer machine and pushes my configuration file into the cluster.
ConfigMaps don't seem to be a proper way to do it (because the data has to be inside the yaml, and my file is big and changes often).
For volumes there seems to be no automatic way to place files inside them at creation time.
Can anybody guide me to a good solution that matches my situation?
You can use a ConfigMap for this; the ConfigMap then contains your config file, but you don't have to paste the content into a yaml by hand. You can create a ConfigMap from the content of your config file via the following:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini
and then mount it as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: mypod
      ...
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-config # the name of your configmap
Afterwards your config is available in your pod under /config/my-config.ini
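Since the file is large and changes between projects, you don't edit the ConfigMap yaml by hand; you can re-run the same kubectl create command with a client-side dry-run and pipe it into kubectl apply to update the existing ConfigMap whenever your local file changes (a minimal sketch, reusing the name and path from above):

# regenerate the ConfigMap manifest from the local file and apply it in place
kubectl create configmap my-config \
  --from-file=my-config.ini=/path/to/your/config.ini \
  --dry-run=client -o yaml | kubectl apply -f -

Pods that mount the ConfigMap as a volume pick up the new content after a short delay, without a restart.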
Related
I have a microservice for handling retention policy.
This application has default configuration for retention, e.g.: size for retention, files location etc.
But we also want to create an API for the user to change this configuration with customized values at runtime.
I created a configmap with the default values, and in the application I used k8s client library to get/update/watch the configmap.
My question is: is it correct to use a ConfigMap for dynamic business configuration, or is it meant for static configuration that the user is not supposed to touch at runtime?
Thanks in advance
There are no rules against it. A lot of software leverages the kube API to handle some kind of logic or state, e.g. leader election. All of those require the app to apply changes to a kube resource. With that in mind, remember that it always puts some additional load on your API server, and if you're unlucky that might become an issue. About two years ago we experienced API limits exhaustion on one of the managed k8s services because we were running a lot of deployments with rather intensive leader election logic (2 requests per pod every 5 seconds). The issue is long gone now, but it shows what you have to take into account when designing interactions like this (retries, backoffs etc.).
Using ConfigMaps is perfectly fine for such use cases. You can use a client library to watch for updates on the given ConfigMap, but a cleaner solution is to mount the ConfigMap as a file into the pod and have your application read its configuration from that file. Since you're mounting the ConfigMap as a volume, changes become visible inside the pod without a restart (unlike environment variables, which only "refresh" once the pod gets recreated).
Let's say you have this configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
And then you mount this configMap as a Volume into your Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: registry.k8s.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
SPECIAL_LEVEL
SPECIAL_TYPE
This way you also reduce "noise" on the API server, since you can simply check the mounted files for configuration updates.
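If you want to avoid the client library entirely, even a simple polling loop inside the container is enough to notice updates to the mounted files (a minimal sketch, assuming the SPECIAL_LEVEL file from the example above; the interval and the reload action are placeholders):

#!/bin/sh
# Watch the mounted ConfigMap file and react when its content changes.
CONFIG=/etc/config/SPECIAL_LEVEL
last=""
while true; do
  current=$(md5sum "$CONFIG" | cut -d' ' -f1)
  if [ "$current" != "$last" ]; then
    echo "configuration changed, reloading..."
    # trigger your application's reload hook here (placeholder)
    last="$current"
  fi
  sleep 10
done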
I want to share my non-empty local directory with kind cluster.
Based on answer here: How to reference a local volume in Kind (kubernetes in docker)
I tried a few variations of the following:
Kind Cluster yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /Users/xyz/documents/k8_automation/data/manual/
        containerPath: /host_manual
    extraPortMappings:
      - containerPort: 30000
        hostPort: 10000
Pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: manual
spec:
  serviceAccountName: manual-sa
  containers:
    - name: tools
      image: tools:latest
      imagePullPolicy: Never
      command:
        - bash
      tty: true
      volumeMounts:
        - mountPath: /home/jenkins/agent/data
          name: data
  volumes:
    - name: data
      hostPath:
        path: /host_manual
        type: Directory
---
I see that the directory /home/jenkins/agent/data does exist when the pod gets created. However, the folder is empty.
kind's documentation is here: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
According to that, whatever is on the local machine at hostPath (/Users/xyz/documents/k8_automation/data/manual/) in extraMounts in the cluster yaml should be available to the node at containerPath (/host_manual), which then gets mounted into the container at the volume mountPath (/home/jenkins/agent/data).
I should add that even if I change the hostPath in the cluster yaml to a non-existent folder, an empty "data" folder still gets mounted in the container, so I think the connection from my local machine to the kind cluster is the issue.
Why am I not getting the contents of /Users/xyz/documents/k8_automation/data/manual/ with its many files also available at /home/jenkins/agent/data in the container?
How can I fix this?
Any alternatives if there is no fix?
It turns out the yaml configuration was just fine.
The reason the directory was not showing up in the container was related to Docker settings. And because "kind is a tool for running local Kubernetes clusters using Docker container 'nodes'", that matters.
It seems Docker restricts resource sharing and by default only allows specific directories to be bind mounted into its containers. Once I added the directory I wanted to show up in the container to the list under Preferences -> Resources -> File sharing, it worked!
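When debugging this kind of problem it also helps to check each hop separately. Assuming the cluster was created with the default name kind, the node runs as a Docker container called kind-control-plane, so you can verify whether the host directory actually reached the node before looking at the pod:

# 1. Is the directory visible inside the kind node container?
docker exec -it kind-control-plane ls /host_manual

# 2. Is it visible inside the pod?
kubectl exec -it manual -- ls /home/jenkins/agent/data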
We have a namespace in Kubernetes where I would like some secrets (files like jks, properties, ts, etc.) to be made available to all the containers in all the pods (we have one JVM per container and one container per pod kind of Deployment).
I have created the secrets using kustomization and plan to use them as a volume in the spec of each Deployment, and then volumeMount them into the container of that Deployment. I would like this volume to be mounted on each of the containers deployed in our namespace.
I want to know if kustomize (or anything else) can help me to mount this volume on all the deployments in this namespace?
I have tried the following patchesStrategicMerge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: myNamespace
spec:
  template:
    spec:
      imagePullSecrets:
        - name: pull-secret
      containers:
        - volumeMounts:
            - name: secret-files
              mountPath: "/secrets"
              readOnly: true
      volumes:
        - name: secret-files
          secret:
            secretName: mySecrets
            items:
              - key: key1
                path: ...somePath
              - key: key2
                path: ...somePath
It requires a name in the metadata section, which does not help me since all my Deployments have different names.
Inject Information into Pods Using a PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables into pods at creation time.
Update (Feb 2021): the PodPreset feature only made it to alpha and was removed in v1.20 of Kubernetes. See the release notes: https://kubernetes.io/docs/setup/release/notes/
The v1alpha1 PodPreset API and admission plugin has been removed with no built-in replacement. Admission webhooks can be used to modify pods on creation. (#94090, @deads2k) [SIG API Machinery, Apps, CLI, Cloud Provider, Scalability and Testing]
PodPreset (https://kubernetes.io/docs/tasks/inject-data-application/podpreset/) is one way to do this, but all pods in your namespace must match the label you specify in the PodPreset spec.
Another way (and the most popular one) is to use Dynamic Admission Control (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and write a mutating webhook in your cluster which edits your pod spec and adds all the secrets you want to mount. Using this you can also make other changes to the pod spec, such as mounting volumes, adding labels and more.
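To give an idea of the moving parts, the registration object for such a webhook could look roughly like this (a sketch only; the service name secret-injector, the /mutate path and the label selector are placeholders, and you still have to write and deploy the webhook server that actually patches the pod spec):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: inject-secrets-webhook
webhooks:
  - name: inject-secrets.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: secret-injector        # placeholder: your webhook Service
        namespace: myNamespace
        path: /mutate
      caBundle: "<base64-encoded CA cert of the webhook server>"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: myNamespace   # only mutate pods in this namespace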
Standalone kustomize supports applying a patch to many resources. Here is an example: Patching multiple resources at once. The kustomize built into kubectl doesn't support this feature.
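For illustration, a standalone kustomization.yaml along these lines can apply one patch to every Deployment it renders, regardless of name (a sketch only; it assumes each Deployment already declares volumes and volumeMounts arrays to append to, and it patches the first container):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment-a.yaml
  - deployment-b.yaml
patches:
  - target:
      kind: Deployment          # matches every Deployment, whatever its name
    patch: |-
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: secret-files
          secret:
            secretName: mySecrets
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          name: secret-files
          mountPath: /secrets
          readOnly: true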
To mount a secret as a volume you need to update the yaml of your pod/deployment manifests and re-apply them.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-secret-volume
          mountPath: /etc/secretpath
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-secret
kustomize (or anything else) will not mount it for you.
I'm new to Kubernetes, and I'm wondering about the best way to inject values into a ConfigMap.
For now, I defined a Deployment object which takes the relevant values from a ConfigMap file. I wish to use the same .yml file for my production and staging environments, so that only the values in the ConfigMap change while the file itself stays the same.
Is there any built-in way to do this in Kubernetes, without using configuration management tools (like Ansible, Puppet, etc.)?
You can find the links to the quoted text at the end of the answer.
A good practice when writing applications is to separate application code from configuration. We want to enable application authors to easily employ this pattern within Kubernetes. While the Secrets API allows separating information like credentials and keys from an application, no object existed in the past for ordinary, non-secret configuration. In Kubernetes 1.2, we’ve added a new API resource called ConfigMap to handle this type of configuration data.
Besides, Secrets data will be stored in a base64 encoded form, which is also suitable for binary data such as keys, whereas ConfigMaps data will be stored in plain text format, which is fine for text files.
The ConfigMap API is simple conceptually. From a data perspective, the ConfigMap type is just a set of key-value pairs.
There are several ways you can create config maps:
Using list of values in the command line
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
Using a file on the disk as a source of data
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
Using directory with files as a source of data
$ kubectl create configmap game-config --from-file=configure-pod-container/configmap/kubectl/
Combining all three previously mentioned methods
There are several ways to consume ConfigMap data in Pods
Use values in ConfigMap as environment variables
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY)" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_LEVEL
Use data in ConfigMap as files on the volume
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # ConfigMap containing the files
        name: special-config
Only changes in ConfigMaps that are consumed in a volume will be visible inside the running pod. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.
A Pod whose specification references a non-existent ConfigMap or Secret won't start.
Consider reading the official documentation and other good articles for even more details:
Configuration management with Containers
Configure a Pod to Use a ConfigMap
Using ConfigMap
Kubernetes ConfigMaps and Secrets
Managing Pod configuration using ConfigMaps and Secrets in Kubernetes
You can also create a configmap from an env file:
kubectl create configmap special-config \
--from-env-file=configure-pod-container/configmap/kubectl/game-env-file.properties
and access it in the container
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
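For illustration, if the env file contains plain KEY=value lines like the hypothetical content below (--from-env-file ignores comments and blank lines), each key shows up as an environment variable when the container runs env:

# game-env-file.properties (hypothetical content)
enemies=aliens
lives=3

# output of `env` inside the container then includes:
enemies=aliens
lives=3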
If you're thinking of Ansible, then I suspect you'll want to look at Helm for this. I don't think it is a concern that Kubernetes itself addresses, but Helm is a Kubernetes project.
If I understand correctly you've got a configmap yaml file and you want to deploy it with one set of values for staging and one for production.
A natural way to do this would be to keep two copies of the file with '-staging' and '-prod' appended to the name and have your CI choose the one for the environment it is deploying to. Or you could have a shell script in your CI that does a sed/replace on the particular values you want to switch per environment, as sketched below.
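As a sketch of the sed/replace variant, you could keep a template with placeholder tokens and have CI substitute them before applying (the token names, values and file names are just examples):

# configmap.template.yaml contains lines like:   log_level: "__LOG_LEVEL__"
sed -e "s/__LOG_LEVEL__/debug/" \
    -e "s/__DB_HOST__/staging-db.internal/" \
    configmap.template.yaml > configmap.yaml
kubectl apply -f configmap.yaml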
Using Helm you could pass in command-line parameters at deploy time or via a values file (values.yaml).
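With Helm the ConfigMap becomes a template and only the values file differs per environment (a minimal sketch; the chart layout and value names are illustrative):

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  log_level: {{ .Values.logLevel | quote }}

# values-staging.yaml
logLevel: debug

# deploy the staging flavour with:
helm upgrade --install my-app ./chart -f values-staging.yaml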
In K8s, what is the best way to execute scripts in a container (pod) once at deployment, which read from configuration files that are part of the deployment and seed e.g. MongoDB once?
My project consists of k8s manifest files + configuration files.
I would like to be able to update the config files locally and then redeploy via kubectl or helm
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands that read from the config files in the volume. How is this best done in K8s? I don't want to include the configuration files in an image via a Dockerfile, forcing me to rebuild the image before redeploying again via kubectl or helm.
How is this best done in K8S?
There are several ways to skin a cat, but my suggestion would be to do the following:
Keep the configuration in a configMap and mount it as a separate volume. Such a map is kept as a k8s manifest, keeping all changes to it separate from the docker image build - no need to rebuild the image or to keep sensitive data inside it. You can also use a secret instead of (or together with) a configMap in the same manner.
Use initContainers to do the initialization before the main container is brought online, covering your 'once at deployment' requirement automatically. Alternatively (if the init operation is not repeatable) you can use a Job instead and start it when necessary; a sketch follows below.
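As a sketch of the initContainers idea, an excerpt like the following could run a seed script before the main container starts (the mongo image, the my-mongo hostname and the seed.js key are placeholders; the seed script itself would come from a configMap like the ones shown further down):

spec:
  template:
    spec:
      initContainers:
        - name: seed-db
          image: mongo:6                      # any image that ships the mongosh client
          command: ["mongosh", "--host", "my-mongo", "/seed/seed.js"]
          volumeMounts:
            - name: volume-from-config-map-seed
              mountPath: /seed
              readOnly: true
      volumes:
        - name: volume-from-config-map-seed
          configMap:
            name: cm-my-seed-script           # a configMap holding seed.js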
Here is an excerpt of an example we are using on a GitLab runner:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ss-my-project
spec:
  ...
  template:
    ....
    spec:
      ...
      volumes:
        - name: volume-from-config-map-config-files
          configMap:
            name: cm-my-config-files
        - name: volume-from-config-map-script
          projected:
            sources:
              - configMap:
                  name: cm-my-scripts
                  items:
                    - key: run.sh
                      path: run.sh
                      mode: 0755
      # if you need to run as non-root here is how it is done:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
        - image: ...
          name: ...
          command:
            - /scripts/run.sh
          ...
          volumeMounts:
            - name: volume-from-config-map-script
              mountPath: "/scripts"
              readOnly: true
            - mountPath: /usr/share/my-app-config/config.file
              name: volume-from-config-map-config-files
              subPath: config.file
          ...
You can, of course, mount several volumes from config maps or combine them into a single one, depending on the frequency of your changes and the affected parts. This is an example with two separately mounted configMaps just to illustrate the principle (and to mark the script executable), but you can use only one for all required files, put several files into one, or put a single file into each - as per your need.
An example of such a configMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-scripts
data:
  run.sh: |
    #!/bin/bash
    echo "Doing some work here..."
And an example of the configMap covering the config file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-my-config-files
data:
  config.file: |
    ---
    # Some config.file (example name) required in project
    # in whatever format config file actually is (just example)
    ... (here is actual content like server.host: "0" or EFG=True or whatever)
Playing with single or multiple files in configMaps can yield the result you want, and depending on your needs you can have as many or as few as you want.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands that read from the config files in the volume.
In k8s the equivalent of this would be hostPath, but then you would seriously hamper k8s's ability to schedule pods to different nodes. This might be OK if you have a single-node cluster (or while developing) to ease changing config files, but for an actual deployment the approach above is advised.
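For completeness, the hostPath shortcut for a single-node or development setup would look roughly like this (a sketch only; the image and paths are illustrative, and the directory has to exist on whichever node the pod lands on):

spec:
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        - name: config-dir
          mountPath: /etc/my-app
          readOnly: true
  volumes:
    - name: config-dir
      hostPath:
        path: /opt/my-app-config   # directory on the node, not on your laptop
        type: Directory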