(A very similar question was asked about 2 years ago; it was specifically about secrets, but I doubt the story is any different for configmaps. At the least, I can present the use case and why the existing workarounds aren't viable for us.)
Given a simple, cut-down deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
        - name: example
          volumeMounts:
            - name: vol
              mountPath: /app/Configuration
      volumes:
        - name: vol
          configMap:
            name: configs
and the matching configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configs
  labels:
    k8s-app: example
data:
  example1.json: |-
    {
      "key1": "value1"
    }
  example2.json: |-
    {
      "key2": "value2"
    }
The keys in configmap.yaml, whatever they may be, are simply created as files under the mount, without deployment.yaml needing to be modified or to know any specifics other than the mountPath.
The problem is that the actual structure has subfolders to handle region-specific values that override the root ones:
Configuration/example1.json
Configuration/example2.json
Configuration/us/example1.json
Configuration/us/ca/example2.json
The number and nature of these could obviously vary, for as many different countries and regions as imaginable, and for each separately configured module. The intent was to provide a tool to the end user that would allow them to set up and manage these configurations, and which would, behind the scenes, automatically generate the configmap.yaml and update it in kubernetes.
However, unless there's a trick I haven't found yet, this seems to be outside of kubernetes's abilities, in a couple ways.
First of all, there is no syntax that allows one to specify configmap keys that are directories, nor include a subdirectory path in a key:
data:
  # one possible approach (currently complains that it doesn't validate '[-._a-zA-Z0-9]+')
  /us/example1.json: |-
    {
      "key1": "value1"
    }
  # another idea; this obviously results in 'invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"'
  us:
    example2.json: |-
      {
        "key2": "value2"
      }
So what are our options to accomplish this?
Wellll, we could:
map the keys to specific locations using the items: - key: ... path: ... approach under the configMap: node in the deployment.yaml's volumes:,
and/or generate several entries in the deployment.yaml's volumeMounts: node using subPath: (which is basically the same as using items: - key: ... path: ... under the volume's configMap:),
or create an individual configmap for each subdirectory and mount them all as different volumes in the deployment.yaml (a sketch of the first approach follows).
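For illustration, here is roughly what the items:/path: mapping would look like; since '/' isn't allowed in a key, the flattened key names (us.example1.json and so on) are assumptions about how the keys would have to be renamed:

volumes:
  - name: vol
    configMap:
      name: configs
      items:
        - key: example1.json
          path: example1.json
        - key: us.example1.json      # assumed flattened key name
          path: us/example1.json
        - key: us.ca.example2.json   # assumed flattened key name
          path: us/ca/example2.json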
All of these methods would require massive and incredibly verbose changes to deployment.yaml, leaking knowledge into it that it has no reason to hold, making it mutable and continually re-generated rather than static, complicating rolling out settings updates to deployed pods, etc. etc. etc. It's just Not Good. And all of that just to map one directory, just because it contains subdirectories...
Surely this CAN'T be the way it's SUPPOSED to work? What am I missing? How should I proceed?
From a "container-native" perspective, having a large file system tree of configuration files that the application processes at startup to arrive at its canonical configuration is an anti-pattern. Better to have a workflow that produces a single file, which can be stored in a ConfigMap and easily inspected in its final form. See, for instance, nginx ingress.
But obviously not everyone is rewriting their apps to better align with the Kubernetes approach. The simplest way, then, to get a full directory tree of configuration files into a container at deploy time is to use an init container and an emptyDir mount.
Package the config file tree into a container image (sometimes called a "data-only" container), and have that container's start script simply copy the config tree into the emptyDir mount. The application can then consume the tree as it expects to.
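A minimal sketch of that pattern; the image names are hypothetical, and the data image needs a shell for the copy (so built from something like busybox rather than literally FROM scratch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels: {app: example}
  template:
    metadata:
      labels: {app: example}
    spec:
      initContainers:
        - name: copy-config
          image: my-registry/config-data:latest   # hypothetical data-only image with the tree under /config
          command: ["sh", "-c", "cp -r /config/. /target/"]
          volumeMounts:
            - name: config
              mountPath: /target
      containers:
        - name: example
          image: my-registry/example:latest        # hypothetical app image
          volumeMounts:
            - name: config
              mountPath: /app/Configuration
      volumes:
        - name: config
          emptyDir: {}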
Depending on the scale of your config tree, another viable option might be to simulate a subtree with, say, underscores instead of slashes in the file "paths" inside the configmap. This will make you lose some general filesystem performance (which should never be a problem if you are only reading configs) and force you to rewrite a little bit of your application's code (traversing file-name patterns instead of directories when accessing configs), but it should solve your use case at a fairly cheap price.
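For example, a sketch of what such a flattened ConfigMap might look like; the underscore convention is an assumption, any separator your app can parse would do:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configs
data:
  example1.json: |-
    { "key1": "value1" }
  us_example1.json: |-
    { "key1": "us-value1" }
  us_ca_example2.json: |-
    { "key2": "us-ca-value2" }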
A few workarounds:
have a data-only container built with the data in it...
FROM scratch
# copy the config tree into the image; the target path is illustrative
COPY Configuration/ /config/
then add it as a sidecar, mounting the volume on the other container...
create a tarball from the config tree, convert it to a configmap, mount it in the container, and change the container command to untar the config before start (see the sketch after this list)...
rename the files with some special char instead of /, like us#example.json, and use a startup script to mv them back into place.
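For the tarball route, a hedged sketch; names and paths are assumptions, and the app image must contain sh and tar:

tar -czf config.tar.gz Configuration/
kubectl create configmap config-tar --from-file=config.tar.gz

and in the pod spec, unpack before the real entrypoint:

containers:
  - name: example
    image: my-registry/example:latest   # hypothetical app image
    # /app/start stands in for whatever the real entrypoint is
    command: ["sh", "-c", "tar -xzf /tar/config.tar.gz -C /app && exec /app/start"]
    volumeMounts:
      - name: tar
        mountPath: /tar
volumes:
  - name: tar
    configMap:
      name: config-tar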
All of this is very hacky... The best scenario is to refactor the configs to live in a flat folder and create the configmap with something like kustomize:
kustomize edit add configmap my-configmap --from-file='./*.json'
Related
I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (like e.g. a configmap/secret) without a separate chart?
E.g. I try to install an nginx-ingress and would like to additionally apply a secret which includes http-basic-authentication data.
I can just reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate helm chart just to also apply the http-basic secret?
I have many releases which need a single, additional resource (like a json configmap, a single secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster) and so you could in principle kubectl apply a file in a hook. Helmfile also has some integration with Kustomize and it might be possible to add resources this way. As you've noted you can also write local charts and put whatever YAML you need in those.
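For example, a hedged sketch of the hook escape hatch; the chart reference and the local file name are assumptions:

releases:
  - name: nginx-ingress
    chart: ingress-nginx/ingress-nginx
    hooks:
      - events: ["postsync"]                            # runs after the release is applied
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]  # hypothetical local manifest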
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...
I set up a local Kubernetes cluster using Kind, and then I run Apache-Airflow on it using Helm.
To actually create the pods and run Airflow, I use the command:
helm upgrade -f k8s/values.yaml airflow bitnami/airflow
which uses the chart airflow from the bitnami/airflow repo, and "feeds" it with the configuration of values.yaml.
The file values.yaml looks something like:
web:
extraVolumeMounts:
- name: functions
mountPath: /dir/functions/
extraVolumes:
- name: functions
hostPath:
path: /dir/functions/
type: Directory
where web is one component of Airflow (and one of the pods in my setup), and the directory /dir/functions/ is successfully mapped from the cluster into the pod. However, I fail to do the same for a single, specific file instead of a whole directory.
Does anyone know the syntax for that? Or have an idea for an alternative way to map the file into the pod (its whole directory is successfully mapped into the cluster)?
There is a File type for hostPath which should behave like you desire, as it states in the docs:
File: A file must exist at the given path
which you can then use with the precise file path in mountPath. Example:
web:
  extraVolumeMounts:
    - name: singlefile
      mountPath: /path/to/mount/the/file.txt
  extraVolumes:
    - name: singlefile
      hostPath:
        path: /path/on/the/host/to/the/file.txt
        type: File
Or if it's not a problem, you could mount the whole directory containing it at the expected path.
With that said, I want to point out that using hostPath is almost never a good idea.
If you have a cluster with more than one node, saying that your Pod mounts a hostPath doesn't restrict it to run on a specific host (even though you can enforce that with nodeSelectors and so on), which means that if the Pod starts on a different node, it may behave differently, not finding the directory and/or file it was expecting.
But even if you restrict the application to run on a specific node, you need to be OK with the idea that, if that node becomes unavailable, the Pod will not be rescheduled somewhere else on its own, meaning you'll need manual intervention to recover from a single-node failure (unless the application is multi-instance and can survive one instance going down).
To conclude:
if you want to mount a path on a particular host, for whatever reason, I would go for local volumes (see the sketch after this list), or at least use hostPath and restrict the Pod to run on the specific node it needs to run on.
if you want to mount small, textual files, you could consider mounting them from ConfigMaps
if you want to configure an application, providing a set of files at a certain path when the app starts, you could go for an init container which prepares files for the main container in an emptyDir volume
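A minimal sketch of a local volume; the node name, host path, and storage class are placeholders, and a PVC requesting storageClassName: local-storage then binds to it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data                      # directory that must already exist on the node
  nodeAffinity:                          # unlike hostPath, the scheduler knows which node holds the data
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]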
Right now I use docker-compose for development. It is a great tool that comes in handy on simple projects with at most 3-6 active services, but when it comes to 6-8 and more it becomes hard to manage.
So I've started to learn k8s on minikube, and now I have a few questions:
How to make a "two-way" binding for volumes? For example, if I have a folder named "my-frontend" and I want to sync a specific folder in a deployment, how do I "link" them using a PV and PVC?
Very often it comes in handy to make a service with a specific environment like node:12.0.0 and then use it as a command executor, like this: docker-compose run workspace npm install
How can I achieve something like this using k8s?
How to make "two-way" binding for volumes? for example if i have folder named "my-frontend" and i want to sync specific folder in deployment, how to "link" them using PV and PVC ?
You need to create a PersistentVolume, which in your case will use a specific directory on the host; in the Kubernetes official documentation there's an example with this same use case.
Then create a PersistentVolumeClaim to request some space from this volume (also an example in the previous documentation), and then mount the PVC on the pod/deployment where you need it.
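A hedged sketch of the hostPath-backed PV and the PVC, mirroring the documentation example; names, sizes, and paths are placeholders (storageClassName: manual keeps a default dynamic provisioner from intercepting the claim):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /path/on/host/my-frontend    # placeholder host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

The pod/deployment then references the claim: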
volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc
containers:
  - ...
    volumeMounts:
      - mountPath: "/mount/path/in/pod"
        name: my-volume
Very often it comes in handy to make a service with a specific environment like node:12.0.0 and then use it as a command executor, like this: docker-compose run workspace npm install
How can I achieve something like this using k8s?
You need to use kubectl. It has very similar functionality to the docker CLI; it also supports a run command with very similar parameters. Alternatively, you can start your pod once and then run commands multiple times in the same instance by using kubectl exec.
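For instance, a hedged sketch; the pod name and the assumption that npm has something to install in the working directory are illustrative:

# one-off command in a fresh pod, deleted when it exits
kubectl run workspace --image=node:12.0.0 --rm -it --restart=Never -- npm install

# or run commands repeatedly in an already-running pod
kubectl exec -it workspace -- npm install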
I have a python pod running.
This python pod is using different shared libraries. To make it easier to debug the shared libraries I would like to have the libraries directory on my host too.
The python dependencies are located in /usr/local/lib/python3.8/site-packages/ and I would like to access this directory on my host to modify some files.
Is that possible and if so, how? I have tried with emptyDir and PV but they always override what already exists in the directory.
Thank you!
This is by design. Kubelet is responsible for preparing the mounts for your container. At the time of mounting they are empty and kubelet has no reason to put any content in them.
That said, there are ways to achieve what you seem to expect by using an init container. In your pod, define an init container using your docker image, mount your volume in it at some path (i.e. /target), but instead of running the regular content of your container, run something like
cp -r /my/dir/* /target/
which will initialize your directory with the expected content and exit, allowing further startup of the pod.
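A minimal sketch of that pod spec, assuming a hypothetical image my-python-app; the hostPath volume makes the seeded files editable on the host, which suits a single-node/dev cluster (use emptyDir instead if you only need the copy inside the pod):

apiVersion: v1
kind: Pod
metadata:
  name: python-pod
spec:
  initContainers:
    - name: seed-packages
      image: my-python-app:latest   # hypothetical: your existing image
      command: ["sh", "-c", "cp -r /usr/local/lib/python3.8/site-packages/. /target/"]
      volumeMounts:
        - name: packages
          mountPath: /target
  containers:
    - name: app
      image: my-python-app:latest
      volumeMounts:
        - name: packages
          mountPath: /usr/local/lib/python3.8/site-packages
  volumes:
    - name: packages
      hostPath:
        path: /opt/site-packages    # assumed host directory to edit the files in
        type: DirectoryOrCreate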
Please take a look: overriding-directory.
Another option is to use subPath. subPath references files or directories that are controlled by the user, not the system. Take a look at this example of how to mount a single file into an existing directory:
---
volumeMounts:
  - name: "config"
    mountPath: "/<existing folder>/<file1>"
    subPath: "<file1>"
  - name: "config"
    mountPath: "/<existing folder>/<file2>"
    subPath: "<file2>"
restartPolicy: Always
volumes:
  - name: "config"
    configMap:
      name: "config"
---
Check full example here. See: mountpath, files-in-folder-overriding.
You can also, as @DavidMaze said, debug your setup in a non-container Python virtual environment if you can, or as a second choice debug the image in Docker without Kubernetes.
You can also take into consideration the third-party tools below, which were created especially for Kubernetes app developers with exactly this functionality in mind (keeping source and remote files in sync):
Skaffold's continuous development workflow - it takes care of keeping source and remote files (the Pod's mounted directory) in sync.
Telepresence's Volume access feature.
I want to deploy Grafana using Kubernetes, but I don't know how to attach provisioned dashboards to the Pod. Storing them as key-value data in a configMap seems to me like a nightmare - example here https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml - in my case it would be many more JSON dashboards - hence the harsh opinion.
I didn't have an issue with configuring the Grafana settings, datasources, and dashboard providers as configMaps since they are defined in single files, but the dashboards situation is a little bit trickier for me.
All of my dashboards are stored in the repo under "/files/dashboards/", and I wondered how to make them available to the Pod besides the way described earlier. I thought about using the hostPath object for a sec, but it didn't make sense for a multi-node deployment on different hosts.
Maybe its easy - but I'm fairly new to Kubernetes and can't figure it out - so any help would be much appreciated. Thank you!
You can automatically generate a ConfigMap from a set of files in a directory. Each file becomes a key-value pair in the ConfigMap, with the file name being the key and the file content being the value (like in your linked example, but done automatically instead of manually).
Assuming that your dashboard files are stored as, for example:
files/dashboards/
├── k8s-cluster-rsrc-use.json
├── k8s-node-rsrc-use.json
└── k8s-resources-cluster.json
You can run the following command to directly create the ConfigMap in the cluster:
kubectl create configmap my-config --from-file=files/dashboards
If you prefer to only generate the YAML manifest for the ConfigMap, you can do:
kubectl create configmap my-config --from-file=files/dashboards --dry-run=client -o yaml >my-config.yaml
You could look into these options:
Use a persistent volume.
Store the JSON files for the dashboards in a code repo like git, a file repository like Nexus, or a plain web server, and use an init container to fetch the files before the application (Grafana) container starts, placing them on a volume shared between the init container and the application (Grafana) container. This example could be a good starting point.
Notice that this doesn't require a persistent volume - see in the example that it uses a volume of type emptyDir.
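A hedged sketch of that init-container approach; the download URL and the dashboard mount path are assumptions (the path must match whatever your dashboard provider config points at):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels: {app: grafana}
  template:
    metadata:
      labels: {app: grafana}
    spec:
      initContainers:
        - name: fetch-dashboards
          image: curlimages/curl:latest
          # hypothetical URL; fetch each dashboard JSON into the shared volume
          command: ["sh", "-c", "curl -fsSL -o /dashboards/k8s-cluster-rsrc-use.json https://example.com/dashboards/k8s-cluster-rsrc-use.json"]
          volumeMounts:
            - name: dashboards
              mountPath: /dashboards
      containers:
        - name: grafana
          image: grafana/grafana:latest
          volumeMounts:
            - name: dashboards
              mountPath: /var/lib/grafana/dashboards   # assumed provider path
      volumes:
        - name: dashboards
          emptyDir: {}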