I have a Docker image with a config folder in it.
logback.xml is located there.
I want to be able to change logback.xml inside the pod (to raise the log level dynamically, for example).
First, I thought of using an emptyDir volume, but mounting it over the directory hides the original contents and the directory becomes empty.
Is there a simple way to make this directory writable inside the pod?
Hello, hope you are enjoying your Kubernetes journey. If I understand correctly, you want to have a file in a pod and be able to modify it when needed.
What you need here is to create a ConfigMap based on your logback.xml file (you can do it with imperative or declarative Kubernetes configuration; here is the imperative one):
kubectl create configmap logback --from-file logback.xml
After this, just mount that file into your directory location by using a volume and a volumeMount with subPath in your deployment/pod YAML manifest:
...
    volumeMounts:
      - name: "logback"
        mountPath: "/CONFIG_FOLDER/logback.xml"
        subPath: "logback.xml"
  volumes:
    - name: "logback"
      configMap:
        name: "logback"
...
After this, you will be able to modify your logback.xml config by editing or recreating the ConfigMap and restarting your pod.
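For example, a minimal sketch of that update cycle (the Deployment name my-app is only illustrative; on older kubectl, plain --dry-run replaces --dry-run=client):

# regenerate the ConfigMap from the edited file and apply it in place
kubectl create configmap logback --from-file logback.xml --dry-run=client -o yaml | kubectl apply -f -

# restart the pods so they pick up the new file (subPath mounts are not updated live)
kubectl rollout restart deployment/my-app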
But, keep in mind:
1: Pod files are not supposed to be modified on the fly; this goes against the container philosophy (cf. Pets vs Cattle).
2: However, depending on your container image's user rights, all pod directories can be writable...
I have the following setup:
An Azure Kubernetes cluster with some nodes where my application (consisting of multiple pods) is running.
I'm looking for a good way to make a project-specific configuration file (a few hundred lines) available for two of the deployed containers and their replicas.
The configuration file is different between my projects but the containers are not.
I'm looking for something like a read-only file mount in the containers, but haven't found a good way. I played around with persistent volume claims, but there seems to be no way to place files automatically apart from copying them in (including URI and secret management).
The best thing would be a possibility where kubectl uses a YAML file to access a specific folder on my developer machine and push my configuration file into the cluster.
ConfigMaps don't seem to be a proper way to do it (because the data has to be inside the YAML, and my file is big and changing).
For volumes there seems to be no automatic way to place files inside them at creation time.
Can anybody guide me to a good solution that matches my situation?
You can use a ConfigMap for this; the ConfigMap then contains your config file, so you don't need to write the data into the YAML by hand. You can create a ConfigMap with the content of your config file via the following:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini
and then bind it as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: mypod
      ...
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-config # the name of your ConfigMap
Afterwards your config is available in your pod under /config/my-config.ini
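A quick way to check, and to roll out changes later (my-pod is the name from the manifest above):

# print the mounted file from inside the running container
kubectl exec my-pod -- cat /config/my-config.ini

# regenerate the ConfigMap in place after editing the local file
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini --dry-run=client -o yaml | kubectl apply -f -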
I need to move my filebeat to another namespace, but I must keep the registry. I mean this:
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    type: DirectoryOrCreate
Can you tell me how I can copy that in Kubernetes?
Just to check my assumptions:
filebeat is a DaemonSet
When you start it up in the new Namespace, you want to keep the registry
You're happy to keep the on-disk path the same
Because the data folder is mounted from the host directly - if you apply the same DaemonSet in a new Namespace, it will mount the same location into the container. So there's no need to copy any files around.
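In other words, the move is just re-applying the manifest in the new namespace and removing the old one. A minimal sketch, assuming the manifest lives in filebeat-daemonset.yaml and the old/new namespace names are illustrative:

# create the DaemonSet in the new namespace; it mounts the same /var/lib/filebeat-data from each node
kubectl apply -n new-namespace -f filebeat-daemonset.yaml

# once the new pods are running, remove the old DaemonSet
kubectl delete daemonset filebeat -n old-namespace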
I have a 3rd party docker image that I want to use (https://github.com/coreos/dex/releases/tag/v2.10.0). I need to inject some customisation into the pod (CSS stylesheet and PNG images).
I haven't found a suitable way to do this yet. ConfigMap binaryData is not available before v1.10 (or 1.9, can't remember off the top of my head). I could create a new image and COPY the PNG files into it, but I don't want the overhead of maintaining this new image - far safer to just use the provided one.
Is there an easy way of injecting these 2/3 files I need into the pod I create?
One way would be to mount one or more volumes into the desired locations within the pod, seemingly /web/static. This, however, would overwrite the entire directory, so you would need to supply all the files, not just those you wish to overwrite.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - image: dex:2.10.0
      name: dex
      volumeMounts:
        - mountPath: /web/static # the mount location within the container
          name: dex-volume
  volumes:
    - name: dex-volume
      hostPath:
        path: /destination/on/K8s/node # path on host machine
There are a number of storage types for different cloud providers, so take a look at https://kubernetes.io/docs/concepts/storage/volumes/ and see if there's something a little more specific to your environment rather than storing on the node's disk.
For what it's worth, creating your own image would probably be the simplest solution.
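If you did go that route, the image is tiny to maintain; a rough sketch, reusing the image reference from the example above and with purely illustrative asset names:

FROM dex:2.10.0
# overlay only the customised assets on top of the stock ones
COPY custom.css /web/static/custom.css
COPY logo.png /web/static/logo.png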
You could mount your custom files into a volume, and additionally define a set of commands to run on pod startup (see here) to copy your files to their target path.
You of course need to also run the command that starts your service, in addition to the ones that copy your files.
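A rough sketch of that idea, overriding the container command so it copies the mounted files into place before starting the service (the volume source, the asset paths, the dex binary path and the assumption that the image ships a shell are all unverified; adjust to the real image):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - image: dex:2.10.0
      name: dex
      # copy the custom assets over the stock ones, then start the service as usual
      command: ["/bin/sh", "-c"]
      args:
        - cp /custom/* /web/static/ && exec /usr/local/bin/dex serve /etc/dex/cfg/config.yaml
      volumeMounts:
        - name: custom-assets
          mountPath: /custom
  volumes:
    - name: custom-assets
      hostPath:               # or any other volume type that holds your CSS/PNG files
        path: /assets/on/k8s/node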
In K8s, what is the best way to execute scripts in a container (pod) once at deployment, which read from configuration files that are part of the deployment and seed e.g. MongoDB once?
My project consists of k8s manifest files + configuration files.
I would like to be able to update the config files locally and then redeploy via kubectl or helm.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands reading from the config files in the volume. How is this best done in K8s? I don't want to include the configuration files in an image via a Dockerfile, forcing me to rebuild the image before redeploying again via kubectl or helm.
How is this best done in K8S?
There are several ways to skin a cat, but my suggestion would be the following:
Keep configuration in a configMap and mount it as a separate volume. Such a map is kept as a k8s manifest, making all changes to it separate from the docker image build - no need to rebuild the image or keep sensitive data within it. You can also use a secret instead of (or together with) a configMap, in the same manner.
Use initContainers to do the initialization before the main container is brought online, covering your 'once on deployment' automatically (a minimal initContainer sketch for the MongoDB seeding is at the end of this answer). Alternatively (if the init operation is not repeatable) you can use Jobs instead and start them when necessary.
Here is an excerpt of an example we are using on a GitLab runner:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ss-my-project
spec:
  ...
  template:
    ...
    spec:
      ...
      volumes:
        - name: volume-from-config-map-config-files
          configMap:
            name: cm-my-config-files
        - name: volume-from-config-map-script
          projected:
            sources:
              - configMap:
                  name: cm-my-scripts
                  items:
                    - key: run.sh
                      path: run.sh
                      mode: 0755
      # if you need to run as non-root here is how it is done:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        supplementalGroups: [999]
      containers:
        - image: ...
          name: ...
          command:
            - /scripts/run.sh
          ...
          volumeMounts:
            - name: volume-from-config-map-script
              mountPath: "/scripts"
              readOnly: true
            - mountPath: /usr/share/my-app-config/config.file
              name: volume-from-config-map-config-files
              subPath: config.file
...
You can, of course, mount several volumes from configMaps or combine them into a single one, depending on the frequency of your changes and the affected parts. This is an example with two separately mounted configMaps just to illustrate the principle (and to mark the script executable), but you can use only one for all required files, put several files into one, or put a single file into each - as per your need.
An example of such a configMap is like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-my-scripts
data:
  run.sh: |
    #!/bin/bash
    echo "Doing some work here..."
And an example of the configMap covering the config file is like so:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cm-my-config-files
data:
  config.file: |
    ---
    # Some config.file (example name) required in project
    # in whatever format config file actually is (just example)
    ... (here is actual content like server.host: "0" or EFG=True or whatever)
Playing with single or multiple files in configMaps can yield the result you want, and depending on your needs you can have as many or as few as you want.
In docker-compose I could create a volume pointing at the directory where the config files reside and then, in the command part, execute bash -c commands reading from the config files in the volume.
In k8s the equivalent of this would be hostPath, but then you would seriously hamper k8s's ability to schedule pods to different nodes. This might be ok if you have a single-node cluster (or while developing) to ease changing config files, but for actual deployment the approach above is advised.
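As for the initContainers suggestion mentioned earlier, here is a minimal sketch of seeding MongoDB once per deployment; the image tag, the mongo host, the seed script name and the cm-my-seed configMap are illustrative assumptions, and the snippet goes inside the same pod template spec as above:

      # the init container runs to completion before the main container starts
      initContainers:
        - name: seed-mongo
          image: mongo:4.2                # any image that ships the mongo shell
          command: ["/bin/sh", "-c", "mongo --host my-mongo:27017 /seed/seed.js"]
          volumeMounts:
            - name: volume-from-config-map-seed
              mountPath: /seed
      volumes:                            # add to the existing volumes list
        - name: volume-from-config-map-seed
          configMap:
            name: cm-my-seed              # configMap holding seed.js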
I tried to use a configMap to mount some configs in a subdirectory. For example:
spec.template.spec.containers[0].volumeMounts:
  - name: fh16-volume
    mountPath: /etc/fh-16/application.log
    subPath: my-config.txt
spec.template.spec.volumes:
  - name: fh16-volume
    configMap:
      name: my-config
In this scenario, everything mounts as expected. But after any changes to the configMap, the changes are not applied in the container. I need to recreate the pod for that.
It looks like a bug, but maybe I made some mistake in my configuration? When I don't use the subPath directive, everything works as expected.
See this note in the Kubernetes docs:
"Note: A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates."
It looks like some bug
Yes, it is: https://github.com/kubernetes/kubernetes/issues/50345
This is currently expected behavior. The kubelet updates the content inside the container on a timer.
See https://github.com/kubernetes/kubernetes/issues/30189
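One common workaround, if live updates matter more than the exact file location, is to drop subPath and mount the configMap as a directory, since directory mounts do get refreshed by the kubelet; a sketch adapted from the example above:

spec.template.spec.containers[0].volumeMounts:
  - name: fh16-volume
    mountPath: /etc/fh-16        # mount the whole configMap directory instead of a single file
spec.template.spec.volumes:
  - name: fh16-volume
    configMap:
      name: my-config            # my-config.txt appears as /etc/fh-16/my-config.txt and is updated in place

Note that this shadows anything else the image ships in /etc/fh-16, so it only works if that directory can be dedicated to the configMap.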