What are other ways to provide configuration information to pods other than ConfigMap?

I have a deployment in which I want to populate pod with config files without using ConfigMap.

You could also store your config files on a PersistentVolume and read those files at container startup. For more details on that topic please take a look at the K8S reference docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Please note: I would not consider this good practice. I used this approach early on in a project where a legacy app was migrated to Kubernetes: the application read tons of config files at startup.
Later on I switched to creating ConfigMaps from my configuration files, as that approach allows storing the K8S object (YAML file) in Git, and I found managing/editing a ConfigMap much easier/faster, especially in a multi-node K8S environment:
kubectl create configmap app-config --from-file=./app-config1.properties --from-file=./app-config2.properties
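For reference, a minimal sketch of consuming such a ConfigMap as files in a pod could look like the following; the container name, image and mount path are made up for illustration:

# Hypothetical pod mounting the "app-config" ConfigMap created above as files;
# the image and mount path are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-registry/app:1.0      # assumed image
      volumeMounts:
        - name: config
          mountPath: /etc/app/config  # app-config1.properties etc. appear here
  volumes:
    - name: config
      configMap:
        name: app-config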
If you go for the "config files in persistent volume" approach you need to take different aspects into account, e.g. how to get your configuration files onto that volume, potentially not on a single node but on multiple nodes, and how to keep them in sync.
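For completeness, here is a rough sketch of the "config files in persistent volume" approach; the claim name, size, access mode and mount path are all assumptions:

# Hypothetical PVC plus a pod that mounts it where the app expects its config files.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config-pvc
spec:
  accessModes: ["ReadWriteOnce"]   # adjust to what your storage backend supports
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-registry/app:1.0      # assumed image
      volumeMounts:
        - name: config
          mountPath: /etc/app/config  # the config files must be copied onto the volume beforehand
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: app-config-pvc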

You can also use environment variables and have the application read the values from its environment.
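A minimal sketch of the environment-variable route, with made-up variable names and values, could be:

# Hypothetical pod passing configuration as plain environment variables, no ConfigMap involved.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-registry/app:1.0   # assumed image
      env:
        - name: DB_HOST            # example setting
          value: "db.internal"
        - name: LOG_LEVEL          # example setting
          value: "info"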


Migrating resources from an OpenShift cluster to another

I have an Openshift cluster and I want to move its resources to another cluster,
e.g. I have 40 Secrets, and 20 ConfigMaps, and some other resources such as deployment configs and more.
Moving these Secrets and ConfigMaps manually is mind-numbing.
What is the best approach?
I would recommend trying out Monokle's Compare & Sync feature.
It allows you to visually compare the resources of two clusters and deploy resources from one to the other.
You can read more about how this works in the docs.
OpenShift has an "official" process for this called "Migration Toolkit for Containers (MTC)":
https://docs.openshift.com/container-platform/4.12/migration_toolkit_for_containers/about-mtc.html
Velero is also a great tool for your scenario. You can back up your namespaces, choosing which objects to include, and restore them elsewhere with or without making changes:
https://velero.io/docs/v1.10/migration-case/
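For instance, a typical backup-and-restore flow with Velero looks roughly like this (the namespace and backup names are examples):

# On the source cluster: back up one namespace and the objects it contains
velero backup create my-apps-backup --include-namespaces my-apps

# On the target cluster (with Velero configured against the same object storage):
velero restore create --from-backup my-apps-backup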
Follow these steps:
move secrets and config maps
move deployments
move services
move routes
As an example of how to carry out each step mentioned above, do the following for each resource type:
1 - Log in to the first cluster:
oc login --token="your-token-for-first-server" --server="your-first-server"
2 - Export your resources:
oc get -o yaml cm > configmaps.yaml
oc get -o yaml secrets > secrets.yaml
...
There are also some default ConfigMaps and Secrets which you don't need to copy; you can remove them from the files after exporting.
3 - Log in to the second cluster:
oc login --token="your-token-for-second-server" --server="your-second-server"
If you forget this step, the create commands will run against the first cluster and you may get errors saying the resources already exist, so be careful not to skip it.
4 - Load the resources into the second cluster:
oc create -f configmaps.yaml
oc create -f secrets.yaml
...
There might be easier ways too, and there is a lot of information about this that is beyond my knowledge.
There are also some considerations you need to be aware of:
You usually don't need to move Pods; they are created and controlled by other resources such as DeploymentConfigs.
In some companies, databases are managed completely separately by DBA teams, so you may not need to change anything, but if your database runs inside your cluster, you should consider moving its PV.
Using Helm charts or OpenShift templates can make this kind of task much easier.
You can include the templates in your GitLab CI/CD pipelines and just change your cluster URL to redeploy everything.
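As a rough illustration only (the job name, image, variables and template path below are assumptions, not a reference setup), such a pipeline job could look like:

# Hypothetical .gitlab-ci.yml job; OC_TOKEN would come from the project's CI/CD variables.
deploy:
  stage: deploy
  image: quay.io/openshift/origin-cli:latest   # assumed image that provides `oc`
  variables:
    OC_SERVER: "https://api.cluster.example.com:6443"   # change this per target cluster
  script:
    - oc login --token="$OC_TOKEN" --server="$OC_SERVER"
    - oc process -f openshift/template.yaml | oc apply -f -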
In the end, if you are migrating from version 3 to 4, this article might be helpful.

Kubernetes configMap or persistent volume?

What is the best approach to passing multiple configuration files into a POD?
Assume that we have a legacy application that we have to dockerize and run in a Kubernetes environment. This application requires more than 100 configuration files to be passed. What is the best solution to do that? Create hostPath volume and mount it to some directory containing config files on the host machine? Or maybe config maps allow passing everything as a single compressed file, and then extracting it in the pod volume?
Maybe helm allows somehow to iterate over some directory, and create automatically one big configMap that will act as a directory?
Any suggestions are welcome.
Create hostPath volume and mount it to some directory containing config files on the host machine
This should be avoided.
Accessing hostPaths may not always be allowed. Kubernetes may use PodSecurityPolicies (soon to be replaced by OPA/Gatekeeper/whatever admission controller you want ...), and OpenShift has similar SecurityContextConstraints objects, which let you define policies for which user can do what. As a general rule: accessing hostPaths would be forbidden.
Besides, hostPath devices are local to one of your nodes. You won't be able to schedule your Pod somewhere else if there's an outage. Either you've set a nodeSelector restricting its deployment to a single node, and your application would be down for as long as that node is. Or there's no placement rule, and your application may restart without its configuration.
Now you could say: "if I mount my volume from an NFS share of some sort, ...". Which is true. But then, you would probably be better off using a PersistentVolumeClaim.
Create automatically one big configMap that will act as a directory
This could be an option. Although, as noted by larsks in the comments on your post: beware that ConfigMaps are limited in terms of size, and manipulating large objects (frequent edits/updates) can grow your etcd database.
If you really have ~100 files, ConfigMaps may not be the best choice here.
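That said, if you wanted to try the Helm idea from the question, a minimal sketch of a chart template that folds a whole directory into one ConfigMap (the directory and names are assumptions) would be:

# Hypothetical Helm template (e.g. templates/app-config.yaml); every file under
# files/config/ in the chart becomes one key in the resulting ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
{{ (.Files.Glob "files/config/*").AsConfig | indent 2 }}

The size limit mentioned above still applies to the generated object, of course.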
What next?
There's no one good answer, not knowing exactly what we're talking about.
If you want to allow editing those configurations without restarting containers, it would make sense to use some PersistentVolumeClaim.
If that's not needed, ConfigMaps could be helpful, if you can somewhat limit their volume and stick with non-critical data, while Secrets could be used for storing passwords or any sensitive configuration snippet.
Some emptyDir could also be used, assuming you can figure out a way to automate provisioning of those configurations during container startup (e.g. a git clone in some initContainer, and/or some shell script contextualizing your configuration based on environment variables).
If there are files that are not expected to change over time, or whose lifecycle is closely related to that of the application version shipping in your container image: I would consider adding them to my Dockerfile. Maybe even add some startup script -- something you could easily call from an initContainer, generating whichever configuration you couldn't ship in the image.
Depending on what you're dealing with, you could combine PVC, emptyDirs, ConfigMaps, Secrets, git stored configurations, scripts, ...

Is there a way to create a configMap containing multiple files for a Kubernetes Pod?

I want to deploy Grafana using Kubernetes, but I don't know how to attach provisioned dashboards to the Pod. Storing them as key-value data in a configMap seems to me like a nightmare - example here https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml - in my case it would be many more JSON dashboards, thus the harsh opinion.
I didn't have an issue with configuring the Grafana settings, datasources and dashboard providers as configMaps since they are defined in single files, but the dashboards situation is a little trickier for me.
All of my dashboards are stored in the repo under "/files/dashboards/", and I wondered how to make them available to the Pod, besides the way described earlier. I wondered about using the hostPath object for a second, but it doesn't make sense for a multi-node deployment on different hosts.
Maybe its easy - but I'm fairly new to Kubernetes and can't figure it out - so any help would be much appreciated. Thank you!
You can automatically generate a ConfigMap from a set of files in a directory. Each file will be a key-value pair in the ConfigMap, with the file name being the key and the file content being the value (like in your linked example, but done automatically instead of manually).
Assuming that your dashboard files are stored as, for example:
files/dashboards/
├── k8s-cluster-rsrc-use.json
├── k8s-node-rsrc-use.json
└── k8s-resources-cluster.json
You can run the following command to directly create the ConfigMap in the cluster:
kubectl create configmap my-config --from-file=files/dashboards
If you prefer to only generate the YAML manifest for the ConfigMap, you can do:
kubectl create configmap my-config --from-file=files/dashboards --dry-run -o yaml >my-config.yaml
You could look into these options:
Use a persistent volume.
Store the JSON files for the dashboards in a code repo like git, a file repository like Nexus, or a plain web server, and use an init container to fetch the files before the application (Grafana) container is started, putting them on a volume shared between the init container and the application (Grafana) container. This example could be a good starting point.
Notice that this doesn't require a persistent volume. See in the example - it uses a volume of type emptyDir.
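A minimal sketch of that pattern (the repository URL, images and paths below are assumptions) could look like:

# Hypothetical pod: an init container clones the dashboards repo into an emptyDir
# that the Grafana container then reads; all names and paths are examples.
apiVersion: v1
kind: Pod
metadata:
  name: grafana
spec:
  initContainers:
    - name: fetch-dashboards
      image: alpine/git:latest        # assumed small image with git and sh
      command: ["sh", "-c"]
      args:
        - >
          git clone --depth 1 https://example.com/your/dashboards-repo.git /work &&
          cp /work/files/dashboards/*.json /dashboards/
      volumeMounts:
        - name: dashboards
          mountPath: /dashboards
  containers:
    - name: grafana
      image: grafana/grafana:latest
      volumeMounts:
        - name: dashboards
          mountPath: /var/lib/grafana/dashboards   # must match your dashboard provider config
  volumes:
    - name: dashboards
      emptyDir: {}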

Flink on K8S: how do I provide Flink configuration to the cluster?

I am following the Flink Kubernetes Setup to create a cluster, but it is unclear how I can provide Flink configuration to the cluster. For example, I want to specify jobmanager.heap.size=2048m.
According to the docs, all configuration has to be passed via a yaml configuration file.
It seems that jobmanager.heap.size is a common option that can be configured.
That being said, the approach on kubernetes is a little different when it comes to providing this configuration file.
The next piece of the puzzle is figuring out what the current start command is for the container you are trying to launch. I assume you are using the official Flink Docker image, which is good because the Dockerfile is open source (link to the repo at the bottom). They use a complicated script to launch the Flink container, but if you dig through that script you will see that it reads the configuration YAML from /opt/flink/conf/flink-conf.yaml. Instead of trying to change this, it'll probably be easier to just mount a YAML file at that exact path in the pod with your configuration values.
Here's the github repo that has these Dockerfiles for reference.
Next question is what should the yaml file look like?
From their docs:
All configuration is done in conf/flink-conf.yaml, which is expected
to be a flat collection of YAML key value pairs with format key: value.
So, I'd imagine you'd create flink-conf.yaml with the following contents:
jobmanager.heap.size: 2048m
And then mount it in your kubernetes pod at /opt/flink/conf/flink-conf.yaml and it should work.
From a Kubernetes perspective, it might make the most sense to make a ConfigMap of that YAML file and mount the ConfigMap in your pod as a file. See the reference docs.
Specifically, you are most interested in creating a ConfigMap from a file and adding a ConfigMap as a volume.
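Putting those two pieces together, a minimal sketch (the ConfigMap and volume names are assumptions) might look like:

# Hypothetical ConfigMap holding flink-conf.yaml, plus the relevant part of a
# jobmanager pod spec that mounts it at the path the startup script reads.
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
data:
  flink-conf.yaml: |
    jobmanager.heap.size: 2048m
---
apiVersion: v1
kind: Pod
metadata:
  name: flink-jobmanager
spec:
  containers:
    - name: jobmanager
      image: flink:latest
      args: ["jobmanager"]
      volumeMounts:
        - name: flink-config
          mountPath: /opt/flink/conf   # flink-conf.yaml ends up at /opt/flink/conf/flink-conf.yaml
  volumes:
    - name: flink-config
      configMap:
        name: flink-config

Note that mounting the ConfigMap over /opt/flink/conf hides whatever else the image shipped in that directory (e.g. the log4j configuration), so you may need to add those files to the ConfigMap as well.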
Lastly, I'll call this out, though I won't recommend it because the Flink maintainers currently mark it as an incubating feature: they have started providing a Helm chart for Flink, and I can see that they pass flink-conf.yaml as a ConfigMap in the Helm chart templates (ignore the values surrounded with {{ }} - that is Helm template syntax). Here is where they mount their ConfigMap into a pod.

Persistence of Configmap in kubernetes

I have a Kubernetes pod (let's call it POD-A) and I want it to use a certain config file to perform some actions using the k8s API. The config file will be YAML or JSON and will be parsed by the application inside the pod.
The config file is hosted by an application server in the cloud, and the latest version of it can be pulled based on a trigger. The config file contains configuration details of all the deployments in the k8s cluster and will be used to update deployments using the k8s API in POD-A.
Now what I am thinking is to save this config file in a config-map and every time a new config file is pulled a new config-map is created by the pod which is using the k8s API.
What I want to do is to update the previous config-map with a certain flag (a key and a value) which will basically help the application know which is the current version of deployment. So let's say I have a running k8s cluster with multiple pods in it; a config-map exists which has all the configuration details for those pods (image version, namespace, etc.) and a flag marking it as the current deployment, and the application inside POD-A will know that by loading the config-map.
Now when a new config file is pulled, a new config-map is created; the current-deployment flag is set to false for the previous config-map and to true for the newly created one. That config-map is then used to update all the pods in the cluster.
I know there are a lot of details but I had to explain them to ask the following questions:
1) Can configmaps be used for this purpose?
2) Can I update configmaps or do I have to rewrite them completely? I am thinking of writing a file in the configmap because that would be much simpler.
3) I know configmaps are stored in etcd but are they persisted on disk or are kept in memory?
4) Let's say POD-A goes down will it have any effect on the configmaps? Are they in any way associated with the life cycle of a pod?
5) If the k8s cluster itself goes down, what happens to the configmaps? Since they are in etcd, and if they are persisted, will they be available again?
Note: There is also a limit on the size of configmaps so I have to keep that in mind. Although I am guessing 1MB is a fair enough size to save a config file since it would usually be in a few bytes.
1) I think you should not use it in this way.
2) ConfigMaps are kubernetes resources. You can update them.
3) They are persisted to disk if etcd backups to disk are enabled.
4) No. A pod's lifecycle should not affect configmaps, unless the pod mutates (deletes) the configmap.
5) If the cluster itself goes down and etcd is running on that same cluster, etcd will not be available until the cluster comes back up. etcd has an option to persist backups to disk; if this is enabled, etcd will restore the values from the backup when it comes back up, so the configmaps should be available again once the cluster and etcd are up.
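Regarding 2), a common way to update an existing ConfigMap in place from a file (the ConfigMap and file names here are examples) is:

# Re-generate the ConfigMap manifest from the local file and apply it over the existing object
kubectl create configmap app-config --from-file=./config.yaml --dry-run=client -o yaml | kubectl apply -f -

# Or edit/patch it directly, e.g. to flip a flag key
kubectl patch configmap app-config --type merge -p '{"data":{"current":"true"}}'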
There are multiple ways to consume a ConfigMap in a pod, such as environment variables or mounted files.
If you change a ConfigMap, values consumed as environment variables are not updated; the pod has to be restarted to pick them up. Values mounted as files are eventually refreshed by the kubelet (unless mounted via subPath), and the process running in the pod would then need to detect that the files changed and take some action.
So I think the system will be too complex.
Instead, trigger a new rollout that kills the old pods and brings up new pods that use the updated ConfigMaps.
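For example (the deployment name is just a placeholder):

# Restart all pods of the deployment so they pick up the updated ConfigMap
kubectl rollout restart deployment/pod-a-deployment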