Adding configMap as a volume to a container in an OCP deployment

I have a deployment of 3 containers in OCP. One of them contains a configuration file which I want to mount into the container via a configMap. I created a configMap and tried to mount it into the container, but it didn't work.
I use the 'csanchez jenkins kubernetes' plugin, so the deployment is configured in a yml file, written in xml format. I found this in the docs of the csanchez plugin and tried to add the necessary field to the container field, but it did not work.
I want to attach it to a single container and not to the whole pod, because another container uses the same config file path but a different config file.
I tried to add this to the container field:
<volumeMounts>
  <org.csanchez.jenkins.plugins.kubernetes.volumes.configMapVolume>
    <mountPath>/opt/selenium/config.json</mountPath>
    <configMapName>selenium-config-map</configMapName>
  </org.csanchez.jenkins.plugins.kubernetes.volumes.configMapVolume>
</volumeMounts>
I also tried swapping volumeMounts for volumes, but that didn't work either.
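For reference, a minimal sketch of what this mount looks like in plain Kubernetes YAML (not the plugin XML): volumes are declared at pod level, but volumeMounts are per container, so two containers can back the same path with different configMaps. The second container and its configMap name are hypothetical, and the sketch assumes the configMap key is config.json.

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: selenium
    image: selenium/hub          # hypothetical image
    volumeMounts:
    - name: selenium-config
      mountPath: /opt/selenium/config.json
      subPath: config.json       # mount a single file, not a directory
  - name: other
    image: other/image           # hypothetical second container
    volumeMounts:
    - name: other-config
      mountPath: /opt/selenium/config.json   # same path, different file
      subPath: config.json
  volumes:
  - name: selenium-config
    configMap:
      name: selenium-config-map
  - name: other-config
    configMap:
      name: other-config-map     # hypothetical second configMap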

Related

set environment variables within a mountPath in Kubernetes

We have deployed an application in Kubernetes as a Deployment and store the logs in a folder named /podlogs. Whenever the pod restarts, it creates a new folder inside /podlogs, named after the latest pod, and stores the actual log files there. For example, this new folder could be /podlogs/POD_Name.
We have previously mounted /podlogs to ELK and Azure Blob containers.
By using subPath, we would like to also mount /podlogs/POD_Name as a second mount.
How can we pass an environment variable into the mount path along with the subPath?
See the Downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
It is possible to expose the pod name as an environment variable or to mount it as a file.
It is conceivable you could then mount or expose this to the ELK container.
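A minimal sketch of how the two mounts could look, assuming the volumeMounts subPathExpr field (which expands environment variables set on the container) fits the use case; the image and the second mount path are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox               # hypothetical image
    env:
    # Downward API: expose the pod name as an environment variable
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    # the whole log folder, as mounted today
    - name: podlogs
      mountPath: /podlogs
    # second mount: only the per-pod subfolder, selected via subPathExpr
    - name: podlogs
      mountPath: /current-pod-logs   # hypothetical target path
      subPathExpr: $(POD_NAME)
  volumes:
  - name: podlogs
    emptyDir: {}                 # assumption; use whatever backs /podlogs today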

How to configure an Ingress to access all pods from a DaemonSet?

I'm using hardware-dependent pods; in my K8s cluster, I instantiate my pods with a DaemonSet.
Now I want to access those pods with a URL like https://domain/{pod-hostname}/
My use case is a bit more involved than that: my pods' names are not predefined.
Moreover, I also need a REST entry point to list my pods' names or hostnames.
I published a Docker image to solve my issue: urielch/dyn-ingress
My YAML configuration is in the Docker doc.
This container adds a label to each pod, then uses this label to create a Service per pod, and then updates an existing Ingress to reach each node with a path /{pod-hostname}/.
Feel free to test it.
The source code is here.
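For illustration, a minimal sketch of the kind of per-pod Service and Ingress path rule such a controller would generate; all names and the label key are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: pod-a                    # one Service per pod
spec:
  selector:
    dyn-ingress/pod: pod-a       # hypothetical label added to that pod
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: daemonset-ingress
spec:
  rules:
  - host: domain
    http:
      paths:
      - path: /pod-a/            # matches https://domain/{pod-hostname}/
        pathType: Prefix
        backend:
          service:
            name: pod-a
            port:
              number: 80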

Kubernetes Edit File In A Pod

I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file whose path is /etc/settings/test.html. I want to override the file. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod; I want to use the existing pod.
Thanks
If not all, then almost all fields of a pod spec are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is no way to edit a pod's volume list without recreating the pod.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to scheduler needs. In general, you need a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on deployment strategy and application nature).
There are two ways to edit a file in an existing pod: either use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already-edited file into the pod. I advise against both, because the change is not permanent: it is lost once the pod is recreated. Better to back up the necessary data, switch the workload to a Deployment with one replica, and then go with mounting a ConfigMap as you read on the Internet.
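A minimal sketch of that ConfigMap approach for this case, overlaying only the one file via subPath; the ConfigMap and container names are hypothetical:

# kubectl create configmap test-html --from-file=test.html
# Deployment pod template fragment:
spec:
  containers:
  - name: app                    # hypothetical container name
    volumeMounts:
    - name: settings-override
      mountPath: /etc/settings/test.html
      subPath: test.html         # replace just this file, not the directory
  volumes:
  - name: settings-override
    configMap:
      name: test-html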

Using Helm Chart and Kubernetes, pass a file to the container during postStart

I'm using Helm charts, Kubernetes, and containers.
I need to pass a file to the container during postStart.
The file will be used in the postStart script.
The file should change from one deployment to another and should be part of the Helm Chart values.
Is the above scenario supported? Any suggestions or examples of how to implement it?
Follow the steps below:
1. Create a ConfigMap object from the file/script that you want to use in the postStart event.
2. Mount the ConfigMap as a volume inside the pod.
3. You should then be able to access the file/script inside the pod/container.
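A minimal sketch of those steps in a Helm chart; the values key, template names, and mount path are all hypothetical:

# values.yaml
postStartScript: |
  #!/bin/sh
  echo "post-start ran" > /tmp/post-start.log

# templates/poststart-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-poststart
data:
  post-start.sh: |
{{ .Values.postStartScript | indent 4 }}

# templates/deployment.yaml -- container fragment
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "/hooks/post-start.sh"]
volumeMounts:
- name: hooks
  mountPath: /hooks
volumes:
- name: hooks
  configMap:
    name: {{ .Release.Name }}-poststart
    defaultMode: 0755            # make the script executable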

Kubernetes / Use json file in pod context

I have a container which uses a .json file to load the configuration it needs.
I tried to find a way to load this configuration. From what I see, a ConfigMap has the option to load JSON, but in my case the container in the pod expects it as a mounted file.
In addition, a ConfigMap requires apiVersion and other parameters, so I'm not sure it's the same case.
What is the best way to get this file into the pod context and use it in the container as a mounted file?
You should create a ConfigMap object from the JSON file, then load the ConfigMap as a volume in the pod. The apiVersion and other metadata you are referring to belong to the ConfigMap object itself, not to the JSON configuration file that you are going to use in the running container.
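A minimal sketch of that, with hypothetical names and paths:

# kubectl create configmap app-config --from-file=config.json
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:latest         # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /app/config.json   # path the container expects (assumed)
      subPath: config.json
  volumes:
  - name: config
    configMap:
      name: app-config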