When changing a variable in a configmap, the environment variables inside running pods are not updated.
We have a stateful pod that cannot be restarted easily.
Trying to update the environment variable inside the container with
export VARIABLE_TO_BE_UPDATED="new value"
works for a little while but is then rolled back automatically after a short time (maybe this has something to do with the open session).
Any way to update those environment variables (manually) in a persistent way without restarting the pod?
You can't reload a ConfigMap that a running container has already consumed. Environment variables sourced from a ConfigMap are read from the API and injected into the container before it starts, and they remain static afterwards.
Another way to do this could be to use a sidecar container that watches for changes to those ConfigMaps, refreshes copies in a volume shared with your application container, and then instructs your application to reload its configuration. A common implementation of this pattern is the Prometheus rule reloader.
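As a rough illustration of that sidecar pattern (the names app-config and config-watcher, the images, and the paths are made up for this sketch, not taken from the question), the ConfigMap is mounted into a small sidecar that periodically copies it into a volume shared with the application:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watcher      # hypothetical name
spec:
  volumes:
    - name: source-config
      configMap:
        name: app-config             # hypothetical ConfigMap
    - name: live-config
      emptyDir: {}
  containers:
    - name: app
      image: my-app:1.0              # placeholder application image
      volumeMounts:
        - name: live-config
          mountPath: /etc/app        # the app reads its config from here
    - name: config-watcher           # naive sidecar: re-copies the ConfigMap content every 30s
      image: busybox:1.36
      command: ["sh", "-c", "while true; do cp -L /source/* /live/; sleep 30; done"]
      volumeMounts:
        - name: source-config
          mountPath: /source
        - name: live-config
          mountPath: /live

Note that the application still has to re-read /etc/app (or be signaled) on its own, and environment variables that were already injected cannot be refreshed this way.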
I have a Jenkins deployment with one pod. I want to make changes to the pod, for example, I want to install and set up Maven. I mounted a volume to the pod. But when I restart the pod, changes made with kubectl exec are gone, while changes made in the Jenkins GUI are persistent. What is the reason behind it, and is there a way to save changes after the pod is deployed?
A Kubernetes pod (and a Docker container in general) is stateless by default; to make it stateful, you need to store the state somewhere (a database, cloud storage, a persistent disk, ...).
In your case you mount a volume to the pod, and the state is restored when you use Jenkins, so here are a few things to check:
is the volume mounted after every deployment/restart?
do you execute the same command manually and in Jenkins GUI?
do you use the correct mount path when you execute the command manually?
...I mounted a volume to the pod... when I make changes in the Jenkins GUI, changes are persistent.
By default, changes made in the Jenkins GUI are saved to the Jenkins home, presumably the location that you have mounted with a persistent volume.
What is the reason behind it,
When your pod goes away, the persistent volume remains in the system. You get your changes back when your pod comes back online and mounts the same volume. This means any changes that were not persisted to the mounted volume will not be retained. It also means that if your new pod cannot mount the same persistent volume for any reason, you lose all the previous changes as well.
...and is there a way to save changes after the pod is deployed?
Whether the change is made through the GUI or with kubectl exec, any change that you want to persist through the Pod lifecycle must be saved to the mounted volume, and the same volume must always be available for a new pod to mount.
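As a minimal sketch of such a setup, assuming a pre-existing PersistentVolumeClaim named jenkins-home (all names and the image tag are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home   # everything written here survives pod restarts
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home          # assumed pre-existing PVC

Anything installed outside the mounted path with kubectl exec (for example Maven installed under /usr) lives only in the container's writable layer and is lost when the pod is recreated, which is exactly the behaviour described above.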
Suppose that I have a Deployment which loads the env variables from a ConfigMap:
spec.template.spec.containers[].envFrom.configMapRef
Now suppose that I change the data inside the ConfigMap.
When exactly are the Pod env variables updated? (i.e. when the app running in a pod sees the new env variables)
For example (note that we are inside a Deployment):
If a container inside the pod crashes and it is restarted does it read the new env or the old env?
If a pod is deleted (but not its ReplicaSet) and it is recreated does it read the new env or the old env?
After some testing (with v1.20) I see that the env variables in the Pod template are updated immediately (the template just references the external values).
However, the running container does not see the new env variables; you need at least to restart it (or otherwise delete and recreate the pod).
When using a ConfigMap with env variables, the values are not updated. You have to restart the pod.
If you want hot reload, you can mount that ConfigMap as a volume. The values are then updated automatically, but the app still has to watch for that change.
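A minimal sketch contrasting the two consumption modes (the ConfigMap name app-config and the image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]
          envFrom:
            - configMapRef:
                name: app-config      # env vars: fixed at container start, never refreshed
          volumeMounts:
            - name: config
              mountPath: /etc/config  # files: refreshed by the kubelet after the ConfigMap changes
      volumes:
        - name: config
          configMap:
            name: app-config

The refresh of the mounted files is eventual (it can take up to the kubelet sync period plus its cache TTL), and a ConfigMap mounted with subPath is not updated at all.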
We are using a Secret as environment variables on a pod, but every time the Secret is updated we have to redeploy the pods for the change to take effect. We are looking for a mechanism where Pods get restarted automatically whenever Secrets get updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use a Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.
Second, managing Secrets can be a bit tricky. It would be great if you can use a setup with the Kustomize SecretGenerator: each new Secret then gets a unique name, and that unique name is reflected in the Deployment automatically, so your pods are automatically recreated when a Secret is changed, which matches your original problem. When the Secret and Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
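A minimal sketch of such a layout (the file names and the secret/deployment names are illustrative):

# kustomization.yaml
resources:
  - deployment.yaml
secretGenerator:
  - name: app-credentials            # Kustomize appends a content hash, e.g. app-credentials-<hash>
    literals:
      - API_TOKEN=changeme

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]
          envFrom:
            - secretRef:
                name: app-credentials  # rewritten by Kustomize to the hashed name

Because the generated name changes whenever the secret content changes, the Deployment's Pod template changes too, and the Deployment rolls out new Pods on the next kubectl apply -k.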
If you mount your Secrets as a volume in the pod, they will get updated automatically and you don't have to restart your pod, as mentioned here.
Another approach is the Stakater Reloader, which can roll your deployments when ConfigMaps, Secrets, etc. change.
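For the Reloader approach, the usual pattern (to the best of my knowledge; the deployment and secret names here are illustrative) is to annotate the Deployment so it is rolled whenever a Secret or ConfigMap it references changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                             # illustrative name
  annotations:
    reloader.stakater.com/auto: "true"     # Reloader rolls this Deployment when a referenced Secret/ConfigMap changes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]
          envFrom:
            - secretRef:
                name: app-secret           # the Secret whose updates should trigger a restart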
There are multiple ways of doing this:
Simply restart the pod
this can be done manually (see the command sketch after this list), or,
you could use an operator provided by the VMware Carvel kapp-controller (documentation); using kapp-controller you can reload the Secrets/ConfigMaps without needing to restart the pods (it effectively runs helm template <package> on a periodic basis and applies the changes if it finds any differences in the rendered templates); check out my design for reloading the log level without needing to restart the pod.
Using service bindings https://servicebinding.io/
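For the first option, the usual manual commands look like this (the deployment and pod names are placeholders):

# roll the Deployment so new pods pick up the updated Secret
kubectl rollout restart deployment/my-app
# or delete a single controller-managed pod and let the ReplicaSet recreate it
kubectl delete pod my-app-6d4c8f5b7d-abcde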
What triggers an init container to be run?
Will editing the deployment descriptor (or updating it with helm), for example changing the image tag, trigger the init container?
Will deleting the pod trigger the init container?
Will reducing the replica count to zero and then increasing it again trigger the init container?
Is it possible to manually trigger an init container?
What triggers an init container to be run?
Basically, initContainers run every time a Pod that has such containers in its definition is created, and a Pod can be created for quite different reasons. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that triggers an initContainer to start is, among others, a previous failed attempt to start it.
Will editing the deployment descriptor (or updating it with helm), for example changing the image tag, trigger the init container?
Yes, basically every change to a Deployment definition that triggers creation/re-creation of the Pods it manages also triggers their initContainers to run. It doesn't matter whether you manage it with helm or manually. Some changes, like adding a new set of labels to the Deployment object itself (rather than to its Pod template), don't make it re-create its Pods, but changing the container image certainly causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create its Pods.
Will deleting the pod trigger the init container?
No, deleting a Pod will not by itself trigger the init container. If you delete a Pod that is not managed by any controller, it will simply be gone and no automatic mechanism will recreate it or run its initContainers. If you delete a Pod that is managed by a controller, let's say a ReplicaSet, the controller will detect that there are fewer Pods than declared in its yaml definition and will create a replacement Pod to match the desired/declared state. So I would like to highlight again that it is not the deletion of the Pod that triggers its initContainers, but Pod creation, whether manual or performed by a controller such as a ReplicaSet, which of course can be triggered by the manual deletion of a Pod managed by such a controller.
Will reducing the replica count to zero and then increasing it again trigger the init container?
Yes, because when you reduce the number of replicas to 0, you make the controller delete all Pods that fall under its management. When they are re-created, their whole startup process is repeated, including running the initContainers that are part of those Pods.
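A quick way to see this in practice (the deployment name is a placeholder):

kubectl scale deployment/my-app --replicas=0   # deletes all pods managed by the deployment
kubectl scale deployment/my-app --replicas=3   # brand-new pods are created, so their initContainers run again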
Is it possible to manually trigger init container?
As @David Maze already stated in his comment, "the only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that". I would say it depends on what you mean by the term manually. If you are asking whether it is possible to trigger an initContainer somehow without restarting/re-creating a Pod: no, it is not possible. Starting initContainers is tightly tied to Pod creation, in other words to its startup process.
Btw, everything you're asking about is quite easy to test. There are a lot of working examples in the official Kubernetes docs that you can use for testing different scenarios, and you can also create a simple initContainer yourself, e.g. using the busybox image, whose only task is to sleep for the required number of seconds (a minimal example follows after the links). Here are some useful links from different k8s docs sections related to initContainers:
Init Containers
Debug Init Containers
Configure Pod Initialization
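A minimal, self-contained example of the kind of test Pod described above (all names are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
    - name: wait-a-bit
      image: busybox:1.36
      command: ["sh", "-c", "echo init starting; sleep 10; echo init done"]
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo app started; sleep 3600"]

Deleting this Pod and creating it again (manually or via a controller) runs the init container again; nothing short of re-creating the Pod will.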
I have a Kubernetes pod (let's call it POD-A) and I want it to use a certain config file to perform some actions using k8s API. The config file will be a YAML or JSON which will be parsed by the application inside the pod.
The config file is hosted by an application server in the cloud, and the latest version of it can be pulled based on a trigger. The config file contains the configuration details of all the deployments in the k8s cluster and will be used to update deployments via the k8s API in POD-A.
Now what I am thinking is to save this config file in a config-map and every time a new config file is pulled a new config-map is created by the pod which is using the k8s API.
What I want to do is to update the previous config map with a certain flag (a key and a value) which will basically help the application know which is the current version of the deployment. So let's say I have a running k8s cluster with multiple pods in it, and a config-map that has all the configuration details for those pods (image version, namespace, etc.) plus a flag indicating that this is the current deployment; the application inside POD-A will know that by loading the config-map. Now when a new config file is pulled, a new config-map is created, the current-deployment flag is set to false on the previous config map and to true on the newly created config map. That config map is then used to update all the pods in the cluster.
I know there are a lot of details but I had to explain them to ask the following questions:
1) Can configmaps be used for this purpose?
2) Can I update configmaps or do I have to rewrite them completely? I am thinking of writing a file in the configmap because that would be much simpler.
3) I know configmaps are stored in etcd, but are they persisted on disk or are they kept in memory?
4) Let's say POD-A goes down will it have any effect on the configmaps? Are they in any way associated with the life cycle of a pod?
5) If the k8s cluster itself goes down, what happens to the configmaps? Since they are in etcd, and if they are persisted, will they be available again?
Note: There is also a limit on the size of configmaps, so I have to keep that in mind, although I am guessing 1MB is a fair enough size to save a config file since it would usually be only a few bytes.
1) I think you should not use it in this way.
2) ConfigMaps are Kubernetes resources; you can update them in place (see the kubectl sketch after this list).
3) They are kept in etcd, which persists its data to disk, so they are not held only in memory.
4) No. A pod's lifecycle does not affect configmaps, unless the pod itself mutates (deletes) the configmap.
5) If the cluster itself goes down and etcd is running on that same cluster, etcd will not be available until the cluster comes back up. etcd persists its data to disk (and can additionally be backed up), so when it comes back up it will still have the values it stored; the configmaps should therefore be available again once the cluster and etcd are up.
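To point 2, updating in place is straightforward; for example, flipping a "current" flag on existing ConfigMaps or regenerating the stored file entirely (the ConfigMap names, key and file name are made up for illustration):

kubectl patch configmap deploy-config-v1 --type merge -p '{"data":{"current":"false"}}'
kubectl patch configmap deploy-config-v2 --type merge -p '{"data":{"current":"true"}}'

# or regenerate the whole ConfigMap from the pulled config file
kubectl create configmap deploy-config-v2 --from-file=config.yaml --dry-run=client -o yaml | kubectl apply -f -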
There are multiple ways to consume a configMap in a pod, such as env variables, mounted files, etc.
If you change a ConfigMap, values consumed as env variables are not updated in running containers; only ConfigMaps mounted as files are refreshed dynamically, and even then the process running in the pod has to detect that the files have changed and take some action.
So I think such a system would be too complex.
Instead, trigger a new rollout of the deployment that kills the old pods and brings up new pods which use the updated configMap.