Kubernetes: when exactly does envFrom import the values from a ConfigMap?

Suppose that I have a Deployment which loads the env variables from a ConfigMap:
spec.template.spec.containers[].envFrom.configMapRef
Now suppose that I change the data inside the ConfigMap.
When exactly are the Pod's env variables updated? (i.e. when does the app running in the pod see the new env variables?)
For example (note that we are inside a Deployment):
If a container inside the pod crashes and is restarted, does it read the new env or the old env?
If a pod is deleted (but not its ReplicaSet) and it is recreated, does it read the new env or the old env?
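For reference, a minimal sketch of the setup being described (all names are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest          # hypothetical image
          envFrom:
            - configMapRef:
                name: my-config         # every key in this ConfigMap becomes an env variable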

After some testing (with v1.20) I see that the env variables in the Pod template are updated immediately (the template only holds a reference to the external values).
However, the running container does not see the new env variables... You need to at least restart the container (or delete and recreate the pod).
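A quick way to check this (pod, ConfigMap and variable names are hypothetical):
kubectl edit configmap my-config                 # change a value
kubectl exec my-app-pod -- printenv MY_VAR       # the running container still prints the old value
kubectl delete pod my-app-pod                    # the ReplicaSet recreates the pod
kubectl exec <new-pod> -- printenv MY_VAR        # the recreated pod prints the new value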

When a ConfigMap is consumed as env variables, the values are not updated. You have to restart the pod.
If you want hot reload, you can mount that ConfigMap as a volume instead. The files are then updated automatically, but the app still has to watch for that change.
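For example, a minimal sketch of the volume-mount variant (ConfigMap name and mount path are hypothetical); the same pod spec goes inside a Deployment's template:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: config
      configMap:
        name: my-config
  containers:
    - name: my-app
      image: my-app:latest             # hypothetical image
      volumeMounts:
        - name: config
          # files under this path are refreshed by the kubelet when the ConfigMap changes
          # (note: subPath mounts are not refreshed)
          mountPath: /etc/my-app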

Related

Deploying NextJS application to Kubernetes cluster

Next version 12.1.5
Kubernetes version 1.20
Until now I've copied a .env file into the Next.js image and managed the variables that way, but a new requirement is to hold the env variables in Kubernetes Secrets.
After making the needed change, that is, creating the Secret and adding secretKeyRef entries to the env section of my deployment file, logging process.env within the next.config.js file gives undefined.
The strange thing is that the variables are present when using ssh to enter the Kubernetes pod and running "printenv".
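For context, a sketch of the pattern being described, i.e. pulling an env variable from a Secret in the Deployment's container spec (Secret, key and variable names are hypothetical):
containers:
  - name: nextjs
    image: my-nextjs-app:latest        # hypothetical image
    env:
      - name: API_KEY
        valueFrom:
          secretKeyRef:
            name: my-app-secrets       # hypothetical Secret
            key: api-key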

How to make configmap propagate to running pods (without restarting them)

When changing a variable in a configmap, the environment variables inside running pods are not updated.
We have a stateful pod that cannot be restarted easily.
Trying to update the environment variable inside the container with
export VARIABLE_TO_BE_UPDATED="new value"
lasts a little while but is then rolled back automatically after a short time (maybe this has something to do with the open session).
Any way to update those environment variables (manually) in a persistent way without restarting the pod?
You can't reload a ConfigMap that was consumed as environment variables. The values are read from the API and injected into the container's environment before the container is started, and they remain static afterwards.
Another way to do this could be to use a sidecar container that watches for changes to those ConfigMaps, refreshes copies in a volume shared with your application container, and then instructs your application to reload its configuration. A common implementation of this pattern is the Prometheus Rule Reloader.
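A rough sketch of that sidecar idea (image, ConfigMap and file names are hypothetical; the "reload" step is only a log line here, a real setup would signal or call the application):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watcher
spec:
  volumes:
    - name: config
      configMap:
        name: my-config                # the ConfigMap being watched
  containers:
    - name: app
      image: my-app:latest             # hypothetical application image
      volumeMounts:
        - name: config
          mountPath: /etc/my-app
    - name: config-watcher
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          # poll the mounted file and react when its checksum changes
          last=""
          while true; do
            current=$(sha256sum /etc/my-app/config.yaml | cut -d' ' -f1)
            if [ "$current" != "$last" ]; then
              echo "config changed, tell the application to reload here"
              last="$current"
            fi
            sleep 10
          done
      volumeMounts:
        - name: config
          mountPath: /etc/my-app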

Restart Pod when Secrets get updated

We are using Secrets as environment variables on our pods, but every time a Secret is updated we redeploy the pods for the change to take effect. We are looking for a mechanism where pods get restarted automatically whenever a Secret is updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use a Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.
Second, managing Secrets may be a bit tricky. It would be great if you could use a setup with the Kustomize secretGenerator - then each new Secret gets a unique name. In addition, that unique name is reflected in the Deployment automatically, so your pods are automatically recreated when a Secret is changed - this matches your original problem. When the Secret and the Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
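A minimal sketch of that Kustomize setup (file and Secret names are hypothetical):
# kustomization.yaml
resources:
  - deployment.yaml                    # references the Secret by its base name "app-secret"
secretGenerator:
  - name: app-secret
    literals:
      - API_KEY=changeme               # hypothetical key/value
Each apply then generates a Secret named something like app-secret-<hash> and rewrites the reference inside the Deployment, so changing the Secret data changes the pod template and rolls the pods.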
If you mount your Secrets into the pod as a volume, they are updated automatically and you don't have to restart your pod, as mentioned here.
Another approach is Stakater Reloader, which can reload your Deployments based on changes to ConfigMaps, Secrets, etc.
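A sketch of how Reloader is usually wired up, assuming its auto-reload annotation (Deployment name is hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader rolls this Deployment whenever a ConfigMap/Secret it references changes
    reloader.stakater.com/auto: "true"
spec:
  # ...rest of the Deployment spec as usual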
There are multiple ways of doing this:
Simply restart the pod
this can be done manually, or,
you could use an operator such as the VMware Carvel kapp-controller (documentation). Using kapp-controller you can reload the Secrets/ConfigMaps without needing to restart the pods (it effectively runs helm template <package> on a periodic basis and applies the changes if it finds any differences in the rendered templates); check out my design for reloading the log level without needing to restart the pod.
Using service bindings https://servicebinding.io/

Persistence of Configmap in kubernetes

I have a Kubernetes pod (let's call it POD-A) and I want it to use a certain config file to perform some actions using k8s API. The config file will be a YAML or JSON which will be parsed by the application inside the pod.
The config file is hosted by an application server on cloud and the latest version of it can be pulled based on a trigger. The config file contains configuration details of all the deployments in the k8s cluster and will be used to update deployments using k8s API in POD-A.
Now what I am thinking is to save this config file in a config-map and every time a new config file is pulled a new config-map is created by the pod which is using the k8s API.
What I want to do is to update the previous config map with a certain flag (a key and a value) which will basically help the application to know which is the current version of deployment. So let's say I have a running k8s cluster with multiple pods in it, a config-map is there which has all the configuration details against those pods (image version, namespace, etc.) and a flag notifying that this the current deployment and the application inside POD-A will know that by loading the config-map. Now when a new config-file is pulled a new config-map is created and the flag for current deployment is set to false for the previous config map and is set to true for the latest created config map. Then that config map is used to update all the pods in the cluster.
I know there are a lot of details but I had to explain them to ask the following questions:
1) Can configmaps be used for this purpose?
2) Can I update configmaps or do I have to rewrite them completely? I am thinking of writing a file in the configmap because that would be much simpler.
3) I know configmaps are stored in etcd but are they persisted on disk or are kept in memory?
4) Let's say POD-A goes down will it have any effect on the configmaps? Are they in any way associated with the life cycle of a pod?
5) If the k8s cluster itself goes down, what happens to the configmaps? Since they are in etcd, and if they are persisted, will they be available again?
Note: There is also a limit on the size of configmaps so I have to keep that in mind. Although I am guessing 1MB is a fair enough size to save a config file since it would usually be in a few bytes.
1) I think you should not use it in this way.
2) ConfigMaps are Kubernetes resources. You can update them (see the example after this list).
3) If etcd backups to disk are enabled.
4) No. A pod's lifecycle should not affect ConfigMaps, unless the pod itself mutates (deletes) the ConfigMap.
5) If the cluster itself goes down, and assuming etcd is running on that same cluster, etcd will not be available until the cluster comes back up again. etcd has an option to persist backups to disk; if this is enabled, when etcd comes back up it will have restored the values that were in the backup. So the ConfigMaps should be available again once the cluster and etcd are up.
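Regarding 2), for example, a ConfigMap can be updated in place or regenerated from a file (ConfigMap name, key and file are hypothetical):
# update (or add) a single key in place
kubectl patch configmap deploy-config --type merge -p '{"data":{"current":"true"}}'
# or regenerate the whole ConfigMap from a file and apply it over the old one
kubectl create configmap deploy-config --from-file=config.yaml --dry-run=client -o yaml | kubectl apply -f -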
There are multiple ways to consume a ConfigMap in a pod, such as env variables, files, etc.
If you change a ConfigMap, values consumed as env variables are not updated; only ConfigMaps mounted as files are updated dynamically. Even then, the process running in the pod has to detect that the file has changed and take some action.
So I think the system will be too complex.
Instead, trigger a new Deployment rollout that kills the old pods and brings up new pods that use the updated ConfigMaps.
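One common way to trigger such a rollout without editing the manifest (Deployment name is hypothetical; requires kubectl v1.15+):
kubectl rollout restart deployment/my-app
The new pods read the current ConfigMap values when they start.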

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added under spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit, since the Deployment will roll out updated versions of the Pod(s) with the variable changes and kill the older Pods that don't have them. Obviously, that method does not change the Pod in place, but it is one way to get what you need.
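For example, either of these edits the Deployment and rolls out new Pods with the updated variable (Deployment and variable names are hypothetical):
kubectl edit deployment my-deployment                   # change the env in the pod template by hand
kubectl set env deployment/my-deployment MY_VAR=new     # or set the variable directly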
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the Deployment can easily be edited, because the Pod is a child template of the Deployment specification.
In order to "edit" the running pod with desired changes, the following approach can be used.
Extract the pod definition to a file, Make necessary changes, Delete the existing pod, and Create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, the same error came up for me. When I retried after some time, it worked.
It feels like some update was going on at the same time earlier, so I tried to edit the YAML quickly and apply the changes, and it worked.