Kubernetes Edit File In A Pod

I have used some Bitnami charts in my Kubernetes app. In my pod there is a file whose path is /etc/settings/test.html, and I want to override it. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod, I want to use the existing one.
Thanks

Almost all fields of a pod spec are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. In particular, there is no way to edit a pod's volume list without recreating the pod.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to the scheduler's needs. In general, you want a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on your deployment strategy and the nature of the application).
There are two ways to edit a file in an existing pod: either use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already edited file into the pod. I advise against both, because the change is not permanent and will be lost as soon as the pod is recreated. Better to back up the necessary data, switch to a Deployment with one replica, and then mount a ConfigMap as you read on the Internet.
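A minimal sketch of what that could look like, assuming you control the Deployment directly (the names my-settings, my-app and my-image are placeholders; with a Bitnami chart you would normally wire this up through the chart's values rather than by editing the rendered manifest):

kubectl create configmap my-settings --from-file=test.html

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image:latest
          volumeMounts:
            - name: settings
              mountPath: /etc/settings/test.html
              subPath: test.html        # mount just this one file so the rest of /etc/settings stays untouched
      volumes:
        - name: settings
          configMap:
            name: my-settings

Applying this change to the Deployment makes Kubernetes replace the pod, which is exactly why it has to be a Deployment (or similar) rather than a bare pod.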

Related

I have deleted a StatefulSet on accident and need to re-create it

So, dumb me has deleted a very important StatefulSet (I was working in the wrong environment) and I need to recover it.
I have the .yaml file and the description (what you get when you click edit in OpenLens).
This StatefulSet is used for the database and I cannot lose the data.
The pvc's and pv's are still there and I have not done anything in fear of losing data.
As you can probably tell, I am not very familiar with Kubernetes and need help restoring my statefulset and not losing data in the process.
As a side note, I tried just kubectl apply -f <file> in our dev environment and the data got lost.
To restore the StatefulSet without losing data, first check the status of the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) associated with it by running kubectl get pvc and kubectl get pv. Once you have verified that the PVCs and PVs are intact, you can recreate the StatefulSet from the same .yaml file with kubectl apply -f. If you want to ensure that the StatefulSet is restored exactly as it was before it was deleted, you can use kubectl replace -f instead; this replaces the existing StatefulSet with the one defined in the .yaml file without touching the data on the associated PVCs and PVs.
To ensure that your data is not lost in the process, it is recommended that you create a backup of the StatefulSet before performing any of the above commands.
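As a rough sketch, assuming my-statefulset.yaml is the manifest you already have (the file name is a placeholder):

kubectl get pvc        # the claims should still be Bound
kubectl get pv         # the underlying volumes should still exist
kubectl apply -f my-statefulset.yaml
kubectl get pods -w    # watch the pods come back up

Because a StatefulSet derives its PVC names deterministically from volumeClaimTemplates and its own name, the recreated pods bind to the existing PVCs again, so the data is reused as long as the PVCs were never deleted.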

Facing "The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers" applying pod with initcontainers

I was trying to create a file before the application comes up in the Kubernetes cluster by using initContainers.
But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error below:
The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation (https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations
This is because you usually don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are higher-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the pod first, then create it again with the new spec you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, as Pods are disposable entities.
Or you could change your definition to a slightly more complex one, a Deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), which can be changed as desired; each time you make a change to its definition, the old Pod will be removed and a new one will be scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the right approach, but you don't necessarily need to use a hostPath. If the only purpose of the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume shared between containers and is cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
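A rough sketch of that last point, reusing the pod name from your error but with placeholder images, commands and paths:

apiVersion: v1
kind: Pod
metadata:
  name: web-nginx
spec:
  volumes:
    - name: workdir
      emptyDir: {}            # temporary volume, removed together with the Pod
  initContainers:
    - name: prepare-file
      image: busybox:1.36
      command: ["sh", "-c", "echo 'generated before startup' > /work/index.html"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html

Remember that, because initContainers cannot be added to a running Pod, you have to delete the old Pod (or switch to a Deployment) before applying this.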

Difficulty with different Kubernetes pods run using kubectl apply running the same container image sharing directories

I am attempting to run two separate pods using the same container image on a cluster by applying a config file. Despite there being no shared or persistent volume, when both pods are active the same directory on both pods is updated with files created by the other pod, and write access changes suddenly. The container being used is the jupyter-docker-stacks jupyter/minimal-notebook image pulled directly from Docker Hub. The pods running this container are created by applying a manifest. The two pods have different labels and names, and a service with a unique name is created for each pod for access.
Do resources for containers persist over time on a cluster like in docker containers? I cannot find something equivalent to a --rm flag to be used alongside kubectl apply.
Thanks
If you want the pod to go away after its work is completed, you might want to apply a Job instead of a bare pod. The idea of a Job in k8s is to launch a pod, do the work, and then stop the pod. For more info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
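A minimal Job sketch along those lines (the name, image and command are placeholders; ttlSecondsAfterFinished is optional and makes Kubernetes clean up the finished Job and its pods automatically):

apiVersion: batch/v1
kind: Job
metadata:
  name: notebook-task
spec:
  ttlSecondsAfterFinished: 300    # remove the Job and its pods 5 minutes after completion
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never        # required for Job pods (Never or OnFailure)
      containers:
        - name: task
          image: jupyter/minimal-notebook
          command: ["echo", "done"]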
$ kubectl apply -f <fileName> will create the pod or make changes to it. If you created the pod with apply and want to delete it, use $ kubectl delete -f <fileName>.
About sharing: if you have 2 separate manifests, you can specify volumeMounts for each container. For more information, please read the documentation, depending on your needs.
Also, as Kaizhe Huang advised, you can use a Job if you want to execute something one time, or try initContainers if you want to install something in the Pod before the main container runs. More about initContainers here.
You could also check the Dockerfile of your image and see if any VOLUME is declared. If there is, the two pods may be sharing the same volume on the host. Not sure, but it is worth checking.
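One way to check that, if you have Docker available locally (a sketch, not specific to any tag):

docker pull jupyter/minimal-notebook
docker image inspect jupyter/minimal-notebook --format '{{json .Config.Volumes}}'

If the output is anything other than null, the image declares VOLUMEs, which is what the answer above suggests looking for.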

Persistence of Configmap in kubernetes

I have a Kubernetes pod (let's call it POD-A) and I want it to use a certain config file to perform some actions using k8s API. The config file will be a YAML or JSON which will be parsed by the application inside the pod.
The config file is hosted by an application server on cloud and the latest version of it can be pulled based on a trigger. The config file contains configuration details of all the deployments in the k8s cluster and will be used to update deployments using k8s API in POD-A.
Now what I am thinking is to save this config file in a config-map and every time a new config file is pulled a new config-map is created by the pod which is using the k8s API.
What I want to do is to update the previous config map with a certain flag (a key and a value) which will basically help the application to know which is the current version of deployment. So let's say I have a running k8s cluster with multiple pods in it, a config-map is there which has all the configuration details against those pods (image version, namespace, etc.) and a flag notifying that this the current deployment and the application inside POD-A will know that by loading the config-map. Now when a new config-file is pulled a new config-map is created and the flag for current deployment is set to false for the previous config map and is set to true for the latest created config map. Then that config map is used to update all the pods in the cluster.
I know there are a lot of details but I had to explain them to ask the following questions:
1) Can configmaps be used for this purpose?
2) Can I update configmaps or do I have to rewrite them completely? I am thinking of writing a file in the configmap because that would be much simpler.
3) I know configmaps are stored in etcd but are they persisted on disk or are kept in memory?
4) Let's say POD-A goes down will it have any effect on the configmaps? Are they in any way associated with the life cycle of a pod?
5) If the k8s cluster itself goes down, what happens to the configmaps? Since they are in etcd, and if they are persisted, will they be available again?
Note: There is also a limit on the size of configmaps so I have to keep that in mind. Although I am guessing 1MB is a fair enough size to save a config file since it would usually be in a few bytes.
1) I think you should not use it in this way.
2) ConfigMaps are kubernetes resources. You can update them.
3) They live in etcd, and etcd persists its data to disk, so they are not kept only in memory; you can additionally take etcd backups.
4) No. A pod's lifecycle does not affect ConfigMaps, unless the pod itself mutates (deletes) the ConfigMap.
5) If the cluster itself goes down and etcd is running on that same cluster, etcd will not be available until the cluster comes back up. etcd persists its data to disk and can also be backed up with snapshots, so when etcd comes back up it will still have the stored values (or can be restored from a snapshot), and the ConfigMaps should be available again once the cluster and etcd are up.
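For completeness, here is roughly what taking such an etcd backup looks like with etcdctl (the endpoint and certificate paths are placeholders and depend on how etcd is deployed in your cluster):

ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key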
There are multiple ways to consume a ConfigMap in a pod: as environment variables, as files on a volume, etc.
Be aware that if you change a ConfigMap, values consumed as environment variables are not updated at all while the pod is running, and values mounted as files are only refreshed eventually by the kubelet (and never for subPath mounts). Either way, the process running in the pod would have to detect that the configuration changed and take some action.
So I think the system will be too complex.
Instead, trigger a rollout of the Deployment so that the old pods are killed and new pods come up using the updated ConfigMap.
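For example, one simple way to do that from a script or CI job (the ConfigMap, file and Deployment names are placeholders):

kubectl create configmap app-config --from-file=config.yaml --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment/my-app

The first command updates the ConfigMap in place; the second forces the Deployment to replace its pods so they pick up the new values.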

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added to spec.containers of a pod using the Kubernetes dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
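For example, environment variables on a Deployment can also be changed without opening an editor (the Deployment and variable names are placeholders); each change triggers a rollout of new Pods:

kubectl set env deployment/my-deployment LOG_LEVEL=debug
kubectl set env deployment/my-deployment LOG_LEVEL-     # the trailing dash removes the variable again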
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, a Deployment can be edited easily, because the pod is a child template of the Deployment specification.
In order to "edit" the running pod with desired changes, the following approach can be used.
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
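The delete-and-create pair can also be done in one step once the file has been edited:

kubectl replace --force -f my-new-pod.yaml    # deletes the existing pod and recreates it from the file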
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like some update was going on at the same time; I edited the YAML quickly, applied the changes, and it worked.