I am using Spark, which has a predefined script to create a pod in my kubernetes cluster.
After the pod is created and running, I want to check whether it is still alive. I could do this with a livenessProbe; however, that is configured in the Pod's configuration file, which I do not control, since my pod is created by Spark and I cannot change its config file.
So my question is: after the pod has already been created and is running, how can I change its configuration so that it uses a livenessProbe?
Or is there any other way to check the liveness of the pod?
I am a beginner to Kubernetes, sorry for this question!
After a Pod is created you can't change the livenessProbe definition.
You could use a second Pod to report on the status of your workload, if that works for your use case.
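For illustration, a minimal sketch of such a watcher Pod. It assumes the Spark driver is reachable through a Service called spark-driver-svc and serves something on port 4040 (the Spark UI's default); both names are placeholders, and any image with a shell and curl will do:

apiVersion: v1
kind: Pod
metadata:
  name: spark-health-watcher
spec:
  containers:
  - name: watcher
    image: curlimages/curl:8.5.0
    command: ["/bin/sh", "-c"]
    args:
    - |
      # Poll the (assumed) driver endpoint and log failures.
      while true; do
        if ! curl -fsS http://spark-driver-svc:4040/ > /dev/null; then
          echo "$(date) spark driver not responding"
        fi
        sleep 30
      done

You could also give the watcher a service account with permission to read Pods and have it check the Pod's phase with kubectl instead of probing an HTTP endpoint.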
The other option is to use a Mutating Admission Controller to modify the Pod definition from your Spark script, though I would consider this not exactly beginner friendly.
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook
https://www.trion.de/news/2019/04/25/beispiel-kubernetes-mutating-admission-controller.html
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
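To give a rough idea of the registration side, a minimal (untested) MutatingWebhookConfiguration that intercepts Pod creation could look like the following. The Service name, namespace, path, and CA bundle are placeholders, and the webhook server that actually patches a livenessProbe into the Pod is not shown:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: spark-pod-mutator
webhooks:
- name: spark-pod-mutator.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: spark-pod-mutator      # placeholder: Service in front of your webhook server
      namespace: default
      path: /mutate
    caBundle: <base64-encoded CA certificate>   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]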
Related
If we have a requirement to modify a property of running pods, which would be the recommended way, and what is the reason?
I guess once a pod is deployed as part of a deployment, we can modify the pod's properties either by kubectl edit pod or by kubectl edit deploy.
I would like to understand whether there is any difference between these two actions.
Modify the Deployment, not the Pod.
Why?
The Deployment describes the desired state for your pods. The Deployment controller continuously watches the Deployment object in a control loop. It reads the desired pod state from the Deployment specification and tries to ensure that state in the cluster. So, if you edit the Pod and change something, the Deployment controller will overwrite the change on the next resync because your modification is not present in the Deployment specification.
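As a concrete sketch (the Deployment "my-app" and container "app" are placeholder names):

# Change the pod template through the Deployment; the controller rolls the
# change out by replacing the Pods:
kubectl set image deployment/my-app app=my-app:1.2.3
kubectl rollout status deployment/my-app

# A change made with "kubectl edit pod" is not recorded in the Deployment
# specification, so it disappears as soon as that Pod is replaced.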
For the most part you can't edit the pods. In the API definition of a PodSpec, the containers and initContainers fields are both described as "Cannot be updated." Almost all of the interesting things in a Pod spec are in the Container sub-objects.
The corollary to this is that you can't "modify properties of running pods" for the most part; you can only delete and replace them with new pods with the properties you want. If you edit the pod template in a deployment spec, Kubernetes will do exactly that.
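For example, trying to change an environment variable of a running Pod is rejected by the API server (the Pod name here is made up and the exact error wording may differ between versions):

kubectl patch pod my-app-7d9c5b6f4-abcde --type merge \
  -p '{"spec":{"containers":[{"name":"app","env":[{"name":"FOO","value":"bar"}]}]}}'
# Rejected with a "Forbidden: pod updates may not change fields other
# than ..." style error; apart from a few fields such as the container
# image, the spec of a live Pod is effectively read-only. Making the same
# change in the Deployment's pod template causes the Pod to be replaced.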
I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file whose path is /etc/settings/test.html. I want to override the file. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod, I want to use the existing pod.
Thanks
Almost all (if not all) Pod spec fields are immutable, meaning that you can't change them without destroying the old Pod and creating a new one with the desired parameters. There is no way to edit a Pod's volume list without recreating it.
The reason behind this is that Pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned and destroyed according to the scheduler's needs. In general, you need a workload object that does Pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on your deployment strategy and the nature of the application).
There are two ways to edit a file in an existing Pod: either use kubectl exec and console commands to edit the file in place, or use kubectl cp to copy an already edited file into the Pod. I advise against both, because the change is not permanent: it is lost as soon as the Pod is recreated. Better to back up the necessary data, switch the workload to a Deployment with one replica, and then mount a ConfigMap as you read on the Internet.
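A rough sketch of that last approach, with placeholder names (my-app for the Deployment and the image; the resources rendered by the Bitnami chart will be named differently):

# 1. Create a ConfigMap from your edited file
kubectl create configmap test-html --from-file=test.html

# 2. Mount it over the file in the Deployment's pod template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:latest                  # placeholder
        volumeMounts:
        - name: settings
          mountPath: /etc/settings/test.html
          subPath: test.html                  # mount only the single file
      volumes:
      - name: settings
        configMap:
          name: test-html

If the workload comes from a Helm chart, first check whether the chart exposes values for extra volumes and volume mounts; that is usually cleaner than patching the rendered manifests by hand.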
I was trying to create a file before the application comes up in the Kubernetes cluster, using initContainers.
But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error below:
[screenshot of the kubectl error message]
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation ( https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement ):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations
This is because usually you don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are higher-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the Pod first, then create it again with the new spec you defined. But take into consideration that the Pod may be scheduled on a different node of the cluster (if you have more than one) and that it may get a different IP address, since Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment ( https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ ), which can be changed as desired; each time you make a change to its definition, the old Pod will be removed and a new one will be scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the right approach, but you don't necessarily need a hostPath. If the only need for the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and is cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
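Putting the two suggestions together, a minimal sketch (images, names, and the generated file are placeholders, adapt them to your actual spec; the same pod spec goes under .spec.template if you switch to a Deployment):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
  - name: make-file
    image: busybox:1.36
    # Write the file before the main container starts
    command: ["sh", "-c", "echo 'generated content' > /shared/myfile"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: app
    image: my-app:latest      # placeholder
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}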
I have a problem statement wherein there is a Kubernetes cluster and I have some pods running on it.
Now, I want some functions/processes to run once per deployment, independent of the number of replicas.
These processes use the same image as the one in the deployment YAML.
I cannot use initContainers or sidecars, because they would run along with the main container in every replica's pod.
I tried to create a new image and then a pod out of it. But this pod keeps on running, which is not good for cluster resources, as it should be destroyed after it has done its job. Also, the main container depends on the completion of this process in order to run the "command" part of the K8s spec.
Looking for suggestions on how to tackle this?
Theoretically, you could write an admission controller webhook that intercepts deployment create/update operations and triggers your functions as you want. If your functions need to be checked, use a ValidatingWebhookConfiguration to validate the process and then deny or accept the request, as sketched below.
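A rough sketch of the registration, with placeholder names; the webhook server behind the Service (which would run or validate your once-per-deployment functions) is not shown:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-hook
webhooks:
- name: deployment-hook.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: deployment-hook        # placeholder: Service in front of your webhook server
      namespace: default
      path: /validate
    caBundle: <base64-encoded CA certificate>   # placeholder
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]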
Assume a deployment like this:
The Deployment contains two types of pods, Config and App
Each App pod needs access to the Config pod in order to start
There is always only one Config pod
Already launched App pods can work without access to the Config pod's service
Situation I would like to manage:
A node containing some of the App pods and the Config pod goes down for any reason
On another node, the Config pod starts first
After the Config pod has successfully started, the App pods are launched
Already read about:
InitContainers - I couldn't find information on whether, if the Config pod were an init container, it would be rerun in the above situation - I think not
StatefulSet - I cannot see how this could help me in that situation
From my perspective, I was thinking about a loop in the App pods, before running the target application, that would wait for the Config pod to come up and, in case of unavailability after a timeout, force them to fail. But I'm not sure if that is best practice; I would rather handle this with Kubernetes configuration than with such a script.
You would use either code in your app or an initContainer to block until the Config pod is available. Combine this with a readinessProbe that checks whether the app is up. Doing the block-and-retry loop in your own code is a bit more work but recommended, since you can control the behavior more carefully. This means that App pods can launch whenever they like, but they won't be marked as ready for traffic until they initialize.
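A minimal sketch of the initContainer variant combined with a readinessProbe. The Service name config-service, port 8080, and the /healthz path are assumptions, adjust them to whatever your Config pod actually exposes:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
  - name: wait-for-config
    image: busybox:1.36
    # Block until the Config pod's Service answers; the Pod stays in the
    # Init state (and the kubelet keeps retrying) until this succeeds.
    command: ["sh", "-c", "until wget -qO- http://config-service:8080/healthz >/dev/null; do echo waiting for config; sleep 2; done"]
  containers:
  - name: app
    image: my-app:latest              # placeholder
    readinessProbe:
      httpGet:
        path: /healthz                # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10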