Recreate container on exit in k8s

I have a Deployment in k8s which contains a single container, which exits when it completes its work. By default, k8s then restarts the container.
I would like to recreate the container (or the whole Pod) on exit instead. The work involves a number of temporary files and other changes in the container, which should be discarded when it exits and a fresh container created from the image.
How can I configure the Deployment (or another workload) so that an exited container is either recreated, or causes the whole pod to exit and be recreated?
There is a restartPolicy setting, but it's forced to Always for Deployments. There appears to be a maxRetries setting, but I can't find the documentation or any examples for it. I'm not sure what else to search for.

Using the Job API object may help. If you use restartPolicy: Never in a Job, the Job controller creates a brand-new Pod each time the process running in the Pod's container exits with a non-zero code. With restartPolicy: OnFailure (the only other value a Job allows), only the container is restarted/re-created, without re-creating the Pod.
In addition, when a container restarts in Kubernetes, it is re-created: all files created in the container are removed, except those written to attached persistent storage (for example, a PersistentVolume). There is no equivalent of docker start/stop in Kubernetes.
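A minimal sketch of this approach (the name and image are placeholders): a Job whose pod template sets restartPolicy: Never, so every failed run gets a brand-new Pod rather than an in-place container restart.

apiVersion: batch/v1
kind: Job
metadata:
  name: my-work              # placeholder
spec:
  backoffLimit: 4            # how many fresh Pods to create before the Job gives up
  template:
    spec:
      restartPolicy: Never   # on failure a new Pod is created; the container is never restarted in place
      containers:
      - name: worker
        image: my-worker:latest   # placeholder image that exits when the work is done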

Related

Kubernetes rolling deploy: terminate a pod only when there are no containers running

I am trying to deploy updates to pods. However, I want the current pods to terminate only when all the containers inside them have terminated and their processes are complete.
The new pods can keep waiting to start until all containers in the old pods have completed. We have a mechanism to stop old pods from picking up new tasks, so they should eventually terminate.
It's okay if twice the pods exist at some instant. I tried finding a solution for this in the Kubernetes docs but wasn't successful. Pointers on how / if this is possible would be helpful.
Well, I guess you may have to create a duplicate Deployment with the new image and change the selector in the Service to point at the new Deployment. That prevents external traffic from reaching the pre-existing pods, while new calls go to the new pods. Later you can check something like

kubectl top pods --containers

and if the load appears to be static and low, you can delete the old Deployment and its pods.
The catch is that the Service selector has to be updated every time; to keep track of things, you can append the git commit hash to the selector value so it is unique on each release.
Rolling back to a previous version from inside the Kubernetes cluster will be difficult this way, so preferably trigger the wanted build again.
I hope this makes some sense!
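A rough sketch of that selector switch (every name and label here is hypothetical): run the new Deployment's pods with a distinct version label, then repoint the Service.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: abc123     # hypothetical git commit hash; updating this value
                        # cuts new traffic over to the pods of the new Deployment
  ports:
  - port: 80
    targetPort: 8080

The old pods keep running (and can finish their in-flight work) but stop receiving new traffic the moment the selector changes.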

Init container to be ran only once per deployment

I use initContainers inside a deployment, and an init container is run for each pod.
Is it possible to have only one init container for the whole deployment?
edit:
use case: I need to run some DB migration commands before the pods are created, so I put these commands in an init container.
problems:
- each time a pod is created, the init container runs again
- on scale-up, an init container is created for every new pod
solution:
I have finally found a good example for solving this problem in this article
While it is not possible to have one init container shared by the whole deployment (which is by definition impossible, since the unit of workload scheduling in kube is a pod), there are things you can use to get similar functionality easily.
One way to achieve this is to have an init container on every pod that waits for a specific Job to complete successfully, with the Job doing the actual init logic (see the sketch below).
Another is to use leader election among your init containers, so that only one executes the logic and the rest just wait for the lock to be released before they exit successfully.
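A hedged sketch of the first approach, assuming a migration Job named db-migrate (the images and command are placeholders). Note that for kubectl wait to work, the pod's service account also needs RBAC permission to read Jobs.

# Job that performs the one-off init/migration logic
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: my-app:latest          # placeholder
        command: ["./migrate.sh"]     # placeholder
---
# In the Deployment's pod template: every replica just waits for the Job
initContainers:
- name: wait-for-migration
  image: bitnami/kubectl:latest       # assumption: any image that ships kubectl works
  command: ["kubectl", "wait", "--for=condition=complete", "--timeout=600s", "job/db-migrate"]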
No, pods are identical so if you use an init container it has to be attached to every pod in your deployment.
You're welcome to comment and tell us more about your use case, because it sounds like the problem you are trying to solve is not the best fit for init containers.

Set a Pod as owner reference for another pod in client go program

Is it possible to set a running pod as the owner of another pod that is about to be created? I tried, but in that case pod creation fails.
This is not directly supported by Kubernetes. When you have a Pod that depends on the existence of another one (e.g. it needs a database or similar), you can use an init container. This delays the start of the main containers until the init container finishes, which is a good way to implement waiting conditions.
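For instance, a minimal sketch (the Service name my-database is hypothetical) that blocks the app container until the database Service resolves in DNS:

spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # hypothetical service name; loops until DNS for the database Service resolves
    command: ["sh", "-c", "until nslookup my-database; do echo waiting for my-database; sleep 2; done"]
  containers:
  - name: app
    image: my-app:latest   # placeholder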
I think you can use Kubernetes Jobs.
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
You can find more information here: jobs-kubernetes.
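For completeness, if you still want to experiment with the owner reference itself, this is the general shape of the field on a Pod manifest (all values are placeholders). Creation is rejected if uid, name, kind or apiVersion is missing, and if the uid doesn't match a live object in the same namespace, the garbage collector will delete the dependent pod.

apiVersion: v1
kind: Pod
metadata:
  name: dependent-pod
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: owner-pod                      # the running pod
    uid: <uid of the live owner pod>     # must match exactly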

Running init container on deployment update

What triggers init container to be run?
Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?
Will deleting the pod trigger the init container?
Will scaling the ReplicaSet down to zero and then back up trigger the init container?
Is it possible to manually trigger init container?
What triggers init container to be run?
Basically, initContainers run every time a Pod that defines them is created, and Pods can be created for quite different reasons. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that triggers an initContainer run is, among others, a previously failed attempt to run it.
Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?
Yes. Basically, every change to the Deployment definition that triggers creation/re-creation of the Pods it manages also triggers their initContainers to run. It doesn't matter whether you manage it with helm or manually. Changes that don't touch the Pod template, such as adding a new set of labels to the Deployment object itself, don't make it re-create its Pods, but changing the container image certainly causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create them.
Will deleting the pod trigger the init container?
No, deleting a Pod will not, by itself, trigger the init container. If you delete a Pod that is not managed by any controller, it is simply gone; no automatic mechanism will re-create it or run its initContainers. If you delete a Pod that is managed by a controller, say a ReplicaSet, the controller detects that there are fewer Pods than declared in its definition and creates a missing Pod to match the desired/declared state. So, to highlight it again: it is not the deletion of a Pod that triggers its initContainers, but Pod creation, whether manual or performed by a controller such as a ReplicaSet, which can of course be a consequence of manually deleting a Pod managed by such a controller.
Will scaling the ReplicaSet down to zero and then back up trigger the init container?
Yes. When you reduce the number of replicas to 0, the controller deletes all Pods under its management. When they are re-created, their whole startup process runs again, including the initContainers that are part of those Pods.
Is it possible to manually trigger init container?
As @David Maze already stated in his comment, the only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that. I would say it depends what you mean by manually. If you're asking whether it is possible to somehow trigger an initContainer without restarting / re-creating the Pod: no, it is not possible. Starting initContainers is tightly tied to Pod creation, in other words to the Pod's startup process.
By the way, everything you're asking is quite easy to test. There are a lot of working examples in the official Kubernetes docs that you can use to test different scenarios, and you can also build a simple initContainer yourself, e.g. from the busybox image, whose only task is to sleep for a required number of seconds; a minimal example follows the links below. Here are some useful links from different k8s docs sections related to initContainers:
Init Containers
Debug Init Containers
Configure Pod Initialization
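A minimal self-contained example for such tests, using busybox both as the init container (whose only task is to sleep) and as a placeholder app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: init-demo
  template:
    metadata:
      labels:
        app: init-demo
    spec:
      initContainers:
      - name: wait
        image: busybox:1.36
        command: ["sleep", "10"]            # only task: sleep, so the Init phase is easy to observe
      containers:
      - name: app
        image: busybox:1.36
        command: ["sh", "-c", "sleep 3600"]

Watching with kubectl get pods -w while you delete the pod, change the image tag, or scale to zero and back shows exactly when the Init:0/1 phase (and therefore the init container) runs again.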

kubernetes pods are restarting with new ID

The pods I am working with are managed by Kubernetes. When I use the docker restart command to restart a pod, sometimes the pod gets a new ID and sometimes the old one. When the pod gets a new ID, its state first goes from Running -> Error -> CrashLoopBackOff. Can anyone please tell me why this is happening? Also, how frequently does Kubernetes run its health checks?
Kubernetes currently does not use the docker restart command for many reasons (e.g., preserving the logs of older containers). Kubelet, the daemon on the node, creates a new container if the existing container terminated. In any case, users should not perform container lifecycle operations (e.g., stop, restart) on kubernetes-managed containers directly using docker, as it could cause unexpected behaviors.
EDIT: If you want Kubernetes to restart your container automatically, set restartPolicy in your pod spec to Always or OnFailure. For more details, see http://kubernetes.io/docs/user-guide/pod-states/
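A minimal illustration (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  restartPolicy: OnFailure   # Always is the default; Never disables restarts entirely
  containers:
  - name: app
    image: my-app:latest     # placeholder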