Is it possible to set up a delay for pod shutdowns after a rolling update on Kubernetes?
For example, I roll out a new version and want the old Pods to keep running for a further 15 seconds after the new instance has started.
How can I manage that?
Yes, you can use a preStop hook to achieve that.
preStop hooks are executed after a Pod is marked as terminating. See what happens when you delete a Pod here.
You just have to run sleep 15 in the preStop hook.
For more details see Container hooks.
See how to add a preStop hook here: Define postStart and preStop handlers.
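A minimal sketch of what this looks like in a Deployment's Pod template; the container name and image are placeholders:

spec:
  containers:
  - name: app              # placeholder name
    image: your-image      # placeholder image
    lifecycle:
      preStop:
        exec:
          # Keep the old container alive for 15s once termination starts;
          # the kubelet runs this before sending SIGTERM to the main process.
          command: ["/bin/sh", "-c", "sleep 15"]

The sleep counts against terminationGracePeriodSeconds (30 seconds by default), so raise that field if you need a longer delay.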
I want to execute a script immediately after a container completes successfully or is terminated due to an error in the Pod.
I tried attaching handlers to container lifecycle events, like preStop, but it is only called when a container is terminated due to an API request or a management event such as a liveness probe failure, preemption, or resource contention.
Reference - Kubernetes Doc: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
Is there an alternative approach to this?
From the official docs, as you said:
Kubernetes only sends the preStop event when a Pod is terminated. This means that the preStop hook is not invoked when the Pod is completed.
That said, the use of bare Pods is not recommended. Consider using a Job controller:
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions.
You can check the Job's conditions and wait for its completion with:

kubectl wait --for=condition=complete job/your-job

then run your script. Meanwhile, add a preStop hook to your Pod definition to run the script when Pods are terminated. You can also write an extra script that works in the background, checks whether the Job is complete, and then runs your main script:
until kubectl get jobs your-job -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' | grep -q True ; do sleep 5 ; done
<run your main script>
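A more compact variant of the same wait-then-run pattern (your-job and the script invocation are placeholders):

kubectl wait --for=condition=complete --timeout=300s job/your-job && <run your main script>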
See more: job-completion-task.
What is the difference between Helm's post-install hook and Kubernetes init containers? What I understand is that hooks are used to define actions during different stages of a release's lifecycle, in this case post-install, while init containers run inside a Pod to initialize things before the app containers start.
As I understand it, post-install hooks and init containers allow you to do the same thing, i.e. initialize a database.
Is that correct? Which is the better approach?
For database setup I would prefer a Helm hook, but even then there are some subtleties.
Say your service is running as a Deployment with replicas: 3 for a little bit of redundancy. Every one of these replicas will run an init container, if it's specified in the pod spec, without any sort of synchronization. If one of the pods crashes, or its node fails, its replacement will run the init container again. For the sort of setup task you're talking about, you don't want to repeat it that often.
The fundamental difference here is that a Helm hook is a separate Kubernetes object, typically a Job. You can arrange for this Job to be run exactly once on each helm upgrade and at no other times, which makes it a reasonable place to run things like migrations.
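A minimal sketch of such a hook Job, with hypothetical names (db-migrate, your-image, and the migration command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    # Run on every install and upgrade, before the release's other resources
    # are applied, so the schema is ready when the new pods come up.
    "helm.sh/hook": pre-install,pre-upgrade
    # Remove the previous hook Job before creating a new one.
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: your-image                        # typically the same image as the app
        command: ["/bin/sh", "-c", "./migrate.sh"]   # placeholder migration command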
The one important subtlety here is that you can have multiple versions of a service running at once. Take the preceding Deployment with replicas: 3, but then helm upgrade --set tag=something-newer. The Deployment controller will first start a new pod with the new image, and only once it's up and running will it tear down an old pod, and now you have both versions going together. Similar things will happen if you helm rollback to an older version. This means you need some tolerance for the database not quite having the right schema.
If the job is more like a "seed" job that preloads some initial data, this is easier to manage: do it in a post-install hook, which you expect to run only once ever. You don't need to repeat it on every upgrade (as a post-upgrade hook) or on every pod start (as an init container).
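For that seed case, only the hook annotation in the sketch above needs to change:

  annotations:
    # Run once, after the initial install; not repeated on upgrades or rollbacks.
    "helm.sh/hook": post-install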
A Helm install hook and an init container are fundamentally different. An install hook in Helm creates a completely separate Pod, which means that Pod cannot reach the main Pod's containers directly over localhost or share their volume mounts, while an init container, running inside the same Pod, can do both.
An init container, which is comparable to a Helm pre-install hook, is limited in that it can only perform initial tasks before the main containers start; it cannot run tasks that need to execute after the Pod is started, for example any clean-up activity.
Initialization of a database needs to happen before the actual container starts, and I think an init container is sufficient for this use case, though a Helm pre-install hook can also be used.
You need to use a post-install hook, since you first have to create the DB Pod and then initialize the DB. You will notice that the Pod for a post-install hook comes up only after the DB Pod starts running, and it is removed after the hook has executed.
I am running into a situation where an init container's execution to completion has to be time-bounded. Can someone suggest or recommend a strategy to achieve this? What I have tried so far:
activeDeadlineSeconds - this attribute is supported on a Pod but not on a ReplicaSet, so it cannot be used inside a Deployment object.
Killing the init container from inside when a timer expires - this is not working as expected; please refer to link.
progressDeadlineSeconds - this doesn't take init containers into account.
One possible solution is to add lifecycle hooks.
Pods also allow you to define two lifecycle hooks:
1: Post-start hooks: K8s docs
Remember: Until the hook completes, the container will stay in the Waiting state with the reason ContainerCreating. Because of this, the pod’s status will be Pending instead of Running. If the hook fails to run or returns a non-zero exit code, the main container will be killed.
2: Pre-stop hooks: K8s docs. A pre-stop hook is executed immediately before a container is terminated.
Note:
1: These lifecycle hooks are specified per container, unlike init containers, which apply to the whole pod.
2: As their names suggest, they’re executed when the container starts and before it stops.
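Given the post-start behavior above, one way to time-bound initialization is to run it in a postStart hook wrapped in timeout(1). A minimal sketch, assuming the image provides timeout and a hypothetical /scripts/init.sh holds your init logic:

lifecycle:
  postStart:
    exec:
      # If init.sh does not finish within 60s, timeout kills it and exits
      # non-zero, which makes the kubelet kill the main container.
      command: ["/bin/sh", "-c", "timeout 60 /scripts/init.sh"]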
I hope this helps you land on a new approach!
I use initContainers inside a deployment. An init container is created for each pod.
Is it possible to get only one init container for the whole deployment?
edit:
use case: I need to run some DB migration commands before creating the pods, so I have put these commands inside the init containers.
problems:
- each time a pod is created, the init container is created
- on scale-up, an init container is created for each new pod
solution:
I finally found a good example of solving this problem in this article.
While it is not possible to have one init container shared by the whole deployment (which is by definition impossible, as the unit of workload scheduling in Kubernetes is a Pod), there are things you can use to get similar functionality easily.
One way to achieve this is to have an init container on every Pod wait for a specific Job to be successfully executed, with the Job doing the actual init logic.
Another is to use leader election among your init containers, so only one executes the logic and the rest just wait for the lock to be released before exiting successfully.
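A minimal sketch of the first approach, assuming a db-migrate Job created elsewhere (e.g. by a Helm hook) and a service account allowed to read Jobs; all names here are placeholders:

spec:
  template:
    spec:
      initContainers:
      - name: wait-for-migrations
        image: bitnami/kubectl:latest
        # Block this Pod's app containers until the migration Job is Complete.
        command: ["kubectl", "wait", "--for=condition=complete", "--timeout=300s", "job/db-migrate"]
      containers:
      - name: app
        image: your-image   # placeholder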
No, pods are identical, so if you use an init container it has to be attached to every pod in your deployment.
You're welcome to comment and tell us more about your use case, because it sounds like the problem you are trying to solve is not the best fit for init containers.
What triggers an init container to be run?
Will editing the deployment descriptor (or updating it with Helm), for example changing the image tag, trigger the init container?
Will deleting the pod trigger the init container?
Will reducing the replica count to zero and then increasing it trigger the init container?
Is it possible to manually trigger the init container?
What triggers an init container to be run?
Basically, initContainers run every time a Pod that has such containers in its definition is created, and the reasons for creating a Pod can be quite different. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that triggers starting an initContainer is, among others, a previous failed attempt to start it.
Will editing the deployment descriptor (or updating it with Helm), for example changing the image tag, trigger the init container?
Yes, basically every change to the Deployment definition that triggers creation or re-creation of the Pods it manages also triggers their initContainers to be run. It doesn't matter whether you manage it with Helm or manually. Some slight changes, such as adding a new set of labels to your Deployment, don't cause it to re-create its Pods, but changing the container image certainly causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create its Pods.
Will deleting the pod trigger the init container?
No, deleting a Pod will not itself trigger the init container. If you delete a Pod which is not managed by any controller, it is simply gone, and no automatic mechanism will re-create it or run its initContainers. If you delete a Pod which is managed by a controller, say a ReplicaSet, the controller will detect that there are fewer Pods than declared in its definition and will create a replacement Pod to match the desired state. So, to highlight it again: it is not the deletion of the Pod that triggers its initContainers, but Pod creation, whether manual or performed by a controller such as a ReplicaSet, which can of course be triggered by manually deleting a Pod managed by such a controller.
Will reducing the replica count to zero and then increasing it trigger the init container?
Yes, because when you reduce the number of replicas to 0, you make the controller delete all the Pods under its management. When they are re-created, their whole startup process is repeated, including running the initContainers that are part of such Pods.
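For example (the deployment name is a placeholder):

kubectl scale deployment/your-deployment --replicas=0
kubectl scale deployment/your-deployment --replicas=3

Each of the three new Pods runs its initContainers again.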
Is it possible to manually trigger the init container?
As @David Maze already stated in his comment, "The only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that." I would say it depends on what you mean by manually. If you are asking whether it is possible to somehow trigger an initContainer without restarting or re-creating the Pod: no, it is not possible. Starting initContainers is tightly tied to Pod creation, in other words to the Pod's startup process.
By the way, everything you are asking about is quite easy to test. There are many working examples in the official Kubernetes docs that you can use to test different scenarios, and you can also create a simple initContainer yourself, e.g. using the busybox image, whose only task is to sleep for the required number of seconds (see the sketch after the links below). Here are some useful links from different sections of the k8s docs related to initContainers:
Init Containers
Debug Init Containers
Configure Pod Initialization
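A minimal test sketch of that busybox idea (all names are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait
    image: busybox
    # The init container's only task: sleep, then exit successfully.
    command: ["sh", "-c", "sleep 15"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app started; sleep 3600"]

Delete and re-create this Pod (or wrap the same template in a Deployment and scale it) and watch the wait container run again with kubectl get pod init-demo -w.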