Time-bound execution of an init container - Kubernetes

I am running into a situation where an init container's execution to completion has to be time-bounded. Can someone recommend a strategy to achieve this? What I have tried so far:
activeDeadlineSeconds - This attribute is supported on a Pod but not on a ReplicaSet, so it cannot be used inside a Deployment object.
Killing the init container from the inside when a timer expires. This does not work as expected; please refer to link.
progressDeadlineSeconds - This does not take init containers into account.

One possible solution is to add lifecycle hooks.
Pods also allow you to define two lifecycle hooks:
1: Post-start hooks: K8s docs
Remember: Until the hook completes, the container will stay in the Waiting state with the reason ContainerCreating. Because of this, the pod’s status will be Pending instead of Running. If the hook fails to run or returns a non-zero exit code, the main container will be killed.
2: Pre-stop hooks: K8s docs. A pre-stop hook is executed immediately before a container is terminated.
Note:
1: These lifecycle hooks are specified per container, unlike init containers, which apply to the whole pod.
2: As their names suggest, they’re executed when the container starts and before it stops.
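A minimal sketch of what such hooks look like in a pod spec (the name, image, and commands below are illustrative placeholders, not taken from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    lifecycle:
      postStart:
        exec:
          # Runs right after the container starts; the container stays in
          # Waiting/ContainerCreating until this command completes.
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # Runs immediately before the container is terminated.
          command: ["/bin/sh", "-c", "nginx -s quit"]
```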
I hope this helps you land on a new approach!

Related

Init container to be run only once per deployment

I use initContainers inside a deployment. An init container is created for each pod.
Is it possible to have only one init container for the whole deployment?
edit:
use case: I need to run some DB migration commands before creating the pods, so I have put these commands inside init containers.
problems:
- each time a pod is created, the init container is created as well
- on scale-up, an init container is created for each new pod
solution:
I finally found a good example of how to solve this problem in this article.
While it is not possible to have one init container shared by the whole deployment (which is by definition impossible, since the unit of workload scheduling in Kubernetes is the pod), there are things you can use to get similar functionality easily.
One way to achieve this would be to have an init container in every pod that waits for a specific Job to be executed successfully, and a Job that does the actual init logic.
Another would be to use leader election among your init containers, so that only one executes the logic and the rest just wait for the lock to be released before exiting successfully.
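A rough sketch of the wait-for-a-Job pattern, assuming a migration Job named db-migrate and an init image that ships kubectl with a service account allowed to read Jobs; all names and images below are hypothetical:

```yaml
# The Job carries the one-time init logic (e.g. DB migrations).
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                       # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: my-registry/db-migrator:1.0   # hypothetical image
---
# Every pod of the Deployment blocks on that Job via an init container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                           # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-migration
        image: bitnami/kubectl           # any image with kubectl; needs RBAC to get Jobs
        command: ["kubectl", "wait", "--for=condition=complete", "--timeout=300s", "job/db-migrate"]
      containers:
      - name: app
        image: my-registry/my-app:1.0    # hypothetical image
```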
No, pods are identical, so if you use an init container it has to be attached to every pod in your deployment.
You're welcome to comment and tell us more about your use case, because it sounds like the problem you are trying to solve is not the best fit for init containers.

Set a Pod as owner reference for another pod in client go program

Is it possible to set a running pod as the owner of another pod that is about to be created? I tried, but in that case pod creation fails.
This is not directly supported by Kubernetes. When you have a Pod that depends on the existence of another one (e.g. it needs a database or similar), you could use an init container. This delays the start of the main container until the init container finishes, which is a good way to implement e.g. waiting conditions.
I think you can use Kubernetes Jobs.
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
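For illustration, a minimal Job manifest along those lines (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot-task        # hypothetical name
spec:
  completions: 1             # the Job is complete after one successful Pod
  backoffLimit: 4            # re-create the Pod up to 4 times on failure
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo doing the work && exit 0"]
```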
You can find more information here: jobs-kubernetes.

Running init container on deployment update

What triggers an init container to be run?
Will editing the deployment descriptor (or updating it with helm), for example changing the image tag, trigger the init container?
Will deleting the pod trigger the init container?
Will reducing the replica count to zero and then increasing it trigger the init container?
Is it possible to manually trigger an init container?
What triggers an init container to be run?
Basically, initContainers are run every time a Pod that has such containers in its definition is created, and a Pod can be created for quite different reasons. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that triggers the start of an initContainer is, among others, a previously failed attempt to start it.
Will editing the deployment descriptor (or updating it with helm), for example changing the image tag, trigger the init container?
Yes, basically every change to a Deployment definition that triggers the creation/re-creation of the Pods it manages also triggers their initContainers to be run. It doesn't matter whether you manage it with helm or manually. Some slight changes, such as adding a new set of labels to your Deployment, don't make it re-create its Pods, but changing the container image certainly causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create its Pods.
Will deleting the pod trigger the init container?
No, deleting a Pod by itself will not trigger the init container. If you delete a Pod that is not managed by any controller, it is simply gone, and no automatic mechanism will re-create it or run its initContainers. If you delete a Pod that is managed by a controller, say a ReplicaSet, the controller will detect that there are fewer Pods than declared in its yaml definition and will create a new Pod to match the desired/declared state. So, to highlight it again: it is not the deletion of the Pod that triggers its initContainers, but Pod creation, whether manual or performed by a controller such as a ReplicaSet, and that creation can of course be triggered by manually deleting a Pod managed by such a controller.
Will reducing the replica count to zero and then increasing it trigger the init container?
Yes, because when you reduce the number of replicas to 0, you make the controller delete all Pods under its management. When they are re-created, their entire startup process is repeated, including running the initContainers that are part of such Pods.
Is it possible to manually trigger an init container?
As @David Maze already stated in his comment, the only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that. I would say it depends on what you mean by the term manually. If you're asking whether it is possible to somehow trigger an initContainer without restarting/re-creating a Pod - no, it is not possible. Starting initContainers is tightly bound to Pod creation, in other words to the Pod's startup process.
By the way, everything you're asking about in your question is quite easy to test. You have a lot of working examples in the official Kubernetes docs that you can use for testing different scenarios, and you can also create a simple initContainer yourself, e.g. using the busybox image, whose only task is to sleep for the required number of seconds (see the sketch after the links below). Here are some useful links from different k8s docs sections related to initContainers:
Init Containers
Debug Init Containers
Configure Pod Initialization
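For example, a throwaway test Pod along those lines could look like this (all names are arbitrary); watching it with kubectl get pod -w shows the Init:0/1 phase while the init container sleeps:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # hypothetical name
spec:
  initContainers:
  - name: wait
    image: busybox
    command: ["sh", "-c", "sleep 30"]   # stands in for real init work
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app started && sleep 3600"]
```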

How to ensure that a pod does not restart?

I have a pod that is meant to run a code excerpt and exit afterwards. I do not want this pod to restart after exiting, but apparently it is not possible to set a restart policy other than Always for pods managed by a Deployment (see here and here).
Therefore my question is: how can I implement a pod that runs only once?
Thank you
You need to deploy a Job. A Deployment is meant to keep the containers running all the time. Have a look at:
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
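A minimal sketch of such a Job (the name, image, and command are placeholders); the Pod runs the code once and is not restarted afterwards:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: run-once               # hypothetical name
spec:
  backoffLimit: 0              # don't re-create the Pod even if it fails
  template:
    spec:
      restartPolicy: Never     # the container is not restarted in place
      containers:
      - name: code
        image: my-registry/my-code:1.0      # hypothetical image
        command: ["sh", "-c", "/run-my-excerpt.sh"]   # hypothetical command
```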

Set up a delay for pod shutdown on update

Is it possible to set up a delay for pod shutdown after a rolling update on Kubernetes?
For example, I roll out a new version and want the old Pods to keep running for a further 15 seconds after the new instance has started.
How can I manage that?
Yes, you can use a PreStop hook to achieve that.
PreStop hooks are executed after a Pod is marked as terminating. See what happens when you delete a pod here.
You just have to run sleep 15 in the PreStop hook.
For more details see Container hooks.
See how to add a PreStop hook here: Define postStart and preStop handlers.
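A hedged sketch of what that could look like (the Deployment name and image are placeholders); note that terminationGracePeriodSeconds (30s by default) must be longer than the sleep:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 30 # must exceed the preStop sleep
      containers:
      - name: app
        image: my-registry/my-app:2.0   # hypothetical image
        lifecycle:
          preStop:
            exec:
              # Keep the old pod around ~15s after it is marked terminating.
              command: ["sh", "-c", "sleep 15"]
```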