I use initContainers inside a deployment. An init container is created for each pod.
Is it possible to get only one init container for the whole deployment?
edit:
use case: I need to run some DB migration commands before the pods are created, so I have put these commands inside the init containers.
problems:
- each time a pod is created, the init container is created as well
- on scale-up, an init container is created for each new pod
solution:
I have finally found a good example of how to solve this problem in this article.
While it is not possible to have one init container shared by the whole deployment (by definition, since the unit of workload scheduling in Kubernetes is the pod), there are ways to get similar functionality fairly easily.
One way to achieve this would be to have an init container on every pod that waits for a specific Job to complete successfully, and a Job that does the actual init logic.
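A minimal sketch of this approach, assuming a hypothetical migration Job named db-migrate, a hypothetical application image containing a migrate.sh script, and RBAC that allows the pods' service account to read Jobs:

# Job that runs the actual migration once
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-registry/my-app:latest      # hypothetical image with the migration commands
          command: ["sh", "-c", "./migrate.sh"]
---
# excerpt of the Deployment's pod template: every pod just waits for that Job
initContainers:
  - name: wait-for-migration
    image: bitnami/kubectl:latest
    command:
      - sh
      - -c
      - kubectl wait --for=condition=complete job/db-migrate --timeout=300s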
Another one would be to use leader election among your init containers so only one executes and the rest just wait for the lock to be released before they exit successfully.
No, pods are identical so if you use an init container it has to be attached to every pod in your deployment.
You're welcome to comment and tell us more about your use case, because it sounds like the problem you are trying to solve is not the best fit for init containers.
Related
I have a problem statement where there is a Kubernetes cluster and I have some pods running on it.
Now, I want some functions/processes to run once per deployment, independent of the number of replicas.
These processes use the same image as the one in the deployment YAML.
I cannot use init containers or sidecars, because they would run alongside the main container for each replica.
I tried to create a new image and then a pod out of it, but this pod keeps running, which wastes cluster resources; it should be destroyed after it has done its job. Also, the main container depends on the completion of this process in order to run the "command" part of the K8s spec.
Looking for suggestions on how to tackle this?
Theoretically, you could write an admission controller webhook to intercept create/update operations on Deployments and trigger your functions as you want. If your functions need to be checked, use a ValidatingWebhookConfiguration to validate the process and then accept or deny the request.
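A minimal sketch of such a configuration, assuming a hypothetical webhook server exposed by a Service named init-webhook-svc in the default namespace:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-init-hook            # hypothetical name
webhooks:
  - name: deployment-init.example.com   # hypothetical webhook name
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: default
        name: init-webhook-svc           # hypothetical Service fronting your webhook server
        path: /validate
      caBundle: <base64-encoded CA>      # CA bundle that signed the webhook server certificate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                  # reject the request if the webhook is unreachable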
I am running into a situation where init container execution to completion has to be time-bounded. Can someone recommend a strategy to achieve this? What I have tried so far:
- activeDeadlineSeconds: this attribute is supported on a Pod but not on a ReplicaSet, so it cannot be used inside a Deployment object.
- killing the init container from inside when a timer expires; this is not working as expected, please refer to the link.
- progressDeadlineSeconds: this doesn't take init containers into account.
One possible solution could be adding lifecycle hooks.
Pods also allow you to define two lifecycle hooks:
1: Post-start hooks: K8s docs
Remember: Until the hook completes, the container will stay in the Waiting state with the reason ContainerCreating. Because of this, the pod’s status will be Pending instead of Running. If the hook fails to run or returns a non-zero exit code, the main container will be killed.
2: Pre-stop hooks: K8s docs; a pre-stop hook is executed immediately before a container is terminated.
Note:
1: These lifecycle hooks are specified per container, unlike init containers, which apply to the whole pod.
2: As their names suggest, they’re executed when the container starts and before it stops.
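A minimal sketch of both hooks on a single container (the pod name, image and commands are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-hooks-demo             # hypothetical example pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      lifecycle:
        postStart:
          exec:
            # runs right after the container is created; the container stays
            # in Waiting/ContainerCreating until this command completes
            command: ["sh", "-c", "echo post-start setup > /tmp/ready"]
        preStop:
          exec:
            # runs immediately before the container is terminated
            command: ["sh", "-c", "nginx -s quit; sleep 5"]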
I hope this helps you land on a new approach!
What triggers init container to be run?
Will editing the deployment descriptor (or updating it with Helm), for example changing the image tag, trigger the init container?
Will deleting the pod trigger the init container?
Will reducing the replica set to zero and then increasing it trigger the init container?
Is it possible to manually trigger init container?
What triggers init container to be run?
Basically, initContainers are run every time a Pod that has such containers in its definition is created, and the reasons for creating a Pod can be quite different. As you can read in the official documentation, init containers run before app containers in a Pod and they always run to completion. If a Pod's init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. So one of the things that triggers starting an initContainer is, among others, a previous failed attempt to start it.
Will editing the deployment descriptor (or updating it with Helm), for example changing the image tag, trigger the init container?
Yes, basically every change to the Deployment definition that triggers creation/re-creation of the Pods it manages also triggers their initContainers to be run. It doesn't matter whether you manage it with Helm or manually. Some slight changes, like adding a new set of labels to your Deployment, don't cause it to re-create its Pods, but changing the container image certainly causes the controller (Deployment, ReplicationController or ReplicaSet) to re-create its Pods.
Will deleting the pod trigger the init container?
No, deleting a Pod will not trigger the init container. If you delete a Pod which is not managed by any controller, it will simply be gone and no automatic mechanism will take care of re-creating it and running its initContainers. If you delete a Pod which is managed by a controller, let's say a ReplicaSet, it will detect that there are fewer Pods than declared in its yaml definition and will create the missing Pod to match the desired/declared state. So I would like to highlight again that it is not the deletion of the Pod that triggers its initContainers to run, but Pod creation, whether manual or managed by a controller such as a ReplicaSet, which of course can be triggered by manually deleting a Pod managed by such a controller.
Will reducing the replica set to zero and then increasing it trigger the init container?
Yes, because when you reduce the number of replicas to 0, you make the controller delete all Pods that fall under its management. When they are re-created, all their startup processes are repeated, including running the initContainers that are part of such Pods.
Is it possible to manually trigger init container?
As @David Maze already stated in his comment, "The only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that." I would say it depends on what you mean by the term manually. If you're asking whether it is possible to somehow trigger an initContainer without restarting / re-creating a Pod - no, it is not possible. Starting initContainers is tightly related to Pod creation, or in other words to its startup process.
By the way, everything you're asking in your question is quite easy to test. You have a lot of working examples in the official Kubernetes docs that you can use for testing different scenarios, and you can also create a simple initContainer yourself, e.g. using the busybox image, whose only task is to sleep for the required number of seconds (see the example after the links below). Here are some useful links from different k8s docs sections related to initContainers:
Init Containers
Debug Init Containers
Configure Pod Initialization
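For example, a minimal test Deployment along those lines (the name and sleep durations are arbitrary):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-demo                        # hypothetical example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: init-demo
  template:
    metadata:
      labels:
        app: init-demo
    spec:
      initContainers:
        - name: wait-a-bit
          image: busybox:1.36
          # its only task is to sleep; watch it run on every Pod (re-)creation
          command: ["sh", "-c", "echo init started; sleep 15"]
      containers:
        - name: main
          image: busybox:1.36
          command: ["sh", "-c", "echo main started; sleep 3600"]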
I'm currently trying to create a job in Kubernetes in a project I've just joined.
This job should wait until 2 other pods are up and running, then run a .sh script to create some data for the testers in the application.
I've figured out I should use initContainers. However, there is one point I don't understand.
Why should I include some environment variables under the env key of initContainers in the job description .yaml file?
I thought I was just waiting for the pods to be initialised, not creating them again. Am I missing something?
Thanks in advance.
initContainers are like the containers running in the Pod, but they are executed before the ones defined under the spec key containers.
Like regular containers, they share some namespaces and IPC with the rest of the Pod, and Kubernetes waits until the declared initContainers have completed successfully before starting the app containers.
Keep in mind that when you create a Pod, you're basically creating an empty container named pause that provides a namespace base for the following containers; so, in the end, the initContainer is not really creating a new Pod. As its name suggests, it's an initializer.
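In practice, the env values under initContainers are usually there so the wait logic knows what to wait for. A minimal sketch, assuming a hypothetical Service name passed as an environment variable and a hypothetical seeder image containing the .sh script:

apiVersion: batch/v1
kind: Job
metadata:
  name: seed-test-data                    # hypothetical Job name
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: wait-for-services
          image: busybox:1.36
          env:
            # hypothetical value: the wait loop reads it to know which
            # Service has to be resolvable before the main container starts
            - name: APP_SERVICE
              value: my-app
          command:
            - sh
            - -c
            - until nslookup "$APP_SERVICE"; do echo waiting for "$APP_SERVICE"; sleep 2; done
      containers:
        - name: create-data
          image: my-registry/test-data-seeder:latest   # hypothetical image with the .sh script
          command: ["sh", "/scripts/create-test-data.sh"]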
I wonder how one would implement a colocated auxiliary container in a Pod within a Deployment which does not provide a service but rather runs a job/batch workload?
The background to my question is that I want to deploy a scalable service in which each instance needs configuration after it starts. This configuration is done via an HTTP POST to its local colocated service instance. I've implemented an auxiliary container for this in order to benefit from colocation, so the auxiliary container always knows which instance needs to be configured.
The problem is that the restartPolicy needs to be defined at the Pod level. I am looking for something like a restart policy of Always for the service and a different restart policy of OnFailure for the configuration job.
I know that k8s provides the Job resource for such workloads. But is there an option to colocate those jobs with Pods?
Furthermore, I've stumbled across the so-called init containers, which can be defined via annotations. But these suffer from the drawback that k8s ensures the actual Pod is only started after the init container has run, so for my scenario it seems unsuitable.
As I understand it, you need your service running in order to configure it.
Your solution is workable and you can set restartPolicy: Always; you just need a way to tell your one-off configuration container that it has already run. You could create and attach an emptyDir volume to your configuration container, create a file on it to mark the configuration as successful, and check for this file from your process. After the initialization you enter sleep in a loop. The downside is that this container will also keep taking up some resources.
Or you can just add an extra process in the same container and do the configuration there (perhaps with the file mentioned above as a guard to avoid configuring twice). So write a simple shell script like this and run it instead of your main process:
#!/bin/sh
(
  # skip the one-off configuration if the guard file already exists
  [ -f /mnt/guard-vol/stamp ] && exit 0
  # run the configuration and mark it as done on success
  /opt/my-config-process parameters && touch /mnt/guard-vol/stamp
) &
# replace the shell with the main process, passing through all arguments
exec /opt/my-main-process "$@"
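A minimal sketch of how the guard volume could be wired into the pod template, assuming the wrapper script above is baked into the image as /opt/entrypoint.sh:

# excerpt of a Deployment pod template (hypothetical names)
spec:
  containers:
    - name: my-service
      image: my-registry/my-service:latest   # hypothetical image containing the wrapper script
      command: ["/opt/entrypoint.sh"]        # run the wrapper instead of the main process
      volumeMounts:
        - name: guard-vol
          mountPath: /mnt/guard-vol
  volumes:
    - name: guard-vol
      emptyDir: {}                           # survives container restarts within the same Pod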
Alternatively, you could implement a separate pod that queries the Kubernetes API for pods of your service with the label configured=false. Configure each one and then remove the label via the API. You should also modify your Service to select configured=true pods.
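A minimal sketch of such a Service selector, assuming hypothetical labels app=my-service and configured:

apiVersion: v1
kind: Service
metadata:
  name: my-service                 # hypothetical Service name
spec:
  selector:
    app: my-service
    configured: "true"             # only route traffic to pods that are already configured
  ports:
    - port: 80
      targetPort: 8080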