Complete Kubernetes Jobs when one container completes - kubernetes

Is it possible to have a Job that completes when one of its containers completes?
For example, I want to run a Job with one pod containing 2 containers:
Elasticsearch container
Some Java app container connecting to Elasticsearch
The Java app container runs and completes, but obviously the Elasticsearch container continues to run indefinitely.
As a result the Job never completes. What is the solution?
Cheers

This is probably not the easiest way to do it, but you could use the Kubernetes API to delete the job:
https://kubernetes.io/docs/api-reference/v1.7/#delete-41.
I'm not sure how you're starting the job, or how realistic this solution is in your scenario.

Not sure about your use case. My understanding is that Elasticsearch should be running all the time so the data can be queried.
You can run two different pods: one for Elasticsearch and another one for your Java application. Just call the Java application from your Job.
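A minimal sketch of that split, assuming Elasticsearch already runs separately behind a Service named elasticsearch (all names, images and the env var below are assumptions, not taken from the question):

# Job that runs only the Java application; Elasticsearch runs on its own
# as a Deployment/StatefulSet behind a Service (assumed name: elasticsearch).
apiVersion: batch/v1
kind: Job
metadata:
  name: java-app-job                      # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: java-app
        image: example/java-app:1.0       # placeholder image
        env:
        - name: ELASTICSEARCH_URL         # hypothetical variable read by the app
          value: http://elasticsearch:9200
      restartPolicy: Never
  backoffLimit: 2

With this layout the Job completes as soon as the Java container exits successfully, while Elasticsearch keeps its own lifecycle.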

You should look at the livenessProbe capability. This is a probe defined on a container in the pod spec, which runs every x seconds while your container is running, to make sure it is running correctly. When a liveness probe fails, Kubernetes will terminate the container. Here is the official Kubernetes documentation on liveness and readiness probes.
The strategy here would be to use the liveness probe on the Elasticsearch container to check that the Java app has a connection to it. As soon as the Java app completes, the connection will no longer be there, causing the liveness probe to fail, and Kubernetes will terminate the Elasticsearch container.
Look out though: I think the kubelet restarts a container that is terminated by a liveness probe failure, depending on the pod's restartPolicy. You might want to look into preventing that.
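A rough, assumption-heavy sketch of that strategy (it checks from inside the Elasticsearch container for an established client connection on port 9200 and assumes netstat is available in that image; the image tag and timings are placeholders):

# Fragment of the Job's pod spec: a liveness probe on the Elasticsearch
# container that fails once no client is connected to port 9200 any more.
containers:
- name: elasticsearch
  image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # placeholder version
  livenessProbe:
    exec:
      command:
      - sh
      - -c
      - "netstat -tn | grep -q ':9200 .*ESTABLISHED'"   # assumes netstat exists in the image
    initialDelaySeconds: 60   # must be long enough for the Java app to connect
    periodSeconds: 10
    failureThreshold: 3

After failureThreshold consecutive failures the kubelet kills the Elasticsearch container; what happens next depends on the pod's restartPolicy, as noted above.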

Related

Openshift asynchronous wait for container to be running

I'm creating multiple pods at the same time in Openshift, and I also want to check that the containers inside the pods are working correctly.
Some of these containers can take a while to start up, and I don't want to wait for one pod to be fully running before starting up the other one.
Are there any Openshift / Kubernetes checks I can do to ensure a container has booted up, while also going ahead with other deployments?
Please configure the Liveness and Readiness Probes.
Liveness: under what circumstances is it appropriate to restart the container?
Readiness: under what circumstances should we take the pod out of the list of service endpoints so that it no longer responds to requests?
...Some of these containers can take a while to start up
A liveness probe is not a good option for containers that require an extended startup time, mainly because you would have to set a long initial delay to cater for the startup; that delay is irrelevant once the container is running and leaves you unable to detect problems in time during execution. Instead, use a startup probe to handle and detect problems during startup and hand over to the liveness probe upon success; should the startup probe fail, the container is restarted according to its restartPolicy.
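A minimal sketch of that handover, assuming the app exposes an HTTP health endpoint at /healthz on port 8080 (endpoint, port, image and timings are assumptions):

# The startup probe tolerates a slow boot (up to 30 * 10 = 300 seconds here);
# the liveness probe only takes over once the startup probe has succeeded.
containers:
- name: app
  image: example/slow-starting-app:1.0   # placeholder image
  ports:
  - containerPort: 8080
  startupProbe:
    httpGet:
      path: /healthz                     # assumed health endpoint
      port: 8080
    failureThreshold: 30                 # allow up to 30 attempts...
    periodSeconds: 10                    # ...10 seconds apart during startup
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10                    # tight checking once the app is up
    failureThreshold: 3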

How to make deployment restart if another pod restarts

I have a web deployment and a MongoDB statefulset. The web deployment connects to the MongoDB, but once in a while an error may occur in the MongoDB and it reboots and starts up again. The connection from the web deployment to the MongoDB never gets re-established. Is there a way, if the MongoDB pod restarts, to restart the web pod as well?
Yes, you can use a liveness probe on your application container that probes your Mongo Pod/StatefulSet. You can configure it in such a way that it fails when it cannot open a TCP connection to your Mongo Pod/StatefulSet, i.e. when Mongo crashes (maybe check every second).
Keep in mind that with this approach you will have to always start your Mongo Pod/StatefulSet first.
The sidecar function described in the other answer should work too, only it would take a bit more configuration.
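A minimal sketch of that probe, assuming the MongoDB StatefulSet is reachable through a Service named mongo on port 27017 (the names and timings are assumptions); TCPSocketAction accepts an explicit host, which otherwise defaults to the pod's own IP:

# Liveness probe on the web container that opens a TCP connection to the
# Mongo Service; while Mongo is down, the web container keeps being restarted.
containers:
- name: web
  image: example/web:1.0          # placeholder image
  livenessProbe:
    tcpSocket:
      host: mongo                 # assumed Service name in front of the StatefulSet
      port: 27017
    initialDelaySeconds: 10
    periodSeconds: 1              # "maybe check every second"
    failureThreshold: 5

Restarting the web container re-runs its entrypoint, which is usually enough for it to reconnect once Mongo is back.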
Unfortunately, there's no easy way to do this within Kubernetes directly, as Kubernetes has no concept of dependencies between resources.
The best place to handle this is within the web server pod itself.
The ideal solution is to update the application to retry the connection on a failure.
A less ideal solution would be to have a side-car container that just polls the database and causes a failure if the database goes down, which should cause Kubernetes to restart the pod.

Kubernetes - force pod restart if container fails to re-trigger init containers

I found in my pod that if a container fails or is killed due to failing liveness probe, the container is restarted, but the pod is not.
This means that initContainers are not run again in a case of a crashed container.
In my case, I do need to run one of the initContainers every time the main container fails.
Is this possible? Am I missing something?
Currently, this is simply not supported: livenessProbe is a "container level" probe, and if this probe fails only the said container is restarted, not the whole Pod.
Though, you'll be glad to know that this behaviour is currently being worked on in this PR: https://github.com/kubernetes/community/pull/2342.
As a workaround until that lands and you eventually upgrade, you would have to rethink why you really need your initContainers in the first place, and consider a different kind of co-ordination between your pod's containers (be they initContainers or not) through a shared volume or some other scheme, depending on your use case.
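One hedged sketch of such a workaround: move the work out of the initContainer and into the main container's entrypoint, so it re-runs on every container restart (the image and script paths are placeholders, not from the question):

# Instead of an initContainer, run the setup step at the start of the main
# container's command, so a liveness-probe restart re-runs it as well.
containers:
- name: app
  image: example/app:1.0          # placeholder image
  command: ["sh", "-c"]
  args:
  - |
    set -e
    /scripts/setup.sh             # work formerly done by the initContainer
    exec /app/start.sh            # then start the real application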

How to reduce the "unhealthy" delay during pod startup?

I am using Kubernetes to start Java pods. The pod startup delay varies between 10 seconds and about a minute, depending on the load of the node, the time Flyway takes to migrate the tables, ...
To avoid having Kubernetes kill the pods that are still starting, we set the liveness probe with an initial delay of two minutes.
That saves us from pods being eternally killed because they start too slowly, but in case of scaling up or crash recovery we lose a couple of seconds / minutes before the freshly started pod joins the service.
Is there any way to optimize that?
A way to tell Kubernetes "we are live, you can start using the liveness probe" before the initial delay?
For starters, this will not happen at all. The liveness probe does not control how pods are joined to the service: as you stated, it restarts the container if it fails to satisfy the probe, but it does not make the service wait for a successful liveness probe before the pod is added as a service endpoint. For that you have a separate readiness probe, so this should be a non-issue for you (btw, you might want to use both readiness and liveness probes to get an optimal setup).
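A minimal sketch of using both probes, assuming the Java app exposes /health on port 8080 (endpoint, port, image and timings are assumptions); the readiness probe gates traffic, while the liveness probe only decides about restarts:

containers:
- name: java-app
  image: example/java-app:1.0     # placeholder image
  ports:
  - containerPort: 8080
  readinessProbe:                 # controls when the pod becomes a service endpoint
    httpGet:
      path: /health               # assumed health endpoint
      port: 8080
    periodSeconds: 2              # checked often, so the pod joins the service quickly
    failureThreshold: 3
  livenessProbe:                  # only decides when to restart the container
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 120      # generous delay to survive slow startups
    periodSeconds: 10

With this split, the long initial delay only affects restart decisions; traffic is routed to the pod as soon as the readiness probe succeeds.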
I think you need to reduce the work the container is doing.
You mention the database migrations. It may be better to supply them as a one-time Job to Kubernetes instead of trying to run them at every start. Effectively, for a certain version of your software you only need to do them once, while each subsequent start still has to do the work of checking whether the database schema is already up to date.
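A hedged sketch of that idea: run the Flyway migrations once per release as a Job, so the application pods no longer pay the migration cost on startup (the image name, Job name and connection details are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: flyway-migrate-v42                     # hypothetical, one Job per release
spec:
  template:
    spec:
      containers:
      - name: flyway
        image: example/myapp-migrations:42     # placeholder image bundling the migration scripts
        args: ["migrate"]                      # assumes the image's entrypoint wraps the Flyway CLI
        env:
        - name: FLYWAY_URL                     # Flyway can read its settings from FLYWAY_* variables
          value: jdbc:postgresql://db:5432/myapp   # placeholder JDBC URL
      restartPolicy: Never
  backoffLimit: 1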

"Sidecar" containers in Kubernetes pods

I'd like a multi-container pod with a couple of components:
A "main" container which contains a build job
A "sidecar" container which contains an HTTP proxy, used by the "main" container
This seems to fit well with the pod design philosophy as described in the Kubernetes documentation, but I believe so long as the "sidecar" runs, the pod is kept alive. In my case, the "main" container is not long-lived; once it exits, the "sidecar" should be terminated.
How can I achieve this?
A pod is running as long as one of the containers is running. If you need them to exit together, you have to arrange that the sidecar dies. We do not have a notion of "primary" vs "secondary" containers wrt lifecycle, though that's sort of interesting.
One option would be to use an emptyDir volume and write a file telling the sidecar "time to go". The sidecar would exit when it sees that file.
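A minimal sketch of that file-based signal, assuming both images contain a shell; the commands shown are placeholders for the real build and proxy binaries:

# Fragment of the pod spec: the "main" container writes a marker file when the
# build is done; the sidecar's wrapper loop notices it and stops the proxy.
volumes:
- name: shared
  emptyDir: {}
containers:
- name: main
  image: example/build-job:1.0       # placeholder image
  command: ["sh", "-c", "run-build && touch /shared/done"]   # hypothetical build command
  volumeMounts:
  - name: shared
    mountPath: /shared
- name: proxy-sidecar
  image: example/http-proxy:1.0      # placeholder image
  command: ["sh", "-c"]
  args:
  - |
    proxy &                          # hypothetical command that starts the proxy
    PROXY_PID=$!
    while [ ! -f /shared/done ]; do sleep 2; done
    kill "$PROXY_PID"                # stop the proxy once the marker file appears
  volumeMounts:
  - name: shared
    mountPath: /shared

With restartPolicy: Never (as in a Job's pod template), the pod completes once both containers have exited.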
For anyone still looking for an answer to this, the sidecar feature is being developed and should be out in v1.17 of Kubernetes, which will have this exact requested behaviour.
From the proposal:
One-line enhancement description: Containers can now be marked as sidecars so that they startup before normal containers and shutdown after all other containers have terminated.
https://github.com/kubernetes/enhancements/issues/753
UPDATE: looks like it's planned for v1.18 now
Have you considered using the Job resource (http://kubernetes.io/docs/user-guide/jobs/)?
The feature might not have been available when the question was asked, but one can now define postStart and preStop handlers for a pod's containers. You could probably define a preStop hook in your main container to kill the sidecar container.
You could use the liveness probe to help with this. The sidecar's probe checks on the "main" container (in addition to any of its own checks). Once the main container goes down, the liveness probe fails, and then the pod should be recycled.