"Sidecar" containers in Kubernetes pods - kubernetes

I'd like a multi-container pod with a couple of components:
A "main" container which contains a build job
A "sidecar" container which contains an HTTP proxy, used by the "main" container
This seems to fit well with the pod design philosophy described in the Kubernetes documentation, but as I understand it, the pod is kept alive as long as the "sidecar" is running. In my case the "main" container is not long-lived; once it exits, the "sidecar" should be terminated as well.
How can I achieve this?

A pod is running as long as at least one of its containers is running. If you need them to exit together, you have to arrange for the sidecar to die. We do not have a notion of "primary" vs "secondary" containers with respect to lifecycle, though that's sort of interesting.
One option would be to use an emptyDir volume and write a file telling the sidecar "time to go". The sidecar would exit when it sees that file.
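A minimal sketch of that workaround, assuming a hypothetical build image and proxy image (the run-build and proxy commands are placeholders): the containers share an emptyDir, the main container touches a file when it is done, and the sidecar polls for that file and exits once it appears.

    apiVersion: v1
    kind: Pod
    metadata:
      name: build-with-proxy                  # hypothetical name
    spec:
      restartPolicy: Never
      volumes:
      - name: signal
        emptyDir: {}
      containers:
      - name: main
        image: example.com/build-job:latest   # hypothetical build image
        # run the build, then drop the "time to go" file for the sidecar
        command: ["sh", "-c", "run-build; touch /signal/done"]
        volumeMounts:
        - name: signal
          mountPath: /signal
      - name: proxy
        image: example.com/http-proxy:latest  # hypothetical proxy image
        # start the proxy in the background, then exit once the signal file appears
        command: ["sh", "-c", "proxy & while [ ! -f /signal/done ]; do sleep 2; done"]
        volumeMounts:
        - name: signal
          mountPath: /signal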

For anyone still looking for an answer to this, the sidecar feature is being developed and should be out in v1.17 of Kubernetes, which will have this exact requested behaviour.
From the proposal:
One-line enhancement description: Containers can now be marked as sidecars so that they startup before normal containers and shutdown after all other containers have terminated.
https://github.com/kubernetes/enhancements/issues/753
UPDATE: looks like it's planned for v1.18 now

Have you considered using the Jobs resource? http://kubernetes.io/docs/user-guide/jobs/
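For reference, a Job wrapping the build could look roughly like the sketch below (the name, image, and command are placeholders). Note that on its own this doesn't solve the sidecar-shutdown problem, since the Job's pod only completes once every container in it has exited.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: build-job                             # hypothetical name
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: main
            image: example.com/build-job:latest   # hypothetical image
            command: ["run-build"]                # hypothetical build command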

The feature might not have been available when the question was asked, but one can now define postStart and preStop handlers for containers in a pod. You could probably define a preStop handler in your main container to kill the sidecar container.

You could use a liveness probe to help with this. The probe on the sidecar checks for the "main" container (as well as any of the sidecar's own checks). Once the main container goes down, the liveness probe fails, and the pod should then be recycled.
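A rough sketch of that idea, under a couple of assumptions: the cluster supports shareProcessNamespace, the proxy image contains pgrep, and the image names and the run-build command are placeholders. With a shared process namespace the sidecar's liveness probe can simply check whether the build process still exists; what happens after the probe fails depends on the pod's restartPolicy.

    apiVersion: v1
    kind: Pod
    metadata:
      name: build-with-proxy                  # hypothetical name
    spec:
      shareProcessNamespace: true             # lets the sidecar's probe see the main container's process
      restartPolicy: Never
      containers:
      - name: main
        image: example.com/build-job:latest   # hypothetical image
        command: ["run-build"]                # hypothetical build command
      - name: proxy
        image: example.com/http-proxy:latest  # hypothetical image; assumed to contain pgrep
        livenessProbe:
          exec:
            command: ["pgrep", "-f", "run-build"]   # fails once the build process is gone
          initialDelaySeconds: 15
          periodSeconds: 10
          failureThreshold: 2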

Related

How to avoid starting kubernetes pod liveness checks until all containers are running

I'm having trouble with health checks starting too early on a kubernetes pod with multiple containers. My pod is set up like this:
main-container (nodejs)
sidecar container (http proxy)
Currently the health checks are configured on the sidecar container, and end up hitting both containers (proxy, then main container).
If the main container starts quickly, then everything is fine. But if the sidecar starts quickly and the main container starts slowly (e.g. if the image needs to be pulled) then the initial liveness checks start on the sidecar before the other container has even started.
Is there a way of telling kubernetes: don't start running any probes (liveness or readiness checks) until all the containers in the pod have started?
I know I can use a startupProbe to be more generous while waiting for startup, but ideally, and to avoid other monitoring warnings, I'd prefer to suppress the health/liveness probes completely until all the containers have started.
Answering your question: yes, there is a way of doing so, using a startupProbe on your sidecar container pointing to your main application's open port. As per the documentation, all other probes (per container) are disabled if a startup probe is provided, until it succeeds. For more information about how to set up a startup probe, see the Kubernetes documentation on configuring probes.
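A sketch of that setup, assuming the Node.js app listens on port 3000, the proxy on 8080, and both expose a /healthz endpoint (ports, paths, and images are all placeholders): the sidecar's startupProbe targets the main app's port, and the sidecar's own liveness probe stays disabled until that startup probe has succeeded.

    containers:
    - name: main-container
      image: example.com/node-app:latest      # hypothetical image
      ports:
      - containerPort: 3000
    - name: sidecar
      image: example.com/http-proxy:latest    # hypothetical image
      ports:
      - containerPort: 8080
      startupProbe:
        httpGet:
          path: /healthz                      # hypothetical health endpoint on the main app
          port: 3000                          # the main container's port, reachable via the shared pod network
        periodSeconds: 5
        failureThreshold: 60                  # tolerates a few minutes of slow startup / image pulls
      livenessProbe:
        httpGet:
          path: /healthz                      # the proxy's own health endpoint (hypothetical)
          port: 8080
        periodSeconds: 10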

sidecar vs init container in kubernetes

I am having trouble distinguishing between a sidecar and an init container. So far, I understand that the real app containers wait for the init container to do something. However, a sidecar could do the same thing, could it not? And vice versa, init containers don't die off, so they also run "on the side". Hence my confusion.
Thanks for the help.
Init containers are used to initialize something inside your Pod. The init containers will run and exit. Once every init container has exited with code 0, your main containers will start.
Examples of init-container tasks are:
Moving some files into your application containers, e.g. themes or configuration. This example is also described in the Kubernetes docs.
Kubernetes itself does not know anything about sidecars. Sidecar containers are a pattern to solve some use cases. Kubernetes only distinguishes between init containers and the containers running inside your Pod.
Typically, we call sidecars all the containers that do not provide a user-focused service. For example, this could be a proxy or something for easier database access. If you're running a Java app, you could use a sidecar to export JVM metrics in Prometheus format.
The difference here is that your sidecar containers are meant to run all the time. If one of your non-init containers exits, Kubernetes will restart that container (according to the pod's restartPolicy).
And that's the difference:
Init containers run and exit before your main application starts
Sidecars run side-by-side with your main container(s) and provide some kind of service for them.
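A small sketch showing both patterns side by side (the names, images, and commands are placeholders): the init container runs to completion before the other two start, while the app and its metrics-exporter sidecar then run together for the pod's whole lifetime.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar                     # hypothetical name
    spec:
      initContainers:
      - name: fetch-config                       # runs and exits before the main containers start
        image: busybox:1.36
        command: ["sh", "-c", "echo 'setting=1' > /config/app.conf"]
        volumeMounts:
        - name: config
          mountPath: /config
      containers:
      - name: app                                # the user-focused service
        image: example.com/java-app:latest       # hypothetical image
        volumeMounts:
        - name: config
          mountPath: /config
      - name: jvm-metrics-exporter               # sidecar: runs alongside the app the whole time
        image: example.com/jmx-exporter:latest   # hypothetical image
      volumes:
      - name: config
        emptyDir: {}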

Kubernetes - force pod restart if container fails to re-trigger init containers

I found in my pod that if a container fails or is killed due to a failing liveness probe, the container is restarted, but the pod is not.
This means that initContainers are not run again in a case of a crashed container.
In my case, I do need to run one of the initContainers every time the main container fails.
Is this possible? Am I missing something?
Currently, this is simply not supported: livenessProbe is a container-level probe, and if this probe fails, only that container is restarted, not the whole Pod.
Though, you'll be glad to know that this behaviour is currently being worked on in this PR: https://github.com/kubernetes/community/pull/2342.
As a workaround until that is done and your cluster is updated, you'd have to rethink why you really need your initContainers in the first place, and consider a different coordination between your pod containers (be they initContainers or not), through a shared volume or some other mechanism, depending on your use case.
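One such rethink, sketched below under the assumption that the initialization is a script the main image can run itself (the image and paths are hypothetical): folding the init step into the main container's entrypoint makes it re-run on every container restart, which an initContainer will not do.

    containers:
    - name: main
      image: example.com/my-app:latest        # hypothetical image
      # /scripts/init.sh re-runs every time the kubelet restarts this container,
      # unlike an initContainer, which only runs again when the whole Pod is recreated
      command: ["sh", "-c", "/scripts/init.sh && exec /app/start"]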

Complete Kubernetes Jobs when one container completes

Is it possible to have a Job that completes when one container completes?
For example, I want to run a Job of one pod with 2 containers:
Elasticsearch container
Some Java app container connecting to Elasticsearch
The Java app container runs and complete, but obviously the Elasticsearch container continues to run indefinitely.
As a result the Job never completes. What is the solution?
Cheers
This is probably not the easiest way to do it, but you could use the Kubernetes API to delete the job:
https://kubernetes.io/docs/api-reference/v1.7/#delete-41.
I'm not sure how you're starting the job, or how realistic this solution is in your scenario.
I'm not sure about your use case. My understanding is that Elasticsearch should be running all the time to query the data.
You can run two different pods: one for Elasticsearch and another for your Java application, and just call the Java application from your Job.
You should look at the livenessProbe capability. A liveness probe is defined per container (for example in a Deployment's pod template) and runs every x seconds while your container is running, to make sure it is running correctly. When a liveness probe fails, Kubernetes will terminate the container. See the official Kubernetes documentation on liveness and readiness probes.
The strategy here would be to use a liveness probe on the Elasticsearch container to check that the Java app still has a connection to it. As soon as the Java app completes, the connection will no longer be there, causing the liveness probe to fail, and Kubernetes will terminate the Elasticsearch container.
Look out though: I think the kubelet will restart the container if it is terminated by a liveness probe failure. You might want to look into the pod's restartPolicy to prevent that.
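A rough sketch of that strategy, with heavy caveats: it assumes the Elasticsearch image ships with a shell and the ss utility (it may not), that the Java app keeps a TCP connection to port 9200 open for its whole run, and the Java image shown is a placeholder. The exec probe fails once no established client connection to port 9200 remains.

    containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
      livenessProbe:
        exec:
          # fails once there is no longer an established client connection to port 9200
          command: ["sh", "-c", "ss -Htn state established sport = :9200 | grep -q ."]
        initialDelaySeconds: 60               # give the Java app time to connect first
        periodSeconds: 15
    - name: java-app
      image: example.com/loader-job:latest    # hypothetical image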

How to manifest a container with /dev/console from a pod definition with Kubernetes?

We use systemd in our container to manage the processes running in the container.
We configure journald in the container so that it sends all logs to /dev/console.
In order to have /dev/console in a container, we have to use the "-t" option of Docker when we deploy the container.
I would like to ask what the equivalent way is with Kubernetes. Where can we state in the pod manifest that we need /dev/console in the containers?
I understand that with kubectl it is possible (with "--tty" or "-t"). But we do not want to start containers with kubectl.
We do support TTY containers in kubernetes v1.1, but not a tty without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice", but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a pod. So it would not help you deploy pods with /dev/console defined.
As far as I can see there's no way in current Kubernetes to cause pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has the most value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per pod, and/or multiple pods as replicas, including monitoring, logging, restart, etc. Having a separate launch/init and logging scheme inside each container doesn't fit the usual Kubernetes use case.