I am having trouble distinguishing between a sidecar and an init container. So far, I understand that the real app containers wait for an init container to do something. However, a sidecar could do the same thing, could it not? And vice versa, init containers don't die off, so they also run "on the side". Hence my confusion.
Thanks for the help.
Init containers are used to initialize something inside your Pod. Init containers run to completion and then exit: they run one after another, and each must finish with exit code 0 before the next one starts. Only after all init containers have completed successfully will your main containers start.
Examples of what init containers are used for:
Moving files into your application containers, e.g. themes or configuration, typically via a shared volume. This example is also described in the Kubernetes docs.
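A minimal sketch of that pattern, with made-up names and placeholder images: an init container writes a configuration file to a shared emptyDir volume, and the app container reads it from there.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init                # hypothetical name
spec:
  volumes:
    - name: config
      emptyDir: {}                   # shared scratch volume
  initContainers:
    - name: fetch-config             # runs and exits before the app starts
      image: busybox
      command: ["sh", "-c", "echo 'theme=dark' > /config/app.conf"]
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app                      # main container, starts only after the init container succeeds
      image: nginx                   # stand-in for your application image
      volumeMounts:
        - name: config
          mountPath: /etc/app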
Kubernetes itself does not know anything about sidecars. Sidecar containers are a pattern for solving certain use cases. Kubernetes only distinguishes between init containers and the regular containers running inside your Pod.
Typically, we call sidecars all containers that do not provide a user-focused service. For example, this could be a proxy, or something that makes database access easier. If you're running a Java app, you could use a sidecar to export JVM metrics in Prometheus format.
The difference here is that your sidecar containers are expected to run all the time. If one of your non-init containers exits, Kubernetes will restart that container (subject to the Pod's restartPolicy).
And that's the difference.
Init containers run and exit before your main application starts.
Sidecars run side-by-side with your main container(s) and provide some kind of service for them.
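To make the side-by-side part concrete, here is a rough sketch of a Pod with a main application container and a metrics-exporter sidecar; the image names are placeholders, and both containers run for the whole life of the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: java-app-with-sidecar        # hypothetical name
spec:
  containers:
    - name: app                      # the user-facing service
      image: registry.example.com/java-app:1.0       # placeholder image
      ports:
        - containerPort: 8080
    - name: metrics-exporter         # sidecar: exposes JVM metrics in Prometheus format
      image: registry.example.com/jmx-exporter:1.0   # placeholder image
      ports:
        - containerPort: 5556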
I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) that is NOT part of the cluster and that some special containers are running on. The node has access to some resources that are required for those special containers.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster. So the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be kept separate from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web about it.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require special caution, or am I just worrying too much?
Ahmad, from the above description I understand that you are deploying a Kubernetes cluster using kubeadm, minikube, or a similar solution, and that one of your servers has some special functionality (a GPU, access to special resources, etc.). For placing your special pods on that node you can use a node selector together with the taint and tolerations you mentioned, and I hope you are already doing this; a sketch follows below.
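A rough sketch of that scheduling setup, with made-up node name, label, taint key, and image:

kubectl label nodes special-node dedicated=special             # hypothetical node name and label
kubectl taint nodes special-node dedicated=special:NoSchedule  # keep normal workloads off the node

apiVersion: v1
kind: Pod
metadata:
  name: special-workload             # hypothetical name
spec:
  nodeSelector:
    dedicated: special               # only consider the labelled node
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "special"
      effect: "NoSchedule"           # tolerate the taint so scheduling is allowed
  containers:
    - name: special
      image: registry.example.com/special:latest   # placeholder image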
Coming to running a separate container runtime on one of these nodes, there are mainly two points to consider:
This can be done, and if you do not integrate that container runtime with Kubernetes it is simply one more piece of software running on your server. Say you used kubeadm on all the nodes and you also want to run plain Docker containers on the special node: they will remain separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network for them.
Now comes the storage part: create separate storage volumes for Kubernetes and for the other container runtime, so that if either one fails or gets corrupted it does not affect the other, and to provide isolation.
If you maintain proper isolation from storage through to the network, you can run Kubernetes and a separate container runtime side by side; however, it is not a recommended setup for production environments.
How do I run a startup script on a Kubernetes cluster? I saw that there are ways to run startup scripts on compute instances, but I couldn't find any documentation about doing the same for Kubernetes clusters.
You can use init containers to execute a script before the actual container is started (a minimal sketch follows the excerpt below).
Init containers are specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Because init containers have separate images from app containers, they have some advantages for start-up related code:
Init containers can contain utilities or custom code for setup that are not present in an app image. For example, there is no need to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup.
The application image builder and deployer roles can work independently without the need to jointly build a single app image.
Init containers can run with a different view of the filesystem than app containers in the same Pod. Consequently, they can be given access to Secrets that app containers cannot access.
Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel.
Init containers can securely run utilities or custom code that would otherwise make an app container image less secure. By keeping unnecessary tools separate you can limit the attack surface of your app container image.
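As a minimal sketch (the names and the script are made up), a startup script can live in an init container that prepares a shared volume before the app container starts:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-startup-script      # hypothetical name
spec:
  volumes:
    - name: workdir
      emptyDir: {}
  initContainers:
    - name: startup-script           # runs the script once, then exits
      image: busybox
      command:
        - sh
        - -c
        - |
          # hypothetical setup work: generate or fetch files the app needs
          echo "initialized at $(date)" > /work/index.html
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app
      image: nginx                   # stand-in for your application image
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html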
How does Kubernetes create Pods?
I.e., what are the sequential steps involved in creating a Pod, and how is this implemented in Kubernetes?
Any code reference in Kubernetes repo would also be helpful.
A Pod is described in a definition file and run as a set of Docker containers on a given host that is part of the Kubernetes cluster, much like docker-compose does, but with several differences.
More precisely, a Pod always contains multiple Docker containers, even though only the containers defined by the user are usually visible through the API: a Pod has one extra placeholder container (often called the pause container) created by Kubernetes, which holds the IP for the Pod. So when a Pod is restarted, it is actually the user's containers that are restarted, while the placeholder container remains and keeps the same IP, unlike in plain Docker or docker-compose, where recreating a composition or container changes the IP.
How Pods are scheduled, created, started, restarted if needed, re-scheduled, etc. is a much longer story and a very broad question.
I'd like a multi-container pod with a couple of components:
A "main" container which contains a build job
A "sidecar" container which contains an HTTP proxy, used by the "main" container
This seems to fit well with the pod design philosophy as described in the Kubernetes documentation, but I believe so long as the "sidecar" runs, the pod is kept alive. In my case, the "main" container is not long-lived; once it exits, the "sidecar" should be terminated.
How can I achieve this?
A pod is running as long as one of its containers is running. If you need them to exit together, you have to arrange for the sidecar to die. We do not have a notion of "primary" vs. "secondary" containers with respect to lifecycle, though that's sort of interesting.
One option would be to use an emptyDir volume and write a file telling the sidecar "time to go". The sidecar would exit when it sees that file.
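A rough sketch of that idea, with made-up names, images, and scripts: the main container touches a file on a shared emptyDir volume when it finishes, and the sidecar polls for that file and shuts itself down.

apiVersion: v1
kind: Pod
metadata:
  name: build-with-proxy             # hypothetical name
spec:
  restartPolicy: Never
  volumes:
    - name: signal
      emptyDir: {}                   # shared between the two containers
  containers:
    - name: main                     # the short-lived build job
      image: registry.example.com/build-job:latest   # placeholder image
      command:
        - sh
        - -c
        - |
          ./run-build.sh             # hypothetical build step
          touch /signal/done         # tell the sidecar it is time to go
      volumeMounts:
        - name: signal
          mountPath: /signal
    - name: proxy                    # the sidecar
      image: registry.example.com/http-proxy:latest  # placeholder image
      command:
        - sh
        - -c
        - |
          ./start-proxy.sh &         # hypothetical proxy start, in the background
          PROXY_PID=$!
          while [ ! -f /signal/done ]; do sleep 1; done
          kill "$PROXY_PID"          # main is finished, stop the proxy and exit
      volumeMounts:
        - name: signal
          mountPath: /signal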
For anyone still looking for an answer to this, the sidecar feature is being developed and should be out in v1.17 of Kubernetes, which will have this exact requested behaviour.
From the proposal:
One-line enhancement description: Containers can now be a marked as sidecars so that they startup before normal containers and shutdown after all other containers have terminated.
https://github.com/kubernetes/enhancements/issues/753
UPDATE: looks like it's planned for v1.18 now
Have you considered using the Job resource? http://kubernetes.io/docs/user-guide/jobs/
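For reference, a minimal Job wrapping the build container could look roughly like this (names and image are placeholders); note that it does not by itself solve the problem of terminating the sidecar.

apiVersion: batch/v1
kind: Job
metadata:
  name: build-job                    # hypothetical name
spec:
  backoffLimit: 0                    # do not retry the build on failure
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: registry.example.com/build-job:latest   # placeholder image
          command: ["./run-build.sh"]                    # hypothetical entrypoint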
The feature might not have been available when the question was asked, but one can now define postStart and preStop handlers for the containers in a Pod. You could probably define a preStop handler in your main container to kill the sidecar container.
You could use a liveness probe to help with this. The sidecar's probe checks for the "main" container (in addition to any checks of its own). Once the main container goes down, the liveness probe fails, and the pod should then be recycled.
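One possible shape of that approach, sketched with made-up names and images: the main container keeps a marker file on a shared emptyDir volume while it runs and removes it when it exits, and the sidecar's liveness probe checks for that file, so the kubelet stops the sidecar once the main container has finished (this assumes restartPolicy: Never, as is typical for a one-off build Pod, so the failed sidecar is not restarted).

apiVersion: v1
kind: Pod
metadata:
  name: build-with-probed-sidecar    # hypothetical name
spec:
  restartPolicy: Never
  volumes:
    - name: signal
      emptyDir: {}
  containers:
    - name: main
      image: registry.example.com/build-job:latest    # placeholder image
      command:
        - sh
        - -c
        - |
          touch /signal/main-running # marker: main is alive
          ./run-build.sh             # hypothetical build step
          rm -f /signal/main-running # marker gone: main has finished
      volumeMounts:
        - name: signal
          mountPath: /signal
    - name: proxy
      image: registry.example.com/http-proxy:latest   # placeholder image
      volumeMounts:
        - name: signal
          mountPath: /signal
      livenessProbe:
        exec:
          command: ["test", "-f", "/signal/main-running"]
        initialDelaySeconds: 5       # give main time to create the marker
        periodSeconds: 5
        failureThreshold: 1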
We use systemd in our container to manage the processes running in the container.
We configure journald in the container so that it sends all logs to /dev/console.
In order to have /dev/console in a container, we have to use the "-t" option of Docker when we deploy the container.
I would like to ask what the equivalent way is with Kubernetes. Where can we state in the pod manifest that we need /dev/console in the containers?
I understand that with kubectl it is possible (with "--tty" or "-t"), but we do not want to start containers with kubectl.
We do support TTY containers in kubernetes v1.1, but not a tty without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice", but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a pod. So it would not help you deploy pods with /dev/console defined.
As far as I can see there's no way in current Kubernetes to cause pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has the most value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per pod, and/or multiple pods as replicas, including monitoring, logging, restart, etc. Having a separate launch/init and logging scheme inside each container doesn't fit the usual Kubernetes use case.