Multi-container dependency in a pod in Kubernetes

We have our application built on Kubernetes, and we have many multi-container pods.
We are facing challenges because many of our containers depend on each other for the application to run.
We first need the database container to come up, and only then the application container.
Is there any solution to resolve this dependency, so that our database container comes up first and then our application container?

There's no such feature in Kubernetes, because each application should be responsible for (re)connecting to its dependencies.
However, you can achieve something similar by using an initContainer, which keeps the other containers in the same pod from starting until the init container exits with code 0.
For example, if you run a simple shell script in a busybox container that waits until it can connect to your application's dependencies, your application will only start once those dependencies are reachable.
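A minimal sketch of that approach (the Service name database, port 5432, and the image names are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    # Blocks the main container until the (assumed) "database" Service
    # accepts TCP connections on port 5432.
    - name: wait-for-database
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          until nc -z database 5432; do
            echo "waiting for database..."
            sleep 2
          done
  containers:
    - name: application
      image: my-application:latest   # hypothetical application image
```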

Related

Is it possible to define a dependency on another application/manifest in a kubernetes manifest?

I'm deploying an API application with k3s and I would like to know whether it is possible to define a dependency on another application (potentially already running with its own manifest) in the Kubernetes manifest of an application.
If the dependency isn't running when the dependent application is launched, the dependency should be started through its own manifest.
I've attached a schema below.
Thank you in advance for your answers.
You can manage the dependencies by using initContainers in your Application 1 and Application 3, and a readinessProbe and livenessProbe in your Application 2.
The initContainers in Applications 1 and 3 will check whether Application 2 is live, and the readinessProbe and livenessProbe in Application 2 will ensure that it is fully up and ready to serve.
Init Container: Init containers solve challenges associated with first-run initialization of applications. It’s common for services to depend on the successful completion of a setup script before they can fully start up.
Liveness probe: This probe is mainly used to determine if the container is in the Running state. For example, it can detect service deadlocks, slow responses, and other situations.
Readiness probe: This probe is mainly used to determine if the service is already working normally. Readiness probes cannot be used in init containers. If the pod restarts, all of its init containers must be run again.
Reference: https://www.alibabacloud.com/blog/kubernetes-demystified-solving-service-dependencies_594110
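Putting those pieces together, a rough sketch could look like the following (the Service name, port, health endpoint, and image names are assumptions):

```yaml
# Application 2: probes make Kubernetes mark the pod Ready only once it can serve.
apiVersion: v1
kind: Pod
metadata:
  name: application-2
spec:
  containers:
    - name: app2
      image: app2:latest               # hypothetical image
      readinessProbe:
        httpGet:
          path: /healthz               # assumed health endpoint
          port: 8080
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
---
# Application 1: the init container blocks until Application 2's Service responds.
apiVersion: v1
kind: Pod
metadata:
  name: application-1
spec:
  initContainers:
    - name: wait-for-app2
      image: busybox:1.36
      command: ["sh", "-c", "until wget -q -O- http://application-2:8080/healthz; do sleep 2; done"]
  containers:
    - name: app1
      image: app1:latest               # hypothetical image
```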

sidecar vs init container in kubernetes

I am having trouble distinguishing between a sidecar and an init container. So far, I understand that the real app containers wait for the init container to do something. However, a sidecar could do the same thing, could it not? And vice versa, if init containers don't die off, they also run "on the side". Hence my confusion.
Thanks for the help.
Init containers are used to initialize something inside your Pod. An init container runs and exits; after every init container has exited with code 0, your main containers start.
Examples of what init containers are used for:
Moving some files into your application containers, e.g. themes or configuration. This example is also described in the Kubernetes docs.
Kubernetes itself does not know anything about sidecars. Sidecar containers are a pattern to solve certain use cases; Kubernetes only distinguishes between init containers and the regular containers running inside your Pod.
Typically, we call sidecars all containers that do not provide a user-focused service. For example, this could be a proxy, or something for easier database access. If you're running a Java app, you could use a sidecar to export JVM metrics in Prometheus format.
The difference is that your sidecar containers must run all the time. If one of your non-init containers exits, Kubernetes restarts it according to the pod's restartPolicy.
And that's the difference:
Init containers run and exit before your main application starts.
Sidecars run side-by-side with your main container(s) and provide some kind of service for them.
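To make the distinction concrete, here is a rough sketch of a pod with one of each (the image names, theme file, and metrics port are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  initContainers:
    # Runs once and must exit with code 0 before the main containers start.
    - name: write-theme
      image: busybox:1.36
      command: ["sh", "-c", "echo 'body { background: white; }' > /theme/site.css"]
      volumeMounts:
        - name: theme
          mountPath: /theme
  containers:
    # Main, user-facing application container.
    - name: app
      image: my-java-app:latest          # hypothetical image
      volumeMounts:
        - name: theme
          mountPath: /var/www/theme
    # Sidecar: runs for the whole lifetime of the pod, exporting JVM metrics.
    - name: jmx-exporter
      image: my-jmx-exporter:latest      # hypothetical exporter image
      ports:
        - containerPort: 5556
  volumes:
    - name: theme
      emptyDir: {}
```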

decentralised, updatable configuration with kubernetes

I need to keep some configuration, maybe as files or otherwise, in all instances of a Kubernetes deployment of a Docker image.
I need the ability to remotely update the configuration in all of the running pods of the deployment. This is to be followed by the invocation of some Java code in all of the running pods of the deployment.
Whenever a new pod of the same deployment comes up, it should have the updated configuration.
As far as possible, I don't want the configuration stored anywhere centrally; I want it in each pod of the deployment.
What are my choices?
As a last resort I could do it as a rolling deployment update.
A rolling deployment update, or something similar such as an update to a mounted ConfigMap, is the Kubernetes option. It always results in an application restart.
Having an application support live configuration updates, and run some code after receiving those updates, without a restart: that's an application feature.
A handwavy way of doing this, sketched below:
Have the correct configuration live in a ConfigMap.
Have the application listen on a separate port for either a signal to retrieve updated configuration (if the application is k8s-aware) or to actually receive the configuration bits themselves. Have the application be able to handle this live configuration update process; how difficult that is depends on the framework in use.
Have another application be responsible for delivering these updates: watch for changes to the ConfigMap, get the list of Pods in the Deployment, and deliver either a signal or the updated configuration to each of the Pods.
Have the first application not reach what k8s recognizes as the Ready state until it has received updated configuration from the second.
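A sketch of the ConfigMap-plus-readiness part of that idea (the names, ports, and the /config/ready endpoint are assumptions; the updater application is not shown):

```yaml
# Configuration lives in a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    feature.enabled=true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
        - name: app
          image: my-java-app:latest      # hypothetical image
          volumeMounts:
            - name: config
              mountPath: /etc/app        # new pods start with the current config
          ports:
            - containerPort: 8080        # main service port
            - containerPort: 9090        # assumed separate port for config updates
          readinessProbe:
            httpGet:
              path: /config/ready        # assumed endpoint: returns 200 only once config is loaded
              port: 9090
      volumes:
        - name: config
          configMap:
            name: app-config
```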

How to properly use Kubernetes for job scheduling?

I have the following system in mind: A master program that polls a list of tasks to see if they should be launched (based on some trigger information). The tasks themselves are container images in some repository. Tasks are executed as jobs on a Kubernetes cluster to ensure that they are run to completion. The master program is a container executing in a pod that is kept running indefinitely by a replication controller.
However, I have not stumbled upon this pattern of launching jobs from a pod. Every tutorial seems to be assuming that I just call kubectl from outside the cluster. Of course I could do this but then I would have to ensure the master program's availability and reliability through some other system. So am I missing something? Launching one-off jobs from inside an indefinitely running pod seems to me as a perfectly valid use case for Kubernetes.
Your master program can utilize the Kubernetes client libraries to perform operations on a cluster. Find a complete example here.
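For reference, the kind of Job manifest the master program could create through the API might look like this (the names and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: task-        # the API server appends a unique suffix per run
spec:
  backoffLimit: 3            # retry the task a few times before marking it failed
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
        - name: task
          image: registry.example.com/tasks/report:latest   # hypothetical task image
```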

How to manifest a container with /dev/console from a pod definition with Kubernetes?

We use systemd in our container to manage the processes running in the container.
We configure journald in the container so that it sends all logs to /dev/console.
In order to have /dev/console in a container we have to use the "-t" option of Docker when we deploy the container.
I would like to ask what the equivalent way is with Kubernetes. Where can we state in the pod manifest that we need /dev/console in the containers?
I understand that with kubectl it is possible (with "--tty" or "-t"), but we do not want to start containers with kubectl.
We do support TTY containers in Kubernetes v1.1, but not a TTY without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice", but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a pod. So it would not help you deploy pods with /dev/console defined.
As far as I can see there's no way in current Kubernetes to cause pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per pod, and/or multiple pods as replicas, including monitoring, logging, restarts, etc. Having a separate launch/init and log scheme inside each container doesn't fit the usual Kubernetes use case.