Serialising the deployment of pods / Helm charts - Kubernetes

I have multiple Helm charts, each creating a single Deployment (usually one pod each).
The requirement is to serialise the deployment of the pods, i.e. before the second pod can be deployed the first pod needs to be in a running state (because the second pod reads values from the first pod). The third pod, in turn, should only come up once the second pod is up and running or completed.
I tried using umbrella Helm hooks for this, but hooks are evaluated at the level of a single chart object rather than a collection of charts.
I was looking into having an init container that regularly checks the readiness probe of the first pod (not sure if this can be done) before running the second pod? Not sure -- ideas, please...

Init Containers
If you don't mind letting your previous services run to completion before running the next ones, you can take advantage of the Init Containers feature: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
Behavior
During the startup of a Pod, the Init Containers are started in order, after the network and volumes are initialized. Each Container must exit successfully before the next is started. If a Container fails to start due to the runtime or exits with failure, it is retried according to the Pod restartPolicy. However, if the Pod restartPolicy is set to Always, the Init Containers use RestartPolicy OnFailure.
A Pod cannot be Ready until all Init Containers have succeeded. The ports on an Init Container are not aggregated under a Service. A Pod that is initializing is in the Pending state but should have a condition Initializing set to true.
If the Pod is restarted, all Init Containers must execute again.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior
Caveats
Please review the differences and limitations in the documentation before deciding to use this feature.
i.e.
Differences from regular Containers
Init Containers support all the fields and features of app Containers, including resource limits, volumes, and security settings. However, the resource requests and limits for an Init Container are handled slightly differently...
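Applied to the original question, a minimal sketch of this pattern could look like the Pod below: the second chart's pod gets an init container that polls the first chart's Service until it answers. The Service name first-app, port 8080, the /healthz path and the image names are assumptions, not something taken from your charts.

apiVersion: v1
kind: Pod
metadata:
  name: second-app
spec:
  initContainers:
  - name: wait-for-first-app
    image: busybox:1.36
    # Keep polling the (assumed) Service of the first chart; the app
    # container below will not start until this loop exits successfully.
    command:
    - sh
    - -c
    - |
      until wget -qO- http://first-app:8080/healthz; do
        echo "waiting for first-app..."
        sleep 2
      done
  containers:
  - name: second-app
    image: second-app:latest   # placeholder image

The same init container can be repeated in the third chart, pointed at the second pod's Service, to get the full chain.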

Related

If you have a pod with multiple containers and one triggers OOMKiller, does it restart the entire pod?

Trying to plan out a deployment for an application and am wondering if it makes sense to have multiple containers in a pod vs putting them in separate pods. I expect one of the containers to potentially be operating near its allocated memory limit. My understanding is that this presents the risk of this container getting OOMKilled. If that's the case, would it restart the entire pod (so the other container in the pod is restarted as well) or will it only restart the OOMKilled container?
No, only the specific container.
For the whole Pod to be recreated there needs to be a change in the Pod's owner object (typically a ReplicaSet) or a scheduling decision by kube-scheduler.
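To picture the setup being discussed, here is a minimal sketch (image names are placeholders) of a two-container Pod with memory limits: if memory-hungry exceeds its limit it is OOMKilled and restarted in place by the kubelet, while sidecar keeps running and the Pod object itself is not recreated.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Always          # container restarts happen in place; the Pod is not recreated
  containers:
  - name: memory-hungry
    image: worker:latest         # placeholder; the container expected to run near its limit
    resources:
      limits:
        memory: "256Mi"
  - name: sidecar
    image: sidecar:latest        # placeholder; unaffected if the other container is OOMKilled
    resources:
      limits:
        memory: "128Mi"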

Is the container where the liveness or readiness probe's config is set a "pod check" container?

I'm following this task Configure Liveness, Readiness and Startup Probes
and it's unclear to me whether the container where the check is made is a container used only to check the availability of a pod. It would make sense: if the "pod check" container fails, the API won't let any traffic into the pod.
So a health check signal must be coming from the container where some image or app runs? (sorry, another question)
From the link you provided it seems like they are speaking about Containers and not Pods, so the probes are meant to be per container. When all containers are ready, the pod is described as ready too, as written in the doc you provided:
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
So yes, every container that runs some image or app is supposed to expose those checks.
Liveness and readiness probes, as described by Ko2r, are additional checks inside your containers, verified by the kubelet according to the settings for the particular probe:
If the command (defined by health-check) succeeds, it returns 0, and the kubelet considers the Container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the Container and restarts it.
In addition:
The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.
From another point of view:
Pod is a top-level resource in the Kubernetes REST API.
As per the docs:
Pods are ephemeral. They are not designed to run forever, and when a Pod is terminated it cannot be brought back. In general, Pods do not disappear until they are deleted by a user or by a controller.
Information about controllers can be found here:
So the best practice is to use controllers like those described above. You'll rarely create individual Pods directly in Kubernetes, even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a Node in your cluster. The Pod remains on that Node until the process is terminated, the Pod object is deleted, the Pod is evicted for lack of resources, or the Node fails.
Note:
Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run, but is an environment the containers run in and persists until it is deleted
Because Pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.
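The grace period mentioned in that passage is configurable per Pod through terminationGracePeriodSeconds (30 seconds is the default); a minimal sketch, with a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 30   # time allowed between the TERM and KILL signals on deletion
  containers:
  - name: app
    image: app:latest                 # placeholder image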
The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.
For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes–thus making the cluster’s actual state match the desired state.
Here you can find information about processing pod termination.
There are different probes:
For example, for an HTTP probe:
even if your app isn’t an HTTP server, you can create a lightweight HTTP server inside your app to respond to the liveness probe.
Command
For command probes, Kubernetes runs a command inside your container. If the command returns with exit code 0 then the container is marked as healthy.
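Putting the HTTP and command variants together, here is a hedged sketch of both probe types on a single container (the port, path, marker file and image are assumptions, loosely following the linked task):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: app:latest                    # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:                      # gates Service traffic to this container
      httpGet:
        path: /healthz                   # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                       # restarts the container when the command starts failing
      exec:
        command: ["cat", "/tmp/healthy"] # assumed marker file, as in the linked task
      initialDelaySeconds: 15
      periodSeconds: 20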
More about probes and best practices.
Hope this helps.

How to know how long it takes to create a pod in Kubernetes? Is there any command, please?

I have my Kubernetes cluster and I need to know how long it takes to create a pod. Is there any Kubernetes command that shows me that?
Thanks in advance
What you are asking for does not exist.
I think you should first understand the Pod Overview.
A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
While you are deploying a Pod, it goes through the following phases:
Pending
The Pod has been accepted by the Kubernetes system, but one or more of the Container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
Running
The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.
Succeeded
All Containers in the Pod have terminated in success, and will not be restarted.
Failed
All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with non-zero status or was terminated by the system.
Unknown
For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod.
As for Pod Conditions, each has a type which can take one of the following values:
PodScheduled: the Pod has been scheduled to a node;
Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services;
Initialized: all init containers have started successfully;
Unschedulable: the scheduler cannot schedule the Pod right now, for example due to a lack of resources or other constraints;
ContainersReady: all containers in the Pod are ready.
Please refer to the documentation regarding Pod Lifecycle for more information.
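Although there is no single command that reports "time to create a pod", these condition transitions are timestamped in the Pod's status, so you can compare them by hand (e.g. PodScheduled vs. Ready). An illustrative, abridged status block, with made-up timestamps:

status:
  conditions:
  - type: PodScheduled
    status: "True"
    lastTransitionTime: "2019-01-01T10:00:00Z"   # example value, not real output
  - type: Initialized
    status: "True"
    lastTransitionTime: "2019-01-01T10:00:02Z"   # example value, not real output
  - type: ContainersReady
    status: "True"
    lastTransitionTime: "2019-01-01T10:00:15Z"   # example value, not real output
  - type: Ready
    status: "True"
    lastTransitionTime: "2019-01-01T10:00:15Z"   # example value, not real output

Here the gap between PodScheduled and Ready would suggest a rough fifteen-second startup time.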
When you are deploying your Pod, you have to consider how many containers will be running in it.
Each image will have to be downloaded and, depending on its size, this might take a while. Also, the default pull policy is IfNotPresent, which means that Kubernetes will skip the image pull if the image already exists on the node.
More about Updating Images can be found here.
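For reference, the pull policy is set per container; a minimal sketch with a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image reference
    imagePullPolicy: IfNotPresent         # skip the pull if the image is already on the node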
You also need to consider how many resources your Master and Nodes have.

Kubernetes - force pod restart if container fails to re-trigger init containers

I found in my pod that if a container fails or is killed due to a failing liveness probe, the container is restarted, but the pod is not.
This means that initContainers are not run again in a case of a crashed container.
In my case, I do need to run one of the initContainers every time the main container fails.
Is this possible? Am I missing something?
Currently, this is simply not supported: livenessProbe is a "container level" probe, and if this probe fails, only that container is restarted, not the whole Pod.
Though, you'll be glad to know that this behaviour is currently being worked on in this PR: https://github.com/kubernetes/community/pull/2342.
As a workaround until that lands and you eventually upgrade, you'd have to rethink why you really need your initContainers in the first place, and consider a different coordination between your pod's containers (be they initContainers or not), for example through a shared volume or some other mechanism depending on your use case.
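One such workaround, sketched below (script paths, volume name and image are assumptions): write the initialisation output to a shared emptyDir and have the main container perform, or at least verify, that initialisation itself on every (re)start, so it no longer depends on initContainers being re-executed.

apiVersion: v1
kind: Pod
metadata:
  name: self-initialising-app
spec:
  volumes:
  - name: shared-init
    emptyDir: {}
  containers:
  - name: app
    image: app:latest                  # placeholder image
    volumeMounts:
    - name: shared-init
      mountPath: /init
    # Re-run the preparation step on every container (re)start instead of
    # relying on initContainers, which only run when the Pod itself starts.
    command:
    - sh
    - -c
    - |
      /opt/prepare.sh /init            # assumed init script baked into the image
      exec /opt/app                    # assumed application entrypoint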

Specify scheduling order of a Kubernetes DaemonSet

I have Consul running in my cluster and each node runs a consul-agent as a DaemonSet. I also have other DaemonSets that interact with Consul and therefore require a consul-agent to be running in order to communicate with the Consul servers.
My problem is, if my DaemonSet is started before the consul-agent, the application will error as it cannot connect to Consul and subsequently get restarted.
I also notice the same problem with other DaemonSets, e.g. Weave, as it requires kube-proxy and kube-dns. If Weave is started first, it will constantly restart until the kube services are ready.
I know I could add retry logic to my application, but I was wondering if it was possible to specify the order in which DaemonSets are scheduled?
Kubernetes itself does not provide a way to specify dependencies between pods / deployments / services (e.g. "start pod A only if service B is available" or "start pod A after pod B").
The current approach (based on what I found while researching this) seems to be retry logic or an init container. To quote the docs:
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
This means you can either add retry logic to your application (which I would recommend, as it might help you in different situations such as a short service outage) or you can use an init container that polls a health endpoint via the Kubernetes service name until it gets a satisfying response.
Retry logic is preferred over startup dependency ordering, since it handles both the initial bring-up case and recovery from post-start outages.
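If you do go the init-container route for the Consul case, a sketch could look like the DaemonSet below. The assumptions: the consul-agent answers the Consul HTTP API on the node's IP at the default port 8500, and the image names are placeholders.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul-dependent-daemon
spec:
  selector:
    matchLabels:
      app: consul-dependent-daemon
  template:
    metadata:
      labels:
        app: consul-dependent-daemon
    spec:
      initContainers:
      - name: wait-for-consul-agent
        image: busybox:1.36
        env:
        - name: HOST_IP                     # the node's IP, where the consul-agent is assumed to listen
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # Block the main container until the local consul-agent responds.
        command:
        - sh
        - -c
        - |
          until wget -qO- "http://${HOST_IP}:8500/v1/status/leader"; do
            echo "waiting for consul-agent..."
            sleep 2
          done
      containers:
      - name: daemon
        image: my-daemon:latest             # placeholder image

Even with this in place, keeping the retry logic in the application is still worthwhile, since the agent can also become unavailable after startup.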