Is there a way to deploy a StatefulSet first in a cluster? - kubernetes

Is there a way to make a Kubernetes cluster deploy the StatefulSet first and then all the other Deployments?
I'm working in GKE and I have a Redis pod which I want up and ready first, because the other deployments depend on the connection to it.

You can use an init container in the other deployments. Because init containers run to completion before any app containers start, they offer a mechanism to block or delay app container startup until a set of preconditions is met.
The init container can run a script that performs a readiness check against the Redis pods.
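A minimal sketch of that pattern, assuming the Redis service is named `redis` and listens on port 6379 (the deployment name and images are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical deployment that depends on Redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-redis
        image: busybox:1.36
        # Block until the Redis service answers on its port;
        # the app container only starts after this loop exits.
        command: ['sh', '-c', 'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']
      containers:
      - name: app
        image: my-app:latest   # placeholder image
```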

Related

Is it possible to run a Kubernetes CronJob inside a container in an existing Pod?

I need to run this Kubernetes CronJob inside a container that is already running in my existing pod.
But this CronJob always creates a pod and terminates it based on the schedule.
Is it possible to run the Kubernetes cron inside the existing pod's container?
Or, in an existing pod, can we run this Kubernetes cron as a container?
You can trigger a Kubernetes CronJob from within a pod, but it defeats the purpose of the CronJob resource.
There are alternatives for scheduling work. Airflow, as m_vemuri suggests, is one option. We have a few cases here where we actually set up a pod that runs crond, because we have a job that runs every minute, and the time it takes to schedule the pod, pull the image, run the job, and then terminate the pod is often more than the one minute between runs.
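The crond-in-a-pod approach mentioned above can be sketched roughly like this (the image, schedule, and job script path are all illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cron-runner         # one long-lived pod running crond
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cron-runner
  template:
    metadata:
      labels:
        app: cron-runner
    spec:
      containers:
      - name: crond
        image: alpine:3.19
        # Install a crontab entry and run crond in the foreground;
        # this avoids the per-run pod scheduling overhead of a CronJob.
        command: ['sh', '-c', 'echo "* * * * * /usr/local/bin/job.sh" | crontab - && crond -f']
```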

How to check if desired number of pods are up and active in init containers using helm

I have a use case where a few deployment pods have to wait for a few other stateful pods to be active.
This can be done via init containers in k8s. How can I ensure that the desired number of stateful pods is active, using init containers?
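One way to sketch this in a Helm chart: have the init container poll the StatefulSet's `status.readyReplicas` until it matches the desired count. The StatefulSet name `db`, the Helm value `.Values.db.replicas`, and the service account RBAC (the pod needs permission to `get` the StatefulSet) are all assumptions:

```yaml
initContainers:
- name: wait-for-statefulset
  image: bitnami/kubectl:latest
  command:
  - sh
  - -c
  - |
    # Poll until the StatefulSet reports the desired number of ready replicas.
    until [ "$(kubectl get statefulset db -o jsonpath='{.status.readyReplicas}')" = "{{ .Values.db.replicas }}" ]; do
      echo "waiting for db replicas"
      sleep 5
    done
```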

How kubernetes handle livenessProbe failure?

I want to understand what happens behind the scenes if a liveness probe fails in Kubernetes.
Here is the context:
We are using a Helm chart for deploying our application in a Kubernetes cluster.
We have a StatefulSet and a headless service. To initialize mTLS, we have created a 'Job' kind, and in 'command' we pass shell & Python scripts as arguments.
We have written a 'docker-entrypoint.sh' inside 'docker image' for some initialization work.
Inside the StatefulSet, we pass a shell script as the command in 'livenessProbe', which runs every 30 seconds.
I want to know, if my livenessProbe fails for any reason:
1. Does the Helm chart monitor this probe and restart the container, or is that K8s's responsibility?
2. Will my 'docker-entryPoint.sh' execute if the container is restarted?
3. Will the 'Job' execute when the container restarts?
How does Kubernetes handle livenessProbe failure, and what steps does it take?
It's not Helm's responsibility. It's Kubernetes's responsibility to restart the container in case of a liveness probe failure.
Yes, docker-entryPoint.sh is executed at container startup.
The Job needs to be applied to the cluster again for it to execute. Alternatively, you could use an init container, which is guaranteed to run before the main container starts.
Kubelet kills the container and restarts it if liveness probe fails.
To answer your question: liveness and readiness probes are checks (exec commands, HTTP GETs, or TCP connections) against your application container to determine whether it is healthy.
This is not related to Helm charts.
Once the liveness probe fails (past the failure threshold), the kubelet kills and restarts the container; a readiness probe failure does not restart the container but removes the pod from service endpoints.
These liveness probe failures can affect your app's uptime, so use a rolling deployment and autoscale your pod count to maintain availability.
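For reference, a liveness probe that runs a shell script every 30 seconds, as described in the question, looks roughly like this (the image and script path are placeholders):

```yaml
containers:
- name: app
  image: my-app:latest        # placeholder image
  livenessProbe:
    exec:
      command: ['sh', '/opt/scripts/health-check.sh']   # hypothetical script
    initialDelaySeconds: 30
    periodSeconds: 30         # probe every 30 seconds
    failureThreshold: 3       # kubelet restarts the container after 3 consecutive failures
```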

How to roll back a Kubernetes StatefulSet application

Currently, I am migrating one of our microservices from the K8s Deployment type to StatefulSets.
While updating the Kubernetes deployment config, I noticed StatefulSets don't support revisionHistoryLimit and minReadySeconds.
revisionHistoryLimit is used to keep the previous N ReplicaSets for rollback.
minReadySeconds is the number of seconds a pod should be ready without any of its containers crashing.
I couldn't find any compatible settings for StatefulSets.
So my questions are:
1) How long will the master wait to consider a stateful pod ready?
2) How do I handle rollback of a stateful application?
After reverting the configuration, you must also delete any pods that the StatefulSet had already attempted to run with the bad configuration. The new pods will automatically spin up with the correct configuration.
You should define a readiness probe, and the master will wait for it to report the pod as Ready.
StatefulSets currently do not support rollbacks.
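The revert-then-delete sequence described above can be sketched with kubectl (the file and resource names are placeholders):

```shell
# Re-apply the last known-good manifest...
kubectl apply -f statefulset-good.yaml

# ...then delete the pod that is stuck on the bad revision;
# the StatefulSet controller recreates it with the reverted spec.
kubectl delete pod my-statefulset-0
```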

How to run pod as a global service in Kubernetes minion

In CoreOS we can define a service as
[X-Fleet]
Global=true
This will make sure that this particular service runs on all the nodes.
How do I achieve the same thing for a pod in Kubernetes?
You probably want to use a DaemonSet - a way to run a daemon on every node in a Kubernetes cluster.
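A minimal DaemonSet sketch (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: my-agent:latest   # placeholder; one copy runs on every node
```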