How to roll back a Kubernetes StatefulSet application - kubernetes

Currently, I am migrating one of our microservices from the K8s Deployment type to StatefulSets.
While updating the Kubernetes deployment config I noticed that StatefulSets don't support revisionHistoryLimit and minReadySeconds.
revisionHistoryLimit is used to keep the previous N ReplicaSets around for rollback.
minReadySeconds is the number of seconds a pod should be ready, without any of its containers crashing, before it is considered available.
I couldn't find any equivalent settings for StatefulSets.
So my questions are:
1) How long will the master wait before considering a stateful Pod ready?
2) How do I handle rollback of a stateful application?

After reverting the configuration, you must also delete any Pods that the StatefulSet had already attempted to run with the bad configuration. The new Pods will automatically spin up with the correct configuration.
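That procedure can be sketched as follows, for a hypothetical StatefulSet named web (kubectl rollout undo requires Kubernetes 1.7+, where StatefulSets track ControllerRevisions):

```shell
# Revert the StatefulSet spec to the previous revision
kubectl rollout undo statefulset web

# Delete any Pod stuck with the bad configuration;
# the StatefulSet controller recreates it with the reverted spec
kubectl delete pod web-2

# Watch the rollout proceed
kubectl rollout status statefulset web
```

These commands need a running cluster; the Pod name web-2 is just an example ordinal.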

You should define a readiness probe, and the master will wait for it to report the Pod as Ready.
StatefulSets did not support rollbacks when this was asked; since Kubernetes 1.7 the StatefulSet controller records ControllerRevisions, so kubectl rollout undo works and spec.revisionHistoryLimit is honored. minReadySeconds support for StatefulSets arrived later (stable in Kubernetes 1.25).
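A minimal readiness-probe sketch (the container name and /healthz endpoint are hypothetical; the kubelet only marks the Pod Ready once the probe succeeds):

```yaml
# Pod template fragment: the Pod counts as Ready only after
# GET /healthz on port 8080 returns a success status.
containers:
  - name: web              # hypothetical container name
    image: example/web:1.0 # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz     # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
```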

Related

What handles StatefulSet replication?

If a Deployment uses ReplicaSets to scale Pods up and down, and StatefulSets don't have ReplicaSets...
So, how does it manage to scale Pods up and down? I mean, what resource is responsible? What requests does a StatefulSet make in order to scale?
In short, the StatefulSet controller manages a StatefulSet's replicas directly; there is no intermediate ReplicaSet.
A StatefulSet is a Kubernetes API object for managing stateful application workloads. StatefulSets handle the deployment and scaling of sets of Kubernetes pods, providing guarantees about their uniqueness and ordering.
Similar to deployments, StatefulSets manage pods with identical container specifications. They differ in terms of maintaining a persistent identity for each pod. While the pods are all created based on the same spec, they are not interchangeable, so each pod is given a persistent identifier that is maintained through rescheduling.
Benefits of a StatefulSet deployment include:
Unique identifiers—every pod in the StatefulSet is assigned a unique, stable network identity, consisting of a hostname based on the StatefulSet name and an ordinal index. For example, a StatefulSet named web with three replicas will have pods named web-0, web-1 and web-2.
Persistent storage—every pod has its own stable, persistent volume, either by default or as defined per storage class. When the pods in a cluster are scaled down or deleted, their associated volumes are not lost, and the data persists. Unneeded resources can be purged by scaling down the StatefulSet to 0 before deleting the unused pods.
Ordered deployment and scaling—the pods in a StatefulSet are created and deployed in order, according to their ordinal indexes. Pods are also shut down in reverse order, ensuring that deployment and runtime are reliable and repeatable. The StatefulSet won't scale until every required pod is running, so if a pod fails, the controller recreates it before adding more instances to meet the scaling requirements.
Automated, ordered updates—StatefulSets can handle rolling updates, shutting down each pod and rebuilding it in reverse ordinal order, until every pod has been replaced and the older versions cleaned up. The persistent volumes are reused, so data migrates to the new version automatically.
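The properties above can be sketched in a minimal StatefulSet manifest (all names are hypothetical; the headless Service gives each pod its stable DNS entry, and volumeClaimTemplates gives each pod its own PersistentVolumeClaim):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                 # headless Service: provides web-0.web, web-1.web, ... DNS names
spec:
  clusterIP: None
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # must reference the headless Service above
  replicas: 3               # pods web-0, web-1, web-2, created in order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # placeholder image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # one PVC per pod: data-web-0, data-web-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Scaling down deletes pods in reverse ordinal order but leaves the per-pod PVCs (and their data) intact.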

Will pods running on a PreferNoSchedule node migrate to an untainted node?

If a single-node Kubernetes cluster is built and runs some number of pods, but that node carries a PreferNoSchedule taint, it would make sense to migrate those pods and workloads to more suitable, untainted nodes when they are added to the cluster.
Will this happen automatically in >= 1.6 or will it need to be triggered? How is it triggered?
In this scenario, nothing triggers the kube-scheduler to reschedule the pods, even when a new worker is added to the cluster.
For the pods to move to the new worker, a new pod-scheduling event has to be triggered.
A simple solution is to scale each deployment down to 0 and back up to the desired number of pods:
kubectl scale --replicas=<expected_replica_num> deployment <deployment_name>
As far as I know, this doesn't happen automatically with node taints. You can trigger it using kubectl rollout restart deployment/<name>.
I was unable to find sufficient literature for this in official Kubernetes documentation. The best I could find is kubernetes-sigs/descheduler
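Both approaches above can be sketched as follows, for a hypothetical Deployment named web that originally ran 3 replicas (these commands need a running cluster):

```shell
# Option 1: force rescheduling by scaling down and back up
kubectl scale --replicas=0 deployment web
kubectl scale --replicas=3 deployment web

# Option 2 (kubectl/Kubernetes 1.15+): rolling restart,
# which replaces pods gradually instead of all at once
kubectl rollout restart deployment web
```

The rolling restart avoids the brief outage that scaling to 0 causes.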

Kubernetes StatefulSet restart waits if one or more pods are not ready

I have a StatefulSet consisting of multiple pods. I have a use case where I need to restart the STS, so I run: kubectl rollout restart statefulset mysts
If I restart the StatefulSet while one or more pods are in a not-ready state, the restart action gets queued up. The restart takes effect only after all the pods become ready. This could take a long time depending on the readiness threshold and the kind of issue the pods are facing.
Is there a way to force-restart the StatefulSet without waiting for the pods to become ready? I don't want to terminate/delete the pods as an alternative to restarting the StatefulSet; a rolling restart works well for me because it avoids an outage of the application.

Kubernetes helm waiting before killing the old pods during helm deployment

I have a "big" microservice (a website) with 3 pods deployed via a Helm chart in the production environment. When I deploy a new version of the chart, the website returns 503 Service Unavailable for about 40 seconds (the time it takes my big microservice to start).
So I am looking for a way to tell Kubernetes not to kill the old pods before the new version has completely started.
I tried --wait --timeout, but it did not work for me.
My EKS version: v1.14.6-eks-5047ed
Without more details about the Pods, I'd suggest:
Use a Deployment (if you aren't already) so that Pods are managed by a ReplicaSet, which enables rolling updates. Combine that with a configured startup probe (on k8s v1.16+) or readiness probe, so that Kubernetes knows when the new Pods are ready to take on traffic (a Pod is considered ready when all of its containers are ready).
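A sketch of the relevant Deployment fields (all names are hypothetical; maxUnavailable: 0 keeps old pods serving until each replacement passes its readiness probe):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website                      # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0              # never take an old pod down before a replacement is Ready
      maxSurge: 1                    # start one extra pod at a time during the update
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          image: example/website:2.0 # placeholder image
          readinessProbe:
            httpGet:
              path: /healthz         # assumed health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
```

With this strategy, the 40-second startup happens in the new pod while the old one keeps receiving traffic.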

Can you tell kubernetes to start one pod before another?

Can I add some config so that my daemon pods start before other pods can be scheduled or nodes are designated as ready?
Adding post edit:
These are 2 different pods altogether, the daemonset is a downstream dependency to any pods that might get scheduled on the host.
There's no such thing as a Pod hierarchy in Kubernetes between separate kinds of pods, i.e. pods belonging to different Deployments, StatefulSets, DaemonSets, etc. In other words, there is no notion of a master pod and child pods. If you'd like to create your own hierarchy, you can build tooling around it, for example waiting for the status of all pods in a DaemonSet before creating a new Pod or Kubernetes workload resource.
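One common way to build such a dependency yourself is an init container that blocks until the downstream daemon responds. A sketch, assuming the daemon listens on port 9000 on each node (all names and ports here are hypothetical):

```yaml
# Pod template fragment: the app container does not start
# until the init container can reach the daemon's endpoint.
initContainers:
  - name: wait-for-daemon
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # poll the node-local daemon until it accepts connections
        until nc -z "$HOST_IP" 9000; do
          echo "waiting for daemon..."; sleep 2
        done
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP   # IP of the node this pod landed on
containers:
  - name: app
    image: example/app:1.0             # placeholder image
```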
The closest in terms of pod dependency in K8s is StatefulSets.
As per the docs:
For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.