Kubernetes StatefulSet restart waits if one or more pods are not ready

I have a StatefulSet made up of multiple pods. I have a use case where I need to invoke a restart of the STS, so I run: kubectl rollout restart statefulset mysts
If I restart the StatefulSet at a time when one or more pods are in a not-ready state, the restart action gets queued up. The restart takes effect only after all the pods become ready. This could take long depending on the readiness threshold and the kind of issue the pod is facing.
Is there a way to force-restart the StatefulSet without waiting for pods to become ready? I don't want to terminate/delete the pods instead of restarting the StatefulSet; a rolling restart works well for me as it helps avoid an outage of the application.
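A minimal sketch for observing the queued rollout, assuming the mysts StatefulSet from the question (the jsonpath field is the standard StatefulSet update strategy field):

kubectl rollout restart statefulset mysts
kubectl rollout status statefulset mysts   # blocks until every replacement pod reports Ready
kubectl get statefulset mysts -o jsonpath='{.spec.updateStrategy.type}'   # RollingUpdate by default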

Related

Kubernetes does not evict a pod even if it is in a "failed" state, e.g. CrashLoopBackOff

I have a pod with a Pod Disruption Budget that says at least one replica has to be running. While this generally works very well, it leads to a peculiar problem.
Sometimes this pod is in a failed state (due to ongoing development), so I have two pods, scheduled on two different nodes, both in a CrashLoopBackOff state.
Now if I want to run a drain or a k8s version upgrade, the pod can never be evicted, since Kubernetes knows there should be at least one running, which will never happen.
So k8s does not evict a pod, due to the Pod Disruption Budget, even if the pod is not running. Is there a way around this? I think ideally k8s should treat failed pods as candidates for eviction regardless of the budget (as deleting a failing pod cannot "break" anything anyway).
...if I want to run a drain or k8s version upgrade, what happens is that pod cannot ever be evicted since it knows that there should be at least one running...
kubectl drain --disable-eviction <node> will delete pods that are protected by a PDB. Since you are fully aware of the downtime, you can first delete the PDB in question before draining the node.
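A rough sketch of that sequence, with placeholder names for the namespace, PDB and node:

kubectl get pdb -n my-namespace                      # find the budget guarding the pods
kubectl delete pdb my-app-pdb -n my-namespace        # remove it, accepting possible downtime
kubectl drain my-node --ignore-daemonsets --delete-emptydir-data
kubectl apply -f my-app-pdb.yaml                     # recreate the PDB from its manifest afterwards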
I hit this issue too during a k8s upgrade. FYI, as mentioned in the other answer, kubectl drain --disable-eviction <node> may cause service downtime, and deleting pods might not always work when the deleted pods are immediately recreated by the deployment managing them. Also, even if the pods are deleted successfully, it may cause service downtime depending on the PodDisruptionBudget.
Instead, I increased the number of replicas in the deployment so that PodDisruptionBudget.minAvailable or PodDisruptionBudget.maxUnavailable could still be satisfied, and was able to successfully upgrade k8s while honoring the PodDisruptionBudget.
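A sketch of that approach, assuming a Deployment named my-app guarded by a PDB with minAvailable: 2 (names and numbers are illustrative):

kubectl get pdb my-app-pdb -o jsonpath='{.spec.minAvailable}'   # e.g. 2
kubectl scale deployment my-app --replicas=3                    # one spare replica, so evicting a pod still leaves 2 available
kubectl drain my-node --ignore-daemonsets
kubectl scale deployment my-app --replicas=2                    # scale back down once the upgrade is done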

pod - How to kill or stop only one pod from n replicas of a deployment

I have a testing scenario to check if the API requests are being handled by another pod if one goes down. I know this is the default behaviour, but I want to simulate the following scenario.
Pod replicas - 2 (pod A and B)
During my API requests, I want to kill/stop only pod A.
During downtime of A, requests should be handled by B.
I am aware that we can restart the deployment and also scale replicas to 0 and again to 2, but this won't work for me.
Is there any way to kill/stop/crash only pod A?
Any help will be appreciated.
If you want to simulate what happens if one of the pods just gets lost, you can scale down the deployment
kubectl scale deployment the-deployment-name --replicas=1
and Kubernetes will terminate all but one of the pods; you should almost immediately see all of the traffic going to the surviving pod.
But if instead you want to simulate what happens if one of the pods crashes and restarts, you can delete the pod
# if you scaled the deployment down earlier, scale it back up first:
# kubectl scale deployment the-deployment-name --replicas=2
kubectl get pods
kubectl delete pod the-deployment-name-12345-f7h9j
Once the pod starts getting deleted, the Kubernetes Service should route all of the traffic to the surviving pod(s) (those with Running status). However, the pod is managed by a ReplicaSet that wants there to be 2 replicas, so if one of the pods is deleted, the ReplicaSet will immediately create a new one. This is similar to what would happen if the pod crashes and restarts (in the crash scenario you'd get the same pod on the same node; if you delete the pod, it might come back in a different place).
As you mentioned, you can manually kill or restart the pod; that is the only way to test this case. Alternatively, you can try crashing that one single pod, but in the end it creates the same scenario: the pod will auto-restart.
You could also increase the graceful shutdown period for the deployment; that way the pod stays in the Terminating state for a good amount of time and you can perform the test (see the sketch below).
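A minimal sketch of where that setting lives in the Deployment's pod template (the value and names are illustrative):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 120   # kubelet waits up to 120s between SIGTERM and SIGKILL
      containers:
      - name: app
        image: my-app:latest               # placeholder image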
In Kubernetes, when pods are controlled by a ReplicaSet, a killed pod will simply be recreated. So the only way to do this is to scale down the number of replicas.
Let's say your deployment has 4 replicas. You can scale down to 3 by running the command below:
kubectl scale deployment <deployment-name> --replicas=3
My example is shown below:
kubectl scale deployment hello-world --replicas=3
deployment.apps/hello-world scaled

Does "kubectl rollout restart deploy" cause downtime?

I'm trying to get all the deployments of a namespace to be restarted for implementation reasons.
I'm using "kubectl rollout -n restart deploy" and it works perfectly, but I'm not sure it that command causes downtime or if it works as the "rollout update", applying the restart one by one, keeping my services up.
Does anyone know?
In the documentation I can only find this:
Operation: rollout
Syntax: kubectl rollout SUBCOMMAND [options]
Description: Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.
But I can't find details about the specific "rollout restart deploy".
I need to make sure it doesn't cause downtime. Right now it is very hard to tell, because the restart process is very quick.
Update: I know that for one specific deployment (kubectl rollout restart deployment/name), it works as expected and doesn't cause downtime, but I need to apply it to all the namespace (without specifying the deployment) and that's the case I'm not sure about.
kubectl rollout restart deploy -n namespace1 will restart all deployments in the specified namespace with zero downtime.
The restart command will work as follows:
After the restart it will create new pods for each deployment
Once the new pods are up (running and ready) it will terminate the old pods
Add readiness probes to your deployments to configure initial delays.
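For example (the namespace name is illustrative), you can restart everything and then wait for each rollout to finish:

kubectl rollout restart deploy -n namespace1
kubectl get deploy -n namespace1 -o name | xargs -n1 kubectl rollout status -n namespace1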
#pcsutar's answer is almost correct. kubectl rollout restart $resourcetype $resourcename restarts your deployment, daemonset or stateful set according to its update strategy. So if it is set to rollingUpdate it will behave exactly as the above answer:
After the restart it will create new pods for each deployment
Once the new pods are up (running and ready) it will terminate the old pods
Add readiness probes to your deployments to configure initial delays.
However, if the strategy is, for example, type: Recreate, all the currently running pods belonging to the deployment will be terminated before new pods are spun up!
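As a sketch, the difference is visible in the Deployment's .spec.strategy (the RollingUpdate values shown are the defaults):

# rolling update: old pods are kept until their replacements are Ready
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%

# recreate: every old pod is terminated first, so a restart causes downtime
spec:
  strategy:
    type: Recreate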

Would Kubernetes bring up the down-ed Pod if only Pod definition file exists?

I have a Pod definition file only. Kubernetes will bring up the pod. What happens if it goes down? Would Kubernetes bring it up automatically? Or, if we want a certain number of pods up at all times, MUST we take the help of a ReplicationController (or ReplicaSet in newer versions)?
Although your question is not clear, yes: if you have deployed the pod through a Deployment or ReplicaSet, then Kubernetes will create another one if you or someone else deletes that pod.
If you have just the pod, without any controller like a ReplicaSet, then it is gone forever, as there is nothing to take care of it.
In case the app crashes inside the pod:
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never which applies to all containers in a pod. The default value is Always, and the restartPolicy only refers to restarts of the containers by the kubelet on the same node (so the restart count will reset if the pod is rescheduled to a different node). Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, which is reset after ten minutes of successful execution.
https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/
The pod restartPolicy only refers to restarts of the containers by the kubelet on the same node. If there is no replication controller or deployment, then if a node goes down Kubernetes will not reschedule or restart the pods of that node onto any other node. This is the reason pods are not recommended to be used directly in production.
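To illustrate where the field lives, restartPolicy sits at the pod-spec level and covers all containers (this is a hypothetical one-off pod that fails on purpose):

apiVersion: v1
kind: Pod
metadata:
  name: backoff-demo
spec:
  restartPolicy: OnFailure            # Always (default) | OnFailure | Never
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "exit 1"]   # fails, so the kubelet restarts it with exponential back-off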

K8s: StatefulSet how to increase time between restarts of a pod in case it fails

I have an integration test where I start a StatefulSet, wait until it is ready, and then do some asserts.
My problem is that if the application fails, it tries to restart too fast.
And I can't get logs from the failed pod.
So my question is: how can I increase the time between restarts of a pod in a StatefulSet?
Because K8s controllers do not support restartPolicy: Never.
If all you want is to view the logs of the terminated pod, you can do
kubectl logs <pod_name> --previous
I would try to run the service in question as a regular deployment and convert it to a StatefulSet after I analyze the issue with the application.
Why can't you get the logs from the terminated pods?
Maybe you should try to set terminationGracePeriodSeconds in the StatefulSet's pod spec (not on the container) to make the dying pods stay around longer for analysis.
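For reference, a minimal sketch of where that field goes in the StatefulSet's pod template (the value and image are illustrative):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 300   # kubelet waits up to 5 minutes before force-killing the containers
      containers:
      - name: app
        image: my-app:latest               # placeholder image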