Kubernetes: deleting StatefulSet deletes pod in spite of --cascade=false

I am upgrading from one version of a Helm chart to a newer version, and the upgrade fails due to changes in one of the StatefulSets. My plan was to delete the StatefulSet and then upgrade. I wanted to delete the StatefulSet without deleting its pod, so I used the --cascade=false argument, but the pod is still terminated immediately upon the StatefulSet's deletion.
kc delete --cascade=false statefulsets/trendy-grasshopper-sts
Am I doing something wrong? How can I diagnose?
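For what it's worth, a quick way to see what is actually terminating the pod is to check its owner and watch it during the deletion; a minimal sketch, assuming the pod follows the usual <statefulset>-<ordinal> naming:
# Check which controller owns the pod
$ kubectl describe pod trendy-grasshopper-sts-0 | grep -i "controlled by"
# Watch the pods in real time while running the delete in another terminal
$ kubectl get pods -w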

Related

Fail to upgrade operator in K8s

I'm writing an operator with operator-sdk, and I create a StatefulSet in the operator using the k8s API, like this:
r.client.Create(context.TODO(), statefulset)
It works correctly and the StatefulSet pod is created. But now I want to upgrade the operator already running in k8s, so that I can add a command to the pod, like:
Containers: []corev1.Container{{
Command: []string{.....}
First I build the newer operator image and delete the operator in k8s. Kubernetes quickly restarts the operator using the newer image (kubectl describe pod myoperator shows the newer image is used).
Second I delete the StatefulSet pod, and k8s also restarts the StatefulSet pod within seconds.
But I find the StatefulSet pod doesn't contain the command I added in the operator (kubectl describe pod statefulsetpod). If I delete all the resources in k8s and redeploy them, it works.
I have a lot of resources that need to be created by the operator, so I don't want to redeploy all the resources.
You should delete the StatefulSet itself instead of the StatefulSet pod. The problem is that when you delete a StatefulSet pod, a new pod is automatically created from the old StatefulSet spec.
Once you delete/recreate the StatefulSet, the properly updated pods are scheduled as expected.
You could probably also add logic to the operator that patches the already existing StatefulSet; that would avoid having to redeploy the StatefulSet each time, as sketched below.
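A minimal sketch of that idea, assuming a controller-runtime client and that the desired StatefulSet object has already been built (the helper name is hypothetical, not the operator's actual code):
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createOrUpdateStatefulSet creates the StatefulSet if it does not exist yet;
// otherwise it copies the desired pod template (including any new Command)
// onto the live object and pushes an Update.
func createOrUpdateStatefulSet(c client.Client, desired *appsv1.StatefulSet) error {
	found := &appsv1.StatefulSet{}
	err := c.Get(context.TODO(),
		types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}, found)
	if apierrors.IsNotFound(err) {
		return c.Create(context.TODO(), desired)
	}
	if err != nil {
		return err
	}
	found.Spec.Template = desired.Spec.Template
	return c.Update(context.TODO(), found)
}
With the default RollingUpdate strategy, updating the pod template this way rolls the existing pods onto the new spec.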

Understanding deleting stateful sets

New to k8s. I want to understand what kubectl delete sts --cascade=false does.
If I remove --cascade=false, it deletes the StatefulSet's pods.
It is clearly explained in the documentation under Deleting the Statefulset:
Deleting a StatefulSet through kubectl will scale it down to 0,
thereby deleting all pods that are a part of it. If you want to delete
just the StatefulSet and not the pods, use --cascade=false.
So by passing this flag to kubectl delete, the Pods that are managed by the StatefulSet are still running even though the StatefulSet object itself is deleted.
As described by the fine manual, it deletes the StatefulSet oversight mechanism without actually deleting the underlying Pods. Removing the oversight mechanism means that if a Pod dies, or you wish to make some kind of change, kubernetes will no longer take responsibility for ensuring the Pods are in the desired configuration.
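A quick way to see this behaviour (the StatefulSet name and label are hypothetical; on kubectl 1.20 and later the flag is spelled --cascade=orphan, with --cascade=false kept as a deprecated alias):
# Delete only the StatefulSet object, leaving its pods running
$ kubectl delete sts web --cascade=orphan
# The pods are still there, but no controller manages them anymore
$ kubectl get pods -l app=web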

Recreate Pod managed by a StatefulSet with a fresh PersistentVolume

On an occasional basis I need to perform a rolling replace of all Pods in my StatefulSet such that all PVs are also recreated from scratch. The reason to do so is to get rid of all underlying hard drives that use old versions of the encryption key. This operation should not be confused with regular rolling upgrades, for which I still want volumes to survive Pod terminations. The best routine I have figured out so far is the following:
1. Delete the PV.
2. Delete the PVC.
3. Delete the Pod.
4. Wait until all deletions complete.
5. Manually recreate the PVC deleted in step 2.
6. Wait for the new Pod to finish streaming data from other Pods in the StatefulSet.
7. Repeat from step 1 for the next Pod.
I'm not happy about step 5. I wish the StatefulSet recreated the PVC for me, but unfortunately it does not. I have to do it myself, otherwise Pod creation fails with the following error:
Warning FailedScheduling 3s (x15 over 15m) default-scheduler persistentvolumeclaim "foo-bar-0" not found
Is there a better way to do that?
I just recently had to do this. The following worked for me:
# Delete the PVC
$ kubectl delete pvc <pvc_name>
# Delete the underlying statefulset WITHOUT deleting the pods
$ kubectl delete statefulset <statefulset_name> --cascade=false
# Delete the pod with the PVC you don't want
$ kubectl delete pod <pod_name>
# Apply the statefulset manifest to re-create the StatefulSet,
# which will also recreate the deleted pod with a new PVC
$ kubectl apply -f <statefulset_yaml>
This is described in https://github.com/kubernetes/kubernetes/issues/89910. The workaround proposed there, deleting the new Pod that is stuck Pending, works: the second time the Pod gets replaced, a new PVC is created. It was marked as a duplicate of https://github.com/kubernetes/kubernetes/issues/74374, and reported as potentially fixed in 1.20.
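In command form, following the placeholder style above:
# If the replacement Pod is stuck Pending because its PVC is missing,
# deleting the Pod once more lets the StatefulSet controller recreate the
# PVC from volumeClaimTemplates along with the new Pod.
$ kubectl get pod <pod_name>    # shows Pending / FailedScheduling
$ kubectl delete pod <pod_name>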
It seems like you're using the PersistentVolume in the wrong way. It's designed to keep the data between roll-outs, not to delete it. There are other ways to renew the keys: you can use a k8s Secret or ConfigMap to mount the key into the Pod, and then you just need to recreate the Secret during a rolling update.
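A rough sketch of that alternative, assuming the key lives in a Secret mounted by the StatefulSet (the Secret name and key file are hypothetical):
# Replace the Secret contents with the new key
$ kubectl create secret generic encryption-key --from-file=key=./new.key \
    --dry-run=client -o yaml | kubectl apply -f -
# Roll the StatefulSet pods so the application re-reads the key
$ kubectl rollout restart statefulset/<statefulset_name>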

Deleting Stateful Sets in Kubernetes

How do I delete StatefulSets in Kubernetes permanently? They get re-created even after I delete them with the --force and --grace-period=0 flags.
I know I can delete them all by removing the deployment itself. I'm interested in knowing if there is any way to preserve the deployments and delete only the unwanted StatefulSets.
Scaling the deployment to 0 will remove the pods, but will keep the deployment:
kubectl scale deploy/my-deployment --replicas=0
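If the object being scaled down is the StatefulSet itself, the same approach applies (the name is a placeholder):
kubectl scale statefulset/my-statefulset --replicas=0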

How to roll back a Kubernetes StatefulSet application

Currently, I am migrating one of our microservices from a K8s Deployment to a StatefulSet.
While updating the Kubernetes deployment config I noticed StatefulSets don't support revisionHistoryLimit and minReadySeconds.
revisionHistoryLimit is used to keep the previous N ReplicaSets for rollback.
minReadySeconds is the number of seconds a pod should be ready without any of its containers crashing.
I couldn't find any compatible settings for StatefulSets.
So my questions are:
1) How long will the master wait to consider a StatefulSet Pod ready?
2) How do I handle rollback of a stateful application?
After reverting the configuration, you must also delete any Pods that the StatefulSet had already attempted to run with the bad configuration. The new Pods will automatically spin up with the correct configuration.
You should define a readiness probe, and the master will wait for it to report the pod as Ready.
StatefulSets currently do not support rollbacks.
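On the readiness side, a minimal probe in the StatefulSet's pod template looks something like this (image, path, port, and timings are placeholders); the Pod is only counted as Ready once the probe succeeds:
containers:
- name: app
  image: my-app:latest
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10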