Cannot delete pods in Kubernetes

I tried installing dgraph (single server) using Kubernetes.
I created the pod using:
kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Now all I need to do is delete the created pods.
I tried deleting a pod using:
kubectl delete pod pod-name
The result says the pod was deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes cluster. What should I do now?

I did face the same issue. Run:
kubectl get deployment
You will get the Deployment corresponding to your pod. Copy its name and then run:
kubectl delete deployment xyz
Then check: no new pods will be created.
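For instance, the whole flow might look like this (xyz stands for whatever Deployment name the first command prints for your pod):
kubectl get deployment          # find the Deployment that owns the pod
kubectl delete deployment xyz   # deleting it also removes its ReplicaSet and pods
kubectl get pods                # verify that no replacement pod appears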

Note: the link provided by the OP may be unavailable. See the update section below.
Since you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, just use the same manifest to delete the resources you created:
$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
Update
Basically, this is an explanation of the reason.
Kubernetes has a number of workload resources (those that contain a PodTemplate in their manifest). These are:
Pods
Controllers (basically Pod controllers):
ReplicationController
ReplicaSet
Deployment
StatefulSet
DaemonSet
Job
CronJob
See who controls whom:
ReplicationController -> Pod(s)
ReplicaSet -> Pod(s)
Deployment -> ReplicaSet(s) -> Pod(s)
StatefulSet -> Pod(s)
DaemonSet -> Pod(s)
Job -> Pod
CronJob -> Job(s) -> Pod
Here a -> b means that a creates and controls b, and the .metadata.ownerReferences field in b's manifest contains a reference to a. For example,
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
...
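You can read this owner reference from a live object as well; for example (the pod name here is a placeholder):
$ kubectl get pod pod-name -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
This prints something like ReplicaSet/my-repset.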
This way, deletion of the parent object will also delete the child objects via garbage collection.
So a's controller ensures that a's current status matches a's spec. If one deletes b, then b will be deleted, but a is still alive, and a's controller sees a difference between a's current status and a's spec. So a's controller creates a new b object to match a's spec.
The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So here the solution was to delete the root object, which was the Deployment.
$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}
Note
Another problem that may arise during deletion is the following: if there is any finalizer in the .metadata.finalizers[] section, the deletion is performed only after the task(s) of the associated controller have completed. If one wants to delete the object without running the finalizers' actions, one has to remove those finalizers first. For example,
$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
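To check first whether any finalizers are actually set (same placeholders as above):
$ kubectl get deploy {deployment name} -n {namespace} -o jsonpath='{.metadata.finalizers}'
An empty result means no finalizer is blocking the deletion.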

You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:
kubectl delete pods <pod> --grace-period=0 --force
If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:
kubectl delete pods <pod> --grace-period=0
If the pod is stuck in the Unknown state even after these commands, use the following command to remove it from the cluster:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'

Pods in Kubernetes also depend on the type of object that owns them, like:
ReplicationControllers
ReplicaSets
StatefulSets
Deployments
DaemonSets
bare Pods
Do kubectl describe pod <podname> and check which kind of object controls it, e.g.:
apiVersion: apps/v1
kind: StatefulSet
metadata:
Now do kubectl get <pod-kind> to find its name.
Finally, delete that object, and the pod will be deleted as well.
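A quick way to read the owner straight from kubectl describe (the pod name is a placeholder) is:
kubectl describe pod <podname> | grep "Controlled By"
which prints a line such as Controlled By: StatefulSet/dgraph.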

#Shudipta Sharma's answer is obviously the correct way to delete the pods; I would just like to make sure the author understands why this is happening.
The reason is the "mindset" of Kubernetes, in which Pods are considered ephemeral, throwaway entities. As Pods come and go, StatefulSets are one way of ensuring that a given number of pods with unique identities is running at any given time. Looking at the yaml file you used to deploy:
# This StatefulSet runs 1 pod with one Zero, one Alpha & one Ratel containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
By deploying this you are basically saying that you want Kubernetes to always run 1 replica of that Pod, at any time. When you delete the Pod, that condition is no longer true, so after the deletion another Pod is spawned to make the condition above valid again.
The way #Shudipta Sharma provided is simply to delete the StatefulSet, so that you no longer have a desired state keeping an eye on the number of running Pods.
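If you would rather keep the StatefulSet object around and only stop its Pod, scaling it down to zero replicas is an alternative to deleting it:
kubectl scale statefulset dgraph --replicas=0
This sets the desired state to zero Pods without removing the StatefulSet itself; scaling back up later recreates the Pod.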
You can find more about that in Kubernetes documentation on:
StatefulSets
Cluster's desired state
More about Kubernetes objects and difference between each of them

Delete the Deployment, not the pods. It is the Deployment that keeps creating new pods; you can see that the pod name differs each time you delete a pod.
kubectl get all
kubectl delete deployment DEPLOYMENTNAME

Related

Can't delete a pod added through Kubernetes Dashboard

I added a pod through the Kubernetes Dashboard: I used Create new resource and created a pod from input.
I then tried to delete it with:
kubectl delete -n default pod pod-name-0
It deletes it, but it gets redeployed. As I understand it, I should delete its deployment first, so to list deployments I used:
kubectl get deployments
But it's not there. How do I permanently delete a pod?
The pods are maintained by a ReplicationController and are automatically replaced if they fail, are deleted, or are terminated. You should check:
kubectl describe pods POD_NAME
kubectl describe replicationcontrollers/REPLICATION_CONTROLLER_NAME
Alternatively, you can check the ReplicaSet with kubectl get rs.
Afterwards you can run kubectl edit rs REPLICASET_NAME and change the replicas count up or down as you desire.
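If all you want is to change the replica count, kubectl scale does the same thing as the edit in one step (the name is a placeholder):
kubectl scale rs REPLICASET_NAME --replicas=0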
Nice explanation regarding ReplicaSet vs ReplicationController

Kubernetes pod CrashLoopBackOff, need to remove a pod

I have installed Prometheus using a Helm chart, so I got 4 Deployments listed:
prometheus-alertmanager
prometheus-server
prometheus-pushgateway
prometheus-kube-state-metrics
All pods of these Deployments are running accordingly.
By mistake I restarted one Deployment using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is being created and keeps crashing; if I delete the Deployment, then the previous pod will also be deleted. So what can I do about the CrashLoopBackOff pod?
[Screenshot of kubectl output]
You can simply delete that pod with the kubectl delete pod <pod_name> command, or attempt to delete all pods in CrashLoopBackOff status with:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
Make sure that the corresponding Deployment is set to 1 replica (or any other chosen number). If you delete a pod of that Deployment, it will create a new one to keep the desired replica count.
These two pods (one running and the other in CrashLoopBackOff) belong to different Deployments, as their names carry different template hashes: e.g. pod1-abc-123 and pod2-abc-456 belong to the same deployment template, while pod1-abc-123 and pod2-def-566 belong to different deployments.
A Deployment creates a ReplicaSet, so make sure you delete the corresponding old ReplicaSet: run kubectl get rs | grep 99dd and delete that one, similar to the prometheus-server one.

Kubernetes : How to delete a specific pod managed by StatefulSet without it being recreated?

I have a StatefulSet with 2 pods. It has a headless service and each pod has a LoadBalancer service that makes it available to the world.
Let's say pod names are pod-0 and pod-1.
If I want to delete pod-0 but keep pod-1 active, I am not able to do that.
I tried
kubectl delete pod pod-0
This deletes it but then restarts it because the StatefulSet's replicas field is set to 2.
So I tried
kubectl delete pod pod-0
kubectl scale statefulset some-name --replicas=1
This deletes pod-0, deletes pod-1, and then restarts pod-0. I guess that is because with replicas set to 1, the StatefulSet wants to keep pod-0 active but not pod-1.
But how do I keep pod-1 active and delete pod-0?
This is not supported by the StatefulSet controller. Probably the best you could do is try to create that pod yourself with a sleep shim and hope you are faster, but then the StatefulSet controller will just be unhappy forever.
You can try using a custom controller like OpenKruise:
https://github.com/openkruise/kruise/
You can have more fine-grained control, with selective pod removal, if you use a CloneSet custom resource.
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
spec:
  # ...
  replicas: 4
  scaleStrategy:
    podsToDelete:
    - sample-9m4hp # you select which pod to remove
https://openkruise.io/en-us/docs/cloneset.html
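As a sketch (assuming a CloneSet named sample already exists; the CloneSet and pod names are hypothetical), you could scale down while naming the exact pod to remove with a merge patch:
kubectl patch cloneset sample --type merge -p '{"spec":{"replicas":3,"scaleStrategy":{"podsToDelete":["sample-9m4hp"]}}}'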
The issue of removing specific pods of a Deployment or StatefulSet has been open for years with no resolution:
https://github.com/kubernetes/kubernetes/issues/45509
A StatefulSet always creates its pods with indices 0..(replicas-1).
If you really want finer control over individual pods, I guess you would need to create separate StatefulSet objects (each with replicas = 1).
ReplicaSets in Kubernetes 1.21+ support a pod deletion cost annotation (controller.kubernetes.io/pod-deletion-cost) to address that issue for Deployments, but so far there is nothing comparable for StatefulSets.
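For Deployments this can be sketched as follows (the names are hypothetical, and on clusters older than 1.22 the PodDeletionCost feature gate must be enabled); pods with a lower deletion cost are preferred for removal when scaling down:
kubectl annotate pod some-name-7d4b9c-xyz controller.kubernetes.io/pod-deletion-cost=-1000
kubectl scale deployment some-name --replicas=1
The ReplicaSet should then prefer to delete the annotated pod, since it carries the lowest deletion cost.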

K8s Pod Lifetime: Is Cleanup Necessary?

The Kubernetes Docs say the following:
In general, Pods do not disappear until someone destroys them. This might be a human or a controller. The only exception to this rule is that Pods with a phase of Succeeded or Failed for more than some duration (determined by the master) will expire and be automatically destroyed.
What is the default value of this duration, and how do I set it? Also, my pods never enter the Succeeded or Failed phase; rather, they enter the Completed or Error state respectively. Is this to be expected, or are the docs out of date?
I check the pod phases using kubectl get pods --show-all, where information about them seems to persist. Is there any additional cleanup necessary? Running kubectl get pods without --show-all does not show any pods after they destroyed.
I am creating pods with kubectl apply -f k8/dummy-pod.yaml and the following yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: dummy.3
  labels:
    vara: a
    role: idk
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - image: gcr.io/gv-test-196801/dummy:v2
    name: dummy-1
I believe this documentation is out of date.
Pod garbage collection using a TTL was abandoned in favor of a threshold on the number of terminated pods, set via --terminated-pod-gc-threshold on the kube-controller-manager (docs here).
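For illustration only: on a kubeadm-style cluster this flag is typically added to the kube-controller-manager static pod manifest (the path below is an assumption and varies by installation):
# in /etc/kubernetes/manifests/kube-controller-manager.yaml, under the container's command:
- --terminated-pod-gc-threshold=100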
Currently, deleting a DaemonSet, Deployment, ReplicaSet or StatefulSet will orphan its pods by default.
You can work around this by enabling cascading deletes.
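For example (the flag form depends on your kubectl version; newer clients take a policy value, older ones a boolean):
kubectl delete deployment DEPLOYMENTNAME --cascade=foreground
kubectl delete deployment DEPLOYMENTNAME --cascade=true    # older kubectl versions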
This behavior will change in 1.10
Prior to apps/v1 the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified.
See the Kubernetes blog.

Pod gets recreated after deletion

I'm unable to delete a Kubernetes pod; it keeps getting recreated.
There's no service or deployment associated with the pod. There is a label on the pod though; is that the root cause?
If I edit the label out with kubectl edit pod podname, it removes the label from the pod but creates a new pod with the same label at the same time.
Pods can be created by ReplicationControllers or ReplicaSets. The latter might be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these two.
You can check for these with these commands:
kubectl get rs
kubectl get rc
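Once you have identified the owner, delete it instead of the pod (the names are placeholders):
kubectl delete rs REPLICASET_NAME
kubectl delete rc REPLICATIONCONTROLLER_NAME
With the owner gone, the pod will be garbage collected and will not come back.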