How do you hard force a pod to terminate?
We have tried every documented form of kubectl delete pod and they remain. kubectl reports that they've been deleted, but kubectl get pods tells a different story. Every pod they could have been depending on has been deleted, as has every pod that could be depending on them.
Is there any form of kubectl SERIOUSLY_DELETE this pod?
I've tried: kubectl delete pods --all --grace-period=0 --force -n monitoring
With no favorable result. I've also tried to delete them individually.
NAME                        READY   STATUS        RESTARTS   AGE
es-master-962148878-7vxpp   1/1     Terminating   1          12d
es-master-962148878-h1r4k   1/1     Terminating   1          12d
es-master-962148878-rkg9g   1/1     Terminating   1          12d
Taken from kubectl delete --help:
kubectl delete pod foo --grace-period=0 --force
Note that if your pods are controlled via e.g. a deployment, then a new one will be recreated every time you delete one. So do make sure that's not the symptom you're observing!
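If pods remain stuck in Terminating even after a force delete, a lingering finalizer on the pod object is a common cause. As a sketch (reusing one of the pod names from the output above; the flags are standard kubectl):

# inspect the finalizers on a stuck pod
kubectl get pod es-master-962148878-7vxpp -n monitoring -o jsonpath='{.metadata.finalizers}'
# clear them so the API server can complete the deletion
kubectl patch pod es-master-962148878-7vxpp -n monitoring -p '{"metadata":{"finalizers":null}}'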
I want to remove zk and kafka from my k8s
$ kubectl get pods
NAME               READY   STATUS             RESTARTS   AGE
kafka1-mvzch       1/1     Running            1          25s
kafka2-m292k       0/1     CrashLoopBackOff   8          20m
zookeeper1-qhmnf   1/1     Running            0          20m
zookeeper2-t7r8w   1/1     Running            0          20m
$ kubectl delete pod kafka1-mvzch kafka2-m292k zookeeper1-qhmnf zookeeper2-t7r8w
pod "kafka1-mvzch" deleted
pod "kafka2-m292k" deleted
pod "zookeeper1-qhmnf" deleted
pod "zookeeper2-t7r8w" deleted
but when I run kubectl get pods, it still shows the pods.
And I have no service or deployment that could be recreating them:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   7h1m
$ kubectl get deployment
No resources found in default namespace.
You are removing the pods, and they are indeed deleted.
But some other construct re-creates pods to replace the (now deleted) previous ones.
In fact, the random-looking suffix in the pod names suggests that another controller is operating the pods.
Looking at the linked tutorial, you will notice that a ReplicationController is created; it is what keeps re-creating the pods.
If you want them gone, remove the ReplicationController and the pods will be deleted along with it.
You can use kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}' to identify the owner object of a pod. The owner might be a Deployment, StatefulSet, ReplicationController, etc.
Looking at the medium.com guide that you mentioned, I see that it suggests creating ReplicationControllers.
You can clean up your namespace by running kubectl delete replicationcontroller --all.
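A hypothetical session tying the two together (the ReplicationController names here are assumed from the pod-name prefixes; verify the actual owner names on your cluster first):

$ kubectl get pod kafka1-mvzch -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
ReplicationController/kafka1
$ kubectl delete replicationcontroller kafka1
replicationcontroller "kafka1" deleted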
Strangely, one pod in the Kubernetes cluster crashes but the other doesn't!
codingjediweb-6d77f46b56-5mffg   0/1   CrashLoopBackOff   3   81s
codingjediweb-6d77f46b56-vcr8q   1/1   Running            0   81s
They should both run the same image and both should work. What could be the reason?
I suspect that the crashing pod is running an old image, but I don't know why. I fixed an issue and expected the new code to work, which it does on one of the pods.
Is it possible that different pods run different images? Is there a way to check which pod is running which image? Is there a way to "flush" an old image or force K8s to download it even if it has a cached copy?
UPDATE
After Famen's suggestion, I looked at the images. The crashing container seems to be using an existing (possibly old) image. How can I make K8s always pull an image?
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get pods
NAME                             READY   STATUS             RESTARTS   AGE
busybox                          1/1     Running            1          2d1h
codingjediweb-6d77f46b56-5mffg   0/1     CrashLoopBackOff   10         29m
codingjediweb-6d77f46b56-vcr8q   1/1     Running            0          29m
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl describe pod codingjediweb-6d77f46b56-vcr8q | grep image
  Normal  Pulling  29m  kubelet, gke-codingjediweb-cluste-default-pool-69be8339-wtjt  Pulling image "docker.io/manuchadha25/codingjediweb:08072020v3"
  Normal  Pulled   29m  kubelet, gke-codingjediweb-cluste-default-pool-69be8339-wtjt  Successfully pulled image "docker.io/manuchadha25/codingjediweb:08072020v3"
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl describe pod codingjediweb-6d77f46b56-5mffg | grep image
  Normal  Pulled  28m (x5 over 30m)  kubelet, gke-codingjediweb-cluste-default-pool-69be8339-p5hx  Container image "docker.io/manuchadha25/codingjediweb:08072020v3" already present on machine
manuchadha25@cloudshell:~ (copper-frame-262317)$
Also, the working pod has two entries for the image (Pulling and Pulled). Why are there two?
When you create a Deployment, a ReplicaSet is created in the background. Each pod of that ReplicaSet has the same properties (i.e. image, memory).
When you apply changes by updating the PodTemplateSpec of the Deployment, a new ReplicaSet is created, and the Deployment controller manages moving the pods from the old ReplicaSet to the new one at a controlled rate. At that point you may find pods from different ReplicaSets with different properties.
To check the image:
# get pod's yaml
$ kubectl get pods -n <namespace> <pod-name> -o yaml
# get deployment's yaml
$ kubectl get deployments -n <namespace> <deployment-name> -o yaml
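To see at a glance which image every pod is actually running, a standard jsonpath query also works (nothing project-specific assumed here):

$ kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'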
Set imagePullPolicy to Always in your deployment YAML to force a pull and make the pods use the updated image.
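A minimal sketch of where the field goes, reusing the image from your output (the surrounding names and labels are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: codingjediweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: codingjediweb
  template:
    metadata:
      labels:
        app: codingjediweb
    spec:
      containers:
      - name: codingjediweb
        image: docker.io/manuchadha25/codingjediweb:08072020v3
        imagePullPolicy: Always   # pull from the registry on every pod start instead of reusing the node's cache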
I would like to roll back a deployment in my environment.
Command:
kubectl rollout undo deployment/foo
Steps that are performed:
create pods with old configurations
delete old pods
Is there a way to skip the last step? For example, a developer would like to check why the init command failed and debug it.
I didn't find any information about that in the documentation.
Yes, it is possible. Before doing the rollout, you first need to remove the labels (the ones the ReplicaSet controlling that pod selects on) from the unhealthy pod. That way the pod no longer belongs to the deployment, and even if you do the rollout it will still be there. Example:
$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
sleeper   1/1     1            1           47h
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          5m46s   pod-template-hash=d75b55fc9,run=sleeper
$ kubectl label pod sleeper-d75b55fc9-87k5k pod-template-hash- run-
pod/sleeper-d75b55fc9-87k5k labeled
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          6m34s   <none>
sleeper-d75b55fc9-swkj9   1/1     Running   0          3s      pod-template-hash=d75b55fc9,run=sleeper
So what happens here: we have a pod sleeper-d75b55fc9-87k5k that belongs to the sleeper deployment. We remove all labels from it; the deployment detects that the pod "has gone", so it creates a new one, sleeper-d75b55fc9-swkj9. The old pod is still there, ready for debugging, and only sleeper-d75b55fc9-swkj9 will be affected by the rollout.
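One caveat: once the labels are removed, no controller owns the old pod anymore, so after you are done debugging you have to clean it up by hand:

$ kubectl delete pod sleeper-d75b55fc9-87k5k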
I want to see the details of both terminated and running pods in Kubernetes.
The command below only shows the running pods; I want to see the history of all pods terminated so far.
$ ./kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE
POD1   1/1     Running   0          3d    10.333.33.333   node123
POD2   1/1     Running   0          4d    10.333.33.333   node121
POD3   1/1     Running   0          1m    10.333.33.333   node124
I expect to get the list of terminated pods with a kubectl command.
Since v1.10 kubectl prints terminated pods by default:
--show-all (which only affected pods and only for human readable/non-API printers) is now defaulted to true and deprecated. The flag determines whether pods in a terminal state are displayed. It will be inert in 1.11 and removed in a future release.
If you are running a version older than v1.10, you still need to pass the --show-all flag.
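On current versions you can also filter explicitly for pods in a terminal phase, using a standard field selector:

$ kubectl get pods --all-namespaces --field-selector=status.phase=Failed
$ kubectl get pods --all-namespaces --field-selector=status.phase=Succeeded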
Can't delete pods in pending state?
[root@vpct-k8s-1 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE     NAME               READY   STATUS    RESTARTS   AGE
kube-system   kube-ui-v2-ck0yw   0/1     Pending   0          1h
[root@vpct-k8s-1 kubernetes]# kubectl get rc --all-namespaces
NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS   AGE
kube-system   kube-ui-v2   kube-ui        gcr.io/google_containers/kube-ui:v2   k8s-app=kube-ui,version=v2   1          1h
kubectl get ns
kubectl get pods --all-namespaces
kubectl get deployment -n (namespacename)
kubectl get deployments --all-namespaces
kubectl delete deployment (deploymentname) -n (namespacename)
Try the below command
kubectl delete pod kube-ui-v2-ck0yw --grace-period=0 --force -n kube-system
To delete a pod in the Pending state, delete the deployment it belongs to by passing the deployment's manifest file to kubectl.
Please check the below command:
kubectl delete -f deployment-file-name.yaml
Depending on the number of replicas you specified when creating the cluster, you might be able to delete the pending pod, but another one will be recreated automatically. You can delete the pod by running this command:
$ ./cluster/kubectl.sh delete pod kube-ui-v2-ck0yw
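If you want the pod gone for good rather than recreated, scale the owning replication controller down first (the controller name here is taken from the kubectl get rc output above):

$ ./cluster/kubectl.sh scale rc kube-ui-v2 --replicas=0 --namespace=kube-system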