I'm unable to delete a Kubernetes pod; it keeps getting recreated.
There's no Service or Deployment associated with the pod. There is a label on the pod though; is that the root cause?
If I edit the label out with kubectl edit pod podname, the label is removed from the pod, but a new pod with the same label is created at the same time.
Pods can be created by ReplicationControllers or ReplicaSets. The latter might itself be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these two.
You can check for these with the following commands:
kubectl get rs
kubectl get rc
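If neither of those lists anything, you can also read the Pod's owner directly from its metadata; a quick check (the pod name is a placeholder):
# Print the kind and name of whatever owns the pod (empty output means it has no owner)
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'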
I was wondering what would happen in this scenario or if it's even possible:
Kubernetes cluster:
If the Deployment has a container restartPolicy of Always,
but at the Pod level you specify a restartPolicy of Never,
which one will Kubernetes honor?
As @Turing85 commented, in the normal use case a Deployment and its Pods cannot have different restartPolicies, as the Deployment creates the Pods. If you try to alter a Pod's restartPolicy manually after it is created (e.g. with kubectl edit pod <pod-name>), you will get an error, as this property cannot be changed after creation. However, we can trick a Deployment, or more specifically the underlying ReplicaSet, into accepting a manually created Pod. ReplicaSets in Kubernetes know which Pods are theirs through the use of labels. If you inspect the ReplicaSet belonging to your Deployment, you will see a label selector that shows you which labels need to be present for the ReplicaSet to consider a Pod part of the ReplicaSet.
So if you want to manually create a Pod that is later managed by the ReplicaSet, you first create a Pod with the desired restartPolicy. After this Pod has started and is ready, you delete an existing Pod of the ReplicaSet and update the labels of your Pod to contain the correct labels. Now there is a Pod in the ReplicaSet with a different restartPolicy.
This is really hacky and actually depends on the timing of deletion and update of the labels, because as soon as you delete a Pod in the ReplicaSet it will try to create a new one. You essentially have to be faster with the label change than the ReplicaSet is with the creation of a new Pod.
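A rough sketch of that sequence, assuming the ReplicaSet's selector is app=my-app (the pod name, image, and label are hypothetical):
# pod-never.yaml -- a manually created Pod with a different restartPolicy
apiVersion: v1
kind: Pod
metadata:
  name: manual-pod        # hypothetical name; create it WITHOUT the selector labels at first
spec:
  restartPolicy: Never    # differs from the Deployment's Pods (Always)
  containers:
  - name: app
    image: nginx:1.25     # hypothetical image
Then, in quick succession:
kubectl apply -f pod-never.yaml
kubectl delete pod <replicaset-pod-name>    # one of the Pods the ReplicaSet currently owns
kubectl label pod manual-pod app=my-app     # add the label the selector expects before the ReplicaSet replaces the deleted Pod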
I added a pod through Kubernetes Dashboard. I used Create new resource and I created a pod from input.
I then tried to delete it with:
kubectl delete -n default pod pod-name-0
It deletes it, but it gets redeployed. As I understand it, I should delete its deployment first. So to list deployments, I used:
kubectl get deployments
But it's not there. How do I permanently delete a pod?
The pods are maintained by a ReplicationController, and they are automatically replaced if they fail, are deleted, or are terminated. You should check:
kubectl describe pods POD_NAME
kubectl describe replicationcontrollers/REPLICATION_CONTROLLER_NAME
Alternatively, you can check the ReplicaSets with kubectl get rs.
Afterwards you can run kubectl edit rs REPLICASET_NAME and change the replicas count up or down as desired.
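Equivalently, you can scale without opening an editor. Note that if the ReplicaSet is owned by a Deployment, the Deployment will scale it back up, so in that case scale or delete the Deployment instead:
# Scale the ReplicaSet down so it stops recreating pods
kubectl scale rs REPLICASET_NAME --replicas=0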
See also: a nice explanation regarding ReplicaSet vs. ReplicationController.
I have installed Prometheus using a Helm chart, so I have 4 deployments listed:
prometheus-alertmanager
prometheus-server
prometheus-pushgateway
prometheus-kube-state-metrics
All pods of these deployments are running as expected.
By mistake, I restarted one deployment using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is being created and keeps crashing; if I delete the deployment, the previous pod is deleted as well. So what can I do about that CrashLoopBackOff pod?
Screenshot of kubectl output
You can simply delete that pod with the kubectl delete pod <pod_name> command, or attempt to delete all pods in CrashLoopBackOff status with:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
Make sure that the corresponding deployment is set to 1 replica (or any other chosen number). If you delete a pod(s) of that deployment, it will create a new one while keeping the desired replica count.
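To double-check the desired replica count before deleting the crashing pod (using the deployment name from the question):
# Show how many replicas the deployment is supposed to keep running
kubectl get deployment prometheus-alertmanager -o jsonpath='{.spec.replicas}{"\n"}'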
These two pods (one running and the other in CrashLoopBackOff) belong to different deployments, as they are suffixed by different hashes, e.g. pod1-abc-123 and pod1-abc-456 belong to the same deployment template, while pod1-abc-123 and pod2-def-566 belong to different deployments.
A Deployment creates a ReplicaSet, so make sure you also delete the corresponding old ReplicaSet: run kubectl get rs | grep 99dd and delete that one, similar to the prometheus-server one.
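A rough sketch of that cleanup (the ReplicaSet name is a placeholder; after a rollout the superseded ReplicaSet typically shows DESIRED 0):
# The leftover ReplicaSet usually shows DESIRED 0 after the rollout
kubectl get rs
# Delete it once you are sure it is no longer needed
kubectl delete rs <old-replicaset-name>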
I have by mistake added a pod in the system namespace "kube-system", and now I am unable to remove this pod. It also seems to have created a ReplicaSet. Every time I delete these items, they are recreated.
I can't seem to find a way to delete pods or ReplicaSets belonging to the system namespace "kube-system".
If you created the pod using kubectl run, then you will need to delete the deployment (which created the replica set, which created the pod). Otherwise, the higher level controllers will continue to ensure that the objects they are responsible for keeping running stay around in the system, even if you try to delete them manually. Try kubectl get deployment --namespace=kube-system to see if you have a deployment in the kube-system namespace. If so, deleting it should also delete the replica set and the pods that you created.
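For example, a minimal sequence (the deployment name is a placeholder, taken from whatever the first command lists):
# List deployments in the kube-system namespace
kubectl get deployment --namespace=kube-system
# Deleting the deployment also removes its ReplicaSet and the pods it created
kubectl delete deployment <deployment-name> --namespace=kube-system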
If a pod is recreated even after kubectl delete pod pod-name, it means that the pod is controlled by a higher-level Kubernetes object such as a Deployment, ReplicaSet, or ReplicationController.
You can use kubectl describe pods pod-name | grep Controllers to find which controller your pod belongs to. You need to delete this higher-level object to delete the pod.
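Note that on newer kubectl versions the describe output labels this field Controlled By rather than Controllers, so a more tolerant check might look like this (the pod name is a placeholder):
# Match either spelling of the owner field in the describe output
kubectl describe pod <pod-name> | grep -iE 'controlled by|controllers'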
I have installed Kubernetes on an Ubuntu server using the instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080 as listed in the example. However, when I run kubectl get pod, I see the status of the container as Pending. I further did kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I then try to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again, I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
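If kubectl describe svc does not reveal it, hostPorts are declared on the Pod spec, so you can also list every container's hostPort across all namespaces; a sketch using kubectl's jsonpath output:
# Print namespace/pod and any hostPorts its containers declare
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'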
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.