How to delete kubernetes pods (and other resources) in the system namespace - kubernetes

I have by mistake added a pod to the system namespace "kube-system", and now I am unable to remove this pod. It also seems to have created a replica set. Every time I delete these items, they are recreated.
I can't seem to find a way to delete pods or replica sets belonging to the system namespace "kube-system".

If you created the pod using kubectl run, then you will need to delete the deployment (which created the replica set, which created the pod). Otherwise, the higher-level controllers will continue to ensure that the objects they are responsible for stay running, even if you try to delete them manually. Try kubectl get deployment --namespace=kube-system to see if you have a deployment in the kube-system namespace. If so, deleting it should also delete the replica set and the pods that you created.
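For example, assuming kubectl run left you with a deployment named my-pod in kube-system (the name here is just a placeholder), the cleanup would look roughly like this:
kubectl get deployment --namespace=kube-system
# deleting the deployment cascades to its replica set and its pods
kubectl delete deployment my-pod --namespace=kube-system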

If a pod is recreated even after kubectl delete pod-name, it means that the pod is controlled by a higher-level Kubernetes object such as a Deployment, ReplicaSet, or ReplicationController.
You can use kubectl describe pods pod-name | grep Controllers to find which controller your pod belongs to. You need to delete this higher-level object to delete the pod.
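Note that newer kubectl versions label this field Controlled By in the describe output, and you can also read the owner straight from the pod's metadata; for example (pod-name is a placeholder):
kubectl describe pod pod-name | grep -i "controlled by"
kubectl get pod pod-name -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'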

Related

Deployment vs POD - change

I was wondering what would happen in this scenario or if it's even possible:
Kubernetes cluster -
If the deployment has a container restartPolicy of: Always
but on the POD level you specify a restartPolicy of: Never
Which will Kubernetes do?
As @Turing85 commented, in the normal use case a Deployment and its Pods cannot have different restartPolicies, as the Deployment creates the Pods. If you try to alter a Pod's restartPolicy manually after it is created (e.g. with kubectl edit pod <pod-name>), you will get an error, as this property cannot be changed after creation. However, we can trick a Deployment, or more specifically the underlying ReplicaSet, into accepting a manually created Pod. ReplicaSets in Kubernetes know which Pods are theirs through the use of labels. If you inspect the ReplicaSet belonging to your Deployment, you will see a label selector that shows you which labels need to be present for the ReplicaSet to consider the Pod part of the ReplicaSet.
So if you want to manually create a Pod that is later managed by the ReplicaSet, you first create a Pod with the desired restartPolicy. After this Pod has started and is ready, you delete an existing Pod of the ReplicaSet and update the labels of your Pod so that it carries the correct labels. Now there is a Pod in the ReplicaSet with a different restartPolicy.
This is really hacky and actually depends on the timing of deletion and update of the labels, because as soon as you delete a Pod in the ReplicaSet it will try to create a new one. You essentially have to be faster with the label change than the ReplicaSet is with the creation of a new Pod.
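As a rough sketch of this trick, assuming the ReplicaSet's selector is app: myapp (the names, label values, and image below are placeholders), the manually created Pod could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: manual-pod
  labels:
    app: standalone      # deliberately NOT matching the ReplicaSet selector yet
spec:
  restartPolicy: Never   # the policy you actually want
  containers:
  - name: main
    image: nginx
Once manual-pod is Ready, delete one of the ReplicaSet's Pods and immediately relabel yours so the ReplicaSet adopts it:
kubectl delete pod <one-replicaset-pod>
kubectl label pod manual-pod app=myapp --overwrite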

How to delete pod created with rolling restart?

I ran kubectl rollout restart deployment.
It created a new pod which is now stuck in Pending state because there are not enough resources to schedule it.
I can't increase the resources.
How do I delete the new pod?
Please check whether that pod has a Deployment controller (which would be recreating the pod):
kubectl get deployments
Then try to delete the Deployment with
kubectl delete deployment DEPLOYMENT_NAME
Also, I would suggest checking resource allocation and usage on your GKE nodes with the following command:
kubectl describe nodes | grep -A10 "Allocated resources"
And if you need more resources, try activating the GKE cluster autoscaler (CA), or, in case you already have it enabled, increase its maximum number of nodes. You can also add a new node manually by resizing the node pool you are using.
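If you go the manual route, resizing a GKE node pool is done with gcloud rather than kubectl; as far as I recall the command looks like this (cluster, pool, zone, and node count are placeholders):
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 4 --zone us-central1-a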

What is the recommended way to move lone pods to a different node before draining, such that kubectl drain node1 --force does not delete the pod?

Cannot find how to do so in the docs. After draining the node with --ignore-daemonsets --force, pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet are lost. How should I move such pods prior to issuing the drain command? I want to preserve the local data on these pods.
A good practice is to always start a Pod as a Deployment with spec.replicas: 1. It's very easy, as the Deployment's spec.template literally takes in your Pod spec, and it's quite convenient, as the Deployment will make sure your Pod is always running.
Then, assuming you'll only have 1 replica of your Pod, you can simply use a PersistentVolumeClaim and attach it to the Pod as a volume; you do not need a StatefulSet in that case. Your data will be stored in the PVC, and whenever your Pod is moved to another node for whatever reason, it will reattach the volume automatically without losing any data.
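As a minimal sketch of that setup (names, image, mount path, and storage size are all placeholders), a single-replica Deployment backed by a PVC could look something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: nginx              # placeholder image
        volumeMounts:
        - name: data
          mountPath: /the/path    # the path whose data should survive rescheduling
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-data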
Now, if it's too late for you, and your Pod hasn't got a volume pointing to a PVC, you can still get ready to change that by implementing the Deployment/PVC approach, and manually copy data out of your current pod:
kubectl cp theNamespace/thePod:/the/path /somewhere/on/your/local/computer
Before copying it back to the new pod:
kubectl cp /somewhere/on/your/local/computer theNamespace/theNewPod:/the/path
This time, just make sure /the/path (to reuse the example above) is actually a Volume mapped to a PVC so you won't have to do that manually again!

Delete all the contents from a kubernetes node

How do I delete all the contents from a Kubernetes node? Contents include deployments, replica sets etc. I tried to delete deployments separately, but Kubernetes recreates all the pods again. Is there any way to delete all the replica sets present on a node?
If you are testing things, the easiest way would be
kubectl delete deployment --all
Although, if you are using minikube, the easiest would probably be to delete the machine and start again with a fresh node:
minikube delete
minikube start
If we are talking about a production cluster, Kubernetes has a built-in feature to drain a node of the cluster, removing all the objects from that node safely.
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node. Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
Note: By default kubectl drain will ignore certain system pods on the node that cannot be killed; see the kubectl drain documentation for more details.
When kubectl drain returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.
First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
kubectl get nodes
Next, tell Kubernetes to drain the node:
kubectl drain <node name>
Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). drain waits for graceful termination. You should not operate on the machine until the command completes.
If you leave the node in the cluster during the maintenance operation, you need to run
kubectl uncordon <node name>
afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
Please note that if there are any pods that are not managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force, as mentioned in the docs.
kubectl drain <node name> --force
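If you want drain to keep a minimum number of replicas of an application available while it evicts Pods, you can create a PodDisruptionBudget beforehand; a minimal sketch (name, selector, and minAvailable are placeholders) would be:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1       # keep at least one matching Pod running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app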
In case you are using minikube:
minikube delete --all
It will let you start a new, clean cluster.
In case you are running on Kubernetes:
kubectl delete pods,deployments -A --all
It will remove them from all namespaces; you can add more object types to the same command.
Kubernetes provides the Namespace object for isolation and separation of concerns. It is therefore recommended to apply all of your k8s resource objects (Deployments, ReplicaSets, Pods, Services and others) in a custom namespace.
Now, if you want to remove all of the relevant and related k8s resources, you just need to delete the namespace, which will remove all of those resources:
kubectl create namespace custom-namespace
kubectl create -f deployment.yaml --namespace=custom-namespace
kubectl delete namespaces custom-namespace
I have attached a link for further research.
Namespaces
I tried so many variations from tutorials to delete old pods, including everything here.
What finally worked for me was:
kubectl delete replicaset --all
Deleting them one at a time didn't seem to work; it was only with the --all flag that all pods were deleted without being recreated.

Pod gets recreated after deletion

I'm unable to delete the Kubernetes pod; it keeps getting recreated.
There's no service or deployment associated with the pod. There is a label on the pod though, is that the root cause?
If I edit the label out with kubectl edit pod podname, it removes the label from the pod, but creates a new pod with the same label at the same time.
Pods can be created by ReplicationControllers or ReplicaSets. The latter might in turn be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these.
You can check for these with the following commands:
kubectl get rs
kubectl get rc
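Once you have found the owner in one of those lists, deleting it will stop the Pod from being recreated; for example, if it turns out to be a ReplicaSet (the name is a placeholder):
kubectl delete rs my-replicaset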