Pod is not found when trying to delete, but can be patched - kubernetes

I have a pod that I can see on GKE, but if I try to delete it, I get this error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation completes successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?

kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; its name will be the name of your deployment plus a generated hash (a random set of letters and numbers), something like "my-pod-ace3g".
To see the pods in the namespace use
kubectl get pods -n {namespace}
Since you've put the deployment in the "kube-system" namespace, you would use
kubectl get pods -n kube-system
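Once you know the real pod name, you can delete it directly. A minimal sketch, assuming the deployment is named my-pod as in the question (the -ace3g suffix is only illustrative):
# list the pods created by the deployment
kubectl get pods -n kube-system --context=cluster-1 | grep my-pod
# delete one of the listed pods; the Deployment will immediately recreate it
kubectl delete pod my-pod-ace3g -n kube-system --context=cluster-1
To get rid of the pod permanently, delete the Deployment itself rather than the pod.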
Side note: generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default that you can use to test things.

Related

How to debug why my pods are pending in GCE

I'm trying to get a pod running on GCE. The pod has an init container and is created by me applying a manifest with a deployment that creates 1 replica of the pod.
When I look at my workloads on the cloud console, I can see that under 'Active revisions' my deployment is in the state of 'Pods are pending', and under 'Managed pods', the status is 'PodsInitializing'.
The container logs are empty, and the audit logs contain a single entry for the creation of the deployment.
My pods seem to be stuck in the above state, and I'm not really sure why. How do I go about debugging that?
Edit:
kubectl get pods --namespace=my-namespace
Outputs:
NAME READY STATUS RESTARTS AGE
my-pod-v77jm 0/1 Init:0/1 0 55m
But when I run:
kubectl describe pod my-pod-v77jm
I get
Error from server (NotFound): pods "my-pod-v77jm" not found
If you have access to kube-api via kubectl:
Use describe to see details about the pod and its containers:
kubectl describe pod myPod --namespace mynamespace
To view container logs (including init containers)
kubectl logs myPod --namespace mynamespace -c initContainerName
You can get more information about pod statuses and how to debug init containers here
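As a rough sketch of that workflow applied to the question above (the namespace and pod name are taken from the question; the init container name is a placeholder), you could run:
# describe needs the right namespace, otherwise you get the NotFound error shown above
kubectl describe pod my-pod-v77jm --namespace=my-namespace
# events often explain why an init container is stuck
kubectl get events --namespace=my-namespace --field-selector involvedObject.name=my-pod-v77jm
# logs of the init container
kubectl logs my-pod-v77jm --namespace=my-namespace -c <init-container-name>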

kubectl create doesn't seem to do anything

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods, and I can't access it by name; it is as if it does not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
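As an optional convenience (a sketch, not required for the fix), you can also make mynamespace the default for your current context so that plain kubectl get pods works:
# switch the current context's default namespace
kubectl config set-context --current --namespace=mynamespace
kubectl get pods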

Unable to deploy mariadb on kubernetes using openstack-helm charts

I am trying to deploy OpenStack on kubernetes using helm charts. I am seeing the error below when trying to deploy MariaDB: mariadb-server-0 looks for a PVC that is in a Lost state. I tried creating the PersistentVolume and assigning it, but the pod still looks for the lost PVC, as shown in the error below.
2018-10-05T17:05:04.087573+00:00 node2: kubelet[9897]: E1005 17:05:04.087449 9897 desired_state_of_world_populator.go:273] Error processing volume "mysql-data" for pod "mariadb-server-0_openstack(c259471b-c8c0-11e8-9636-441ea14dfc98)": error processing PVC "openstack"/"mysql-data-mariadb-server-0": PVC openstack/mysql-data-mariadb-server-0 has non-bound phase ("Lost") or empty pvc.Spec.VolumeName ("pvc-74e81ef0-bb97-11e8-9636-441ea14dfc98")
Is there a way to delete the old PVC entry from the cluster, so MariaDB doesn't look for it while deploying?
To delete a PVC, you can just use the typical kubectl commands.
See all the PVCs:
kubectl -n <namespace> get pvc
To delete PVCs:
kubectl -n <namespace> delete pvc <pvc-id-from-the-previous-command>
Similarly, I would check the PVs to see if there are any dangling PVs (PVs are cluster-scoped, so no namespace flag is needed).
See all the PVs:
kubectl get pv
To delete PVs:
kubectl delete pv <pv-name-from-the-previous-command>
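Applied to the error message above, the concrete commands would look roughly like this (the namespace and PVC name are taken from the kubelet log line; the finalizer patch is a last resort, only needed if the PVC hangs in Terminating):
kubectl -n openstack get pvc
kubectl -n openstack delete pvc mysql-data-mariadb-server-0
# only if the PVC gets stuck in Terminating: clear its finalizers (use with care)
kubectl -n openstack patch pvc mysql-data-mariadb-server-0 -p '{"metadata":{"finalizers":null}}'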

how to delete tiller from kubernetes cluster

Tiller is not working properly in my kubernetes cluster. I want to delete everything related to Tiller. Tiller (2.5.1) has 1 Deployment, 1 ReplicaSet and 1 Pod.
I tried: kubectl delete deployment tiller-deploy -n kube-system
results in "deployment "tiller-deploy" deleted"
however, tiller-deploy is immediately recreated
kubectl get deployments -n kube-system shows tiller-deploy running again
I also tried: kubectl delete rs tiller-deploy-393110584 -n kube-system
results in "replicaset "tiller-deploy-2745651589" deleted"
however, tiller-deploy-2745651589 is immediately recreated
kubectl get rs -n kube-system shows tiller-deploy-2745651589 running again
What is the correct way to permanently delete Tiller?
To uninstall tiller from a kubernetes cluster:
helm reset
To delete failed tiller from a kubernetes cluster:
helm reset --force
If you want to remove tiller from your cluster, the cleanest way is to remove all the components deployed during the installation.
If you already know the namespace where tiller is deployed:
$ kubectl delete all -l app=helm -n kube-system
pod "tiller-deploy-8557598fbc-5b2g7" deleted
service "tiller-deploy" deleted
deployment.apps "tiller-deploy" deleted
replicaset.apps "tiller-deploy-75f6c87b87" deleted
replicaset.apps "tiller-deploy-8557598fbc" deleted
Be careful with this command: it will delete everything in the indicated namespace that carries the corresponding label.
Here app is the label assigned during installation, and it identifies all the components (replication controllers, deployments, services, etc.).
You can describe the pod to verify the labels:
$ kubectl describe pod tiller-deploy-8557598fbc-5b2g7 -n kube-system
Name: tiller-deploy-8557598fbc-5b2g7
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: srvlpi03/192.168.1.133
Start Time: Tue, 20 Aug 2019 15:51:03 -0400
Labels: app=helm
name=tiller
pod-template-hash=8557598fbc
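A quicker way to check the labels, as a small sketch, is to list the pods together with their labels instead of describing each one:
# show every pod in kube-system with its labels
kubectl get pods -n kube-system --show-labels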
You have to uninstall 3 things to completely get rid of tiller:
Deployment
Service
Secret
kubectl delete deployment -n some-namespace tiller-deploy
kubectl delete svc -n some-namespace tiller-deploy
kubectl delete secret -n some-namespace tiller-secret
Be sure to back up the secret, as it stores all the certificates if TLS is enabled.
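A minimal sketch of such a backup, assuming the secret is named tiller-secret as above:
# dump the secret to a local file before deleting it
kubectl get secret tiller-secret -n some-namespace -o yaml > tiller-secret-backup.yaml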
You can also try the command below:
kubectl delete deployment tiller-deploy --namespace kube-system
Turns out that it was running as a ReplicaSet:
kubectl delete replicasets -n kube-system tiller-deploy-6fdb84698b
worked for me; helm reset --force didn't remove the tiller.
kubectl get hpa --all-namespaces (or -n kube-system)
In a normal tiller deployment, a replica set is used. In your setup there might be a HorizontalPodAutoscaler object targeting the tiller replica sets.
You can delete the HPA first and then delete the associated replicasets, pods and configmaps, or you can reset helm using the "helm reset" command.
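A short sketch of that order of operations (the HPA name is a placeholder; the app=helm label matches the tiller components shown in the answer above):
# find any HPA that rescales the tiller replica set
kubectl get hpa -n kube-system
# delete the HPA first so nothing scales the replica set back up
kubectl delete hpa <hpa-name> -n kube-system
# then delete the tiller replica sets themselves
kubectl delete rs -n kube-system -l app=helm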
Don't forget:
kubectl -n kube-system delete service tiller-deploy

Kubernetes pod gets recreated when deleted

I have started pods with the command:
$ kubectl run busybox \
--image=busybox \
--restart=Never \
--tty \
-i \
--generator=run-pod/v1
Something went wrong, and now I can't delete this Pod.
I tried using the methods described below but the Pod keeps being recreated.
$ kubectl delete pods busybox-na3tm
pod "busybox-na3tm" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-vlzh3 0/1 ContainerCreating 0 14s
$ kubectl delete pod busybox-vlzh3 --grace-period=0
$ kubectl delete pods --all
pod "busybox-131cq" deleted
pod "busybox-136x9" deleted
pod "busybox-13f8a" deleted
pod "busybox-13svg" deleted
pod "busybox-1465m" deleted
pod "busybox-14uz1" deleted
pod "busybox-15raj" deleted
pod "busybox-160to" deleted
pod "busybox-16191" deleted
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox-c9rnx 0/1 RunContainerError 0 23s
You need to delete the deployment, which should in turn delete the pods and the replica sets: https://github.com/kubernetes/kubernetes/issues/24137
To list all deployments:
kubectl get deployments --all-namespaces
Then to delete the deployment:
kubectl delete -n NAMESPACE deployment DEPLOYMENT
Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is default, leave off the -n option altogether.
In some cases it could also be running due to a job or daemonset.
Check the following and run their appropriate delete command.
kubectl get jobs
kubectl get daemonsets.apps --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
Instead of trying to figure out whether it is a deployment, daemonset, statefulset... or what (in my case it was a replication controller that kept spawning new pods :)
In order to determine what kept spawning the pods, I got all the resources with this command:
kubectl get all
Of course you could also get all resources from all namespaces:
kubectl get all --all-namespaces
or define the namespace you would like to inspect:
kubectl get all -n NAMESPACE_NAME
Once I saw that the replication controller was responsible for my trouble I deleted it:
kubectl delete replicationcontroller/CONTROLLER_NAME
If your pod has a name like name-xxx-yyy, it could be controlled by a replicasets.apps object named name-xxx; you should delete that replicaset before deleting the pod:
kubectl delete replicasets.apps name-xxx
Obviously something is respawning the pod. While a lot of the other answers have you looking at everything (replica sets, jobs, deployments, stateful sets, ...) to find what may be respawning the pod, you can instead just look at the pod to see what spawned it. For example do:
$ kubectl describe pod $mypod | grep 'Controlled By:'
Controlled By: ReplicaSet/foobar
This tells you exactly what created the pod. You can then go and delete that.
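Continuing the example output above (foobar is just the name from that output), you would then delete the owner rather than the pod:
# check whether the ReplicaSet is itself owned by something (e.g. a Deployment)
kubectl describe rs foobar | grep 'Controlled By:'
# if nothing owns it, deleting the ReplicaSet stops the pod from coming back
kubectl delete replicaset foobar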
Look out for stateful sets as well
kubectl get sts --all-namespaces
to delete all the stateful sets in a namespace
kubectl --namespace <yournamespace> delete sts --all
to delete them one by one
kubectl --namespace ag1 delete sts mssql1
kubectl --namespace ag1 delete sts mssql2
kubectl --namespace ag1 delete sts mssql3
This will provide information about all the pods, deployments, services and jobs in the namespace:
kubectl get pods,services,deployments,jobs
Pods can be created either by deployments or by jobs:
kubectl delete job [job_name]
kubectl delete deployment [deployment_name]
If you delete the deployment or job, the pods will stop being recreated.
Many answers here tell you to delete a specific k8s object, but you can delete multiple objects at once instead of one by one:
kubectl delete deployments,jobs,services,pods --all -n <namespace>
In my case, I'm running an OpenShift cluster with OLM - Operator Lifecycle Manager. OLM is what controls the deployment, so when I deleted the deployment it was not sufficient to stop the pods from restarting.
Only when I deleted OLM and its subscription were the deployment, services and pods gone.
First list all k8s objects in your namespace:
$ kubectl get all -n openshift-submariner
NAME READY STATUS RESTARTS AGE
pod/submariner-operator-847f545595-jwv27 1/1 Running 0 8d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/submariner-operator-metrics ClusterIP 101.34.190.249 <none> 8383/TCP 8d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/submariner-operator 1/1 1 1 8d
NAME DESIRED CURRENT READY AGE
replicaset.apps/submariner-operator-847f545595 1 1 1 8d
OLM is not listed with get all, so I searched for it specifically:
$ kubectl get olm -n openshift-submariner
NAME AGE
operatorgroup.operators.coreos.com/openshift-submariner 8d
NAME DISPLAY VERSION
clusterserviceversion.operators.coreos.com/submariner-operator Submariner 0.0.1
Now delete all objects, including OLMs, subscriptions, deployments, replica-sets, etc:
$ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner
operatorgroup.operators.coreos.com "openshift-submariner" deleted
clusterserviceversion.operators.coreos.com "submariner-operator" deleted
deployment.extensions "submariner-operator" deleted
subscription.operators.coreos.com "submariner" deleted
service "submariner-operator-metrics" deleted
replicaset.extensions "submariner-operator-847f545595" deleted
pod "submariner-operator-847f545595-jwv27" deleted
List objects again - all gone:
$ kubectl get all -n openshift-submariner
No resources found.
$ kubectl get olm -n openshift-submariner
No resources found.
After taking an interactive tutorial I ended up with a bunch of pods, services, deployments:
me@pooh ~ > kubectl get pods,services
NAME READY STATUS RESTARTS AGE
pod/kubernetes-bootcamp-5c69669756-lzft5 1/1 Running 0 43s
pod/kubernetes-bootcamp-5c69669756-n947m 1/1 Running 0 43s
pod/kubernetes-bootcamp-5c69669756-s2jhl 1/1 Running 0 43s
pod/kubernetes-bootcamp-5c69669756-v8vd4 1/1 Running 0 43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37s
me@pooh ~ > kubectl get deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default kubernetes-bootcamp 4 4 4 4 1h
docker compose 1 1 1 1 1d
docker compose-api 1 1 1 1 1d
kube-system kube-dns 1 1 1 1 1d
To clean up everything, delete --all worked fine:
me@pooh ~ > kubectl delete pods,services,deployments --all
pod "kubernetes-bootcamp-5c69669756-lzft5" deleted
pod "kubernetes-bootcamp-5c69669756-n947m" deleted
pod "kubernetes-bootcamp-5c69669756-s2jhl" deleted
pod "kubernetes-bootcamp-5c69669756-v8vd4" deleted
service "kubernetes" deleted
deployment.extensions "kubernetes-bootcamp" deleted
That left me with (what I think is) an empty Kubernetes cluster:
me@pooh ~ > kubectl get pods,services,deployments
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8m
In some cases the pods will still not go away even after deleting the deployment. In that case, to force delete them, you can run the command below.
kubectl delete pods podname --grace-period=0 --force
If a pod is automatically recreated even after you delete it manually, then those pods were created by a Deployment.
When you create a deployment, it automatically creates a ReplicaSet and Pods. Depending on how many replicas you specified in the deployment spec, it will initially create that number of pods.
When you try to delete any of those pods manually, the deployment will automatically create them again.
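You can see this ownership chain directly on the pod itself; a small sketch, where <pod-name> and <namespace> are placeholders:
# prints the kind/name of whatever owns the pod, typically ReplicaSet/<deployment>-<hash>
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'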
Yes, sometimes you need to delete pods with force, but in this case the force command doesn't work.
Instead of removing the namespace, you can try removing the ReplicaSet:
kubectl get rs --all-namespaces
Then delete the ReplicaSet:
kubectl delete rs your_app_name
The root cause of the behavior asked about in the question is the deployment/job/replicaset spec attribute strategy->type, which defines what should happen when the pod is destroyed (either implicitly or explicitly). In my case, it was Recreate.
As per @nomad's answer, deleting the deployment/job/replicaset is the simple fix, and it avoids experimenting with deadly combos and messing up the cluster as a novice user.
Try the following commands to understand the behind-the-scenes actions before jumping into debugging:
kubectl get all -A -o name
kubectl get events -A | grep <pod-name>
In my case I deployed via a YAML file, e.g. kubectl apply -f deployment.yaml, and the solution appears to be to delete via kubectl delete -f deployment.yaml.
First, list the deployments:
kubectl get deployments
Then delete the deployment:
kubectl delete deployment <deployment_name>
If you have a job that keeps running, you need to find the job and delete it:
kubectl get job --all-namespaces | grep <name>
and
kubectl delete job <job-name>
You can run kubectl get replicasets and check for an old replica set based on its age or creation time.
Delete the old replica set if you want to delete the currently running pods of the application:
kubectl delete replicasets <name-of-replicaset>
I also faced this issue. I used the command below to delete the deployment:
kubectl delete deployments DEPLOYMENT_NAME
but the pods were still being recreated, so I cross-checked the ReplicaSets with:
kubectl get rs
and then edited the replicaset, changing the replicas from 1 to 0:
kubectl edit rs REPLICASET_NAME
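Instead of editing the ReplicaSet interactively, you can scale it down in one command; a small sketch using the same placeholder name:
# set the ReplicaSet's replica count to 0 so it stops recreating pods
kubectl scale rs REPLICASET_NAME --replicas=0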
With deployments that have stateful sets (or services, jobs, etc.) you can use the following command, which terminates anything that runs in the specified <NAMESPACE>:
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all
And the forceful variant:
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all --cascade=true --grace-period=0 --force
There are basically two ways to remove pods:
kubectl scale --replicas=0 deploy name_of_deployment
This will set the number of replicas to 0 and hence it will not restart the pods again.
Use helm to uninstall the chart which you have implemented in your pipeline.
Do not delete the deployment directly; instead, use helm to uninstall the chart, which will remove all the objects it created.
The fastest solution for me was installing the Lens IDE and removing the service under the DEPLOYMENTS tab. Just delete from this tab and the replicas will be deleted too.
Kubernetes always works in the format:
deployments >>> replicasets >>> pods
First scale the deployment to 0 replicas and then scale it back to the desired number of replicas (run the command below). You will see that a new replicaset has been created and the pods run with the desired count.
IN-Linux:~ anuragmanikkame$ kubectl scale deploy tomcat -n dev-namespace --replicas=2
deployment.extensions/tomcat scaled
I experienced a similar problem: after deleting the deployment (kubectl delete deploy <name>), the pods kept "Running" and were automatically re-created after deletion (kubectl delete po <name>).
It turned out that the associated replica set was not deleted automatically for some reason, and after deleting that (kubectl delete rs <name>), it was possible to delete the pods.
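If you hit this again, one option on reasonably recent kubectl versions (the deployment name is a placeholder) is to delete with foreground cascading, so kubectl only reports success once the dependent ReplicaSets and pods are gone:
# foreground cascading deletion waits for owned ReplicaSets and pods to be removed
kubectl delete deployment <name> --cascade=foreground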
This has happened to me with some broken 'helm' installs. You might have a bit of a messed up deployment. If none of the previous suggestions work, look for a daemonset and delete that.
e.g.
kubectl get daemonset --namespace <NAMESPACE>
Then delete the daemonset:
kubectl delete daemonset --namespace <NAMESPACE> --all --force
Then try to delete the pods:
kubectl delete pod --namespace <NAMESPACE> --all --force
Check if the pods are gone:
kubectl get pods --all-namespaces
In my case I used the commands below:
kubectl get all --all-namespaces
kubectl delete deployment <deployment-name> (choose your deployment name)
kubectl delete sts -n <namespace> --all (choose your namespace)
kubectl get pods --all-namespaces
That resolved the problem.