How to force delete resources in a non-existent namespace? - kubernetes

This question is a follow-up to: How to list really all objects of a nonexistent namespace?
Long story short:
$ kubectl get namespaces
NAME              STATUS   AGE
argo              Active   27d
default           Active   27d
kube-node-lease   Active   27d
kube-public       Active   27d
kube-system       Active   27d
$ kubectl get eventbus -n argo-events
NAME      AGE
default   17h
$ kubectl get eventsource -n argo-events
NAME                  AGE
pubsub-event-source   14h
There are two resources in the namespace argo-events, which actually no longer exists because I deleted it and expected it to be gone along with all the resources in it. Obviously something didn't work as expected.
Now (after listing potentially more objects - see the first question) I want to really get rid of those resources, because they seem to block a redeployment.
But this ...
$ kubectl delete eventbus default -n argo-events
eventbus.argoproj.io "default" deleted
^C
$ kubectl delete eventsource pubsub-event-source -n argo-events
eventsource.argoproj.io "pubsub-event-source" deleted
^C
... doesn't work.
So, how do I force their deletion?
UPDATE:
$ kubectl describe eventbus default -n argo-events | grep -A 3 final
f:finalizers:
.:
v:"eventbus-controller":
f:status:
$ kubectl describe eventsource pubsub-event-source -n argo-events | grep -A 3 final
f:finalizers:
.:
v:"eventsource-controller":
f:spec:

This worked:
$ kubectl create namespace argo-events
namespace/argo-events created
$ kubectl patch eventsource/pubsub-event-source -p '{"metadata":{"finalizers":[]}}' --type=merge -n argo-events
eventsource.argoproj.io/pubsub-event-source patched
$ kubectl patch eventbus/default -p '{"metadata":{"finalizers":[]}}' --type=merge -n argo-events
eventbus.argoproj.io/default patched
$ kubectl delete namespace argo-events
namespace "argo-events" deleted
If somebody stumbles upon this answer and knows why this works - please add an explanation in a comment. That would be cool, thanks.
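A likely explanation: both objects carry finalizers (eventbus-controller and eventsource-controller, as the describe output above shows), and the API server will not complete a delete until a controller removes those finalizers. With the namespace and the Argo Events controllers already gone, nothing was left to do that, so the delete just hung. Recreating the namespace makes the objects patchable again, and clearing metadata.finalizers lets the pending deletion finish. To see which finalizers are still set before patching, a jsonpath query like this should work:
$ kubectl get eventbus default -n argo-events -o jsonpath='{.metadata.finalizers}'
An empty result means nothing is blocking the delete any more.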

What about:
kubectl delete eventsource pubsub-event-source -n argo-events --grace-period=0 --force
?
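Force deletion with --grace-period=0 --force only skips the graceful wait; it does not remove finalizers, so for resources stuck like the ones above it usually still hangs. A JSON-patch variant of the fix above, shown here as a sketch against the same eventbus resource, drops the finalizers field entirely:
kubectl patch eventbus default -n argo-events --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
After that, the pending delete should complete on its own.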

Related

flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"

I'm trying to install Kubernetes with dashboard but I get the following issue:
test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS              RESTARTS         AGE
kube-flannel           kube-flannel-ds-ksc9n                        0/1     CrashLoopBackOff    14 (2m15s ago)   49m
kube-system            coredns-6d4b75cb6d-27m6b                     0/1     ContainerCreating   0                4h
kube-system            coredns-6d4b75cb6d-vrgtk                     0/1     ContainerCreating   0                4h
kube-system            etcd-ubuntukubernetes1                       1/1     Running             1 (106m ago)     4h
kube-system            kube-apiserver-ubuntukubernetes1             1/1     Running             1 (106m ago)     4h
kube-system            kube-controller-manager-ubuntukubernetes1    1/1     Running             1 (106m ago)     4h
kube-system            kube-proxy-6v8w6                             1/1     Running             1 (106m ago)     4h
kube-system            kube-scheduler-ubuntukubernetes1             1/1     Running             1 (106m ago)     4h
kubernetes-dashboard   dashboard-metrics-scraper-7bfdf779ff-dfn4q   0/1     Pending             0                48m
kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-9kh7h    0/1     Pending             0                73m
kubernetes-dashboard   kubernetes-dashboard-5676d8b865-q459s        0/1     Pending             0                73m
kubernetes-dashboard   kubernetes-dashboard-6cdd697d84-kqnxl        0/1     Pending             0                48m
test@ubuntukubernetes1:~$
Log files:
test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0808 23:40:17.324664 1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0808 23:40:17.324753 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0808 23:40:17.547453 1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods "kube-flannel-ds-ksc9n" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
test@ubuntukubernetes1:~$
Do you know how this issue can be solved? I tried the following installation:
swapoff -a
Remove the following line from /etc/fstab:
/swap.img none swap sw 0 0
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl proxy --address 192.168.1.133 --accept-hosts '.*'
Can you advise?
I had the same situation on a new deployment today. It turns out the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.
I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except under the roleRef heading (it threw an error when I added it there). All pods came up as 'Running' after the new yml was applied.
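If you would rather not edit the file by hand, one option is to rewrite the namespace on the fly before applying. This is only a sketch and assumes the ServiceAccount subject is the only kube-system reference in that file:
curl -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml | sed 's/namespace: kube-system/namespace: kube-flannel/' | kubectl apply -f -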
Seems like the problem is with kube-flannel-rbac.yml:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
It expects a service account in the kube-system namespace:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
so just delete it:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
as kube-flannel.yml already creates the service account in the right namespace.
https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml#L43
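To double-check which namespace the existing binding actually points at before deleting or re-applying anything, a quick jsonpath query should do (assuming the binding is named flannel):
kubectl get clusterrolebinding flannel -o jsonpath='{.subjects[0].namespace}'
If this prints kube-system while the flannel ServiceAccount lives in kube-flannel, the DaemonSet will hit exactly the forbidden error shown above.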
I tried to deploy a 3-node cluster with 1 master and 2 workers. I followed a similar method as described above.
Then I tried to deploy Nginx, but it failed. When I checked my pods, flannel on the master was running but on the worker nodes it was failing.
I deleted flannel and started from the beginning.
First I applied only kube-flannel.yml, since there was some mention that kube-flannel-rbac.yml was causing issues.
ubuntu@master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
ubuntu@master:~$ kubectl describe ClusterRoleBinding flannel
Name:         flannel
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  flannel
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  flannel  kube-flannel
Then I was able to create the nginx image.
However, I then deleted the image and applied the second yaml. This changed the namespace:
ubuntu@master:~$ kubectl describe ClusterRoleBinding flannel
Name:         flannel
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  flannel
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  flannel  kube-system
and again the nginx deployment was successful.
What is the purpose of this config? Is it needed, given that the image deploys successfully both with and without it?
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Pods not found while using kubectl port-forward

I want to forward the ports
kubectl port-forward ...
But for this I need to find out the name of the pod, so I run the command
kubectl -n main_sp get pods
and get a list:
NAME                         READY   STATUS    RESTARTS   AGE
main-ms-hc-78469b74c-7lfdh   1/1     Running   0          13h
I'm trying
kubectl port-forward main-ms-hc-78469b74c-7lfdh 8080:80
and I get
Error from server (NotFound): pods "main-ms-hc-78469b74c-7lfdh" not found
What am I doing wrong?
You need to specify the namespace as well when using port-forward:
$ kubectl port-forward -n main_sp main-ms-hc-78469b74c-7lfdh 8080:80
To port-forward a pod:
$ kubectl port-forward -n <namespace> <pod-name> <local-port>:<target-port>
To port-forward a pod via service name:
$ kubectl port-forward -n <namespace> svc/<service-name> <local-port>:<target-port>
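If you'd rather not look up the generated pod name at all, port-forward also accepts a deployment (it picks one of its pods for you); the placeholders below are just illustrative:
$ kubectl port-forward -n <namespace> deployment/<deployment-name> <local-port>:<target-port>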

Identify pod which is not in a Ready state

We have deployed a few pods in the cluster in various namespaces. I would like to inspect and identify all pods which are not in a Ready state.
master $ k get pod/nginx1401 -n dev1401
NAME        READY   STATUS    RESTARTS   AGE
nginx1401   0/1     Running   0          10m
In the above list, the pod shows Running status but is having some issue. How can we find the list of such pods? The commands below are not showing me the desired output:
kubectl get po -A | grep Pending                        # looking for pods that have yet to schedule
kubectl get po -A | grep -v Running                     # looking for pods in a state other than Running
kubectl get pods --field-selector=status.phase=Failed
There is a long-standing feature request for this. The latest entry suggests
kubectl get po --all-namespaces | gawk 'match($3, /([0-9])+\/([0-9])+/, a) {if (a[1] < a[2] && $4 != "Completed") print $0}'
for finding pods that are running but not complete.
There are a lot of other suggestions in the thread that might work as well.
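If jq is available, another option (a sketch, not from the thread) is to filter on the container ready flags directly, which also skips Completed pods:
kubectl get po -A -o json | jq -r '.items[] | select(.status.phase != "Succeeded") | select(any(.status.containerStatuses[]?; .ready == false)) | .metadata.namespace + "/" + .metadata.name'
This prints namespace/name for every pod with at least one container that is not ready.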
You can try this:
$ kubectl get po --all-namespaces -w
You will get an update whenever any change (create/update/delete) happens to a pod in any namespace.
Or you can watch all pods by using:
$ watch -n 1 kubectl get po --all-namespaces
This will continuously watch all pods in all namespaces at a 1-second interval.

Scale down Kubernetes pods

I am using
kubectl scale --replicas=0 -f deployment.yaml
to stop all my running pods. Please let me know if there are better ways to bring all running pods down to zero while keeping the configuration, deployments, etc. intact, so that I can scale up later as required.
You are doing the correct action; traditionally the scale verb is applied just to the resource name, as in kubectl scale deploy my-awesome-deployment --replicas=0, which removes the need to always point at the specific file that describes that deployment, but there's nothing wrong (that I know of) with using the file if that is more convenient for you.
The solution is pretty easy and straightforward
kubectl scale deploy -n <namespace> --replicas=0 --all
Here we go.
Scales down all deployments in a whole namespace:
kubectl get deploy -n <namespace> -o name | xargs -I % kubectl scale % --replicas=0 -n <namespace>
To scale up, set --replicas=1 (or any other required number) accordingly.
Use the following to scale down/up all deployments and stateful sets in the current namespace. Useful in development when switching projects.
kubectl scale statefulset,deployment --all --replicas=0
Add a namespace flag if needed
kubectl scale statefulset,deployment -n mynamespace --all --replicas=0
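Since scaling to zero overwrites the replica counts, it can help to record them first if you plan to return to the previous values later; a small sketch for deployments (the output file name is arbitrary):
kubectl get deploy -n mynamespace -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.replicas}{"\n"}{end}' > replicas.txt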
kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
app-gke   3/3     3            3           13m
kubectl scale deploy app-gke --replicas=5
deployment.extensions/app-gke scaled
kubectl get pods
NAME                       READY   STATUS              RESTARTS   AGE
app-gke-7b768cd6d7-b25px   2/2     Running             0          11m
app-gke-7b768cd6d7-glj5v   0/2     ContainerCreating   0          4s
app-gke-7b768cd6d7-jdt6l   2/2     Running             0          11m
app-gke-7b768cd6d7-ktx87   2/2     Running             0          11m
app-gke-7b768cd6d7-qxpgl   0/2     ContainerCreating   0          4s
If you need more granularity with pipes or grep, here is another shell solution:
for i in $(kubectl get deployments | grep -v NAME | grep -v app | awk '{print $1}'); do kubectl scale --replicas=2 deploy $i; done
If you want generic patch:
namespace=devops-ci-dev
kubectl get deployment -n ${namespace} --no-headers| awk '{print $1}' | xargs -I elhay kubectl patch deployment -n ${namespace} -p '{"spec": {"replicas": 1}}' elhay
Change namespace=devops-ci-dev to your namespace.
kubectl get svc --no-headers | awk '{print $1}' | xargs kubectl scale deploy --replicas=0

Can't delete pods in pending state?

[root@vpct-k8s-1 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE     NAME               READY   STATUS    RESTARTS   AGE
kube-system   kube-ui-v2-ck0yw   0/1     Pending   0          1h
[root@vpct-k8s-1 kubernetes]# kubectl get rc --all-namespaces
NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS   AGE
kube-system   kube-ui-v2   kube-ui        gcr.io/google_containers/kube-ui:v2   k8s-app=kube-ui,version=v2   1          1h
Can't delete pods in pending state?
kubectl get ns
kubectl get pods --all-namespaces
kubectl get deployment -n (namespacename)
kubectl get deployments --all-namespaces
kubectl delete deployment (deploymentname) -n (namespacename)
Try the below command
kubectl delete pod kube-ui-v2-ck0yw --grace-period=0 --force -n kube-system
To delete a pod in the Pending state, simply delete the deployment it belongs to by passing its manifest file to kubectl.
Please check the below command:
kubectl delete -f deployment-file-name.yaml
Depending on the number of replicas you specified while creating the cluster, you might be able to delete the pending pod, but another pod will be recreated automatically. You can delete the pod by running this command:
$ ./cluster/kubectl.sh delete pod kube-ui-v2-ck0yw -n kube-system
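If the goal is for the pending pod to stay gone rather than be recreated, scaling the owning replication controller down to zero first should prevent that; a sketch using the rc name from the question:
$ kubectl scale rc kube-ui-v2 --replicas=0 -n kube-system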