I installed Kubernetes on Ubuntu on bare metal, with 1 master and 3 workers.
I then deployed Rook and everything worked fine, but when I tried to deploy WordPress on it, it got stuck in ContainerCreating. I deleted WordPress and now I get this error:
Volume is already attached by pod
default/wordpress-mysql-b78774f44-gvr58. Status Running
# kubectl describe pods wordpress-mysql-b78774f44-bjc2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m21s default-scheduler Successfully assigned default/wordpress-mysql-b78774f44-bjc2c to worker2
Warning FailedMount 2m57s (x6 over 3m16s) kubelet, worker2 MountVolume.SetUp failed for volume "pvc-dcba7817-553b-11e9-a229-52540076d16c" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume pvc-dcba7817-553b-11e9-a229-52540076d16c for pod default/wordpress-mysql-b78774f44-bjc2c. Volume is already attached by pod default/wordpress-mysql-b78774f44-gvr58. Status Running
Normal Pulling 2m26s kubelet, worker2 Pulling image "mysql:5.6"
Normal Pulled 110s kubelet, worker2 Successfully pulled image "mysql:5.6"
Normal Created 106s kubelet, worker2 Created container mysql
Normal Started 101s kubelet, worker2 Started container mysql
For more information:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-dcba7817-553b-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/mysql-pv-claim rook-ceph-block 13m
pvc-e9797517-553b-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/wp-pv-claim rook-ceph-block 13m
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-dcba7817-553b-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 15m
wp-pv-claim Bound pvc-e9797517-553b-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 14m
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default wordpress-595685cc49-sdbfk 1/1 Running 6 9m58s
default wordpress-mysql-b78774f44-bjc2c 1/1 Running 0 8m14s
kube-system coredns-fb8b8dccf-plnt4 1/1 Running 0 46m
kube-system coredns-fb8b8dccf-xrkql 1/1 Running 0 47m
kube-system etcd-master 1/1 Running 0 46m
kube-system kube-apiserver-master 1/1 Running 0 46m
kube-system kube-controller-manager-master 1/1 Running 1 46m
kube-system kube-flannel-ds-amd64-45bsf 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-5nxfz 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-pnln9 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-sg4pv 1/1 Running 0 40m
kube-system kube-proxy-2xsrn 1/1 Running 0 47m
kube-system kube-proxy-mll8b 1/1 Running 0 42m
kube-system kube-proxy-mv5dw 1/1 Running 0 42m
kube-system kube-proxy-v2jww 1/1 Running 0 42m
kube-system kube-scheduler-master 1/1 Running 0 46m
rook-ceph-system rook-ceph-agent-8pbtv 1/1 Running 0 26m
rook-ceph-system rook-ceph-agent-hsn27 1/1 Running 0 26m
rook-ceph-system rook-ceph-agent-qjqqx 1/1 Running 0 26m
rook-ceph-system rook-ceph-operator-d97564799-9szvr 1/1 Running 0 27m
rook-ceph-system rook-discover-26g84 1/1 Running 0 26m
rook-ceph-system rook-discover-hf7lc 1/1 Running 0 26m
rook-ceph-system rook-discover-jc72g 1/1 Running 0 26m
rook-ceph rook-ceph-mgr-a-68cb58b456-9rrj7 1/1 Running 0 21m
rook-ceph rook-ceph-mon-a-6469b4c68f-cq6mj 1/1 Running 0 23m
rook-ceph rook-ceph-mon-b-d59cfd758-2d2zt 1/1 Running 0 22m
rook-ceph rook-ceph-mon-c-79664b789-wl4t4 1/1 Running 0 21m
rook-ceph rook-ceph-osd-0-8778dbbc-d84mh 1/1 Running 0 19m
rook-ceph rook-ceph-osd-1-84974b86f6-z5c6c 1/1 Running 0 19m
rook-ceph rook-ceph-osd-2-84f9b78587-czx6d 1/1 Running 0 19m
rook-ceph rook-ceph-osd-prepare-worker1-x4rqc 0/2 Completed 0 20m
rook-ceph rook-ceph-osd-prepare-worker2-29jpg 0/2 Completed 0 20m
rook-ceph rook-ceph-osd-prepare-worker3-rkp52 0/2 Completed 0 20m
You are using a standard (block) storage class for your PVC, and your access mode is ReadWriteOnce. This does not mean you can connect your PVC to only one pod, but rather to only one node.
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadWriteMany – the volume can be mounted as read-write by many nodes
Here it seems like you have 2 pods trying to mount this volume. This will be flaky unless you do one of the following:
Schedule both pods on the same node
Use another storageClass such as NFS (filesystem-based) to change the access mode to ReadWriteMany (see the example at the end of this answer)
Downscale to 1 pod, so you don't have to share the volume
Right now you have 2 pods trying to mount the same volume, default/wordpress-mysql-b78774f44-gvr58 and default/wordpress-mysql-b78774f44-bjc2c.
You can also downscale to 1 pod, so you don't have to worry about any of the above altogether:
kubectl scale deploy wordpress-mysql --replicas=1
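If you go the ReadWriteMany route instead, the claim itself only needs a different access mode and a filesystem-backed storage class. A minimal sketch, assuming a class named shared-fs is available (it is not in the output above, so treat the name as a placeholder; Rook can provide one through its shared filesystem support):
# Hypothetical PVC with ReadWriteMany; "shared-fs" is a placeholder storage class name
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: shared-fs
  resources:
    requests:
      storage: 20Gi
EOF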
I am working with Vault for the first time and I am encountering an issue. I am following the official HashiCorp guide to install Vault on a Kubernetes cluster, and I am running it with MicroK8s.
However, I am not able to unseal my Vault. As you can see, it is running, but when I try to use the selector so that I can unseal it afterwards, it finds nothing in the namespace.
I am fairly new to this, but I wasn't able to find a solution online.
Any help would be appreciated.
Please find the console output below.
Thank you!
root@vault-dev:/home/dev# kubectl --namespace="vault" get pods
NAME READY STATUS RESTARTS AGE
vault-0 0/1 Pending 0 12m
vault-1 0/1 Pending 0 12m
vault-2 0/1 Pending 0 12m
vault-3 0/1 Pending 0 12m
vault-4 0/1 Pending 0 12m
vault-agent-injector-7969bb745-nmswj 1/1 Running 0 12m
root@vault-dev:/home/dev# kubectl exec --stdin=true --tty=true vault-0 -- vault operator init
Error from server (NotFound): pods "vault-0" not found
root@vault-dev:/home/dev# kubectl --namespace="vault" get pods
NAME READY STATUS RESTARTS AGE
vault-0 0/1 Pending 0 15m
vault-1 0/1 Pending 0 15m
vault-2 0/1 Pending 0 15m
vault-3 0/1 Pending 0 15m
vault-4 0/1 Pending 0 15m
vault-agent-injector-7969bb745-nmswj 1/1 Running 0 15m
root@vault-dev:/home/dev# kubectl --namespace="vault" get all
NAME READY STATUS RESTARTS AGE
pod/vault-0 0/1 Pending 0 15m
pod/vault-1 0/1 Pending 0 15m
pod/vault-2 0/1 Pending 0 15m
pod/vault-3 0/1 Pending 0 15m
pod/vault-4 0/1 Pending 0 15m
pod/vault-agent-injector-7969bb745-nmswj 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/vault-internal ClusterIP None <none> 8200/TCP,8201/TCP 15m
service/vault-ui LoadBalancer REDACTED <pending> 8200:32269/TCP 15m
service/vault-standby ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
service/vault-agent-injector-svc ClusterIP REDACTED <none> 443/TCP 15m
service/vault ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
service/vault-active ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/vault-agent-injector 1/1 1 1 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/vault-agent-injector-7969bb745 1 1 1 15m
NAME READY AGE
statefulset.apps/vault 0/5 15m
root@vault-dev:/home/dev# kubectl get pods --selector='statefulset.apps/name=vault' --namespace=' vault'
No resources found in vault namespace.
I'm new to Helm. I'm trying to deploy a simple server on the master node. When I do helm install and check the details with kubectl get po,svc, I see a lot of pods created other than the pods I intend to deploy. So, my precise questions are:
Why did so many pods get created?
How do I delete all those pods?
Below is the output of the command kubectl get po,svc:
NAME READY STATUS RESTARTS AGE
pod/altered-quoll-stx-sdo-chart-6446644994-57n7k 1/1 Running 0 25m
pod/austere-garfish-stx-sdo-chart-5b65d8ccb7-jjxfh 1/1 Running 0 25m
pod/bald-hyena-stx-sdo-chart-9b666c998-zcfwr 1/1 Running 0 25m
pod/cantankerous-pronghorn-stx-sdo-chart-65f5699cdc-5fkf9 1/1 Running 0 25m
pod/crusty-unicorn-stx-sdo-chart-7bdcc67546-6d295 1/1 Running 0 25m
pod/exiled-puffin-stx-sdo-chart-679b78ccc5-n68fg 1/1 Running 0 25m
pod/fantastic-waterbuffalo-stx-sdo-chart-7ddd7b54df-p78h7 1/1 Running 0 25m
pod/gangly-quail-stx-sdo-chart-75b9dd49b-rbsgq 1/1 Running 0 25m
pod/giddy-pig-stx-sdo-chart-5d86844569-5v8nn 1/1 Running 0 25m
pod/hazy-indri-stx-sdo-chart-65d4c96f46-zmvm2 1/1 Running 0 25m
pod/interested-macaw-stx-sdo-chart-6bb7874bbd-k9nnf 1/1 Running 0 25m
pod/jaundiced-orangutan-stx-sdo-chart-5699d9b44b-6fpk9 1/1 Running 0 25m
pod/kindred-nightingale-stx-sdo-chart-5cf95c4d97-zpqln 1/1 Running 0 25m
pod/kissing-snail-stx-sdo-chart-854d848649-54m9w 1/1 Running 0 25m
pod/lazy-tiger-stx-sdo-chart-568fbb8d65-gr6w7 1/1 Running 0 25m
pod/nonexistent-octopus-stx-sdo-chart-5f8f6c7ff8-9l7sm 1/1 Running 0 25m
pod/odd-boxer-stx-sdo-chart-6f5b9679cc-5stk7 1/1 Running 1 15h
pod/orderly-chicken-stx-sdo-chart-7889b64856-rmq7j 1/1 Running 0 25m
pod/redis-697fb49877-x5hr6 1/1 Running 0 25m
pod/rv.deploy-6bbffc7975-tf5z4 1/2 CrashLoopBackOff 93 30h
pod/sartorial-eagle-stx-sdo-chart-767d786685-ct7mf 1/1 Running 0 25m
pod/sullen-gnat-stx-sdo-chart-579fdb7df7-4z67w 1/1 Running 0 25m
pod/undercooked-cow-stx-sdo-chart-67875cc5c6-mwvb7 1/1 Running 0 25m
pod/wise-quoll-stx-sdo-chart-5db8c766c9-mhq8v 1/1 Running 0 21m
You can run the command helm ls to see all the deployed helm releases in your cluster.
To remove the release (and every resource it created, including the pods), run: helm delete RELEASE_NAME --purge.
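Since each of those randomly named pods belongs to its own release, you can combine the two commands above to purge everything in one go. A rough sketch, assuming Helm v2 (which the --purge flag implies) and that you really do want to remove every release in the cluster:
# List release names only, then purge each one; review the list first,
# because this removes every Helm release, not just the stx-sdo ones.
helm ls --short | xargs -L1 helm delete --purge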
If you want to delete all the pods in your namespace without removing your Helm releases (I DON'T think this is what you're looking for), you can run: kubectl delete pods --all.
On a side note, if you're new to Helm, consider starting with Helm v3, since it has many improvements, and especially because the migration from v2 to v3 can be cumbersome; if you can avoid it, you should.
I'm running a simple example with Helm. Take a look at the values.yaml file below:
cat << EOF | helm install helm/vitess -n vitess -f -
topology:
  cells:
    - name: 'zone1'
      keyspaces:
        - name: 'vitess'
          shards:
            - name: '0'
              tablets:
                - type: 'replica'
                  vttablet:
                    replicas: 1
      mysqlProtocol:
        enabled: true
        authType: secret
        username: vitess
        passwordSecret: vitess-db-password
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
vttablet:
  dataVolumeClaimSpec:
    storageClassName: nfs-slow
EOF
Take a look at the output of the currently running pods below:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-8f5kt 1/1 Running 0 32m
kube-system coredns-fb8b8dccf-qbd6c 1/1 Running 0 32m
kube-system etcd-master1 1/1 Running 0 32m
kube-system kube-apiserver-master1 1/1 Running 0 31m
kube-system kube-controller-manager-master1 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-bkg9z 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-q8vh4 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-vqmnz 1/1 Running 0 32m
kube-system kube-proxy-bd8mf 1/1 Running 0 32m
kube-system kube-proxy-nlc2b 1/1 Running 0 32m
kube-system kube-proxy-x7cd5 1/1 Running 0 32m
kube-system kube-scheduler-master1 1/1 Running 0 32m
kube-system tiller-deploy-8458f6c667-cx2mv 1/1 Running 0 27m
vitess etcd-global-6pwvnv29th 0/1 Init:0/1 0 16m
vitess etcd-operator-84db9bc774-j4wml 1/1 Running 0 30m
vitess etcd-zone1-zwgvd7spzc 0/1 Init:0/1 0 16m
vitess vtctld-86cd78b6f5-zgfqg 0/1 CrashLoopBackOff 7 16m
vitess vtgate-zone1-58744956c4-x8ms2 0/1 CrashLoopBackOff 7 16m
vitess zone1-vitess-0-init-shard-master-mbbph 1/1 Running 0 16m
vitess zone1-vitess-0-replica-0 0/6 Init:CrashLoopBackOff 7 16m
Looking at the logs, I see this error:
$ kubectl logs -n vitess vtctld-86cd78b6f5-zgfqg
++ cat
+ eval exec /vt/bin/vtctld '-cell="zone1"' '-web_dir="/vt/web/vtctld"' '-web_dir2="/vt/web/vtctld2/app"' -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 '-service_map="grpc-vtctl"' '-topo_implementation="etcd2"' '-topo_global_server_address="etcd-global-client.vitess:2379"' -topo_global_root=/vitess/global
++ exec /vt/bin/vtctld -cell=zone1 -web_dir=/vt/web/vtctld -web_dir2=/vt/web/vtctld2/app -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 -service_map=grpc-vtctl -topo_implementation=etcd2 -topo_global_server_address=etcd-global-client.vitess:2379 -topo_global_root=/vitess/global
ERROR: logging before flag.Parse: E0422 02:35:34.020928 1 syslogger.go:122] can't connect to syslog
F0422 02:35:39.025400 1 server.go:221] Failed to open topo server (etcd2,etcd-global-client.vitess:2379,/vitess/global): grpc: timed out when dialing
I'm running this behind Vagrant with 1 master and 2 nodes. I suspect it is an issue with eth1.
The storage is configured to use NFS.
$ kubectl logs etcd-operator-84db9bc774-j4wml
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-zone1 cluster-namespace=vitess pkg=cluster
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-global cluster-namespace=vitess pkg=cluster
It appears that etcd is not fully initializing. Note that neither the pod for the global lockserver (etcd-global-6pwvnv29th) nor the local one for cell zone1 (etcd-zone1-zwgvd7spzc) is ready.
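To find out why they never leave the Init state, you could start from the pending pods themselves, for example (pod names taken from the output above, adjust if they change):
# Events usually show whether the init container, scheduling, or storage is the blocker
kubectl -n vitess describe pod etcd-global-6pwvnv29th
kubectl -n vitess describe pod etcd-zone1-zwgvd7spzc
# Since the chart points at the nfs-slow storage class, also check whether any claims are stuck Pending
kubectl -n vitess get pvc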
I don't know what to do to debug this. I have 1 Kubernetes master node and three slave nodes. I have deployed a Gluster cluster on the three nodes just fine with this guide: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md.
I created volumes and everything is working. But when I reboot a slave node, and the node reconnects to the master node, the glusterd.service inside the slave node shows up as dead and nothing works after that.
[root@kubernetes-node-1 /]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
I don't know what to do from here. For example, /var/log/glusterfs/glusterd.log was last updated 3 days ago (it's not being updated with errors after a reboot or after deleting and recreating a pod).
I just want to know where glusterd crashes so I can find out why.
How can I debug this crash?
All the nodes (master + slaves) run on Ubuntu Desktop 18 LTS 64-bit VirtualBox VMs.
Requested logs (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
glusterfs pod/glusterfs-7nl8l 0/1 Running 62 22h
glusterfs pod/glusterfs-wjnzx 1/1 Running 62 2d21h
glusterfs pod/glusterfs-wl4lx 1/1 Running 112 41h
glusterfs pod/heketi-7495cdc5fd-hc42h 1/1 Running 0 22h
kube-system pod/coredns-86c58d9df4-n2hpk 1/1 Running 0 6d12h
kube-system pod/coredns-86c58d9df4-rbwjq 1/1 Running 0 6d12h
kube-system pod/etcd-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-apiserver-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-controller-manager-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-flannel-ds-amd64-785q8 1/1 Running 5 3d19h
kube-system pod/kube-flannel-ds-amd64-8sj2z 1/1 Running 8 3d19h
kube-system pod/kube-flannel-ds-amd64-v62xb 1/1 Running 0 3d21h
kube-system pod/kube-flannel-ds-amd64-wx4jl 1/1 Running 7 3d21h
kube-system pod/kube-proxy-7f6d9 1/1 Running 5 3d19h
kube-system pod/kube-proxy-7sf9d 1/1 Running 0 6d12h
kube-system pod/kube-proxy-n9qxq 1/1 Running 8 3d19h
kube-system pod/kube-proxy-rwghw 1/1 Running 7 3d21h
kube-system pod/kube-scheduler-kubernetes-master-work 1/1 Running 0 6d12h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h
elastic service/glusterfs-dynamic-9ad03769-2bb5-11e9-8710-0800276a5a8e ClusterIP 10.98.38.157 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-a77e02ca-2bb4-11e9-8710-0800276a5a8e ClusterIP 10.97.203.225 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-ad16ed0b-2bb6-11e9-8710-0800276a5a8e ClusterIP 10.105.149.142 <none> 1/TCP 2d19h
glusterfs service/heketi ClusterIP 10.101.79.224 <none> 8080/TCP 2d20h
glusterfs service/heketi-storage-endpoints ClusterIP 10.99.199.190 <none> 1/TCP 2d20h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6d12h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
glusterfs daemonset.apps/glusterfs 3 3 0 3 0 storagenode=glusterfs 2d21h
kube-system daemonset.apps/kube-flannel-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 3d21h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 3d21h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 3d21h
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 <none> 6d12h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
glusterfs deployment.apps/heketi 1/1 1 0 2d20h
kube-system deployment.apps/coredns 2/2 2 2 6d12h
NAMESPACE NAME DESIRED CURRENT READY AGE
glusterfs replicaset.apps/heketi-7495cdc5fd 1 1 0 2d20h
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6d12h
Requested:
tasos@kubernetes-master-work:~$ kubectl logs -n glusterfs glusterfs-7nl8l
env variable is set. Update in gluster-blockd.service
Please check these similar topics:
GlusterFS deployment on k8s cluster-- Readiness probe failed: /usr/local/bin/status-probe.sh
and
https://github.com/gluster/gluster-kubernetes/issues/539
Check the tcmu-runner.log to debug it.
UPDATE:
I think this will be your issue:
https://github.com/gluster/gluster-kubernetes/pull/557
The PR is prepared, but not merged yet.
UPDATE 2:
https://github.com/gluster/glusterfs/issues/417
Be sure that rpcbind is installed.
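To see where glusterd actually dies on the rebooted node, a few generic systemd/GlusterFS checks may help (nothing here is specific to your heketi setup):
# glusterd depends on rpcbind; confirm it is installed and running
systemctl status rpcbind
# Start glusterd by hand and read its journal for the failure reason
systemctl start glusterd
journalctl -u glusterd -b --no-pager
# tcmu-runner.log (mentioned above) usually sits under /var/log/glusterfs/gluster-block/,
# though the exact path can differ between images
tail -n 100 /var/log/glusterfs/gluster-block/tcmu-runner.log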
I have just started a new Kubernetes 1.8.0 environment using minikube (0.27) on Windows 10.
I followed these steps, but it didn't work:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
When I list pods this is the result:
C:\WINDOWS\system32>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 23m
kube-system heapster-69b5d4974d-s9vrf 1/1 Running 0 5m
kube-system kube-addon-manager-minikube 1/1 Running 0 23m
kube-system kube-apiserver-minikube 1/1 Running 0 23m
kube-system kube-controller-manager-minikube 1/1 Running 0 23m
kube-system kube-dns-545bc4bfd4-xkt7l 3/3 Running 3 1h
kube-system kube-proxy-7jnk6 1/1 Running 0 23m
kube-system kube-scheduler-minikube 1/1 Running 0 23m
kube-system kubernetes-dashboard-5569448c6d-8zqnc 1/1 Running 2 52m
kube-system kubernetes-dashboard-869db7f6b4-ddlmq 0/1 CrashLoopBackOff 19 51m
kube-system monitoring-influxdb-78d4c6f5b6-b66m9 1/1 Running 0 4m
kube-system storage-provisioner 1/1 Running 2 1h
As you can see, I have 2 kubernetes-dashboard pods now; one of them is Running and the other one is in CrashLoopBackOff.
When I try to run minikube dashboard this is the result:
"Waiting, endpoint for service is not ready yet..."
I have tried to remove the kubernetes-dashboard-869db7f6b4-ddlmq pod:
kubectl delete pod kubernetes-dashboard-869db7f6b4-ddlmq
This is the result:
"Error from server (NotFound): pods "kubernetes-dashboard-869db7f6b4-ddlmq" not found"
"Error from server (NotFound): pods "kubernetes-dashboard-869db7f6b4-ddlmq" not found"
You failed to delete the pod because the namespace was missing (add -n kube-system). Also, there should only be 1 dashboard pod if no modifications were applied. If minikube dashboard still fails after you delete the abnormal pod, more logs should be provided.
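For example, something along these lines (the pod name comes from your output and may have changed if the ReplicaSet recreated it):
# Collect logs and events from the crashing pod before removing it
kubectl -n kube-system logs kubernetes-dashboard-869db7f6b4-ddlmq --previous
kubectl -n kube-system describe pod kubernetes-dashboard-869db7f6b4-ddlmq
# Then delete it in the right namespace (the missing piece in your original command)
kubectl -n kube-system delete pod kubernetes-dashboard-869db7f6b4-ddlmq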