I've been working with a 6 node cluster for the last few weeks without issue. Earlier today we ran into an open file issue (https://github.com/kubernetes/kubernetes/pull/12443/files) and I patched and restarted kube-proxy.
Since then, all RC-deployed pods on ALL BUT node-01 get stuck in the Pending state, with no log messages stating the cause.
Looking at the Docker daemon on the nodes, the containers in the pod are actually running, and deleting the RC removes them. It appears to be some sort of callback issue between the state according to the kubelet and the kube-apiserver.
Cluster is running v1.0.3
Here's an example of the state
docker run --rm -it lachie83/kubectl:prod get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE NODE
kube-dns-v8-i0yac 0/4 Pending 0 4s 10.1.1.35
kube-dns-v8-jti2e 0/4 Pending 0 4s 10.1.1.34
get events
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-i0yac
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-i0yac Pod scheduled {scheduler } Successfully assigned kube-dns-v8-i0yac to 10.1.1.35
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-jti2e Pod scheduled {scheduler } Successfully assigned kube-dns-v8-jti2e to 10.1.1.34
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-jti2e
scheduler log
I0916 06:25:42.897814 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-jti2e", UID:"c1cafebe-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670117", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-jti2e to 10.1.1.34
I0916 06:25:42.904195 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-i0yac", UID:"c1cafc69-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670118", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-i0yac to 10.1.1.35
tailing kubelet log file during pod create
tail -f kubelet.kube-node-03.root.log.INFO.20150916-060744.10668
I0916 06:25:04.448916 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:25:24.449253 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:25:44.449522 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:04.449774 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:24.450400 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:44.450995 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:04.451501 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:24.451910 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:44.452511 10668 config.go:253] Setting pods for source file : {[] 0 file}
kubelet process
root@kube-node-03:/var/log/kubernetes# ps -ef | grep kubelet
root 10668 1 1 06:07 ? 00:00:13 /opt/bin/kubelet --address=10.1.1.34 --port=10250 --hostname_override=10.1.1.34 --api_servers=https://kube-master-01.sj.lithium.com:6443 --logtostderr=false --log_dir=/var/log/kubernetes --cluster_dns=10.1.2.53 --config=/etc/kubelet/conf --cluster_domain=prod-kube-sjc1-1.internal --v=4 --tls-cert-file=/etc/kubelet/certs/kubelet.pem --tls-private-key-file=/etc/kubelet/certs/kubelet-key.pem
node list
docker run --rm -it lachie83/kubectl:prod get nodes
NAME LABELS STATUS
10.1.1.30 kubernetes.io/hostname=10.1.1.30,name=node-1 Ready
10.1.1.32 kubernetes.io/hostname=10.1.1.32,name=node-2 Ready
10.1.1.34 kubernetes.io/hostname=10.1.1.34,name=node-3 Ready
10.1.1.35 kubernetes.io/hostname=10.1.1.35,name=node-4 Ready
10.1.1.42 kubernetes.io/hostname=10.1.1.42,name=node-5 Ready
10.1.1.43 kubernetes.io/hostname=10.1.1.43,name=node-6 Ready
The issue turned out to be an MTU issue between the node and the master. Once that was fixed the problem was resolved.
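For anyone hitting something similar, a rough way to confirm an MTU mismatch between a node and the master (the interface name and sizes below are illustrative, not taken from this cluster):
ping -M do -s 1472 <master-ip>       # 1472-byte payload + 28 bytes of headers = 1500, sent non-fragmentable
ip link show eth0 | grep mtu         # check the interface MTU
sudo ip link set dev eth0 mtu 1450   # lower it if the path can't carry full-size frames
If the large, non-fragmentable ping fails while smaller pings go through, packets between the node and the master are being dropped rather than fragmented.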
It looks like you are building your cluster from scratch. Have you run the conformance test against your cluster yet? If not, could you please run it? Detailed information can be found at:
https://github.com/kubernetes/kubernetes/blob/e8009e828c864a46bf2e1d5c7dab8ef413c8bbe5/hack/conformance-test.sh
The conformance test should fail, or at least give us more information on your cluster setup. Please post the test results somewhere so that we can diagnose your problem further.
The problem is most likely that your kubelet and your kube-apiserver don't agree on the node name here. I also noticed that you are using hostname_override.
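As a quick sanity check (just a sketch; the flag name comes from the kubelet command line you posted, adjust the rest to your setup), you can compare the node names the API server knows about with the name each kubelet was started with:
# node names registered with the API server
kubectl get nodes -o name
# on each node, the override the kubelet was started with
ps -ef | grep kubelet | grep -o 'hostname_override=[^ ]*'
If they don't match, the kubelet reports status against a node object that the scheduler isn't binding pods to, and the pods never leave Pending.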
Related
I'm trying to run k3s in rootless mode. So far, I've done the common steps from https://rootlesscontaine.rs/getting-started and used the unit file from https://github.com/k3s-io/k3s/blob/master/k3s-rootless.service
The systemd service k3s-rootless.service is active and running, but the pods are constantly in Pending status.
I get these messages:
jun 21 20:43:58 k3s-tspd.local k3s[1065]: E0621 20:43:58.647601 33 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
jun 21 20:43:58 k3s-tspd.local k3s[1065]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
jun 21 20:43:58 k3s-tspd.local k3s[1065]: I0621 20:43:58.647876 33 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
jun 21 20:43:59 k3s-tspd.local k3s[1065]: time="2022-06-21T20:43:59Z" level=info msg="Waiting for control-plane node k3s-tspd.local startup: nodes \"k3s-tspd.local\" not found"
jun 21 20:44:00 k3s-tspd.local k3s[1065]: time="2022-06-21T20:44:00Z" level=info msg="Waiting for control-plane node k3s-tspd.local startup: nodes \"k3s-tspd.local\" not found"
jun 21 20:44:00 k3s-tspd.local k3s[1065]: time="2022-06-21T20:44:00Z" level=info msg="certificate CN=k3s-tspd.local signed by CN=k3s-server-ca#1655821591: notBefore=2022-06-21 14:26:31 +0000 UTC notAfter=2023-06-21 20:44:00 +0000 UTC"
jun 21 20:44:00 k3s-tspd.local k3s[1065]: time="2022-06-21T20:44:00Z" level=info msg="certificate CN=system:node:k3s-tspd.local,O=system:nodes signed by CN=k3s-client-ca#1655821591: notBefore=2022-06-21 14:26:31 +0000 UTC notAfter=2023-06-21 20:44:00 +0000 UTC"
jun 21 20:44:00 k3s-tspd.local k3s[1065]: time="2022-06-21T20:44:00Z" level=info msg="Waiting to retrieve agent configuration; server is not ready: \"fuse-overlayfs\" snapshotter cannot be enabled for \"/home/scadauser/.rancher/k3s/agent/containerd\", try using \"native\": fuse-overlayfs not functional, make sure running with kernel >= 4.18: failed to mount fuse-overlayfs ({Type:fuse3.fuse-overlayfs Source:overlay Options:[lowerdir=/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower2:/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower1]}) on /home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/merged: mount helper [mount.fuse3 [overlay /home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/merged -o lowerdir=/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower2:/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower1 -t fuse-overlayfs]] failed: \"\": exec: \"mount.fuse3\": executable file not found in $PATH"
jun 21 20:44:01 k3s-tspd.local k3s[1065]: time="2022-06-21T20:44:01Z" level=info msg="Waiting for control-plane node k3s-tspd.local startup: nodes \"k3s-tspd.local\" not found"
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-hn2nn 0/1 Pending 0 5h5m
kube-system helm-install-traefik-crd-djr4j 0/1 Pending 0 5h5m
kube-system local-path-provisioner-6c79684f77-w7fjb 0/1 Pending 0 5h5m
kube-system metrics-server-7cd5fcb6b7-rlctn 0/1 Pending 0 5h5m
kube-system coredns-d76bd69b-mjj4m 0/1 Pending 0 15m
What should I do next?
The solution was quite obvious.
In the unit file k3s-rootless.service I had used the wrong snapshotter. For containerd in k3s rootless mode it has to be '--snapshotter=fuse-overlayfs'.
fuse-overlayfs also needs to be installed before running k3s in rootless mode.
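For reference, a sketch of the fix (package names and the package manager vary by distribution; the ExecStart line follows the upstream unit file):
# install fuse-overlayfs plus the fuse3 mount helper the error message complains about
sudo dnf install fuse-overlayfs fuse3    # or apt/zypper, depending on the distro
# in k3s-rootless.service, start the server with the fuse-overlayfs snapshotter, e.g.:
# ExecStart=/usr/local/bin/k3s server --rootless --snapshotter=fuse-overlayfs
systemctl --user daemon-reload
systemctl --user restart k3s-rootless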
In my v1.23.1 test cluster I see that a worker node certificate expired some time ago, but the worker node is still taking workloads and is in Ready status.
How is this certificate being used, and when will we see an issue from the expired certificate?
# curl -v https://localhost:10250 -k 2>&1 |grep 'expire date'
* expire date: Oct 4 18:02:14 2021 GMT
# openssl x509 -text -noout -in /var/lib/kubelet/pki/kubelet.crt |grep -A2 'Validity'
Validity
Not Before: Oct 4 18:02:14 2020 GMT
Not After : Oct 4 18:02:14 2021 GMT
Update 1:
The cluster is running on-prem on CentOS Stream 8 and was built with the kubeadm tool. I was able to schedule workloads on all the worker nodes: I created an nginx deployment, scaled it to 50 pods, and I can see nginx pods on all the worker nodes.
I can also reboot the worker nodes without any issue.
Update 2:
kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0303 11:17:18.261639 698383 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 16, 2023 16:15 UTC 318d ca no
apiserver Jan 16, 2023 16:15 UTC 318d ca no
apiserver-kubelet-client Jan 16, 2023 16:15 UTC 318d ca no
controller-manager.conf Jan 16, 2023 16:15 UTC 318d ca no
front-proxy-client Jan 16, 2023 16:15 UTC 318d front-proxy-ca no
scheduler.conf Jan 16, 2023 16:15 UTC 318d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 02, 2030 18:44 UTC 8y no
front-proxy-ca Oct 02, 2030 18:44 UTC 8y no
Thanks
Update 3
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
server10 Ready control-plane,master 519d v1.23.1
server11 Ready control-plane,master 519d v1.23.1
server12 Ready control-plane,master 519d v1.23.1
server13 Ready <none> 519d v1.23.1
server14 Ready <none> 519d v1.23.1
server15 Ready <none> 516d v1.23.1
server16 Ready <none> 516d v1.23.1
server17 Ready <none> 516d v1.23.1
server18 Ready <none> 516d v1.23.1
# kubectl get pods -o wide
nginx-dev-8677c757d4-4k9xp 1/1 Running 0 4d12h 10.203.53.19 server17 <none> <none>
nginx-dev-8677c757d4-6lbc6 1/1 Running 0 4d12h 10.203.89.120 server14 <none> <none>
nginx-dev-8677c757d4-ksckf 1/1 Running 0 4d12h 10.203.124.4 server16 <none> <none>
nginx-dev-8677c757d4-lrz9h 1/1 Running 0 4d12h 10.203.124.41 server16 <none> <none>
nginx-dev-8677c757d4-tllx9 1/1 Running 0 4d12h 10.203.151.70 server11 <none> <none>
# grep client /etc/kubernetes/kubelet.conf
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
# ls -ltr /var/lib/kubelet/pki
total 16
-rw------- 1 root root 1679 Oct 4 2020 kubelet.key
-rw-r--r-- 1 root root 2258 Oct 4 2020 kubelet.crt
-rw------- 1 root root 1114 Oct 4 2020 kubelet-client-2020-10-04-14-50-21.pem
lrwxrwxrwx 1 root root 59 Jul 6 2021 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-07-06-01-44-10.pem
-rw------- 1 root root 1114 Jul 6 2021 kubelet-client-2021-07-06-01-44-10.pem
Those kubelet certificates are called kubelet serving certificates. They are used when the kubelet acts as a "server" instead of a "client".
For example, the kubelet provides metrics to the metrics server. So if metrics-server is configured to use secure TLS and those certificates are expired, metrics-server cannot establish a proper connection to the kubelet to get the metrics. If you are using the K8s Dashboard, the Dashboard will not be able to show CPU and memory consumption on the page. That's when you will see the issue from those expired certificates.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates
Those certificates will not auto-rotate when they expire, and they cannot be rotated with "kubeadm certs renew" either. To renew them, you will need to add "serverTLSBootstrap: true" to your cluster config. With this in place, when the serving certificate expires the kubelet will send a CSR to the cluster, and you can approve it with "kubectl certificate approve" to renew the certificate.
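A minimal sketch of that renewal flow (the CSR name below is a placeholder):
# 1. Set serverTLSBootstrap: true in the KubeletConfiguration
#    (for kubeadm clusters: the kubelet ConfigMap in kube-system plus /var/lib/kubelet/config.yaml
#    on each node), then restart the kubelet.
# 2. The kubelet files a CSR for its serving certificate; list and approve it:
kubectl get csr
kubectl certificate approve <csr-name>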
The cluster has been initialized with kubeadm using the pod network CIDR for Calico:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --image-repository=someserver
Then I got calico.yaml v3.11 and applied it:
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml
Right after that I checked the node status:
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" get nodes
NAME STATUS ROLES AGE VERSION
master-1 NotReady master 7m21s v1.17.2
On describe I get "cni config uninitialized", but I thought Calico should have taken care of that?
MemoryPressure False Fri, 21 Feb 2020 10:14:24 +0100 Fri, 21 Feb 2020 10:09:00 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 21 Feb 2020 10:14:24 +0100 Fri, 21 Feb 2020 10:09:00 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 21 Feb 2020 10:14:24 +0100 Fri, 21 Feb 2020 10:09:00 +0100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 21 Feb 2020 10:14:24 +0100 Fri, 21 Feb 2020 10:09:00 +0100 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
In fact I have nothing under /etc/cni/net.d/, so it seems something is missing?
ll /etc/cni/net.d/
total 0
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5644fb7cf6-f7lqq 0/1 Pending 0 3h
calico-node-f4xzh 0/1 Init:ImagePullBackOff 0 3h
coredns-7fb8cdf968-bbqbz 0/1 Pending 0 3h24m
coredns-7fb8cdf968-vdnzx 0/1 Pending 0 3h24m
etcd-master-1 1/1 Running 0 3h24m
kube-apiserver-master-1 1/1 Running 0 3h24m
kube-controller-manager-master-1 1/1 Running 0 3h24m
kube-proxy-9m879 1/1 Running 0 3h24m
kube-scheduler-master-1 1/1 Running 0 3h24m
As explained, I'm running through a local repo, and journalctl says:
kubelet[21935]: E0225 14:30:54.830683 21935 pod_workers.go:191] Error syncing pod cec2f72b-844a-4d6b-8606-3aff06d4a36d ("calico-node-f4xzh_kube-system(cec2f72b-844a-4d6b-8606-3aff06d4a36d)"), skipping: failed to "StartContainer" for "upgrade-ipam" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://repo:10000/v2/calico/cni/manifests/v3.11.2: no basic auth credentials"
kubelet[21935]: E0225 14:30:56.008989 21935 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It feels like CNI is not the only issue.
The CoreDNS pods will stay Pending and the master will stay NotReady until the Calico pods are running successfully and CNI is set up properly.
It seems to be a network issue preventing the Calico Docker images from being downloaded from docker.io. You can pull the Calico images from docker.io, push them to your internal container registry, modify the image references in the images section of calico.yaml to point at that registry, and finally apply the modified calico.yaml to the Kubernetes cluster.
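A rough sketch of that workflow, using the registry host from the journalctl output in the question (repo:10000) as a stand-in for your internal registry:
# mirror one image; repeat for calico/cni, calico/pod2daemon-flexvol and calico/kube-controllers
docker pull calico/node:v3.11.2
docker tag calico/node:v3.11.2 repo:10000/calico/node:v3.11.2
docker push repo:10000/calico/node:v3.11.2
# point the image: fields in calico.yaml at repo:10000/... and re-apply it
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml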
So the issue with Init:ImagePullBackOff was that the node could not pull the images from my private repo automatically. I had to pull all the Calico images with docker manually. Then I deleted the calico-node pod and it recreated itself using the newly pulled images:
sudo docker pull private-repo/calico/pod2daemon-flexvol:v3.11.2
sudo docker pull private-repo/calico/node:v3.11.2
sudo docker pull private-repo/calico/cni:v3.11.2
sudo docker pull private-repo/calico/kube-controllers:v3.11.2
sudo kubectl -n kube-system delete po/calico-node-y7g5
After that the pod re-ran all of its init phases and:
sudo kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5644fb7cf6-qkf47 1/1 Running 0 11s
calico-node-mkcsr 1/1 Running 0 21m
coredns-7fb8cdf968-bgqvj 1/1 Running 0 37m
coredns-7fb8cdf968-v85jx 1/1 Running 0 37m
etcd-lin-1k8w1dv-vmh 1/1 Running 0 38m
kube-apiserver-lin-1k8w1dv-vmh 1/1 Running 0 38m
kube-controller-manager-lin-1k8w1dv-vmh 1/1 Running 0 38m
kube-proxy-9hkns 1/1 Running 0 37m
kube-scheduler-lin-1k8w1dv-vmh 1/1 Running 0 38m
Issue: Redis pod creation on a k8s (v1.10) cluster is stuck at "ContainerCreating".
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned redis to k8snode02
Normal SuccessfulMountVolume 30m kubelet, k8snode02 MountVolume.SetUp succeeded for volume "default-token-f8tcg"
Warning FailedCreatePodSandBox 5m (x1202 over 30m) kubelet, k8snode02 Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "redis_default" network: failed to find plugin "loopback" in path [/opt/loopback/bin /opt/cni/bin]
Normal SandboxChanged 47s (x1459 over 30m) kubelet, k8snode02 Pod sandbox changed, it will be killed and re-created.
When I used Calico as the CNI, I faced a similar issue.
The container remained in the creating state. I checked /etc/cni/net.d and /opt/cni/bin on the master; both are present, but I'm not sure if this is required on the worker node as well.
root@KubernetesMaster:/opt/cni/bin# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5c7588df-5zds6 0/1 ContainerCreating 0 21m
root@KubernetesMaster:/opt/cni/bin# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetesmaster Ready master 26m v1.13.4
kubernetesslave1 Ready <none> 22m v1.13.4
root@KubernetesMaster:/opt/cni/bin#
kubectl describe pods
Name: nginx-5c7588df-5zds6
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: kubernetesslave1/10.0.3.80
Start Time: Sun, 17 Mar 2019 05:13:30 +0000
Labels: app=nginx
pod-template-hash=5c7588df
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/nginx-5c7588df
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qtfbs (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-qtfbs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qtfbs
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/nginx-5c7588df-5zds6 to kubernetesslave1
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "123d527490944d80f44b1976b82dbae5dc56934aabf59cf89f151736d7ea8adc" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8cc5e62ebaab7075782c2248e00d795191c45906cc9579464a00c09a2bc88b71" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "30ffdeace558b0935d1ed3c2e59480e2dd98e983b747dacae707d1baa222353f" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "630e85451b6ce2452839c4cfd1ecb9acce4120515702edf29421c123cf231213" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "820b919b7edcfc3081711bb78b79d33e5be3f7dafcbad29fe46b6d7aa22227aa" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "abbfb5d2756f12802072039dec20ba52f546ae755aaa642a9a75c86577be589f" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dfeb46ffda4d0f8a434f3f3af04328fcc4b6c7cafaa62626e41b705b06d98cc4" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ae3f47bb0282a56e607779d3267127ee8b0ae1d7f416f5a184682119203b1c8" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "546d07f1864728b2e2675c066775f94d658e221ada5fb4ed6bf6689ec7b8de23" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal SandboxChanged 18m (x12 over 18m) kubelet, kubernetesslave1 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 3m39s (x829 over 18m) kubelet, kubernetesslave1 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f586be437843537a3082f37ad139c88d0eacfbe99ddf00621efd4dc049a268cc" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
root@KubernetesMaster:/etc/cni/net.d#
On the worker node NGINX is trying to come up but keeps exiting. I am not sure what's going on here - I am a newbie to Kubernetes and not able to fix this issue:
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94b2994401d0 k8s.gcr.io/pause:3.1 "/pause" 1 second ago Up Less than a second k8s_POD_nginx-5c7588df-5zds6_default_677a722b-4873-11e9-a33a-06516e7d78c4_534
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f72500cae2b7 k8s.gcr.io/pause:3.1 "/pause" 1 second ago Up Less than a second k8s_POD_nginx-5c7588df-5zds6_default_677a722b-4873-11e9-a33a-06516e7d78c4_585
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kube…" 5 minutes ago Up 5 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
I checked /etc/cni/net.d and /opt/cni/bin on the worker node as well; they are there:
root@kubernetesslave1:/home/ubuntu# cd /etc/cni
root@kubernetesslave1:/etc/cni# ls -ltr
total 4
drwxr-xr-x 2 root root 4096 Mar 17 05:19 net.d
root@kubernetesslave1:/etc/cni# cd /opt/cni
root@kubernetesslave1:/opt/cni# ls -ltr
total 4
drwxr-xr-x 2 root root 4096 Mar 17 05:19 bin
root@kubernetesslave1:/opt/cni# cd bin
root@kubernetesslave1:/opt/cni/bin# ls -ltr
total 107440
-rwxr-xr-x 1 root root 3890407 Aug 17 2017 bridge
-rwxr-xr-x 1 root root 3475802 Aug 17 2017 ipvlan
-rwxr-xr-x 1 root root 3520724 Aug 17 2017 macvlan
-rwxr-xr-x 1 root root 3877986 Aug 17 2017 ptp
-rwxr-xr-x 1 root root 3475750 Aug 17 2017 vlan
-rwxr-xr-x 1 root root 9921982 Aug 17 2017 dhcp
-rwxr-xr-x 1 root root 2605279 Aug 17 2017 sample
-rwxr-xr-x 1 root root 32351072 Mar 17 05:19 calico
-rwxr-xr-x 1 root root 31490656 Mar 17 05:19 calico-ipam
-rwxr-xr-x 1 root root 2856252 Mar 17 05:19 flannel
-rwxr-xr-x 1 root root 3084347 Mar 17 05:19 loopback
-rwxr-xr-x 1 root root 3036768 Mar 17 05:19 host-local
-rwxr-xr-x 1 root root 3550877 Mar 17 05:19 portmap
-rwxr-xr-x 1 root root 2850029 Mar 17 05:19 tuning
root@kubernetesslave1:/opt/cni/bin#
Ensure that /etc/cni/net.d and its /opt/cni/bin friend both exist and are correctly populated with the CNI configuration files and binaries on all Nodes. For flannel specifically, one might make use of the flannel cni repo
I had this issue with my GKE cluster on GCP with one of my preemptible node pools. Thanks to @mdaniel's tip about checking the integrity of /etc/cni/net.d, I could reproduce the issue again by SSHing into a node of a testing cluster with the command gcloud compute ssh <name of some node> --zone <zone-of-cluster> --internal-ip. Then I simply edited the file /etc/cni/net.d/10-gke-ptp.conflist and messed with the values under "routes": [ {"dst": "0.0.0.0/0"} ] (changed from 0.0.0.0/0 to 1.0.0.0/0).
After that, I deleted the pods that were running on it and they all got stuck in ContainerCreating status forever, generating kubelet events with the error Failed create pod sandbox: rpc error: code...
Note that in order to test this I set up my node pool with a maximum of 1 node. Otherwise GKE will scale up a new one and the pods will be recreated on the new node. In my production incident the node pool had reached its maximum node count, so capping my test pool at 1 node reproduced a similar situation.
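If you want to reproduce it the same way, one way to cap the test pool at a single node is (cluster, pool and zone names are placeholders):
gcloud container clusters update my-cluster --node-pool my-pool \
  --enable-autoscaling --min-nodes 0 --max-nodes 1 --zone us-central1-a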
Since deleting the node from GKE solved the issue in production, I created a Python script that lists all events on the cluster and filters for the ones that contain the keyword "Failed create pod sandbox: rpc error: code". Then I go over all of those events and get their pods, and from the pods I get the nodes. Finally, I loop over the nodes, deleting them from both the Kubernetes API and the Compute API with their respective Python clients. For the Python script I used the libraries kubernetes and google-cloud-compute.
This is a simpler version of the script. Test it before using it:
import time

from kubernetes import client, config
from google.cloud.compute_v1.services.instances import InstancesClient

# keyword emitted by the kubelet when sandbox creation fails
ERROR_KEYWORDS = [
    'Failed create pod sandbox'.lower()
]

# fill these in for your project and the zone of the affected node pool
PROJECT_ID = 'my-project'
CLUSTER_ZONE = 'us-central1-a'

config.load_kube_config()
v1 = client.CoreV1Api()
gcp_client = InstancesClient()

events_result = v1.list_event_for_all_namespaces()

# filter only the events containing ERROR_KEYWORDS
filtered_events = []
for event in events_result.items:
    for error_keyword in ERROR_KEYWORDS:
        if error_keyword in event.message.lower():
            filtered_events.append(event)

# get the list of pods from those events
pods_list = {}
for event in filtered_events:
    try:
        pod = v1.read_namespaced_pod(
            event.involved_object.name,
            namespace=event.involved_object.namespace
        )
        pod_dict = {
            "name": event.involved_object.name,
            "namespace": event.involved_object.namespace,
            "node": pod.spec.node_name
        }
        pods_list[event.involved_object.name] = pod_dict
    except Exception:
        pass

# get the nodes from those pods
broken_nodes = set()
for name, pod_dict in pods_list.items():
    if pod_dict.get('node'):
        broken_nodes.add(pod_dict["node"])
broken_nodes = list(broken_nodes)

# delete the nodes from both the Kubernetes API and the Compute Engine API
if broken_nodes:
    broken_nodes_str = ", ".join(broken_nodes)
    print(f'BROKEN NODES: "{broken_nodes_str}"')
    for node in broken_nodes:
        try:
            v1.delete_node(node)
        except Exception:
            pass
        time.sleep(30)
        try:
            gcp_client.delete(project=PROJECT_ID, zone=CLUSTER_ZONE, instance=node)
        except Exception:
            pass
AWS EKS doesn't yet support t3a, m5ad, or r5ad instances.
kubectl drain node1 node2 --delete-local-data --force --ignore-daemonsets
Just as I was planning to evict the pods on all the nodes, the pods that had been erroring the whole time unexpectedly became Running. You can try executing it; I hope it is useful to you.
This problem appeared for me when I added a PVC on AWS EKS.
Updating the aws-node CNI plugin to the latest version resolved it -
https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
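For reference, a quick way to check which aws-node (VPC CNI) version a cluster is currently running before following that guide (the update manifest itself is described in the linked AWS documentation):
kubectl describe daemonset aws-node -n kube-system | grep Image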
The following steps reset the Kubernetes cluster and helped me to solve my problem:
Stop all running pods
Delete all worker nodes from cluster
Perform kubeadm reset on master and nodes
Initialize the master node
kubeadm init --apiserver-advertise-address
Install the pod network "WeaveNet"
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.0.0/16"
Join nodes to the cluster
Restart all nodes
#-------------------------------------
#Reset the kubernetes environment
#-----------------------------------
#[root@centos8-Master: ~]# k get nodes
#NAME STATUS ROLES AGE VERSION
#centos8-master Ready control-plane 14m v1.24.1
#centos8-slave Ready <none> 11m v1.24.3
#
#Master Node
#1. Delete the nodes
#First delete all pods, deployments, svc
#kubectl delete --all pods
#kubectl delete --all deployments
#kubectl delete --all svc
#kubectl drain centos8-slave --ignore-daemonsets --delete-emptydir-data --force
#kubectl delete node centos8-slave
#
#Worker Node
#2. Go to worker node, stop all the kubelet services.
#[root@centos8-Slave rprasads]# kubectl version --short
#Client Version: v1.24.3
#Kustomize Version: v4.5.4
#[root@centos8-Slave rprasads]# systemctl stop kubelet
#[root@centos8-Slave rprasads]# netstat -tulnp |grep kube
#kill -9 <pid> [kube-proxy]
#
#Master Node
#2. Reset the kubeadm.
#$ sudo kubeadm reset
#$ sudo swapoff -a
#
#Master Node
#3. Get you kubeadm version
#[root@centos8-Master: ~]# kubectl version --short
#Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
#Client Version: v1.24.1
#Kustomize Version: v4.5.4
#Server Version: v1.24.3
#
#Master Node
#4.On Master Initialize the kubeadm with proper network address and version
#$ kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=192.168.0.0/16
##Download calico yaml file from the site: Refer the documentation https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes
#
#$ curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
#$ kubectl apply -f calico.yaml
#
#Worker Node
#5. Go to worker node and add the node with the command displayed.
# kubeadm join 192.168.56.101:6443 --token h0nuxq.zk9m731nc4ia93pq --discovery-token-ca-cert-hash sha256:1682644baf3433caeb0e6f9099ed487ef48b94ab6a0314df88e3ff42ae501a13
#
#Master Node
#6.On the master node run below commands.
#$ sudo rm -rf $HOME/.kube
#
#$ mkdir -p $HOME/.kube
#$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
#
#$ sudo systemctl enable docker.service
#$ sudo service kubelet restart
#
#$ kubectl get nodes
#
#
#------------------------------------------------
#Test your new kubernetes cluster environment.
#-----------------------------------------------
#[root@centos8-Master: ~]# kubectl run nginx --image=nginx
#Wait for some time.
#
#[root@centos8-Master: ~]# k describe pods nginx
#Normal Scheduled 21s default-scheduler Successfully assigned default/nginx to centos8-slave
#
#[root@centos8-Master: ~]# k get pods
#NAME READY STATUS RESTARTS AGE
#nginx 1/1 Running 0 25s
#
#*************************************END*************************************
k = kubectl. I'm getting these events:
$ k get events -w
...snip
2018-02-03 13:46:06 +0100 CET 2018-02-03 13:46:06 +0100 CET 1 consul-0.150fd18470775752 Pod spec.containers{consul} Normal Started kubelet, gke-projectid-default-pool-2de02f1c-059w Started container
2018-02-03 13:46:06 +0100 CET 2018-02-03 13:46:06 +0100 CET 1 consul-0.150fd184668e88a6 Pod spec.containers{consul} Normal Created kubelet, gke-projectid-default-pool-2de02f1c-059w Created container
2018-02-03 13:47:35 +0100 CET 2018-02-03 13:47:35 +0100 CET 1 consul-0.150fd1993877443c Pod Warning FailedMount kubelet, gke-projectid-default-pool-2de02f1c-059w Unable to mount volumes for pod "consul-0_staging(1f35ac42-08e0-11e8-850a-42010af001f0)": timeout expired waiting for volumes to attach/mount for pod "staging"/"consul-0". list of unattached/unmounted volumes=[data config tls default-token-93wx3]
Meanwhile, at the same time:
$ k get pods
consul-0 1/1 Running 0 49m
consul-1 1/1 Running 0 1h
consul-2 1/1 Running 0 1h
...snip
What is going on? Why are the events telling me it's restarting/starting the container? k logs pods/consul-0 (and -1 and -2) doesn't say anything about them being restarted.
The third column of the events output tells you the number of times an event has been seen. In your case, that value is 1. So it's not restarting your container: it's just telling you that at some point in the past, it created and started the container. That's why you can see it's running when you kubectl get pods.
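If you ever want to double-check whether a container really restarted, the pod's restart count is the authoritative place to look, for example (the pod name and namespace come from the events in the question):
kubectl -n staging get pod consul-0 -o jsonpath='{.status.containerStatuses[0].restartCount}'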