Pods are not starting. NetworkPlugin cni failed to set up pod - kubernetes

Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I tried to launch the Spinnaker pods (YAML files here). I chose Flannel (kubectl apply -f kube-flannel.yml) while installing Kubernetes. The pods are not starting; they are stuck in "ContainerCreating" status. When I kubectl describe a pod, it shows NetworkPlugin cni failed to set up pod:
veeru@ubuntu:/opt/spinnaker/experimental/kubernetes/simple$ kubectl describe pod data-redis-master-v000-38j80 --namespace=spinnaker
Name: data-redis-master-v000-38j80
Namespace: spinnaker
Node: ubuntu/192.168.6.136
Start Time: Thu, 01 Jun 2017 02:54:14 -0700
Labels: load-balancer-data-redis-server=true
replication-controller=data-redis-master-v000
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"spinnaker","name":"data-redis-master-v000","uid":"43d4a44c-46b0-11e7-b0e1-000c29b...
Status: Pending
IP:
Controllers: ReplicaSet/data-redis-master-v000
Containers:
redis-master:
Container ID:
Image: gcr.io/kubernetes-spinnaker/redis-cluster:v2
Image ID:
Port: 6379/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
Requests:
cpu: 100m
Environment:
MASTER: true
Mounts:
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-71p4q (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-71p4q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-71p4q
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
45m 45m 1 default-scheduler Normal Scheduled Successfully assigned data-redis-master-v000-38j80 to ubuntu
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 8265d80732e7b73ebf8f1493d40403021064b61436c4c559b41330e7592fd47f"
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: b972862d763e621e026728073deb9a304748c4ec4522982db0a168663ab59d36
42m 42m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 72b39083a3a81c0da1d4b7fa65b5d6450b62a3562a05452c27b185bc33197327"
41m 41m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d315511bfa9f6f09d7ef4cd277bde44e4885291ea566e3089460356c1ed34413"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: a03d776d2d7c5c4ae9c1ec31681b0b6e40759326a452916cff0e60c4d4e2c954"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: acf30a4aacda0c53bdbb8bc2d416704720bd1b623c43874052b4029f15950052"
39m 39m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ea49f5f9428d585be7138f4ebce54f713eef549b16104a3d7aa728175b6ebc2a"
38m 38m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ec2483435b4b22576c9bd7bffac5d67d53893c189c0cf26aca1ae6af79d09914"
38m 1m 39 kubelet, ubuntu Warning FailedSync (events with common reason combined)
45m 1s 448 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
45m 0s 412 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"data-redis-master-v000-38j80_spinnaker\" network: open /run/flannel/subnet.env: no such file or directory"
How can I resolve the above issue?
UPDATE-1
I have reinitialized Kubernetes with kubeadm init --pod-network-cidr=10.244.0.0/16 and deployed a sample nginx pod. I am still getting the same error.
-----------------OUTPUT REMOVED-------------------------------
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 default-scheduler Normal Scheduled Successfully assigned nginx-622qj to ubuntu
1m 1m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 38250afd765f0108aeff6e31bbe5a642a60db99b97cbbf15711f810cbe8f3829"
24s 24s 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 3bebcef02cb5f6645a65dcf06b2730144080f9d3c4fb18267feca5c5ce21031c"
2m 9s 33 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
3m 7s 32 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"nginx-622qj_default\" network: open /run/flannel/subnet.env: no such file or directory"

Your error message shows that the flannel subnet.env file is missing; you need to fix the flannel configuration first. What version of Kubernetes are you using?
network: open /run/flannel/subnet.env: no such file or directory"
If you are using Kubernetes 1.6 or above, you can use the YAML files below to configure the flannel container process:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
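After applying these, you can verify that flannel came up and wrote its subnet file (the app=flannel label selector assumes the standard kube-flannel manifest; run the cat on the node itself):
kubectl get pods -n kube-system -l app=flannel
cat /run/flannel/subnet.env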

Running the following command resolved my issues:
kubeadm init --pod-network-cidr=10.244.0.0/16
For flannel as the CNI, kubeadm init needs the --pod-network-cidr=... argument set to flannel's overlay network range.
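For reference, a minimal sketch of the full sequence, assuming you can recreate the cluster and use the flannel manifest from the answer above:
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml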

Here are the steps that fixed my issue.
Create a file called subnet.env at /run/flannel/ on your worker nodes.
Add the content below to it.
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Save the file and create the pod again. It should show the Running status now.
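For example, a one-shot way to create the file with exactly that content (values copied from above; adjust FLANNEL_SUBNET per node if your setup differs):
sudo mkdir -p /run/flannel
sudo tee /run/flannel/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF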

I was getting the same error on one of my nodes after disk pressure, which had somehow deleted the file /run/flannel/subnet.env.
Recreating the file with the content from another node resolved my issue.
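For example (healthy-node is a placeholder for a node where flannel still works):
scp healthy-node:/run/flannel/subnet.env /tmp/subnet.env
sudo mkdir -p /run/flannel && sudo mv /tmp/subnet.env /run/flannel/subnet.env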

Related

Centos 8 microk8s Readiness probe failed: HTTP probe failed with statuscode: 503

I have installed microk8s on my CentOS 8 operating system.
kube-system coredns-7f9c69c78c-lxm7c 0/1 Running 1 18m
kube-system calico-node-thhp8 1/1 Running 1 68m
kube-system calico-kube-controllers-f7868dd95-dpsnl 0/1 CrashLoopBackOff 23 68m
When I do microk8s enable dns, coredns or calico-kube-controllers cannot start, as shown above.
Describe output for the coredns pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned kube-system/coredns-7f9c69c78c-lxm7c to localhost.localdomain
Normal Pulled 14m kubelet Container image "coredns/coredns:1.8.0" already present on machine
Normal Created 14m kubelet Created container coredns
Normal Started 14m kubelet Started container coredns
Warning Unhealthy 11m (x22 over 14m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
Normal SandboxChanged 2m8s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 2m7s kubelet Container image "coredns/coredns:1.8.0" already present on machine
Normal Created 2m7s kubelet Created container coredns
Normal Started 2m6s kubelet Started container coredns
Warning Unhealthy 2m6s kubelet Readiness probe failed: Get "http://10.1.102.132:8181/ready": dial tcp 10.1.102.132:8181: connect: connection refused
Warning Unhealthy 9s (x12 over 119s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
Describe output for the calico-kube-controllers pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 73m default-scheduler no nodes available to schedule pods
Warning FailedScheduling 73m (x1 over 73m) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 72m (x1 over 72m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled 72m default-scheduler Successfully assigned kube-system/calico-kube-controllers-f7868dd95-dpsnl to localhost.localdomain
Warning FailedCreatePodSandBox 72m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f3ea36b003b0c9142ae63fee31531f9102e40ab837f4d795d1efb5c85af223ec": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a1c405cdcebe79c586badcc8da47700247751a50ef9a1403e95fc4995485fba0": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4adb07610eef0d7a618105abf72a114e486c373a02d5d1b204da2bd35268dd1b": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "96aac009175973ac4c20034824db3443b3ab184cfcd1ed23786e539fb6147796": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "79639a18edcffddbdb93492157af43bb6c1f1a9ac2af1b3fbbac58335737d5dc": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3264f006447297583a37d8cc87ffe01311deaf2a31bf25867b3b18c83db2167d": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5c5cf6509bfcf515ad12bc51451e4c385e5242c4f7bb593779d207abf9c906a4": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Normal Pulling 70m kubelet Pulling image "calico/kube-controllers:v3.13.2"
Normal Pulled 69m kubelet Successfully pulled image "calico/kube-controllers:v3.13.2" in 50.744281789s
Normal Created 69m kubelet Created container calico-kube-controllers
Normal Started 69m kubelet Started container calico-kube-controllers
Warning Unhealthy 69m (x2 over 69m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Warning MissingClusterDNS 37m (x185 over 72m) kubelet pod: "calico-kube-controllers-f7868dd95-dpsnl_kube-system(d8c3ee40-7d3b-4a84-9398-19ec8a6d9082)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning Unhealthy 31m (x6 over 32m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Normal Pulled 30m (x4 over 32m) kubelet Container image "calico/kube-controllers:v3.13.2" already present on machine
Normal Created 30m (x4 over 32m) kubelet Created container calico-kube-controllers
Normal Started 30m (x4 over 32m) kubelet Started container calico-kube-controllers
Warning BackOff 22m (x42 over 32m) kubelet Back-off restarting failed container
Normal SandboxChanged 10m kubelet Pod sandbox changed, it will be killed and re-created.
Warning Unhealthy 9m36s (x6 over 10m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Normal Pulled 8m51s (x4 over 10m) kubelet Container image "calico/kube-controllers:v3.13.2" already present on machine
Normal Created 8m51s (x4 over 10m) kubelet Created container calico-kube-controllers
Normal Started 8m51s (x4 over 10m) kubelet Started container calico-kube-controllers
Warning BackOff 42s (x42 over 10m) kubelet Back-off restarting failed container
I cannot start my microk8s services. I don't encounter these errors on my Ubuntu server. What can I do about these errors on my CentOS 8 server?
Have you tried updating the microk8s version?
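For example, a refresh sketch (the channel is only an illustration; pick one that matches your cluster), followed by microk8s' built-in diagnostics:
sudo snap refresh microk8s --channel=latest/stable
microk8s inspect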

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container

We are trying to create a Pod, but its status has been stuck at ContainerCreating for a long time.
This is the output we got after running the command kubectl describe pod:
Name: demo-6c59fb8f77-9x6sr
Namespace: default
Priority: 0
Node: k8-slave2/10.0.0.5
Start Time: Wed, 23 Dec 2020 10:16:23 +0000
Labels: app=demo
pod-template-hash=6c59fb8f77
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/demo-6c59fb8f77
Containers:
private-docker-registry:
Container ID:
Image: private-docker-registry:5000/mahin/mof-docker-demo:v1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p94zw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-p94zw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p94zw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/demo-6c59fb8f77-9x6sr to k8-slave2
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8eee497a2176c7f5782222f804cc63a4abac7f4a2fc7813016793857ae1b1dff" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "95e72bfc6f6c13de7f5c96eb76b012c2e6639ca03f4c2f270b23ed1a09b90413" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "566370012e4a1d32af2ef9035ff64d743cd81f36f25d2724e7b033e393b8247e" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7d499e40f572cfc29ecfb44f8376493df56a44213b1c1e9333b65499a0c288cd" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "53241e64de1e4470712b4061e2c82f44916d654bc532f8f1d12e5d5d4e136914" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fd168faab4546f988dc38fc56df2f71cf80c922e86d3f869be15a43f08328f99" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e578afe329abb0cba64802dfa480e00f2bbbb8c80be537791c24a31c853eb62f" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a3cb32dba55907ca907fc4f38f7ca05ef6db10a6af2dd1fa3c4db166e4ab9ffe" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7e4368ba8ec460b3c94de24ab0a04b6c799eb28df885cbbacfc3bb3ffa8c1e67" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 10m (x4 over 10m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c4aaa8f8cd2dc1eff788baf04774c4ecc845568d00ed1b386df311ec224eb6f3" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 56s (x551 over 10m) kubelet Pod sandbox changed, it will be killed and re-created.
azureuser@k8-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default demo-6c59fb8f77-2jq6k 0/1 ContainerCreating 0 5m23s
kube-system coredns-f9fd979d6-q8s9b 1/1 Running 2 27h
kube-system coredns-f9fd979d6-qnm4j 1/1 Running 2 27h
kube-system etcd-k8-master 1/1 Running 2 27h
kube-system kube-apiserver-k8-master 1/1 Running 3 27h
kube-system kube-controller-manager-k8-master 1/1 Running 3 27h
kube-system kube-flannel-ds-kqz4t 0/1 CrashLoopBackOff 92 27h
kube-system kube-flannel-ds-szqzn 1/1 Running 3 27h
kube-system kube-flannel-ds-v9q47 0/1 CrashLoopBackOff 142 27h
kube-system kube-proxy-4mb47 1/1 Running 2 27h
kube-system kube-proxy-54m9b 1/1 Running 2 27h
kube-system kube-proxy-wdxfz 1/1 Running 1 27h
kube-system kube-scheduler-k8-master 1/1 Running 3 27h
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-zmlvs 0/1 ContainerCreating 0 27h
kubernetes-dashboard kubernetes-dashboard-665f4c5ff-cnsvn 0/1 ContainerCreating 0 6h3m
To fix the flannel CrashLoopBackOff we did kubeadm reset, and after some time this problem showed up again.
Currently we are working with one master and two worker nodes.
My cluster details are as follows:
azureuser@k8-master:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://52.150.11.168:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
Docker version:
azureuser@k8-master:~$ sudo docker version
[sudo] password for azureuser:
Client:
Version: 19.03.6
API version: 1.40
Go version: go1.12.17
Git commit: 369ce74a3c
Built: Wed Oct 14 19:00:27 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.6
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: 369ce74a3c
Built: Wed Oct 14 16:52:50 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.3-0ubuntu1~18.04.2
GitCommit:
runc:
Version: spec: 1.0.1-dev
GitCommit:
docker-init:
Version: 0.18.0
GitCommit:
kubeadm version:
azureuser@k8-master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:15:05Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Flannel crashes whenever I try to schedule pod creation.
Background
I think your issue is caused by your two Flannel CNI pods being in CrashLoopBackOff status.
Your error
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8eee497a2176c7f5782222f804cc63a4abac7f4a2fc7813016793857ae1b1dff" network for pod "demo-6c59fb8f77-9x6sr": networkPlugin cni failed to set up pod "demo-6c59fb8f77-9x6sr_default" network: open /run/flannel/subnet.env: no such file or directory
indicates that the pod cannot be created because the /run/flannel/subnet.env file is missing.
In the Flannel GitHub documentation you can find:
Flannel runs a small, single binary agent called flanneld on each host, and is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space.
Meaning, to work properly, a Flannel pod should be running on each node, as it contains the subnet information. From your outputs I can see that only 1 out of 3 Flannel pods is working properly:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
kube-system kube-flannel-ds-kqz4t 0/1 CrashLoopBackOff 92 27h
kube-system kube-flannel-ds-szqzn 1/1 Running 3 27h
kube-system kube-flannel-ds-v9q47 0/1 CrashLoopBackOff 142 27h
If the mentioned pod was scheduled on a node where the flannel pod is not working, it won't be created due to CNI network issues. Besides your demo pod, the kubernetes-dashboard pods have the same issue, showing ContainerCreating status.
Conclusion
Your demo pod cannot be created because Kubernetes encounters network issues related to the flannel configuration file (...network: open /run/flannel/subnet.env: no such file or directory).
Your flannel pods' restart counts are very high for 27 hours. You have to determine why and fix it. It might be a lack of resources, network issues in your infrastructure, or many other reasons. Once all flannel pods are working correctly, you shouldn't encounter this error.
Solution
You have to make the flannel pods work correctly on each node. One option is shown below.
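One way to force a restart is to delete the crashing pods; the DaemonSet will recreate them (pod names taken from your output, and the app=flannel label selector assumes the standard flannel manifest):
kubectl -n kube-system delete pod kube-flannel-ds-kqz4t kube-flannel-ds-v9q47
kubectl -n kube-system get pods -l app=flannel -w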
Additional Troubleshooting Details
For a detailed investigation, please provide:
$ kubectl describe pod kube-flannel-ds-kqz4t -n kube-system
$ kubectl describe pod kube-flannel-ds-v9q47 -n kube-system
Log details would also be helpful:
$ kubectl logs kube-flannel-ds-kqz4t -n kube-system
$ kubectl logs kube-flannel-ds-v9q47 -n kube-system
Please also replace kubectl get pods --all-namespaces with kubectl get pods -o wide -A, and include the output of kubectl get nodes -o wide.
If you provide that information, it should be possible to determine the root cause of the flannel pod issues, and I will edit this answer with the exact solution.

Kubernetes cluster VirtualBox issues with networking (NAT and Host-only adapters)

I am trying to set up a Kubernetes cluster (two nodes: 1 master, 1 worker) on VirtualBox. My host computer runs Windows 10, and in VirtualBox I have installed Ubuntu 18.10 (codename Cosmic).
I have configured two adapters on each VM: one NAT and one Host-Only adapter. I did that because I need to access some internal resources using the host IP (NAT), and I also need a stable network between the host and the virtual machines (Host-Only network).
I have installed Kubernetes v1.12.4 and successfully joined the worker to the master node.
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 36m v1.12.4
kubernetes-slave Ready <none> 25m v1.12.4
I am using Flannel for networking.
All pods seem to be OK:
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-server-7bb6997d9c-kdcld 1/1 Running 0 27m
kube-system coredns-576cbf47c7-btrvb 1/1 Running 1 38m
kube-system coredns-576cbf47c7-zfscv 1/1 Running 1 38m
kube-system etcd-kubernetes-master 1/1 Running 1 38m
kube-system kube-apiserver-kubernetes-master 1/1 Running 1 38m
kube-system kube-controller-manager-kubernetes-master 1/1 Running 1 38m
kube-system kube-flannel-ds-amd64-29p96 1/1 Running 1 28m
kube-system kube-flannel-ds-amd64-sb2fq 1/1 Running 1 37m
kube-system kube-proxy-59v6b 1/1 Running 1 38m
kube-system kube-proxy-bfd78 1/1 Running 0 28m
kube-system kube-scheduler-kubernetes-master 1/1 Running 1 38m
I have deployed nginx to verify that everything is working:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 41m
nginx-http ClusterIP 10.111.151.28 <none> 80/TCP 29m
However, when I try to reach nginx, I get a timeout. kubectl describe pod gives me the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/nginx-server-7bb6997d9c-kdcld to kubernetes-slave
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dbb2595628fc2579c29779e31e27e27eaeff2dbcf2bdb68467c47f22a3590bd0" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "801e0f3f8ca4a9b7cc21d87d41141485e1b1da357f2d89e1644acf0ecf634016" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "77214c757449097bfbe05b24ebb5fd3c7f1d96f7e3e9a3cd48f3b37f30224feb" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ebffdd723083d916c0910489e12368dc4069dd99c24a3a4ab1b1d4ab823866ff" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d87b93815380246a05470e597a88d50eb31c132a50e30000ab41a456d1e65107" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3ef233ef0a6c447134c7b027747a701d6576a80e76c9cc8ffd8287e8ee5f02a4" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6b621aab3c57154941b37360240228fe939b528855a5fe8cd9536df63d41ed93" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa992bde90e0a1839180666bedaf74965fb26f3dccb33a66092836a25882ab44" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m kubelet, kubernetes-slave Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "81f74f687e17d67bd2853849f84ece33a118744278d78ac7af3bdeadff8aa9c7" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 32m (x2 over 32m) kubelet, kubernetes-slave (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "29188c3e73d08e81b08b2258254dc2691fcaa514ecc96e9df86f2e61ba455b76" network for pod "nginx-server-7bb6997d9c-kdcld": NetworkPlugin cni failed to set up pod "nginx-server-7bb6997d9c-kdcld_default" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 32m (x11 over 32m) kubelet, kubernetes-slave Pod sandbox changed, it will be killed and re-created.
Normal Pulling 32m kubelet, kubernetes-slave pulling image "nginx"
Normal Pulled 32m kubelet, kubernetes-slave Successfully pulled image "nginx"
Normal Created 32m kubelet, kubernetes-slave Created container
I have tried the exact same installation with only a bridged adapter configured on the virtual machines, and then everything works as expected.
I believe that it is a configuration issue; however, I am unable to solve it. Can someone advise me?
As I mentioned in a deleted comment, I recreated this on my Ubuntu 18.04 host. I created two Ubuntu 18.10 VMs, each with two adapters (one NAT and one Host-Only adapter). I have the same configuration as you have specified here. Everything works fine.
What I had to do was add the second adapter manually; I did it using netplan before running kubeadm init and kubeadm join on the node. A sketch of the netplan file follows.
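For example, a minimal netplan sketch (the interface names enp0s3/enp0s8 and the address are assumptions; adjust them to your VM):
sudo tee /etc/netplan/50-cloud-init.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp0s3:   # NAT adapter
      dhcp4: true
    enp0s8:   # host-only adapter (assumed interface name)
      dhcp4: false
      addresses: [192.168.56.10/24]
EOF
sudo netplan generate
sudo netplan apply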
Just in case you did not do that: add the host-only adapter network to the YAML file in /etc/netplan/50-cloud-init.yaml, then run sudo netplan generate and sudo netplan apply. For nginx I used the deployment from the official Kubernetes documentation. Then I exposed the service:
kubectl create service nodeport nginx --tcp=80:80
Curling my node's IP address on the NodePort from the host machine works fine.
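For example (the IP and port are placeholders; use your node's host-only IP and the NodePort that kubectl reports):
kubectl get service nginx
curl http://192.168.56.10:30080/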
This was just to demonstrate what I did so that it works in my environment. Judging from the described pod error, it seems like there is something wrong with Flannel itself:
/run/flannel/subnet.env: no such file or directory
I checked this file on the master and it looks like this:
/run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Check if the file is there. If this does not help you, we can try to troubleshoot further if you provide more information. However, there are too many unknowns, so I had to guess in some places; my advice would be to destroy it all and try again with the information I have provided, and to run nginx with a NodePort rather than a ClusterIP type service. A ClusterIP will only be reachable from inside the cluster, for example from a node.
Please let me bump this thread. A long time ago I configured 1 NAT adapter for internet access and 1 host-only adapter for remote SSH, and got the same errors, especially when setting up Rancher Longhorn.
Now, I don't build it like that. First, I build a gateway server using CentOS with iptables (1 NAT adapter, 1 host-only adapter).
Then the other VMs have just one host-only interface, connected directly to the gateway server. A minimal iptables sketch for the gateway follows.
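For reference, a minimal iptables NAT sketch for such a gateway (eth0 as the NAT/internet side and eth1 as the host-only side are assumptions):
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT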

kube-dns remains in "ContainerCreating"

I manually installed k8s 1.6.6, and I deployed calico 2.3 (which uses etcd 3.0.17 with kube-apiserver) and kube-dns on bare metal (Ubuntu 16.04).
There are no problems without RBAC.
But after applying RBAC by adding --authorization-mode=RBAC to kube-apiserver,
I couldn't deploy kube-dns; its status remains in "ContainerCreating".
I checked "kubectl describe pod kube-dns.."
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10m 10m 1 default-scheduler Normal Scheduled Successfully assigned kube-dns-1759312207-t35t3 to work01
9m 9m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8c2585b1b3170f220247a6abffb1a431af58060f2bcc715fe29e7c2144d19074
8m 8m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c6962db6c5a17533fbee563162c499631a647604f9bffe6bc71026b09a2a0d4f
7m 7m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 9adc41d07a80db44099460c6cc56612c6fbcd53176abcc3e7bbf843fca8b7532"
5m 5m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 4c2d450186cbec73ea28d2eb4c51497f6d8c175b92d3e61b13deeba1087e9a40
4m 4m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 12df544137939d2b8af8d70964e46b49f5ddec1228da711c084ff493443df465"
3m 3m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c51c9d50dcd62160ffe68d891967d118a0f594885e99df3286d0c4f8f4986970
2m 2m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 94533f19952c7d5f32e919c03d9ec5147ef63d4c1f35dd4fcfea34306b9fbb71
1m 1m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 166a89916c1e6d63e80b237e5061fd657f091f3c6d430b7cee34586ba8777b37
16s 12s 2 kubelet, work01 Warning FailedSync (events with common reason combined)
10m 2s 207 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-1759312207-t35t3_kube-system\" network: the server does not allow access to the requested resource (get pods kube-dns-1759312207-t35t3)"
10m 1s 210 kubelet, work01 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
My kubelet unit file:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /var/log/containers
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStart=/usr/local/bin/kubelet \
--api-servers=http://127.0.0.1:8080 \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--cluster-dns=10.3.0.10 \
--cluster-domain=cluster.local \
--register-node=true \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--container-runtime=docker
My kube-apiserver manifest:
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: kube-apiserver:v1.6.6
command:
- kube-apiserver
- --bind-address=0.0.0.0
- --etcd-servers=http://127.0.0.1:2379
- --allow-privileged=true
- --service-cluster-ip-range=10.3.0.0/16
- --secure-port=6443
- --advertise-address=172.30.1.10
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
- --tls-cert-file=/srv/kubernetes/apiserver.pem
- --tls-private-key-file=/srv/kubernetes/apiserver-key.pem
- --client-ca-file=/srv/kubernetes/ca.pem
- --service-account-key-file=/srv/kubernetes/apiserver-key.pem
- --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
- --anonymous-auth=false
- --authorization-mode=RBAC
- --token-auth-file=/srv/kubernetes/known_tokens.csv
- --basic-auth-file=/srv/kubernetes/basic_auth.csv
- --storage-backend=etcd3
livenessProbe:
httpGet:
host: 127.0.0.1
port: 8080
path: /healthz
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- name: https
hostPort: 6443
containerPort: 6443
- name: local
hostPort: 8080
containerPort: 8080
volumeMounts:
- name: srvkube
mountPath: "/srv/kubernetes"
readOnly: true
- name: etcssl
mountPath: "/etc/ssl"
readOnly: true
volumes:
- name: srvkube
hostPath:
path: "/srv/kubernetes"
- name: etcssl
hostPath:
path: "/etc/ssl"
I found the cause.
This issue is not related to kube-dns.
I just missed applying the ClusterRole/ClusterRoleBinding before deploying calico.
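If you hit this again, a quick way to confirm an RBAC denial (the service account name assumes the standard Calico manifest; kubectl auth can-i requires a reasonably recent kubectl):
kubectl auth can-i get pods --as=system:serviceaccount:kube-system:calico-node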

ContainerCreating: Error from server (BadRequest): container "kubedns"

I have set up this 3-node cluster (http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/).
After restarting my nodes, the KubeDNS service is not starting. The log doesn't show much information.
I am getting the below message:
$ kubectl logs --namespace=kube-system kube-dns-v19-sqx9q -c kubedns
Error from server (BadRequest): container "kubedns" in pod "kube-dns-v19-sqx9q" is waiting to start: ContainerCreating
The nodes are running:
$ kubectl get nodes
NAME STATUS AGE VERSION
172.18.18.101 Ready,SchedulingDisabled 2d v1.6.0
172.18.18.102 Ready 2d v1.6.0
172.18.18.103 Ready 2d v1.6.0
$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-node-6rhb9 2/2 Running 4 2d
calico-node-mbhk7 2/2 Running 93 2d
calico-node-w9sjq 2/2 Running 6 2d
calico-policy-controller-2425378810-rd9h7 1/1 Running 0 25m
kube-dns-v19-sqx9q 0/3 ContainerCreating 0 25m
kubernetes-dashboard-2457468166-rs0tn 0/1 ContainerCreating 0 25m
How can I find out what is wrong with the DNS service?
Thanks
SR
Some more details:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 87bd5c4bc5b9d81468170cc840ba9203988bb259aa0c025372ee02303d9e8d4b"
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d091593b55eb9e16e09c5bc47f4701015839d83d23546c4c6adc070bc37ad60d"
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 69a1fa33f26b851664b2ad10def1eb37b5e5391ca33dad2551a2f98c52e05d0d
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: c3b7c06df3bea90e4d12c0b7f1a03077edf5836407206038223967488b279d3d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 467d54496eb5665c5c7c20b1adb0cc0f01987a83901e4b54c1dc9ccb4860f16d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1cd8022c9309205e61d7e593bc7ff3248af17d731e2a4d55e74b488cbc115162
27m 27m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1ed4174aba86124055981b7888c9d048d784e98cef5f2763fd1352532a0ba85d
26m 26m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 444693b4ce06eb25f3dbd00aebef922b72b291598fec11083cb233a0f9d5e92d"
25m 25m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 736df24a9a6640300d62d542e5098e03a5a9fde4f361926e2672880b43384516
8m 8m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8424dbdf92b16602c7d5a4f61d21cd602c5da449c6ec3449dafbff80ff5e72c4
2h 1m 49 kubelet, 172.18.18.102 Warning FailedSync (events with common reason combined)
2h 2s 361 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-v19-sqx9q_kube-system\" network: the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)"
2h 1s 406 kubelet, 172.18.18.102 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
Pod describe output:
Name: kube-dns-v19-sqx9q
Namespace: kube-system
Node: 172.18.18.102/172.18.18.102
Start Time: Mon, 24 Apr 2017 17:34:22 -0400
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
version=v19
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-dns-v19","uid":"dac3d892-278c-11e7-b2b5-0800...
scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Pending
IP:
Controllers: ReplicationController/kube-dns-v19
Containers:
kubedns:
Container ID:
Image: gcr.io/google_containers/kubedns-amd64:1.7
Image ID:
Ports: 10053/UDP, 10053/TCP
Args:
--domain=cluster.local
--dns-port=10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
dnsmasq:
Container ID:
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Image ID:
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
healthz:
Container ID:
Image: gcr.io/google_containers/exechealthz-amd64:1.1
Image ID:
Port: 8080/TCP
Args:
-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
-port=8080
-quiet
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 10m
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-r5xws:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r5xws
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
The service account mount /var/run/secrets/kubernetes.io/serviceaccount from secret default-token-r5xws failed. Check the logs for this secret's creation failure.
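To inspect the token secret and service account yourself (names taken from the describe output above):
kubectl -n kube-system get secret default-token-r5xws
kubectl -n kube-system describe serviceaccount default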
I solved this problem by signing in to Docker Desktop running on my computer.
(I was running Kubernetes on my computer via minikube.)