ContainerCreating: Error from server (BadRequest): container "kubedns" - kubernetes

I have set up this 3-node cluster (http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/).
After restarting my nodes, the KubeDNS service is not starting. The log doesn't show much information.
I'm getting the message below:
$ kubectl logs --namespace=kube-system kube-dns-v19-sqx9q -c kubedns
Error from server (BadRequest): container "kubedns" in pod "kube-dns-v19-sqx9q" is waiting to start: ContainerCreating
The nodes are running:
$ kubectl get nodes
NAME STATUS AGE VERSION
172.18.18.101 Ready,SchedulingDisabled 2d v1.6.0
172.18.18.102 Ready 2d v1.6.0
172.18.18.103 Ready 2d v1.6.0
$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-node-6rhb9 2/2 Running 4 2d
calico-node-mbhk7 2/2 Running 93 2d
calico-node-w9sjq 2/2 Running 6 2d
calico-policy-controller-2425378810-rd9h7 1/1 Running 0 25m
kube-dns-v19-sqx9q 0/3 ContainerCreating 0 25m
kubernetes-dashboard-2457468166-rs0tn 0/1 ContainerCreating 0 25m
How can I find out what is wrong with the DNS service?
Thanks, SR
Some more details:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 87bd5c4bc5b9d81468170cc840ba9203988bb259aa0c025372ee02303d9e8d4b"
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d091593b55eb9e16e09c5bc47f4701015839d83d23546c4c6adc070bc37ad60d"
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 69a1fa33f26b851664b2ad10def1eb37b5e5391ca33dad2551a2f98c52e05d0d
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: c3b7c06df3bea90e4d12c0b7f1a03077edf5836407206038223967488b279d3d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 467d54496eb5665c5c7c20b1adb0cc0f01987a83901e4b54c1dc9ccb4860f16d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1cd8022c9309205e61d7e593bc7ff3248af17d731e2a4d55e74b488cbc115162
27m 27m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1ed4174aba86124055981b7888c9d048d784e98cef5f2763fd1352532a0ba85d
26m 26m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 444693b4ce06eb25f3dbd00aebef922b72b291598fec11083cb233a0f9d5e92d"
25m 25m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 736df24a9a6640300d62d542e5098e03a5a9fde4f361926e2672880b43384516
8m 8m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8424dbdf92b16602c7d5a4f61d21cd602c5da449c6ec3449dafbff80ff5e72c4
2h 1m 49 kubelet, 172.18.18.102 Warning FailedSync (events with common reason combined)
2h 2s 361 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-v19-sqx9q_kube-system\" network: the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)"
2h 1s 406 kubelet, 172.18.18.102 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
Pod describe output:
Name: kube-dns-v19-sqx9q
Namespace: kube-system
Node: 172.18.18.102/172.18.18.102
Start Time: Mon, 24 Apr 2017 17:34:22 -0400
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
version=v19
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-dns-v19","uid":"dac3d892-278c-11e7-b2b5-0800...
scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Pending
IP:
Controllers: ReplicationController/kube-dns-v19
Containers:
kubedns:
Container ID:
Image: gcr.io/google_containers/kubedns-amd64:1.7
Image ID:
Ports: 10053/UDP, 10053/TCP
Args:
--domain=cluster.local
--dns-port=10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
dnsmasq:
Container ID:
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Image ID:
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
healthz:
Container ID:
Image: gcr.io/google_containers/exechealthz-amd64:1.1
Image ID:
Port: 8080/TCP
Args:
-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
-port=8080
-quiet
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 10m
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-r5xws:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r5xws
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>

The service account mount /var/run/secrets/kubernetes.io/serviceaccount from secret default-token-r5xws failed. Check the logs for this secret's creation failure.
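As a rough starting point (names taken from the describe output above; adjust if yours differ), you could check whether the token secret and the default service account actually exist, and look for related events:
kubectl get secret default-token-r5xws --namespace=kube-system
kubectl get serviceaccount default --namespace=kube-system -o yaml
kubectl get events --namespace=kube-system | grep -i secret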

I solved this problem by signing in to the Docker Desktop instance running on my computer.
(I was running Kubernetes on my computer via minikube.)


How to resolve this error that nginx-ingress-controller start fail in my k8s cluster?

Rancher v2.4.2
kubernetes version: v1.17.4
In my k8s cluster, nginx-ingress-controller doesn't work and keeps restarting. I can't find anything useful in its logs; thanks for your help.
Cluster nodes:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready controlplane,etcd,worker 18d v1.17.4
master2 Ready controlplane,etcd,worker 17d v1.17.4
node1 Ready worker 17d v1.17.4
node2 Ready worker 17d v1.17.4
Cluster pods in the ingress-nginx namespace:
> kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-5bb77998d7-k7gdh 1/1 Running 1 17d
nginx-ingress-controller-6l4jh 0/1 Running 10 27m
nginx-ingress-controller-bh2pg 1/1 Running 0 63m
nginx-ingress-controller-drtzx 1/1 Running 0 63m
nginx-ingress-controller-qndbw 1/1 Running 0 63m
The pod logs of nginx-ingress-controller-6l4jh:
> kubectl logs nginx-ingress-controller-6l4jh -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: nginx-0.25.1-rancher1
Build:
Repository: https://github.com/rancher/ingress-nginx.git
nginx version: openresty/1.15.8.1
-------------------------------------------------------------------------------
>
Describe info:
> kubectl describe pod nginx-ingress-controller-6l4jh -n ingress-nginx
Name: nginx-ingress-controller-6l4jh
Namespace: ingress-nginx
Priority: 0
Node: node2/172.26.13.11
Start Time: Tue, 19 Apr 2022 07:12:16 +0000
Labels: app=ingress-nginx
controller-revision-hash=758cb9dbbc
pod-template-generation=8
Annotations: cattle.io/timestamp: 2022-04-19T07:08:51Z
field.cattle.io/ports:
[[{"containerPort":80,"dnsName":"nginx-ingress-controller-hostport","hostPort":80,"kind":"HostPort","name":"http","protocol":"TCP","source...
field.cattle.io/publicEndpoints:
[{"addresses":["172.26.13.130"],"nodeId":"c-wv692:m-d5802d05bbf0","port":80,"protocol":"TCP"},{"addresses":["172.26.13.130"],"nodeId":"c-w...
prometheus.io/port: 10254
prometheus.io/scrape: true
Status: Running
IP: 172.26.13.11
IPs:
IP: 172.26.13.11
Controlled By: DaemonSet/nginx-ingress-controller
Containers:
nginx-ingress-controller:
Container ID: docker://09a6248edb921b9c9cbab678c793fe1cc3d28322ea6abbb8f15c899351ce4b40
Image: 172.26.13.133:5000/rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
Image ID: docker-pullable://172.26.13.133:5000/rancher/nginx-ingress-controller#sha256:fe50ceea3d1a0bc9a7ccef8d5845c9a30b51f608e411467862dff590185a47d2
Ports: 80/TCP, 443/TCP
Host Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--default-backend-service=$(POD_NAMESPACE)/default-http-backend
--configmap=$(POD_NAMESPACE)/nginx-configuration
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--udp-services-configmap=$(POD_NAMESPACE)/udp-services
--annotations-prefix=nginx.ingress.kubernetes.io
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Tue, 19 Apr 2022 07:40:12 +0000
Finished: Tue, 19 Apr 2022 07:41:32 +0000
Ready: False
Restart Count: 11
Liveness: http-get http://:10254/healthz delay=60s timeout=20s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=60s timeout=20s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-controller-6l4jh (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-2kdbj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-serviceaccount-token-2kdbj:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-serviceaccount-token-2kdbj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned ingress-nginx/nginx-ingress-controller-6l4jh to node2
Normal Pulled 27m (x3 over 30m) kubelet, node2 Container image "172.26.13.133:5000/rancher/nginx-ingress-controller:nginx-0.25.1-rancher1" already present on machine
Normal Created 27m (x3 over 30m) kubelet, node2 Created container nginx-ingress-controller
Normal Started 27m (x3 over 30m) kubelet, node2 Started container nginx-ingress-controller
Normal Killing 27m (x2 over 28m) kubelet, node2 Container nginx-ingress-controller failed liveness probe, will be restarted
Warning Unhealthy 25m (x10 over 29m) kubelet, node2 Liveness probe failed: Get http://172.26.13.11:10254/healthz: dial tcp 172.26.13.11:10254: connect: connection refused
Warning Unhealthy 10m (x21 over 29m) kubelet, node2 Readiness probe failed: Get http://172.26.13.11:10254/healthz: dial tcp 172.26.13.11:10254: connect: connection refused
Warning BackOff 8s (x69 over 20m) kubelet, node2 Back-off restarting failed container
>
It sounds like the ingress controller pod is failing its liveness/readiness checks, but only on one particular node. You could try:
checking that node for a firewall blocking the probe port (a quick check is sketched below)
updating to a newer version than nginx-0.25.1
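A quick way to reproduce what the kubelet's probes see is to run the check from node2 itself; the pod IP 172.26.13.11 and port 10254 are taken from the describe output above, and the iptables grep is just one way to look for a rule dropping that port:
curl -v http://172.26.13.11:10254/healthz
sudo iptables -L -n | grep 10254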

kube-dns remains in "ContainerCreating"

I manually installed k8s 1.6.6 and deployed calico 2.3 (which uses etcd 3.0.17 with kube-apiserver) and kube-dns on bare metal (Ubuntu 16.04).
It doesn't have any problems without RBAC.
But after enabling RBAC by adding "--authorization-mode=RBAC" to kube-apiserver,
kube-dns no longer comes up; its status remains "ContainerCreating".
I checked "kubectl describe pod kube-dns..":
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10m 10m 1 default-scheduler Normal Scheduled Successfully assigned kube-dns-1759312207-t35t3 to work01
9m 9m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8c2585b1b3170f220247a6abffb1a431af58060f2bcc715fe29e7c2144d19074
8m 8m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c6962db6c5a17533fbee563162c499631a647604f9bffe6bc71026b09a2a0d4f
7m 7m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 9adc41d07a80db44099460c6cc56612c6fbcd53176abcc3e7bbf843fca8b7532"
5m 5m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 4c2d450186cbec73ea28d2eb4c51497f6d8c175b92d3e61b13deeba1087e9a40
4m 4m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 12df544137939d2b8af8d70964e46b49f5ddec1228da711c084ff493443df465"
3m 3m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c51c9d50dcd62160ffe68d891967d118a0f594885e99df3286d0c4f8f4986970
2m 2m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 94533f19952c7d5f32e919c03d9ec5147ef63d4c1f35dd4fcfea34306b9fbb71
1m 1m 1 kubelet, work01 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 166a89916c1e6d63e80b237e5061fd657f091f3c6d430b7cee34586ba8777b37
16s 12s 2 kubelet, work01 Warning FailedSync (events with common reason combined)
10m 2s 207 kubelet, work01 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-1759312207-t35t3_kube-system\" network: the server does not allow access to the requested resource (get pods kube-dns-1759312207-t35t3)"
10m 1s 210 kubelet, work01 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
My kubelet unit:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /var/log/containers
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStart=/usr/local/bin/kubelet \
--api-servers=http://127.0.0.1:8080 \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--cluster-dns=10.3.0.10 \
--cluster-domain=cluster.local \
--register-node=true \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--container-runtime=docker
My kube-apiserver manifest:
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: kube-apiserver:v1.6.6
command:
- kube-apiserver
- --bind-address=0.0.0.0
- --etcd-servers=http://127.0.0.1:2379
- --allow-privileged=true
- --service-cluster-ip-range=10.3.0.0/16
- --secure-port=6443
- --advertise-address=172.30.1.10
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
- --tls-cert-file=/srv/kubernetes/apiserver.pem
- --tls-private-key-file=/srv/kubernetes/apiserver-key.pem
- --client-ca-file=/srv/kubernetes/ca.pem
- --service-account-key-file=/srv/kubernetes/apiserver-key.pem
- --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
- --anonymous-auth=false
- --authorization-mode=RBAC
- --token-auth-file=/srv/kubernetes/known_tokens.csv
- --basic-auth-file=/srv/kubernetes/basic_auth.csv
- --storage-backend=etcd3
livenessProbe:
httpGet:
host: 127.0.0.1
port: 8080
path: /healthz
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- name: https
hostPort: 6443
containerPort: 6443
- name: local
hostPort: 8080
containerPort: 8080
volumeMounts:
- name: srvkube
mountPath: "/srv/kubernetes"
readOnly: true
- name: etcssl
mountPath: "/etc/ssl"
readOnly: true
volumes:
- name: srvkube
hostPath:
path: "/srv/kubernetes"
- name: etcssl
hostPath:
path: "/etc/ssl"
I found the cause.
This issue is not related to kube-dns itself.
I had simply missed applying the ClusterRole/ClusterRoleBinding before deploying Calico.
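For reference, the missing objects have roughly the following shape. This is only an illustrative, trimmed-down sketch; the real ClusterRole shipped with Calico has more rules, so apply the RBAC manifest published for your Calico release instead:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: calico-node
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "endpoints", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system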

Pods are not starting. NetworkPlugin cni failed to set up pod

K8 Version:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I tried to launch the Spinnaker pods (YAML files here). I chose Flannel (kubectl apply -f kube-flannel.yml) while installing K8s. The pods are not starting; they are stuck in "ContainerCreating" status. When I kubectl describe a pod, it shows NetworkPlugin cni failed to set up pod:
veeru#ubuntu:/opt/spinnaker/experimental/kubernetes/simple$ kubectl describe pod data-redis-master-v000-38j80 --namespace=spinnaker
Name: data-redis-master-v000-38j80
Namespace: spinnaker
Node: ubuntu/192.168.6.136
Start Time: Thu, 01 Jun 2017 02:54:14 -0700
Labels: load-balancer-data-redis-server=true
replication-controller=data-redis-master-v000
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"spinnaker","name":"data-redis-master-v000","uid":"43d4a44c-46b0-11e7-b0e1-000c29b...
Status: Pending
IP:
Controllers: ReplicaSet/data-redis-master-v000
Containers:
redis-master:
Container ID:
Image: gcr.io/kubernetes-spinnaker/redis-cluster:v2
Image ID:
Port: 6379/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
Requests:
cpu: 100m
Environment:
MASTER: true
Mounts:
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-71p4q (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-71p4q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-71p4q
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
45m 45m 1 default-scheduler Normal Scheduled Successfully assigned data-redis-master-v000-38j80 to ubuntu
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 8265d80732e7b73ebf8f1493d40403021064b61436c4c559b41330e7592fd47f"
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: b972862d763e621e026728073deb9a304748c4ec4522982db0a168663ab59d36
42m 42m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 72b39083a3a81c0da1d4b7fa65b5d6450b62a3562a05452c27b185bc33197327"
41m 41m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d315511bfa9f6f09d7ef4cd277bde44e4885291ea566e3089460356c1ed34413"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: a03d776d2d7c5c4ae9c1ec31681b0b6e40759326a452916cff0e60c4d4e2c954"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: acf30a4aacda0c53bdbb8bc2d416704720bd1b623c43874052b4029f15950052"
39m 39m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ea49f5f9428d585be7138f4ebce54f713eef549b16104a3d7aa728175b6ebc2a"
38m 38m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ec2483435b4b22576c9bd7bffac5d67d53893c189c0cf26aca1ae6af79d09914"
38m 1m 39 kubelet, ubuntu Warning FailedSync (events with common reason combined)
45m 1s 448 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
45m 0s 412 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"data-redis-master-v000-38j80_spinnaker\" network: open /run/flannel/subnet.env: no such file or directory"
How can I resolve the above issue?
UPDATE-1
I have reinitialized K8s with kubeadm init --pod-network-cidr=10.244.0.0/16 and deployed a sample nginx pod. I'm still getting the same error:
-----------------OUTPUT REMOVED-------------------------------
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 default-scheduler Normal Scheduled Successfully assigned nginx-622qj to ubuntu
1m 1m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 38250afd765f0108aeff6e31bbe5a642a60db99b97cbbf15711f810cbe8f3829"
24s 24s 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 3bebcef02cb5f6645a65dcf06b2730144080f9d3c4fb18267feca5c5ce21031c"
2m 9s 33 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
3m 7s 32 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"nginx-622qj_default\" network: open /run/flannel/subnet.env: no such file or directory"
Your error message shows that the flannel subnet.env file is missing; you need to fix the flannel configuration first. What version of Kubernetes are you using?
network: open /run/flannel/subnet.env: no such file or directory"
If you are using Kubernetes 1.6 or above, you can use the YAML files below to configure the flannel container process:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
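Once flannel is deployed, you can verify that its pods are running on every node and that the subnet file from the error message has appeared:
kubectl get pods --namespace=kube-system -o wide | grep flannel
cat /run/flannel/subnet.env    # run this on each node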
Running the following command resolved my issues:
kubeadm init --pod-network-cidr=10.244.0.0/16
For flannel as the CNI, the API server needs the argument --pod-network-cidr=... set to the overlay network range.
Here are the steps that fixed my issue (sketched in shell below).
Create a file called subnet.env under /run/flannel/ on your worker nodes.
Add the content below to it:
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Save the file and create the pod again. It should show the Running status now.
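A shell sketch of those steps, run on the affected worker node (the values assume the default 10.244.0.0/16 pod CIDR used above; take the real values from a healthy node if they differ):
sudo mkdir -p /run/flannel
sudo tee /run/flannel/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF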
I was getting the same error on one node after disk pressure, which somehow deleted the file /run/flannel/subnet.env.
Creating the file again with the content copied from another node resolved my issue.

issue on arm64: no endpoints, code: 503

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/arm64"}
Environment:
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Kernel (e.g. uname -a):
Linux node4 4.11.0-rc6-next-20170411-00286-gcc55807 #0 SMP PREEMPT Mon Jun 5 18:56:20 CST 2017 aarch64 aarch64 aarch64 GNU/Linux
What happened:
I want to use kube-deploy/master.sh to set up a master on ARM64, but I encountered this error when visiting $myip:8080/ui:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service "kubernetes-dashboard"",
"reason": "ServiceUnavailable",
"code": 503
}
My branch is 2017-2-7 (c8d6fbfc…).
By the way, it works on the x86-amd64 platform using the same installation steps.
Anything else we need to know:
5.1 kubectl get pod --namespace=kube-system
k8s-master-10.193.20.23 4/4 Running 17 1h
k8s-proxy-v1-sk8vd 1/1 Running 0 1h
kube-addon-manager-10.193.20.23 2/2 Running 2 1h
kube-dns-3365905565-xvj7n 2/4 CrashLoopBackOff 65 1h
kubernetes-dashboard-1416335539-lhlhz 0/1 CrashLoopBackOff 22 1h
5.2 kubectl describe pods kubernetes-dashboard-1416335539-lhlhz --namespace=kube-system
Name: kubernetes-dashboard-1416335539-lhlhz
Namespace: kube-system
Node: 10.193.20.23/10.193.20.23
Start Time: Mon, 12 Jun 2017 10:04:07 +0800
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=1416335539
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-1416335539","uid":"6ab170d2-4f13-11e7-a...
scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Running
IP: 10.1.70.2
Controllers: ReplicaSet/kubernetes-dashboard-1416335539
Containers:
kubernetes-dashboard:
Container ID: docker://fbdbe4c047803b0e98ca7412ca617031f1f31d881e3a5838298a1fda24a1ae18
Image: gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0
Image ID: docker-pullable://gcr.io/google_containers/kubernetes-dashboard-arm64#sha256:559d58ef0d8e9dbe78f80060401b97d6262462318c0b8e071937a73896ea1d3d
Port: 9090/TCP
State: Running
Started: Mon, 12 Jun 2017 11:30:03 +0800
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 12 Jun 2017 11:24:28 +0800
Finished: Mon, 12 Jun 2017 11:24:59 +0800
Ready: True
Restart Count: 23
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-0mnn8 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-0mnn8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-0mnn8
Optional: false
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
30m 30m 1 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id b0562b3640ae: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
18m 18m 1 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id 477066c3a00f: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
12m 12m 1 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id 3e021d6df31f: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
11m 11m 1 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id 43fe3c37817d: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
5m 5m 1 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id 23cea72e1f45: pod "kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
1h 5m 7 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Warning Unhealthy Liveness probe failed: Get http://10.1.70.2:9090/: dial tcp 10.1.70.2:9090: getsockopt: connection refused
1h 38s 335 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Warning BackOff Back-off restarting failed docker container
1h 38s 307 kubelet, 10.193.20.23 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1416335539-lhlhz_kube-system(6ab54dba-4f13-11e7-a56b-6805ca369d7f)"
1h 27s 24 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Pulled Container image "gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0" already present on machine
59m 23s 15 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Created (events with common reason combined)
59m 22s 15 kubelet, 10.193.20.23 spec.containers{kubernetes-dashboard} Normal Started (events with common reason combined)
5.3 kubectl get svc,ep,rc,rs,deploy,pod -o wide --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default svc/kubernetes 10.0.0.1 443/TCP 16m
kube-system svc/kube-dns 10.0.0.10 53/UDP,53/TCP 16m k8s-app=kube-dns
kube-system svc/kubernetes-dashboard 10.0.0.95 80/TCP 16m k8s-app=kubernetes-dashboard
NAMESPACE NAME ENDPOINTS AGE
default ep/kubernetes 10.193.20.23:6443 16m
kube-system ep/kube-controller-manager <none> 11m
kube-system ep/kube-dns 16m
kube-system ep/kube-scheduler <none> 11m
kube-system ep/kubernetes-dashboard 16m
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
kube-system rs/kube-dns-3365905565 1 1 0 16m kubedns,dnsmasq,dnsmasq-metrics,healthz gcr.io/google_containers/kubedns-arm64:1.9,gcr.io/google_containers/kube-dnsmasq-arm64:1.4,gcr.io/google_containers/dnsmasq-metrics-arm64:1.0,gcr.io/google_containers/exechealthz-arm64:1.2 k8s-app=kube-dns,pod-template-hash=3365905565
kube-system rs/kubernetes-dashboard-1416335539 1 1 0 16m kubernetes-dashboard gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0 k8s-app=kubernetes-dashboard,pod-template-hash=1416335539
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINER(S) IMAGE(S) SELECTOR
kube-system deploy/kube-dns 1 1 1 0 16m kubedns,dnsmasq,dnsmasq-metrics,healthz gcr.io/google_containers/kubedns-arm64:1.9,gcr.io/google_containers/kube-dnsmasq-arm64:1.4,gcr.io/google_containers/dnsmasq-metrics-arm64:1.0,gcr.io/google_containers/exechealthz-arm64:1.2 k8s-app=kube-dns
kube-system deploy/kubernetes-dashboard 1 1 1 0 16m kubernetes-dashboard gcr.io/google_containers/kubernetes-dashboard-arm64:v1.5.0 k8s-app=kubernetes-dashboard
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system po/k8s-master-10.193.20.23 4/4 Running 50 15m 10.193.20.23 10.193.20.23
kube-system po/k8s-proxy-v1-5b831 1/1 Running 0 16m 10.193.20.23 10.193.20.23
kube-system po/kube-addon-manager-10.193.20.23 2/2 Running 6 15m 10.193.20.23 10.193.20.23
kube-system po/kube-dns-3365905565-jxg4f 1/4 CrashLoopBackOff 20 16m 10.1.5.3 10.193.20.23
kube-system po/kubernetes-dashboard-1416335539-frt3v 0/1 CrashLoopBackOff 7 16m 10.1.5.2 10.193.20.23
5.4 kubectl describe pods kube-dns-3365905565-lb0mq --namespace=kube-system
Name: kube-dns-3365905565-lb0mq
Namespace: kube-system
Node: 10.193.20.23/10.193.20.23
Start Time: Wed, 14 Jun 2017 10:43:46 +0800
Labels: k8s-app=kube-dns
pod-template-hash=3365905565
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-3365905565","uid":"4870aec2-50ab-11e7-a420-6805ca36...
scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Running
IP: 10.1.20.3
Controllers: ReplicaSet/kube-dns-3365905565
Containers:
kubedns:
Container ID: docker://729562769b48be60a02b62692acd3d1e1c67ac2505f4cb41240067777f45fd77
Image: gcr.io/google_containers/kubedns-arm64:1.9
Image ID: docker-pullable://gcr.io/google_containers/kubedns-arm64#sha256:3c78a2c5b9b86c5aeacf9f5967f206dcf1e64362f3e7f274c1c078c954ecae38
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-map=kube-dns
--v=0
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 14 Jun 2017 10:56:29 +0800
Finished: Wed, 14 Jun 2017 10:58:06 +0800
Ready: False
Restart Count: 6
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Environment:
PROMETHEUS_PORT: 10055
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
dnsmasq:
Container ID: docker://b6d7e98a4af2715294764929f901947ab3b985be45d9f213245bd338ab8c3101
Image: gcr.io/google_containers/kube-dnsmasq-arm64:1.4
Image ID: docker-pullable://gcr.io/google_containers/kube-dnsmasq-arm64#sha256:dff5f9e2a521816aa314d469fd8ef961270fe43b4a74bab490385942103f3728
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
--log-facility=-
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 14 Jun 2017 10:55:50 +0800
Finished: Wed, 14 Jun 2017 10:57:26 +0800
Ready: False
Restart Count: 6
Requests:
cpu: 150m
memory: 10Mi
Liveness: http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
dnsmasq-metrics:
Container ID: docker://51693aea0e732e488b631dcedc082f5a9e23b5b74857217cf005d1e947375367
Image: gcr.io/google_containers/dnsmasq-metrics-arm64:1.0
Image ID: docker-pullable://gcr.io/google_containers/dnsmasq-metrics-arm64#sha256:fc0e8b676a26ed0056b8c68611b74b9b5f3f00c608e5b11ef1608484ce55dd9a
Port: 10054/TCP
Args:
--v=2
--logtostderr
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Exit Code: 128
Started: Wed, 14 Jun 2017 10:57:28 +0800
Finished: Wed, 14 Jun 2017 10:57:28 +0800
Ready: False
Restart Count: 7
Requests:
memory: 10Mi
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
healthz:
Container ID: docker://fab7ef143a95ad4d2f6363d5fcdc162eba1522b92726665916462be765289327
Image: gcr.io/google_containers/exechealthz-arm64:1.2
Image ID: docker-pullable://gcr.io/google_containers/exechealthz-arm64#sha256:e8300fde6c36b454cc00b5fffc96d6985622db4d8eb42a9f98f24873e9535b5c
Port: 8080/TCP
Args:
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
--url=/healthz-dnsmasq
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
--url=/healthz-kubedns
--port=8080
--quiet
State: Running
Started: Wed, 14 Jun 2017 10:44:31 +0800
Ready: True
Restart Count: 0
Limits:
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1t5v9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-1t5v9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1t5v9
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 default-scheduler Normal Scheduled Successfully assigned kube-dns-3365905565-lb0mq to 10.193.20.23
14m 14m 1 kubelet, 10.193.20.23 spec.containers{kubedns} Normal Created Created container with docker id 2fef2db445e6; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 spec.containers{kubedns} Normal Started Started container with docker id 2fef2db445e6
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq} Normal Created Created container with docker id 41ec998eeb76; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq} Normal Started Started container with docker id 41ec998eeb76
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Normal Created Created container with docker id 676ef0e877c8; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz-arm64:1.2" already present on machine
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Warning Failed Failed to start container with docker id 676ef0e877c8 with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
14m 14m 1 kubelet, 10.193.20.23 spec.containers{healthz} Normal Created Created container with docker id fab7ef143a95; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 spec.containers{healthz} Normal Started Started container with docker id fab7ef143a95
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Warning Failed Failed to start container with docker id 45f6bd7f1f3a with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Normal Created Created container with docker id 45f6bd7f1f3a; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq-metrics" with CrashLoopBackOff: "Back-off 10s restarting failed container=dnsmasq-metrics pod=kube-dns-3365905565-lb0mq_kube-system(48845c1a-50ab-11e7-a420-6805ca369d7f)"
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Normal Created Created container with docker id 2d1e5adb97bb; Security:[seccomp=unconfined]
14m 14m 1 kubelet, 10.193.20.23 spec.containers{dnsmasq-metrics} Warning Failed Failed to start container with docker id 2d1e5adb97bb with error: Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
14m 14m 2 kubelet, 10.193.20.23 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq-metrics" with CrashLoopBackOff: "Back-off 20s restarting failed container=dnsmasq-metrics pod=kube-dns-3365905565-lb0mq_kube-system(48845c1a-50ab-11e7-a420-6805ca369d7f)"
So it looks like you have hit one (or several) bugs in Kubernetes. I suggest you retry with a more recent version (possibly a different Docker version too). It would also be a good idea to report these bugs (https://github.com/kubernetes/dashboard/issues).
All in all, bear in mind that Kubernetes on ARM is an advanced topic; you should expect problems and be ready to debug and resolve them.
There might be a problem with that Docker image (gcr.io/google_containers/dnsmasq-metrics-arm64). Non-amd64 images are not well tested.
Could you try running:
kubectl set image --namespace=kube-system deployment/kube-dns dnsmasq-metrics=lenart/dnsmasq-metrics-arm64:1.0
You can't reach the dashboard because the dashboard Pod is unhealthy and failing its readiness probe. Because it's not ready, it isn't considered for the dashboard Service, so the Service has no endpoints, which leads to the error message you reported.
The dashboard is most likely unhealthy because kube-dns is not ready (1/4 containers in the Pod are ready; it should be 4/4).
kube-dns is most likely unhealthy because you have no pod networking (overlay network) deployed.
Go to the add-ons, pick a network add-on, and deploy it. Weave has a 1.5-compatible version and requires no setup.
After you have done that, give it a few minutes. If you are impatient, just delete the kubernetes-dashboard and kube-dns pods (not the Deployment/controller!). If this does not resolve your problem, please update your question with the new information.
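If you do delete just the pods, something like this works, using the k8s-app labels shown in the describe output above (the ReplicaSets will recreate the pods):
kubectl delete pod --namespace=kube-system -l k8s-app=kube-dns
kubectl delete pod --namespace=kube-system -l k8s-app=kubernetes-dashboard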

Unable to pull from private docker hub registry on kubernetes

I'm running a k8s cluster on Google Container Engine. I'm having trouble getting it to pull images from a private Docker repo.
I get the following when trying to boot:
Name: ds-expected-date
Namespace: default
Node: gke-ds-cluster-1-default-pool-8980b100-l64j/10.132.0.3
Start Time: Wed, 24 May 2017 13:24:11 +0100
Labels: <none>
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container ds-expected-date-flask
Status: Pending
IP: 10.40.0.23
Controllers: <none>
Containers:
ds-expected-date-flask:
Container ID:
Image: fluidy/ds-expected-date:latest
Image ID:
Port:
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h340m (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-h340m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h340m
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
21s 21s 1 default-scheduler Normal Scheduled Successfully assigned ds-expected-date to gke-ds-cluster-1-default-pool-8980b100-l64j
18s 18s 1 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Normal BackOff Back-off pulling image "fluidy/ds-expected-date:latest"
18s 18s 1 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ds-expected-date-flask" with ImagePullBackOff: "Back-off pulling image \"fluidy/ds-expected-date:latest\""
20s 6s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Normal Pulling pulling image "fluidy/ds-expected-date:latest"
19s 5s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Warning Failed Failed to pull image "fluidy/ds-expected-date:latest": Error response from daemon: unauthorized: authentication required
19s 5s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ds-expected-date-flask" with ErrImagePull: "Error response from daemon: unauthorized: authentication required"
I have followed all the instructions on the docs page. I'm confident my registry secret is being read - if I put duff credentials in it, the error changes to 'invalid user name or password'.
You have not configured your cluster to pull private images from Docker Hub with your credentials.
Read and apply this guide: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Google Container Engine can automatically pull from Google Container Registry (http://gcr.io); consider using that instead of pulling images from a private registry.
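In outline, that guide amounts to creating a docker-registry secret and referencing it from the pod spec. A minimal sketch, where the secret name regcred is arbitrary and the credentials are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
Then reference it from the pod spec:
spec:
  containers:
  - name: ds-expected-date-flask
    image: fluidy/ds-expected-date:latest
  imagePullSecrets:
  - name: regcred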