Why does 'Unauthorized' appear in the startup of a Kubernetes cluster?

I start Kubernetes on my local machine and find that most pods are not ready:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-65dbdb44db-zrfpm 0/1 Running 0 3m9s
kube-system dashboard-metrics-scraper-545bbb8767-nfnbg 1/1 Running 1 16m
kube-system kube-flannel-ds-amd64-nm7kr 0/1 CrashLoopBackOff 5 16m
kube-system kubernetes-dashboard-65665f84db-rqxpv 0/1 CrashLoopBackOff 4 118s
kube-system metrics-server-869ffc99cd-6fhfl 0/1 CrashLoopBackOff 5 16m
Then I check the status of flannel:
kubectl logs kube-flannel-ds-amd64-nm7kr -n kube-system
The log tells me that something is unauthorized:
I0809 10:23:51.307347 1 main.go:518] Determining IP address of default interface
I0809 10:23:51.308840 1 main.go:531] Using interface with name wlo1 and address 192.168.1.102
I0809 10:23:51.308894 1 main.go:548] Defaulting external address to interface address (192.168.1.102)
W0809 10:23:51.308917 1 client_config.go:517] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0809 10:23:51.620449 1 main.go:243] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-nm7kr': Unauthorized
Then I check the coredns:
E0809 10:28:06.807582 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Unauthorized
Then I check the kubernetes-dashboard (the log below is from the dashboard pod):
2020/08/09 10:28:06 Starting overwatch
2020/08/09 10:28:06 Using namespace: kube-system
2020/08/09 10:28:06 Using in-cluster config to connect to apiserver
2020/08/09 10:28:06 Using secret token for csrf signing
2020/08/09 10:28:06 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Unauthorized
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00036f0e0)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d
I am using Kubernetes 1.18.2.
So what is wrong here?
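One diagnostic worth running first (a sketch, assuming the cluster was created with kubeadm): Unauthorized from in-cluster clients usually means the service-account tokens mounted into the pods are no longer accepted, which commonly happens after the control-plane certificates have been rotated or renewed.
# Check whether any control-plane certificate recently rotated or expired
# (on kubeadm 1.18 this lives under the "alpha" subcommand)
sudo kubeadm alpha certs check-expiration
# Recreating the failing pods forces them to remount fresh service-account
# tokens; the label selectors below match the stock flannel and coredns manifests
kubectl -n kube-system delete pod -l app=flannel
kubectl -n kube-system delete pod -l k8s-app=kube-dns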

Related

Kubernetes cluster on bare metal with kubeadm

I'm trying to create a single control-plane cluster with kubeadm on 3 bare-metal nodes (1 master and 2 workers) running Debian 10 with Docker as the container runtime. Each node has an external IP and an internal IP.
I want the cluster to run on the internal network while remaining accessible from the Internet.
I used this command for that (please correct me if something is wrong):
kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-cert-extra-sans={public_DNS_name},10.10.0.1 --pod-network-cidr=192.168.0.0/16
I got:
kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
dev-k8s-master-0.public.dns Ready master 16h v1.18.2 10.10.0.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-8-amd64 docker://19.3.8
The init phase completed successfully and the cluster is accessible from the Internet. All pods are up and running except coredns, which is expected to start only after the network add-on is applied.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
After applying the network add-on, the coredns pods are still not ready:
kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-75d56dfc47-g8g9g 0/1 CrashLoopBackOff 192 16h
kube-system calico-node-22gtx 1/1 Running 0 16h
kube-system coredns-66bff467f8-87vd8 0/1 Running 0 16h
kube-system coredns-66bff467f8-mv8d9 0/1 Running 0 16h
kube-system etcd-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-apiserver-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-controller-manager-dev-k8s-master-0 1/1 Running 0 16h
kube-system kube-proxy-lp6b8 1/1 Running 0 16h
kube-system kube-scheduler-dev-k8s-master-0 1/1 Running 0 16h
Some logs from failed pods:
kubectl -n kube-system logs calico-kube-controllers-75d56dfc47-g8g9g
2020-04-22 08:24:55.853 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2020-04-22 08:24:55.855 [INFO][1] k8s.go 228: Using Calico IPAM
W0422 08:24:55.855525 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-04-22 08:24:55.856 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-04-22 08:25:05.857 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-04-22 08:25:05.857 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
coredns:
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0422 08:29:12.275344 1 trace.go:116] Trace[1050055850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.274382393 +0000 UTC m=+59491.429700922) (total time: 30.000897581s):
Trace[1050055850]: [30.000897581s] [30.000897581s] END
E0422 08:29:12.275388 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.276163 1 trace.go:116] Trace[188478428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.275499997 +0000 UTC m=+59491.430818380) (total time: 30.000606394s):
Trace[188478428]: [30.000606394s] [30.000606394s] END
E0422 08:29:12.276198 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.277424 1 trace.go:116] Trace[16697023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.276675998 +0000 UTC m=+59491.431994406) (total time: 30.000689778s):
Trace[16697023]: [30.000689778s] [30.000689778s] END
E0422 08:29:12.277452 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
Any thoughts on what's wrong?
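The dial tcp 10.96.0.1:443: i/o timeout lines mean the pods cannot reach the kube-apiserver through the cluster service IP at all, which points at pod-to-service networking (kube-proxy, iptables, or the CNI) rather than at coredns itself. A quick way to confirm (a sketch; the curl image is my choice, not from the question):
# Run a throwaway pod and hit the apiserver's /version endpoint through the
# service IP; -k skips certificate verification and -m 5 bounds the wait
kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version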
This answer is to call attention to @florin's suggestion:
I've seen a similar behavior when I had multiple public interfaces on the node and calico selected the wrong one.
What I did was set IP_AUTODETECTION_METHOD in the Calico config.
From the Calico configuration reference on IP_AUTODETECTION_METHOD:
The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See IP autodetection methods for details of the valid methods.
Learn more here: https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods
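For illustration, a minimal sketch of setting this on a running cluster (the interface name and the can-reach target are assumptions; adjust them to your hosts):
# Pin calico-node to a specific interface...
kubectl -n kube-system set env daemonset/calico-node \
  IP_AUTODETECTION_METHOD=interface=eth0
# ...or let Calico pick whichever interface can reach a known internal address:
kubectl -n kube-system set env daemonset/calico-node \
  IP_AUTODETECTION_METHOD=can-reach=10.10.0.1
The calico-node pods are restarted automatically after the DaemonSet env change.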
I was also facing the same problem, but the following worked for me; try this on your master node.
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -F
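A follow-up note (my addition, not part of the original answer): after flushing iptables, it is worth restarting the runtime and the kubelet so kube-proxy and the CNI re-create the chains they manage (systemd unit names assumed):
sudo systemctl restart docker kubelet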

kubernetes HA setup with kubeadm - scheduler and controller fail to start

I am attempting to build an HA cluster using kubeadm; here is my configuration:
kind: MasterConfiguration
kubernetesVersion: v1.11.4
apiServerCertSANs:
- "aaa.xxx.yyy.zzz"
api:
  controlPlaneEndpoint: "my.domain.de:6443"
apiServerExtraArgs:
  apiserver-count: 3
etcd:
  local:
    image: quay.io/coreos/etcd:v3.3.10
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2379"
      advertise-client-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2379"
      listen-peer-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2380"
      initial-advertise-peer-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2380"
      initial-cluster-state: "new"
      initial-cluster-token: "kubernetes-cluster"
      initial-cluster: ${CLUSTER}
      name: $(hostname -s)
  localEtcd:
    serverCertSANs:
    - "$(hostname -s)"
    - "$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
    peerCertSANs:
    - "$(hostname -s)"
    - "$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
networking:
  podSubnet: "${POD_SUBNET}/${POD_SUBNETMASK}"
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: foobar.fedcba9876543210
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
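One thing worth noting about this config (an observation, not from the original question): the $(...) and ${...} expressions are shell substitutions, so kubeadm receives them literally unless the file is generated through a shell, for example with a heredoc. A minimal sketch:
# Generate the file through the shell so $(hostname -s) and friends are
# expanded before kubeadm ever reads it (trimmed to the lines that matter)
cat > kubeadm-config.yaml <<EOF
kind: MasterConfiguration
kubernetesVersion: v1.11.4
etcd:
  local:
    extraArgs:
      name: $(hostname -s)
EOF
kubeadm init --config kubeadm-config.yaml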
I run this on all three nodes, and the nodes come up. After applying Calico, everything seems fine; I even added one worker successfully:
ubuntu@master-2-test2:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1-test2 Ready master 1h v1.11.4
master-2-test2 Ready master 1h v1.11.4
master-3-test2 Ready master 1h v1.11.4
node-1-test2 Ready <none> 1h v1.11.4
Looking at the control plane, everything looks fine.
curl https://192.168.0.125:6443/api/v1/nodes works from both the masters and the worker node. All pods are running:
ubuntu@master-2-test2:~$ sudo kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-9lnk8 2/2 Running 0 1h
calico-node-f7dkk 2/2 Running 1 1h
calico-node-k7hw5 2/2 Running 17 1h
calico-node-rtrvb 2/2 Running 3 1h
coredns-78fcdf6894-6xgqc 1/1 Running 0 1h
coredns-78fcdf6894-kcm4f 1/1 Running 0 1h
etcd-master-1-test2 1/1 Running 0 1h
etcd-master-2-test2 1/1 Running 1 1h
etcd-master-3-test2 1/1 Running 0 1h
kube-apiserver-master-1-test2 1/1 Running 0 40m
kube-apiserver-master-2-test2 1/1 Running 0 58m
kube-apiserver-master-3-test2 1/1 Running 0 36m
kube-controller-manager-master-1-test2 1/1 Running 0 17m
kube-controller-manager-master-2-test2 1/1 Running 1 17m
kube-controller-manager-master-3-test2 1/1 Running 0 17m
kube-proxy-5clt4 1/1 Running 0 1h
kube-proxy-d2tpz 1/1 Running 0 1h
kube-proxy-q6kjw 1/1 Running 0 1h
kube-proxy-vn6l7 1/1 Running 0 1h
kube-scheduler-master-1-test2 1/1 Running 1 24m
kube-scheduler-master-2-test2 1/1 Running 0 24m
kube-scheduler-master-3-test2 1/1 Running 0 24m
But when I try to start a pod, nothing happens:
~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 0 0 0 32m
I then looked into the scheduler and controller manager, and to my dismay there are a lot of errors; the controller manager floods with:
E1108 00:40:36.638832 1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.639161 1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
and sometimes with:
garbagecollector.go:649] failed to discover preferred resources: Unauthorized
E1108 00:40:36.639356 1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.640568 1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.642129 1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
And the scheduler has similar errors:
E1107 23:25:43.026465 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: Get https://mydomain.de:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: EOF
E1107 23:25:43.026614 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: Get https://mydomain.de:6443/api/v1/nodes?limit=500&resourceVersion=0: EOF
So far, I have no clue how to correct these errors. Any help would be appreciated.
More information:
The kubeconfig for kube-proxy is:
---
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://my.domain.de:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
Looks good to me. However, can you describe the worker node and check the resources available for pods? Also describe the pod and see what error it shows.
There's some authentication issue (certificate) while talking to the active kube-apiserver on this endpoint: https://mydomain.de:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0.
Some pointers:
Is the load balancer for your kube-apiserver pointing to the right one? Are you using an L4 (TCP) load balancer and not an L7 (HTTP) load balancer?
Did you copy the same certs everywhere and make sure that they are the same?
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
  scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
Did you check that the kube-apiserver and the kube-controller-manager configurations are equivalent under /etc/kubernetes/manifests?
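To act on these pointers, a minimal sketch (hostnames are the ones from the question; ssh access to the masters and bash are assumed):
# 1. Confirm the load balancer passes raw TCP through to a healthy apiserver;
#    /healthz is typically reachable without credentials on kubeadm clusters
curl -k https://my.domain.de:6443/healthz
# 2. Compare the static-pod manifests across masters; any diff output means drift
for f in kube-apiserver kube-controller-manager; do
  diff <(ssh master-1-test2 sudo cat /etc/kubernetes/manifests/$f.yaml) \
       <(ssh master-2-test2 sudo cat /etc/kubernetes/manifests/$f.yaml)
done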

flannel pods in CrashLoopBackOff error in kubernetes

Using flannel as the CNI in Kubernetes, I am trying to implement a network for pod-to-pod communication spread across different Vagrant VMs. I am using https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml to create the flannel pods, but the kube-flannel pods go into CrashLoopBackOff and do not start.
[root@flnode-04 ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
diamanti-system collectd-v0.5-flnode-04 1/1 Running 0 3h 192.168.30.14 flnode-04
diamanti-system collectd-v0.5-flnode-05 1/1 Running 0 3h 192.168.30.15 flnode-05
diamanti-system collectd-v0.5-flnode-06 1/1 Running 0 3h 192.168.30.16 flnode-06
diamanti-system provisioner-d4kvf 1/1 Running 0 3h 192.168.30.16 flnode-06
kube-system kube-flannel-ds-2kqpv 0/1 CrashLoopBackOff 1 18m 192.168.30.14 flnode-04
kube-system kube-flannel-ds-xgqdm 0/1 CrashLoopBackOff 1 18m 192.168.30.16 flnode-06
kube-system kube-flannel-ds-z59jz 0/1 CrashLoopBackOff 1 18m 192.168.30.15 flnode-05
here are the logs of one pod
[root@flnode-04 ~]# kubectl logs kube-flannel-ds-2kqpv --namespace=kube-system
I0327 10:28:44.103425 1 main.go:483] Using interface with name mgmt0 and address 192.168.30.14
I0327 10:28:44.105609 1 main.go:500] Defaulting external address to interface address (192.168.30.14)
I0327 10:28:44.138132 1 kube.go:130] Waiting 10m0s for node controller to sync
I0327 10:28:44.138213 1 kube.go:283] Starting kube subnet manager
I0327 10:28:45.138509 1 kube.go:137] Node controller sync successful
I0327 10:28:45.138588 1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - flnode-04
I0327 10:28:45.138596 1 main.go:238] Installing signal handlers
I0327 10:28:45.138690 1 main.go:348] Found network config - Backend type: vxlan
I0327 10:28:45.138767 1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
panic: assignment to entry in nil map
goroutine 1 [running]:
github.com/coreos/flannel/subnet/kube.(*kubeSubnetManager).AcquireLease(0xc420010cd0, 0x7f5314399bd0, 0xc420347880, 0xc4202213e0, 0x6, 0xf54, 0xc4202213e0)
/go/src/github.com/coreos/flannel/subnet/kube/kube.go:239 +0x1f7
github.com/coreos/flannel/backend/vxlan.(*VXLANBackend).RegisterNetwork(0xc4200b3480, 0x7f5314399bd0, 0xc420347880, 0xc420010c30, 0xc4200b3480, 0x0, 0x0, 0x4d0181)
/go/src/github.com/coreos/flannel/backend/vxlan/vxlan.go:141 +0x44e
main.main()
/go/src/github.com/coreos/flannel/main.go:278 +0x8ae
What exactly is the reason for the flannel pods going into CrashLoopBackOff, and what is the solution?
I was able to solve the problem by running the command:
kubectl annotate node appserv9 flannel.alpha.coreos.com/public-ip=10.10.10.10 --overwrite=true
Reason for the bug: a nil map in the code (no key available).
It doesn't matter what IP you give, but this command has to be run individually on all the nodes so that the code above no longer assigns into a nil map.
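Since the annotation has to land on every node, here is a sketch of applying it in one pass (using each node's own InternalIP instead of a dummy address is my assumption, not part of the original answer):
# Annotate every node with its own InternalIP so flannel's subnet manager
# finds the key it expects
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  ip=$(kubectl get node "$node" \
    -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
  kubectl annotate node "$node" flannel.alpha.coreos.com/public-ip="$ip" --overwrite=true
done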
If you deploy the cluster with kubeadm init --pod-network-cidr network/mask, this network/mask should match the ConfigMap in kube-flannel.yml.
My ConfigMap looks like:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  ...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
So the network/mask should equal 10.244.0.0/16.
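In other words, for the stock flannel manifest the matching init call would be (a sketch; any other flags you need are omitted):
kubeadm init --pod-network-cidr=10.244.0.0/16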

Kubernetes dashboard not working, “already exists” and “could not find the requested resource (get services heapster)”

I am new to Kubernetes.
The goal is to get the Kubernetes cluster dashboard working.
The Kubernetes cluster was deployed using Kubespray: github.com/kubernetes-incubator/kubespray
Versions:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
When I do kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false as described here
I get:
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
When I run kubectl get services --namespace kube-system, I get:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 10d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 9d
When I try to reach the dashboard, I get Connection refused.
kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53 output:
2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Other outputs:
kubectl get pods --namespace=kube-system:
NAME READY STATUS RESTARTS AGE
calico-node-bqckz 1/1 Running 0 12d
calico-node-r9svd 1/1 Running 2 12d
calico-node-w3tps 1/1 Running 0 12d
kube-apiserver-kubetest1 1/1 Running 0 12d
kube-apiserver-kubetest2 1/1 Running 0 12d
kube-controller-manager-kubetest1 1/1 Running 2 12d
kube-controller-manager-kubetest2 1/1 Running 2 12d
kube-dns-3888408129-n0m8d 3/3 Running 0 12d
kube-dns-3888408129-z8xx3 3/3 Running 0 12d
kube-proxy-kubetest1 1/1 Running 0 12d
kube-proxy-kubetest2 1/1 Running 0 12d
kube-proxy-kubetest3 1/1 Running 0 12d
kube-scheduler-kubetest1 1/1 Running 2 12d
kube-scheduler-kubetest2 1/1 Running 2 12d
kubedns-autoscaler-1629318612-sd924 1/1 Running 0 12d
kubernetes-dashboard-4167803980-1dz53 1/1 Running 0 1d
nginx-proxy-kubetest3 1/1 Running 0 12d
kubectl proxy:
Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
kubectl top nodes:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl get svc --namespace=kube-system:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 12d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 11d
curl http://localhost:8001/ui:
curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused
How can I get the dashboard working? Appreciate your help.
You may be installing dashboard version 1.7; try installing version 1.6.3, which is well tested.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
Update 10/2/17:
Can you try this? Delete the current deployment and install version 1.6.3:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
I believe the kubernetes dashboard is already available by default if you are deploying through GCP or Azure; the first error already suggests this. To verify, you may type the following commands to look for the pods/service in the kube-system namespace.
>kubectl get pods --namespace=kube-system
>kubectl get svc --namespace=kube-system
From the above command, you should find your available kubernetes dashboard and so you don't need to deploy it again. To access the dashboard, you could type the following command.
>kubectl proxy
This will make the Dashboard available at http://localhost:8001/ui on the machine where you type this command.
But to understand more about your problem, may I know which version of Kubernetes and which environment you are using now? Also, it would be great if you could show me the results of these two commands.
>kubectl get pods --namespace=kube-system
>kubectl top nodes

How to get the endpoint for kubernetes-dashboard

I have installed Kubernetes using minikube on an Ubuntu 16.04 machine.
I have also installed kubernetes-dashboard.
When I try accessing the dashboard I get:
Waiting, endpoint for service is not registered yet
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
.....
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
However, when I try kubectl get pods --all-namespaces I get the output below:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 1/1 Running 0 11m
kube-system kube-dns-1301475494-xtb3b 3/3 Running 0 8m
kube-system kubernetes-dashboard-2039414953-dvv3m 1/1 Running 0 9m
kube-system kubernetes-dashboard-2crsk 1/1 Running 0 8m
kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 10.0.2.15:8443 11m
kube-system kube-controller-manager <none> 6m
kube-system kube-dns 172.17.0.4:53,172.17.0.4:53 8m
kube-system kube-scheduler <none> 6m
kube-system kubernetes-dashboard <none> 9m
How can I fix this issue? I don't seem to understand what is wrong. I am completely new to Kubernetes.
You need to run minikube dashboard. You shouldn't install dashboard separately; it comes with minikube.
Some of the minikube commands:
./minikube.exe version
./minikube.exe delete
./minikube.exe start --help
./minikube.exe get-k8s-versions
./minikube.exe status
./minikube.exe ip
./minikube.exe dashboard --url=true