no endpoints available for service "kubernetes-dashboard" - kubernetes

I'm trying to follow GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters.
deploy/access:
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# kubectl proxy
Starting to serve on 127.0.0.1:8001
curl:
# curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
Please advise.
Per @VKR's request, here is the requested output:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-56vg7 0/1 ContainerCreating 0 57m
kube-system coredns-576cbf47c7-sn2fk 0/1 ContainerCreating 0 57m
kube-system etcd-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-apiserver-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-controller-manager-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kube-proxy-2hhf7 1/1 Running 0 6m57s
kube-system kube-proxy-lzfcx 1/1 Running 0 7m35s
kube-system kube-proxy-rndhm 1/1 Running 0 57m
kube-system kube-scheduler-wcmisdlin02.uftwf.local 1/1 Running 0 56m
kube-system kubernetes-dashboard-77fd78f978-g2hts 0/1 Pending 0 2m38s
$
logs:
$ kubectl logs kubernetes-dashboard-77fd78f978-g2hts -n kube-system
$
describe:
$ kubectl describe pod kubernetes-dashboard-77fd78f978-g2hts -n kube-system
Name: kubernetes-dashboard-77fd78f978-g2hts
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gp4l7 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-gp4l7:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-gp4l7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m39s (x21689 over 20h) default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
$

It would appear that you are attempting to deploy Kubernetes leveraging kubeadm but have skipped the step of Installing a pod network add-on (CNI). Notice the warning:
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
Once you do this, the CoreDNS pods should come up healthy. This can be verified with:
kubectl -n kube-system -l=k8s-app=kube-dns get pods
Then the kubernetes-dashboard pod should come up healthy as well.
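For example, with kubeadm a pod network add-on is installed by applying its manifest. A minimal sketch using Flannel (assumption: the cluster was initialized with a matching --pod-network-cidr; pick whichever add-on and manifest fit your setup):
# e.g. kubeadm init --pod-network-cidr=10.244.0.0/16 for Flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# then wait for CoreDNS to become Ready
kubectl -n kube-system -l=k8s-app=kube-dns get pods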

You could refer to https://github.com/kubernetes/dashboard#getting-started.
Also, your URL uses the kube-system namespace; newer Dashboard manifests deploy into the kubernetes-dashboard namespace instead.
Please try this link instead:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
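To confirm which namespace your Dashboard Service actually runs in (and therefore which proxy URL applies), a quick check like this should work:
kubectl get svc --all-namespaces | grep kubernetes-dashboard
# older manifests deploy into kube-system, newer ones into kubernetes-dashboard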

I had the same problem. In the end it turned out as a Calico Network configuration problem. But step by step...
First I checked if the Dashboard Pod was running:
kubectl get pods --all-namespaces
The result for me was:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-bcc6f659f-j57l9 1/1 Running 2 19h
kube-system calico-node-hdxp6 0/1 CrashLoopBackOff 13 15h
kube-system calico-node-z6l56 0/1 Running 68 19h
kube-system coredns-74ff55c5b-8l6m6 1/1 Running 2 19h
kube-system coredns-74ff55c5b-v7pkc 1/1 Running 2 19h
kube-system etcd-got-virtualbox 1/1 Running 3 19h
kube-system kube-apiserver-got-virtualbox 1/1 Running 3 19h
kube-system kube-controller-manager-got-virtualbox 1/1 Running 3 19h
kube-system kube-proxy-q99s5 1/1 Running 2 19h
kube-system kube-proxy-vrpcd 1/1 Running 1 15h
kube-system kube-scheduler-got-virtualbox 1/1 Running 2 19h
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-qc9ms 1/1 Running 0 28m
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-zrdk4 0/1 CrashLoopBackOff 9 28m
The last line indicates that the dashboard pod could not be started (status=CrashLoopBackOff).
And the 2nd line shows that the Calico node has problems. Most likely the root cause is Calico.
Next step is to have a look at the pod log (change namespace / name as listed in YOUR pods list):
kubectl logs kubernetes-dashboard-74d688b6bc-zrdk4 -n kubernetes-dashboard
The result for me was:
2021/03/05 13:01:12 Starting overwatch
2021/03/05 13:01:12 Using namespace: kubernetes-dashboard
2021/03/05 13:01:12 Using in-cluster config to connect to apiserver
2021/03/05 13:01:12 Using secret token for csrf signing
2021/03/05 13:01:12 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
Hm - not really helpful. After searching for "dial tcp 10.96.0.1:443: i/o timeout" I found this information, where it says ...
If you follow the kubeadm instructions to the letter ... Which means install docker, kubernetes (kubeadm, kubectl, & kubelet), and calico with the Kubeadm hosted instructions ... and your computer nodes have a physical ip address in the range of 192.168.X.X then you will end up with the above mentioned non-working dashboard. This is because the node ip addresses clash with the internal calico ip addresses.
https://github.com/kubernetes/dashboard/issues/1578#issuecomment-329904648
Yes, indeed I do have a physical IP in the range of 192.168.x.x - like many others might have as well. I wish Calico would check this during setup.
So let's move the pod network to a different IP range:
You should use a classless reserved IP range for private networks, such as:
10.0.0.0/8 (16,777,216 addresses)
172.16.0.0/12 (1,048,576 addresses)
192.168.0.0/16 (65,536 addresses)
Otherwise Calico will terminate with an error saying "Invalid CIDR specified in CALICO_IPV4POOL_CIDR" ...
sudo kubeadm reset
sudo rm /etc/cni/net.d/10-calico.conflist
sudo rm /etc/cni/net.d/calico-kubeconfig
export CALICO_IPV4POOL_CIDR=172.16.0.0
export MASTER_IP=192.168.100.122
sudo kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/12 --apiserver-advertise-address=$MASTER_IP --apiserver-cert-extra-sans=$MASTER_IP
mkdir -p $HOME/.kube
sudo rm -f $HOME/.kube/config
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/kubelet.conf
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O calico.yaml
sudo sed -i "s/192.168.0.0\/16/$CALICO_IPV4POOL_CIDR\/12/g" calico.yaml
sudo sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml
kubectl apply -f calico.yaml
Now we test if all calico pods are running:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-bcc6f659f-ns7kz 1/1 Running 0 15m
kube-system calico-node-htvdv 1/1 Running 6 15m
kube-system coredns-74ff55c5b-lqwpd 1/1 Running 0 17m
kube-system coredns-74ff55c5b-qzc87 1/1 Running 0 17m
kube-system etcd-got-virtualbox 1/1 Running 0 17m
kube-system kube-apiserver-got-virtualbox 1/1 Running 0 17m
kube-system kube-controller-manager-got-virtualbox 1/1 Running 0 18m
kube-system kube-proxy-6xr5j 1/1 Running 0 17m
kube-system kube-scheduler-got-virtualbox 1/1 Running 0 17m
Looks good. If not, check CALICO_IPV4POOL_CIDR by editing the node config: KUBE_EDITOR="nano" kubectl edit -n kube-system ds calico-node
Let's apply the kubernetes-dashboard and start the proxy:
export KUBECONFIG=$HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy
Now I can load http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
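If the proxy URL still returns a 503 at this point, it is worth checking that the Service actually has endpoints behind it; a quick sanity check (assuming the v2.x manifest, i.e. the kubernetes-dashboard namespace):
kubectl -n kubernetes-dashboard get pods,svc,endpoints
# the kubernetes-dashboard endpoints should list the pod IP on port 8443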

If you are using Helm:
Check that kubectl proxy is running, then go to
http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy
Two tips about the link above:
If you install with Helm's defaults, the namespace will be /default (not /kubernetes-dashboard).
You need to add https after /https:kubernetes-dashboard:.
A better way is:
helm delete kubernetes-dashboard
kubectl create namespace kubernetes-dashboard
helm install -n kubernetes-dashboard kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
Then go to
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:https/proxy
Then you can easily follow creating-sample-user to get a token to log in.
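For reference, the creating-sample-user guide boils down to roughly the following; the name admin-user is just the example name from that guide, and kubectl create token only exists on kubectl >= 1.24 (older clusters read the token from the generated secret):
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
# kubectl >= 1.24:
kubectl -n kubernetes-dashboard create token admin-user
# older clusters:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')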

I was facing the same issue, so I followed the official docs and then went to https://github.com/kubernetes/dashboard; there is another way using Helm, described at https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard.
After installing Helm, run these 2 commands:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
It worked, but in the default namespace, at this link:
http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy/#/workloads?namespace=default

Related

K8s no internet connection inside the container

I installed a clean K8s cluster in virtual machines (Debian 10). After the installation and the integration into my landscape, I checked the connectivity from inside my testing alpine image. As a result, outgoing traffic was not working and there was no information in the CoreDNS log. As a workaround on my build image I overwrote /etc/resolv.conf and replaced the DNS entries (e.g. set 1.1.1.1 as nameserver). After that temporary "hack" the connection to the internet worked perfectly. But the workaround is not a long-term solution and I want to use the official way. In the K8s CoreDNS documentation I found the forward section, and I interpret the flag as an option to forward the query to the predefined local resolver. I think the forwarding to the local resolv.conf and the resolve process do not work correctly. Can anyone help me solve this issue?
Basic setup:
K8s version: 1.19.0
K8s setup: 1 master + 2 worker nodes
Based on: Debian 10 VM's
CNI: Flannel
Status of CoreDNS Pods
kube-system coredns-xxxx 1/1 Running 1 26h
kube-system coredns-yyyy 1/1 Running 1 26h
CoreDNS Log:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
CoreDNS config:
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
creationTimestamp: ""
name: coredns
namespace: kube-system
resourceVersion: "219"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: xxx
Output from the alpine image:
/ # nslookup -debug google.de
;; connection timed out; no servers could be reached
Output of pods resolv.conf
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search development.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
Output of host resolv.conf
cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 213.136.95.11
nameserver 213.136.95.10
search invalid
Output of host /run/flannel/subnet.env
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Output of kubectl get pods -n kube-system -o wide
coredns-54694b8f47-4sm4t 1/1 Running 0 14d 10.244.1.48 xxx3-node-1 <none> <none>
coredns-54694b8f47-6c7zh 1/1 Running 0 14d 10.244.0.43 xxx2-master <none> <none>
coredns-54694b8f47-lcthf 1/1 Running 0 14d 10.244.2.88 xxx4-node-2 <none> <none>
etcd-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-apiserver-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-controller-manager-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-4w8zl 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-w7m44 1/1 Running 7 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-flannel-ds-amd64-xztqm 1/1 Running 6 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-dfs85 1/1 Running 4 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-m4hl2 1/1 Running 4 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-proxy-s7p4s 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-scheduler-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
Problem:
The (two) CoreDNS pods were only deployed on the master node. You can check the pod placement with this command:
kubectl get pods -n kube-system -o wide | grep coredns
Solution:
I could solve the problem by scaling up the CoreDNS pods and editing the deployment configuration. The following commands must be executed:
kubectl edit deployment coredns -n kube-system
Set the replicas value to the node quantity, e.g. 3.
kubectl patch deployment coredns -n kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-update/updated-at\":\"$(date +%s)\"}}}}}"
kubectl get pods -n kube-system -o wide | grep coredns
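For what it's worth, on reasonably recent kubectl versions the same effect (more replicas plus a rolling restart) can be achieved without editing the Deployment by hand; a sketch, assuming a 3-node cluster:
kubectl -n kube-system scale deployment coredns --replicas=3
kubectl -n kube-system rollout restart deployment coredns
kubectl get pods -n kube-system -o wide | grep coredns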
Source
https://blog.dbi-services.com/kubernetes-dns-resolution-using-coredns-force-update-deployment/
Hint
If you still have a problem with your coreDNS and your DNS resolution works sporadically, take a look at this post.
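To double-check DNS end to end after any change, a quick test from a throwaway pod should resolve both cluster-internal and external names (image and pod name are just examples):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup google.de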

kubernetes dashboard could not get csrf from api server after install

I am using this command in Helm 3 to install kubernetes dashboard 2.2.0 on Kubernetes v1.18; the OS is CentOS 8:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm install k8s-dashboard/kubernetes-dashboard --generate-name --version 2.2.0
The installation succeeds, but when I check the pod status, it shows CrashLoopBackOff like this:
[root@localhost ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default kubernetes-dashboard-1594440918-549c59c487-h8z9l 0/1 CrashLoopBackOff 15 87m 10.11.157.65 k8sslave1 <none> <none>
default traefik-5f95ff4766-vg8gx 1/1 Running 0 34m 10.11.125.129 k8sslave2 <none> <none>
kube-system calico-kube-controllers-75d555c48-lt4jr 1/1 Running 0 36h 10.11.102.134 localhost.localdomain <none> <none>
kube-system calico-node-6rj58 1/1 Running 0 14h 192.168.31.30 k8sslave1 <none> <none>
kube-system calico-node-czhww 1/1 Running 0 36h 192.168.31.29 localhost.localdomain <none> <none>
kube-system calico-node-vwr5w 1/1 Running 0 36h 192.168.31.31 k8sslave2 <none> <none>
kube-system coredns-546565776c-45jr5 1/1 Running 40 4d13h 10.11.102.132 localhost.localdomain <none> <none>
kube-system coredns-546565776c-zjwg7 1/1 Running 0 4d13h 10.11.102.129 localhost.localdomain <none> <none>
kube-system etcd-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-apiserver-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-proxy-8z9vs 1/1 Running 0 38h 192.168.31.31 k8sslave2 <none> <none>
kube-system kube-proxy-dnpc6 1/1 Running 0 4d13h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-proxy-s5t5r 1/1 Running 0 14h 192.168.31.30 k8sslave1 <none> <none>
kube-system kube-scheduler-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
So I checked the kubernetes dashboard pod logs to see what happened:
[root@localhost ~]# kubectl logs kubernetes-dashboard-1594440918-549c59c487-h8z9l
2020/07/11 05:44:13 Starting overwatch
2020/07/11 05:44:13 Using namespace: default
2020/07/11 05:44:13 Using in-cluster config to connect to apiserver
2020/07/11 05:44:13 Using secret token for csrf signing
2020/07/11 05:44:13 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf": dial tcp 10.20.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0000a2080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0005a4100)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0005a4100)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
I tried to access this resource using curl on the host machine to see if the API server responds properly:
[root@localhost ~]# curl -k https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "secrets \"kubernetes-dashboard-csrf\" is forbidden: User \"system:anonymous\" cannot get resource \"secrets\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"name": "kubernetes-dashboard-csrf",
"kind": "secrets"
},
"code": 403
}
This is the firewalld status on my master node and on k8sslave1:
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@k8sslave1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor >
Active: inactive (dead)
Docs: man:firewalld(1)
So where is the problem? What should I do to make the dashboard run successfully?
The problem is that you didn't specify a ClusterRole for the ServiceAccount attached to the dashboard pod.
I used this chart a few months ago and I had to provide a custom values.yaml like the following:
# myvalues.yaml
# these are mine
rbac:
  clusterReadOnlyRole: true # <--- 🔴 YOU NEED this one
  clusterAdminRole: false
extraArgs:
  - --enable-skip-login
  - --enable-insecure-login
  - --system-banner="Welcome to Company.com Kubernetes Cluster"
As you can see, rbac.enabled is not enough; you also need to specify rbac.clusterReadOnlyRole=true.
Or, if you want to give the Dashboard more access, set rbac.clusterAdminRole to true.
Now you can upgrade your Helm release using the values file above:
helm install <generate-release-name> k8s-dashboard/kubernetes-dashboard \
--version 2.2.0 \
-f myvalues.yaml
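If the release is already installed, an in-place upgrade with the same values file should work too; a sketch (use the release name that helm list shows, and whichever repo alias you added):
helm upgrade <your-release-name> k8s-dashboard/kubernetes-dashboard \
  --version 2.2.0 \
  -f myvalues.yaml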

kubectl proxy not working on Ubuntu LTS 18.04

I've installed Kubernetes on Ubuntu 18.04 using this article. Everything was working fine, and then I tried to install the Kubernetes dashboard with these instructions.
Now when I run kubectl proxy, the dashboard does not come up, and the browser shows the following error message when trying to access it using the default kubernetes-dashboard URL:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
}
The following commands give this output, where kubernetes-dashboard shows status CrashLoopBackOff:
$> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default amazing-app-rs-59jt9 1/1 Running 5 23d
default amazing-app-rs-k6fg5 1/1 Running 5 23d
default amazing-app-rs-qd767 1/1 Running 5 23d
default amazingapp-one-deployment-57dddd6fb7-xdxlp 1/1 Running 5 23d
default nginx-86c57db685-vwfzf 1/1 Running 4 22d
kube-system coredns-6955765f44-nqphx 0/1 Running 14 25d
kube-system coredns-6955765f44-psdv4 0/1 Running 14 25d
kube-system etcd-master-node 1/1 Running 8 25d
kube-system kube-apiserver-master-node 1/1 Running 42 25d
kube-system kube-controller-manager-master-node 1/1 Running 11 25d
kube-system kube-flannel-ds-amd64-95lvl 1/1 Running 8 25d
kube-system kube-proxy-qcpqm 1/1 Running 8 25d
kube-system kube-scheduler-master-node 1/1 Running 11 25d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-kvz5d 1/1 Running 0 41m
kubernetes-dashboard kubernetes-dashboard-566f567dc7-w2sbk 0/1 CrashLoopBackOff 12 41m
$> kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP ---------- <none> 443/TCP 25d
default nginx NodePort ---------- <none> 80:32188/TCP 22d
kube-system kube-dns ClusterIP ---------- <none> 53/UDP,53/TCP,9153/TCP 25d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP ---------- <none> 8000/TCP 24d
kubernetes-dashboard kubernetes-dashboard ClusterIP ---------- <none> 443/TCP 24d
$ kubectl get events -n kubernetes-dashboard
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Pulling pod/kubernetes-dashboard-566f567dc7-w2sbk Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s Warning BackOff pod/kubernetes-dashboard-566f567dc7-w2sbk Back-off restarting failed container
$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.241.62
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints:
Session Affinity: None
Events: <none>
$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard
2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
	/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
Any suggestions to fix this? Thanks in advance.
I noticed that the guide you used to install the Kubernetes cluster is missing one important part.
According to kubernetes documentation:
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see here .
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
For more information about flannel, see the CoreOS flannel repository on GitHub .
To fix this:
I suggest using the command:
sysctl net.bridge.bridge-nf-call-iptables=1
And then reinstall flannel:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
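If that sysctl turns out to be needed on your nodes, you may also want to persist it across reboots; one way, as a sketch:
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system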
Update: I verified that the /proc/sys/net/bridge/bridge-nf-call-iptables value is already 1 by default on Ubuntu 18.04 LTS. So the issue here is that you need to access the dashboard locally.
If you are connected to your master node via SSH, it could be possible to use the -X flag with ssh in order to launch a web browser via ForwardX11. Fortunately, Ubuntu 18.04 LTS has it turned on by default.
ssh -X server
Then install local web browser like chromium.
sudo apt-get install chromium-browser
chromium-browser
And finally access the dashboard locally from the node:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
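As an alternative to X11 forwarding, an SSH local port forward to kubectl proxy on the master works as well; a sketch (user and host are placeholders):
# on the master: kubectl proxy   (listens on 127.0.0.1:8001)
ssh -L 8001:127.0.0.1:8001 <user>@<master-node>
# then open the dashboard URL above in your local browser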
Hope it helps.

Kubernetes Dashboard CrashLoopBackOff, getting error "connect: no route to host", how can I fix it?

I have deployed the Kubernetes dashboard which ended up in CrashLoopBackOff status. When I run:
$ kubectl logs kubernetes-dashboard-767dc7d4d-mc2sm --namespace=kube-system
the output is:
Error from server: Get https://10.4.211.53:10250/containerLogs/kube-system/kubernetes-dashboard-767dc7d4d-mc2sm/kubernetes-dashboard: dial tcp 10.4.211.53:10250: connect: no route to host
How can I fix this? Does this mean that port 10250 isn't open?
Update:
@LucaBrasi
Error from server (NotFound): pods "kubernetes-dashboard-767dc7d4d-mc2sm" not found
Output of systemctl status kubelet --full is:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since 一 2018-09-10 15:04:57 CST; 1 day 23h ago
Docs: https://kubernetes.io/docs/
Main PID: 93440 (kubelet)
Tasks: 21
Memory: 78.9M
CGroup: /system.slice/kubelet.service
└─93440 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
Output for kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-qh6zb 1/1 Running 2 3d
kube-system coredns-78fcdf6894-xbzgn 1/1 Running 1 3d
kube-system etcd-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-apiserver-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-controller-manager-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-flannel-ds-amd64-2bnmx 1/1 Running 3 3d
kube-system kube-flannel-ds-amd64-r58j6 1/1 Running 0 3d
kube-system kube-flannel-ds-amd64-wq6ls 1/1 Running 0 3d
kube-system kube-proxy-ds7lg 1/1 Running 0 3d
kube-system kube-proxy-fx46d 1/1 Running 0 3d
kube-system kube-proxy-ph7qq 1/1 Running 2 3d
kube-system kube-scheduler-twsr-whtestserver01.garenanet.com 1/1 Running 1 3d
kube-system kubernetes-dashboard-767dc7d4d-mc2sm 0/1 CrashLoopBackOff 877 3d
I had the same issue when I reproduced all the steps from the tutorial you've linked - my dashboard was in CrashLoopBackOff state. After I performed the steps below and applied the new dashboard yaml from the official GitHub documentation (there seems to be no difference from the one you've posted), the dashboard was working properly.
First, list all the objects related to Kubernetes dashboard:
kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
Delete them:
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
Now apply Kubernetes dashboard yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
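After re-applying the manifest, you can confirm that the recreated pod comes up and that the Service gets endpoints before retrying the proxy URL; a quick check (namespace kube-system for this older manifest):
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get endpoints kubernetes-dashboard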
Please tell me if this worked for you as well, and if it did, treat it as a workaround as I don't know the reason yet - I am investigating.

Deploy GitLab with Helm. Nginx-ingress pods can't start

Call install:
helm install --name gitlab1 -f values.yaml gitlab/gitlab-omnibus
I see that the Pods can't start.
And I see the error: no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
I think this is about ABAC/RBAC... but what should I do about it?
Logs from nginx pod:
# kubectl logs nginx-ndxhn --namespace nginx-ingress
[dumb-init] Unable to detach from controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] Child spawned with PID 7.
[dumb-init] Unable to attach to controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] setsid complete.
I0530 21:30:23.232676 7 launch.go:105] &{NGINX 0.9.0-beta.11 git-a3131c5 https://github.com/kubernetes/ingress}
I0530 21:30:23.232749 7 launch.go:108] Watching for ingress class: nginx
I0530 21:30:23.233708 7 launch.go:262] Creating API server client for https://10.233.0.1:443
I0530 21:30:23.234080 7 nginx.go:182] starting NGINX process...
F0530 21:30:23.251587 7 launch.go:122] no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
[dumb-init] Received signal 17.
[dumb-init] A child with PID 7 exited with exit status 255.
[dumb-init] Forwarded signal 15 to children.
[dumb-init] Child exited with status 255. Goodbye.
# kubectl get svc -w --namespace nginx-ingress nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.233.25.0 <pending> 80:32048/TCP,443:31430/TCP,22:31636/TCP 9m
# kubectl describe svc --namespace nginx-ingress nginx
Name: nginx
Namespace: nginx-ingress
Labels: <none>
Annotations: service.beta.kubernetes.io/external-traffic=OnlyLocal
Selector: app=nginx
Type: LoadBalancer
IP: 10.233.25.0
IP: 1.1.1.1
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32048/TCP
Endpoints:
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31430/TCP
Endpoints:
Port: git 22/TCP
TargetPort: 22/TCP
NodePort: git 31636/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default gitlab1-gitlab-75576c4589-lnf56 0/1 Running 2 11m
default gitlab1-gitlab-postgresql-f66555d65-nqvqx 1/1 Running 0 11m
default gitlab1-gitlab-redis-58cf598657-ksptm 1/1 Running 0 11m
default gitlab1-gitlab-runner-55d458ccb7-g442z 0/1 CrashLoopBackOff 6 11m
default glusterfs-9cfcr 1/1 Running 0 1d
default glusterfs-k422g 1/1 Running 0 1d
default glusterfs-tjtvq 1/1 Running 0 1d
default heketi-75dcfb7d44-thxpm 1/1 Running 0 1d
default nginx-nginx-ingress-controller-775b5b9c6d-hhvlr 1/1 Running 0 2h
default nginx-nginx-ingress-default-backend-7bb66746b9-mzgcb 1/1 Running 0 2h
default nginx-pod1 1/1 Running 0 1d
kube-lego kube-lego-58c9f5788d-pdfb5 1/1 Running 0 11m
kube-system calico-node-hq2v7 1/1 Running 3 2d
kube-system calico-node-z4nts 1/1 Running 3 2d
kube-system calico-node-z9r9v 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m1.me 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m2.me 1/1 Running 5 1d
kube-system kube-apiserver-k8s-m3.me 1/1 Running 3 2d
kube-system kube-controller-manager-k8s-m1.me 1/1 Running 4 2d
kube-system kube-controller-manager-k8s-m2.me 1/1 Running 4 1d
kube-system kube-controller-manager-k8s-m3.me 1/1 Running 3 2d
kube-system kube-dns-7bd4d5fbb6-r2rnf 3/3 Running 9 2d
kube-system kube-dns-7bd4d5fbb6-zffvn 3/3 Running 9 2d
kube-system kube-proxy-k8s-m1.me 1/1 Running 3 2d
kube-system kube-proxy-k8s-m2.me 1/1 Running 3 1d
kube-system kube-proxy-k8s-m3.me 1/1 Running 3 2d
kube-system kube-scheduler-k8s-m1.me 1/1 Running 4 2d
kube-system kube-scheduler-k8s-m2.me 1/1 Running 4 1d
kube-system kube-scheduler-k8s-m3.me 1/1 Running 4 2d
kube-system kubedns-autoscaler-679b8b455-pp7jd 1/1 Running 3 2d
kube-system kubernetes-dashboard-55fdfd74b4-6z8qp 1/1 Running 0 1d
kube-system tiller-deploy-75b7d95f5c-8cmxh 1/1 Running 0 1d
nginx-ingress default-http-backend-6679b97b47-w6cx7 1/1 Running 0 11m
nginx-ingress nginx-ndxhn 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-nk2jg 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-rz7xj 0/1 CrashLoopBackOff 6 11m
Logs on runner:
# kubectl logs gitlab1-gitlab-runner-55d458ccb7-g442z
+ cp /scripts/config.toml /etc/gitlab-runner/
+ /entrypoint register --non-interactive --executor kubernetes
Running in system-mode.
ERROR: Registering runner... failed runner=tQtCbx5U status=couldn't execute POST against http://gitlab1-gitlab.default:8005/api/v4/runners: Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
PANIC: Failed to register this runner. Perhaps you are having network problems
PVC is fine
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gitlab1-gitlab-config-storage Bound pvc-c957bd23-644f-11e8-8f10-4ccc6a60fcbe 1Gi RWO gluster-heketi 13m
gitlab1-gitlab-postgresql-storage Bound pvc-c964e7d0-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-redis-storage Bound pvc-c96f9146-644f-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 13m
gitlab1-gitlab-registry-storage Bound pvc-c959d377-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-storage Bound pvc-c9611ab1-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gluster1 Bound pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 1d
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I think this is about ABAC/RBAC... but what should I do about it?
You are correct, and the error message explains exactly what is wrong. There are two paths forward: you can fix the Role and RoleBinding for the default ServiceAccount in the nginx-ingress namespace, or you can switch the Deployment to use a ServiceAccount other than default in order to assign that Deployment the specific permissions required. I recommend the latter, but the former may be less typing.
The rough version of the Role and RoleBinding lives in the nginx-ingress repo but may need to be adapted for your needs, including updating the apiVersion away from v1beta1.
After that change has taken place, you'll need to delete the nginx-ingress Pods in order for them to pick up their new Role and conduct whatever initialization tasks nginx does during startup.
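For illustration only, a minimal Role/RoleBinding for the default ServiceAccount in the nginx-ingress namespace might look roughly like this; the names and the exact rule list are assumptions on my part, and the canonical manifests in the nginx-ingress repo should take precedence:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-minimal
  namespace: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "configmaps", "pods", "secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-minimal
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-minimal
subjects:
- kind: ServiceAccount
  name: default
  namespace: nginx-ingress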
Separately, you will for sure want to fix this business:
Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
I can't offer more concrete actions without knowing more about your CNI setup and the state of affairs of the actual GitLab Pod, but an I/O timeout is certainly a very weird error to get for in-cluster communication.