kubernetes dashboard CrashLoopBackOff

I'm brand new to Kubernetes, but I managed to install it on Ubuntu 20.04 LTS. However, I'm having issues with the dashboard. I followed the standard procedure, using Flannel as the CNI.
The log reports a connection problem to 10.96.0.1:443, but telnet to that address seems to work. Any suggestions on how to get further?
bwa@prod3:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66bff467f8-jgmpl 0/1 Running 1 27h 10.244.0.6 prod3 <none> <none>
kube-system coredns-66bff467f8-ldr9d 0/1 Running 1 27h 10.244.0.9 prod3 <none> <none>
kube-system etcd-prod3 1/1 Running 1 27h 192.168.0.93 prod3 <none> <none>
kube-system kube-apiserver-prod3 1/1 Running 1 27h 192.168.0.93 prod3 <none> <none>
kube-system kube-controller-manager-prod3 1/1 Running 1 27h 192.168.0.93 prod3 <none> <none>
kube-system kube-flannel-ds-amd64-xm26h 1/1 Running 2 27h 192.168.0.93 prod3 <none> <none>
kube-system kube-proxy-7lk5d 1/1 Running 1 27h 192.168.0.93 prod3 <none> <none>
kube-system kube-scheduler-prod3 1/1 Running 1 27h 192.168.0.93 prod3 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-xrdbh 1/1 Running 1 27h 10.244.0.7 prod3 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7f99b75bf4-lfqtf 0/1 CrashLoopBackOff 310 27h 10.244.0.8 prod3 <none> <none>
bwa@prod3:~$ kubectl logs kubernetes-dashboard-7f99b75bf4-lfqtf --namespace=kubernetes-dashboard --tail=100
2020/08/05 12:02:31 Starting overwatch
2020/08/05 12:02:31 Using namespace: kubernetes-dashboard
2020/08/05 12:02:31 Using in-cluster config to connect to apiserver
2020/08/05 12:02:31 Using secret token for csrf signing
2020/08/05 12:02:31 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000c640)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00044f800)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00044f800)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
bwa@prod3:~$ telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
^CConnection closed by foreign host.
bwa@prod3:~$

By the looks of that cluster, the pod network (CNI) is not functioning: the coredns pods never become Ready, and the dashboard cannot reach the cluster service network.
This would also explain why the dashboard panics, as it is unable to reach the K8s API server via the 10.96.0.1 service IP.
Can you check the flannel installation (or just reinstall flannel on the cluster)?
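If it helps, a minimal sketch of how to inspect and reapply flannel (the app=flannel label and the manifest URL below are assumptions based on the common upstream deployment, not taken from your cluster):
kubectl -n kube-system get daemonset,pods -l app=flannel -o wide
kubectl -n kube-system logs kube-flannel-ds-amd64-xm26h --tail=50
# reapply the upstream manifest if the DaemonSet looks broken
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml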

Related

K8s pods running on different nodes can't communicate with each other

I have a k8s cluster with two nodes, a master and a worker, and I installed Calico.
I initialized the cluster and installed Calico with the following commands:
# Initialize cluster
kubeadm init --apiserver-advertise-address=<MatserNodePublicIP> --pod-network-cidr=192.168.0.0/16
# Install Calico. Refer to official document
# https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
After that, I found that pods running on different nodes can't communicate with each other, but pods running on the same node can.
Here are my operations:
# With the following command, I ran an nginx pod scheduled to the worker node;
# it was assigned pod IP 192.168.199.72
kubectl run nginx --image=nginx
# With the following command, I ran a busybox pod scheduled to the master node;
# it was assigned pod IP 192.168.119.197
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
# In the busybox shell, I executed the following command
# ('>' represents command output)
wget 192.168.199.72
> Connecting to 192.168.199.72 (192.168.199.72:80)
> wget: can't connect to remote host (192.168.199.72): Connection timed out
However, if the nginx pod runs on the master node (same as busybox), wget returns the correct welcome HTML.
(To schedule the nginx pod to the master node, I cordoned the worker node and restarted the nginx pod.)
I also tried scheduling both the nginx and busybox pods to the worker node; wget then returned the correct welcome HTML as well.
Here is my cluster status; everything looks fine. I searched everything I could find but couldn't find a solution.
The master and worker node can ping each other on their private IPs.
For the firewall:
systemctl status firewalld
> Unit firewalld.service could not be found.
For node information:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pro-con-scrapydmanager Ready control-plane,master 26h v1.21.2 10.120.0.5 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://20.10.5
pro-con-scraypd-01 Ready,SchedulingDisabled <none>
For pod information:
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busybox 0/1 Error 0 24h 192.168.199.72 pro-con-scrapydmanager <none> <none>
default nginx 1/1 Running 1 26h 192.168.119.197 pro-con-scraypd-01 <none> <none>
kube-system calico-kube-controllers-78d6f96c7b-msrdr 1/1 Running 1 26h 192.168.199.77 pro-con-scrapydmanager <none> <none>
kube-system calico-node-gjhwh 1/1 Running 1 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system calico-node-x8d7g 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-mllm5 1/1 Running 1 26h 192.168.199.78 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-whfnn 1/1 Running 1 26h 192.168.199.75 pro-con-scrapydmanager <none> <none>
kube-system etcd-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-apiserver-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-controller-manager-pro-con-scrapydmanager 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-proxy-84cxb 1/1 Running 2 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system kube-proxy-nj2tq 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-scheduler-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
lens-metrics kube-state-metrics-78596b555-zxdst 1/1 Running 1 26h 192.168.199.76 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-ggwtc 1/1 Running 1 26h 192.168.199.73 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-sbz6t 1/1 Running 1 26h 192.168.119.196 pro-con-scraypd-01 <none> <none>
lens-metrics prometheus-0 1/1 Running 1 26h 192.168.199.74 pro-con-scrapydmanager <none> <none>
For services:
kubectl get services -o wide --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
default nginx ClusterIP 10.99.117.158 <none> 80/TCP 24h run=nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 26h k8s-app=kube-dns
lens-metrics kube-state-metrics ClusterIP 10.104.32.63 <none> 8080/TCP 26h name=kube-state-metrics
lens-metrics node-exporter ClusterIP None <none> 80/TCP 26h name=node-exporter,phase=prod
lens-metrics prometheus ClusterIP 10.111.86.164 <none> 80/TCP 26h name=prometheus
OK, it was the firewall's fault. I opened all of the following ports on my master node and recreated my cluster; after that everything worked and the cni0 interface appeared, although I still don't know why.
While troubleshooting, I found that the cni0 interface is important: without cni0, I could not ping pods running on a different node.
(Refer: https://docs.projectcalico.org/getting-started/bare-metal/requirements)
Configuration Host(s) Connection type Port/protocol
Calico networking (BGP) All Bidirectional TCP 179
Calico networking with IP-in-IP enabled (default) All Bidirectional IP-in-IP, often represented by its protocol number 4
Calico networking with VXLAN enabled All Bidirectional UDP 4789
Calico networking with Typha enabled Typha agent hosts Incoming TCP 5473 (default)
flannel networking (VXLAN) All Bidirectional UDP 4789
All kube-apiserver host Incoming Often TCP 443 or 6443*
etcd datastore etcd hosts Incoming Officially TCP 2379 but can vary
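For reference, a rough sketch of opening the ports from that table with firewalld on a CentOS 7 node (an assumption, since firewalld was reported missing above; in that case the same rules belong in the cloud security group or iptables instead, and the list should be adjusted to your datastore and encapsulation):
firewall-cmd --permanent --add-port=179/tcp        # Calico BGP
firewall-cmd --permanent --add-protocol=ipip       # IP-in-IP (protocol 4), the Calico default
firewall-cmd --permanent --add-port=4789/udp       # only if VXLAN is enabled
firewall-cmd --permanent --add-port=6443/tcp       # kube-apiserver
firewall-cmd --permanent --add-port=2379/tcp       # etcd (may vary)
firewall-cmd --reload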

k3s - can't access from one pod to another if pods on different master nodes (HighAvailability setup)

k3s - can't access from one pod to another if pods on different nodes
Update:
I've narrowed the issue down - it's pods that are on other master nodes that can't communicate with those on the original master
pods on rpi4-server1 - the original cluster - can communicate with pods on rpi-worker01 and rpi3-worker02
pods on rpi4-server2 are unable to communicate with the others
I'm trying to run a high-availability cluster with the embedded DB, using flannel/VXLAN.
I'm trying to set up a project with 5 services in k3s.
When all of the pods are contained on a single node, they work together fine.
As soon as I add other nodes into the system and pods are deployed to them, the links seem to break.
While troubleshooting I've exec'd into one of the pods and tried to curl another. When they are on the same node this works; if the second service is on another node, it doesn't.
I'm sure this is something simple that I'm missing, but I can't work it out! Help appreciated.
Key details:
Using k3s and native traefik
Two rpi4s as servers (High Availability) and two rpi3s as worker nodes
MetalLB as the load balancer
Two services, blah-interface and blah-svc, are configured as LoadBalancer to allow external access. The others (blah-server, n34 and test-apis) are NodePort to support debugging, but only really need internal access.
Info on nodes, pods and services....
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get nodes --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi4-server1 Ready master 11h v1.17.0+k3s.1 192.168.0.140 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7l+ docker://19.3.5
rpi-worker01 Ready,SchedulingDisabled <none> 10h v1.17.0+k3s.1 192.168.0.41 <none> Raspbian GNU/Linux 10 (buster) 4.19.66-v7+ containerd://1.3.0-k3s.5
rpi3-worker02 Ready,SchedulingDisabled <none> 10h v1.17.0+k3s.1 192.168.0.142 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7+ containerd://1.3.0-k3s.5
rpi4-server2 Ready master 10h v1.17.0+k3s.1 192.168.0.143 <none> Raspbian GNU/Linux 10 (buster) 4.19.75-v7l+ docker://19.3.5
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system helm-install-traefik-l2z6l 0/1 Completed 2 11h 10.42.0.2 rpi4-server1 <none> <none>
test-demo n34-5c7b9475cb-zjlgl 1/1 Running 1 4h30m 10.42.0.32 rpi4-server1 <none> <none>
kube-system metrics-server-6d684c7b5-5wgf9 1/1 Running 3 11h 10.42.0.26 rpi4-server1 <none> <none>
metallb-system speaker-62rkm 0/1 Pending 0 99m <none> rpi-worker01 <none> <none>
metallb-system speaker-2shzq 0/1 Pending 0 99m <none> rpi3-worker02 <none> <none>
metallb-system speaker-2mcnt 1/1 Running 0 99m 192.168.0.143 rpi4-server2 <none> <none>
metallb-system speaker-v8j9g 1/1 Running 0 99m 192.168.0.140 rpi4-server1 <none> <none>
metallb-system controller-65895b47d4-pgcs6 1/1 Running 0 90m 10.42.0.49 rpi4-server1 <none> <none>
test-demo blah-server-858ccd7788-mnf67 1/1 Running 0 64m 10.42.0.50 rpi4-server1 <none> <none>
default nginx2-6f4f6f76fc-n2kbq 1/1 Running 0 22m 10.42.0.52 rpi4-server1 <none> <none>
test-demo blah-interface-587fc66bf9-qftv6 1/1 Running 0 22m 10.42.0.53 rpi4-server1 <none> <none>
test-demo blah-svc-6f8f68f46-gqcbw 1/1 Running 0 21m 10.42.0.54 rpi4-server1 <none> <none>
kube-system coredns-d798c9dd-hdwn5 1/1 Running 1 11h 10.42.0.27 rpi4-server1 <none> <none>
kube-system local-path-provisioner-58fb86bdfd-tjh7r 1/1 Running 31 11h 10.42.0.28 rpi4-server1 <none> <none>
kube-system traefik-6787cddb4b-tgq6j 1/1 Running 0 4h50m 10.42.1.23 rpi4-server2 <none> <none>
default testdemo2020-testchart-6f8d44b496-2hcfc 1/1 Running 1 6h31m 10.42.0.29 rpi4-server1 <none> <none>
test-demo test-apis-75bb68dcd7-d8rrp 1/1 Running 0 7m13s 10.42.1.29 rpi4-server2 <none> <none>
pi@rpi4-server1:~/Projects/test_demo_2020/test_kube_config/testchart/templates $ sudo kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 11h <none>
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 11h k8s-app=kube-dns
kube-system metrics-server ClusterIP 10.43.74.118 <none> 443/TCP 11h k8s-app=metrics-server
kube-system traefik-prometheus ClusterIP 10.43.78.135 <none> 9100/TCP 11h app=traefik,release=traefik
test-demo blah-server NodePort 10.43.224.128 <none> 5055:31211/TCP 10h io.kompose.service=blah-server
default testdemo2020-testchart ClusterIP 10.43.91.7 <none> 80/TCP 10h app.kubernetes.io/instance=testdemo2020,app.kubernetes.io/name=testchart
test-demo traf-dashboard NodePort 10.43.60.155 <none> 8080:30808/TCP 10h io.kompose.service=traf-dashboard
test-demo test-apis NodePort 10.43.248.59 <none> 8075:31423/TCP 7h11m io.kompose.service=test-apis
kube-system traefik LoadBalancer 10.43.168.18 192.168.0.240 80:30688/TCP,443:31263/TCP 11h app=traefik,release=traefik
default nginx2 LoadBalancer 10.43.249.123 192.168.0.241 80:30497/TCP 92m app=nginx2
test-demo n34 NodePort 10.43.171.206 <none> 7474:30474/TCP,7687:32051/TCP 72m io.kompose.service=n34
test-demo blah-interface LoadBalancer 10.43.149.158 192.168.0.242 80:30634/TCP 66m io.kompose.service=blah-interface
test-demo blah-svc LoadBalancer 10.43.19.242 192.168.0.243 5005:30005/TCP,5006:31904/TCP,5002:30685/TCP 51m io.kompose.service=blah-svc
Hi, your issue could be related to the following.
I configured the network under /etc/systemd/network/eth0.network (the filename may differ in your case, since I am using Arch Linux on all the Pis) as follows:
[Match]
Name=eth0
[Network]
Address=x.x.x.x/24 # ip of node
Gateway=x.x.x.x # ip of gateway router
Domains=default.svc.cluster.local svc.cluster.local cluster.local
DNS=10.x.x.x x.x.x.x # k3s dns ip, then ip of gateway router
After that I removed the 10.x.x.x routes with ip route del 10.x.x.x dev [flannel|cni0] on every node and restarted them.
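If it helps to narrow things down, a small sketch of checks assuming k3s's default flannel VXLAN backend (UDP port 8472) and the node/pod addresses from the output above:
# on each node: confirm the flannel VXLAN interface exists and that routes to the
# other nodes' pod subnets (10.42.x.0/24) point at flannel.1
ip -d link show flannel.1
ip route | grep 10.42.
# flannel's VXLAN traffic uses UDP 8472 by default, so that port must be open
# between all node IPs (192.168.0.140-143 here)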

Kubernetes Dashboard is not opening

My master node's IP address is 192.168.56.101. There is no worker node connected to the master yet.
master@kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master@kmaster:~$
When I deployed kubernetes-dashboard using the command below, why is the pod IP of kubernetes-dashboard-5c8f9556c4-f2jpz 192.168.189.6?
Similarly, the other pods also have different IP addresses.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master@kmaster:~$
I am also not able to access the kubernetes-dashboard UI. I am using the link
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
and the KubeDNS link https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy is also not working,
but accessing the Kubernetes master at https://192.168.56.101:6443 does work.
master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any suggestions?
Solution (see comments): Don't mix your physical and overlay network ranges.
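In practice that means picking a pod network CIDR that does not overlap the node network at install time; a sketch (10.244.0.0/16 is only an example range that avoids the 192.168.56.0/24 host network, and the Calico manifest must then be configured with the same pod CIDR):
kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16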
Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service you need kubectl port-forward, not the HTTP (API) proxy.
If you try to access the dashboard at localhost:8081, you have to run kubectl proxy --port 8081 from your console to set up the proxy between your localhost and the k8s apiserver.
If you want to access the dashboard from the apiserver directly, without the local proxy, try the following URL: https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy (assuming your service name is kubernetes-dashboard).
You can also run kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 443, then access the dashboard at https://localhost:443.
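A non-privileged variant of the same idea (8443 is an arbitrary local port chosen for this sketch; binding local port 443 usually requires root):
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443:443
# then browse to https://localhost:8443/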

Cannot access Kubernetes Dashboard

I have a K8s cluster (1 master, 2 workers) running on 3 Vagrant VMs on my computer.
I've installed the Kubernetes dashboard, as explained here.
All my pods are running correctly:
kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-n5cpm 1/1 Running 1 61m 10.244.0.4 kmaster.example.com <none> <none>
coredns-fb8b8dccf-qwcr4 1/1 Running 1 61m 10.244.0.5 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-hcjsm 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-klv4f 1/1 Running 3 56m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-lmpnd 1/1 Running 2 59m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-86qsw 1/1 Running 1 59m 10.0.2.15 kworker1.example.com <none> <none>
kube-proxy-dp29s 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-gqqq9 1/1 Running 1 56m 10.0.2.15 kworker2.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-5f7b999d65-zqbbz 1/1 Running 1 28m 10.244.1.3 kworker1.example.com <none> <none>
As you can see, the dashboard is in "Running" status.
I also ran kubectl proxy and it's serving on 127.0.0.1:8001.
But when I try to open http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I have the error:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
I'm trying to open the dashboard directly on my computer, not inside the Vagrant VM. Could that be the problem? If yes, how do I solve it? I'm able to ping my VM from my computer without any issue.
Thanks for helping me.
EDIT
Here is the output of kubectl get svc -n kube-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 96m
kubernetes-dashboard NodePort 10.109.230.83 <none> 443:30089/TCP 63m
By default, the Kubernetes dashboard is only reachable from inside the cluster. You can check this with the get svc command:
kubectl get svc -n kube-system
The default type of that service is ClusterIP; to reach it from outside the cluster you have to change it to NodePort.
To change it, follow this doc.
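A minimal sketch of that change (this assumes the dashboard service lives in kube-system, as in your output; with the 443:30089/TCP NodePort shown there, the dashboard would then be reachable on a node IP):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard
# e.g. https://172.42.42.100:30089/ (master node IP plus the allocated NodePort)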

Helm error: dial tcp *:10250: i/o timeout

Created a local cluster using Vagrant + Ansible + VirtualBox. Manually deploying works fine, but when using Helm:
:~$helm install stable/nginx-ingress --name nginx-ingress-controller --set rbac.create=true
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
Kubernetes cluster info:
:~$kubectl get nodes,po,deploy,svc,ingress --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/ubuntu18-kube-master Ready master 32m v1.13.3 10.0.51.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
node/ubuntu18-kube-node-1 Ready <none> 31m v1.13.3 10.0.52.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/nginx-server 1/1 Running 0 40s 10.244.1.5 ubuntu18-kube-node-1 <none> <none>
default pod/nginx-server-b8d78876d-cgbjt 1/1 Running 0 4m25s 10.244.1.4 ubuntu18-kube-node-1 <none> <none>
kube-system pod/coredns-86c58d9df4-5rsw2 1/1 Running 0 31m 10.244.0.2 ubuntu18-kube-master <none> <none>
kube-system pod/coredns-86c58d9df4-lfbvd 1/1 Running 0 31m 10.244.0.3 ubuntu18-kube-master <none> <none>
kube-system pod/etcd-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-apiserver-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-controller-manager-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-jffqn 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-vc6p2 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-fbgmf 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-jhs6b 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-scheduler-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/tiller-deploy-69ffbf64bc-x8lkc 1/1 Running 0 24m 10.244.1.2 ubuntu18-kube-node-1 <none> <none>
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.extensions/nginx-server 1/1 1 1 4m25s nginx-server nginx run=nginx-server
kube-system deployment.extensions/coredns 2/2 2 2 32m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns
kube-system deployment.extensions/tiller-deploy 1/1 1 1 24m tiller gcr.io/kubernetes-helm/tiller:v2.12.3 app=helm,name=tiller
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m <none>
default service/nginx-server NodePort 10.99.84.201 <none> 80:31811/TCP 12s run=nginx-server
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 32m k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.99.4.74 <none> 44134/TCP 24m app=helm,name=tiller
Vagrantfile:
...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  $hosts.each_with_index do |(hostname, parameters), index|
    ip_address = "#{$subnet}.#{$ip_offset + index}"
    config.vm.define vm_name = hostname do |vm_config|
      vm_config.vm.hostname = hostname
      vm_config.vm.box = box
      vm_config.vm.network "private_network", ip: ip_address
      vm_config.vm.provider :virtualbox do |vb|
        vb.gui = false
        vb.name = hostname
        vb.memory = parameters[:memory]
        vb.cpus = parameters[:cpus]
        vb.customize ['modifyvm', :id, '--macaddress1', "08002700005#{index}"]
        vb.customize ['modifyvm', :id, '--natnet1', "10.0.5#{index}.0/24"]
      end
    end
  end
end
Workaround for the VirtualBox issue: set a different MAC address and internal IP per VM.
It would be interesting to find a solution that can be placed in one of the configuration files (Vagrant, Ansible roles). Any ideas on the problem?
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
You're getting bitten by a very common kubernetes-on-Vagrant bug: the kubelet believes its IP address is the one on eth0, which is the NAT interface in Vagrant, rather than the address on (what I hope you have) the private_network interface in your Vagrantfile. Since operations like kubectl exec and kubectl logs involve the API server dialing the kubelet's advertised address directly, they fail in exactly the way you see.
The solution is to force kubelet to bind to the private network interface, or I guess you could switch your Vagrantfile to use the bridge network, if that's an option for you -- just so long as the interface isn't the NAT one.
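One common way to force that (a sketch, assuming a kubeadm-style systemd setup; the file path and the IP are placeholders to be set per node to its private_network address):
# /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RHEL/CentOS)
KUBELET_EXTRA_ARGS=--node-ip=10.0.52.15
# then reload and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet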
However you manage TLS certificates in the cluster, make sure that port 10250 is reachable on the nodes.
Here is an example of how I fixed it when trying to exec into a pod running on a node (an AWS instance in my case):
resource "aws_security_group" "My_VPC_Security_Group" {
...
ingress {
description = "TLS from VPC"
from_port = 10250
to_port = 10250
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
For more details you can visit [1]: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
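A quick way to confirm the kubelet port is actually reachable from the machine running Helm (a sketch using netcat; 10.0.52.15 is the worker node from the output above):
nc -zv 10.0.52.15 10250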