Kubernetes dashboard: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout - kubernetes

I have a Kubernetes cluster (1.14.0) in Vagrant and have installed Calico.
I have also installed the Kubernetes dashboard. When I use kubectl proxy to visit the dashboard, I get:
Error: 'dial tcp 192.168.1.4:8443: connect: connection refused'
Trying to reach: 'https://192.168.1.4:8443/'
Here are my pods (the dashboard is restarting frequently):
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-cj928 1/1 Running 0 11m
calico-node-4fnb6 1/1 Running 0 18m
calico-node-qjv7t 1/1 Running 0 20m
calico-policy-controller-b9b6749c6-29c44 1/1 Running 1 11m
coredns-fb8b8dccf-jjbhk 1/1 Running 0 20m
coredns-fb8b8dccf-jrc2l 1/1 Running 0 20m
etcd-k8s-master 1/1 Running 0 19m
kube-apiserver-k8s-master 1/1 Running 0 19m
kube-controller-manager-k8s-master 1/1 Running 0 19m
kube-proxy-8mrrr 1/1 Running 0 18m
kube-proxy-cdsr9 1/1 Running 0 20m
kube-scheduler-k8s-master 1/1 Running 0 19m
kubernetes-dashboard-5f7b999d65-nnztw 1/1 Running 3 2m11s
Logs of the dashboard pod:
2019/03/30 14:36:21 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
I can telnet from both master and nodes to 10.96.0.1:443.
What is configured wrongly? The rest of the cluster seems to work fine, although I see these logs from kubelet:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml"
kubelet seems to run fine on the master.
The cluster was created with this command:
kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

You should define your hostname in /etc/hosts:
# hostname
YOUR_HOSTNAME
# nano /etc/hosts
YOUR_IP YOUR_HOSTNAME
If you have set the hostname on your master but it still does not work, try:
# systemctl stop kubelet
# systemctl stop docker
# iptables --flush
# iptables -t nat --flush
# systemctl start kubelet
# systemctl start docker
You should also install the dashboard before joining worker nodes, disable your firewall, and check that you have enough free RAM.
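As a concrete sketch, using the master's name and IP from the question's kubeadm command (adjust to your own values):
# hostnamectl set-hostname k8s-master
# echo "192.168.50.10 k8s-master" >> /etc/hosts
# getent hosts k8s-master    # verify the name now resolves to the expected address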

Exclude the --node-name parameter from the kubeadm init command.
Try this command:
kubeadm init --apiserver-advertise-address=$(hostname -i) --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16
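One caveat with this on a Vagrant box: hostname -i may resolve to a loopback or NAT address rather than the host-only network, so it is worth checking what it returns before relying on it:
hostname -i        # should print the address you expect (e.g. 192.168.50.10), not 127.0.x.x or 10.0.2.15
ip -4 addr show    # lists the addresses actually configured on each interface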

For me the issue was that I needed to create a NetworkPolicy that allowed egress traffic to the Kubernetes API.
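For reference, a minimal sketch of such a policy, assuming the dashboard pod carries the label k8s-app: kubernetes-dashboard in kube-system and that the API server's real endpoint is 192.168.50.10:6443 (the 10.96.0.1 ClusterIP is normally DNAT'd to the real endpoint before policy is applied, so the rule targets the real endpoint):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dashboard-apiserver-egress   # hypothetical name
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.50.10/32   # assumed API server address
    ports:
    - protocol: TCP
      port: 6443                 # assumed API server port
EOF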

Related

Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout]

My pod is stuck in ContainerCreating status with this message:
Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "483590313b7fd092fe5eeec92356152721df3e971d942174464ac5a3f1529898" network for pod "my-nginx": networkPlugin cni failed to set up pod "my-nginx_default" network: CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "483590313b7fd092fe5eeec92356152721df3e971d942174464ac5a3f1529898", failed to clean up sandbox container "483590313b7fd092fe5eeec92356152721df3e971d942174464ac5a3f1529898" network for pod "my-nginx": networkPlugin cni failed to teardown pod "my-nginx_default" network: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout]
The worker node's state is Ready, but the output of kubectl get pods -n kube-system seems to have issues:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6dfcd885bf-ktbhb 1/1 Running 0 22h
calico-node-4fs2v 0/1 Init:RunContainerError 1 22h
calico-node-l9qvc 0/1 Running 0 22h
coredns-f9fd979d6-8pzcd 1/1 Running 0 23h
coredns-f9fd979d6-v4cq8 1/1 Running 0 23h
etcd-k8s-master 1/1 Running 1 23h
kube-apiserver-k8s-master 1/1 Running 128 23h
kube-controller-manager-k8s-master 1/1 Running 4 23h
kube-proxy-bwtwj 0/1 CrashLoopBackOff 342 23h
kube-proxy-stq7q 1/1 Running 1 23h
kube-scheduler-k8s-master 1/1 Running 4 23h
The output of the command kubectl -n kube-system logs kube-proxy-bwtwj was:
failed to try resolving symlinks in path "/var/log/pods/kube-system_kube-proxy-bwtwj_1a0f4b93-cc6f-46b9-bf29-125feba593cb/kube-proxy/348.log": lstat /var/log/pods/kube-system_kube-proxy-bwtwj_1a0f4b93-cc6f-46b9-bf29-125feba593cb/kube-proxy/348.log: no such file or directory
I see two topics here:
The default --pod-network-cidr for Calico is 192.168.0.0/16. You can use a different one, but always make sure that there are no overlaps. However, I have tested with the default one and my cluster runs with no problems. In order to start over with a proper config, you should remove the node and clean up the control plane. Then proceed with:
kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
After that, join your worker nodes with kubeadm join
Use sudo where/if needed. All necessary details can be found in the official documentation.
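If you no longer have the original join command handy, you can regenerate one on the control plane node (standard kubeadm assumed, no extra flags):
sudo kubeadm token create --print-join-command
# run the printed kubeadm join ... command on each worker node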
The failed to try resolving symlinks error means that kubelet is looking for the pod logs in the wrong directory. In order to fix it you need to pass the --log-dir=/var/log flag to kubelet. After adding the flag, run systemctl daemon-reload and restart kubelet. This has to be done on all of your nodes.
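A minimal sketch of adding the flag on a kubeadm-provisioned node, assuming the extra-flags file is /etc/default/kubelet (use /etc/sysconfig/kubelet on RPM-based distros) and that it is currently empty; otherwise append the flag to the existing KUBELET_EXTRA_ARGS line instead:
echo 'KUBELET_EXTRA_ARGS=--log-dir=/var/log' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet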
Make sure you deploy Calico before joining other nodes to your cluster. When there are other nodes in the cluster, calico-kube-controllers sometimes gets pushed to a worker node, which can lead to issues.
You need to carefully check the logs of the calico-node pods.
In my case I have some other network interfaces, and Calico's autodetection mechanism was detecting the wrong interface (IP address).
You need to consult this documentation: https://projectcalico.docs.tigera.io/reference/node/configuration.
What I did in my case was simply:
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=cidr=172.16.8.0/24
where the cidr value is my "working network".
After this, all calico-node pods restarted and suddenly everything was fine.
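If selecting by CIDR does not fit your setup, Calico also supports picking the interface by name; a hedged example (eth1 is just a placeholder for your "working" interface):
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth1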

New kubernetes install has remnants of old cluster

I did a complete tear down of a v1.13.1 cluster and am now running v1.15.0 with calico cni v3.8.0. All pods are running:
[gms@thalia0 ~]$ kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-59f54d6bbc-2mjxt 1/1 Running 0 7m23s
calico-node-57lwg 1/1 Running 0 7m23s
coredns-5c98db65d4-qjzpq 1/1 Running 0 8m46s
coredns-5c98db65d4-xx2sh 1/1 Running 0 8m46s
etcd-thalia0.ahc.umn.edu 1/1 Running 0 8m5s
kube-apiserver-thalia0.ahc.umn.edu 1/1 Running 0 7m46s
kube-controller-manager-thalia0.ahc.umn.edu 1/1 Running 0 8m2s
kube-proxy-lg4cn 1/1 Running 0 8m46s
kube-scheduler-thalia0.ahc.umn.edu 1/1 Running 0 7m40s
But when I look at the endpoints, I get the following:
[gms@thalia0 ~]$ kubectl get ep --namespace=kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 9m46s
kube-dns 192.168.16.194:53,192.168.16.195:53,192.168.16.194:53 + 3 more... 9m30s
kube-scheduler <none> 9m46s
If I look at the log for the apiserver, I get a ton of TLS handshake errors, along the lines of:
I0718 19:35:17.148852 1 log.go:172] http: TLS handshake error from 10.x.x.160:45042: remote error: tls: bad certificate
I0718 19:35:17.158375 1 log.go:172] http: TLS handshake error from 10.x.x.159:53506: remote error: tls: bad certificate
These IP addresses were from nodes in a previous cluster. I had deleted them and done a kubeadm reset on all nodes, including master, so I have no idea why these are showing up. I would assume this is why the endpoints for the controller-manager and the scheduler are showing up as <none>.
In order to completely wipe your cluster you should do the following:
1) Reset the cluster
$ sudo kubeadm reset (or the command appropriate to your cluster)
2) Wipe your local directory with configs
$ rm -rf ~/.kube/
3) Remove /etc/kubernetes/
$ sudo rm -rf /etc/kubernetes/
4) And, most importantly, get rid of your previous etcd state:
$ sudo rm -rf /var/lib/etcd/
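Note that kubeadm reset does not clean up iptables rules or CNI configuration, so for a truly clean slate you may also want something like the following (the paths are the usual defaults and may differ on your setup):
$ sudo rm -rf /etc/cni/net.d
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X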

Cannot connect to kubernetes pod from master: i/o timeout

I configured a Kubernetes cluster with one master and one node; the machines running the master and the node are not in the same network. For networking I installed Calico and all the pods are running. To test the cluster I used the get shell example, and when I run the following command from the master machine:
kubectl exec -it shell-demo -- /bin/bash
I received the error:
Error from server: error dialing backend: dial tcp 10.138.0.2:10250: i/o timeout
The IP 10.138.0.2 is on the eth0 interface of the node machine.
What configuration do I need to make to access the pod from the master?
EDIT
kubectl get all --all-namespaces -o wide output:
default shell-demo 1/1 Running 0 10s 192.168.4.2 node-1
kube-system calico-node-7wlqw 2/2 Running 0 49m 10.156.0.2 instance-1
kube-system calico-node-lnk6d 2/2 Running 0 35s 10.132.0.2 node-1
kube-system coredns-78fcdf6894-cxgc2 1/1 Running 0 50m 192.168.0.5 instance-1
kube-system coredns-78fcdf6894-gwwjp 1/1 Running 0 50m 192.168.0.4 instance-1
kube-system etcd-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-apiserver-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-controller-manager-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-proxy-b64b5 1/1 Running 0 50m 10.156.0.2 instance-1
kube-system kube-proxy-xxkn4 1/1 Running 0 35s 10.132.0.2 node-1
kube-system kube-scheduler-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
Thanks!
Before checking your status on the master, please verify the things below.
Run the following commands to open the required ports and enable bridge netfilter:
setenforce 0
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
Run the above commands on both the master and the worker node.
Then run the command below to check node status:
kubectl get nodes
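Note that the echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables setting above does not survive a reboot; a hedged way to persist it on a systemd-based distro:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system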
I had this issue too. I don't know if you're on Azure, but I am, and I solved it by deleting the tunnelfront pod and letting Kubernetes restart it:
kubectl -n kube-system delete po -l component=tunnel
which is a solution I got from here
We had the same problem. In the end we found that we had two NICs per host with two different IPs, and the routing was also messed up. So when this timeout happens, check your networking setup; make sure your network is healthy, and that should give you some good clues.

How to fix weave-net CrashLoopBackOff for the second node?

I have two VM nodes. They can see each other either by hostname (through /etc/hosts) or by IP address. One has been provisioned with kubeadm as a master, the other as a worker node. Following the instructions (http://kubernetes.io/docs/getting-started-guides/kubeadm/) I have added weave-net. The list of pods looks like the following:
vagrant@vm-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-vm-master 1/1 Running 0 3m
kube-system kube-apiserver-vm-master 1/1 Running 0 5m
kube-system kube-controller-manager-vm-master 1/1 Running 0 4m
kube-system kube-discovery-982812725-x2j8y 1/1 Running 0 4m
kube-system kube-dns-2247936740-5pu0l 3/3 Running 0 4m
kube-system kube-proxy-amd64-ail86 1/1 Running 0 4m
kube-system kube-proxy-amd64-oxxnc 1/1 Running 0 2m
kube-system kube-scheduler-vm-master 1/1 Running 0 4m
kube-system kubernetes-dashboard-1655269645-0swts 1/1 Running 0 4m
kube-system weave-net-7euqt 2/2 Running 0 4m
kube-system weave-net-baao6 1/2 CrashLoopBackOff 2 2m
CrashLoopBackOff appears for each worker node connected. I have spent several hours playing with network interfaces, but the network seems fine. I found a similar question where the answer advised looking into the logs, with no follow-up. So, here are the logs:
vagrant@vm-master:~$ kubectl logs weave-net-baao6 -c weave --namespace=kube-system
2016-10-05 10:48:01.350290 I | error contacting APIServer: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: getsockopt: connection refused; trying with blank env vars
2016-10-05 10:48:01.351122 I | error contacting APIServer: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
Failed to get peers
What am I doing wrong? Where do I go from here?
I ran into the same issue too. It seems Weave wants to connect to the Kubernetes cluster IP address, which is virtual. Just run kubectl get svc to find the cluster IP. It should give you something like this:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 100.64.0.1 <none> 443/TCP 2d
Weave picks up this IP and tries to connect to it, but the worker nodes do not know anything about it. A simple route will solve this issue. On all your worker nodes, execute:
route add 100.64.0.1 gw <your real master IP>
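If you are not sure which address to use as the gateway, one way to find it is to look at the endpoints of the default kubernetes service, which list the API server's real address:
kubectl get endpoints kubernetes
# the ENDPOINTS column shows the master's real IP and port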
This happens with a single-node setup, too. I tried several things like reapplying the configuration and recreating the cluster, but the most stable way at the moment is to perform a full tear down (as described in the docs) and bring the cluster up again.
I use these scripts for relaunching the cluster:
down.sh
#!/bin/bash
systemctl stop kubelet;
# remove all running containers
docker rm -f -v $(docker ps -q);
# unmount any tmpfs mounts kubelet still holds under /var/lib/kubelet
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
# wipe kubernetes, kubelet and etcd state
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
up.sh
#!/bin/bash
systemctl start kubelet
kubeadm init
# kubectl taint nodes --all dedicated- # single node!
kubectl create -f https://git.io/weave-kube
Edit: I would also give other pod networks a try, like Calico, if this is a weave-related issue.
The most common causes for this may be:
- presence of a firewall (e.g. firewalld on CentOS)
- network configuration (e.g. default NAT interface on VirtualBox)
Currently kubeadm is still alpha, and this is one of the issues that has already been reported by many of the alpha testers. We are looking into fixing this by documenting the most common problems; such documentation is going to be ready closer to the beta version.
Right now there exists a VirtualBox + Vagrant + Ansible reference implementation for Ubuntu and CentOS that provides solutions for the firewall, SELinux and VirtualBox NAT issues.
/usr/local/bin/weave reset
was the fix for me. Hope it's useful. And yes, make sure SELinux is set to disabled and firewalld is not running (on Red Hat / CentOS releases):
kube-system weave-net-2vlvj 2/2 Running 3 11d
kube-system weave-net-42k6p 1/2 Running 3 11d
kube-system weave-net-wvsk5 2/2 Running 3 11d

1.0.3 kibana-logging api link fails on flannel ip address timeout

I get the following error
Error: 'dial tcp 10.10.92.15:5601: i/o timeout'
Trying to reach: 'http://10.10.92.15:5601/'
10.10.92.0/24 is the flannel0 network on one of the kube slaves.
Pods are up and running
elasticsearch-logging-v1-l1hu9 1/1 Running 0 1h
elasticsearch-logging-v1-rsgby 1/1 Running 0 1h
kibana-logging-v1-4wwfg 1/1 Running 0 1h
This was due to flannel not running on the Kubernetes master. After I got flannel up, the URL /api/v1/proxy/namespaces/kube-system/services/kibana-logging worked.
The HOWTO for Kubernetes I used excluded setting up flannel on the master.