NodePort services are not accessible from master - Kubernetes

We have k8s cluster of 3 nodes.
I can access the services via any slave's public IP and NodePort. It works for every slave regardless of which node the pod actually runs on. But for some reason it doesn't work from the master.
Running kubectl get services --all-namespaces on the master node prints the expected services list:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
xxx kibana NodePort 10.100.214.218 <none> 5601:30141/TCP 1d
How can I enable this routing for the master node?
Kubernetes version: v1.8.6
Infrastructure-related output of kubectl get pods -o wide --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-k8s-master 1/1 Running 9 2d 172.20.41.199 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 1 2d 172.20.41.199 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 1 2d 172.20.41.199 k8s-master
kube-system kube-dns-545bc4bfd4-2fq94 3/3 Running 0 2d 10.36.0.14 k8s-slave2
kube-system kube-proxy-dhsl9 1/1 Running 1 2d 172.20.41.199 k8s-master
kube-system kube-proxy-mkjzn 1/1 Running 0 2d 172.20.41.194 k8s-slave1
kube-system kube-proxy-stjm6 1/1 Running 1 2d 172.20.41.195 k8s-slave2
kube-system kube-scheduler-k8s-master 1/1 Running 9 2d 172.20.41.199 k8s-master
kube-system kubernetes-dashboard-747d579ff5-qp7rh 1/1 Running 2 2d 10.32.0.2 k8s-master
kube-system weave-net-n8n64 2/2 Running 0 2d 172.20.41.194 k8s-slave1
kube-system weave-net-vh6ng 2/2 Running 1 2d 172.20.41.195 k8s-slave2
kube-system weave-net-w9mn8 2/2 Running 2 2d 172.20.41.199 k8s-master
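A quick way to narrow this down (a sketch, assuming kube-proxy runs in its default iptables mode, and using the 30141 NodePort and the master's weave pod from the listings above):
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 30141   # run on the master: is the NodePort rule programmed at all?
curl -sv http://localhost:30141/                         # run on the master: does the service answer locally?
kubectl logs -n kube-system weave-net-w9mn8 -c weave     # Weave's view from the master node
If the iptables rule exists but the curl times out, the usual suspect is the overlay path from the master to the pod network rather than kube-proxy itself.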

Related

Dashboard UI is not accessible

I'm trying to troubleshoot the Kubernetes UI Dashboard. It is not working so far. I have a cluster with three nodes, 1 master and 2 workers. The Dashboard runs on the worker machine (node1), and when the proxy is activated the Dashboard does not display; every request fails with:
Error: 'dial tcp 172.16.1.4:8443: i/o timeout' Trying to reach: 'https://172.16.1.4:8443/'
The cluster:
[admin@k8s-node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 4d21h v1.15.2
k8s-node1 Ready <none> 4d20h v1.15.2
k8s-node2 Ready <none> 4d20h v1.15.2
[admin@k8s-node1 ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-7fztc 1/1 Running 2 4d20h 172.16.0.5 k8s-master
kube-system coredns-5c98db65d4-wwb4t 1/1 Running 2 4d20h 172.16.0.4 k8s-master
kube-system etcd-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-router-bt2rb 1/1 Running 0 30m 10.1.99.11 k8s-node1
kube-system kube-router-dnft9 1/1 Running 0 30m 10.1.99.10 k8s-master
kube-system kube-router-z98ns 1/1 Running 0 29m 10.1.99.12 k8s-node2
kube-system kube-scheduler-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-8skmv 1/1 Running 0 43m 172.16.1.4 k8s-node1
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-htq9t 1/1 Running 0 43m 172.16.2.7 k8s-node2
URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
The Log for the Dashboard deployment shows the following message:
Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
I expect the Dashboard UI to load at that URL, but I get the error message instead.
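One way to separate a Dashboard problem from a pod-network problem (a sketch; the pod IP 172.16.1.4 comes from the error message above):
curl -k --max-time 5 https://172.16.1.4:8443/   # run on the master: a timeout here points at the CNI, not the Dashboard
ip route | grep 172.16.1.                       # run on the master: is there a kube-router route for node1's pod subnet?
If the direct curl also times out, the proxy error is just a symptom of the master being unable to reach node1's pod subnet.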

Understanding Kubernetes networking: pods with the same IP

I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share the same IP address appear to be on the same node.
The Kubernetes documentation says that "Every pod gets its own IP address." (https://kubernetes.io/docs/concepts/cluster-administration/networking/). I'm confused as to how some pods ended up with the same IP.
This was reported in issue 51322 and can depend on the network plugin you are using.
The issue was seen when using the basic kubenet network plugin on Linux.
Sometimes a reset/reboot can help.
I suspect the nodes have been configured with overlapping podCIDRs in such cases.
The pod CIDR can be checked with kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
Please check the Kubernetes manifests of the pods that have the same IP address as their node. If they have 'hostNetwork' set to true, then this is not an issue.
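A minimal check for both conditions (a sketch; the etcd pod name below is from a typical kubeadm cluster, adjust to your own):
kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR         # one podCIDR per node; overlaps are a misconfiguration
kubectl get pod -n kube-system etcd-master-node -o jsonpath='{.spec.hostNetwork}'  # prints 'true' for pods that share their node's IP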
[Screenshot: master-node after logging in using PuTTY]
[Screenshot: worker-node01 after logging in using PuTTY]
It clearly shows a separate CIDR for the Weave network. So it depends on the network plug-in, and some plug-ins will override the pod CIDR provided during cluster initialization.
[Screenshot: after re-deploying across the new node, worker-node02]
Yes. I have checked my 2-node cluster created using kubeadm on VMs running on AWS.
In the manifest files for the static Pods, hostNetwork: true is set.
The static Pod manifests are:
-rw------- 1 root root 2100 Feb 4 16:48 etcd.yaml
-rw------- 1 root root 3669 Feb 4 16:48 kube-apiserver.yaml
-rw------- 1 root root 3346 Feb 4 16:48 kube-controller-manager.yaml
-rw------- 1 root root 1385 Feb 4 16:48 kube-scheduler.yaml
I have checked with weave and flannel.
All other pods get IPs from the pod CIDR that was set during cluster initialization by kubeadm:
kubeadm init --pod-network-cidr=10.244.0.0/16
ubuntu@master-node:~$ kubectl get all -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/my-nginx-deployment-5976fbfd94-2n2ff 1/1 Running 0 20m 10.244.1.17 worker-node01
default pod/my-nginx-deployment-5976fbfd94-4sghq 1/1 Running 0 20m 10.244.1.12 worker-node01
default pod/my-nginx-deployment-5976fbfd94-57lfp 1/1 Running 0 20m 10.244.1.14 worker-node01
default pod/my-nginx-deployment-5976fbfd94-77nrr 1/1 Running 0 20m 10.244.1.18 worker-node01
default pod/my-nginx-deployment-5976fbfd94-m7qbn 1/1 Running 0 20m 10.244.1.15 worker-node01
default pod/my-nginx-deployment-5976fbfd94-nsxvm 1/1 Running 0 20m 10.244.1.19 worker-node01
default pod/my-nginx-deployment-5976fbfd94-r5hr6 1/1 Running 0 20m 10.244.1.16 worker-node01
default pod/my-nginx-deployment-5976fbfd94-whtcg 1/1 Running 0 20m 10.244.1.13 worker-node01
kube-system pod/coredns-f9fd979d6-nghhz 1/1 Running 0 63m 10.244.0.3 master-node
kube-system pod/coredns-f9fd979d6-pdbrx 1/1 Running 0 63m 10.244.0.2 master-node
kube-system pod/etcd-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-apiserver-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-controller-manager-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-8k9s4 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-ln6gb 1/1 Running 0 37m 172.31.3.75 worker-node01
kube-system pod/kube-scheduler-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/weave-net-jc92w 2/2 Running 1 24m 172.31.8.115 master-node
kube-system pod/weave-net-l9rg2 2/2 Running 1 24m 172.31.3.75 worker-node01
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 63m <none>
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 63m k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 63m kube-proxy k8s.gcr.io/kube-proxy:v1.19.16 k8s-app=kube-proxy
kube-system daemonset.apps/weave-net 2 2 2 2 2 <none> 24m weave,weave-npc ghcr.io/weaveworks/launcher/weave-kube:2.8.1,ghcr.io/weaveworks/launcher/weave-npc:2.8.1 name=weave-net
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.apps/my-nginx-deployment 8/8 8 8 20m nginx nginx app=my-nginx-deployment
kube-system deployment.apps/coredns 2/2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
default replicaset.apps/my-nginx-deployment-5976fbfd94 8 8 8 20m nginx nginx app=my-nginx-deployment,pod-template-hash=5976fbfd94
kube-system replicaset.apps/coredns-f9fd979d6 2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns,pod-template-hash=f9fd979d6
ubuntu@master-node:~$
I will add another worker node and check.
Note: I was previously testing with a 1-master, 3-worker cluster where pods were getting IPs from some other CIDRs (10.38.x.x and 10.39.x.x). I am not sure why, but the order in which the steps are performed matters. I could not fix that cluster.

Gluster cluster in Kubernetes: Glusterd inactive (dead) after node reboot. How to debug?

I don't know what to do to debug this. I have 1 Kubernetes master node and three slave nodes. I deployed a Gluster cluster on the three slave nodes just fine with this guide: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md.
I created volumes and everything is working. But when I reboot a slave node and the node reconnects to the master node, the glusterd.service inside the slave node shows up as dead, and nothing works after this.
[root@kubernetes-node-1 /]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
I don't know where to go from here; for example, /var/log/glusterfs/glusterd.log was last updated 3 days ago (it's not being updated with errors after a reboot or a pod deletion + recreation).
I just want to know where glusterd crashes so I can find out why.
How can I debug this crash?
All the nodes (master + slaves) run on Ubuntu Desktop 18 LTS 64-bit VirtualBox VMs.
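Since the log file is stale, the systemd journal is probably the better starting point (a sketch, run on the rebooted slave):
sudo journalctl -u glusterd.service -b --no-pager   # the unit's output and exit status since this boot
sudo /usr/sbin/glusterd --debug                     # run glusterd in the foreground with debug logging to see where it dies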
requested logs (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
glusterfs pod/glusterfs-7nl8l 0/1 Running 62 22h
glusterfs pod/glusterfs-wjnzx 1/1 Running 62 2d21h
glusterfs pod/glusterfs-wl4lx 1/1 Running 112 41h
glusterfs pod/heketi-7495cdc5fd-hc42h 1/1 Running 0 22h
kube-system pod/coredns-86c58d9df4-n2hpk 1/1 Running 0 6d12h
kube-system pod/coredns-86c58d9df4-rbwjq 1/1 Running 0 6d12h
kube-system pod/etcd-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-apiserver-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-controller-manager-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-flannel-ds-amd64-785q8 1/1 Running 5 3d19h
kube-system pod/kube-flannel-ds-amd64-8sj2z 1/1 Running 8 3d19h
kube-system pod/kube-flannel-ds-amd64-v62xb 1/1 Running 0 3d21h
kube-system pod/kube-flannel-ds-amd64-wx4jl 1/1 Running 7 3d21h
kube-system pod/kube-proxy-7f6d9 1/1 Running 5 3d19h
kube-system pod/kube-proxy-7sf9d 1/1 Running 0 6d12h
kube-system pod/kube-proxy-n9qxq 1/1 Running 8 3d19h
kube-system pod/kube-proxy-rwghw 1/1 Running 7 3d21h
kube-system pod/kube-scheduler-kubernetes-master-work 1/1 Running 0 6d12h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h
elastic service/glusterfs-dynamic-9ad03769-2bb5-11e9-8710-0800276a5a8e ClusterIP 10.98.38.157 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-a77e02ca-2bb4-11e9-8710-0800276a5a8e ClusterIP 10.97.203.225 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-ad16ed0b-2bb6-11e9-8710-0800276a5a8e ClusterIP 10.105.149.142 <none> 1/TCP 2d19h
glusterfs service/heketi ClusterIP 10.101.79.224 <none> 8080/TCP 2d20h
glusterfs service/heketi-storage-endpoints ClusterIP 10.99.199.190 <none> 1/TCP 2d20h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6d12h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
glusterfs daemonset.apps/glusterfs 3 3 0 3 0 storagenode=glusterfs 2d21h
kube-system daemonset.apps/kube-flannel-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 3d21h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 3d21h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 3d21h
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 <none> 6d12h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
glusterfs deployment.apps/heketi 1/1 1 0 2d20h
kube-system deployment.apps/coredns 2/2 2 2 6d12h
NAMESPACE NAME DESIRED CURRENT READY AGE
glusterfs replicaset.apps/heketi-7495cdc5fd 1 1 0 2d20h
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6d12h
requested:
tasos@kubernetes-master-work:~$ kubectl logs -n glusterfs glusterfs-7nl8l
env variable is set. Update in gluster-blockd.service
Please check these similar topics:
GlusterFS deployment on k8s cluster-- Readiness probe failed: /usr/local/bin/status-probe.sh
and
https://github.com/gluster/gluster-kubernetes/issues/539
Check the tcmu-runner.log file to debug it.
UPDATE:
I think this will be your issue:
https://github.com/gluster/gluster-kubernetes/pull/557
PR is prepared, but not merged.
UPDATE 2:
https://github.com/gluster/glusterfs/issues/417
Be sure that rpcbind is installed.
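To follow up on both hints above (a sketch; the pod name comes from the listing above, and the log path assumes the default gluster container layout):
kubectl exec -n glusterfs glusterfs-7nl8l -- tail -n 50 /var/log/glusterfs/tcmu-runner.log
systemctl status rpcbind   # run on each gluster node; glusterd needs rpcbind installed and active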

kube-system ingress pod goes to CrashLoopBackOff during openstack-helm installation

I'm trying to install openstack-helm on bare metal. While installing the ingress pods, the ingress pod in the openstack namespace comes up successfully, but the ingress pod in the kube-system namespace goes to CrashLoopBackOff. The following is the list of pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-lbxpt 1/1 Running 0 3d 172.17.0.1
kube-system calico-kube-policy-controllers-64b8674d9-dwq8k 1/1 Running 0 3d 172.17.0.1
kube-system calico-node-sfq89 2/2 Running 0 3d 172.17.0.1
kube-system etcd-megam-1 1/1 Running 0 3d 172.17.0.1
kube-system ingress-error-pages-6bfc8d875-tmqvd 1/1 Running 0 3d 192.168.232.131
kube-system ingress-rbv77 0/1 CrashLoopBackOff 1117 3d 172.17.0.1
kube-system kube-apiserver-megam-1 1/1 Running 0 3d 172.17.0.1
kube-system kube-controller-manager-megam-1 1/1 Running 0 3d 172.17.0.1
kube-system kube-dns-85648cfc65-pq44v 3/3 Running 0 3d 192.168.232.129
kube-system kube-proxy-n9zts 1/1 Running 0 3d 172.17.0.1
kube-system kube-scheduler-megam-1 1/1 Running 0 3d 172.17.0.1
kube-system tiller-deploy-5c9c77f7c-bclsk 1/1 Running 0 3d 192.168.232.130
openstack ingress-c7f9b544c-984wk 1/1 Running 0 3d 192.168.232.132
openstack ingress-error-pages-57957f47f-25dgr 1/1 Running 0 3d 192.168.232.133
Even the nova pods are not coming up. Can anyone help me fix this?
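The usual first steps for a CrashLoopBackOff (using the pod name from the listing above):
kubectl logs -n kube-system ingress-rbv77 --previous   # log of the last crashed container
kubectl describe pod -n kube-system ingress-rbv77      # events, exit codes, and probe failures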

kube-dns and weave-net not starting

I am deploying Kubernetes 1.4 on Ubuntu 16 on Raspberry Pi 3, following the instructions at http://kubernetes.io/docs/getting-started-guides/kubeadm/. The master starts and the minion joins with no problem, but when I add Weave, kube-dns won't start. Here are the pods:
k8s@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-master 1/1 Running 1 23h
kube-system kube-apiserver-k8s-master 1/1 Running 3 23h
kube-system kube-controller-manager-k8s-master 1/1 Running 1 23h
kube-system kube-discovery-1943570393-ci2m9 1/1 Running 1 23h
kube-system kube-dns-4291873140-ia4y8 0/3 ContainerCreating 0 23h
kube-system kube-proxy-arm-nfvvy 1/1 Running 0 1h
kube-system kube-proxy-arm-tcnta 1/1 Running 1 23h
kube-system kube-scheduler-k8s-master 1/1 Running 1 23h
kube-system weave-net-4gqd1 0/2 CrashLoopBackOff 54 1h
kube-system weave-net-l758i 0/2 CrashLoopBackOff 44 1h
The events log doesn't show anything, and getting logs for kube-dns doesn't return anything either.
What can I do to debug?
kube-dns won't start until the network is up.
Look in the kubelet logs on each machine for more information about the crash that is causing the CrashLoopBackOff.
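For example, on a kubeadm install where kubelet runs under systemd (a sketch; the pod name comes from the listing above):
sudo journalctl -u kubelet -b --no-pager | tail -n 100   # run on each machine: kubelet's view of the container crashes
kubectl logs -n kube-system weave-net-4gqd1 -c weave     # the weave container's own log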
How did you get ARM images for Weave Net? The weaveworks/weave-kube image on DockerHub is only built for x64.
Edit: as @pidster says, Weave Net now supports ARM.
UPDATE: As Bryan pointed out, Flannel is not the only overlay network anymore.
Note these two hints in the kubeadm install documentation:
Flannel is the only overlay network with ARM support.
If you are on an architecture other than amd64, you should use the Flannel overlay network as described in the multi-platform section.
When using Flannel, you need to run kubeadm init --pod-network-cidr=10.244.0.0/16
Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway. If you want to use a different interface, specify the --api-advertise-addresses=<ip-address> argument to kubeadm init. If you want to use Flannel as the pod network, specify --pod-network-cidr=10.244.0.0/16 if you're using the daemonset manifest below. However, please note that this is not required for any other networks besides Flannel.
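Put together, a minimal Flannel bring-up looks like this (a sketch; the manifest URL has moved over the years, so adjust it to the Flannel release you use):
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml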
You may also like to check my automated step-by-step installation for Raspberry Pi 3 with Ansible; it has no DNS issues and will probably work with Ubuntu 16 as well:
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox-894550917-7vj3z 1/1 Running 0 15h
default busybox-894550917-p9vnl 1/1 Running 1 3d
default gogs-3464422143-cf5wb 1/1 Running 0 16h
kube-system dummy-2501624643-pxmgz 1/1 Running 2 3d
kube-system etcd-master.cluster.local 1/1 Running 2 3d
kube-system kube-apiserver-master.cluster.local 1/1 Running 2 3d
kube-system kube-controller-manager-master.cluster.local 1/1 Running 2 3d
kube-system kube-discovery-1659614412-vrhv4 1/1 Running 2 3d
kube-system kube-dns-4211557627-kpsj4 4/4 Running 8 3d
kube-system kube-flannel-ds-d1bgg 2/2 Running 6 3d
kube-system kube-flannel-ds-fcp4b 2/2 Running 6 3d
kube-system kube-flannel-ds-n7p3m 2/2 Running 6 3d
kube-system kube-flannel-ds-tn7nd 2/2 Running 6 3d
kube-system kube-flannel-ds-vpk4w 2/2 Running 6 3d
kube-system kube-proxy-5nmtn 1/1 Running 2 3d
kube-system kube-proxy-gq7bz 1/1 Running 2 3d
kube-system kube-proxy-lkkgm 1/1 Running 2 3d
kube-system kube-proxy-mlh3v 1/1 Running 1 3d
kube-system kube-proxy-sg8n8 1/1 Running 2 3d
kube-system kube-scheduler-master.cluster.local 1/1 Running 2 3d
kube-system kubernetes-dashboard-3507263287-h9q33 1/1 Running 2 3d