Kubernetes Dashboard is not opening - kubernetes

My master node's IP address is 192.168.56.101. There is no worker node connected to the master yet.
master#kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master#kmaster:~$
When I deployed the kubernetes-dashboard using the command below, why is the IP address of the running kubernetes-dashboard-5c8f9556c4-f2jpz pod 192.168.189.6?
Similarly, the other pods also have different IP addresses.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
master#kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master#kmaster:~$
I am also not able to access the kubernetes-dashboard UI. I am using the link
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
and the KubeDNS link https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy is not working either,
but accessing the Kubernetes master at https://192.168.56.101:6443 works.
master#kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any suggestions?

Solution (see comments): Don't mix your physical and overlay network ranges. Here Calico's default pod CIDR (192.168.0.0/16) overlaps the host network (192.168.56.0/24), which is why pod IPs such as 192.168.189.6 look like they belong to another physical LAN.
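For example (a minimal sketch, assuming a kubeadm cluster with Calico; the 10.100.0.0/16 value is only illustrative, any unused private range works):
# Re-initialize with a pod CIDR that does not overlap the host network 192.168.56.0/24
kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.100.0.0/16
# Calico must be told the same range: set CALICO_IPV4POOL_CIDR to 10.100.0.0/16
# in calico.yaml before applying it, since its default pool is 192.168.0.0/16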
Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service you need kubectl port-forward, not the HTTP (API) proxy.
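A sketch of such a query (the local port 5353 is arbitrary; kubectl port-forward carries TCP only, hence the +tcp flag for dig):
kubectl -n kube-system port-forward svc/kube-dns 5353:53
# in a second terminal:
dig +tcp @127.0.0.1 -p 5353 kubernetes.default.svc.cluster.local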

If you try to access the dashboard at localhost:8081, you have to run kubectl proxy --port 8081 from your console to set up the proxy between your localhost and the k8s apiserver.
If you want to access the dashboard from the apiserver directly, without the local proxy, try the following URL: https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy (assuming your service name is kubernetes-dashboard).
You can also run kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 443, then access the dashboard at https://localhost:443.
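The three options side by side, as a sketch (local ports are arbitrary; 8443 avoids needing root for the privileged port 443):
# Option 1: local API proxy
kubectl proxy --port 8081
# then open http://localhost:8081/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Option 2: apiserver directly (the browser needs valid client credentials)
# https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy
# Option 3: port-forward straight to the service
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# then open https://localhost:8443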

Related

How to communicate with a daemonset pod from another pod on another node?

I have a daemonset that runs on all nodes.
Every pod listens on port 34567. I want a pod on a different node to communicate with these daemonset pods. How can I achieve that?
Find the target Pod's IP address as shown below
controlplane $ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-42pq8 1/1 Running 1 5m43s 10.88.0.4 node01 <none> <none>
coredns-fb8b8dccf-f9n5x 1/1 Running 1 5m43s 10.88.0.3 node01 <none> <none>
etcd-controlplane 1/1 Running 0 4m38s 172.17.0.23 controlplane <none> <none>
katacoda-cloud-provider-74dc75cf99-2jrpt 1/1 Running 3 5m42s 10.88.0.2 node01 <none> <none>
kube-apiserver-controlplane 1/1 Running 0 4m33s 172.17.0.23 controlplane <none> <none>
kube-controller-manager-controlplane 1/1 Running 0 4m45s 172.17.0.23 controlplane <none> <none>
kube-keepalived-vip-smkdc 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-8sxkt 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-jdcqc 1/1 Running 0 5m43s 172.17.0.23 controlplane <none> <none>
kube-scheduler-controlplane 1/1 Running 0 4m47s 172.17.0.23 controlplane <none> <none>
weave-net-8cxqg 2/2 Running 1 5m27s 172.17.0.26 node01 <none> <none>
weave-net-s4tcj 2/2 Running 1 5m43s 172.17.0.23 controlplane <none> <none>
Next "exec" into the originating pod - kube-proxy-8sxkt in my example
kubectl -n kube-system exec -it kube-proxy-8sxkt -- sh
Next, use the destination pod's IP and port (10256 in my example) to connect. Note that you may have to install curl/telnet if your originating container's image does not include it:
# curl telnet://172.17.0.23:10256
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
You can connect via the pod's IP.
Note: this IP is only reachable from inside the k8s cluster.
The pod IP is an option you can use, but keep in mind that pod IPs can change from time to time due to deployment and scaling changes.
If you have a fixed number of nodes and not much autoscaling, I would suggest exposing the daemonset with a service of type NodePort (see the sketch below).
If you want to connect to the pod on a specific node, you can use that node's IP together with the NodePort service:
NodeIP:NodePort
Read more at: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
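A hypothetical manifest for that NodePort service (the label app=my-daemon and the nodePort value 30567 are assumptions; kubectl expose does not support daemonsets, hence the inline YAML):
# Hypothetical NodePort service in front of the daemonset pods
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-nodeport
spec:
  type: NodePort
  selector:
    app: my-daemon        # must match the daemonset's pod labels
  ports:
  - port: 34567
    targetPort: 34567     # the port each daemonset pod listens on
    nodePort: 30567       # must be in the 30000-32767 range
EOF
# then, from any client that can reach the node:
# curl <NodeIP>:30567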
If you don't need a specific pod and any of the daemonset's replicas will do, you can use the service name to let pods connect to each other:
my-svc.my-namespace.svc.cluster-domain.example
Read more about service and pod DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
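As a sketch of that service-name approach, a plain ClusterIP service over the daemonset pods lets any replica answer; the name my-daemon-svc and label app=my-daemon are assumptions:
# Hypothetical ClusterIP service; kube-proxy picks one healthy replica per connection
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-svc
spec:
  selector:
    app: my-daemon
  ports:
  - port: 34567
EOF
# from any other pod (default namespace assumed):
# wget -qO- my-daemon-svc.default.svc.cluster.local:34567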

K8s pods running on different nodes can't communicate with each other

I have a k8s cluster with two nodes, a master and a worker, and installed Calico.
I initialized the cluster and installed Calico with the following commands:
# Initialize cluster
kubeadm init --apiserver-advertise-address=<MatserNodePublicIP> --pod-network-cidr=192.168.0.0/16
# Install Calico. Refer to official document
# https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
After that, I found that pods running on different nodes can't communicate with each other, but pods running on the same node can.
Here are my operations:
# With the following command, I ran an nginx pod scheduled to the worker node,
# which was assigned pod IP 192.168.199.72
kubectl run nginx --image=nginx
# With the following command, I ran a busybox pod scheduled to the master node,
# which was assigned pod IP 192.168.119.197
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
# In the busybox shell, I executed the following command
# ('>' represents command output)
wget 192.168.199.72
> Connecting to 192.168.199.72 (192.168.199.72:80)
> wget: can't connect to remote host (192.168.199.72): Connection timed out
However, if the nginx pod runs on the master node (same as busybox), wget outputs the correct welcome HTML.
(To schedule the nginx pod to the master node, I cordoned the worker node and restarted the nginx pod.)
I also tried scheduling both the nginx and busybox pods to the worker node, and the wget output was the correct welcome HTML.
Here is my cluster status; everything looks fine. I searched everything I could find but couldn't find a solution.
The master and worker node can ping each other via their private IPs.
For the firewall:
systemctl status firewalld
> Unit firewalld.service could not be found.
For node information:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pro-con-scrapydmanager Ready control-plane,master 26h v1.21.2 10.120.0.5 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://20.10.5
pro-con-scraypd-01 Ready,SchedulingDisabled <none>
For pod information:
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busybox 0/1 Error 0 24h 192.168.199.72 pro-con-scrapydmanager <none> <none>
default nginx 1/1 Running 1 26h 192.168.119.197 pro-con-scraypd-01 <none> <none>
kube-system calico-kube-controllers-78d6f96c7b-msrdr 1/1 Running 1 26h 192.168.199.77 pro-con-scrapydmanager <none> <none>
kube-system calico-node-gjhwh 1/1 Running 1 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system calico-node-x8d7g 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-mllm5 1/1 Running 1 26h 192.168.199.78 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-whfnn 1/1 Running 1 26h 192.168.199.75 pro-con-scrapydmanager <none> <none>
kube-system etcd-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-apiserver-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-controller-manager-pro-con-scrapydmanager 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-proxy-84cxb 1/1 Running 2 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system kube-proxy-nj2tq 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-scheduler-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
lens-metrics kube-state-metrics-78596b555-zxdst 1/1 Running 1 26h 192.168.199.76 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-ggwtc 1/1 Running 1 26h 192.168.199.73 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-sbz6t 1/1 Running 1 26h 192.168.119.196 pro-con-scraypd-01 <none> <none>
lens-metrics prometheus-0 1/1 Running 1 26h 192.168.199.74 pro-con-scrapydmanager <none> <none>
For services:
kubectl get services -o wide --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
default nginx ClusterIP 10.99.117.158 <none> 80/TCP 24h run=nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 26h k8s-app=kube-dns
lens-metrics kube-state-metrics ClusterIP 10.104.32.63 <none> 8080/TCP 26h name=kube-state-metrics
lens-metrics node-exporter ClusterIP None <none> 80/TCP 26h name=node-exporter,phase=prod
lens-metrics prometheus ClusterIP 10.111.86.164 <none> 80/TCP 26h name=prometheus
OK, it was the firewall's fault. I opened all of the following ports on my master node and recreated my cluster; then everything worked and the cni0 interface appeared, although I still don't know why.
While troubleshooting, I found that the cni0 interface is important. Without it, I could not ping pods running on a different node.
(Refer: https://docs.projectcalico.org/getting-started/bare-metal/requirements)
Configuration | Host(s) | Connection type | Port/protocol
Calico networking (BGP) | All | Bidirectional | TCP 179
Calico networking with IP-in-IP enabled (default) | All | Bidirectional | IP-in-IP, often represented by its protocol number 4
Calico networking with VXLAN enabled | All | Bidirectional | UDP 4789
Calico networking with Typha enabled | Typha agent hosts | Incoming | TCP 5473 (default)
flannel networking (VXLAN) | All | Bidirectional | UDP 4789
All | kube-apiserver host | Incoming | Often TCP 443 or 6443*
etcd datastore | etcd hosts | Incoming | Officially TCP 2379 but can vary
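As a sketch with plain iptables (firewalld was not found on these hosts), the openings for the default Calico setup might look like this; adjust source ranges and pick the rules matching your encapsulation mode:
# Calico BGP peering between nodes
iptables -A INPUT -p tcp --dport 179 -j ACCEPT
# IP-in-IP encapsulation (protocol number 4, Calico's default)
iptables -A INPUT -p 4 -j ACCEPT
# kube-apiserver (master only)
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
# etcd (master only; officially 2379, but ranges vary by setup)
iptables -A INPUT -p tcp --dport 2379:2380 -j ACCEPT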

Is it possible to show the k8s network topology using kubectl?

I installed minikube on my CentOS 7.7 server.
There are several pods in it:
[dele#att root]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-4p6xg 1/1 Running 1 23h 172.18.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-proxy-4k468 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 2 23h 172.17.0.2 minikube <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-k7zpn 1/1 Running 1 23h 172.18.0.3 minikube <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-f9swt 1/1 Running 1 23h 172.18.0.4 minikube <none> <none>
but I can not see a clear network topology diagram. Is it possible to show the network topology using kubectl?
This is not possible out of the box with Kubernetes (and kubectl) as far as I know.
With additional software in your cluster, I know about three possibilities for visualization:
Istio can visualize the communication within the mesh with Kiali (for reference: https://istio.io/latest/docs/tasks/observability/kiali/)
The second option is spekt8
Weavescope comes with agents that gather data and visualize it
Besides these options, others may exist, and I would really like to see more of them, because not everyone wants to add Istio and accept the performance impact just to visualize the pod/network landscape.
And as far as I understand spekt8, it is more about visualizing relations between Kubernetes resources than about network topology.
Weavescope needs cluster administration rights, so it isn't advisable to make it publicly accessible without setting up some form of authentication (see the install sketch below).
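For instance, Weave Scope can be installed from its stock manifest and reached through a port-forward; a sketch (the URL and the weave-scope-component=app label come from Weave's published instructions, so verify them against the current docs):
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl port-forward -n weave "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040
# then open http://localhost:4040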

Cannot access to Kubernetes Dashboard

I have a K8s cluster (1 master, 2 workers) running on 3 Vagrant VMs on my computer.
I've installed the Kubernetes dashboard, as explained here.
All my pods are running correctly:
kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-n5cpm 1/1 Running 1 61m 10.244.0.4 kmaster.example.com <none> <none>
coredns-fb8b8dccf-qwcr4 1/1 Running 1 61m 10.244.0.5 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-hcjsm 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-klv4f 1/1 Running 3 56m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-lmpnd 1/1 Running 2 59m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-86qsw 1/1 Running 1 59m 10.0.2.15 kworker1.example.com <none> <none>
kube-proxy-dp29s 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-gqqq9 1/1 Running 1 56m 10.0.2.15 kworker2.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-5f7b999d65-zqbbz 1/1 Running 1 28m 10.244.1.3 kworker1.example.com <none> <none>
As you can see, the dashboard is in "Running" status.
I also ran kubectl proxy, and it's serving on 127.0.0.1:8001.
But when I try to open http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I get the error:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
I'm trying to open the dashboard directly on my computer, not inside the Vagrant VM. Could that be the problem? If yes, how do I solve it? I'm able to ping my VM from my computer without any issue.
Thanks for helping me.
EDIT
Here is the output of kubectl get svc -n kube-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 96m
kubernetes-dashboard NodePort 10.109.230.83 <none> 443:30089/TCP 63m
By default, the Kubernetes dashboard is only reachable from within the cluster. You can check its service with the get svc command:
kubectl get svc -n kube-system
The default type of that service is ClusterIP; to reach it from outside the cluster you have to change it to NodePort.
To change it, follow this doc; a minimal patch is sketched below.
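A minimal way to flip the type, assuming the service lives in kube-system as in your output:
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard
# with the NodePort from your output (30089), the dashboard is then reachable
# from your computer at https://<any-node-IP>:30089, e.g. https://172.42.42.100:30089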

Kubernetes services sometimes do not respond

My cluster contains 1 master and 2 worker nodes, with one nginx deployment of 2 replicas and 1 service created. When I try to access the service via curl <ClusterIP>:<port> from either of the 2 worker nodes, it sometimes returns the nginx welcome page, but sometimes it hangs and the connection is refused or times out.
I checked the Kubernetes service, pods and endpoints and they are fine, but I have no clue what is going on. Please advise.
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 23d v1.12.2 192.168.205.10 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
k8s-worker1 Ready <none> 23d v1.12.2 192.168.205.11 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
k8s-worker2 Ready <none> 23d v1.12.2 192.168.205.12 <none> Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default my-nginx-756f645cd7-pfdck 1/1 Running 0 5m23s 10.244.2.39 k8s-worker2 <none>
default my-nginx-756f645cd7-xpbnp 1/1 Running 0 5m23s 10.244.1.40 k8s-worker1 <none>
kube-system coredns-576cbf47c7-ljx68 1/1 Running 18 23d 10.244.0.38 k8s-master <none>
kube-system coredns-576cbf47c7-nwlph 1/1 Running 18 23d 10.244.0.39 k8s-master <none>
kube-system etcd-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-apiserver-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-flannel-ds-54xnb 1/1 Running 2 2d5h 192.168.205.12 k8s-worker2 <none>
kube-system kube-flannel-ds-9q295 1/1 Running 2 2d5h 192.168.205.11 k8s-worker1 <none>
kube-system kube-flannel-ds-q25xw 1/1 Running 2 2d5h 192.168.205.10 k8s-master <none>
kube-system kube-proxy-gkpwp 1/1 Running 15 23d 192.168.205.11 k8s-worker1 <none>
kube-system kube-proxy-gncjh 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kube-proxy-m4jfm 1/1 Running 15 23d 192.168.205.12 k8s-worker2 <none>
kube-system kube-scheduler-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master <none>
kube-system kubernetes-dashboard-77fd78f978-4r62r 1/1 Running 15 23d 10.244.1.38 k8s-worker1 <none>
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d <none>
my-nginx ClusterIP 10.98.9.75 <none> 80/TCP 75s run=my-nginx
vagrant#k8s-master:~/_projects/tmp1$ sudo kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.205.10:6443 23d
my-nginx 10.244.1.40:80,10.244.2.39:80 101s
This sounds odd, but it could be that one of your pods is serving traffic and the other is not. You can try shelling into the pods:
$ kubectl exec -it my-nginx-756f645cd7-pfdck -- sh
$ kubectl exec -it my-nginx-756f645cd7-xpbnp -- sh
You can see if they are listening on port 80:
$ curl localhost:80
You can also check whether your service has the two endpoints 10.244.1.40:80 and 10.244.2.39:80:
$ kubectl get ep my-nginx
$ kubectl get ep my-nginx -o=yaml
Also, try to connect to each one of your endpoints from a node:
$ curl 10.244.1.40:80
$ curl 10.244.2.39:80
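To make the intermittent failures visible, you can also loop curl against the ClusterIP from a node and watch which requests hang (the 2-second timeout is arbitrary):
$ for i in $(seq 1 20); do curl -s -m 2 -o /dev/null -w "%{http_code}\n" 10.98.9.75:80; done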