VirtualBox Kubernetes NodePort access

Morning,
I have a simple nginx setup that uses a NodePort service to expose it on an alternate port, 30000. I cannot figure out how to actually access it from my workstation, the machine that has VirtualBox installed.
Some basic stuff:
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP          25h
nginx-55bc597fbf-zb2ml   ClusterIP   10.101.124.73    <none>        8080/TCP         24h
nginx-service-np         NodePort    10.105.157.230   <none>        8082:30000/TCP   22h
user-login-service       NodePort    10.106.129.60    <none>        5000:31395/TCP   38m
I am using flannel
kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 25h v1.15.1
k8s-worker1 Ready <none> 94m v1.15.1
k8s-worker2 Ready <none> 98m v1.15.1
I set up port forwarding on the NAT adapter, where it is supposed to forward 30000 to 80, and also did 31395 to 31396 for the user-login-service.
Accessing the master IP at https://192.168.56.101:80 or https://192.168.56.101:31396 fails. I did try http as well, but cluster-info seems to show the master using https, and the kubernetes service is using 443/TCP.
There are two adapters on the master and the workers. One adapter is NAT and is used to allow outbound traffic (e.g., for apt-get commands); it assigns the same 10.0.3.15 address to all three nodes.
The other adapter is host-only and is what gives the servers their addresses on the 192.168.56.0 network; I set those as static using netplan.
The three servers can see each other fine, and external traffic works fine.
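For reference, my understanding is that a NodePort publishes the service on every node's IP at the node port, so the direct, un-forwarded form of what I am trying would be something like this (a sketch; it assumes nothing on the nodes blocks ports 30000/31395):
curl http://192.168.56.101:30000/    # nginx-service-np via the master's host-only IP
curl http://192.168.56.101:31395/    # user-login-service; a worker's host-only IP should work the same way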
kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4xg8h                    1/1     Running   17         120m
coredns-5c98db65d4-xn797                    1/1     Running   17         120m
etcd-k8s-master                             1/1     Running   8          25h
kube-apiserver-k8s-master                   1/1     Running   8          25h
kube-controller-manager-dashap-k8s-master   1/1     Running   12         25h
kube-flannel-ds-amd64-6fw7x                 1/1     Running   0          25h
kube-flannel-ds-amd64-hd4ng                 1/1     Running   0          122m
kube-flannel-ds-amd64-z2wls                 1/1     Running   0          126m
kube-proxy-g8k5l                            1/1     Running   0          25h
kube-proxy-khn67                            1/1     Running   0          126m
kube-proxy-zsvqs                            1/1     Running   0          122m
kube-scheduler-k8s-master                   1/1     Running   10         25h
weave-net-2l5cs                             2/2     Running   0          44m
weave-net-n4zmr                             2/2     Running   0          44m
weave-net-v6t74                             2/2     Running   0          44m
This is my first setup, so it is hard for me to troubleshoot. Any help on how to reach the two services from a browser on my workstation, and not from within the nodes, would be appreciated.

Related

Pods not accessible (timeout) on 3 Node cluster created in AWS ec2 from master

I have a 3-node cluster in AWS EC2 (CentOS 8 AMI).
When I try to access pods scheduled on a worker node from the master:
kubectl exec -it kube-flannel-ds-amd64-lfzpd -n kube-system /bin/bash
Error from server: error dialing backend: dial tcp 10.41.12.53:10250: i/o timeout
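(The i/o timeout suggests the API server on the master cannot reach the kubelet on worker1 at 10.41.12.53:10250; a quick hedged connectivity check from the master, assuming nc is installed, would be:)
nc -vz 10.41.12.53 10250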
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-54ff9cd656-8mpbx 1/1 Running 2 7d21h 10.244.0.7 master <none> <none>
kube-system coredns-54ff9cd656-xcxvs 1/1 Running 2 7d21h 10.244.0.6 master <none> <none>
kube-system etcd-master 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kube-system kube-flannel-ds-amd64-8zgpw 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kube-system kube-flannel-ds-amd64-lfzpd 1/1 Running 2 7d21h 10.41.12.53 worker1 <none> <none>
kube-system kube-flannel-ds-amd64-nhw5j 1/1 Running 2 7d21h 10.41.15.9 worker3 <none> <none>
kube-system kube-flannel-ds-amd64-s6nms 1/1 Running 2 7d21h 10.41.15.188 worker2 <none> <none>
kube-system kube-proxy-47s8k 1/1 Running 2 7d21h 10.41.15.9 worker3 <none> <none>
kube-system kube-proxy-6lbvq 1/1 Running 2 7d21h 10.41.15.188 worker2 <none> <none>
kube-system kube-proxy-vhmfp 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kube-system kube-proxy-xwsnk 1/1 Running 2 7d21h 10.41.12.53 worker1 <none> <none>
kube-system kube-scheduler-master 1/1 Running 2 7d21h 10.41.14.198 master <none> <none>
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 7d21h v1.13.10
worker1 Ready <none> 7d21h v1.13.10
worker2 Ready <none> 7d21h v1.13.10
worker3 Ready <none> 7d21h v1.13.10
I tried the steps below on all nodes (a command sketch follows), but no luck so far:
iptables -w -P FORWARD ACCEPT on all nodes
Turn on Masquerade
Turn on port 10250/tcp
Turn on port 8472/udp
Start kubelet
Any pointer would be helpful.
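For reference, a hedged sketch of what those steps look like on CentOS 8 with firewalld (assuming firewalld is the active firewall and the commands run as root on every node):
iptables -w -P FORWARD ACCEPT
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
systemctl start kubelet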
Flannel does not support NFT (nftables), and since you are using CentOS 8, you can't fall back to legacy iptables.
Your best bet in this situation would be to switch to Calico.
You have to update the Calico DaemonSet with:
....
Environment:
  FELIX_IPTABLESBACKEND: NFT
....
or use version 3.12 or newer, as it adds
Autodetection of iptables backend
Previous versions of Calico required you to specify the host’s iptables backend (one of NFT or Legacy). With this release, Calico can now autodetect the iptables variant on the host by setting the Felix configuration parameter IptablesBackend to Auto. This is useful in scenarios where you don’t know what the iptables backend might be such as in mixed deployments. For more information, see the documentation for iptables dataplane configuration
Or switch to Ubuntu 20.04. Ubuntu doesn't use nftables yet.
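If you go the environment-variable route, a hedged way to apply it, assuming the DaemonSet is named calico-node in kube-system (the default in the Calico manifests):
kubectl -n kube-system set env daemonset/calico-node FELIX_IPTABLESBACKEND=NFT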
The issue was because of inbound ports in the SG (security group). After I added these ports to the SG inbound rules, I was able to get past the issue:
2222
24007
24008
49152-49251
My original installer script does not need these steps when running on VMs or a standalone machine.
Since SGs are specific to EC2, the ports have to be allowed in the inbound rules.
Point to note: all my nodes (master and workers) are in the same SG; even then the ports have to be opened in an inbound rule, since that's the way SGs work.
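For illustration, a hedged AWS CLI sketch of one such inbound rule; sg-0123456789abcdef0 is a placeholder for the shared SG, and using the SG itself as the source keeps the rule limited to traffic between the nodes:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2222 --source-group sg-0123456789abcdef0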

Is it possible to show the k8s network topology using kubectl?

I installed minikube on my CentOS 7.7 server.
There are several pods in it:
[dele#att root]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-4p6xg 1/1 Running 1 23h 172.18.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 22h 172.17.0.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-proxy-4k468 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 1 23h 172.17.0.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 2 23h 172.17.0.2 minikube <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-k7zpn 1/1 Running 1 23h 172.18.0.3 minikube <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-f9swt 1/1 Running 1 23h 172.18.0.4 minikube <none> <none>
But I cannot see a clear network topology diagram. Is it possible to show the network topology using kubectl?
This is not possible out of the box with Kubernetes (and kubectl), as far as I know.
With additional software in your cluster, I know of three possibilities for visualization:
Istio can visualize the communication within the mesh with Kiali (for reference: https://istio.io/latest/docs/tasks/observability/kiali/)
The second option is spekt8
Weave Scope comes with agents that gather data and visualize it
Besides these options others could exist, and I would really like to see more, because not everyone wants to add Istio and accept the performance impact just to visualize the pod/network landscape.
As far as I understand spekt8, it's more about visualizing relations between Kubernetes resources than about network topology visualization.
Weave Scope needs cluster administration rights, so it isn't advisable to make it publicly accessible without setting up some form of authentication.
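For example, instead of exposing it, a hedged way to reach Weave Scope from your own machine (assuming the standard install, which I believe creates a weave-scope-app deployment listening on 4040 in the weave namespace):
kubectl port-forward -n weave deployment/weave-scope-app 4040:4040
Then open http://localhost:4040 in a browser.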

Kubernetes Dashboard is not opening

My master node IP address is 192.168.56.101. There is no worker node connected to the master yet.
master#kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master#kmaster:~$
When I deployed the kubernetes-dashboard using the command below, why is the running IP address of kubernetes-dashboard-5c8f9556c4-f2jpz 192.168.189.6?
Similarly, the other pods also have different IP addresses.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
master#kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master#kmaster:~$
I am also not able to access the kubernetes-dashboard UI. I am using the link
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/,
and the KubeDNS link https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy is also not working.
But accessing the Kubernetes master at https://192.168.56.101:6443 does work.
master#kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any suggestions?
Solution (see comments): Don't mix your physical and overlay network ranges.
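A hedged sketch of what that means in practice for a kubeadm + Calico setup like this one: pick a pod CIDR that does not overlap the 192.168.56.0/24 host network, and keep the Calico pool in sync via CALICO_IPV4POOL_CIDR in the manifest (172.16.0.0/16 here is just an illustrative choice):
kubeadm init --pod-network-cidr=172.16.0.0/16
# in calico.yaml, the matching pool:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.16.0.0/16"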
Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service you need kubectl port-forward, not the HTTP (API) proxy.
If you try to access the dashboard with localhost:8081, you have to run kubectl proxy --port 8081 from your console to set up the proxy between your localhost and the k8s apiserver.
If you want to access dashboard from apiserver directly without the local proxy, try the following url https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy (assuming your service name is kubernetes-dashboard)
You can also run kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 443, then access the dashboard with https://localhost:443
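Note that binding local port 443 usually requires root; a hedged variant using an unprivileged local port would be:
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443:443
Then open https://localhost:8443 (expect a browser warning for the dashboard's self-signed certificate).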

Cannot access the Kubernetes Dashboard

I have a K8s cluster (1 master, 2 workers) running on 3 vagrant VMs on my computer.
I've installed the Kubernetes dashboard, as explained here.
All my pods are running correctly:
kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-n5cpm 1/1 Running 1 61m 10.244.0.4 kmaster.example.com <none> <none>
coredns-fb8b8dccf-qwcr4 1/1 Running 1 61m 10.244.0.5 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-hcjsm 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-klv4f 1/1 Running 3 56m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-lmpnd 1/1 Running 2 59m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-86qsw 1/1 Running 1 59m 10.0.2.15 kworker1.example.com <none> <none>
kube-proxy-dp29s 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-gqqq9 1/1 Running 1 56m 10.0.2.15 kworker2.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-5f7b999d65-zqbbz 1/1 Running 1 28m 10.244.1.3 kworker1.example.com <none> <none>
As you can see, the dashboard is in "Running" status.
I also ran kubectl proxy and it's serving on 127.0.0.1:8001.
But when I try to open http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I get the error:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
I'm trying to open the dashboard directly on my computer, not inside the vagrant VM. Could that be the problem? If yes, how can I solve it? I'm able to ping my VM from my computer without any issue.
Thanks for helping me.
EDIT
Here is the output of kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 96m
kubernetes-dashboard NodePort 10.109.230.83 <none> 443:30089/TCP 63m
The Kubernetes dashboard only runs inside the cluster by default. You can check its service with the get svc command:
kubectl get svc -n kube-system
The default type of that service is ClusterIP; to reach it from outside the cluster you have to change it to NodePort.
To change it, follow this doc.
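A hedged sketch of that change and the resulting access path, assuming the dashboard service lives in kube-system as in the EDIT above (where it already shows type NodePort with 443:30089):
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard    # note the 443:3xxxx/TCP node port
Then browse from the workstation to a node IP on that port, e.g. https://172.42.42.100:30089.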

Istio bookinfo sample deployment: The connection has timed out

I'm trying to set up Istio on Google Container Engine. Istio has been installed successfully, but the bookinfo sample fails to load.
Is there something I have configured in the wrong way?
Help me, please!
Thanks in advance!
Here's what I have tried:
kubectl get pods
details-v1-3121678156-3h2wx 2/2 Running 0 58m
grafana-1395297218-h0tjv 1/1 Running 0 5h
istio-ca-4001657623-n00zx 1/1 Running 0 5h
istio-egress-2038322175-0jtf5 1/1 Running 0 5h
istio-ingress-2247081378-fvr33 1/1 Running 0 5h
istio-mixer-2450814972-jrrm4 1/1 Running 0 5h
istio-pilot-1836659236-kw7cr 2/2 Running 0 5h
productpage-v1-1440812148-gqrgl 0/2 Pending 0 57m
prometheus-3067433533-fqcfw 1/1 Running 0 5h
ratings-v1-3755476866-jbh80 2/2 Running 0 58m
reviews-v1-3728017321-0m7mk 0/2 Pending 0 58m
reviews-v2-196544427-6ftf5 0/2 Pending 0 58m
reviews-v3-959055789-079xz 0/2 Pending 0 57m
servicegraph-3127588006-03b93 1/1 Running 0 5h
zipkin-4057566570-0cb86 1/1 Running 0 5h
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S)
details 10.11.249.214 <none> 9080/TCP
grafana 10.11.247.226 104.199.211.175 3000:31036/TCP
istio-egress 10.11.246.60 <none> 80/TCP
istio-ingress 10.11.242.178 35.189.165.119 80:31622/TCP,443:31241/TCP
istio-mixer 10.11.242.104 <none> 9091/TCP,9094/TCP,42422/TCP
istio-pilot 10.11.251.240 <none> 8080/TCP,8081/TCP
kubernetes 10.11.240.1 <none> 443/TCP
productpage 10.11.255.53 <none> 9080/TCP
prometheus 10.11.248.237 130.211.249.66 9090:32056/TCP
ratings 10.11.252.40 <none> 9080/TCP
reviews 10.11.242.168 <none> 9080/TCP
servicegraph 10.11.252.60 35.185.161.219 8088:32709/TCP
zipkin 10.11.245.4 35.185.144.62 9411:31677/TCP
Get the ingress IP, export an env variable, then curl:
NAME HOSTS ADDRESS PORTS AGE
gateway * 35.189.165.119 80 1h
Abduls-MacBook-Pro:~ abdul$ export GATEWAY_URL=35.189.165.119:80
Abduls-MacBook-Pro:~ abdul$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
000
I ran into a similar issue ("upstream connect error or disconnect/reset before headers") when I deployed the Istio sample app on GKE. Try deleting all pods (and wait for all of them to come up again), then restart your proxy:
kubectl delete pods --all
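A hedged follow-up once the pods are back, reusing the GATEWAY_URL exported above:
kubectl get pods -w    # wait until the bookinfo pods all show Running again
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage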