Istio Bookinfo sample deployment: the connection has timed out - Kubernetes

I'm trying to set up Istio on Google Container Engine. Istio itself installed successfully, but the Bookinfo sample fails to load.
Is there something I have configured the wrong way?
Help me, please!
Thanks in advance!
Here's what I have tried:
kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-3121678156-3h2wx 2/2 Running 0 58m
grafana-1395297218-h0tjv 1/1 Running 0 5h
istio-ca-4001657623-n00zx 1/1 Running 0 5h
istio-egress-2038322175-0jtf5 1/1 Running 0 5h
istio-ingress-2247081378-fvr33 1/1 Running 0 5h
istio-mixer-2450814972-jrrm4 1/1 Running 0 5h
istio-pilot-1836659236-kw7cr 2/2 Running 0 5h
productpage-v1-1440812148-gqrgl 0/2 Pending 0 57m
prometheus-3067433533-fqcfw 1/1 Running 0 5h
ratings-v1-3755476866-jbh80 2/2 Running 0 58m
reviews-v1-3728017321-0m7mk 0/2 Pending 0 58m
reviews-v2-196544427-6ftf5 0/2 Pending 0 58m
reviews-v3-959055789-079xz 0/2 Pending 0 57m
servicegraph-3127588006-03b93 1/1 Running 0 5h
zipkin-4057566570-0cb86 1/1 Running 0 5h
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S)
details 10.11.249.214 <none> 9080/TCP
grafana 10.11.247.226 104.199.211.175 3000:31036/TCP
istio-egress 10.11.246.60 <none> 80/TCP
istio-ingress 10.11.242.178 35.189.165.119 80:31622/TCP,443:31241/TCP
istio-mixer 10.11.242.104 <none> 9091/TCP,9094/TCP,42422/TCP
istio-pilot 10.11.251.240 <none> 8080/TCP,8081/TCP
kubernetes 10.11.240.1 <none> 443/TCP
productpage 10.11.255.53 <none> 9080/TCP
prometheus 10.11.248.237 130.211.249.66 9090:32056/TCP
ratings 10.11.252.40 <none> 9080/TCP
reviews 10.11.242.168 <none> 9080/TCP
servicegraph 10.11.252.60 35.185.161.219 8088:32709/TCP
zipkin 10.11.245.4 35.185.144.62 9411:31677/TCP
Get the ingress IP, export the environment variable, then curl:
NAME HOSTS ADDRESS PORTS AGE
gateway * 35.189.165.119 80 1h
Abduls-MacBook-Pro:~ abdul$ export GATEWAY_URL=35.189.165.119:80
Abduls-MacBook-Pro:~ abdul$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
000
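The same GATEWAY_URL can also be derived from the istio-ingress service shown above instead of being copied by hand; a minimal sketch, assuming the LoadBalancer IP has already been assigned:
export GATEWAY_URL=$(kubectl get svc istio-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage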

I ran into a similar issue ("upstream connect error or disconnect/reset before headers") when I deployed the Istio sample app on GKE. Try deleting all pods (and wait for all of them to come up again), then restart your proxy...
kubectl delete pods --all
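Since productpage and the reviews pods are stuck in Pending rather than crashing, it is also worth asking the scheduler why before (or after) deleting them; a minimal sketch, assuming the Bookinfo pods live in the default namespace as shown above:
kubectl describe pod productpage-v1-1440812148-gqrgl | grep -A 10 Events
# Pending on a small GKE cluster with Istio sidecars is often plain resource exhaustion,
# so compare the pods' requests against what the nodes can offer:
kubectl describe nodes | grep -A 5 "Allocated resources"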

Related

kube-system pods core-dns and dashboard are pending

[root@master /]# kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5644d7b6d9-97vkp 0/1 Pending 0 17h
kube-system pod/coredns-5644d7b6d9-p7mnl 0/1 Pending 0 17h
kube-system pod/etcd-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-apiserver-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-controller-manager-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-r2rp8 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-xp25f 1/1 Running 1 49m
kube-system pod/kube-proxy-k4hw5 1/1 Running 0 17h
kube-system pod/kube-proxy-nrzrv 1/1 Running 0 49m
kube-system pod/kube-scheduler-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kubernetes-dashboard-7c54d59f66-9w5b7 0/1 Pending 0 45m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
kube-system service/heapster ClusterIP 10.98.205.214 <none> 80/TCP 45m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17h
kube-system service/kubernetes-dashboard ClusterIP 10.105.192.154 <none> 443/TCP 45m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 2 2 2 2 2 beta.kubernetes.io/arch=amd64 17h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 17h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 17h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 17h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 17h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 17h
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 45m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5644d7b6d9 2 2 0 17h
kube-system replicaset.apps/kubernetes-dashboard-7c54d59f66 1 1 0 45m
Can you send the logs for the pods in Pending state? Also try to describe the pods and see why they are unschedulable (insufficient resources, disk pressure, etc.). Just by looking at this output it is tough to say. Also tell us which version of Kubernetes you are using, how many nodes you have, the resources available per node, and so on.
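Something along these lines, using the pod names from the output above:
kubectl -n kube-system describe pod coredns-5644d7b6d9-97vkp
# the Events section at the bottom usually names the reason (Insufficient cpu/memory, node taints, ...)
kubectl -n kube-system get events --sort-by='.lastTimestamp'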
After reading your updates:
It looks like a systemd issue where Docker is not able to find the CNI settings. Can you try applying a network add-on such as Weave or Flannel to your cluster and see if it works?
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Virtualbox kubernetes NodePort access

Morning,
I have a simple nginx setup that uses a NodePort service on the alternate port 30000. I cannot figure out how to actually access it from the workstation that runs VirtualBox.
Some basic stuff:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx-55bc597fbf-zb2ml ClusterIP 10.101.124.73 <none> 8080/TCP 24h
nginx-service-np NodePort 10.105.157.230 <none> 8082:30000/TCP 22h
user-login-service NodePort 10.106.129.60 <none> 5000:31395/TCP 38m
I am using flannel
kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 25h v1.15.1
k8s-worker1 Ready <none> 94m v1.15.1
k8s-worker2 Ready <none> 98m v1.15.1
I set up port forwarding for the NAT adapter where 30000 is supposed to be forwarded to 80, and I also forwarded 31395 to 31396 for the user-login-service.
Trying to access the master IP at https://192.168.56.101:80 or https://192.168.56.101:31396 fails. I tried http as well, but cluster-info seems to show the master using https and kubernetes using 443/TCP.
There are two adapters on the master and the workers. One adapter is NAT and is used to allow outbound traffic (e.g., for apt-get commands); it appears to use the 10.0.3.15 address assigned to all three nodes.
The other adapter is host-only and is what gives the servers their addresses on the 192.168.56.0 network. I set those as static using netplan.
The three servers can see each other fine, and external traffic works fine.
/etc/netplan# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4xg8h 1/1 Running 17 120m
coredns-5c98db65d4-xn797 1/1 Running 17 120m
etcd-k8s-master 1/1 Running 8 25h
kube-apiserver-k8s-master 1/1 Running 8 25h
kube-controller-manager-dashap-k8s-master 1/1 Running 12 25h
kube-flannel-ds-amd64-6fw7x 1/1 Running 0 25h
kube-flannel-ds-amd64-hd4ng 1/1 Running 0 122m
kube-flannel-ds-amd64-z2wls 1/1 Running 0 126m
kube-proxy-g8k5l 1/1 Running 0 25h
kube-proxy-khn67 1/1 Running 0 126m
kube-proxy-zsvqs 1/1 Running 0 122m
kube-scheduler-k8s-master 1/1 Running 10 25h
weave-net-2l5cs 2/2 Running 0 44m
weave-net-n4zmr 2/2 Running 0 44m
weave-net-v6t74 2/2 Running 0 44m
This is my first setup, so troubleshooting is hard for me. Any help on how to reach the two services from the browser on my workstation (rather than from within the nodes) would be appreciated.
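A minimal check from the workstation, assuming the host-only addresses stay reachable: a NodePort is opened on every node's own IP, and plain nginx speaks HTTP rather than HTTPS, so the services should answer on the 192.168.56.x addresses directly, without any VirtualBox NAT rules:
curl -v http://192.168.56.101:30000/
curl -v http://192.168.56.101:31395/   # user-login-service, assuming it serves plain HTTP
# the NAT adapter's 10.0.3.15 address is shared by all three VMs, so forwarding rules on it are unreliable;
# the host-only 192.168.56.x addresses are the ones to use from the workstation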

Kubernetes Dashboard is not opening

My master node IP address is 192.168.56.101. There is no worker node connected to the master yet.
master@kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master@kmaster:~$
When I deployed the kubernetes-dashboard using the command below, why is the IP address of the running kubernetes-dashboard-5c8f9556c4-f2jpz pod 192.168.189.6?
Similarly, the other pods also have different IP addresses.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master@kmaster:~$
I am also not able to access the kubernetes-dashboard UI. I am using the link
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/,
and the KubeDNS link https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy is not working either,
but accessing the Kubernetes master at https://192.168.56.101:6443 does work.
master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Any suggestions?
Solution (see comments): Don't mix your physical and overlay network ranges.
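In concrete terms: Calico's default pod CIDR is 192.168.0.0/16, which swallows the 192.168.56.0/24 host-only network used by the node. A minimal sketch of re-bootstrapping with a non-overlapping pod range (the 10.244.0.0/16 value is just an example; whatever you pick must also be set in the Calico manifest):
sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16
# re-apply the network add-on with CALICO_IPV4POOL_CIDR set to the same 10.244.0.0/16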
Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service you need kubectl port-forward, not the HTTP (API) proxy.
If you want to access the dashboard at localhost:8081, you have to run kubectl proxy --port 8081 from your console to set up the proxy between your localhost and the k8s apiserver.
If you want to access the dashboard from the apiserver directly, without the local proxy, try the following URL: https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy (assuming your service name is kubernetes-dashboard).
You can also run kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443:443 (forwarding to a non-privileged local port), then access the dashboard at https://localhost:8443.
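For the KubeDNS part, a minimal sketch of that port-forward (kubectl port-forward only carries TCP, so force TCP in the query; dig is assumed to be installed):
kubectl -n kube-system port-forward svc/kube-dns 5353:53
# in a second terminal:
dig +tcp @127.0.0.1 -p 5353 kubernetes.default.svc.cluster.local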

Cannot access the Kubernetes Dashboard

I have a K8s cluster (1 master, 2 workers) running on 3 Vagrant VMs on my computer.
I've installed the Kubernetes dashboard, as explained here.
All my pods are running correctly:
kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-n5cpm 1/1 Running 1 61m 10.244.0.4 kmaster.example.com <none> <none>
coredns-fb8b8dccf-qwcr4 1/1 Running 1 61m 10.244.0.5 kmaster.example.com <none> <none>
etcd-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-apiserver-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-controller-manager-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-hcjsm 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-flannel-ds-amd64-klv4f 1/1 Running 3 56m 172.42.42.102 kworker2.example.com <none> <none>
kube-flannel-ds-amd64-lmpnd 1/1 Running 2 59m 172.42.42.101 kworker1.example.com <none> <none>
kube-proxy-86qsw 1/1 Running 1 59m 10.0.2.15 kworker1.example.com <none> <none>
kube-proxy-dp29s 1/1 Running 1 61m 172.42.42.100 kmaster.example.com <none> <none>
kube-proxy-gqqq9 1/1 Running 1 56m 10.0.2.15 kworker2.example.com <none> <none>
kube-scheduler-kmaster.example.com 1/1 Running 1 60m 172.42.42.100 kmaster.example.com <none> <none>
kubernetes-dashboard-5f7b999d65-zqbbz 1/1 Running 1 28m 10.244.1.3 kworker1.example.com <none> <none>
As you can see the dashboard is in "Running" status.
I also ran kubectl proxy and it's serving on 127.0.0.1:8001.
But when I try to open http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I get the error:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
I'm trying to open the dashboard directly on my computer, not inside the Vagrant VM. Could that be the problem? If yes, how do I solve it? I'm able to ping the VM from my computer without any issue.
Thanks for helping me.
EDIT
Here is the output of kubectl get svc -n kube-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 96m
kubernetes-dashboard NodePort 10.109.230.83 <none> 443:30089/TCP 63m
By default the Kubernetes dashboard only runs inside the cluster. You can check this with the get svc command:
kubectl get svc -n kube-system
The default type of that service is ClusterIP; to reach it from outside the cluster you have to change it to NodePort.
To change it, follow this doc.
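A minimal sketch of that change without editing YAML by hand, assuming the service sits in kube-system as in the output above:
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# then read the assigned node port and browse to https://<node-ip>:<node-port>
kubectl -n kube-system get svc kubernetes-dashboard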

Gluster cluster in Kubernetes: Glusterd inactive (dead) after node reboot. How to debug?

I don't know what to do to debug it. I have 1 Kubernetes master node and three slave nodes. I have deployed a Gluster cluster on the three nodes just fine with this guide: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md.
I created volumes and everything works. But when I reboot a slave node and the node reconnects to the master node, the glusterd.service inside the slave node shows up as dead and nothing works after this.
[root@kubernetes-node-1 /]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
I don't know what to do from here; for example, /var/log/glusterfs/glusterd.log was last updated 3 days ago (it is not being updated with errors after a reboot or a pod deletion and recreation).
I just want to know where glusterd crashes so I can find out why.
How can I debug this crash?
All the nodes (master + slaves) run as Ubuntu Desktop 18.04 LTS 64-bit VirtualBox VMs.
requested logs (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
glusterfs pod/glusterfs-7nl8l 0/1 Running 62 22h
glusterfs pod/glusterfs-wjnzx 1/1 Running 62 2d21h
glusterfs pod/glusterfs-wl4lx 1/1 Running 112 41h
glusterfs pod/heketi-7495cdc5fd-hc42h 1/1 Running 0 22h
kube-system pod/coredns-86c58d9df4-n2hpk 1/1 Running 0 6d12h
kube-system pod/coredns-86c58d9df4-rbwjq 1/1 Running 0 6d12h
kube-system pod/etcd-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-apiserver-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-controller-manager-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-flannel-ds-amd64-785q8 1/1 Running 5 3d19h
kube-system pod/kube-flannel-ds-amd64-8sj2z 1/1 Running 8 3d19h
kube-system pod/kube-flannel-ds-amd64-v62xb 1/1 Running 0 3d21h
kube-system pod/kube-flannel-ds-amd64-wx4jl 1/1 Running 7 3d21h
kube-system pod/kube-proxy-7f6d9 1/1 Running 5 3d19h
kube-system pod/kube-proxy-7sf9d 1/1 Running 0 6d12h
kube-system pod/kube-proxy-n9qxq 1/1 Running 8 3d19h
kube-system pod/kube-proxy-rwghw 1/1 Running 7 3d21h
kube-system pod/kube-scheduler-kubernetes-master-work 1/1 Running 0 6d12h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h
elastic service/glusterfs-dynamic-9ad03769-2bb5-11e9-8710-0800276a5a8e ClusterIP 10.98.38.157 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-a77e02ca-2bb4-11e9-8710-0800276a5a8e ClusterIP 10.97.203.225 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-ad16ed0b-2bb6-11e9-8710-0800276a5a8e ClusterIP 10.105.149.142 <none> 1/TCP 2d19h
glusterfs service/heketi ClusterIP 10.101.79.224 <none> 8080/TCP 2d20h
glusterfs service/heketi-storage-endpoints ClusterIP 10.99.199.190 <none> 1/TCP 2d20h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6d12h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
glusterfs daemonset.apps/glusterfs 3 3 0 3 0 storagenode=glusterfs 2d21h
kube-system daemonset.apps/kube-flannel-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 3d21h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 3d21h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 3d21h
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 <none> 6d12h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
glusterfs deployment.apps/heketi 1/1 1 0 2d20h
kube-system deployment.apps/coredns 2/2 2 2 6d12h
NAMESPACE NAME DESIRED CURRENT READY AGE
glusterfs replicaset.apps/heketi-7495cdc5fd 1 1 0 2d20h
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6d12h
requested:
tasos@kubernetes-master-work:~$ kubectl logs -n glusterfs glusterfs-7nl8l
env variable is set. Update in gluster-blockd.service
Please check these similar topics:
GlusterFS deployment on k8s cluster-- Readiness probe failed: /usr/local/bin/status-probe.sh
and
https://github.com/gluster/gluster-kubernetes/issues/539
Check the tcmu-runner.log to debug it.
UPDATE:
I think this will be your issue:
https://github.com/gluster/gluster-kubernetes/pull/557
The PR is prepared, but not merged yet.
UPDATE 2:
https://github.com/gluster/glusterfs/issues/417
Be sure that rpcbind is installed.
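To see why glusterd itself dies on the rebooted node, a minimal sketch using plain systemd tooling (nothing here is gluster-kubernetes specific; it only assumes glusterd is managed by systemd, as the status output above shows):
journalctl -u glusterd -b --no-pager    # full unit log since the last boot
systemctl is-enabled glusterd           # check whether the unit comes back after a reboot
sudo systemctl enable --now glusterd    # re-enable and start it if it does not
systemctl status rpcbind                # and, per the note above, confirm rpcbind is present and running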