How to debug Kubernetes the proper way? - kubernetes

I would like to run Istio to play around with, but I am facing issues with my local Kubernetes installation and I am stuck on how to debug it.
This is my current situation:
root@node1:/tmp/istio-0.1.5# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana 10.233.2.70 <pending> 3000:31202/TCP 1h
istio-egress 10.233.39.101 <none> 80/TCP 1h
istio-ingress 10.233.48.51 <pending> 80:30982/TCP,443:31195/TCP 1h
istio-manager 10.233.2.109 <none> 8080/TCP,8081/TCP 1h
istio-mixer 10.233.39.58 <none> 9091/TCP,9094/TCP,42422/TCP 1h
kubernetes 10.233.0.1 <none> 443/TCP 4h
prometheus 10.233.63.20 <pending> 9090:32170/TCP 1h
servicegraph 10.233.39.104 <pending> 8088:30814/TCP 1h
root@node1:/tmp/istio-0.1.5# kubectl get pods
NAME READY STATUS RESTARTS AGE
grafana-1261931457-3hx2p 0/1 Pending 0 1h
istio-ca-3887035158-6p3b0 0/1 Pending 0 1h
istio-egress-1920226302-vmlx1 0/1 Pending 0 1h
istio-ingress-2112208289-ctxj5 0/1 Pending 0 1h
istio-manager-2910860705-z28dp 0/2 Pending 0 1h
istio-mixer-2335471611-rsrhb 0/1 Pending 0 1h
prometheus-3067433533-l2m48 0/1 Pending 0 1h
servicegraph-3127588006-1k5rg 0/1 Pending 0 1h
kubectl get rs
NAME DESIRED CURRENT READY AGE
grafana-1261931457 1 1 0 1h
istio-ca-3887035158 1 1 0 1h
istio-egress-1920226302 1 1 0 1h
istio-ingress-2112208289 1 1 0 1h
istio-manager-2910860705 1 1 0 1h
istio-mixer-2335471611 1 1 0 1h
prometheus-3067433533 1 1 0 1h
servicegraph-3127588006 1 1 0 1h
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
grafana-1261931457-3hx2p 0/1 Pending 0 1h app=grafana,pod-template-hash=1261931457
istio-ca-3887035158-6p3b0 0/1 Pending 0 1h istio=istio-ca,pod-template-hash=3887035158
istio-egress-1920226302-vmlx1 0/1 Pending 0 1h istio=egress,pod-template-hash=1920226302
istio-ingress-2112208289-ctxj5 0/1 Pending 0 1h istio=ingress,pod-template-hash=2112208289
istio-manager-2910860705-z28dp 0/2 Pending 0 1h istio=manager,pod-template-hash=2910860705
istio-mixer-2335471611-rsrhb 0/1 Pending 0 1h istio=mixer,pod-template-hash=2335471611
prometheus-3067433533-l2m48 0/1 Pending 0 1h app=prometheus,pod-template-hash=3067433533
servicegraph-3127588006-1k5rg 0/1 Pending 0 1h app=servicegraph,pod-template-hash=3127588006
root@node1:/tmp/istio-0.1.5# kubectl get nodes --show-labels
NAME STATUS AGE VERSION LABELS
node1 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1,node-role.kubernetes.io/master=true,node-role.kubernetes.io/node=true
node2 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2,node-role.kubernetes.io/master=true,node-role.kubernetes.io/node=true
node3 Ready 5h v1.6.4+coreos.0 app=prometeus,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node3,node-role.kubernetes.io/node=true
node4 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node4,node-role.kubernetes.io/node=true
Unfortunately, after reading most of the documentation, I have found only a few ways to debug an installation:
journalctl -r -u kubelet
kubectl get events
kubectl describe deployment
Is there any common workflow for debugging a Kubernetes installation?

It's in the documentation. Follow the Pod troubleshooting steps:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
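For pods stuck in Pending, the workflow from that page mostly comes down to describing the pod and reading its Events section, then looking at cluster events and the nodes themselves. A rough sketch, using pod and node names from the output above (the actual reason, such as insufficient resources or an unsatisfiable node selector or taint, shows up under Events):
kubectl describe pod istio-manager-2910860705-z28dp
kubectl get events --sort-by='.metadata.creationTimestamp'
kubectl describe node node1
Once a pod has actually started, kubectl logs <pod-name> (with -c <container-name> for multi-container pods) and kubectl logs --previous <pod-name> are the next step.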

Related

Unable to find correct selector for vault on Kubernetes with MicroK8s

I am currently working with Vault for the first time and I am encountering an issue. I am following the official guide from HashiCorp to install Vault on a Kubernetes cluster, and I am running it with MicroK8s.
However, I am not able to unseal my Vault. As you can see it is running, but when I try to use the selector so that I can unseal it afterwards, it doesn't find anything in the namespace.
I am fairly new to this, but I wasn't able to find the solution online.
Any help would be appreciated.
Please find the console output below.
Thank you!
root@vault-dev:/home/dev# kubectl --namespace="vault" get pods
NAME READY STATUS RESTARTS AGE
vault-0 0/1 Pending 0 12m
vault-1 0/1 Pending 0 12m
vault-2 0/1 Pending 0 12m
vault-3 0/1 Pending 0 12m
vault-4 0/1 Pending 0 12m
vault-agent-injector-7969bb745-nmswj 1/1 Running 0 12m
root@vault-dev:/home/dev# kubectl exec --stdin=true --tty=true vault-0 -- vault operator init
Error from server (NotFound): pods "vault-0" not found
root@vault-dev:/home/dev# kubectl --namespace="vault" get pods
NAME READY STATUS RESTARTS AGE
vault-0 0/1 Pending 0 15m
vault-1 0/1 Pending 0 15m
vault-2 0/1 Pending 0 15m
vault-3 0/1 Pending 0 15m
vault-4 0/1 Pending 0 15m
vault-agent-injector-7969bb745-nmswj 1/1 Running 0 15m
root@vault-dev:/home/dev# kubectl --namespace="vault" get all
NAME READY STATUS RESTARTS AGE
pod/vault-0 0/1 Pending 0 15m
pod/vault-1 0/1 Pending 0 15m
pod/vault-2 0/1 Pending 0 15m
pod/vault-3 0/1 Pending 0 15m
pod/vault-4 0/1 Pending 0 15m
pod/vault-agent-injector-7969bb745-nmswj 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/vault-internal ClusterIP None <none> 8200/TCP,8201/TCP 15m
service/vault-ui LoadBalancer REDACTED <pending> 8200:32269/TCP 15m
service/vault-standby ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
service/vault-agent-injector-svc ClusterIP REDACTED <none> 443/TCP 15m
service/vault ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
service/vault-active ClusterIP REDACTED <none> 8200/TCP,8201/TCP 15m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/vault-agent-injector 1/1 1 1 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/vault-agent-injector-7969bb745 1 1 1 15m
NAME READY AGE
statefulset.apps/vault 0/5 15m
root@vault-dev:/home/dev# kubectl get pods --selector='statefulset.apps/name=vault' --namespace=' vault'
No resources found in vault namespace.
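For what it is worth, two things stand out in the output above: the kubectl exec call omits --namespace vault, which is why vault-0 is reported as not found, and statefulset.apps/name=vault is not a label that appears on the pods, so the selector matches nothing. A sketch of what should work instead (the exact labels to select on are an assumption; check them first with --show-labels):
kubectl --namespace vault get pods --show-labels
kubectl --namespace vault exec --stdin=true --tty=true vault-0 -- vault operator init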

message: "services "kubernetes-dashboard" not found",

Kubernetes Dashboard docs ( https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ )
say:
Kubectl will make Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
but I get this when using that URL:
{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kubernetes-dashboard" not found",
reason: "NotFound",
details: {
name: "kubernetes-dashboard",
kind: "services"
},
code: 404
}
FYI
[vagrant@master ~]$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-54f8cbd98d-92d2m 1/1 Running 0 19m
pod/coredns-54f8cbd98d-v487z 1/1 Running 0 19m
pod/etcd-master.vagrant.vm 1/1 Running 0 19m
pod/kube-apiserver-master.vagrant.vm 1/1 Running 0 18m
pod/kube-controller-manager-master.vagrant.vm 1/1 Running 1 19m
pod/kube-flannel-ds-2tr49 1/1 Running 0 18m
pod/kube-flannel-ds-552d5 1/1 Running 0 13m
pod/kube-flannel-ds-dbv5p 1/1 Running 0 16m
pod/kube-proxy-29l59 1/1 Running 0 13m
pod/kube-proxy-ctsnx 1/1 Running 0 16m
pod/kube-proxy-swxhg 1/1 Running 0 19m
pod/kube-scheduler-master.vagrant.vm 1/1 Running 1 19m
pod/kubernetes-dashboard-86b8d78468-6c52b 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 19m
service/kubernetes-dashboard ClusterIP 10.103.110.54 <none> 443/TCP 19m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 3 3 3 beta.kubernetes.io/arch=amd64 19m
daemonset.apps/kube-proxy 3 3 3 3 3 <none> 19m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 19m
deployment.apps/kubernetes-dashboard 1 1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-54f8cbd98d 2 2 2 19m
replicaset.apps/coredns-85d6cff8d8 0 0 0 19m
replicaset.apps/kubernetes-dashboard-86b8d78468 1 1 1 19m
The docs are probably not updated. Since the service is created in the kube-system namespace, you can use the URL below:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
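To confirm which namespace the service actually lives in and to reach it through the API server proxy, something like this should work (namespace taken from the kubectl get all output above):
kubectl get svc --all-namespaces | grep kubernetes-dashboard
kubectl proxy
Then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in the browser.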

kube-system pods core-dns and dashboard are pending

[root@master /]# kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5644d7b6d9-97vkp 0/1 Pending 0 17h
kube-system pod/coredns-5644d7b6d9-p7mnl 0/1 Pending 0 17h
kube-system pod/etcd-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-apiserver-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-controller-manager-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-r2rp8 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-xp25f 1/1 Running 1 49m
kube-system pod/kube-proxy-k4hw5 1/1 Running 0 17h
kube-system pod/kube-proxy-nrzrv 1/1 Running 0 49m
kube-system pod/kube-scheduler-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kubernetes-dashboard-7c54d59f66-9w5b7 0/1 Pending 0 45m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
kube-system service/heapster ClusterIP 10.98.205.214 <none> 80/TCP 45m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17h
kube-system service/kubernetes-dashboard ClusterIP 10.105.192.154 <none> 443/TCP 45m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 2 2 2 2 2 beta.kubernetes.io/arch=amd64 17h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 17h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 17h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 17h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 17h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 17h
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 45m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5644d7b6d9 2 2 0 17h
kube-system replicaset.apps/kubernetes-dashboard-7c54d59f66 1 1 0 45m
Can you send the logs for the pods in the Pending state? Also try to describe the pods and see why they are unschedulable (insufficient resources, disk pressure, etc.). Just by looking at this output it is hard to say. Also tell us which version of Kubernetes you are using, how many nodes you have, the resources available per node, etc.
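A rough sketch of that describe step, with pod names taken from your output (the Events section at the bottom of the output is the part that usually explains a Pending pod):
kubectl -n kube-system describe pod coredns-5644d7b6d9-97vkp
kubectl -n kube-system describe pod kubernetes-dashboard-7c54d59f66-9w5b7
kubectl describe nodes | grep -i -A 3 taints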
After reading your updates:
Looks like a systemd issue where Docker is not able to find the CNI settings. Can you try applying a network add-on to your cluster, like Weave or Flannel, and see if it works?
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Gluster cluster in Kubernetes: Glusterd inactive (dead) after node reboot. How to debug?

I don't know what to do to debug it. I have one Kubernetes master node and three slave nodes. I have deployed a Gluster cluster on the three nodes just fine with this guide: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
I created volumes and everything is working. But when I reboot a slave node, and the node reconnects to the master node, the glusterd.service inside the slave node shows up as dead and nothing works after that.
[root@kubernetes-node-1 /]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
I don't know what to do from here; for example, /var/log/glusterfs/glusterd.log was last updated 3 days ago (it is not being updated with errors after a reboot or after deleting and recreating a pod).
I just want to know where glusterd crashes so I can find out why.
How can I debug this crash?
All the nodes (master + slaves) run as Ubuntu Desktop 18 64-bit LTS VirtualBox VMs.
requested logs (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
glusterfs pod/glusterfs-7nl8l 0/1 Running 62 22h
glusterfs pod/glusterfs-wjnzx 1/1 Running 62 2d21h
glusterfs pod/glusterfs-wl4lx 1/1 Running 112 41h
glusterfs pod/heketi-7495cdc5fd-hc42h 1/1 Running 0 22h
kube-system pod/coredns-86c58d9df4-n2hpk 1/1 Running 0 6d12h
kube-system pod/coredns-86c58d9df4-rbwjq 1/1 Running 0 6d12h
kube-system pod/etcd-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-apiserver-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-controller-manager-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-flannel-ds-amd64-785q8 1/1 Running 5 3d19h
kube-system pod/kube-flannel-ds-amd64-8sj2z 1/1 Running 8 3d19h
kube-system pod/kube-flannel-ds-amd64-v62xb 1/1 Running 0 3d21h
kube-system pod/kube-flannel-ds-amd64-wx4jl 1/1 Running 7 3d21h
kube-system pod/kube-proxy-7f6d9 1/1 Running 5 3d19h
kube-system pod/kube-proxy-7sf9d 1/1 Running 0 6d12h
kube-system pod/kube-proxy-n9qxq 1/1 Running 8 3d19h
kube-system pod/kube-proxy-rwghw 1/1 Running 7 3d21h
kube-system pod/kube-scheduler-kubernetes-master-work 1/1 Running 0 6d12h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h
elastic service/glusterfs-dynamic-9ad03769-2bb5-11e9-8710-0800276a5a8e ClusterIP 10.98.38.157 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-a77e02ca-2bb4-11e9-8710-0800276a5a8e ClusterIP 10.97.203.225 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-ad16ed0b-2bb6-11e9-8710-0800276a5a8e ClusterIP 10.105.149.142 <none> 1/TCP 2d19h
glusterfs service/heketi ClusterIP 10.101.79.224 <none> 8080/TCP 2d20h
glusterfs service/heketi-storage-endpoints ClusterIP 10.99.199.190 <none> 1/TCP 2d20h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6d12h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
glusterfs daemonset.apps/glusterfs 3 3 0 3 0 storagenode=glusterfs 2d21h
kube-system daemonset.apps/kube-flannel-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 3d21h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 3d21h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 3d21h
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 <none> 6d12h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
glusterfs deployment.apps/heketi 1/1 1 0 2d20h
kube-system deployment.apps/coredns 2/2 2 2 6d12h
NAMESPACE NAME DESIRED CURRENT READY AGE
glusterfs replicaset.apps/heketi-7495cdc5fd 1 1 0 2d20h
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6d12h
requested:
tasos@kubernetes-master-work:~$ kubectl logs -n glusterfs glusterfs-7nl8l
env variable is set. Update in gluster-blockd.service
Please check these similar topics:
GlusterFS deployment on k8s cluster-- Readiness probe failed: /usr/local/bin/status-probe.sh
and
https://github.com/gluster/gluster-kubernetes/issues/539
Check the tcmu-runner.log file to debug it.
UPDATE:
I think this will be your issue:
https://github.com/gluster/gluster-kubernetes/pull/557
The PR is prepared, but not merged.
UPDATE 2:
https://github.com/gluster/glusterfs/issues/417
Be sure that rpcbind is installed.
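A few commands that may help narrow this down, partly on the rebooted node itself and partly from the master (a sketch; the pod name is taken from the output above):
systemctl status rpcbind glusterd
journalctl -u glusterd -b
kubectl -n glusterfs describe pod glusterfs-7nl8l
kubectl -n glusterfs logs glusterfs-7nl8l --previous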

Istio bookinfo sample deployment: The connection has timed out

I'm trying to set up Istio on Google Container Engine. Istio was installed successfully, but the bookinfo sample fails to load.
Is there something I have configured in the wrong way?
Help me, please!
Thanks in advance!
Here's what I have tried:
kubectl get pods
details-v1-3121678156-3h2wx 2/2 Running 0 58m
grafana-1395297218-h0tjv 1/1 Running 0 5h
istio-ca-4001657623-n00zx 1/1 Running 0 5h
istio-egress-2038322175-0jtf5 1/1 Running 0 5h
istio-ingress-2247081378-fvr33 1/1 Running 0 5h
istio-mixer-2450814972-jrrm4 1/1 Running 0 5h
istio-pilot-1836659236-kw7cr 2/2 Running 0 5h
productpage-v1-1440812148-gqrgl 0/2 Pending 0 57m
prometheus-3067433533-fqcfw 1/1 Running 0 5h
ratings-v1-3755476866-jbh80 2/2 Running 0 58m
reviews-v1-3728017321-0m7mk 0/2 Pending 0 58m
reviews-v2-196544427-6ftf5 0/2 Pending 0 58m
reviews-v3-959055789-079xz 0/2 Pending 0 57m
servicegraph-3127588006-03b93 1/1 Running 0 5h
zipkin-4057566570-0cb86 1/1 Running 0 5h
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S)
details 10.11.249.214 <none> 9080/TCP
grafana 10.11.247.226 104.199.211.175 3000:31036/TCP
istio-egress 10.11.246.60 <none> 80/TCP
istio-ingress 10.11.242.178 35.189.165.119 80:31622/TCP,443:31241/TCP
istio-mixer 10.11.242.104 <none> 9091/TCP,9094/TCP,42422/TCP
istio-pilot 10.11.251.240 <none> 8080/TCP,8081/TCP
kubernetes 10.11.240.1 <none> 443/TCP
productpage 10.11.255.53 <none> 9080/TCP
prometheus 10.11.248.237 130.211.249.66 9090:32056/TCP
ratings 10.11.252.40 <none> 9080/TCP
reviews 10.11.242.168 <none> 9080/TCP
servicegraph 10.11.252.60 35.185.161.219 8088:32709/TCP
zipkin 10.11.245.4 35.185.144.62 9411:31677/TCP
Get the ingress IP, export an env variable, then curl:
NAME HOSTS ADDRESS PORTS AGE
gateway * 35.189.165.119 80 1h
Abduls-MacBook-Pro:~ abdul$ export GATEWAY_URL=35.189.165.119:80
Abduls-MacBook-Pro:~ abdul$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
000
I ran into a similar issue ("upstream connect error or disconnect/reset before headers") when I deployed the Istio sample app on GKE. Try to delete all pods (and wait for all of them to come up again), then restart your proxy:
kubectl delete pods --all
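Before (or after) deleting everything, it is usually worth checking why the bookinfo pods are Pending in the first place. A sketch using a pod name from the output above, followed by the delete-and-retry step:
kubectl describe pod productpage-v1-1440812148-gqrgl
kubectl get events --sort-by='.metadata.creationTimestamp'
kubectl delete pods --all
kubectl get pods -w
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage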