message: "services "kubernetes-dashboard" not found", - kubernetes

Kubernetes Dashboard docs ( https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ )
say:
Kubectl will make Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
but I get this when using that URL:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "services "kubernetes-dashboard" not found",
  reason: "NotFound",
  details: {
    name: "kubernetes-dashboard",
    kind: "services"
  },
  code: 404
}
FYI
[vagrant@master ~]$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-54f8cbd98d-92d2m 1/1 Running 0 19m
pod/coredns-54f8cbd98d-v487z 1/1 Running 0 19m
pod/etcd-master.vagrant.vm 1/1 Running 0 19m
pod/kube-apiserver-master.vagrant.vm 1/1 Running 0 18m
pod/kube-controller-manager-master.vagrant.vm 1/1 Running 1 19m
pod/kube-flannel-ds-2tr49 1/1 Running 0 18m
pod/kube-flannel-ds-552d5 1/1 Running 0 13m
pod/kube-flannel-ds-dbv5p 1/1 Running 0 16m
pod/kube-proxy-29l59 1/1 Running 0 13m
pod/kube-proxy-ctsnx 1/1 Running 0 16m
pod/kube-proxy-swxhg 1/1 Running 0 19m
pod/kube-scheduler-master.vagrant.vm 1/1 Running 1 19m
pod/kubernetes-dashboard-86b8d78468-6c52b 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 19m
service/kubernetes-dashboard ClusterIP 10.103.110.54 <none> 443/TCP 19m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds 3 3 3 3 3 beta.kubernetes.io/arch=amd64 19m
daemonset.apps/kube-proxy 3 3 3 3 3 <none> 19m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 19m
deployment.apps/kubernetes-dashboard 1 1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-54f8cbd98d 2 2 2 19m
replicaset.apps/coredns-85d6cff8d8 0 0 0 19m
replicaset.apps/kubernetes-dashboard-86b8d78468 1 1 1 19m

The docs are probably out of date. Since the service is created in the kube-system namespace, use the URL below instead:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
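To confirm which namespace the Service actually lives in (the kubectl proxy URL embeds the namespace, the service name, and the port), you can check with:
kubectl get svc --all-namespaces | grep kubernetes-dashboard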

Related

No matches for kind "Ingress" in version "networking.k8s.io/v1"

I am getting an error message indicating that the Kubernetes resource defined in the file "argocdingress.yaml" is of kind "Ingress", but the API version being used (networking.k8s.io/v1) does not have a resource type called "Ingress".
[root@uat-master01 yaml]# kubectl apply -f argocdingress.yaml
error: unable to recognize "argocdingress.yaml": no matches for kind "Ingress" in version "networking.k8s.io/v1"
argocdingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocdingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: argocd.fonepay.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
[root@uat-master01 yaml]# kubectl get all -n argocd
NAME READY STATUS RESTARTS AGE
pod/argocd-redis-66b48966cb-v4vwg 1/1 Running 0 19h
pod/argocd-repo-server-7d956f8689-bl5dr 1/1 Running 0 19h
pod/argocd-server-598494dbc7-pd2g5 1/1 Running 0 19h
pod/argocd-application-controller-0 1/1 Running 0 19h
pod/argocd-dex-server-8b975c7cc-42b26 1/1 Running 0 19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/argocd-applicationset-controller ClusterIP 10.43.157.126 <none> 7000/TCP,8080/TCP 19h
service/argocd-dex-server ClusterIP 10.43.120.182 <none> 5556/TCP,5557/TCP,5558/TCP 19h
service/argocd-metrics ClusterIP 10.43.202.235 <none> 8082/TCP 19h
service/argocd-notifications-controller-metrics ClusterIP 10.43.99.138 <none> 9001/TCP 19h
service/argocd-redis ClusterIP 10.43.118.67 <none> 6379/TCP 19h
service/argocd-repo-server ClusterIP 10.43.9.59 <none> 8081/TCP,8084/TCP 19h
service/argocd-server ClusterIP 10.43.251.211 <none> 80/TCP,443/TCP 19h
service/argocd-server-metrics ClusterIP 10.43.5.168 <none> 8083/TCP 19h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/argocd-redis 1/1 1 1 19h
deployment.apps/argocd-repo-server 1/1 1 1 19h
deployment.apps/argocd-server 1/1 1 1 19h
deployment.apps/argocd-dex-server 1/1 1 1 19h
NAME DESIRED CURRENT READY AGE
replicaset.apps/argocd-redis-66b48966cb 1 1 1 19h
replicaset.apps/argocd-repo-server-7d956f8689 1 1 1 19h
replicaset.apps/argocd-server-598494dbc7 1 1 1 19h
replicaset.apps/argocd-dex-server-8b975c7cc 1 1 1 19h
NAME READY AGE
statefulset.apps/argocd-application-controller 1/1 19h
nodeInfo:
  architecture: amd64
  bootID: 031dadd6-5586-453e-8add-ade7591791b7
  containerRuntimeVersion: containerd://1.3.3-k3s2
  kernelVersion: 3.10.0-1160.81.1.el7.x86_64
  kubeProxyVersion: v1.18.9+k3s1
  kubeletVersion: v1.18.9+k3s1
  machineID: 091bbe3a46464c0bac4082cd351b8db4
  operatingSystem: linux
  osImage: CentOS Linux 7 (Core)
  systemUUID: CADD0442-7251-AE77-6375-D151A94CF9A7
The cluster version is the problem here, not the manifest: the nodeInfo above shows kubeletVersion: v1.18.9+k3s1, and Ingress only graduated to networking.k8s.io/v1 in Kubernetes v1.19. On v1.18 that API version simply does not exist, which is exactly what the error says. Either upgrade the cluster or rewrite the manifest against apiVersion: networking.k8s.io/v1beta1.
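For reference, a v1beta1 version of the manifest above would look roughly like this (same resource names as the original; the v1 backend.service block becomes serviceName/servicePort):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argocdingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: argocd.fonepay.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: argocd-server
          servicePort: http
You can verify which API versions the cluster actually serves with kubectl api-versions | grep networking.k8s.io.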

Istio integration with Prometheus

I'm using a microk8s installation of Kubernetes and have installed Istio and Prometheus using:
microk8s enable prometheus
and
microk8s enable istio
I have also successfully added Istio's built-in Prometheus and Grafana following the doc. However, now I have two separate Prometheus and Grafana installations in my cluster: 1. the one I installed using microk8s, and 2. the one installed from Istio.
I want to follow the instructions of option 2 in the doc and add extra configuration to the microk8s Prometheus, following the Istio Prometheus federation guide. But I don't know where exactly I should add the configs mentioned in the link to my existing microk8s Prometheus. Here is the list of all objects in my monitoring namespace (microk8s Prometheus):
(base) ➜ inference kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-main-0 2/2 Running 2 (6h3m ago) 19h
pod/prometheus-adapter-59df95d9f5-c6v99 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-operator-7775c66ccf-bl7qj 2/2 Running 2 (6h3m ago) 19h
pod/blackbox-exporter-55c457d5fb-rg9x4 3/3 Running 3 (6h3m ago) 19h
pod/node-exporter-sx4fw 2/2 Running 2 (6h3m ago) 19h
pod/kube-state-metrics-76f6cb7996-xmpss 3/3 Running 3 (6h3m ago) 19h
pod/grafana-6dd5b5f65-l44p4 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-adapter-59df95d9f5-ksp85 1/1 Running 1 (6h3m ago) 19h
pod/prometheus-k8s-0 2/2 Running 3 (6h3m ago) 19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP None <none> 8443/TCP 19h
service/alertmanager-main ClusterIP 10.152.183.234 <none> 9093/TCP 19h
service/blackbox-exporter ClusterIP 10.152.183.176 <none> 9115/TCP,19115/TCP 19h
service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 19h
service/node-exporter ClusterIP None <none> 9100/TCP 19h
service/prometheus-adapter ClusterIP 10.152.183.240 <none> 443/TCP 19h
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 19h
service/prometheus-operated ClusterIP None <none> 9090/TCP 19h
service/prometheus-k8s NodePort 10.152.183.52 <none> 9090:30090/TCP 19h
service/grafana NodePort 10.152.183.15 <none> 3000:30300/TCP 19h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter 1 1 1 1 1 kubernetes.io/os=linux 19h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 19h
deployment.apps/blackbox-exporter 1/1 1 1 19h
deployment.apps/kube-state-metrics 1/1 1 1 19h
deployment.apps/grafana 1/1 1 1 19h
deployment.apps/prometheus-adapter 2/2 2 2 19h
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-7775c66ccf 1 1 1 19h
replicaset.apps/blackbox-exporter-55c457d5fb 1 1 1 19h
replicaset.apps/kube-state-metrics-76f6cb7996 1 1 1 19h
replicaset.apps/grafana-6dd5b5f65 1 1 1 19h
replicaset.apps/prometheus-adapter-59df95d9f5 2 2 2 19h
NAME READY AGE
statefulset.apps/alertmanager-main 1/1 19h
statefulset.apps/prometheus-k8s 1/1 19h
(base) ➜ inference
Which one should I edit? I'm very confused here.
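One concrete pointer: the prometheus-k8s instance in the output above is managed by the Prometheus Operator (note the prometheus-operator pod), so its scrape configuration is generated and should not be edited in place. Extra jobs such as the Istio federation config are normally injected via additionalScrapeConfigs on the Prometheus custom resource. A minimal sketch, assuming the federation job from the Istio doc is saved to a file prometheus-additional.yaml (the file and Secret names here are illustrative):
# create a Secret holding the extra scrape config
kubectl -n monitoring create secret generic additional-scrape-configs \
    --from-file=prometheus-additional.yaml
# then reference it from the Prometheus custom resource
# (typically named "k8s" in the kube-prometheus stack that microk8s deploys):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml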

kube-system pods core-dns and dashboard are pending

[root@master /]# kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5644d7b6d9-97vkp 0/1 Pending 0 17h
kube-system pod/coredns-5644d7b6d9-p7mnl 0/1 Pending 0 17h
kube-system pod/etcd-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-apiserver-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-controller-manager-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-r2rp8 1/1 Running 0 17h
kube-system pod/kube-flannel-ds-amd64-xp25f 1/1 Running 1 49m
kube-system pod/kube-proxy-k4hw5 1/1 Running 0 17h
kube-system pod/kube-proxy-nrzrv 1/1 Running 0 49m
kube-system pod/kube-scheduler-master.pronteelabs.com 1/1 Running 0 17h
kube-system pod/kubernetes-dashboard-7c54d59f66-9w5b7 0/1 Pending 0 45m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
kube-system service/heapster ClusterIP 10.98.205.214 <none> 80/TCP 45m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17h
kube-system service/kubernetes-dashboard ClusterIP 10.105.192.154 <none> 443/TCP 45m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 2 2 2 2 2 beta.kubernetes.io/arch=amd64 17h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 17h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 17h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 17h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 17h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 17h
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 45m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5644d7b6d9 2 2 0 17h
kube-system replicaset.apps/kubernetes-dashboard-7c54d59f66 1 1 0 45m
Can you send the logs for the pods in the Pending state? Also try to describe the pods and see why they are unschedulable (insufficient resources, disk pressure, etc.). Just by looking at this output it is hard to say. Also tell us which version of Kubernetes you are using, how many nodes you have, the resources available per node, etc.
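For example, using the Pending pods from the output above:
kubectl -n kube-system describe pod coredns-5644d7b6d9-97vkp
kubectl -n kube-system describe pod kubernetes-dashboard-7c54d59f66-9w5b7
The Events section at the end of the describe output usually states the scheduling failure (FailedScheduling, with a reason) directly.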
After reading your updates:
Looks like a systemd issue where Docker is not able to find the CNI settings. Can you try applying a network add-on such as Weave or Flannel to your cluster and see if it works?
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Gluster cluster in Kubernetes: Glusterd inactive (dead) after node reboot. How to debug?

I don't know what to do to debug this. I have 1 Kubernetes master node and three slave nodes, and I deployed a Gluster cluster on the three slave nodes just fine with this guide: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md.
I created volumes and everything is working. But when I reboot a slave node and the node reconnects to the master, glusterd.service inside that slave node shows up as dead and nothing works from then on.
[root@kubernetes-node-1 /]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
I don't know what to do from here; for example, /var/log/glusterfs/glusterd.log was last updated 3 days ago (it is not being updated with errors after a reboot or a pod deletion + recreation).
I just want to know where glusterd crashes so I can find out why.
How can I debug this crash?
All the nodes (master + slaves) run on Ubuntu Desktop 18 64 bit LTS Virtualbox VMs.
requested logs (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
glusterfs pod/glusterfs-7nl8l 0/1 Running 62 22h
glusterfs pod/glusterfs-wjnzx 1/1 Running 62 2d21h
glusterfs pod/glusterfs-wl4lx 1/1 Running 112 41h
glusterfs pod/heketi-7495cdc5fd-hc42h 1/1 Running 0 22h
kube-system pod/coredns-86c58d9df4-n2hpk 1/1 Running 0 6d12h
kube-system pod/coredns-86c58d9df4-rbwjq 1/1 Running 0 6d12h
kube-system pod/etcd-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-apiserver-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-controller-manager-kubernetes-master-work 1/1 Running 0 6d12h
kube-system pod/kube-flannel-ds-amd64-785q8 1/1 Running 5 3d19h
kube-system pod/kube-flannel-ds-amd64-8sj2z 1/1 Running 8 3d19h
kube-system pod/kube-flannel-ds-amd64-v62xb 1/1 Running 0 3d21h
kube-system pod/kube-flannel-ds-amd64-wx4jl 1/1 Running 7 3d21h
kube-system pod/kube-proxy-7f6d9 1/1 Running 5 3d19h
kube-system pod/kube-proxy-7sf9d 1/1 Running 0 6d12h
kube-system pod/kube-proxy-n9qxq 1/1 Running 8 3d19h
kube-system pod/kube-proxy-rwghw 1/1 Running 7 3d21h
kube-system pod/kube-scheduler-kubernetes-master-work 1/1 Running 0 6d12h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h
elastic service/glusterfs-dynamic-9ad03769-2bb5-11e9-8710-0800276a5a8e ClusterIP 10.98.38.157 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-a77e02ca-2bb4-11e9-8710-0800276a5a8e ClusterIP 10.97.203.225 <none> 1/TCP 2d19h
elastic service/glusterfs-dynamic-ad16ed0b-2bb6-11e9-8710-0800276a5a8e ClusterIP 10.105.149.142 <none> 1/TCP 2d19h
glusterfs service/heketi ClusterIP 10.101.79.224 <none> 8080/TCP 2d20h
glusterfs service/heketi-storage-endpoints ClusterIP 10.99.199.190 <none> 1/TCP 2d20h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6d12h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
glusterfs daemonset.apps/glusterfs 3 3 0 3 0 storagenode=glusterfs 2d21h
kube-system daemonset.apps/kube-flannel-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 3d21h
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 3d21h
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 3d21h
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 3d21h
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 <none> 6d12h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
glusterfs deployment.apps/heketi 1/1 1 0 2d20h
kube-system deployment.apps/coredns 2/2 2 2 6d12h
NAMESPACE NAME DESIRED CURRENT READY AGE
glusterfs replicaset.apps/heketi-7495cdc5fd 1 1 0 2d20h
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6d12h
requested:
tasos@kubernetes-master-work:~$ kubectl logs -n glusterfs glusterfs-7nl8l
env variable is set. Update in gluster-blockd.service
Please check these similar topics:
GlusterFS deployment on k8s cluster-- Readiness probe failed: /usr/local/bin/status-probe.sh
and
https://github.com/gluster/gluster-kubernetes/issues/539
Check the tcmu-runner.log file to debug it.
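To see where glusterd dies, the systemd journal on the rebooted node is usually the quickest source (standard systemd commands; the glusterd.log path is the one you already mentioned):
journalctl -u glusterd -b        # unit logs since the current boot
systemctl restart glusterd
systemctl status glusterd -l
tail -n 50 /var/log/glusterfs/glusterd.log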
UPDATE:
I think this is your issue:
https://github.com/gluster/gluster-kubernetes/pull/557
The PR is prepared, but not merged yet.
UPDATE 2:
https://github.com/gluster/glusterfs/issues/417
Be sure that rpcbind is installed.
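Since the nodes run Ubuntu 18, a quick check (and install, if it is missing) would be:
systemctl status rpcbind
sudo apt-get install -y rpcbind
sudo systemctl enable --now rpcbind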

How to debug Kubernetes the proper way?

I would like to run Istio to play around with, but I am facing issues with my local Kubernetes installation and am stuck on how to debug it.
This is my current situation:
root@node1:/tmp/istio-0.1.5# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana 10.233.2.70 <pending> 3000:31202/TCP 1h
istio-egress 10.233.39.101 <none> 80/TCP 1h
istio-ingress 10.233.48.51 <pending> 80:30982/TCP,443:31195/TCP 1h
istio-manager 10.233.2.109 <none> 8080/TCP,8081/TCP 1h
istio-mixer 10.233.39.58 <none> 9091/TCP,9094/TCP,42422/TCP 1h
kubernetes 10.233.0.1 <none> 443/TCP 4h
prometheus 10.233.63.20 <pending> 9090:32170/TCP 1h
servicegraph 10.233.39.104 <pending> 8088:30814/TCP 1h
root@node1:/tmp/istio-0.1.5# kubectl get pods
NAME READY STATUS RESTARTS AGE
grafana-1261931457-3hx2p 0/1 Pending 0 1h
istio-ca-3887035158-6p3b0 0/1 Pending 0 1h
istio-egress-1920226302-vmlx1 0/1 Pending 0 1h
istio-ingress-2112208289-ctxj5 0/1 Pending 0 1h
istio-manager-2910860705-z28dp 0/2 Pending 0 1h
istio-mixer-2335471611-rsrhb 0/1 Pending 0 1h
prometheus-3067433533-l2m48 0/1 Pending 0 1h
servicegraph-3127588006-1k5rg 0/1 Pending 0 1h
kubectl get rs
NAME DESIRED CURRENT READY AGE
grafana-1261931457 1 1 0 1h
istio-ca-3887035158 1 1 0 1h
istio-egress-1920226302 1 1 0 1h
istio-ingress-2112208289 1 1 0 1h
istio-manager-2910860705 1 1 0 1h
istio-mixer-2335471611 1 1 0 1h
prometheus-3067433533 1 1 0 1h
servicegraph-3127588006 1 1 0 1h
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
grafana-1261931457-3hx2p 0/1 Pending 0 1h app=grafana,pod-template-hash=1261931457
istio-ca-3887035158-6p3b0 0/1 Pending 0 1h istio=istio-ca,pod-template-hash=3887035158
istio-egress-1920226302-vmlx1 0/1 Pending 0 1h istio=egress,pod-template-hash=1920226302
istio-ingress-2112208289-ctxj5 0/1 Pending 0 1h istio=ingress,pod-template-hash=2112208289
istio-manager-2910860705-z28dp 0/2 Pending 0 1h istio=manager,pod-template-hash=2910860705
istio-mixer-2335471611-rsrhb 0/1 Pending 0 1h istio=mixer,pod-template-hash=2335471611
prometheus-3067433533-l2m48 0/1 Pending 0 1h app=prometheus,pod-template-hash=3067433533
servicegraph-3127588006-1k5rg 0/1 Pending 0 1h app=servicegraph,pod-template-hash=3127588006
root@node1:/tmp/istio-0.1.5# kubectl get nodes --show-labels
NAME STATUS AGE VERSION LABELS
node1 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1,node-role.kubernetes.io/master=true,node-role.kubernetes.io/node=true
node2 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2,node-role.kubernetes.io/master=true,node-role.kubernetes.io/node=true
node3 Ready 5h v1.6.4+coreos.0 app=prometeus,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node3,node-role.kubernetes.io/node=true
node4 Ready 5h v1.6.4+coreos.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node4,node-role.kubernetes.io/node=true
Unfortunately, after reading most of the documentation, I have found only a few ways to debug an installation:
journalctl -r -u kubelet
kubectl get events
kubectl describe deployment
Is there any common workflow to debug a Kubernetes installation?
It's in the documentation: follow the pod troubleshooting steps at
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
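Applied to the output above, the first steps from that guide would look something like:
kubectl describe pod grafana-1261931457-3hx2p     # check the Events section for FailedScheduling
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe nodes                            # look at Taints and Allocatable resources
With every pod Pending while all nodes are Ready, the Events section typically points at node taints or insufficient resources.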