Kubernetes dashboard not working: "already exists" and "could not find the requested resource (get services heapster)"

I am new to Kubernetes. The goal is to get the Kubernetes cluster dashboard working.
The cluster was deployed using Kubespray: github.com/kubernetes-incubator/kubespray
Versions:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
When I run kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false as described here, I get:
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
When I run kubectl get services --namespace kube-system, I get:
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   10d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          9d
When I try to reach the dashboard, I get Connection refused.
kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53 output:
2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Other outputs:
kubectl get pods --namespace=kube-system:
NAME                                    READY     STATUS    RESTARTS   AGE
calico-node-bqckz                       1/1       Running   0          12d
calico-node-r9svd                       1/1       Running   2          12d
calico-node-w3tps                       1/1       Running   0          12d
kube-apiserver-kubetest1                1/1       Running   0          12d
kube-apiserver-kubetest2                1/1       Running   0          12d
kube-controller-manager-kubetest1       1/1       Running   2          12d
kube-controller-manager-kubetest2       1/1       Running   2          12d
kube-dns-3888408129-n0m8d               3/3       Running   0          12d
kube-dns-3888408129-z8xx3               3/3       Running   0          12d
kube-proxy-kubetest1                    1/1       Running   0          12d
kube-proxy-kubetest2                    1/1       Running   0          12d
kube-proxy-kubetest3                    1/1       Running   0          12d
kube-scheduler-kubetest1                1/1       Running   2          12d
kube-scheduler-kubetest2                1/1       Running   2          12d
kubedns-autoscaler-1629318612-sd924     1/1       Running   0          12d
kubernetes-dashboard-4167803980-1dz53   1/1       Running   0          1d
nginx-proxy-kubetest3                   1/1       Running   0          12d
kubectl proxy:
Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
kubectl top nodes:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl get svc --namespace=kube-system:
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   12d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          11d
curl http://localhost:8001/ui:
curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused
How can I get the dashboard working? Appreciate your help.

You may be installing dashboard version 1.7. Try installing version 1.6.3; it is well tested.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
Update 10/2/17:
Can you try this? Delete the current deployment and install the 1.6.3 version:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
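After that, a quick sanity check along these lines (a sketch; the grep just matches on the pod name) should show the 1.6.3 pod running, and the UI should then be reachable through the local proxy:
$ kubectl get pods --namespace=kube-system | grep kubernetes-dashboard
$ kubectl proxy
Then browse to http://localhost:8001/ui (the 1.6.x dashboard is served over plain HTTP behind the proxy).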

I believe the Kubernetes dashboard is already available by default if you are deploying through GCP or Azure; the first error already indicates this. To verify, you may type the following commands to look for the dashboard pod and service in the kube-system namespace.
$ kubectl get pods --namespace=kube-system
$ kubectl get svc --namespace=kube-system
From the above commands you should find your existing kubernetes-dashboard, so you don't need to deploy it again. To access the dashboard, type the following command.
$ kubectl proxy
This will make the Dashboard available at http://localhost:8001/ui on the machine where you type this command.
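For example (8001 is just kubectl proxy's default port; any free port works):
$ kubectl proxy --port=8001
Starting to serve on 127.0.0.1:8001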
But to understand more about your problem, may I know which version of Kubernetes and what environment you are using now? Also, it would be great if you could show me the results of these two commands.
$ kubectl get pods --namespace=kube-system
$ kubectl top nodes

Related

Kubernetes-dashboard - error trying to reach service: dial tcp 10.36.0.1:8443: i/o timeout

I googled and searched for an answer to my dilemma; all the answers I could find are not applicable, though they say this has been discussed many times.
Below is my actual cluster setup: four worker nodes, two masters, and one load balancer.
I installed the dashboard:
XXXX@master01:~$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                busybox                                      1/1     Running   30         30h
kube-system            coredns-78cb77577b-lbp87                     1/1     Running   0          30h
kube-system            coredns-78cb77577b-n7rvg                     1/1     Running   0          30h
kube-system            weave-net-d9jb6                              2/2     Running   7          31h
kube-system            weave-net-nsqss                              2/2     Running   0          39h
kube-system            weave-net-wnbq7                              2/2     Running   7          31h
kube-system            weave-net-zfsmn                              2/2     Running   0          39h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-dhcpn   1/1     Running   0          28h
kubernetes-dashboard   kubernetes-dashboard-665f4c5ff-6qnzp         1/1     Running   7          28h
I installed my service accounts and assigned them cluster-admin roles:
XXXX@master01:~$ kubectl get sa -n kubernetes-dashboard
NAME                   SECRETS   AGE
default                1         28h
kube-apiserver         1         25h
kubernetes-dashboard   1         28h
I am using the kube-apiserver service account because it was easy to just load the certs in the browser; I already have them.
Now I try to access the dashboard using the load balancer:
https://loadbalancer.local:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
At this point one would think I should get the dashboard, and every question I have encountered makes that assumption, but I am getting the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.36.0.1:8443: i/o timeout",
  "code": 500
}
So I decided to pull the logs:
kubectl logs -n kubernetes-dashboard service/kubernetes-dashboard
Error from server: Get "https://worker04:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-665f4c5ff-6qnzp/kubernetes-dashboard": x509: certificate signed by unknown authority
All I get is this one line, so I had the idea of finding out what the issue is with the certificate from this worker node: worker04:10250.
I used OpenSSL to check the certificate and discovered the following:
worker04 has generated its own certificate alright, but it has also generated its own CA.
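For reference, the check was along these lines (my reconstruction of the exact invocation; the host and port are taken from the error above):
$ openssl s_client -connect worker04:10250 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
This prints the serving certificate's subject and issuer, which is where the node's self-generated CA showed up.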
And this is where I am, with no idea how to fix this and bring up the dashboard.
I also tried a proxy on master01:
kubectl -v=9 proxy --port=8001 --address=192.168.1.24
and all I got was 403 Forbidden!
I made some progress with this. I figured out that when a node generates and registers itself to a cluster, it generates its own certificate, signed by its own generated CA. To fix this, I generated certificates for all the nodes signed by the cluster CA, simply replaced the auto-generated certificates, and restarted the nodes.
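For anyone hitting the same thing, the per-node replacement step looked roughly like this (a sketch, assuming the cluster CA key pair is at ca.crt/ca.key and using worker04 as the example; the kubelet's certificate paths vary by setup):
# generate a key and CSR for the node
openssl genrsa -out worker04.key 2048
openssl req -new -key worker04.key -subj "/CN=system:node:worker04/O=system:nodes" -out worker04.csr
# sign the CSR with the cluster CA, including the node name as a SAN
openssl x509 -req -in worker04.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile <(printf "subjectAltName=DNS:worker04") -days 365 -out worker04.crt
# copy worker04.crt/worker04.key over the kubelet's auto-generated serving cert, then restart the kubelet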

Why does 'Unauthorized' appear in the startup of a Kubernetes cluster?

I start k8s on my local machine and find most pods are not ready:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS             RESTARTS   AGE
kube-system   coredns-65dbdb44db-zrfpm                     0/1     Running            0          3m9s
kube-system   dashboard-metrics-scraper-545bbb8767-nfnbg   1/1     Running            1          16m
kube-system   kube-flannel-ds-amd64-nm7kr                  0/1     CrashLoopBackOff   5          16m
kube-system   kubernetes-dashboard-65665f84db-rqxpv        0/1     CrashLoopBackOff   4          118s
kube-system   metrics-server-869ffc99cd-6fhfl              0/1     CrashLoopBackOff   5          16m
Then I check the status of flannel:
kubectl logs kube-flannel-ds-amd64-nm7kr -n kube-system
The log tells me that something is unauthorized:
I0809 10:23:51.307347 1 main.go:518] Determining IP address of default interface
I0809 10:23:51.308840 1 main.go:531] Using interface with name wlo1 and address 192.168.1.102
I0809 10:23:51.308894 1 main.go:548] Defaulting external address to interface address (192.168.1.102)
W0809 10:23:51.308917 1 client_config.go:517] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0809 10:23:51.620449 1 main.go:243] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-nm7kr': Unauthorized
Then I check the coredns:
E0809 10:28:06.807582 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Unauthorized
Then I check the kubernetes-dashboard:
2020/08/09 10:28:06 Starting overwatch
2020/08/09 10:28:06 Using namespace: kube-system
2020/08/09 10:28:06 Using in-cluster config to connect to apiserver
2020/08/09 10:28:06 Using secret token for csrf signing
2020/08/09 10:28:06 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Unauthorized
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00036f0e0)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d
I use Kubernetes 1.18.2.
So what's wrong with that?

Get ServiceUnavailable from `kubectl top` with heapster

I have a managed kubernetes setup, called Cluster Container Engine (CCE), in the Open Telekom Cloud. Their documentation can be found online.
My CCE has one master and three nodes which run k8s version 1.9.2 (more details below). I can access the CCE through kubectl and deploy new pods onto it.
The CCE has a deployment of heapster preinstalled. However, attempting to inspect node resource usage fails (I can observe the same effect for pod usage):
$ kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
I've attempted all debugging steps I could think of (see below) and I'm still lost when it comes to fixing this. Any advice?
The deployment, pod and service items for heapster are present (outputs filtered to include only heapster):
$ kubectl get po -n kube-system
NAME                                  READY     STATUS    RESTARTS   AGE
heapster-apiserver-84b844ffcf-lzh4b   1/1       Running   0          47m
$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
heapster   ClusterIP   10.247.150.244   <none>        80/TCP    19d
$ kubectl get deploy -n kube-system
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
heapster-apiserver   1/1     1            1           19d
To check that heapster does indeed collect metrics properly, I've ssh'd into one of the nodes and executed:
$ curl -k http://10.247.150.244:80/api/v1/model/metrics/
[
"cpu/usage_rate",
"memory/usage",
"cpu/request",
"cpu/limit",
"memory/request",
"memory/limit"
]
Pod Log Output
Finally, I checked the log output from the heapster-apiserver-84b844ffcf-lzh4b pod:
$ kubectl logs -n kube-system heapster-apiserver-84b844ffcf-lzh4b
I0311 13:38:18.334525 1 heapster.go:78] /heapster --source=kubernetes.summary_api:''?kubeletHttps=true&inClusterConfig=false&insecure=true&auth=/srv/config --api-server --secure-port=6443
I0311 13:38:18.334718 1 heapster.go:79] Heapster version v1.5.3
I0311 13:38:18.340912 1 configs.go:61] Using Kubernetes client with master "https://192.168.1.228:5443" and version <nil>
I0311 13:38:18.340996 1 configs.go:62] Using kubelet port 10255
I0311 13:38:18.358918 1 heapster.go:202] Starting with Metric Sink
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
E0311 13:38:18.540860 1 heapster.go:128] Could not create the API server: missing clientCA file
I0311 13:38:18.558944 1 heapster.go:112] Starting heapster on port 8082
Cluster Info
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.2-CCE2.0.7-B003", GitCommit:"302f471a1e2caa114c9bb708c077fbb363aa2f13", GitTreeState:"clean", BuildDate:"2018-06-20T03:27:16Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME            STATUS   ROLES        AGE   VERSION
192.168.1.163   Ready    worker       19d   v1.9.2-CCE2.0.7-B003
192.168.1.211   Ready    nfs-server   19d   v1.9.2-CCE2.0.7-B003
192.168.1.227   Ready    worker       19d   v1.9.2-CCE2.0.7-B003
All nodes use EulerOS_2.0_SP2 with kernel version 3.10.0-327.59.59.46.h38.x86_64.
I0311 13:38:18.510751 1 serving.go:327] Generated self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
It seems like your API server is running on HTTP but heapster has an HTTPS URL configured. You need to set the --source parameter to override the Kubernetes master, as described here:
--source=kubernetes:http://master-ip?inClusterConfig=false&useServiceAccount=true&auth=
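One way to apply that (a sketch; heapster-apiserver is the deployment name from your output above, while the master address and port are placeholders you would need to adjust):
$ kubectl -n kube-system edit deploy heapster-apiserver
# then change the container's --source flag to something like:
#   --source=kubernetes:http://<master-ip>:<insecure-port>?inClusterConfig=false&useServiceAccount=true&auth=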
BTW: Heapster has been deprecated, and it is advised to switch to metrics-server.
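If you do switch, one way to install it (a sketch; the manifest URL comes from the metrics-server project and may change, so check its README first):
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top nodes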

Kubernetes Kubeadm single node, dashboard "malformed http response"

I've just set up a single node Kubernetes cluster following the kubeadm guide to the letter. The cluster itself looks good, and all pods are running correctly:
will@kubemaster:~$ sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-w6dkj                           1/1       Running   0          16m
kube-system   calico-node-mjsnr                           2/2       Running   0          16m
kube-system   calico-policy-controller-59fc4f7888-vc6x6   1/1       Running   0          16m
kube-system   etcd-kubemaster                             1/1       Running   0          16m
kube-system   kube-apiserver-kubemaster                   1/1       Running   1          16m
kube-system   kube-controller-manager-kubemaster          1/1       Running   0          16m
kube-system   kube-dns-545bc4bfd4-mbbrl                   3/3       Running   0          16m
kube-system   kube-proxy-wkmlj                            1/1       Running   0          16m
kube-system   kube-scheduler-kubemaster                   1/1       Running   0          16m
kube-system   kubernetes-dashboard-7f9dbb8685-rxwfw       1/1       Running   0          4m
I installed the dashboard using:
sudo kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
I've tried serving up the Kubernetes dashboard locally by running "sudo kubectl proxy".
When I load "http://127.0.0.1:8001" I get the API endpoint listing, and all looks well. But when I add /ui to load the dashboard (http://127.0.0.1:8001/ui), I get the following response:
Error: 'malformed HTTP response "\x15\x03\x01\x00\x02\x02"'
Trying to reach: 'http://192.167.141.3:8443/'
Also note, the above URL gets redirected to the API:
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
If I replace the HTTP with HTTPS, I get "Secure connection failed, SSL received a record that exceeded the maximum permissible length".
If I try loading the dashboard without using the kubectl proxy, e.g. using the master IP, I get a connection refused.
I'm running on Ubuntu 16.04, my kubectl version details are as follows:
will@kubemaster:~$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Since v1.7, Dashboard can only be accessed over HTTPS by default.
It is available at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ with kubectl proxy.
To deploy the dashboard with HTTP (not recommended for production):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy.
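Since the 1.7+ dashboard also asks you to log in, a service-account token works (a sketch; the account name dashboard-admin is my own choice, and binding it to cluster-admin is too broad for production):
$ kubectl -n kube-system create serviceaccount dashboard-admin
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
Paste the token printed by the last command into the dashboard's login screen.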

How to get the endpoint for kubernetes-dashboard

I have installed Kubernetes using minikube on an Ubuntu 16.04 machine.
I have also installed kubernetes-dashboard.
When I try accessing the dashboard I get:
Waiting, endpoint for service is not registered yet
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
.....
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
However, when I try kubectl get pods --all-namespaces I get the output below:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube             1/1       Running   0          11m
kube-system   kube-dns-1301475494-xtb3b               3/3       Running   0          8m
kube-system   kubernetes-dashboard-2039414953-dvv3m   1/1       Running   0          9m
kube-system   kubernetes-dashboard-2crsk              1/1       Running   0          8m
kubectl get endpoints --all-namespaces
NAMESPACE     NAME                      ENDPOINTS                     AGE
default       kubernetes                10.0.2.15:8443                11m
kube-system   kube-controller-manager   <none>                        6m
kube-system   kube-dns                  172.17.0.4:53,172.17.0.4:53   8m
kube-system   kube-scheduler            <none>                        6m
kube-system   kubernetes-dashboard      <none>                        9m
How can I fix this issue? I don't seem to understand what is wrong. I am completely new to Kubernetes.
You need to run minikube dashboard. You shouldn't install dashboard separately; it comes with minikube.
Some of the minikube commands:
./minikube.exe version
./minikube.exe delete
./minikube.exe start --help
./minikube.exe get-k8s-versions
./minikube.exe status
./minikube.exe ip
./minikube.exe dashboard --url=true