Kubernetes Dashboard Installation giving x509: certificate signed by unknown authority error

Trying to install the Kubernetes dashboard on Ubuntu 16.04 results in an x509: certificate signed by unknown authority error.
A single-node Kubernetes cluster is running fine and deployments are working too.
I tried enabling the apiserver-host property in the kubernetes-dashboard.yaml file, without any luck.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Unable to connect to the server: x509: certificate signed by unknown authority
Any suggestions?
Output from kubectl get all -n kube-system:
NAME                                         READY   STATUS    RESTARTS   AGE
pod/calico-node-6dgkc                        2/2     Running   4          4d23h
pod/calico-node-v8xjr                        2/2     Running   0          2d4h
pod/coredns-fb8b8dccf-8jznp                  1/1     Running   2          4d23h
pod/coredns-fb8b8dccf-pl87d                  1/1     Running   2          4d23h
pod/etcd-ets-kubernetes                      1/1     Running   2          4d23h
pod/kube-apiserver-ets-kubernetes            1/1     Running   2          4d23h
pod/kube-controller-manager-ets-kubernetes   1/1     Running   2          4d23h
pod/kube-proxy-24qjz                         1/1     Running   0          2d4h
pod/kube-proxy-ccqpn                         1/1     Running   2          4d23h
pod/kube-scheduler-ets-kubernetes            1/1     Running   2          4d23h

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/calico-typha   ClusterIP   10.110.39.31   <none>        5473/TCP                 4d23h
service/kube-dns       ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   4d23h
Error from server (Forbidden): replicationcontrollers is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicationcontrollers" in API group "" in the namespace "kube-system"
Error from server (Forbidden): daemonsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "daemonsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): replicasets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicasets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "statefulsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:node:ets-kubernetes" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "kube-system"
Error from server (Forbidden): jobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "jobs" in API group "batch" in the namespace "kube-system"
Error from server (Forbidden): cronjobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "cronjobs" in API group "batch" in the namespace "kube-system"
Output from kubectl get nodes:
NAME             STATUS   ROLES    AGE     VERSION
ets-kubernetes   Ready    master   4d23h   v1.14.1
ets-node         Ready    <none>   2d4h    v1.14.1
(Screenshots attached: Kubectl output.PNG, Certificate Error.PNG)

It would help if you specified how you deployed your cluster, but try regenerating your cluster certificates. If you used kubeadm, then from a control-plane node you can run
kubeadm alpha certs renew
For more info, check the kubeadm documentation on certificate management.
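For example, the full sequence on a control-plane node might look like this (a sketch; in newer kubeadm releases the subcommand is kubeadm alpha certs renew all, while older ones renew each certificate by name):

# Renew all control-plane certificates (newer kubeadm syntax;
# older versions take individual certificate names instead of "all")
kubeadm alpha certs renew all

# admin.conf embeds a client certificate, so refresh the local copy
# that kubectl reads if kubeadm regenerated it
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config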
EDIT, according to the update on the original post:
As your updated output shows, there are many lines like:
User "system:node:ets-kubernetes" cannot list resource .........
This means that user doesn't have a Role granting those actions on the specified resources.
To fix this you have to create a relevant Role and RoleBinding for the user.
You can get more info from the official Using RBAC Authorization documentation.
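A minimal sketch of such a Role and RoleBinding (the names here are illustrative; also note that a kubectl identity of system:node:... usually means the wrong kubeconfig is in use, e.g. kubelet.conf instead of admin.conf, which is worth checking before adding permissions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workload-reader          # illustrative name
  namespace: kube-system
rules:
  - apiGroups: ["", "apps", "batch", "autoscaling"]
    resources: ["replicationcontrollers", "daemonsets", "deployments",
                "replicasets", "statefulsets", "jobs", "cronjobs",
                "horizontalpodautoscalers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workload-reader-binding  # illustrative name
  namespace: kube-system
subjects:
  - kind: User
    name: system:node:ets-kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-reader
  apiGroup: rbac.authorization.k8s.io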

Had the same issue after resetting k8s to defaults while kubectl proxy was still running.
Simply restarting kubectl proxy fixed the issue :)
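For reference, the restart is just stopping the old proxy and starting a new one so it picks up the current cluster certificates (a sketch; 8001 is kubectl proxy's default port):

# Stop the running proxy (Ctrl+C in its terminal, or from another shell:)
pkill -f "kubectl proxy"

# Start it again against the current kubeconfig
kubectl proxy --port=8001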

Related

Why does 'unauthorized' appear in the startup of kubernetes cluster?

I start k8s on my local machine and find most pods are not ready:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS             RESTARTS   AGE
kube-system   coredns-65dbdb44db-zrfpm                     0/1     Running            0          3m9s
kube-system   dashboard-metrics-scraper-545bbb8767-nfnbg   1/1     Running            1          16m
kube-system   kube-flannel-ds-amd64-nm7kr                  0/1     CrashLoopBackOff   5          16m
kube-system   kubernetes-dashboard-65665f84db-rqxpv        0/1     CrashLoopBackOff   4          118s
kube-system   metrics-server-869ffc99cd-6fhfl              0/1     CrashLoopBackOff   5          16m
Then I check the status of flannel:
kubectl logs kube-flannel-ds-amd64-nm7kr -n kube-system
The log tells me that something is unauthorized:
I0809 10:23:51.307347 1 main.go:518] Determining IP address of default interface
I0809 10:23:51.308840 1 main.go:531] Using interface with name wlo1 and address 192.168.1.102
I0809 10:23:51.308894 1 main.go:548] Defaulting external address to interface address (192.168.1.102)
W0809 10:23:51.308917 1 client_config.go:517] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0809 10:23:51.620449 1 main.go:243] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-nm7kr': Unauthorized
Then I check the coredns:
E0809 10:28:06.807582 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Unauthorized
Then I check the kubernetes-dashboard:
2020/08/09 10:28:06 Starting overwatch
2020/08/09 10:28:06 Using namespace: kube-system
2020/08/09 10:28:06 Using in-cluster config to connect to apiserver
2020/08/09 10:28:06 Using secret token for csrf signing
2020/08/09 10:28:06 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Unauthorized
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00036f0e0)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000114080)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d
I use Kubernetes 1.18.2.
So what's going wrong here?

Delete All K8s Pods from a specific namespace

I want to remove all the pods from a specific namespace through an Ansible play. Here, I'm trying to delete all the postgres pods from the namespace postgres-ns.
I'm using the below Ansible play to remove them:
- name: Uninstalling postgres from K8s
  block:
    - name: Removing Statefulsets & Service from "{{postgres_namespace}}"
      shell: kubectl -n "{{postgres_namespace}}" delete statefulsets "{{postgres_release_name}}" && kubectl -n "{{postgres_namespace}}" delete service "{{postgres_release_name}}"-service
      register: postgres_removal_status
    - debug:
        var: postgres_removal_status.stdout_lines
but getting this error:
Error from server (NotFound): statefulsets.apps "postgres" not found
This is the result from kc -n postgres-ns get all:
NAME                        READY   STATUS    RESTARTS   AGE
pod/postgres-postgresql-0   1/1     Running   0          57s

NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/postgres-postgresql            ClusterIP   10.108.64.70   <none>        5432/TCP   57s
service/postgres-postgresql-headless   ClusterIP   None           <none>        5432/TCP   57s

NAME                                  READY   AGE
statefulset.apps/postgres-postgresql  1/1     57s
Can someone help me here?
Thanks in advance.
Error from server (NotFound): statefulsets.apps "postgres" not found
This says you are trying to delete a StatefulSet named postgres, but from your get all output the StatefulSet is actually named statefulset.apps/postgres-postgresql. You need to update the command to delete statefulsets "{{postgres_release_name}}"-postgresql, or pass the correct value for postgres_release_name.
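A sketch of the corrected task under that assumption (note the Service is also named postgres-postgresql in the get all output, not postgres-service):

- name: Removing Statefulsets & Service from "{{postgres_namespace}}"
  shell: >
    kubectl -n "{{postgres_namespace}}" delete statefulset "{{postgres_release_name}}-postgresql" &&
    kubectl -n "{{postgres_namespace}}" delete service "{{postgres_release_name}}-postgresql"
  register: postgres_removal_status

And if the goal is literally to remove every pod in the namespace regardless of name, kubectl -n postgres-ns delete pods --all does that in one step (though the StatefulSet will recreate them unless it is deleted first).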

Unable to get the metrics from heapster pod in Kubernetes Cluster

I am trying to get metrics into the Kubernetes dashboard. For that I'm running the influxdb and heapster pods in my kube-system namespace. I checked the status of the pods using the command kubectl get pods -n kube-system. Here is the link I followed. But heapster shows this in its logs:
E1023 13:41:07.915723 1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: Get https://kubernetes.default/api/v1/nodes?resourceVersion=0: dial tcp: i/o timeout
Could anybody suggest where I might need to change my configuration?
Looks like heapster cannot talk to your kube-apiserver through the kubernetes service in the default namespace. A few things you can try:
Check that the service is defined in the default namespace:
$ kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   92d
Check that all your kube-proxy pods are running ok:
$ kubectl -n kube-system -l=k8s-app=kube-proxy get pods
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-xxxxx   1/1     Running   0          4d18h
...
Check that all your overlay-network pods are running. For example, for Calico:
$ kubectl -n kube-system -l=k8s-app=calico-node get pods
NAME                READY   STATUS    RESTARTS   AGE
calico-node-88fgd   2/2     Running   3          4d21h
...
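As a further check, you can confirm from inside the cluster that the kubernetes service is reachable at all (a sketch; curlimages/curl is just one convenient image choice):

# Run a throwaway pod and hit the API server through the ClusterIP service;
# -k skips certificate verification since only reachability matters here
$ kubectl run api-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -k -m 5 https://kubernetes.default.svc/version

If this times out the same way heapster's log does, the problem is in kube-proxy or the overlay network rather than in heapster itself.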

Kubernetes dashboard not working, “already exists” and “could not find the requested resource (get services heapster)”

I am new to Kubernetes.
The goal is to get the Kubernetes cluster dashboard working.
The Kubernetes cluster was deployed using Kubespray: github.com/kubernetes-incubator/kubespray
Versions:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
When I do kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false as described here
I get:
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
When I run kubectl get services --namespace kube-system, I get:
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   10d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          9d
When I try to reach the dashboard, I get Connection refused.
kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53 output:
2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Other outputs:
kubectl get pods --namespace=kube-system:
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-bqckz                       1/1     Running   0          12d
calico-node-r9svd                       1/1     Running   2          12d
calico-node-w3tps                       1/1     Running   0          12d
kube-apiserver-kubetest1                1/1     Running   0          12d
kube-apiserver-kubetest2                1/1     Running   0          12d
kube-controller-manager-kubetest1       1/1     Running   2          12d
kube-controller-manager-kubetest2       1/1     Running   2          12d
kube-dns-3888408129-n0m8d               3/3     Running   0          12d
kube-dns-3888408129-z8xx3               3/3     Running   0          12d
kube-proxy-kubetest1                    1/1     Running   0          12d
kube-proxy-kubetest2                    1/1     Running   0          12d
kube-proxy-kubetest3                    1/1     Running   0          12d
kube-scheduler-kubetest1                1/1     Running   2          12d
kube-scheduler-kubetest2                1/1     Running   2          12d
kubedns-autoscaler-1629318612-sd924     1/1     Running   0          12d
kubernetes-dashboard-4167803980-1dz53   1/1     Running   0          1d
nginx-proxy-kubetest3                   1/1     Running   0          12d
kubectl proxy:
Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
kubectl top nodes:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl get svc --namespace=kube-system:
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.233.0.3      <none>        53/UDP,53/TCP   12d
kubernetes-dashboard   10.233.28.132   <none>        80/TCP          11d
curl http://localhost:8001/ui:
curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused
How can I get the dashboard working? Appreciate your help.
You may be installing dashboard version 1.7. Try installing version 1.6.3; it's well tested.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
Update 10/2/17:
Can you try this? Delete the current deployment and install version 1.6.3:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
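After reinstalling, a quick way to verify is to check the pod and then go through the proxy (a sketch; /ui is the entry point for pre-1.7 dashboard versions):

# Confirm the new dashboard pod is running
kubectl -n kube-system get pods | grep kubernetes-dashboard

# Tunnel to the cluster and open the UI
kubectl proxy
# then browse to http://localhost:8001/ui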
I believe the kubernetes dashboard is already available by default if you are deploying through GCP or Azure; the first error (AlreadyExists) already suggests this. To verify, you can type the following commands to look for the dashboard pod and service in the kube-system namespace:
kubectl get pods --namespace=kube-system
kubectl get svc --namespace=kube-system
From the above commands, you should find your available kubernetes dashboard, so you don't need to deploy it again. To access the dashboard, type the following command:
kubectl proxy
This will make the Dashboard available at http://localhost:8001/ui on the machine where you run this command.
But to understand your problem better, may I know which version of Kubernetes and what environment you are using now? Also, it would be great if you could show me the results of these two commands:
kubectl get pods --namespace=kube-system
kubectl top nodes

How to get the endpoint for kubernetes-dashboard

I have installed Kubernetes using minikube on an Ubuntu 16.04 machine.
I have also installed kubernetes-dashboard.
When I try accessing the dashboard I get:
Waiting, endpoint for service is not registered yet
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
.....
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
Temporary Error: Endpoint for service is not ready yet
However, when I run kubectl get pods --all-namespaces I get the output below:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube             1/1     Running   0          11m
kube-system   kube-dns-1301475494-xtb3b               3/3     Running   0          8m
kube-system   kubernetes-dashboard-2039414953-dvv3m   1/1     Running   0          9m
kube-system   kubernetes-dashboard-2crsk              1/1     Running   0          8m
kubectl get endpoints --all-namespaces
NAMESPACE     NAME                      ENDPOINTS                     AGE
default       kubernetes                10.0.2.15:8443                11m
kube-system   kube-controller-manager   <none>                        6m
kube-system   kube-dns                  172.17.0.4:53,172.17.0.4:53   8m
kube-system   kube-scheduler            <none>                        6m
kube-system   kubernetes-dashboard      <none>                        9m
How can I fix this issue? I don't seem to understand what is wrong. I am completely new to Kubernetes.
You need to run minikube dashboard. You shouldn't install dashboard separately; it comes with minikube.
Some useful minikube commands:
./minikube.exe version
./minikube.exe delete
./minikube.exe start --help
./minikube.exe get-k8s-versions
./minikube.exe status
./minikube.exe ip
./minikube.exe dashboard --url=true
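For example, on Linux where the binary is plain minikube, getting to the dashboard is just (a sketch; --url prints the address instead of opening a browser):

minikube start
minikube dashboard --url
# prints an address such as http://127.0.0.1:<port>/ to open in a browser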