Kubernetes dashboard is not deploying - kubernetes

I am trying to install the Kubernetes dashboard on my cluster.
I am running the command below:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Error:
Error from server (BadRequest): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": RoleBinding in version "v1" cannot be handled as a RoleBinding: no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1"
Any suggestions?

You can try to create a Service account in your cluster and a user administrator:
Use this file...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Create user
Create a sample user (if using RBAC, which is on by default on new installs with kops / kubeadm):
kubectl create -f sample-user.yaml
Get login token:
kubectl -n kube-system get secret | grep admin-user
kubectl -n kube-system describe secret admin-user-token-<id displayed by previous command>
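Note that `kubectl describe secret` shows the token already decoded; if you instead fetch it with `kubectl get secret ... -o jsonpath='{.data.token}'`, the value comes back base64-encoded and must be decoded first. A minimal sketch of that decoding step, using a dummy stand-in value rather than a real secret:

```python
import base64

# Secrets store their data base64-encoded; `kubectl describe` decodes it
# for you, but `kubectl get secret -o jsonpath='{.data.token}'` does not.
# `encoded` stands in for the .data.token field of the secret.
encoded = base64.b64encode(b"eyJhbGciOiJSUzI1NiJ9.dummy.sig").decode()
token = base64.b64decode(encoded).decode()
print(token)  # the bearer token you paste into the dashboard login
```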
Login to dashboard
Run kubectl proxy
Go to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Login with user and pass
kubectl config view
Login: admin
Password: the password listed in ~/.kube/config (open the file in an editor and look for "password: ...")
Or choose the token login option and enter the login token from the previous step
Login with minikube
minikube dashboard --url

Related

Kubernetes Dashboard Token Expired in One hour. How to create token for long time

We created the Kubernetes dashboard using the commands below.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl patch svc -n kubernetes-dashboard kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
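The `kubectl patch` call above applies a JSON Patch (RFC 6902) `replace` operation to the Service. As a rough illustration of what that patch does, here is the same operation applied to a plain dict standing in for the Service object (no kubectl or Kubernetes API involved):

```python
import json

# The same JSON Patch document passed to `kubectl patch` above
patch = json.loads('[{"op":"replace","path":"/spec/type","value":"NodePort"}]')

# A minimal stand-in for the dashboard Service object
service = {"spec": {"type": "ClusterIP", "ports": [{"port": 443}]}}

# Apply the single replace operation by walking the JSON Pointer path
op = patch[0]
assert op["op"] == "replace"
target = service
*parents, leaf = op["path"].lstrip("/").split("/")
for key in parents:
    target = target[key]
target[leaf] = op["value"]

print(service["spec"]["type"])  # NodePort
```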
Created a dashboard-adminuser.yaml file like below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
Created a ClusterRoleBinding.yaml file like below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Then we ran the commands below, and at the end we got a token to log in to the dashboard.
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f ClusterRoleBinding.yaml
kubectl -n kubernetes-dashboard create token admin-user
But the problem is that the token we generated expires in one hour. We can't use the same token again once the dashboard logs out.
So can we create a token without expiry, or at least one valid for six months?
What is the command/procedure to create a long-lived token?
One more thing: currently we access the Kubernetes dashboard externally like this:
https://server_ip_address:PORT_NUMBER
Now we want to open the Kubernetes dashboard via our website URL like below, and it should log in to the dashboard automatically:
https://my-domain-name.com/kubernetes-dashboard/{kubernetes-dashboard-goto-url}
You can set --duration=0s:
--duration=0s:
Requested lifetime of the issued token. The server may return a token with a longer or shorter lifetime.
So this should work:
kubectl -n kubernetes-dashboard create token admin-user --duration=0s
You can check the other options with:
kubectl create token --help
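The --duration flag accepts Go-style duration strings such as 30m, 24h, or 720h. As a rough illustration (this helper is not part of kubectl), here is how such a string maps to the expirationSeconds value that ends up in the TokenRequest:

```python
import re

# Convert a Go-style duration string (e.g. "30m", "24h", "720h") into
# seconds, mirroring what kubectl sends as spec.expirationSeconds.
# Illustrative helper only, not part of kubectl itself.
UNITS = {"s": 1, "m": 60, "h": 3600}

def duration_to_seconds(duration: str) -> int:
    total = 0
    for value, unit in re.findall(r"(\d+)([smh])", duration):
        total += int(value) * UNITS[unit]
    return total

print(duration_to_seconds("24h"))   # one day
print(duration_to_seconds("720h"))  # the 30-day maximum mentioned below
```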
After playing around with tokens, it seems the maximum expiration is 720h.
kubectl create token default --duration=488h --output yaml
and the output shows
kind: TokenRequest
metadata:
  creationTimestamp: null
spec:
  audiences:
  - https://container.googleapis.com/v1/projects/test/clusters/test
  boundObjectRef: null
  expirationSeconds: 172800
status:
  expirationTimestamp: "2022-08-21T12:37:02Z"
  token: eyJhbGciOiJSUzI1N....
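You can also check a token's actual lifetime yourself: a service account token is a JWT, and its payload carries iat (issued at) and exp (expiry) claims. A minimal sketch of reading them, using a made-up payload rather than a real cluster token:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT segments strip off
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A made-up token whose exp is 172800 seconds (48h) after iat
claims = {
    "iat": 1660912622,
    "exp": 1661085422,
    "sub": "system:serviceaccount:kubernetes-dashboard:admin-user",
}
fake_token = "eyJhbGciOiJSUzI1NiJ9." + base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=") + ".sig"

decoded = jwt_payload(fake_token)
print(decoded["exp"] - decoded["iat"])  # lifetime in seconds: 172800
```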
So the other option is to go with a kubeconfig, as the dashboard also accepts a kubeconfig file for login.

Why kubernetes default service account has full access to the API on docker desktop?

As far as I know, the default service account in Kubernetes should not have any permissions assigned. But I can still perform the following from a pod on my Docker Desktop Kubernetes:
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/pods
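The shell snippet above just reads the mounted service account credentials and calls the API server with a bearer token. A minimal sketch of the same request construction in Python (a hypothetical helper for illustration; inside a pod the token would come from /var/run/secrets/kubernetes.io/serviceaccount/token, and actually sending the request requires a cluster):

```python
def build_pod_list_request(apiserver: str, token: str):
    """Build the URL and headers for listing pods, mirroring the curl call above.

    Hypothetical helper for illustration only; it does not send anything.
    """
    url = f"{apiserver}/api/v1/pods"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_pod_list_request(
    "https://kubernetes.default.svc", "eyJhbGciOi...")
print(url)  # https://kubernetes.default.svc/api/v1/pods
```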
How is that possible?
Furthermore, I discovered that each pod has a different value for the SA token (cat /var/run/secrets/kubernetes.io/serviceaccount/token), different from the one returned by kubectl describe secret default-token-cl9ds
Shouldn't they be the same?
Update:
$ kubectl get rolebindings.rbac.authorization.k8s.io podviewerrolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"podviewerrolebinding","namespace":"default"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"podviewerrole"},"subjects":[{"kind":"ServiceAccount","name":"podviewerserviceaccount"}]}
  creationTimestamp: "2021-09-07T10:01:51Z"
  name: podviewerrolebinding
  namespace: default
  resourceVersion: "402212"
  uid: 2d32f045-b172-4fff-a6b0-1525b0b96e65
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: podviewerrole
subjects:
- kind: ServiceAccount
  name: podviewerserviceaccount
I hit the same issue. It looks like Docker Desktop grants elevated (i.e. admin) permissions by default; see the article here.
Removing the ClusterRoleBinding docker-for-desktop-binding with the following command fixes the issue:
kubectl delete clusterrolebinding docker-for-desktop-binding

Not able to login to Kubernetes Dashboard with token

I am new to Kubernetes. I have created the control node and want to add a service user to log in to the dashboard.
root@bm-mbi-01:~# cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
root@bm-mbi-01:~# cat admin-user-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
root@bm-mbi-01:~# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-kd8c8
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 226e0ea4-9d2e-480e-8b1d-709b9860e561
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVZOS02T3M2T3AwNUZhQXA3NDdJZENXZlpIU2F6UUtNdEdJNmd3MFg0WEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtkOGM4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMjZlMGVhNC05ZDJlLTQ4MGUtOGIxZC03MDliOTg2MGU1NjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.OfRZlszXRt5AKxCumqSicPOkIK6g-fqPzitH_DjqskFxz6SzwYoDeFIPqyQ8O_6SFFgU6b-lgwiRmZtoj3dTKxr04PDl_t37KD7QTmBtX33vrW_sgq2EFbRkaiRxyTvFPjQDmo04iiyOQmlfzj67MIbgYYmem3NaTqgqx-j-SEi-CKTwVM4JyGa3GrTN7xeRfsFNSq1YOV6Yx1keyiD-gVEZiDxkBCJcdCJOM6p6q1s3cXgH1KWIDYkGXIHFX1f0tvu4xlr_-jgpSVehaAU98WN9DtgXL16ny1ckgKL1mPpBezrjVrf4k1lOSsXHWuE1cnlG9SnUIhbZ9k11HQJNtw
root@bm-mbi-01:~#
Used this token to log in to the dashboard, but after clicking Login there was no response.
With the IP, the URL is browsable, but clicking the login button does nothing.
Finally solved it by doing SSH local port forwarding.
The Kubernetes proxy was started with:
root@bm-mbi-01:~# kubectl proxy --address=10.20.200.75 --accept-hosts='.*' &
SSH tunnel from my local PC to the bm-mbi-01 server:
s.c@MB-SC ~ ssh -L 8001:localhost:8001 bmadmin@bm-mbi-01

Kubernetes-dashboard empty : every resources are forbidden

It seems I have a very common problem but I cannot figure it out.
On a new Kubernetes cluster (v1.17) I'm trying to install the Kubernetes dashboard.
For this I followed the official steps, starting with installing the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
Then I created the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
And the ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Everything is running smoothly and all the objects get created (I can get them and everything looks alright).
After running kubectl proxy the dashboard is accessible at this URL :
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Then I enter the token I got with this command :
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')
I can log in, but the dashboard is empty. The notifications panel is full of:
[OBJECT] is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "[OBJECT]" in API group "extensions" in the namespace "default"
Replace [OBJECT] with every kubernetes object and you have a good overview of my notifications panel ;)
The admin-user obviously does not have enough rights to access the objects.
Questions
Did I miss something ?
How can I debug this situation ?
Thank you for your help !
Edit: That was an outage from my cloud provider. I don't know what happened or how they solved it, but they did something and everything is working now.
In the end, it was an outage from the cloud provider. I also ran into another problem with PVCs; they solved it and, tada, the dashboard is working just fine with no modifications.
The role binding gives this error:
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef

Kubernetes dashboard error using service account token

I have a Kubernetes cluster with various resources running fine. I am trying to get the dashboard working, but I get the following error when I launch the dashboard and enter the service account token:
persistentvolumeclaims is forbidden: User
"system:serviceaccount:kube-system:kubernetes-dashboard" cannot list
resource "persistentvolumeclaims" in API group "" in the namespace
"default"
It does not allow listing any resources from my cluster (persistent volumes, pods, ingresses, etc.). My cluster has multiple namespaces.
This is my service-account yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-test # replace with your preferred username
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin # replace with your preferred username
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin # replace with your preferred username
  namespace: kube-system
Any help is appreciated.
FIX: Create a Role Binding for the cluster role.
This should fix the problem:
kubectl delete clusterrole cluster-admin
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
The above command creates a cluster role binding that grants the dashboard service account full permissions on all resources.
Run the Proxy:
kubectl proxy
Check the dashboard using the URL and port reported by kubectl proxy:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/persistentvolume?namespace=default
More info on the cluster role:
You can check out the 'cluster-admin' role by:
kubectl edit clusterrole cluster-admin
The problem here is that the service account 'kubernetes-dashboard' does not have 'list' permission for the resource 'persistentVolumeClaims'.
I would recommend using Web UI (Dashboard) documentation from Kubernetes.
Deploying the Dashboard UI
The Dashboard UI is not deployed by default. To deploy it, run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
From your yaml I can see that you specified the namespace kube-system, but the dashboard is trying to list resources from the namespace default, at least according to your error message.
Your yaml also seems incorrect for the ServiceAccount name: in the file you have k8s-test, but the error message says it's using kubernetes-dashboard.