RBAC - Limit access for one service account - kubernetes

I want to limit the permissions of the following service account, which I created as follows:
kubectl create serviceaccount alice --namespace default
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"`
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
kubectl config set-credentials alice-staging --token=$user_token
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=default
kubectl config get-contexts
#kubectl config use-context alice-staging
This has permission to see everything with:
kubectl --context=alice-staging get pods --all-namespaces
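As a quick sanity check on the extracted token: a service-account token is a JWT, so decoding its payload shows which identity the API server will attribute requests to. The sketch below fabricates a sample payload locally so it can run anywhere; with a real token you would pipe `$user_token` through the same `cut | base64 -d`.

```shell
# A service-account token is a JWT: header.payload.signature, each segment
# base64-encoded. The token below is a fabricated sample for illustration only.
payload=$(printf '%s' '{"sub":"system:serviceaccount:default:alice"}' | base64 -w0)
token="sample-header.${payload}.sample-signature"
# With a real token: printf '%s' "$user_token" | cut -d. -f2 | base64 -d
printf '%s' "$token" | cut -d. -f2 | base64 -d
```

The `sub` claim is the identity RBAC rules are evaluated against.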
I tried to limit it with the following, but it still has all the permissions:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access
rules:
- apiGroups: [""]
  resources: [""]
  verbs: [""]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access-role
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: ClusterRole
  name: no-access
  apiGroup: rbac.authorization.k8s.io
The idea is to limit access to a namespace so that I can distribute tokens to users, but I cannot get it to work. I suspect the service account has inherited permissions, but I cannot disable them for a single service account.
Using: GKE, container-vm
THX!

Note that service accounts are not meant for users, but for processes running inside pods (https://kubernetes.io/docs/admin/service-accounts-admin/).
In Create user in Kubernetes for kubectl you can find how to create a user account for your K8s cluster.
Moreover, I advise you to check whether RBAC is actually enabled in your cluster; if it is not, that would explain why a user can do more operations than expected.
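It is also worth noting that RBAC is additive: a Role can only grant permissions, never take them away, so a "no-access" role cannot restrict anything. A sketch of what a namespace-scoped, least-privilege Role might look like instead (the names here are illustrative):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]        # core API group
  resources: ["pods"]
  verbs: ["get", "list"] # read-only; nothing else is granted
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With RBAC enforced and no other bindings referencing the service account, this limits alice to reading pods in the default namespace.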

Related

How to edit a sealed secret in kubernetes?

I have a Kubernetes SealedSecret with encrypted data in it. How can I edit the sealed secret the way "kubectl edit deployment" edits a deployment?
I know kubectl edit secret works on normal Secrets, not on SealedSecrets.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: my-secret
  namespace: test-ns
spec:
  encryptedData:
    password: AgCEP9MHaeVGelL3jSbnfUF47m+eR0Vv4ufzPyPXBvEjYcJluI0peLgilSqWUMnDLjp00UJq186nRYufEf7Bi1Jxm39t3nDbW13+wTSbblVzb9A2iKJI2VbxgTG/IDodNFxFWdKefZdZgVct2hTsuwlivIxpdxDZtcND4h+Cx8YFQMUpT5oO26oqISzRTh5Ewa6ehWtv6krfeFmbMQCF70eg0yioe6Op0YaQKloiFVInc1JK5KTR5iQYCeKb2R0ovKVa/lipbqjHCYSRhLR/3q5wz/ZuWz7g7ng6G9Q7o1pVv3udQYUvp2B6XvK1Nezc85wbhGgmuz5kcZUa36uF+eKMes6UdPcD7q58ndaj/0KWozdTAuk1OblV7mrUaK8Q45GIf+JqaBfzVt52INMT07P4MId/KB31sZDeE+OwEXhCDVTBAlxSRM0U9NjxDDb+mwUzxHNZHL1sY8M1YCoX+rr6n1+yW1HG42VHLCRzeBa2V31OFuQTNjoNxDEfUq+CSTRNDCmt8UvercSkgyM3mBa6JpHdkySllpqyEJDYKM1YvVRrjVvg1qGTF5dOCx6x3ROXnZtA3NBIafTu0+pHovVo+X7nUkl7hyupd0KKzBG+afgNpYQOxeuei5A+o++o92G5lexxk2v4bQt6ANYBxMlvT0LdBUW9e/L2y+TuNAHL23Xa/aTq1lagNBi9JTowX0lx0br2CqDbKg==
    username: AgArwZm3qh83Fpzles1r/PjTDKQ2/SZ482IKC84T72/kI4M29aG2VT4SCXcqbmtVDYuVUN0wTbsFYsnwY1DSRrL4oup2xRg6N34IxHjj0ZtF1q0YtBKIM/DPcF2bBVAYc9/vOI0L3+VVSF9r93XYEMUWX6hY9eHa8VUHBM/Y65Sj3Il7Pmx/qoEcZ+e9UJhqWEJPotz6W5OMh/Al/QPJZknwUulM4coZ3C0J4TmrBVexPturcRCimDEQnd9UitotnGDoNAp2O28ovhXoImNsJBhNK5LykesRxEfIp4UJOb3I0CpLdoz9khEcb2r31j+KTtxifLez7Rg3Pg7BGpR3EKC3INZWrR8S/aUm5u/dP12ELgW3nq4WbafRitrZcHhLFZkHma/Er8miFbuTXvpFcXE1g+BnG2vIs4kHSl2QcP32HPGKHJJt0KEd1dUJrXXTjS9eXHJ2KsA5DZk4TcFA5dPAG76ZdKo0GCIQwvNeT0Ao4ntqmeOiijAQgmhXdCtD2WVavXi54h0f8F2ue6b0mBFCgTGKZyypjbXznzB/MPAZxgIu+UWQzV1CczwKlitPy638s/9iSan2/u2rhKu2SP0JFMZ6pPnfO51nMpDHtCDGFc1unjsjM4ZpnNXtaQJJmXo7Hw0L4dW2/N3uxCfxNtmYuBxE1t4GCefSUCTIleDgmAbB00nKkja+ml9bidcxawlIgHnoq/XNCqy2R3PkEw==
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: my-secret
      namespace: test-ns
You can update an existing SealedSecret with the --merge-into option of kubeseal: pipe the newly encrypted data through kubeseal and merge it into the existing SealedSecret file, like this:
$ echo -n bar | kubectl create secret generic mysecret --dry-run=client --from-file=foo=/dev/stdin -o json \
| kubeseal > mysealedsecret.json
$ echo -n baz | kubectl create secret generic mysecret --dry-run=client --from-file=bar=/dev/stdin -o json \
| kubeseal --merge-into mysealedsecret.json
For a SealedSecret to be decrypted and work the same as a normal Secret, both the SealedSecret and the resulting Secret need to be in the same namespace.
For more detailed information, refer to the official sealed-secrets GitHub page.
To learn more about the usage of SealedSecret, refer to this document.
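Conceptually, --merge-into just folds the new encryptedData entries into the existing file's spec. The local jq sketch below illustrates that merge semantics (the file names and "AgA..." ciphertext placeholders are hypothetical; this is an illustration, not a replacement for kubeseal):

```shell
# Two SealedSecret JSON fragments; "AgA..." values stand in for real ciphertext.
cat > existing-sealed.json <<'EOF'
{"spec": {"encryptedData": {"foo": "AgA...sealed-foo"}}}
EOF
cat > new-sealed.json <<'EOF'
{"spec": {"encryptedData": {"bar": "AgA...sealed-bar"}}}
EOF
# Merge the new entries into the existing spec, as --merge-into does.
jq -s '.[0].spec.encryptedData += .[1].spec.encryptedData | .[0]' \
  existing-sealed.json new-sealed.json
```

The result keeps the existing "foo" entry and gains the new "bar" entry, which is why --merge-into can be used to add or replace a single value without re-sealing everything else.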

Create access token for native Kubernetes

I want to create a service account for a native Kubernetes cluster so that I can send API calls:
kubernetes@kubernetes1:~$ kubectl create serviceaccount user1
serviceaccount/user1 created
kubernetes@kubernetes1:~$ kubectl create clusterrole nodeaccessrole --verb=get --verb=list --verb=watch --resource=nodes
clusterrole.rbac.authorization.k8s.io/nodeaccessrole created
kubernetes@kubernetes1:~$ kubectl create clusterrolebinding nodeaccessrolebinding --serviceaccount=default:user1 --clusterrole=nodeaccessrole
clusterrolebinding.rbac.authorization.k8s.io/nodeaccessrolebinding created
kubernetes@kubernetes1:~$
kubernetes@kubernetes1:~$ kubectl get serviceaccount user1
NAME    SECRETS   AGE
user1   0         7m15s
kubernetes@kubernetes1:~$
Do you know how I can get the token?
SOLUTION for v1.25.1:
kubectl create sa cicd
kubectl get sa,secret
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cicd
spec:
  serviceAccount: cicd
  containers:
  - image: nginx
    name: cicd
EOF
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl create token cicd
kubectl create token cicd --duration=999999h
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: cicd
  annotations:
    kubernetes.io/service-account.name: "cicd"
EOF
kubectl get sa,secret
kubectl describe secret cicd
kubectl describe sa cicd
kubectl get sa cicd -oyaml
kubectl get sa,secret
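To check the lifetime of a token minted with kubectl create token --duration, you can decode the JWT payload locally and read its exp claim. The sketch below fabricates a sample payload so it runs without a cluster; with a real token you would substitute the kubectl create token cicd output:

```shell
# A bound service-account token is a JWT whose payload carries "exp" (expiry,
# a Unix timestamp) and "sub" (identity). Sample payload built locally here.
payload=$(printf '%s' '{"exp":1700000000,"sub":"system:serviceaccount:default:cicd"}' | base64 -w0)
token="hdr.${payload}.sig"
# Real usage (hypothetical): token=$(kubectl create token cicd --duration=999999h)
printf '%s' "$token" | cut -d. -f2 | base64 -d
```

Comparing exp against `date +%s` tells you how long the token remains valid.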
One thing is not clear:
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
Should I use '--' in the above commands?
If you just want to retrieve the token from the given SA you can simply execute:
kubectl get secret $(kubectl get sa <sa-name> -o jsonpath='{.secrets[0].name}' -n <namespace>) -o jsonpath='{.data.token}' -n <namespace> | base64 --decode
Feel free to remove the | base64 --decode if you don't want to decode the token. As a side note, this command might need to be amended depending on the type of secret; for your use case, however, it should work.
Once you have your value you can execute curl commands, such as:
curl -k -H "Authorization: Bearer $TOKEN" -X GET "https://<KUBE-API-IP>:6443/api/v1/nodes"

RBAC (Role Based Access Control) on K3s

After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I followed the steps, but on k3s rather than k8s, as all the sources imply. From what I could gather (it isn't working), the problem isn't with the actual role-binding process, but rather that the x509 user cert isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented in Rancher's wiki on security for K3s (while it is documented for their k8s implementation and described for Rancher 2.x itself), so I'm not sure whether it's a problem with my implementation or a k3s <-> k8s difference.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
With duplication of the process, my steps are as follows:
Get k3s ca certs
This was described to be under /etc/kubernetes/pki (k8s); however, based on this it seems to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
#generate user key
$ openssl genrsa -out user.key 2048
#generate signing request from ca
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
... all good:
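A useful local check at this point is to confirm that the signed certificate carries the subject the API server will map to a username and group, i.e. CN=user and O=rbac. The self-contained sketch below generates a throwaway CA (so it can run anywhere); against k3s you would use server-ca.crt and server-ca.key instead:

```shell
# Throwaway CA for illustration only; with k3s use server-ca.crt/.key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key -out demo-ca.crt \
  -subj "/CN=demo-ca" -days 1
# Same user key/CSR/cert steps as above.
openssl genrsa -out user.key 2048
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
openssl x509 -req -in user.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -CAcreateserial -out user.crt -days 1
# The API server reads the username from CN and the group from O.
openssl x509 -in user.crt -noout -subject
```

If the subject printed here doesn't match the User/Group referenced in your RoleBinding, authorization will fail even when authentication succeeds.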
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
Setup role & roleBinding (within specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this StackOverflow question presented a solution to the problem, but following the GitHub feed, it came more or less down to the same approach followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: john
> spec:
>   groups:
>   - system:authenticated
>   request: $(cat john.csr | base64 | tr -d '\n')
>   signerName: kubernetes.io/kube-apiserver-client
>   usages:
>   - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   39s   kubernetes.io/kube-apiserver-client   system:admin   Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   52s   kubernetes.io/kube-apiserver-client   system:admin   Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example/answer provided to my question. I marked it as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition, I'd also like to provide an example of RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account and, automatically, a corresponding secret (udef-token-lhvm8), which you can inspect with YAML output.
Get token from created secret:
// kubectl describe secret secretName
$ kubectl describe secret udef-token-lhvm8
The secret will contain three items: (1) ca.crt, (2) namespace, (3) token.
# ... other secret context
Data
====
ca.crt: x bytes
namespace: x bytes
token: xxxx token xxxx
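When read via kubectl get secret -o json (rather than describe), the token value is base64-encoded, so extracting it non-interactively means selecting .data.token and decoding. A local sketch of that step, using a fabricated secret object (against a real cluster you would use kubectl get secret udef-token-lhvm8 -o json):

```shell
# Fabricated secret object; values are base64 of placeholder strings.
cat > sa-secret.json <<EOF
{"data": {"ca.crt": "$(printf 'CA' | base64 -w0)", "token": "$(printf 'xxxx token xxxx' | base64 -w0)"}}
EOF
# Select the token field and decode it, as you would with a real secret.
jq -r '.data.token' sa-secret.json | base64 -d
```

Note that kubectl describe already shows the token decoded, while get -o json/yaml does not; that difference is a common source of confusion.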
Put token into config file
You can start by getting your 'admin' config file and writing it out to a file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under the users section, you can replace the certificate data with the token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The role and rolebinding manifests can be created as required, as previously specified (NB: within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla

Unable to sign in kubernetes dashboard?

After following all the steps to create the service account and role binding, I am unable to sign in.
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
and apply yml file
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
I get the following error when I click Sign in for the config service:
{
  "status": 401,
  "plugins": [],
  "errors": [
    {
      "ErrStatus": {
        "metadata": {},
        "status": "Failure",
        "message": "MSG_LOGIN_UNAUTHORIZED_ERROR",
        "reason": "Unauthorized",
        "code": 401
      }
    }
  ]
}
Roman Marusyk is on the right path. This is what you have to do.
$ kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-rqw2l                kubernetes.io/service-account-token   3      9m8s
kubernetes-dashboard-certs         Opaque                                0      9m8s
kubernetes-dashboard-csrf          Opaque                                1      9m8s
kubernetes-dashboard-key-holder    Opaque                                2      9m8s
kubernetes-dashboard-token-5tqvd   kubernetes.io/service-account-token   3      9m8s
From here you will get the kubernetes-dashboard-token-5tqvd, which is the secret holding the token to access the dashboard.
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F : '{print $2}'
ZXlK...
Now you will need to decode it:
echo -n ZXlK... | base64 -d
eyJhb...
Introduce the token in the sign-in page:
And you are in.
UPDATE
You can also combine the two steps to get the secret and decode it:
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F ' ' '{print $2}' | base64 -d
namespace: kube-system
Usually, Kubernetes Dashboard runs in namespace: kubernetes-dashboard
Please ensure that you have the correct namespace for ClusterRoleBinding and ServiceAccount.
If you have been installing the dashboard according to the official doc, then it was installed to kubernetes-dashboard namespace.
You can check this post for reference.
EDIT 22-May-2020
The issue was that I was accessing the dashboard UI over the HTTP protocol from an external machine, using the default ClusterIP.
Indeed, the Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in: nothing happens after clicking the 'Sign In' button on the login page.
That is discussed in detail in this post on StackOverflow, and a workaround for this behavior is discussed in the same thread. Basically it is the same thing I was referring to above with "You can check this post for reference."

How to sign in kubernetes dashboard?

I just upgraded kubeadm and kubelet to v1.8.0. And install the dashboard following the official document.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After that, I started the dashboard by running
$ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$'
Then fortunately, I was able to access the dashboard thru http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I was redirected to a login page like this which I had never met before.
It looks like that there are two ways of authentication.
I tried to upload /etc/kubernetes/admin.conf as the kubeconfig but failed. Then I tried to use the token I got from kubeadm token list to sign in, but failed again.
The question is how I can sign in to the dashboard. It looks like they added a lot more security mechanisms than before. Thanks.
As of release 1.7 Dashboard supports user authentication based on:
Authorization: Bearer <token> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.
Bearer Token that can be used on Dashboard login view.
Username/password that can be used on Dashboard login view.
Kubeconfig file that can be used on Dashboard login view.
— Dashboard on Github
Token
Here Token can be Static Token, Service Account Token, OpenID Connect Token from Kubernetes Authenticating, but not the kubeadm Bootstrap Token.
With kubectl, we can get a service-account token (e.g. the deployment controller's) that Kubernetes creates by default.
$ kubectl -n kube-system get secret
# All secrets with type 'kubernetes.io/service-account-token' will allow to log in.
# Note that they have different privileges.
NAME                                TYPE                                  DATA   AGE
deployment-controller-token-frsqj   kubernetes.io/service-account-token   3      22h
$ kubectl -n kube-system describe secret deployment-controller-token-frsqj
Name: deployment-controller-token-frsqj
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=deployment-controller
kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g
Kubeconfig
The dashboard needs the user in the kubeconfig file to have either username & password or token, but admin.conf only has client-certificate. You can edit the config file to add the token that was extracted using the method above.
$ kubectl config set-credentials cluster-admin --token=bearer_token
Alternative (Not recommended for Production)
Here are two ways to bypass the authentication, but use them with caution.
Deploy dashboard with HTTP
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy.
Granting admin privileges to Dashboard's Service Account
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Afterwards you can use the Skip option on the login page to access the Dashboard.
If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command line arguments. You can do so by adding it to the args in kubectl edit deployment/kubernetes-dashboard --namespace=kube-system.
Example:
containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
TL;DR
To get the token in a single oneliner:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
This assumes that your ~/.kube/config is present and valid. And also that kubectl config get-contexts indicates that you are using the correct context (cluster and namespace) for the dashboard you are logging into.
Explanation
I derived this answer from what I learned from #silverfox's answer. That is a very informative write up. Unfortunately it falls short of telling you how to actually put the information into practice. Maybe I've been doing DevOps too long, but I think in shell. It's much more difficult for me to learn or teach in English.
Here is that oneliner with line breaks and indents for readability:
kubectl -n kube-system describe secret $(
kubectl -n kube-system get secret | \
awk '/^deployment-controller-token-/{print $1}'
) | \
awk '$1=="token:"{print $2}'
There are 4 distinct commands and they get called in this order:
Line 2 - This is the first command from #silverfox's Token section.
Line 3 - Print only the first field of the line beginning with deployment-controller-token- (which is the secret name)
Line 1 - This is the second command from #silverfox's Token section.
Line 5 - Print only the second field of the line whose first field is "token:"
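The awk filters can be exercised locally against a fabricated describe-style output (the sample below is a placeholder, not real cluster output) to see exactly what the final stage selects:

```shell
# Fabricated `kubectl describe secret` output for illustration.
cat > describe-sample.txt <<'EOF'
Name:         deployment-controller-token-frsqj
Namespace:    kube-system
Type:         kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIs.sample.payload
EOF
# Same filter as line 5 of the one-liner: field 2 of the "token:" line.
awk '$1=="token:"{print $2}' describe-sample.txt
```

Matching on `$1=="token:"` rather than grepping "token" avoids accidentally matching other lines that merely contain the word.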
If you don't want to grant admin permission to the dashboard's service account, you can create a separate cluster-admin service account.
$ kubectl create serviceaccount cluster-admin-dashboard-sa
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:cluster-admin-dashboard-sa
And then, you can use the token of just created cluster admin service account.
$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6xm8l kubernetes.io/service-account-token 3 18m
$ kubectl describe secret cluster-admin-dashboard-sa-token-6xm8l
I quoted it from giantswarm guide - https://docs.giantswarm.io/guides/install-kubernetes-dashboard/
Combining two answers: 49992698 and 47761914 :
# Create service account
kubectl create serviceaccount -n kube-system cluster-admin-dashboard-sa
# Bind ClusterAdmin role to the service account
kubectl create clusterrolebinding -n kube-system cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:cluster-admin-dashboard-sa
# Parse the token
TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')
You need to follow these steps before using token authentication:
Create a Cluster Admin service account
kubectl create serviceaccount dashboard -n default
Add the cluster binding rules to your dashboard account
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
Get the secret token with this command
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
Choose token authentication in the Kubernetes dashboard login page
Now you are able to log in.
A self-explanatory, simple one-liner to extract the token for Kubernetes dashboard login:
kubectl describe secret -n kube-system | grep deployment -A 12
Copy the token, paste it into the Kubernetes dashboard under the token sign-in option, and you are good to go.
All the previous answers are good to me, but a straightforward answer on my side comes from https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md. Just use kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}'). You will get values for several keys (Name, Namespace, Labels, ..., token). The most important is the token that corresponds to your name. Copy that token and paste it into the token box. Hope this helps.
You can get the token:
kubectl describe secret -n kube-system | grep deployment -A 12
Take the Token value which is something like
token: eyJhbGciOiJSUzI1NiIsI...
Use port-forward to /kubernetes-dashboard:
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
Access the Site Using:
https://<IP-of-Master-node>:8080/
Provide the Token when asked.
Note the https in the URL. I tested the site on Firefox because, with recent updates, Google Chrome has become strict about blocking traffic with unknown SSL certificates.
Also note that port 8080 should be open on the master node's VM.
However, if you are using Kubernetes 1.24 or later, creating a service account no longer automatically generates a token; instead you should use the following command:
kubectl -n kubernetes-dashboard create token admin-user
This is finally what works now (2023).
Create two files: create-service-cccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
and create-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
then run
kubectl apply -f create-service-cccount.yaml
kubectl apply -f create-cluster-role-binding.yaml
kubectl -n kubernetes-dashboard create token admin-user
for latest update please check
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
The skip login option has been disabled by default due to security issues: https://github.com/kubernetes/dashboard/issues/2672
To get it back, add this arg to your dashboard YAML:
- --enable-skip-login
Download
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
add
type: NodePort for the Service
And then run this command:
kubectl apply -f kubernetes-dashboard.yaml
Find the exposed port with the command :
kubectl get services -n kube-system
You should be able to reach the dashboard at http://hostname:exposedport/ with no authentication.
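The Service change described above would look roughly like this (a sketch only; the port numbers are illustrative and should match whatever the alternative HTTP deployment you downloaded actually uses):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # added line: expose the dashboard on a node port
  ports:
  - port: 80
    targetPort: 9090    # assumed HTTP port of the alternative (insecure) dashboard
  selector:
    k8s-app: kubernetes-dashboard
```

After applying, kubectl get services -n kube-system shows the allocated node port in the PORT(S) column.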
An alternative way to obtain the kubernetes-dashboard token:
kubectl -n kubernetes-dashboard get secret -o=jsonpath='{.items[?(@.metadata.annotations.kubernetes\.io/service-account\.name=="kubernetes-dashboard")].data.token}' | base64 --decode
Explanation:
Get all the secrets in the kubernetes-dashboard namespace.
Look at the items array, and match for: metadata -> annotations -> kubernetes.io/service-account.name == kubernetes-dashboard
Print data -> token
Decode content. (If you perform kubectl describe secret, the token is already decoded.)
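The same selection can be reproduced locally with jq on a fabricated secret list (the sample data and token values below are placeholders), which makes the annotation-matching logic explicit:

```shell
# Fabricated `kubectl get secret -o json` style list for illustration.
cat > dashboard-secrets.json <<'EOF'
{"items": [
  {"metadata": {"annotations": {"kubernetes.io/service-account.name": "other-sa"}},
   "data": {"token": "b3RoZXI="}},
  {"metadata": {"annotations": {"kubernetes.io/service-account.name": "kubernetes-dashboard"}},
   "data": {"token": "ZGFzaC10b2tlbg=="}}
]}
EOF
# Match the annotation, take that item's token, decode it.
jq -r '.items[]
  | select(.metadata.annotations["kubernetes.io/service-account.name"] == "kubernetes-dashboard")
  | .data.token' dashboard-secrets.json | base64 -d
```

Only the item whose annotation matches is selected; the other-sa token is ignored.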
For version 1.26.0/1.26.1 in 2023:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin -n kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user
The newest guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md