How to configure kubectl to act as a service account? - kubernetes

I wish to run a Drone CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so are not built for the arm64 architecture, so I believe I need to build my own.
I am attempting to adapt the commands from here (see also README.md, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:
$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-demo-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: drone-demo-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME                         SECRETS   AGE
drone-demo-service-account   1         10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the bitnami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /ca.crt
    server: https://rassigma.avril:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
If I copy the ~/.kube/config from my laptop to the docker container, kubectl commands succeed as expected - so this isn't a networking issue, just an authorization one. I do note that my laptop-based ~/.kube/config lists client-certificate-data and client-key-data rather than token under users: user:, but I suspect that's because my base config describes a regular user rather than a service account.
How can I set up kubectl to authorize as a service account?
Some reading I have done that didn't answer the question for me:
Kubernetes documentation on AuthN/AuthZ
Google Kubernetes Engine article on service accounts
Configure Service Accounts for Pods (this describes how to create and associate the accounts, but not how to act as them)
Two blog posts (1, 2) that refer to Service Accounts

It appears you have used | base64 instead of | base64 --decode. The token stored in the secret is already base64-encoded, so piping it through base64 again double-encodes it, and the API server then rejects the doubly-encoded bearer token as unauthorized.
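In other words, the token extraction command from the question should decode rather than re-encode. A corrected version would be:
kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 --decode > secrets/token
With a decoded token, the tr -d '\n' workaround in script.sh also becomes unnecessary, since the line-wrapping newlines were an artifact of the extra base64 encoding.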

Related

Edit kubernetes resource using kubectl run --command

I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
  - name: tesst
    parameters:
      perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?
It would be better if you posted more info about the error or the trace you are getting when executing the command, but I have a question that could be a good insight into what is happening here.
Does the kubectl command that you are running inside bitnami/kubectl:latest have any context that allows it to connect to your cluster?
If you take a look into the kubectl docker hub documentation you can see that you should map a config file to the pod in order to connect to your own cluster.
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
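If you want to run it as a pod inside the cluster rather than with docker run, one way to map the config in is a secret volume. A minimal sketch, assuming you store your kubeconfig in a secret named kubectl-config under the key config (these names are placeholders, not from the question):
kubectl create secret generic kubectl-config --from-file=config=$HOME/.kube/config -n my-namespace
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kube-bitname
  namespace: my-namespace
spec:
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    # kubectl in this image reads /.kube/config, matching the docker run example above
    command: ["kubectl", "get", "testcr", "test", "-n", "my-namespace", "-o", "json"]
    volumeMounts:
    - name: kubeconfig
      mountPath: /.kube
  volumes:
  - name: kubeconfig
    secret:
      secretName: kubectl-config
EOF
Also note that in your original command, the | jq ... | kubectl apply -f - parts are interpreted by your local shell, not run inside the pod; you would need to wrap the whole pipeline in sh -c for it to execute in the container.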

Create access token for native Kubernetes

I want to create service account for native Kubernetes cluster so that I can send API calls:
kubernetes@kubernetes1:~$ kubectl create serviceaccount user1
serviceaccount/user1 created
kubernetes@kubernetes1:~$ kubectl create clusterrole nodeaccessrole --verb=get --verb=list --verb=watch --resource=nodes
clusterrole.rbac.authorization.k8s.io/nodeaccessrole created
kubernetes@kubernetes1:~$ kubectl create clusterrolebinding nodeaccessrolebinding --serviceaccount=default:user1 --clusterrole=nodeaccessrole
clusterrolebinding.rbac.authorization.k8s.io/nodeaccessrolebinding created
kubernetes@kubernetes1:~$
kubernetes@kubernetes1:~$ kubectl get serviceaccount user1
NAME    SECRETS   AGE
user1   0         7m15s
kubernetes@kubernetes1:~$
Do you know how I can get the token?
SOLUTION for v1.25.1:
kubectl create sa cicd
kubectl get sa,secret
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cicd
spec:
  serviceAccount: cicd
  containers:
  - image: nginx
    name: cicd
EOF
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl create token cicd
kubectl create token cicd --duration=999999h
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: cicd
  annotations:
    kubernetes.io/service-account.name: "cicd"
EOF
kubectl get sa,secret
kubectl describe secret cicd
kubectl describe sa cicd
kubectl get sa cicd -oyaml
kubectl get sa,secret
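If the goal is to script around the long-lived token, you can also read it back out of the secret created above once the token controller has populated it; a small sketch using the same jsonpath style as the other answers on this page:
kubectl get secret cicd -o jsonpath='{.data.token}' | base64 --decode && echo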
One thing is not clear:
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
Should I use '--' in the above commands?
If you just want to retrieve the token from the given SA, you can simply execute:
kubectl get secret $(kubectl get sa <sa-name> -o jsonpath='{.secrets[0].name}' -n <namespace>) -o jsonpath='{.data.token}' -n <namespace> | base64 --decode
Feel free to remove the | base64 --decode if you don't want to decode. Just as a side note, this command might need to be amended depending on the type of secret; however, for your use case this should work.
Once you have your value you can execute curl commands, such as:
curl -k -H "Authorization: Bearer $TOKEN" -X GET "https://<KUBE-API-IP>:6443/api/v1/nodes"
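Putting the two together, a minimal sketch (sa-name, namespace, and the API server address are placeholders you need to fill in):
TOKEN=$(kubectl get secret $(kubectl get sa <sa-name> -o jsonpath='{.secrets[0].name}' -n <namespace>) -o jsonpath='{.data.token}' -n <namespace> | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" -X GET "https://<KUBE-API-IP>:6443/api/v1/nodes"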

How to create custom themes on Keycloak Operator deployment on Kubernetes?

Complete flow is somewhat like this:
Step-1: Applying all the relevant YAMLs
$ sudo kind create cluster --name aftab-cluster --config cluster-config.yaml
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.17.0/install.sh | bash -s v0.17.0
$ kubectl apply -f keycloak_backup.yaml
$ kubectl apply -f keycloaks_client.yaml
$ kubectl apply -f keycloaks_realm.yaml //Theme configs not there. So, added loginTheme.
loginTheme:
  description: Login Theme
  type: string
loginWithEmailAllowed:
  description: Login with email
  type: boolean
$ kubectl apply -f keycloak_users.yaml
$ kubectl apply -f keycloaks_crd.yaml
$ kubectl apply -f namespace.yaml
$ kubectl apply -f role.yaml -n keycloak-namespace
$ kubectl apply -f role_binding.yaml -n keycloak-namespace
$ kubectl apply -f sa.yaml -n keycloak-namespace
$ kubectl apply -f operator.yaml -n keycloak-namespace
$ kubectl apply -f keycloak.yaml -n keycloak-namespace
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  labels:
    app: sso
spec:
  instances: 1
  extensions:
    - /PATH/FOR/MY/COLOR-THEME/JAR/
  externalAccess:
    enabled: True
Step-2: Verifying that the pods are running. RUNNING HAPPILY.
$ kubectl get po -n keycloak-namespace // I can see pods are running successfully.
NAME                                   READY   STATUS    RESTARTS   AGE
keycloak-0                             1/1     Running   0          3m13s
keycloak-operator-798747fb9d-2lgzn     1/1     Running   0          4m21s
keycloak-postgresql-85579c4d6d-4tgxj   1/1     Running   0          3m13s
Step-3: Creating a new Realm and client
$ kubectl apply -f my-realm.yaml -n keycloak-namespace
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: myrealm-realm
  labels:
    app: myrealm-realm
spec:
  realm:
    id: "myrealm"
    realm: "myrealm"
    enabled: True
    displayName: "myrealm"
    userRegistration: True
    registrationAllowed: True
    editUsernameAllowed: True
    resetPasswordAllowed: True
    rememberMe: True
    registrationEmailAsUsername: True
    loginTheme: "COLOR-THEME" # <<<<<<<<<< MY CUSTOM THEME
    users:
    - username: "admin"
      firstName: "Admin"
      realmRoles:
      - "offline_access"
      - "uma_authorization"
$ kubectl apply -f my-client.yaml -n keycloak-namespace
Step-4: Finally, accessed the keycloak instance at http://localhost:3010. Working as expected.
Realms, clients, users, etc. are looking good. But my COLOR-THEME is not found in the realm settings tab. Only the default themes are there (keycloak and base).
The directory structure looks like this:
$ ls
cluster-config.yaml   keycloak_backup.yaml    keycloaks_crd.yaml     namespace.yaml   role_binding.yaml   my-client.yaml
xyz                   keycloak_users.yaml     keycloaks_realm.yaml   operator.yaml    sa.yaml             my_realm.yaml
keycloak.yaml         keycloaks_client.yaml   keyclok-ing.yaml       role.yaml        themes              myrealm-realm.yaml
How do we use CRDs in order to use or create new Keycloak themes?
For the first part of the question: if you want to add/change a field (i.e., the Realm Theme) that the Keycloak Operator recognizes natively, the only change you have to make is to add the following to each of your Realm CRDs:
spec:
  realm:
    id: Realm_ID
    ...
    loginTheme: "my_login_theme"
For the second part (i.e., create new Keycloak themes):
You can't do that through the CRD alone. First you create the new theme, then add the theme's folders into the Keycloak deployment, and then you reference it through the Keycloak Operator as previously mentioned.
To check if the Keycloak Operator supports the loginTheme field, search in the file keycloak-operator/deploy/crds/keycloak.org_keycloakrealms.yaml. If it is not there, you will need to add:
loginTheme:
  description: Login Theme
  type: string
loginWithEmailAllowed:
  description: Login with email
  type: boolean
Moreover, in the file pkg/apis/keycloak/v1alpha1/keycloakrealm_types.go you need to add that extra field to the KeycloakAPIRealm struct, namely:
type KeycloakAPIRealm struct {
    // +kubebuilder:validation:Required
    // +optional
    ID string `json:"id"`
    // Realm name.
    // +kubebuilder:validation:Required
    Realm string `json:"realm"`
    // Realm enabled flag.
    // +optional
    Enabled bool `json:"enabled"`
    // Login Theme name
    // +optional
    LoginTheme string `json:"loginTheme,omitempty"`
    .....
}
Then build the project and run it.

Run a specific command in all pods

I was wondering if it is possible to run a specific command (example: echo "foo") at a specific time in all existing pods (pods that are not in the default namespace included). It would be like a CronJob, but the only difference is that I want to specify/deploy it in one place only. Is that even possible?
It is possible. Please find the steps I followed; I hope they help you.
First, create a simple script that reads each pod's name and namespace and execs into it to execute the command.
import os, sys
import logging
from datetime import datetime

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Timestamp used in the name of the file created in each pod
dt = datetime.now()
ts = dt.strftime("%d-%m-%Y-%H-%M-%S-%f")

# List every pod in every namespace, then exec into each one
pods = os.popen("kubectl get po --all-namespaces").readlines()
for pod in pods:
    ns = pod.split()[0]
    po = pod.split()[1]
    try:
        h = os.popen("kubectl -n %s exec -i %s sh -- hostname" % (ns, po)).read()
        os.popen("kubectl -n %s exec -i %s sh -- touch /tmp/foo-%s.txt" % (ns, po, ts))
        logging.debug("Executed on %s" % h)
    except Exception as e:
        logging.error(e)
Next, Dockerize the above script, build and push.
FROM python:3.8-alpine
ENV KUBECTL_VERSION=v1.18.0
WORKDIR /foo
ADD https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl .
RUN chmod +x kubectl && \
    mv kubectl /usr/local/bin
COPY foo.py .
CMD ["python", "foo.py"]
Later we'll use this image in the CronJob. You can see I have installed kubectl in the Dockerfile to trigger the kubectl commands. But that alone is insufficient; we should add a ClusterRole and ClusterRoleBinding to the service account which runs the CronJob.
I have created a namespace foo and bound foo's default service account to the ClusterRole I created, as shown below.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
roleRef:
  kind: ClusterRole
  name: foo
  apiGroup: rbac.authorization.k8s.io
Now the default service account of namespace foo has permissions to get, list, and exec into all the pods in the cluster.
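As a quick sanity check (not part of the original steps), you can verify the binding by impersonating the service account; both commands should print yes:
kubectl auth can-i list pods --as=system:serviceaccount:foo:default
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:foo:default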
Finally, create a CronJob to run the task.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: foo
spec:
  schedule: "15 9 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: foo
            image: harik8/sof:62177831
            imagePullPolicy: Always
          restartPolicy: OnFailure
Log in to the pods and check; it should have created a file with a timestamp in the /tmp directory of each pod.
$ kubectl exec -it app-59666bb5bc-v6p2h sh
# ls -lah /tmp
-rw-r--r-- 1 root root 0 Jun 4 09:15 foo-04-06-2020-09-15-06-792614.txt
Logs:
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Succeeded
DEBUG:root:Executed on foo-1591262100-798ng
DEBUG:root:Executed on grafana-5f6f8cbf75-jtksp
DEBUG:root:Executed on istio-egressgateway-557dcf8d8-npfnd
DEBUG:root:Executed on istio-ingressgateway-6489d9556d-2dp7j
command terminated with exit code 126
DEBUG:root:Executed on OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"hostname\": executable file not found in $PATH": unknown
DEBUG:root:Executed on istiod-774777b79-mvmqm
It is possible, but a bit complicated, and you would need to write everything yourself, as there are no automatic tools to do that as far as I'm aware.
You could use the Kubernetes API to collect all pod names, then loop over them and run a kubectl exec pod_name command against each of those pods.
To list all pods in a cluster, GET /api/v1/pods; this will also list the system ones.
This script could be run using Kubernetes CronJob at your specified time.
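For example, a minimal sketch of the collection step from inside a pod, assuming the default in-cluster service account token mount and that curl and jq are available in the image (none of which is from the original answer):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# List every pod in every namespace via the API server's in-cluster DNS name
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/pods \
  | jq -r '.items[] | .metadata.namespace + " " + .metadata.name'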
There you go:
for ns in $(kubectl get ns -oname | awk -F "/" '{print $2}'); do for pod in $(kubectl get po -n $ns -oname | awk -F "/" '{print $2}'); do kubectl exec $pod -n $ns -- echo foo; done; done
It will return an error if echo (or the command) is not available in the container. Other than that, it should work.

How to sign in kubernetes dashboard?

I just upgraded kubeadm and kubelet to v1.8.0 and installed the dashboard following the official document.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After that, I started the dashboard by running
$ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$'
Then, fortunately, I was able to access the dashboard through http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I was redirected to a login page which I had never seen before.
It looks like there are two ways of authentication.
I tried to upload /etc/kubernetes/admin.conf as the kubeconfig, but that failed. Then I tried to use the token I got from kubeadm token list to sign in, but failed again.
The question is how I can sign in to the dashboard. It looks like they added a lot more security mechanisms than before. Thanks.
As of release 1.7 Dashboard supports user authentication based on:
Authorization: Bearer <token> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.
Bearer Token that can be used on Dashboard login view.
Username/password that can be used on Dashboard login view.
Kubeconfig file that can be used on Dashboard login view.
— Dashboard on Github
Token
Here Token can be Static Token, Service Account Token, OpenID Connect Token from Kubernetes Authenticating, but not the kubeadm Bootstrap Token.
With kubectl, we can get a service account (e.g. deployment-controller) created in Kubernetes by default.
$ kubectl -n kube-system get secret
# All secrets with type 'kubernetes.io/service-account-token' will allow to log in.
# Note that they have different privileges.
NAME                                TYPE                                  DATA   AGE
deployment-controller-token-frsqj   kubernetes.io/service-account-token   3      22h

$ kubectl -n kube-system describe secret deployment-controller-token-frsqj
Name:         deployment-controller-token-frsqj
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=deployment-controller
              kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g
Kubeconfig
The dashboard needs the user in the kubeconfig file to have either username & password or token, but admin.conf only has client-certificate. You can edit the config file to add the token that was extracted using the method above.
$ kubectl config set-credentials cluster-admin --token=bearer_token
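After running that, the users entry in your kubeconfig should look roughly like this (bearer_token standing in for the actual token extracted above):
users:
- name: cluster-admin
  user:
    token: bearer_token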
Alternative (Not recommended for Production)
Here are two ways to bypass the authentication, but use them with caution.
Deploy dashboard with HTTP
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy.
Granting admin privileges to Dashboard's Service Account
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Afterwards you can use Skip option on login page to access Dashboard.
If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command line arguments. You can do so by adding it to the args in kubectl edit deployment/kubernetes-dashboard --namespace=kube-system.
Example:
containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
TL;DR
To get the token in a single oneliner:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
This assumes that your ~/.kube/config is present and valid. And also that kubectl config get-contexts indicates that you are using the correct context (cluster and namespace) for the dashboard you are logging into.
Explanation
I derived this answer from what I learned from #silverfox's answer. That is a very informative write up. Unfortunately it falls short of telling you how to actually put the information into practice. Maybe I've been doing DevOps too long, but I think in shell. It's much more difficult for me to learn or teach in English.
Here is that oneliner with line breaks and indents for readability:
kubectl -n kube-system describe secret $(
  kubectl -n kube-system get secret | \
  awk '/^deployment-controller-token-/{print $1}'
) | \
awk '$1=="token:"{print $2}'
There are 4 distinct commands and they get called in this order:
Line 2 - This is the first command from #silverfox's Token section.
Line 3 - Print only the first field of the line beginning with deployment-controller-token- (which is the secret name)
Line 1 - This is the second command from #silverfox's Token section.
Line 5 - Print only the second field of the line whose first field is "token:"
If you don't want to grant admin permission to the dashboard service account, you can create a cluster admin service account.
$ kubectl create serviceaccount cluster-admin-dashboard-sa
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:cluster-admin-dashboard-sa
And then, you can use the token of just created cluster admin service account.
$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6xm8l   kubernetes.io/service-account-token   3     18m
$ kubectl describe secret cluster-admin-dashboard-sa-token-6xm8l
I quoted it from the giantswarm guide - https://docs.giantswarm.io/guides/install-kubernetes-dashboard/
Combining two answers: 49992698 and 47761914 :
# Create service account
kubectl create serviceaccount -n kube-system cluster-admin-dashboard-sa
# Bind ClusterAdmin role to the service account
kubectl create clusterrolebinding -n kube-system cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:cluster-admin-dashboard-sa
# Parse the token
TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')
You need to follow these steps before token authentication:
Create a Cluster Admin service account
kubectl create serviceaccount dashboard -n default
Add the cluster binding rules to your dashboard account
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
Get the secret token with this command
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
Choose token authentication in the Kubernetes dashboard login page
Now you are able to log in.
A self-explanatory simple one-liner to extract the token for the Kubernetes dashboard login:
kubectl describe secret -n kube-system | grep deployment -A 12
Copy the token and paste it into the Kubernetes dashboard under the token sign-in option, and you are good to use the Kubernetes dashboard.
All the previous answers are good to me. But a straightforward answer on my side would come from https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md. Just use kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}'). You will have many values for some keys (Name, Namespace, Labels, ..., token). The most important is the token that corresponds to your name. Copy that token and paste it into the token box. Hope this helps.
You can get the token:
kubectl describe secret -n kube-system | grep deployment -A 12
Take the Token value which is something like
token: eyJhbGciOiJSUzI1NiIsI...
Use port-forward to the kubernetes-dashboard service:
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
Access the Site Using:
https://<IP-of-Master-node>:8080/
Provide the Token when asked.
Note the https in the URL. I tested the site on Firefox, because with recent updates Google Chrome has become strict about not allowing traffic from unknown SSL certificates.
Also note that port 8080 should be opened in the VM of the master node.
However, if you are using Kubernetes v1.24 or later, creating a service account will no longer generate a token automatically; instead, you should use the following command.
kubectl -n kubernetes-dashboard create token admin-user
This is finally what works now (2023).
Create two files: create-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
and create-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
then run
kubectl apply -f create-service-account.yaml
kubectl apply -f create-cluster-role-binding.yaml
kubectl -n kubernetes-dashboard create token admin-user
For the latest updates, please check
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Skip login has been disabled by default due to security issues: https://github.com/kubernetes/dashboard/issues/2672
To get it back, add this arg in your dashboard YAML:
- --enable-skip-login
Download
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
and add
type: NodePort
for the Service.
And then run this command:
kubectl apply -f kubernetes-dashboard.yaml
Find the exposed port with the command :
kubectl get services -n kube-system
You should be able to get the dashboard at http://hostname:exposedport/ with no authentication.
An alternative way to obtain the kubernetes-dashboard token:
kubectl -n kubernetes-dashboard get secret -o=jsonpath='{.items[?(@.metadata.annotations.kubernetes\.io/service-account\.name=="kubernetes-dashboard")].data.token}' | base64 --decode
Explanation:
Get all the secrets in the kubernetes-dashboard namespace.
Look at the items array, and match for: metadata -> annotations -> kubernetes.io/service-account.name == kubernetes-dashboard
Print data -> token
Decode the content. (If you perform kubectl describe secret, the token is already decoded.)
For versions 1.26.0/1.26.1 in 2023:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin -n kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user
The newest guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md