How to create a secret for service account using Kubernetes version 1.24 - kubernetes

I am using Kubernetes version 1.24. I have created a secret for my service account manually, but when I run kubectl get serviceaccounts, it shows that I do not have any secrets for that service account. Why is that?

If you are creating the secret manually, you also have to add the secret to the service account manually.
You can edit the existing service account with kubectl edit sa <name of sa>, or else create the YAML and reapply the changes to configure it.
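For example, a minimal sketch of the edited ServiceAccount, assuming the manually created secret is named token-secret (as in the Update example below):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <SA name>
secrets:
- name: token-secret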
However, when you create the ServiceAccount itself, the secret token is auto-generated (on versions before 1.24; see the Update below):
bash-4.2$ kubectl get sa
NAME      SECRETS   AGE
default   1         11d
bash-4.2$ kubectl create sa test
serviceaccount/test created
bash-4.2$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-dvgd8   kubernetes.io/service-account-token   3      11d
test-token-k6dpd      kubernetes.io/service-account-token   3      7s
bash-4.2$ kubectl get sa
NAME      SECRETS   AGE
default   1         11d
test      1         59s
bash-4.2$
Update
If you are on K8s version 1.24, the ServiceAccount won't create the secret automatically; you have to create it manually.
Example:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: "<SA name>"
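Once the secret exists, Kubernetes populates it with a token, which you can read back. A sketch, reusing the token-secret name from the example above:
kubectl describe secret token-secret
kubectl get secret token-secret -o jsonpath='{.data.token}' | base64 --decode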
If you just want to create a token, you can use: kubectl create token <Name>
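For illustration, requesting a token for the test service account created earlier; the --duration flag is optional and subject to cluster limits:
kubectl create token test
kubectl create token test --duration=24h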
Read more about it: https://medium.com/@harsh.manvar111/k8s-v1-24-is-unable-to-create-a-serviceaccount-secret-798f8454e6e7

When creating a secret manually, it needs to be manually added to the ServiceAccount. You can use kubectl edit for this.
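Alternatively, a one-line sketch using kubectl patch, assuming the manually created secret is named token-secret:
kubectl patch serviceaccount <name of sa> -p '{"secrets":[{"name":"token-secret"}]}'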

Related

Who has created a namespace and who has access to it

I want to understand who has created a namespace and who has access to a specific namespace in OpenShift.
This is specifically required because I need to block access and be very selective about who has it.
Who created a specific namespace in OpenShift can be found by checking the parent Project's annotations:
$ oc describe project example-project
Name: example-project
Created: 15 months ago
Labels: <none>
Annotations: alm-manager=operator-lifecycle-manager.olm-operator
openshift.io/display-name=Example Project
openshift.io/requester=<here is the username>
...
Who has access to a specific namespace: that depends on what you mean. The oc client allows you to review privileges for a given verb in a given namespace, something like this:
$ oc adm policy who-can get pods -n specific-namespace
resourceaccessreviewresponse.authorization.openshift.io/<unknown>
Namespace: specific-namespace
Verb: get
Resource: pods
Users: username1
username2
...
system:admin
system:kube-scheduler
system:serviceaccount:default:router
system:serviceaccount:kube-service-catalog:default
Groups: system:cluster-admins
system:cluster-readers
system:masters
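If you only need the requester's username, a jsonpath query avoids parsing the describe output (a sketch, using the example-project name from above):
oc get project example-project -o jsonpath='{.metadata.annotations.openshift\.io/requester}'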

Problem deploying cockroachdb-client-secure

I am following this helm + secure - guide:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#helm
I deployed the cluster with this command: $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb --namespace=thesis-crdb
This is how it looks: $ helm list --namespace=thesis-crdb
NAME         NAMESPACE     REVISION   UPDATED                                 STATUS     CHART               APP VERSION
my-release   thesis-crdb   1          2021-01-31 17:38:52.8102378 +0100 CET   deployed   cockroachdb-5.0.4   20.2.4
Here is how it looks using: $ kubectl get all --namespace=thesis-crdb
NAME                                    READY   STATUS      RESTARTS   AGE
pod/my-release-cockroachdb-0            1/1     Running     0          7m35s
pod/my-release-cockroachdb-1            1/1     Running     0          7m35s
pod/my-release-cockroachdb-2            1/1     Running     0          7m35s
pod/my-release-cockroachdb-init-fhzdn   0/1     Completed   0          7m35s

NAME                                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
service/my-release-cockroachdb          ClusterIP   None         <none>        26257/TCP,8080/TCP   7m35s
service/my-release-cockroachdb-public   ClusterIP   10.xx.xx.x   <none>        26257/TCP,8080/TCP   7m35s

NAME                                      READY   AGE
statefulset.apps/my-release-cockroachdb   3/3     7m35s

NAME                                    COMPLETIONS   DURATION   AGE
job.batch/my-release-cockroachdb-init   1/1           43s        7m36s
In the my-values.yaml file I only changed tls from false to true:
tls:
  enabled: true
So far so good, but from here on the guide isn't really working for me anymore. I try to get the CSRs as they describe: kubectl get csr --namespace=thesis-crdb
No resources found
OK, perhaps they're not needed. I carry on and deploy the secure client.
I download the file: https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
And changed serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
I try to deploy it with $ kubectl create -f client-secure.yaml --namespace=thesis-crdb but it throws this error:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
Anyone got an idea how to solve this? I'm fairly sure it's something with the namespace that is messing it up.
I have tried to put the namespace in the metadata section:
metadata:
  namespace: thesis-crdb
And then tried to deploy it with kubectl create -f client-secure.yaml, but to no avail:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
You mention in the question that you changed serviceAccountName in the YAML:
And changed serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
So the root cause of your issue is a ServiceAccount misconfiguration.
Background
In your cluster you have something called ServiceAccount.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
For a ServiceAccount you should also configure RBAC, which grants permissions to create resources.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.
RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.
If you don't have proper RBAC permissions you will not be able to create resources.
In Kubernetes you can find Role and ClusterRole. Role sets permissions within a particular namespace and ClusterRole sets permissions in whole cluster.
Besides that, you also need to bind roles using RoleBinding and ClusterRoleBinding.
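For illustration, a minimal namespaced Role plus RoleBinding that grants a service account read access to pods; all names here are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: example-ns
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: example-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: example-sa
  namespace: example-ns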
In addition, if you are using a cloud environment, you may also need special rights in the project. Your guide provides instructions for this here.
Root cause
I've checked the cockroachdb chart: it creates a ServiceAccount, Role, ClusterRole, RoleBinding, and ClusterRoleBinding for cockroachdb and prometheus. There is no configuration for my-release-cockroachdb.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cockroachdb
...
verbs:
- create
- get
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb
...
In client-secure.yaml you changed serviceAccountName to my-release-cockroachdb, and Kubernetes cannot find that ServiceAccount, as it was created neither by a cluster administrator nor by the cockroachdb chart.
To list ServiceAccounts in the default namespace you can use $ kubectl get serviceaccount; to check all ServiceAccounts in the cluster, add -A: $ kubectl get serviceaccount -A.
Solution
Option 1 is to use an existing ServiceAccount with proper permissions, like the SA created by the cockroachdb chart, which is cockroachdb, not my-release-cockroachdb.
Option 2 is to create a ServiceAccount, Role/ClusterRole, and RoleBinding/ClusterRoleBinding for my-release-cockroachdb, as sketched below.
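A hedged sketch of Option 2, creating the missing ServiceAccount and binding it to the ClusterRole the chart already created (names follow the question; verify the role name in your release):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-release-cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb   # reuse the ClusterRole created by the chart
subjects:
- kind: ServiceAccount
  name: my-release-cockroachdb
  namespace: thesis-crdb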

kubernetes service account secrets is not listed

I created a secret of type service-account-token using the below code. The secret got created, but when I run kubectl get secrets the service-account secret is not listed. Where am I going wrong?
apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  extra: YmFyCg==

kubectl create -f sa-secret.yaml
secret/secret-sa-sample created
It might have been created in the default namespace.
Specify the namespace explicitly with the -n $NS argument to kubectl.
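For example (the namespace name is a placeholder):
kubectl get secret secret-sa-sample -n <your-namespace>
kubectl get secret -A | grep secret-sa-sample   # or search every namespace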

One liner command to get secret name and secret's token

What's a one-liner to replace the two commands below for getting a Kubernetes secret's token? An example use case is getting the token from the kubernetes-dashboard-admin secret to log in and view the Kubernetes dashboard.
Command example:
$ kubectl describe serviceaccount default
Name: default
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-zvxf4
Tokens: default-token-zvxf4
Events: <none>
$ kubectl describe secret default-token-zvxf4
Name: default-token-zvxf4
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: 809835e7-2564-439f-82f3-14762688ca80
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: TOKENHERE
The answer I discovered is below: use jsonpath to retrieve the secret name and xargs to pass it to the second command. The base64-encoded token then needs decoding at the end.
$ kubectl get serviceaccount default -o=jsonpath='{.secrets[0].name}' | xargs kubectl get secret -ojsonpath='{.data.token}' | base64 --decode
TOKENHERE%
The trailing % is not part of the token; it is just the shell marking output that lacks a final newline.
This works on macOS without installing an extra tool like jq (which could do the same). Hope this is helpful for others.
You generally don't need to run either command. Kubernetes will automatically mount the credentials to /var/run/secrets/kubernetes.io/serviceaccount/token in a pod declared using that service account, and the various Kubernetes SDKs know to look for credentials there. Accessing the API from a Pod in the Kubernetes documentation describes this setup in more detail.
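For illustration, from a shell inside any pod you can read the mounted credentials directly:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace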
Configure Service Accounts for Pods describes the Pod-level setup that's possible to do, though there are reasonable defaults for these.
apiVersion: v1
kind: Pod                                  # or a pod spec embedded in a Deployment &c.
spec:
  serviceAccountName: my-service-account   # defaults to "default"
  automountServiceAccountToken: true       # defaults to true
I wouldn't try to make requests from outside the cluster as a service account. User permissions are better suited for this use case. As a user you could launch a Job with service-account permissions if you needed to.
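A hedged sketch of such a Job; the name, image, and command are placeholders (reusing the my-service-account name from the example above):
apiVersion: batch/v1
kind: Job
metadata:
  name: sa-demo-job
spec:
  template:
    spec:
      serviceAccountName: my-service-account
      restartPolicy: Never
      containers:
      - name: main
        image: bitnami/kubectl        # any image with the tooling you need
        command: ["kubectl", "get", "pods"]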
Example using kubectl describe instead of kubectl get and adding the namespace definition:
kubectl -n kube-system describe secret $(kubectl -n kube-system describe sa default | grep 'Mountable secrets' | awk '{ print $3 }') | grep 'token:' | awk '{ print $2 }'

How to sign in kubernetes dashboard?

I just upgraded kubeadm and kubelet to v1.8.0 and installed the dashboard following the official document.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After that, I started the dashboard by running
$ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$'
Then fortunately, I was able to access the dashboard thru http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I was redirected to a login page which I had never seen before.
It looks like that there are two ways of authentication.
I tried to upload /etc/kubernetes/admin.conf as the kubeconfig but it failed. Then I tried to use the token from kubeadm token list to sign in, but that failed as well.
The question is how I can sign in to the dashboard. It looks like they added many more security mechanisms than before. Thanks.
As of release 1.7 Dashboard supports user authentication based on:
Authorization: Bearer <token> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.
Bearer Token that can be used on Dashboard login view.
Username/password that can be used on Dashboard login view.
Kubeconfig file that can be used on Dashboard login view.
— Dashboard on Github
Token
Here Token can be Static Token, Service Account Token, OpenID Connect Token from Kubernetes Authenticating, but not the kubeadm Bootstrap Token.
With kubectl, we can inspect a service account (e.g. deployment-controller) created in Kubernetes by default.
$ kubectl -n kube-system get secret
# All secrets with type 'kubernetes.io/service-account-token' will allow you to log in.
# Note that they have different privileges.
NAME                                 TYPE                                  DATA   AGE
deployment-controller-token-frsqj    kubernetes.io/service-account-token   3      22h
$ kubectl -n kube-system describe secret deployment-controller-token-frsqj
Name: deployment-controller-token-frsqj
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=deployment-controller
kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g
Kubeconfig
The dashboard needs the user in the kubeconfig file to have either a username & password or a token, but admin.conf only has client-certificate. You can edit the config file to add the token extracted using the method above.
$ kubectl config set-credentials cluster-admin --token=bearer_token
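Putting the two steps together, a sketch using the secret name from the example above (yours will differ):
TOKEN=$(kubectl -n kube-system get secret deployment-controller-token-frsqj -o jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials cluster-admin --token="${TOKEN}"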
Alternative (not recommended for production)
Here are two ways to bypass the authentication; use them with caution.
Deploy dashboard with HTTP
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy.
Granting admin privileges to Dashboard's Service Account
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Afterwards you can use the Skip option on the login page to access the Dashboard.
If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command line arguments. You can do so by adding it to the args in kubectl edit deployment/kubernetes-dashboard --namespace=kube-system.
Example:
containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login    # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
TL;DR
To get the token in a single oneliner:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
This assumes that your ~/.kube/config is present and valid. And also that kubectl config get-contexts indicates that you are using the correct context (cluster and namespace) for the dashboard you are logging into.
Explanation
I derived this answer from what I learned from @silverfox's answer. That is a very informative write-up. Unfortunately it falls short of telling you how to actually put the information into practice. Maybe I've been doing DevOps too long, but I think in shell; it's much more difficult for me to learn or teach in English.
Here is that oneliner with line breaks and indents for readability:
kubectl -n kube-system describe secret $(
  kubectl -n kube-system get secret | \
  awk '/^deployment-controller-token-/{print $1}'
) | \
awk '$1=="token:"{print $2}'
There are 4 distinct commands and they get called in this order:
Line 2 - This is the first command from @silverfox's Token section.
Line 3 - Print only the first field of the line beginning with deployment-controller-token- (which is the secret name).
Line 1 - This is the second command from @silverfox's Token section.
Line 5 - Print only the second field of the line whose first field is "token:".
If you don't want to grant admin permissions to the dashboard service account, you can create a separate cluster-admin service account.
$ kubectl create serviceaccount cluster-admin-dashboard-sa
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=default:cluster-admin-dashboard-sa
And then you can use the token of the just-created cluster-admin service account.
$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6xm8l kubernetes.io/service-account-token 3 18m
$ kubectl describe secret cluster-admin-dashboard-sa-token-6xm8l
I quoted this from the Giant Swarm guide - https://docs.giantswarm.io/guides/install-kubernetes-dashboard/
Combining two answers: 49992698 and 47761914 :
# Create service account
kubectl create serviceaccount -n kube-system cluster-admin-dashboard-sa
# Bind ClusterAdmin role to the service account
kubectl create clusterrolebinding -n kube-system cluster-admin-dashboard-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:cluster-admin-dashboard-sa
# Parse the token
TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')
You need to follow these steps before token authentication:
Create a Cluster Admin service account
kubectl create serviceaccount dashboard -n default
Add the cluster binding rules to your dashboard account
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
Get the secret token with this command
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
Choose token authentication on the Kubernetes dashboard login page.
Now you are able to log in.
A self-explanatory simple one-liner to extract the token for Kubernetes dashboard login:
kubectl describe secret -n kube-system | grep deployment -A 12
Copy the token, paste it in the Kubernetes dashboard under the token sign-in option, and you are good to use the Kubernetes dashboard.
All the previous answers are good to me, but a straightforward answer on my side comes from https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md. Just use kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}'). You will see several key/value pairs (Name, Namespace, Labels, ..., token). The most important is the token that corresponds to your name. Copy that token and paste it in the token box. Hope this helps.
You can get the token:
kubectl describe secret -n kube-system | grep deployment -A 12
Take the Token value which is something like
token: eyJhbGciOiJSUzI1NiIsI...
Use port-forward to /kubernetes-dashboard:
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
Access the Site Using:
https://<IP-of-Master-node>:8080/
Provide the Token when asked.
Note the https in the URL. I tested the site on Firefox because, with recent updates, Google Chrome has become strict about not allowing traffic with unknown SSL certificates.
Also note that port 8080 should be open on the master node's VM.
However, if you are using Kubernetes version 1.24 or later, creating a service account will not generate a token; instead, you should use the following command:
kubectl -n kubernetes-dashboard create token admin-user
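The generated token is short-lived by default; you can request a longer validity, subject to cluster limits:
kubectl -n kubernetes-dashboard create token admin-user --duration=8h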
This is finally what works now (2023).
Create two files, create-service-account.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
and create-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
then run
kubectl apply -f create-service-account.yaml
kubectl apply -f create-cluster-role-binding.yaml
kubectl -n kubernetes-dashboard create token admin-user
For the latest updates, please check:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Skip login has been disabled by default due to security issues: https://github.com/kubernetes/dashboard/issues/2672
To get it back, add this arg to your dashboard YAML:
- --enable-skip-login
Download
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
and add type: NodePort to the Service, as sketched below.
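A sketch of the edited Service section; the port numbers are illustrative, so keep the ones already present in the downloaded file:
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort   # <-- add this line
  ports:
  - port: 80
    targetPort: 9090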
And then run this command:
kubectl apply -f kubernetes-dashboard.yaml
Find the exposed port with the command :
kubectl get services -n kube-system
You should be able to reach the dashboard at http://hostname:exposedport/ with no authentication.
An alternative way to obtain the kubernetes-dashboard token:
kubectl -n kubernetes-dashboard get secret -o=jsonpath='{.items[?(@.metadata.annotations.kubernetes\.io/service-account\.name=="kubernetes-dashboard")].data.token}' | base64 --decode
Explanation:
Get all the secrets in the kubernetes-dashboard namespace.
Look at the items array and match: metadata -> annotations -> kubernetes.io/service-account.name == kubernetes-dashboard.
Print data -> token.
Decode the content. (If you run kubectl describe secret instead, the token is already decoded.)
For version 1.26.0/1.26.1 in 2023:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin -n kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user
The newest guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md