How to get a token from a service account? - kubernetes

I'm new to Kubernetes. I need to get the token from a service account that I created. I ran the kubectl get secrets command and got "No resources found in default namespace." in return. Then I ran kubectl describe serviceaccount deploy-bot-account to check my service account. It returns the following:
Name: deploy-bot-account
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
How can I fix this issue?

When a service account is created, Kubernetes automatically creates a secret and maps it to the service account. The secret contains the ca.crt, token and namespace entries that are required for authentication against the API server.
Refer to the following commands:
# kubectl create serviceaccount sa1
# kubectl get serviceaccount sa1 -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
  namespace: default
secrets:
- name: sa1-token-l2hgs
You can retrieve the token from the secret mapped to the service account as shown below
# kubectl get secret sa1-token-l2hgs -oyaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRVeE1Wb1hEVE13TURReU1ERXhNVFV4TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT2lCCk5RTVFPU0Rvdm5IcHQ2MjhkMDZsZ1FJRmpWbGhBb3Q2Uk1TdFFFQ3c3bFdLRnNPUkY4aU1JUDkrdjlJeHFBUEkKNWMrTXkvamNuRWJzMTlUaWEz-NnA0L0pBT25wNm1aSVgrUG1tYU9hS3gzcm13bFZDZHNVQURsdWJHdENhWVNpMQpGMmpBUXRCMkZrTUN2amRqNUdnNnhCTXMrcXU2eDNLQmhKNzl3MEFxNzZFVTBoTkcvS2pCOEd5aVk4b3ZKNStzCmI2LzcwYU53TE54TVU3UjZhV1d2OVJhUmdXYlVPY2RxcWk4WnZtcTZzWGZFTEZqSUZ5SS9GeHd6SWVBalNwRjEKc0xsM1dHVXZONkxhNThUdFhrNVFhVmZKc1JDUGF0ZjZVRzRwRVJDQlBZdUx-lMzl4bW1LVk95TEg5ditsZkVjVApVcng5Qk9LYmQ4VUZrbXdpVSs4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKMkhUMVFvbkswWnFJa0kwUUJDcUJUblRoT0cKeE56ZURSalVSMEpRZTFLT2N1eStZMWhwTVpYOTFIT3NjYTk0RlNiMkhOZy9MVGkwdnB1bWFGT2d1SE9ncndPOQpIVXZVRFZPTDlFazF5SElLUzBCRHdrWDR5WElMajZCOHB1Wm1FTkZlQ0cyQ1I5anpBVzY5ei9CalVYclFGVSt3ClE2OE9YSEUybzFJK3VoNzBiNzhvclRaaC9hVUhybVAycXllakM2dUREMEt1QzlZcGRjNmVna2U3SkdXazJKb3oKYm5OV0NHWklEUjF1VFBiRksxalN5dTlVT1MyZ1dzQ1BQZS8vZ2JqUURmUmpyTjJldmt2RWpBQWF0OEpsd1FDeApnc3ZlTEtCaTRDZzlPZDJEdWphVmxtR2YwUVpXR1FmMFZGaEFlMzIxWE5hajJNL2lhUXhzT3FwZzJ2Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaW-FJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTmhNUzEwYjJ0bGJpMXNNbWhuY3lJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExtNWhiV1VpT2lKellURWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSXhaRFUyWW1Vd09DMDRORGt4TFRFeFpXRXRPV0ppWWkwd01qUXlZV014TVRBd01UVWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOT-FiblE2WkdWbVlYVnNkRHB6WVRFaWZRLmFtdGFORHZUNE9DUlJjZVNpTUE0WjhxaExIeTVOMUlfSG12cTBPWDdvV3RVNzdEWl9wMnVTVm13Wnlqdm1DVFB0T01acUhKZ29BX0puYUphWmlIU3IyaGh3Y2pTN2VPX3dhMF8tamk0ZXFfa0wxVzVNMDVFSG1YZFlTNzdib-DAtZ29jTldxT2RORVhpX1VBRWZLR0RwMU1LeFpFdlBjamRkdDRGWVlBSmJ5LWRqdXNhRjhfTkJEclhJVUNnTzNLUUlMeHZtZjZPY2VDeXYwR3l4ajR4SWRPRTRSSzZabzlzSW5qY0lWTmRvVm85Y3o5UzlvaGExNXdrMWl2VDgwRnBqU3dnUUQ0OTFqdEljdFppUkJBQzIxZkhYMU5scENaQTdIb3Zvck5Yem9maGpmUG03V0xRUUYyQjc4ZkktUEhqMHM2RnNpMmI0NUpzZzFJTTdXWU50UQ==
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: sa1
kubernetes.io/service-account.uid: 1d56be08-8491-11ea-9bbb-0242ac110015
name: sa1-token-l2hgs
namespace: default
type: kubernetes.io/service-account-token
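If you only need the decoded token value (for example, to pass it as a bearer token to the API server), a jsonpath query piped through base64 is a quick way to pull it out of the secret. A minimal sketch, assuming the same secret name as above:
# kubectl get secret sa1-token-l2hgs -o jsonpath='{.data.token}' | base64 --decode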

Related

How to add Mountable secrets to a Service Account?

I realized ServiceAccount token secrets are no longer automatically generated in Kubernetes 1.24. So I manually created a secret and attached it to the ServiceAccount I created, but the Mountable secrets field is still empty, and I couldn't find a way to attach the secret to my ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-sa
  namespace: spinnaker
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: sa-token
  namespace: spinnaker
  annotations:
    kubernetes.io/service-account.name: "spinnaker-sa"
    kubernetes.io/enforce-mountable-secrets: "true"
After I applied the above YAML file, I got the following result when I run kubectl describe serviceaccount:
Name: spinnaker-sa
Namespace: spinnaker
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: sa-token
Events: <none>
Please advise what I should do to add Mountable secrets. Thanks!
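A minimal sketch of one way to populate that field, on the assumption that the Mountable secrets line in kubectl describe mirrors the ServiceAccount's own secrets list (rather than anything set on the Secret side): list the secret directly under secrets: in the ServiceAccount manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-sa
  namespace: spinnaker
secrets:
- name: sa-token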

Restoring k8s service account tokens

I'd like to restore a kubernetes service account token from a backup (which is actually just an export of the corresponding secret):
apiVersion: v1
kind: Secret
metadata:
  name: my-service-account-token-lqrvp
  annotations:
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token
data:
  token: bXktc2ltcGxlLXRva2VuCg==
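For reference, values under data are base64-encoded; decoding the string from the manifest above shows the plain token that is used in the login attempt later on:
# echo 'bXktc2ltcGxlLXRva2VuCg==' | base64 --decode
my-simple-token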
The secret has been applied successfully and was added to the service account:
# kubectl apply -f my-service-account.yaml
secret/my-service-account-token-lqrvp created
# kubectl describe sa my-service-account
Name: my-service-account
Namespace: my-namespace
Labels: <none>
Annotations: kubernetes.io/service-account.name: my-service-account
Image pull secrets: my-service-account-dockercfg-lv9hp
Mountable secrets: my-service-account-token-lv9hp
Tokens: my-service-account-token-lqrvp
Events: <none>
Unfortunately, every time I try to access the API using the token, I always get the error "The token provided is invalid or expired":
# kubectl login https://api.my-k8s-cluster.mydomain.com:6443 --token=my-simple-token
error: The token provided is invalid or expired
I know that the token is usually automatically generated by the controller-manager, but is restoring a token supported by kubernetes?

Attach IAM Role to Serviceaccount from a Pod in EKS

I am trying to attach an IAM role to a pod's service account from within the POD in EKS.
kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN
The current role attached to the $serviceaccount is outlined below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-role
rules:
- apiGroups: [""]
  resources:
  - event
  - secrets
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - create
However, when I execute the kubectl command I get the following:
error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user "system:servi...." cannot get resource "serviceaccounts" in API group "" ...
Is my role correct? Why can't I modify the service account?
Kubernetes by default runs pods with the default service account, which doesn't have the right permissions. Since I cannot determine which one you are using for your pod, I can only assume that you are using either default or some other one created by you. In both cases the error suggests that the service account you are using to run your pod does not have the proper rights.
If you run this pod with the default service account you will have to add the appropriate rights to it. An alternative way is to run your pod with another service account created for this purpose. Here's an example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: run-kubectl-from-pod
Then you will have to create an appropriate Role (you can find the full list of verbs here):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-service-accounts
rules:
- apiGroups: [""]
  resources:
  - serviceaccounts
  verbs:
  - get
  - create
  - patch
  - list
I'm using more verbs here as a test; get and patch would be enough for this use case. I'm mentioning this since it's best practice to grant as few rights as possible.
Then create your RoleBinding accordingly:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-service-account-bind
subjects:
- kind: ServiceAccount
  name: run-kubectl-from-pod
  namespace: default
roleRef:
  kind: Role
  name: modify-service-accounts
  apiGroup: rbac.authorization.k8s.io
And now you just have to reference that service account when you run your pod:
apiVersion: v1
kind: Pod
metadata:
  name: run-kubectl-in-pod
spec:
  serviceAccountName: run-kubectl-from-pod
  containers:
  - name: kubectl-in-pod
    image: bitnami/kubectl
    command:
    - sleep
    - "3600"
Once that is done, you just exec into the pod:
➜ kubectl-pod kubectl exec -ti run-kubectl-in-pod sh
And then annotate the service account:
$ kubectl get sa
NAME                   SECRETS   AGE
default                1         19m
eks-sa                 1         36s
run-kubectl-from-pod   1         17m
$ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN
serviceaccount/eks-sa annotated
$ kubectl describe sa eks-sa
Name: eks-sa
Namespace: default
Labels: <none>
Annotations: eks.amazonaws.com/role-arn:
Image pull secrets: <none>
Mountable secrets: eks-sa-token-sldnn
Tokens: <none>
Events: <none>
If you encounter any issues with a request being refused, please start by reviewing your request attributes and determining the appropriate request verb.
You can also check your access with kubectl auth can-i command:
kubectl-pod kubectl auth can-i patch serviceaccount
The API server will respond with a simple yes or no.
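You can also run the same check from outside the pod by impersonating the service account; a sketch, assuming your own kubeconfig user is allowed to impersonate service accounts:
kubectl auth can-i patch serviceaccounts --as=system:serviceaccount:default:run-kubectl-from-pod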
Please note that if you want to patch a service account to use an IAM role, you will have to delete and re-create any existing pods that are associated with the service account to apply the credentials environment variables. You can read more about it here.
While your role appears to be correct, please keep in mind that when executing kubectl, the RBAC permissions of your account in kubeconfig are relevant for whether you are allowed to perform an action.
From your question, I understand that your role is attached to the service account you are trying to annotate, which is irrelevant to the kubectl permission check.

cert-manager configuration on GKE with clouddns

So I am looking to set up cert-manager on GKE using google clouddns. It seems like a lot of the older questions on SO that have been asked are using http01 instead of dns01. I want to make sure everything is correct so I don't get rate limited.
Here is my issuer.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: engineering#company.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        clouddns:
          project: MY-GCP_PROJECT
          # This is the secret used to access the service account
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json
Here is my certificate.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-website
  namespace: default
spec:
  secretName: my-website-tls
  issuerRef:
    # The issuer created previously
    name: letsencrypt-staging
  dnsNames:
  - my.website.com
I ran these commands to get everything configured:
kubectx my-cluster
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
kubectl get pods --namespace cert-manager
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:dns01-solver#$PROJECT_ID.iam.gserviceaccount.com --role roles/dns.admin
gcloud iam service-accounts keys create key.json --iam-account dns01-solver#$PROJECT_ID.iam.gserviceaccount.com
kubectl create secret generic clouddns-dns01-solver-svc-acct --from-file=key.json
kubectl apply -f issuer.yaml
kubectl apply -f certificate.yaml
Here is the output from kubectl describe certificaterequests:
Name: my-certificaterequests
Namespace: default
Labels: <none>
Annotations: cert-manager.io/certificate-name: my-website
cert-manager.io/private-key-secret-name: my-website-tls
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"cert-manager.io/v1alpha2","kind":"Certificate","metadata":{"annotations":{},"name":"my-cluster","namespace":"default...
API Version: cert-manager.io/v1alpha3
Kind: CertificateRequest
Metadata:
Creation Timestamp: 2020-06-28T00:05:55Z
Generation: 1
Owner References:
API Version: cert-manager.io/v1alpha2
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: my-cluster
UID: 81efe2fd-5f58-4c84-ba25-dd9bc63b032a
Resource Version: 192470614
Self Link: /apis/cert-manager.io/v1alpha3/namespaces/default/certificaterequests/my-certificaterequests
UID: 8a0c3e2d-c48e-4cda-9c70-b8dcfe94f14c
Spec:
Csr: ...
Issuer Ref:
Name: letsencrypt-staging
Status:
Certificate: ...
Conditions:
Last Transition Time: 2020-06-28T00:07:51Z
Message: Certificate fetched from issuer successfully
Reason: Issued
Status: True
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 16m cert-manager Created Order resource default/my-certificaterequests-484284207
Normal CertificateIssued 14m cert-manager Certificate fetched from issuer successfully
I see the secret with kubectl get secret my-website-tls:
NAME             TYPE                DATA   AGE
my-website-tls   kubernetes.io/tls   3      18m
Does that means everything worked and I should try it in prod? What worries me is that I didn't see any DNS records change in my cloud console.
In addition I wanted to confirm:
How would I change the certificate to be for a wildcard *.company.com?
If in fact I am ready for prod and will get the cert, I just need to update the secret name in my ingress deployment and redeploy?
Any insight would be greatly appreciated. Thanks
I answered you on Slack already. You would change the name by changing the value in the dnsNames section of the Certificate (or in spec.tls.*.hosts if you are using ingress-shim); you just include the wildcard name exactly as you showed it.
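For illustration, a minimal sketch of the certificate.yaml above adjusted for a wildcard name (only dnsNames changes; note that a wildcard is only possible with the dns01 solver you are already using):
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-website
  namespace: default
spec:
  secretName: my-website-tls
  issuerRef:
    name: letsencrypt-staging
  dnsNames:
  - "*.company.com"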

Not able to create Prometheus in K8S cluster

I'm trying to install Prometheus on my K8S cluster
When I run the command
kubectl get namespaces
I get the following namespaces:
default Active 26h
kube-public Active 26h
kube-system Active 26h
monitoring Active 153m
prod Active 5h49m
Now I want to install Prometheus via
helm install stable/prometheus --name prom -f k8s-values.yml
and I get this error:
Error: release prom-demo failed: namespaces "default" is forbidden:
User "system:serviceaccount:kube-system:default" cannot get resource
"namespaces" in API group "" in the namespace "default"
Even if I switch to the monitoring namespace I get the same error.
The k8s-values.yml looks like the following:
rbac:
  create: false
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
Any idea what could be missing here?
You are getting this error because you are using RBAC without granting the right permissions.
Give Tiller the required permissions:
taken from https://github.com/helm/helm/blob/master/docs/rbac.md
Example: Service account with cluster-admin role
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly.
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
Create a service account for prometheus:
Change the value of rbac.create to true:
rbac:
  create: true
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
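With Tiller running under its own service account and rbac.create enabled, re-running the install from the question should no longer hit the forbidden error. A sketch reusing the release name and values file from the question (the --namespace flag is optional and simply targets the existing monitoring namespace):
helm install stable/prometheus --name prom --namespace monitoring -f k8s-values.yml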
Look at the Prometheus Operator to spin up all monitoring services from the Prometheus stack.
The link below is helpful:
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests
All the manifests are listed there. Go through those files and deploy whatever you need to monitor in your k8s cluster.
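As a rough sketch of how those manifests are typically applied (the clone path and the need for a second pass while the CRDs register are assumptions based on that project's README, not something stated above):
git clone https://github.com/coreos/prometheus-operator.git
cd prometheus-operator/contrib/kube-prometheus
kubectl apply -f manifests/
# a second pass may be needed once the CustomResourceDefinitions have registered
kubectl apply -f manifests/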