How to add Mountable secrets to a Service Account?

I realized that ServiceAccount token secrets are no longer automatically generated in Kubernetes 1.24. So I manually created a secret and attached it to the Service Account I created, but the Mountable secrets part is still empty, and I couldn't find a way to attach the secret to my Service Account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-sa
  namespace: spinnaker
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: sa-token
  namespace: spinnaker
  annotations:
    kubernetes.io/service-account.name: "spinnaker-sa"
    kubernetes.io/enforce-mountable-secrets: "true"
After I applied the above yaml file, I got the following result when I ran kubectl describe serviceaccount:
Name: spinnaker-sa
Namespace: spinnaker
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: sa-token
Events: <none>
Please advise what I should do to add Mountable secrets. Thanks!
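For reference, kubectl describe fills the Mountable secrets field from the ServiceAccount's own secrets list rather than from the Secret's annotations, so a sketch of a ServiceAccount that also lists the token secret (reusing the names from the yaml above) would look like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-sa
  namespace: spinnaker
secrets:
- name: sa-token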

Related

How to get token from service account?

I'm new to Kubernetes. I need to get the token from a service account which was created by me. I used the kubectl get secrets command and got "No resources found in default namespace." in return. Then I used the kubectl describe serviceaccount deploy-bot-account command to check my service account. It returned the following:
Name: deploy-bot-account
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
How can I fix this issue?
When a service account is created, Kubernetes automatically creates a secret and maps it to the service account. The secret contains ca.crt, token and namespace, which are required for authentication against the API server.
Refer to the following commands:
# kubectl create serviceaccount sa1
# kubectl get serviceaccount sa1 -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
  namespace: default
secrets:
- name: sa1-token-l2hgs
You can retrieve the token from the secret mapped to the service account as shown below
# kubectl get secret sa1-token-l2hgs -oyaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRVeE1Wb1hEVE13TURReU1ERXhNVFV4TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT2lCCk5RTVFPU0Rvdm5IcHQ2MjhkMDZsZ1FJRmpWbGhBb3Q2Uk1TdFFFQ3c3bFdLRnNPUkY4aU1JUDkrdjlJeHFBUEkKNWMrTXkvamNuRWJzMTlUaWEz-NnA0L0pBT25wNm1aSVgrUG1tYU9hS3gzcm13bFZDZHNVQURsdWJHdENhWVNpMQpGMmpBUXRCMkZrTUN2amRqNUdnNnhCTXMrcXU2eDNLQmhKNzl3MEFxNzZFVTBoTkcvS2pCOEd5aVk4b3ZKNStzCmI2LzcwYU53TE54TVU3UjZhV1d2OVJhUmdXYlVPY2RxcWk4WnZtcTZzWGZFTEZqSUZ5SS9GeHd6SWVBalNwRjEKc0xsM1dHVXZONkxhNThUdFhrNVFhVmZKc1JDUGF0ZjZVRzRwRVJDQlBZdUx-lMzl4bW1LVk95TEg5ditsZkVjVApVcng5Qk9LYmQ4VUZrbXdpVSs4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKMkhUMVFvbkswWnFJa0kwUUJDcUJUblRoT0cKeE56ZURSalVSMEpRZTFLT2N1eStZMWhwTVpYOTFIT3NjYTk0RlNiMkhOZy9MVGkwdnB1bWFGT2d1SE9ncndPOQpIVXZVRFZPTDlFazF5SElLUzBCRHdrWDR5WElMajZCOHB1Wm1FTkZlQ0cyQ1I5anpBVzY5ei9CalVYclFGVSt3ClE2OE9YSEUybzFJK3VoNzBiNzhvclRaaC9hVUhybVAycXllakM2dUREMEt1QzlZcGRjNmVna2U3SkdXazJKb3oKYm5OV0NHWklEUjF1VFBiRksxalN5dTlVT1MyZ1dzQ1BQZS8vZ2JqUURmUmpyTjJldmt2RWpBQWF0OEpsd1FDeApnc3ZlTEtCaTRDZzlPZDJEdWphVmxtR2YwUVpXR1FmMFZGaEFlMzIxWE5hajJNL2lhUXhzT3FwZzJ2Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaW-FJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTmhNUzEwYjJ0bGJpMXNNbWhuY3lJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExtNWhiV1VpT2lKellURWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSXhaRFUyWW1Vd09DMDRORGt4TFRFeFpXRXRPV0ppWWkwd01qUXlZV014TVRBd01UVWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOT-FiblE2WkdWbVlYVnNkRHB6WVRFaWZRLmFtdGFORHZUNE9DUlJjZVNpTUE0WjhxaExIeTVOMUlfSG12cTBPWDdvV3RVNzdEWl9wMnVTVm13Wnlqdm1DVFB0T01acUhKZ29BX0puYUphWmlIU3IyaGh3Y2pTN2VPX3dhMF8tamk0ZXFfa0wxVzVNMDVFSG1YZFlTNzdib-DAtZ29jTldxT2RORVhpX1VBRWZLR0RwMU1LeFpFdlBjamRkdDRGWVlBSmJ5LWRqdXNhRjhfTkJEclhJVUNnTzNLUUlMeHZtZjZPY2VDeXYwR3l4ajR4SWRPRTRSSzZabzlzSW5qY0lWTmRvVm85Y3o5UzlvaGExNXdrMWl2VDgwRnBqU3dnUUQ0OTFqdEljdFppUkJBQzIxZkhYMU5scENaQTdIb3Zvck5Yem9maGpmUG03V0xRUUYyQjc4ZkktUEhqMHM2RnNpMmI0NUpzZzFJTTdXWU50UQ==
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: sa1
kubernetes.io/service-account.uid: 1d56be08-8491-11ea-9bbb-0242ac110015
name: sa1-token-l2hgs
namespace: default
type: kubernetes.io/service-account-token
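You can then decode the data.token field to get the bearer token in plain text; for example:
# kubectl get secret sa1-token-l2hgs -o jsonpath='{.data.token}' | base64 --decode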

Restoring k8s service account tokens

I'd like to restore a kubernetes service account token from a backup (which is actually just an export of the corresponding secret):
apiVersion: v1
kind: Secret
metadata:
  name: my-service-account-token-lqrvp
  annotations:
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token
data:
  token: bXktc2ltcGxlLXRva2VuCg==
The secret has been applied successfully and was added to the service account:
# kubectl apply -f my-service-account.yaml
secret/my-service-account-token-lqrvp created
# kubectl describe sa my-service-account
Name: my-service-account
Namespace: my-namespace
Labels: <none>
Annotations: kubernetes.io/service-account.name: my-service-account
Image pull secrets: my-service-account-dockercfg-lv9hp
Mountable secrets: my-service-account-token-lv9hp
Tokens: my-service-account-token-lqrvp
Events: <none>
Unfortunately, every time I try to access the API using the token, I always get the error "The token provided is invalid or expired":
# kubectl login https://api.my-k8s-cluster.mydomain.com:6443 --token=my-simple-token
error: The token provided is invalid or expired
I know that the token is usually automatically generated by the controller-manager, but is restoring a token supported by kubernetes?
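As a quick check of whether the restored token is accepted at all, you can present it to the API server directly; a sketch, reusing the API endpoint and secret name from above:
# TOKEN=$(kubectl get secret my-service-account-token-lqrvp -o jsonpath='{.data.token}' | base64 --decode)
# curl -k -H "Authorization: Bearer $TOKEN" https://api.my-k8s-cluster.mydomain.com:6443/api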

How to run kubectl within a job in a namespace?

Hi, I saw this documentation where kubectl can be run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace?
I did not see any documentation or examples for this.
When I tried adding a serviceAccount to the container, I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I tried SSHing into the container and running kubectl.
Edit:
As I mentioned earlier, I had added the service account based on the documentation. Below is the yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
    namespace: my-namespace
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
        - name: tester
          image: bitnami/kubectl
          command:
            - "/bin/bash"
            - "-c"
            - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. The permission aspects are therefore the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.
TL;DR:
Your yaml file is correct; maybe there was something else wrong in your cluster. I recommend deleting and recreating these resources and trying again.
Also check the version of your Kubernetes installation and the job image's kubectl version; if they are more than one minor version apart, you may hit unexpected incompatibilities.
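A sketch of that recreate step, assuming the manifests are saved in the two files shown in the reproduction below:
$ kubectl delete -f job-svc-account.yaml -f job-kubectl.yaml
$ kubectl apply -f job-svc-account.yaml -f job-kubectl.yaml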
Security Considerations:
Your job's Role scope follows best practice according to the documentation (a specific role, bound to a specific service account, in a specific namespace).
If you use a ClusterRoleBinding with the cluster-admin role it will work, but it is over-permissioned and not recommended, since it gives full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
To avoid incompatibility, use a kubectl version that matches your server.
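For example, to compare the two:
# prints both the client and the server version; keep them within one minor version of each other
$ kubectl version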
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
        - name: tester
          image: bitnami/kubectl:1.17.3
          command:
            - "/bin/bash"
            - "-c"
            - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
    namespace: my-namespace
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just to add output to the log of get pods.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I applied the Job, ServiceAccount, Role and RoleBinding:
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
As you can see, the job ran successfully with the custom ServiceAccount.
Let me know if you have further questions about this case.
Create a service account like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create a ClusterRoleBinding using this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
    namespace: default   # namespace where the ServiceAccount lives (assumed default here)
Now create the pod with the same config that is given in the documentation.
When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it uses the default service account. This service account doesn't have permission to perform those operations by default. So you need to create a service account, role and rolebinding using a more privileged account: you should have a kubeconfig file with admin (or admin-like) privileges, and use that kubeconfig with kubectl from outside the pod to create the service account, role, rolebinding etc.
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within the pod using kubectl and the service account.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
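As an optional sanity check from outside the pod (a sketch that assumes the ServiceAccount was created in the default namespace), you can verify what the account is allowed to do via impersonation:
kubectl auth can-i list pods --as=system:serviceaccount:default:internal-kubectl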

imagePullSecrets on default service account don't seem to work

I am basically trying to pull GCR images from an Azure Kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments, it works, so the secret is correct. However, when I rely on the default service account, I get an ImagePullBackOff error, which on describing confirms that it's a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific serviceAccount and should be using the default service account.
OK, the problem was that the default service account to which I added the imagePullSecret wasn't in the same namespace as my deployment.
Once I patched the default service account in that namespace, it worked perfectly.
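A sketch of that patch, assuming the pull secret gcr-json-key-stg already exists in the deployment's namespace:
kubectl patch serviceaccount default -n <deployment-namespace> \
  -p '{"imagePullSecrets": [{"name": "gcr-json-key-stg"}]}'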
After you add the secret for pulling the image to the service account, you also need to reference that service account in your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
      serviceAccountName: pull-image # your service account
And the service account pull-image looks like this:
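A minimal sketch, assuming it reuses the gcr-json-key-stg pull secret from the question:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: gcr-json-key-stg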

Not able to create Prometheus in K8S cluster

I'm trying to install Prometheus on my K8S cluster.
When I run the command
kubectl get namespaces
I get the following namespaces:
default Active 26h
kube-public Active 26h
kube-system Active 26h
monitoring Active 153m
prod Active 5h49m
Now I want to install Prometheus via
helm install stable/prometheus --name prom -f k8s-values.yml
and I get this error:
Error: release prom-demo failed: namespaces "default" is forbidden:
User "system:serviceaccount:kube-system:default" cannot get resource
"namespaces" in API group "" in the namespace "default"
Even if I switch to the monitoring namespace I get the same error.
The k8s-values.yml looks like the following:
rbac:
  create: false
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
Any idea what could be missing here?
You are getting this error because you are using RBAC without granting the right permissions.
Give Tiller the permissions it needs (taken from https://github.com/helm/helm/blob/master/docs/rbac.md):
Example: Service account with cluster-admin role
In rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly.
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
Create a service account for Prometheus by changing the value of rbac.create to true:
rbac:
  create: true
server:
  name: server
  service:
    nodePort: 30002
    type: NodePort
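Then re-run the install with the updated values (Helm 2 syntax, matching the command in the question):
$ helm install stable/prometheus --name prom -f k8s-values.yml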
Look at the Prometheus Operator to spin up all the monitoring services of the Prometheus stack.
The link below is helpful:
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests
All the manifests are listed there. Go through those files and deploy whatever you need to monitor in your k8s cluster.
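A rough sketch of deploying those manifests, assuming you clone the repository and apply the whole directory (the create command may need to be run twice so the CRDs register before the custom resources that use them):
$ git clone https://github.com/coreos/prometheus-operator.git
$ cd prometheus-operator/contrib/kube-prometheus
$ kubectl create -f manifests/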