How to schedule a job on behalf of a service account from another namespace? - kubernetes

I have a Kubernetes service running in namespace NA that is configured to run as a service account A. The service schedules a Kubernetes job in namespace NB. How do I make a job in NB act on behalf of service account A? I tried to specify the name of the service account for the job, but I get the following error:
Error creating: pods "pod_id_x" is forbidden: error looking up service account NB/A: serviceaccount "A" not found
P.S. I am using Google Kubernetes Engine

AFAIK this can be done by granting service account A a RoleBinding in namespace NB that allows it to deploy pods. You just need the proper Role.
You can simply reference a ServiceAccount from another namespace in the RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ns2
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-ns1
  namespace: ns2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: ns1-service-account
  namespace: ns1
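Adapted to the namespaces in the question, a rough sketch could look like this (NA is where service account A lives, NB is where the jobs are created; the Role name and verbs are assumptions, adjust them to whatever your service actually does, and note that real namespace names must be lowercase):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator           # assumed name
  namespace: NB               # the namespace where the jobs run
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch"]   # assumed verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-from-NA   # assumed name
  namespace: NB
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
- kind: ServiceAccount
  name: A                     # the service account the service in NA runs as
  namespace: NA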

Related

Does a Fluentd DaemonSet necessarily need a ClusterRole, or can it be used with a Role as well?

I have a namespace namespace:development in my K8s cluster. I wanted to deploy Fluentd following:
fluentd-daemonset-elasticsearch-rbac.yaml
I ONLY changed:
Type of role from ClusterRole to Role (the rules part is the same)
Name of the ServiceAccount
Instead of namespace: kube-system I changed it to namespace: development in ServiceAccount, Role and RoleBinding
ServiceAccount in RoleBinding to my own service account
When I deployed I got the following error:
start_pod_watch: Exception encountered setting up pod watch from Kubernetes API v1 endpoint https://<ip>:443/api: pods is forbidden: User "system:serviceaccount:development:my-svc-account" cannot list resource "pods" in API group "" at the cluster scope ({"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \\"system:serviceaccount:development:my-svc-account\\" cannot list resource \\"pods\\" in API group \\"\\" at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403} (Fluent::ConfigError)
My question: Is it mandatory to have a ClusterRole to deploy Fluentd in a cluster?
If you have changed the ClusterRole to a Role, you also have to update the binding:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: development
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fluentd
  namespace: development
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
  namespace: development
roleRef:
  kind: Role
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: development
---
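Equivalently, a quick sketch of the same Role and RoleBinding created imperatively (assuming the fluentd ServiceAccount above already exists):
# create the Role and RoleBinding in the development namespace
kubectl create role fluentd --verb=get,list,watch --resource=pods -n development
kubectl create rolebinding fluentd --role=fluentd --serviceaccount=development:fluentd -n development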

K8s - using service accounts across namespaces

I have a service account in the default namespace, with a ClusterRoleBinding to a ClusterRole that can observe jobs.
I wish to use this service account in any namespace rather than have to define a new service account in each new namespace. The service account is used by an init container to check a job has completed before allowing deployment to continue.
Not sure what extra info I need to provide but will do so on request.
You can simply reference a ServiceAccount from another namespace in the RoleBinding:
For example, below is a sample that grants a service account from one namespace read-only access to pods in another namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ns2
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-ns1
  namespace: ns2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: ns1-service-account
  namespace: ns1
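Once applied, you can verify the cross-namespace grant with kubectl auth can-i (assuming ns1-service-account actually exists in ns1); it should print yes:
kubectl auth can-i list pods -n ns2 --as=system:serviceaccount:ns1:ns1-service-account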

deletecollection kubernetes (tekton) resources - specific RBAC needed?

I am trying to delete tekton kubernetes resources in the context of a service account with an on-cluster kubernetes config, and am experiencing errors specific to accessing deletecollection with all tekton resources. An example error is as follows:
pipelines.tekton.dev is forbidden: User "system:serviceaccount:my-account:default" cannot deletecollection resource "pipelines" in API group "tekton.dev" in the namespace "my-namespace"
I have tried to apply RBAC to help here, but continue to experience the same errors. My RBAC attempt is as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
- apiGroups: ["tekton.dev"]
  resources: ["pipelines", "pipelineruns", "tasks", "taskruns"]
  verbs: ["get", "watch", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-role-binding
  namespace: my-namespace
subjects:
- kind: User
  name: system:serviceaccount:my-account:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
These RBAC configurations continue to result in the same error. Is this, or something similar, necessary? Are there any examples of RBAC when interfacing with, and specifically deleting, Tekton resources?
Given two namespaces, my-namespace and my-account, the default service account in the my-account namespace is correctly granted permission to use the deletecollection verb on pipelines in my-namespace.
You can verify this with kubectl auth can-i after applying the manifests:
$ kubectl -n my-namespace --as="system:serviceaccount:my-account:default" auth can-i deletecollection pipelines.tekton.dev
yes
Verify that you have actually applied your RBAC manifests.
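For example, a quick check using the names from your manifests (plain kubectl get/describe, nothing Tekton-specific):
# confirm the objects exist in the target namespace
kubectl get role,rolebinding -n my-namespace
# see which role the binding points at and who the subjects are
kubectl describe rolebinding my-role-binding -n my-namespace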
Change the RBAC as below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
- apiGroups: ["tekton.dev"]
  resources: ["pipelines", "pipelineruns", "tasks", "taskruns"]
  verbs: ["get", "watch", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-account
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
A few things to note:
Fixed subjects to use ServiceAccount instead of User. This is actually the cause of the failure, because the service account was never granted the RBAC.
I assumed that you want to delete the Tekton resources in my-namespace using the default service account of the my-account namespace. If that's not the case, the Role and RoleBinding need to be changed accordingly.
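After applying the corrected manifests, re-running the kubectl auth can-i check should now return yes:
kubectl -n my-namespace --as="system:serviceaccount:my-account:default" auth can-i deletecollection pipelines.tekton.dev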

How to bind roles with service accounts - Kubernetes

I know there are a lot of similar questions but none of them has a solution as far as I have browsed.
Coming to the issue: I have created a service account (using a command), a role (using a .yaml file) and a role binding (using a .yaml file). The role grants access only to pods. But when I log into the dashboard (token method) using the SA the role is attached to, I'm able to view all the resources without any restrictions. Here are the files and commands I used.
Role.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: assembly-prod
  name: testreadrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
RoleBinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: assembly-prod
subjects:
- kind: ServiceAccount
  name: testsa
  apiGroup: ""
roleRef:
  kind: Role
  name: testreadrole
  apiGroup: rbac.authorization.k8s.io
Command used to create service account:
kubectl create serviceaccount <saname> --namespace <namespacename>
UPDATE: I created a service account and did not attach any kind of role to it. When I tried to log in with this SA, it let me through and I was able to perform all kinds of activities, including deleting secrets. So by default all SAs seem to have admin access, and that is the reason why my roles above are not working. Is this behavior expected? If yes, how can I change it?
Try the below steps
# create service account
kubectl create serviceaccount pod-viewer
# Create cluster role/role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
rules:
- apiGroups: [""] # core API group
  resources: ["pods", "namespaces"]
  verbs: ["get", "watch", "list"]
---
# create cluster role binding
kubectl create clusterrolebinding pod-viewer \
--clusterrole=pod-viewer \
--serviceaccount=default:pod-viewer
# get service account secret
kubectl get secret | grep pod-viewer
pod-viewer-token-6fdcn kubernetes.io/service-account-token 3 2m58s
# get token
kubectl describe secret pod-viewer-token-6fdcn
Name: pod-viewer-token-6fdcn
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: pod-viewer
kubernetes.io/service-account.uid: bbfb3c4e-2254-11ea-a26c-0242ac110009
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InBvZC12aWV3ZXItdG9rZW4tNmZkY24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicG9kLXZpZXdlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJiZmIzYzRlLTIyNTQtMTFlYS1hMjZjLTAyNDJhYzExMDAwOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnBvZC12aWV3ZXIifQ.Pgco_4UwTCiOfYYS4QLwqgWnG8nry6JxoGiJCDuO4ZVDWUOkGJ3w6-8K1gGRSzWFOSB8E0l2YSQR4PB9jlc_9GYCFQ0-XNgkuiZBPvsTmKXdDvCNFz7bmg_Cua7HnACkKDbISKKyK4HMH-ShgVXDoMG5KmQQ_TCWs2E_a88COGMA543QL_BxckFowQZk19Iq8yEgSEfI9m8qfz4n6G7dQu9IpUSmVNUVB5GaEsaCIg6h_AXxDds5Ot6ngWUawvhYrPRv79zVKfAxYKwetjC291-qiIM92XZ63-YJJ3xbxPAsnCEwL_hG3P95-CNzoxJHKEfs_qa7a4hfe0k6HtHTWA
ca.crt: 1025 bytes
namespace: 7 bytes
Log in to the dashboard using the above token. You should see only pods and namespaces.
(Screenshot of the resulting dashboard view: https://i.stack.imgur.com/D9bDi.png)
Okay, I've found the solution for this. The major issue was that I'm running my cluster on Azure AKS, which I should have mentioned in the question but did not. It was my mistake. In Azure AKS, if RBAC is not enabled during cluster creation, then there is no use of roles and role bindings at all. All requests to the api-server will be treated as requests from an admin. This was confirmed by Azure support too. So that was the reason my cluster role binding and roles didn't apply.
I see that the .yamls you provided need some adjustments.
The Role has wrong formatting after the rules part.
The RoleBinding is missing namespace: under subjects:, and is also formatted incorrectly.
Try something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: assembly-prod
  name: testreadrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: assembly-prod
subjects:
- kind: ServiceAccount
  name: testsa
  namespace: assembly-prod
roleRef:
  kind: Role
  name: testreadrole
  apiGroup: rbac.authorization.k8s.io
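Assuming RBAC is actually enforced on the cluster, a quick way to sanity-check what testsa ends up being allowed to do is kubectl auth can-i --list:
kubectl auth can-i --list -n assembly-prod --as=system:serviceaccount:assembly-prod:testsa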
There is a very useful guide about Non-Privileged RBAC User Administration in Kubernetes where you can find more detailed info regarding this particular topic.

RBAC not working as expected when trying to lock namespace

I'm trying to lock down a namespace in Kubernetes using RBAC, so I followed this tutorial.
I'm working on a baremetal cluster (no minikube, no cloud provider) and installed kubernetes using Ansible.
I created the following namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: lockdown
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-lockdown
  namespace: lockdown
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: [""]
  verbs: [""]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
And finally I tested the authorization using the following command:
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown
This SHOULD be returning "No" but I got "Yes" :-(
What am I doing wrong?
Thx
A couple possibilities:
Are you running the "can-i" check against the secured port or the unsecured port (add --v=6 to see)? Requests made against the unsecured (non-https) port are always authorized.
RBAC is additive, so if there is an existing clusterrolebinding or rolebinding granting "get pods" permissions to that service account (or one of the groups system:serviceaccounts:lockdown, system:serviceaccounts, or system:authenticated), then that service account will have that permission. You cannot "ungrant" permissions by binding more restrictive roles (see the sketch below).
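A rough sketch of both checks (the verbose flag shows which endpoint the request goes to; the grep is only a crude way to spot existing bindings that mention the service account or its groups):
# re-run the check with extra verbosity to see the API endpoint being used
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown --v=6
# look for any existing bindings that reference the service account or its groups
kubectl get rolebindings,clusterrolebindings --all-namespaces -o yaml | grep -i -e sa-lockdown -e system:serviceaccounts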
I finally found the problem.
The Role and RoleBinding must be created inside the targeted namespace.
I changed the following Role and RoleBinding by specifying the namespace directly inside the YAML:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
  namespace: lockdown
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
  namespace: lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
In this example I gave the service account sa-lockdown permission to get, watch and list pods in the namespace lockdown.
Now if I check pods: kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns yes.
On the contrary, if I check deployments: kubectl auth can-i get deployments --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns no.
You can also leave the files as they were in the question and simply create them using kubectl create -f <file> -n lockdown.
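For instance (hypothetical file names, assuming the manifests from the question are saved as-is without a namespace in their metadata):
# the -n flag supplies the namespace the manifests themselves omit
kubectl create -f role.yaml -n lockdown
kubectl create -f rolebinding.yaml -n lockdown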