Is there a way for K8s service account to create another service account in a different namespace? - kubernetes

I have an app which interacts with an existing service account ("the agent") on a designated namespace. I want the agent to be able to create additional service accounts and roles on other namespaces. Is there a way to do so?

I've already answered this question in the comments, but I've also decided to provide more comprehensive information with examples.
Background
Kubernetes includes an RBAC (role-based access control) mechanism that lets you specify which actions are permitted for a specific user or group of users. Since Kubernetes v1.6, RBAC is enabled by default.
There are four Kubernetes objects we can use to configure the needed RBAC rules: Role, ClusterRole, RoleBinding and ClusterRoleBinding. Role and RoleBinding are namespaced, while ClusterRole and ClusterRoleBinding are cluster-scoped resources.
We use Role and RoleBinding to grant access to namespaced resources, and ClusterRole and ClusterRoleBinding for cluster-wide resources.
However, we can also mix these resources.
Below I will briefly describe the common combinations.
NOTE: A ClusterRoleBinding cannot reference a Role.
For every test case, I created a new test namespace and a test-agent service account.
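For reference, both can be created with plain kubectl commands:
$ kubectl create namespace test
$ kubectl create serviceaccount test-agent -n test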
Role and RoleBinding
I created a simple Role and RoleBinding in a specific namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-role
  namespace: test
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-agent
We can see that test-agent has access only to resources in the test namespace:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
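Since the original question is about creating service accounts, the same check works for that verb as well (serviceaccounts belong to the core API group, which the Role above covers with '*'). Assuming no other bindings exist for test-agent, it should report:
$ kubectl auth can-i create serviceaccounts -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i create serviceaccounts -n default --as=system:serviceaccount:test:test-agent
no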
ClusterRole and RoleBinding
I created a ClusterRole and a RoleBinding:
NOTE: I didn't specify any namespace for the ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
Now we can see that if a ClusterRole is linked to a ServiceAccount using a RoleBinding, the ClusterRole permissions apply ONLY in the namespace in which this RoleBinding has been created:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
ClusterRole and ClusterRoleBinding
Finally, I created a ClusterRole and a ClusterRoleBinding:
NOTE: I didn't specify any namespace for the ClusterRole or the ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
  namespace: test
Now we can see that if a ClusterRole is linked to a ServiceAccount using a ClusterRoleBinding, the ClusterRole permissions apply to all namespaces:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n kube-system --as=system:serviceaccount:test:test-agent
yes
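Coming back to the original question: ClusterRole plus ClusterRoleBinding is therefore the combination that lets the agent create ServiceAccounts, Roles and RoleBindings in other namespaces. A minimal sketch (agent-provisioner is an illustrative name; the subject is the test-agent account used above):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: agent-provisioner   # illustrative name
rules:
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
  - list
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - create
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: agent-provisioner   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: agent-provisioner
subjects:
- kind: ServiceAccount
  name: test-agent
  namespace: test
Keep in mind that RBAC prevents privilege escalation: to create a Role or RoleBinding, the agent must itself already hold the permissions it is granting, or have the escalate/bind verbs on roles and rolebindings.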
Useful note: you can display all possible verbs for a specific resource using
kubectl api-resources -o wide; e.g., to display all possible verbs for Deployments we can use:
$ kubectl api-resources -o wide | grep deployment
deployments deploy apps/v1 true Deployment [create delete deletecollection get list patch update watch]

Related

How to create a service account to get a list of pods from inside a Kubernetes cluster?

I have created a service account to get a list of pods in minikube.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: list-pods
  namespace: default
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: list-pods_demo-sa
  namespace: default
roleRef:
  kind: Role
  name: list-pods
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: default
The problem is that I get an error if I use the service account to get the list of pods; kubectl auth can-i list pod --as demo-sa always answers no.
The problem is the --as value. With
kubectl auth can-i list pod --as demo-sa
you are impersonating a User named demo-sa, not the ServiceAccount. --as impersonates users and --as-group impersonates groups; a ServiceAccount is impersonated through its full username, in the form system:serviceaccount:<namespace>:<name>.
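With the Role and RoleBinding from the question in place, the check should then succeed (default is the namespace used in the manifests above):
$ kubectl auth can-i list pod -n default --as=system:serviceaccount:default:demo-sa
yes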
A workaround is to use the service account token directly:
kubectl get secret demo-sa-token-7fx44 -o=jsonpath='{.data.token}' | base64 -d
You can pass the output to any kubectl request with --token. However, when I checked kubectl auth can-i list pod with a token, it did not behave reliably (it always returned yes), so prefer impersonation for permission checks.

Rolebinding with ClusterRole is not restricted to namespace when using a ServiceAccount

I want to set a service account's authorizations by associating it with a ClusterRole, but restricted to a namespace, using a RoleBinding.
I declared one ClusterRole and configured a RoleBinding in a namespace pointing to that ClusterRole.
However, when I access the cluster with the token of the service account referenced in the RoleBinding, I'm not restricted to the namespace.
On the other hand, when I access the cluster with a "User" certificate, it works: I only have access to the namespace.
Kubernetes v1.13.5
The RoleBinding I defined:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: exploitant
  namespace: myNamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: myNamespace
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: myUser
This is what I get:
kubectl auth can-i --token=XXXXXXXX get po -n myNamespace
yes
--> as expected
kubectl auth can-i --token=XXXXXXXX get po -n kube-system
yes
--> not expected !!!
The solution is to create a dedicated ServiceAccount; the "default" service account should not be used for this. By default, all pods run with the default service account (if you don't specify one), and a default service account exists in every namespace, so it can easily end up with broader access than you intend. Bind the ClusterRole to a dedicated service account instead.
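A minimal sketch of that fix, assuming the dedicated account is named exploitant-sa (an illustrative name):
$ kubectl create serviceaccount exploitant-sa -n myNamespace
and then reference it in the RoleBinding instead of default:
subjects:
- kind: ServiceAccount
  name: exploitant-sa   # illustrative name
  namespace: myNamespace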

RBAC not working as expected when trying to lock namespace

I'm trying to lock down a namespace in kubernetes using RBAC so I followed this tutorial.
I'm working on a baremetal cluster (no minikube, no cloud provider) and installed kubernetes using Ansible.
I created the following namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: lockdown
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-lockdown
  namespace: lockdown
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: [""]
  verbs: [""]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
And finally I tested the authorization using the following command:
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown
This SHOULD be returning "No" but I got "Yes" :-(
What am I doing wrong?
Thx
A couple of possibilities:
Are you running the "can-i" check against the secured port or the unsecured port (add --v=6 to see)? Requests made against the unsecured (non-HTTPS) port are always authorized.
RBAC is additive, so if there is an existing ClusterRoleBinding or RoleBinding granting "get pods" permissions to that service account (or to one of the groups system:serviceaccounts:lockdown, system:serviceaccounts, or system:authenticated), then that service account has that permission. You cannot "ungrant" permissions by binding more restrictive roles; see the check sketched below for how to spot such an existing binding.
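For the second point, a quick way to look for an existing grant is to list all bindings with their details (on recent kubectl versions, -o wide also prints the users, groups and service accounts of each binding):
$ kubectl get rolebindings,clusterrolebindings --all-namespaces -o wide
Check the output for anything that targets sa-lockdown or one of the groups listed above.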
I finally found what the problem was.
The Role and RoleBinding must be created inside the targeted namespace.
I changed the Role and RoleBinding below by specifying the namespace directly inside the YAML.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
  namespace: lockdown
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
  namespace: lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
In this example I gave the service account sa-lockdown permission to get, watch and list pods in the lockdown namespace.
Now if I ask to get pods, kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown will return yes.
On the contrary, if I ask to get deployments, kubectl auth can-i get deployments --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown will return no.
You can also leave the files like they were in the question and simply create them using kubectl create -f <file> -n lockdown.

convert kubernetes yaml to kubectl cli command

I am looking for a single kubectl command, or maybe a combination of commands, equivalent to the following YAML file:
---
#
# Create a role, `pod-reader`, that can list pods and
# bind the default service account in the `mynamespace` namespace
# to that role.
#
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: mynamespace
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: mynamespace
subjects:
- kind: Group
  name: system:serviceaccounts:mynamespace
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
So, pretty much, grant every service account in that namespace the ability to get, watch and list pods.
First, create the namespace and a service account, if they don't already exist:
$ kubectl create namespace mynamespace
$ kubectl create serviceaccount mysa -n mynamespace
Create the role with:
$ kubectl create role pod-reader --namespace=mynamespace \
--verb=get,list,watch \
--resource=pods
Create the rolebinding with:
$ kubectl create rolebinding read-pods --namespace=mynamespace \
--role=pod-reader \
--group=system:serviceaccounts:mynamespace
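To verify the result, you can impersonate a service account from the namespace together with the group targeted by the RoleBinding (mysa being the account created above):
$ kubectl auth can-i list pods -n mynamespace \
--as=system:serviceaccount:mynamespace:mysa \
--as-group=system:serviceaccounts:mynamespace
which should answer yes.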
Tip: if you want to preview the generated YAML rather than actually creating the resources, append --dry-run=client -o yaml to the commands (older kubectl versions use --dry-run=true).

How to protect k8s secrets using rbac?

I set up a k8s cluster in GKE with RBAC enabled, and I installed Istio into the cluster.
I followed these steps (link) to create key/certs for the Istio ingress controller; the key/certs are stored in a secret named istio-ingress-certs.
Now I want to use RBAC to limit access to istio-ingress-certs, so that every component in istio-system is allowed to read the secret, but none can modify or delete it.
I created a secrets-rbac.yaml file and ran kubectl apply -f secrets-rbac.yaml, which creates a role to read the secret and binds this role to all service accounts in the istio-system namespace.
To verify that a service account is not allowed to modify istio-ingress-certs, I used this command to test:
kubectl auth can-i edit secrets/istio-ingress-certs -n istio-system --as system:serviceaccount:istio-system:istio-pilot-service-account
I expected the command to return false, but it returns true. I think I didn't set up RBAC correctly in the YAML file, but I am not clear which part is wrong.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: istio-system
  name: istio-ingress-certs-reader
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  resourceNames: ["istio-ingress-certs"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: istio-system
  name: read-istio-ingress-certs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-ingress-certs-reader
subjects:
- kind: Group
  name: system:serviceaccounts:istio-system
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io