Why the namespace bit in the RoleBinding definition?

There's something I don't quite understand in the way RBAC works in Kubernetes.
I'll state what I understood and what I didn't. Based on the documentation, the RBAC API defines four kinds of Kubernetes objects:
Role
ClusterRole
RoleBinding
ClusterRoleBinding
Role
A Role defines a set of permissions within a specific namespace. The Role definition contains a namespace field, and the Role object is created within that namespace. From the docs:
A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.
I suppose this means that all the rules defined in the Role apply only to objects in that namespace. I'll continue supposing this assumption is true; please correct me otherwise.
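For concreteness, a minimal sketch of such a Role might look like this (the name, namespace, and rules are just illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
If my assumption is right, this Role can only ever grant access to Pods in my-namespace.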
ClusterRole
ClusterRole, by contrast, is a non-namespaced resource.
From what I understand (again, correct me if I'm wrong), a ClusterRole is used to define permissions for resources that are not bound to any namespace, such as nodes.
RoleBinding
A RoleBinding object is a namespaced object. Its function is to bind Roles to subjects, i.e. grant subjects (users, ServiceAccounts, groups) the permissions defined in a specific Role. It can also bind subjects to ClusterRoles.
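For example, a minimal sketch of a RoleBinding that binds the illustrative pod-reader Role from above to a user could look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io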
ClusterRoleBinding
Not of much interest for the matter of this post.
The Question
My question is, why is there a namespace metadata bit in the RoleBinding definition? If, as I assumed in the Role section, a Role grants permissions only to objects in that Role's namespace, then that restriction is already defined in the Role object itself. Why is it defined again in the RoleBinding object?
As I'm writing these lines, I suddenly think of a possible answer to that question; please tell me if this is correct:
A RoleBinding can also bind a ClusterRole to a list of subjects, and the permissions defined in that ClusterRole will apply only to resources in the namespace specified in the RoleBinding object. That is why we need a namespace bit in the RoleBinding definition. Indeed, it is not strictly needed when we use a RoleBinding to bind a Role rather than a ClusterRole.
Is that correct?

It is as you said. A RoleBinding needs the namespace specified because it can also reference a ClusterRole, which is not namespaced. So a ClusterRole can be seen (in some cases) as a template for a Role in a specific namespace.
The ClusterRole edit is a good example for this use case: you would reference this ClusterRole (not namespaced) in your RoleBinding (namespaced):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: editor
  namespace: my-namespace
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
So the user jane would get the permissions defined in the ClusterRole edit but only in the namespace my-namespace.
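You can verify this with kubectl auth can-i and impersonation (a sketch; it assumes the RoleBinding above has been applied and that you are allowed to impersonate jane):
$ kubectl auth can-i create deployments --as jane --namespace my-namespace
yes
$ kubectl auth can-i create deployments --as jane --namespace other-namespace
no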

Related

What's the difference between "kubectl auth reconcile" and "kubectl apply" for working with RBAC?

I have some average YAML file defining some average Role resource; as usual, the YAML should reflect my resource's desired state.
To get the new average Role into the cluster I usually run kubectl apply -f my-new-role.yaml,
but now I see this (recommended!?) alternative: kubectl auth reconcile -f my-new-role.yaml.
OK, there may be RBAC relationships, i.e. bindings, but shouldn't an apply do the same thing?
Is there ever a case where one would update (cluster) roles but not want their related (cluster) bindings updated?
The kubectl auth reconcile command-line utility was added in Kubernetes v1.8.
Properly applying RBAC permissions is a complex task because you need to compute logical covers operations between rule sets.
As you can see in the CHANGELOG-1.8.md:
Added RBAC reconcile commands with kubectl auth reconcile -f FILE. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, this command computes covers and adds the missing rules. The logic required to properly apply RBAC permissions is more complicated than a JSON merge because you have to compute logical covers operations between rule sets. This means that we cannot use kubectl apply to update RBAC roles without risking breaking old clients, such as controllers.
The kubectl auth reconcile command will ignore any resources that are not Role, RoleBinding, ClusterRole, and ClusterRoleBinding objects, so you can safely run reconcile on the full set of manifests (see: Use 'kubectl auth reconcile' before 'kubectl apply')
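For instance, assuming your RBAC manifests live in a local directory (the path here is illustrative), you could run reconcile over the whole set:
$ kubectl auth reconcile -f ./rbac-manifests/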
I've created an example to demonstrate how useful the kubectl auth reconcile command is.
I have a simple secret-reader RoleBinding and I want to change a binding's roleRef (I want to change the Role that this binding refers to):
NOTE: A binding to a different role is a fundamentally different binding (see: A binding to a different role is a fundamentally different binding).
# BEFORE
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
# AFTER
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-creator
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
As we know, roleRef is immutable, so it is not possible to update this secret-admin RoleBinding using kubectl apply:
$ kubectl apply -f secret-admin.yml
The RoleBinding "secret-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"secret-creator"}: cannot change roleRef
Instead, we can use kubectl auth reconcile. If a RoleBinding is updated to a new roleRef, the kubectl auth reconcile command handles the delete and recreate of the related objects for us.
$ kubectl auth reconcile -f secret-admin.yml
rolebinding.rbac.authorization.k8s.io/secret-admin reconciled
reconciliation required recreate
Additionally, you can use the --remove-extra-permissions and --remove-extra-subjects options.
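For example, a run that also prunes subjects and permissions no longer present in the manifest (reusing the secret-admin.yml file from above) might look like this:
$ kubectl auth reconcile -f secret-admin.yml --remove-extra-subjects --remove-extra-permissions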
Finally, we can check if everything has been successfully updated:
$ kubectl describe rolebinding secret-admin
Name:         secret-admin
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  secret-creator
Subjects:
  Kind            Name               Namespace
  ----            ----               ---------
  ServiceAccount  service-account-1  default

ClusterRole exists and cannot be imported into the current release?

I am trying to install the same chart two times in the same cluster, in two different namespaces. However, I am getting this error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "nfs-provisioner" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "namespace2": current value is "namespace1"
As I understood, cluster roles are supposed to be independent from the namespace, so I found this contradictory. We are using Helm 3.
I decided to provide a Community Wiki answer that may help other people facing a similar issue.
I assume you want to install the same chart multiple times but get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "<CLUSTERROLE_NAME>" in namespace "" exists and cannot be imported into the current release: ...
First, it's important to decide if we really need ClusterRole instead of Role.
As we can find in the Role and ClusterRole documentation:
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
Second, we can use a variable for the ClusterRole name instead of hard-coding it in the template.
For example, instead of:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-1
...
Try to use something like:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
...
Third, we can use the lookup function and the if control structure to skip creating resources if they already exist.
Take a look at a simple example:
$ cat clusterrole-demo/values.yaml
clusterrole:
  name: clusterrole-1
$ cat clusterrole-demo/templates/clusterrole.yaml
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" .Values.clusterrole.name) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
{{- end }}
In the example above, if the ClusterRole clusterrole-1 already exists, it won't be created.
A ClusterRole sets permissions across your entire Kubernetes cluster, not for a particular namespace. I think you are confusing it with a Role. You can find more information on the differences between ClusterRole and Role here: Role and ClusterRole.
A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.
ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both.

Kubernetes OIDC: No valid group mapping

I have the problem that I can log on to my dashboard via OIDC, but the OIDC group information is not mapped correctly, so I cannot access the corresponding resources.
Basic setup
K8s version: 1.19.0
K8s setup: 1 master + 2 worker nodes
Based on Debian 10 VMs
CNI: Calico
Louketo Proxy as OIDC proxy
OIDC: Keycloak Server (Keycloak X [Quarkus])
Configurations
I have configured the K8s apiserver with these parameters.
kube-apiserver.yaml
- --oidc-issuer-url=https://test.test.com/auth/realms/Test
- --oidc-client-id=test
- --oidc-username-claim=preferred_username
- --oidc-username-prefix="oidc:"
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "test-cluster-admin"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "Test"
I used the following louketo parameters
Louketo Proxy
/usr/bin/louketo-proxy --discovery-url=$OIDC_DISCOVERY_URL --client-id=$OIDC_CLIENT_ID --client-secret=$OIDC_CLIENT_SECRET --listen=$OIDC_LISTEN_URL --encryption-key=$OIDC_ENCRYPTION_KEY --redirection-url=$OIDC_REDIRECTION_KEY --enable-refresh-tokens=true --upstream-url=$OIDC_UPSTREAM_URL --enable-metrics
I get the following error message inside the dashboard.
K8s error
replicasets.apps is forbidden: User "\"oidc:\"<user_name>" cannot list resource "replicasets" in API group "apps" in the namespace "default"
I hope you can help me with this problem. I have already tried most of the manuals from the internet but haven't found a solution yet.
PS: I have done the corresponding group mapping in the Keycloak server and also validated that the group entry is transferred.
If you are facing the same challenge as I did and you want to integrate Keycloak into your K8s cluster, share the dashboard, and connect it to Keycloak, you can find my configuration below. Within my cluster I use the Louketo Proxy as the interface between Kubernetes and Keycloak. The corresponding configuration of that deployment is not included in this post.
Keycloak
I want to start with the configuration of Keycloak. In the first step I created a corresponding client with the following settings.
After that I created two mappers: group membership and audience (the latter needed by the Louketo Proxy).
The exact settings of the mappers can be taken from the two images.
Group membership mapping
Audience mapping
Kubernetes
In the second step I had to update the api server manifest and create the RoleBinding and ClusterRoleBinding within the Kubernetes cluster.
Api server manifest (default path: /etc/kubernetes/manifests/kube-apiserver.yaml)
- --oidc-issuer-url=https://test.test.com/auth/realms/Test
- --oidc-client-id=test
- --oidc-username-claim=preferred_username
- --oidc-username-prefix="oidc:"
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "test"
  namespace: "kubernetes-dashboard"
subjects:
- kind: User
  name: "\"oidc:\"Test"
  namespace: "kube-system"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "test"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "\"oidc:\"Test"
I hope I can help you with this configuration. If you have any questions, feel free to ask me.
This is a community wiki answer that approaches the issue from the Kubernetes side. Anyone familiar with a possible Keycloak group/role mapping solution, feel free to edit it.
The error you see means that the service account for OIDC doesn't have the proper privileges to list replicasets in the default namespace. The easiest way out would be to simply set up the ServiceAccount, ClusterRole, and ClusterRoleBinding from scratch and make sure they have the proper privileges. For example, you can create a ClusterRoleBinding with "admin" permissions by executing:
kubectl create clusterrolebinding OIDCrolebinding --clusterrole=admin --group=system:serviceaccounts:OIDC
The same can be done for the ClusterRole:
kubectl create clusterrole OIDC --verb=get,list,watch --resource=replicasets --namespace=default
More examples of how to use kubectl create in this scenario can be found here.
Here you can find a whole official guide regarding the RBAC Authorization.
EDIT:
Also, please check whether the binding that grants "\"oidc:\"<user_name>" its permissions actually covers the "default" namespace (a RoleBinding would have to live in "default"; a ClusterRoleBinding is not namespaced and covers all namespaces).

Kubernetes - ClusterRole with namespace is allowed?

The documentation says,
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
So, why am I able to create a ClusterRole in a namespace for a non-namespaced resource, StorageClass (or sc for short)?
kubectl -n apps create clusterrole scrole --verb=create,update,patch,delete --resource=sc
Also, is it correct to say that a Role is applicable to namespaced resources only, and a ClusterRole to non-namespaced resources only?
The namespace parameter is redundant and not used even if you provide it while creating a ClusterRole.
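A quick way to see this (assuming the scrole ClusterRole created by the command in the question):
$ kubectl get clusterrole scrole -o jsonpath='{.metadata.namespace}'
This prints nothing, because the object is cluster-scoped and carries no namespace.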
A ClusterRole can define permissions for namespace-scoped resources as well, and you can use a RoleBinding to assign those permissions to a user, group, or service account. When you do that, it basically grants the permissions on the resources within the namespace where the RoleBinding is created. This avoids creating duplicate Roles in every namespace to provide the same permissions.
Also, when a ClusterRole covers cluster-scoped resources, you need to use a ClusterRoleBinding to assign the permissions to a user, group, or service account.
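As a sketch (all names here are illustrative), granting read access to the cluster-scoped nodes resource therefore takes a ClusterRole plus a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
subjects:
- kind: Group
  name: ops-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io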

Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace

Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"
For the above issue I created the "mycomp-services-process" namespace and checked the issue again.
But it shows a similar message again:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"
Creating a namespace won't, of course, solve the issue, as that is not the problem at all.
In the first error, the issue is that the serviceaccount default in the default namespace cannot get services, because it does not have access to list/get services. So what you need to do is assign a role to that user using a clusterrolebinding.
Following the principle of least privilege, you can first create a role which has access to list services:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # Note: a namespace field would be ignored here; ClusterRoles are cluster-scoped.
  name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
What the above snippet does is create a clusterrole which can list, get, and watch services. (You will have to create a YAML file and apply the above spec.)
Now we can use this clusterrole to create a clusterrolebinding:
kubectl create clusterrolebinding service-reader-pod \
--clusterrole=service-reader \
--serviceaccount=default:default
In the above command, service-reader-pod is the name of the clusterrolebinding, and it assigns the service-reader clusterrole to the default serviceaccount in the default namespace. Similar steps can be followed for the second error you are facing.
In this case I created a clusterrole and a clusterrolebinding, but you might want to create a role and a rolebinding instead; a sketch follows below. You can check the documentation in detail here.
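For reference, a namespaced sketch of the same idea (names mirror the ClusterRole example above) would be a Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
together with a matching RoleBinding:
kubectl create rolebinding service-reader-pod --role=service-reader --serviceaccount=default:default --namespace=default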
This is only for non-prod clusters!
You should bind the service account system:serviceaccount:default:default (which is the default account bound to pods) to the role cluster-admin. Just create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # References the ServiceAccount's `metadata.name` above
  name: default
  # References the ServiceAccount's `metadata.namespace` above
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Then, apply it by running kubectl apply -f fabric8-rbac.yaml.
If you want to unbind them, just run kubectl delete -f fabric8-rbac.yaml.
Just to add: this can also occur when you are redeploying an existing application to the wrong, but similar, Kubernetes cluster.
Ensure that the Kubernetes cluster you're deploying to is the correct one.