How to carve out exceptions in Kubernetes RBAC

Use-cases:
Grant full access to all resources on the cluster (including the ability to e.g. create new namespaces), except in certain namespaces such as kube-system.
Grant read permissions to all resources in the cluster except for Secrets.
These seem like really basic use-cases, but it is not obvious how to implement them.

Grant read permissions to all resources in the cluster except for Secrets.
kubectl get clusterrole view -o yaml | grep -v secrets
Fix the metadata so it becomes a new ClusterRole, then create ClusterRoleBindings using that ClusterRole.
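A minimal sketch of that workflow, where the names view-no-secrets and dev-readers are placeholders of my own choosing (the grep is only a rough filter, so review the resulting rules before applying):
kubectl get clusterrole view -o yaml > view-no-secrets.yaml
# edit view-no-secrets.yaml: rename it to view-no-secrets, drop resourceVersion/uid/creationTimestamp,
# remove the aggregationRule and aggregation labels so the controller doesn't overwrite your edits,
# and delete the "secrets" entry from the rules
kubectl apply -f view-no-secrets.yaml
kubectl create clusterrolebinding view-no-secrets --clusterrole=view-no-secrets --group=dev-readers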
Grant full access to all resources ... except in certain namespaces
For this, you would need to create RoleBindings in each namespace you want to delegate those privileges to; you won't be able to filter out namespaces by their name.
You could use the ClusterRole admin and create RoleBindings in all your projects. OpenShift has a default project template you could customize so that those RoleBindings are added automatically when a new project is provisioned. I don't think plain Kubernetes has such an option; you might instead use a CronJob, say in kube-system, that creates those RoleBindings in new namespaces on a schedule.
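A minimal sketch of one such per-namespace binding, assuming the group app-admins and the namespace team-a are placeholders; you would repeat it for every namespace you delegate, and simply never create it in kube-system:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-admin
  namespace: team-a        # repeat per delegated namespace; omit kube-system
subjects:
- kind: Group
  name: app-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin              # default ClusterRole granting full control within the namespace
  apiGroup: rbac.authorization.k8s.io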

Related

OpenShift custom resources restrictions

I don't understand how I can restrict access to custom resources in OpenShift using RBAC.
Let's assume I have a custom API:
apiVersion: yyy.xxx.com/v1
kind: MyClass
metadata:
...
Is it possible to prevent some users from deploying resources where apiVersion=yyy.xxx.com/v1 and kind=MyClass?
Also, can I grant other users access to deploy resources where apiVersion=yyy.xxx.com/v1 and kind=MyOtherClass?
If this can be done using RBAC roles, how can I deploy RBAC roles in OpenShift? Only using the CLI, or can I also create some YAML configuration files and deploy them?
You can use cluster roles and RBAC role bindings:
oc adm policy add-cluster-role-to-group <cluster-role> system:authenticated:oauth
oc adm policy remove-cluster-role-from-group <cluster-role> system:authenticated:oauth
So the general idea is to remove the permission to deploy the resource from all authenticated users.
The next step is to add the permission to deploy that resource only to ServiceAccounts assigned to specific namespaces.
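A minimal sketch of what that second step could look like in plain RBAC YAML, assuming the CRD's plural resource name is myclasses and the names team-a and deployer are placeholders; you can write these as YAML files and deploy them with oc apply -f, so the oc adm policy CLI is not the only option:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myclass-editor
rules:
- apiGroups: ["yyy.xxx.com"]
  resources: ["myclasses"]        # plural resource name as defined in the CRD
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myclass-editor-binding
  namespace: team-a               # grants the role only inside this namespace
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: team-a
roleRef:
  kind: ClusterRole
  name: myclass-editor
  apiGroup: rbac.authorization.k8s.io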
OpenShift/Kubernetes has Cluster Role/Binding and Local Role/Binding.
Here are the definitions from the docs. *1
Cluster role/binding: Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.
Local role/binding: Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.
If your Custom Resource is namespaced, i.e. it exists in a single namespace, you can grant permission on it to other users with a local role binding.
Whereas, if the Custom Resource is a cluster-wide resource, only a cluster admin can deploy it.
*1: https://docs.openshift.com/container-platform/4.11/authentication/using-rbac.html

Deploy from CI/CD via HELM to external Kubernetes cluster with limited rights

I understand that I can copy my .kube/config to my CI/CD server, or just specify a ServiceAccount, to allow my CD pipeline to use Helm for deployment.
However, what if I want to allow deployment via Helm, but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forward services
... so basically access to all data in the cluster, except for the stateless Docker containers deployed via Helm.
Would it be possible to create a new ClusterRole with limited rights? What verbs in a ClusterRole does Helm need, at a minimum, to function properly?
What rights does Helm need, at the least?
It comes down to what your Helm chart is doing in Kubernetes.
ClusterRoles can be bound to a particular namespace through a reference in a RoleBinding. The admin, edit and view default ClusterRoles are commonly used in this manner; they are described in more detail in the Kubernetes RBAC documentation. For example, edit is a default ClusterRole which allows read/write access to most objects in a namespace but does not allow viewing or modifying Roles or RoleBindings, whereas granting a user cluster-admin at the namespace scope (via a RoleBinding) provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by binding either the edit or the admin role only in that namespace.
The permissions strategy also depends on what objects the installation will create: the user needs full access to every API object that the Helm release will manage. The Using RBAC Authorization page in the Kubernetes documentation explains this concept in more detail, with several examples you could use as a reference.
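A minimal sketch of that pattern for a CI pipeline, where the ServiceAccount name ci-deployer and the namespace apps are placeholders; the pipeline's kubeconfig would then use this ServiceAccount's token instead of a personal credential:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-edit
  namespace: apps                 # Helm can only act inside this namespace
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: apps
roleRef:
  kind: ClusterRole
  name: edit                      # default ClusterRole: read/write most namespaced objects
  apiGroup: rbac.authorization.k8s.io
Note that the default edit role also allows pods/exec and pods/portforward, so if you need to block those as well you would have to write a narrower custom Role that leaves out those subresources.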

Prevent a user to spawn pod in restricted namespace on kubernetes

How can I prevent a user from spawning pods in a namespace whose service accounts have high privileges, while still allowing them to create namespaces?
For example, I have a cluster with Velero in a velero namespace. I want to prevent the user from creating pods that use the velero service account, so they cannot create privileged accounts. But I want the user to be able to create namespaces and use service accounts with a restricted PSP.
In my opinion the idiomatic way of enforcing this in Kubernetes is by creating a dynamic validating admission controller.
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
I know it could sound a bit complex, but trust me, it's really simple. In essence, an admission controller is simply a webhook endpoint (a piece of code) which can change and/or enforce a certain state on created objects.
So in your case: create a dynamic validating webhook and simply disallow creation of pods that do not match your restrictions, with a corresponding relevant error message.
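A rough sketch of such a webhook registration, where the service name pod-policy, the namespace policy and the webhook name are placeholders; the webhook backend itself still has to implement the actual rule (e.g. reject pods in the velero namespace that use the velero ServiceAccount):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: restrict-privileged-serviceaccounts
webhooks:
- name: pods.policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]        # review every pod creation
    resources: ["pods"]
  clientConfig:
    service:
      name: pod-policy            # your webhook Deployment/Service
      namespace: policy
      path: /validate
    caBundle: <base64-encoded CA certificate>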
First of all, the service account used by Velero is in the velero namespace. So if the user doesn't have RBAC permissions to do anything in the velero namespace, they will not be able to use Velero's service account. You should define RBAC for users in such a way that they can only CRUD resources in the intended namespaces and cannot CRUD resources in other namespaces. When I say resources, that also includes service accounts.
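A minimal sketch of that split, where the group developers and the namespace team-a are placeholders: a small ClusterRole lets the users create namespaces, while a RoleBinding grants them edit only in their own namespaces, and no binding at all is created in the velero namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-create-namespaces
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-edit
  namespace: team-a        # repeated per user namespace; never created in velero
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io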

Kubernetes RBAC: How to allow exec only to a specific Pod created by Deployment

I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets etc. Fairly standard stuff.
I need to grant a special user a Role that can only exec into a certain Pod. Currently RBAC grants the exec right to all pods in the namespace, but I need to tighten it down.
The problem is that the Pod(s) are created by a Deployment called configurator, so the Pod names are generated: configurator-xxxxx-yyyyyy. You cannot use a glob (i.e. configurator-*) in resourceNames, and a Role cannot grant exec on a Deployment directly.
So far I've thought about:
Converting the Deployment into a StatefulSet or a plain Pod, so the Pod would have a known, non-generated name and a glob wouldn't be needed
Moving the Deployment into a separate namespace, so the namespace-wide exec right is no longer a problem
Both of these work, but neither is optimal. Is there a way to write a proper Role for this?
RBAC, as it stands today, doesn't allow filtering resources by attributes other than namespace and resource name; there is an open upstream discussion about this.
Thus, the namespace is the smallest unit for authorizing access to pods. Services should be separated into namespaces based on which users need access to them.
The optimal solution right now is to move this Deployment to another namespace, since it needs different access rules than the other Deployments in the original namespace.
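A minimal sketch of the Role you would then bind in that dedicated namespace, where the namespace configurator and the user jane are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: configurator
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]          # kubectl exec performs a create on the pods/exec subresource
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec-binding
  namespace: configurator
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io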

Kubernetes: Role vs ClusterRole

Which is the difference between a Role or a ClusterRole?
When should I create one or the other one?
I can't quite figure out the difference between them.
From the documentation:
A Role can only be used to grant access to resources within a single namespace.
Example: List all pods in a namespace
A ClusterRole can be used to grant the same permissions as a Role, but because they are cluster-scoped, they can also be used to grant access to:
cluster-scoped resources (like nodes)
non-resource endpoints (like “/healthz”)
namespaced resources (like pods) across all namespaces (needed to run kubectl get pods --all-namespaces, for example)
Examples: List all pods in all namespaces. Get a list of all nodes and their public IPs.
Cluster roles also allow for the reuse of common permission sets across namespaces (via role bindings). The bootstrap admin, edit and view cluster roles are the canonical examples.
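To make the distinction concrete, here is a minimal sketch of each, with placeholder names and the placeholder namespace demo: a Role that can only list pods in one namespace, and a ClusterRole that can read nodes, which a Role could never grant because nodes are cluster-scoped:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister
  namespace: demo            # only grants access inside this namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader          # cluster-scoped; no namespace field
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]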