Prevent a user from spawning pods in a restricted namespace on Kubernetes

How can I prevent a user from spawning pods in a namespace whose service accounts have high privileges, while still allowing them to create namespaces?
For example, I have a cluster with Velero in a velero namespace. I want to prevent the user from creating pods with the velero service account, so that they cannot use it to create privileged accounts. But I do want the user to be able to create namespaces and to use service accounts with a restricted PSP.

In my opinion the idiomatic way of enforcing this in Kubernetes is by creating a dynamic validating admission controller.
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
I know it may sound a bit complex, but trust me, it's really simple. In the end, an admission controller is simply a webhook endpoint (a piece of code) which can change and/or enforce a certain state on created objects.
So in your case: create a dynamic validating webhook and simply disallow the creation of pods that do not match your restrictions, returning a relevant error message.
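As a rough sketch of the registration side, assuming you run the webhook behind a Service named pod-validator in a webhook-system namespace (all names here are made up for illustration), the configuration could look like this:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: restrict-privileged-serviceaccounts   # hypothetical name
webhooks:
  - name: pods.restrict.example.com           # hypothetical; must be a fully qualified name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: webhook-system             # hypothetical namespace for the webhook server
        name: pod-validator                   # hypothetical Service fronting your endpoint
        path: /validate
      caBundle: LS0tLi4u                      # base64-encoded CA cert that signed the serving cert
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                       # reject pod creation if the webhook is unreachable

The endpoint itself then inspects spec.serviceAccountName of the incoming Pod and responds with allowed: false and a message when it references a forbidden service account such as velero.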

First of all, the service account used by Velero lives in the velero namespace. So if the user doesn't have RBAC permissions to do anything in the velero namespace, they will not be able to use Velero's service account. You should define RBAC for users in such a way that they can only do CRUD on resources in the intended namespaces and cannot do CRUD on resources in other namespaces. When I say resources, that includes service accounts.
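A minimal sketch of that approach, assuming a user jane and a namespace team-a (both hypothetical names):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-edit                # hypothetical Role confined to one namespace
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "serviceaccounts", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit-jane           # hypothetical
  namespace: team-a
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-edit
  apiGroup: rbac.authorization.k8s.io

Because nothing grants jane any verbs in the velero namespace, she can neither create pods there nor reference its service account.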

Related

How to carve out exceptions in Kubernetes RBAC

Use-cases:
Grant full access to all resources on the cluster (including the ability to e.g. create new namespaces), except in certain namespaces such as kube-system.
Grant read permissions to all resources in the cluster except for Secrets.
This seems like a really basic set of use-cases, yet it is not obvious how to implement them.
Grant read permissions to all resources in the cluster except for Secrets.
kubectl get clusterrole view | grep -v secrets
Fix the metadata to create a new ClusterRole, then create ClusterRoleBindings using that ClusterRole.
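The result of that procedure is a ClusterRole roughly like the following (a trimmed sketch; the real view role enumerates many more resources, and the names view-no-secrets and readers are made up):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-no-secrets            # hypothetical; replace the copied metadata with your own
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "services", "endpoints"]   # note: no "secrets"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-no-secrets-binding    # hypothetical
subjects:
  - kind: Group
    name: readers                  # hypothetical group of read-only users
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view-no-secrets
  apiGroup: rbac.authorization.k8s.io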
Grant full access to all resources ... except in certain namespaces
For this, you would need to create RoleBindings in each namespace you want to delegate those privileges to; you won't be able to filter out namespaces by their name.
You could use the default ClusterRole admin and create RoleBindings in all your projects. OpenShift has a default project template you can customize so those RoleBindings are added automatically when a new namespace is provisioned. Traditional Kubernetes doesn't have such an option; you might instead use a CronJob, say in kube-system, that creates those RoleBindings in new namespaces on a schedule.
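A per-namespace binding of that kind might look like this (team-a and the group developers are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-admin                   # hypothetical; repeat per delegated namespace
  namespace: team-a
subjects:
  - kind: Group
    name: developers               # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                      # the default ClusterRole, bound at namespace scope
  apiGroup: rbac.authorization.k8s.io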

K8s RBAC needed when no API calls?

My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but this is defined in YAML and the pod does not contain kubectl or any similar component.
Is there any point in using RBAC if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
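A sketch of what that orchestrator's RBAC might look like (all names hypothetical):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
  namespace: work
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: work
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
  namespace: work
subjects:
  - kind: ServiceAccount
    name: orchestrator
    namespace: work
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io

The orchestrator's Deployment would then set serviceAccountName: orchestrator in its pod template, so its API calls are made with exactly those permissions.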
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are right there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a pod that mounts the Secret and make the main container's command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
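For illustration, a pod along those lines could be as simple as the following (the Secret name my-secret is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secret-reader              # hypothetical
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      # prints the decoded Secret values straight into the container logs
      command: ["sh", "-c", "cat /secrets/*"]
      volumeMounts:
        - name: secret-vol
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: my-secret      # hypothetical Secret being read

kubectl logs secret-reader then shows every value in plain text, which is why "can create pods that mount this Secret" is effectively the same privilege as "can read this Secret".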

Deploy from CI/CD via HELM to external Kubernetes cluster with limited rights

I understand that I can copy my .kube/config to my CI/CD server, or point it at a named ServiceAccount, to allow my CD pipeline to use Helm for deployment.
However, what if I want to allow deployment via Helm, but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forward services
... so basically all access to data in the cluster, except for the stateless Docker containers deployed via Helm.
Would it be possible to create a new ClusterRole with limited rights? What verbs does Helm need in a ClusterRole, at a minimum, to function properly?
What rights does Helm need at the least?
It comes down to what your Helm chart is doing to Kubernetes.
ClusterRoles can be bound within a particular namespace by referencing them from a RoleBinding. The admin, edit, and view default ClusterRoles are commonly used in this manner; for more detailed info see this description. For example, edit is a default ClusterRole that allows read/write access to most objects in a namespace, but it does not allow viewing or modifying Roles or RoleBindings. Granting a user cluster-admin access at the namespace scope provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by using either the edit or the admin role. See this example.
The permissions strategy could also depend on what objects will be created by the installation. The user needs full access to exactly those API objects that the Helm installation will manage. Using RBAC Authorization explains this concept in more detail, with several examples you can use as a reference. This source would also be helpful.
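A common pattern, sketched here with hypothetical names, is a dedicated ServiceAccount for the pipeline bound to the default edit ClusterRole in just the release namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-deployer              # hypothetical; its token goes into the CI/CD system
  namespace: myapp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-deployer-edit
  namespace: myapp
subjects:
  - kind: ServiceAccount
    name: helm-deployer
    namespace: myapp
roleRef:
  kind: ClusterRole
  name: edit                       # default ClusterRole, scoped to myapp by the RoleBinding
  apiGroup: rbac.authorization.k8s.io

Note that edit also grants pods/exec, pods/portforward, and read access to Secrets in that namespace, so if those must be blocked you would need a custom Role that lists only the resources and verbs your charts actually create.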

How to Block a Certain Kubernetes User from Issuing Certain Commands

For example, I don't want this user to:
Edit Cluster
Edit Deployment
Edit ig
Delete Pods
...
But Allow this user to:
Get nodes
Get pods
Describe Pods
If I use RBAC, can I have guidance?
You will need to use RBAC for that. After creating a user, create a Role or a ClusterRole (depending on whether you want it to apply to a specific namespace or cluster-wide), and then create a RoleBinding or ClusterRoleBinding to bind the user to the role you created.
You can find it all here: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
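For the read-only side of that list, a cluster-wide role could be sketched like this (pod-node-reader and restricted-user are placeholder names; nodes are cluster-scoped, so this has to be a ClusterRole):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-node-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "events"]   # events let kubectl describe show the Events section
    verbs: ["get", "list", "watch"]          # read-only: no edit or delete verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-node-reader-binding
subjects:
  - kind: User
    name: restricted-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-node-reader
  apiGroup: rbac.authorization.k8s.io

Since RBAC is deny-by-default, anything not listed here (editing the cluster, editing Deployments, deleting Pods, and so on) is automatically refused.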

Kubernetes RBAC: How to allow exec only to a specific Pod created by Deployment

I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets and so on. Fairly standard stuff.
I need to grant a special user a Role that can exec only into a certain Pod. Currently RBAC grants the exec right to all pods in the namespace, but I need to tighten that down.
The problem is that the Pod(s) are created by a Deployment, configurator, so the Pod names are generated: configurator-xxxxx-yyyyyy. You cannot use a glob (i.e. configurator-*) in resourceNames, and a Role cannot grant exec on a Deployment directly.
So far I've thought about:
Converting Deployment into StatefulSet or a plain Pod, so Pod would have a known non-generated name, and glob wouldn't be needed
Moving the Deployment into separate namespace, so the global exec right is not a problem
Both of these work, but neither is optimal. Is there a way to write a proper Role for this?
RBAC, as it stands today, doesn't allow filtering resources by attributes other than namespace and resource name. The discussion is open here.
Thus, the namespace is the smallest unit for authorizing access to pods. Services must be separated into namespaces with an eye to which users will need access to them.
The optimal solution right now is to move this Deployment to another namespace, since it needs different access rules than the other deployments in the original namespace.
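For completeness, this is what a name-pinned exec Role looks like; it only works when the pod name is stable (a plain Pod or a StatefulSet replica such as configurator-0), which is exactly why the generated Deployment pod names defeat it. All names here are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exec-configurator
  namespace: app                        # hypothetical application namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    resourceNames: ["configurator-0"]   # fixed name, e.g. from a StatefulSet
    verbs: ["get"]                      # kubectl exec first GETs the pod
  - apiGroups: [""]
    resources: ["pods/exec"]
    resourceNames: ["configurator-0"]
    verbs: ["create"]                   # the exec itself is a create on pods/exec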