OpenShift custom resource restrictions - Kubernetes

I don't understand how I can restrict access to custom resources in OpenShift using RBAC.
Let's assume I have a custom API:
apiVersion: yyy.xxx.com/v1
kind: MyClass
metadata:
...
Is it possible to prevent some users from deploying resources where apiVersion=yyy.xxx.com/v1 and kind=MyClass?
Also, can I grant other users access to deploy resources where apiVersion=yyy.xxx.com/v1 and kind=MyOtherClass?
If this can be done using RBAC roles, how do I deploy RBAC roles in OpenShift? Only via the CLI, or can I write YAML configuration files and apply them?

You can use cluster roles and role bindings:
oc adm policy remove-cluster-role-from-group <role> system:authenticated:oauth
oc adm policy add-cluster-role-to-group <role> system:authenticated:oauth
So the general idea is to remove the permission to deploy the resource from all authenticated users.
The next step is to add the permission to deploy that resource only to ServiceAccounts assigned to specific namespaces.
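As a rough sketch of that second step (the names and the resource plural are placeholders; the plural is whatever the CRD defines), a ClusterRole can describe the permission once, and a RoleBinding can scope it to a ServiceAccount in a single namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myclass-deployer
rules:
- apiGroups: ["yyy.xxx.com"]
  resources: ["myclasses"]        # plural resource name defined by the CRD
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myclass-deployer-binding
  namespace: team-a               # the permission applies only in this namespace
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: team-a
roleRef:
  kind: ClusterRole
  name: myclass-deployer
  apiGroup: rbac.authorization.k8s.io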

OpenShift/Kubernetes has Cluster Role/Binding and Local Role/Binding.
Here are the definitions from the docs. *1
Cluster role/binding: Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.
Local role/binding: Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.
If your custom resource is namespaced, you can grant other users permission to it within a project.
Whereas if the custom resource is cluster-scoped, only a cluster admin can deploy it.
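For the namespaced case, here is a minimal sketch of a local Role and RoleBinding for the MyOtherClass kind from the question (the user name and resource plural are assumptions); this also shows that RBAC objects can be deployed as plain YAML with oc apply -f:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myotherclass-editor
  namespace: my-project
rules:
- apiGroups: ["yyy.xxx.com"]
  resources: ["myotherclasses"]   # plural assumed from the CRD
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myotherclass-editor-binding
  namespace: my-project
subjects:
- kind: User
  name: alice                     # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: myotherclass-editor
  apiGroup: rbac.authorization.k8s.io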
*1: https://docs.openshift.com/container-platform/4.11/authentication/using-rbac.html

Related

How to carve out exceptions in Kubernetes RBAC

Use-cases:
Grant full access to all resources on the cluster (including the ability to e.g. create new namespaces), except for in certain namespaces such as kube-system.
Grant read permissions to all resources in the cluster except for Secrets.
This seems like a really basic set of use-cases, yet it's not obvious how to implement them.
Grant read permissions to all resources in the cluster except for Secrets.
kubectl get clusterrole view -o yaml | grep -v secrets
Fix the metadata to create a new ClusterRole, then create ClusterRoleBindings using that ClusterRole.
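A minimal sketch of the result (heavily abbreviated; the real view role lists many more resources, and the group name is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-no-secrets           # new name; drop the original's labels and annotations
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "pods", "services"]   # note: no "secrets"
  verbs: ["get", "list", "watch"]
# ...remaining rules copied from "view", minus any secrets entries
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-no-secrets-binding
subjects:
- kind: Group
  name: readers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view-no-secrets
  apiGroup: rbac.authorization.k8s.io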
Grant full access to all resources ... except in certain namespaces
For this, you would need to create RoleBindings in each namespace you want to delegate those privileges to; you won't be able to filter out namespaces by their name.
You could use the ClusterRole admin and create RoleBindings in all your projects, as sketched below. OpenShift has a default project template you can customize so that those RoleBindings are added automatically when a new namespace is provisioned. I don't think plain Kubernetes has such an option; there you might use a CronJob, say in kube-system, that creates those RoleBindings in new namespaces on a schedule.
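Per namespace, that binding is just (the group name is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-admins
  namespace: team-a               # repeat in every namespace you delegate
subjects:
- kind: Group
  name: platform-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                     # built-in ClusterRole, scoped here by the RoleBinding
  apiGroup: rbac.authorization.k8s.io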

Deploy from CI/CD via HELM to external Kubernetes cluster with limited rights

I understand that I can copy my .kube/config to my CI/CD server, or use a named ServiceAccount, to allow my CD pipeline to deploy via Helm.
However, what if I want to allow deployment via Helm, but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forward services
... so basically accessing all data in the cluster, except for stateless Docker containers deployed via Helm.
Would it be possible to create a new ClusterRole with limited rights? At a minimum, what verbs does Helm need in a ClusterRole to function properly?
What rights does Helm need at the least?
It comes down to what your Helm chart does in Kubernetes.
ClusterRoles can be bound to a particular namespace through reference in a RoleBinding; the admin, edit and view default ClusterRoles are commonly used in this manner (see this description for more detail). For example, edit is a default ClusterRole that allows read/write access to most objects in a namespace, but does not allow viewing or modifying Roles or RoleBindings. Granting a user cluster-admin at the namespace scope, via a RoleBinding, provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by using either the edit or the admin role. See this example.
The permissions strategy could also depend on what objects the installation will create. The user will need full access to the API objects that the Helm installation will manage. Using RBAC Authorization explains this in more detail, with several examples you can use as a reference. Also, this source would be helpful.
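A minimal sketch, assuming the pipeline deploys into a single namespace: bind the built-in edit ClusterRole to a dedicated ServiceAccount and point Helm's kubeconfig at that ServiceAccount's token:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-deployer-edit
  namespace: apps                 # Helm can only act inside this namespace
subjects:
- kind: ServiceAccount
  name: helm-deployer
  namespace: apps
roleRef:
  kind: ClusterRole
  name: edit                      # read/write on most namespaced objects
  apiGroup: rbac.authorization.k8s.io

Note that edit still permits pods/exec and pods/portforward, so to also block those you would need a custom Role with those subresources removed.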

client_email or client_id: which field to use to grant cluster-admin-rights before creating RBAC roles

I am deploying GKE components using GKE API.
Since it is an automated process, I pass a service-account.json file to my program; this file is used to authenticate with GKE.
I want to deploy an RBAC role using the above setup.
According to GKE-RBAC-Docs, USER_ACCOUNT needs to be granted cluster-admin-binding before being able to make RBAC roles.
The service-account.json file has a field for client_email and another field for client_id.
On some clusters I need to grant client_email as the User in the cluster-admin-binding, whereas on others, client_id.
Can you tell me what I need to configure in my cluster so that only client_id is needed for creating RBAC roles?
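For reference, the binding step described in the GKE docs is a one-liner; which identity string the cluster accepts as --user is exactly what is in question here (the email below is a placeholder):

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user my-sa@my-project.iam.gserviceaccount.com   # client_email from service-account.json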

AWS EKS: How is the first user added to system:masters group by EKS

EKS documentation says
"When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:master permissions in the cluster's RBAC configuration".
But after the EKS cluster creation, if you check the aws-auth config map, it does NOT have the ARN mapping to system:masters group. But I am able to access the cluster via kubectl. So if the aws-auth (heptio config map) DOES NOT have the my ARN (I was the one who created the EKS cluster) mapped to system:masters group, how does the heptio aws authenticator authenticate me?
I got to know the answer. Basically, in the heptio server-side component, the static mapping for system:masters is done under /etc/kubernetes/aws-iam-authenticator/ (https://github.com/kubernetes-sigs/aws-iam-authenticator#3-configure-your-api-server-to-talk-to-the-server), which is mounted into the heptio authenticator pod. Since you do not have access to this in EKS, you can't see it. However, if you invoke /authenticate yourself with the pre-signed request, you should get a TokenReviewStatus response from the heptio authenticator showing the mapping from the ARN (of whoever created the cluster) to the system:masters group!
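For illustration, that static mapping has roughly this shape (based on the aws-iam-authenticator README; all values are placeholders, and on EKS this file is not visible):

# aws-iam-authenticator server configuration
clusterID: my-cluster
server:
  mapUsers:
  - userARN: arn:aws:iam::123456789012:user/cluster-creator
    username: cluster-creator
    groups:
    - system:masters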
When you create your cluster, you also install aws-iam-authenticator,
and since you created the cluster, I'm sure you have ~/.aws/credentials.
If you check the aws-auth ConfigMap, you can see it references aws-iam-authenticator.
You also have a ~/.kube/config file, where you can see that the IAM authenticator is invoked with your AWS profile.
So whenever you run a kubectl command, it reads the kubeconfig file to authenticate with your cluster.
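The relevant piece of ~/.kube/config looks roughly like this (cluster and profile names are placeholders, and the exec apiVersion varies by client version):

users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-eks-cluster               # cluster ID to generate a token for
      env:
      - name: AWS_PROFILE
        value: my-profile            # credentials from ~/.aws/credentials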

Kubernetes: Role vs ClusterRole

Which is the difference between a Role or a ClusterRole?
When should I create one or the other one?
I can't quite figure out the difference between them.
From the documentation:
A Role can only be used to grant access to resources within a single namespace.
Example: List all pods in a namespace
A ClusterRole can be used to grant the same permissions as a Role, but because they are cluster-scoped, they can also be used to grant access to:
cluster-scoped resources (like nodes)
non-resource endpoints (like “/healthz”)
namespaced resources (like pods) across all namespaces (needed to run kubectl get pods --all-namespaces, for example)
Examples: List all pods in all namespaces. Get a list of all nodes and their public IPs.
Cluster roles also allow for the reuse of common permission sets across namespaces (via role bindings). The bootstrap admin, edit and view cluster roles are the canonical examples.
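A minimal side-by-side sketch (names are placeholders); structurally, the differences are the kind and the namespace field:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role                        # namespaced: grants access only within "dev"
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                 # cluster-scoped: no namespace field
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]            # nodes are cluster-scoped, so a Role cannot grant this
  verbs: ["get", "list"]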