Kubernetes [RBAC]: User with access to specific Pods

I need to give access to a set of pods within a namespace to an external support.
I've been reading about the RBAC API, [Cluster]Roles and [Cluster]Role Bindings; but I could not find anything about how to apply a role to a group of pods (based on annotations or labels).
Does anyone know if it is possible to do that?
This is the Role I use now; I need to limit it to a specific set of pods:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <ClientX>-PodMonitor
  namespace: <namespace>
rules:
- apiGroups: [""]
  verbs: ["get", "list"]
  resources: ["pods", "pods/log"]
If you guys need more details, please let me know.
Thanks.

Try restricting the Role with resourceNames, as in this example from the docs:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]

Related

How to set up permissions for Kubernetes resources?

First off, I'm aware of the Kubernetes RBAC method. My question is: is there a way to create Kubernetes resources that can only be read and/or written by a specific Role (or a ClusterRole)?
For example, let's say I have a Kubernetes Secret. I want this Secret to be bound to a specific ClusterRole, then only a ServiceAccount bound to this specific ClusterRole could read it. Is there a way to set up something like that?
Edit: it looks like what I want here is not possible. Kubernetes RBAC was designed to GRANT access to certain resources. I wanted to DENY access based on a specific group (or set of rules).
You can use RBAC to manage role-based access in Kubernetes.
For example, let's say I have a Kubernetes Secret. I want this Secret
to be bound to a specific ClusterRole, so only a ServiceAccount bound
to this specific ClusterRole could read it. Is there a way to set up
something like that?
No, you cannot use a ClusterRole for that kind of granular access; however, you can create a Role that restricts access to Secrets within a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: secret-read-role
rules:
- apiGroups: ["*"]
  resources: ["secrets"]   # resource names are plural; "secret" would not match
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-read-sa
  namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: secret-read-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: secret-read-sa
  namespace: default
roleRef:
  kind: Role
  name: secret-read-role
  apiGroup: rbac.authorization.k8s.io   # roleRef requires the RBAC API group, not ""
Also check out resourceNames: you can list specific object names in a rule, which is how you attach a specific Secret (or other object) to a Role. Note that entries are matched as literal names, so a pattern such as userA-* is not expanded as a wildcard:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  resourceNames: ["userA-*"]   # matched literally, not as a glob
If you are planning to go with RBAC, you can use rbac-manager for easier management: https://github.com/FairwindsOps/rbac-manager
Extra:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: data-engineering
  name: umbrella:data-engineering-app
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["data-engineering-app-configmap"]   # restrict the rule to this specific ConfigMap
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: umbrella:data-engineering-app
  namespace: data-engineering
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: umbrella:data-engineering-app
subjects:
- kind: ServiceAccount
  name: data-engineering-app
  namespace: data-engineering
You can also refer to resources by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. Here is an example that restricts its subject to only get or update a ConfigMap named my-configmap:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing ConfigMap
  # objects is "configmaps"
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
A good example: https://thenewstack.io/three-realistic-approaches-to-kubernetes-rbac/
It is not possible to restrict access on a resource-by-resource basis like that.
The RBAC framework works by allowing specified Roles to perform certain actions (get, update, delete, etc.) on certain resources (pods, secrets, etc.) in a certain namespace.
ClusterRoles are used to grant access across all namespaces or to non-namespaced resources like nodes.
To achieve what you are looking for, you need to isolate your Kubernetes Secret in a namespace where only your specific Role is allowed to read Secrets.
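A minimal sketch of that isolation approach, assuming a hypothetical dedicated namespace and placeholder names:
apiVersion: v1
kind: Namespace
metadata:
  name: team-secrets              # hypothetical dedicated namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: team-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: team-secrets
subjects:
- kind: ServiceAccount
  name: trusted-app               # hypothetical ServiceAccount that should be able to read the Secret
  namespace: team-secrets
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
If nothing else is bound in that namespace, only trusted-app can read the Secret stored there.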

Does Istio recommend using Roles and ClusterRoles?

I am trying to setup my Helm chart to be able to deploy a VirtualService. My deploy user has the Edit ClusterRole bound to it. But I realized that because Istio is not part of the core Kubernetes distro, the Edit ClusterRole does not have permissions to add a VirtualService (or even look at them).
I can, of course, make my own Roles and ClusterRoles if needed. But I figured I would see if Istio has a recommended Role or ClusterRole for that.
But all the docs that I can find for Istio Roles and ClusterRoles are for old versions of Istio.
Does Istio not recommend using Roles and ClusterRoles anymore? If not, what do they recommend? If they do, where are the docs for it?
I ended up using these ClusterRoles. They merge with the standard Kubernetes roles of admin, edit and view. (My edit role only allows access to VirtualServices because that fits my situation.)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-admin
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["virtualservices"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

How can I allow port-forwarding for a specific deployment in Kubernetes?

I am trying to allow some users in my org to forward ports to our production namespace in Kubernetes. However, I don't want them to be able to forward ports to all services. I want to restrict access to only certain services. Is this possible?
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
  namespace: production
subjects:
- kind: User
  name: "xyz#org.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: allow-port-forward-for-deployment-a
  apiGroup: rbac.authorization.k8s.io
The above set up allows all services, but I don't want that.
I believe you can't. According to the docs:
Resources can also be referred to by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. To restrict a subject to only "get" and "update" a single configmap, you would write:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
Note that create requests
cannot be restricted by resourceName, as the object name is not known
at authorization time. The other exception is deletecollection.
Since you want to give the user permission to create port-forwards, I don't think you can.
These rules worked for me:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: port-forward
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "create"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
Assuming the users already have access to your Kubernetes cluster and the relevant namespace, they can simply port-forward a local port to a pod (resource) port.
How can you do this?
kubectl port-forward <POD_NAME> <LOCAL_PORT>:<POD_PORT>
See the documentation.
Quoting from the documentation: kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to port forward to (since Kubernetes v1.10).
Refer to this article if you wish; it nicely explains when you would need RBAC versus plain kubectl port-forward.
RBAC is only really needed when you want a person or a group of people to be allowed to do nothing more than port-forward to services in the relevant namespace of your Kubernetes cluster.
Workaround A: StatefulSets and resourceNames
It is possible to restrict port forwarding to a pod with a specific name. resourceNames refer to resources, not subresources:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  resourceNames: ["my-app"]
  verbs: ["create"]
A StatefulSet generates predictable pod names, but is different from a ReplicaSet and might not fit your use case.
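For example, with a hypothetical StatefulSet named my-app running two replicas, the pod names are fixed and can be listed explicitly in the rule (a sketch, not from the original answer):
- apiGroups: [""]
  resources: ["pods/portforward"]
  resourceNames: ["my-app-0", "my-app-1"]   # StatefulSet pods get predictable ordinal names
  verbs: ["create"]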
Workaround B: Jump pod and NetworkPolicy
Sketch:
A StatefulSet that runs kubectl port-forward services/my-service inside the cluster (JUMP).
A NetworkPolicy which restricts traffic from pods belonging to JUMP to the target service (see the sketch after this list).
RBAC which restricts creation of the portforward subresource to the pods of JUMP, up to a predefined maximum number of replicas: resourceNames: ["jump-0", "jump-1", ..., "jump-N"].
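A rough sketch of such a NetworkPolicy, assuming the target pods are labelled app: my-service and the JUMP pods are labelled app: jump (both labels and the port are placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-jump-to-my-service
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-service             # the pods behind the target service
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: jump               # only the JUMP pods may connect
    ports:
    - protocol: TCP
      port: 8080                  # placeholder target port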

RBAC for the Kubernetes Dashboard

I have a user "A". I have namespaces X, Y, Z. I have created an RBAC role and role binding for user "A", who has access to namespace "X".
I wanted to give user "A" access to the Kubernetes Dashboard (which involves a role and role binding in kube-system). But when I grant the Dashboard access, user "A" is able to see all the namespaces.
But I want him to see only namespace X (to which he has access).
How could I go about this?
What's the version of your Dashboard? As far as I know, from 1.7 on, Dashboard has used a more secure setup: by default it has only the minimal set of privileges required to make Dashboard work.
Anyway, you can check the privileges of the ServiceAccount used by Dashboard and make sure it has only those minimal privileges, like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Then create an RBAC rule that gives user A full privileges in namespace X:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-A-admin
  namespace: X
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: A
Make sure user A doesn't have any other RBAC rules.
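To confirm the result, you can list user A's effective permissions per namespace with impersonation (user and namespace names follow the question):
kubectl auth can-i --list --as=A -n X   # should show full admin verbs within X
kubectl auth can-i --list --as=A -n Y   # should show little or nothing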

Create ServiceAccounts for access to the Kubernetes Deployments

I want to access the Kubernetes Deployment objects via the API server.
I have a ServiceAccount manifest, shown below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: name
  namespace: namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: name
  namespace: namespace
rules:
- apiGroups: [""]
  resources: ["deployment"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["deployment/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["deployment/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: name
  namespace: namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: name
subjects:
- kind: ServiceAccount
  name: name
I'm getting a 403 Forbidden error when accessing the endpoint below with the token of this ServiceAccount:
/apis/apps/v1beta1/namespaces/namespace/deployments
All of your role's rules are for the core (empty) API group. However, the URL you're trying to access, /apis/apps/v1beta1, is in the "apps" API group (the part of the path after /apis). So to access that particular API path, you need to change the role definition to
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]   # note: resource names are plural
  verbs: ["create","delete","get","list","patch","update","watch"]
  # and likewise adjust the other deployment-related rules
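Putting it together, a sketch of the corrected Role (note that exec and log are subresources of pods, not of Deployments, so those rules belong to the core API group and the pods resource):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: name
  namespace: namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]    # exec is a subresource of pods
  verbs: ["create", "get"]
- apiGroups: [""]
  resources: ["pods/log"]     # so is log
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]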