RBAC - Access to the Kubernetes API

In the manifest YAML below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups: [apiextensions.k8s.io]
  resources: [customresourcedefinitions]
  verbs: ['*']
- apiGroups: [monitoring.coreos.com]
  resources:
  - alertmanagers
  - prometheuses
  - prometheuses/finalizers
  - alertmanagers/finalizers
  - servicemonitors
  - prometheusrules
  verbs: ['*']
What do the rules with apiGroups signify?

In Kubernetes, resources can be either grouped resources or individual resources.
Example:
kubectl api-resources | grep 'crds\|pods\|NAME'
NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
pods                        po           v1                        true         Pod
customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition
In the above output, the apiVersion for pods is displayed as v1, so it is an individual (core) resource, whereas for customresourcedefinitions the apiVersion is displayed as apiextensions.k8s.io/v1, which indicates that it is a grouped resource under the group apiextensions.k8s.io.
When defining RBAC rules (Roles/ClusterRoles) for grouped resources, we need to specify apiGroups along with resources.
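To see exactly which resource types belong to a given group, you can filter kubectl api-resources by group. For example (assuming the Prometheus Operator CRDs referenced by the rule above are installed in the cluster):
kubectl api-resources --api-group=apiextensions.k8s.io
kubectl api-resources --api-group=monitoring.coreos.com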

Related

What is the importance of apiGroups in a Kubernetes Role definition?

Could someone please help me with this?
I would like to understand a bit about apiGroups and its usage in the Role definition below.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example.com-superuser
rules:
- apiGroups: ["example.com"]
  resources: ["*"]
  verbs: ["*"]
I was going through RBAC in Kubernetes. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
The above example is from this link.
An API group groups a set of related resource types under a common name. For example, resource types related to Ingress are grouped under the networking.k8s.io API group:
$ kubectl api-resources --api-group networking.k8s.io
NAME              SHORTNAMES   APIVERSION             NAMESPACED   KIND
ingressclasses                 networking.k8s.io/v1   false        IngressClass
ingresses         ing          networking.k8s.io/v1   true         Ingress
networkpolicies   netpol       networking.k8s.io/v1   true         NetworkPolicy
It is possible to have two different resource types with the same name in different API groups. For example, in my OpenShift system there are two different groups that provide a Subscription resource type:
$ kubectl api-resources | awk '$NF == "Subscription" {print}'
subscriptions   appsub     apps.open-cluster-management.io/v1   true   Subscription
subscriptions   sub,subs   operators.coreos.com/v1alpha1        true   Subscription
If I am creating a role, I need to specify to which Subscription I want to grant access. This:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: allow-config-access
rules:
- apiGroups:
  - operators.coreos.com
  resources:
  - subscriptions
  verbs: ["*"]
Provides access to different resources than this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: allow-config-access
rules:
- apiGroups:
  - apps.open-cluster-management.io
  resources:
  - subscriptions
  verbs: ["*"]
apiGroups in Kubernetes specifies which API groups the resources of a Role or ClusterRole rule belong to. In the example given, apiGroups is set to ["example.com"], which means the Role is allowed to access all resources from the example.com API group. This allows admins to control access to different resources within the Kubernetes cluster.

k8s - give permission for all resources

I'm trying to create a Job to list all resources because my connection is terrible. Is there any way to give a pod permission to run the command below?
Here is the ClusterRole that I am trying:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: istio-system
  name: workaround
rules:
- apiGroups: [""]
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups: ['*']
  resources:
  - '*'
  verbs:
  - '*'
The command is:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
If you are just looking to give your workload an admin role, you can use the prebuilt cluster-admin cluster role which should be available on every k8s cluster.
See the docs for more details - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
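A minimal sketch of such a binding, assuming the Job runs under a ServiceAccount named list-all in the istio-system namespace (both names are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: workaround-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: list-all
  namespace: istio-system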

Kubernetes Role should grant access to all resources but it ignores some resources

The Role namespace-limited should have full access to all resources (of the specified API groups) inside a namespace. My Role manifest looks like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups:
  - core
  - apps
  - batch
  - networking.k8s.io
  resources: ["*"] # asterisk to grant access to all resources of the specified api groups
  verbs: ["*"]
I associated the Role with a ServiceAccount using a RoleBinding, but unfortunately this ServiceAccount has no access to Pod, Service, Secret, ConfigMap, and Endpoints resources. These resources are all part of the core API group. All the other common workloads work though. Why is that?
The core group, also referred to as the legacy group, is at the REST path /api/v1 and uses apiVersion: v1.
You need to use "" for the core API group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restricted-xample
  name: namespace-limited
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
To test the permissions of the service account, use the commands below:
kubectl auth can-i get pods --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get secrets --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get configmaps --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get endpoints --as=system:serviceaccount:restricted-xample:default -n restricted-xample
Just figured out that it works when I omit the core keyword, like in this example. The following Role manifest works:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"]
  resources: ["*"]
  verbs: ["*"]
But why it does not work if I specify the core API group is a mystery to me.

How can I allow port-forwarding for a specific deployment in Kubernetes?

I am trying to allow some users in my org to forward ports to our production namespace in Kubernetes. However, I don't want them to be able to forward ports to all services. I want to restrict access to only certain services. Is this possible?
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
  namespace: production
subjects:
- kind: User
  name: "xyz#org.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: allow-port-forward-for-deployment-a
  apiGroup: rbac.authorization.k8s.io
The above setup allows port-forwarding to all services, but I don't want that.
I believe you can't. According to the docs:
Resources can also be referred to by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. To restrict a subject to only “get” and “update” a single configmap, you would write:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
Note that create requests cannot be restricted by resourceName, as the object name is not known at authorization time. The other exception is deletecollection.
Since you want to give the user permissions to create the forward ports, I don't think you can.
These rules worked for me:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: port-forward
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "create"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
Assuming users already have access to your Kubernetes cluster and the relevant namespace, they can simply port-forward a local port to a pod (resource) port.
How can you do this?
kubectl port-forward <POD_NAME> <LOCAL_PORT>:<POD_PORT>
See Documentation
Quoting from the documentation: kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to port forward to, since Kubernetes v1.10.
Refer to this article if you wish; it nicely explains when you need RBAC versus kubectl port-forward.
RBAC would have been useful only when you wanted a person or a group of people to be able to do nothing but port-forward to services in a relevant namespace of your Kubernetes cluster.
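For example, forwarding a local port to a Service (and through it to a matching pod) in the production namespace could look like this (the service name and ports are placeholders):
kubectl port-forward -n production service/frontend 8443:443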
Workaround A: StatefulSets and resourceNames
It is possible to restrict port forwarding to a pod with a specific name; resourceNames refers to resources, not subresources:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  resourceNames: ["my-app"]
  verbs: ["create"]
A StatefulSet generates predictable pod names, but is different from a ReplicaSet and might not fit your use case.
Workaround B: Jump pod and NetworkPolicy
Sketch:
A StatefulSet (JUMP) that runs kubectl port-forward services/my-service inside the cluster.
A NetworkPolicy that restricts traffic from the pods belonging to JUMP to the target service (a sketch follows below).
RBAC that restricts creation of the portforward subresource to the pods of JUMP, up to a predefined maximum number of replicas: resourceNames: ["jump-0", "jump-1", ..., "jump-N"].
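A minimal sketch of such a NetworkPolicy, assuming the target service's pods carry the label app: my-service and the JUMP pods carry app: jump (the namespace, names, and labels are placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: production
  name: allow-jump-to-my-service
spec:
  podSelector:
    matchLabels:
      app: my-service # the target service's pods (assumed label)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: jump # the JUMP StatefulSet's pods (assumed label)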

Prometheus is not able to monitor all the pods in Kubernetes

So I have 3 namespaces. When I deployed Prometheus on Kubernetes, I see this error in the logs; it is unable to monitor all the namespaces.
Error:
\"system:serviceaccount:development:default\" cannot list endpoints at the cluster scope"
level=error ts=2018-06-28T21:22:07.390161824Z caller=main.go:216 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:268: Failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:devops:default\" cannot list endpoints at the cluster scope"
You'd better use a dedicated service account to access Kubernetes, and give that service account the specific privileges that Prometheus needs, like the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
This presumes that you deploy Prometheus in the kube-system namespace. You also need to specify the service account, e.g. serviceAccountName: prometheus, in your Prometheus Deployment manifest.
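For example, the relevant part of such a Deployment could look like the following sketch (the image tag and labels are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-system
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus # run the pod under the ServiceAccount created above
      containers:
      - name: prometheus
        image: prom/prometheus:v2.45.0 # placeholder image tag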