Kubernetes RBAC rules for PersistentVolume

I'm trying to create an RBAC Role / rules for a service that needs a persistent volume, and it's still failing with a Forbidden error.
Here is my role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
And this is my cut down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it I get a forbidden error.
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's what I have allowed in the rules!
I can create the PV with admin credentials from the same yaml file and I can create any other resources (pods, services, etc) with the logdrop permissions. Just the PersistentVolume doesn't work for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my role binding as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
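For anyone debugging the same thing: you can reproduce the denied check without applying anything, using the same kubeconfig. This is only a diagnostic, not a fix:
kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml auth can-i get persistentvolumes
# Prints "no" here, since the Role only covers namespaced resources.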

PVs, namespaces, nodes and storage classes are cluster-scoped objects. As a best practice, to be able to list/watch those objects you need to create a ClusterRole and bind it to a ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should be matched with the previous one>
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: <namespace of the service account>
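Once both are applied, a quick way to verify the grant is kubectl's built-in access checker, impersonating the service account (placeholders as in the manifests above):
kubectl auth can-i list persistentvolumes \
  --as=system:serviceaccount:<namespace>:<service account name>
# Expected output: yes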

I see a potential problem here.
PersistentVolumes are cluster scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why when you use admin credentials it works but with logdrop it returns an error.
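If the service account genuinely needs to manage PersistentVolumes itself, the grant has to come from a ClusterRole plus a ClusterRoleBinding rather than a namespaced Role. A minimal sketch, reusing the service account from the question (the names logdrop-pv-access and logdrop-pv-access-binding are made up for this example; trim the verbs to what you actually need):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: logdrop-pv-access   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logdrop-pv-access-binding   # hypothetical name
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: logdrop-pv-access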
Please let me know if that makes sense.

The new role needs to be granted to a user, or group of users, with a rolebinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
- kind: User
  name: logdrop-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io
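For what it's worth, the same binding can also be created imperatively, which is handy for one-off setups (this assumes the Role from the question already exists in the logdrop namespace):
kubectl create rolebinding logdrop-rolebinding \
  --role=logdrop-user-full-access \
  --user=logdrop-user \
  --namespace=logdrop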

Related

Kubernetes cannot create resource "namespaces" in API group "" at the cluster scope even after creating rolebindings

I'm running a pipeline that creates a Kubernetes namespace, but when I run it I get:
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "namespaces" in API group "" at the cluster scope
I created a ClusterRole and a ClusterRoleBinding to allow the service account default in the gitlab-runner namespace to create namespaces with:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespace
  verbs:
  - create
and:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-namespace-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: modify-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner
But that gives me the same error.
What am I doing wrong?
[""] in clusterrole manifest it should be just "".
because [""] will be array where apiGroups expects a string.
under resources it should be namespaces not namespace because :
kubectl api-resources | grep 'namespace\|NAME'
NAME         SHORTNAMES   APIVERSION   NAMESPACED   KIND
namespaces   ns           v1           false        Namespace
so the ClusterRole manifest should be as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
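The ClusterRoleBinding from the question is still required so the rule actually applies to the service account. Once both are in place, you can sanity-check the permission with impersonation (read-only, nothing is applied):
kubectl auth can-i create namespaces \
  --as=system:serviceaccount:gitlab-runner:default
# Expected output: yes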
I had the issue below:
Namespaces is forbidden: User "system:serviceaccount:openshift-operators:minio-operator" cannot create resource "namespaces" in API group "" at the cluster scope
It got solved with the YAML below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role-cesar-3
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-binding-cesar-3
subjects:
- kind: ServiceAccount
  name: minio-operator
  namespace: openshift-operators
roleRef:
  kind: ClusterRole
  name: cluster-role-cesar-3
  apiGroup: rbac.authorization.k8s.io
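The same pair can also be created imperatively, which sidesteps YAML indentation mistakes entirely (same names as above):
kubectl create clusterrole cluster-role-cesar-3 \
  --verb=create --resource=namespaces
kubectl create clusterrolebinding role-binding-cesar-3 \
  --clusterrole=cluster-role-cesar-3 \
  --serviceaccount=openshift-operators:minio-operator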

How to restrict kube-monkey/chaoskube from needing cluster-wide permissions?

In order to run a high-availability test in a Kubernetes cluster, I use a tool such as chaoskube or kube-monkey, which kills random pods in namespaces to create "chaos" and see how the system and applications react.
By default these tools need a cluster role, so that their service account can list/kill pods across all namespaces in the cluster.
In my situation I want to install the tool and run the test in just one namespace (namespace x).
Is there any way to restrict the permissions of the service account so that it can only list/kill pods in namespace x, rather than the whole cluster?
I already tried to create a Role & RoleBinding in namespace x, but I still get the same RBAC error, as the service account expects to have the cluster permissions:
"pods is forbidden: User \"system:serviceaccount:x:chaoskube-sa\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
Update: role & rolebinding
This is the default set of permissions for its service account:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaoskube-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaoskube-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
With this configuration it works fine.
Now with restricted permissions for a specific namespace:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaoskube-role
  namespace: x
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaoskube-rolebinding
  namespace: x
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
With these restricted permissions it cannot list the pods, and I receive the RBAC error.
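"At the cluster scope" in the error is the clue: the tool issues a pod list across all namespaces, which a namespaced Role can never satisfy. You can see the difference with impersonation (a diagnostic sketch only; the tool itself would also need to be configured to watch just namespace x, e.g. chaoskube documents a --namespaces option, so check your tool's docs and version):
# Allowed by the Role above (namespace-scoped list):
kubectl auth can-i list pods -n x --as=system:serviceaccount:x:chaoskube-sa
# yes
# What the tool actually attempts (cluster-scoped list):
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:x:chaoskube-sa
# no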

How to execute a shell script from POD A in another POD B

So, like you said, I created another pod of kind: Job and included the script.sh.
In the script.sh file I run "kubectl exec" against the main pod to run a few commands.
The script gets executed, but I get the error "cannot create resource "pods/exec" in API group".
So I created a ClusterRole with resources: ["pods/exec"] and bound it to the default service account using a ClusterRoleBinding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
In the Job's pod spec I include the service account as shown below:
  restartPolicy: Never
  serviceAccountName: default
but I still get the same error. What am I doing wrong here?
Error from server (Forbidden): pods "mongo-0" is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods/exec" in API group "" in the namespace "default"
If this is something that needs to run regularly for maintenance, look into the Kubernetes DaemonSet object.
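As a first debugging step, it's worth confirming whether the binding took effect at all, independent of the Job (an impersonation check; subresources can be written in the pods/exec form):
kubectl auth can-i create pods/exec -n default \
  --as=system:serviceaccount:default:default
# "yes" means RBAC is in place and the problem is elsewhere,
# e.g. the Job's pod not actually running as that service account.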

Kubernetes cannot list pods as user

I am seeing this error for user jenkins when deploying.
Error: pods is forbidden: User "system:serviceaccount:ci:jenkins" cannot list pods in the namespace "kube-system"
I have created a definition for service account
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
I have created a ClusterRoleBinding
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-system
Any advice?
Without a namespace (in the ServiceAccount creation) you will automatically create it in the default namespace. The same goes for the Role (which is always a one-namespace resource).
What you need to do is create a ClusterRole with the correct permissions (basically just change your Role into a ClusterRole) and then set the correct namespace either on the ServiceAccount resource or on the subject in the ClusterRoleBinding.
You can also skip creating the Role and RoleBinding, as the ClusterRole and ClusterRoleBinding make them redundant either way.
--
With that said, it's always good practice to create a specific ServiceAccount and RoleBinding per namespace when it comes to deploys, so that you don't accidentally create an admin account which is used in a remote CI tool like... Jenkins ;)
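A minimal sketch of the suggested fix, assuming the Jenkins ServiceAccount lives in the ci namespace (as the error message indicates) and reusing the rules from the Role above; note this grants the permissions cluster-wide:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci   # the namespace from the error message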

Kubernetes RBAC to restrict user to see only required resources on kubernetes dashboard

Hi everyone,
I want to restrict my developers to see only the required resources on the Kubernetes dashboard (for example, only their namespace, not all namespaces). Is it possible to do that? If yes, can someone point me to the right documents? Many thanks.
I am using the below RBAC for the kube-system namespace. However, the user is able to see all the namespaces on the dashboard rather than only the namespaces he has access to.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: dashboard-reader-role
rules:
- apiGroups: [""]
  resources: ["service/proxy"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-reader-ad-group-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dashboard-reader-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "****************"
Please see the k8s RBAC documentation.
Example: create a developer role in the development namespace:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: development
  name: developer
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["list", "get", "watch"]
  # You can use ["*"] for all verbs
Then bind it:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-role-binding
  namespace: development
subjects:
- kind: User
  name: DevDan
  apiGroup: ""
roleRef:
  kind: Role
  name: developer
  apiGroup: ""
Also, there is a built-in view-only role that you can bind to a user:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
C02W84XMHTD5:~ iahmad$ kubectl get clusterroles --all-namespaces | grep view
system:aggregate-to-view    17d
view                        17d
But this is a cluster-wide view role. If you want users to see only the resources in a specific namespace, bind a view role in that namespace instead, as in the example above.
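A minimal sketch of that, reusing the User and namespace from the earlier example (view-development is a made-up binding name); because this is a namespaced RoleBinding, the built-in view ClusterRole only takes effect inside development:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-development   # hypothetical name
  namespace: development
subjects:
- kind: User
  name: DevDan
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view   # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io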