Why is my PodSecurityPolicy applied even if I don't have access?

I have two PodSecurityPolicies:
000-privileged (only kube-system service accounts and admin users)
100-restricted (everything else)
I have a problem with how they are assigned to pods.
First policy binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - 000-privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged-kube-system
  namespace: kube-system
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
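As a side check, this group-based binding can be verified with impersonation (a sketch; the default service account in kube-system is just an example, and the group is passed explicitly because the binding targets the system:serviceaccounts group rather than an individual account):
$ kubectl -n kube-system auth can-i use podsecuritypolicies/000-privileged \
    --as=system:serviceaccount:kube-system:default \
    --as-group=system:serviceaccounts
yes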
Second policy binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - 100-restricted
  verbs:
  - use
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: psp:restricted
  apiGroup: rbac.authorization.k8s.io
Everything works fine in kube-system.
However, in other namespaces, it does not work as expected:
If I create a Deployment (kubectl apply -f deployment.yml), its pod gets tagged with the PSP 100-restricted.
If I create a Pod directly (kubectl apply -f pod.yml), it gets tagged with the PSP 000-privileged. I really don't get why it's not 100-restricted.
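(By "tagged" I mean the kubernetes.io/psp annotation that the admission controller sets on each admitted pod; a minimal way to read it, assuming a pod named my-pod:)
$ kubectl get pod my-pod -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'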
My kubectl is configured with an external authentication token from OpenID Connect (OIDC).
I verified the access and everything seems ok:
$ kubectl auth can-i use psp/100-restricted
yes
$ kubectl auth can-i use psp/000-privileged
no
Any clue?

The problem was that my user had access to all verbs (*) on all resources (*) of the 'extensions' apiGroup in its Role, and that wildcard includes the 'use' verb on podsecuritypolicies. PSP admission checks the identity of whoever creates the pod: when I created a Pod directly, my own (over-privileged) user was checked, so 000-privileged was usable; pods created through a Deployment are created by the controller on behalf of the pod's service account, which could only use 100-restricted.
The documentation is a bit unclear (https://github.com/kubernetes/examples/tree/master/staging/podsecuritypolicy/rbac):
The use verb is a special verb that grants access to use a policy while not permitting any other access. Note that a user with superuser permissions within a namespace (access to * verbs on * resources) would be allowed to use any PodSecurityPolicy within that namespace.
I got confused by the mention of 'within that namespace'. Since PodSecurityPolicies are not namespaced, I assumed they could not be used without a ClusterRole/RoleBinding in the namespace giving explicit access. Seems I was wrong...
I modified my role to specify the following:
rules:
- apiGroups: ["", "apps", "autoscaling", "batch"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["extensions"]
  resources: ["*"]
  # Avoid '*' here so the 'use' verb on the podsecuritypolicies resource is not granted
  verbs: ["create", "get", "watch", "list", "patch", "delete", "deletecollection", "update"]
And now it picks up the proper PSP. One interesting side effect: it also prevents users from modifying (create/delete/update/etc.) podsecuritypolicies.
It seems the 'use' verb is quite special after all...
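For anyone hitting the same thing: a quick way to spot such wildcard grants is to dump the effective permissions in the namespace with kubectl auth can-i --list (a sketch; my-namespace is a placeholder):
$ kubectl auth can-i --list -n my-namespace
Any row showing verbs [*] on resources [*] for the extensions apiGroup implicitly includes 'use' on podsecuritypolicies.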

Related

How to restrict kube-monkey/chaoskube from getting cluster-wide permissions?

In order to run a high-availability test in a Kubernetes cluster, I use a tool such as chaoskube or kube-monkey, which kills random pods in namespaces to create "chaos" and to see how the system and the applications react.
By default these tools need a ClusterRole, in order to let their service account list/kill pods across all namespaces in the cluster.
In my situation I want to install this tool and run the test in just one namespace (namespace x).
Is there any way to restrict the permissions of the service account so that it can only list/kill pods in namespace x rather than the whole cluster?
I already tried to create a Role & RoleBinding in namespace x, but I still get the same RBAC error, as the service account expects to have the cluster-wide permissions:
"pods is forbidden: User \"system:serviceaccount:x:chaoskube-sa\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
Update: Role & RoleBinding
These are the default permissions for its service account:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaoskube-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaoskube-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
With this configuration it works fine.
Now, with permissions restricted to a specific namespace:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaoskube-role
  namespace: x
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaoskube-rolebinding
  namespace: x
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
It cannot list the pods, and I receive the RBAC error.
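The gap can be confirmed with impersonation: with only the Role/RoleBinding above, the service account may list pods inside namespace x but not across the cluster, which is exactly what the tool asks for (a sketch):
$ kubectl -n x --as=system:serviceaccount:x:chaoskube-sa auth can-i list pods
yes
$ kubectl --as=system:serviceaccount:x:chaoskube-sa auth can-i list pods --all-namespaces
no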

deletecollection kubernetes (tekton) resources - specific RBAC needed?

I am trying to delete tekton kubernetes resources in the context of a service account with an on-cluster kubernetes config, and am experiencing errors specific to accessing deletecollection with all tekton resources. An example error is as follows:
pipelines.tekton.dev is forbidden: User "system:serviceaccount:my-account:default" cannot deletecollection resource "pipelines" in API group "tekton.dev" in the namespace "my-namespace"
I have tried to apply RBAC to help here, but continue to experience the same errors. My RBAC attempt is as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
- apiGroups: ["tekton.dev"]
  resources: ["pipelines", "pipelineruns", "tasks", "taskruns"]
  verbs: ["get", "watch", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-role-binding
  namespace: my-namespace
subjects:
- kind: User
  name: system:serviceaccount:my-account:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
These RBAC configurations continue to result in the same error. Is this, or something similar, necessary? Are there any examples of RBAC for interfacing with, and specifically deleting, Tekton resources?
Given two namespaces, my-namespace and my-account, the default service account in the my-account namespace is correctly granted permission for the deletecollection verb on pipelines in my-namespace.
You can verify this using kubectl auth can-i like this after applying:
$ kubectl -n my-namespace --as="system:serviceaccount:my-account:default" auth can-i deletecollection pipelines.tekton.dev
yes
Verify that you have actually applied your RBAC manifests.
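For example, the objects' presence can be checked with (object names taken from the manifests above):
$ kubectl -n my-namespace get role/my-role rolebinding/my-role-binding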
Change the RBAC as below
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
- apiGroups: ["tekton.dev"]
  resources: ["pipelines", "pipelineruns", "tasks", "taskruns"]
  verbs: ["get", "watch", "list", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-account
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
A few things to note:
Fixed subjects to use ServiceAccount instead of User. This is actually the cause of the failure, because the service account was never granted the RBAC.
I assumed that you want the default service account of the my-account namespace to delete the Tekton resources in my-namespace. If it's different, then the Role and RoleBinding need to be changed accordingly.
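After applying the corrected manifests, the impersonation check from the other answer should come back positive:
$ kubectl -n my-namespace --as="system:serviceaccount:my-account:default" auth can-i deletecollection pipelines.tekton.dev
yes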

What happens when multiple cluster roles are assigned to one service account in kubernetes?

I know that you can assign multiple Roles to one service account when you want it to access multiple namespaces, but what I wonder is how it behaves when you assign it more than one ClusterRole, which is cluster-scoped. From my perspective, I think it will choose one of them, but I'm not sure.
Permissions are purely additive (there are no "deny" rules). (reference)
This is the golden 🥇 rule here that we must memorize for Kubernetes RBAC roles.
"Purely additive" means always ALLOW, no revoke.
Hence, "purely additive" means there are neither conflicts nor an order of precedence.
It's not like AWS IAM policies, where we have DENY and ALLOW; there, we have to know which one has the highest order of precedence.
It's not like subnet ACLs either, where we have DENY and ALLOW; there, we need to assign a number to each rule, and that number decides the order of precedence.
Example:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
subjects:
- kind: User
  name: abdennour
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
subjects:
- kind: User
  name: abdennour
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
As you can see in this example, the user abdennour ends up with read access to both nodes and pods.
If you assign multiple ClusterRoles to a service account using multiple RoleBindings or ClusterRoleBindings, the service account gets the aggregate of all of those ClusterRoles, meaning all the verbs on all the resources defined in them.
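The additivity is easy to observe with kubectl auth can-i and impersonation; a sketch against the two bindings above:
$ kubectl auth can-i get pods --as=abdennour
yes
$ kubectl auth can-i get nodes --as=abdennour
yes
$ kubectl auth can-i delete pods --as=abdennour
no
The two grants simply union together; anything neither ClusterRole allows stays forbidden.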

Kubernetes RBAC rules for PersistentVolume

I'm trying to create an RBAC Role / rules for a service that needs a persistent volume, and it's still failing with a forbidden error.
Here is my role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
And this is my cut down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it I get a forbidden error.
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's what I have allowed in the rules!
I can create the PV with admin credentials from the same yaml file and I can create any other resources (pods, services, etc) with the logdrop permissions. Just the PersistentVolume doesn't work for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my role binding as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
PVs, namespaces, nodes and storage classes are cluster-scoped objects. To be able to list/watch those objects, you need to create a ClusterRole and bind it to a ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should match the previous one>
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: <namespace of the service account>
I see a potential problem here.
PersistentVolumes are cluster-scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why it works when you use admin credentials but returns an error with logdrop.
Please let me know if that makes sense.
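That split is easy to confirm with impersonation: the namespaced Role covers resources inside logdrop, but not the cluster-scoped PV (a sketch):
$ kubectl -n logdrop --as=system:serviceaccount:logdrop:logdrop-user auth can-i create pods
yes
$ kubectl --as=system:serviceaccount:logdrop:logdrop-user auth can-i get persistentvolumes
no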
The new role needs to be granted to a user, or group of users, with a rolebinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
- kind: User
  name: logdrop-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io

Read only kubernetes user

I'm trying to create a read-only user. I want the user to be able to list nodes and pods and view the dashboard. I got the certs created and can connect, but I'm getting the following error.
$ kubectl --context minikube-ro get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "erst-operation" cannot list pods at the cluster scope
My cluster role...
$ cat helm/namespace-core/templates/pod-reader-cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
My cluster role binding...
$ cat helm/namespace-core/templates/pod-reader-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: erst-operation
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
I'm aware the above shouldn't grant permissions to see the dashboard, but how do I get it to just list the pods?
Your ClusterRole should include the core API group (""), as the pods resource is in the core group:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # ClusterRoles are cluster-scoped, so no namespace is set here
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
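After applying the updated ClusterRole, access can be re-checked without switching contexts by impersonating the user (a sketch):
$ kubectl auth can-i list pods --all-namespaces --as=erst-operation
yes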