I am creating a Pod Security Policy to stop ALL users from creating a Pod as Root user. My cluster is on GKE.
The steps I've carried out so far are:
1) Enable PodSecurityPolicy in my cluster
gcloud beta container clusters update standard-cluster-11 --enable-pod-security-policy
2) Define the Policy. The policy is simple and restricts root user.
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: a-restrict-root
spec:
privileged: false
runAsUser:
rule: MustRunAsNonRoot # <------ Root user restricted.
seLinux:
rule: RunAsAny
fsGroup:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
3) And then, of course, implementing the correct RBAC rules so that the policy applies to ALL users.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gce:podsecuritypolicy:a-restrict-root
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
rules:
- apiGroups:
- policy
resourceNames:
- a-restrict-root
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gce:podsecuritypolicy:a-restrict-root
labels:
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/cluster-service: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gce:podsecuritypolicy:a-restrict-root
subjects:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts
Now comes the part where I try to spin up a Pod. The pod definition looks like this:
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 0
fsGroup: 0
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
As you can see, runAsUser is set to 0, meaning root.
When I run kubectl create -f pod.yaml, the pod is created and goes into the Running state.
When I exec into the Pod, I can see all processes running as root:
$ kubectl exec -it security-context-demo -- sh
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 812 ? Ss 19:25 0:00 /bin/sh -c node server.js
root 6 0.4 0.5 772124 22656 ? Sl 19:25 0:00 node server.js
root 11 0.0 0.0 4336 724 ? Ss 19:26 0:00 sh
root 16 0.0 0.0 17500 2072 ? R+ 19:26 0:00 ps aux
But based on my PodSecurityPolicy this should NOT be allowed. Is there something I have missed?
UPDATE:
I spun up a default nginx pod that I know always starts as the root user. Its manifest is:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
And when I create it, it too starts up successfully.
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 2m
Whereas, because of the PSP, it should not have started up.
If you fetch the created pod from the API, it will contain an annotation indicating which PSP allowed the pod. I expect a different PSP is in place which allows the pod.
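For example, the admitting policy is recorded in the kubernetes.io/psp annotation (the nginx pod from the update is used here):
# Which PSP admitted the pod (annotation set by the PSP admission controller)
kubectl get pod nginx -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'
# List all PSPs in the cluster to see what else could have admitted it
kubectl get psp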
I see what you are missing; just add this to your pod config file:
Before that, make sure to create the user (say, appuser with uid 999) and group (say, appgroup with gid 999) in the Docker container, then start the container with that user and add:
securityContext:
runAsUser: 999
This might be a good read: SecurityContext
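To create such a user in the image, here is a minimal Dockerfile sketch; the appuser/appgroup names and the 999 IDs are just the example values above, and the alpine-based node image is an assumption:
FROM node:alpine
# Create the example group and user with non-root IDs (999 is arbitrary, just non-zero)
RUN addgroup -g 999 appgroup && adduser -D -u 999 -G appgroup appuser
# Switch to the non-root user so the container no longer starts as root
USER appuser
CMD ["node", "server.js"]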
Also, when you are doing this:
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 0
fsGroup: 0
You are overriding the PodSecurityPolicy; see here.
Update 1:
How to fix this:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: a-restrict-root
spec:
privileged: false
defaultAllowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
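With that PSP applied, and assuming no other, more permissive PSP is usable by the pod's service account, admission should reject runAsUser: 0. A quick sanity check of which subjects can use the policy (the default service account in the default namespace is just an example):
# Can the default service account "use" the restrictive policy?
kubectl auth can-i use podsecuritypolicy/a-restrict-root --as=system:serviceaccount:default:default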
Related
I am trying to set up the security policies in the cluster. I enabled pod security policy and created a restricted PSP.
Step 1 - Created PSP
Step 2 - Created Cluster Role
Step 3 - Created ClusterRoleBinding
PSP
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
serviceaccount.cluster.cattle.io/pod-security: restricted
serviceaccount.cluster.cattle.io/pod-security-version: "2315292"
creationTimestamp: "2022-02-28T20:48:12Z"
labels:
cattle.io/creator: norman
name: restricted-psp
spec:
allowPrivilegeEscalation: false
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
Cluster Role -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
serviceaccount.cluster.cattle.io/pod-security: restricted
labels:
cattle.io/creator: norman
name: restricted-clusterrole
rules:
- apiGroups:
- extensions
resourceNames:
- restricted-psp
resources:
- podsecuritypolicies
verbs:
- use
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: restricted-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: restricted-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts:security
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:authenticated
Create a couple of YAMLs, one for a deployment and the other for a pod:
kubectl create ns security
$ cat previleged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
app: privileged-deploy
name: privileged-pod
spec:
containers:
- image: alpine
name: alpine
stdin: true
tty: true
securityContext:
privileged: true
hostPID: true
hostNetwork: true
$ cat previleged-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: privileged-deploy
name: privileged-deploy
spec:
replicas: 1
selector:
matchLabels:
app: privileged-deploy
template:
metadata:
labels:
app: privileged-deploy
spec:
containers:
- image: alpine
name: alpine
stdin: true
tty: true
securityContext:
privileged: true
hostPID: true
hostNetwork: true
The expectation was that both the pod and the deployment would be prevented. But the pod got created and the deployment failed:
$ kg all -n security
NAME READY STATUS RESTARTS AGE
**pod/privileged-pod 1/1 Running 0 13m**
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/privileged-deploy 0/1 0 0 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/privileged-deploy-77d7c75dd8 1 0 0 13m
As expected, the error for the Deployment came as below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 3m10s (x18 over 14m) replicaset-controller Error creating: pods "privileged-deploy-77d7c75dd8-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
But the pod created directly through YAML worked. Is PSP only for pods getting created through a deployment/rs? Please help: how can we prevent users from creating pods which are privileged and dangerous?
But the pod created directly through YAML worked. Is PSP only for pods
getting created through a deployment/rs?
That's because when you create a bare pod (creating a pod directly) it will be created by the user called kubernetes-admin (in default scenarios), who is a member of the group system:masters, which is mapped to a cluster role called cluster-admin, which has access to all the PSPs that get created on the cluster. So the creation of bare pods will be successful.
Whereas pods that are created by a deployment, rs, sts, or ds (all the managed pods) will be created using the service account mentioned in their definition. The creation of these pods will be successful only if that service account has access to the PSP via a cluster role or role.
how can we prevent users from creating pods which are privileged and dangerous?
We need to identify the user and group that will be creating these pods (by checking ~/.kube/config or its certificate) and then make sure it does not have access to the PSP via any cluster role or role.
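For example, here is a hedged sketch (it assumes a client-certificate kubeconfig): read the user (CN) and groups (O) from the certificate, then check PSP access for a given subject:
# Decode the client certificate from the kubeconfig: CN = user name, O = group(s)
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject
# Check whether a given service account can "use" the PSP (namespace/name are examples)
kubectl auth can-i use podsecuritypolicy/restricted-psp --as=system:serviceaccount:security:default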
I am currently running a deployment that is pulling from a database and is set as read-only. Unfortunately the deployment freezes on occasion without notice, so I came up with the idea of having the code write to a log file and having the liveness probe check whether the file is up to date.
This has been tested and it works great. However, the issue is getting past the read-only part. As you well know, I cannot create or write to a file in read-only mode, so when my code attempts this, I get a permission denied error and the deployment ends. I thought I could fix this by including two file paths in the container and using:
allowedHostPaths:
- pathPrefix: "/code"
readOnly: true
- pathPrefix: "/code/logs"
readOnly: false
and tell the code to write to the "logs" directory, but this did not work. Is there a way to have all my code in a read-only location but also have a log file that can be read and written to?
Here is my Dockerfile:
FROM python:3.9.7-alpine3.14
RUN mkdir -p /code/logs
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY src/ .
CMD [ "python", "code.py"]
and here is my yaml file:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false # Required to prevent escalations to root.
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes: # Allow core volume types.
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
runAsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: true
allowedHostPaths:
- pathPrefix: "/code"
readOnly: true
- pathPrefix: "/code/logs"
readOnly: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:restricted
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
verbs:
- use
resourceNames:
- restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: manage-app
namespace: default
subjects:
- kind: User
name: my-app-sa
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: psp:restricted
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
selector:
matchLabels:
app: orbservice
template:
metadata:
labels:
app: orbservice
spec:
serviceAccountName: my-app-sa
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
containers:
- name: my-app
image: app:v0.0.1
resources:
limits:
memory: "128Mi"
cpu: "500m"
livenessProbe:
exec:
command:
- python
- check_logs.py
initialDelaySeconds: 60
periodSeconds: 30
failureThreshold: 1
An explanation of the solution or errors you see would be most appreciated. Thanks!
allowedHostPaths is related to hostPath volume mounts (aka bind-mounting a folder from the host system). It has nothing to do with paths inside the container image. What you probably want is readOnlyRootFilesystem: true like you have now, plus an emptyDir volume mounted at /code/logs set up to be writable by the container's user.
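A sketch of how that could look in the deployment's pod template (the volume name is arbitrary; the path is the /code/logs directory from the question):
spec:
  containers:
  - name: my-app
    image: app:v0.0.1
    securityContext:
      readOnlyRootFilesystem: true   # keep the image's filesystem read-only
    volumeMounts:
    - name: logs                     # writable scratch space just for the log file
      mountPath: /code/logs
  volumes:
  - name: logs
    emptyDir: {}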
I am trying to play around with PodSecurityPolicies in kubernetes so pods can't be created if they are using the root user.
This is my psp definition:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: eks.restrictive
spec:
hostNetwork: false
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
runAsUser:
rule: MustRunAsNonRoot
fsGroup:
rule: RunAsAny
volumes:
- '*'
and this is my statefulset definition
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
securityContext:
#only takes integers.
runAsUser: 1000
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi
When trying to create this statefulset I get
create Pod web-0 in StatefulSet web failed error: pods "web-0" is forbidden: unable to validate against any pod security policy:
It doesn't specify what policy I am violating, and since I am specifying that I want to run this as user 1000, I am not running it as root (hence my understanding is that this statefulset pod definition is not violating any rules defined in the PSP). There is no USER specified in the Dockerfile used for this image.
The other weird part is that this works fine for standard pods (kind: Pod instead of kind: StatefulSet). For example, this works just fine when the same PSP exists:
apiVersion: v1
kind: Pod
metadata:
name: my-nodejs
spec:
securityContext:
runAsUser: 1000
containers:
- name: my-node
image: node
ports:
- name: web
containerPort: 80
protocol: TCP
command:
- /bin/sh
- -c
- |
npm install -g http-server
npx http-server
What am I missing / doing wrong?
You seem to have forgotten to bind this PSP to a service account.
You need to apply the following:
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp-role
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- eks.restrictive
EOF
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: psp-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp-role
subjects:
- kind: ServiceAccount
name: default
namespace: default
EOF
If you don't want to use the default service account, you can create a separate service account and bind the role to it.
Read more about it in the k8s documentation - pod-security-policy.
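You can then confirm the binding took effect with something like this (the default service account in the default namespace matches the RoleBinding above):
kubectl auth can-i use podsecuritypolicy/eks.restrictive --as=system:serviceaccount:default:default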
I'm trying to deploy a restricted psp which should disable the use of the root user in a pod:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: unprivilegedpolicy
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: false
I've added this psp to a ClusterRole and bound it to the namespace hello-world:
Name: UnPrivilegedClusterRole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
podsecuritypolicies.policy [] [unprivilegedpolicy] [use]
[root@master01 ~]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io HelloWorldRoleBinding
Name: HelloWorldRoleBinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: UnPrivilegedClusterRole
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts hello-world
BUT if I try to run an nginx container using kubectl run --name=nginx hello-world, the container successfully runs as the root user. The deployment is deployed via a ServiceAccount.
The PodSecurityPolicy admission controller is enabled.
Does anybody have a solution for this?
First of all:
$ kubectl run --name=nginx hello-world
You did not specify the image name of the pod. The correct syntax should be:
$ kubectl run --image=nginx NAME_OF_DEPLOYMENT
As said, the above commands will try to create a deployment.
The issue you are encountering is most probably connected with:
An admission controller that is not working or not turned on.
On newly created Kubernetes cluster with pod security policy turned on you should not be able to spawn any pod regardless of your privileges.
Pod security policy control is implemented as an optional (but recommended) admission controller. PodSecurityPolicies are enforced by enabling the admission controller, but doing so without authorizing any policies will prevent any pods from being created in the cluster.
-- Kubernetes.io: enabling pod security policies
Admission controller as well as pod security policy and RBAC are strongly connected with solutions you are working with. You should refer to documentation specific to your case.
For example:
A newly created GKE cluster with pod security policy enabled and no PSP configured will not create pods. It will display the message: Unable to validate against any pod security policy: []
Warning: If you enable the PodSecurityPolicy controller without first defining and authorizing any actual policies, no users, controllers, or service accounts can create or update Pods. If you are working with an existing cluster, you should define and authorize policies before enabling the controller.
-- GKE: Pod security policies and how to enable/disable it
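On GKE the controller is toggled per cluster, for example (the cluster name is a placeholder):
# Enable the PodSecurityPolicy admission controller on an existing cluster
gcloud beta container clusters update CLUSTER_NAME --enable-pod-security-policy
# Disable it again if needed
gcloud beta container clusters update CLUSTER_NAME --no-enable-pod-security-policy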
A newly created Kubernetes cluster provisioned with kubespray (with the pod security policy variable set to true and running on Ubuntu) will have a restrictive PSP created, with a MustRunAsNonRoot rule inside that PSP.
There is another issue with the NGINX pod. The NGINX image will try to run as the root user inside the pod. An admission controller with a PSP configured with:
runAsUser:
rule: MustRunAsNonRoot
will deny it with the message: Error: container has runAsNonRoot and image will run as root, in accordance with the PSP.
To run an NGINX pod with this policy in place you would need to either:
Create a PSP (which allows running as the root user inside a pod) with:
runAsUser:
rule: RunAsAny
Create your own NGINX image configured in a way that allows running NGINX as a non-root user.
Example of such a pod below:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: non-root-nginx
name: non-root-nginx
spec:
securityContext:
runAsUser: 101
fsGroup: 101
containers:
- image: nginx
name: non-root-nginx
volumeMounts:
- mountPath: /var/cache/nginx
name: edir
- mountPath: /var/run
name: varun
- mountPath: /etc/nginx/conf.d/default.conf
name: default-conf
subPath: default.conf
dnsPolicy: ClusterFirst
restartPolicy: Never
volumes:
- name: edir
emptyDir: {}
- name: varun
emptyDir: {}
- name: default-conf
configMap:
name: nginx8080
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx8080
namespace: default
data:
default.conf: |+
server {
listen 8080;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I'm trying to apply a PodSecurityPolicy and test whether it allows me to create a privileged pod.
Below is the PodSecurityPolicy resource manifest.
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
name: podsecplcy
spec:
hostIPC: false
hostNetwork: false
hostPID: false
privileged: false
readOnlyRootFilesystem: true
hostPorts:
- min: 10000
max: 30000
runAsUser:
rule: RunAsAny
fsGroup:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
seLinux:
rule: RunAsAny
volumes:
- '*'
The current PSP is as below:
[root@master ~]# kubectl get psp
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
podsecplcy false RunAsAny RunAsAny RunAsAny RunAsAny true *
[root@master ~]#
After submitting the above manifest, I'm trying to create a privileged pod using the manifest below.
apiVersion: v1
kind: Pod
metadata:
name: pod-privileged
spec:
containers:
- name: main
image: alpine
command: ["/bin/sleep", "999999"]
securityContext:
privileged: true
The pod is created without any issues. I expected it to throw an error, since privileged pod creation is restricted through the PodSecurityPolicy.
Then I realized that an admission controller plugin may not be enabled. I checked which admission controller plugins are enabled by describing the kube-apiserver pod (some lines removed for readability) and could see that only NodeRestriction is enabled:
[root@master ~]# kubectl -n kube-system describe po kube-apiserver-master.k8s
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
**Attempt:**
Tried to edit /etc/systemd/system/multi-user.target.wants/kubelet.service and changed ExecStart=/usr/bin/kubelet --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
and restarted the kubelet service. But no luck.
Now how to enable other admission controller plugins?
1. Locate the static pod manifest-path
From systemd status, you will be able to locate the kubelet unit file
systemctl status kubelet.service
Do cat /etc/systemd/system/kubelet.service (replace path with the one you got from above command)
Go to the directory which is pointing to --pod-manifest-path=
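A hedged example of finding that path (the config file location shown is the common kubeadm default and may differ on your nodes):
# The static pod path is set either as a kubelet flag ...
ps aux | grep kubelet | grep -o -- '--pod-manifest-path=[^ ]*'
# ... or as staticPodPath in the kubelet config (commonly /etc/kubernetes/manifests)
grep staticPodPath /var/lib/kubelet/config.yaml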
2. Open the YAML which starts the kube-apiserver-master.k8s Pod
Example steps to locate the YAML are below:
cd /etc/kubernetes/manifests/
grep kube-apiserver-master.k8s *
3. Append PodSecurityPolicy to the --enable-admission-plugins= flag in the YAML file
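For example, the relevant argument in the kube-apiserver manifest would end up looking something like this (keep any plugins that are already listed there):
# kube-apiserver static pod manifest, under the container's command/args
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy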
4. Create a PSP and corresponding bindings for kube-system namespace
Create a PSP to grant access to pods in kube-system namespace including CNI
kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
name: privileged
spec:
allowedCapabilities:
- '*'
allowPrivilegeEscalation: true
fsGroup:
rule: 'RunAsAny'
hostIPC: true
hostNetwork: true
hostPID: true
hostPorts:
- min: 0
max: 65535
privileged: true
readOnlyRootFilesystem: false
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
volumes:
- '*'
EOF
Cluster role which grants access to the privileged pod security policy
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: privileged-psp
rules:
- apiGroups:
- policy
resourceNames:
- privileged
resources:
- podsecuritypolicies
verbs:
- use
EOF
Role binding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-system-psp
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: privileged-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts:kube-system
EOF
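After saving the manifest the kubelet restarts kube-apiserver automatically; a quick way to confirm the plugin is active and the privileged policy exists:
kubectl -n kube-system get pod kube-apiserver-master.k8s -o yaml | grep enable-admission-plugins
kubectl get psp privileged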