securityContext.privileged: Forbidden: disallowed by cluster policy - kubernetes

I can't start pod which requires privileged security context.
PodSecurityPolicy:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  readOnlyRootFilesystem: false
  allowedCapabilities:
  - '*'
  allowedProcMountTypes:
  - '*'
  allowedUnsafeSysctls:
  - '*'
  volumes:
  - '*'
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  hostNetwork: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged
rules:
- apiGroups:
  - '*'
  resourceNames:
  - pod-security-policy
  resources:
  - '*'
  verbs:
  - '*'
ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: privileged-role-binding
roleRef:
  kind: ClusterRole
  name: privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: default
  namespace: default
- kind: Group
  # apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
# Authorize specific users (not recommended):
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: admin
$ k auth can-i use psp/pod-security-policy
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
yes
$ k apply -f daemonset.yml
The DaemonSet "daemonset" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy
Not sure if it is needed, but I have also added PodSecurityPolicy to the kube-apiserver --enable-admission-plugins argument.
Any advice and insight is appreciated.

I just checked your Pod Security Policy configuration in my current environment:
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
I assume that you've included a privileged securityContext in the current DaemonSet manifest:
securityContext:
  privileged: true
To allow the Kubernetes API server to spawn privileged containers, you may have to set the kube-apiserver flag --allow-privileged to true:
--allow-privileged=true
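On a kubeadm cluster this flag normally lives in the API server's static pod manifest; a minimal sketch, assuming the default kubeadm path (the kubelet restarts the API server automatically after the file is saved):
# /etc/kubernetes/manifests/kube-apiserver.yaml  (kubeadm default location)
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    ...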
I was able to reproduce the same issue in my cluster as soon as I disallowed privileged containers by setting this flag to false.

Related

Is PSP only for pods created through deployment/replica set?

I am trying to set up the security policies in the cluster. I enabled pod security policies and created a restricted PSP.
Step 1 - Created PSP
Step 2 - Created Cluster Role
Step 3 - Created ClusterRoleBinding
PSP
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    serviceaccount.cluster.cattle.io/pod-security: restricted
    serviceaccount.cluster.cattle.io/pod-security-version: "2315292"
  creationTimestamp: "2022-02-28T20:48:12Z"
  labels:
    cattle.io/creator: norman
  name: restricted-psp
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
Cluster Role -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    serviceaccount.cluster.cattle.io/pod-security: restricted
  labels:
    cattle.io/creator: norman
  name: restricted-clusterrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - restricted-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restricted-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: restricted-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:security
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
I created a couple of YAMLs, one for a deployment and the other for a pod:
kubectl create ns security
$ cat previleged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: privileged-deploy
  name: privileged-pod
spec:
  containers:
  - image: alpine
    name: alpine
    stdin: true
    tty: true
    securityContext:
      privileged: true
  hostPID: true
  hostNetwork: true
$ cat previleged-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: privileged-deploy
  name: privileged-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: privileged-deploy
  template:
    metadata:
      labels:
        app: privileged-deploy
    spec:
      containers:
      - image: alpine
        name: alpine
        stdin: true
        tty: true
        securityContext:
          privileged: true
      hostPID: true
      hostNetwork: true
The expectation was that both the pod and the deployment would be prevented, but the pod got created and only the deployment failed.
$ kg all -n security
NAME                 READY   STATUS    RESTARTS   AGE
pod/privileged-pod   1/1     Running   0          13m

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/privileged-deploy   0/1     0            0           13m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/privileged-deploy-77d7c75dd8   1         0         0       13m
As expected, the error for the Deployment came as below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 3m10s (x18 over 14m) replicaset-controller Error creating: pods "privileged-deploy-77d7c75dd8-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
But the pod created directly through yaml worked. Is PSP only for pods getting created through deployment/rs? Please help: how can we prevent users from creating pods which are privileged and dangerous?
But the pod created directly through yaml worked. Is PSP only for pods
getting created through deployment/rs?
That's because when you create a bare pod (creating a pod directly) it will be created by the user called kubernetes-admin (in default scenarios), who is a member of the group system:masters, which is mapped to a cluster role called cluster-admin, which has access to all the PSPs that get created on the cluster. So the creation of bare pods will be successful.
Whereas pods that are created by a Deployment, ReplicaSet, StatefulSet, or DaemonSet (all managed pods) will be created using the service account mentioned in their definition. The creation of these pods will be successful only if that service account has access to a PSP via a cluster role or role.
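One way to verify this is to ask the API server on behalf of the service account; a sketch using the namespace and PSP names from the question above, which prints yes or no depending on the bindings in place:
$ kubectl auth can-i use podsecuritypolicy/restricted-psp --as=system:serviceaccount:security:default -n security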
How can we prevent users from creating pods which are privileged and dangerous?
We need to identify the user and group that will be creating these pods (by checking ~/.kube/config or its client certificate) and then make sure they do not have access to the PSP via any cluster role or role.
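A hedged sketch of how to check this when the kubeconfig uses client-certificate authentication: decode the certificate and read its subject (CN is the user, the O entries are the groups). On a default kubeadm admin kubeconfig this typically shows kubernetes-admin in system:masters:
$ kubectl config view --minify --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
    | base64 -d | openssl x509 -noout -subject
subject=O = system:masters, CN = kubernetes-admin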

How can I check if 'use' of podsecuritypolicy is authorized in the namespace using 'kubectl auth can-i ... psp'?

The following error is returned:
error: you must specify two or three arguments: verb, resource, and optional resourceName
when I executed:
kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp 00-mytest1
I already have the following manifests for the podsecuritypolicy (psp.yaml), role (role.yaml) and rolebinding (rb.yaml), deployed in the namespace mytest1.
psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: 00-mytest1
  labels: {}
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: 'MustRunAsNonRoot'
  runAsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1000
      max: 1000
    - min: 1
      max: 65535
  supplementalGroups:
    rule: 'MayRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MayRunAs'
    ranges:
    - min: 1
      max: 65535
  seLinux:
    rule: 'RunAsAny'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  hostPorts: []
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - projected
  - secret
role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mytest1
  namespace: "mytest1"
  labels: {}
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['00-mytest1']
and rb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mytest1
  namespace: "mytest1"
  labels: {}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mytest1
subjects:
- kind: ServiceAccount
  name: default
  namespace: "mytest1"
I expect a yes or no to be returned for the kubectl auth can-i check, not the error mentioned above. Is this use of the auth check correct? I appreciate the correction.
You are missing the flag --subresource. If I execute
kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp --subresource=00-mytest1
I get a clear answer. In my situation:
no
You can also get a warning like this:
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
But it is related directly to your config.
For more information about the kubectl auth can-i command, check
kubectl auth can-i --help
in your terminal.
You can also read this doc.
This one doesn't work:
kubectl auth can-i create deployments --namespace dev
However this one works when "resource" name is provided as shown below:
kubectl auth can-i get pods/dark-blue-app -n blue
Make sure to include the resource name (the name of the actual resource).
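For the PSP case from the question, the same resource/name form should work as well; a sketch using the names from above:
kubectl auth --as=system:serviceaccount:mytest1:default -n mytest1 can-i use podsecuritypolicy/00-mytest1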

Making a PodSecurityPolicy but Official manual not working

I am trying to create a PodSecurityPolicy on my Kubernetes cluster, following the official manual from here.
It isn't working: I followed all the steps on my Kubernetes cluster, but I never get a Forbidden message.
My Kubernetes-cluster:
nks@comp:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Steps in my case (I marked with "(?!)" the places where I should have gotten the Forbidden message but didn't):
nks@comp:~$ cat psp.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nksrole
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - example
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nkscrb
roleRef:
  kind: ClusterRole
  name: nksrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
---
nks@comp:~$ kubectl apply -f psp.yml
clusterrole.rbac.authorization.k8s.io/nksrole created
clusterrolebinding.rbac.authorization.k8s.io/nkscrb created
nks@comp:~$ kubectl create namespace psp-example
namespace/psp-example created
nks@comp:~$ kubectl create serviceaccount -n psp-example fake-user
serviceaccount/fake-user created
nks@comp:~$ kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-editor created
nks@comp:~$ alias kubectl-admin='kubectl -n psp-example'
nks@comp:~$ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example'
nks@comp:~$ cat example-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
nks@comp:~$ kubectl-admin create -f example-psp.yaml
podsecuritypolicy.policy/example created
nks@comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: pause
> spec:
>   containers:
>   - name: pause
>     image: k8s.gcr.io/pause
> EOF
pod/pause created
nks@comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
(?!)
nks@comp:~$ kubectl-admin create role psp:unprivileged \
> --verb=use \
> --resource=podsecuritypolicy \
> --resource-name=example
role.rbac.authorization.k8s.io/psp:unprivileged created
nks@comp:~$ kubectl-admin create rolebinding fake-user:psp:unprivileged \
> --role=psp:unprivileged \
> --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-user:psp:unprivileged created
nks@comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
nks@comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: privileged
> spec:
>   containers:
>   - name: pause
>     image: k8s.gcr.io/pause
>     securityContext:
>       privileged: true
> EOF
pod/privileged created
(?!)
Can you help me, please? I have no idea what is wrong.
Your cluster version is v1.17.4 and the feature is beta in v1.18; try again after upgrading your cluster.
Also make sure the admission controller is enabled for Pod Security Policies.
You need to enable PSP support in the admission controller:
[master]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.100.50:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.100.50
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy  ## Added PodSecurityPolicy
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    ...
[master]# systemctl restart kubelet
With the PodSecurityPolicy admission plugin enabled, the same privileged pod is now rejected:
[master]# kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
EOF
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
This is on a Kubernetes v1.18 cluster; I can't try it on Kubernetes v1.17 right now.
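A quick, hedged way to confirm the PodSecurityPolicy admission plugin is active on the running API server (assuming a kubeadm-style static pod labelled as shown above):
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -- --enable-admission-plugins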

how to enable admission controller plugin on k8s where API server is deployed as a systemd service?

I'm trying to apply a PodSecurityPolicy and test whether it still lets me create a privileged pod.
Below is the PodSecurityPolicy resource manifest.
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
  name: podsecplcy
spec:
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  hostPorts:
  - min: 10000
    max: 30000
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  volumes:
  - '*'
The current PSP is as below:
[root@master ~]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
podsecplcy   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   true             *
[root@master ~]#
After submitting the above manifest, I'm trying to create a privileged pod using the manifest below.
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
The pod is created without any issues. I expected it to throw an error, since privileged pod creation should be restricted by the PodSecurityPolicy.
Then I realized that the admission controller plugin may not be enabled. I checked which admission controller plugins are enabled by describing the kube-apiserver pod (some lines removed for readability) and could see that only NodeRestriction is enabled.
[root@master ~]# kubectl -n kube-system describe po kube-apiserver-master.k8s
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
Attempt:
I tried to edit /etc/systemd/system/multi-user.target.wants/kubelet.service and changed ExecStart to /usr/bin/kubelet --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,
then restarted the kubelet service, but no luck.
Now, how do I enable the other admission controller plugins?
1. Locate the static pod manifest-path
From systemd status, you will be able to locate the kubelet unit file
systemctl status kubelet.service
Do cat /etc/systemd/system/kubelet.service (replace path with the one you got from above command)
Go to the directory that --pod-manifest-path= points to.
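If the flag is not present in the unit file, newer kubelets typically take the path from staticPodPath in the kubelet config file instead; a hedged way to check both (paths assumed to be the kubeadm defaults):
grep -- --pod-manifest-path /etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service.d/*.conf 2>/dev/null
grep staticPodPath /var/lib/kubelet/config.yaml 2>/dev/null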
2. Open the yaml which starts kube-apiserver-master.k8s Pod
Example steps to locate YAML is below
cd /etc/kubernetes/manifests/
grep kube-apiserver-master.k8s *
3. Append PodSecurityPolicy to the --enable-admission-plugins= flag in the YAML file
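For example, after the edit the relevant line might look like this (the exact plugin list in your manifest may differ):
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy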
4. Create a PSP and corresponding bindings for kube-system namespace
Create a PSP to grant access to pods in kube-system namespace including CNI
kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  name: privileged
spec:
  allowedCapabilities:
  - '*'
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'RunAsAny'
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  privileged: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:
  - '*'
EOF
Cluster role which grants access to the privileged pod security policy
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
EOF
Role binding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
EOF
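After the plugin is enabled, it is worth confirming that system pods can still be created under these bindings; a hedged way to spot rejections is to watch for FailedCreate events:
kubectl get events --all-namespaces --field-selector reason=FailedCreate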

Kubernetes: My PodSecurityPolicy is not working or misconfigured

I'm trying to restrict all pods except a few from running in privileged mode.
So I created two Pod Security Policies:
one allowing privileged containers and one restricting them.
[root@master01 vagrant]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
privileged   true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restricted   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
I created a ClusterRole that can use the pod security policy "restricted" and bound that role to all the service accounts in the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
Now I deploy a pod in "privileged" mode, but it still gets deployed. The created pod's annotation indicates that the PSP "privileged" was used to validate it at creation time. Why is that? The restricted PSP should have been validated, right?
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
[root@master01 vagrant]# kubectl create -f pod.yaml
pod/psp-test-pod created
[root@master01 vagrant]# kubectl get pod psp-test-pod -o yaml | grep kubernetes.io/psp
kubernetes.io/psp: privileged
Kubernetes version: v1.14.5
Am I missing something here? Any help is appreciated.
Posting the answer to my own question; hope it will help someone.
All my PodSecurityPolicy configurations are correct. The issue was that I tried to deploy the pod on its own, not via a controller such as a Deployment, ReplicaSet or DaemonSet.
Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet or other templated controller via the controller manager.
When a pod is deployed on its own, it is created by kubectl, not by the controller manager.
In Kubernetes there is a superuser role named "cluster-admin". In my case, kubectl runs with the superuser role "cluster-admin", and this role has access to all the pod security policies, because associating a pod security policy with a role only requires the 'use' verb on the 'podsecuritypolicies' resource in the appropriate apiGroup.
In the cluster-admin role, the '*' wildcard under 'resources' includes 'podsecuritypolicies' and the '*' wildcard under 'verbs' includes 'use', so the cluster-admin role can use every policy as well.
[root@master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
I deployed the above pod.yaml using the command kubectl create -f pod.yaml.
Since I had created two pod security policies, one for restriction and one for privileges, the cluster-admin role has access to both policies. So the above pod launches fine with kubectl, because the cluster-admin role has access to the privileged policy (privileged: false also works, because the admin role has access to the restricted policy as well). This situation happens only if either the pod is created directly by kubectl rather than by the kube controller managers, or the pod has access to the "cluster-admin" role via its service account.
In the case of a pod created by a Deployment/ReplicaSet etc., kubectl first passes control to the controller manager, and the controller then tries to deploy the pod after validating the permissions (service account, pod security policies).
In the Deployment file below, the pod is trying to run in privileged mode. In my case, this deployment will fail because I have already set the "restricted" policy as the default policy for all the service accounts in the cluster, so no pod will be able to run in privileged mode. If a pod needs to run in privileged mode, allow the service account of that pod to use the "privileged" policy (see the sketch after the manifest below).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
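If a particular workload genuinely needs privileged mode, a binding along these lines can grant its service account access to the "privileged" policy (a sketch: the ClusterRole name is illustrative and is assumed to grant 'use' on the privileged PSP):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: privileged-psp-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged   # assumed ClusterRole with 'use' on the 'privileged' PSP
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system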