I am trying to set up a PodSecurityPolicy on my Kubernetes cluster, following the official manual from here.
It doesn't work: I completed all the steps on my Kubernetes cluster, but I never get a Forbidden message.
My Kubernetes-cluster:
nks#comp:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Steps in my case (I marked with "(?!)" the places where I should have gotten the Forbidden message but didn't):
nks#comp:~$ cat psp.yml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nksrole
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - example
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nkscrb
roleRef:
  kind: ClusterRole
  name: nksrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
---
nks#comp:~$ kubectl apply -f psp.yml
clusterrole.rbac.authorization.k8s.io/nksrole created
clusterrolebinding.rbac.authorization.k8s.io/nkscrb created
nks#comp:~$ kubectl create namespace psp-example
namespace/psp-example created
nks#comp:~$ kubectl create serviceaccount -n psp-example fake-user
serviceaccount/fake-user created
nks#comp:~$ kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-editor created
nks#comp:~$ alias kubectl-admin='kubectl -n psp-example'
nks#comp:~$ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example'
nks#comp:~$ cat example-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
nks#comp:~$ kubectl-admin create -f example-psp.yaml
podsecuritypolicy.policy/example created
nks#comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: pause
> spec:
>   containers:
>   - name: pause
>     image: k8s.gcr.io/pause
> EOF
pod/pause created
nks#comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
(?!)
nks#comp:~$ kubectl-admin create role psp:unprivileged \
> --verb=use \
> --resource=podsecuritypolicy \
> --resource-name=example
role.rbac.authorization.k8s.io/psp:unprivileged created
nks#comp:~$ kubectl-admin create rolebinding fake-user:psp:unprivileged \
> --role=psp:unprivileged \
> --serviceaccount=psp-example:fake-user
rolebinding.rbac.authorization.k8s.io/fake-user:psp:unprivileged created
nks#comp:~$ kubectl-user auth can-i use podsecuritypolicy/example
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
nks#comp:~$ kubectl-user create -f- <<EOF
> apiVersion: v1
> kind: Pod
> metadata:
>   name: privileged
> spec:
>   containers:
>   - name: pause
>     image: k8s.gcr.io/pause
>     securityContext:
>       privileged: true
> EOF
pod/privileged created
(?!)
Can you help me, please? I have no idea what is wrong.
Your cluster version is v1.17.4 and the feature is beta in v1.18, so try again after upgrading your cluster.
Also make sure the admission controller is enabled for Pod Security Policies; you need to enable PSP support in the admission controller:
[master]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.100.50:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.100.50
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy  ## Added PodSecurityPolicy
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
...
[master]# systemctl restart kubelet
After that, the PSP is actually enforced:
[master]# kubectl-user create -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
EOF
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
This is what I get on a Kubernetes v1.18 cluster; I can't try it on Kubernetes v1.17 right now.
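If you edit the static pod manifest like this, the kubelet recreates the kube-apiserver on its own. A quick way to confirm the flag took effect (assuming a kubeadm-style setup where the API server runs as a static pod labelled component=kube-apiserver, as in the manifest above):
[master]# kubectl -n kube-system describe pod -l component=kube-apiserver | grep enable-admission-plugins
It should print the --enable-admission-plugins line including PodSecurityPolicy.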
Related
Friends, I'm new to Kubernetes and recently installed it manually by following a tutorial. When I execute the command kubectl exec -it -n kube-system coredns-867b8c5ddf-8xfz6 -- sh, I get the error "x509: certificate signed by unknown authority". The kubectl logs command reports the same error, but kubectl get nodes and kubectl get pods return information normally. This is the step where I configured RBAC authorization to allow the kube-apiserver to access the kubelet API on each worker node:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
EOF
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
This is admin.kubeconfig content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t******tLQo=
    server: https://127.0.0.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0t******Cg==
    client-key-data: LS0tL******LQo=
The content in ~/.kube/config is the same as the content in admin.kubeconfig. I checked and confirmed that my certificate has not expired. It seems the dashboard's token authentication is also affected by this problem and cannot pass. My system is CentOS 7.7 and the Kubernetes component version is 1.22.4. I hope to get help.
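For reference (this is a diagnostic suggestion, not part of the original steps): kubectl exec and kubectl logs are proxied by the API server to the kubelet's HTTPS endpoint on port 10250, so one way to see which CA actually signed the certificate a kubelet presents is to inspect it directly, replacing the <node-ip> placeholder with a worker node address:
openssl s_client -connect <node-ip>:10250 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates
If the issuer does not match the cluster CA referenced in the kubeconfig, that would be consistent with the "certificate signed by unknown authority" error.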
The following error is returned:
error: you must specify two or three arguments: verb, resource, and optional resourceName
when I executed:
kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp 00-mytest1
I already have the following manifests for the PodSecurityPolicy (psp.yaml), Role (role.yaml), and RoleBinding (rb.yaml), deployed in the namespace mytest1.
psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: 00-mytest1
  labels: {}
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: 'MustRunAsNonRoot'
  runAsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1000
      max: 1000
    - min: 1
      max: 65535
  supplementalGroups:
    rule: 'MayRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MayRunAs'
    ranges:
    - min: 1
      max: 65535
  seLinux:
    rule: 'RunAsAny'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  hostPorts: []
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - projected
  - secret
role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mytest1
  namespace: "mytest1"
  labels: {}
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['00-mytest1']
and rb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mytest1
  namespace: "mytest1"
  labels: {}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mytest1
subjects:
- kind: ServiceAccount
  name: default
  namespace: "mytest1"
I expect a yes or no back from the kubectl auth can-i ... check, not the error above. Is my use of the auth check correct? I'd appreciate a correction.
You are missing the flag --subresource. If I execute
kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp --subresource=00-mytest1
I get a clear answer. In my situation:
no
You can also get a warning like this:
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
But it is related directly to your config.
For more information about the kubectl auth can-i command, check
kubectl auth can-i --help
in your terminal.
You can also read this doc.
This one doesn't work:
kubectl auth can-i create deployments --namespace dev
However, this one works when the resource name is provided, as shown below:
kubectl auth can-i get pods/dark-blue-app -n blue
Make sure to include the resource name (the name of the physical resource).
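For the PodSecurityPolicy case from the question above, the TYPE/NAME form should also give a plain yes/no answer (names taken from that question):
kubectl auth can-i use podsecuritypolicy/00-mytest1 \
  --as=system:serviceaccount:mytest1:default -n mytest1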
I'm trying to apply a PodSecurityPolicy and test whether it still allows me to create a privileged pod.
Below is the PodSecurityPolicy resource manifest.
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
  name: podsecplcy
spec:
  hostIPC: false
  hostNetwork: false
  hostPID: false
  privileged: false
  readOnlyRootFilesystem: true
  hostPorts:
  - min: 10000
    max: 30000
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  volumes:
  - '*'
The current PSPs are shown below:
[root#master ~]# kubectl get psp
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
podsecplcy false RunAsAny RunAsAny RunAsAny RunAsAny true *
[root#master ~]#
After submitting the above manifest, I tried to create a privileged pod using the manifest below.
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true
The pod is created without any issues. I expected it to throw an error, since privileged pod creation should be restricted by the PodSecurityPolicy.
Then I realized that the admission controller plugin may not be enabled. I checked which admission controller plugins are enabled by describing the kube-apiserver pod (some lines removed for readability) and saw that only NodeRestriction is enabled:
[root#master ~]# kubectl -n kube-system describe po kube-apiserver-master.k8s
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
Attempt:
I tried to edit /etc/systemd/system/multi-user.target.wants/kubelet.service and changed ExecStart=/usr/bin/kubelet --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,
then restarted the kubelet service. But no luck.
Now how to enable other admission controller plugins?
1. Locate the static pod manifest-path
From systemd status, you will be able to locate the kubelet unit file
systemctl status kubelet.service
Do cat /etc/systemd/system/kubelet.service (replace path with the one you got from above command)
Go to the directory which is pointing to --pod-manifest-path=
2. Open the yaml which starts kube-apiserver-master.k8s Pod
Example steps to locate YAML is below
cd /etc/kubernetes/manifests/
grep kube-apiserver-master.k8s *
3. Append PodSecurityPolicy to the --enable-admission-plugins= flag in the YAML file, for example:
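- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
(The exact list depends on what is already enabled in your cluster; keep the existing plugins and just append PodSecurityPolicy.)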
4. Create a PSP and corresponding bindings for kube-system namespace
Create a PSP to grant access to pods in kube-system namespace including CNI
kubectl apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  name: privileged
spec:
  allowedCapabilities:
  - '*'
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'RunAsAny'
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  privileged: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:
  - '*'
EOF
Cluster role which grants access to the privileged pod security policy
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
EOF
Role binding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
EOF
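The kubelet restarts the API server automatically once the manifest changes. As a rough sanity check (adapt the commands to your cluster), make sure the kube-system pods are still being admitted under the new policy and watch for rejections:
kubectl -n kube-system get pods
kubectl -n kube-system get events --field-selector reason=FailedCreate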
I'm trying to restrict all pods except a few from running in privileged mode.
So I created two Pod Security Policies:
one that allows privileged containers and one that restricts them.
[root#master01 vagrant]# kubectl get psp
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
privileged true RunAsAny RunAsAny RunAsAny RunAsAny false *
restricted false RunAsAny RunAsAny RunAsAny RunAsAny false *
I created a ClusterRole that can use the "restricted" pod security policy and bound that role to all the service accounts in the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
Now I deploy a pod in privileged mode, but it still gets deployed. The created pod's annotation indicates that the "privileged" PSP was used to validate it at pod creation time. Why is that? The "restricted" PSP should have been validated, right?
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
[root#master01 vagrant]# kubectl create -f pod.yaml
pod/psp-test-pod created
[root#master01 vagrant]# kubectl get pod psp-test-pod -o yaml |grep kubernetes.io/psp
kubernetes.io/psp: privileged
Kubernetes version: v1.14.5
Am I missing something here? Any help is appreciated.
Posting the answer to my own question; hope it helps someone.
All my PodSecurityPolicy configurations were correct. The issue was that I deployed the pod on its own, not through a controller such as a Deployment/ReplicaSet/DaemonSet.
Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet, or other templated controller via the controller manager.
When a pod is deployed on its own, it is created by kubectl, not by the controller manager.
Kubernetes has a superuser role named "cluster-admin", and in my case kubectl runs with that role. The "cluster-admin" role has access to all pod security policies, because associating a pod security policy with a role only requires the 'use' verb on the 'podsecuritypolicies' resource in the 'policy' apiGroup.
In the cluster-admin role, resources '*' includes 'podsecuritypolicies' and verbs '*' includes 'use', so the cluster-admin role can use every policy:
[root#master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
I deployed the above pod.yaml with the command kubectl create -f pod.yaml.
Since I had created two pod security policies, one restricted and one privileged, the cluster-admin role has access to both policies. So the pod above launches fine with kubectl, because the cluster-admin role can use the privileged policy (privileged: false also works, because the admin role can use the restricted policy as well). This happens only if a pod is created directly by kubectl rather than by the controller managers, or if the pod's service account has access to the "cluster-admin" role.
When a pod is created by a Deployment/ReplicaSet etc., kubectl first hands control to the controller manager, and the controller then tries to create the pod after validating the permissions (service account, pod security policies).
In the Deployment below, the pod tries to run in privileged mode. In my case this deployment will fail, because I have already set the "restricted" policy as the default policy for all service accounts in the cluster, so no pod can run in privileged mode. If a pod needs to run privileged, allow that pod's service account to use the "privileged" policy. (A way to see the rejection is shown after the manifest.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
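With the "restricted" policy as the default, this Deployment's ReplicaSet cannot create the pod, and the rejection shows up in the ReplicaSet's events rather than as a kubectl error. A way to see it (names taken from the manifest above):
kubectl -n kube-system describe replicaset -l app=pause
The Events section should contain a FailedCreate entry saying the pod is forbidden because privileged containers are not allowed.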
I can't start a pod that requires a privileged security context.
PodSecurityPolicy:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  readOnlyRootFilesystem: false
  allowedCapabilities:
  - '*'
  allowedProcMountTypes:
  - '*'
  allowedUnsafeSysctls:
  - '*'
  volumes:
  - '*'
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  hostNetwork: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged
rules:
- apiGroups:
  - '*'
  resourceNames:
  - pod-security-policy
  resources:
  - '*'
  verbs:
  - '*'
ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: privileged-role-binding
roleRef:
  kind: ClusterRole
  name: privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: default
  namespace: default
- kind: Group
  # apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
# Authorize specific users (not recommended):
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: admin
$ k auth can-i use psp/pod-security-policy
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
yes
$ k apply -f daemonset.yml
The DaemonSet "daemonset" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy
Not sure if it is needed, but I have also added PodSecurityContext to the kube-apiserver's --enable-admission-plugins args.
Any advice and insight is appreciated.
Just checked your Pod Security Policy configuration on my current environment:
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1"
I assume that you've included a privileged securityContext in the current DaemonSet manifest file:
securityContext:
  privileged: true
In order to allow the Kubernetes API server to spawn privileged containers, you might have to set the kube-apiserver flag --allow-privileged to true:
--allow-privileged=true
I faced the same issue in my k8s cluster once I disallowed privileged containers by setting this flag to false.
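For a kubeadm cluster this flag goes into the static pod manifest of the API server (the same file shown earlier in this thread); this is only a sketch of the relevant line, the rest of the manifest stays as it is:
# /etc/kubernetes/manifests/kube-apiserver.yaml
- --allow-privileged=true
The kubelet picks up the change and restarts the kube-apiserver automatically.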