I'm trying to restrict all pods except a few from running in privileged mode.
So I created two Pod Security Policies:
one allowing privileged containers and one restricting them.
[root@master01 vagrant]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
privileged   true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restricted   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
Created a ClusterRole that can use the "restricted" pod security policy and bound that role to all the service accounts in the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
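To verify the binding, one option (a sketch; the default namespace and service account are just examples) is to impersonate a service account and ask the API server:

kubectl auth can-i use podsecuritypolicy/restricted --as=system:serviceaccount:default:default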
Now I'm deploying a pod in "privileged" mode, but it still gets deployed. The created pod's annotation indicates that the PSP "privileged" was validated during pod creation. Why is that? The restricted PSP should have been validated, right?
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
[root@master01 vagrant]# kubectl create -f pod.yaml
pod/psp-test-pod created
[root@master01 vagrant]# kubectl get pod psp-test-pod -o yaml | grep kubernetes.io/psp
kubernetes.io/psp: privileged
Kubernetes version: v1.14.5
Am I missing something here? Any help is appreciated.
Posting the answer to my own question. Hope it will help someone.
All my PodSecurityPolicy configurations are correct. The issue was that I tried to deploy the pod on its own, not via a controller such as a Deployment/ReplicaSet/DaemonSet running in the controller manager.
Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet or other templated controller via the controller manager.
When a pod is deployed on its own, it is created by kubectl (i.e., by the requesting user), not by the controller manager.
In Kubernetes there is a superuser role named "cluster-admin". In my case kubectl was running with the superuser role "cluster-admin", and this role has access to all the pod security policies, because granting a role access to a pod security policy only requires the 'use' verb on the 'podsecuritypolicies' resource in the 'policy' API group.
In the cluster-admin role, the '*' under 'resources' includes 'podsecuritypolicies' and the '*' under 'verbs' includes 'use', so every policy is available to the cluster-admin role as well.
[root@master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
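As a quick check (a sketch, assuming kubectl is currently running as a cluster-admin user), you can compare the admin user with a plain service account:

# the admin user can use every PSP, including the privileged one
kubectl auth can-i use podsecuritypolicy/privileged
# a plain service account should only be allowed the restricted one
kubectl auth can-i use podsecuritypolicy/privileged --as=system:serviceaccount:default:default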
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
I deployed the above pod.yaml with the command kubectl create -f pod.yaml.
Since I had created two pod security policies, one restricted and one privileged, the cluster-admin role has access to both. So the above pod launches fine with kubectl, because the cluster-admin role can use the privileged policy (privileged: false also works, because the role can use the restricted policy as well). This situation only happens when a pod is created directly by kubectl rather than by the controllers, or when a pod's service account has access to the "cluster-admin" role.
In the case of a pod created by a Deployment/ReplicaSet etc., kubectl first passes control to the controller manager, and the controller then tries to create the pod after validating the permissions (service account, pod security policies).
In the Deployment below, the pod tries to run in privileged mode. In my case this deployment will fail, because I already set the "restricted" policy as the default for all service accounts in the cluster, so no pod can run in privileged mode. If a pod needs to run in privileged mode, allow that pod's service account to use the "privileged" policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
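If a particular workload genuinely does need privileged mode, one option (a sketch; the namespace and the privileged-sa service account name are examples) is to grant only that workload's service account use of the "privileged" policy:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged
  namespace: kube-system   # limit the grant to one namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged
subjects:
- kind: ServiceAccount
  name: privileged-sa      # example service account
  namespace: kube-system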
Related
I am trying to set up security policies in the cluster. I enabled pod security policies and created a restricted PSP.
Step 1 - Created PSP
Step 2 - Created Cluster Role
Step 3 - Created ClusterRoleBinding
PSP
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    serviceaccount.cluster.cattle.io/pod-security: restricted
    serviceaccount.cluster.cattle.io/pod-security-version: "2315292"
  creationTimestamp: "2022-02-28T20:48:12Z"
  labels:
    cattle.io/creator: norman
  name: restricted-psp
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
Cluster Role -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    serviceaccount.cluster.cattle.io/pod-security: restricted
  labels:
    cattle.io/creator: norman
  name: restricted-clusterrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - restricted-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restricted-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: restricted-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:security
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
Created a couple of YAMLs, one for a deployment and the other for a pod:
kubectl create ns security
$ cat previleged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: privileged-deploy
  name: privileged-pod
spec:
  containers:
  - image: alpine
    name: alpine
    stdin: true
    tty: true
    securityContext:
      privileged: true
  hostPID: true
  hostNetwork: true
$ cat previleged-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: privileged-deploy
  name: privileged-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: privileged-deploy
  template:
    metadata:
      labels:
        app: privileged-deploy
    spec:
      containers:
      - image: alpine
        name: alpine
        stdin: true
        tty: true
        securityContext:
          privileged: true
      hostPID: true
      hostNetwork: true
The expectation was that both the pod and the deployment would be prevented, but the pod got created and only the deployment failed.
$ kg all -n security
NAME                 READY   STATUS    RESTARTS   AGE
pod/privileged-pod   1/1     Running   0          13m

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/privileged-deploy   0/1     0            0           13m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/privileged-deploy-77d7c75dd8   1         0         0       13m
As expected, the error for the deployment came as below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 3m10s (x18 over 14m) replicaset-controller Error creating: pods "privileged-deploy-77d7c75dd8-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
But the pod created directly through YAML worked. Is PSP only for pods getting created through a Deployment/ReplicaSet? Please help: how can we prevent users from creating pods which are privileged and dangerous?
But the pod created directly through YAML worked. Is PSP only for pods
getting created through deployment/rs?
That's because when you create a bare pod (creating a pod directly), it is created by the user kubernetes-admin (in the default scenario), who is a member of the group system:masters, which is mapped to the cluster role cluster-admin, which has access to all the PSPs created on the cluster. So the creation of bare pods succeeds.
Whereas pods created by a Deployment, ReplicaSet, StatefulSet or DaemonSet (all the managed pods) are created using the service account mentioned in their definition. The creation of these pods succeeds only if that service account has access to a PSP via a cluster role or role.
how can we prevent users from creating pods which are privileged and dangerous
We need to identify the user and group that will be creating these pods (by checking ~/.kube/config or its client certificate) and then make sure they do not have access to the PSPs via any cluster role or role.
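For example (a sketch; restricted-psp is the policy from above), check which user the current context uses and whether that user can use the policy:

# which user does the current kubectl context use?
kubectl config view --minify -o jsonpath='{.contexts[0].context.user}'
# can that user 'use' the restricted PSP?
kubectl auth can-i use podsecuritypolicy/restricted-psp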
I am trying to attach an IAM role to a pod's service account from within the POD in EKS.
kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN
The current role attached to $serviceaccount is outlined below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-role
rules:
- apiGroups: [""]
  resources:
  - event
  - secrets
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - create
However, when I execute the kubectl command I get the following:
error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user "system:servi...." cannot get resource "serviceaccounts" in API group "" ...
Is my role correct? Why can't I modify the service account?
Kubernetes by default runs pods with the service account default, which doesn't have the right permissions. Since I cannot determine which service account you are using for your pod, I can only assume that you are using either default or another one created by you. In both cases the error suggests that the service account you are using to run your pod does not have the proper rights.
If you run this pod with the default service account, you will have to add the appropriate rights to it. An alternative is to run your pod with another service account created for this purpose. Here's an example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: run-kubectl-from-pod
Then you will have to create an appropriate role (you can find the full list of verbs here):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-service-accounts
rules:
- apiGroups: [""]
  resources:
  - serviceaccounts
  verbs:
  - get
  - create
  - patch
  - list
I'm using more verbs here as a test; get and patch would be enough for this use case. I'm mentioning this since it's best practice to grant the minimum rights possible.
Then create your role binding accordingly:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-service-account-bind
subjects:
- kind: ServiceAccount
  name: run-kubectl-from-pod
roleRef:
  kind: Role
  name: modify-service-accounts
  apiGroup: rbac.authorization.k8s.io
And now you just have to reference that service account when you run your pod:
apiVersion: v1
kind: Pod
metadata:
  name: run-kubectl-in-pod
spec:
  serviceAccountName: run-kubectl-from-pod
  containers:
  - name: kubectl-in-pod
    image: bitnami/kubectl
    command:
    - sleep
    - "3600"
Once that is done, you just exec into the pod:
➜ kubectl-pod kubectl exec -ti run-kubectl-in-pod sh
And then annotate the service account:
$ kubectl get sa
NAME                   SECRETS   AGE
default                1         19m
eks-sa                 1         36s
run-kubectl-from-pod   1         17m
$ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN
serviceaccount/eks-sa annotated
$ kubectl describe sa eks-sa
Name: eks-sa
Namespace: default
Labels: <none>
Annotations: eks.amazonaws.com/role-arn:
Image pull secrets: <none>
Mountable secrets: eks-sa-token-sldnn
Tokens: <none>
Events: <none>
If you encounter any issues with a request being refused, please start by reviewing your request attributes and determining the appropriate request verb.
You can also check your access with kubectl auth can-i command:
kubectl-pod kubectl auth can-i patch serviceaccount
The API server will respond with a simple yes or no.
Please note that if you want to patch a service account to use an IAM role, you will have to delete and re-create any existing pods associated with the service account so that they pick up the credential environment variables. You can read more about it here.
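If the pods are managed by a Deployment, one way to recycle them (a sketch; the deployment name is illustrative) is:

kubectl rollout restart deployment/my-app -n $namespace

Bare pods would have to be deleted and recreated manually.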
While your role appears to be correct, please keep in mind that when executing kubectl, the RBAC permissions of the account in your kubeconfig determine whether you are allowed to perform an action.
From your question, I understand that your role is attached to the service account you are trying to annotate, which is irrelevant to the kubectl permission check.
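To make the distinction visible (a sketch; the variable names follow the question), you can compare what your kubeconfig user may do with what the service account may do via impersonation:

# what the kubeconfig user is allowed to do
kubectl auth can-i get serviceaccounts -n $namespace
# what the service account itself is allowed to do
kubectl auth can-i get serviceaccounts -n $namespace --as=system:serviceaccount:$namespace:$serviceaccount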
I am trying to access all the namespaces and pods from another pod. So I have created a ClusterRole, ClusterRoleBinding and ServiceAccount. But I am only able to access resources in the customer namespace; I need to access the resources of all namespaces. Is it possible?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: customer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
  namespace: customer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
  namespace: customer
subjects:
- kind: ServiceAccount
  name: spinupcontainers
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
Could anyone help to resolve this problem?
Thanks in advance
It seems in your YAML example you are using a RoleBinding as opposed to a ClusterRoleBinding. A RoleBinding only grants those permissions inside of a namespace. See also the Kubernetes Documentation on this topic:
A RoleBinding grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
The most important thing is that you connect your service account to your cluster role with the proper binding, because the binding type decides the scope of the service account's abilities. In this case, you have to describe a ClusterRoleBinding as shown below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: customer
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: "rbac.authorization.k8s.io"
If you want to test this from within a pod, specify the respective service account for the pod like below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  serviceAccountName: default
status: {}
And then finally you need to exec into the pod and run the proper curl command using the service account token. Do not forget that you can find the token file inside the pod (at /var/run/secrets/kubernetes.io/serviceaccount), mounted because of the service account defined in the pod YAML above. After that you have to call the Kubernetes API server service (if you used kubeadm to create the cluster, it is already defined in the default namespace under the name kubernetes). Below you can find the proper API call to get the default namespace's secrets:
curl -k -H "Authorization: Bearer $TOKEN" https://<kubernetes-api-fqdn>/api/v1/namespaces/default/secrets
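For the original goal of listing pods across all namespaces, a rough in-pod example (assuming the default in-cluster service name kubernetes.default.svc) could look like this:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/pods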
Hi, I saw this documentation where kubectl can run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace?
Did not see any documentation or examples for the same..
When I tried adding serviceAccounts to the container i got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I tried sshing into the container and running kubectl.
Editing the question:
As I mentioned earlier, based on the documentation I had added the service accounts. Below is the YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
Is it possible to run kubectl inside a Job resource in a specified namespace? Did not see any documentation or examples for the same..
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. It means the permission aspect is the same as in a normal pod, meaning that yes, it is possible to run kubectl inside a job resource.
TL;DR:
Your YAML file is correct; maybe there was something else in your cluster. I recommend deleting and recreating these resources and trying again.
Also check the version of your Kubernetes installation and the job image's kubectl version; if they are more than one minor version apart, you may hit unexpected incompatibilities.
Security Considerations:
Your job role's scope follows the best practice according to the documentation (a specific role, for a specific user, on a specific namespace).
If you use a ClusterRoleBinding with the cluster-admin role it will work, but it's over-permissioned and not recommended, since it gives full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
In order to avoid incompatibility, use a kubectl image whose version matches your server.
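A quick way to check the server version before choosing the image tag (kubectl of this era supports the --short flag):

# prints client and server versions
kubectl version --short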
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just to add output to the log of get pods.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I apply the job, ServiceAccount, Role and RoleBinding
$ kubectl get pods -n my-namespace
NAME                    READY   STATUS      RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running     1          88s
testing-stuff-ddpvf     0/1     Completed   0          13s
ubuntu                  0/1     Completed   3          63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME                    READY   STATUS    RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running   1          76s
testing-stuff-ddpvf     1/1     Running   0          1s
ubuntu                  1/1     Running   3          51s
As you can see, it has succeeded running the job with the custom ServiceAccount.
Let me know if you have further questions about this case.
Create a service account like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create a ClusterRoleBinding using this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
Now create the pod with the same config given in the documentation.
When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it uses the default service account. This service account doesn't have permission to perform those operations by default. So you need to create a service account, role and role binding using a more privileged account. You should have a kubeconfig file with admin (or admin-like) privileges; use that kubeconfig file with kubectl from outside the pod to create the service account, role, role binding, etc.
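For example, with an admin kubeconfig the same objects from the question could be created imperatively (a sketch using the names from the question):

kubectl create serviceaccount internal-kubectl -n my-namespace
kubectl create role modify-pods --verb=get,list,delete --resource=pods -n my-namespace
kubectl create rolebinding modify-pods-to-sa --role=modify-pods \
  --serviceaccount=my-namespace:internal-kubectl -n my-namespace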
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within this pod using kubectl and that service account.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
I have a simple .NET Standard (4.7.2) application that is containerized. It has a method to list all namespaces in a cluster; I used the C# Kubernetes client to interact with the API. According to the official documentation, default credentials for the API server are available inside a pod and are used to communicate with the API server, but when calling the Kubernetes API from the pod I get the following error:
Operation returned an invalid status code 'Forbidden'
My deployment yaml is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: cmd-dotnetstdk8stest
    image: eddyuk/dotnetstdk8stest:1.0.8-cmd
    ports:
    - containerPort: 80
I think you have RBAC activated in your cluster. You need to assign to your pod a ServiceAccount that is bound to a Role allowing that ServiceAccount to get the list of all namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount is assigned to the pods running in that namespace.
First, you should create the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR NAMESPACE>
  name: namespace-reader
rules:
- apiGroups: [""]           # "" indicates the core API group
  resources: ["namespaces"] # the resource is namespaces
  verbs: ["get", "list"]    # allow this role to get and list namespaces
Create a new ServiceAccount inside your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Assign your newly created Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
- kind: ServiceAccount
  name: application-sa      # your newly created ServiceAccount
  namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader    # your newly created Role
  apiGroup: rbac.authorization.k8s.io
Assign the new Role to your Pod by adding a ServiceAccount to your Pod Spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
You can read more about RBAC in the official docs. Maybe you want to use kubectl commands instead of YAML definitions.
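For example, the equivalent objects could be created imperatively (a sketch mirroring the YAML above; replace <YOUR-NAMESPACE> with your namespace):

kubectl create role namespace-reader --verb=get,list --resource=namespaces -n <YOUR-NAMESPACE>
kubectl create serviceaccount application-sa -n <YOUR-NAMESPACE>
kubectl create rolebinding allow-namespace-listing --role=namespace-reader \
  --serviceaccount=<YOUR-NAMESPACE>:application-sa -n <YOUR-NAMESPACE>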