Minikube: Restricted PodSecurityPolicy is not restricting when trying to create a privileged container - kubernetes

I have enabled PodSecurityPolicy in minikube. By default it has created two PSPs - privileged and restricted.
NAME         PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
privileged   true    *      RunAsAny   RunAsAny           RunAsAny    RunAsAny    false            *
restricted   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
I have also created a Linux user, kubexz, for which I created a ClusterRole and RoleBinding that restrict it to managing pods only in the kubexz namespace, and to using the restricted PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: only-edit
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "deletecollection", "patch", "update", "get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kubexz-rolebinding
  namespace: kubexz
subjects:
- kind: User
  name: kubexz
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: only-edit
I have set the kubeconfig file in the kubexz user's $HOME/.kube. The RBAC is working fine - from the kubexz user I am only able to create and manage pod resources in the kubexz namespace, as expected.
But when I post a pod manifest with securityContext.privileged: true, the restricted PodSecurityPolicy does not stop me from creating that pod. I should not be able to create a pod with a privileged container, but the pod is getting created. Not sure what I am missing.
apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  hostPID: true
  containers:
  - name: justsleep
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      privileged: true

I followed the PodSecurityPolicy setup with minikube.
This works by default only when using Minikube 1.11.1 with Kubernetes 1.16.x or higher.
Note for older versions of minikube:
Older versions of minikube do not ship with the pod-security-policy addon, so the policies that addon enables must be separately applied to the cluster
What I did:
1. Start minikube with the PodSecurityPolicy admission controller and the pod-security-policy addon enabled.
minikube start --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy --addons=pod-security-policy
The pod-security-policy addon must be enabled along with the admission controller to prevent issues during bootstrap.
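You can confirm that the addon is active and that it created the two policies mentioned in the question:
minikube addons list | grep pod-security-policy
kubectl get podsecuritypolicies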
2. Create an authenticated user:
In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.
Even though normal users cannot be added via an API call, any user who presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. In this configuration, Kubernetes determines the username from the common name field in the ‘subject’ of the cert (e.g., “/CN=bob”). From there, the role-based access control (RBAC) sub-system determines whether the user is authorized to perform a specific operation on a resource.
Here you can find an example of how to properly prepare X509 client certs and configure the kubeconfig file accordingly.
The most important part is to properly define the common name (CN) and the organization field (O):
openssl req -new -key DevUser.key -out DevUser.csr -subj "/CN=DevUser/O=development"
The common name (CN) of the subject will be used as username for authentication request. The organization field (O) will be used to indicate group membership of the user.
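For completeness, a minimal sketch of the full flow, assuming minikube's CA files are in ~/.minikube/ and using the kubexz user from the question (the dev group name is just an arbitrary example):

# Generate a key and a CSR whose CN is the username and O is the group
openssl genrsa -out kubexz.key 2048
openssl req -new -key kubexz.key -out kubexz.csr -subj "/CN=kubexz/O=dev"

# Sign the CSR with the minikube cluster CA
openssl x509 -req -in kubexz.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key \
  -CAcreateserial -out kubexz.crt -days 365

# Register the credentials and a context in the kubeconfig
kubectl config set-credentials kubexz --client-certificate=kubexz.crt --client-key=kubexz.key
kubectl config set-context kubexz-context --cluster=minikube --namespace=kubexz --user=kubexz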
Finally, I created your configuration on top of a standard minikube setup and cannot reproduce your issue - the pod is rejected, either due to hostPID: true or securityContext.privileged: true.
To consider:
a) Verify that your client certificate for authentication and your kubeconfig file were created/set up properly, especially the common name (CN) and the organization field (O).
b) Make sure you switch to the proper context when performing requests on behalf of different users, e.g.:
kubectl get pods --context=NewUser-context
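You can also check from your admin context, via impersonation, which PodSecurityPolicy the restricted user is actually allowed to use (names taken from the question):
kubectl auth can-i use podsecuritypolicy/restricted --as=kubexz -n kubexz
kubectl auth can-i use podsecuritypolicy/privileged --as=kubexz -n kubexz
Keep in mind that PSP admission also considers the pod's ServiceAccount, so a policy granted to the ServiceAccount can still apply even if the requesting user is restricted.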

Related

Forbidden after enabling Google Cloud Groups RBAC in GKE

We are enabling Google Cloud Groups RBAC in our existing GKE clusters.
For that, we first created all the groups in Workspace, and also the required "gke-security-groups@ourdomain.com" according to the documentation.
Those groups are created in Workspace with an integration with Active Directory for Single Sign On.
All groups are members of "gke-security-groups@ourdomain" as stated by the documentation, and all groups can view members.
The cluster was updated to enable the flag for Google Cloud Groups RBAC and we specified the value "gke-security-groups@ourdomain.com".
We then added one of the groups (let's call it group_a@ourdomain.com) to IAM and assigned a custom role which only gives access to:
"container.apiServices.get",
"container.apiServices.list",
"container.clusters.getCredentials",
"container.clusters.get",
"container.clusters.list",
This is just the minimum for the user to be able to log into the Kubernetes cluster and from there be able to apply Kubernetes RBAC.
In Kubernetes, we applied a Role, which allows listing pods in a specific namespace, and a RoleBinding that specifies the group we just added to IAM.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role
  namespace: custom-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: custom-namespace
roleRef:
  kind: Role
  name: test-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: group_a@ourdomain.com
Everything looks good until now. But when trying to list the pods of this namespace with a user that belongs to the group "group_a@ourdomain.com", we get:
Error from server (Forbidden): pods is forbidden: User "my-user@ourdomain.com" cannot list resource "pods" in API group "" in the namespace "custom-namespace": requires one of ["container.pods.list"] permission(s).
Of course, if I add container.pods.list to the role assigned to group_a@ourdomain, I can list pods, but that opens it up for all namespaces, as this permission in GCloud is global.
What am I missing here?
Not sure if this is relevant, but our organisation in gcloud is called for example "my-company.io", while the groups for SSO are named "...@groups.my-company.io", and the gke-security-groups group was also created with the "groups.my-company.io" domain.
Also, if instead of a Group in the RoleBinding, I specify the user directly, it works.
It turned out to be an issue with case-sensitive strings, and nothing related to the actual rules defined in the RBAC bindings, which were working as expected.
The names of the groups were created in Azure AD with a camel-case model. These group names were then shown in Google Workspace all lowercase.
Example in Azure AD:
thisIsOneGroup@groups.mycompany.com
Example configured in the RBAC bindings as shown in Google Workspace:
thisisonegroup@groups.mycompany.com
We copied the names from the Google Workspace UI (all lowercase) into the bindings, and that caused the issue. GKE's RBAC is case sensitive, so the name configured in the binding did not match the email configured in Google Workspace.
After changing the RBAC bindings to have the same format, everything worked as expected.
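A quick way to test a binding against an exact group string, including its casing, is impersonation from an admin context (the user value is a placeholder; try the group string in both casings):
kubectl auth can-i list pods -n custom-namespace --as=some-user --as-group=thisIsOneGroup@groups.mycompany.com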
Looks like you are trying to grant access to deployments in the extensions and apps API groups. That requires you to specify the extensions and apps API groups in your role rules:
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - '*'
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - '*'
I can also recommend recreating the role and role bindings. You can use the following thread as a reference as well: RBAC issue: Error from server (Forbidden).
Edit 01/26/22:
Can you please confirm that you provided the credentials or configuration file (manifest, YAML)? As you may know, this information is provided by Kubernetes and the default service account. You can verify it by running:
$ kubectl auth can-i get pods
The account type you need to use here is a “service account”. To create a new service account with a wider set of permissions, the following is a YAML example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-read-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-read-sa
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-read-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-read-sa
  namespace: default
  apiGroup: ""
roleRef:
  kind: Role
  name: pod-read-role
  apiGroup: rbac.authorization.k8s.io
Please use the following thread as a reference.

Inadvertently deleted admin clusterrole and can't access cluster resources

I deleted my cluster-admin role via kubectl using:
kubectl delete clusterrole cluster-admin
Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.
Not sure what I expected, but now I don't have access to the cluster from my account.
If none of the kubectl commands actually work, unfortunately you will not be able to create a new cluster role. The problem is that you won't be able to do anything without an admin role. You can try creating the cluster-admin role directly through the API (not using kubectl), but if that doesn't help you have to recreate the cluster.
Try applying this YAML to create the new ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
Then apply the YAML file:
kubectl apply -f <filename>.yaml
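If kubectl itself is rejected, the same object can in principle be posted straight to the API server, as mentioned above. A rough sketch, assuming you still hold some credential the API server accepts (the server address, token, and file name are placeholders):
curl -k -X POST "https://<apiserver>:6443/apis/rbac.authorization.k8s.io/v1/clusterroles" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/yaml" \
  --data-binary @cluster-admin.yaml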

Doesn't configMap referencing in a POD require a ServiceAccount?

Curious as to how configMaps can be referenced in PODs without an appropriate ServiceAccount and associated RBAC rules?
Sample POD YAML mounting a configMap:
    - mountPath: /kubernetes-vault
      name: kubernetes-vault
  .................
  .................
  volumes:
  - emptyDir: {}
    name: vault-token
  - configMap:
      defaultMode: 420
      name: kubernetes-vault
    name: kubernetes-vault
But the associated ServiceAccount and its corresponding RBAC (Role and RoleBinding) don't have any rules specifying access for this configMap (kubernetes-vault).
Role & Rule for the POD
rules:
- apiGroups:
  - '*'
  resources:
  - services
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch
A couple of questions:
doesn't access to a configMap require an appropriate ServiceAccount with access rules specified specifically for configMap access?
if yes, which rule mentioned above governs configMap access?
if not, what objects are governed by RBAC rules?
doesn't access to configMap required appropriate ServiceAccount with access rules specified specifically for configMap access ?
It will when a ServiceAccount is performing that action, yes, but volumes: are performed by a mixture of kube-apiserver, kube-controller, and the calling credential that interacts with the apiserver. By the time the Pod's volumes mount, all those security checks are a done deal -- one can verify that behavior by running any Pod and suppressing its ServiceAccount and observe that the volume mounts still take place
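For illustration, a minimal Pod that suppresses its ServiceAccount token (the Pod name is made up; the ConfigMap name is reused from the question) still gets the ConfigMap mounted:
apiVersion: v1
kind: Pod
metadata:
  name: cm-mount-test
spec:
  automountServiceAccountToken: false  # suppress the ServiceAccount token mount
  containers:
  - name: test
    image: alpine
    command: ["/bin/sleep", "3600"]
    volumeMounts:
    - mountPath: /kubernetes-vault
      name: kubernetes-vault
  volumes:
  - name: kubernetes-vault
    configMap:
      name: kubernetes-vault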
If one has objects which should only be accessed by a limited set of users, that should happen at the Role level to prevent the users from scheduling Pods that touch the sensitive items.
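For example, a Role can be scoped to a single ConfigMap by name with resourceNames (the role name and namespace here are hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-vault-configmap
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-vault"]
  verbs: ["get"]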
if not, what objects are governed by RBAC rules?
As far as I know, everything is governed by RBAC rules, and even if those aren't to your satisfaction, the system offers Validating Admission Controllers, which allow extremely fine-grained access rules.

Automatically create Kubernetes resources after namespace creation

I have 2 teams:
devs: they create a new Kubernetes namespace each time they deploy a branch/tag of their app
ops: they manage access control to the cluster with (cluster)roles and (cluster)rolebindings
The problem is that 'devs' cannot kubectl their namespaces until 'ops' have created RBAC resources. And 'devs' cannot create RBAC resources themselves as they don't have the list of subjects to put in the rolebinding resource (sharing the list is not an option).
I have read the official documentation about Admission webhooks but what I understood is that they only act on the resource that triggered the webhook.
Is there a native and/or simple way in Kubernetes to apply resources whenever a new namespace is created?
I've come up with a solution by writing a custom controller.
With the following custom resource deployed, the controller injects the role and rolebinding in namespaces matching dev-.* and fix-.*:
kind: NamespaceResourcesInjector
apiVersion: blakelead.com/v1alpha1
metadata:
  name: nri-test
spec:
  namespaces:
  - dev-.*
  - fix-.*
  resources:
  - |
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: dev-role
    rules:
    - apiGroups: [""]
      resources: ["pods","pods/portforward", "services", "deployments", "ingresses"]
      verbs: ["list", "get"]
    - apiGroups: [""]
      resources: ["pods/portforward"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["list", "get"]
  - |
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: dev-rolebinding
    subjects:
    - kind: User
      name: dev
    roleRef:
      kind: Role
      name: dev-role
      apiGroup: rbac.authorization.k8s.io
The controller is still in early stages of development but I'm using it successfully in more and more clusters.
Here it is for those interested: https://github.com/blakelead/nsinjector
Yes, there is a native way but not an out of the box feature.
You can do what you have described by using/creating an operator, essentially extending the Kubernetes APIs for your need.
Since an operator is just an open pattern that can be implemented in many ways, in the scenario you describe one possible control flow could be:
Operator with privileges to create RBAC is deployed and subscribed to changes to a k8s namespace object kind
Devs create namespace containing an agreed label
Operator is notified about changes to the cluster
Operator checks namespace validation (this can also be done by a separate admission webhook)
Operator creates RBAC in the newly created namespace
If RBACs are cluster wide, same operator can do the RBAC cleanup once namespace is deleted
It's kind of related to how the user is authenticated to the cluster and how they get a kubeconfig file. You can put a group in the client certificate or in the bearer token that kubectl uses from the kubeconfig. Ahead of time you can define a ClusterRole with a ClusterRoleBinding to that group which gives its members permission for certain verbs on certain resources (for example, the ability to create namespaces).
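For instance, such a ClusterRole/ClusterRoleBinding pair could look like this (the group and object names are made up for illustration):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-creator-devs
subjects:
- kind: Group
  name: devs  # must match the O= field in the client cert or the group claim in the token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io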
Additionally you can use an admission webhook to validate if the user is supposed to be part of that group or not.

How to set an IAM user to have specific rights in Kubernetes Cluster on AWS.

I want to allow a user to do things in the Kubernetes cluster for EKS for example: apply deployment, create secrets, create volumes etc.
I'm not sure which role to use for that. I don't want to allow users to create clusters, delete clusters, or list clusters - only to perform Kubernetes operations within the cluster.
As far as I know, permissions to the cluster are handled with the Heptio authenticator. I believe I am missing something here but can't figure out what.
This link is the right one to add an AWS IAM user or AWS Role to a given K8S Role.
Let's say that you want to create a new K8S Role with read-only permission, called pod-reader:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
After creating the role, you need to give your IAM user permission to assume that role. This is easily done through the aws-auth ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::270870090353:user/franziska_adler
      username: iam_user_name
      groups:
        - pod-reader
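Note that the pod-reader group in mapUsers only takes effect once that group is bound to the Role on the Kubernetes side; a sketch of that binding (the binding name is made up):
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: Group
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io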
More information about K8S RBAC Authorization here
Looks like you have to manually add the users in the ConfigMap under the 'mapUsers' item and then run kubectl apply -f config-map.yml, according to the AWS documentation, section 3: "Add your IAM users, roles, or AWS accounts to the configMap."
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html