restricting EKS user access - kubernetes

I am trying to add a new user to an EKS cluster and give them access. So far I was able to add the user just by editing configmap/aws-auth (kubectl edit -n kube-system configmap/aws-auth) and adding the new user to
mapUsers: |
  - userarn: arn:aws:iam::123456789:user/user01
    username: user01
    groups:
      - system:masters
How can I add a user to the EKS cluster and give them full access to a specific namespace, but nothing outside of it?
I tried to create a Role & RoleBinding as
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: namespace1
  name: namespace1-user
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
# This role binding grants the "namespace1-user" Role to user01 in the "namespace1" namespace.
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace1-user-role-binding
  namespace: namespace1
subjects:
  - kind: User
    name: user01
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: namespace1-user
  apiGroup: rbac.authorization.k8s.io
user01 can still see all the pods from other users with kubectl get pods --all-namespaces. Is there any way to restrict this?

Essentially what you want is to define a cluster role and use a role binding to apply it to a specific namespace. Using a cluster role (rather than a role) allows you to re-use it across namespaces. Using a role binding allows you to target a specific namespace rather than giving cluster-wide permissions.
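A minimal sketch of that pattern, reusing the user and namespace from the question (the role and binding names here are made up for illustration):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace1-admin          # namespace-less, so it can be re-used
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace1-admin-binding
  namespace: namespace1           # the binding, not the role, picks the namespace
subjects:
  - kind: User
    name: user01
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace1-admin
  apiGroup: rbac.authorization.k8s.io

Note that this only takes effect once user01 is no longer mapped to system:masters in aws-auth; that group already grants cluster-wide admin, which is why user01 can still see everything.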

how to give pods in a namespace admin access?

I'm new to k8s.
I have a deployment with a single pod inside the k8test (custom) namespace. For learning purposes, I want to give that pod admin access.
I failed to achieve this by creating a namespace-role:
kind: 'Role',
apiVersion: 'rbac.authorization.k8s.io/v1',
metadata: {
  name: 'super-duper-admin',
  namespace: 'k8test',
},
rules: [
  {
    apiGroups: [''],
    resources: ['ResourceAll'],
    verbs: ['VerbAll'],
  },
],
From the pod's log:
services is forbidden: User "system:serviceaccount:k8test:default" cannot list resource "services" in API group "" in the namespace "k8test"
I couldn't find a simple explanation of what apiGroups is. What is it?
You can find all api groups here.
As documented here
API groups make it easier to extend the Kubernetes API. The API group
is specified in a REST path and in the apiVersion field of a
serialized object
Pods come under the core API group, version v1.
In your RBAC rule, '' (the empty string) indicates the core API group.
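To illustrate the difference (this Role is only a sketch, not part of the original question): core resources such as Pods use the empty string, while resources such as Deployments live in the named apps group, so each rule names the group its resources belong to.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: k8test
  name: apigroup-example        # illustrative name
rules:
  - apiGroups: [""]             # core group: pods, services, configmaps, ...
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]         # named group: deployments, replicasets, statefulsets, ...
    resources: ["deployments"]
    verbs: ["get", "list"]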
For your case, create the role as below, which gives permission to all API groups, all resources, and all verbs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: k8test
  name: super-duper-admin
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - '*'
Bind the role to the service account as below:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding grants the "super-duper-admin" Role to the default
# service account in the "k8test" namespace.
kind: RoleBinding
metadata:
  name: admin-rolebinding
  namespace: k8test
subjects:
  # You can specify more than one "subject"
  - kind: ServiceAccount
    name: default               # "name" is case sensitive
    namespace: k8test
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role                    # this must be Role or ClusterRole
  name: super-duper-admin       # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
Execute the below command to verify the RBAC is properly applied:
kubectl auth can-i list services --as=system:serviceaccount:k8test:default -n k8test
More information and examples about RBAC can be found here.

Kubernetes RBAC roles with resourceName and listing objects

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: p-viewer-role
  namespace: pepsi
rules:
  - apiGroups:
      - ""
    resourceNames:
      - p83
    resources:
      - pods
    verbs:
      - list
      - get
      - watch
When we use resourceNames in the Role, the following command works:
kubectl get pods -n pepsi p83
It returns a proper value. However,
kubectl get pods -n pepsi
returns forbidden. Why doesn't it list p83?
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: p-viewer-rolebinding
  namespace: pepsi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: p-viewer-role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: pepsi-project-viewer
    namespace: project
This is expected behavior. You have defined a role that is scoped to the namespace pepsi and to pod resources with the specific resourceName p83.
For the kubectl get pods -n pepsi command to work, you need to remove the resourceNames entry p83 from the Role.
This kind of advanced validation is best handled by OPA, where you can define fine-grained policies.
Short answer:
list (and watch) can actually be restricted by resource name: the restriction permits list (and watch) requests that use a fieldSelector of metadata.name=... to match a single object (for example, /api/v1/namespaces/$ns/configmaps?fieldSelector=metadata.name=foo).
For more details and some tests you can check this link: https://github.com/kubernetes/website/pull/29468
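In practice, applied to the Role above, that means the user can still retrieve p83 via a list request as long as the request names the object through a field selector. For example (command shown only as an illustration of the mechanism described above):

kubectl get pods -n pepsi --field-selector=metadata.name=p83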
@sftim and @liggitt offered a lot of help there!

kubectl to allow user to only to copy files in and out of the pods

I have a Kubernetes cluster where my application is deployed.
There are some other users who should only be able to copy files into and out of a pod, using the kubectl cp command.
This user context should not allow the user to do any operation on the cluster other than kubectl cp.
kubectl cp internally uses exec. There is no way to grant permission to copy only, but you can grant only exec permission.
Create a Role with permission to pods/exec:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-exec
rules:
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
Create a RoleBinding to assign the above Role to a user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exec
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user
Rather than use kubectl cp, instead run a sidecar container with an sftp or rsync server. That will give you better control at all levels.
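A rough sketch of that sidecar idea, assuming an SFTP-capable image such as atmoz/sftp (the image, credentials, and mount paths below are placeholders, not something from the answer):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sftp
spec:
  containers:
    - name: app
      image: my-app:latest              # your application container (placeholder)
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: sftp                        # sidecar exposing the shared volume over SFTP
      image: atmoz/sftp                 # assumed image; any sftp/rsync server works
      args: ["user:password:1001"]      # placeholder credentials (user:pass:uid)
      ports:
        - containerPort: 22
      volumeMounts:
        - name: shared-data
          mountPath: /home/user/upload
  volumes:
    - name: shared-data
      emptyDir: {}

Users then transfer files over SFTP (for example through a Service or a port-forward) instead of needing exec access to the pod.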
You can also use OPA with an admission controller that only permits API requests whose manifest has a specific label like "cp" or "username", etc., and you also get the benefits of Gatekeeper:
https://www.youtube.com/watch?v=ZJgaGJm9NJE&t=3040s

kubernetes resource level RBAC with object filtering

I'm doing some resource-level RBAC for k8s custom objects and finding it difficult to filter resources using native k8s calls.
cluster is my custom CRD, and user john has access to only one CRD instance, not all instances of the CRD, using native k8s RBAC.
➜ k get clusters
NAME               AGE
aws-gluohfhcwo     3d2h
azure-cikivygyxd   3d1h
➜ k get clusters --as=john
Error from server (Forbidden): clusters.operator.biqmind.com is forbidden: User "ranbir" cannot list resource "clusters" in API group "operator.biqmind.com" in the namespace "biqmind"
➜ k get clusters --as=john aws-gluohfhcwo
NAME             AGE
aws-gluohfhcwo   3d2h
I have to explicitly specify the object name to get the objects the user is authorized for. Any suggestions on how this can be solved?
The full RBAC is posted here:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: biqmind
  name: cluster-admin-aws-gluohfhcwo
rules:
  - apiGroups: ["operator.biqmind.com"]
    resources: ["clusters"]
    resourceNames: ["aws-gluohfhcwo"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-admin-aws-gluohfhcwo-binding
  namespace: biqmind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-admin-aws-gluohfhcwo
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ranbir
User "ranbir" cannot list resource "clusters" in API group "operator.biqmind.com" in the namespace "biqmind"
You must add RBAC permissions with the verb list for the specified user in the specified namespace, to let that user list "clusters".
When doing
kubectl get clusters --as=john aws-gluohfhcwo
you use the RBAC verb get, but to list without specifying a specific name, the user also needs permission for the verb list.
Example of giving list permission, without resourceNames:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: biqmind
  name: cluster-admin-aws-gluohfhcwo
rules:
  - apiGroups: ["operator.biqmind.com"]
    resources: ["clusters"]
    resourceNames: ["aws-gluohfhcwo"]
    verbs: ["get", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["operator.biqmind.com"]
    resources: ["clusters"]
    verbs: ["get", "list"]
Enforcing RBAC on the user side is easy in concept. You can create RoleBindings for individual users, but this is not the recommended path as there’s a high risk of operator insanity.
The better approach for sane RBAC is to create groups that your users map to; how this mapping is done depends on your cluster's authenticator (e.g. the aws-iam-authenticator for EKS uses mapRoles to map an IAM role ARN to a set of groups).
Groups and the APIs they have access to are ultimately determined based on an organization’s needs, but a generic reader (for new engineers just getting the hang of things), writer (for your engineers), and admin (for you) role is a good start. (Hey, it’s better than admin for everyone.)
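On EKS that group mapping lives in the aws-auth ConfigMap; a minimal sketch of a mapRoles entry (the account ID, role name, and username are placeholders, with the group chosen to match the example bindings below):

mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/eks-engineer
    username: eks-engineer
    groups:
      - umbrella:engineering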
Here is an example configuration:
# An example reader ClusterRole. A ClusterRole is used so you're not worried
# about namespaces at this time. Remember, we're talking generic reader/writer/admin roles.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reader
rules:
  - apiGroups: ["*"]
    resources:
      - deployments
      - configmaps
      - pods
      - secrets
      - services
    verbs:
      - get
      - list
      - watch
---
# An example reader ClusterRoleBinding that gives read permissions to
# the engineering and operations groups
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reader
subjects:
  - kind: Group
    name: umbrella:engineering
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: umbrella:operations
    apiGroup: rbac.authorization.k8s.io
---
# An example writer ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: writer
rules:
  - apiGroups: ["*"]
    resources:
      - deployments
      - configmaps
      - pods
      - secrets
      - services
    verbs:
      - create
      - delete
      - patch
      - update
---
# An example writer ClusterRoleBinding that gives write permissions to
# the operations group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: writer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: writer
subjects:
  - kind: Group
    name: umbrella:operations
    apiGroup: rbac.authorization.k8s.io
Here is the exact explanation: rbac-article.
Take notice of the default Roles and RoleBindings:
API servers create a set of default ClusterRole and ClusterRoleBinding objects. Many of these are system: prefixed, which indicates that the resource is “owned” by the infrastructure. Modifications to these resources can result in non-functional clusters. One example is the system:node ClusterRole. This role defines permissions for kubelets. If the role is modified, it can prevent kubelets from working.
All of the default cluster roles and rolebindings are labeled with kubernetes.io/bootstrapping=rbac-defaults.
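You can list those defaults yourself via that label, for example (illustrative command):

kubectl get clusterroles,clusterrolebindings -l kubernetes.io/bootstrapping=rbac-defaults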
Remember about auto-reconciliation
At each start-up, the API server updates default cluster roles with any missing permissions, and updates default cluster role bindings with any missing subjects. This allows the cluster to repair accidental modifications, and to keep roles and rolebindings up-to-date as permissions and subjects change in new releases.
To opt out of this reconciliation, set the rbac.authorization.kubernetes.io/autoupdate annotation on a default cluster role or rolebinding to false. Be aware that missing default permissions and subjects can result in non-functional clusters.
Auto-reconciliation is enabled in Kubernetes version 1.6+ when the RBAC authorizer is active.
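For completeness, that opt-out annotation can be set with kubectl, for example (placeholder role name; modifying defaults is risky, as noted above):

kubectl annotate clusterrole <default-role-name> rbac.authorization.kubernetes.io/autoupdate=false --overwrite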
Useful article: understanding-rbac.

How to bind roles with service accounts - Kubernetes

I know there are a lot of similar questions but none of them has a solution as far as I have browsed.
Coming to the issue: I have created a service account (using a command), a role (using a .yaml file), and a role binding (using a .yaml file). The role grants access only to pods. But when I log into the dashboard (token method) using the SA that the role is attached to, I'm able to view all the resources without any restrictions. Here are the files and commands I used.
Role.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: assembly-prod
  name: testreadrole
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
RoleBinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: assembly-prod
subjects:
  - kind: ServiceAccount
    name: testsa
    apiGroup: ""
roleRef:
  kind: Role
  name: testreadrole
  apiGroup: rbac.authorization.k8s.io
Command used to create service account:
kubectl create serviceaccount <saname> --namespace <namespacename>
UPDATE: I created a service account and did not attach any kind of role to it. When I tried to log in with this SA, it let me through and I was able to perform all kinds of activities, including deleting secrets. So by default all SAs appear to get admin access, and that is the reason why my above roles are not working. Is this behavior expected? If yes, then how can I change it?
Try the below steps
# create service account
kubectl create serviceaccount pod-viewer
# Create cluster role/role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
rules:
  - apiGroups: [""] # core API group
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
---
# create cluster role binding
kubectl create clusterrolebinding pod-viewer \
  --clusterrole=pod-viewer \
  --serviceaccount=default:pod-viewer
# get service account secret
kubectl get secret | grep pod-viewer
pod-viewer-token-6fdcn kubernetes.io/service-account-token 3 2m58s
# get token
kubectl describe secret pod-viewer-token-6fdcn
Name: pod-viewer-token-6fdcn
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: pod-viewer
kubernetes.io/service-account.uid: bbfb3c4e-2254-11ea-a26c-0242ac110009
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InBvZC12aWV3ZXItdG9rZW4tNmZkY24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicG9kLXZpZXdlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJiZmIzYzRlLTIyNTQtMTFlYS1hMjZjLTAyNDJhYzExMDAwOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnBvZC12aWV3ZXIifQ.Pgco_4UwTCiOfYYS4QLwqgWnG8nry6JxoGiJCDuO4ZVDWUOkGJ3w6-8K1gGRSzWFOSB8E0l2YSQR4PB9jlc_9GYCFQ0-XNgkuiZBPvsTmKXdDvCNFz7bmg_Cua7HnACkKDbISKKyK4HMH-ShgVXDoMG5KmQQ_TCWs2E_a88COGMA543QL_BxckFowQZk19Iq8yEgSEfI9m8qfz4n6G7dQu9IpUSmVNUVB5GaEsaCIg6h_AXxDds5Ot6ngWUawvhYrPRv79zVKfAxYKwetjC291-qiIM92XZ63-YJJ3xbxPAsnCEwL_hG3P95-CNzoxJHKEfs_qa7a4hfe0k6HtHTWA
ca.crt: 1025 bytes
namespace: 7 bytes
Log in to the dashboard using the above token. You should see only pods and namespaces.
Screenshot: https://i.stack.imgur.com/D9bDi.png
Okay, I've found the solution to this. The major issue was that I'm running my cluster on Azure AKS, which I should have mentioned in the question but did not; that was my mistake. In Azure AKS, if RBAC is not enabled during cluster creation, then roles and role bindings have no effect at all: every request to the api-server is treated as a request from an admin. This was confirmed by Azure support too. So that was the reason my cluster role bindings and roles didn't apply.
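A quick way to spot this situation on any cluster (a generic probe of the authorizer, not an AKS-specific command) is to ask whether a service account with no bindings can do something it clearly should not be able to:

kubectl auth can-i delete secrets --as=system:serviceaccount:default:default

On a cluster that enforces RBAC this should print no; if it prints yes for an unbound service account, authorization is effectively wide open, as described above.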
I see that the .yamls you provided need some adjustments.
The Role has wrong formatting after the rules part.
The RoleBinding is missing namespace: under subjects:, and is also formatted incorrectly.
Try something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: assembly-prod
  name: testreadrole
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: assembly-prod
subjects:
  - kind: ServiceAccount
    name: testsa
    namespace: assembly-prod
roleRef:
  kind: Role
  name: testreadrole
  apiGroup: rbac.authorization.k8s.io
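Once applied (and with RBAC actually enabled on the cluster), you can verify the binding the same way as earlier; these illustrative checks just reuse the names from the question:

kubectl auth can-i list pods --as=system:serviceaccount:assembly-prod:testsa -n assembly-prod
kubectl auth can-i delete secrets --as=system:serviceaccount:assembly-prod:testsa -n assembly-prod

The first should print yes and the second no if the Role and RoleBinding are working as intended.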
There is a very useful guide about Non-Privileged RBAC User Administration in Kubernetes where you can find more detailed info regarding this particular topic.