Providing read access to IAM role to prod eks cluster - kubernetes

I have an IAM role in AWS with these policies:
EKS Describe
EKS List
Now I want that role to access the cluster with read-only access only.
Is there a specific Role/RoleBinding that Kubernetes offers that I could use here within the aws-auth ConfigMap?
Kindly guide me. Please also mention if there is some other approach to grant such access.
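One common pattern (a sketch; the account ID, role name, and group name below are placeholders) is to map the IAM role to a Kubernetes group in the aws-auth ConfigMap and bind that group to the built-in view ClusterRole:

```yaml
# aws-auth ConfigMap (kube-system namespace): map the IAM role to a
# custom Kubernetes group. The role ARN is a placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-read-only
      username: eks-read-only
      groups:
        - eks-read-only-group
---
# Bind the group to the built-in "view" ClusterRole for cluster-wide
# read-only access (note: "view" excludes Secrets and RBAC objects).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-read-only-binding
subjects:
  - kind: Group
    name: eks-read-only-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

Anyone who assumes the IAM role then authenticates as a member of eks-read-only-group and gets only the permissions of the view ClusterRole.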

Related

Kubernetes service account security

I am trying to understand the security implications of granting a Kubernetes ServiceAccount permission to perform deployments, service creation, etc. The role is a ClusterRole with a namespaced RoleBinding.
The use case is using the service account to automate/orchestrate some tasks inside the cluster. The version is 1.16.
Service accounts can be used to grant different access levels for different purposes: a developer's ClusterRole may need to list, get, and watch resources but not delete them, while an admin can delete and create resources.
If you are developing a Kubernetes operator that needs to communicate with the cluster to create, update, and delete resources, the operator needs a service account with a ClusterRoleBinding covering all verbs, so that it can perform every operation on those resources. It is not good practice, however, to assign full permissions to a regular deployment.
Let me discuss this.
According to the docs:
A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.
ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both.
A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding grants that access cluster-wide.
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
A RoleBinding may reference any Role in the same namespace. Alternatively, a RoleBinding can reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding. If you want to bind a ClusterRole to all the namespaces in your cluster, you use a ClusterRoleBinding.
So, answering your question: handling ClusterRoles can definitely be a security risk.
The best solution is to grant as few permissions as possible:
Grant a role to an application-specific service account (best practice)
Grant a role to the "default" service account in a namespace
Grant a role to all service accounts in a namespace
Grant a limited role to all service accounts cluster-wide (discouraged)
Grant super-user access to all service accounts cluster-wide (strongly discouraged)
There is no single answer to this question, because you need to plan and test the actions/permissions between your application/deployment and the Kubernetes API, e.g. with namespaced or non-namespaced resources, inside one namespace or across the entire cluster.
In your example you can simply use a Role/RoleBinding if you are working inside a single namespace. Alternatively, you can use a ClusterRole with a RoleBinding and extend permissions with an additional RoleBinding that allows the ServiceAccount to create new Kubernetes objects in another namespace.
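As a minimal sketch of the application-specific service account pattern (app-sa, app-reader, and app-namespace are placeholder names), a namespaced read-only Role bound to a ServiceAccount could look like:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa              # placeholder name
  namespace: app-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: app-namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]   # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: app-namespace
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: app-namespace
roleRef:
  kind: Role
  name: app-reader
  apiGroup: rbac.authorization.k8s.io
```

Widening the verbs list (e.g. adding create, update, patch, delete) is how you would grant an automation account deployment rights, one verb at a time.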
Assuming we are talking about ServiceAccount for deployment, here you can find good advice for "RBAC in Deployments: A use case"
If you create a ServiceAccount for your deployment along with an appropriate Role/ClusterRole and RoleBinding/ClusterRoleBinding, you can test it with:
kubectl auth can-i get secrets --as=system:serviceaccount:[namespace]:[service_account_name] -n [target_namespace]
For testing please take a look also at Access Clusters Using the Kubernetes API.
This command will show you whether a particular ServiceAccount (defined by the subject in the RoleBinding/ClusterRoleBinding) has permission (defined by the verbs in the Role/ClusterRole) to get secrets (defined by the resources in the Role/ClusterRole) in the specified namespace.
Following this approach you can verify if your deployment has enough permissions to perform all required operation against Kubernetes API.
While working with RBAC in Kubernetes you should consider the topics below:
Have multiple users with different properties, establishing a proper authentication mechanism.
Have full control over which operations each user or group of users can execute.
Have full control over which operations each process inside a pod can execute.
Limit the visibility of certain resources of namespaces.

How is Kubernetes RBAC Actually Enforced for Service Accounts?

We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:
Secrets
- User-Service-Secret
- Transaction-Service-Secret
Service Account
- User-Service
- Transaction-Service
Pods
- User-Service-Pod
- Transaction-Service-Pod
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on the User-Service-Secret.
How do we actually enforce the RBAC system?
FYI we are using EKS
First, it is important to distinguish between API access to the secret and consuming the secret as an environment variable or a mounted volume.
TLDR:
RBAC controls who can access a secret (or any other resource) using K8s API requests.
Namespaces or the service account's secrets attribute control if a pod can consume a secret as an environment variable or through a volume mount.
API access
RBAC is used to control whether an identity (in your example the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds your identity (the service account) to a Role (namespaced) or a ClusterRole (not namespaced). Then, when you assign the service account to a pod by setting the serviceAccountName attribute, running kubectl get secret in that pod (or the equivalent call from one of the client libraries) uses that service account's credentials to make the API request.
Consuming Secrets
This, however, is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container. Note: per container, not per pod. You can limit which secrets a pod can mount by placing pods in different namespaces, because a pod can only refer to a secret in the same namespace. Additionally, you can use the service account's secrets attribute to limit which secrets a pod with that service account can refer to.
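For illustration, a pod spec consuming the same secret both ways might look like this (a sketch; the pod, secret, and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-service-pod
spec:
  serviceAccountName: user-service
  containers:
    - name: app
      image: nginx              # placeholder image
      env:
        - name: DB_PASSWORD     # secret key injected as an env variable
          valueFrom:
            secretKeyRef:
              name: user-service-secret
              key: password
      volumeMounts:
        - name: secret-vol      # same secret mounted as files
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: user-service-secret
```

Neither mechanism goes through an RBAC check at read time; the kubelet fetches the secret on the pod's behalf, which is exactly why pod-creation rights matter.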
$ kubectl explain sa.secrets
KIND:     ServiceAccount
VERSION:  v1

RESOURCE: secrets <[]Object>

DESCRIPTION:
     Secrets is the list of secrets allowed to be used by pods running using
     this ServiceAccount. More info:
     https://kubernetes.io/docs/concepts/configuration/secret

     ObjectReference contains enough information to let you inspect or modify
     the referred object.
You can learn more about the security implications of Kubernetes secrets in the secret documentation.
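A sketch of that secrets attribute in use (the service account and secret names are placeholders; note that enforcement of the list requires the kubernetes.io/enforce-mountable-secrets annotation, otherwise it is advisory only):

```yaml
# Restrict which secrets pods using this ServiceAccount may reference.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user-service            # placeholder name
  namespace: default
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
  - name: user-service-secret   # the only secret pods may consume
```

With this in place, a pod running as user-service that tries to mount any other secret is rejected at admission time.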
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the Pod starts up, even though the service account it is assigned to doesn't have get permission on the User-Service-Secret.
Yes, this is correct.
This is documented for Kubernetes on privilege escalation via pod creation - within a namespace.
Users who have the ability to create pods in a namespace can potentially escalate their privileges within that namespace. They can create pods that access secrets the user cannot themselves read, or that run under a service account with different/greater permissions.
To actually enforce this kind of security policy, you probably have to add an extra layer of policies via an admission controller. The Open Policy Agent, in the form of OPA Gatekeeper, is most likely a good fit for this kind of policy enforcement.

EKS pods multi tenant with RDS

I have an EKS cluster with ASG.
I want to restrict pods that are in the specific name space to connect to specific RDS services.
Is this available in AWS, is there any suggestions how to do so?
Looking for best practices that are already running in production.
I don’t believe there is a way to whitelist just a single namespace for access to your RDS instance. This is mainly because clusters are shared and AWS services don’t really understand what a Kubernetes namespace is.
In order to achieve connectivity, you can use private VPC peering, or a publicly available RDS instance on which you whitelist the Elastic IP attached to your VPC NAT gateway. I would strongly advise private VPC peering, so that you at least know the connections are private.
Finally, RDS access is going to be allowed for the entire cluster, as you can’t really limit it to a single set of resources. However, because your RDS instance requires user credentials to access any data, I don’t believe it is such a big issue to have the whole cluster whitelisted against RDS.
You can use Pod Security Groups to achieve this.
Blog Post explaining Pod Security Groups
Tutorial for setting up Pod Security Groups
If you want all Pods of a namespace to be part of an EC2 Security Group, you should be able to use an empty podSelector in your SecurityGroupPolicy:
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-security-group-policy
  namespace: my-namespace
spec:
  securityGroups:
    groupIds:
      - my_pod_security_group_id
  podSelector: {}

Kubeconfig for deploying to all namespaces in a k8s cluster

I am looking for instructions on how to generate a kubeconfig file that can deploy and delete my k8s deployments in all namespaces, and that also has permission to create, delete, and view secrets in all namespaces.
The use case for this kubeconfig is to use it in Jenkins for performing deployments to a Kubernetes cluster.
I am aware of k8s service accounts with Roles and RoleBindings; however, it appears they can only be scoped to specific namespace(s).
Thanks
You should create a ClusterRole and a ClusterRoleBinding to grant access at the cluster level. Then, using a service account that has cluster-level access, you should be able to operate across all namespaces.
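A minimal sketch of that setup (jenkins-deployer and the ci namespace are placeholder names), granting deployment and secret management across all namespaces:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer      # placeholder name
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
# ClusterRoleBinding makes the permissions apply in every namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-deployer
subjects:
  - kind: ServiceAccount
    name: jenkins-deployer
    namespace: ci
roleRef:
  kind: ClusterRole
  name: jenkins-deployer
  apiGroup: rbac.authorization.k8s.io
```

You would then build the kubeconfig for Jenkins from this service account's token and the cluster's CA certificate and API server address.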

How do permissions in a GCloud IAM role get implemented in a kubernetes cluster?

I am running a Kubernetes application on GKE. In the GCP IAM console, I can see several built-in roles, e.g. Kubernetes Engine Admin. Each role has an ID and permissions associated with it— for example, Kubernetes Engine Admin has ID roles/container.admin and ~300 permissions, each something like container.apiServices.create.
In the kubernetes cluster, I can run:
kubectl get clusterrole | grep -v system: # exclude system roles
This returns the following:
NAME AGE
admin 35d
cloud-provider 35d
cluster-admin 35d
cluster-autoscaler 35d
edit 35d
gce:beta:kubelet-certificate-bootstrap 35d
gce:beta:kubelet-certificate-rotation 35d
gce:cloud-provider 35d
kubelet-api-admin 35d
view 35d
I do not see any roles in this table that reflect the roles in GCP IAM.
That being the case, how are the GCP IAM roles implemented/enforced in a cluster? Does Kubernetes talk to GCP, in addition to using RBAC, when doing permissions checks?
The RBAC system lets you exercise fine-grained control over how users access the API resources running on your cluster. You can use RBAC to dynamically configure permissions for your cluster's users and define the kinds of resources with which they can interact.
Moreover, GKE also uses Cloud Identity and Access Management (IAM) to control access to your cluster.
Hope this helps!
RBAC inherits permissions from IAM, so be careful with that: the two systems are additive, so if you grant a cluster-admin-level permission in IAM, you will have no way to give fewer permissions through RBAC.
If you want to use RBAC, you should grant the user the lowest IAM permission that still allows cluster access (given your use case), and then manage permissions granularly through RBAC.
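As a sketch of that pattern (the user email and namespace are placeholders): grant the user only a minimal IAM role such as roles/container.clusterViewer so they can fetch cluster credentials, then widen their in-cluster permissions with RBAC, e.g.:

```yaml
# RoleBinding granting a Google account edit rights in one namespace.
# IAM only provides authentication plus a minimal baseline; this
# binding is where the real permissions come from.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit
  namespace: dev                    # placeholder namespace
subjects:
  - kind: User
    name: developer@example.com     # placeholder Google account email
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                        # built-in ClusterRole, bound only in "dev"
  apiGroup: rbac.authorization.k8s.io
```

Because permissions are the union of IAM and RBAC grants, keeping the IAM side minimal is what makes the RBAC side meaningful.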