Let's say I have an EKS cluster with multiple pods hosting different applications. I want to allow connections from a specific application to an RDS instance without allowing all the pods in the EKS cluster to connect to the RDS.
After some research, I found out that there's a networking approach to solving this: creating security groups for pods. But I am looking for another approach.
I was expecting to have a simple setup where I can:
create an IAM policy with read/write permissions to the DB
create an IAM role and attach that policy
create a service account (IAM and k8s service accounts) with that role
assign the service account to the pods I want to grant RDS access to (sketched below).
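Something along these lines, where the namespace, names, and role ARN are just placeholders:

# Kubernetes service account annotated for IRSA (IAM Roles for Service Accounts);
# the referenced IAM role would carry the RDS access policy.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rds-client
  namespace: my-app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/rds-access-role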
But it seems like the only way to have IAM authentication from pods to RDS is by continuously generating a new token every 15 minutes. Is this the only way?
I would recommend dividing this task into the following parts:
making sure the security concept is sound
allowing network traffic from EC2 worker nodes to RDS
creating Egress network policy for the cluster to only allow RDS traffic for specific pods
making sure the security concept is sound
The question here is: why do you want only specific pods to access the RDS database? Is there a trust issue with the parties deploying pods to the cluster, or is it a matter of compliance (some departments can't access other departments' resources)? If it's trust, maybe this separation is not enough; maybe you should not allow untrusted pods on your cluster at all (container-escape vulnerabilities can lead to gaining root via Docker).
allowing network traffic from EC2 worker nodes to RDS
For this, the security group of the EC2 worker nodes must be allowed on the RDS side (inbound rules), and, just to be sure, the security group of the EKS cluster nodes should also allow outbound connections to RDS. Without these generic rules, the traffic can't flow at all. You can be more specific here, for example by allowing access to only specific RDS instances, not all of them. You can also have more than one node group in your EKS cluster (and run the pods that require RDS access only on those nodes, using labels).
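For illustration, the inbound rule on the RDS security group could be expressed in CloudFormation roughly like this; the group IDs are placeholders and port 5432 assumes Postgres, so adjust to your engine:

Resources:
  RdsIngressFromEksNodes:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Allow EKS worker nodes to reach the database port on RDS
      GroupId: sg-0123456789abcdef0                # placeholder: security group attached to the RDS instance
      SourceSecurityGroupId: sg-0fedcba9876543210  # placeholder: security group of the EKS worker nodes
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432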
creating Egress network policy for the cluster
If you create a default deny-all Egress policy (which is highly recommended), then no pods can access RDS by default. You can then apply more granular Egress policies that allow access to the RDS database by namespace, label, or pod name. More here:
https://www.cncf.io/blog/2020/02/10/guide-to-kubernetes-egress-network-policies/
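A minimal sketch of such a setup, assuming a placeholder namespace, pod label, and RDS subnet CIDR:

# 1. Default deny-all egress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
    - Egress
---
# 2. Allow only pods labeled rds-access=true to reach the RDS subnet on the DB port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rds-egress
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      rds-access: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.100.0/24   # placeholder: subnet(s) the RDS instance lives in
      ports:
        - protocol: TCP
          port: 5432

Note that a blanket deny-all egress policy also blocks DNS lookups, so in practice you will usually add another egress rule allowing port 53 to kube-dns.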
Clarification: in this scenario you store the database credentials in Kubernetes secrets, mount them into the pod and log in normally. If you just want to get a connection without auth, my answer won't help you.
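For completeness, a minimal sketch of wiring such a secret into a pod; the image, secret name, and label are placeholders:

# Pod consuming DB credentials from a Kubernetes secret as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: my-app
  labels:
    rds-access: "true"     # matches the allow-rds-egress policy above
spec:
  containers:
    - name: app
      image: my-app:latest           # placeholder image
      envFrom:
        - secretRef:
            name: rds-credentials    # placeholder secret holding e.g. DB_USER / DB_PASSWORD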
Related
I have a GKE cluster and I would like to connect some, but not all (!), pods and services to a managed Postgresql Cloud DB running in the same VPC.
Of course, I could just go for it (https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine), but I would like to make sure that only those pods and services that should connect to the Postgresql DB actually can.
I thought of creating a separate node pool in my GKE cluster (https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools), where only those pods and services run that should be able to connect to the Postgresql DB, and then allowing only those pods and services to connect to the DB by telling the DB which IPs to accept. However, it seems that I cannot set dedicated IPs at the node pool level, only at the cluster level.
Do you have an idea how I can make such a restriction?
When you create your node pool, create it with a service account that doesn't have permission to access Cloud SQL instances.
Then, leverage Workload Identity to bind a specific service account to some of your pods, and grant that service account permission to access the Cloud SQL instance.
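A minimal sketch of the Kubernetes side, with placeholder names for the namespace, the Kubernetes service account, and the Google service account:

# Kubernetes service account bound to a Google service account via Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloudsql-client
  namespace: my-app
  annotations:
    iam.gke.io/gcp-service-account: cloudsql-access@my-project.iam.gserviceaccount.com

On the Google side, that service account needs the Cloud SQL Client role (roles/cloudsql.client) plus an IAM binding (roles/iam.workloadIdentityUser) allowing the Kubernetes service account to impersonate it, and the pods that should have access must run with this Kubernetes service account.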
You asked "how to know the IP to restrict them to a access to Cloud SQL". It's a wrong (or legacy) assumption. Google always says "Don't trust the network (and so, the IPs)". Base your security on the identity (the service account of the node pool and of the pod through workload identity) is a far better option.
I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role) that only has read-only access.
Is there any way to tell the config to use the admin role instead of view?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for the IAM entity that created the EKS cluster, because by default it is mapped to the "system:masters" K8s group automatically. So, if you want to give additional permissions in the K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is set in the aws-auth ConfigMap in the kube-system namespace.
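For illustration, a mapping entry in that ConfigMap looks roughly like this; the account ID, role name, and username are placeholders:

# aws-auth ConfigMap in kube-system: maps IAM roles to K8s users/groups.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-admin   # placeholder IAM role
      username: eks-admin
      groups:
        - system:masters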
Back to the question
I'm not sure why K8s mapped that IAM entity to the least-privileged K8s user. It may be the default behaviour (a bug?), or the mapping record (for view perms) came later in the ConfigMap and simply overwrote the previous mapping.
Anyway, there is no way to specify which K8s user to use with such a mapping.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per the docs, but I'm not sure whether that will work.
Some reading reference: #1, #2
I have a GKE cluster setup, dev and stg let's say, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands on that GKE cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but the IPs keep changing,
so my question is: how can I add the ever-changing default-pool node IPs from the other cluster to the master authorized networks?
EDIT: I think I found the solution. It's not the node IP but the NAT IP that I have to add to the authorized networks, so assuming those don't change I just need to add the NAT IP, I guess, unless someone knows a better solution?
I'm not sure you are going about this the right way. In Kubernetes, communication happens between services, which represent pods deployed on one or several nodes.
When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about the nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly, bypassing the standard way.
At most, you can reach a NodePort service exposed on NodeIP:nodePort.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to log in to GKE via a service account with a token.
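A sketch of what such a kubeconfig could look like; the server IP, CA data, and token are placeholders to be filled in from the console and the service account secret:

# kubeconfig: authenticate against the dev cluster's master with a service account token.
apiVersion: v1
kind: Config
clusters:
  - name: dev-cluster
    cluster:
      server: https://203.0.113.10                       # placeholder: master IP from the console
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: deployer
    user:
      token: <service account token>
contexts:
  - name: dev
    context:
      cluster: dev-cluster
      user: deployer
current-context: dev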
I'm running a hybrid EKS cluster where I'm trying to use AWS Fargate for some of my workload. From what I know, AWS Fargate can be used for stateless pods, which makes it natural that, for a standard app/db scenario, you would have to use a hybrid mode where the app runs on Fargate while the db runs on one of the EKS worker nodes.
The problem I see is that the app cannot communicate with the db in this case.
Now, would I be right to conclude that a workload on Fargate can be reached from outside of Fargate only when using an ALB ingress in front?
If that is true, it would also not solve this problem, since the app (on Fargate) needs to connect to the db (running on EKS worker nodes), not the other way around. I guess this could be solved by putting an ALB ingress in front of the db, but that seems like overkill to me.
Is there any other way around this problem?
Thanks.
Do you really need the db running on EKS? If you really do, I think you can create a ClusterIP Service for your db pods so that your application pods can access it without too much effort.
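A minimal sketch of such a service, with placeholder names, labels, and port:

# ClusterIP service exposing the db pods inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: my-app
spec:
  type: ClusterIP
  selector:
    app: db          # placeholder: label carried by the db pods
  ports:
    - port: 5432
      targetPort: 5432

The app on Fargate could then reach the database at db.my-app.svc.cluster.local:5432, assuming the security groups between the Fargate pods and the worker nodes allow that traffic.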
On the other hand, I would just go with an instance of RDS and create a security group there to allow access from your EKS Fargate pods.
We want to run a multi-tenant scenario that requires tenant separation on a network level.
The idea is that every tenant receives a dedicated node and a dedicated network that the tenant's other nodes can join. A tenant's nodes should be able to interact with each other in that network.
Networks should not be able to talk with each other (true network isolation).
Are there any architectural patterns to achieve this?
One Kubernetes cluster per tenant?
One Kubernetes cluster for all tenants, with one subnet per tenant?
One Kubernetes cluster across VPCs (speaking in AWS terms)?
The regular way to deal with multi-tenancy inside Kubernetes is to use namespaces. But this is within a kube cluster, meaning you still have the same underlying networking solution shared by all tenants. That is actually fine, as you have Network Policies to restrict networking in the cluster.
You can obviously run autonomous clusters per tenant, yet that is not exactly multi-tenancy then, just multiple clusters. Networking can be configured at the node level to route as expected, but you'd still be left with the issue of cross-cluster service discovery etc. Federation can help a bit with that, but I would still advise chasing the Namespaces + Policies approach.
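As a sketch of the namespace-per-tenant route, an isolation policy applied in each tenant namespace could look like this (the namespace name is a placeholder):

# Only pods from the same namespace may connect; ingress from other namespaces is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-a        # one such policy per tenant namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # empty pod selector with no namespace selector = this namespace only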
I see four ways to run multi-tenant k8s clusters at the network level:
Namespaces
Ingress rules
allow/deny and ingress/egress Network Policies
Network-aware Zones