Limit access from GKE node pools to a Cloud SQL DB - PostgreSQL

I have a GKE cluster and I would like to connect some, but not all (!), pods and services to a managed PostgreSQL Cloud SQL database running in the same VPC.
Of course, I could just go for it (https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine), but I would like to make sure that only the pods and services that should connect to the PostgreSQL DB are actually able to do so.
I thought of creating a separate node pool in my GKE cluster (https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools) where only the pods and services that should be able to reach the PostgreSQL DB run, and then allowing only that pool to connect by telling the DB which IPs to accept. However, it seems that I cannot set dedicated IPs at the node-pool level, only at the cluster level.
Do you have an idea how I can make such a restriction?

When you create your node pool, create it with a service account that doesn't have permission to access Cloud SQL instances.
Then leverage Workload Identity to attach a dedicated Kubernetes service account to just the pods that need database access, and grant the corresponding Google service account permission to access the Cloud SQL instance.
You asked how to know which IPs to allow access to Cloud SQL. That is the wrong (or at least a legacy) assumption. Google's guidance is "don't trust the network (and so, the IPs)". Basing your security on identity (the service account of the node pool, and of the pod through Workload Identity) is a far better option.

Related

CloudSQL Proxy on GKE: Service vs Sidecar

Does anyone know the pros and cons of installing the CloudSQL Proxy (which allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service, as opposed to running it as a sidecar alongside the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me please?
The sidecar pattern is preferred because it is the easier and more secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, and the proxy relies on the user to restrict access to it (typically by listening only on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone that can reach that service connects to the database authorized as "user X".
You can see this warning in the Cloud SQL proxy example running as a service in k8s, or watch this video on Connecting to Cloud SQL from Kubernetes which explains the reason as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. In Kubernetes, a pod is a group of one or more containers with shared storage and network, and a sidecar is a utility container in a pod that's loosely coupled to the main application container.
Sidecar pros: it scales naturally as you increase the number of pods, it can be injected automatically, and the pattern is already used by service meshes.
Sidecar cons: it is a bit harder to adopt, as developers can't just deploy their app but have to deploy the whole stack in each deployment; it consumes more resources; and it can be harder to manage, because every Pod must run its own copy of the helper container (in the classic example, a log aggregator pushing logs to a database or queue).
Refer to the documentation for more information.
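For concreteness, a minimal sketch of the sidecar pattern, assuming a hypothetical instance connection name my-project:us-central1:my-instance and the v1 proxy image; the application talks to the proxy over localhost only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      serviceAccountName: db-client      # service account authorized for Cloud SQL
      containers:
        - name: app
          image: my-app:latest           # hypothetical application image
          env:
            - name: DB_HOST
              value: "127.0.0.1"         # the proxy is reachable only inside the pod
            - name: DB_PORT
              value: "5432"
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
          command:
            - "/cloud_sql_proxy"
            - "-instances=my-project:us-central1:my-instance=tcp:5432"
          securityContext:
            runAsNonRoot: true
```

Because the proxy listens on 127.0.0.1 inside the pod, only the containers in that pod can use its identity; running the same proxy behind a Service would let any workload in the cluster connect as that identity.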

How to allow connection from specific AWS EKS pods to AWS RDS?

Let's say I have an EKS cluster with multiple pods hosting different applications. I want to allow connections from a specific application to an RDS instance without allowing all the pods in the EKS cluster to connect to the RDS.
After some research, I found out that there's a networking approach to solve the issue, by creating security groups for pods. But I am trying to look for another approach.
I was expecting to have a simple setup where I can:
create IAM policy with read/write permissions to the DB
create an IAM role and attach that policy
create a service account (IAM and k8s service accounts) with that role
assign the service account to the pods I want to grant RDS access.
But it seems that the only way to have IAM authentication from pods to RDS is to keep generating a new token every 15 minutes. Is this the only way?
I would recommend dividing this task into the following parts:
making sure the security concept is sound
allowing network traffic from EC2 worker nodes to RDS
creating Egress network policy for the cluster to only allow RDS traffic for specific pods
making sure the security concept is sound
The question here is: why would you want only specific pods to access the RDS database? Is there a trust issue with the parties deploying pods to the cluster, or is it a matter of compliance (some departments can't access other departments' resources)? If it's trust, maybe this separation is not enough and you should not allow untrusted pods on your cluster at all (container-escape vulnerabilities can yield root on the node).
allowing network traffic from EC2 worker nodes to RDS
For this, the security group of the worker EC2 nodes must be allowed on the RDS side (inbound rules), and, just to be sure, the security group of the EKS cluster nodes should also allow outbound connections to RDS. Without these generic rules, the traffic can't flow. You can be specific here - for example, allow access to only specific RDS instances, not all of them. You can also have more than one node group for your EKS cluster, and run the pods that require RDS access only on those nodes, using labels (see the sketch below).
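A minimal sketch of pinning the database-consuming pods to such a node group, assuming its nodes carry a hypothetical label node-group=rds-access:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api                  # hypothetical app that needs RDS access
spec:
  replicas: 2
  selector:
    matchLabels: { app: billing-api }
  template:
    metadata:
      labels: { app: billing-api }
    spec:
      # Schedule only onto nodes whose security group is allowed on the RDS side.
      nodeSelector:
        node-group: rds-access
      containers:
        - name: app
          image: billing-api:latest  # hypothetical image
```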
creating Egress network policy for the cluster
If you create a default deny-all egress policy (which is highly recommended), then no pods can access RDS by default. You can then apply more granular egress policies that allow access to the RDS database by namespace, label, or pod name. More here:
https://www.cncf.io/blog/2020/02/10/guide-to-kubernetes-egress-network-policies/
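A minimal sketch of both policies, assuming the RDS instance lives in a hypothetical CIDR 10.0.100.0/24 and the pods that need access carry the label app: billing-api:

```yaml
# Default deny-all egress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: prod
spec:
  podSelector: {}
  policyTypes: ["Egress"]
---
# Allow only the labelled pods to reach the RDS subnet on the PostgreSQL port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rds-egress
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: billing-api
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.100.0/24
      ports:
        - protocol: TCP
          port: 5432
```

Note that a default deny-all egress policy also blocks DNS, so in practice you usually add a rule allowing port 53 to kube-dns as well.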
Clarification: in this scenario you store the database credentials in Kubernetes secrets, mount them into the pod, and log in normally. If you want to get a connection without that kind of authentication, my answer won't help you.

How to connect to PostgreSQL cluster on DigitalOcean from CircleCI?

I have a Kubernetes cluster set up on DigitalOcean and a separate Postgres database instance there. In the database cluster settings there is an allowlist of IP addresses that are permitted to access the database cluster (which looks like a great idea).
I have a build and deploy process set up with CircleCI, and at the end of that process, after deploying a container to the K8s cluster, I need to run a database migration. The problem is that I don't know the CircleCI agent's IP address and cannot allow it in the DO settings. Does anybody know how we can access a DigitalOcean Postgres cluster from within CircleCI steps?
Unfortunately, when you rely on a distributed service that you don't manage, I would be very cautious about the restricted-IP approach. (Really, you have three services you don't manage: Postgres, Kubernetes, and CircleCI.) DigitalOcean's allowlist is an excellent security option for internal networking, since DO can track changes in droplet IPs, etc.
But when you are deploying from another service, especially if this is for production, and even if part of your solution runs on DigitalOcean infrastructure, I'd be very concerned that CircleCI's IPs will change dynamically. DO has no way of knowing when this happens because, unlike Postgres and Kubernetes, it doesn't manage CircleCI, even though it hosts part of your overall solution.
Essentially I have to advise you to either get an assurance of a static IP from your CircleCI vendor/provider, or disable the IP limitation on Postgres.

Which endpoint to connect to for read/write operations using AWS Aurora PostgreSQL Database Cluster

I have an application (AWS API Gateway) using an Aurora PostgreSQL cluster.
The cluster has one read/write (primary) endpoint and one reader endpoint.
At the moment, my application connects to the specific writer instance for all operations:
rds-instance-1.xxx.ap-southeast-2.rds.amazonaws.com
But I have the following endpoints available:
rds.cluster-xxx.ap-southeast-2.rds.amazonaws.com
rds.cluster-ro-xxx.ap-southeast-2.rds.amazonaws.com
rds-instance-1.xxx.ap-southeast-2.rds.amazonaws.com
rds-instance-1-ap-southeast-2c.xxx.ap-southeast-2.rds.amazonaws.com
If I am doing both read and write operations, should I keep connecting to the instance endpoint I'm using, or should I use rds.cluster-xxx.ap-southeast-2.rds.amazonaws.com? What are the benefits of using the different endpoints? I understand that if I connect to a read-only endpoint I can only do reads, but for read/writes, what's the difference between connecting to:
rds.cluster-xxx.ap-southeast-2.rds.amazonaws.com
Or
rds-instance-1.xxx.ap-southeast-2.rds.amazonaws.com
?
What is the right / best endpoint to use for general workloads, and why?
You should use the cluster writer and reader endpoints:
rds.cluster-xxx.ap-southeast-2.rds.amazonaws.com
rds.cluster-ro-xxx.ap-southeast-2.rds.amazonaws.com
The main benefit of using the cluster endpoint is that if a failover occurs for any reason, you do not need to change the endpoint in your application and you can expect only a minimal interruption of service.
Also, if you had, say, 3 read replicas, how would you manage connections to each reader yourself? It is better to use the cluster writer/reader endpoints.
Using the Reader Endpoint
You use the reader endpoint for read-only connections to your Aurora cluster. This endpoint uses a load-balancing mechanism to help your cluster handle a query-intensive workload. The reader endpoint is the endpoint that you supply to applications that do reporting or other read-only operations on the cluster.
Using the Cluster Endpoint
You use the cluster endpoint when you administer your cluster, perform extract, transform, load (ETL) operations, or develop and test applications. The cluster endpoint connects to the primary instance of the cluster. The primary instance is the only DB instance where you can create tables and indexes, run INSERT statements, and perform other DDL and DML operations.
Instance Endpoint
The instance endpoint provides direct control over connections to the DB cluster, for scenarios where using the cluster endpoint or reader endpoint might not be appropriate. For example, your client application might require more fine-grained load balancing based on workload type. In this case, you can configure multiple clients to connect to different Aurora Replicas in a DB cluster to distribute read workloads. Instance endpoints can also be used to improve connection speed after a failover for Aurora PostgreSQL.
You can check further details under AWS RDS Endpoints.
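As an illustration of how the split typically looks in application configuration, a minimal sketch using the placeholder hostnames from the question (the key names are hypothetical):

```yaml
# Route writes through the cluster (writer) endpoint and reads through the
# reader endpoint; both follow the cluster topology automatically.
database:
  writer_host: rds.cluster-xxx.ap-southeast-2.rds.amazonaws.com     # always points at the current primary
  reader_host: rds.cluster-ro-xxx.ap-southeast-2.rds.amazonaws.com  # load-balances across read replicas
  port: 5432
```

Only connections that genuinely need a specific instance (for example, custom load balancing or failover tuning) should use the instance endpoints.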

Should my database server be a Pod on the same Service

In terms of providing a URL (to a Postgres database) for my web server: should the Postgres database be behind its own Service, or is it okay for it to be a Pod on the same Service as the web server?
Can I configure a Pod to have a FQDN that doesn't change?
It's absolutely fine, and I would say recommended, to keep the database behind its own Service in k8s.
The database would need to be backed by a persistent volume as well.
You can reference the service in other webserver/application pods.
As long as you expose the service properly, the FQDN should work and stays stable.
This is one of the simpler methods; you can evolve it based on your network design.