Google Cloud SQL - Service Account

I am following the link below to create a Cloud SQL Proxy.
https://cloud.google.com/sql/docs/mysql/connect-container-engine
When I get to the step to create the service account, I am unable to see any Cloud SQL roles, even though I have a MySQL instance associated with the project and I have enabled the Cloud SQL Administration API as described in the previous step.
Also, the whole process seems quite long-winded. Is there a way to connect directly from the container cluster to Cloud SQL without using the proxy? If so, how do I find the IP address of the Cloud SQL instance, and how do I find the container cluster's IP address to whitelist?
Many thanks
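On finding the instance's IP address: here is a minimal sketch using the Cloud SQL Admin API (assuming the google-api-python-client package and application-default credentials; the project and instance names are placeholders):

```python
# Hypothetical sketch: list the IP addresses of a Cloud SQL instance
# via the Cloud SQL Admin API. Requires "google-api-python-client"
# and credentials with Cloud SQL access.
from googleapiclient import discovery

PROJECT = "my-project"          # placeholder: your GCP project ID
INSTANCE = "my-mysql-instance"  # placeholder: your Cloud SQL instance name

service = discovery.build("sqladmin", "v1beta4")
instance = service.instances().get(project=PROJECT, instance=INSTANCE).execute()

# Each entry has a "type" (e.g. PRIMARY, PRIVATE) and an "ipAddress".
for addr in instance.get("ipAddresses", []):
    print(addr["type"], addr["ipAddress"])
```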

Related

CloudSQL Proxy on GKE: Service vs Sidecar

Does anyone know the pros and cons for installing the CloudSQL-Proxy (that allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service as opposed to making it a sidecar against the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me please?
The sidecar pattern is preferred because it is the easiest and most secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, so it relies on the user to restrict access to the proxy (typically by having it listen only on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone who can connect to that proxy connects to the database as "user X".
You can see this warning in the Cloud SQL proxy example for running it as a service in k8s, or watch this video on Connecting to Cloud SQL from Kubernetes, which explains the reason as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
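To illustrate, with the proxy sidecar listening on localhost, the application connects as if the database were local. A minimal sketch, assuming the PyMySQL package and placeholder credentials:

```python
# Hypothetical sketch: connect to a MySQL Cloud SQL instance through a
# proxy sidecar listening on 127.0.0.1:3306. The user, password, and
# database names are placeholders.
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",  # the sidecar shares the pod's network namespace
    port=3306,         # port the proxy was configured to listen on
    user="app-user",
    password="app-password",
    database="app-db",
)
with conn.cursor() as cur:
    cur.execute("SELECT NOW()")  # simple connectivity check
    print(cur.fetchone())
conn.close()
```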
A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can “help” or enhance how the application operates. In Kubernetes, a pod is a group of one or more containers with shared storage and network, and a sidecar is a utility container in a pod that’s loosely coupled to the main application container.
Sidecar pros: scales indefinitely as you increase the number of pods; can be injected automatically; already used by service meshes.
Sidecar cons: a bit harder to adopt, since developers can't just deploy their app but must deploy a whole stack in each deployment; it consumes more resources and is harder to secure, because every Pod must run its own copy of the helper (for example, a log aggregator pushing logs to a database or queue).
Refer to the documentation for more information.
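To make the sidecar pattern concrete, here is a hedged sketch using the official Kubernetes Python client; the proxy image tag, instance connection name, and app image are placeholders:

```python
# Hypothetical sketch: define a pod with an application container plus a
# Cloud SQL proxy sidecar, using the "kubernetes" Python client package.
from kubernetes import client

proxy = client.V1Container(
    name="cloudsql-proxy",
    image="gcr.io/cloudsql-docker/gce-proxy:1.33.2",  # placeholder tag
    command=[
        "/cloud_sql_proxy",
        # placeholder instance connection name: project:region:instance
        "-instances=my-project:us-central1:my-instance=tcp:3306",
    ],
)
app = client.V1Container(
    name="app",
    image="gcr.io/my-project/my-app:latest",  # placeholder app image
    # The app reaches the database at 127.0.0.1:3306 via the sidecar.
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-proxy"),
    spec=client.V1PodSpec(containers=[app, proxy]),
)
```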

Can I use the cloudsql-proxy to connect to a custom VM running postgres?

I have a custom Postgres instance running on a GCE VM. I am not using CloudSQL. I'd like to use the functionality provided by the cloudsql-proxy, but when I specify my custom instance the proxy fails.
googleapi: Error 404: The Cloud SQL instance does not exist., instanceDoesNotExist
It appears that only CloudSQL instances work and I don't understand the limitation. It seems like the proxy should work on any VM with port 5432 open.
Cloud SQL Proxy can only be used with Cloud SQL instances.
In the documentation About the Cloud SQL Proxy there is no mention of using it for custom databases inside GCE.
As it is stated in the documentation:
The Cloud SQL Proxy provides secure access to your Cloud SQL Second Generation instances without having to whitelist IP addresses or configure SSL.
However, I found documentation on Access control overview, where you can find an alternative.
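For a self-managed Postgres on a VM, one such alternative is a plain TLS-protected connection restricted by firewall rules. A minimal sketch, assuming the psycopg2 package and placeholder host, credentials, and certificate path:

```python
# Hypothetical sketch: connect directly to a self-managed Postgres on a
# GCE VM, enforcing TLS. The host and credentials are placeholders, and
# port 5432 must be open to your source IP in the firewall rules.
import psycopg2

conn = psycopg2.connect(
    host="10.128.0.5",            # placeholder: the VM's IP address
    port=5432,
    dbname="app-db",
    user="app-user",
    password="app-password",
    sslmode="verify-full",        # require TLS and verify the server cert
    sslrootcert="server-ca.pem",  # placeholder CA certificate path
)
cur = conn.cursor()
cur.execute("SELECT version()")   # simple connectivity check
print(cur.fetchone())
conn.close()
```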

Will PrivateLink allow firehose to access my private Redshift cluster?

I am trying to set up firehose to send data from a kinesis stream to a redshift cluster. Firehose successfully inserts the data to my s3 bucket, but I am receiving the following error when firehose attempts to execute the s3->Redshift copy command:
The connection to the specified Amazon Redshift cluster failed. Ensure that security settings allow Firehose connections, that the cluster or database specified in the Amazon Redshift destination configuration JDBC URL is correct, and that the cluster is available.
I have performed every setup step according to this, except for one: I did not make my Redshift cluster publicly accessible. I am unable to do this because the cluster is in a private VPC that does not have an internet gateway attached.
After researching the issue, I found this article which provides insight for how to set up an AWS PrivateLink with firehose. However, I have heard that some AWS services support PrivateLink and others do not. Would PrivateLink work for this case?
I am also concerned with how this would affect the security of my VPC. Could anyone provide insight to possible risks to using a PrivateLink?
I was able to solve this issue. Add an Internet gateway to your VPC route table.
Go to the Redshift VPC's route tables.
On the Routes tab (you must have 3 private routes), choose Edit, Add another route, and add the following routes as necessary. Choose Save when you're done.
For IPv4 traffic, specify 0.0.0.0/0 in the Destination box, and select the internet gateway ID in the Target list.
If you add the internet gateway ID to all 3 private route tables, you might see failures in other applications that use the same routes/VPC. To fix that, update only 1 route table with the internet gateway ID and leave the NAT gateway as the target for 0.0.0.0/0 in the other two.
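The same route change can be scripted. A hedged sketch with boto3; the route table and gateway IDs are placeholders:

```python
# Hypothetical sketch: add a default route to an internet gateway on one
# of the private route tables, using boto3. The IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder route table ID
    DestinationCidrBlock="0.0.0.0/0",      # all IPv4 traffic
    GatewayId="igw-0123456789abcdef0",     # placeholder internet gateway ID
)
```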

Connect Google Cloud Run to MongoDB Atlas

I'm evaluating a move from Google Kubernetes Engine to Google Cloud Run, to improve cost and resource efficiency within our company. I'm also in the process of transitioning our workflows from monolithic PHP and Ruby apps to a more nimble Node.js setup, using MongoDB.
For a small organization like ours, I like the idea of managed services such as Google Cloud Run and MongoDB Atlas; however, I'm concerned about security. In MongoDB Atlas, it seems the only real security measure is IP whitelisting, and I obviously don't have a stable IP on Google Cloud Run.
I'm definitely not a network expert, so I'm wondering if anyone has any ideas for securely connecting Cloud Run to MongoDB Atlas, while still maintaining scalability. If I have to remain on GKE, so be it, I just want to know all of my options before I move forward.
IP whitelist - by its very nature, Google Cloud Run would seem to be anti-static-IP, so this seems to be a non-starter.
I evaluated items such as Cloud NAT and Cloud VPC Peering, but from what I can tell Cloud Run does not have access to the VPC, so it seems like this wouldn't help either.
Cloud Run and Cloud Functions share the same underlying infrastructure. Cloud Functions has the capability to connect to a VPC, so Cloud Run will support this capability one day, I hope by the end of 2019.
If you can, I just recommend you to wait!
Update (October 2020): Cloud Run has now launched the VPC egress feature, which lets you route outbound requests through Cloud NAT with a static IP. You can follow this step-by-step guide in the documentation to configure a static IP to connect to MongoDB Atlas.
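Once the egress static IP is on the Atlas IP access list, the connection itself is ordinary. A minimal sketch, assuming the pymongo package (with dnspython for the +srv scheme) and a placeholder connection string:

```python
# Hypothetical sketch: connect from Cloud Run to MongoDB Atlas once the
# Cloud NAT static IP is whitelisted in Atlas. The URI is a placeholder,
# e.g. injected via a Cloud Run environment variable.
import os
from pymongo import MongoClient

uri = os.environ.get(
    "MONGODB_URI",
    "mongodb+srv://app-user:app-password@cluster0.example.mongodb.net/app-db",
)
mongo = MongoClient(uri)
print(mongo.admin.command("ping"))  # simple connectivity check
```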

How to reach hosted Postgres in GCP from a Kubernetes cluster, directly via private IP

So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway (finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables), and my attempts failed.
Beyond falling back to the Cloud SQL proxy sidecar, does anyone have an idea why this would not work?
Thanks.
Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster in the same VPC and region as your Cloud SQL instance.
In the end, the simplest thing to do was to just use the Google Cloud SQL proxy. Instead of a sidecar, since I have multiple containers needing database access, I put the proxy into my cluster as its own container behind a Service, and it seems to just work.
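For reference, with the proxy behind a Service, clients connect to the Service's cluster DNS name instead of localhost (bearing in mind the authentication caveat discussed above). A hedged sketch assuming psycopg2; the Service name, namespace, and credentials are placeholders:

```python
# Hypothetical sketch: connect to a Cloud SQL proxy exposed as a
# Kubernetes Service named "cloudsql-proxy" in the "default" namespace.
import psycopg2

conn = psycopg2.connect(
    host="cloudsql-proxy.default.svc.cluster.local",  # Service DNS name
    port=5432,
    dbname="app-db",
    user="app-user",
    password="app-password",
)
```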
You can create VPC peering over private IP only if your Cloud SQL instance and your compute resources are in the same VPC. When creating the Cloud SQL instance you can choose the VPC and subnet, set up the same for GKE, and then make the connection from a pod to Cloud SQL, as in the sketch below.
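If those private-IP requirements are met, the connection from a pod is then direct. A minimal sketch assuming psycopg2; the IP address and credentials are placeholders:

```python
# Hypothetical sketch: connect from a GKE pod directly to a Cloud SQL
# instance's private IP (cluster and instance in the same VPC and region).
import psycopg2

conn = psycopg2.connect(
    host="10.108.224.3",  # placeholder: the instance's private IP
    port=5432,
    dbname="app-db",
    user="app-user",
    password="app-password",
)
cur = conn.cursor()
cur.execute("SELECT 1")   # simple connectivity check
print(cur.fetchone())
```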