I'm still having problems accessing the Cloud SQL instance from a GCE container. When I try to open a mysql connection, I get the following error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
The connection works fine from my local machine, though (the instance has a public IP, and I have added my office's IP to the Authorized Networks list), so the instance is reachable over the internet just fine.
I guess the database's access control is blocking my access from the GCE network, but I'm unable to figure out how to configure this.
I added my project to "Authorized App Engine Applications" in the Cloud SQL control panel, but that doesn't seem to help.
EDIT:
If I add "0.0.0.0/0" to Allowed Networks, all works well. This is obviously not what I want, so what do I need to enter instead?
EDIT2: I could also take all the public IPs from my Kubernetes cluster (obtained through gcloud compute instances list) and add them to the Cloud SQL access list manually. But this doesn't seem right, does it?
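For reference, I pull those external IPs with something like this (the projection assumes each node has a single NIC with one access config):

    # Print each instance's external (NAT) IP
    gcloud compute instances list \
      --format='value(name, networkInterfaces[0].accessConfigs[0].natIP)'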
The recommended solution is to use an SSL connection together with that 0.0.0.0/0 CIDR, so that access is limited to clients holding the correct key. I have also read that Google does not promise a specific IP range, so the /14 CIDR might not always work. I had to set up an SSL connection to my Cloud SQL instance for the same reasons.
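A rough sketch of that SSL setup with the gcloud and mysql CLIs follows; the instance name and file names are placeholders, and server-ca.pem/client-cert.pem are downloaded from the Cloud SQL console:

    # Create a client certificate; the private key is written to client-key.pem
    gcloud sql ssl-certs create client-cert client-key.pem --instance=my-instance
    # Connect over SSL to the instance's public IP
    mysql -h INSTANCE_PUBLIC_IP -u root -p \
      --ssl-ca=server-ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem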
You should use the public IP addresses of the GCE instances to correctly allow traffic to your Cloud SQL instance (as you mentioned in EDIT2).
You can find more information in Cloud SQL documentation: https://cloud.google.com/sql/docs/gce-access
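With the gcloud CLI, authorizing those addresses might look like this (the instance name and IPs are placeholders; note the flag replaces the whole list):

    gcloud sql instances patch my-instance \
      --authorized-networks=203.0.113.10/32,203.0.113.11/32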
If you add the /14 CIDR block for your Container Engine cluster as the source address range, does that work?
To find the CIDR block for your cluster, click on the cluster name in the Google Cloud Console and find the row labeled "Container address range".
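If you prefer the CLI, something like this should print the same range (cluster name and zone are placeholders):

    # Prints the cluster CIDR, e.g. 10.48.0.0/14
    gcloud container clusters describe my-cluster --zone=us-central1-a \
      --format='value(clusterIpv4Cidr)'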
Related
I have created my organisation's infrastructure in GCP following the Cloud Foundation Toolkit, using the Terraform modules provided by Google.
The following table lists the IP ranges for all environments:
Now I am in the process of deploying my application that consists of basically Cloud Run services and a Cloud SQL (Postgres) instance.
The Cloud SQL instance was created with a private IP from the "unallocated" IP range that is reserved for peered services (such as Cloud SQL).
In order to establish connectivity between Cloud Run and Cloud SQL, I have also created the Serverless VPC Connector (IP range 10.1.0.16/28) and configured the Cloud SQL proxy.
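For reference, the connector was created roughly like this (the connector name and network are placeholders):

    gcloud compute networks vpc-access connectors create serverless-connector \
      --region=us-central1 --network=my-vpc --range=10.1.0.16/28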
When I try to connect to the database from the Cloud Run service I get this error after ~10s:
CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: Post "https://www.googleapis.com/sql/v1beta4/projects/[my-project]/instances/platform-db/createEphemeral?alt=json&prettyPrint=false": context deadline exceeded
I have granted roles/vpcaccess.user for both the default Cloud Run SA and the one used by the application in the host project.
I have granted roles/compute.networkUser for both SAs in the service project. I also granted roles/cloudsql.client for both SAs.
I have enabled servicenetworking.googleapis.com and vpcaccess.googleapis.com in the service project.
I have run out of ideas and I can't figure out what the issue is.
It seems like a timeout when Cloud Run makes the POST request to the Cloud SQL API, so it looks like the VPC connector (10.1.0.16/28) cannot reach the Cloud SQL instance (10.0.80.0/20).
Has anyone experienced this issue before?
When you use the built-in Cloud SQL connection in Cloud Run (and also App Engine and Cloud Functions), a connection similar to the Cloud SQL proxy is created. This connection can only be established over a Cloud SQL public IP, even if you have a serverless VPC connector and your database is reachable through the VPC.
If you have only a private IP on Cloud SQL, you need to use the private IP to reach the database, not the built-in Cloud SQL connector. More detail in the documentation.
I also wrote an article on this
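As a sketch of that approach (service name, connector name, and address are placeholders): deploy with the connector attached and point the app directly at the database's private IP, instead of relying on the built-in Cloud SQL connection:

    gcloud run deploy my-service --image=gcr.io/my-project/my-app \
      --vpc-connector=serverless-connector \
      --set-env-vars=DB_HOST=10.0.80.3,DB_PORT=5432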
If you are using a private IP, you need to check the docker bridge network's IP range. Here is what the documentation says:
If a client cannot connect to the Cloud SQL instance using private IP, check to see if the client is using any IP in the range 172.17.0.0/16. Connections fail from any IP within the 172.17.0.0/16 range to Cloud SQL instances using private IP. Similarly, Cloud SQL instances created with an IP in that range are unreachable. This range is reserved for the docker bridge network.
To resolve some of the issues you are experiencing, follow the documentation here and post any error messages you receive. For example, you could try:
Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run this command in an environment with Cloud SDK and mysql client installed. You can also run this command in Cloud Shell, which is available in the Google Cloud Console and has Cloud SDK and the mysql client pre-installed.
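For example (the instance name is a placeholder):

    # Authorizes your current IP for about five minutes and opens a mysql session
    gcloud sql connect my-instance --user=root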
Temporarily allow all IP addresses to connect to an instance: for IPv4, authorize 0.0.0.0/0 (for IPv6, authorize ::/0). After you have tested this, please make sure you remove it again, as it opens your instance up to the world!
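With gcloud, that temporary opening could look like this; the second command removes it again:

    # Temporarily allow all IPv4 addresses (NOT for production!)
    gcloud sql instances patch my-instance --authorized-networks=0.0.0.0/0
    # After testing, remove the authorized networks again
    gcloud sql instances patch my-instance --clear-authorized-networks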
Are you using connection pools?
If not, I would create a cache of connections so that when your application needs to talk to the database, it can take a temporary connection from the pool. Once the application has finished its operation, the connection returns to the pool for later use. For this to work correctly, connections need to be opened and closed efficiently and not waste any resources.
I'm having issues accessing MongoDB Atlas from Google Cloud Functions. It gives me an error regarding IP whitelisting, even though I've added both the Serverless VPC Access IP address range and the VPC Network Peering IP address range to the MongoDB whitelist.
I've also created MongoDB peering with google cloud.
If I allow access from anywhere, then MongoDB starts working fine; otherwise it gives an error regarding IP whitelisting.
I'm not sure what else I should add to the MongoDB whitelist when I've already added both IP ranges.
Can anyone help me with this? A simple step-by-step guide would mean a lot (images/videos would help a lot if possible).
EDIT:
I used the Atlas GCP Project ID and the Atlas VPC Name to create the VPC Network Peering, and the peering is Active and Available on both sides.
After that I created the Serverless VPC Access connector and added it to the connection settings of my function (the function connects to MongoDB to fetch data). It works fine if I set MongoDB to allow access from everywhere, but it does not work without that.
After that I added all three CIDR blocks to the IP whitelist: the CIDR block from MongoDB Atlas (as in the first image), the CIDR block from the Serverless VPC Access connector, and the CIDR block from the VPC network as well.
But I'm still confused: when I run this function it still gives me an error about the IP whitelist, and it only works if I allow traffic from everywhere in MongoDB.
I don't know what I'm doing right and what I'm doing wrong, as there aren't any videos available on the internet showing how to achieve this.
I even tried this article but still nothing works out.
https://medium.com/better-programming/connecting-google-cloud-functions-with-mongodb-atlas-499a0a82ccf3
This is the error I'm getting.
If you know you need to whitelist specific IPs:
Whitelist all IPs.
Connect successfully.
Download server log.
Figure out which IP the connection came from (see the sketch after this list).
Whitelist that IP.
Verify this IP is in your expected range, etc.
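A rough sketch of steps 3-4, assuming the usual mongod log format in the downloaded Atlas logs:

    # Each inbound client appears as 'connection accepted from <ip>:<port>'
    grep "connection accepted" mongodb.log | tail -n 20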
If you know you don't need to whitelist specific IPs:
Reference Atlas documentation that says so and explains how VPC peering is supposed to work (medium posts are not a substitute for official documentation).
If you don't know whether specific IPs need to be whitelisted:
Follow the first procedure and whitelist your IPs.
Then look for official documentation stating what the proper usage would be.
I'm trying to connect my app, hosted on Google Cloud Platform (GCP) App Engine, to my MongoDB Atlas database.
MongoDB wants me to whitelist the GCP app's IP.
But GCP doesn't give me a static IP to whitelist.
I want to make sure I apply security best practices, and as far as I understand, whitelisting my DB for all IPs is not secure. So how can I do it without opening it up to all IPs?
You have two solutions:
You can authorize the App Engine IP ranges, but this is not secure, as described in the documentation:
From this example, we see that both the 8.34.208.0/20 and 8.35.192.0/21 IP ranges can be used for App Engine traffic. Other queries for any additional netblocks may return additional IP ranges.
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
You can perform VPC peering. This requires several things:
Have a paid subscription to Mongo Atlas.
Create a peering between Mongo Atlas and your project: https://docs.atlas.mongodb.com/security-vpc-peering/
Create a serverless VPC connector and add it to your App Engine app to allow it to reach private IPs on the VPC (and on peerings attached to the VPC, like your Mongo Atlas DB).
You have the option of reserving a static IP while creating a VM.
On the"create instance" page, scroll to "networking" you are presented with options for your
I. Internal IP
II. External IP
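Alternatively, a static external IP can be reserved and attached from the CLI, roughly like this (names, region, and zone are placeholders):

    # Reserve a regional static external IP
    gcloud compute addresses create my-static-ip --region=us-central1
    # Attach it to a new VM at creation time
    gcloud compute instances create my-vm --zone=us-central1-a --address=my-static-ip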
If you are running M10-Cluster (or higher) on Atlas, VPC-Peering is your way to go. I'd recommend trying this tutorial. They're explaining what CIDR-ranges (what you referred to as IPs) to whitelist.
One thing to notice here: they are using GCP's Kubernetes Engine. With App Engine there is a little extra effort, as it is one of GCP's "serverless" solutions, which is the reason why you should not use static IPs or anything like that. You will need to connect your app to the VPC network via a connector:
1. Create a connector in the same region as your GAE app following these instructions. You can find out the current region of your GAE app with gcloud app describe. Just give the connector the range 10.8.0.0 for now (/28 is added automatically). Remember the name you gave it.
2. Depending on your environment, your app has to point to that connector. In Node.js it's your app.yaml file, and it looks similar to this:

    runtime: nodejs10
    vpc_access_connector:
      name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
3. Go to your Atlas project, navigate to Network Access and whitelist the CIDR range you set for the connector in Step 1.
4. You may also need to whitelist the CIDR range from Step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.
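A sketch of such a rule with gcloud; the rule name, port, and network here are assumptions (27017 being MongoDB's default port):

    gcloud compute firewall-rules create allow-vpc-connector \
      --network=default --direction=INGRESS \
      --source-ranges=10.8.0.0/28 --allow=tcp:27017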
I have created a Cloud SQL instance that is part of a VPC I created.
I'm able to connect to this Cloud SQL instance using the Cloud SQL proxy, but I'm unable to connect using the instance's public IP, even though I added a firewall rule to this VPC.
The error I'm getting:
Unable to connect to host <public-ip-of-cloudsql>, or the request timed out.
Be sure that the address is correct and that you have the necessary privileges, or try increasing the connection timeout (currently 10 seconds).
MySQL said:
Can't connect to MySQL server on '<public-ip-of-cloudsql>' (4)
The following is the firewall rule I added; my home IP address is in the blocked-out area.
Please let me know if I'm missing something. I can provide more details if needed.
These are the steps you should follow in order to connect to Cloud SQL using the public IP:
1. Create a Cloud SQL instance, including configuring the default user.
Assuming you use a local client:
2. Install the client.
3. Configure access to your Cloud SQL instance.
4. Connect to your Cloud SQL instance.
You can find a detailed explanation here: Connecting MySQL client using public IP
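For instance, step 4 with a local mysql client might look like this (the address and user are placeholders):

    # Connect to the instance's public IP once your client IP is authorized
    mysql --host=INSTANCE_PUBLIC_IP --user=root --password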
If you are using the Cloud SQL proxy to connect via public ip, it requires port 3307 to be open to the address.
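For example, a proxy invocation of that kind might look like this (the instance connection name is a placeholder):

    # The proxy dials the instance's public IP on port 3307 and serves MySQL locally on 3306
    ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306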
If you aren't using the Cloud SQL proxy to connect via public ip, you need to authorize your external IP.
I was able to connect to the Cloud SQL instance that is part of a VPC by just adding the client IP address to the Authorized networks.
It's weird; I tried many times before and couldn't succeed, but it is working now.
Thanks, guys, for the answers.
I'm not sure how to phrase this question or even if it's relevant here.
I'm researching a solution to move our in-house MongoDB installation to a cloud-based DB-as-a-service solution at mLab.
The company states at http://docs.mlab.com/security/#network that if I deploy the DB in my region (I use Google Cloud):
When you connect to your mLab database from within the same datacenter/region, you communicate over your cloud hosting provider’s internal network.
How is that statement possible?
When I create a DB at mLab, I get an external URL to connect to:
ds021984.mlab.com -> 104.154.103.88 instead of an internal host name 10.x.x.x
So how can that address be external without deeply affecting my latency?
Am I missing something? How is that statement possible?
The only time you can use the internal IP to address a VM in GCP is if that VM is in the same network resource (and hence, the same GCP account). GCP is smart enough to know if the external IP being addressed is a GCP address, and will route the traffic such that it does not leave the region. This is pretty evident when you ping an external IP from another VM in the region, you'll typically get sub-millisecond response times.
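For example, from a VM in the same region (using the address from the question):

    # Sub-millisecond round-trip times suggest the traffic never leaves the region
    ping -c 3 104.154.103.88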