Also, I need to connect the app hosted in the same project to the cluster and access the database.
I don't know which IP to give in mysql.createConnection in my Node.js app:
var connection = mysql.createConnection({
  host: "IP",
  user: "username",
  password: "password",
  database: "databasename"
});
"How to connect to Cloud SQL" is kind of a broad question, and you haven't given very many details as to your environment, but I'll try to point you in the right direction.
First, there are generally 3 ways to connect to Cloud SQL: via environment connectors, Private IP, or Public IP.
Environment Connectors (App Engine & Cloud Functions)
If you are using Google App Engine or Cloud Functions, you should use the /cloudsql socket provided by the environment. See this page here for examples.
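For reference, here is a minimal sketch of what that looks like with the mysql package from the question; the instance connection name is a placeholder you would replace with your own (PROJECT:REGION:INSTANCE):

const mysql = require('mysql');

// On App Engine / Cloud Functions the environment exposes a unix socket
// instead of a TCP host. The path below is a placeholder.
const connection = mysql.createConnection({
  socketPath: '/cloudsql/PROJECT:REGION:INSTANCE',
  user: 'username',
  password: 'password',
  database: 'databasename'
});

connection.connect(err => {
  if (err) throw err;
  console.log('Connected through the /cloudsql socket');
});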
Private IP (Compute or Kubernetes Engine)
To connect via Private IP, your app needs to have access to a VPC. This can be either a Compute Engine VM or a GKE cluster. Then your app can access the instance's "Private IP" just like it would a local database.
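In that case the host in your mysql.createConnection call is simply the instance's private IP as shown on its Cloud SQL page; a sketch, with 10.0.0.3 as a made-up placeholder:

const mysql = require('mysql');

// Replace 10.0.0.3 with the "Private IP" shown for your Cloud SQL instance.
const connection = mysql.createConnection({
  host: '10.0.0.3',
  user: 'username',
  password: 'password',
  database: 'databasename'
});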
Public IP (Anything with access to the internet)
Finally, you can connect via Public IP. This can be done as long as you have access to the internet, but by default public connections need to be authenticated. This can be done 3 different ways:
Using the Cloud SQL proxy (see the sketch after this list)
Using an SSL cert
Whitelisting an IP address
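For the proxy option, the proxy runs next to your app (started with something like ./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306) and your code connects to it on localhost; a rough sketch, assuming the proxy listens on the default MySQL port:

const mysql = require('mysql');

// The proxy handles authentication and encryption; the app just talks to
// localhost as if the database were local.
const connection = mysql.createConnection({
  host: '127.0.0.1',
  port: 3306,
  user: 'username',
  password: 'password',
  database: 'databasename'
});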
Hope this helps.
So I have GCP and Kubernetes set up, and I have a web app (Apache OFBiz) running on pods in the GKE cluster. We have a domain that points to the web app, so essentially it's accessible from anywhere on the internet.
Our issue is that, since this is a school project, we want to limit access to the web app to the internal network on GCP; we want to simulate a VPN connection. I have a VPN gateway set up, but I have no idea what to do on any random computer to simulate a connection to the internal network on GCP.
Do I need something else to make this work? What are the steps on the host to connect to GCP? And finally, how do I go about limiting access to the web app so only people on the internal network can reach it?
When I want to test a VPN, I simply create a new VPC in my project and connect the two with Cloud VPN. Then, in the new VPC, you can create a VM that simulates a computer on the other side of the VPN and thus simulate what you want.
To set up a VPN on GCP you can use Cloud VPN with static or dynamic routing. You will need to configure a remote peer at the location from which you want to access your GCP resources in order to establish the connection towards the Cloud VPN gateway on the GCP end.
This means you may need a router on-premises that supports creating VPN tunnels, or a host that acts as a router and establishes the connection to Cloud VPN using VPN software (Strongswan, for example).
You can block external access to the resources in your VPC network by using GCP firewall rules and only allowing the specific ports or source IP ranges you wish.
Another option, even though it isn't a VPN and the traffic isn't encrypted, is to only allow ingress traffic from the public IP from which you want to reach your internal VPC. This is less secure and only works if you have a static public IP on-premises.
Since you said this is a school project, I would recommend asking your teacher for more direct advice. That said, you can't "simulate" a VPN but you can set up an IPSec client on your laptop or whatever and actually connect to it. Unfortunately Google doesn't appear to have any documentation on this so I'm guessing they presume you already know IPSec well enough to write a connection config yourself.
Using kubectl port-forward might be an easier solution.
I have created my organisation infrastructure in GCP following the Cloud Foundation Toolkit using the Terraform modules provided by Google.
The following table lists the IP ranges for all environments:
Now I am in the process of deploying my application that consists of basically Cloud Run services and a Cloud SQL (Postgres) instance.
The Cloud SQL instance was created with a private IP from the "unallocated" IP range that is reserved for peered services (such as Cloud SQL).
In order to establish connectivity between Cloud Run and Cloud SQL, I have also created the Serverless VPC Connector (IP range 10.1.0.16/28) and configured the Cloud SQL proxy.
When I try to connect to the database from the Cloud Run service I get this error after ~10s:
CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: Post "https://www.googleapis.com/sql/v1beta4/projects/[my-project]/instances/platform-db/createEphemeral?alt=json&prettyPrint=false": context deadline exceeded
I have granted roles/vpcaccess.user for both the default Cloud Run SA and the one used by the application in the host project.
I have granted roles/compute.networkUser for both SAs in the service project. I also granted roles/cloudsql.client for both SAs.
I have enabled servicenetworking.googleapis.com and vpcaccess.googleapis.com in the service project.
I have run out of ideas and I can't figure out what the issue is.
It seems like a timeout error when Cloud Run tries to make a POST request to the Cloud SQL API. So it seems like the VPC connector (10.1.0.16/28) cannot reach the Cloud SQL instance (10.0.80.0/20).
Has anyone experienced this issue before?
When you use the built-in Cloud SQL connection in Cloud Run (and also App Engine and Cloud Functions), a connection similar to the Cloud SQL proxy is created. This connection can only be made to a Cloud SQL public IP, even if you have a serverless VPC connector and your database is reachable through the VPC.
If your Cloud SQL instance only has a private IP, you need to use the private IP to reach the database, not the built-in Cloud SQL connector. More detail in the documentation.
I also wrote an article on this.
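In practice that means pointing your client at the private IP through the serverless VPC connector instead of using the built-in connection; a minimal sketch with the pg package, assuming the private IP and credentials are passed in as environment variables (the variable names here are made up):

const { Pool } = require('pg');

// DB_PRIVATE_IP, DB_USER, DB_PASS and DB_NAME are hypothetical env vars;
// traffic to the private IP goes through the serverless VPC connector.
const pool = new Pool({
  host: process.env.DB_PRIVATE_IP,   // e.g. an address in 10.0.80.0/20
  port: 5432,
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME
});

async function ping() {
  const { rows } = await pool.query('SELECT NOW()');
  console.log(rows[0]);
}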
If you are using a private IP, you need to check the docker bridge network's IP range. Here is what the documentation says:
If a client cannot connect to the Cloud SQL instance using private IP, check to see if the client is using any IP in the range 172.17.0.0/16. Connections fail from any IP within the 172.17.0.0/16 range to Cloud SQL instances using private IP. Similarly, Cloud SQL instances created with an IP in that range are unreachable. This range is reserved for the docker bridge network.
To resolve some of the issues you are experiencing, follow the documentation here and post any error messages you receive. For example, you could try:
Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run this command in an environment with Cloud SDK and mysql client installed. You can also run this command in Cloud Shell, which is available in the Google Cloud Console and has Cloud SDK and the mysql client pre-installed.
Temporarily allow all IP addresses to connect to an instance: for IPv4, authorize 0.0.0.0/0 (for IPv6, authorize ::/0). After you have tested this, please make sure you remove it again, as it opens your instance up to the world!
Are you using connection pools?
If not, I would create a cache of connections so that when your application needs to talk to the database, it can get a temporary connection from the pool. Once the application has finished its operation, the connection returns to the pool for later use. For this to work correctly, connections need to be opened and closed efficiently so they don't waste any resources.
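A minimal sketch of such a pool with the mysql package (the connection details are placeholders):

const mysql = require('mysql');

// A small cache of reusable connections instead of one connection per request.
const pool = mysql.createPool({
  connectionLimit: 10,
  host: 'DATABASE_HOST',
  user: 'username',
  password: 'password',
  database: 'databasename'
});

// pool.query() borrows a connection and returns it to the pool when done.
pool.query('SELECT 1', (err, results) => {
  if (err) throw err;
  console.log(results);
});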
I have been struggling to connect to a PostgreSQL instance in Google Cloud Platform (from my machine on my home network), which has a private IP.
I have tried https://cloud.google.com/sql/docs/postgres/connect-admin-proxy (the Cloud SQL proxy), but for that I need my instance to have a public IP, and that is not possible according to the requirements I have.
I also read that I can connect to my VPC using https://cloud.google.com/vpc/docs/configure-serverless-vpc-access, but I have no idea what I have to do.
Does anyone have ever faced a similar issue?
Thanks! I am new at GCP configuration.
You can connect to the private IP by having access to the VPC your Cloud SQL instance is peered with. There are instructions under "Connecting from an external source" on the Configuring Private IP page.
However please note that connecting with Public IP with the Cloud SQL proxy is also very secure, and encrypts the data between the proxy and your instance in a similar fashion to how the Cloud VPN works.
It is not possible out of the box, but you can use OpenVPN to create a site-to-client VPN (bastion host). I found an article about how to address this scenario; it is a fairly elaborate solution, as was mentioned in the question comments.
I also found this feature request for Cloud SQL to allow connections from on-premises servers to instances with private IP.
I'm trying to connect my app, which is hosted on Google Cloud Platform (GCP) App Engine, to my MongoDB Atlas DB.
Mongo wants me to whitelist the GCP app's IP.
But GCP doesn't give me a static IP to whitelist.
I want to make sure I apply security best practices, and as far as I understand, whitelisting my DB for all IPs is not secure. So how can I do it without opening it to all IPs?
You have 2 solutions:
You can whitelist the App Engine IP ranges. But it's not secure, as described in the documentation:
From this example, we see that both the 8.34.208.0/20 and 8.35.192.0/21 IP ranges can be used for App Engine traffic. Other queries for any additional netblocks may return additional IP ranges.
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
You can perform VPC peering. This requires several things:
Have a paid subscription to Mongo Atlas
Create a [peering between Mongo Atlas and your project](https://docs.atlas.mongodb.com/security-vpc-peering/)
Create a serverless VPC connector and add it to your App Engine app to allow it to reach private IPs on the VPC (and on peerings attached to the VPC, like your Mongo Atlas DB)
You have the option of reserving a static IP while creating a VM.
On the"create instance" page, scroll to "networking" you are presented with options for your
I. Internal IP
II. External IP
If you are running an M10 cluster (or higher) on Atlas, VPC peering is the way to go. I'd recommend trying this tutorial. It explains which CIDR ranges (what you referred to as IPs) to whitelist.
One thing to notice here: the tutorial uses GCP's Kubernetes Engine. With App Engine there is a little extra effort, as it is one of GCP's "serverless" solutions, which is why you should not use static IPs or anything like that. You will need to connect your app to the VPC network via a connector:
Step 1: Create a connector in the same region as your GAE app following these instructions. You can find out the current region of your GAE app with gcloud app describe. Just give the connector the range 10.8.0.0 for now (/28 is added automatically). Remember the name you gave it.
Step 2: Depending on your environment, your app has to point to that connector. In Node.js it's your app.yaml file and it looks similar to this:
runtime: nodejs10
vpc_access_connector:
  name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
Step 3: Go to your Atlas project, navigate to Network Access and whitelist the CIDR range you set for the connector in Step 1.
Step 4: You may also need to whitelist the CIDR range from Step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.
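Once the connector, peering and whitelisting are in place, the app connects with its normal Atlas connection string; a minimal sketch with the official mongodb driver (the URI below is a placeholder you would copy from the Atlas "Connect" dialog):

const { MongoClient } = require('mongodb');

// Placeholder connection string; with peering in place the traffic to the
// cluster goes over the VPC connector instead of the public internet.
const uri = 'mongodb+srv://username:password@cluster0.example.mongodb.net/databasename';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log('Connected to Atlas through the VPC peering');
  await client.close();
}

main().catch(console.error);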
I've deployed my demo app on GAE and it works fine with mLab, but when I deploy MongoDB on GCE (MongoDB (Google Click to Deploy)) the deployment succeeds and I don't know how to get the URI to set in my app running on GAE.
I tried with the internal and external IP but neither seems to work!
Thanks
GAE Standard deployments are sandboxed, therefore you cannot connect to GCE instances' internal IPs. You can imagine it as two different devices on two different private networks that are not able to communicate with one another using their internal IPs. However, they can always communicate if one of the devices (the GCE instance in this case) has a public IP and its private network (firewall) allows traffic through the port required by the device.
On the other hand, if the GAE deployment is in flex environment, you should be able to connect to the db using the API through internal IPs.
I have tried and succeeded with this flex environment example for both internal and external IP addresses. Like you, I used Cloud Launcher to deploy Mongodb which created GCE instances with public IPs and network tags mongodb and mongodb-db. Then I created a db, username and a password by connecting to the primary db instance through SSH.
To use the internal IP, I just created/modified the keys.json file per the example, as follows:
{
  "mongoHost": "internal IP address",
  "mongoPort": "27017",
  "mongoDatabase": "db",
  "mongoUser": "username",
  "mongoPass": "password"
}
So I didn't have to worry about the URI as the code in server.js took care of it through passing this string:
mongodb://${user}:${pass}@${host}:${port}
But for your demo app, you may have to check the MongoDB official documentation for the standard connection string format URI.
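For instance, here is a sketch of how the values from keys.json above translate into that connection string in Node.js (using the standard @ separator):

const { MongoClient } = require('mongodb');
const keys = require('./keys.json');

// Standard MongoDB connection string built from the keys.json values above.
const uri = `mongodb://${keys.mongoUser}:${keys.mongoPass}@${keys.mongoHost}:${keys.mongoPort}/${keys.mongoDatabase}`;

const client = new MongoClient(uri);
client.connect().then(() => console.log('Connected'));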
As for using public IPs, I had to create a network firewall rule that allows tcp ingress on port 27017 with target tags identical to the network tags in order to limit access through the port to the MongoDB instances only. Next, I modified the keys.json file as above by replacing the internal IP with the public one.