I am doing a mongoexport and then a BigQuery load of a 50-million-record collection.
All of my Cloud Functions and App Engine instances connect fine to MongoDB Atlas (hosted in GCP) via the VPC peering connection, set up through Serverless VPC Access.
However, I have not been able to get Compute Engine instances to connect via our VPC. When I add the Compute Engine instance's external IP (to the Atlas IP access list), it connects fine. When I remove that and add the internal IP of the Compute Engine instance instead, I get timeouts, and the log shows:
2021-01-10T18:09:44.531+0000 could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: ***.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : dial tcp *.*.*.*:27017: i/o timeout }, { Addr: ***.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : dial tcp *.*.*.*:27017: i/o timeout }, { Addr: ***.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : dial tcp *.*.*.*:27017: i/o timeout }, ] }
So my best guess is that I'm not putting in the right IP range, or the right specific IP, to allow Compute Engine instances to connect internally; it looks like MongoDB Atlas's firewall is blocking the connection.
What are the proper steps to set up a connection between Compute Engine and MongoDB Atlas over VPC peering, so that there is no public ingress/egress and connections are direct?
I recommend using the Connectivity Tests tool in Network Intelligence Center on GCP to rule out any firewall issue; keep in mind that the tool simulates the packet rather than sending a real one. Once you have ruled out the firewall rules at the GCP level, ensure that the instance's internal firewall is also allowing the traffic.
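For example, a connectivity test can also be created from the CLI. This is only a rough sketch: the test name, project, zone, instance, and destination IP below are placeholders, not values from the question.

gcloud network-management connectivity-tests create atlas-check \
    --source-instance=projects/my-project/zones/us-central1-a/instances/my-vm \
    --destination-ip-address=192.0.2.10 \
    --destination-port=27017 \
    --protocol=TCP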
My guess is that the cause of this problem is the firewall settings too.
Follow the two steps below.
Step 1. Check the VPC peering status
You can only use the peering's internal access when its status is ACTIVE.
Check whether it is active or not.
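For instance, a quick check from the CLI (the network name is a placeholder):

gcloud compute networks peerings list --network=my-vpc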
Step 2. Check the firewall rules
If you haven't touched the firewall rules, add a rule on each side that allows the other side's CIDR range.
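On the GCP side that could look like the sketch below; the rule name, network, and Atlas-side CIDR are placeholders. On the Atlas side, the equivalent step is adding your VPC's CIDR to the project's IP access list.

gcloud compute firewall-rules create allow-atlas-egress \
    --network=my-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:27017 \
    --destination-ranges=192.168.248.0/21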
Your issue is the following: you are trying to connect to your MongoDB Atlas cluster through the DNS name ***.mongodb.net. This DNS name is public, and your VM needs to reach the internet to resolve it. Because your VM has no public IP, it can't reach the internet.
The solution is to use Cloud NAT to allow your VM to reach the internet.
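A minimal Cloud NAT sketch; the router name, network, and region below are placeholders:

gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges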
Related
I have a VM instance (e2-micro) on GCP running Postgres. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Next to that, I have a Node.js application which I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0, then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I also tried an IP range starting with 10.0.0.0, but that didn't work either.
First, check that the connector your app uses has a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network.
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP, it works (your app can connect). So look for a rule that allows traffic from the IP range that your connector uses; if it's not there, create it.
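A sketch of how to inspect and create such a rule; the rule name and network are placeholders, and 10.8.0.0/28 is the connector range from the question:

gcloud compute firewall-rules list
gcloud compute firewall-rules create allow-connector-to-postgres \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:5432 \
    --source-ranges=10.8.0.0/28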
Or you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the IP of the DB (the one the app uses), so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot (and works only when you create a firewall rule that allows any IP).
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta. Also, I had tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.
I'd like to know in detail how to connect a Google Compute Engine virtual machine instance and App Engine.
I've set up a virtual machine instance on Google Compute Engine, and my Postgres server is running there, following this tutorial: https://cloud.google.com/community/tutorials/setting-up-postgres
I've deployed my Flask app under the same project on Google Cloud Platform, creating an App Engine instance.
I searched for how to connect Compute Engine and App Engine together, and it seems it should be possible through a VPC connector: connect Google App Engine and Google Compute Engine
This is what my VPC connector looks like:
Serverless VPC access
Name Network Region IP address range Min. throughput Max. throughput
connector-name default europe-west2 10.8.0.0/28 200 300
On my compute engine, I have my VM instance like so:
Name Zone Internal IP External IP
some-name europe-west2-c 10.154.0.2 (nic0) 34.89.113.193
In my Flask app, I'm trying to connect to my remote DB like so:
from playhouse.postgres_ext import PostgresqlExtDatabase

db = PostgresqlExtDatabase(
    "some-name",               # database name
    user="postgres",
    password="some-password",
    host="10.154.0.2",         # remote host's internal IP
    port=5432,
)
db.connect()
This is my app.yaml for the VPC access part; I've followed this reference: https://cloud.google.com/appengine/docs/standard/python/connecting-vpc#configuring
vpc_access_connector:
  name: projects/some-name/locations/europe-west2/connectors/connector-name
If I understood correctly, with the VPC connector in place I should just be able to connect using the internal IP address of my VM instance (in this case, 10.154.0.2)?
The problem is that when the app is deployed to production, it is still complaining that it cannot connect:
2020-09-26 12:54:51 default[20200926t134815] Is the server running on host "10.154.0.2" and accepting
2020-09-26 12:54:51 default[20200926t134815] TCP/IP connections on port 5432?
If it's connected internally, I assume I don't have to add that internal IP to the firewall rules, although I did try that as well. As for firewall rules, I have allowed my local machine's IP address so I can connect to the remote Postgres server via pgAdmin.
I've actually tried the external IP (34.89.113.193) as well, although that doesn't make sense to me.
I'm a bit of a noob on networks and backend stuff in general, so any help would be much appreciated.
UPDATE 1
These are my firewall rules:
Direction: Ingress, Egress
Action on match: Allow
Source filters: IP ranges 92.40.176.9/32, 78.146.103.141/32, 10.154.0.2
Protocols and ports: tcp:5432
Image for reference: screenshot of the list of firewall rules
It turns out the firewall / Postgres configurations were all OK, but because this VPC connector method was in beta, I needed to run:
gcloud beta app deploy
instead of the usual
gcloud app deploy.
This command then updated the gcloud beta commands component and prompted me to enable the API:
API [appengine.googleapis.com] not enabled on project [742932836941]. Would you like to enable and retry (this will take a few minutes)? (y/N)?
After enabling this, everything worked fine.
Per the information provided, it seems like both the VPC firewall rules and the connector are well configured.
However, based on the messages
2020-09-26 12:54:51 default[20200926t134815] Is the server running on host "10.154.0.2" and accepting
2020-09-26 12:54:51 default[20200926t134815] TCP/IP connections on port 5432?
it seems like the VM or server using 10.154.0.2 is not accepting requests on port 5432, or the port has not been opened; you can use a port-scanning site to verify this.
Based on the guide you followed to set up PostgreSQL, you are using Ubuntu as the OS, so I suggest you open the port in Ubuntu and see whether the issue persists.
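A minimal sketch of opening Postgres up on Ubuntu, assuming PostgreSQL 12 with its config under /etc/postgresql/12/main/ and ufw as the OS firewall; the paths and the connector range below are assumptions, so adjust them to your install:

# Make Postgres listen on the VM's network interfaces, not only on localhost.
sudo sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/12/main/postgresql.conf
# Allow the VPC connector's /28 range in pg_hba.conf.
echo "host all all 10.8.0.0/28 md5" | sudo tee -a /etc/postgresql/12/main/pg_hba.conf
# Open port 5432 in the OS firewall and restart Postgres.
sudo ufw allow 5432/tcp
sudo systemctl restart postgresql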
I have 2 AWS RDS instances (running PostgreSQL). They are in different accounts and different regions. I want to set up data replication between them using AWS DMS.
I tried doing VPC peering.
I watched the following video to enable VPC peering:
https://www.youtube.com/watch?v=KmCEFGDTb8U
The Problem:
When I try creating the AWS DMS service, I add the hostname, username, password, etc. for the source (which exists in the other account), and when I hit Test Connection, I get the following error.
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: [unixODBC]timeout expired ODBC general error.
To my surprise, I get a similar error when I hit Test Connection for the target RDS instance, which is in the same account, i.e.:
Test Endpoint failed: Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: [unixODBC]timeout expired ODBC general error.
Google suggests that we have some sort of firewall issue, but looking at the NACLs I can see we allow 0.0.0.0/0 in both VPCs.
If you're attempting to access the private IP ranges in one VPC from another VPC, then in addition to creating the VPC peering connection, you'll have to:
create route table entries in both VPCs to route traffic destined for the remote VPC's IP range(s) through the peering connection;
allow the connections in the security groups, both from the source CIDR range in the destination security group and, if you're filtering outgoing connections from the source, also in its outbound rules (note that you can't use a security group ID to allow this traffic, because that doesn't apply to cross-region peering);
allow the connection in the underlying software (probably allowed by default);
allow the network ACL to pass the traffic (you've verified that's already allowed by default).
Since you're seeing timeouts, I'd suspect the security group rules. But it could also be a bad route.
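A rough AWS CLI sketch of the route and security group pieces; all IDs and CIDRs below are placeholders, not values from the question, and the route change has to be repeated on the other side of the peering:

# Route the remote VPC's CIDR through the peering connection.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0
# Allow PostgreSQL traffic from the remote VPC's CIDR in the RDS security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5432 \
    --cidr 10.1.0.0/16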
As suggested here: https://aws.amazon.com/premiumsupport/knowledge-center/dms-endpoint-connectivity-failures/
For the Replication Instance used to test the connection to the Endpoint, take note of:
the private IP address
the VPC security group
Then either change the security group to a suitable one, or edit the security group being used by adding an inbound rule that allows PostgreSQL traffic from the private IP address of the Replication Instance.
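For example, a hedged sketch: look up the replication instance's private IP with the DMS CLI, then allow just that /32 in the RDS security group. The security group ID and IP below are placeholders.

aws dms describe-replication-instances \
    --query "ReplicationInstances[].ReplicationInstancePrivateIpAddress"
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5432 \
    --cidr 10.0.1.25/32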
The solution below worked for me.
Create the replication instance, then the endpoints.
If the endpoint test fails, make sure to pick up the private IP of the replication instance (if the DMS replication instance and the DB are located within the same VPC) and add it to the inbound rules of the corresponding security group.
If the VPCs are in different regions, you might need VPC peering to get this sorted.
Since I had both running in the same VPC, adding the private IP to the inbound rules worked fine and the connection is successful.
(I've searched SO, AWS support, and more widely without success.)
I've just successfully deployed a MEANjs application to a Bitnami MEAN instance on EC2, following Ahmed Haque's excellent tutorial on scotch.io. As part of the tutorial/deployment I altered the AWS Security Group to include port 27017 for MongoDB traffic. The CIDR notation for port 27017 was 0.0.0.0/0, which AFAIK means 'allow access from any IP address'.
Question: Why does MongoDB port 27017 need to be opened in the AWS EC2 Security Group for a 'production' type environment? Surely this is directly exposing the DB to the Internet. The only thing that should be talking to Mongo is the "/server/api" code, which is running on the same instance - and so shouldn't need the port opening.
If I change the Security Group rule for port 27017 - closing off 27017, or changing the source to localhost, the internal IP address, the public IP address, or a hacked CIDR equivalent to any of those - then the web app hangs (static content returns but no responses to DB-backed API calls). Changing the SG rule back to 0.0.0.0/0 almost immediately 'fixes' the hang.
All is otherwise sweet with my install. I've closed port 3000 (the Node app) in the Security Group and am using Apache to proxy port 80 traffic to port 3000. Set up like this, port 3000 does not need to be open in the Security Group; to me this implies that on-instance traffic doesn't need ports to be externally exposed, so why isn't that true of the Mongo port?
I can't see anything in the '/client' code which is talking directly to Mongo.
What am I missing?
Thanks in advance - John
OK, after further investigation and some overnight/red-wine reflection, I think I have an answer for learners like me following the above tutorial (or similar). Following the Agile principle that 'done' means 'working code in a production environment', I was trying to understand the last five metres as a developer getting code working in a representative production environment (which wouldn't have unnecessary ports open); this answer is written from that perspective. (Builds welcome from wiser readers.)
What's Happening
The step in the tutorial which (a) changes the Mongo bind IP address from 127.0.0.1 to 0.0.0.0, and (b) specifies a connection URL which uses the external IP address of the same instance, appears to have two effects:
It makes the MongoDB on the instance you're configuring potentially available to other instances (0.0.0.0 tells Mongo to "listen on all available network interfaces".)
It means that the IP traffic from your MEAN app's /server component on the same instance will talk to Mongo as though it were coming from off-instance (even though it's on the same instance), because the connection URL points at the external IP. Hence the Security Group needs to keep port 27017 open to allow this traffic to flow. (This is the nub of the issue in terms of MEANjs stack component interaction.)
Fix
On a single-instance MEANjs server, if you change the Mongo bind IP address back to 127.0.0.1 and the Mongo connection URL to 127.0.0.1:27017, then you can close off port 27017 in the EC2 Security Group and the app still works.
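A minimal sketch of that change, assuming a mongod config file at /etc/mongod.conf (Bitnami images may keep it elsewhere, e.g. under /opt/bitnami) and a database name that is only a placeholder:

# Bind mongod back to loopback only, then restart it.
sudo sed -i 's/bindIp: 0.0.0.0/bindIp: 127.0.0.1/' /etc/mongod.conf
sudo systemctl restart mongod
# In the app's config, point the connection URL at loopback, e.g.:
#   mongodb://127.0.0.1:27017/mean-dev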
To share one MongoDB across more than one MEANjs app server (without wanting to stray into Server Fault territory):
Change the Mongo bind IP address to 0.0.0.0,
Use the private IP address of the Mongo server in the other apps'/instances' connection strings,
Add an EC2 Security Group CIDR rule of private-IP-address/24, or private-IP-address/16, to allow access across instances in the specified internal IP address range.
The above is a developer 'hack', not a recommendation for good practice.
I have some services stood up on Google Container Engine and they are hooked up to external IPs.
When I try to query one of these external IPs from within one of my services, I get an error like
dial tcp xx.xx.xx.xx:5429: getsockopt: connection refused
Using the exact same service, but running on my local machine, it can connect fine to the same IP and port.
Is there some sort of port opening that I need to do in the Google networking dashboard or in my Kubernetes pod configuration to allow my pod to connect to this host?
It is a firewall issue. It is trying to set up a connection through port 5429, which is surely being blocked by the firewall rules.
You can find the firewall console in the dashboard under Networking > VPC network > Firewall rules.
You only need to allow the connection on the needed port in the network where your instances are, and it will work properly.
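For example, a minimal sketch with gcloud; the rule name, network, and source range below are placeholders to adapt:

gcloud compute firewall-rules create allow-port-5429 \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:5429 \
    --source-ranges=10.0.0.0/8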