Connecting to mlab from AWS EC2 - mongodb

I'm trying to connect my app running on AWS EC2 to an mlab endpoint. I can easily connect to the same endpoint from my local machine with the same code base. However, when I run it on AWS, I get the following error:
{ name: 'MongoError', message: 'connect ECONNREFUSED' }
I have tried increasing connectTimeoutMS to 30 seconds, but I still get the same error.
From the EC2 instance I can ping the DB server, and a netcat connection also succeeds.
My EC2 instance is configured to receive and send all traffic on all ports from any IP address.
I think the issue might be related to outgoing traffic, but I don't know how to configure it.
Thanks

Turned out to be an environment issue.
I was setting my mlab endpoint using the command
export MONGOLAB_URI='xxxx'
and then running my app
sudo node server.js
This sequence does not work because, by default, sudo starts node with a fresh environment, so the exported variable never reaches the app.
All I had to do in the end was use the following command:
sudo MONGOLAB_URI='xxxxxx' node server.js
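For reference, here is a minimal sketch of how an app can pick the connection string up from the environment; it is shown in Python with PyMongo rather than Node, and MONGOLAB_URI is just the variable name used above. If sudo strips the variable, the lookup fails immediately instead of timing out:

import os
from pymongo import MongoClient

# Read the connection string from the environment; this raises KeyError if
# sudo started the process with a fresh environment and dropped the variable.
uri = os.environ["MONGOLAB_URI"]

# connectTimeoutMS mirrors the 30-second timeout mentioned in the question.
client = MongoClient(uri, connectTimeoutMS=30000)
print(client.admin.command("ping"))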

Related

Failed connection to postgres on EC2 only, works on local machine

I'm running an application on an EC2 instance (AWS's cloud computing service) that connects to a specific database address.
When I run the application on my local machine it connects perfectly to the DB.
However, when I run it on the EC2 instance the connection hangs indefinitely. I've also tried connecting with psql to confirm that the problem is the connection itself, and received a "Connection timed out" error.
Both my local machine and the EC2 instance are running Ubuntu 22.04.
Does anyone have any idea what could be going on?
Both the RDS instance and the EC2 instance should be in the same VPC. Check the inbound rules of your RDS security group: they must allow the security group of your EC2 instance.
If you configured it to connect over the public internet (not recommended), as you did from your local machine, the RDS inbound rules need to allow the IP of your EC2 instance.
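To confirm the security-group change from the EC2 instance itself, a quick TCP reachability check along these lines (the endpoint below is a hypothetical placeholder) can be run before retrying the application:

import socket

# Hypothetical RDS endpoint and the default Postgres port.
host = "mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
port = 5432

try:
    # Succeeds only if the security group and routing let this instance through.
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")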

MongoDB Compass connectivity issue with EC2 instance to access Amazon DocumentDB cluster

I have been trying to access a database that I created on Amazon DocumentDB via MongoDB Compass through an EC2 instance, but I keep getting the error below. There are no issues with the security groups, since I have made sure to set the appropriate inbound rules.
I am able to connect to the DocumentDB cluster via the mongo shell after I SSH into the EC2 instance, but I cannot connect via MongoDB Compass; it throws the same error shown in the image below.
Please help!
Use your .pem file to set up an SSH tunnel to a local port (port 8000 in this example). Run the command below, then open MongoDB Compass with host localhost:8000 and it will connect (if mongod is not running on the EC2 instance itself, forward to your DocumentDB cluster endpoint instead of localhost).
ssh -i my-aws-key.pem -N -f -L 8000:localhost:27017 ec2-user@serverIp
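Once the tunnel is up, a quick way to verify it outside Compass is a short PyMongo script; the credentials are hypothetical, and it assumes the Amazon DocumentDB CA bundle has been downloaded locally as rds-combined-ca-bundle.pem:

from pymongo import MongoClient

# Connect through the local end of the SSH tunnel (localhost:8000).
client = MongoClient(
    "mongodb://myuser:mypassword@localhost:8000/",  # hypothetical credentials
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",
    tlsAllowInvalidHostnames=True,  # the certificate names the cluster, not localhost
    directConnection=True,          # skip replica-set discovery across the tunnel
)
print(client.admin.command("ping"))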

Connect to Postgres on Cloud SQL through the Cloud SQL proxy

I created a single-zone Postgres instance on Cloud SQL, and I am trying to connect through the Cloud SQL proxy.
./cloud_sql_proxy -instances=<PROJECT_ID>:us-central1:staging=tcp:5432 -credential_file=./<SERVICE_ACCOUNT_KEY_FILE>
The proxy starts fine, but when I run the command below,
psql "host=127.0.0.1 sslmode=disable dbname=postgres user=postgres"
the proxy shows this error:
2019/11/14 15:20:10 using credential file for authentication; email=<SERVICE_ACCOUNT_EMAIL>
2019/11/14 15:20:13 Listening on 127.0.0.1:5432 for <PROJECT_ID>:us-central1:staging
2019/11/14 15:20:13 Ready for new connections
2019/11/14 15:20:34 New connection for "<PROJECT_ID>:us-central1:staging"
2019/11/14 15:22:45 couldn't connect to "<PROJECT_ID>:us-central1:staging": dial tcp 34.70.245.249:3307: connect: connection timed out
Why is this happening?
I am doing this from my local machine.
I've just followed this tutorial step by step and it worked perfectly for me.
I did not have to do any extra steps (whitelisting IPs, opening ports, etc.), and this was done in a clean project.
Are you trying to do this from your local machine with the SDK or from Cloud Shell? Do you have any firewall restrictions in place?
Any further information about your specific setup that might affect this will surely help.
Let us know.
EDIT:
Make sure outbound traffic on port 3307 is not blocked by anything; the proxy connects to the instance on that port.
Have a look at this official documentation specifying that.
Make sure you have all the required IAM roles attached to the service account before you connect to it:
For instance, the list of Cloud SQL roles can be retrieved from gcloud with:
$ gcloud iam roles list --filter 'name~"roles/cloudsql"' --format 'table(name, description)'
NAME DESCRIPTION
roles/cloudsql.admin Full control of Cloud SQL resources.
roles/cloudsql.client Connectivity access to Cloud SQL instances.
roles/cloudsql.editor Full control of existing Cloud SQL instances excluding modifying users, SSL certificates or deleting resources.
roles/cloudsql.instanceUser Role allowing access to a Cloud SQL instance
roles/cloudsql.serviceAgent Grants Cloud SQL access to services and APIs in the user project
roles/cloudsql.viewer Read-only access to Cloud SQL resources.
If your service account is lacking the appropriate roles, it won't be able to authenticate, and the connection to the instance will fail.
The issue is probably that you are not in the VPC network when you connect from your local machine, which is why the Cloud SQL proxy reports that it cannot connect to the remote IP.
Read this carefully if you use a private IP:
https://cloud.google.com/sql/docs/postgres/private-ip
Note that the Cloud SQL instance is in a Google managed network and the proxy is meant to be used to simplify connections to the DB within the VPC network.
In short: running cloud-sql-proxy from a local machine will not work, because it's not in the VPC network. It should work from a Compute Engine VM that is connected to the same VPC as the DB.
What I usually do as a workaround is use gcloud compute ssh from my local machine and port-forward through a small VM in Compute Engine, like:
gcloud beta compute ssh --zone "europe-north1-b" "instance-1" --project "my-project" -- -L 5432:cloud_sql_server_ip:5432
Then you can connect to localhost:5432 (make sure nothing else is listening on that port locally, or change the first port number to one that is free).
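With that tunnel in place, a short psycopg2 check against the forwarded port (hypothetical password; requires the psycopg2 package) confirms the path end to end:

import psycopg2

# Connects to the local end of the SSH tunnel, which forwards to the Cloud SQL instance.
conn = psycopg2.connect(
    host="127.0.0.1",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="mypassword",  # hypothetical
    connect_timeout=10,
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()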
What should also work is to set up a VPN connection to the VPC network and then run the Cloud SQL proxy on a node in that network.
I have to say I found this really confusing, because it gives the impression that the proxy does the same kind of magic gcloud does. It's beyond me why Google engineers haven't wired that together yet; it can't be too hard.
I had this issue previously when, for some reason, I didn't specify the port argument to psql. Try this:
psql "host=127.0.0.1 port=5432 sslmode=disable user=postgres"
Don't specify the db, and see if that lets you get to the prompt.

pgAdmin III cannot connect to AWS RDS

I am trying to connect to AWS RDS PostgreSQL from pgAdmin III. I followed the link below:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html
In the security group, I also added PostgreSQL and All traffic rules, as shown below.
The "publicly accessible" flag was enabled (updated after Mark B's comment)
I got an error from pgAdmin III.
I'd appreciate any suggestions.
UPDATE:
I can connect pgAdmin III to AWS RDS successfully using my home WiFi, but cannot connect using my office WiFi.
My concerns are:
Is port 5432 blocked by the office WiFi?
How can I configure/update the port without impacting the current API?
Note: my current API is working well (CRUD).
Can you test your connection to the DB instance using common Linux or Windows tools first?
From a Linux or Unix terminal, you can test the connection by typing the following (replace DB-instance-endpoint with the endpoint and port with the port of your DB instance):
$nc -zv DB-instance-endpoint port
For example, the following shows a sample command and the return value:
$nc -zv postgresql1.c6c8mn7tsdgv0.us-west-2.rds.amazonaws.com 8299
Connection to postgresql1.c6c8mn7tsdgv0.us-west-2.rds.amazonaws.com 8299 port [tcp/vvr-data] succeeded!
Windows users can use Telnet to test the connection to a DB instance. Note that Telnet actions are not supported other than for testing the connection. If a connection is successful, the action returns no message. If a connection is not successful, you receive an error message such as the following:
C:\>telnet sg-postgresql1.c6c8mntzhgv0.us-west-2.rds.amazonaws.com 8299
Connecting To sg-postgresql1.c6c8mntzhgv0.us-west-2.rds.amazonaws.com...Could not open connection to the host, on port 8299: Connect failed
If Telnet actions return success, then you are good to go.
If you are trying to access it from a network that is not allowed for that port, you need to add inbound rules for those network IPs in the RDS security group.
You will also need to set Public accessibility to Yes under the Connectivity & security tab in the RDS console.
Read this post. In your security group, go to inbound rules and add "My IP".
Also make sure your database is publicly accessible.
https://serverfault.com/questions/656079/unable-to-connect-to-public-postgresql-rds-instance

Connecting to MongoDB hosted on Amazon EC2 (using PyMongo)

I'm having trouble remotely connecting to my MongoDB instance; I've deployed it using the MongoDB AWS Quick Start, and can connect via SSH as per the "Testing" section of the Quick Start guide.
However, when trying to connect remotely (I'm using the PyMongo driver), I run into a pymongo.errors.ServerSelectionTimeoutError: ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:27017: timed out error.
I've tried setting bind_ip to 0.0.0.0, as well as attaching an Elastic IP to the VPC instance that mongod is running on, but to no avail. In fact, even pinging the EIP leads to a timeout (although pinging the NAT instance doesn't).
With PyMongo I've tried both DNS names, with and without SSL. I can successfully connect to MongoDB on localhost.
These are the security groups for the VPC: [AWS security groups screenshot]
If anyone has a clue on what I might be doing wrong I would greatly appreciate it; I've been struggling with this for over a day now. Thanks!
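For reference, the kind of remote connection attempt described above looks roughly like this in PyMongo (the hostname is the placeholder from the error message; the short serverSelectionTimeoutMS just makes failures surface quickly while testing):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Placeholder EC2 public DNS name for the mongod host.
client = MongoClient(
    "mongodb://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:27017/",
    serverSelectionTimeoutMS=5000,  # fail fast instead of the 30-second default
)
try:
    client.admin.command("ping")
    print("Connected")
except ServerSelectionTimeoutError as exc:
    print(f"Still unreachable: {exc}")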