I've created an instance/database on AWS, and when I try to connect to it through my terminal by running the command below, I get an error.
The line I run in the terminal is:
psql --host=testdb.c7hgibdbsgjm.eu-west-2.rds.amazonaws.com --port=5432 --username=postgres --password --dbname=testdb
And the error it returns is:
psql: error: could not connect to server: could not translate host name "testdb.c7hgibdbsgjm.eu-west-2.rds.amazonaws.com"
to address: nodename nor servname provided, or not known.
I've spent the last 3 days reading the relevant documentation and trying to get this to work, but I don't know where I'm going wrong.
Also when I run:
nslookup testdb.c7hgibdbsgjm.eu-west-2.rds.amazonaws.com
It returns:
Non-authoritative answer:
*** Can't find testdb.c7hgibdbsgjm.eu-west-2.rds.amazonaws.com: No answer
I come from a statistics background and I've done a fair bit of coding in R and Python, but I'm relatively new to using the terminal!
Thanks for any guidance or help as this is making me want to punch my laptop.
On running the dig command:
dig testdb.c7hgibdbsgjm.eu-west-2.rds.amazonaws.com
It returned a private IP: 172.31.23.42.
It seems you are running the RDS instance as private/internal, i.e. it won't be accessible from the internet.
You need to access it from within the VPC itself, or use a VPN.
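If you want to confirm this from code rather than the console, a minimal boto3 sketch would be something like the following (assuming testdb is your DB instance identifier, and that the region matches your endpoint):

# Hedged check of the instance's public accessibility with boto3.
import boto3

rds = boto3.client("rds", region_name="eu-west-2")
resp = rds.describe_db_instances(DBInstanceIdentifier="testdb")
instance = resp["DBInstances"][0]

# If this prints False, the instance only has a private endpoint and
# won't resolve to a public address from outside the VPC.
print("PubliclyAccessible:", instance["PubliclyAccessible"])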
Seems like it's a DNS resolution issue. I was able to resolve the hostname using dig.
Make sure that both of the following options are enabled on the VPC:
enableDnsHostnames
enableDnsSupport
If the RDS instance is not hosted in the VPC but is instead accessed across a VPC peering connection, then DNS resolution might need enabling on the peering connection.
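For a programmatic check of those two attributes, a boto3 sketch like this should work (the VPC ID here is a placeholder; substitute your own):

# Check the VPC's DNS attributes with boto3 (placeholder VPC ID).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")
for attr in ("enableDnsSupport", "enableDnsHostnames"):
    resp = ec2.describe_vpc_attribute(VpcId="vpc-0123456789abcdef0", Attribute=attr)
    # The response key matches the attribute name with an upper-cased first letter.
    key = attr[0].upper() + attr[1:]
    print(attr, resp[key]["Value"])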
I'm trying to get RabbitMQ to monitor a PostgreSQL database to create a message queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than 5 years ago, so I'm not sure if they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the Docker container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed this and attempted to install pg_amqp on my Postgres db (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up is a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The PostgreSQL database is running on a separate EC2 instance, and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but I couldn't find any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering if this is still the best way to create a message broker for a Kubernetes system. Any help/pointers on this much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set them up inside Docker containers running in my Kubernetes cluster, so they benefit from automatic restarts if they fail.
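For anyone looking for the shape of such a bridge, here is a minimal sketch along those lines (not the gist's exact code): it assumes a Postgres trigger that issues NOTIFY on a channel I've called row_updates, and a reachable RabbitMQ host; all connection details and names are placeholders.

# Minimal Postgres LISTEN/NOTIFY -> RabbitMQ bridge sketch (placeholder names/hosts).
import select
import pika
import psycopg2

# Connect to Postgres and subscribe to the notification channel.
pg = psycopg2.connect("host=DB_HOST dbname=DB_NAME user=DB_USER password=DB_PASS")
pg.autocommit = True  # LISTEN/NOTIFY requires autocommit
cur = pg.cursor()
cur.execute("LISTEN row_updates;")

# Connect to RabbitMQ and make sure the target queue exists.
mq = pika.BlockingConnection(pika.ConnectionParameters(host="RABBITMQ_HOST"))
channel = mq.channel()
channel.queue_declare(queue="row_updates", durable=True)

while True:
    # Wait until Postgres has a notification for us (5s timeout to stay responsive).
    if select.select([pg], [], [], 5) == ([], [], []):
        continue
    pg.poll()
    while pg.notifies:
        note = pg.notifies.pop(0)
        # Forward the NOTIFY payload to RabbitMQ.
        channel.basic_publish(exchange="", routing_key="row_updates",
                              body=note.payload)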
When trying to get a psql shell (not using an IAM user) I am receiving:
> gcloud alpha sql connect pg-instance --database mydb --user myuser --project my-project
Starting Cloud SQL Proxy: [/Users/me/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project:us-central1:pg-instance=tcp:9470 -credential_file /Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json]
2022/03/15 14:47:59 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/03/15 14:47:59 using credential file for authentication; path="/Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json"
2022/03/15 14:48:00 Listening on 127.0.0.1:9470 for my-project:us-central1:pg-instance
2022/03/15 14:48:00 Ready for new connections
Connecting to database with SQL user [myuser].Password:
psql: error: connection to server at "127.0.0.1", port 9470 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I had the same error message when connecting to Postgres (Cloud SQL) using a service account.
In my setup I ran cloud_sql_proxy inside a Docker container.
In order to make it work I had to add the extra configuration defined in step #9 of https://cloud.google.com/sql/docs/sqlserver/connect-docker#connect-client
docker run -d \
  -v <PATH_TO_KEY_FILE>:/config \
  -p 127.0.0.1:5432:5432 \
  gcr.io/cloudsql-docker/gce-proxy:1.33.1 /cloud_sql_proxy \
  -instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
The missing bits were the host IP (127.0.0.1) in the port mapping and the 0.0.0.0: bind address in the cloud_sql_proxy command. With those in place, the proxy listens on 127.0.0.1:5432 on the host and psql can connect there as usual.
There are a few things I would like to point out. The best starting point for me would be the About connection options page; both the Overview and the Before you begin sections are very helpful for getting the full picture of the process and how to properly configure the user. The most important part, though, is the Connection Options section. For the message connection to server at "127.0.0.1", I'm guessing it is a private IP, but please make sure this section is covered before starting to debug.
In your case, the logs are saying there was an error in the connection to the server…
I used the Troubleshoot guide, which includes the Diagnose issues link, to get to the Debug connection issues page, which has a lot of useful information on how to debug any connectivity issue.
Generally, connection issues fall into one of the following three areas:
Connecting - are you able to reach your instance over the network?
Authorizing - are you authorized to connect to the instance?
Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation.
Once determining the connection method, there are different questions that will help to guide you through the possible troubleshooting paths.
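As a quick first pass at the "Connecting" bucket, a plain TCP check from the client machine can tell you whether the proxy port is reachable at all; here is a minimal sketch, assuming the same local host/port from your logs (adjust as needed):

# Quick TCP reachability check (placeholder host/port).
import socket

try:
    with socket.create_connection(("127.0.0.1", 9470), timeout=5):
        print("TCP connection succeeded - the listener is reachable.")
except OSError as exc:
    # A failure here points at networking/authorization, not database credentials.
    print(f"TCP connection failed: {exc}")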
If using these guides doesn't get you a solution, please make sure to update your question with the results, steps, and information followed, so we can provide further help. This would be a good example, as it has the same log error, and this other question shows that there are a few different troubleshooting paths for this specific log message; plus, both have useful information for you.
My Node.js app was working fine with its MongoDB connection, and suddenly this error appeared. Then I tried to connect to MongoDB with MongoDB Compass and got the same error there. I could not find any reason for this.
Error: querySrv ESERVFAIL _mongodb._tcp.cluster0.abcd0.mongodb.net
[nodemon] app crashed - waiting for file changes before starting...
Then I changed the MongoDB connection URL to the old-style URL, and after that I got this error.
Error: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
[nodemon] app crashed - waiting for file changes before starting...
I have already whitelisted my IPs and my configuration is correct (I double-checked):
0.0.0.0/0 (includes your current IP address)
What is the reason for this?
Thank you.
querySrv ESERVFAIL is a DNS error.
This means that your local machine is not able to get a response from your DNS resolver for the SRV record _mongodb._tcp.cluster0.abcd0.mongodb.net (I assume that's not your real hostname, but it will work as an example).
From your local machine, test SRV lookup from a command line, possibly one of these:
nslookup -type=SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
host -t SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
If that fails, feel free to say bad things about your DNS provider.
Then go to the Atlas UI and get the pre-3.6 connection string. It will start with mongodb:// and not mongodb+srv://.
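If you prefer to test the SRV lookup from code instead of nslookup/host, a sketch with the dnspython package (2.x) and the same example hostname might look like:

# SRV lookup with dnspython (pip install dnspython); example hostname from above.
import dns.resolver
import dns.exception

try:
    answers = dns.resolver.resolve("_mongodb._tcp.cluster0.abcd0.mongodb.net", "SRV")
    for rec in answers:
        print(rec.target, rec.port)
except dns.exception.DNSException as exc:
    # Mirrors the ESERVFAIL you saw from Node - the resolver, not Atlas, is failing.
    print(f"SRV lookup failed: {exc}")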
Joe's identification of the problem is spot on and helped me reach a resolution. This was fixed for me after adding Google's DNS server (8.8.8.8) to the Wi-Fi settings of my computer.
On macOS it's in Settings > Network > Wi-Fi (select the appropriate network) > Advanced > DNS.
Then add the DNS server 8.8.8.8.
I am a Windows 10 user and I was facing exactly the same problem. I figured out it's a DNS problem. The following process worked for me.
Check this if you are not a Windows 10 user.
Stop the server, then run your server again, and it will solve the problem.
Hey guys!
So I was having this weird error below :(
So what might be causing this error?
Make sure the database you are trying to use in your MongoDB cluster exists; for me it was "userDB" that was the issue!
mongoose.connect(
  `mongodb+srv://admin-eniola:${process.env.PASSWORD}@cluster0.velr6at.mongodb.net/userDB`
);
Make sure you check whatever password you are using; it must match your database user's password, not your account password!
Check where your password is stored in your program, whether in a dotenv or secrets file, and make sure it matches your database user's password.
Thanks, and I hope this solution works for you as well!
I'm a relative noob w.r.t. many of the moving parts of the system I'm working on, so please pardon a lack of understanding in places. My question here is more about asking for debugging strategies than asking for a solution to the problem, as I am drawing a blank at the moment.
Current Setup
I'm running a Docker container on an EC2 instance. All instances run in my company VPC. I need to connect to a Postgres database that lives on a workstation on-premise. The Docker container is spun up and down automatically using Dokku, an open-source Heroku alternative that I finally figured out how to set up on EC2.
Some variables I will be using in the post:
DBSERVER: The address of the workstation that is hosting our database.
DOKKUSERVER: The address of the Dokku EC2 instance.
APPCONTAINER: The Docker container, spun up by Dokku, that houses my app.
APPNAME: The application name on Dokku
What works
When I enter into the APPCONTAINER with dokku enter APPNAME, I can ping DBSERVER and get back a response:
(environment_name) [root@APPCONTAINER project]# ping DBSERVER
PING DBSERVER (DB_SERVER_IP) 56(84) bytes of data.
64 bytes from DBSERVER (DB_SERVER_IP): icmp_seq=1 ttl=117 time=95.9 ms
64 bytes from DBSERVER (DB_SERVER_IP): icmp_seq=2 ttl=117 time=96.2 ms
64 bytes from DBSERVER (DB_SERVER_IP): icmp_seq=3 ttl=117 time=95.7 ms
What doesn't work
However, when I try to connect using pgcli, I find I cannot connect:
(environment_name) [root@APPCONTAINER project]# pgcli -h $DB_SERVER -p $DB_PORT -U $DB_USER -d $DB_NAME
could not send SSL negotiation packet: Resource temporarily unavailable
Additionally, from my Python app, in which we use both psycopg2 and sqlalchemy (in different parts of the codebase), I find we cannot connect to the database. On the other hand, connection from my local machine (i.e. my laptop, or another workstation that is on-premise) works without issue.
My main problem here is that the Python app requires a connection to the database via psycopg2 and sqlalchemy, but that isn't working.
Current hypotheses
To recap, I am drawing a blank on how to debug, so I'm asking for debugging strategies (i.e. what should I look at), hopefully with some pointers to docs on how to debug. (Exact commands I am happy to look up, if pointed to the docs pages.) Things I have thought of, but am not sure how to investigate, are:
Debugging "the connection" (still a nebulous term in my head) between the Docker container on EC2 and the on-premise workstation?
Things I have done already are:
checking to ensure that environment variables are set correctly
ensuring that ports are set correctly for the app to work
entering into the Docker container that Dokku is running and checking that the source code is correct.
Are there other debugging strategies that I might have missed? I would love to be enlightened on them please!
I believe this was answered on Twitter; the problem was that the user had not created the correct security group rules on their AWS account to allow traffic from the server to the database.
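For anyone needing to check the same thing, here is a boto3 sketch that lists a group's inbound rules (the group ID and region are placeholders):

# List inbound rules for a security group (placeholder group ID/region).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])
for perm in resp["SecurityGroups"][0]["IpPermissions"]:
    # Check that the database port (e.g. 5432) is open to the right sources.
    print(perm.get("FromPort"), perm.get("ToPort"),
          [r["CidrIp"] for r in perm.get("IpRanges", [])])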
I've installed Node, Express, and MongoDB, all successfully. I can run mongo in my terminal and it starts correctly. I can also see data I've manually stored.
Locally, I was using mongoose.connect('mongodb://localhost:27017/test'); and I had no issues. On my EC2 instance I used mongoose.connect('mongodb://ipaddress:27017/test'); but it's failing.
Error: failed to connect to [ipaddress:27017].
ipaddress here is an actual IP address, not a string or variable.
mongo
show dbs <-- this shows my databases so I know it's running!
I've looked online for a few hours and have come up short! I'm sure it's a simple setting I've missed.
On my EC2 instance I'm allowing all connections on all port ranges.
What am I missing?
Thanks!
Since the mongod instance is running on the same server, you need to set mongod's bind IP address to 0.0.0.0 (by default mongod only listens on 127.0.0.1, so remote clients can't connect).
I'm not sure why this needs to be done, but I got (some) understanding by reading the explanations listed on this post.
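As a quick way to confirm the fix, here is a minimal pymongo check, assuming the same placeholder IP as in the question:

# Quick remote-connection check with pymongo (placeholder IP).
from pymongo import MongoClient
from pymongo.errors import PyMongoError

client = MongoClient("mongodb://ipaddress:27017/test", serverSelectionTimeoutMS=3000)
try:
    client.admin.command("ping")  # forces an actual round trip to the server
    print("Connected - mongod is accepting remote connections.")
except PyMongoError as exc:
    print(f"Still failing: {exc}")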