Connecting to Database (Postgres AWS RDS) - postgresql

I am following the tutorial on how to set up and connect to a PostgreSQL server on AWS, found HERE.
When I try to sign in on workbench, I get this message:
At first I thought it was because my DB instance was not available yet, so I waited until it finished backing up, but that did not help and I am still getting this message. I would appreciate assistance with this.

Have you created a security group and allowed the database connection port?
From the docs:
VPC Security Group(s): Select Create New Security Group. This will
create a security group that will allow connection from the IP address
of the device you are currently using, to the database created.
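If the wizard did not create that rule for you (or your IP address has changed since), you can add the inbound rule yourself. A minimal sketch using boto3; the security group ID, region, and CIDR below are placeholders to replace with your own:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the security group attached to the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,            # default PostgreSQL port
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "my workstation"}],
    }],
)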

Error creating Glue connection with DocumentDB

I have created a connection in Glue to a DocumentDB cluster. The cluster is running and I can connect from my laptop and also from AWS Athena to run queries over it. The connection URL in Glue follows this format:
mongodb://host:27017/database
When creating the connection I tried both enabling and disabling the SSL connection option:
I have also disabled TLS on the cluster and rebooted the database. Every time I test the connection from Glue I get this error:
Check that your connection definition references your Mongo database with correct URL syntax, username, and password.
Exiting with error code 30
I have also tried setting the user and password in the URL, but I get the same error.
How can I solve this?
Thanks!!!
First of all, does the "database" actually exist in the DocumentDB cluster? Make sure you select the right VPC for Glue; it has to be the same as the DocumentDB one. When using the Test Connection option, one of the security groups has to have an allow-all rule, or the source security group in your inbound rule can be restricted to the same security group (a self-referencing rule).
This blog post has some good info on how to setup a Glue connection to MongoDB/DocumentDB.
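For reference, a sketch of the self-referencing inbound rule described above, using boto3; the group ID and region are placeholders, and this assumes Glue and DocumentDB share the same security group:

import boto3

sg_id = "sg-0123456789abcdef0"  # placeholder: security group used by both Glue and DocumentDB
ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# Self-referencing rule: allow traffic whose source is this same security group.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",  # all protocols and ports
        "UserIdGroupPairs": [{"GroupId": sg_id, "Description": "self-reference for Glue"}],
    }],
)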
I have solved the problem. Disabling TLS both on DocumentDB and in the Glue connection works. I still have to find a way to make it work with TLS enabled.

Is it possible to limit user connection IP range with SQL instead of editing pg_hba.conf?

We are using AWS PostgreSQL RDS and we would like to restrict some accounts so that they can only connect from a specific set of CIDR ranges. Since RDS is a DBMS managed by AWS, we do not have access to pg_hba.conf.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
Looking at the CREATE ROLE and CREATE USER DDL in PostgreSQL, this does not seem to be an option.
https://www.postgresql.org/docs/current/sql-createrole.html
https://www.postgresql.org/docs/current/sql-createuser.html
You can try to write your own checks in a function or procedure, for example using SELECT inet_server_addr() (just keep in mind that it works only for non-local connections).
Some other useful functions (like local/remote IP and port) are listed here: https://www.postgresql.org/docs/9.4/functions-info.html
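As a rough illustration of that idea, here is a minimal sketch (the function, table, and connection details are made up, and psycopg2 is only used to ship the SQL): a plpgsql function that raises an error when inet_client_addr() falls outside an allowed CIDR list. Note that this is only an application-side check that your client code has to call after connecting; it is not an equivalent of pg_hba.conf and does not stop a client that simply skips the call.

import psycopg2

# Hypothetical check function: rejects the session if the client address is not
# inside one of the allowed CIDR ranges. inet_client_addr() is NULL for local
# (Unix-socket) connections, which are rejected here as well.
DDL = """
CREATE OR REPLACE FUNCTION enforce_client_cidr(allowed cidr[])
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    IF inet_client_addr() IS NULL
       OR NOT (inet_client_addr() <<= ANY (allowed)) THEN
        RAISE EXCEPTION 'connections from % are not allowed', inet_client_addr();
    END IF;
END;
$$;
"""

conn = psycopg2.connect(host="mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com",  # placeholder endpoint
                        dbname="postgres", user="app_user", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    # The application calls the check right after connecting (placeholder CIDR).
    cur.execute("SELECT enforce_client_cidr(ARRAY['203.0.113.0/24']::cidr[]);")
conn.close()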

Unable to connect from BigQuery job to Cloud SQL Postgres

I am not able to use the federated query capability from Google BigQuery to Google Cloud SQL Postgres. Google recently announced this federated query capability for BigQuery in beta.
I use the EXTERNAL_QUERY statement as described in the documentation, but I am not able to connect to my Cloud SQL instance. For example, with the query
SELECT * FROM EXTERNAL_QUERY('my-project.europe-north1.my-connection', 'SELECT * FROM mytable;');
or
SELECT * FROM EXTERNAL_QUERY("my-project.europe-north1.pg1", "SELECT * FROM INFORMATION_SCHEMA.TABLES;");
I receive this error:
Invalid table-valued function EXTERNAL_QUERY Connection to PostgreSQL server failed: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
Sometimes the error is this:
Error encountered during execution. Retrying may solve the problem.
I have followed the instructions on the page https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries and enabled the BigQuery Connection API. Some documents use different quoting styles for EXTERNAL_QUERY (" or ' or ''') but all variants end with the same result.
I cannot see any errors in the Stackdriver Postgres logs. How could I correct this connectivity error? Any suggestions on how to debug it further?
Just adding another possibility for people using private-IP-only Cloud SQL instances. I had just run into this and was wondering why it was still not working after making sure everything else looked right. According to the docs (as of 2021-11-13): "BigQuery Cloud SQL federation only supports Cloud SQL instances with public IP connectivity. Please configure public IP connectivity for your Cloud SQL instance."
I just tried it and it works, as long as the BigQuery query runs in the EU (as of today, 6 October, it works).
My example:
SELECT * FROM EXTERNAL_QUERY("projects/xxxxx-xxxxxx/locations/europe-west1/connections/xxxxxx", "SELECT * FROM data.datos_ingresos_netos")
Just substitute the first xxxx with your project ID and the last ones with the name you gave the connection in the BigQuery interface (not the Cloud SQL info; that goes into the query).
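If you prefer to reproduce the test from code, a small sketch using the google-cloud-bigquery client library (the project ID, connection name, and table are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="xxxxx-xxxxxx")  # placeholder project ID

sql = """
SELECT *
FROM EXTERNAL_QUERY(
  "projects/xxxxx-xxxxxx/locations/europe-west1/connections/xxxxxx",
  "SELECT * FROM data.datos_ingresos_netos"
)
"""

# The call raises an exception carrying the federation error message if the
# connection to the Cloud SQL instance fails.
for row in client.query(sql).result():
    print(dict(row))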
Unfortunately, BigQuery federated queries to Cloud SQL currently work only in US regions (September 2019). The documentation (https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries) says it should also work in other regions, but this is not the case.
I tested the setup from the original question multiple times in EU and europe-north1 but was not able to get it working. When I changed the setup to US or us-central1, it worked!
Federated queries to Cloud SQL are in preview, so the feature is evolving. Let's hope Google gets this working in other regions soon.
The BigQuery dataset and the Cloud SQL instance must be in the same region, or the same location if the dataset is in a multi-region location such as US or EU.
Double-check this against the known issues listed in the documentation.
server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
This message usually means that you will find more information in the logs of the remote database server. No useful information could be sent to the "client" server because the connection disappeared, so there was no way to send it. So you have to look at the remote server's logs.

AWS RDS for PostgreSQL cannot be connected after several hours

I created several RDS instances with PostgreSQL and get the same problem with all of them:
I can connect to all of them right after creating the instances.
After several hours (I stop working on it and turn off my laptop), I cannot connect to any of them again.
I use DBeaver for the connections; the error shown is "Connection attempt timed out."
I have attached the instance information. Hope someone can help me with this problem. Thank you in advance.
Finally, I found the answer to my problem. For the "connection timeout" error, one of the possible causes is the security settings. Although I set the RDS instance as public when creating it, the instance is attached to a private VPC security group that is not exposed publicly.
We can attach the RDS instance to a publicly open security group inside the VPC (I don't think it is a good setting, just for an AWS beginner like me) as below (a boto3 version of the same steps is sketched after the list):
From Services, select EC2, then select Security Groups in the left panel.
Click the "Create Security Group" button.
In the dialog, enter a name for the group, e.g. "postgres-public-access".
In the dialog, click the "Add Rule" button.
In the "Type" column, select "PostgreSQL" or the type matching your RDS instance (or enter the port of your RDS instance; it is usually 5432 for Postgres).
In the "Source" column, enter "0.0.0.0/0".
Click the "Save" button.
From Services, select RDS, select the RDS instance, and click the "Modify" button.
Under "Network & Security", "Security group", select the VPC security group you just created; in my case, it is "postgres-public-access".
Click the "Continue" button.
Now you can go ahead and connect to your database from anywhere.
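The same steps can be scripted. A sketch with boto3, assuming you know your VPC ID and RDS instance identifier and accept the (insecure, development-only) 0.0.0.0/0 source; all identifiers below are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
rds = boto3.client("rds", region_name="us-east-1")

# 1. Create the security group inside the instance's VPC.
sg_id = ec2.create_security_group(
    GroupName="postgres-public-access",
    Description="Public access to PostgreSQL (development only)",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)["GroupId"]

# 2. Allow inbound PostgreSQL traffic from anywhere (development only).
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public access (dev only)"}],
    }],
)

# 3. Attach the new security group to the RDS instance.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",  # placeholder instance identifier
    VpcSecurityGroupIds=[sg_id],
    ApplyImmediately=True,
)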
I had to add/edit a rule to the VPC to allow connections from All sources.
Steps:
Go to DB > Connectivity & security > click on the VPC (vpc-...)
Under Security > Security Groups > open the sg-[something] whose VPC ID matches the DB's VPC
Inbound Rules > Edit Rules > change Source to Anywhere
So it seems that even when creating the DB and selecting "allow public access", it only allows traffic from within the VPC. By doing the above steps you can allow access from all sources.
I just followed the guide: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html
Run through the typical things:
Make sure the database is public. Check in the AWS web console; if it's private, make it public.
Check that you have the firewall port open for the software and the port you're trying to connect through (see the connection-test sketch after this list).
When you create a DB in RDS, a security group is created automatically with an All/All rule:
You can add a rule for TCP port 5432, like I have above.
Check the username/password - sometimes incorrect ones get cached.
Try to ping the DB to see if it's an internet connection problem.
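For the port/firewall check, a quick sketch with psycopg2 (endpoint and credentials are placeholders): a short connect_timeout makes it easy to tell a networking or security-group problem (the connection times out) from a credentials problem (an immediate authentication error).

import psycopg2

try:
    conn = psycopg2.connect(
        host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        port=5432,
        dbname="postgres",
        user="postgres",
        password="secret",
        connect_timeout=5,  # seconds; a timeout here points at networking/security groups
    )
except psycopg2.OperationalError as exc:
    print("connection failed:", exc)
else:
    print("connected to", conn.get_dsn_parameters()["host"])
    conn.close()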
I faced the same issue and it turned out to be because of the VPN I was using; when I disconnected the VPN I was able to connect.
Select DB -> Modify -> Connectivity -> Save

Enable local development access to PostgreSQL DB on Amazon RDS

I'm in the early stages of a web project which requires a database. Until now, I've managed to get away with using an SQLite database locally for development and a PostgreSQL database running on AWS RDS in "production" (mainly just for alpha testers). I haven't really had any state in the database that I couldn't just blow away and re-seed whenever necessary.
However, I'm now at the point in my project where I'm going to have state in the production database that I can't easily reproduce via seeding in my local SQLite database. So I've decided to create a separate development database via a script that takes the latest snapshot of my production database and restores it as a new development instance. I've managed to get this script running with some degree of success...
But I'm having difficulty connecting to this development database from my local development environment. Each time I try to connect, I time out. Most of the resources on Amazon seem to indicate that this is likely a security group issue. The security group corresponding to my database currently has these inbound settings (security group ID redacted, but it is the group listed as my RDS security group):
Is there something obviously wrong here? How do I set up my security groups such that I can connect to this development database on my local machine?
The source shouldn't be set to the same security group, but rather whatever source you'll be connecting from. You can use 0.0.0.0/0 to enable traffic from any source.
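If you would rather not open the database to the whole internet, here is a sketch of adding a rule for just your current public IP with boto3 (the security group ID and region are placeholders; checkip.amazonaws.com is used to look up your address):

import urllib.request
import boto3

# Look up the public IP of the machine you are developing on.
my_ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the RDS security group from the screenshot
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": f"{my_ip}/32", "Description": "local development machine"}],
    }],
)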