Error creating Glue connection with DocumentDB - mongodb

I have created a connection in Glue with a DocumentDB cluster. The cluster is running, I can connect to it from my laptop, and I can also run Athena queries over it. The connection URL in Glue follows this format:
mongodb://host:27017/database
When creating the connection I have tried both enabling and disabling the SSL connection option.
I have also disabled TLS on the cluster and rebooted the database. Every time I test the connection from Glue I get this error:
Check that your connection definition references your Mongo database with correct URL syntax, username, and password.
Exiting with error code 30
I have also tried setting the username and password in the URL, but I get the same error.
How can I solve this?
Thanks!!!

First of all, does the "database" actually exists in DocumentDB cluster? Make sure you select the right VPC for Glue, has to be the same as DocumentDB. When using the Test Connection option, one of the security groups has to have an allow all rule, or the source security group in your inbound rule can be restricted to the same security group.
This blog post has some good info on how to setup a Glue connection to MongoDB/DocumentDB.
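If you prefer to create the connection programmatically while checking these points, here is a minimal boto3 sketch; the connection name, credentials, subnet, security group and availability zone are placeholders, and the same VPC/security-group requirements described above still apply.

import boto3

glue = boto3.client("glue", region_name="eu-west-1")

glue.create_connection(
    ConnectionInput={
        "Name": "docdb-connection",                      # hypothetical name
        "ConnectionType": "MONGODB",
        "ConnectionProperties": {
            "CONNECTION_URL": "mongodb://host:27017/database",
            "USERNAME": "glueuser",                      # hypothetical credentials
            "PASSWORD": "secret",
        },
        # Must point at the same VPC as the DocumentDB cluster, with a
        # security group that allows traffic to itself (self-referencing rule).
        "PhysicalConnectionRequirements": {
            "SubnetId": "subnet-0123456789abcdef0",
            "SecurityGroupIdList": ["sg-0123456789abcdef0"],
            "AvailabilityZone": "eu-west-1a",
        },
    }
)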

I have solved the problem. Disabling TLS both on DocumentDB and in the Glue connection works. I still have to find a way to make it work with TLS enabled.
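For reference, once TLS is re-enabled on the cluster, a plain pymongo client usually needs the Amazon RDS CA bundle to connect; here is a minimal sketch under that assumption (the host and credentials are placeholders, and the CA bundle file is downloaded from the AWS RDS documentation):

from pymongo import MongoClient

# rds-combined-ca-bundle.pem is the CA bundle published by AWS for
# DocumentDB/RDS TLS connections (path below is a local copy).
client = MongoClient(
    "mongodb://glueuser:secret@docdb-cluster.cluster-xxxx.eu-west-1.docdb.amazonaws.com:27017/database",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",
    retryWrites=False,  # DocumentDB does not support retryable writes
)
print(client.server_info()["version"])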

Related

Is it possible to limit user connection IP range with SQL instead of editing pg_hba.conf?

We are using AWS PostgreSQL RDS and we would like to restrict some accounts so they can only connect from a specific set of CIDR ranges. Since RDS is a managed DBMS, we do not have access to pg_hba.conf.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
By checking the CREATE ROLE and CREATE USER DDL in PostgreSQL, it does not seem to be an option.
https://www.postgresql.org/docs/current/sql-createrole.html
https://www.postgresql.org/docs/current/sql-createuser.html
You can try to write your own rules via a function/procedure check, using SELECT inet_server_addr() (just keep in mind that it only works for non-localhost connections).
There are also some other useful functions here (like local/remote IP/port): https://www.postgresql.org/docs/9.4/functions-info.html
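As a quick way to see what those functions return for a given session, here is a minimal psycopg2 sketch (the endpoint and credentials are placeholders); a server-side checking function would typically compare inet_client_addr() against the allowed CIDR ranges:

import psycopg2

conn = psycopg2.connect(
    host="mydb.abc123.eu-west-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    dbname="postgres",
    user="app_user",
    password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT inet_server_addr(), inet_client_addr()")
    server_ip, client_ip = cur.fetchone()
    print("server address:", server_ip)
    print("client address:", client_ip)
conn.close()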

No Data appearing in Redshift from MongoDB using AWS DMS

I am trying to test the AWS Database Migration Service (DMS) to fetch some data from MongoDB Atlas into a Redshift cluster. I have created the replication instance and both endpoints, successfully tested the connections, and created the replication task. The task loads and completes normally, but no data appears in Redshift, just empty tables.
Can anybody please help with what might be the reason?
OK, so first try to debug it: modify the task, enable CloudWatch logs, run the task again and look at the logs. Also enable the premigration data assessment task; you will need to add an IAM role for that. One mistake I made was that the MongoDB endpoint needs an atlasAdmin login; no other role will work. Tell me if that doesn't fix it. Also make sure all tasks have the correct database name, because capitalization matters. The same happened to me: my endpoint had the database name in lowercase and my task had it with a capital letter, so the task was successful but loaded no data. Lastly, and most importantly, make sure your MongoDB peering connection with AWS is correctly configured. I'm sure when you look at the logs there will be an unauthorized error.
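A minimal boto3 sketch of the "enable CloudWatch logs" step (the task ARN is a placeholder, and the task must be stopped before it can be modified):

import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE"  # hypothetical ARN

# Fetch the current task settings, flip logging on, and write them back.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]

settings = json.loads(task["ReplicationTaskSettings"])
settings["Logging"]["EnableLogging"] = True

dms.modify_replication_task(
    ReplicationTaskArn=task_arn,
    ReplicationTaskSettings=json.dumps(settings),
)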

Unable to connect from BigQuery job to Cloud SQL Postgres

I am not able to use the federated query capability from Google BigQuery to Google Cloud SQL Postgres. Google recently announced this federated query capability for BigQuery in beta.
I use the EXTERNAL_QUERY statement as described in the documentation, but I am not able to connect to my Cloud SQL instance. For example, with the query
SELECT * FROM EXTERNAL_QUERY('my-project.europe-north1.my-connection', 'SELECT * FROM mytable;');
or
SELECT * FROM EXTERNAL_QUERY("my-project.europe-north1.pg1", "SELECT * FROM INFORMATION_SCHEMA.TABLES;");
I receive this error:
Invalid table-valued function EXTERNAL_QUERY Connection to PostgreSQL server failed: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
Sometimes the error is this:
Error encountered during execution. Retrying may solve the problem.
I have followed the instructions on the page https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries and enabled the BigQuery Connection API. Some documents use different quotation marks for EXTERNAL_QUERY (“ or ‘ or ‘’’), but all the variants end with the same result.
I cannot see any errors in the Stackdriver Postgres logs. How could I correct this connectivity error? Any suggestions on how to debug it further?
Just adding another possibility for people using private-IP-only Cloud SQL instances. I've just encountered this and was wondering why it was still not working after making sure everything else looked right. According to the docs (as of 2021-11-13): "BigQuery Cloud SQL federation only supports Cloud SQL instances with public IP connectivity. Please configure public IP connectivity for your Cloud SQL instance."
I just tried it and it works, as long as the BigQuery query runs in the EU (as of today, 6 October, it works).
My example:
SELECT * FROM EXTERNAL_QUERY("projects/xxxxx-xxxxxx/locations/europe-west1/connections/xxxxxx", "SELECT * FROM data.datos_ingresos_netos")
Just substitute the first xxxxs with your project ID and the last ones with the name you gave to the connection in the BigQuery interface (not the Cloud SQL info; that goes into the inner query).
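The same working example can also be run from the BigQuery Python client; a minimal sketch follows (the project, location, connection name and inner table are placeholders, and the job location must match the connection's region):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
SELECT *
FROM EXTERNAL_QUERY(
  'projects/my-project/locations/europe-west1/connections/my-connection',
  'SELECT * FROM data.datos_ingresos_netos;')
"""

# location must match the region of the BigQuery connection resource.
job = client.query(sql, location="europe-west1")
for row in job.result():
    print(dict(row))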
Unfortunately, BigQuery federated queries to Cloud SQL currently work only in US regions (September 2019). The documents (https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries) say it should also work in other regions, but this is not the case.
I tested the setup from the original question multiple times in EU and europe-north1 but was not able to get it working. When I changed the setup to US or us-central1, it worked!
Federated queries to Cloud SQL are in preview, so the feature is evolving. Let's hope Google gets this working in other regions soon.
The BigQuery dataset and the Cloud SQL instance must be in the same region, or in the same location if the dataset is in a multi-region location such as US or EU.
Double-check this against the known issues listed in the documentation.
server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
This message usually means that you will find more information in the logs of the remote database server. No useful information could be sent to the "client" server because the connection disappeared, so there was no way to send it. So you have to look at the remote server's logs.

Connecting to Database (Postgres AWS RDS)

I am following the tutorial on how to set up and connect to a PostgreSQL server on AWS, found HERE.
When I try to sign in from Workbench, I get an error message.
At first I thought it was because my DB instance was not available yet, so I waited until it finished backing up. This did not seem to work, as I am still getting the error. I would appreciate assistance with this.
Have you created a security group and allowed the database connection port?
From the docs:
VPC Security Group(s): Select Create New Security Group. This will create a security group that will allow connection from the IP address of the device you are currently using, to the database created.
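If you want to rule out Workbench itself, a minimal psycopg2 sketch (the endpoint and credentials are placeholders) will tell you whether the instance is reachable at all; a timeout here usually points at the security group or the Public accessibility setting rather than at the client tool:

import psycopg2

try:
    conn = psycopg2.connect(
        host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
        port=5432,
        dbname="postgres",
        user="masteruser",
        password="secret",
        connect_timeout=5,
    )
    print("connected to:", conn.get_dsn_parameters()["host"])
    conn.close()
except psycopg2.OperationalError as exc:
    print("connection failed:", exc)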

How can I connect to MongoDB Atlas using Robomongo?

I signed up for the free tier of MongoDB Atlas and created a cluster. Now I want to know how I can create a database and connect to it using Robomongo.
1) In the MongoDB Atlas console, click on ALLOW ACCESS FROM ANYWHERE in the IP whitelist section; don't rely only on Add Current IP Address, otherwise it will not connect with Robomongo.
2) Now open Robomongo, select the Connection tab, select type Direct Connection, and put your primary cluster address in the Address field [you can get your primary cluster address from Project -> Clusters -> (choose) Primary Cluster].
3) Now click on the Authentication tab, set the database name to admin, and enter your username and password; the Auth Mechanism is SCRAM-SHA-1.
4) Select Self-signed Certificate as the Authentication Method.
5) Now click on Test, and we are done!
The standard Mongo URI connection schema has the form:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[database][?options]]
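As an illustration of that URI form, here is a minimal pymongo sketch (the hosts, replica set name, user and password are placeholders) that you can use to confirm the credentials and IP whitelist before configuring Robomongo:

from pymongo import MongoClient

# Replica-set style URI matching the schema above; values are hypothetical.
uri = (
    "mongodb://appUser:secret@cluster0-shard-00-00.abcde.mongodb.net:27017,"
    "cluster0-shard-00-01.abcde.mongodb.net:27017,"
    "cluster0-shard-00-02.abcde.mongodb.net:27017/admin"
    "?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin"
)
client = MongoClient(uri)
print(client.admin.command("ping"))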
Security note: do not allow access from everywhere for security reasons.
Restrict access to your own IP address instead.
You can also connect via Robo 3T using a secondary cluster node from MongoDB Atlas.
In case it helps others, Robo 3T version 1.3 and greater has a "From SRV" field where you can paste the SRV connection string, and it fills out the connection options correctly for you.
As of writing, you can get the connection string by clicking the "Connect" button next to your cluster dashboard's graphs and then clicking "Connect your application"; you will then see a screen with the connection string that you can copy.
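The same SRV string also works outside Robo 3T; a minimal pymongo sketch follows (the cluster address and credentials are hypothetical, and mongodb+srv URIs additionally require the dnspython package):

from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://appUser:secret@cluster0.abcde.mongodb.net/test"
    "?retryWrites=true&w=majority"
)
print(client.list_database_names())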
@kdblue, it's not working for me. But when I tried using the replica set, I was able to connect successfully.
Robo 3T version: 1.2.1
Steps followed:
In MongoDB Atlas (cloud.mongodb.com), copy all three replica set member names and note them down.
Now, in Robo 3T, on the Connection tab, select type Replica Set.
Provide a suitable name for your connection.
Now, in Members, add all three copied replica set members.
Provide authentication, if you have any, and follow the SSL steps (mandatory) as suggested by @kdblue in the previous answer.
You should be able to connect successfully now.
Thank you.
[Updated]
It is now possible to connect to a MongoDB Atlas 3.4 free cluster with the latest beta: Robomongo 1.1 - Beta version with MongoDB 3.4 support.
Direct connections do not work with replica sets in Robo3T.
And the cluster you create on Atlas is a 3-Node replica set.
Select Connection Type: Replica Set on the first tab
To find the 3 members in the new Atlas dashboard:
Click on Clusters in your Atlas dashboard.
Click the Collections button on the cluster.
Click the Overview tab in the next menu.
You will see the list of your replica set members (one primary and two secondaries).
Then follow @Balasubramani M's answer.
If you have the "TLS" instead of the "SSL" tab, don't get crazy.
Just do exactly the same that you would do with "SSL":
Mark the "Use TLS protocol" checkbox
Choose the "Self-signed Certificate" authentication method option
And that's all!
Instead of connecting with Robomongo, I would recommend you connect with Compass. It is an open-source GUI tool for connecting to your MongoDB Atlas deployment, and it is officially supported by MongoDB.
You can download Compass from https://www.mongodb.com/download-center/compass.
Additionally, many features are not supported in Robomongo.
Robomongo is a third-party tool, so even if you go to MongoDB, they will not support it.
Instructions for connecting your Atlas cluster with Compass can be found in the documentation: https://docs.atlas.mongodb.com/compass-connection/
However, if you encounter any issue even after following this answer, let me know and I will help you further.
No matter what I tried, it wouldn't work; all I had to end up doing was update to the latest version, at which point my old connection setup worked fine.
https://robomongo.org/download
Tip: I struggled updating an existing connection; no dice.
I created one from scratch using the above and connected on the first attempt.