Connection of Deno to MongoDB fails

I'm trying to connect my Deno application to MongoDB, but I get an error.
import { MongoClient } from "https://deno.land/x/mongo@v0.21.2/mod.ts";
const client = new MongoClient();
await client.connect("mongodb+srv://deno:i4vY8AtCEhr6ReqB@sample.7jp1l.mongodb.net/deno?retryWrites=true&w=majority");
const db = client.database("notes");
export default db;
Everything seems to be fine, but when I run the app, I get this error:
error: Uncaught (in promise) Error: MongoError: "Connection failed: failed to lookup address information: nodename nor servname provided, or not known"
throw new MongoError(`Connection failed: ${e.message || e}`);
^
at MongoClient.connect (client.ts:93:15)
at async mongodb.ts:4:1

Two problems that I see:
The code snippet above only works with MongoDB installed on the local machine.
The connection string uses a DNS seed list, but the current library can't resolve it to a list of hosts.
To make it work with MongoDB Atlas, you need to call the connect method with different parameters and use a correct (static) host instead of the (dynamic) DNS seed list:
const client = new MongoClient();
const db = await client.connect({
  db: '<the db or collection you work with>',
  tls: true,
  servers: [
    {
      host: '<correct host - see below for how to get it>',
      port: 27017,
    },
  ],
  credential: {
    username: '<your username>',
    password: '<your password>',
    mechanism: 'SCRAM-SHA-1',
  },
});
How to get the correct host:
Open your cluster in MongoDB Atlas
Select the Connect button
Select the Connect to application option
Select Driver: Node.js and Version: 2.2.12 or later
Here you will see a list of hosts following the @ character
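For illustration, the legacy (Node.js 2.2.12 or later) connection string that Atlas shows looks roughly like the hypothetical example below; the shard hostnames after the @ character are the static hosts you can plug into the servers array above (all names here are placeholders):
mongodb://<user>:<password>@cluster0-shard-00-00.xxxxx.mongodb.net:27017,cluster0-shard-00-01.xxxxx.mongodb.net:27017,cluster0-shard-00-02.xxxxx.mongodb.net:27017/<db>?ssl=true&replicaSet=<replica-set-name>&authSource=admin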

Thank you @nthung.vlvn for the clue. Indeed, the host needs to be the primary shard. That fixed the "failed to lookup address information" error, but then I got another error saying my credentials are incorrect. I had to add db: "admin" to credential:
credential: {
  username: '<your username>',
  password: '<your password>',
  db: "admin",
  mechanism: 'SCRAM-SHA-1',
}
That is weird, because I do not have an admin db in my Atlas cluster. Anyway, it started to work.

Related

connecting to mongodb replicaSet with nestjs and typeorm is not working

Problem
I'm trying to connect to MongoDB with NestJS (^8.2.3) and TypeORM (^0.2.28).
In the test environment, connecting to a standalone MongoDB server works. For your information, the Node mongodb library version is ^3.6.2.
Production sample code (NestJS server)
I referred to the TypeORM code to write the MongoDB options:
import { TypeOrmModule } from '@nestjs/typeorm';
import { MongoConnectionOptions } from 'typeorm/driver/mongodb/MongoConnectionOptions';

export const configForOrmModule = TypeOrmModule.forRootAsync({
  imports: [],
  useFactory: async () => {
    const mongodbConfig: MongoConnectionOptions = {
      type: 'mongodb',
      username,
      // for replicaSet (production)
      hostReplicaSet: 'server1.example.com:20723,server2.example.com:20723,server.example.com:20723',
      replicaSet: 'replicaSetName',
      port: Number(port),
      password: encodeURIComponent(password),
      database,
      authSource,
      synchronize: true,
      useUnifiedTopology: true,
      entities: [Something],
    };
    return mongodbConfig;
  },
  inject: [],
});
But in the production environment, when the NestJS server tries to connect to the MongoDB replicaSet, it gets the server selection error below in a loop, over and over again. The interesting thing was that the domain the server tried to connect to was different from the replicaSet hosts (e.g. another-hostname, not included in server1.example.com:20723,server2.example.com:20723,server.example.com:20723). (+ edited: the other hostname is the actual physical server that the DNS name server.example.com points to)
01/28/2022, 2:39:16 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
MongoServerSelectionError: getaddrinfo ENOTFOUND <another-hostname>
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
What I've tried (but none of it worked):
remove the useUnifiedTopology: true option
downgrade the mongodb library version to 3.5.11 (I've read in the MongoDB community that there is a bug with topology after version 3.6)
use the host option instead of hostReplicaSet
If you need more information, please tell me. Thank you for your help.
It was a Kubernetes DNS issue. The hostReplicaSet server1.example.com:20723,... resolved to host1 (the physical server name, without example.com), but k8s doesn't know that name, so the connection failed.
There are two options:
update the Kubernetes /etc/hosts setting to add host1 -> host1.example.com
or update the MongoDB hostname host1 -> host1.example.com (see the sketch below)
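A minimal sketch of the second option in the mongo shell, assuming you can reconfigure the replica set from the primary; the member index and port are placeholders for illustration:
// Make the replica set member advertise its fully qualified hostname
cfg = rs.conf()
cfg.members[0].host = "host1.example.com:20723"  // index 0 assumed here
rs.reconfig(cfg)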

Getting correct socketPath for TypeORM config

I'm trying to connect a Cloud Run service to Cloud SQL postgres instance. I believe I'm nearly there, but am having some trouble getting the deployed instance to connect properly. My local environment can connect (via SSL) to the database intended for production, but the deployed version can't...
I'm using TypeORM, and have everything set up properly in the configuration...
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';
// SnakeNamingStrategy is assumed here to come from the typeorm-naming-strategies package
import { SnakeNamingStrategy } from 'typeorm-naming-strategies';

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        const socketPath = configService.get('DB_SOCKET_PATH');
        const extra = socketPath ? {
          socketPath: socketPath,
          ssl: {
            rejectUnauthorized: false,
            ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
            cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
            key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
          }
        } : {};
        return ({
          type: 'postgres',
          host: socketPath || configService.get('DB_HOST'),
          port: configService.get('DB_PORT'),
          username: configService.get('DB_USER'),
          password: configService.get('DB_PASS'),
          database: configService.get('DB_NAME'),
          extra: extra,
          entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
          namingStrategy: new SnakeNamingStrategy(),
          synchronize: true,
        });
      }
    })
  ]
})
export class DatabaseModule { }
Despite that, I'm getting an error when I try to use the socketPath as the host rather than the actual host variable (necessary for GCP). It seems that TypeORM is adding extra characters, /.s.PGSQL.5432, at the end of my connection string that I don't want. And just to clarify, the socket path is in the form /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
Error: connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
At an older point in time this used to work for me, but I guess something changed in the TypeORM library. Does anybody have any ideas on this? Thanks!
EDIT: As of now I've gotten it to connect to the server correctly, but it's now giving me an error that says the server doesn't support SSL connections, which makes no sense given that I can connect via SSL fine from my local machine...?
SOLUTION: The issue does not seem to be any code's fault, but rather some networking stuff on the GCP side. I configured the service and database to run through a VPC and then just used a private IP address for the host.
It seems that TypeORM is adding extra characters, /.s.PGSQL.5432
This is actually intended - the Postgres spec requires that Unix domain sockets end with this suffix.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
The error means that the socket wasn't found - usually because there was a misconfiguration and the Cloud SQL proxy couldn't start. You can check your logs at instance startup to see if the proxy left any errors, but generally it comes down to the following:
The Cloud SQL Admin API needs to be enabled
Your service account needs to have Cloud SQL Connect IAM role (or equivalent)
The service needs to be configured for Cloud SQL.
For a full list of instructions, see the Connecting from Cloud Run to Cloud SQL page.
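As a point of comparison, a minimal sketch of a TypeORM connection over the Cloud SQL Unix socket (assuming TypeORM 0.2.x with the pg driver; the instance connection name and env var names below are placeholders) could look like this - the driver appends /.s.PGSQL.5432 to the socket directory itself, so no extra SSL block is needed on the socket path:
import { ConnectionOptions, createConnection } from 'typeorm';

// Hypothetical example: host points at the Cloud SQL socket directory
const options: ConnectionOptions = {
  type: 'postgres',
  host: '/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>', // resolved to .../.s.PGSQL.5432 by the driver
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  entities: [],
  synchronize: false,
};

createConnection(options).then(() => console.log('connected over the Cloud SQL unix socket'));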

Connect to Amazon RDS PostgresQL Proxy with IAM Credentials using TypeORM

I'm trying to figure out how to connect to an RDS PG Proxy from within a Lambda function using TypeORM (so there are no issues establishing connections). I'm able to connect to the RDS instance from the Lambda function successfully - however, when I point the configuration at the proxy (by changing the environment variables in the Lambda function) I am greeted with the following error:
{
  "errorType": "Error",
  "errorMessage": "read ECONNRESET",
  "code": "ECONNRESET",
  "errno": "ECONNRESET",
  "syscall": "read",
  "stack": [
    "Error: read ECONNRESET",
    "    at TCP.onStreamRead (internal/stream_base_commons.js:205:27)"
  ]
}
Here is the code used to create the connection with TypeORM:
import * as fs from "fs";
import { RDS } from "aws-sdk";
import { Connection, ConnectionOptions, createConnection } from "typeorm";

// Kept outside the handler so the connection can be reused across invocations
let connection: Connection;

// Retrieve database connection options
const getDBConfig = (): ConnectionOptions => {
  // Use IAM-based authentication to connect
  const signer = new RDS.Signer({
    region: "us-east-1",
    username: process.env.USERNAME,
    hostname: process.env.HOSTNAME,
    port: 5432,
  });

  // Retrieve the password (auth token) dynamically from RDS
  const token = signer.getAuthToken({
    username: process.env.USERNAME,
  });

  // Return configuration object
  return {
    username: process.env.USERNAME,
    host: process.env.HOSTNAME,
    port: 5432,
    password: token,
    ssl: {
      ca: fs.readFileSync("./config/rds-ca-2019-root.pem").toString(),
    },
    type: "postgres",
    database: "postgres",
    synchronize: false,
    entities: [],
  };
};

// Inside the handler:
const config = getDBConfig();
connection = await createConnection(config);
In terms of the two environment variables, HOSTNAME is the URL provided by the RDS Proxy, and USERNAME is the username assigned within the secret for the RDS Proxy. Both the Lambda function and the RDS Proxy have been given admin access, just to ensure there's no interference there (I know this is horrible, and I will reduce privileges once I get this working!). IAM authentication has been set to required for the proxy.
Update 8/14/2020
This article explains how you would connect RDS MySQL Proxy with TypeORM, still have not figured out how to connect to a RDS PG Proxy though.
https://dev.to/vikasgarghb/rds-proxy-via-sam-15gn
I've finally found the instructions for setting up a DB user for PG in the AWS docs. Posting this here for anyone else having trouble finding them.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html#UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL
Basically, you just need to add the user to the existing rds_iam group:
CREATE USER lambda;
GRANT ALL PRIVILEGES ON DATABASE postgres TO lambda;
GRANT rds_iam TO lambda;

Compass - Unable to connect to any MongoDB Atlas database using compass or otherwise

I have been trying to connect to Atlas using the university.mongodb.com connection string:
mongodb+srv://m001-student:m001-mongodb-basics@cluster0-jxeqq.mongodb.net/test
But the compass GUI gives the following error:
queryTxt ETIMEOUT cluster-0-jxeqq.mongodb.net
I created my own cluster and tried again, but unfortunately the same error showed up. I then tried writing JavaScript code (the angle-bracket placeholders had the actual values in them):
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<user>:<password>@cluster0.amffz.gcp.mongodb.net/<dbname>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
  if (err) return console.log(err);
  const collection = client.db("test").collection("collection-1");
  client.close();
});
...which gave me this error:
Error: queryTxt ETIMEOUT cluster0.amffz.gcp.mongodb.net
    at QueryReqWrap.onresolve [as oncomplete] (dns.js:202:19) {
  errno: undefined,
  code: 'ETIMEOUT',
  syscall: 'queryTxt',
  hostname: 'cluster0.amffz.gcp.mongodb.net'
}
In Atlas, I set Network Access under the Security section to:
0.0.0.0/0
I disabled my firewall just to be sure that the connection is not being blocked:
sudo ufw disable
But nothing seems to work.
Any help?
[Edit]
System Config: Ubuntu 20.04 LTS
I figured out that the machine I was using had some problem with connectivity.
I figured out a way to solve this. I uninstalled MongoDB; it seemed like it had some configuration issues.
I removed the mongodb and mongoose npm packages as well.
Re-installed everything. And voilà, it got connected!

I can not connect to Postgres DB with Strapi on Heroku

Trying to deploy Strapi on Heroku with Postgres as described here
https://strapi.io/documentation/v3.x/deployment/heroku.html
But I get this error
error: no pg_hba.conf entry for host "84.212.51.43", user "ssqqeaz***", database "d6gtu***", SSL off
I use Heroku Postgres add-on.
My database config:
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        host: env('DATABASE_HOST', '127.0.0.1'),
        port: env.int('DATABASE_PORT', 27017),
        database: env('DATABASE_NAME', 'strapi'),
        username: env('DATABASE_USERNAME', ''),
        password: env('DATABASE_PASSWORD', ''),
      },
      options: {
        ssl: true
      },
    },
  },
});
Why? Please help!
Try changing ssl: true to ssl: false.
The current configuration you've posted will not work with a Heroku Postgres database. The primary concern here is that you're reading components of your Postgres database URL out of manually set config vars. Heroku strongly recommends against this because they may need to move the database to a new host in the case of disasters. DATABASE_URL is set by Heroku when you create a database on an app, and it's the one config var you can rely on to stay up to date. Moving on...
You will need to parse the username, password, host, port and database name out of the DATABASE_URL config var and supply those to the attributes of the settings block. Based on the error you provided, I can tell you're not presently doing this, because Heroku database usernames all start with a 'u', so something is very wrong if you get the error user "ssqqeaz***". As a first step you might try hard-coding these values in the settings block to make sure it works (make sure to rotate the credentials after you do it, or otherwise clean up your git history to prevent leaked creds). The pattern for a Postgres connection URL is something like this: postgres://$USERNAME:$PASSWORD@$HOSTNAME:$PORT/$DATABASE_NAME.
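For illustration, a minimal sketch of ./config/database.js that parses DATABASE_URL instead of separate config vars (assuming the pg-connection-string package is installed; the ssl line follows the env.bool('DATABASE_SSL', ...) pattern suggested in the next answer):
const { parse } = require('pg-connection-string');

module.exports = ({ env }) => {
  // Pull host, port, user, password and database out of Heroku's DATABASE_URL
  const { host, port, user, password, database } = parse(env('DATABASE_URL'));

  return {
    defaultConnection: 'default',
    connections: {
      default: {
        connector: 'bookshelf',
        settings: {
          client: 'postgres',
          host,
          port,
          database,
          username: user,
          password,
          ssl: env.bool('DATABASE_SSL', false),
        },
        options: {},
      },
    },
  };
};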
Not sure if it will help, but try moving your config around...
Remove ssl from the options key
Insert ssl after password inside the settings key
e.g.
ssl: env.bool('DATABASE_SSL', false),
Also check your app config vars inside Heroku and make sure you have the required Postgres config vars set up and that they match the Heroku-generated DATABASE_URL config var.
Lastly, check your ./config/server.js file and make sure your host is 0.0.0.0,
e.g.
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  admin: {
    auth: {
      secret: env('ADMIN_JWT_SECRET', '**********************************'),
    },
  },
});