How to handle multiple MongoDB database connections - mongodb

I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the lib nestjs-tenancy, which scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (bull). So in the queue consumer I create a new connection to the database I need to access, extracting the database info from the queue job data:
try {
  await mongoose.connect(
    `${this.configService.get('database.host')}/${job.data.tenant}?authSource=admin`,
  );
  const messageModel = mongoose.model(Message.name, MessageSchema);
  // do a lot of stuff with messageModel
  await messageModel.find();
  await messageModel.save();
  // ...
} finally {
  await mongoose.connection.close();
}
I have two different jobs that may run at the same time, for the same database.
I'm noticing that I sometimes get errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So for sure I'm doing something wrong. PS: in all my operations I use await.
Any clue what it could be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection or leave it "open"?
EDIT:
After some testing, I'm sure that I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still confused about whether I need to close the connection. Closing it, I'm sure I'm not overloading Mongo with a lot of connections, but at the same time every job will create a new connection, and I have hundreds of jobs running at once for different databases.
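For what it's worth, a pattern that avoids both re-connecting per job and leaking connections is to cache one createConnection() per tenant and reuse it across jobs, letting the driver's pool handle the concurrency. A minimal sketch, assuming a recent Mongoose (maxPoolSize option), a tenant name in job.data, and the Message/MessageSchema from the snippet above; the cache helper and pool size are illustrative, not part of nestjs-tenancy:

const mongoose = require('mongoose');
// Message and MessageSchema come from the app, as in the snippet above.

const host = process.env.DB_HOST; // e.g. mongodb://user:pass@mongo:27017 (placeholder)
const tenantConnections = new Map();

function getTenantConnection(tenant) {
  let conn = tenantConnections.get(tenant);
  if (!conn) {
    // createConnection() gives each tenant an isolated connection + pool
    conn = mongoose.createConnection(`${host}/${tenant}?authSource=admin`, {
      maxPoolSize: 10, // cap sockets per tenant instead of closing after every job
    });
    tenantConnections.set(tenant, conn);
  }
  return conn;
}

// In the queue consumer: scope the model to the tenant's connection and
// leave the connection open for the next job.
async function handleJob(job) {
  const conn = getTenantConnection(job.data.tenant);
  const messageModel = conn.model(Message.name, MessageSchema);
  return messageModel.find();
}

Since the connections live for the process lifetime, Mongo sees at most one pool per tenant per worker instead of one connection per job.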

Related

Socket IO + PostgreSQL + PostgreSQL Adapter... driving me to hell

I have been swimming through the official Socket.IO documentation but... I don't get it. I am working with two separate machines, both connected to a third machine with a PostgreSQL DB. When I run Socket.IO on a single machine, everything is fine: it works as it should. I add the adapter and the table creation that the documentation calls for, and all seems ok. I am using Sequelize, the table is created, and periodic DELETE and NOTIFY statements are issued against it. But that table, the place where the magic should happen, is... dry. There is no data input. EVER. So the table is always empty and, as far as I can tell, that is why when I launch the other machine, Socket.IO on machine 1 does not link with Socket.IO on machine 2: each machine creates its own separate rooms, and they are never related because the data that should link them is not being stored in the DB. Where am I messing up?
Sequelize is used as the ORM for this project.
Where I connect IO to the server, the adapter is launched and a table is created in the DB:
const startSocketIo = function (server) {
  sequelize.query(`
    CREATE TABLE IF NOT EXISTS socket_io_attachments (
      id bigserial UNIQUE,
      created_at timestamptz DEFAULT NOW(),
      payload bytea
    );
  `);

  io = new Server(server);
  io.adapter(createAdapter(sequelize));

  io.of('/').adapter.on('create-room', (room) => {
    console.warn(`room ${room} was created`);
  });

  io.on('connection', async (socket) => {
    // Verify User JWT
    const userAuth = await jwt.verifyToken(socket.handshake.auth['token']);
    if (!userAuth) {
      return socket.disconnect();
    }
And here is the other place where the adapter itself is called, during the "streaming" of the text. "Clients" holds the rooms and socket IDs created on each client login:
const clients = await io.sockets.adapter.rooms;
What I do not understand is this: ok, you launch it on a single machine and all is OK (because the socket data is stored in that machine's own memory), but when you try to reach that data from another server by logging into the same room... nothing, because the machine that is sending the data is not saving it anywhere outside itself. That is exactly what the adapter should be there for. According to the documentation, the table it creates only stores data in some specific situations, so... if that is not its purpose... why is it not making the connection between the two machines?
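For comparison, the documented setup for @socket.io/postgres-adapter passes a pg.Pool to createAdapter() rather than a Sequelize instance, with both machines pointing at the same database; a minimal sketch (connection details are placeholders, and the socket_io_attachments table from the question is still required):

const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/postgres-adapter');
const { Pool } = require('pg');

// Both machines must point this pool at the same PostgreSQL database.
const pool = new Pool({
  host: 'db.example.com', // placeholder
  database: 'mydb',       // placeholder
  user: 'postgres',
  password: 'secret',
  port: 5432,
});

const io = new Server(server); // `server` is the HTTP server, as in the question
io.adapter(createAdapter(pool));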

ORMLite --- After .commit , .setAutoCommit --- Connection NOT closed

I use ORMLite in a solution made up of a server and clients.
On the server side I use PostgreSQL, on the client side I use SQLite. In code, I use the same ORMLite methods, without caring which DB is being managed (Postgres or SQLite). I also use a pooled connection source.
I don't keep connections open: when I need an SQL query, ORMLite takes care of opening and closing the connection.
Sometimes I use the following code to perform a long background operation on the server side, i.e. on the PostgreSQL DB:
final ConnectionSource OGGETTO_ConnectionSource = ...... ;
final DatabaseConnection OGGETTO_DatabaseConnection =
    OGGETTO_ConnectionSource.getReadOnlyConnection( "tablename" ) ;

OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, false);
// do long operation with Sql Queries ;
OGGETTO_DAO.commit(OGGETTO_DatabaseConnection);
OGGETTO_DAO.setAutoCommit(OGGETTO_DatabaseConnection, true);
I noticed that the number of open connections kept increasing, until after some time it was so big that the server stopped (SqlException: "too many clients connected to DB").
I discovered that it's due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Of course I cannot add an "OGGETTO_ConnectionSource.close()" at the end, because that closes the pooled connection source.
If I add "OGGETTO_DatabaseConnection.close();" at the end, it doesn't work either; open connections continue to increase.
How can I solve it?
I discovered that it's due to the code snippet above: it seems that after this snippet the connection is not closed and remains open.
Let's RTFM. Here are the javadocs for the ConnectionSource.getReadOnlyConnection(...) method. I will quote:
Return a database connection suitable for read-only operations. After you are done,
you should call releaseConnection(DatabaseConnection).
You need to do something like the following code:
DatabaseConnection connection = connectionSource.getReadOnlyConnection("tablename");
try {
    dao.setAutoCommit(connection, false);
    try {
        // do long operation with Sql Queries
        ...
        dao.commit(connection);
    } finally {
        dao.setAutoCommit(connection, true);
    }
} finally {
    connectionSource.releaseConnection(connection);
}
BTW, this is approximately what the TransactionManager.callInTransaction(...) method does, although it has even more try/finally blocks to ensure that the connection's state is reset. You should probably switch to it. Here are the docs for ORMLite database transactions.

Can I reuse a connection in MongoDB? How do these connections actually work?

Trying to do some simple things with MongoDB, my mind got stuck on something that feels kinda strange to me.
client = MongoClient(connection_string)
db = client.database
print(db)
client.close()
I thought that when you make a connection, only that one is used throughout the rest of the code until the close() method. But it doesn't seem to work that way... I don't know how I ended up with 9 connections when it was supposed to be a single one, and even if each 'request' were a connection, there are too many of them.
For now it's not a big problem; it just bothers me that I don't know exactly how this works!
When you do new MongoClient(), you are not establishing just one connection. You are actually creating the client, which holds a connection pool. When you make one or more requests, the driver uses an available connection from the pool; when the operation completes, the connection goes back to the pool.
Calling the MongoClient constructor every time you need to talk to the db is very bad practice and incurs a handshake penalty each time. Use dependency injection or a singleton to hold the MongoClient.
According to the documentation, you should create one client per process.
Your code is correct if it is a single-threaded process. If you don't need more connections to the server, you can limit the pool size by specifying it explicitly:
client = MongoClient(host, port, maxPoolSize=<num>)
On the other hand, if the code might use the same connection again later, it is better to create the client once at the beginning and use it across the code.
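The same single-client pattern in Node with the official mongodb driver, as a minimal sketch (the URI env var and database name are placeholders):

const { MongoClient } = require('mongodb');

// One client per process; it owns the connection pool.
const client = new MongoClient(process.env.MONGO_URI, { maxPoolSize: 10 });

async function getDb() {
  await client.connect(); // safe to call repeatedly in driver 4.x+
  return client.db('app'); // placeholder database name
}

module.exports = { getDb };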

@hapi/lab: how to prevent creating multiple DB connections when testing the server?

I am developing an API server with Hapi. I use @hapi/lab for testing. I have different test files for different API modules/routes.
There is a server initialization in each test file. A DB connection is created when the server is initialized; consequently, multiple DB connections are created at the same time when I test the server. And I get warnings like this:
WARNING: Creating a duplicate database object for the same connection.
at Object.register (/home/.../node_modules/hapi-pg-promise/index.js:19:20)
at internals.Server.register (/home/.../node_modules/@hapi/hapi/lib/server.js:453:35)
at async Object.exports.compose (/home/.../node_modules/@hapi/glue/lib/index.js:46:9)
at async createServer (/home/.../server.js:10:115)
at async Immediate.<anonymous> (/home/.../node_modules/@hapi/lab/lib/runner.js:662:17)
So, is there an approach for testing a Hapi server across multiple files without creating multiple DB connections?
You will have to lazy-load the server. Let's assume you have a function serverFactory which returns the server. You could do this:
let serverFactoryObj;

function getServerFactory() {
  if (!serverFactoryObj) {
    serverFactoryObj = serverFactory();
  }
  return serverFactoryObj;
}
This way, no matter how many times you test the endpoints, you will always have a single server instance. Ideally you should always test with a single instance rather than creating/stopping a server for each test.
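A hypothetical test file using that helper with @hapi/lab (file names and the route are illustrative, and serverFactory is assumed to resolve to a started server):

// test/users.test.js
const Lab = require('@hapi/lab');
const { expect } = require('@hapi/code');
const { test, before } = exports.lab = Lab.script();
const { getServerFactory } = require('./server-helper'); // hypothetical shared module

let server;

before(async () => {
  // Every test file receives the same cached server (and DB connection)
  server = await getServerFactory();
});

test('responds to GET /health', async () => {
  const res = await server.inject({ method: 'GET', url: '/health' });
  expect(res.statusCode).to.equal(200);
});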

Mongoose connection that is rarely used (keep alive?)

In my application I have a DB to which the code will periodically connect, but it will be used quite rarely (maybe once a day/week).
Can I just create the connection at module (app) init and then use it across the module for the application's whole run lifecycle?
var conn = mongoose.createConnection(process.env.SOME_DB)
I'm not sure whether I should set a keepAlive option as suggested in the mongoose docs:
options.server.socketOptions = options.replset.socketOptions = { keepAlive: 1 };
mongoose.connect(uri, options);
or whether the standard auto-reconnect feature will be enough.
I'm also not sure what a "long running application" is. Any real-time service is effectively a long running application; should keepAlive be enabled for all such services in production?
I'm also not sure what connection pools are and how they affect this.
There is a reference to this in the Mongoose documentation:
http://mongoosejs.com/docs/connections.html
And yes, it's generally a good idea.
Connection pools are also explained in that document. Generally speaking, Mongoose keeps several socket connections open to the server/replica-set/mongos instances rather than just one, to allow concurrent processing of requests. Yes, even with async callbacks there is IO wait time, so a connection pool gives the driver another channel to talk on while one is busy.
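A minimal sketch of the connect-once-at-init approach (the env var is from the question; the pool size is a placeholder, and recent Mongoose versions take maxPoolSize directly in the options):

const mongoose = require('mongoose');

// Created once at module load and exported; the driver keeps the pool
// alive and reconnects on its own, so rare use is fine.
const conn = mongoose.createConnection(process.env.SOME_DB, {
  maxPoolSize: 5, // small pool for a rarely used DB
});

module.exports = conn;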