Socket.IO + PostgreSQL + PostgreSQL Adapter... driving me to hell

I have been swimming through the official Socket.IO documentation but... I do not get it. I am working with two separate machines, both connected to a third machine that hosts a PostgreSQL DB. When I run Socket.IO on a single machine, everything is fine; it works as it should. I add the adapter and the table creation that the documentation describes, and all looks OK: I am using Sequelize, so fine, the table is created and a periodic DELETE and NOTIFY are issued against it. But that table, the place where the magic should happen, is... dry. No data ever goes in. NEVER. So the table is always empty and, as far as I can tell, this is why, when I launch the other machine, Socket.IO on machine 1 does not link up with Socket.IO on machine 2: each machine creates its own separate rooms and they are never related, because the data that should relate them is not being stored in the DB. Where am I messing up?
Sequelize is used as the ORM for this project.
This is where I connect Socket.IO to the server; the adapter is launched and a table is created in the DB:
const startSocketIo = function (server) {
  sequelize.query(`
    CREATE TABLE IF NOT EXISTS socket_io_attachments (
      id bigserial UNIQUE,
      created_at timestamptz DEFAULT NOW(),
      payload bytea
    );
  `);

  io = new Server(server);
  io.adapter(createAdapter(sequelize));

  io.of('/').adapter.on('create-room', (room) => {
    console.warn(`room ${room} was created`);
  });

  io.on('connection', async (socket) => {
    // Verify User JWT
    const userAuth = await jwt.verifyToken(socket.handshake.auth['token']);
    if (!userAuth) {
      return socket.disconnect();
    }
And here is the other place where the adapter itself is used, during the "streaming" of the text. clients holds the rooms and socket IDs created on each client login:
const clients = await io.sockets.adapter.rooms;
What I do not understand is this: OK, you launch it on a single machine and everything is fine (because that socket data is stored in the machine's own memory), but when you try to reach that data from another server by logging into the same room... nothing, because the machine that is sending the data is not saving it anywhere outside itself. That is exactly what the adapter is supposed to be there for. According to the documentation, the table it creates only stores data in some specific situations, so... if that is not the use I think it has... why is it not making the connection between those two machines?
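For reference, the setup shown in the @socket.io/postgres-adapter README wires the adapter to a pg.Pool from the pg package; below is a minimal sketch of that documented pattern, with placeholder connection settings, not this project's actual code.

const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/postgres-adapter');
const { Pool } = require('pg');

// Placeholder connection settings; the adapter uses this pool for its
// LISTEN/NOTIFY traffic and for the socket_io_attachments table.
const pool = new Pool({
  user: 'postgres',
  host: 'db-host',
  database: 'postgres',
  password: 'changeit',
  port: 5432,
});

pool.query(`
  CREATE TABLE IF NOT EXISTS socket_io_attachments (
    id bigserial UNIQUE,
    created_at timestamptz DEFAULT NOW(),
    payload bytea
  );
`);

const io = new Server();
io.adapter(createAdapter(pool));
io.listen(3000);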

Related

How to handle multiple MongoDB database connections

I have a multi-tenant application, i.e., each customer has their own database.
My stack is NestJS with MongoDB, and to route HTTP requests to the right database I use the nestjs-tenancy lib, which scopes the connection based on the request subdomain.
But this lib does not work when I need to execute something asynchronous, like a queue job (Bull). So in the queue consumer, I create a new connection to the database I need to access, extracting the info about which database to use from the queue job data:
try {
  await mongoose.connect(
    `${this.configService.get('database.host')}/${
      job.data.tenant
    }?authSource=admin`,
  );
  const messageModel = mongoose.model(
    Message.name,
    MessageSchema,
  );
  // do a lot of stuff with messageModel
  await messageModel.find()
  await messageModel.save()
  // ...
} finally {
  await mongoose.connection.close();
}
I have two different jobs that may run at the same time and for the same database.
I'm noticing that sometimes I'm getting some errors about the connection, like:
MongoExpiredSessionError: Cannot use a session that has ended
MongoNotConnectedError: Client must be connected before running operations
So for sure I'm doing something wrong. PS: I use await in all my operations.
Any clue of what can be?
Should I use const conn = mongoose.createConnection() instead of mongoose.connect()?
Should I always close the connection or leave it "open"?
EDIT:
After some testing, I'm sure that I can't use mongoose.connect(). Changing it to mongoose.createConnection() solved the issues. But I'm still unsure whether I need to close the connection. If I close it, I'm sure I'm not overloading Mongo with a lot of connections, but at the same time every job will create a new connection, and I have hundreds of jobs running at once for different databases.
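A rough sketch of that direction, assuming Mongoose 6+ (the helper name getTenantConnection and the connection cache are illustrative, not part of the original code): cache one connection per tenant and reuse it across jobs instead of opening and closing a connection in every consumer.

const mongoose = require('mongoose');

// One cached connection (as a promise) per tenant, shared by all jobs.
const tenantConnections = new Map();

function getTenantConnection(host, tenant) {
  if (!tenantConnections.has(tenant)) {
    const conn = mongoose.createConnection(`${host}/${tenant}?authSource=admin`);
    // asPromise() (Mongoose 6+) resolves once the connection is established.
    tenantConnections.set(tenant, conn.asPromise());
  }
  return tenantConnections.get(tenant);
}

// In the queue consumer (sketch):
// const conn = await getTenantConnection(this.configService.get('database.host'), job.data.tenant);
// const messageModel = conn.model(Message.name, MessageSchema);
// await messageModel.find();
// ...no connection.close() per job; connections are reused.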

Entity Framework/dotnet5.0 creates multiple connections

I have a very straightforward .NET Core application that creates a DB context:
services.AddDbContext<MyContext>(options =>
{
    var connectionString = settings.ConnectionString;
    options
        .UseMySql(
            connectionString, ServerVersion.AutoDetect(connectionString),
            x => x.MigrationsAssembly("My.Migrations")
        );
}, ServiceLifetime.Scoped);
All requests are asynchronous. The MySQL connector is from Pomelo.
It serves a separate HTTP client (Angular). The client sometimes sends more than 20 requests simultaneously; no other clients are connected to the API. It works smoothly, but from time to time I see an error that the DB user has created too many connections (over 10).
"MySqlConnector.MySqlException (0x80004005):
User 'user-name' has exceeded the 'max_user_connections'
resource (current value: 20)\r\n at
MySqlConnector.Core.ServerSession.ConnectAsync(ConnectionSettings cs
I managed to persuade the DB admin to increase the limit to 20 (the default for MySQL is 10), but it did not solve the issue completely and, in fact, is not a good way to solve it.
Is there a way to limit the number of connections the context creates?
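As a hedged aside (my own suggestion, not from the original post): MySqlConnector's connection pool can be capped through the connection string itself, so the pool never exceeds the account's max_user_connections. "Maximum Pool Size" is a documented MySqlConnector pooling option (the default is 100); all other values below are placeholders.

Server=my-server;User ID=user-name;Password=***;Database=my-db;Maximum Pool Size=8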

@hapi/lab: how to prevent creating multiple DB connections when testing the server?

I develop an API server with Hapi. I use @hapi/lab for testing and have different test files for different API modules/routes.
Each test file initializes the server, and a DB connection is created when the server is initialized. Consequently, multiple DB connections are created at the same time when I test the server, and I get warnings like this:
WARNING: Creating a duplicate database object for the same connection.
at Object.register (/home/.../node_modules/hapi-pg-promise/index.js:19:20)
at internals.Server.register (/home/.../node_modules/@hapi/hapi/lib/server.js:453:35)
at async Object.exports.compose (/home/.../node_modules/@hapi/glue/lib/index.js:46:9)
at async createServer (/home/.../server.js:10:115)
at async Immediate.<anonymous> (/home/.../node_modules/@hapi/lab/lib/runner.js:662:17)
So, is there an approach for testing a Hapi server across multiple files without creating multiple connections?
You will have to lazy-load the server. Let's assume you have a function serverFactory which returns the server. You could do this:
let serverFactoryObj;

function getServerFactory() {
  if (!serverFactoryObj) {
    serverFactoryObj = serverFactory();
  }
  return serverFactoryObj;
}
This way, no matter how many times you test the endpoints, you will always have a single server instance. Ideally you should always test against a single instance rather than creating/stopping a server for each test.
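For illustration, a test file could then go through that factory; this is only a sketch, and the require path, route, and experiment names are placeholders rather than anything from the original question.

const Lab = require('@hapi/lab');
const { expect } = require('@hapi/code');
const { getServerFactory } = require('./helpers/server'); // hypothetical path

const lab = exports.lab = Lab.script();

lab.experiment('users routes', () => {
  let server;

  lab.before(async () => {
    // Every test file calls the same lazy factory, so only one server
    // (and therefore one DB connection) is ever created.
    server = await getServerFactory();
  });

  lab.test('GET /users responds with 200', async () => {
    const res = await server.inject({ method: 'GET', url: '/users' });
    expect(res.statusCode).to.equal(200);
  });
});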

Meteor - using synchronised non-persistent / in-memory MongoDB on the server

In a Meteor app, real-time reactive updates between all connected clients are achieved by writing to collections and publishing/subscribing to the right data. In the normal case this also means database writes.
But what if I would like to sync particular data which does not need to be persistent, and I would like to save the overhead of writing to the database? Is it possible to use Minimongo or other in-memory caching on the server while still preserving DDP synchronisation to all clients?
Example
In my app I have multiple collapsed threads and I want to show which users have currently expanded a particular thread:
Viewed by: Mike, Johny, Steven ...
I can store the information in the threads collection or make a separate viewers collection and publish the information to the clients. But there is actually no point in making this information persistent and incurring the overhead of database writes.
I am confused by the collections documentation, which states:
OPTIONS
connection Object
The server connection that will manage this collection. Uses the default connection if not specified. Pass the return value of calling DDP.connect to specify a different server. Pass null to specify no connection.
and
... when you pass a name, here’s what happens:
...
On the client (and on the server if you specify a connection), a Minimongo instance is created.
But if I create a new collection and pass the options object with connection: null
// Creates a new Mongo collection and exports it
export const Presentations = new Mongo.Collection('presentations', { connection: null });

/**
 * Publications
 */
if (Meteor.isServer) {
  // This code only runs on the server
  Meteor.publish(PRESENTATION_BY_MAP_ID, (mapId) => {
    check(mapId, nonEmptyString);
    return Presentations.find({ matchingMapId: mapId });
  });
}
no data is being published to the clients.
TLDR: it's not possible.
There is no magic in Meteor that allows data to be synced between clients without the data passing through the MongoDB database. The whole sync process of publications and subscriptions is triggered by MongoDB writes. Hence, if you don't write to the database, you cannot sync data between clients (using the native pub/sub system available in Meteor).
After countless hours of trying everything possible, I found a way to do what I wanted:
export const Presentations = new Mongo.Collection('presentations', Meteor.isServer ? {connection: null} : {});
I checked MongoDB and no presentations collection is being created. Also, on every server restart the collection is empty. There is a small downside on the client: even when collectionHandle.ready() is truthy, findOne() first returns undefined and the data is synced afterwards.
I don't know if this is the right/preferable way, but it is the only one that has worked for me so far. I also tried leaving {connection: null} in the client code, but wasn't able to achieve any sync even though I implemented the added/changed/removed methods.
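For reference, a low-level publication along those lines could look roughly like this; it is only a sketch, the publication name is illustrative, and it assumes observeChanges works on the server-only collection.

// Sketch of a low-level publication for the server-only ({connection: null})
// collection, driving the DDP added/changed/removed messages by hand.
if (Meteor.isServer) {
  Meteor.publish('presentations.byMapId', function (mapId) {
    check(mapId, String);

    const handle = Presentations.find({ matchingMapId: mapId }).observeChanges({
      added: (id, fields) => this.added('presentations', id, fields),
      changed: (id, fields) => this.changed('presentations', id, fields),
      removed: (id) => this.removed('presentations', id),
    });

    this.ready();
    this.onStop(() => handle.stop());
  });
}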
Sadly, I wasn't able to get any further help, even in the Meteor forum here and here.

Azure SQL Server (elastic pool) connect slow

We are running a web application in Azure Web Apps using a database per customer (multiple accounts per customer). When logging in, we connect the user to the correct customer database. This database is also hosted in Azure (in an elastic pool), in the same region (West Europe) as the Web App.
Once the connection is pooled, request times are fast, but the first time a user logs in, the connection still needs to be created, and this takes (quite) a long time.
The connection string is built up using a SqlConnectionStringBuilder:
var csb = new System.Data.SqlClient.SqlConnectionStringBuilder();
csb.DataSource = "tcp:******.database.windows.net,1433";
csb.InitialCatalog = "***-***-***";
csb.UserID = "**-**";
csb.Password = "**********";
csb.MultipleActiveResultSets = true;
csb.Encrypt = true;
csb.PersistSecurityInfo = false;
csb.TrustServerCertificate = false;
csb.ConnectTimeout = 30;
_connectionString = csb.ConnectionString;
// Data Source=tcp:******.database.windows.net,1433;Initial Catalog=***-***-***;Persist Security Info=False;User ID=**-***;Password=******;MultipleActiveResultSets=True;Connect Timeout=30;Encrypt=True;TrustServerCertificate=False
Am I doing anything wrong? Or are there some settings in Azure to speed up the connect process?
The request above shows the first request a customer makes to the application. It therefore includes the EF Migration Seed, which results in the first two queries not actually going to the database itself, plus quite a lot of queries (not all shown here) that do go to the database.
Well, I solved my problem eventually. It seems I was matching the wrong queries within Application Insights. I installed Stackify and this gave me just the little bit of extra information I needed.
It seems Entity Framework does some things with the 'master' database. As the user in the connection string did not have access to the 'master' database, this throws an error. Handling that error takes up quite some time on the App Service, which is why the request returns slowly; it just doesn't fail.
What EF tries to do is determine whether the database exists by querying the master database, which is faster than connecting to a non-existing database. If that fails because it cannot connect to the master database, EF just tries to connect to the database itself. If the connection to the database works, it continues normal execution, like the seed method.