Blazor Server: save SignalR connection to database to prevent loss?

A SignalR connection is a connection between the server and a user, and it is "stored" in server memory.
There are some scenarios where the SignalR connection is lost and the user needs to reload the page:
When I need to restart the server.
When a user's phone sleeps and they come back to the website after some time.
Is it somehow possible to save these SignalR connection instances to a database, so that when I restart my server it reloads them into memory and the user is able to reconnect to his previous state?

You don't have to persist the connection. What you need to persist is the app/user session state. Go through this article:
https://learn.microsoft.com/en-us/aspnet/core/blazor/state-management?view=aspnetcore-5.0&pivots=server
If you are looking for mobile-app-style resume functionality, that is very hard to achieve.
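For example, a minimal sketch with ProtectedSessionStorage (registered by default in Blazor Server on ASP.NET Core 5.0+): the state lives encrypted in the browser's session storage, so it survives a lost circuit or even a server restart as long as the tab stays open and your data-protection keys persist. The "count" key and the component are just illustrative:

    @page "/counter"
    @using Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage
    @inject ProtectedSessionStorage SessionStorage

    <p>Current count: @currentCount</p>
    <button @onclick="IncrementCount">Click me</button>

    @code {
        private int currentCount;

        // Browser storage needs JS interop, so restore state after the
        // first render rather than in OnInitializedAsync.
        protected override async Task OnAfterRenderAsync(bool firstRender)
        {
            if (firstRender)
            {
                var result = await SessionStorage.GetAsync<int>("count");
                currentCount = result.Success ? result.Value : 0;
                StateHasChanged();
            }
        }

        private async Task IncrementCount()
        {
            currentCount++;
            // Persist on every change; the value lives in the browser,
            // so restarting the server doesn't wipe it.
            await SessionStorage.SetAsync("count", currentCount);
        }
    }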

I persist the connected users. In scaled services there is no guarantee you will get the same instance of the server. There is a feature called automatic reconnect on the hub connection you can look at; other than that, you should configure your clients to reconnect by monitoring the connection state. Either way you will get a new connectionId.
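For example, the .NET client can be configured like this (Microsoft.AspNetCore.SignalR.Client); the hub URL, the session key, and the "RestoreSession" hub method are placeholders for whatever re-association logic your app needs:

    using System;
    using Microsoft.AspNetCore.SignalR.Client;

    var userSessionKey = Guid.NewGuid().ToString();  // app-defined, hypothetical

    // Build a connection that retries automatically (by default after
    // 0, 2, 10, and 30 seconds, then gives up).
    var connection = new HubConnectionBuilder()
        .WithUrl("https://example.com/apphub")
        .WithAutomaticReconnect()
        .Build();

    // Every reconnect yields a new connectionId, so key your state on a
    // user/session identifier and re-associate it here.
    connection.Reconnected += async connectionId =>
    {
        await connection.InvokeAsync("RestoreSession", userSessionKey);
    };

    await connection.StartAsync();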

Related

How to have real-time read-only database duplication on different servers?

I have IoT object data in MongoDB and an admin panel to access it. The constraints are:
1. The IoT object must work with no internet connection (access to data to add/update).
2. The IoT object should not send too much data (it works over a SIM card).
3. The admin panel must have the IoT object's data even when disconnected (data from the latest time it was connected).
4. Data on the admin panel must be real time (refresh every 30s).
The admin panel does not have to modify the data. This looks like a real-time backup, but a full copy is not compatible with (2) and seems unnecessary since there will be little change between refreshes. I looked at replica sets (the IoT database would be primary and the server secondary), but with extended loss of connection a replica set is not designed to handle large data drift.
My idea is to listen for changes on the primary database using a change stream and then send a RabbitMQ message to the server with the changes (rough sketch below). Since the queue will only send each change once and there is no reconciliation of old data, I'm afraid it will not work 100% of the time, which can't be fixed without a complete backup of the primary.
The system will have about 100 IoT devices. How can I achieve this?
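For reference, here is a rough sketch of that change-stream idea (MongoDB.Driver + RabbitMQ.Client; the "iot", "readings", and "changes" names are placeholders). Note that change streams are resumable: if you persist the resume token, the listener can pick up where it left off after a disconnect, which addresses the "only sends changes once" worry:

    using System.Text;
    using MongoDB.Bson;
    using MongoDB.Driver;
    using RabbitMQ.Client;

    var mongo = new MongoClient("mongodb://localhost:27017");
    var readings = mongo.GetDatabase("iot").GetCollection<BsonDocument>("readings");

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();
    channel.QueueDeclare("changes", durable: true, exclusive: false, autoDelete: false);

    // Tail the change stream and forward each event to the queue.
    using var cursor = readings.Watch();
    foreach (var change in cursor.ToEnumerable())
    {
        var body = Encoding.UTF8.GetBytes(change.BackingDocument.ToJson());
        channel.BasicPublish(exchange: "", routingKey: "changes",
                             basicProperties: null, body: body);
        // Durably store change.ResumeToken here so Watch() can resume
        // from it after a crash or network outage.
    }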

Best practices for sending data from a local DB to a remote API on restored connection?

During development of an app an essential question appeared: what is the best way to send data from a local DB (in the current case, Hive) to a remote API? The important constraint is that data from the local DB should be sent only when the connection was down earlier. For example, you write a message and try to send it, but there is no internet connection, so the data that should be sent is saved to the local DB. The next time you open the app, it should check the connection state, check the stored data in the local DB, send it, and clear the local DB after a successful send.
What is the best way to do this in a Dart/Flutter/bloc application? At the moment the local data is sent from the StatefulWidget's initState(). Is there a better way/practice?
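The pattern being described is essentially an "outbox": persist pending messages locally and drain them once connectivity returns, deleting each one only after a successful send. A language-agnostic sketch (in C# syntax here; a Flutter version would use a Hive box and a connectivity listener in the same shape, and every name below is hypothetical):

    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    public sealed record PendingMessage(string Id, string Payload);

    public interface ILocalStore          // e.g. a Hive box in Flutter
    {
        Task<IReadOnlyList<PendingMessage>> GetPendingAsync(CancellationToken ct);
        Task DeleteAsync(string id, CancellationToken ct);
    }

    public interface IRemoteApi
    {
        Task SendAsync(PendingMessage message, CancellationToken ct);  // throws on failure
    }

    public sealed class Outbox
    {
        private readonly ILocalStore _store;
        private readonly IRemoteApi _api;

        public Outbox(ILocalStore store, IRemoteApi api) =>
            (_store, _api) = (store, api);

        // Call this on app start and whenever connectivity is restored.
        public async Task DrainAsync(CancellationToken ct)
        {
            foreach (var message in await _store.GetPendingAsync(ct))
            {
                await _api.SendAsync(message, ct);
                await _store.DeleteAsync(message.Id, ct);  // only after success
            }
        }
    }

Driving DrainAsync from a connectivity callback rather than from initState keeps the sync logic out of the widget lifecycle.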

pub/sub pattern for a socket connection

I am developing an app with a C# client and a Go server. Now I would like to implement real-time update functionality, for example: when I am on a user's profile, I get their data in real time, so that if that user changes it, the page updates without needing a manual reload.
From what I have researched, this type of app usually uses Redis with a publisher/subscriber pattern, but I have not found anything on how to implement this in an app that maintains the connection to the server through sockets...
Since there is already a direct bidirectional connection, could this real-time functionality be built in some other way?
If anyone knows anything about this, I would appreciate any information.
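Something like the following is what I imagine on the server side: the client sends a subscribe frame naming a topic (e.g. "profile:42"), and the server pushes updates to every connection subscribed to that topic, with no Redis in between. Sketched in C# syntax for illustration; my actual server is Go, where this would be a map guarded by a mutex, and the frame format and names are assumptions:

    using System;
    using System.Collections.Concurrent;

    public sealed class TopicHub<TConnection> where TConnection : notnull
    {
        // topic -> set of subscribed connections (the byte value is unused)
        private readonly ConcurrentDictionary<string, ConcurrentDictionary<TConnection, byte>> _topics = new();

        public void Subscribe(string topic, TConnection conn) =>
            _topics.GetOrAdd(topic, _ => new()).TryAdd(conn, 0);

        public void Unsubscribe(string topic, TConnection conn)
        {
            if (_topics.TryGetValue(topic, out var subs))
                subs.TryRemove(conn, out _);
        }

        // The send callback writes one frame to one connection's socket.
        public void Publish(string topic, string payload, Action<TConnection, string> send)
        {
            if (!_topics.TryGetValue(topic, out var subs)) return;
            foreach (var conn in subs.Keys)
                send(conn, payload);
        }
    }

When a profile changes, the server would call Publish("profile:42", json, send) and every client watching that profile gets the update over its existing socket. Redis pub/sub only becomes necessary when several server instances all need to see the same event.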

How to send instance-wide notifications in PostgreSQL, across different databases?

I'm using PostgreSQL's NOTIFY command to send async events to inform external programs of the changes happening inside a database. It works perfectly, but now I have a new scenario: I need several databases within one instance of PostgreSQL.
As I've read in the documentation and tested myself, NOTIFY does not go beyond the borders of a database (to other databases within the same PostgreSQL instance):
Whenever the command NOTIFY channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application.
This means I have to listen for notifications on each database separately. And since I'm planning to let my users instantiate their own database on demand, it means I would have to open a new listener connection for each new database as well. That poses a challenge, and I would really prefer to have a constant number of listener connections, regardless of the number of databases.
Does anyone know how to send notifications across databases in PostgreSQL, or some other feature I can use?
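One idea I'm considering: have each tenant database forward its events into a single central database via the dblink extension (a trigger can run dblink_exec('dbname=hub', 'NOTIFY events, ''payload''')), so the external program only ever LISTENs there. For illustration, this is the kind of single listener I want to keep, sketched with Npgsql; the "hub" database, the "events" channel, and the connection string are placeholders:

    using System;
    using Npgsql;

    await using var conn = new NpgsqlConnection(
        "Host=localhost;Database=hub;Username=listener;Password=secret");
    await conn.OpenAsync();

    // Fires for every NOTIFY delivered on this connection.
    conn.Notification += (_, e) =>
        Console.WriteLine($"channel={e.Channel} payload={e.Payload}");

    await using (var cmd = new NpgsqlCommand("LISTEN events", conn))
        await cmd.ExecuteNonQueryAsync();

    while (true)
        await conn.WaitAsync();   // blocks until the next notification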

Per-session persistent sockets in a web application

I have a Perl web application (CGI::Application with ModPerl::Registry) which connects to an authenticated custom server over a socket and exchanges data (command/response) with it. Currently the web application connects to the server, authenticates, and disconnects on every page request, even for the same user.
Is there some way I can reuse the same socket over multiple page requests that share a common session id? Creating a separate daemon that proxies connections and makes them persistent is an option I am exploring, but I would like to know if there are any simpler solutions.
Unfortunately, I have no control over the design of the custom server.
Looks like the same question was asked on PerlMonks. The responses there point in the right direction, but the issue seems to be that you want one cached connection per session, not one cached connection per session per httpd thread/process. You might have to resort to a separate proxy process to get the behaviour you want.
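For illustration, the core of such a proxy is just a per-session map of authenticated upstream connections. A sketch of the idea, shown in C# rather than Perl since the shape is language-agnostic; every name here is hypothetical:

    using System.Collections.Concurrent;
    using System.Net.Sockets;

    public sealed class UpstreamPool
    {
        // session id -> one persistent, already-authenticated connection
        private readonly ConcurrentDictionary<string, TcpClient> _bySession = new();

        public TcpClient GetOrConnect(string sessionId, string host, int port) =>
            _bySession.GetOrAdd(sessionId, _ =>
            {
                var client = new TcpClient(host, port);   // dial once per session
                Authenticate(client);                     // run the custom login exchange
                return client;
            });

        private static void Authenticate(TcpClient client)
        {
            // ... the custom server's command/response login goes here ...
        }
    }

Because the pool lives in one long-running daemon process, it sidesteps the per-httpd-worker caching problem described above.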