SymmetricDS simple configuration guidance - PostgreSQL

Over the past couple of weeks I have been prototyping some examples in SymmetricDS, and I'm looking for some guidance and examples because I am really running into some walls. I have used the server and Android examples successfully, so I don't need any assistance with getting the basics working. It is a complex tool and I'm still learning it.
I am trying to set up an environment where all the clients, which run on Android devices, sync up to a server. I know it's fairly straightforward to do a setup with one master syncing both ways with multiple clients, as the provided example does.
What I am trying to do is multiple masters to multiple clients. Essentially I want a database on the server for each client. I'll attach a diagram to help explain, but I want a database for each store, so store #1 has a master DB on the server that syncs both ways with that store's client device.
[server diagram]

SymmetricDS requires a central node to store the configuration. I would recommend having a central node with a bunch of databases that connect to the central database, and connecting each Android application to its own database. This topology lets you configure what data syncs from the central node to each of those databases and what goes back.
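In sketch form, that topology comes down to a few rows in the SymmetricDS configuration tables on the central node. A minimal example, assuming node groups named 'server' and 'store', applied here through node-postgres (any SQL client would do):

```typescript
// A sketch, not official SymmetricDS setup: insert the two node groups and
// the bidirectional links between them. Node group names and the connection
// URL are placeholders.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.MASTER_DB_URL });

async function configureNodeGroups(): Promise<void> {
  await pool.query(`
    insert into sym_node_group (node_group_id, description)
    values ('server', 'Central server'),
           ('store',  'Store Android clients')`);

  // data_event_action: 'P' = source pushes to target, 'W' = source waits for pull.
  await pool.query(`
    insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action)
    values ('store',  'server', 'P'),  -- stores push their changes up
           ('server', 'store',  'W')   -- server waits for stores to pull
  `);
}

configureNodeGroups().catch(console.error).finally(() => pool.end());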

On the router from client to server you can set the target catalog to a variable: $(sourceExternalId). This will use the client's external id as the database name on your server.
If you also need to replicate data back down, you can set the external select on the triggers at the server. This needs to be an expression, evaluated on your server database, that returns the current database. It fires when a change occurs on the server database and populates the external_data column on sym_data during capture with the name of the database the change occurred in. You would then change the router from server to client to the column match router type, with the expression EXTERNAL_DATA=:EXTERNAL_ID. This ensures the data is only sent to the appropriate client.
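Here is a sketch of what those rows might look like ('my_table', the 'default' channel, and the router ids are placeholders; applied through node-postgres against the central PostgreSQL database):

```typescript
// A sketch of the routing rows described above, applied with node-postgres.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.MASTER_DB_URL });

async function configureRouting(): Promise<void> {
  // Store -> server: route rows into the catalog named after the client's external id.
  await pool.query(`
    insert into sym_router (router_id, source_node_group_id, target_node_group_id,
                            router_type, target_catalog_name, create_time, last_update_time)
    values ('store_2_server', 'store', 'server', 'default', '$(sourceExternalId)',
            current_timestamp, current_timestamp)`);

  // Server-side trigger: external_select records which database the change
  // happened in (current_database() is the PostgreSQL expression for that).
  await pool.query(`
    insert into sym_trigger (trigger_id, source_table_name, channel_id, external_select,
                             create_time, last_update_time)
    values ('my_table', 'my_table', 'default', 'select current_database()',
            current_timestamp, current_timestamp)`);

  // Server -> store: column match router keyed on the captured external data.
  await pool.query(`
    insert into sym_router (router_id, source_node_group_id, target_node_group_id,
                            router_type, router_expression, create_time, last_update_time)
    values ('server_2_store', 'server', 'store', 'column', 'EXTERNAL_DATA=:EXTERNAL_ID',
            current_timestamp, current_timestamp)`);

  // Link the trigger to the outbound router so captured changes actually route.
  await pool.query(`
    insert into sym_trigger_router (trigger_id, router_id, initial_load_order,
                                    create_time, last_update_time)
    values ('my_table', 'server_2_store', 1, current_timestamp, current_timestamp)`);
}

configureRouting().catch(console.error).finally(() => pool.end());
```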

Related

What are the differences (CPU, runtime, or otherwise) between using pg pool and pgp as a database connection in an Express server?

I've created a few apps which utilize a Postgres database, but in all of those projects I've used either the pool or the client function from the pg npm package. Recently I came across the pg-promise node package and was wondering whether there are any drawbacks to using pg-promise over pool or client. I'm just worried about changes in runtime that would affect how many clients the app could service at one time.
pg-promise is "Built on top of node-postgres". You're still using the same pools and clients.
Nothing changes regarding the number of connections your database will be able to handle, and unless you take a different approach to building your application (for example, using transactions where you previously didn't, or using individual clients instead of a pool), nothing will change regarding the number of clients your app will be able to serve.
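To make that concrete, here's a small side-by-side sketch (the connection string is a placeholder): both run the same query through a pool of the same size, because pg-promise delegates to pg underneath.

```typescript
// Same pooled query two ways; pg-promise wraps the pg driver rather than
// replacing it, so the connection behaviour is identical.
import { Pool } from "pg";
import pgPromise from "pg-promise";

const connectionString = "postgres://user:pass@localhost:5432/mydb"; // placeholder

// Plain pg: explicit pool.
const pool = new Pool({ connectionString, max: 10 });

async function withPg(): Promise<void> {
  const { rows } = await pool.query("select now() as ts");
  console.log("pg:", rows[0].ts);
}

// pg-promise: same underlying pool, promise-oriented API on top.
const pgp = pgPromise();
const db = pgp({ connectionString, max: 10 });

async function withPgPromise(): Promise<void> {
  const row = await db.one("select now() as ts");
  console.log("pg-promise:", row.ts);
}
```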

Can we switch to using authenticated access to MongoDB with no downtime?

We currently have a fairly complex Mongo environment with multiple query routers and data servers in different AWS regions using sharding and replication so that data will be initially written to a master shard in a local data server and then replicated to all regions.
When we first set this up we didn't add any security to the Mongo infrastructure and are using unauthenticated access for read and write. We now need to enable authentication so that the platform components that are writing data can use a single identity for write and read, and our system administrators can use their own user accounts for admin functionality.
The question is whether and how we can switch to using authentication without taking any downtime in the backend. We can change connection strings on the fly in the components that read and write to the DB, and can roll components in and out of load-balancers if we do need a restart. The concern is on the Mongo side.
Can we enable authentication without having to restart?
Can we continue to allow open access from an anonymous user after enabling authentication (to allow backward compatibility while we update the connection strings)?
If not, can we change the connection strings before we enable authentication and have Mongo accept the connection requests even though it isn't authenticating?
Can we add authorization to our DBs and Collections after the fact?
Will there be any risk to replication as we go through this process? We have a couple of TB of data and if things get out of sync it's very difficult to force a resync.
I'm sure I'm missing some things, so any thoughts here will be much appreciated.
Thanks,
Ian
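One hedged note on the rollout: MongoDB 3.4+ has a --transitionToAuth server option (used together with a keyfile) under which mongod/mongos accept both authenticated and anonymous connections, which is the backward-compatibility window asked about above. The client side of the cutover might look like this sketch (Node driver assumed, since the question doesn't say what the components use; hostnames and the platform_writer user are made up):

```typescript
// Sketch of the client-side connection-string swap during a rolling auth
// cutover. Hostnames and credentials below are placeholders.
import { MongoClient } from "mongodb";

// Old components still connect anonymously during the transition window:
const legacy = new MongoClient("mongodb://router1.example.com:27017/app");

// Updated components already authenticate as the single platform identity:
const updated = new MongoClient(
  "mongodb://platform_writer:s3cret@router1.example.com:27017/app?authSource=admin"
);

async function main(): Promise<void> {
  await legacy.connect();  // accepted only while transitionToAuth is active
  await updated.connect(); // keeps working after auth is fully enforced
  await legacy.close();
  await updated.close();
}

main().catch(console.error);
```

Once every component is on the new connection string, the servers can be restarted once more without transitionToAuth to enforce authentication; replica set members authenticate to each other with the keyfile throughout, so replication itself shouldn't be interrupted.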

How to handle username/pass changes in a distributed REST application?

I have a distributed REST application, written in C++, with an integrated SQLite DB. The application is self-contained: no Apache or IIS server, and no external MySQL. The application is the logic behind a hardware sensor: it monitors the sensor(s), identifies and stores data of interest, and generates "events" when data of interest repeats. The creation of data of interest is synchronized across the Internet to multiple instances of the application, using REST to communicate the synchronization.
Using basic authentication over https, each instance maintains a local key/value store of remote instances' user/pass authentication data. This is necessary because each communication with a remote instance of the application requires authentication.
My question is how to handle the situation when the human operator changes either the username or password in the application, while the application is in active synchronization with remote instances.
I'm thinking this is really no different from any other material application data changing: when a local username/password changes, a REST communication is posted to each synchronized instance, containing the changed data for that remote's local key/value store. Any communications that fail get queued for when that remote is back, as that is material information the remote needs to maintain synchronization.
Because the communications occur over https, the fact that authentication data is being passed around is okay.
I thought I might need special logic to handle the race condition where one instance tries to communicate with another, but the other has just changed its authentication fields. With my current logic the sender will queue the failed communication, and when the remote sends its updated authentication data, the locally queued communications will start succeeding. So that does not appear to be an issue.
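In sketch form (TypeScript for brevity, though the app itself is C++, and every identifier here is made up), that queue-on-401 logic might look like:

```typescript
// Sends that fail with 401 are parked per-remote; a credential update from
// that remote drains its queue. Illustrative names only.
type Credentials = { user: string; pass: string };

const credentials = new Map<string, Credentials>(); // remote id -> auth data
const pending = new Map<string, object[]>();        // remote id -> queued payloads

async function send(remoteId: string, url: string, payload: object): Promise<void> {
  const cred = credentials.get(remoteId);
  const auth = cred ? Buffer.from(`${cred.user}:${cred.pass}`).toString("base64") : "";
  const res = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (res.status === 401) {
    // The remote has probably rotated its credentials; park the message until
    // it POSTs its new username/password to us.
    pending.set(remoteId, [...(pending.get(remoteId) ?? []), payload]);
  }
}

// Called when a remote pushes its changed username/password over https.
async function onCredentialUpdate(remoteId: string, url: string, cred: Credentials) {
  credentials.set(remoteId, cred);
  const backlog = pending.get(remoteId) ?? [];
  pending.set(remoteId, []);
  for (const payload of backlog) await send(remoteId, url, payload); // drain queue
}
```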
I guess this is a request for anyone that's been here before, what did you do? Maybe my search terms are weak here, because I'm not finding discussion of this issue.

Loopback.io backup server and server-to-server replication

I am thinking of adopting Loopback.io to create a REST API. I may need the following approach: an inTERnet server (run by me) to which clients connect, plus a fallback inTRAnet server to which clients connect only when the internet connection is down. This secondary fallback server should then replicate data to the main server when the internet connection is up and running again. As clients are on the same inTRAnet, they should be able to switch automatically to the fallback server. Is this possible as an idea, and if so, what do you recommend I start digging into?
Thank you all!
Matteo
Simon from my other account. I believe what you want is possible as you can use whatever client side technology you want with LoopBack. As for easy solutions, I'm not familiar enough with Cordova to give any insight there.
It is definitely possible, but I suggest going through the getting started tutorial first. You'd probably create two application servers and have another proxy in front to route the requests to server A or B based on a heartbeat from the main server. You would have to code all the logic and set up the infrastructure yourself, though.
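For illustration, a hand-rolled version of that heartbeat proxy might look like the sketch below (URLs, port, and the /health endpoint are placeholders; in practice something like nginx, HAProxy, or node-http-proxy would do this job):

```typescript
// Forward to the main server while its health endpoint answers, otherwise
// to the intranet fallback. Minimal sketch, not production routing.
import http from "node:http";

const MAIN = "http://main.example.com:3000";
const FALLBACK = "http://fallback.local:3000";
let target = MAIN;

// Poll the main server; switch targets based on the heartbeat.
setInterval(async () => {
  try {
    const res = await fetch(`${MAIN}/health`, { signal: AbortSignal.timeout(2000) });
    target = res.ok ? MAIN : FALLBACK;
  } catch {
    target = FALLBACK;
  }
}, 5000);

// Minimal pass-through proxy.
http.createServer((req, res) => {
  const upstream = new URL(req.url ?? "/", target);
  const proxied = http.request(
    upstream,
    { method: req.method, headers: req.headers },
    (up) => {
      res.writeHead(up.statusCode ?? 502, up.headers);
      up.pipe(res);
    }
  );
  proxied.on("error", () => { res.writeHead(502); res.end(); });
  req.pipe(proxied);
}).listen(8080);
```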

Convert an always-connected application to an occasionally connected application

I have an existing client-server 3-tier application with the following stack :
Smart-Client (Win-Forms)
IIS/ASP.NET
Sql server
Some of the data is stored in Entity–attribute–value (EAV) model.
All primary keys are integer identity columns.
Database operations are mostly performed using stored procedures.
I am tasked with converting this application into an occasionally connected application (OCA).
There should be no issues with installation or resource limitations on the clients.
This is the first such project for me.
I have done some reading about
MS Sync Framework
Enterprise Library / Occasionally Connected Smart Clients
SQL Server replication
In order to preserve existing code and limit the change impact, I am considering installing the 3-tier application on each client and using the Sync Framework at the web-service layer to handle synchronization. There would also be one master server to which synchronizations will refer.
Does this solution look feasible?
Are there any other resources regarding converting an always-connected 3-tier application to an occasionally connected application?
Thank you.
Should be feasible; not much change in your app. You just have to install a local database on your clients.
However, you're using identity columns. Unless you partition your identity values (client 1 is 1-1000, client 2 is 1001-2000, etc.), you will duplicate IDs when you upload them.
Have a look at this: Database Sync: SQL Server and SQL Express N-Tier with WCF
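To illustrate the range idea (a sketch only; the Orders table, the range size, and use of the mssql package are my assumptions), you'd reseed each client's local identity so the ranges can't collide on upload:

```typescript
// Give each client's local database its own non-overlapping identity range
// before it starts generating rows.
import * as sql from "mssql";

const RANGE = 1_000_000; // IDs per client; pick something you won't outgrow

async function seedClient(clientNo: number, config: sql.config): Promise<void> {
  const pool = await sql.connect(config);
  // Client 1 starts at 1,000,000, client 2 at 2,000,000, and so on; the
  // central server keeps 0..999,999 for itself.
  await pool
    .request()
    .query(`DBCC CHECKIDENT ('dbo.Orders', RESEED, ${clientNo * RANGE})`);
  await pool.close();
}
```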