Synchronize local changes to remote database - PostgreSQL

I have a requirement to set up two-way data sync between a main remote PostgreSQL DB and a local one. The main DB is used for multi-tenant access, and the local DB is used by only one tenant, so I need to sync only that tenant's data. The local DB is installed on the tenant's site and should be used only when the internet connection is down. So: when there is an internet connection, use the remote DB and sync all changes to the local one; when the internet is down, switch to the local DB; when the internet is back online, sync local changes to the remote DB.
I tried to figure out how to set this up, but it seems that Postgres replication isn't suitable for this case.
I need a tool that is capable of partial table sync in both directions. Should I perhaps consider changing the DB design?
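For the remote-to-local direction at least, built-in logical replication can already do tenant-filtered ("partial table") sync on PostgreSQL 15 and later via row filters on publications. A minimal sketch, assuming a hypothetical orders table with a tenant_id column and placeholder connection details:

    -- On the main (remote) server: publish only this tenant's rows.
    -- Row filters require PostgreSQL 15+; all names here are examples.
    CREATE PUBLICATION tenant_42_pub
        FOR TABLE orders WHERE (tenant_id = 42);

    -- On the tenant's local server: subscribe to that publication.
    CREATE SUBSCRIPTION tenant_42_sub
        CONNECTION 'host=main.example.com dbname=app user=repl'
        PUBLICATION tenant_42_pub;

Note this covers only one direction; pushing local changes back after an outage would need a second subscription the other way plus conflict handling, which is where the design question really bites.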

Related

SQL Live Backup Over Intermittent Connection

I have a few PCs that have local PostgreSQL databases running, just logging data. Data is only ever inserted, never removed or updated. The remote PCs are connected to the internet by cellular modem and depending on their location, often do not have internet access. When they do have an internet connection I would like them to push a copy of their databases to a central location and keep the remote database up to date with any new data. Essentially, I need an 'rsync' for databases.
At first it seemed like what I need is to set up PostgreSQL Hot-Standby, but I'm unsure whether this is actually what I need, because my situation seems to differ from the examples I've seen.
Each remote PC has a Postgres server with a single database that has a unique name; the tables within the DBs have generic names. I would like to synchronize these databases to a single remote Postgres server. I think this should be okay because the DB names are unique.
My connectivity is very intermittent: days to weeks without a connection. I've seen pgAdmin be very reliable despite a terrible (cellular) internet connection; if Postgres Hot-Standby is the same, I may be alright.
As far as I can see, my options are either to set up PostgreSQL Hot-Standby or to roll my own solution. I don't want to roll my own solution; however, it would be simple enough if I can't find anything better: a Python daemon run by systemd that finds the diff between the local and remote DB and pushes the new rows from the local to the remote DB. But I'm sure someone has solved this problem; I just haven't found the solution yet.
You don't need hot standby (which is the PostgreSQL term for being able to run queries on the replicated database), but streaming replication. You need a central standby server for each intermittently connected remote database server. If you use replication slots, the remote primary will retain the WAL its standby still needs, so replication can always catch up after an outage rather than falling irrecoverably behind.
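A minimal sketch of that setup, assuming PostgreSQL 12 or later, a replication role named repl, and placeholder host names; on the central server you'd keep one standby data directory per remote PC, each started as its own instance:

    # Run once per remote PC, from the central server. -C creates a
    # replication slot on the remote primary so it retains WAL while the
    # cellular link is down; -R writes standby.signal and primary_conninfo.
    pg_basebackup -h remote-pc-1.example.com -U repl \
        -D /srv/standby-pc1 -R -C --slot=central_pc1

    # Then start /srv/standby-pc1 as a separate instance on a free port.

One caveat with slots on links that are down for days to weeks: the remote primary must have enough disk to hold all the WAL that accumulates while the slot is inactive.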

Firebird local vs remote connection

I have a cloud service based on Firebird databases. Every customer has his own database file, so many connection definitions are loaded into my service at startup. This all works well.
Currently the load on the server is OK, so the database files are on the same machine as the service itself. Later I could extend it with another server.
My question is:
Does it matter whether I use a local Firebird connection, or should I prefer a remote connection (via TCP/IP), even though I am on the same machine?
Are there advantages, disadvantages, or any limits? This server gets a lot of requests.
I am using Firebird 2.5.7 (64-bit).
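For reference, the two variants being compared differ only in the connection string; a sketch with made-up Windows paths:

    # Local protocol (XNET on Windows) - no network stack, same machine only:
    C:\cloud\data\customer1.fdb

    # TCP/IP, here via loopback - the same form works from another machine:
    localhost:C:\cloud\data\customer1.fdb

The TCP/IP form keeps the option of moving the database files to a second server open without changing anything except the host part of the string.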

Replicate only certain databases

I have one instance of the database on a local server and another on a remote one, so when the internet is down, the application works on the local database. After the internet connection is reestablished, I want to sync the two databases. I have two questions:
How can I replicate only one database? (There are several databases on the instance.)
I have only succeeded in replicating instances on the same machine (when one host is localhost and the other is a remote instance, it throws an error). How can I replicate databases on different machines?
How can I replicate only one database? (There are several databases on the instance.)
You can make the other member an arbiter; an arbiter does not keep a copy of the data.
I have only succeeded in replicating instances on the same machine (when one host is localhost and the other is a remote instance, it throws an error). How can I replicate databases on different machines?
That looks like a network setup problem to me. Test Connections Between all Members.
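If this is a MongoDB replica set (the mention of an arbiter suggests it is), a minimal sketch of both steps, with a placeholder host name:

    // In the mongo shell on the primary: add a data-less voting member.
    rs.addArb("arbiter.example.com:27017")

    // Check that replication is healthy and every member is reachable:
    rs.status()

For the cross-machine error, also make sure each member can reach every other member by the exact host names used in the replica set config, not just by localhost.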

Mirror one database to another in PostgreSQL

I know the way to set up a master/slave DB in Postgres is to have two DB servers, but unfortunately I have only one server for now.
How can I mirror my production DB into another "backup DB" in "real time"? I want to give another user access to the mirrored DB, so that even if he does something there, it will not affect production.
Nothing stops you from setting up hot standby streaming replication, or another replication option like Londiste, between two PostgreSQL instances on the same computer.
The two copies of PostgreSQL must use different ports, but that's the only real restriction.
How to set up the second PostgreSQL instance depends on your operating system and how you installed PostgreSQL, which you have not mentioned.
You'll want to use streaming replication with hot standby if you want a read-only replica. If you want it to be read/write, then you can do a one-off copy of the database with pg_basebackup and not keep them in sync after that. Or you can use a tool like Londiste to replicate changes selectively.
You can run multiple instances of PostgreSQL on the same computer by using different ports.
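A minimal sketch of the read/write variant (the one-off copy mentioned above), with example paths and ports and a placeholder replication role; production is assumed to be on port 5432:

    # Clone the production cluster into a second data directory:
    pg_basebackup -h localhost -p 5432 -U repl -D /srv/mirror

    # Give the copy its own port, e.g. in /srv/mirror/postgresql.conf:
    #   port = 5433
    # Then start it as an independent instance:
    pg_ctl -D /srv/mirror -l /srv/mirror/startup.log start

Adding -R to the pg_basebackup call would instead turn the copy into a streaming standby (read-only, kept in sync) rather than an independent read/write clone.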

Incremental backups from server to local machine

My live site is using MongoDB to store user activity on the site.
I have a single server running MongoDB. I can't afford a second server for master/slave replication.
My problem is that I want to take a dump of the server's MongoDB database every day and restore it to my local machine so that I can query it locally. I know how to dump and restore, but the issue is that every day I have to dump the entire database from the server and restore it from scratch on my local machine, which takes a lot of time.
So my question is: is there any way to do incremental backups in MongoDB, so that I only have to dump and restore a single day's data? That would take much less time.
I do not know much about MongoDB, but I have an idea.
I think you can introduce your local MongoDB instance as a slave of the master production DB and, if possible, hide it from the live system, so that the live system never runs reads against your local copy.
This can work because slaves keep track of the master's writes and deletes and try to keep themselves an exact copy of the master.
Another good reason to do this is that a slave does not have to be online all the time: when it comes back online, it will check the master's operation list (its length, e.g. one hour or one day, is configurable on the master) and copy the data from the master as quickly as possible.
Once you have done the initial dump of the master to your local machine, you can then back up your data twice a day with this method, I think.
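In modern MongoDB terms, this master/slave idea is a replica set with a hidden member; a minimal sketch, with a placeholder host name, assuming the local mongod is started with --replSet matching the production set:

    // In the mongo shell on the production primary: add the local
    // machine as a hidden, non-voting member. Hidden members replicate
    // all data but are invisible to application reads.
    rs.add({
        host: "my-local-machine.example.com:27017",
        priority: 0,   // never eligible to become primary
        hidden: true,  // no client reads are routed here
        votes: 0       // does not affect elections
    })

After the initial sync, only the incremental changes from the oplog flow over the connection, which addresses the "dump everything every day" problem directly.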