I would like to sync a set of tables between two databases, transparently and without application code changes. My idea is to create insert, update and delete triggers on the source database tables that replicate the data to the destination database tables via dblink, seamlessly.
The problem is that changes to the source tables are always made inside a transaction. The triggers automatically replicate the changes to the destination tables, but if the source transaction is rolled back, the changes in the destination tables are not.
Is there a way to automatically sync transaction begin and commit/rollback between the two databases? A trigger-like behavior would be ideal.
Yes, this has been possible for ages, using Slony-I or other trigger-based replication solutions.
Row updates are logged to special tables on the "master" side and replayed asynchronously on the "slave" side.
An external program/daemon is used to synchronise the changes.
See http://www.postgresql.org/docs/current/static/different-replication-solutions.html and http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling#Replication for more information.
When 9.3 comes out, check out postgres_fdw: http://www.postgresql.org/docs/9.3/static/postgres-fdw.html. It lets you roll back transactions in foreign databases.
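For illustration, here is a minimal postgres_fdw sketch (the server name, connection options and table definition are made up) showing that a write through a foreign table is rolled back together with the local transaction:

CREATE EXTENSION postgres_fdw;

-- connection to the destination database (hypothetical host/dbname)
CREATE SERVER dest_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'dest-host', dbname 'destdb');
CREATE USER MAPPING FOR CURRENT_USER SERVER dest_srv
    OPTIONS (user 'repl', password 'secret');

-- local stand-in for the remote users table
CREATE FOREIGN TABLE remote_users (id int, name text)
    SERVER dest_srv OPTIONS (table_name 'users');

BEGIN;
INSERT INTO remote_users VALUES (1, 'alice');
ROLLBACK;   -- the insert on the remote side is rolled back as well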
Related
I use PostgreSQL and have many databases on one server. There is one database that I use the most, say 'main'. This 'main' database has many tables inside it, and the other databases also have many tables inside them.
What I want to do is: whenever a new row is inserted into the 'main.users' table, I want to insert the same data into the 'users' table of the other databases. How can I do that in PostgreSQL? Similarly, I want to do the same for all other actions like UPDATE, DELETE etc.
I have gone through the "logical replication" concept as suggested by you. In my case I know the source database name up front, but I only learn the target database name as part of the query, so it is going to be dynamic.
How can I achieve this? Is there any database concept available in PostgreSQL for it? Other possible approaches are welcome as well. Please share some ideas on this.
If this is all on the same Postgres instance (aka "cluster"), then I would recommend using foreign tables to access the tables from the "main" database in the other databases.
Those foreign tables look like "local" tables inside each database, but they access the original data in the source database directly, so there is no need to synchronize anything.
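A minimal sketch of that setup, assuming the shared table is main.users and using made-up names for the server and user mapping; since both databases live in the same cluster, only the database name needs to be given:

-- run inside each of the other databases
CREATE EXTENSION postgres_fdw;

CREATE SERVER main_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (dbname 'main');                 -- same cluster, default host/port
CREATE USER MAPPING FOR CURRENT_USER SERVER main_srv
    OPTIONS (user 'app_user', password 'secret');

-- makes main's users table visible as a local table called users
IMPORT FOREIGN SCHEMA public LIMIT TO (users)
    FROM SERVER main_srv INTO public;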
Upgrade to a recent PostgreSQL release and use logical replication.
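A minimal sketch of what that looks like, with made-up publication, subscription and connection details. Note that when source and target databases are in the same cluster, the replication slot has to be created separately (create_slot = false), otherwise CREATE SUBSCRIPTION will hang:

-- in the source database (e.g. 'main')
CREATE PUBLICATION users_pub FOR TABLE users;

-- in each target database; the table must already exist with the same definition
CREATE TABLE users (id int PRIMARY KEY, name text);
CREATE SUBSCRIPTION users_sub
    CONNECTION 'host=source-host dbname=main user=repl password=secret'
    PUBLICATION users_pub;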
Add a trigger on the table in the master database that uses dblink to access and write to the other databases.
Be sure to consider what should happen if the row already exists remotely, or if the remote server is unreachable.
Also note that updates propagated using dblink are not rolled back if the invoking transaction is rolled back.
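A minimal sketch of such a trigger (the connection string, table and column names are made up, and the existing-row and unreachable-server cases are deliberately ignored here):

CREATE EXTENSION dblink;

CREATE FUNCTION replicate_users_insert() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec(
        'host=other-host dbname=otherdb user=repl password=secret',
        format('INSERT INTO users (id, name) VALUES (%L, %L)', NEW.id, NEW.name));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_replicate_insert
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE FUNCTION replicate_users_insert();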
I need some advice about the following scenario.
I have multiple embedded systems supporting PostgreSQL database running at different places and we have a server running on CentOS at our premises.
Each system runs at a remote location and has multiple tables inside its database. These tables have the same names as the corresponding tables on the server, but each system's table names differ from the other systems', e.g.:
system 1 has tables:
sys1_table1
sys1_table2
system 2 has tables:
sys2_table1
sys2_table2
I want to update the tables sys1_table1, sys1_table2, sys2_table1 and sys2_table2 on the server on every insert done on system 1 and system 2.
One solution is to write a trigger on each table that runs on every insert on both systems' tables and inserts the same data into the server's tables. This trigger would also delete the records on the system after inserting the data into the server. The problem with this solution is that if the connection to the server cannot be established due to a network issue, the trigger will not execute, or the insert will be lost. I have checked the following solution for this:
Trigger to insert rows in remote database after deletion
The second solution is to replicate the tables from system 1 and system 2 to the server's tables. The problem with replication is that if we delete data from the systems, the corresponding records on the server are deleted as well. I could add an alternative trigger on the server's tables that copies the rows into a duplicate table, so the replicated table can become empty without affecting the data, but that would result in a very long list of tables if we have more than 200 systems.
The third solution is to create a foreign table using postgres_fdw or dblink and update the data inside the server's tables, but won't that affect the data on the server when we delete data inside the system's tables? And what will happen if there is no connectivity to the server?
The fourth solution is to write an application in Python on each system that connects to the server's database and writes the data in real time. If there is no connectivity to the server, it stores the data in sys1_table1 or sys2_table2 or whichever table the data belongs to, and after reconnecting, the code sends the table data to the server's tables.
Which option is best for this scenario? I like the trigger solution best, but is there any way to avoid data loss in case of disconnection from the server?
I'd go with the fourth solution, or perhaps with the third, as long as it is triggered from outside the database. That way you can easily survive connection loss.
The first solution with triggers has the problems you already detected. It is also a bad idea to start potentially long operations, like data replication across a network of uncertain quality, inside a database transaction. Long transactions mean long locks and inefficient autovacuum.
The second solution may actually also be an option if you have a recent PostgreSQL version that supports logical replication. You can use a publication WITH (publish = 'insert,update') so that DELETE and TRUNCATE are not replicated. Replication can deal well with lost connectivity (for a while), but it is not an option if you want the data at the source to be deleted after it has been replicated.
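As a sketch, with made-up names (each embedded system publishes its tables, and the central server subscribes to each system, since logical replication is pulled by the subscriber):

-- on system 1
CREATE PUBLICATION srv_pub FOR TABLE sys1_table1, sys1_table2
    WITH (publish = 'insert,update');

-- on the central server (the sysN_* tables must already exist there)
CREATE SUBSCRIPTION sys1_sub
    CONNECTION 'host=system1-host dbname=sysdb user=repl password=secret'
    PUBLICATION srv_pub;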
From https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
PostgreSQL can not drop databases within a transaction, it is an all or nothing command. If you want to drop the database you would need to change the isolation level of the database, this is done using the following.
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE cursor execution.
Why "If you want to drop the database you would need to change the isolation level of the database"?
In particular, why do we need to change the isolation level to 0? (If I am correct, 0 means psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT, i.e. autocommit mode.)
From https://stackoverflow.com/a/51859484/156458
The operation of destroying a database is implemented in a way which prevents undoing it - therefore you can not run it from inside a transaction, because transactions are always undoable. Also keep in mind that, unlike most other databases, PostgreSQL allows almost all DDL statements (obviously not the DROP DATABASE one) to be executed inside a transaction.
Actually, you can not drop a database if anyone (including you) is currently connected to this database - so it does not matter what your isolation level is, you still have to connect to another database (e.g. postgres).
"you can not run it from inside a transaction because transactions are always undoable". Then how can I drop a database not from inside a transaction?
I found my answer at https://stackoverflow.com/a/51880577/156458
I'm unfamiliar with psycopg2 so I can only provide steps to be performed.
Steps to be taken to perform DROP DATABASE from Python:
1. Connect to a different database, which you don't want to drop
2. Store the current isolation level in a variable
3. Set the isolation level to 0
4. Execute the DROP DATABASE query
5. Set the isolation level back to the original value (from step 2)
Steps to be taken to perform DROP DATABASE from PSQL:
1. Connect to a different database, which you don't want to drop
2. Execute the DROP DATABASE query
Code in psql:
\c second_db
DROP DATABASE first_db;
Remember that there can be no live connections to the database you are trying to drop.
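If you are not sure whether something is still connected, you can check first (and, if necessary, force the sessions off); a sketch assuming the database to drop is called first_db:

-- sessions currently connected to first_db
SELECT pid, usename, application_name
FROM pg_stat_activity
WHERE datname = 'first_db';

-- optionally terminate them before the DROP DATABASE
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'first_db' AND pid <> pg_backend_pid();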
My PostgreSQL database is updated each night.
At the end of each nightly update, I need to know what data changed.
The update process is complex, taking a couple of hours and requires dozens of scripts, so I don't know if that influences how I could see what data has changed.
The database is around 1 TB in size, so any method that requires starting a temporary database may be very slow.
The database is an AWS instance (RDS). I have automated backups enabled (these are different to RDS snapshots which are user initiated). Is it possible to see the difference between two RDS automated backups?
I do not know whether it is possible to see the difference between RDS snapshots, but in the past we tested several solutions for a similar problem. Maybe you can take some inspiration from them.
The obvious solution is of course an auditing system. That way you can see, in a relatively simple way, what was changed - down to column values, depending on the granularity of your auditing system. Of course there is an impact on your application due to the auditing triggers and the queries against the audit tables.
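A very small sketch of what such an auditing trigger could look like (the audit table layout and all names are made up; real auditing solutions are usually more elaborate):

CREATE TABLE audit_log (
    changed_at timestamptz DEFAULT now(),
    table_name text,
    operation  text,
    old_row    jsonb,
    new_row    jsonb
);

CREATE FUNCTION audit_changes() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE   -- DELETE
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL;   -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit
    AFTER INSERT OR UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE FUNCTION audit_changes();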
Another possibility: for tables with primary keys you can store the values of the primary key and the 'xmin' and 'ctid' hidden system columns (https://www.postgresql.org/docs/current/static/ddl-system-columns.html) for each row before the update, and compare them with the values after the update. This way you can only identify changed / inserted / deleted rows, though, not which columns changed.
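A sketch of the idea for a single table t with primary key id (names are made up; xmin is cast to text because the xid type is awkward to compare directly):

-- before the nightly update: snapshot primary key and xmin of every row
CREATE TABLE t_before AS
SELECT id, xmin::text AS xmin_before FROM t;

-- after the update: which rows were inserted, deleted or changed
SELECT COALESCE(t.id, b.id) AS id,
       CASE WHEN b.id IS NULL THEN 'inserted'
            WHEN t.id IS NULL THEN 'deleted'
            ELSE 'updated'
       END AS change
FROM t
FULL OUTER JOIN t_before b ON b.id = t.id
WHERE b.id IS NULL OR t.id IS NULL OR t.xmin::text <> b.xmin_before;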
You can also set up a streaming replica with replication slots (and, to be on the safe side, WAL archiving as well). Then stop replication on the replica before the updates and compare the data after the updates using dblink selects. But these queries can be very heavy.
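For example, with the frozen replica reachable via dblink, a per-table comparison could look roughly like this (connection string and column list are made up, and the dblink extension must be installed):

-- rows that existed before the update but not after (deleted rows, or the old version of updated rows)
SELECT id, name
FROM dblink('host=replica-host dbname=mydb', 'SELECT id, name FROM t')
     AS old_t(id int, name text)
EXCEPT
SELECT id, name FROM t;

-- rows that exist after the update but not before (inserted rows, or the new version of updated rows)
SELECT id, name FROM t
EXCEPT
SELECT id, name
FROM dblink('host=replica-host dbname=mydb', 'SELECT id, name FROM t')
     AS old_t(id int, name text);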
As there is no support for user-defined functions or stored procedures in Redshift, how can I achieve an UPSERT mechanism in Redshift, which is based on ParAccel, a PostgreSQL 8.0.2 fork?
Currently, I'm trying to achieve the UPSERT using an IF...THEN...ELSE... statement, e.g.:
IF NOT EXISTS(SELECT...WHERE(SELECT..))
THEN INSERT INTO tblABC() SELECT... FROM tblXYZ
ELSE UPDATE tblABC SET.,.,.,. FROM tblXYZ WHERE...
which gives me an error, since I'm writing this code standalone, without wrapping it in a function or stored procedure.
So, is there any way to achieve an UPSERT?
Thanks
You should probably read this article on upsert by depesz. You can't rely on SERIALIZABLE for this since, AFAIK, ParAccel doesn't support full serializability the way Pg 9.1+ does. As outlined in that post, you can't really do what you want purely in the DB anyway.
The short version is that even on current PostgreSQL versions that support writable CTEs it's still hard. On an 8.0 based ParAccel, you're pretty much out of luck.
I'd do a staged merge. COPY the new data to a temporary table on the server, LOCK the destination table, then do an UPDATE ... FROM followed by an INSERT INTO ... SELECT. Doing the data uploads in big chunks and locking the table for the upserts is reasonably in keeping with how Redshift is used anyway.
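A sketch of that staged merge, with made-up table and column names (tblABC is the target; here the new rows are staged from tblXYZ, but they could just as well be loaded with COPY from S3):

BEGIN;

CREATE TEMP TABLE stage (LIKE tblABC);
INSERT INTO stage SELECT * FROM tblXYZ;     -- or: COPY stage FROM 's3://...' ...

LOCK tblABC;

-- update rows that already exist
UPDATE tblABC
SET    col1 = s.col1,
       col2 = s.col2
FROM   stage s
WHERE  tblABC.id = s.id;

-- insert rows that don't exist yet
INSERT INTO tblABC
SELECT s.*
FROM   stage s
LEFT JOIN tblABC t ON t.id = s.id
WHERE  t.id IS NULL;

COMMIT;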
Another approach is to externally co-ordinate the upserts via something local to your application cluster. Have all your tools communicate via an external tool where they take an "insert-intent lock" before doing an insert. You want a distributed locking tool appropriate to your system. If everything's running inside one application server, it might be as simple as a synchronized singleton object.