Closed as a duplicate of: multi Master Replication in Postgres (1 answer); Multi master replication in postgresql (2 answers). Closed 4 months ago.
I have a question that doesn't seem to be answered anywhere (at least, not that I can find).
I'm trying to do the following:
I have an HQ database.
Each agency has X computers with local DBs that must sync among themselves (within the agency) and with HQ.
Is this possible with the current logical replication of Postgres?
Here is the scheme of what I was thinking. My question is: is this currently possible with PostgreSQL?
I was thinking of the following:
The master (HQ) DB is a publisher and subscribes to each workstation on the network.
Each workstation is both a publisher and a subscriber to every other workstation on its local network segment AND to the master DB.
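In SQL, one leg of that mesh would look roughly like the sketch below (PostgreSQL 10+; all table, publication, and connection names here are made up):

    -- On HQ: publish the shared tables
    CREATE PUBLICATION hq_pub FOR TABLE agency_data;

    -- On a workstation: subscribe to HQ, and publish its own changes back
    CREATE SUBSCRIPTION hq_sub
        CONNECTION 'host=hq.example.com dbname=hqdb user=repl password=secret'
        PUBLICATION hq_pub;
    CREATE PUBLICATION ws1_pub FOR TABLE agency_data;

Repeated for every node pair, that gives the full mesh, though as far as I know native logical replication has no conflict resolution, so concurrent writes to the same rows on two nodes would need special handling.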
Is this feasible? Or should I develop my own script to do peer-to-peer sync?
Let me know what you think. Thanks.
Closed as a duplicate of: Is there any way to know the last commit value in a table? (1 answer). Closed 5 years ago.
I'm an Oracle DBA and want to learn PostgreSQL. Is there an equivalent of Oracle's Flashback in PostgreSQL?
In short, no. Here's a good answer: https://dba.stackexchange.com/a/362/30035
At the moment the best practice is what Laurenz suggests here: https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B36614938%40ntex2010i.host.magwien.gv.at
A lagging standby can show you the values as of some time ago (look at recovery_min_apply_delay, or pg_xlog_replay_pause() for pre-9.4 releases). Of course it's not an FRA, but it gives you somewhere to look.
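A minimal sketch of the delayed-standby approach (the one-hour delay is just an example value):

    # recovery.conf on the standby (PostgreSQL 9.4+): keep it one hour behind
    standby_mode = 'on'
    recovery_min_apply_delay = '1h'

    -- On pre-9.4 standbys, pause WAL replay by hand before querying the old state
    SELECT pg_xlog_replay_pause();
    -- ... read the hour-old data ...
    SELECT pg_xlog_replay_resume();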
Closed as a duplicate of: How to drop a PostgreSQL database if there are active connections to it? (12 answers). Closed 7 years ago.
We use PostgreSQL as a real-time data cache for observations, and we need to drop our tables on a daily basis. There are frequently clients that still have the DB open for reading; in fact they have it open read/write without realizing it. We have specifically noted that our Python clients open it read/write and keep a permanent transaction lock on the DB. This prevents us from dropping the tables.
The data table can have a different number of columns from day to day, so 'DELETE FROM table' does not appear to be an option.
We have tried creating a read-only user, but that did not help; sessions were still left 'IDLE in transaction'.
Is there any kind of 'kill -9' for dropping tables?
We are currently on PostgreSQL 8.4 on RHEL 6, but will be migrating to RHEL 7 soon.
If you have administrative access, you can kill all the current sessions. I think your question is similar to this one.
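A sketch of what that looks like on 8.4 (the database name is made up; note that 8.4's pg_stat_activity calls the column procpid, renamed to pid in 9.2):

    -- Terminate every other session connected to the target database
    SELECT pg_terminate_backend(procpid)
    FROM pg_stat_activity
    WHERE datname = 'obs_cache'
      AND procpid <> pg_backend_pid();

After that, the DROP TABLE should go through before clients reconnect.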
Closed as a duplicate of: Find PostgreSQL server hostname on which it runs (1 answer). Closed 8 years ago.
Is it possible, with syntax as simple as MS SQL Server's
SELECT HOST_NAME()
to get the server hostname in PostgreSQL 9.3.2?
I have read some articles, but with no result!
No, the default build doesn't have that. It is easy, however, to extend PostgreSQL with new native functions, and someone already did it: http://pgxn.org/dist/hostname/ .
Another way would be to install an additional database language (PostgreSQL is great like that: you have the option of using arbitrary languages instead of PL/pgSQL) and use that language's own functions. There are, e.g., PL/Python (http://www.postgresql.org/docs/9.1/static/plpython-funcs.html) and PL/Perl (http://www.postgresql.org/docs/9.1/static/plperl-trusted.html; see also the discussion about trusted and untrusted languages).
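For example, a minimal sketch with PL/PythonU (untrusted, so creating the language and the function requires superuser; the function name is our own choice):

    CREATE EXTENSION plpythonu;

    CREATE FUNCTION host_name() RETURNS text AS $$
        import socket
        return socket.gethostname()   # hostname of the server, not the client
    $$ LANGUAGE plpythonu;

    SELECT host_name();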
Closed as off-topic 10 years ago.
As we know, MongoDB has a limited (capped) oplog.
If I just create a new slave, nothing in the database has been synced yet, and the full database is bigger than any oplog.
So how do I get around this? Does that mean we cannot create a new slave whose data is bigger than the oplog? Does MongoDB have another mechanism for syncing the database besides the oplog?
If it does, how exactly is that done?
So what's the problem? If your database is of reasonable size and you have a snapshot, you can copy the files over (from the path given by the --dbpath flag on startup, or in the config file) to let the new replica set member come online quicker. However, an initial sync may still happen.
Conceptually, the following things happen:
Start up the new member as part of the replica set.
Add it to the replica set configuration (rs.add()).
The new member syncs off the closest member (which could be a primary or a secondary), pulling data from it (the initial sync) and marking a point in the oplog for its own reference.
The new secondary then applies the oplog from the timestamp it copied from the other replica set member.
If the sync fails, another initial sync (from the very start) will happen. For really large datasets, the sync can take some time.
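In the mongo shell, adding the member looks roughly like this (the host name is made up):

    // Add the new member; the initial sync starts automatically
    rs.add("newhost.example.com:27017")
    // Watch its state move through STARTUP2 / RECOVERING to SECONDARY
    rs.status()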
In reply to your questions:
Does that mean we cannot create a new slave that's bigger than the oplog?
You can create and add a new member whose data set is bigger than the oplog.
Does MongoDB have another mechanism for syncing the database besides the oplog?
Yes: the initial sync, and the file copy mentioned above.
Closed as off-topic 10 years ago.
Please help me to set up connectivity from PostgreSQL to Informix (latest versions of both). I would like to be able to run a query against Informix from PostgreSQL. I am looking for a solution that will not require exporting data from Informix and importing it into PostgreSQL for every query.
I am very new to PostgreSQL and need detailed instructions.
As Chris Travers said, what you're seeking to do is not easy.
In theory, if you were working with Informix and needed to access PostgreSQL, you could (buy and) use the Enterprise Gateway Manager (EGM) and use the ODBC driver for PostgreSQL to allow Informix to connect to PostgreSQL. The EGM would do its utmost to appear to be another Informix database while actually accessing PostgreSQL. (I've not validated that PostgreSQL is supported, but EGM basically needs an ODBC driver to work, so there shouldn't be any problem — 'famous last words', probably.) This will include an emulation of 2PC (two-phase commit); not perfect, but moderately close.
For the converse connection (working with PostgreSQL and connecting to Informix), you will need to look to the PostgreSQL tool suite — or other sources.
You haven't said which version you are using. There are some limitations to be aware of, but there is a wide range of choices.
Since you say this is import/export, I will assume that read-only options are not sufficient. That rules out PostgreSQL 9.1's foreign data wrapper system.
Depending on your version, David Fetter's DBI-Link may suit your needs, since it can execute queries on remote tables (see https://github.com/davidfetter/DBI-Link). It hasn't been updated in a while, but the implementation should be pretty stable and usable across versions. If that fails, you can write stored procedures in an untrusted language (PL/PythonU, PL/PerlU, etc.) to connect to Informix and run the queries there. Note that there are limits regarding transaction handling that you will run into in this case, so you may want to run any queries on the other tables using deferred constraint triggers, so everything gets run at commit time.
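As a sketch of the untrusted-language route (this assumes a Python DB-API driver for Informix, such as the old informixdb module, is installed on the server; the connection details and function name are made up):

    -- Run a fixed query on Informix from inside PostgreSQL via PL/PythonU
    CREATE FUNCTION informix_customer_count() RETURNS bigint AS $$
        import informixdb                  # assumed to be installed on the server
        conn = informixdb.connect('stores@demo_server', user='me', password='secret')
        cur = conn.cursor()
        cur.execute('SELECT COUNT(*) FROM customer')
        n = cur.fetchone()[0]
        conn.close()
        return n
    $$ LANGUAGE plpythonu;

Bear in mind the transaction-handling caveat above: the Informix side commits independently of the PostgreSQL transaction.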
Edit: A cleaner way occurred to me: use foreign data wrappers for import and a separate client app for export.
In this approach you are going to have four basic components, but they will be loosely coupled and subject to proper transactional controls. You can even use two-phase commit if you want. The four components are (not providing a complete working example here, but at least a roadmap to one):
Foreign data wrappers for data import, allowing you to see the data from Informix.
Views of the data to be exported.
An external application which manages the export aspect, written in a language of your choice. It listens on a channel with LISTEN export_informix;
Triggers on the tables underlying the export views, which raise a NOTIFY export_informix.
The notifications are raised on commit, so basically you have two stages to your transaction this way:
Write data in PostgreSQL and flag it to be exported. Commit.
Read the flagged data from PostgreSQL and export it to Informix. Commit on both sides (2PC?).
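A minimal sketch of the trigger/NOTIFY half (table, function, and channel names are all made up; the external app would run LISTEN export_informix and then read the flagged rows):

    -- Rows flagged for export; the external app clears the flag after exporting
    CREATE TABLE export_queue (order_id integer, exported boolean DEFAULT false);

    CREATE FUNCTION notify_export() RETURNS trigger AS $$
    BEGIN
        NOTIFY export_informix;   -- delivered to listeners at commit time
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER export_queue_notify
        AFTER INSERT ON export_queue
        FOR EACH STATEMENT EXECUTE PROCEDURE notify_export();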