PostgreSQL Streaming Replication Reject Insert

I have PostgreSQL 14 and I set up streaming replication (synchronous_commit = remote_apply) across 3 nodes.
When both standby nodes are down and I try to run an INSERT command, this shows up:
WARNING: canceling wait for synchronous replication due to user request
DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.
INSERT 0 1
I don't want to insert it locally. I want to reject the transaction and show an error instead.
Is it possible to do that?

No, there is no way to do that with synchronous replication.

I don't think you have thought through the implications of what you want. If it doesn't commit locally first, then what should happen if the master crashes after sending the transaction to the replica, but before getting back word that it was applied there? If it was committed on the replica but rejected on the master, how would they ever get back into sync?

I made a script that checks the number of standby nodes and then makes the primary node read-only if the standby nodes are down.
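The core of such a check can be expressed in SQL; the sketch below assumes an external scheduler (cron or similar) runs it periodically and decides, based on the first query, whether to flip the setting. Note that default_transaction_read_only is only a default, so a session could still override it explicitly.

```
-- Count the standbys currently attached as synchronous replicas.
SELECT count(*) FROM pg_stat_replication WHERE sync_state = 'sync';

-- If the count is 0 (the decision is made by the wrapping script, not in SQL),
-- make new transactions read-only by default and reload the configuration.
ALTER SYSTEM SET default_transaction_read_only = on;
SELECT pg_reload_conf();

-- Once the standbys are back, revert:
ALTER SYSTEM RESET default_transaction_read_only;
SELECT pg_reload_conf();
```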

Related

How does a delayed standby recover WALs?

Which mechanism is used by a delayed standby to recover WALs? The processes usually involved in streaming (wal_sender/wal_receiver) are not present. Does the delayed standby fetch the WALs one after another, based on its offset, from the primary, or from Barman if it is configured?

PostgreSQL: is streaming replication synchronous?

I've set up streaming replication and wonder if it's synchronous, i.e. if it's blocking.
That would imply that when a slave goes down, synchronous replication will block and there will be problems servicing client requests.
Or do I not need to worry about such a scenario?
Yes, with synchronous replication a COMMIT will block until the required standby servers have reported that they have received the information.
That leads to reduced availability if you only have a single standby server. This is not a PostgreSQL shortcoming, but a fundamental necessity; read about the CAP theorem for more.
The way to deal with that is to have more than one standby server, so that life can go on if a standby server fails.
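For example, on PostgreSQL 10 or later you can ask for an acknowledgement from any one of several standbys, so a single standby failure does not block commits. A minimal sketch, assuming two standbys that connect with the (placeholder) application names standby1 and standby2:

```
-- Wait for a reply from any one of the two listed standbys;
-- commits keep working as long as at least one of them is up.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby1, standby2)';
SELECT pg_reload_conf();
```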

How to sync up a failed MongoDB node with the working replica set members?

I have a MongoDB replica set with 3 servers (Server1, Server2, Server3). For some reason Server1 goes down, Server2 acts as primary and Server3 as secondary.
Question: Server1 is down, and after 2-3 hours we bring it back up. How will the 3-hour gap between Server1's data and Server2's data be synced up?
The primary maintains an oplog detailing all of the writes that have been done to the data. The oplog is capped by size; the oldest entries are automatically removed to keep it below the configured size.
When a secondary node replicates from the primary, it reads the oplog and creates a local copy. If a secondary is offline for a period of time, when it comes back online, it will ask the primary for all oplog entries since the last one that it successfully copied.
If the primary still has the entry that the secondary most recently saw, the secondary will begin applying the events it missed.
If the primary no longer has that entry, the secondary will log a message that it is too stale to catch up, and manual intervention will be required. This would usually mean a manual resync.

Do I need to archive postgres WAL records if I am already streaming them to a standby server?

I have a Postgres master node which is streaming WAL records to a standby slave node. The slave database runs in read-only mode and has a copy of all the data on the master node. It can be switched to master by creating a recovery.conf file in /tmp.
On the master node I am also archiving WAL records. I am wondering whether this is necessary if they are already streamed to the slave node. The archived WAL records are 27 GB at this point, and the disk will fill eventually.
A standby server is no backup; it only protects you from hardware failure on the primary.
Just imagine that somebody by mistake deletes data or drops a table; you won't be able to recover from that without a backup.
Create a job that regularly cleans up archived WALs if they exceed a certain age.
Once you have a full backup, you can purge the WAL files that precede it.
The idea is to preserve the WAL files for PITR (point-in-time recovery) in case your server crashes.
If your primary server crashes, you can certainly promote your hot standby to primary, but at that point you have to build another server as a new hot standby. Typically you don't want to build it using streaming replication.
You will use the full backup plus the archived WALs to build that server and then proceed from there, instead of relying on streaming replication.
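While deciding on a retention policy, you can at least watch whether archiving is keeping up from within the database; the pg_stat_archiver view (a rough sketch below) shows what has been archived and whether archive_command is failing. The actual pruning of old files, for example with pg_archivecleanup keyed to your oldest base backup, has to happen outside SQL.

```
-- How many WAL segments have been archived, when the last one was archived,
-- and whether any archive_command invocations have failed.
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count,
       last_failed_wal
FROM pg_stat_archiver;
```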

What is the consistency of a PostgreSQL HA cluster with Patroni?

My understanding is that because the failover uses a consensus store (etcd or ZooKeeper), the system will stay consistent under a network partition.
Does this mean that transactions running under the serializable isolation level will also provide linearizability?
If not, which consistency will I get: sequential consistency, causal consistency, ...?
You shouldn't mix up consistency between the primary and the replicas with consistency within the database.
A PostgreSQL database running in a Patroni cluster is a normal database with streaming replicas, so it provides the eventual consistency of streaming replication (all replicas will eventually show the same values as the primary).
Serializability guarantees that you can establish an order of the database transactions that ran against the primary such that the outcome of a serialized execution in that order is the same as the outcome the workload had in reality.
If I read the definition right, that is just the same as “linearizability”.
Since only one of the nodes in the Patroni cluster can be written to (the primary), this stays true, no matter if the database is in a Patroni cluster or not.
In a distributed context, where we have multiple replicas of an object's state, a schedule is linearizable if it is as if they were all updated at once at a single point in time.
Once a write completes, all later reads (wall-clock time) from any replica should see the value of that write or the value of a later write.
Since PostgreSQL version 9.6 it is possible to have multiple synchronous standby nodes. This means that if we have 3 servers and use num_sync = 2, the primary will always wait for the write to be on the 2 standbys before committing.
This should satisfy the constraint of a linearizable schedule even with failover.
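Outside Patroni, num_sync = 2 corresponds roughly to a setting like the one sketched below (the standby names are placeholders); in a Patroni cluster with synchronous mode enabled you would normally let Patroni manage this parameter rather than set it by hand.

```
-- Require acknowledgement from 2 of the listed standbys before a synchronous
-- commit returns ('2 (...)' is the 9.6 syntax, equivalent to FIRST 2).
ALTER SYSTEM SET synchronous_standby_names = '2 (standby1, standby2)';
SELECT pg_reload_conf();
```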
Since version 1.2 of Patroni, when synchronous mode is enabled, Patroni will automatically fail over only to a standby that was synchronously replicating at the time of the master failure.
This effectively means that no user visible transaction gets lost in such a case.