I am running two databases (PostgreSQL 9.5.7) in a master/slave setup. My application connects to a pgpool instance that routes to the master database (and to the slave for read-only queries).
Now I am trying to scale out some data to another read-only database instance containing only a few tables.
This works perfectly using pglogical directly on the master database.
However, if the master transitions to slave for some reason, pglogical can no longer replicate because the node is in standby.
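For context, here is a minimal sketch (via psycopg2) of the kind of pglogical setup that works while the provider is the master; the host names, credentials, node names and the table are placeholders rather than my actual configuration:

```python
import psycopg2

# Placeholder DSNs; adjust host/user/dbname to the real environment.
PROVIDER_DSN = "host=master.example.com dbname=appdb user=replicator"
SUBSCRIBER_DSN = "host=reports.example.com dbname=appdb user=replicator"

# On the provider (the current master): register the node and choose the tables.
prov = psycopg2.connect(PROVIDER_DSN)
prov.autocommit = True
with prov.cursor() as cur:
    cur.execute("SELECT pglogical.create_node(node_name := 'provider1', dsn := %s)",
                (PROVIDER_DSN,))
    # Only the few tables that should be scaled out to the read-only instance.
    cur.execute("SELECT pglogical.replication_set_add_table('default', 'public.some_table')")
prov.close()

# On the read-only subscriber: register its node and subscribe to the provider.
sub = psycopg2.connect(SUBSCRIBER_DSN)
sub.autocommit = True
with sub.cursor() as cur:
    cur.execute("SELECT pglogical.create_node(node_name := 'subscriber1', dsn := %s)",
                (SUBSCRIBER_DSN,))
    cur.execute(
        "SELECT pglogical.create_subscription("
        "subscription_name := 'sub1', provider_dsn := %s)",
        (PROVIDER_DSN,),
    )
sub.close()
```

Note how provider_dsn points at a single fixed host; that is exactly the part that breaks once this host drops back to being a standby.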
I tried the following things:
Subscribing on the slave, since it is less likely to go down or be overloaded: replication is not possible on a standby node.
Subscribing via the pgpool server: pgpool doesn't accept replication connections.
Subscribing to both servers: the pglogical configuration gets replicated along as well, so I can't give the two nodes different node names.
The only thing I can think of now is to write my own TCP proxy that regularly checks the state of the servers and gives me a stable endpoint to subscribe to.
Is there any other/easier way I can solve this?
Am I perhaps using the wrong tools?
OK, so it seems there is no ready-made solution for this problem just yet.
Since the data in my logically replicated database does not change quickly, there is no harm if replication stops for a moment.
Actions on failover could be:
Re-subscribe to the promoted master (a rough sketch of this follows below), or
Promote the standby node back to master after the failover.
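A rough sketch of that re-subscribe step, again with psycopg2; the host names, subscription name and credentials are placeholders, and whether you let pglogical re-copy the table data on re-subscription depends on how much divergence you can tolerate:

```python
import psycopg2

# Placeholder hosts/credentials; db1/db2 are the physical-replication pair,
# "reports" is the pglogical subscriber.
CANDIDATES = ["db1.example.com", "db2.example.com"]
SUBSCRIBER_DSN = "host=reports.example.com dbname=appdb user=replicator"
SUBSCRIPTION = "sub1"


def find_primary():
    """Return the first candidate host that is not in recovery."""
    for host in CANDIDATES:
        try:
            conn = psycopg2.connect(host=host, dbname="appdb",
                                    user="replicator", connect_timeout=3)
        except psycopg2.OperationalError:
            continue  # host unreachable, try the next one
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT pg_is_in_recovery()")
                if not cur.fetchone()[0]:
                    return host
        finally:
            conn.close()
    return None


def resubscribe(primary_host):
    """Drop the old subscription and recreate it against the promoted primary."""
    provider_dsn = "host={} dbname=appdb user=replicator".format(primary_host)
    conn = psycopg2.connect(SUBSCRIBER_DSN)
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pglogical.drop_subscription(%s, ifexists := true)",
                        (SUBSCRIPTION,))
            cur.execute(
                "SELECT pglogical.create_subscription("
                "subscription_name := %s, provider_dsn := %s)",
                (SUBSCRIPTION, provider_dsn),
            )
    finally:
        conn.close()


primary = find_primary()
if primary is not None:
    resubscribe(primary)
```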
Related
I'm reading the article below about how to set up streaming replication in PostgreSQL.
https://www.percona.com/blog/2018/09/07/setting-up-streaming-replication-postgresql
Some things are not quite clear:
1) Are both DB instances active, or is the slave instance just a clone of the master (i.e. it communicates with the master, but not with the backend)?
2) If the master DB node fails, what will happen until the second node is back online? Is this covered by default just by having the WAL sender and WAL receiver processes, or does something else need to be added?
3) Which DB_HOST:PORT should be configured in the backend app if, for example, I have two backend nodes (both of them active)?
If hot_standby = on is set in postgresql.conf, clients can connect to the standby, but they can only read data and not modify them. The standby is an identical physical copy of the primary, just as if you had copied it file by file.
If the primary fails, the standby will remain up and running, but you can still only read data until somebody promotes the standby. You have to understand that PostgreSQL does not ship with cluster software that allows this to happen automatically. You have to use some other software like Patroni for that.
That depends on the API your software is using. With libpq (the C API) or JDBC you can have a connection string that contains both servers and will select the primary automatically, but with other clients you may have to use external load balancing software.
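As an illustration of the libpq variant (it requires a libpq 10 or newer client library; the JDBC driver has a comparable targetServerType parameter), with placeholder host names:

```python
import psycopg2

# libpq tries the hosts in order; target_session_attrs=read-write makes it
# keep only a connection to the server that accepts writes, i.e. the primary.
conn = psycopg2.connect(
    "host=db1.example.com,db2.example.com port=5432,5432 "
    "dbname=appdb user=appuser target_session_attrs=read-write"
)
with conn.cursor() as cur:
    cur.execute("SELECT pg_is_in_recovery()")
    print(cur.fetchone()[0])  # False: this session is on the primary
conn.close()
```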
I'm searching for HA solutions without load balancing in the master-slave model, using PostgreSQL. My favorite solution so far is log-shipping synchronous replication. But I have one main concern: if my slave server becomes unavailable, will my master server continue its operation, or will it wait for the acknowledgment of my slave server until it is up again?
If you have only one standby, the master will halt (by design).
The master will still serve read-only statements, but all writes will be blocked until the standby comes back.
You can avoid this scenario by providing multiple candidates in synchronous_standby_names.
See the Synchronous Replication section in the PostgreSQL docs.
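For example, listing two standbys (the names are placeholders and must match each standby's application_name) means that if the first one crashes, the second takes over as the synchronous standby instead of blocking commits. A sketch using ALTER SYSTEM:

```python
import psycopg2

# Run on the primary. ALTER SYSTEM is available from 9.4 on; on older
# versions edit postgresql.conf by hand instead.
conn = psycopg2.connect("host=primary.example.com dbname=postgres user=postgres")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
with conn.cursor() as cur:
    # The first running standby in the list becomes the synchronous one.
    # On PostgreSQL 10+ you could also write 'ANY 1 (standby1, standby2)'.
    cur.execute("ALTER SYSTEM SET synchronous_standby_names = 'standby1, standby2'")
    cur.execute("SELECT pg_reload_conf()")  # this parameter is reloadable
conn.close()
```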
I found another way to prevent the master from halting when the slave crashes: wal_sender_timeout in the master's postgresql.conf can be used to disconnect from the slave if it has crashed.
I'm not exactly a DBA, so I would appreciate easy-to-understand responses. I have to provide replication for our DB, and pgpool seems more convenient because if one PostgreSQL instance fails, the clients are not required to change anything to keep working, right? So, in this case, it makes more sense to use pgpool, but the configuration part seems (to me) a lot more complicated and confusing. For instance, do I need to set up WAL on both PostgreSQL servers? Or is this only needed if I want to set up PostgreSQL replication? The more I try to get an answer to these questions, the less clear it becomes. Maybe I forgot how to google...
The built-in replication, provided by PostgreSQL itself, includes streaming replication, warm standby, and hot standby. These options are based on shipping Write-Ahead Logs (WAL) to all the standby servers. Write statements (e.g., INSERT, UPDATE) will go to the master, and the master will send logs (WALs) to the standby servers (or other masters, in the case of master-master replication).
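To give an idea of what "setting up WAL" amounts to on the primary side, here is a sketch; the parameter values are examples, and the standby additionally needs a base backup of the primary, hot_standby = on, a primary_conninfo pointing back at the primary, and a replication entry in pg_hba.conf:

```python
import psycopg2

# Primary-side settings for built-in streaming replication.
# ALTER SYSTEM is 9.4+; both parameters need a server restart to take effect.
conn = psycopg2.connect("host=primary.example.com dbname=postgres user=postgres")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("ALTER SYSTEM SET wal_level = 'replica'")  # 'hot_standby' on 9.5 and older
    cur.execute("ALTER SYSTEM SET max_wal_senders = 5")
conn.close()
```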
pgpool, on the other hand, is a type of statement-based replication middleware (like a database proxy). All the statements actually go to pgpool, and pgpool forwards everything to all the servers to be replicated.
One big disadvantage with pgpool is that you have a single point of failure; if the server running pgpool crashes, your whole cluster fails.
The PostgreSQL documentation has some basic info on the various types of replication that are possible: https://www.postgresql.org/docs/current/different-replication-solutions.html
PostgreSQL 9.1 has master-slave synchronous replication. Suppose the master is machine A and the slave is machine B.
If the master fails, how does PostgreSQL know when to make the slave the master? What if the slave incorrectly thinks the master is down because of a temporary network glitch, even though the client program can still contact the master?
Moreover, how would my client program know that the slave is the new master and, more importantly, that it is ready to accept writes? Does the slave send a message to the client?
Check out repmgr; dealing with this issue is one of its jobs.
Typically you want to use a promotion-management system like repmgr or Patroni. Then you want to use some sort of high-availability proxy (it could be PgBouncer or HAProxy) to handle the actual abstraction, so your applications do not need to know which system is the master.
In answer to your question, most of these systems use a heartbeat to determine whether there is a problem. Patroni goes out over the etcd heartbeat. Repmgr has its own heartbeat check. With repmgr you need to write hook scripts to take care of STONITH, and so forth.
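As a toy illustration of what such a heartbeat boils down to (the real tools are far more careful about split-brain and fencing), a health check can simply ask a node whether it is in recovery; the host, database and user below are placeholders:

```python
import sys
import psycopg2

def is_primary(host):
    """Return True if the node at `host` currently accepts writes."""
    try:
        conn = psycopg2.connect(host=host, dbname="postgres",
                                user="monitor", connect_timeout=2)
    except psycopg2.OperationalError:
        return False  # node is unreachable
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            return not cur.fetchone()[0]
    finally:
        conn.close()

if __name__ == "__main__":
    # Exit 0 only for the primary, so a proxy's health check can route
    # write traffic to the node that passes.
    sys.exit(0 if is_primary(sys.argv[1]) else 1)
```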
I'd like to use mongodb to distribute a cached database to some distributed worker nodes I'll be firing up in EC2 on demand. When a node goes up, a local copy of mongo should connect to a master copy of the database (say, mongomaster.mycompany.com) and pull down a fresh copy of the database. It should continue to replicate changes from the master until the node is shut down and released from the pool.
The requirements are that the master need not know about each individual slave being fired up, nor should the slave have any knowledge of other nodes outside the master (mongomaster.mycompany.com).
The slave should be read only, the master will be the only node accepting writes (and never from one of these ec2 nodes).
I've looked into replica sets, and this doesn't seem to be possible. I've done something similar to this before with a master/slave setup, but it was unreliable. The master/slave replication was prone to sudden catastrophic failure.
Regarding replica sets: while I don't imagine you could have a set member that is invisible to the primary (and the other nodes), given the need for replication, you can tailor a particular node to come pretty close to what you want:
Set the newly-launched node to priority 0 (meaning it cannot become primary)
Set the newly-launched node to 'hidden'
The MongoDB documentation has more info on priority 0 and hidden members.
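A hedged sketch of that reconfiguration with pymongo (the host name, replica set name, and the assumption that the new EC2 node is the last member in the config are all placeholders; the same thing can be done with rs.reconfig() in the mongo shell):

```python
from pymongo import MongoClient

# Connect to the replica set; admin commands are routed to the primary.
client = MongoClient("mongodb://mongomaster.mycompany.com:27017/?replicaSet=rs0")

# Fetch the current replica set configuration.
config = client.admin.command("replSetGetConfig")["config"]

# Assume the newly launched worker node is the last member in the list.
member = config["members"][-1]
member["priority"] = 0   # can never be elected primary
member["hidden"] = True  # invisible to clients reading from the set
member["votes"] = 0      # optional: keep it out of elections entirely

# Bump the version and push the new configuration.
config["version"] += 1
client.admin.command("replSetReconfig", config)
```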