I have a question about PostgreSQL master-slave configuration. We have two instances (master and slave) running on AWS, sized m5.16xlarge and m5.8xlarge respectively.
My question: do the two instances (master and slave) need to be the same size, or can the master and slave have different configurations? If possible, could you please share a documentation link?
Note: I looked at these best practices but didn't find the information I am looking for.
Many Thanks,
Suresh.
I am working on a DS8K configuration with PowerHA (IBM), but I am facing issues setting up the DS8K environment. First, I am not clear on the types of configuration possible for DS8K; as far as I understand, there are two ways of setting up DS8K:
1) HyperSwap
2) Replication
My confusion is how these setups differ when used with PowerHA (IBM).
Please help me understand these basics, and let me know if any other information is needed.
Thanks.
The High Availability and Disaster Recovery Options for DB2 for Linux, UNIX, and Windows redbook has an example configuration of a PowerHA replication setup. I believe you need replication here, since with Db2 and PowerHA we have an active/passive setup.
I have created a replicated PostgreSQL database (master - slave). I did this with an already existing Ansible Playbook (Role), which I don't fully understand yet. The cluster currently consists of only 2 databases on different VMs.
So I want to test this replication now.
Unfortunately I have little experience with PostgreSQL.
How can I verify that the replication connection is stable?
And that the slave really takes over if the master fails?
Many thanks for any information, tips & tricks.
PostgreSQL v9.6
Official PostgreSQL does not yet support automatic failover (although there are multiple third-party projects that provide this feature). Therefore, if the deployment you mention is plain official PostgreSQL, none of the replicas will take over writes after a master failure. They can, however, answer read queries if they are configured as hot standbys (hot_standby).
If you want to check the state of replication, you can query the pg_stat_replication view on the master.
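For example, a quick check might look like this (the column names below match 9.6; they were renamed to *_lsn in v10):

```sql
-- On the master: one row per connected standby.
-- For a healthy connection, state should be 'streaming'.
SELECT client_addr, state, sent_location, replay_location, sync_state
FROM pg_stat_replication;

-- On the standby: returns true while it is replaying WAL from the master
SELECT pg_is_in_recovery();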
These official docs will also help you understand Postgres streaming replication and failover better:
https://www.postgresql.org/docs/9.6/warm-standby.html#STREAMING-REPLICATION
https://www.postgresql.org/docs/9.6/warm-standby-failover.html
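Since automatic failover isn't built in, failing over in plain 9.6 means promoting the standby yourself. As a rough sketch (the hostname, user, and trigger-file path below are assumptions; adjust them to your setup), the standby's recovery.conf would look something like:

```conf
# recovery.conf on the standby (PostgreSQL 9.6)
standby_mode = 'on'

# Connection back to the master (host and user are placeholders)
primary_conninfo = 'host=master.example.com port=5432 user=replicator'

# Creating this file promotes the standby to a writable master
# (an alternative to running "pg_ctl promote")
trigger_file = '/tmp/postgresql.trigger'
```

To test failover, stop the master and create the trigger file (or run pg_ctl promote) on the slave; pg_is_in_recovery() on the promoted node should then return false.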
I have started to implement BigchainDB. I followed the tutorial from here.
I have set up two nodes, each running the BigchainDB server and MongoDB. I added the node ID and address of each node to the other's configuration so that they can connect, and I am able to create transactions on each node. So my questions are as follows:
How do the two nodes communicate and sync data with each other?
How is consensus achieved?
Why does this tutorial set up a cluster?
BigchainDB nodes communicate with each other using the Tendermint P2P protocol, and consensus is Tendermint consensus. To understand these better, here are some starting points:
The BigchainDB 2.0 Whitepaper
The Tendermint website and docs
Also, please ignore the old docs for versions of BigchainDB Server older than 2.0.0x.
I am thinking of using the Citus open-source edition for a two-node cluster. My questions are basically two:
- If this kind of clustering is available: in the case of a failover, is the slave node promoted to master? If yes, how? Does it use the WAL?
- If this kind of clustering is not possible, what alternatives are there besides pgpool?
Thank you.
Citus isn't a high-availability solution for single-node PostgreSQL. Citus shards/partitions your data across multiple servers, and can thus use multiple CPU cores in parallel for your queries or transactions.
Citus is suitable for a variety of use-cases, and you can find more information on those here.
For high availability, Citus can replicate data across multiple nodes, or you can set up streaming replication for each worker node. Citus Cloud uses streaming replication for each node, and you can find more information on how Citus Cloud manages HA in our documentation.
Scenario
Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.
Issue
To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must auto-fail over to currently running instances if an instance should go down. Auto-discovery of newly running instances is desirable but not required.
Research
I have reviewed the official PostgreSQL documentation on this issue. However, it focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, PG Pool II.
Question
What are your real-world experiences with PG Pool II? What other simple and reliable alternatives are available?
PostgreSQL's wiki also lists clustering solutions, and the page on Replication, Clustering, and Connection Pooling has a table showing which solutions are suitable for load balancing.
I'm looking forward to PostgreSQL 9.0's combination of Hot Standby and Streaming Replication.
Have you looked at SQL Relay?
The standard solution for something like this is to look at Slony, Londiste or Bucardo. They all provide async replication to many slaves, where the slaves are read-only.
You then implement the load balancing independently of this, on the TCP layer, with something like HAProxy. Such a solution will be able to fail over the read connections (though you'll still lose transaction visibility on a failover and have to start a new transaction on the new slave, but that's fine for most people).
Then all you have left is failover of the master role. There are supported ways of doing this in all these systems. None of them are automatic by default (because automatic failover of a database master role is really dangerous; consider the situation you're in once you have split brain), but they can be automated easily if your requirements demand it for the master as well.
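As a concrete sketch of that TCP-layer load balancing (the addresses and port below are placeholders), a minimal HAProxy configuration could look like:

```conf
# haproxy.cfg: round-robin TCP load balancing across read-only slaves
listen postgres_read
    bind *:5433
    mode tcp
    balance roundrobin
    # Plain TCP health checks; a slave that stops accepting
    # connections is taken out of rotation automatically
    server slave1 192.168.0.11:5432 check
    server slave2 192.168.0.12:5432 check
```

Clients then connect to port 5433 on the load balancer instead of to a specific slave.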