I am working with PostgreSQL 14 and Pgpool-II 4.3.1.
I have configured streaming replication with two standbys and a three-node Pgpool-II setup.
After configuring Pgpool-II, when I run show pool_nodes it shows only one primary and one standby.
I gave node id 0 to node1, 1 to node2, and 2 to node3. show pool_nodes returns node1 (primary) and node3 (standby), but the node IDs it returns are 0 and 1.
pg_is_in_recovery() results are correct.
Why is show pool_nodes not returning all the standby nodes? Am I doing something wrong?
Any help with this issue will be appreciated.
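For show pool_nodes to list all three backends, each one must be registered in pgpool.conf on the pgpool node you query. A minimal sketch of what that section would look like (hostnames, ports, and data directories below are placeholders, not values from the question):

```
backend_hostname0 = 'node1.example'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/14/main'

backend_hostname1 = 'node2.example'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/14/main'

backend_hostname2 = 'node3.example'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/var/lib/postgresql/14/main'
```

One plausible cause of the symptom described (node3 appearing with ID 1) is that the backend2 block is missing or commented out on the pgpool node being queried, so only two backends are defined there.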
I defined a replica set on MongoDB v3.6.8 (Ubuntu 20.04) with 2 nodes.
At the beginning, I had the first node as primary and the second as secondary.
Due to a problem modifying the unix service (cf. the --replSet option), at one point the primary was stopped while the secondary was up.
Now I have 2 secondary nodes; any command to add a third node or to remove the 2nd node freezes.
Any clue how to bring the first node back up as primary?
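With both members stuck as secondaries there is no primary, so ordinary reconfiguration commands (rs.add, rs.remove) hang waiting for one. A commonly used escape hatch, sketched here and to be applied with care, is to force-reconfigure the set from the mongo shell on the node you want as primary (the member index 0 below is an assumption; check rs.conf() first):

```
// in the mongo shell, connected to the node you want as primary
cfg = rs.conf()
cfg.members = [cfg.members[0]]   // keep only this node in the config
rs.reconfig(cfg, { force: true }) // force:true permits reconfig without a primary
```

Once that node elects itself primary, the second member can be re-added with rs.add(). Note that a 2-member set loses its primary whenever either member is down; an arbiter or a third data-bearing member avoids this.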
I searched on Stack Overflow but couldn't find a solution for this.
I have 3 nodes, 1 primary and 2 secondaries: mongo1.com, mongo2.com and mongo3.com.
Everything works well with the connection. When I shut down any one node, e.g. mongo1.com, my app keeps working fine. But when I shut down a 2nd node, e.g. mongo3.com, the app stops working. If I bring any one node back up, the app works again.
In short, the app does not work with a single node. Looking for guidance on this behaviour.
I checked the status using rs.status(): two nodes show health: 0 and the message "unreachable node".
The third node, which is active, shows health: 1 and "infoMessage" : "could not find member to sync from".
I did some research and found that if 2 nodes are shut down, you can manually make the running node primary.
To have a primary in a 3-node replica set, at least 2 nodes (a majority of the voting members) must be operational.
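The arithmetic behind this: a replica set can only elect (or keep) a primary while a strict majority of voting members is reachable. A small illustration in plain Python (not MongoDB code):

```python
def majority(voting_members: int) -> int:
    """Smallest strict majority of a voting set."""
    return voting_members // 2 + 1

def can_elect_primary(voting_members: int, reachable: int) -> bool:
    """A primary can exist only if a majority of voters is reachable."""
    return reachable >= majority(voting_members)

# 3-node set: 2 nodes up -> primary possible; 1 node up -> read-only
print(can_elect_primary(3, 2))  # True
print(can_elect_primary(3, 1))  # False
```

This is also why the surviving node demotes itself to secondary rather than staying primary: from its point of view it cannot tell a dead peer from a network partition.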
I have configured Pgpool-II with 2 Postgres nodes in replication mode. I would like to verify the load balancing, so I want to know if there is a way to check which query is redirected to which node in Pgpool-II. Thanks.
You can set the "log_per_node_statement" parameter to "on". It prints a log line for each DB node separately.
http://www.pgpool.net/docs/latest/en/html/runtime-config-logging.html
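Concretely, in pgpool.conf:

```
log_per_node_statement = on
```

Each statement is then logged once per backend it is sent to, with the backend's id in the line (the exact log wording varies by pgpool version), so a SELECT balanced to node 1 can be distinguished from one sent to node 0. This parameter is verbose, so it is best enabled temporarily for testing.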
I have a PostgreSQL 10 master with 2 hot-standby servers using streaming replication, and the replication is working correctly. synchronous_commit is set to remote_write.
I also have pgpool 3.7.5 configured with the parameters:
delay_threshold = 1
sr_check_period = 1
And the following weights:
master: 1
node1: 3
node2: 3
In the log I can see the node1 and node2 are lagging:
Replication of node:1 is behind 75016 bytes from the primary server (node:0)
The pgpool docs say:
delay_threshold (integer)
Specifies the maximum tolerance level of replication delay in WAL bytes on the standby server against the primary server. If the delay exceeds this configured level, Pgpool-II stops sending the SELECT queries to the standby server and starts routing everything to the primary server even if load_balance_mode is enabled, until the standby catches-up with the primary. Setting this parameter to 0 disables the delay checking. This delay threshold check is performed every sr_check_period. Default is 0.
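Paraphrasing that documented rule as illustrative Python (this is a sketch of the described behaviour, not pgpool's actual source):

```python
def route_select(delay_bytes: int, delay_threshold: int) -> str:
    """Documented delay_threshold rule: 0 disables the check;
    otherwise a standby lagging more than the threshold stops
    receiving SELECTs until it catches up."""
    if delay_threshold == 0:
        return "standby"   # delay checking disabled
    if delay_bytes > delay_threshold:
        return "primary"   # standby is too far behind
    return "standby"       # standby is close enough

# With delay_threshold = 1, a 75016-byte lag (as in the log above)
# should route SELECTs to the primary:
print(route_select(75016, 1))  # primary
```

One nuance from the quoted docs: the delay is only re-measured every sr_check_period, so routing between two checks is based on the last observed delay, not the instantaneous one.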
The problem is that pgpool sends queries to the hot standbys before they have received the new data from the master through streaming replication.
I temporarily enabled log_per_node_statement = on to see on which node each query executes, and I can see that queries are sent to the standbys even when they aren't in sync, which delay_threshold should prevent.
Am I missing something? When the nodes are behind the master, aren't the queries supposed to go to the master?
Thanks in advance.
Other config values of pgpool are:
num_init_children = 120
max_pool = 3
connection_cache = off
load_balance_mode = on
master_slave_sub_mode = 'stream'
replication_mode = off
sr_check_period = 1
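As an independent cross-check of the lag pgpool reports, the standby delay in bytes can be measured directly on the primary (PostgreSQL 10 column names; filter by application_name if you have several standbys):

```sql
-- run on the primary; one row per connected standby
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```

If this reports lag larger than delay_threshold while SELECTs still land on the standbys, the problem is in pgpool's check, not in the replication itself.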
First, I think you should check the result of "show pool_nodes" and verify that the three nodes are set up with the right roles (primary, standby, standby).
Second, did you set "app_name_redirect_preference_list" or "database_redirect_preference_list"? If so, that can affect which node is selected for a SELECT query.
Also, in my opinion delay_threshold = 1 is very strict; the unit is bytes, and in my case I use 10000000 in production. Why not just add the /*NO LOAD BALANCE*/ comment to send specific queries only to the master?
And I simply recommend upgrading pgpool to version 4.0.0 (released 2018-10-19). 3.7.x has a mysterious bug in load balancing.
I also faced a similar problem where load balancing was not working properly with version 3.7.5 even though our configuration had no problem; pgpool picked nodes seemingly at random. We even contacted the pgpool developer team, but they couldn't find the root cause.
You can check the details in the link below.
https://www.pgpool.net/mantisbt/view.php?id=435.
And this was resolved like a charm by upgrading to version 4.0.0.
We have the following multi-data-center scenario:
Node1 --- Node3
  |         |
  |         |
Node2     Node4
Node1 and Node3 form a (sort of) replica set (for high availability).
Node2/Node4 are priority-0 members (they should never become primaries; they are solely for reads).
Caveat: what is the best way to design such a setup, given that Node2 and Node4 are not accessible to one another because of the way we configured our VPN/firewalls,
essentially ruling out any heartbeat between Node2 and Node4?
Thanks Much
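For reference, the member layout described above (Node2/Node4 as priority-0 read members) would look roughly like this replica set config document; hostnames are placeholders, and the votes: 0 settings shown are one option the answers discuss (in MongoDB a votes-0 member must also have priority 0):

```
{
  _id: "rs0",
  members: [
    { _id: 0, host: "node1.example:27017", priority: 1 },
    { _id: 1, host: "node2.example:27017", priority: 0, votes: 0 },
    { _id: 2, host: "node3.example:27017", priority: 1 },
    { _id: 3, host: "node4.example:27017", priority: 0, votes: 0 }
  ]
}
```

With only node1 and node3 voting, losing either one still costs the set its primary, so an arbiter reachable from both data centers is worth considering.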
Here's what I have in mind:
Don't keep an even number of voting members in a set. So you need another arbiter, or set one of node 2/4 as a non-voting member.
I'm using the C# driver, so I'm not sure you are using the same technology to build your application. Anyway, it turns out the C# driver obtains the complete list of available servers from the seeds (the servers you provide in the connection string) and tries to load-balance requests across all of them. In your situation, I guess you have application servers running in all the data centers. However, you probably don't want, for example, node 1 to accept connections from a different data center; that would significantly slow down the application. So you need some further settings:
Set nodes 3/4 to hidden nodes.
For applications running in the same data center as node 3/4, don't configure the replicaSet parameter in the connection string, but do configure readPreference=secondary. If you need to write, you'll have to configure another connection string pointing to the primary node.
If you make the votes of nodes 2 and 4 also 0, then in a failover the set should act as though only nodes 1 and 3 are eligible. If you set them to hidden, you have to connect to them explicitly; MongoDB drivers will intentionally avoid them otherwise.
Other than that, nodes 2 and 4 have direct access to whatever becomes the primary, so I see no other problem.