How to use physical and logical replication in Patroni together?

I want to construct a cluster like this:
3 MAIN nodes linked by physical replication;
N OTHER nodes that receive data from the MAIN nodes via logical replication.
I successfully configured physical replication between the 3 MAIN nodes, but I didn't get much further.
I should note that I set wal_level = logical on all nodes in my cluster.
But when I try to create a subscription on any OTHER node, I get an error like this: "logical decoding cannot be used while in recovery".
Can anybody help me, please?

I solved it.
Just use the primary node's address for the connection.
It looks something like this:
You have 3 MAIN nodes linked by Patroni's physical HA;
You also have N OTHER nodes, each of which is its own primary (independent of the MAIN nodes).
On each OTHER node you create a subscription pointing at the current primary among the MAIN nodes.
Note that the subscription user must have the right permissions, i.e. the REPLICATION privilege.
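Here is a minimal sketch of the idea, assuming hypothetical names throughout (the publication main_pub, subscription other_sub, role logical_repl, and host main-primary.example.com are all placeholders, not from the original post):

    -- On the current MAIN primary: a role with REPLICATION and a publication
    CREATE ROLE logical_repl WITH LOGIN REPLICATION PASSWORD 'secret';  -- placeholder password
    CREATE PUBLICATION main_pub FOR ALL TABLES;

    -- On each OTHER node (itself a primary): subscribe to the MAIN primary
    CREATE SUBSCRIPTION other_sub
        CONNECTION 'host=main-primary.example.com port=5432 dbname=app user=logical_repl password=secret'
        PUBLICATION main_pub;

If Patroni fails over, the CONNECTION string must be repointed at the new primary (for example via a virtual IP or HAProxy in front of the cluster), because logical decoding only runs on the primary - which is exactly what the "cannot be used while in recovery" error is telling you.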

Related

Sharding with replication

I have a multi-tenant database with 3 tables (store, products, purchases) across 5 server nodes. Suppose I have 3 stores in my store table and I am going to shard it by storeId.
I need all data for all shards (1, 2, 3) available on nodes 1 and 2, but node 3 would contain only the shard for store #1, node 4 only the shard for store #2, and node 5 only the shard for store #3. It is like sharding with 3 replicas.
Is this possible at all? What database engines can be used for this purpose (preferably SQL databases)? Do you have any experience with this?
Regards
I have a feeling you have not adequately explained why you are trying this strange topology.
Anyway, I will point out several things relating to MySQL/MariaDB.
A Galera cluster already embodies multiple nodes (minimum of 3), but does not directly support "sharding". You can have multiple Galera clusters, one per "shard".
As with my comment about Galera, other forms of MySQL/MariaDB can have replication between nodes of each shard.
If you are thinking of having a server with all data, but replicating only parts of it to read-only replicas, there are settings such as replicate-do-db / replicate-ignore-db. I emphasize "read-only" because changes to these pseudo-shards cannot easily be sent back to the Primary server. (However, see "multi-source replication".)
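As a hedged sketch, on MySQL 5.7+ such a filter can also be applied dynamically on a replica (the database name store1_db is a made-up placeholder):

    -- On the replica that should hold only store #1's data:
    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_DO_DB = (store1_db);
    START SLAVE SQL_THREAD;

The equivalent permanent setting is replicate-do-db in my.cnf.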
Sharding is used primarily when there is simply too much traffic to handle on a single server. Are you saying that the 3 tenants cannot coexist because of excessive writes? (Excessive reads can be handled by replication.)
A tentative solution:
Have all data on all servers. Use the same Galera cluster for all nodes.
Advantage: When "most" or all of the network is working all data is quickly replicated bidirectionally.
Potential disadvantage: If half or more of the nodes go down, you have to manually step in to get the cluster going again.
Likely solution for the 'disadvantage': "Weight" the nodes differently. Give a high weight to the 3 in HQ; give a much smaller (but non-zero) weight to each branch node. That way, most of the branches could go offline without losing the system as a whole.
But... I fear that an offline branch node will automatically become readonly.
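In Galera terms, that weighting is done with the pc.weight provider option; a sketch, with the actual values being assumptions:

    -- On each HQ node:
    SET GLOBAL wsrep_provider_options = 'pc.weight=3';
    -- On each branch node:
    SET GLOBAL wsrep_provider_options = 'pc.weight=1';

Quorum is then computed over the sum of the weights, so losing many low-weight branch nodes does not cost the cluster quorum.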
Another plan:
Switch to NDB. The network is allowed to be fragile. Consistency is maintained by "eventual consistency" instead of the "[virtually] synchronous replication" of Galera+InnoDB.
NDB allows you to write immediately on any node; the write is then sent to the other nodes. If there is a conflict, one of the values is declared the "winner". You choose which algorithm determines the winner; an easy-to-understand one is "whichever write was 'first'".

MongoDB failover when 2 nodes of a 3-node replica set go down

I need to set up a mongo replica set across two data centers.
For the sake of testing, I set up a replica set of 3 nodes, thinking of putting 2 nodes on the local site - a Primary and a Secondary - and another standby on the other site.
However, if I take down the Primary and one of the standbys, the remaining standby stays a Secondary and is not promoted to Primary, as I expected.
Reading about it in other questions here, it looks like the only solution is to use an arbiter on a third site, which is quite problematic.
As a temporary solution - is there a way to force this standalone secondary to become a primary?
In order to elect a PRIMARY, the majority of all members must be up.
With 2 of your 3 nodes down, the single remaining node is not a majority. Typically the data center itself does not crash; usually you "only" lose the connection to a data center.
You can do the following.
Put 2 nodes in the first data center and 1 node in the second data center. In this setup the first data center holds the majority and must not fail! The second data center may fail.
Another setup is to put one node in each data center and an ARBITER on a different site. This "different site" does not need to be a full-blown data center; the MongoDB ARBITER is a very lightweight process and does not store any data, so it could be a small host somewhere in your IT network. Of course, it must have a connection to both data centers.
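For the "temporary solution" part of the question, the usual emergency escape hatch is a force reconfig on the surviving member. A hedged mongo shell sketch, where the member index 2 is an assumption about which node survived:

    // On the lone surviving secondary:
    cfg = rs.conf()
    cfg.members = [cfg.members[2]]   // keep only the surviving member
    rs.reconfig(cfg, {force: true})  // force is required without a majority

The node will then elect itself PRIMARY; the removed members have to be re-added once the outage is over.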

Can Postgres-XL shard, replicate and auto-balance at the same time?

For example if I have 5 servers (A, B, C, D, E)
Can we set the data to be distributed with a replication factor of 3? (For example, one record's writes go to A, B, C; another record goes to A, B, D; another to A, B, E; and so on.) Then, when node C has a hardware failure, every record still exists somewhere.
Can we also add a new node and then rebalance the stored data onto the new node without downtime?
Yes, it can do that, but not in the way you are thinking; what you are describing would be a NoSQL-style setup, and Postgres-XL is an MPP database.
When you create a table you define its DISTRIBUTE BY option, which can be replication, round robin, hash, modulo, etc. You will need to review the details of each option. You can also define tablespaces on specific nodes.
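A hedged sketch of what that looks like in Postgres-XL DDL (table and column names are invented for illustration):

    -- Small reference table: a full copy on every data node
    CREATE TABLE store (store_id int PRIMARY KEY, name text)
        DISTRIBUTE BY REPLICATION;

    -- Large table: rows spread across data nodes by hash
    CREATE TABLE purchases (purchase_id int, store_id int, amount numeric)
        DISTRIBUTE BY HASH (store_id);

Each row of a hash-distributed table lives on exactly one data node, which is why the per-record "replication factor of 3" from the question does not map onto this model.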
Your setup would be something like:
Node 1: Transaction Manager
Node 2: Transaction Manager Proxy
Node 3: Coordinator 1 & Data Node 1
Node 4: Coordinator 2 & Data Node 2
Node 5: Data Node 3
NOTE: It is important to point out that, as I just discovered, Postgres-XL has no HA or failover support. If a single node fails, the database is down and will require manual intervention. Worse, if you are using the round-robin, hash, or modulo sharding options and you lose the disk on a single node, you have lost your database entirely.
You can have standby nodes that mirror each of your nodes, but this doubles the number of nodes you need, and it will still not fail over on its own. You will have to manually reconfigure it to use the standby node and restart.

Can two replica sets share the same database?

I currently have 2 physical servers and one arbiter configured as a replica set. I would like to try sharding with this configuration. I know it is possible to run two mongod instances on the same server, one as the master of replica set 1, the other as a slave of replica set 2: can these two processes (the master of replica set 1 and the slave of replica set 2) point to the same database? Isn't there the danger of a sort of loop?
Hmm, I am unsure whether you know what replication really is.
All members in the replica set share the same database(s); they replicate the database(s) between them and keep them in sync.
Replicas are exactly that: copies of each other, databases included.
I suggest you read: http://docs.mongodb.org/manual/replication/
Edit
There could be another meaning here - pointing to the same files - since you mention running the master and slave on the same node.
First off, running two replica set members on the same node is pointless. You will get no benefit, and if anything you will get a performance problem, since that I/O is now taking double the strain it normally would.
So I would begin by saying that your idea would be really bad design even if it were feasible, which it is not: the physical files cannot have multiple file locks on them.

MongoDB - All nodes secondary

All of the nodes in our cluster are "secondary" and no node is stepping up as "primary".
How do I force a node to become primary?
===SOLUTION===
We had 4 nodes in our replica set, when we should only have an odd number of nodes.
Remove a node so you have an odd number of nodes:
Run rs.config() to print the current configuration.
Edit the list of servers in notepad/textpad, removing one of the servers.
config = POST_MODIFIED_LIST_HERE
rs.reconfig(config, {force: true})
Stop the mongodb service 'mongod' on all nodes and bring them back up.
Done.
If this doesn't fix it, try giving one of the nodes a higher priority.
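A hedged mongo shell sketch of that priority change (member index 0 is an assumption; pick the node you want to win elections):

    cfg = rs.conf()
    cfg.members[0].priority = 2   // higher than the default of 1
    rs.reconfig(cfg)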
You can use the following instructions available at MongoDB's website:
http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary
If you have an even number of nodes, one answer is to remove one. Another is to add an arbiter, which doesn't hold a copy of the data but participates in the cluster purely to vote and break ties. This way you get an odd number of votes and guaranteed elections, while keeping the availability/capacity of four data-bearing nodes.
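A one-line sketch of adding such an arbiter from the mongo shell (the hostname is a placeholder):

    rs.addArb("arbiter.example.net:27017")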