Sharding with replication - mongodb

Sharding with replication
I have a multi-tenant database with 3 tables (store, products, purchases) spread across 5 server nodes. Suppose I have 3 stores in my store table and I am going to shard it by storeId.
I need the data for all shards (1, 2, 3) to be available on nodes 1 and 2, but node 3 would contain only the shard for store #1, node 4 only the shard for store #2, and node 5 only the shard for store #3. It is like sharding with 3 replicas.
Is this possible at all? What database engines can be used for this purpose (preferably SQL databases)? Do you have any experience with this?
Regards

I have a feeling you have not adequately explained why you are trying this strange topology.
Anyway, I will point out several things relating to MySQL/MariaDB.
A Galera cluster already embodies multiple nodes (minimum of 3), but does not directly support "sharding". You can have multiple Galera clusters, one per "shard".
As with my comment about Galera, other forms of MySQL/MariaDB can have replication between nodes of each shard.
If you are thinking of having a server with all data, but replicating only parts of it to read-only Replicas, there are settings for this (replicate_do_db / replicate_ignore_db and their table-level variants). I emphasize "read-only" because changes to these pseudo-shards cannot easily be sent back to the Primary server. (However, see "multi-source replication".)
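For reference, a minimal my.cnf sketch of such a filtered, read-only replica might look like the following (the database name is made up; the exact do/ignore filters depend on how your tenants are laid out):

    # On the replica that should carry only store #1's data (hypothetical name)
    [mysqld]
    server_id           = 3
    read_only           = ON            # keep the pseudo-shard read-only
    replicate_do_db     = store1_db     # only replay events for this database
    # or, the other way around:
    # replicate_ignore_db = store2_db
    # replicate_ignore_db = store3_db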
Sharding is used primarily when there is simply too much traffic to handle on a single server. Are you saying that the 3 tenants cannot coexist because of excessive writes? (Excessive reads can be handled by replication.)
A tentative solution:
Have all data on all servers. Use the same Galera cluster for all nodes.
Advantage: When "most" or all of the network is working all data is quickly replicated bidirectionally.
Potential disadvantage: If half or more of the nodes go down, you have to manually step in to get the cluster going again.
Likely solution for the 'disadvantage': "Weight" the nodes differently. Give a high weight to the 3 nodes in HQ; give a much smaller (but non-zero) weight to each branch node. That way, most of the branches could go offline without losing the cluster as a whole (see the config sketch below).
But... I fear that an offline branch node will automatically become readonly.
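For what it's worth, a rough sketch of that weighting, assuming the 5-node layout from the question (pc.weight is a standard Galera provider option; the numbers are only illustrative):

    # my.cnf on each of the 3 HQ nodes
    [mysqld]
    wsrep_provider_options = "pc.weight=3"

    # my.cnf on each branch node
    [mysqld]
    wsrep_provider_options = "pc.weight=1"

    # Total weight = 3*3 + 2*1 = 11. The HQ trio alone carries weight 9 (> 11/2),
    # so both branches can drop off without the cluster losing quorum.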
Another plan:
Switch to NDB. The network is allowed to be fragile. Consistency is maintained by "eventual consistency" instead of the "[virtually] synchronous replication" of Galera+InnoDB.
NDB allows you to immediately write on any node. Then the write is sent to the other nodes. If there is a conflict, one of the values is declared the "winner". You choose the algorithm for determining the winner; an easy-to-understand one is "whichever write was first".

Related

MongoDB Replica Set - 5 data centers - are two arbiters possible?

Is it possible to deploy MongoDB Replica Set with two arbiters? The documentation states that a replica set should contain a maximum of one arbiter, but it doesn't specify if this is a real limitation or just a recommendation.
The planned deployment would be in five data centers, where three of the data centers would hold the actual data and run on high performance hardware, and the last two data centers would hold no data and only run arbiters.
The goal is to achieve high-availability and allow the system to operate even with the loss of any two data centers.
The other option is a 4+1 setup which would of course cost more, but would that really provide any benefits over 3+2 in this case?
More than one arbiter is not recommended, because arbiters are considered voting nodes. You need to consider what happens when two of the data-bearing nodes go offline, leaving you with a primary and two arbiters. This is not ideal because:
Majority write concern will wait until writes propagate to the majority of voting nodes. Since the arbiters cannot take writes, the write will hang. This is especially a problem if the replica set is part of a sharded cluster, since chunk moves require majority write concern.
Having two arbiters and a single active data-bearing node means that you don't have high availability anymore. If the primary is subsequently corrupted, you have no other node that has a copy of the data.
Losing two datacenters typically means that you have a more pressing issue than the database not allowing writes (e.g. the reliability of your hosting company). You have to wonder: if a hosting company allows two datacenters to be offline for an extended period of time, what are the chances of them corrupting your data?
If you envision that two of your data-bearing nodes can be offline at the same time (due to maintenance, disasters, etc.), then the best thing to do is to have a replica set with five data-bearing nodes.
If five data-bearing nodes is not ideal for your situation, I would recommend going with only one arbiter (the 4+1 topology you mentioned).
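As a rough illustration of the 4+1 topology (hostnames are placeholders), the replica set could be initiated from the mongo shell along these lines; with five votes, any three surviving members can still elect a primary:

    // 4 data-bearing members in 4 data centers, plus 1 arbiter in the 5th
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "dc1.example.net:27017" },
        { _id: 1, host: "dc2.example.net:27017" },
        { _id: 2, host: "dc3.example.net:27017" },
        { _id: 3, host: "dc4.example.net:27017" },
        { _id: 4, host: "dc5.example.net:27017", arbiterOnly: true }
      ]
    })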

MongoDB load balancing in multiple AWS instances

We're using Amazon Web Services for a business application which uses a node.js server and mongodb as the database. Currently the node.js server is running on an EC2 medium instance, and we're keeping our mongodb database in a separate micro instance. Now we want to deploy a replica set for our mongodb database, so that if mongodb gets locked or becomes unavailable, we can still run our database and get data from it.
So we're trying to keep each member of the replica set in a separate instance, so that we can get data from the database even if the instance of the primary member shuts down.
Now, I want to add load balancing to the database, so that the database works fine even under a huge traffic load. In that case I can balance reads by adding the slaveOK config to the replicaSet. But that won't balance the load if there is a huge write load on the database.
To solve this problem I have got two options so far.
Option 1: I have to shard the database and keep each shard in a separate instance. And under each shard there will be a replica set in the same instance. But there is a problem: as the shard divides the database into multiple parts, each shard will not keep the same data within it. So if one instance shuts down, we'll not be able to access the data from the shard within that instance.
To solve this problem I'm trying to divide the database into shards, and each shard will have a replicaSet in separate instances. So even if one instance shuts down, we'll not face any problem. But if we have 2 shards and each shard has 3 members in the replicaSet, then I need 6 aws instances. So I think it's not the optimal solution.
Option 2: We can create a master-master configuration in mongodb, meaning all the nodes will be primaries and all will have read/write access, but I would also like them to auto-sync with each other every so often, so they all end up being clones of each other. And all these primary databases will be in separate instances. But I don't know whether mongodb supports this structure or not.
I haven't found any mongodb doc/blog for this situation. So, please suggest what would be the best solution for this problem.
This won't be a complete answer by far; there are too many details, and I could write an entire essay about this question, as could many others. However, since I don't have that kind of time to spare, I will add some commentary about what I see.
Now, I want to add load balancing to the database, so that the database works fine even under a huge traffic load.
Replica sets are not designed to work like that. If you wish to load balance you might in fact be looking for sharding, which will allow you to do this.
Replication is for automatic failover.
In that case I can balance reads by adding the slaveOK config to the replicaSet.
Since, to stay up to date, your members will be getting just as many ops as the primary, it seems like this might not help too much.
In reality, instead of having one server with many connections queued, you have many connections on many servers queueing for stale data, since member consistency is eventual rather than immediate (unlike ACID technologies). That being said, secondaries are usually only behind by 32-odd ms, which means they are not lagging enough to give you decent extra throughput if the primary is loaded.
Since reads ARE concurrent, you will get the same speed whether you are reading from the primary or a secondary. I suppose you could delay a slave to create a pause in ops, but that would bring back massively stale data in return.
Not to mention that MongoDB is not multi-master, so you can only write to one node at a time, which makes slaveOK not the most useful setting in the world any more; I have seen numerous times where 10gen themselves recommend you use sharding over this setting.
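If you do go the sharding route instead, the basic setup looks roughly like this from a mongos (the database name, collection, and shard key here are hypothetical):

    // Each shard should itself be a replica set
    sh.addShard("shard1ReplSet/shard1-host:27017")
    sh.addShard("shard2ReplSet/shard2-host:27017")

    // Enable sharding on the database, then shard a collection on a chosen key
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.orders", { customerId: 1 })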
Option 2: We can create a master-master configuration in mongodb
This would require your own coding. At that point you may want to consider actually using a database that supports http://en.wikipedia.org/wiki/Multi-master_replication
This is because the speed you are looking for is most likely in fact in writes, not reads, as I discussed above.
Option 1: I have to shard the database and keep each shard in a separate instance.
This is the recommended way, but you have found the caveat with it. This is unfortunately something that remains unsolved and that multi-master replication is supposed to solve; however, multi-master replication brings its own ship of plague rats to Europe, and I would strongly recommend you do some serious research before deciding whether MongoDB can currently service your needs.
You might be worrying about nothing, really, since the fsync queue is designed to deal with the IO bottleneck slowing down your writes as it would in SQL, and reads are concurrent, so if you plan your schema and working set right you should be able to get a massive amount of ops.
There is in fact a linked question around here from a 10gen employee that is very good to read: https://stackoverflow.com/a/17459488/383478 and it shows just how much throughput MongoDB can achieve under load.
It will grow soon with the new document-level locking that is already in the dev branch.
Option 1 is the recommended way, as pointed out by @Sammaye, but you would not need 6 instances; you can manage it with 4 instances.
Assuming you need the configuration below:
2 shards (S1, S2)
1 copy for each shard (Replica set secondary) (RS1, RS2)
1 Arbiter for each shard (RA1, RA2)
You could then divide your server configuration like below.
Instance 1 : Runs : S1 (Primary Node)
Instance 2 : Runs : S2 (Primary Node)
Instance 3 : Runs : RS1 (Secondary Node S1) and RA2 (Arbiter Node S2)
Instance 4 : Runs : RS2 (Secondary Node S2) and RA1 (Arbiter Node S1)
You could run arbiter nodes along with your secondary nodes which would help you in election during fail-overs.
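A sketch of how the two replica sets could be laid out for that configuration (hostnames and ports are placeholders; each rs.initiate() is run against a member of the respective set):

    // Shard 1: primary on instance 1, secondary on instance 3, arbiter on instance 4
    rs.initiate({
      _id: "shard1",
      members: [
        { _id: 0, host: "instance1:27017" },
        { _id: 1, host: "instance3:27017" },
        { _id: 2, host: "instance4:27018", arbiterOnly: true }
      ]
    })

    // Shard 2: primary on instance 2, secondary on instance 4, arbiter on instance 3
    rs.initiate({
      _id: "shard2",
      members: [
        { _id: 0, host: "instance2:27017" },
        { _id: 1, host: "instance4:27017" },
        { _id: 2, host: "instance3:27018", arbiterOnly: true }
      ]
    })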

Mongodb and Cassandra data storing mechanism

I have been reading about MongoDB and Cassandra. MongoDB is master/slave whereas Cassandra is masterless (all nodes are equal). My doubt is about how the data is stored in both of these.
Let's say a user sends a write request to MongoDB (a cluster with a master and different slaves, each on a separate machine). This means the master will decide (or through some application implementation) to which slave this update should be written. That is, the same data will not be available on all the nodes in MongoDB, and each node's size may vary. Am I right? Also, when queried, will the master know to which node this request should be sent?
In the case of Cassandra, the same data will be written to all the nodes, i.e. effectively if one node's size is 10GB, then the other nodes' sizes are also 10GB. Because only if this is the case, when one node fails, the user will not lose any data by querying another node. Am I right here? If I am right and the same data is available on all the nodes, then what is the advantage of using the map/reduce function in Cassandra? If I am wrong, then how is availability maintained in Cassandra, since the same data will not be available on the other nodes?
I was searching stackoverflow about MongoDB vs Cassandra and have read some 10 posts, but my questions could not be cleared up by the answers in those posts. Please clear my doubts, and if I have assumed wrongly, also correct me.
Regarding MongoDB, yep you're right, there is only one primary.
Any secondary can become primary as long as it is in sync, as this means the secondary has all the data. Each node doesn't have to be the same on-disk size, and this can vary depending on when the replication was done; however, they do have the same data (as long as they're in sync).
I don't know much about Cassandra, sorry!
I've written a thesis about NoSQL stores and therefore I hope I remember most parts correctly for Cassandra:
Cassandra is a mixture of Amazon Dynamo, from which it inherits the replication and sharding, and Google's BigTable, from which it got the data model. So Cassandra basically shards your data while keeping copies of it on other nodes. Let's take a five-node cluster, with nodes called A to E. Your keys are hashed onto the keyring through consistent hashing, where contiguous areas of the keyring are stored on a given node. So if we have a value range from 1 to 100, by default each node will get 1/5 of the ring: A will cover [1,20), B [20,40), and so on.
An important concept from Dynamo is the triple (R, W, N), which tells how many nodes have to acknowledge a read, acknowledge a write, and keep a copy of a given value.
By default you have 3 (N) copies of your data, which are stored on the primary node for that key and the two following nodes, which hold the replicas. If I remember the Dynamo paper right, your writes go by default to the first W nodes of your N copies; the other nodes are updated eventually through a gossip protocol.
As long as everything is going fine you'll get consistent results; if your primary node is down for some time, another node takes your data through a hinted hand-off. Once the primary comes back, your data will be merged, or a merge will be attempted (this part I can't really remember, but look up the vector clocks that are used to track the update history).
So as long as not too large a part of your cluster goes down, you'll have a consistent view of your data. If larger parts of your cluster are down, or you read from only a small part of your copies, you may see inconsistencies, which (may) eventually become consistent.
Hope that helped. I can highly recommend reading the original papers about Amazon Dynamo and Google BigTable, but I think you're mostly interested in Amazon Dynamo. Additionally, this post from Werner Vogels may come in handy as well.
As for the shard sizes, I think those can vary depending on your machines and how hot given areas of your keyring are.
Cassandra does not, typically, keep all data on all nodes. As you suggest, this would defeat some of the advantages offered by its distributed data model (in particular, fast writes would be hampered). The amount of replication desired (how many nodes should keep copies of your data) is customizable by the client at write time. As such, you can set it up to replicate across all nodes, or just keep your data on a single node with no replication. It's up to you. The specific node(s) to which the data gets written is determined by the hash value of the key. Each node is assigned a range of hash values it will store, so when you go to look up a value, the key is hashed again and that indicates on which node to find the data.
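To make the placement rule concrete, here is a tiny illustrative sketch (not Cassandra's actual code, and it uses a toy hash) of how a key's hash picks the owning node and the next N-1 neighbours on the ring:

    // Toy ring placement: 5 nodes, each owning 1/5 of a 0-99 keyring
    const nodes = ["A", "B", "C", "D", "E"];

    function toyHash(key) {
      // Not a real hash; just maps a string to 0..99 for the example
      let h = 0;
      for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) % 100;
      return h;
    }

    function replicasFor(key, replicationFactor) {
      const owner = Math.floor(toyHash(key) / 20);          // which fifth of the ring
      const replicas = [];
      for (let i = 0; i < replicationFactor; i++) {
        replicas.push(nodes[(owner + i) % nodes.length]);   // owner plus next N-1 nodes
      }
      return replicas;
    }

    console.log(replicasFor("user:42", 3)); // [ 'B', 'C', 'D' ]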

mongoDB replication+sharding on 2 servers reasonable?

Consider the following setup:
There are 2 physical servers which are set up as a regular mongodb replica set (including an arbiter process, so automatic failover will work correctly).
Now, as far as I understand, most actual work will be done on the primary server, while the slave will mostly just do the work needed to keep its dataset in sync.
Would it be reasonable to introduce sharding into this setup by setting up another replica set on the same 2 servers, so that each of them has one mongod process running as a primary and one process running as a secondary?
The expected result would be that both servers will share the workload of actual queries/inserts while both are up. If one server fails, the whole setup should fail over elegantly and continue running until the other server is restored.
Are there any downsides to this setup, except the overall overhead in setup and number of processes (mongos/configservers/arbiters)?
That would definitely work. I'd asked a question in the #mongodb IRC channel a bit ago as to whether or not it was a bad idea to run multiple mongod processes on a single machine. The answer was "as long as you have the RAM/CPU/bandwidth, go nuts".
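As an illustration (ports and paths are made up), running two shard members side by side on one box is just a matter of giving each mongod its own port, data directory, and replica set name:

    # Server 1: primary of rs0 plus secondary of rs1 (hypothetical layout)
    mongod --shardsvr --replSet rs0 --port 27017 --dbpath /data/rs0
    mongod --shardsvr --replSet rs1 --port 27018 --dbpath /data/rs1

    # Server 2: primary of rs1 plus secondary of rs0
    mongod --shardsvr --replSet rs1 --port 27017 --dbpath /data/rs1
    mongod --shardsvr --replSet rs0 --port 27018 --dbpath /data/rs0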
It's worth noting that if you're looking for high-performance reads, and don't mind writes being a bit slower, you could:
Do your writes in "safe mode", where the write doesn't return until it's been propagated to N servers (in this case, where N is the number of servers in the replica set, so all of them)
Set the driver-appropriate flag in your connection code to allow reading from slaves.
This would get you a clustered setup similar to MySQL - write once on the master, but any of the slaves is eligible for a read. In a circumstance where you have many more reads than writes (say, an order of magnitude), this may be higher performance, but I don't know how it'd behave when a node goes down (since writes may stall trying to write to 3 nodes, but only 2 are up, etc - that would need testing).
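As a sketch with the node.js driver (hosts are placeholders), both settings can be expressed through the connection string: w controls how many members must acknowledge a write, and readPreference allows reads from secondaries:

    // Minimal sketch using the official MongoDB node.js driver
    const { MongoClient } = require("mongodb");

    // w=3 waits until a write reaches all three members ("safe mode" style);
    // readPreference=secondaryPreferred sends reads to secondaries when available
    const uri = "mongodb://host1:27017,host2:27017,host3:27017/mydb" +
                "?replicaSet=rs0&w=3&readPreference=secondaryPreferred";

    const client = new MongoClient(uri);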
One thing to note is that while both machines are up, your queries are being split between them. When one goes down, all queries will go to the remaining machine thus doubling the demands placed on it. You'd have to make sure your machines could withstand a sudden doubling of queries.
In that situation, I'd reconsider sharding in the first place, and just make it an un-sharded replica set of 2 machines (+1 arbiter).
You are missing one crucial detail: if you have a sharded setup with only two physical nodes and one dies, the data on that node is gone. This is because you don't have any redundancy below the sharding layer (the recommended way is for each shard to be composed of a replica set).
What you said about the replica set, however, is true: you can run it on two shared-nothing nodes and have an additional arbiter. However, the recommended setup would be 3 nodes: one primary and two secondaries.
http://www.markus-gattol.name/ws/mongodb.html#do_i_need_an_arbiter

Do I absolutely need a minimum of 3 nodes/servers for a Cassandra cluster or will 2 suffice?

Surely one can run a single-node cluster, but I'd like some level of fault tolerance.
At present I can afford to lease two servers (8GB RAM, private VLAN, 1GigE) but not 3.
My understanding is that 3 nodes is the minimum needed for a Cassandra cluster because there's no possible majority between 2 nodes, and a majority is required for resolving versioning conflicts. Oh wait, am I thinking of "vector clocks" and Riak? Ack! Cassandra uses timestamps for conflict resolution.
For 2 nodes, what is the recommended read/write strategy? Should I generally write to ALL (both) nodes and read from ONE (N=2; W=N/2+1; W=2/2+1=2)? Cassandra will use hinted-handoff as usual even for 2 nodes, yes?
These 2 servers are located in the same data center FWIW.
Thanks!
If you need availability on an RF=2, clustersize=2 system, then you can't use ALL, or you will not be able to write when a node goes down.
That is why people recommend 3 nodes instead of 2: then you can do quorum reads+writes and still have both strong consistency and availability if a single node goes down.
With just 2 nodes you get to choose whether you want strong consistency (write with ALL) or availability in the face of a single node failure (write with ONE), but not both. Of course, if you write with ONE, Cassandra will do hinted handoff etc. as needed to make it eventually consistent.
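To see why, a quick back-of-the-envelope check (plain JavaScript, nothing Cassandra-specific) of what QUORUM means for RF=2 versus RF=3:

    // QUORUM in Cassandra is floor(RF / 2) + 1 replicas
    function quorum(rf) {
      return Math.floor(rf / 2) + 1;
    }

    console.log(quorum(2)); // 2 -> with RF=2, QUORUM needs both replicas,
                            //      so one node down blocks quorum operations
    console.log(quorum(3)); // 2 -> with RF=3, QUORUM still succeeds with one node down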