I have a single server that I now want to replicate to go for higher availability. One of the elements in my software stack is ZooKeeper, so it seems natural to move it to a clustered configuration.
However, I have data on my single server, and I couldn't find any guide on moving to a clustered setup. I tried setting up two independent instances and then switching to a clustered configuration, but only the data present on the elected leader was preserved.
So, how can I safely go from a single server setup to a clustered setup without losing data?
If you go from 1 server straight to 3 servers, you may lose data: the 2 new servers are enough to form a quorum on their own and can elect one of themselves as leader, ignoring the old server and discarding all the data on that machine.
If you instead grow your cluster from 1 to 2 servers, a quorum cannot form without the old server being involved when the two servers start up, so no data is lost. Once the cluster finishes starting, all data will be synced to both servers.
Then you can grow the cluster from 2 to 3; again, a quorum cannot form without at least one server that holds a copy of the database, and again all data will be synced to all three servers once the cluster finishes starting.
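For reference, a minimal sketch of the static ensemble configuration for the 1-to-2 step; the hostnames zk1/zk2 and the data directory are placeholders, not taken from the question:

    # zoo.cfg on both machines for the 1 -> 2 step
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    # server.1 is the existing machine that already holds the data,
    # server.2 is the new, empty one; each machine also needs a myid
    # file in dataDir containing its own id (1 or 2)
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888

For the 2-to-3 step, add a server.3 line on all machines and restart them one at a time; on ZooKeeper 3.5 and later you can use dynamic reconfiguration instead of editing the static config.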
We are currently setting up our mongodb environment for production. At the moment we only have one dedicated mongodb database server. We will expand this in the near future with a 2nd server and I already indicated to the management that for the ideal situation we should get a 3rd server as well.
Since I already know we're going to use sharding and replication in the near future I want to be prepared for it.
The idea I have now is to start with the Development Configuration (as MongoDB's documentation names it).
Whenever our second server becomes available I would like to expand this setup to a configuration with 2 configuration servers and 2 shards (replica sets).
And of course, when our third server becomes available, move to the fully functional sharded cluster configuration.
While reading MongoDB's documentation I was struck by the note that the Development setup should not be used in production.
MongoDb Development Configuration
Keeping in mind that we will add more servers soon, would it be a bad idea to set up the Development Configuration now, so we can easily add the 2nd server to the cluster when it becomes available?
After setting up the 'development sharded setup' I've found my answer. Of course I'm happy to share it in case anybody runs into the same questions I did when starting out.
In my case, it was OK to start with the development setup until my new servers arrived. It was a temporary situation, and when my new servers arrived I was able to easily expand my replica sets. There are a number of reasons why this isn't advised for production:
To state the obvious, there is no replication yet. Since I was running all shards on one machine, there is a single point of failure. If the machine, or even a single node, goes down, the cluster won't work anymore.
Now this part is interesting. After I added a second server, I did have primary and secondary nodes. Primary nodes were used for writing and secondaries for reading. I had eliminated the lack of replication AND my data had higher availability. However, I noticed that with 2-member replica sets, if one member of the replica set went down (even if this was a secondary), the primary stepped down to a secondary as well. This has to do with the voting mechanism that MongoDB uses; see Markus' more detailed answer on this. Since there is then no primary left in the replica set, the cluster won't function anymore. If I were to use an arbiter, I could eliminate this problem as well.
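As an illustration only (the hostname and port are placeholders, not part of my original setup), adding an arbiter to such a 2-member replica set from the mongo shell looks roughly like this:

    // run against the current primary; the arbiter holds no data and only
    // votes, so it can live on a small machine or on an app server
    rs.addArb("arbiter1.example.net:27017")

With three voting members, losing any single member no longer costs the set its majority, so a primary can always be kept or elected.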
When you have a 3-member replica set, automatic failover kicks in. Whenever a node goes down, a new primary is elected automatically and the cluster continues performing as before.
During my tests I also got to a point where one of my mongod.exe instances stopped working due to an out-of-memory exception. I was running a cluster with 3 replica sets, meaning every machine had at least 4 mongod.exe processes running (3 for the replica-set shards and one for the config server replica set). Besides having a query that wasn't optimized yet, I also noticed that the WiredTiger storage engine can by default use up to 50% of (RAM minus 1 GB). Perhaps it wasn't the best choice to have multiple replica-set shards on one machine, but I was able to eliminate the problem by capping WiredTiger's memory usage.
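For anyone wanting to do the same, the cache can be capped per mongod process in its configuration file; the 1 GB value below is only an example:

    # mongod.conf (YAML)
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1   # limit the WiredTiger cache for this process

This matters most when several mongod processes share one machine, since each of them would otherwise assume it can claim the default share of RAM.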
I hope this answer helps anybody who's starting to set up replication and sharding for MongoDB.
Recently I've been reading about methods to horizontally scale a server.
The majority of methods, if not all, require a central server to sync the data (Redis is widely used here). That means that multiple nodes rely on one central server, one point of failure, to have data synced between them.
I know that I can have multiple instances of Redis, and those nodes can connect to the backup Redis servers and keep syncing the data in case the main Redis server fails, but I don't like the idea of having "2 layers" (a server layer and a Redis layer) to achieve this. Is there any piece of software, or method, to have multiple nodes sync between themselves while only being connected to each other?
I mean, like having the 3 Redis servers installed on the same 3 server machines. If one server goes down, it goes down along with its Redis server, so the other 2 server machines connect to the fallback Redis servers installed on their own machines. But instead of this hackish approach, I'd like a piece of software that simply connects the nodes, syncs the data between them and so on.
I've been thinking about such a system, but one problem arises, and it extends to the "having multiple Redis instances" idea as well.
Imagine that I have a system with X Redis servers in different datacenters, and also X application servers in different datacenters. I don't want them all in the same datacenter, to avoid a single point of failure. Now, let's say that half of the servers lose the connection to the other half, not because machines go down, but because of connection failures (an ISP problem and such). The slaves will think the master Redis server is dead because the connection dropped and they cannot reconnect, and the same goes for the application servers that lost the connection: they will also think the master Redis is dead and will connect and sync data using the slaves.
Now we have a scenario of one master Redis server with X servers syncing data between them, and some slave Redis servers with the other X servers syncing data between them.
With this, the whole distributed system built to avoid a single point of failure is a failure by itself, because it still has a single point of failure which, instead of making people lose data, leaves the system with unsynced garbage data.
And as this is realtime data, "save timestamps and resync the data when the connection recovers in a few minutes" is not a solution.
What do you think? Any solution?
Should I keep the replica sets and config servers on separate servers? Or have one replica set and one config server on one server? Can I have all replica sets on one server and all config servers on another server? (Does this defeat the purpose of sharding?)
The purpose of sharding is distributing load on multiple servers. The purpose of replication is (mostly) redundancy by allowing one server to take the place of another when that server goes offline for some reason. Obviously, it does not make much sense in either case to run multiple instances on the same server. So yes, it would defeat the purpose of sharding.
However, when you only have two servers and have to choose between replication and sharding, you can get the best of both worlds by creating two shards, where each shard has a secondary which runs on the server of the primary of the other shard. That way you get the performance improvement when everything is OK, but don't lose access to half your data when one server goes down.
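A rough sketch of such a crossed layout, and of registering the two shards from the mongos shell (hostnames, ports and replica-set names are placeholders):

    // server A runs: primary of shard rs0 (27018) and secondary of shard rs1 (27019)
    // server B runs: primary of shard rs1 (27018) and secondary of shard rs0 (27019)
    sh.addShard("rs0/serverA.example.net:27018,serverB.example.net:27019")
    sh.addShard("rs1/serverB.example.net:27018,serverA.example.net:27019")

Keep in mind that 2-member replica sets still need an arbiter (or a third data-bearing member) somewhere, otherwise the surviving member cannot stay primary when the other server goes down, as discussed above.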
Regarding the config servers: MongoDB recommends making them a separate replica set which runs on separate servers. But when you are on a budget, it is technically possible to put that replica set on the same hardware which runs the actual database. The config servers are only required when a mongos process (re)starts or when a chunk migration happens, and they are relatively idle the rest of the time. Unfortunately, a chunk migration is also a phase where the involved shards are very busy, so running the config servers on the same hardware will make the performance drop during chunk migrations even worse.
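For completeness, a minimal sketch of the mongod configuration for one member of such a config-server replica set (the set name configRS and the hosts below are placeholders):

    # mongod.conf for each config server
    sharding:
      clusterRole: configsvr
    replication:
      replSetName: configRS

The mongos routers are then pointed at that set with --configdb configRS/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019.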
I am aware that mongodb has a master-slave architecture.
Therefore, I was thinking that the master would be the single point of failure in MongoDB, since it takes care of all the requests and sends them to the slave nodes. However, when the master fails, a new master is elected from the slaves. Therefore I need some clarification on where the single point of failure lies.
Does mongoDB have a single point of failure? Is it in the master node?
Thanks,
MongoDB can be set up in a way that there is no single point of failure (at least none specific to MongoDB).
When you set up replication as suggested (which includes a primary, a secondary and an arbiter on a 3rd server), the secondary will take over the role of the primary when the primary goes down. Keep in mind that this only works when the application knows about both the primary and the secondary (how to make it aware of them depends on the driver).
When you have a sharded cluster, the mongos router process and the config servers become additional possible points of failure, but you can also set up redundant routers and config servers. To send the clients to another mongos server when theirs goes down, you need a 3rd-party load-balancing solution.
For a proper production MongoDB setup with clustering, MongoDB Inc. suggests:
At least 2 mongos routers
Exactly 3 config servers
3 servers per shard (primary, secondary and arbiter), where the arbiters do not necessarily need dedicated servers and can share hardware with the routers, config servers, members of a different replica-set or app servers.
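As an illustration of the per-shard replica set, a minimal primary-secondary-arbiter initiation from the mongo shell could look roughly like this (the set name and hostnames are placeholders):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "db1.example.net:27017" },                    // primary candidate
        { _id: 1, host: "db2.example.net:27017" },                    // secondary
        { _id: 2, host: "db3.example.net:27017", arbiterOnly: true }  // arbiter, holds no data
      ]
    })

Since the arbiter only votes in elections, it can share hardware with a router, a config server or an app server, as mentioned above.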
I have never really had my hands on coding, and I have a question regarding MongoDB replica sets.
Below is the situation:
I have an alert monitoring application.
It is using MongoDB with a replica set of 3 nodes.
The application's Java code base keeps connecting to the primary and doing transactions.
Now my question is:
if the primary server goes down, how will it affect the application server?
I mean, would the app server get errors saying the connection failed,
OR
will the replica set automatically pick one of the slaves as the new master and let the application server continue its activity? How does that happen?
Thanks & Regards,
UDAY
The replica set will try to pick another server as the new primary. If you have three nodes and one goes down, the other two will negotiate which one becomes the new master. If two go down, or communication between the remaining nodes somehow breaks down, there will be no new master until the situation is recovered.
The official drivers support this automatic fail-over, as does the mongos routing server if you use it. So the application code does not need to do anything here.
I am not sure if there will be connection errors during the brief period of time this fail-over negotiation takes (you will probably get errors for a few seconds).
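For the fail-over to work from the application's side, the driver should be given all replica-set members (or at least a seed list of them) rather than only the primary. A minimal connection string would look roughly like this (hostnames and set name are placeholders):

    mongodb://db1.example.net:27017,db2.example.net:27017,db3.example.net:27017/?replicaSet=rs0

With such a connection string, the Java driver (like any official driver) monitors the replica set and redirects writes to whichever member is currently primary.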