MongoDB replica set with primary only

MongoDB 4.0 introduced multi-document transactions for replica sets. To use the new feature, I converted my testing instance to a replica set, following the official documentation.
The end result was a replica set with a primary node, no secondary nodes and no arbiters.
I would like to know if this architecture has any implications for performance, data integrity, etc. Any help or reference to a similar case is much appreciated.

It is valid to have a single member replica set for the purposes of testing or development. This will allow you to use features which require a replica set deployment (for example, transactions in MongoDB 4.0+ and change streams in MongoDB 3.6+).
The main downsides of a single member deployment are that you don't get any of the usual replica set benefits such as data redundancy and fault tolerance, and cannot test more interesting read concerns and write concerns that might be useful in a production deployment. A replica set member has some expected write overhead as compared to a standalone server because it also has to maintain a replication oplog.
A production replica set deployment should have a minimum of three members. See Deploy a replica set in the MongoDB documentation for full details.
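For reference, converting a standalone mongod into a single-member replica set only takes one initiation step. A minimal shell sketch, assuming a set name of rs0 and a default localhost:27017 address (both illustrative):

    // Restart mongod with a replica set name first, e.g.:
    //   mongod --replSet rs0 --dbpath /data/db --port 27017
    // Then, in the mongo shell, initiate the set with this node as its only member:
    rs.initiate({
      _id: "rs0",
      members: [{ _id: 0, host: "localhost:27017" }]
    })

    // The single member should report itself as PRIMARY:
    rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })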

Related

Is a single member replica set OK?

I am trying to find an authoritative answer to the following question: Is a single member replica set a supported deployment setup?
While this question may seem weird or silly, my specific use case follows:
A team wants to upgrade from Mongo 2 to Mongo 4 and they foresee that transactions might be useful to them. They currently run a single mongod instance. MongoDB documentation leads them to believe that to use transactions they must activate replica sets and deploy at least 3 mongod instances. Interesting documentation bits are:
https://docs.mongodb.com/manual/core/transactions/#transactions-and-replica-sets
Multi-document transactions are available for replica sets only. Transactions for sharded clusters are scheduled for MongoDB 4.2 [1].
https://docs.mongodb.com/manual/core/replica-set-members/
The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members. You may alternatively deploy a three member replica set with two data-bearing members: a primary, a secondary, and an arbiter, but replica sets with at least three data-bearing members offer better redundancy.
My point of view is that:
Replica sets aim to increase redundancy and availability through a typical primary - secondaries design
Replica set documentation focuses on what makes sense for its original purpose. It documents traditional and sane deployment setups for HA (>= 3 and an odd number of voters, separate machines, how to deal with multiple DCs, etc.)
The team is not interested in HA but must switch to replica sets in order to use TX (the alternative path being to forgo TX and deploy a single mongod)
According to the replica set documentation and my distributed systems background, I don't see why a single member replica set would be an issue if you don't care about HA. With a single member, a primary can be elected, replication is a no-op, and both the default and majority write concerns are effectively just w: 1 (see the sketch after this question).
Am I right?
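For what it's worth, this is straightforward to verify empirically. Below is a minimal shell sketch of a multi-document transaction against a single-member replica set; the set name rs0 and the test.orders collection are assumptions, and note that in MongoDB 4.0 the collection must exist before the transaction starts:

    // Assumes a single-member replica set (e.g. rs0) is already running,
    // and that test.orders was created beforehand (required in 4.0).
    var session = db.getMongo().startSession();
    var orders = session.getDatabase("test").orders;

    session.startTransaction();
    try {
        orders.insertOne({ sku: "abc", qty: 1 });
        orders.insertOne({ sku: "xyz", qty: 2 });
        session.commitTransaction();  // the lone voter satisfies w: "majority"
    } catch (e) {
        session.abortTransaction();
        throw e;
    }
    session.endSession();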

MongoDB SHARDING_FILTER in plan

I have a problem on a sharded cluster. I'm testing performance to compare a sharded cluster with a replica set.
I have inserted data into Shard 1 directly, without mongos, and then queried it with an aggregate query, but I cannot find it. The explain plan shows a SHARDING_FILTER stage on the primary shard, but that stage is missing when I check the explain plan on a secondary.
What configuration controls this?
MongoDB version: 3.0.12
I have inserted data into Shard 1 directly, without mongos, and then queried it with an aggregate query, but I cannot find it.
It's not entirely clear what your performance comparison is, but regardless, you should always interact with data via mongos for a sharded cluster.
The role of mongos includes keeping track of the sharded cluster metadata (as cached from the config servers), observing data inserts/updates/deletions, and routing requests. Bypassing mongos will lead to potential complications in collection/data visibility (as you have observed) because you are skipping some of the expected data management infrastructure for your sharded deployment.
The explain plan shows a SHARDING_FILTER stage on the primary shard, but that stage is missing when I check the explain plan on a secondary.
Secondary reads are eventually consistent, so the state of data on a given secondary may not necessarily match the current sharded cluster metadata. This becomes more problematic with many shards: with a secondary read preference results can potentially be combined from secondaries with significant differences in replication lag.
For consistent queries for a sharded cluster you should always use primary reads (which is the default behaviour) via mongos. Queries against primaries through mongos may include a SHARDING_FILTER stage which filters result documents that are not owned by the current shard (for example, due to migrations in progress where documents need to transiently exist on both a donor and target shard).
As of MongoDB 3.4, secondaries do not have the ability to filter results because they'd need to maintain a separate view of the cluster metadata which matches their eventually consistent state. There's a relevant Jira issue to watch/upvote: SERVER-5931 - Secondary reads in sharded clusters need stronger consistency. I currently would not recommend secondary reads in a sharded cluster (or in general) without careful consideration of the impact of eventual consistency on your use case. For the general case, please read Can I use more replica nodes to scale?.
What configuration controls this?
Use the default read preference (primary reads) and always interact with your sharded deployment through mongos.
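There is no server-side setting that makes direct shard reads consistent; the fix is in how you connect and query. A minimal shell sketch, with the mongos hostname and the orders collection as illustrative assumptions:

    // Connect to mongos rather than to a shard member directly:
    //   mongo --host mongos.example.net --port 27017
    // Writes and reads routed through mongos use the cluster metadata,
    // and the default (primary) read preference applies:
    db.orders.insert({ sku: "abc", qty: 1 });
    db.orders.find({ sku: "abc" }).explain();  // may show a SHARDING_FILTER stage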

MongoDB replication factors

I'm new to Mongo and have been using Cassandra for a while. I didn't find any clear answers in Mongo's docs. My questions below are all closely related.
1) How to specify the replication factors in Mongo?
In C* you can define replication factors (how many replicas) for the entire database and/or for each table. Mongo's newer replica set concept is similar to C*'s idea. But I don't see how I can define the replication factor for each table/collection in Mongo. Where do I define that in Mongo?
2) Replica sets and replication factors?
It seems you can define multiple replica sets in a Mongo cluster for your databases. A replica set can have up to 7 voting members. Does a 5-member replica set mean the data is replicated 5 times? Or are replica sets only for electing the primary?
3) Replica sets for what collections?
The replica set configuration doc didn't mention anything about databases or collections. Is there a way to specify what collections a replica set is intended for?
4) Replica sets definition?
If I wanted to create a 15-node Mongo cluster and keep 3 copies of each record, how do I partition the nodes into multiple replica sets?
Thanks.
Mongo replication works by replicating the entire instance. It is not done at the individual database or collection level. All replicas contain a copy of the data except for arbiters. Arbiters do not hold any data and only participate in elections of a new primary. They are usually deployed to create enough of a majority that if an instance goes down a new instance can be elected as the primary.
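To make the instance-level point concrete, here is a sketch of a 5-member replica set configuration (hostnames illustrative). The four data-bearing members each hold a full copy of every database and collection on the instance; the arbiter holds none, so data is replicated 4 times here:

    rs.initiate({
        _id: "rs1",
        members: [
            { _id: 0, host: "db1.example.net:27017" },
            { _id: 1, host: "db2.example.net:27017" },
            { _id: 2, host: "db3.example.net:27017" },
            { _id: 3, host: "db4.example.net:27017" },
            { _id: 4, host: "db5.example.net:27017", arbiterOnly: true }
        ]
    })

In C* terms, the "replication factor" of this set is 4, and it applies to the whole deployment rather than per collection.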
It's pretty well explained here: https://docs.mongodb.com/manual/replication/
Replication refers to the process of ensuring that the same data is available on more than one MongoDB server. This is sometimes required for the purpose of increasing data availability.
If your main MongoDB server goes down for any reason, there will be no access to the data. But if you have the data replicated to another server at regular intervals, you will be able to access the data from another server even if the primary server fails.
Another purpose of replication is the possibility of load balancing. If there are many users connecting to the system, instead of having everyone connect to one system, users can be connected to multiple servers so that there is an equal distribution of the load.
In MongoDB, multiple MongoDB servers are grouped in sets called replica sets. The replica set will have a primary server which will accept all the write operations from clients. All other instances added to the set after this will be called the secondary instances, which can be used primarily for read operations.

What is the advantage of explicitly connecting to a Mongo Replica Set?

Obviously, I know why to use a replica set in general.
But, I'm confused about the difference between connecting directly to the PRIMARY mongo instance and connecting to the replica set. Specifically, if I am connecting to Mongo from my node.js app using Mongoose, is there a compelling reason to use connectSet() instead of connect()? I would assume that the failover benefits would still be present with connect(), but perhaps this is where I am wrong...
The reason I ask is that, in Mongoose, the connectSet() method seems to be less documented and less widely used. Yet, I cannot imagine a scenario where you would NOT want to connect to the set, since it is recommended to always run Mongo on a replica set of 3 or more members...
If you connect only to the primary then you get failover (that is, if the primary fails, there will be a brief pause until a new primary is elected). Replication within the replica set also makes backups easier. A downside is that all writes and reads go to the single primary (a MongoDB replica set only has one primary at a time), so it can be a bottleneck.
Allowing connections to secondaries, on the other hand, allows you to scale for reads (not for writes - those still have to go to the primary). Your throughput is no longer limited by the spec of the machine running the primary node but can be spread around the secondaries. However, you now have a new problem of stale reads; that is, there is a chance that you will read stale data from a secondary.
Now think hard about how your application behaves. Is it read-heavy? How much does it need to scale? Can it cope with stale data in some circumstances?
Incidentally, the point of a minimum 3 members in the replica set is to offer resiliency and safe replication, not to provide multiple nodes to connect to. If you have 3 nodes and you lose one, you still have enough nodes to elect a new primary and have replication to a backup node.
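For the Mongoose side of the question: plain connect() accepts a replica-set connection string, which is the usual way to have the driver discover the topology and follow elections. A minimal sketch, with hostnames, set name, and database name as illustrative assumptions:

    // Node.js / Mongoose; listing several members plus replicaSet lets the
    // driver track the topology instead of pinning to a single host.
    var mongoose = require('mongoose');
    mongoose.connect(
        'mongodb://db1.example.net:27017,db2.example.net:27017,db3.example.net:27017/mydb?replicaSet=rs0'
    );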

Does MongoDB require at least 2 server instances to prevent the loss of data?

I have decided to start developing a little web application in my spare time so I can learn about MongoDB. I was planning to get an Amazon AWS micro instance and start the development and the alpha stage there. However, I stumbled across a question here on Stack Overflow that concerned me:
But for durability, you need to use at least 2 mongodb server instances as master/slave. Otherwise you can lose the last minute of your data.
Is that true? Can't I just have my box with everything installed on it (Apache, PHP, MongoDB) and rely on the data being correctly stored? At least, there must be a config option in MongoDB to make it behave reliably even if installed on a single box - isn't there?
The information you have on master/slave setups is outdated. Running single-server MongoDB with journaling is a durable data store, so for use cases where you don't need replica sets, or if you're in the development stage, journaling will work well.
However, if you're in production, we recommend using replica sets. For the bare minimum setup, you would ideally run three (or more) instances of mongod: a 'primary' which receives reads and writes, a 'secondary' to which the writes from the primary are replicated, and an arbiter, a single instance of mongod that allows a vote to take place should the primary become unavailable. This 'automatic failover' means that, should your primary be unable to receive writes from your application at a given time, the secondary will become the primary and take over receiving data from your app.
You can read more about journaling here and replication here, and you should definitely familiarize yourself with the documentation in general in order to get a better sense of what MongoDB is all about.
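Journaling has been enabled by default on 64-bit builds for a long time, but you can make it explicit in the server configuration. A minimal mongod.conf sketch (YAML form; the dbPath is illustrative):

    # storage.journal.enabled is the setting that gives single-server durability
    storage:
      dbPath: /data/db
      journal:
        enabled: true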
Replication provides redundancy and increases data availability. With multiple copies of data on different database servers, replication protects a database from the loss of a single server. Replication also allows you to recover from hardware failure and service interruptions. With additional copies of the data, you can dedicate one to disaster recovery, reporting, or backup.
In some cases, you can use replication to increase read capacity. Clients have the ability to send read and write operations to different servers. You can also maintain copies in different data centers to increase the locality and availability of data for distributed applications.
Replication in MongoDB
A replica set is a group of mongod instances that host the same data set. One mongod, the primary, receives all write operations. All other instances, secondaries, apply operations from the primary so that they have the same data set.
The primary accepts all write operations from clients. A replica set can have only one primary. Because only one member can accept write operations, replica sets provide strict consistency. To support replication, the primary logs all changes to its data sets in its oplog. See primary for more information.
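If you want to see the oplog described above, it lives in the local database on every replica set member; a quick shell sketch:

    // Show the most recent replicated operation (read-only inspection):
    use local
    db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()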