Technically, is it supported to begin with just one shard in a sharded cluster? That way we would be ready to add additional shards at any time, while saving the cost of the extra shards before we really need them.
To go further, is it possible to have a shard running on one single instance, instead of having to be based on a 3-instance replica set?
From here, sharding is:
A database architecture that partitions data by key ranges and
distributes the data among two or more database instances.
A shard can be either a replica set or a standalone mongod instance. It is possible to use a single machine by using different ports to establish distinct communication endpoints for the config server, mongod, and mongos processes on that machine. And yes, you may add a shard at a later time when you need to expand.
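For example, adding a shard later boils down to one admin command sent to a mongos. A rough PyMongo sketch (the host, port, and replica set name here are just assumptions for illustration):

```python
from pymongo import MongoClient

# Connect to a mongos router (assumed to be listening on localhost:27017).
client = MongoClient("mongodb://localhost:27017")

# Register a newly provisioned shard with the cluster. "rs1" and its
# host:port are placeholders for the replica set (or standalone mongod)
# you just brought up.
client.admin.command("addShard", "rs1/shard-host-1:27018")
```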
However, the point of sharding is to support horizontal scaling, and the point of sharded clustering is to provide failover and redundancy. By using a single shard on a single server, you lose the benefits of scaling and certainly of failover.
The recommended production architecture includes:
Three config servers on separate machines for each sharded cluster.
Two or more replica sets as shards.
One or more query routers (mongos); typically, one mongos instance per application server.
Peruse the Sharded Cluster Requirements section in the documentation to get a feel for whether or not your environment needs sharding and sharded clusters, since establishing such an architecture involves considerable complexity.
Related
Currently we are working with a standalone MongoDB deployment without any replication or sharding. Now we are considering moving to a replica set for production purposes.
Will an application written for standalone MongoDB work with a replica set or a sharded cluster without any changes, or are there some standalone/replica-set specific features in MongoDB?
Provided MongoDB uses the default port (27017 for both a standalone mongod and mongos), you don't need to touch your client application at all; it will work in either case.
Of course, a sharded cluster offers more connection options, but the defaults are fine.
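For illustration, here is a minimal PyMongo sketch (the database and collection names are just placeholders): the same code works unchanged whether localhost:27017 is a standalone mongod or a mongos in front of a sharded cluster.

```python
from pymongo import MongoClient

# Whether this endpoint is a standalone mongod or a mongos router on the
# default port, the application code is identical.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.insert_one({"item": "notebook", "qty": 5})
print(orders.find_one({"item": "notebook"}))
```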
Will an application written for standalone MongoDB work with a
replica set or a sharded cluster without any changes, or are there
some standalone/replica-set specific features in MongoDB?
Here are some things to think about when an application is to run on a replica set or a sharded cluster. In addition, replica sets and sharded clusters have some features that are not available in a standalone deployment (see the Transactions and Change Streams topic at the bottom).
Replica Sets
A replica set is a cluster of multiple database servers, with replicated data on each server. The topology of a replica set has one primary node (or member), and the remaining members are secondaries (there can be other special-purpose nodes, like arbiters).
The data redundancy and failover features of replica sets give your applications additional capabilities - for example, an application keeps running even if a server goes down.
By default, data is always written to and read from the primary. You can also configure your application so that data can be read from the secondary nodes - this is the Read Preference. This configuration can be used by applications accessing a replica set in some scenarios (see Read Preference Use Cases). It applies to replica sets and has no use for a standalone deployment.
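As a hedged sketch (the host names, set name, and collection are assumptions), a read preference can be set either in the connection string or on a collection handle with PyMongo:

```python
from pymongo import MongoClient, ReadPreference

# Read preference can be set in the connection string...
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred"
)

# ...or per collection handle. This only has an effect on a replica set;
# on a standalone server all reads go to the single mongod anyway.
reports = client["shop"].get_collection(
    "reports", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(reports.find_one())
```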
Also, see Replica Set Read and Write Semantics:
From the perspective of a client application, whether a MongoDB
instance is running as a single server (i.e. “standalone”) or a
replica set is transparent. However, MongoDB provides additional read
and write configurations for replica sets.
Then there are things like the Connection String URI, which uses a different format for replica sets and sharded clusters - this is what applications use to connect.
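To make the difference concrete (the host names and replica set name below are assumptions), the URIs typically look like this in PyMongo:

```python
from pymongo import MongoClient

# Standalone server.
standalone = MongoClient("mongodb://db-host:27017")

# Replica set: list one or more members and name the set, so the driver
# can discover the full topology and follow failovers.
replica_set = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0"
)

# Sharded cluster: list one or more mongos routers (no replicaSet option).
sharded = MongoClient("mongodb://mongos1:27017,mongos2:27017")
```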
Sharded Cluster
The application cannot simply be run on a sharded cluster deployment as it is. It will require design-level changes - and these will affect the queries. Sharding is about distributing data among shards. Note that in a sharded cluster each shard is a replica set. A sharded database can have both sharded and un-sharded collections; the sharded collections are the distributed data.
To create a sharded collection, you must choose a shard key - this is the most important aspect of your application accessing a sharded collection. The shard key determines how queries are routed to a particular shard to get the data. So your application must take the shard key into consideration - queries need to be written to use the shard key. The shard key primarily affects the performance of your application's queries.
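As a rough sketch of what this looks like from code (the database, collection, and key names are assumptions), a collection is sharded on its shard key through admin commands sent via a mongos, and queries that include the key can be routed to a single shard:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed to be a mongos

# Enable sharding on the database, then shard the collection on a chosen key.
client.admin.command("enableSharding", "shop")
client.admin.command("shardCollection", "shop.orders", key={"customerId": 1})

orders = client["shop"]["orders"]

# Includes the shard key: can be routed to a single shard.
orders.find_one({"customerId": 12345, "status": "open"})

# No shard key: broadcast to all shards (scatter-gather).
orders.find_one({"status": "open"})
```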
Also, in a sharded cluster environment the application accesses the database via a mongos router - not the servers directly.
There are many other finer aspects of working with sharded databases from an application - the topic is too broad to discuss here. Changing from a standalone deployment to a sharded cluster is an architectural change. Some aspects that affect the application when migrating from standalone to a replica set also apply here (as each shard is a replica set).
Also, see Operational Restrictions in Sharded Clusters - these are specific to sharded clusters and not applicable to standalone deployments.
Transactions and Change Streams
Features like transactions and change streams are available only with replica sets and sharded clusters (and not on single standalone servers). This gives your applications additional capabilities and can help implement complex business logic and scenarios.
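For illustration, a minimal PyMongo sketch (database and collection names are placeholders; this assumes a replica set is already running):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["shop"]

# Multi-document transaction: requires a replica set or sharded cluster.
with client.start_session() as session:
    with session.start_transaction():
        db["orders"].insert_one({"item": "notebook", "qty": 5}, session=session)
        db["stock"].update_one(
            {"item": "notebook"}, {"$inc": {"qty": -5}}, session=session
        )

# Change stream: also requires a replica set or sharded cluster.
with db["orders"].watch() as stream:
    for change in stream:
        print(change["operationType"])
        break  # stop after the first event for this illustration
```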
As per the MongoDB documentation, transactions only work on replica sets and not on a single node. Why such a requirement? Isn't it easier to do transactions on a single node rather than in a distributed system?
The implementation of transactions uses sessions, which in turn require an oplog. The oplog is provided by replica sets for data synchronization between nodes.
Isn't it easier to do transactions on a single node rather than in a distributed system?
This is true, but in practice MongoDB positions itself as a high-availability database, so there are rather few production deployments using a standalone server (as far as I know, this isn't even an option in Atlas, for example). Hence the lack of transaction support on standalone servers typically doesn't affect anything.
Conversely, implementing transactions only on standalone servers would not address the needs of the vast majority of MongoDB deployments/customers that use replica sets and sharded clusters.
For development purposes you can run a single-node replica set, which gives you the oplog required for sessions and transactions while still running only one mongod process.
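A rough sketch of that setup (the replica set name and port are assumptions): start one mongod with --replSet, then initiate the set once, for example from PyMongo:

```python
from pymongo import MongoClient

# Assumes `mongod --replSet rs0 --port 27017` is already running locally.
# directConnection=true lets the driver talk to the not-yet-initiated member.
client = MongoClient("mongodb://localhost:27017/?directConnection=true")

# One-time initiation of a single-member replica set.
client.admin.command(
    "replSetInitiate",
    {"_id": "rs0", "members": [{"_id": 0, "host": "localhost:27017"}]},
)
```

After that, the application connects with a normal ?replicaSet=rs0 URI and can use sessions and transactions against the single node.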
I am aware that MongoDB has a master-slave architecture.
Therefore, I was thinking that the master would be the single point of failure in MongoDB, since it takes care of all the requests and sends them to the slave nodes. However, when the master fails, a new master is elected from the slaves. So I need some clarification on where the single point of failure lies.
Does MongoDB have a single point of failure? Is it the master node?
Thanks,
MongoDB can be set up in a way that there is no single point of failure (at least none specific to MongoDB).
When you set up replication as suggested (which includes a primary, a secondary and an arbiter on a 3rd server), the secondary will take over the role of the primary when the primary goes down. Keep in mind that this only works when the application knows about both the primary and the secondary (how to make it aware of them depends on the driver).
When you have a sharded cluster, the mongos router process and the config servers become additional possible points of failure, but you can also set up redundant routers and config servers. To send clients to another mongos server when theirs goes down, you need a 3rd-party load-balancing solution.
For a proper production MongoDB setup with clustering, MongoDB Inc. suggests:
At least 2 mongos routers
Exactly 3 config servers
3 servers per shard (primary, secondary and arbiter), where the arbiters do not necessarily need dedicated servers and can share hardware with the routers, config servers, members of a different replica-set or app servers.
I want to make a continuous backup of my sharded cluster onto a single MongoDB server somewhere else.
So, is it possible to create a replica set consisting of a sharded cluster (mongos instance) and a single MongoDB server?
Has anyone had experience creating replica sets with two sharded clusters, or with one sharded cluster and one single server?
How does it work?
By the way, the best (and, for now, the only) way to continuously back up a sharded cluster is by using the MongoDB Management Service (MMS).
I was also facing the same issue some time back. I wanted to replicate the whole sharded cluster into one MongoDB server, but I didn't find any solution for this scenario, and I think that is expected, because -
If you configure multiple shard servers (say 2 shard servers) as
one replica set, it will not work, because a replica set (say
rs0) can have only one primary member. In this scenario, we would
end up with multiple primaries, depending on the number of shard servers.
To take a backup of your whole sharded cluster, you must stop all writes to the cluster. You can refer to the MongoDB documentation on this - http://docs.mongodb.org/manual/tutorial/backup-sharded-cluster-with-database-dumps/
It's all in the title: do we need 2 mongos per shard in MongoDB? I am not sure I understand exactly what mongos is for, and whether my website will communicate with it or whether it is something internal to MongoDB.
If you have a cluster set up (with shards, not to be confused with a replica set), then you have to have mongos instances deployed. mongos is a router process: it knows which data resides where. The application talks to mongos, and mongos routes each request to the corresponding shard. Talking to shards directly is strongly discouraged.
You must have at least one mongos process. You can have more; they have a small resource footprint. I usually deploy one mongos per application server.
A mongos is basically nothing more than a router which gathers the configuration of your cluster from the config servers, caches that config, and uses it to route targeted and scatter-gather operations within a cluster of shards. It is also used for aggregation, so if aggregation queries are common in your app the mongos can take some CPU and memory; however, for the most part it has little weight and can run on the smallest of servers.
You do not require 2 mongos; the number depends on the operations being sent through the router. You can in theory get by with one; however, that isn't very redundant and creates a single point of failure. Two make that less likely.
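As a small sketch of that redundancy point (the host names are assumptions), giving the driver both routers in its seed list lets it keep working through the surviving mongos if one goes down:

```python
from pymongo import MongoClient

# Two mongos routers in the seed list; if one fails, the driver keeps
# routing operations through the other.
client = MongoClient("mongodb://mongos1:27017,mongos2:27017")
print(client["shop"]["orders"].find_one({"status": "open"}))
```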