Is sharding done per database or collection? - mongodb

I am trying to understand the sharding concept with respect to MongoDB. To understand the concept, let's say we have two scenarios:
1. I have two databases, 'customer' and 'item'.
2. I have two collections, 'customer' and 'item', in the same database.
Both 'customer' and 'item' datasets are huge (in TB).
My question is: in each of the scenarios above, how is sharding designed, and which one is preferred?
The examples I have come across talk about sharding a single collection, but how do we handle it when we have multiple databases or collections?
Please point me in the right direction.

MongoDB distributes data, or shards, at the collection level.
See here:
https://docs.mongodb.org/manual/core/sharding-introduction/#data-partitioning
The procedure requires you to first enable sharding at the database level (which will not automatically shard any collection), and then shard each collection individually.
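For illustration, a minimal mongo shell sketch of that procedure (the database name 'mydb' and the shard keys below are hypothetical, not a recommendation):

    // Enable sharding for the database; no collection is sharded yet
    sh.enableSharding("mydb")

    // Then shard each large collection individually, each with its own shard key
    sh.shardCollection("mydb.customer", { customerId: 1 })
    sh.shardCollection("mydb.item", { itemId: 1 })

So in the second scenario, 'customer' and 'item' can be sharded independently within the same database, each with a shard key suited to its own query pattern.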
I think you should read through the docs carefully as sharding is by no means magic, and requires thorough planning and understanding of the mechanics.

Related

Single big collection for all products vs Separate collections for each Product category

I'm new to NoSQL and I'm trying to figure out the best way to model my database. I'll be using ArangoDB in the project but I think this question also stands if using MongoDB.
The database will store 12 categories of products. Each category is expected to hold hundreds or thousands of products. Products will also be added / removed constantly.
There will be a number of common fields across all products, but each category will also have unique fields / different restrictions to data.
Keep in mind that there are instances where I'd need to query all the categories at the same time, for example to search a product across all categories, and other instances where I'll only need to query one category.
Should I create one single collection "Product" and use a field to indicate the category, or create a seperate collection for each category?
I've read many questions related to this idea (1 collection vs many) but I haven't been able to reach a conclusion, other than "it dependes".
So my question is: In this specific use case which option would be most optimal, multiple collections vs single collection + sharding, in terms of performance and speed ?
Any help would be appreciated.
As you mentioned, you need to experiment with your data and use case; that will give you a better picture.
Some decisions are required, as below:
Decide the number of documents you will have in the near future. If you will have 1M documents within a year, then test with at least 3M documents.
Decide the number of indices required.
Decide the number of writes and reads per second.
Decide the size of documents per category.
Decide the query pattern.
Some inputs based on these requirements:
If you have many writes along with many indices, then a single monolithic collection will be slower, as multiple indices need to be updated.
As you have a different set of fields per category, you could try multiple collections.
There is $unionWith to combine data from multiple collections (a sketch follows this list), but do check the performance; it depends heavily on the decisions above. Note this open issue also.
If you decide to go with a monolithic collection, defer the sharding. Implement it once you find that queries are getting slower.
If you have many writes to the same document, the writes will be executed sequentially. This will slow down your reads as well.
Think about reclaiming disk space when a lot of data is cleared from the collections; multiple collections do well here.
The point that pushes me to suggest a monolithic collection is your requirement to query all the categories at the same time. You may need to add more categories later, and combining results from many collections into a single response would not be good for performance.
As you don't really have a join use case like in an RDBMS, you can go with a single monolithic collection from a modeling point of view. I doubt you could have a join key.
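As a rough sketch of the $unionWith option mentioned above (requires MongoDB 4.4+; the collection and field names are made up for illustration):

    // Search for a product across two per-category collections
    db.electronics.aggregate([
      { $match: { name: /widget/i } },
      { $unionWith: { coll: "furniture",
                      pipeline: [ { $match: { name: /widget/i } } ] } }
    ])

Each extra category adds another $unionWith stage, which is part of why this pattern should be benchmarked before committing to it.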
If any of my points are incorrect, please let me know.
To SQL or to NoSQL?
I think that before you implement this in NoSQL, you should ask yourself why you are doing that. I quite like NoSQL but some data is definitely a better fit to that model than others.
The data you are describing is a classic case for a relational SQL DB. That's fine if it's a hobby project and you want to try NoSQL, but if this is for a production environment or client, you are likely making the situation more difficult for them.
Relational or non-relational?
You mention common fields across all products. If you wish to update these fields and have those updates reflected in all products, then you have relational data.
Background
It may be worth reading Sarah Mei's 2013 article about this. Skip to the section "How MongoDB Stores Data" and read from there. Warning: the article is called "Why You Should Never Use MongoDB" and is (perhaps intentionally) somewhat biased against Mongo, so it's important to read it through the correct lens. The message you should take from the article is that MongoDB is not a good fit for every data type.
Two strategies for handling relational data in Mongo:
every time you update one of these common fields, update every product's document with the new common field data. This is generally only OK if you have few updates or few documents, but not many of both.
use references and do joins.
In Mongo, joins typically happen code-side (multiple db calls)
In Arango (and in other graph dbs, as well as some key-value stores), the joins happen db-side (single db call)
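A minimal sketch of the code-side variant in the mongo shell (collection and field names are hypothetical):

    // Code-side "join": two round trips instead of one
    var product = db.products.findOne({ sku: "ABC-123" })
    var category = db.categories.findOne({ _id: product.categoryId })

Note that MongoDB's $lookup aggregation stage can also perform a db-side join between collections in the same database.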
Decisions
These are important factors to consider when deciding which DB to use and how to model your data
I've used MongoDB, ArangoDB and Neo4j.
Mongo definitely has the best tooling and it's easy to find help, but I don't believe it's a good fit in this case
Arango is quite pleasant to work with, but doesn't yet have the adoption that it deserves
I wouldn't recommend Neo4j to anyone looking for a NoSQL solution, as its nodes and relations only support flat properties (no nesting, so not real documents)
It may also be worth considering MariaDB or Postgres

NoSQL (document) - what are good reasons for introducing new collections?

There are a lot of resources on modeling document NoSQL databases that describe the embedded vs. normalized (multi-collection) approaches, but I could find very few about a third, middle way, which actually sounds most like the core of NoSQL: keeping multiple document types in the same collection.
There are implementation details, like having a type field on each document with an index on it, but what I cannot find information about is the turning point in deciding whether some documents should be separated into different collections or kept within the same one.
I've found some sources mentioning collection size, but that doesn't sound like a good reason either, because sharding/scaling a single collection with multiple document types also sounds like a perfectly viable option to me.
So I am trying to find an explanation: what is "The Reason" when deciding between a single collection with multiple document types vs. multiple collections, each storing one document type?
I don't know if it's significant, but if it is, I am thinking in the context of MongoDB and DocumentDB.
A collection in Cosmos DB is a billable entity, where the cost is determined by the throughput and used storage. Collections can span one or more partitions or servers and can scale to handle practically unlimited volumes of storage or throughput.
Microsoft Azure Cosmos DB strongly suggests storing documents of different types in the same "collection".
But having multiple collections is something that can be quite useful for different use cases:
1. Multi-tenancy: you want to be sure all data are separated
2. Different types of data requiring different partitioning strategies
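A minimal sketch of the single-container, multiple-document-types pattern (field names are hypothetical):

    // Different document types side by side, discriminated by a "type" field
    db.entities.insertOne({ type: "customer", name: "Ada", email: "ada@example.com" })
    db.entities.insertOne({ type: "order", customer: "Ada", total: 42.5 })

    // An index on "type" keeps per-type queries cheap
    db.entities.createIndex({ type: 1 })
    db.entities.find({ type: "order" })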

120 mongodb collections vs single collection - which one is more efficient?

I'm new to mongodb and I'm facing a dilemma regarding my DB Schema design:
Should I create one single collection, or put my data into several collections (we could call these categories, I suppose)?
Now I know many such questions have been asked, but I believe my case is different for 2 reasons:
If I go for many collections, I'll have to create about 120 and that's it. This won't grow in the future.
I know I'll never need to query or insert into multiple collections. I will always have to query only one, since a document in collection X is not related to any document stored in the other collections. Documents may hold references to other parts of the DB though (like userId etc).
So my question is: could the 120 collections improve query performance? Is this a useful optimization in my case?
Or should I just go for single collection + sharding?
Each collection is expected to hold millions of documents. If I use only one, it will store billions of docs.
Thanks in advance!
------- Edit:
Thanks for the great answers.
In fact, the 120-collection figure is only a self-made limit; it's not really optimal:
The data in the collections is related to web publishers. There could be millions of these (any web site can join).
I guess the ideal situation would be if I could create a collection for each publisher (to hold their data only). But obviously, this is not possible due to mongo limitations.
So I came up with the idea of a fixed number of collections to at least distribute the data somehow, like: collection "A_XX" would hold XX-platform-related data for publishers whose names start with "A", etc. We'll only support a few of these platforms, so 120 collections should be more than enough.
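For illustration, the routing in such a scheme could look something like this (a hypothetical sketch of the idea, not a recommendation):

    // Pick a collection such as "A_web" from the publisher's first letter
    // and the (hypothetical) platform name
    function collectionFor(publisherName, platform) {
      return publisherName.charAt(0).toUpperCase() + "_" + platform;
    }
    db.getCollection(collectionFor("acme", "web")).find({ publisher: "acme" })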
On another website someone suggested using many databases instead of many collections. But this means overhead and then I would have to use / manage many different connections.
What do you think about this? Is there a better solution?
Sorry for not being specific enough in my original question.
Thanks in advance
Single Sharded Collection
The edited version of the question makes the actual requirement clearer: you have a collection that can potentially grow very large and you want an approach to partition the data. The artificial collection limit is your own planned partitioning scheme.
In that case, I think you would be best off using a single collection and taking advantage of MongoDB's auto-sharding feature to distribute the data and workload to multiple servers as required. Multiple collections is still a valid approach, but unnecessarily complicates your application code & deployment versus leveraging core MongoDB features. Assuming you choose a good shard key, your data will be automatically balanced across your shards.
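For example (database, collection, and shard key names below are hypothetical), the single-collection route could look like:

    sh.enableSharding("mydb")
    // A hashed shard key spreads writes evenly when the natural key is monotonic
    sh.shardCollection("mydb.publisherData", { publisherId: "hashed" })

Note that hashed shard keys require MongoDB 2.4 or later.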
You do not have to shard immediately; you can defer the decision until you see your workload actually requiring more write scale (but knowing the option is there when you need it). You have other options before deciding to shard as well, such as upgrading your servers (disks and memory in particular) to better support your workload. Conversely, you don't want to wait until your system is crushed by the workload before sharding, so you definitely need to monitor the growth. I would suggest using the free MongoDB Monitoring Service (MMS) provided by 10gen.
On another website someone suggested using many databases instead of many collections. But this means overhead and then I would have to use / manage many different connections.
Multiple databases will add significantly more administrative overhead, and would likely be overkill and possibly detrimental for your use case. Storage is allocated at the database level, so 120 databases would be consuming much more space than a single database with 120 collections.
Fixed number of collections (original answer)
If you can plan for a fixed number of collections (120 as per your original question description), I think it makes more sense to take this approach rather than using a monolithic collection.
NOTE: the design considerations below still apply, but since the question was updated to clarify that multiple collections are an attempted partitioning scheme, sharding a single collection would be a much more straightforward approach.
The motivations for using separate collections would be:
Your documents for a single large collection will likely have to include some indication of the collection subtype, which may need to be added to multiple indexes and could significantly increase index sizes. With separate collections the subtype is already implicit in the collection namespace.
Sharding is enabled at the collection level. A single large collection only gives you an "all or nothing" approach, whereas individual collections allow you to control which subset(s) of data need to be sharded and choose more appropriate shard keys.
You can use the compact command to defragment individual collections (example after this list). Note: compact is a blocking operation, so the normal recommendation for an HA production environment would be to deploy a replica set and use rolling maintenance (i.e., compact the secondaries first, then step down and compact the primary).
MongoDB 2.4 (and 2.2) currently have database-level write lock granularity. In practice this has not proven a problem for the vast majority of use cases, however multiple collections would allow you to more easily move high activity collections into separate databases if needed.
Further to the previous point: if you have your data in separate collections, these will be able to take advantage of future improvements in collection-level locking (see SERVER-1240 in the MongoDB Jira issue tracker).
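The compact step from the list above would look like this (the collection name is hypothetical):

    // Blocking operation: in a replica set, compact secondaries first,
    // then step down the primary and compact it
    db.runCommand({ compact: "orders" })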
The main problem here is that you will gain very little performance in current MongoDB versions if you separate your data into multiple collections within the same database. To get any extra performance over a single-collection setup, you would need to move the collections out into separate databases, and then you would have the operational overhead of judging which database you should query, etc.
So yes, you could easily go for 120 collections; however, you won't really gain anything currently, due to https://jira.mongodb.org/browse/SERVER-1240 not being implemented (anytime soon).
Housing billions of documents in a single collection isn't too bad. I presume that even if you were to house this in separate collections, it probably would not be on a single server either, just like sharding a single collection, so any speed reduction due to a multi-server setup will also not matter in this case.
In my personal opinion, using a single collection is easier on everything.

Shard key and how to choose it?

I'm new to NoSQL databases and now I use MongoDB. I have a question about the MongoDB shard key: what does it actually do? Is it related to query performance? And how can we choose a good shard key for a collection?
Thanks in advance
From 10gen's docs: http://www.mongodb.org/display/DOCS/Choosing+a+Shard+Key
Choosing a shard-key is very dependent on your data and your use case.
Here's some more documentation you may find relevant:
http://docs.mongodb.org/manual/faq/sharding/
http://docs.mongodb.org/manual/sharding/
Specifically:
http://docs.mongodb.org/manual/core/sharding/
Essentially, sharding allows you to partition your data across different servers. This means different writes/reads go to different servers, distributing the application's load across multiple machines.
The shard key is the value in the collection that is evaluated to determine which shard/server the document is routed to.
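As a rough illustration in the mongo shell (names are hypothetical):

    // Shard on a field that appears in most queries
    sh.shardCollection("shop.orders", { customerId: 1 })

    // Targeted query: the shard key lets the mongos router hit one shard
    db.orders.find({ customerId: 42 })

    // Scatter-gather: no shard key in the filter, so every shard is queried
    db.orders.find({ total: { $gt: 100 } })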
You can find more explanation of shard key selection and how it works in Kristina Chodorow's book "Scaling MongoDB".
Check this out also.

setting up mongodb for sharding/scalability?

Any recommended readings for setting up mongodb for sharding/scalability?
I'm looking for best practices. I don't know a lot about sharding or scaling DB solutions. Are there practical, real-world examples out there?
I apologize if I'm using the wrong terms.
Is my understanding correct in that MongoDB acts like a "single database" but knows how to distribute data across disparate instances of MongoDB (maybe located in different locations, etc.)?
Is each of those instances called a shard? Is the data replicated across all instances?
MongoDB provides two types of scaling:
Read scaling is provided by replica sets.
Write scaling is provided by sharding.
Those links are a reasonable place to start.
There are also numerous slides and videos from the multiple Mongo conferences that have run recently. Here are some recent ones with use cases.
Is each of those instances called a shard? Is the data replicated across all instances?
Think of a shard as a "slice" of your data. Each shard is generally composed of a replica set. So each shard has multiple computers managing replication of data.
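For example, in the mongo shell, replica-set-backed shards are added through a mongos like this (shard and host names are hypothetical):

    sh.addShard("shardA/host1:27018,host2:27018,host3:27018")
    sh.addShard("shardB/host4:27018,host5:27018,host6:27018")
    sh.status()   // shows the shards and how chunks are distributed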
Is my understanding correct in that MongoDB acts like a "single database" but knows how to distribute data across disparate instances of MongoDB...
Sharding allows MongoDB to automatically distribute writes. But there's a little more to it, so I think it's best you work through some of the presentations.
MongoDB has great documentation. Issues like sharding and replica sets are documented in depth:
http://www.mongodb.org/display/DOCS/Sharding+Introduction
http://www.mongodb.org/display/DOCS/Replica+Sets
Apart from that, there are a lot of presentations and videos dealing with your questions:
http://www.10gen.com/presentations
Please research first and come up with some more specific questions.