How many documents can a single collection in MongoDB hold before it becomes necessary to use sharding? I haven't found any information about this.
There is no such limitation, as you can see here:
If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
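For illustration, here is how a capped collection with a document cap might be created in the shell; the collection name logs and the size values are just placeholders:

    // Capped collection bounded to 5 GB of data or 5000 documents, whichever is hit first.
    // max must be below 2^32; if max is omitted, there is no document-count limit.
    db.createCollection("logs", { capped: true, size: 5 * 1024 * 1024 * 1024, max: 5000 })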
@M-A.Fernandes is right.
I can only add this information:
Maximum Number of Documents Per Chunk to Migrate
MongoDB cannot move a chunk if the number of documents in the chunk exceeds either 250000 documents or 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
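As a rough sketch of how to estimate that threshold for your own collection (the name mycollection is hypothetical, and the 64 MB default chunk size is an assumption):

    // avgObjSize is the average document size in bytes, as reported by stats().
    var stats = db.mycollection.stats()
    var chunkSizeBytes = 64 * 1024 * 1024  // assumed default chunk size
    // A chunk whose document count exceeds 1.3 * (chunk size / average document size)
    // cannot be migrated.
    var maxDocsPerChunk = 1.3 * (chunkSizeBytes / stats.avgObjSize)
    print(maxDocsPerChunk)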
Related
Is there any limit on the number of fields in a single MongoDB document? I didn't find anything about that in the documentation.
For example, what if I have a document that contains 1000 fields and still fits in 16 MB?
Yes, it will work.
Fields are part of documents, and MongoDB only enforces a document size limit of 16 MB. As long as the document fits in 16 MB, it's fine.
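If you want to check this yourself, the legacy mongo shell exposes Object.bsonsize(); a minimal sketch building a 1000-field document:

    // Build a document with 1000 small fields and measure its BSON size in bytes.
    var doc = {};
    for (var i = 0; i < 1000; i++) { doc["field" + i] = i; }
    Object.bsonsize(doc)  // comfortably below the 16 MB (16777216-byte) limit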
That said, I suggest you take a hard look at whether you really need that many fields in a single document.
I am looking to use MongoDB to store a huge number of records: between 12 and 15 billion. Is it possible to store this many documents in MongoDB?
I saw on the net that there are limits on document size, index size, and the number of elements in a collection.
But is there a limit on the number of records?
There is no limit on the number of documents in one collection. However, you will probably run into issues with disk, RAM, and lookup/update performance. If your data has some kind of logical grouping, I would suggest splitting it between multiple collections, or even multiple instances (on different servers, of course).
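If you do decide to distribute the data, sharding the collection is the usual route. A minimal sketch, assuming a database mydb, a collection records, and a shard key field recordId that suits your access pattern:

    // Run against a mongos.
    sh.enableSharding("mydb")
    // A hashed shard key spreads inserts evenly across shards.
    sh.shardCollection("mydb.records", { recordId: "hashed" })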
This is about a recommendation on MongoDB. I have a collection whose document count keeps growing; it is at about 5 billion now. When I query this collection, I sometimes get an error about the 16 MB size limit.
The first thing I want to ask is how best to structure collections that grow this rapidly. What is the best approach, and what should I do about the schema and the performance?
Just to clarify, the 16 MB limitation is on documents, not collections. Specifically, it's the maximum BSON document size, as specified in this page in the documentation.
The maximum BSON document size is 16 megabytes.
If you're running into the 16 MB limit in aggregation, it is because you are using MongoDB version 2.4 or older. In those versions, the aggregate() method returned a single document, which is subject to the same limitation as all other documents. Starting in 2.6, the aggregate() method returns a cursor, which is not subject to the 16 MB limit. For more information, you should consult this page in the documentation. Note that each stage in the aggregation pipeline is still limited to 100 MB of RAM.
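For example, on 2.6+ the shell's aggregate() already returns a cursor, and you can let stages that would exceed the 100 MB RAM cap spill to disk with allowDiskUse (the collection and field names here are hypothetical):

    // allowDiskUse lets pipeline stages write temporary files to disk
    // instead of failing once a stage needs more than 100 MB of RAM.
    db.events.aggregate(
        [ { $group: { _id: "$userId", total: { $sum: 1 } } } ],
        { allowDiskUse: true }
    )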
In MongoDB there is a maximum size of 16 MB per document. Does this size limit include sub-documents?
In other words: do the 16 MB per document include its sub-documents, or is it 16 MB per document, with each sub-document counting as a document of its own?
Yes, the 16 MB limit applies to the whole structure, including sub-documents.
Keep in mind that what you call sub-documents, MongoDB sees as regular values. From its perspective, they are no different from, say, strings. Just values.
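A quick way to convince yourself, using the legacy mongo shell's Object.bsonsize():

    var sub = { text: new Array(1001).join("x") };  // roughly 1 KB of string data
    var parent = { child: sub };
    // The parent's BSON size includes every byte of the embedded document.
    Object.bsonsize(parent) > Object.bsonsize(sub)  // true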
I understand that in MongoDB a BSON document can be no bigger than 16 MB. Does this size limit account for embedded documents as well? I plan on having well over 16 MB of documents inside the embedding document.
A single MongoDB document cannot be larger than 16 MB, and all of a document's embedded documents count toward this limit, so what you're planning won't work.
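A common workaround, sketched here with hypothetical collection names, is to store the would-be embedded documents in their own collection and link them back to the parent, so each one gets its own 16 MB budget:

    // The parent document stays small.
    db.posts.insertOne({ _id: 1, title: "A post" })
    // Each "embedded" item becomes a top-level document referencing the parent.
    db.comments.insertMany([
        { postId: 1, text: "first comment" },
        { postId: 1, text: "second comment" }
    ])
    db.comments.find({ postId: 1 })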