What is the max size of a collection in MongoDB?

I would like to know what the maximum size of a collection in MongoDB is.
The MongoDB limitations documentation mentions that a single MMAPv1 database has a maximum size of 32TB.
Does this mean the maximum size of a collection is 32TB?
If I want to store more than 32TB in one collection, what is the solution?

There are theoretical limits, as I will show below, but even the lower bound is pretty high. It is not easy to calculate the limits correctly, but the order of magnitude should be sufficient.
MMAPv1
The actual limit depends on a few things, like the length of shard names and the like (that adds up if you have a couple of hundred thousand of them), but here is a rough calculation with real-life data.
Each shard needs some space in the config db, which, like any other database, is limited to 32TB on a single machine or in a replica set. On the servers I administrate, the average size of an entry in config.shards is 112 bytes. Furthermore, each chunk needs about 250 bytes of metadata. Let us assume optimal chunk sizes of close to 64MB.
At 64MB per chunk, we can have at most 500,000 chunks per shard. 500,000 * 250 bytes equals 125MB of chunk metadata per shard. Adding the 112-byte shard entry, that is 125.000112MB of config data per shard if we max everything out. Dividing 32TB by that value shows that we can have a maximum of slightly under 256,000 shards in a cluster.
Each shard, in turn, can hold 32TB worth of data. 256,000 * 32TB is 8.192 exabytes, or 8,192,000 terabytes. That would be the limit for our example.
Let's call it 8 exabytes. As of now, this easily translates to "enough for all practical purposes". To give you an impression: all the data held by the Library of Congress (arguably one of the biggest libraries in the world in terms of collection size) is estimated at around 20TB, including audio, video, and digital materials. You could fit that into our theoretical MongoDB cluster some 400,000 times. Note that this is the lower bound of the maximum size, using conservative values.
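If you want to reproduce the arithmetic above, here is a quick back-of-the-envelope sketch (mongo shell / plain JavaScript; the chunk size, per-chunk metadata and per-shard entry sizes are the assumptions stated above, not official limits):

    // Rough arithmetic behind the figures above (decimal units).
    const TB = 1e12, MB = 1e6;
    const maxDbSize  = 32 * TB;  // MMAPv1 limit per database, i.e. per shard
    const chunkSize  = 64 * MB;  // assumed near-optimal chunk size
    const chunkMeta  = 250;      // ~250 bytes of metadata per chunk
    const shardEntry = 112;      // observed avg size of a config.shards document

    const chunksPerShard = maxDbSize / chunkSize;                   // 500,000
    const configPerShard = chunksPerShard * chunkMeta + shardEntry; // ~125.000112 MB
    const maxShards      = Math.floor(maxDbSize / configPerShard);  // just under 256,000
    const maxCluster     = maxShards * maxDbSize;                   // ~8.192 exabytes

    print(`chunks per shard : ${chunksPerShard}`);
    print(`config per shard : ${(configPerShard / MB).toFixed(6)} MB`);
    print(`max shards       : ${maxShards}`);
    print(`max cluster size : ${(maxCluster / 1e18).toFixed(3)} EB`);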
WiredTiger
Now for the good part: the WiredTiger storage engine does not have this limitation: the database size is not limited (since there is no limit on how many datafiles can be used), so we can have an unlimited number of shards. Even when we have those shards running on mmapv1 and only our config servers on WiredTiger, the size of a cluster becomes nearly unlimited; the limit of 16.8M TB of addressable RAM on a 64-bit system might cause problems somewhere and force the indices of the config.shards collection to be swapped to disk, stalling the system. I can only guess, since my calculator refuses to work with numbers in that area (and I am too lazy to do it by hand), but I estimate the limit here to be in the two-digit yottabyte range (and the space needed to host it somewhere around the size of Texas).
Conclusion
Do not worry about the maximum data size in a sharded environment. No matter what, it is more than enough, even with the most conservative approach. Use sharding, and you are done. By the way: even 32TB is a hell of a lot of data: most clusters I know of hold less data and shard because IOPS and RAM utilization exceeded a single node's capacity.

Related

Redshift free storage doesn't increase after adding 2 nodes

My 4-node (dc2.large, 160 GB storage per node) Redshift cluster had around 75% of its storage full, so I added 2 more nodes to make a total of 6 nodes. I was expecting the disk usage to drop to around 50%, but after making the change, the disk usage still remains at 75% (even after a few days and after running VACUUM).
75% of 4*160 GB = 480 GB of data
6*160 GB = 960 GB of available storage in the new configuration, which means usage should have dropped to 480/960, i.e. somewhere close to 50%.
The image shows the disk space percentage before and after adding two nodes.
I also checked whether there are any large tables using DISTSTYLE ALL, which causes data to be replicated across the nodes, but the tables that do use it are very small compared to the total storage capacity, so I don't think they'd have any significant impact on storage.
What can I do here to reduce the storage usage? I don't want to add more nodes and then end up in the same situation again.
It sounds like your tables are affected by the minimum table size. It may be counter-intuitive but you can often reduce the size of small tables by converting them to DISTSTYLE ALL.
https://aws.amazon.com/premiumsupport/knowledge-center/redshift-cluster-storage-space/
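To see why many small tables stop shrinking as you add slices, here is a back-of-the-envelope sketch (plain JavaScript, purely illustrative; it assumes each column of an EVEN/KEY-distributed table occupies at least one 1 MB block per populated slice and ignores hidden system columns and sorted-region segments, so real numbers will be higher; the table and column counts are made up):

    // Why the per-table minimum footprint grows with the number of slices.
    const SLICES_PER_NODE = 2;   // dc2.large has 2 slices per node
    const BLOCK_MB        = 1;   // Redshift block size

    // Rough floor for one EVEN/KEY-distributed table.
    function minTableMB(columns, nodes) {
      return columns * nodes * SLICES_PER_NODE * BLOCK_MB;
    }

    // Example: 1,000 small tables with 20 columns each.
    print(`4 nodes: ~${1000 * minTableMB(20, 4) / 1000} GB minimum`);  // ~160 GB
    print(`6 nodes: ~${1000 * minTableMB(20, 6) / 1000} GB minimum`);  // ~240 GB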
Can you clarify what distribution style you are using for some of the bigger tables?
If you are not specifying a distribution style then Redshift will automatically pick one (see here), and it's possible that it will choose ALL distribution at first and only switch to EVEN or KEY distribution once you reach a certain disk usage percentage.
Also, have you run the ANALYZE command to make sure the table stats are up to date?

Does reducing the size of mongodb documents reduce the working set?

I'm trying to determine whether reducing the size of our documents will reduce our working set. Our database is reaching the limit of the RAM on our instances. We are storing redundant data in an array in each of our documents (it can hold thousands of elements), which I am now limiting to 40. That should reduce the size of the collection at fault to 10% of what it was (this is where our bulk is), but in my understanding of the documentation, the storage size will not change. After reading up on Mongo, I'm not sure whether reducing documents to 10% of their original size will impact the working set. Can someone please help/explain?
Edit: Some background information
One of the things we've done to keep performance up as our database grows is to increase RAM to 'fit' the database in its entirety. The database is getting close to the 64GB of RAM we have on our replica instances... That is what prompted this question...
Edit: The essential question
Essentially, the question comes down to this: What should I use when I'm calculating if our working set fits in memory? These numbers come from running db.stats():
dataSize + indexSize < RAM
OR
storageSize + indexSize < RAM
OR
fileSize + indexSize < RAM
Thanks!
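For reference, all three candidate sums can be pulled straight from the shell (a quick sketch; dataSize, storageSize, indexSize and fileSize are the fields returned by db.stats(), with fileSize only reported under MMAPv1):

    // Compare the candidate working-set estimates against available RAM.
    const s   = db.stats();                  // values are in bytes by default
    const ram = 64 * 1024 * 1024 * 1024;     // 64GB, as in the question

    print(`dataSize    + indexSize = ${s.dataSize    + s.indexSize}`);
    print(`storageSize + indexSize = ${s.storageSize + s.indexSize}`);
    print(`fileSize    + indexSize = ${s.fileSize    + s.indexSize}`);  // MMAPv1 only
    print(`dataSize + indexSize < RAM? ${s.dataSize + s.indexSize < ram}`);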

Total MongoDB storage size

I have a sharded and replicated MongoDB with dozens of millions of records. I know that Mongo writes data with some padding factor to allow fast updates, and I also know that to replicate the database Mongo has to store an operation log, which requires some (actually, a lot of) space. Even with that knowledge I have no idea how to estimate the actual size required by Mongo given the size of a typical database record. Right now I see a discrepancy of a factor of 2-3 between weekly repairs.
So the question is: How to estimate a total storage size required by MongoDB given an average record size in bytes?
The short answer is: you can't, not based solely on avg. document size (at least not in any accurate way).
To explain more verbosely:
The space needed on disk is not simply a function of the average document size. There is also the space needed for any indexes you create. Then there is the space needed when updates do trigger document moves (despite padding, this does happen): that space is placed on a list to be re-used, but depending on the data you subsequently insert, it may or may not be possible to re-use it.
You can also add in the fact that pre-allocation means a handful of documents will occasionally increase your on-disk space utilization by ~2GB as a new data file is allocated. Of course, with sufficient data this is essentially a rounding error, but it is worth bearing in mind.
The only way to estimate this type of data to size ratio, assuming a consistent usage pattern, is to trend it over time for your particular use case and track the disk space usage versus the data inserted (number of documents might be better than data volume depending on variability of doc size).
Similarly, you can track the insertion rate, doc size, and the space gained back from a resync/repair. FYI: you can resync a secondary from scratch to get a "fresh" copy of the data files rather than running a repair, which can be less disruptive and use less space, depending on your setup.
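A minimal way to trend this over time is to snapshot db.stats() periodically and watch the disk-to-data ratio (a sketch; the database name "mydb" and the metrics.storage_trend collection are placeholders):

    // Run periodically (e.g. from a cron job) to build a storage trend.
    const s = db.getSiblingDB("mydb").stats();   // "mydb" is a placeholder
    db.getSiblingDB("metrics").storage_trend.insertOne({
      ts:          new Date(),
      objects:     s.objects,
      dataSize:    s.dataSize,        // logical size of the documents
      storageSize: s.storageSize,     // space allocated in the data files
      indexSize:   s.indexSize,
      ratio:       s.storageSize / s.dataSize
    });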

What are the things to consider while assessing the hardware (RAM and hard disk) for mongodb and how to assess them?

We are going to use MongoDB for an automated alert notification system. This will also report various server statistics and business statistics. We would like to have a separate server for this and need to assess the hardware (RAM, hard disk, and other configuration, if any).
Could someone shed some light on these points, please?
What are the things to consider...?
How to proceed once we collect that information (is there any standard)?
Currently I have only the below information.
Writes per second: 400
Average record size in the write: 5KB
Data retention policy: 30 days
MongoDB buffers writes in memory and flushes them to disk once in a while (every 60 seconds by default; configurable with --syncdelay), so writing 400 5KB docs per second is not going to be a problem if Mongo can quickly update all indices (it would be helpful if you could give some info on the type and number of indices you're going to have).
You're going to have 1,036,800,000 documents, or roughly 5TB of raw data, per 30-day retention window (400 writes/s * 86,400 s/day * 30 days). Mongo will need more than 5TB to store that (for each doc it repeats all key names, and there are the indices on top). To estimate index size:
2 * [ n * ( 18 bytes overhead + avg size of indexed field + 5 or so bytes of conversion fudge factor ) ]
Where n is the number of documents you have.
And then you can estimate the amount of RAM (you need to fit your indices there if you care about query performance).
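Plugging the figures from the question into that formula gives a feel for the scale (a rough sketch; the single index and its 16-byte average field size are assumptions for illustration):

    // Rough sizing sketch using the figures from the question.
    const writesPerSec = 400;
    const docBytes     = 5 * 1000;               // 5KB average document
    const retentionSec = 30 * 24 * 60 * 60;      // 30-day retention

    const docs    = writesPerSec * retentionSec; // 1,036,800,000 documents
    const rawData = docs * docBytes;             // ~5.2TB of raw document data

    // Index estimate per the formula above, for one index on a field
    // with an assumed average size of 16 bytes:
    const idxField   = 16;
    const indexBytes = 2 * (docs * (18 + idxField + 5));

    print(`documents : ${docs}`);
    print(`raw data  : ${(rawData / 1e12).toFixed(2)} TB`);   // ~5.18 TB
    print(`one index : ${(indexBytes / 1e9).toFixed(1)} GB`); // ~80.9 GB to keep in RAM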

How much data per node in Cassandra cluster?

Where are the boundaries of SSTable compaction (major and minor), and when does it become ineffective?
If I have a major compaction of a couple of 500GB SSTables and my final SSTable will be over 1TB, will it be efficient for one node to "rewrite" this big dataset?
This can take about a day on an HDD and needs double the disk space, so are there any best practices for this?
1 TB is a reasonable limit on how much data a single node can handle, but in reality, a node is not at all limited by the size of the data, only the rate of operations.
A node might have only 80 GB of data on it, but if you absolutely pound it with random reads and it doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if you rarely read from it, or you have a small portion of your data that is hot (so that it can be effectively cached), it will do just fine.
Compaction certainly is an issue to be aware of when you have a large amount of data on one node, but there are a few things to keep in mind:
First, the "biggest" compactions, ones where the result is a single huge SSTable, happen rarely, and even more rarely as the amount of data on your node increases. (The number of minor compactions that must occur before a top-level compaction occurs grows exponentially with the number of top-level compactions you've already performed; a rough illustration follows after this list.)
Second, your node will still be able to handle requests, reads will just be slower.
Third, if your replication factor is above 1 and you aren't reading at consistency level ALL, other replicas will be able to respond quickly to read requests, so you shouldn't see a large difference in latency from a client perspective.
Last, there are plans to improve the compaction strategy that may help with some larger data sets.
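To illustrate how quickly those top-level compactions thin out, here is a small sketch (it assumes size-tiered compaction with the default threshold of 4 similar-sized SSTables per merge, and a made-up 64MB flush size):

    // With size-tiered compaction and a merge threshold of 4, an SSTable at
    // tier k is the product of roughly 4^k memtable flushes, so the biggest
    // compactions become exponentially rarer as the data grows.
    const threshold = 4;    // default min_threshold for size-tiered compaction
    const flushMB   = 64;   // assumed size of one freshly flushed SSTable

    for (let tier = 0; tier <= 6; tier++) {
      const flushes = Math.pow(threshold, tier);
      print(`tier ${tier}: ~${flushes} flushes -> ~${(flushes * flushMB / 1024).toFixed(2)} GB SSTable`);
    }
    // tier 6: ~4096 flushes -> ~256 GB; each further tier needs 4x as much data.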