skip a mongo capped collection - mongodb

I have a very large capped collection in mongodb. Given that the capped collection structure is predictable (i.e. sort is predefined, memory footprint is predefined, etc), is there a better way to get a cursor on the LATEST item inserted instead of iterating?
In other words, what I'm doing right now is to get the size of my collection (n), and then create a cursor that sets skip=n-1 to put me at the end of the collection. Then I iterate on the cursor and handle all new additions to the collection.
The problem with this approach is that my collection is huge. Let's say 11 million records: the skip alone takes 20 minutes, which means that by the time my cursor starts emitting data, it's already 20 minutes behind.

Try db.cappedCollection.find().limit(1).sort({$natural: -1}).

Have you tried indexing the collection and querying with $gt? This should be faster, although the index will have some impact on the speed of writes to the collection.
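Combining the two suggestions above, one way to start at the tail without a huge skip is to look up the newest document first and then open a tailable cursor filtered on everything after it. A rough sketch in the mongo shell (3.2+, where cursor.tailable() is available), assuming default ObjectId _ids so that $gt on _id roughly tracks insertion order; handleNewDocument is a placeholder for your own handler:

// Grab the newest document by walking the capped collection backwards in natural order.
var latest = db.cappedCollection.find().sort({ $natural: -1 }).limit(1).next();

// Tail everything inserted after it; no skip(), so there is no 20-minute catch-up.
var cursor = db.cappedCollection.find({ _id: { $gt: latest._id } }).tailable({ awaitData: true });
while (!cursor.isExhausted()) {
    if (cursor.hasNext()) {
        handleNewDocument(cursor.next());
    } else {
        sleep(100);   // nothing new yet; wait a little before polling again
    }
}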

Related

MongoDB hanging count query on collection with large objects

I have a collection with 10,000 objects. Each object is around 500 KB in size, since the objects embed images. For statistics, I need to count objects by their creation time. Even though I have indexes, counting over the whole collection takes more than 15 seconds. When I remove the image field (i.e. the object becomes a simple JSON object), the query returns immediately. I don't understand why the size of the objects affects performance this much. Here is a sample query I have been using:
const aggregation = [
  { "$match": { "createTime": { "$gte": "2019-01-01T00:00:00.000Z" } } },
  { "$match": { "createTime": { "$lte": "2020-01-01T23:59:59.999Z" } } },
  { "$count": "value" }
];
myCollection.aggregate(aggregation).then(foo);
Is there a way to make the query faster?
One solution I can think of is to store the images in a separate collection. That would definitely make the query faster, but I am wondering about the reason behind this performance drop.
500KB * 10000 documents is 5.1GB to examine. That might take a few seconds, especially if your cache is smaller than that.
Try doing this with a count query instead.
Assuming there is an index on createTime, and no document in the collection contains an array for that field (i.e. the index is not multikey), this query should be able to be fully covered.
This means that the query executor should use a COUNT_SCAN stage to find the number of matching documents by scanning the index, and never needs to look at a single document, so document size no longer matters. It should also cut down on your disk IO, cache churn, and CPU utilization.
db.myCollection.count({ "createTime": { "$gte": "2019-01-01T00:00:00.000Z", "$lte": "2020-01-01T23:59:59.999Z" } })
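To confirm the count is actually covered, you can create the index and check the winning plan; a COUNT_SCAN stage means no documents are fetched. A quick sketch, with the field and collection names taken from the question above:

// Index on createTime so the range count can be answered from the index alone.
db.myCollection.createIndex({ createTime: 1 })

// Inspect the plan; a COUNT_SCAN stage confirms the count never touches the documents themselves.
db.myCollection.explain("executionStats").count({
    createTime: { $gte: "2019-01-01T00:00:00.000Z", $lte: "2020-01-01T23:59:59.999Z" }
})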

Does a MongoDB cursor "auto-grow" when I add documents

I am using a MongoDB cursor to find a large number of documents, which takes quite some time. What happens if, during this time, documents are added to the database that match the search criteria of the cursor?
Will the cursor return the documents?
Or does the cursor take some kind of snapshot when it begins, and thus omit the later-added results?
Will the cursor return the documents?
Yes. The same thing also happens when you update documents you have already received from the cursor in a way that makes them grow out of their current disk bounds and move to a bigger slot in the data files; in that case, you may see such documents twice (or more).

MongoDB capped collection performance

I am currently working on a time-series project to store sensor data. To achieve maximum insert/write throughput I used a capped collection (as per the MongoDB documentation, capped collections should increase read/write performance). When I tested inserting a few thousand documents with the Python driver into a capped collection without an index versus a normal collection, I couldn't see much improvement in write performance for the capped collection. For example, I inserted 40K records on a single thread using the pymongo driver: the capped collection took around 25.4 seconds and the normal collection took 25.7 seconds.
Could anyone please explain when we can achieve maximum insert/write throughput with a capped collection? Is it the right choice for time-series data?
Data stored in a capped collection is rotated: once the collection's fixed size is exceeded, the oldest documents are overwritten.
Capped collections don't require any indexes, because they preserve insertion order and data is retrieved in natural order, the same order in which the database references documents on disk. Hence they offer high performance for both inserts and retrieval.
For a more detailed description of capped collections, please refer to the documentation:
https://docs.mongodb.com/manual/core/capped-collections/
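For reference, a capped collection has to be created explicitly with a fixed size; the collection name and numbers below are just illustrative:

// Create a capped collection of at most 512 MB (and at most 1 million documents) for the sensor data.
db.createCollection("sensorData", { capped: true, size: 512 * 1024 * 1024, max: 1000000 })

// Inserts look like normal inserts; the oldest documents are overwritten once the cap is reached.
db.sensorData.insert({ sensorId: 7, value: 21.4, ts: new Date() })

// Reads come back in insertion (natural) order without any index.
db.sensorData.find().sort({ $natural: 1 })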

Mongo TTL vs Capped collections for efficiency

I’m inserting data into a collection to store user history (about 100 items / second), and querying the last hour of data using the aggregation framework (once a minute)
In order to keep my collection optimal, I'm considering two possible options:
Make a standard collection with a TTL index on the creation date
Make a capped collection and query the last hour of data.
Which would be the more efficient solution? i.e. less demanding on the mongo boxes - in terms of I/O, memory usage, CPU etc. (I currently have 1 primary and 1 secondary, with a few hidden nodes. In case that makes a difference)
(I’m OK with sizing my capped collection with a bit of a buffer to hold 3-4 hours of data on average, and with not getting the full hour of data if users become very busy at certain times.)
Using a capped collection will be more efficient. Capped collections preserve the order of records by not allowing documents to be deleted or updated in ways that increase their size, so MongoDB can always append to the current end of the collection. This makes insertion simpler and more efficient than with a standard collection.
A TTL index needs to maintain an additional index on the TTL field, which has to be updated with every insert and is therefore an additional slowdown on inserts (this point is of course irrelevant if you would also add an index on the timestamp when using a capped collection). Also, the TTL is enforced by a background job which runs at regular intervals and consumes resources. The job is low-priority, and MongoDB is allowed to delay it when there are higher-priority tasks to do, which means you cannot rely on the TTL being enforced precisely. So when the exact accuracy of the time interval matters, you will have to include the time interval in your query even when you have a TTL set.
The big drawback of capped collections is that it is hard to anticipate how large they really need to be. If your application scales up and you receive a lot more or a lot larger documents than anticipated, you will begin to lose data. You should generally only use capped collections for cases where losing older documents prematurely is not that big of a deal.
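For comparison, here is roughly what the two options look like in the shell; the collection and field names (userHistory, createdAt, userId) are placeholders:

// Option 1: a standard collection with a TTL index; documents expire about an hour after createdAt.
db.userHistory.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })

// Option 2 (instead): a capped collection sized generously enough to hold a few hours of data.
db.createCollection("userHistory", { capped: true, size: 2 * 1024 * 1024 * 1024 })

// Either way, query the exact window explicitly, since neither mechanism trims precisely on the hour.
db.userHistory.aggregate([
    { $match: { createdAt: { $gte: new Date(Date.now() - 3600 * 1000) } } },
    { $group: { _id: "$userId", events: { $sum: 1 } } }
])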

Updating large number of records in a collection

I have a collection called TimeSheet with a few thousand records right now; it will eventually grow to about 300 million records within a year. In this collection I embed a few fields from another collection called Department, which mostly won't get any updates; only rarely will some of its records be updated. By rarely I mean once or twice a year, and only for less than 1% of the records in the collection.
Mostly, once a department is created there won't be any update; even if there is one, it will usually happen early on (when there are not many related records in TimeSheet).
Now, if someone updates a department after a year, in the worst case the TimeSheet collection could have about 300 million records in total and about 5 million records matching the department being updated. The update query condition will be on an indexed field.
Since this update is time-consuming and creates locks, I'm wondering whether there is a better way to do it. One option I'm considering is to run the update in batches by adding an extra condition such as UpdatedDateTime > somedate && UpdatedDateTime < somedate (a rough sketch of this follows below).
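Here is roughly what that batching idea looks like in the mongo shell; UpdatedDateTime is the field mentioned above, while departmentId and department.name are made-up names standing in for the real schema:

// Update one time window at a time instead of all ~5 million matching documents at once.
var start = ISODate("2014-01-01T00:00:00Z");
var end   = ISODate("2014-02-01T00:00:00Z");
db.TimeSheet.update(
    { departmentId: 42, UpdatedDateTime: { $gte: start, $lt: end } },
    { $set: { "department.name": "New department name" } },
    { multi: true }
)
// Then move start/end forward to the next window, pausing between batches to limit lock pressure.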
Other details:
A single document size could be about 3 or 4 KB
We have a replica set containing three replicas.
Is there any better way to do this? What do you think about this kind of design? And what do you think if the numbers I've given are smaller, as below?
1) 100 million total records and 100,000 matching records for the update query
2) 10 million total records and 10,000 matching records for the update query
3) 1 million total records and 1000 matching records for the update query
Note: the collection names Department and TimeSheet and their purpose are fictional; they are not the real collections, but the statistics I have given are real.
Let me give you a couple of hints based on my global knowledge and experience:
Use shorter field names
MongoDB stores every field name inside every document. This repetition increases disk usage, which can become a performance issue on a very large database like yours (see the small example after the list below).
Pros:
Smaller documents, so less disk space
More documents fit in RAM (better caching)
Smaller indexes in some scenarios
Cons:
Less readable names
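For instance (field names purely illustrative), the same record with descriptive names and with shortened ones:

// Same data stored twice here only to show the difference in per-document key overhead.
db.timesheet.insert({ employeeIdentifier: 1042, departmentName: "Sales", workedMinutes: 480 })
db.timesheet.insert({ eid: 1042, dn: "Sales", wm: 480 })   // shorter keys, smaller document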
Optimize on index size
The smaller an index is, the more of it fits in RAM and the fewer index misses occur. Consider the SHA-1 hashes of git commits, for example: a commit is often identified by just its first 5-6 characters, so you could store and index those 5-6 characters instead of the whole hash.
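A minimal sketch of that idea, using a made-up commits collection:

// Store and index only a short prefix of the hash; the full value stays in the document.
db.commits.insert({ hash: "9fceb02d0ae598e95dc970b74767f19372d61af8", hashPrefix: "9fceb0" })
db.commits.createIndex({ hashPrefix: 1 })   // a much smaller index than one on the full hash
db.commits.find({ hashPrefix: "9fceb0" })   // prefix lookups still use the index

Prefix collisions are possible, so the application has to be prepared to disambiguate the occasional duplicate.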
Understand padding factor
Updates that grow a document can cause a costly document move: the old document is deleted, the document is rewritten at a new location with enough free space, and the indexes are updated, all of which is expensive.
We need to make sure documents don't move when updates happen. For each collection there is a padding factor which determines, during a document insert, how much extra space is allocated beyond the actual document size.
You can see the collection padding factor using:
db.collection.stats().paddingFactor
Add a padding manually
In your case you are pretty much certain to start with a small document that will grow. Updating your documents after a while will cause multiple document moves, so it is better to add padding to the documents up front. Unfortunately, there is no easy way to add padding: we can do it by adding some filler bytes under a dummy key at insert time and then removing that key in a later update query (see the sketch below).
Finally, if you are sure that certain keys will be added to the documents in the future, preallocate those keys with default values so that later updates don't grow the document and cause moves.
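A minimal sketch of the manual-padding trick; the collection and field names are made up:

// Insert with a throwaway _padding field to reserve extra space on disk for future growth.
db.timesheet.insert({ eid: 1042, dn: "Sales", _padding: new Array(257).join("x") })   // ~256 bytes of filler

// Later, when the document grows for real, drop the filler in the same update.
db.timesheet.update({ eid: 1042 }, { $set: { notes: "worked overtime" }, $unset: { _padding: "" } })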
You can get details about the query causing document move:
db.system.profile.find({ moved: { $exists : true } })
Large number of collections vs. large number of documents in a few collections
Schema design depends on the application's requirements. If there is a huge collection in which we only query the latest N days of data, we can optionally split it into separate collections so that old data can be safely archived. This helps make sure the data cached in RAM is the data actually being used.
Every collection created incurs a cost beyond the act of creating it: each collection has a minimum size of a few KB plus one index (8 KB), and each collection has an associated namespace; by default we have some 24K namespaces. For example, having a collection per user is a bad choice since it is not scalable; after some point Mongo won't allow us to create new collections or indexes.
Generally having many collections has no significant performance penalty. For example, we can choose to have one collection per month, if we know that we are always querying based on months.
Denormalization of data
It's always recommended to keep all the related data for a query, or a sequence of queries, in the same disk location. You sometimes need to duplicate information across different documents to achieve this. For example, for a blog you'll want to store a post's comments within the post document (see the example after the list below).
Pros:
Index size will be much smaller, since there are fewer index entries
Queries will be very fast, since a single fetch returns all the necessary details
Document size will be comparable to the page size, which means that when we bring this data into RAM we are mostly not dragging in unrelated data on the same page
A document move frees a whole page, not a tiny chunk of a page that may never be reused by later inserts
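A hypothetical denormalized blog post with its comments embedded in the same document:

// One fetch returns the post and all of its comments together.
db.posts.insert({
    title: "Capped collections in practice",
    body: "…",
    comments: [
        { author: "alice", text: "Nice write-up" },
        { author: "bob", text: "What about TTL indexes?" }
    ]
})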
Capped Collections
Capped collections behave like circular buffers. They are a special type of fixed-size collection that can sustain very high-speed writes and sequential reads. Being fixed-size, once the allocated space is filled, new documents are written by overwriting the oldest ones. However, document updates are only allowed if the updated document fits within the original document's size (play with padding for more flexibility).