I have a huge amount of data in my MongoDB. It's filled with tweets (50 GB) and my RAM is 8 GB. When querying, it retrieves all tweets and MongoDB starts filling the RAM; when it reaches 8 GB it starts moving files to disk, and this is the part where it gets really slow. So I changed the query from using skip() to using indexes. Now I have indexes and I query only 8 GB into my program, save the ID of the last tweet used in a file, and the program stops. Then I restart the program and it picks up the ID of that tweet from the file. But the mongod server is still occupying the RAM with the first 8 GB, which will no longer be used because I have an index pointing past it. How can I clear the memory of the MongoDB server without restarting it?
(running on Windows)
I am a bit confused by your logic here.
So I changed the query from using skip() to using indexes. Now I have indexes and I query only 8 GB into my program, save the ID of the last tweet used in a file, and the program stops.
Using ranged queries will not reduce the amount of data you have to page in (in fact it might make it slightly worse because of the index); it merely makes the query faster server-side by using an index instead of a huge skip (like a 42K+ row skip). If you are doing the same work as that skip() but through an index (without a covered index), you are still paging in exactly the same data.
It is slow due to memory mapping and your working set. You have more data than RAM, and not only that, you are actively using more of that data than you have RAM for, so you are probably page faulting all the time.
Restarting the program will not solve this, nor will clearing its memory OS-side (with a restart or a specific command), because of the nature of your queries. You probably need to either:
Rethink your queries so that your working set is more in line with your memory (see the sketch after this list)
Or shard your data across multiple servers so that you don't have to keep scaling up a single primary server
Or get a bigger primary server (moar RAM!!!!!)
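To make the first suggestion concrete, here is a minimal sketch of what a ranged, index-backed query with a narrow projection could look like using the Java driver. The connection string, database/collection names and the tweetId field are hypothetical; the point is simply that a covered range query plus a checkpointed last ID keeps the working set small, unlike a wide skip()-style scan.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import com.mongodb.client.model.Projections;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class TweetPager {
    public static void main(String[] args) {
        // Hypothetical connection string, database and collection names.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> tweets = client.getDatabase("twitterdb")
                                                     .getCollection("tweets");

            // Index on the field used for the range. If the query only reads fields
            // that are in the index, it can be served from the index alone
            // (a "covered" query), so fewer full documents are paged in.
            tweets.createIndex(Indexes.ascending("tweetId"));

            Object lastSeenId = 0L; // would normally be read back from the checkpoint file

            // Keyset pagination: range on the indexed field instead of skip().
            for (Document doc : tweets.find(Filters.gt("tweetId", lastSeenId))
                                      .sort(Sorts.ascending("tweetId"))
                                      .projection(Projections.fields(
                                              Projections.include("tweetId"),
                                              Projections.excludeId()))
                                      .limit(10_000)) {
                lastSeenId = doc.get("tweetId");
                // ... process the tweet, then persist lastSeenId to the checkpoint file
            }
        }
    }
}
```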
Edit
The OS's LRU should already be swapping out old data, since MongoDB is using its full allocation, which means that if that 8 GB isn't being swapped out it is because your working set is taking up the full 8 GB (most likely with some swap on top).
Related
The problem is that we have a huge dataset consisting of 50 million records, and almost all fields are indexed, which causes huge RAM consumption. After a collection is deleted, the resources are not released. I know this can be solved by restarting the server, but that solution is not applicable in our situation. So, my question: is there a way to release RAM without restarting the mongo server? The Mongo version is 4.4. Thanks in advance.
Not directly... MongoDB never frees memory on its own; it just replaces what is in it or allocates more.
But if you start reading from disk the data you are actually going to need, that data will replace the old contents of that memory.
The base problem is that MongoDB will (eventually) use all the free memory available and try to keep all active data in memory. So reading data from disk makes that data "active" and changes the contents of the in-memory cache.
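If the practical goal is just to keep MongoDB's cache from squeezing everything else out, the WiredTiger cache can be capped at startup (--wiredTigerCacheSizeGB, or storage.wiredTiger.engineConfig.cacheSizeGB in the config file), and its actual usage can be checked from code via serverStatus. A rough sketch with the Java driver (no client code was shown, so the language choice is an assumption):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class CacheUsage {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // serverStatus reports, among other things, how full the WiredTiger cache is.
            Document status = client.getDatabase("admin")
                                    .runCommand(new Document("serverStatus", 1));

            Document cache = status.get("wiredTiger", Document.class)
                                   .get("cache", Document.class);

            System.out.println("bytes currently in the cache : "
                    + cache.get("bytes currently in the cache"));
            System.out.println("maximum bytes configured     : "
                    + cache.get("maximum bytes configured"));
        }
    }
}
```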
I have a question regarding MongoDB's collection size.
I did a small stress test in which my MongoDB server was constantly inserting, deleting and updating data for about 48 hours. The documents were only of small size, simply a numerical value and a timestamp as well as an ID.
Now, after those 48 hours, the collection used for inserting, deleting and updating data was 98,000 bytes and the preallocated storage size was 696,320 bytes. It became that much larger than the actual collection size because of a single spike during one insertion phase. Due to subsequent deletions of objects the actual collection size decreased again, but the preallocated storage size didn't (AFAIK a common database management problem, since it's the same with e.g. MySQL).
After the stress test was completed I created a dump of my MongoDB database and dropped the database completely, so I could import the dump again afterwards and see how the stats would look then. And as I suspected, the collection size was still the same (98,000 bytes) but the preallocated storage size went down to 40,960 bytes (from 696,320 bytes before).
Since we want to try out MongoDB for an application that produces hundreds of MB of data and therefore I/O traffic every day, we need to keep the database and its occupied space to a minimum. And preferably without having to create a dump, drop the whole database and import the dump again every now and then.
Now my question is: is there a way to invoke the MongoDB garbage collector programmatically? The software behind it is written in Java, and my idea was to call the garbage collector after a certain amount of time/number of operations, or after the preallocated storage size has reached a certain threshold.
Or maybe there's an even better (more elegant) way to minimize the occupied space?
Any help would be appreciated and I'll try to provide any further information if needed. Thanks in advance.
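There is no literal garbage-collector command to call, but one rough sketch of the idea described above (a periodic, code-triggered cleanup) would be scheduling MongoDB's compact command from the Java side. The database and collection names here are hypothetical, and whether compact actually returns disk space to the OS depends on the storage engine and server version.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicCompact {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoDatabase db = client.getDatabase("stresstest");   // hypothetical names
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Every 6 hours, ask the server to defragment the collection. Note that compact
        // can block operations on the collection while it runs (depending on server
        // version and storage engine) and does not necessarily give space back to the
        // OS, so it is only a rough stand-in for a "garbage collector".
        scheduler.scheduleAtFixedRate(
                () -> db.runCommand(new Document("compact", "measurements")),
                6, 6, TimeUnit.HOURS);
    }
}
```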
I have a static database (that will never even receive a write) of around 5 GB, while my server RAM is 30 GB. I'm focusing on returning complicated aggregations to the user as fast as possible, so I don't see a reason why I shouldn't have (a) the indexes and (b) the entire dataset stored entirely in RAM, and (c) automatically stored there whenever the Mongo server boots up. Currently my main bottleneck is running group commands to find unique elements out of millions of rows.
My question is, how can I do either (a), (b), or (c) while running on the new Mongo/WiredTiger? I know the "touch" command doesn't work with WiredTiger, so most information on the Internet seems out of date. Are (a), (b), or (c) already done automatically? Should I not be doing each of these steps with this use case?
Normally you shouldn't have to do anything. Disk pages are loaded into RAM on request and stay there. If there is no more free memory, the older (unused) pages get evicted so that the memory can be used by other programs that need it.
If you must have your whole DB in RAM, you could use a ramdisk and tell Mongo to use it as its storage device.
I would recommend that you revise your indices and/or data structures. Having the correct ones can make a huge difference in performance. We are talking about seconds vs hours.
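To illustrate that last point, extracting unique values from millions of rows is a very different workload depending on whether the grouped field is indexed. A minimal sketch with the Java driver, using hypothetical database, collection and field names (the asker's actual schema isn't shown):

```java
import com.mongodb.client.DistinctIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class UniqueElements {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> rows = client.getDatabase("analytics")
                                                   .getCollection("events");   // hypothetical

            // An ascending index on the grouped field often lets the server answer the
            // "unique values" question from the index instead of scanning millions of
            // documents.
            rows.createIndex(Indexes.ascending("category"));

            DistinctIterable<String> unique = rows.distinct("category", String.class);
            for (String value : unique) {
                System.out.println(value);
            }
        }
    }
}
```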
When I try to copy a database (about 100 GB) from one MongoDB server to another, the mongod process takes 99% of the available RAM (Windows 64-bit, 16 GB). As a result the system becomes very slow and sometimes unstable.
Is there any way to avoid it?
MongoDB 2.0.6
Albert.
MongoDB is very much an "in RAM" application. Mongo has your whole database memory-mapped, but normally only the most recently used data will be in RAM (called your working set), and Mongo will page in any data not in RAM as needed. Normally Mongo's behaviour is to keep only as much as it needs in RAM; however, when you do something like a DB copy, all of the data is needed - hence mongod consuming all your RAM.
There is no ideal solution to this, but if you are desperate you could use WSRM (http://technet.microsoft.com/en-us/library/cc732553.aspx) to try to limit the amount of RAM consumed by the process. This will make the copy take longer and may cause other issues.
For more than a month now I have been at war with MongoDB. So far I am losing =] ...
Battle 1. Battle 2.
And now a new problem. Again, not enough memory.
Initially, this was solved by simply increasing the memory tier of the VPS. Then by setting journal = false. But now I have reached the top plan and increasing the memory further is not possible.
My database is short about 4 GB of memory.
When I was choosing a database for the project, nowhere was it written that MongoDB needs this much memory. With about 10 million records MongoDB is short 4 GB of memory, while my MySQL database with the same 10 million records easily copes with 1.4 GB.
The problem, as I understand it, is the large number of indexed fields. But since I cannot even log into the database, I cannot remove them. I needed them in the early stages of development; now they are not important to me.
Tell me please, can I remove them somehow?
I have a dump of the database: the complete /data/db database folder.
On my PC with 4 GB of memory the database does not start; on a VPS with 4 GB it is the same.
As an alternative, I am thinking of taking a trial period on some VPS/VDS to run Mongo and delete the indexes.
Do you know a web host with a trial period and 6 GB of memory?
Or, if there is another alternative, could you tell me what it is?
The issue has very little to do with the size of your data set. MongoDB uses memory-mapped files for its storage engine. As such, it will swap pages of hot data into memory when it can, and it does so fairly aggressively (or more accurately, the OS memory manager does).
Basically it uses as much memory as is available to it and there's very little you can do to avoid it. All data pages (be it actual data or indexes) that are accessed during operation will be swapped into memory if there is space available.
There are plenty of references to this on the internet and on mongodb.org by the way. Saying it isn't mentioned anywhere isn't really true.
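For what it's worth, once mongod can be started on a machine with enough memory (the trial-VPS idea above), removing the leftover development-time indexes is only a couple of driver calls. A sketch with the Java driver, using hypothetical database, collection and index names:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class DropOldIndexes {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll = client.getDatabase("mydb")
                                                   .getCollection("mycollection"); // hypothetical

            // See what is there first; every index except the mandatory _id_ one
            // costs memory whenever its pages are touched.
            for (Document index : coll.listIndexes()) {
                System.out.println(index.toJson());
            }

            // Drop a specific leftover index by name...
            coll.dropIndex("someOldIndexName");

            // ...or drop everything except the _id_ index in one go.
            coll.dropIndexes();
        }
    }
}
```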