MongoDB not returning from query calls

I have an issue with my current MongoDB deployment. It seems like when Mongo is under load, it sometimes randomly gets into a state where it doesn't return from query calls. I have to restart the Node server (which re-establishes the connection with Mongo) to fix this.
Mongo doesn't throw any errors when this is happening, so there is no way to detect this bug. I've been profiling the database, but it says all my queries return in under 5 seconds, so I don't think this is a case of unoptimized queries.
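For reference, the profiler check I'm describing is along these lines (the slow-op threshold and the 5-second cutoff are illustrative):
db.setProfilingLevel(1, 100)                        // profile operations slower than 100 ms
db.system.profile.find({ millis: { $gt: 5000 } })   // look for anything taking over 5 seconds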
Do you guys have any other ideas as to what would cause MongoDB to not return from any queries?

Related

MongoDB closes connection on read operation

I run MongoDB 4.0 on WiredTiger under Ubuntu Server 16.04 to store complex documents. There is an issue with one of the collections: its documents contain many images stored as base64 strings. I understand this is bad practice, but I need some time to fix it.
Because of this, some find operations fail, but only those with a non-empty filter or skip. For example, db.collection('collection').find({}) runs fine, while db.collection('collection').find({category: 1}) just closes the connection after a timeout. It doesn't matter how many documents should be returned: if there's a filter, the error pops up every time (even if the query should return 0 docs), while an empty query always executes fine until skip gets too big.
Update: some skip values make queries fail. db.collection('collection').find({}).skip(5000).limit(1) runs fine, db.collection('collection').find({}).skip(9000).limit(1) takes much longer but still executes, while db.collection('collection').find({}).skip(10000).limit(1) fails every time. It looks like there's some kind of buffer where the DB stores query-related data, and at around 10,000 docs it runs out of resources. The collection itself has ~10,500 docs. Also, searching by _id runs fine. Unfortunately, I can't create new indexes, because that operation fails just like the reads do.
What temporary solution can I use until I remove the base64 images from the collection?
This happens because such a problematic data schema causes huge RAM usage. The more documents there are in the collection, the more RAM is needed not just to perform well but even to run find at all.
Increasing MongoDB's default cache size with the storage.wiredTiger.engineConfig.cacheSizeGB config option allowed all the operations to run fine.
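For reference, in mongod.conf that option looks roughly like this (the 4 GB value is only an example; size it to the RAM actually available on the server):
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4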

How to disable MongoDB aggregation timeout

I want to run an aggregation on my large data set (about 361K documents) and insert the results into another collection.
I'm getting this error:
I tried increasing Max Time, but it has a maximum and it's not enough for my data set. I found https://docs.mongodb.com/manual/reference/method/cursor.noCursorTimeout/ but it seems noCursorTimeout only applies to find, not aggregation.
Please tell me how I can disable the cursor timeout, or suggest another solution for this.
I am no MongoDB expert, but I'll share what I know.
MongoDB Aggregation Cursors don't have a mechanism to adjust Batch Size or set Cursor Timeouts.
Therefore there is no direct way to alter this, and the timeout of an aggregation query depends solely on the cursorTimeoutMillis parameter of the mongod or mongos instance. Its default value is 10 minutes.
Your only option is to change this value using the command below.
use admin
db.runCommand({setParameter:1, cursorTimeoutMillis: 1800000})
However, I strongly advise against using this command, because this timeout is a safety mechanism built into MongoDB: it automatically closes cursors that have been idle for more than 10 minutes, which keeps the load on the MongoDB server down. If you change this parameter (say, to 30 minutes), MongoDB will let idle cursors linger in the background for those 30 minutes, which will not only make new queries slower to execute but also increase load and memory usage on the MongoDB side.
You have a couple of workarounds: reduce the number of documents if you are working in MongoDB Compass, or copy and run the commands in the Mongo shell (I have had success with this method so far).
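For example, a rough sketch of running such an aggregation directly in the Mongo shell and writing the results to another collection (the database name, collection names, and $match filter are illustrative):
use myDatabase
db.sourceCollection.aggregate(
  [
    { $match: { status: "active" } },   // whatever filter/transform stages you need
    { $out: "resultCollection" }        // write the output straight into another collection
  ],
  { allowDiskUse: true }                // let large stages spill to disk instead of hitting memory limits
)
With $out the server writes the results itself, so there is no large result cursor sitting idle on the client side.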

mongodb taking 500 ms in fetching 150 records

We are using MongoDB (3.0.6) for our application. We have around 150 entries in one of the collections, but it takes approximately 500 ms to fetch all of these records, which I didn't expect. Mongo is deployed on a company server. What can be done to reduce this time?
We are not getting too many reads and there is no CPU load, etc. What mistake might be causing this, or what config should be changed to improve it?
Here is my schema: http://pastebin.com/x18iDPKf
I am just querying all the entries, which are 160 in number. I don't think the time taken is due to Mongoose or NodeJS, as when I query using RoboMongo it still takes the same time.
Output of db.<collection>.stats().size:
223000
The query I am doing is:
db.getCollection('collectionName').find({})
This definitely shouldn't be a problem with MongoDB itself. It is probably a temporary issue, or due to system constraints such as internal memory and so on.
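If you want to confirm how much of that time is actually spent inside MongoDB (rather than in the driver or on the network), something like this shows the server-side execution time:
db.getCollection('collectionName').find({}).explain("executionStats")
The executionStats.executionTimeMillis field in the output is the time MongoDB spent executing the query.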
If the problem still exists, use indexes on the appropriate fields which you are querying.
https://docs.mongodb.com/manual/indexes/
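For example (the field name below is just a placeholder; an index helps queries that filter or sort on that field):
db.getCollection('collectionName').createIndex({ fieldName: 1 })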

Meteor Mongo Never Returns Data

Using Meteor 1.3.2.4 and Mongo 3.2 (which doesn't seem like it should have major problems with Meteor), queries (really any queries) on collections larger than ~10,000 documents never return, or take many minutes to return.
I have indexes on most of these fields; this is not a no-index problem (I wish).
There is no evidence of any issues in the MongoDB logs, just "connection accepted" messages. I have no Mongo warnings or errors of any kind (I fixed the Mongo kernel warnings I had).
The weirdest part is that when using the mongo CLI, these queries run just fine, in a second or so.
One collection I'm querying has ~500k docs and the other 15M.
What could be the issue? I read in a few places that MongoDB 3.2 should work fine with Meteor; am I wrong?

How to check the status of a MongoDB that crashes all the time

I have a lot of write and read activity in my MongoDB (about 10,000 records every minute). After running for a while, the DB crashes and I need to clear all records and restart it. I think it is because it is using too much memory.
I need to find the root cause.
Here are results from db.serverStatus().mem and db.stats():
db.serverStatus().mem
db.stats()
How can I tell from this whether my Mongo is healthy?
Could you please give me some guidance?