I am having trouble trying to pass {allowDiskUse: true} in the MongoDB Compass GUI tool. I have created a view based on an aggregation of another collection. The view returns an error of
Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.
Hence I saw that it is required to append {allowDiskUse: true}, but I am unable to find a suitable place to set it. There is the possibility of using the $out stage to write to another collection, but I would like to try the view first :)
ADD ON:
I have tried to run the query db.noDups.aggregate([],{allowDiskUse : true }); on the command line and it works. But I would like to execute it in MongoDB Compass for its visualization and export functions.
I also tried {},{},{allowDiskUse: true} in the filter condition but still no luck :(
Btw I am on MongoDB 4.2.6 Community and MongoDB Compass 1.25.0.
I tried appending it to the filter and it didn't work. I've looked on many different forums and can't find a solution for allowDiskUse from Compass. It seems kind of crazy that you need to add such a kludgy option to do even modest groupings on small amounts of data.
I have also been looking at how to increase the amount of memory that Mongo can use, to get around having to do this. I have Mongo installed on a server with 512GB of memory, so it seems rather silly to have developers jumping through hoops like this.
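For reference, this is the kind of call that works from the shell but that I cannot find a home for in Compass, including the $out variant mentioned above (the field and collection names here are just placeholders):

// the option goes in the second argument of aggregate() in the shell
db.noDups.aggregate([ { $sort: { someField: 1 } } ], { allowDiskUse: true })

// $out alternative: materialize the result into a collection Compass can browse
db.sourceCollection.aggregate(
    [ { $sort: { someField: 1 } }, { $out: "noDups_materialized" } ],
    { allowDiskUse: true }
)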
I have written a number of Presto queries that pull from MongoDB collections, but others in our project query Mongo directly. These folks would like to use my queries to save themselves the time of having to rewrite them.
Is there a way to obtain/extract the MongoDB query language generated by Presto?
I didn't see anything in the MongoDB connector documentation that would indicate how or whether this is possible.
I am aware of the SQL-to-Mongo converters out there, but Presto SQL extends normal SQL to enable things like unwrapping arrays etc. that we encounter with non-relational stores, and in my experience these converters have trouble with such things.
You can set the MongoDB driver log level to DEBUG in log.properties:
org.mongodb=DEBUG
However, it will also print many unrelated log entries (e.g. health checks). I filed an issue: https://github.com/prestosql/presto/issues/5600
I guess the easiest way is to look into MongoDB while the query is running and get it from there, for example via profiling:
db.setProfilingLevel(2)
db.system.profile.find().pretty()
You may also use a GUI like MongoVue or Robo 3T - I used MongoVue in the past to inspect running queries.
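For example, after db.setProfilingLevel(2), run the Presto query and then pull the most recent profiler entries; the generated query or pipeline shows up in the command field of each entry (the database name and the limit of 5 here are arbitrary placeholders):

// newest operations first, skipping reads of the profile collection itself
db.system.profile.find({ ns: { $ne: "mydb.system.profile" } })
                 .sort({ ts: -1 })
                 .limit(5)
                 .pretty()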
I am re-asking this question, as I thought it should be on a separate thread from this one: in-mongodb-know-index-of-array-element-matched-with-in-operator.
I am using MongoDB, and until now I was writing all of my queries using simple operations such as find, update, etc. (no aggregations). Then I read on many SO posts (see this one for example: mongodb-aggregation-match-vs-find-speed) and wondered why I was increasing computation time on the server, because the more I compute there, the higher the server load becomes, so I tried to use aggregations and thought I was going in the right direction. But later, on my previous question, andreas-limoli told me not to use aggregations because they are slow, and to use simple queries and compute on the server instead. Now I am literally in a dilemma about which one to use. I have been working with MongoDB for a year now, but I don't have any knowledge about its performance when the data size increases, so I really don't know which one to pick.
Also, one more thing I couldn't find anywhere: if aggregation is slower, is it because of $lookup or not? $lookup is the foremost reason I thought about using aggregation, because otherwise I have to execute many queries serially and then combine the results on the server, which seems very poor compared to aggregation.
I also read about the 100MB restriction on MongoDB aggregation when passing data from one pipeline stage to the next. How do people handle that case efficiently? And if they turn on disk usage (allowDiskUse), given that disk usage slows everything down, how do they handle that?
I also fetched a sample collection of 30,000 documents and ran an aggregation with $match against a find query, and I found that the aggregation was a little bit faster: the aggregation took 180 ms to execute, whereas find took 220 ms.
Please help me out, guys; it would be really helpful for me.
Aggregation pipelines are costly queries. They can impact your performance as the data grows because of CPU and memory usage. If you can achieve the same result with a find query, go for it, because aggregation gets costlier as the amount of data in the DB increases.
The aggregation framework in MongoDB is similar to join operations in SQL. Aggregation pipelines are generally resource-intensive operations, so if your work can be satisfied with simple queries, you should use those in the first place.
However, if it is absolutely necessary, for example when you need to fetch data from multiple collections, then you can use aggregation pipelines.
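As a minimal illustration of that point (collection and field names are hypothetical), a plain filter can be expressed either way, and the pipeline only really earns its cost when you need pipeline-only stages such as $lookup across collections:

// simple query
db.users.find({ status: "active" })

// the same filter as a one-stage pipeline
db.users.aggregate([ { $match: { status: "active" } } ])

// pipeline-only work, e.g. pulling in documents from another collection
db.users.aggregate([
    { $match: { status: "active" } },
    { $lookup: { from: "orders", localField: "_id", foreignField: "userId", as: "orders" } }
])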
How could we make MongoDB report errors for queries that don't use indices?
We end up creating indices for every query anyway, so it would be great if MongoDB would report missing indices for us. It would also be convenient to be able to configure the restriction on a per-connection basis; that way indices wouldn't get in our way when working from the MongoDB shell.
The notablescan (http://docs.mongodb.org/manual/reference/parameters/#param.notablescan) parameter for the MongoDB server binary (mongod.exe or mongod, depending on your OS) allows you to stop any query that does not use an index at all, with an error being emitted.
This option will not stop inefficient queries, though, so that part will still need to be discovered manually by you.
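For example (values per the linked parameter docs), it can be set at startup or toggled at runtime from the shell:

// at startup
mongod --setParameter notablescan=1

// at runtime, via the admin database
db.adminCommand({ setParameter: 1, notablescan: 1 })

// turn it back off
db.adminCommand({ setParameter: 1, notablescan: 0 })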
I am using MongoDB 2.4.6 and Python 2.7. I have frequently executed queries. Is it possible to save the results of these frequent queries in a cache?
Thanks in advance!
Yes, but you will need to make one yourself; how about memcached or Redis?
However, as a precautionary note: MongoDB already has its recently used data cached in RAM by the OS, so unless you are doing a really resource-intensive aggregation query, or you are using the results outside of your working set window, you might not actually find that it increases performance all that much.
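If you do decide it is worth it, a minimal in-process sketch looks something like the following (shown with the Node.js driver purely for illustration; the Map stands in for memcached/Redis, and the collection/filter names are placeholders - the same pattern applies with pymongo):

// naive cache keyed on the serialized filter
const cache = new Map();

async function findCached(collection, filter) {
    const key = JSON.stringify(filter);
    if (!cache.has(key)) {
        cache.set(key, await collection.find(filter).toArray());
    }
    return cache.get(key);
}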
Is it possible to run MongoDB commands, such as a query to grab additional data or an update, from within MongoDB's MapReduce command, either in the map or the reduce function?
Is this completely ludicrous to do anyway? Currently I have some documents that refer to separate collections using MongoDB DBRefs.
Thanks for the help!
Is it possible to run MongoDB commands... from within MongoDB's MapReduce command.
In theory, this is possible. In practice there are lots of problems with this.
Problem #1: exponential work. M/R is already pretty intense and poorly logged. Adding queries can easily make M/R run out of control.
Problem #2: context. Imagine that you're running a sharded M/R and you are querying into an unsharded collection. Does the current context even have that connection?
You're basically trying to implement JOIN logic and MongoDB has no joins. Instead, you may need to build the final data in a couple of phases by running a few loops on a few sets of data.
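A rough sketch of that phased approach from the shell (collection and field names are hypothetical): resolve the references after the main pass instead of querying from inside map/reduce:

// phase 1: collect the referenced ids from the first result set
var customerIds = db.orders.distinct("customerId")

// phase 2: fetch the referenced documents in one query and index them by _id
var customersById = {}
db.customers.find({ _id: { $in: customerIds } }).forEach(function (c) {
    customersById[c._id] = c
})

// phase 3: loop over the first set again and stitch the two together in application code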