Why is MongoDB not displaying all results?

This might seem quite simple, but does anyone know why MongoDB is only returning the first 20 results despite my importing 800 from a CSV file?
Thanks

The mongo shell is not intended to be a production client; it is an administrative and test tool.
When you run a query, the mongo shell returns and displays only the first batch of results (20 documents by default). You will need to either request an additional batch (i.e. type "it" for more), use a method like toArray() to exhaust the cursor, or save the cursor to a variable so you can iterate it.
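In the shell, any of the following work; `myCollection` is a placeholder collection name:

```javascript
// Type "it" after a find() to fetch and display the next batch:
db.myCollection.find()
it

// Or exhaust the cursor in one go (fine for small result sets):
db.myCollection.find().toArray()

// Or raise the shell's display batch size from its default of 20:
DBQuery.shellBatchSize = 100
```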

Related

MongoDB closes connection on read operation

I run MongoDB 4.0 on WiredTiger under Ubuntu Server 16.04 to store complex documents. There is an issue with one of the collections: the documents contain many images stored as base64 strings. I understand this is bad practice, but I need some time to fix it.
Because of this, some find operations fail, but only those with a non-empty filter or a skip. For example, db.collection('collection').find({}) runs fine, while db.collection('collection').find({category: 1}) just closes the connection after a timeout. It doesn't matter how many documents should be returned: if there's a filter, the error occurs every time (even if the query should return 0 docs), while an empty query always executes well until the skip is too big.
UPD: some skip values make queries fail. db.collection('collection').find({}).skip(5000).limit(1) runs fine, db.collection('collection').find({}).skip(9000).limit(1) takes much longer but still executes, while db.collection('collection').find({}).skip(10000).limit(1) fails every time. It looks like there's some kind of buffer where the DB stores query-related data, and at around 10,000 docs it runs out of resources. The collection itself has ~10,500 docs. Also, searching by _id works fine. Unfortunately, I have no way to create new indexes, because that operation fails just like the reads.
What temporary solution I may use before removing base64 images from the collection?
This happens because such a problematic data scheme causes huge RAM usage. The more entities there are in the collection, the more RAM is needed, not only to perform well but even to run find at all.
Increasing MongoDB's default RAM allowance with the storage.wiredTiger.engineConfig.cacheSizeGB config option allowed all the operations to run fine.
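A minimal sketch of the relevant mongod.conf fragment; the 4 GB value is an assumption, so size it to your workload and available memory:

```yaml
# /etc/mongod.conf (the default cache is roughly 50% of RAM minus 1 GB)
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4
```

Restart mongod after changing the file so the new cache size takes effect.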

change collection data formats within Meteor

Looking for the best way to fix data formats in my Meteor app. When I started, I wasn't using anything like SimpleSchema or being as consistent as I should have been with Date formats.
So now I'd like to get everything back to proper Date objects.
I'm still new-ish to Mongo, and I was a little surprised to find (please correct me if I'm wrong) that there's no way to update all records, modifying an attribute based on its current value. I've got timestamps that came from an API POST that might be Strings, epoch times from new Date().getTime(), some actual Dates, etc.
I plan to use moment(currentValue).toDate() to fix this. I'm using percolate:migrations for data changes 1) so that changes stay in my repo and 2) so data is consistent wherever the app is run. I've looked at this question and I assume I'll need to iterate over my collections. But snapshot() isn't available in Meteor.
Do I need to write and manually run a mongo script for this?
Generally I prefer to run migration scripts from the mongo shell, since it's easier to execute (compared to deploying the code that runs the migration) and it gives you access to the full mongo API. You can run load("path/to/script.js") in the mongo console if you want to predefine your script.
snapshot() ensures you won't modify the same document twice. From the MongoDB docs:
Append the snapshot() method to a cursor to toggle the “snapshot” mode. This ensures that the query will not return a document multiple times, even if intervening write operations result in a move of the document due to the growth in document size.
Running without snapshot() could result in passing a Date object (one that was just converted) to your update function. Since you are planning to handle that case anyway (you say you already have some Date objects in your db), it doesn't change much. Ergo, you can run this from Meteor without snapshot(), but you might as well use the shell to get used to it :)
And you are correct that there is no way to update a document based on its current value. Looping through all documents and updating them one by one is rather slow, so if you have a huge collection you might want to schedule some downtime.
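A minimal sketch of such a migration, runnable from the mongo shell. The collection name `events` and field `createdAt` are hypothetical, and the helper uses plain `new Date()` instead of moment:

```javascript
// Normalize a mixed-format timestamp (Date, epoch millis, or string) to a Date.
function toDate(value) {
  if (value instanceof Date) return value;               // already correct
  if (typeof value === 'number') return new Date(value); // epoch millis from getTime()
  return new Date(value);                                // ISO-8601 or similar string
}

// In the mongo shell, iterate and rewrite each document one by one:
// db.events.find().snapshot().forEach(function (doc) {
//   db.events.update({ _id: doc._id },
//                    { $set: { createdAt: toDate(doc.createdAt) } });
// });
```

Each document is updated individually by _id, which is slow but safe; snapshot() keeps the cursor from revisiting documents that move on disk as they are rewritten.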

MongoDB - how do i see everything in a collection using the shell?

I deployed a test app (http://meteortipslinda2.meteor.com/) and added accounts-facebook. Facebook login does not work, however, because I probably input the wrong API key.
I know I can access the shell with meteor mongo meteortipslinda2.meteor.com, but I'm at a loss about how to see everything in these collections:
meteor_accounts_loginServiceConfiguration
meteor_oauth_pendingCredentials
meteor_oauth_pendingRequestTokens
I'm assuming that I can update the value to the correct one in the MongoDB shell, but first I need to figure out how to see the contents of the collections.
To list all the documents in a collection, just call find without any parameters.
db.myCollection.find()
If there are a lot of documents, it will batch them up. You will then be able to show the next batch by typing it.
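So for the collections in the question, something like the following in the shell; pretty() just indents the output for readability:

```javascript
db.meteor_accounts_loginServiceConfiguration.find().pretty()
db.meteor_oauth_pendingCredentials.find().pretty()
db.meteor_oauth_pendingRequestTokens.find().pretty()
```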

Matching documents of mongodb with json

Is there any way to verify that all documents of a MongoDB collection were entered correctly, i.e. to check that the data in the JSON file and the inserted data are the same?
If yes, how do I do it? Consider that there are 3 million documents in the db.
I want to do it with JavaScript.
Thanks
You will have to run a find for every document that you expect to be in the database, verifying that there is in fact an exact match present (just use the entire document as the match criteria).
In the future, you can use safe mode (safe=True in most drivers, though the syntax varies slightly) to make sure your writes do not fail. Using safe mode will alert you to the results of the write.
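A shell-side sketch of that exact-match check; `expectedDocs` stands in for your parsed JSON array and `myCollection` is a placeholder. Note that at 3 million documents this means 3 million queries, so expect it to take a while:

```javascript
// For each expected document, verify an exact match exists in the collection.
// Using the whole document as the filter matches only documents with
// identical field values.
var missing = 0;
expectedDocs.forEach(function (doc) {
  if (db.myCollection.findOne(doc) === null) missing++;
});
print(missing + " expected documents were not found");
```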

mongodb get count without repeating find

When performing a query in MongoDb, I need to obtain a total count of all matches, along with the documents themselves as a limited/paged subset.
I can achieve the goal with two queries, but I do not see how to do this with one query. I am hoping there is a mongo feature that is, in some sense, equivalent to SQL_CALC_FOUND_ROWS, as it seems like overkill to have to run the query twice. Any help would be great. Thanks!
EDIT: Here is Java code to do the above.
DBCursor cursor = collection.find(searchQuery).limit(10);
System.out.println("total objects = " + cursor.count());
I'm not sure which language you're using, but you can typically call a count method on the cursor that's the result of a find query and then use that same cursor to obtain the documents themselves.
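In the mongo shell that pattern looks like this (the collection name and filter are placeholders). Note that count() here still triggers a separate count command on the server:

```javascript
var cursor = db.myCollection.find({ category: 1 }).limit(10);
var total = cursor.count();    // total matches, ignoring the limit
var page  = cursor.toArray();  // at most 10 documents for this page
```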
It's not only overkill to run the query twice, but there is also the risk of inconsistency. The collection might change between the two queries, or each query might be routed to a different peer in a replica set, which may have different versions of the collection.
The count() function on cursors (in the MongoDB JavaScript shell) actually runs another query; you can see that by typing cursor.count (without parentheses) to display its source. So it is no better than running two queries.
In the C++ driver, cursors don't even have a count function. There is itcount, but it only loops over the cursor and fetches all results, which is not what you want (for performance reasons). The Java driver also has itcount, and there the documentation says it should be used for testing only.
It seems there is no way to do a "find some and get total count" consistently and efficiently.