Say, for example, I keep lists of user posts in Redis: a user has 1000 posts, the post documents themselves are stored in MongoDB, but the link between the user and the posts is stored in Redis. I can retrieve the array containing all the ids of a user's posts from Redis, but what is the efficient way to retrieve those documents from MongoDB?
Do I pass the array of ids as a parameter to MongoDB, and Mongo will fetch those for me?
I can't seem to find any documentation on this, so if anyone is willing to help me out, thanks in advance!
To retrieve a set of documents by id, you can use the $in operator to build the MongoDB query. See the following section of the documentation:
http://docs.mongodb.org/manual/reference/operator/query/in/#op._S_in
For instance you can build a query such as:
db.mycollection.find( { _id : { $in: [ id1, id2, id3, .... ] } } )
Depending on how many ids Redis returns, you may have to group them in batches of n items (n = 100, for instance) and run several MongoDB queries. IMO, it is bad practice to build such a query containing more than a few thousand ids. It is better to run smaller queries and accept paying for the extra round trips.
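For instance, a minimal sketch of that batching in the mongo shell (the collection name and batch size are just placeholders):
// ids previously fetched from Redis
var ids = [ /* id1, id2, id3, ... */ ];
var batchSize = 100;
var results = [];
for (var i = 0; i < ids.length; i += batchSize) {
    // one $in query per batch of at most batchSize ids
    var batch = ids.slice(i, i + batchSize);
    results = results.concat(db.mycollection.find({ _id: { $in: batch } }).toArray());
}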
Related
I would like an opinion from MongoDB experts on what MongoDB query optimizations or feature options I can apply to make the read query faster when such a large collection of documents (datasets) has to be returned in one go (a find query).
Maybe someone in the MongoDB community has faced a similar issue and has better thoughts on how to resolve it.
I am using MongoDB 2.6.
The requirement is to fetch all the records in one query, since these documents are populated into an Excel sheet so users can download the data in Excel.
I have a large collection of users, somewhere around 2,000,000 documents in the user collection.
User Collection fields:
{
    _id: ObjectId("54866f133345a61b6b9a701"),
    appId: "20fabc6",
    firstName: "FN",
    lastName: "LN",
    email: "ln@1.com",
    userType: "I",
    active: "true",
    lastUpdatedBy: "TN",
    lastUpdatedDate: ISODate("2013-01-24T05:00:00Z"),
    createdTime: ISODate("2011-01-24T05:00:00Z")
}
I have a find query which is currently trying to fetch 900,000 documents from the User collection. It looks like the query gets stuck while fetching that many documents.
Below is the find query:
db.test.User.find({"appId": "20fabc6"}).sort({"appId" : 1});
Query Function:
// Spring Data MongoDB: fetch all users whose appId is in the given array
List<Users> findByAppId(Object[] appIds) {
    Query query = new Query();
    query.addCriteria(Criteria.where("appId").in(appIds));
    return mongoTemplate.find(query, Users.class);
}
I have placed an index on the appId field, but the query is still taking too long.
I ran a count for the query and I can see 900,000 records matching the appId:
db.test.User.find({"appId": "20fabc6"}).count();
900000
Below are some of the options I can think of that could reduce the number of documents:
1) Adding more fields to filter the records, which still leaves a large number (see the sketch after this list):
db.test.User.find({"appId": "20fabc6", "active": "true"}).count();
700000
2) Limiting the number of records using the MongoDB limit operation for range queries, which will conflict with our requirement to download all the user data into the Excel sheet in one go.
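For option 1, a rough sketch of what the narrowed query and a matching compound index might look like (the compound index is an assumption on my part, not something already in place):
// hypothetical compound index covering both filter fields
db.test.User.ensureIndex({ appId: 1, active: 1 });
// narrowed query, projecting only the fields needed for the Excel export
db.test.User.find(
    { appId: "20fabc6", active: "true" },
    { firstName: 1, lastName: 1, email: 1, createdTime: 1 }
);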
Will aggregation with a cursor help, or will sharding the cluster help, if we have to execute the above find query and fetch that many documents (900,000) in one go?
I would appreciate any help or pointers to resolve this.
Thanks.
Your sort() is unnecessary: you are only finding documents with an appId of 20fabc6, so there is no point sorting by that same appId, since it will be identical for every record returned.
Create an index on the appId field
db.test.User.ensureIndex({"appId":1})
And your query should only scan the 900,000 matching documents. You can double-check this with the performance metadata returned by the .explain() method on your find:
db.test.User.find({"appId": "20fabc6"}).explain()
I have a posts collection which stores post-related info and author information. This is a nested tree.
Then I have a postrating collection which stores which user has rated a particular post up or down.
When a request is made to get the nested tree for a particular post, I also need to return whether the current user has voted on each of the posts being returned, and if so, whether the vote was up or down.
In SQL this would be something like "posts.*, postrating.vote from posts join postrating on postID and postrating.memberID=currentUser".
I know MongoDB does not support joins. What are my options with MongoDB?
1) Use map-reduce - what would the performance be for such a simple query?
2) Store the ratings in the post document - would I hit the BSON size limit?
3) Get the list of all required posts, get the list of all votes by the current user, then loop over the posts and, if the user has voted, add that to the output?
Is there any other way? Can this be done using aggregation?
NOTE: I started on MongoDB last week.
In MongoDB, the simplest way is probably to handle this with application-side logic rather than trying to do it in a single query. There are many ways to structure your data, but here's one possibility:
user_document = {
    name: "User1",
    postsIhaveLiked: [ "post1", "post2", ... ]
}
post_document = {
    postID: "post1",
    content: "my awesome blog post"
}
With this structure, you would first query for the user's user_document. Then, for each post returned, you could check if the post's postID is in that user's "postsIhaveLiked" list.
The main idea with this is that you get your data in two steps, not one. This is different from a join, but based on the same underlying idea of using one key (in this case, the postID) to relate two different pieces of data.
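A minimal sketch of those two steps in the mongo shell (the collection names users and posts are assumptions):
// step 1: fetch the current user's document, including the posts they have liked
var user = db.users.findOne({ name: "User1" }, { postsIhaveLiked: 1 });
// step 2: fetch the posts and flag each one the user has liked
db.posts.find({ postID: { $in: [ "post1", "post2", "post3" ] } }).forEach(function (post) {
    post.likedByCurrentUser = user.postsIhaveLiked.indexOf(post.postID) !== -1;
    printjson(post);
});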
In general, try to avoid using map-reduce for performance reasons. And for this simple use case, aggregation is not what you want.
I am trying to fetch the documents from a collection based on the existence of a reference to these documents in another collection.
Let's say I have two collections Users and Courses and the models look like this:
User: {_id, name}
Course: {_id, name, user_id}
Note: this is just a hypothetical example and not the actual use case. So let's assume that duplicates are fine in the name field of Course, and let's think of Course as CourseRegistrations.
Here, I am maintaining a reference to User in Course, with user_id holding the _id of the User. Note that it's stored as a string.
Now I want to retrieve all users who are registered to a particular set of courses.
I know that it can be done with two queries: first run a query to get the user_id field from the Course collection for the set of courses, then query the User collection using $in with the user ids retrieved by the previous query. But this may not be good if the number of documents is in the tens of thousands or more.
Is there a better way to do this in just one query?
What you are describing is a typical SQL join, but that is not possible in MongoDB. As you already suggested, you can do it with two different queries.
There is one more way to handle it. It's not exactly a solution, but a valid workaround in NoSQL databases: store the most frequently accessed fields inside the same collection.
You can store some of the user collection's fields inside the course collection as an embedded document:
Course : {
    _id: 'xx',
    name: 'yy',
    user: {
        fname: 'r',
        lname: 'v',
        pic: 's'
    }
}
This is a good approach if the subset of fields you intend to retrieve from the user collection is small. You might be worried about the redundant user data stored in the course collection, but that is exactly what makes MongoDB powerful: it's a one-time insert, but your queries will be a lot faster.
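With the embedding in place, a single query over the course collection is enough; a small sketch (the collection name and field values are the hypothetical ones above):
// fetch registrations for a set of courses, returning the embedded user info directly
db.course.find(
    { name: { $in: [ 'yy', 'zz' ] } },
    { user: 1 }   // project only the embedded user subdocument
);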
For example, we have two collections
users {userId, firstName, lastName}
votes {userId, voteDate}
I need a report of the names of all users who have more than 20 votes per day.
How can I write a query to get this data from MongoDB?
The easiest way to do this is to cache the number of votes for each user in the user documents. Then you can get the answer with a single query.
If you don't want to do that, then map-reduce the results into a results collection and query that collection. You can then run incremental map-reduces that only process new votes to keep your results up to date: http://www.mongodb.org/display/DOCS/MapReduce#MapReduce-IncrementalMapreduce
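A minimal sketch of the caching approach in the mongo shell (the voteCounts field and collection names are assumptions for illustration):
// when recording a vote, also bump a per-day counter on the user document
function recordVote(userId) {
    var day = new Date().toISOString().slice(0, 10);   // e.g. "2013-01-24"
    var inc = {};
    inc["voteCounts." + day] = 1;
    db.votes.insert({ userId: userId, voteDate: new Date() });
    db.users.update({ userId: userId }, { $inc: inc });
}
// the daily report then becomes a single query
var day = new Date().toISOString().slice(0, 10);
var filter = {};
filter["voteCounts." + day] = { $gt: 20 };
db.users.find(filter, { firstName: 1, lastName: 1 });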
You shouldn't really be trying to do joins with Mongo. If you are, you've designed your schema in a relational manner.
In this instance I would store the vote as an embedded document on the user.
In some scenarios using embedded documents isn't feasible, and in that situation I would do two database queries and join the results at the client rather than using MapReduce.
I can't provide a fuller answer now, but you should be able to achieve this using MapReduce. The map step would return the userIds of the users who have more than 20 votes, and the reduce step would return the firstName and lastName, I think. Have a look here.
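A rough sketch of what such a map-reduce could look like in the mongo shell (it counts votes per user per day into a results collection, which is a slightly different split than described above; all names are assumptions):
db.votes.mapReduce(
    function () {
        // key on user and calendar day so counts are per day
        var day = this.voteDate.toISOString().slice(0, 10);
        emit({ userId: this.userId, day: day }, 1);
    },
    function (key, values) {
        return Array.sum(values);
    },
    { out: "daily_vote_counts" }
);
// users with more than 20 votes on any single day
db.daily_vote_counts.find({ value: { $gt: 20 } }).forEach(function (doc) {
    printjson(db.users.findOne({ userId: doc._id.userId }, { firstName: 1, lastName: 1 }));
});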
I've a collection named Events. Each Event document has a collection of Participants as embedded documents.
Now my question is: is there a way to query an Event and get only the Participants that are, for example, older than 18?
When you query a collection in MongoDB, by default it returns the entire document which matches the query. You could slice it and retrieve a single subdocument if you want.
If all you want is the Participants who are older than 18, it would probably be best to do one of two things:
1) Store them in a subdocument inside the event document called "Over18" or something. Insert them into that subdocument (and possibly the other one as well if you want), and then when you query the collection you can instruct the database to only return the "Over18" subdocument. The downside is that you store your participants in two different subdocuments and you have to figure out their age before inserting. This may or may not be feasible depending on your application; if you need to check against arbitrary ages (i.e. sometimes it's 18, but sometimes it's 21 or 25, etc.), this will not work.
2) Query the collection, retrieve the Participants subdocument, and then filter it in your application code. Despite what some people may believe, this isn't terrible, because you don't want your database to be doing too much work all the time. Offloading the computation to your application can actually benefit your database, because it can then spend more time querying and less time filtering, which leads to better scalability in the long run.
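For option 2, a minimal sketch in the mongo shell (the collection and field names are guesses based on the question):
// fetch the event with only its Participants subdocument, then filter client-side
var event = db.Events.findOne({ _id: eventId }, { Participants: 1 });  // eventId: the event being viewed
var adults = event.Participants.filter(function (p) {
    return p.age > 18;
});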
Short answer: no. I tried to do the same a couple of months back, but MongoDB does not support it (at least in versions <= 1.8). The same question has been asked in their Google Group, for sure. You can either store the participants as a separate collection or get the whole documents and then filter them on the client. Far from ideal, I know. I'm still trying to figure out the best way around this limitation.
For future reference: This will be possible in MongoDB 2.2 using the new aggregation framework, by aggregating like this:
db.events.aggregate(
    { $unwind: '$participants' },
    { $match: { 'participants.age': { $gte: 18 } } },
    { $project: { participants: 1 } }
)
This will return a list of n documents, where n is the number of participants older than 18, and each entry looks like this (note that the "participants" field now holds a single subdocument instead of an array):
{
    _id: objectIdOfTheEvent,
    participants: { firstName: 'only one', lastName: 'participant' }
}
It could probably even be flattened on the server to return just a list of participants. See the official documentation for more information.
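For instance, one way to regroup the matching participants per event on the server would be to add a $group stage; a sketch (untested, so treat it as an assumption rather than the documented approach):
db.events.aggregate(
    { $unwind: '$participants' },
    { $match: { 'participants.age': { $gte: 18 } } },
    { $group: { _id: '$_id', participants: { $push: '$participants' } } }
)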