Mongo - split 1 query into N queries

I have a collection of millions of docs as follows:
{
customerId: "12345" // string of numbers
foo: "xyz"
}
I want to read every document in the collection and use the data in each for a large batch job. Each customer is independent, but 1 customer may have multiple docs which must be processed together.
I would like to split the work into N separate queries i.e. N tasks (that can be spread over M clients if N > M).
How can each query consider different exclusive and adjoining sets of customers efficiently?
One way might be for task 1 to query all customers whose ids start with "1", task 2 to query all docs for customers whose ids start with "2", and so on, giving N=10, which can be spread over up to 10 clients. I'm not sure whether querying by substring is fast, though. Is there a better method?

You can use the $skip / $limit stages to split your data into separate queries.
Pseudocode
I assume the MongoDB driver automatically generates an ObjectId for the _id field:
var N = 10;                       // documents per task
var M = db.collection.count({});
// Calculate how many tasks we need (round up)
var tasks = Math.ceil(M / N);
// Iterate over the tasks, fetching a fixed amount of data for each job
for (var i = 0; i < tasks; i++) {
  var batch = db.collection.aggregate([
    { $sort  : { _id : 1 } },
    { $skip  : i * N },           // skip whole batches, not single documents
    { $limit : N },
    // Use $lookup here if the "multiple docs" live in another collection
  ]).toArray();
  // i=0 -> documents 0-9
  // i=1 -> documents 10-19
  // i=2 -> documents 20-29
  // ...
  // Note: if fewer than N documents remain, MongoDB returns whatever
  // is left (0 to N records).
  // Process batch here
}
Traceability
How can you know whether a job finished, and where it got stuck? Add extra fields once you finish job execution:
jobId - which task processed this data
startDate - when processing of the data started
endDate - when processing of the data finished
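One caveat with plain $skip / $limit is that it splits by document, not by customer, so one customer's docs can straddle two batches. A way to get N exclusive partitions that keeps each customer's docs together is to assign customers to tasks by a hash of customerId modulo N (stored as an extra field at write time, or computed by each task). A minimal plain-JavaScript sketch of the idea (the djb2 hash here is an illustrative choice, not anything MongoDB-specific):

```javascript
// Assign each customer to one of N disjoint partitions. All docs
// sharing a customerId land in the same partition, so one task
// processes that customer's docs together.
function partitionFor(customerId, numPartitions) {
  var hash = 5381; // djb2 string hash; any stable hash works
  for (var i = 0; i < customerId.length; i++) {
    hash = ((hash * 33) + customerId.charCodeAt(i)) >>> 0;
  }
  return hash % numPartitions;
}

var N = 10;
var docs = [
  { customerId: "12345", foo: "a" },
  { customerId: "12345", foo: "b" },
  { customerId: "67890", foo: "c" }
];
// Both docs for customer "12345" land in the same task:
partitionFor(docs[0].customerId, N) === partitionFor(docs[1].customerId, N); // true
```

Task k would then query only its own slice, e.g. { partition: k } if the partition is stored as a field. For what it's worth, a prefix query on an indexed string field is also reasonably fast, since it translates to an index range scan.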


Bucketing and counting for histogram in MongoDB

I want to implement a histogram based on data stored in MongoDB, getting counts per bucket. The buckets are created from a single input value, the number of groups - for example, groups = 4.
Consider that multiple transactions are running, and we store the transaction time as one of the fields. I want to count transactions by the time required to finish them.
How can I use the aggregation framework or map-reduce to create the buckets?
Sample data:
{
  "transactions": {
    "149823": { "timerequired": 5 },
    "168243": { "timerequired": 4 },
    "168244": { "timerequired": 10 },
    "168257": { "timerequired": 15 },
    "168258": { "timerequired": 8 },
    "timerequired": 18
  }
}
In the output I want to print the bucket ranges and the count of transactions that fall into each bucket:
Bucket   Count
0-5      2
5-10     2
10-15    1
15-20    1
As of MongoDB 3.4, the $bucket and $bucketAuto aggregation stages are available. They can easily solve your request:
db.transactions.aggregate([
  {
    $bucketAuto: {
      groupBy: "$timerequired",
      buckets: 4
    }
  }
])
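For versions before 3.4 (or outside the server), the same fixed-width bucketing can be computed by hand. A plain-JavaScript sketch using the sample values above; note that $bucketAuto chooses its own boundaries, so this hand-rolled version is tuned to match the question's desired table (a value equal to an upper edge, such as 5, falls into the lower bucket):

```javascript
// Fixed-width bucketing: count how many values fall into each of
// `groups` equal-width buckets spanning [0, maxValue].
function bucketCounts(values, groups, maxValue) {
  var width = maxValue / groups;          // e.g. 20 / 4 = 5
  var counts = new Array(groups).fill(0);
  values.forEach(function (v) {
    // Math.ceil puts an upper-edge value (e.g. 5) into the lower bucket
    var idx = Math.max(0, Math.ceil(v / width) - 1);
    counts[Math.min(idx, groups - 1)] += 1;
  });
  return counts;
}

var times = [5, 4, 10, 15, 8, 18];  // timerequired values from the sample
bucketCounts(times, 4, 20);         // [2, 2, 1, 1]
```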

MongoDB Query advice for weighted randomized aggregation

So far I have encountered ways of selecting random documents, but my problem is a bit more of a pickle. So here goes.
I have a collection which contains, say, 1000+ documents (products).
Say each document has a more or less generic format; for simplicity:
{"_id":{},"name":"Product1","groupid":5}
The groupid is a number, say between 1 and 20, denoting the group the product belongs to.
Now suppose my query input is an array of {groupid -> weight} pairs, e.g. {[{"2":4},{"7":6}]}, plus another parameter n (= 10, say). Then I need to be able to pick 4 random documents that belong to groupid 2 and 6 random documents that belong to groupid 7.
The only solution I can think of is to run m subqueries, where m is the array length of the query input.
How do I accomplish this in an efficient manner in MongoDB, perhaps using map-reduce?
Picking n random documents for each group:
Group the records by the groupid field: emit the groupid as key and the record as value.
For each group, pick n random documents from the values array.
Let
var parameter = {"5":1,"6":2}; // groupid -> weight, keep it as an object
be the input to the map-reduce functions.
The map function emits only those group ids that were provided in the parameter:
var map = function(){
  if (parameter.hasOwnProperty(this.groupid)) {
    emit(this.groupid, this);
  }
}
The reduce function picks, for each group, random records based on the parameter object in scope:
var reduce = function(key, values){
  var length = values.length;
  var docs = [];
  var added = [];
  var i = 1;
  // Guard against asking for more documents than the group contains
  while (i <= parameter[key] && added.length < length) {
    var index = Math.floor(Math.random() * length);
    if (added.indexOf(index) == -1) {
      docs.push(values[index]);
      added.push(index);
      i++;
    }
    // on a collision, simply retry with a new random index
  }
  return {result: docs};
}
Invoke mapReduce on the collection, passing the parameter object in scope:
db.collection.mapReduce(map, reduce, {
  out: "sam",
  scope: { "parameter": { "5": 1, "6": 2, "n": 10 } }
})
To get the dumped output:
db.sam.find({},{"_id":0,"value.result":1}).pretty()
When you bring the parameter n into the picture, you need to specify the number of documents for each group as a ratio; otherwise that parameter is not necessary at all.
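The per-group sampling inside the reduce function can be sketched (and sanity-checked) outside the server. A plain-JavaScript version of "pick k distinct random documents per group", using a partial Fisher-Yates shuffle, which avoids the retry loop entirely (function names are illustrative):

```javascript
// Pick k distinct random elements from arr (partial Fisher-Yates shuffle).
function sampleK(arr, k) {
  var copy = arr.slice();
  var n = Math.min(k, copy.length); // never ask for more than exists
  for (var i = 0; i < n; i++) {
    var j = i + Math.floor(Math.random() * (copy.length - i));
    var tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
  }
  return copy.slice(0, n);
}

// Weighted pick across groups, e.g. parameter = {"2": 4, "7": 6}
// picks 4 docs from group 2 and 6 docs from group 7.
function weightedSample(docsByGroup, parameter) {
  var result = [];
  Object.keys(parameter).forEach(function (groupid) {
    var docs = docsByGroup[groupid] || [];
    result = result.concat(sampleK(docs, parameter[groupid]));
  });
  return result;
}
```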

Search in a limited number of records in MongoDB

I want to search within the first 1000 records of my collection, named CityDB. I used the following code:
db.CityDB.find({'index.2':"London"}).limit(1000)
but it does not do what I need: it returns the first 1000 matches, whereas I want to search only within the first 1000 records, not all of them. Could you please help me?
Thanks,
Amir
Note that there is no guarantee that your documents are returned in any particular order as long as you don't sort explicitly. Documents in a new collection are usually returned in insertion order, but various things can cause that order to change unexpectedly, so don't rely on it. By the way: auto-generated _ids start with a timestamp, so when you sort by _id, the objects are returned by creation date.
Now about your actual question. When you want to limit the documents first and then filter that limited set, you can use the aggregation pipeline: it lets you apply the $limit stage first and then run $match on the remaining documents.
db.CityDB.aggregate([
  // { $sort: { _id: 1 } }, // <- uncomment when you want the first 1000 by creation time
  { $limit: 1000 },
  { $match: { 'index.2': "London" } }
])
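The difference between the two orderings is easy to see with a small plain-JavaScript model of the pipeline (the collection contents below are made up for illustration): find().limit() filters the whole collection and then limits, while the pipeline above limits first and then filters.

```javascript
// A toy "collection": 20 docs, every even _id is in London.
var cityDocs = [];
for (var i = 1; i <= 20; i++) {
  cityDocs.push({ _id: i, city: i % 2 === 0 ? "London" : "Paris" });
}

// find({city: "London"}).limit(5): filter the whole collection, THEN limit.
var filterThenLimit = cityDocs
  .filter(function (d) { return d.city === "London"; })
  .slice(0, 5);

// aggregate([{$limit: 5}, {$match: ...}]): limit FIRST, then filter.
var limitThenFilter = cityDocs
  .slice(0, 5)
  .filter(function (d) { return d.city === "London"; });

filterThenLimit.length;  // 5  (first five London docs anywhere)
limitThenFilter.length;  // 2  (London docs among the first five only)
```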
I can think of two ways to achieve this:
1) Keep a global counter, and every time you insert a document add a field count = currentCounter and increment currentCounter by 1. When you need to search within the most recent k elements, find them this way:
db.CityDB.find({
  'index.2': "London",
  count: { '$gte': currentCounter - k }
})
This is not atomic and might sometimes give you more than k elements on a heavily loaded system (but it can use indexes).
Here is another approach, which works nicely in the shell:
2) Create some dummy data:
var k = 100;
for (var i = 1; i <= k; i++) {
  db.a.insert({
    _id: i,
    z: Math.floor(1 + Math.random() * 10)
  })
}
var output = [];
Now find, among the k most recently inserted records, those where z == 3:
k = 10;
db.a.find().sort({$natural: -1}).limit(k).forEach(function(el){
  if (el.z == 3) {
    output.push(el);
  }
})
As you can see, output contains the right elements:
output
I think it is pretty straightforward to modify my example for your needs.
P.S. Also take a look at the aggregation framework; there might be a way to achieve what you need with it.

Mongo aggregation filter

We have several map-reduce jobs that run on a scheduler and aggregate some counts for us. We'd like to switch these over to real-time aggregate calls. The problem is, the map-reduce in all its infinite flexibility is tallying 4 different counts for the collection it runs against. Things like:
var result = {
  MessageId: this.MessageId,
  Date: this.Created,
  Queued: (this.Status == 0 ? 1 : 0),
  Sent:   (this.Status == 1 ? 1 : 0),
  Failed: (this.Status == 2 ? 1 : 0),
  Total: 1,
  Unsubscribes: 0
};
If I were in SQL I don't think I could pull this off with a single GROUP BY/SUM, because I need a different filter for each SUM. Is it possible in mongo, or do I need to run 4 separate $group statements with different $match clauses?
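A single $group should be able to cover this: in the aggregation framework, each conditional tally is typically written as a $sum over a $cond expression, e.g. Queued: { $sum: { $cond: [ { $eq: ["$Status", 0] }, 1, 0 ] } }, so no separate $match per count is needed. The tallying logic itself, sketched in plain JavaScript with the field names from the map-reduce snippet above:

```javascript
// One pass over the messages, accumulating each conditional count -
// the same shape a single $group with $sum/$cond would produce.
function tally(messages) {
  return messages.reduce(function (acc, m) {
    acc.Queued += m.Status === 0 ? 1 : 0;
    acc.Sent   += m.Status === 1 ? 1 : 0;
    acc.Failed += m.Status === 2 ? 1 : 0;
    acc.Total  += 1;
    return acc;
  }, { Queued: 0, Sent: 0, Failed: 0, Total: 0 });
}

tally([{ Status: 0 }, { Status: 1 }, { Status: 1 }, { Status: 2 }]);
// → { Queued: 1, Sent: 2, Failed: 1, Total: 4 }
```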

optimizing hourly statistics retrieval with mongodb

I've collected about 10 million documents spanning a few weeks in my MongoDB database, and I want to be able to calculate some simple statistics and output them.
The statistic I'm trying to get is the average rating of the documents within a timespan, in one-hour intervals.
To give an idea of what I'm trying to do, consider this pseudocode:
var dateTimeStart;
var dateTimeEnd;
var distinctHoursBetweenDateTimes = getHours(dateTimeStart, dateTimeEnd);
var totalResult = [];
foreach (distinctHour in distinctHoursBetweenDateTimes) {
  tmpResult = mapreduce_getAverageRating(distinctHour, distinctHour + 1);
  totalResult[distinctHour] = tmpResult;
}
return totalResult;
My document structure is something like:
{_id, rating, topic, created_at}
created_at is the date I'm basing my statistics on (time of insertion and time of creation are not always the same).
I've created an index on the created_at field.
The following is my mapreduce:
map = function(){
  // field names match the document structure above (rating, topic)
  emit(this.topic, { total: this.rating, num: 1 });
};
reduce = function(key, values){
  var n = { total: 0, num: 0 };
  for (var i = 0; i < values.length; i++) {
    n.total += values[i].total;
    n.num += values[i].num;
  }
  return n;
};
finalize = function(key, res){
  res.avg = res.total / res.num;
  return res;
};
I'm pretty sure this can be done more efficiently - possibly by letting Mongo do more of the work instead of running several map-reduce statements in a row.
At this point each map-reduce takes about 20-25 seconds, so computing statistics for all the hours over a few days quickly adds up to a very long time.
My impression is that Mongo should be well suited for this kind of work, so I must obviously be doing something wrong.
Thanks for your help!
I assume the time is part of the documents you are map-reducing? If you run the MapReduce over all documents, determine the hour in the map function, and add it to the key you emit, you can do all of this in a single MapReduce.
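A sketch of that idea, simulated in plain JavaScript (field names come from the question's document structure; truncating created_at to the hour and combining it with the topic in the key replaces the per-hour invocations with one pass):

```javascript
// Composite key: topic plus the document's creation hour (UTC).
function hourKey(doc) {
  var h = new Date(doc.created_at);
  h.setUTCMinutes(0, 0, 0);  // truncate to the hour
  return doc.topic + "|" + h.toISOString();
}

// In-memory stand-in for a single map/reduce/finalize over all documents.
function averageRatingPerTopicHour(docs) {
  var groups = {};
  docs.forEach(function (doc) {                  // "map" + "reduce"
    var key = hourKey(doc);
    var g = groups[key] || (groups[key] = { total: 0, num: 0 });
    g.total += doc.rating;
    g.num += 1;
  });
  Object.keys(groups).forEach(function (key) {   // "finalize"
    groups[key].avg = groups[key].total / groups[key].num;
  });
  return groups;
}

var res = averageRatingPerTopicHour([
  { topic: "news", rating: 2, created_at: "2020-01-01T10:15:00Z" },
  { topic: "news", rating: 4, created_at: "2020-01-01T10:45:00Z" }
]);
// res["news|2020-01-01T10:00:00.000Z"].avg === 3
```

In the real MapReduce, the same hour-truncation would happen inside the map function and the composite value would be passed as the emitted key.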