Find, Modify and Insert into the Same Collection in MongoDB - mongodb

Can anyone help me with the below case in MongoDB:
Find documents matching a condition in collection A (let's say we get
2 documents).
Modify those 2 documents and then insert them back into the same collection A, without disturbing the original 2 documents.
Aggregation won't support merging into the same collection. I got it working with simple JavaScript, but we need it in MapReduce.
Below is my simple script:
db.col1.find({
    "field1": "value"
}).forEach(function(d1) {
    d1.f2 = NumberInt(d1.f2 / 10);
    db.col1.save(d1);
});

Before saving the modified document d1, replace its _id with a new ObjectId(); this inserts a new d1 instead of updating the existing one:
db.col1.find({ "field1": "value" }).forEach(function(d1) {
    d1.f2 = NumberInt(d1.f2 / 10);
    d1._id = ObjectId();
    db.col1.save(d1);
});
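If MapReduce is a hard requirement, a minimal sketch of the same trick might look like the following, assuming your MongoDB version permits merging mapReduce output back into the source collection and that the wrapped { _id, value } shape mapReduce produces is acceptable. Emitting a fresh ObjectId() as the key means every output document is inserted, leaving the originals untouched:
db.col1.mapReduce(
    function() {
        // a fresh ObjectId key makes each output document new
        emit(ObjectId(), { "field1": this.field1, "f2": NumberInt(this.f2 / 10) });
    },
    function(key, values) {
        // keys are unique ObjectIds, so this reduce is effectively never invoked
        return values[0];
    },
    { query: { "field1": "value" }, out: { merge: "col1" } }
);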

Related

How to overwrite docs from one collection to another collection in mongodb

I have docs in collection c1, each with an update_dt field. I want to insert all these docs from c1 into collection c2. Some docs may already exist in c2, but I still want to insert the updated docs into collection c2.
e.g.
Collection c1:
doc1: { _id : 100, update_dt : "2015-10-29T15:45:00.000Z", Name: "Leo" }
Collection c2:
doc1: { _id : 100, update_dt : "2015-10-10T15:45:00.000Z" }
And I want to overwrite doc1 of collection c2 with doc1 from collection c1.
I tried, but it shows a duplicate key error since the _id field can't be the same.
I want to automate this; I would really appreciate your help.
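A minimal sketch in the mongo shell, assuming the collection names above: save() upserts by _id, so a doc already present in c2 with the same _id is simply replaced instead of triggering a duplicate key error.
// copy every doc from c1 into c2, overwriting any doc that shares an _id
db.c1.find().forEach(function(doc) {
    db.c2.save(doc);
});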

MongoDB select subdocument with aggregation function

I have a MongoDB collection that looks something like this:
{
    _id: ObjectId('aabbccddeeff'),
    objectName: 'MyFirstObject',
    objectLength: 0xDEADBEEF,
    objectSource: 'Source1',
    accessCounter: {
        'firstLocationCode' : 283,
        'secondLocationCode' : 543,
        'ThirdLocationCode' : 564,
        'FourthLocationCode' : 12
    }
}
...
Now, assuming that this is not the only record in the collection and that most/all of the documents contain the accessCounter subdocument/field, how would I go about selecting the first x documents with the most accesses from a specific location?
A sample "query" would be something like:
"Select the first 10 documents from myCollection where accessCounter.firstLocationCode is the highest"
So a sample result would be the X documents whose accessCounter.firstLocationCode value is the greatest in the database.
Thank you for taking the time to read my question.
No need for an aggregation, that is a basic query:
db.collection.find().sort({ "accessCounter.firstLocationCode": -1 }).limit(10)
In order to speed this up, you should create an index on the accessCounter subdocument first:
db.collection.ensureIndex({ "accessCounter": -1 })
assuming you want to do the same query for all locations. In case you only ever query firstLocationCode, create the index on accessCounter.firstLocationCode instead.
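That single-location variant might look like this, using the field name from the sample document above:
db.collection.ensureIndex({ "accessCounter.firstLocationCode": -1 })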
You can speed this up further, in case you only need the accessCounter values, by making this a so-called covered query: a query whose returned values come from the index itself. For example, with the subdocument indexed and querying for the top secondLocationCodes, you should be able to do a covered query with:
db.collection.find({}, { _id: 0, "accessCounter.secondLocationCode": 1 })
    .sort({ "accessCounter.secondLocationCode": -1 }).limit(10)
which translates to "Get all documents ({}), don't return the _id field as you do by default (_id: 0), get only the accessCounter.secondLocationCode field (accessCounter.secondLocationCode: 1). Sort the returned values in descending order and give me the first ten."

Finding if the document is unique

I am querying 3 collections in MongoDB and then creating a new document by taking some fields from the documents of the 3 separate collections. For example, I am taking field 'A' from the first collection, field 'B' from the second and field 'C' from the third.
Using them I am creating a JSON document like:
var uploadDoc = {
    'A' : <value of A>,
    'B' : <value of B>,
    'C' : <value of C>
};
This uploadDoc is being uploaded to another collection.
Question: I wish to upload only distinct values of uploadDoc. By default MongoDB gives each uploadDoc a unique id. How do I insert uploadDocs into the collection only when another document with the same A, B and C values hasn't been inserted before?
I am using javascript to query the collections and create docs.
Two ways are simple:
use "upserts"
db.collection.update(uploadDoc,uploadDoc,{ "upsert": true })
Use a unique index
db.collection.ensureIndex({ "A": 1, "B": 1, "C": 1 },{ "unique": true });
db.collection.insert(uploadDoc); // inserting the same uploadDoc again fails :(
Both work. Choose one.
You should use Unique Indexes: doc
You shouldn't use upsert without unique indexes:
To avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
because
Consider when multiple clients issue the following update with an upsert parameter at the same time:
[cut]
If all update() operations complete the query portion before any client successfully inserts data, and there is no unique index on the name field, then each update operation may result in an insert.
from here
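Combining both pieces of advice, a minimal sketch, assuming the uploadDoc from the question: create the unique index once, then rely on the upsert for each generated doc, so concurrent writers cannot produce duplicates.
// one-time setup: the unique index guards against racing inserts
db.collection.ensureIndex({ "A": 1, "B": 1, "C": 1 }, { "unique": true });
// per document: inserts when no match exists, otherwise a harmless no-op
db.collection.update(uploadDoc, uploadDoc, { "upsert": true });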

MongoDB 2.6 aggregation updates the $out collection

I'm currently using MongoDB 2.6 through MongoHQ. I have several mapreduce jobs which crunch raw data from a collection (c1) to produce a new collection (c2).
I also have an aggregation pipeline which parses (c2) to generate a new collection (c3) with the great $out operator.
However, I need to add extra fields to (c3) outside of the aggregation pipeline and keep them even after a new run of the aggregation, but it seems that the aggregation, keyed on _id, just overwrites the content without updating it. So if I've previously added an extra field like foo : 'bar' to (c3) and I re-run the aggregation, I will lose the foo field.
Based on the documentation (http://docs.mongodb.org/manual/reference/operator/aggregation/out/#pipe._S_out):
Replace Existing Collection
If the collection specified by the $out operation already exists, then upon completion of the aggregation, the $out stage atomically replaces the existing collection with the new results collection. The $out operation does not change any indexes that existed on the previous collection. If the aggregation fails, the $out operation makes no changes to the pre-existing collection.
Is there a better way, or a tricky one :-), to update the $out collection instead of overwriting records with the same _id? I could write a Python or JavaScript script to do the job, but I would like to avoid making many database calls and do it in a smarter way, as with aggregation. Maybe it is not possible, so I will look for a different and more 'classical' path.
Thanks for your help.
Well, not directly with the $out operator; much like the mapReduce output, this is pretty much an "overwrite" operation (though mapReduce does have "merge" and "reduce" output modes as well).
But since you have MongoDB 2.6, aggregate does actually return a cursor. So while the "client/server" interaction may not be as optimal as you want, you also have "bulk update" operations, so you can do something along the lines of:
var cursor = db.collection.aggregate([
    // pipeline here
]);

var batch = [];
while ( cursor.hasNext() ) {
    var doc = cursor.next();
    var updoc = {
        "q": { "_id": doc._id },
        "u": {
            // only new fields here, except for
            "$setOnInsert": {
                // the fields you expect to add from before
            }
        },
        // "upsert" is an option of the update statement itself,
        // a sibling of "q" and "u", not part of the update document
        "upsert": true
    };
    batch.push(updoc);

    // try to do sensible under-16MB updates; the number may vary
    if ( ( batch.length % 500 ) == 0 ) {
        db.runCommand({
            "update": "newcollection",
            "updates": batch
        });
        batch = []; // reset the content
    }
}

// flush any remaining statements
if ( batch.length > 0 ) {
    db.runCommand({
        "update": "newcollection",
        "updates": batch
    });
}
And of course, though there will be many naysayers, and not without reason because you really need to weigh up the consequences (which are very real), you can always wrap what is essentially a JavaScript call with db.eval() in order to get full server-side execution.
But where possible (and that is unless you have a completely remote database solution), it is generally advised to take the "client/server" option, but keep the process as "close" (in networking terms) to the server as possible.
Unlike mapReduce, the $out operator in the aggregation framework has a very specific set of pre-defined behaviours ( http://docs.mongodb.org/manual/reference/operator/aggregation/out/#behaviors ); however, it does seem that the $out options could change. I did not find a JIRA ticket relating to this specific case, but others have posted change requests ( https://jira.mongodb.org/browse/SERVER-13201 ).
As for solving your problem now, you are either forced to revert back to mapReduce (I don't know the scenario this is being run from) or aggregate in a manner that allows you to feed in both the new data and the old data you need.
The most common way of achieving this might be to update the original documents with the new data, maybe by aggregating each original document back down to itself.
Thanks for all your messages.
As I do not want to use a cursor (too request-consuming), I got the job done by combining 2 mapreduce jobs and one aggregation. It is quite 'fat' but it works and could give some ideas to others.
Of course, I would be very pleased hearing from you other great alternatives.
So, I have a collection c1 which is the result of a previous mapreduce job, as you can see by the value object:
c1 : { _id: 'xxxx', value: { language: '...', keyword: '...', params: '...', field1: val1, field2: val2 } }
The xxxx unique ID key is the concatenation of value.language, value.keyword and value.params, as follows:
xxxx = <language>_<keyword>_<params>
I've got another collection c2 : { _id : ObjectID, language: '...', keyword: '...', field1: val1, field2: val2, labels: 'yyyyy' } which is pretty much a projection of the c1 collection, but with an extra field labels, a string of comma-separated labels. This c2 collection is a central repository for all combinations of language and keywords, with their attached field values.
Target
The target is to group all records from the c1 collection based on the group key <language>_<keyword>, make some calculations on the other fields and store the result in the c2 collection, while keeping the old 'labels' field from c2 for the same key. So fields 1 & 2 of this c2 collection will be recalculated each time we launch the whole batch, but the labels field will stay unchanged.
As described in my first message, using aggregation or mapreduce jobs alone you cannot reach this target, as the 'labels' field would be removed.
As I do not want to use cursors and other forEach loops, which are very network- and database-request-consuming (I have a big collection and I use a MongoHQ service), I tried to solve the problem by using only mapreduce and aggregation jobs.
1st Phase
So, first I run a mapreduce job (m1) which makes a sort of copy of the c2 collection, but resetting the values of fields 1 & 2 to 0. The result will be stored in a c3 collection.
function m1Map() {
    // c2 documents are flat, so the fields are read at the top level
    var language = this['language'];
    var keyword = this['keyword'];
    var labels = this['labels'];
    var key = language + '_' + keyword;
    emit(key, { 'language': language, 'keyword': keyword, 'field1': 0, 'field2': 0.0, 'labels': labels });
}

function m1Reduce(key, values) {
    var language = values[0]['language'];
    var keyword = values[0]['keyword'];
    var labels = values[0]['labels'];
    return { 'language': language, 'keyword': keyword, 'field1': 0, 'field2': 0.0, 'labels': labels };
}
So now, c3 is a copy of the c2 collection with fields 1 & 2 set to 0. Here is the shape of this collection:
c3 : { _id: '<language>_<keyword>', value: { language: '...', keyword: '...', field1: 0, field2: 0.0, labels: '...' } }
2nd Phase
In a second step I run a mapreduce job (m2) which groups the c1 collection values by the key <language>_<keyword>, and I project an extra field 'labels' with a fixed value, 'x' in my example. This 'x' value is never used in the c2 collection; it is a special marker. The output of this m2 mapreduce job will be stored in the same c3 collection as before, with the 'reduce' option in the out directive. The Python script is described further below.
function m2Map() {
    var language = this['value']['language'];
    var keyword = this['value']['keyword'];
    var field1 = this['value']['field1'];
    var field2 = this['value']['field2'];
    var key = language + '_' + keyword;
    emit(key, { 'language': language, 'keyword': keyword, 'field1': field1, 'field2': field2, 'labels': 'x' });
}
Then I make some calculations in the reduce function:
function m2Reduce(key, values) {
    // Init
    var language = values[0]['language'];
    var keyword = values[0]['keyword'];
    var field1 = 0;
    var field2 = 0;
    var bLabel = 0;
    var labels;
    for (var i = 0; i < values.length; i++) {
        if (values[i]['labels'] == 'x') {
            // we know these emitted values come from the map and not from
            // a previous value already in c3: 'x' is never used in the c2 collection
            field1 += parseInt(values[i]['field1']);
            field2 += parseFloat(values[i]['field2']);
        } else {
            // these values come from the c2 copy in c3
            if (bLabel == 0) {
                // we keep the former value of the 'labels' field
                labels = values[i]['labels'];
                bLabel = 1;
            } else {
                // we concatenate the 'labels' fields if we have 2 records, though
                // theoretically that is impossible as c2 has only one record per
                // unique key; anyway, a good check afterwards :-)
                labels += ',' + values[i]['labels'];
            }
        }
    }
    if (bLabel == 0) {
        // if the values only come from map emits, we force the 'x' value for
        // labels again, in case these values are re-used in another reduce call
        labels = 'x';
    }
    return { 'language': language, 'keyword': keyword, 'field1': field1, 'field2': field2, 'labels': labels };
}
The Python script which runs the two m1 & m2 mapreduce jobs
(see the pymongo installation docs: http://api.mongodb.org/python/2.7rc0/installation.html):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pymongo import MongoReplicaSetClient
from bson.code import Code
from bson.son import SON

# MongoHQ replica set connection
uri = 'mongodb://user:passwd@url_node1:port,url_node2:port/mydb'
client = MongoReplicaSetClient(uri, replicaSet='set-xxxxxxx')
db = client.mydb
coll1 = db.c1
coll2 = db.c2

# Load the map and reduce functions
m1_map = Code(open('m1Map.js', 'r').read())
m1_reduce = Code(open('m1Reduce.js', 'r').read())
m2_map = Code(open('m2Map.js', 'r').read())
m2_reduce = Code(open('m2Reduce.js', 'r').read())

# Run the map-reduce jobs: m1 replaces c3, m2 merges into it via 'reduce'
results = coll2.map_reduce(m1_map, m1_reduce, "c3", query={})
results = coll1.map_reduce(m2_map, m2_reduce, out=SON([("reduce", "c3")]), query={})
3rd Phase
At this point, we have a c3 collection which is complete, with all field 1 & 2 values computed and the labels kept. Now we have to run a last aggregation pipeline to copy the c3 content (in mapreduce form, with a compound value) to a more classical c2 collection with flattened fields, without the value object.
db.c3.aggregate([
    { $project: { _id: 0, keyword: '$value.keyword', language: '$value.language', field1: '$value.field1', field2: '$value.field2', labels: '$value.labels' } },
    { $out: 'c2' }
])
Et voilà! The target is reached. This solution is quite long, with 2 mapreduce jobs and one aggregation pipeline, but it is an alternative for those who do not want to use a resource-consuming cursor or an external loop.
Thanks.

MongoDB, using group by aggregate framework to get unique strings

I'm trying to use the aggregation framework to group a lot of strings together to identify the unique ones. I must also keep some information about the rest of the fields. This would be analogous to me using the * operator in MySQL with a GROUP BY statement.
SELECT *
FROM my_table
GROUP BY field1
I have tried using the aggregation framework, and it works fine just to get unique fields.
db.mycollection.aggregate({
    $group : { _id : "$field1" }
})
What if I want the other fields that went with that? MySQL would only give me the first one that appeared in the group (which I'm fine with). That's what I thought the $first operator did.
db.mycollection.aggregate({
    $group : {
        _id : "$field1",
        another_field : { $first : "$field2" }
    }
})
This way it groups by field1 but still gives me back the other fields attached to the document. When I try this I get:
exception: aggregation result exceeds maximum document size (16MB)
Which I have a feeling is because it is returning the whole aggregation result back as one document. Can I return it as another JSON array?
Thanks in advance
You're doing the aggregation correctly, but as the error message indicates, the full result of the aggregate call cannot be larger than 16 MB.
Work-arounds would be to either add a filter to reduce the size of the result or use map-reduce instead and output the result to another collection.
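In the spirit of that second work-around, on MongoDB 2.6+ the aggregation pipeline itself can also write its output to another collection with $out, which sidesteps the 16 MB limit on an inline result; a minimal sketch (the output collection name unique_field1 is illustrative):
db.mycollection.aggregate([
    { $group: { _id: "$field1", another_field: { $first: "$field2" } } },
    { $out: "unique_field1" }
])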
If the number of unique values in the result does not exceed 20,000, you could use the group() function, like:
db.mycollection.group({ key: { field1: 1, field2: 1 }, reduce: function(curr, result) {}, initial: {} })
The last option would be map-reduce:
db.mycollection.mapReduce(
    function() { emit({ field1: this.field1, field2: this.field2 }, 1); },
    function(key, values) { return 1; },
    { out: { replace: "unique_field1_field2" } }
)
and your result would be in the "unique_field1_field2" collection.
Another alternative is to use the distinct function:
db.mycollection.distinct('field1')
This function accepts a second argument, a query, with which you can filter the documents.
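For example (the field2 filter value here is illustrative):
db.mycollection.distinct('field1', { field2: 'someValue' })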