pymongo - ensureIndex and upserts

I have a simple dict that defines a base record as shown below:
record = {
    'h': site_hash,        # combination of date (below) and site id, hashed with md5
    'dt': d,               # date - YYYYMMDD
    'si': data['site'],    # site id
    'cl': data['client'],  # client id
    'nt': data['type'],    # site type
}
Then I call the following to update the record, creating it if it doesn't exist:
collection.update(
    record,
    {'$inc': updates},  # updates contains values that increase, such as events: 1, actions: 1, etc.
    True                # do upsert
)
I was wondering whether changing the above to the following would give better performance, since the code below only looks up existing 'h' values instead of h/dt/si/cl/nt, and I'd only need ensureIndex on the 'h' field. However, the $set would obviously execute on every update, causing more writes to the record than just the $inc.
record = {
    'h': site_hash,  # combination of date (below) and site id, hashed with md5
}
values = {
    'dt': d,               # date - YYYYMMDD
    'si': data['site'],    # site id
    'cl': data['client'],  # client id
    'nt': data['type'],    # site type
}
collection.update(
    record,
    {'$inc': updates, '$set': values},
    True  # do upsert
)
Does anyone have any tips or suggestions on best practice here?

If 'h' is already unique then you can just create an index on 'h'; there's no need to index 'dt', 'si', etc. In that case I expect your first example to be a little more performant under very heavy load, for the somewhat obscure reason that it will create smaller entries in the journal.
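If you do want the second form, one way to avoid rewriting the unchanging fields on every call is $setOnInsert, which only applies when the upsert actually inserts a new document. A minimal pymongo sketch of that idea, assuming a modern driver (create_index / update_one) and the same field names as above; the database and collection names are placeholders:

from pymongo import MongoClient, ASCENDING

collection = MongoClient().mydb.stats  # placeholder database/collection names

# Index the lookup key once at startup; create_index is a no-op if the index already exists.
collection.create_index([('h', ASCENDING)], unique=True)

def record_event(site_hash, d, data, updates):
    collection.update_one(
        {'h': site_hash},              # match on the hash only
        {
            '$inc': updates,           # counters, e.g. {'events': 1, 'actions': 1}
            '$setOnInsert': {          # written only when the document is first created
                'dt': d,
                'si': data['site'],
                'cl': data['client'],
                'nt': data['type'],
            },
        },
        upsert=True,
    )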


AR Query for jsonb attribute column

I'm using Postgres for my db, and I have a column in my Auction model as a jsonb column. I want to create a query that searches the Auction's json column to see whether a user's name OR email exists in any Auction instance's json column.
Right now I have #auctions = Auction.where('invitees @> ?', {current_user.name.downcase => current_user.email.downcase}.to_json), but that only brings back exact key => value matches. I want to know whether the name OR the email exists.
You're using the @> operator. The description for it is this:
“Does the left JSON value contain the right JSON path/value entries
at the top level?”
You probably want the ? operator if you want to return a row when the key (name in your case) matches.
There's not a good way to search for values only in a JSON column (see this answer for more details), but you could check whether the key exists on its own, or whether an exact key/value match exists.
The same ActiveRecord methods and chaining apply as when using non-JSON columns, namely where and where(…).or(where(…)):
class Auction
  def self.by_invitee(user)
    name = user.name.downcase
    json = { name => user.email } # note: you should be downcasing emails anyway
    where('invitees ? :name', name: name).or(
      where('invitees @> :json', json: json.to_json)
    )
  end
end
This is just a temporary patch until I add an Invite model, but by casting the invitee column to text I can search it, like so:
SELECT * FROM auctions
WHERE data::text LIKE "%#{string I'm searching for}%"
So, AR:
Auction.where('data::text LIKE ?', "%#{string I'm searching for}%")

Why mongodb stores some numbers as NumberLong?

Mongodb is installed on Windows 8 and I use spring-data-mongodb to work with it.
I have collection with field pId, it's a number.
I see a strange situation: MongoDB stores some pId values as a plain number, but some of them as NumberLong.
query:
db.mycollection.distinct("pId", {"dayDate" : { "$gte" : ISODate("2015-04-14T00:00:00.000Z")}})
output:
[ 61885, 61886, NumberLong(61887) ]
Why does this happen, and can I change something so that the same data type is used for all pId values?
I have been using Laravel with MongoDB, and as far as I understand MongoDB, I found the following:
If you save your data (numbers) in quotes, it will be saved as a string, but if you assign the number in the normal way (no quotes), or type cast it using (int), it will be saved as NumberLong.
Example:
$data = '1' (stored as a string)
$data = 1 or $data = (int)'1' (stored as NumberLong)
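The underlying point is that BSON has separate 32-bit and 64-bit integer types, and which one a value gets depends on the client that wrote it. If you want every pId stored with the same type, you can normalize on write. A minimal sketch with pymongo (not the spring-data client from the question), using the driver's Int64 wrapper to force NumberLong; the collection name is the one from the question:

from pymongo import MongoClient
from bson.int64 import Int64  # wrapper for the BSON 64-bit integer type

collection = MongoClient().mydb.mycollection

# Small Python ints are normally stored as 32-bit BSON ints; wrapping them in
# Int64 makes every pId a NumberLong regardless of its magnitude.
collection.insert_one({'pId': Int64(61885)})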

MongoDB 2.6 aggregation updates the $out collection

I'm currently using MongoDB 2.6 through MongoHQ. I have several map-reduce jobs which crunch raw data from a collection (c1) to produce a new collection (c2).
I also have an aggregation pipeline which parses (c2) to generate a new collection (c3) with the great $out operator.
However, I need to add extra fields to (c3) outside of the aggregation pipeline and keep them even after a new run of the aggregation, but it seems that the aggregation, based on the _id key, just overwrites the content without updating it. So if I have previously added an extra field like foo: 'bar' to (c3) and I re-run the aggregation, I will lose the foo field.
From the documentation (http://docs.mongodb.org/manual/reference/operator/aggregation/out/#pipe._S_out):
Replace Existing Collection
If the collection specified by the $out operation already exists, then upon completion of the aggregation, the $out stage atomically replaces the existing collection with the new results collection. The $out operation does not change any indexes that existed on the previous collection. If the aggregation fails, the $out operation makes no changes to the pre-existing collection.
Is there a better (or trickier :-)) way to update the $out collection instead of overwriting records with the same _id? I could write a Python or JavaScript script to do the job, but I would like to avoid doing many database calls and handle it in a smarter way, as with the aggregation. Maybe it is not possible, in which case I will look for a different, more 'classical' path.
Thanks for your help
Well, not directly with the $out operator; much as with the mapReduce output, this is pretty much an "overwrite" operation (though mapReduce does have "merge" and "reduce" output modes as well).
But since you are on MongoDB 2.6, aggregate() does actually return a cursor. So while the "client/server" interaction may not be as optimal as you would like, you also have "bulk update" operations available, so you can do something along the lines of:
var cursor = db.collection.aggregate([
    // pipeline here
]);

var batch = [];
while ( cursor.hasNext() ) {
    var doc = cursor.next();
    var updoc = {
        "q": { "_id": doc._id },
        "u": {
            "$set": {
                // only the new fields from the aggregation result
            },
            "$setOnInsert": {
                // the fields you expect to have added from before,
                // written only when the document is first created
            }
        },
        "upsert": true
    };
    batch.push(updoc);

    // try to keep each command sensibly under 16MB; the number may vary
    if ( ( batch.length % 500 ) == 0 ) {
        db.runCommand({
            "update": "newcollection",
            "updates": batch
        });
        batch = []; // reset the contents
    }
}

// flush whatever is left in the final batch
db.runCommand({
    "update": "newcollection",
    "updates": batch
});
And of course, though there will be many naysayers, and not without reason (you really need to weigh up the consequences, which are very real), you can always wrap what is essentially a JavaScript call in db.eval() in order to get full server-side execution.
But where possible (and that is unless you have a completely remote database solution), it is generally advisable to take the "client/server" option, keeping the process as "close" (in networking terms) to the server as possible.
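Since the question mentions that a Python script would be acceptable, here is a rough pymongo equivalent of the shell loop above. It is only a sketch: the pipeline, the newcollection target name and the field handling are placeholders, and it assumes pymongo 3.x for bulk_write and UpdateOne:

from pymongo import MongoClient, UpdateOne

db = MongoClient().mydb

batch = []
for doc in db.c2.aggregate([
    # pipeline here
]):
    batch.append(UpdateOne(
        {'_id': doc['_id']},
        # $set only touches the aggregation's own fields, so extra fields added
        # outside the pipeline (such as foo: 'bar') survive a re-run
        {'$set': {k: v for k, v in doc.items() if k != '_id'}},
        upsert=True,
    ))
    if len(batch) == 500:  # keep each bulk request a sensible size
        db.newcollection.bulk_write(batch, ordered=False)
        batch = []

if batch:  # flush the remainder
    db.newcollection.bulk_write(batch, ordered=False)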
Unlike mapReduce, the $out operator in the aggregation framework has a very specific set of pre-defined behaviours (http://docs.mongodb.org/manual/reference/operator/aggregation/out/#behaviors). It does seem that the $out options could change; I did not find a JIRA ticket relating to this specific case, however others have requested changes (https://jira.mongodb.org/browse/SERVER-13201).
As for solving your problem now, you are either forced to revert back to mapReduce (I don't know the scenario this is being run from) or to aggregate in a manner that allows you to feed in both the new data and the old data you need.
The most common way of achieving this might be to update the original rows with the new data, perhaps by aggregating the original row back down to itself.
Thanks for all your messages.
As I do not want to use a cursor (too many requests), I got the job done by combining two map-reduce jobs and one aggregation. It is quite 'fat' but it works, and it may give some ideas to others.
Of course, I would be very pleased to hear about other good alternatives.
So, I have a collection c1 which is the result of a previous map-reduce job, as you can see from the value object:
c1 : { _id: 'xxxx', value: { language: '...', keyword: '...', params: '...', field1: val1, field2: val2 } }
The unique _id key xxxx is the concatenation of value.language, value.keyword and value.params, as follows:
xxxx = <language>_<keyword>_<params>
I have another collection c2 : { _id: ObjectID, language: '...', keyword: '...', field1: val1, field2: val2, labels: 'yyyyy' }, which is more or less a projection of the c1 collection but with an extra field labels, a string of comma-separated labels. This c2 collection is a central repository for all combinations of language and keyword, with their attached field values.
Target
The target is to group all records from the c1 collection on the group key <language>_<keyword>, make some calculations on the other fields and store the result into the c2 collection, while keeping the old 'labels' field from c2 for the same key. So fields 1 & 2 of this c2 collection will be recalculated each time we launch the whole batch, but the labels field will stay unchanged.
As described in my first message, using aggregation or map-reduce jobs alone you cannot reach this target, as the 'labels' field would be removed.
Since I do not want to use cursors and forEach loops, which are very network- and request-heavy (I have a big collection and I use a MongoHQ service), I tried to solve the problem using only map-reduce and aggregation jobs.
1st Phase
So, first I run a map-reduce job (m1) which is essentially a copy of the c2 collection, but with the values of field1 & 2 reset to 0. The result is stored in a c3 collection.
function m1Map(){
    var language = this['value']['language'];
    var keyword = this['value']['keyword'];
    var labels = this['labels'];
    var key = language + '_' + keyword;
    emit(key, {'language': language, 'keyword': keyword, 'field1': 0, 'field2': 0.0, 'labels': labels});
}
function m1Reduce(key, values){
    var language = values[0]['language'];
    var keyword = values[0]['keyword'];
    var labels = values[0]['labels'];
    return {'language': language, 'keyword': keyword, 'field1': 0, 'field2': 0.0, 'labels': labels};
}
So now c3 is a copy of the c2 collection with field1 & 2 set to 0. Here is the shape of this collection:
c3 : { _id: '<language>_<keyword>', value: { language: '...', keyword: '...', field1: 0, field2: 0.0, labels: '...' } }
2nd Phase
In a second step I run a map-reduce job (m2) which groups the c1 collection values by the key <language>_<keyword> and projects an extra field 'labels' with a fixed value, 'x' in my example. This 'x' value is never used in the c2 collection; it is a special marker. The output of this m2 map-reduce job is stored in the same c3 collection as before, using the 'reduce' mode of the out directive. The Python script is shown further down.
function m2Map(){
    var language = this['value']['language'];
    var keyword = this['value']['keyword'];
    var field1 = this['value']['field1'];
    var field2 = this['value']['field2'];
    var key = language + '_' + keyword;
    emit(key, {'language': language, 'keyword': keyword, 'field1': field1, 'field2': field2, 'labels': 'x'});
}
Then I make some calculations in the reduce function:
function m2Reduce(key, values){
    // Init
    var language = values[0]['language'];
    var keyword = values[0]['keyword'];
    var field1 = 0;
    var field2 = 0;
    var bLabel = 0;
    var labels;
    for (var i = 0; i < values.length; i++){
        if (values[i]['labels'] == 'x') {
            // We know these emitted values come from the map and not from a previous value in the c2 collection,
            // because 'x' is never used in the c2 collection
            field1 += parseInt(values[i]['field1']);
            field2 += parseFloat(values[i]['field2']);
        } else {
            // these values come from the c2 collection
            if (bLabel == 0) {
                // we keep the former value of the 'labels' field
                labels = values[i]['labels'];
                bLabel = 1;
            } else {
                // we concatenate the 'labels' field if we have 2 records, though theoretically that is impossible
                // as c2 has only one record per unique key; anyway, a good check afterwards :-)
                labels += ',' + values[i]['labels'];
            }
        }
    }
    if (bLabel == 0) {
        // if the values only come from the map emit, we force the 'x' value for labels again,
        // in case these values are re-used in another reduce call
        labels = 'x';
    }
    return {'language': language, 'keyword': keyword, 'field1': field1, 'field2': field2, 'labels': labels};
}
The Python script which calls the two map-reduce jobs m1 & m2 (for installing pymongo, see: http://api.mongodb.org/python/2.7rc0/installation.html):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pymongo import MongoClient
from pymongo import MongoReplicaSetClient
from bson.code import Code
from bson.son import SON
# MongoHQ
uri = 'mongodb://user:passwd@url_node1:port,url_node2:port/mydb'
client = MongoReplicaSetClient(uri,replicaSet='set-xxxxxxx')
db = client.mydb
coll1 = db.c1
coll2 = db.c2
#Load map and reduce functions
m1_map = Code(open('m1Map.js','r').read())
m1_reduce = Code(open('m1Reduce.js','r').read())
m2_map = Code(open('m2Map.js','r').read())
m2_reduce = Code(open('m2Reduce.js','r').read())
#Run the map-reduce queries
results = coll2.map_reduce(m1_map,m1_reduce,"c3",query={})
results = coll1.map_reduce(m2_map,m2_reduce,out=SON([("reduce", "c3")]),query={})
3rd Phase
At this point, we have a c3 collection which is complete, with all field 1 & 2 values computed and the labels kept. Now we only have to run a last aggregation pipeline to copy the c3 content (which is in map-reduce form, with a compound value) into a more classical c2 collection with flattened fields, i.e. without the value sub-document.
db.c3.aggregate([{$project : { _id: 0, keyword: '$value.keyword', language: '$value.language', field1: '$value.field1', field2 : '$value.field2', labels : '$value.labels'}},{$out:'c2'}])
Et voilà! The target is reached. This solution is quite long, with two map-reduce jobs and one aggregation pipeline, but it is an alternative for those who do not want to use a resource-consuming cursor or an external loop.
Thanks.

MongoDB MongoEngine index declaration

I have Document
class Store(Document):
    store_id = IntField(required=True)
    items = ListField(ReferenceField(Item, required=True))

    meta = {
        'indexes': [
            {
                'fields': ['campaign_id'],
                'unique': True
            },
            {
                'fields': ['items']
            }
        ]
    }
I want to set up indexes on items and store_id; is my configuration right?
Your second index declaration looks like it should do what you want. But to make sure that the index is really effective, you should use explain. Connect to your database with the mongo shell and perform a find-query which should use that index followed by .explain(). Example:
db.yourCollection.find({items:"someItem"}).explain();
The output will be a document with lots of fields. The documentation explains what exactly each field means. Pay special attention to these fields:
millis: time in milliseconds the query required
indexOnly: (self-explanatory)
n: number of returned documents
nscannedObjects: the number of objects which had to be examined without using an index. For an index-only query this should be equal to n. When it is higher, it means that some documents could not be excluded by an index and had to be scanned manually.
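For reference, a MongoEngine declaration matching what the question describes (an index on items, plus a unique index on store_id) might look like the sketch below; it assumes the unique index really is meant to be on store_id rather than campaign_id:

from mongoengine import Document, IntField, ListField, ReferenceField

class Store(Document):
    store_id = IntField(required=True)
    items = ListField(ReferenceField('Item', required=True))

    meta = {
        'indexes': [
            {'fields': ['store_id'], 'unique': True},  # assumed: the unique index belongs on store_id
            {'fields': ['items']},                     # multikey index over the referenced item ids
        ]
    }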

MongoDB with update queries, create and add item to list or increase item's counter

I have the following document schema:
{
    date: dateValue,
    items:
    [
        { name: 'a', counter: 4 },
        { name: 'b', counter: 17 },
        { name: 'aabbb', counter: 15 },
        ...
    ]
}
I would like to have an update query with upsert that creates the entire record if the record does not exist.
In addition, I want to check whether a certain item exists in the list (by its name):
if the item doesn't exist, I want to add a new one to the list with counter = 1;
if the item exists, raise its counter by 1.
Is there any way to do this with one update statement?
You need to do two things:
Use the {upsert: 1} flag on update to insert the particular date document if it doesn't already exist.
Use the $inc operator to increment your item values. It turns out that if you increment a non-existent field by 1, it will be created with value 1 (it's as if it existed with value 0).
You may not be able to get the above accomplished with the schema you currently have. In order to increment a counter, it has to be keyed by the name, i.e. "a": 1, "b": 17, etc. You currently have it as name: "...", counter: value pairs, which means you can only update them with the positional operator. But the positional operator requires that you match an element in order to update it, so there goes the strategy of using $inc to create-or-increment.
So it looks like if you want to do this in a single update statement you would need to change your schema - only you can decide if that's the way to go since it may affect how your other reads and writes interact with the data.
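To make that concrete, here is a minimal pymongo sketch of the reworked schema the answer suggests, with the counters stored in an items sub-document keyed by name (the collection and field names are placeholders):

from pymongo import MongoClient

collection = MongoClient().mydb.daily_counts  # placeholder names

def bump(date_value, item_name):
    # One statement: upsert creates the document for the date if needed,
    # and $inc creates the counter at 1 if the item is new or increments it otherwise.
    collection.update_one(
        {'date': date_value},
        {'$inc': {'items.' + item_name: 1}},
        upsert=True,
    )

bump('20240101', 'a')  # items.a becomes 1 on the first call, 2 on the next, and so on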