Finding if the document is unique - mongodb

I am querying 3 collections in MongoDB and then creating a new document by taking some fields from the documents of the 3 separate collections. For example: I am taking field 'A' from the first collection, field 'B' from the second, and field 'C' from the third.
Using them, I am creating a JSON document like:
var uploadDoc = {
    'A' : <value of A>,
    'B' : <value of B>,
    'C' : <value of C>
};
This uploadDoc is being uploaded to another collection.
Question: I wish to upload only distinct values of uploadDoc. By default MongoDB gives each uploadDoc a unique id. How do I insert an uploadDoc into the collection only when another document with the same A, B and C values hasn't been inserted before?
I am using JavaScript to query the collections and create docs.

Two ways are simple:
use "upserts"
db.collection.update(uploadDoc,uploadDoc,{ "upsert": true })
Use a unique index
db.collection.ensureIndex({ "A": 1, "B": 1, "C": 1 },{ "unique": true });
db.collection.insert(uploadDoc); // Same thing fails :(
Both work. Choose one.

You should use Unique Indexes: doc
You shouldn't use upsert without unique indexes:
To avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
because
Consider when multiple clients issue the following update with an upsert parameter at the same time:
[cut]
If all update() operations complete the query portion before any client successfully inserts data, and there is no unique index on the name field, then each update operation may result in an insert.
from here
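Putting that together, a minimal shell sketch of the unique-index approach (targetCollection is a placeholder for your destination collection):
db.targetCollection.ensureIndex({ "A": 1, "B": 1, "C": 1 }, { "unique": true });
var res = db.targetCollection.insert(uploadDoc);
if (res.hasWriteError()) {
    // an E11000 duplicate key error means a document with the same A, B and C already exists
    print("Skipped duplicate: " + res.getWriteError().errmsg);
}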

Related

Mongo - How can a narrower query be slower than a generic one?

This is executed immediately:
db.mycollection.find({ strField: 'AAA'}).count()
And this one takes a long time to finish:
db.mycollection.find({ strField: 'AAA', dateTimeField: { $exists: true }}).count()
This is how I created my index:
db.mycollection.createIndex({strField: 1, dateTimeField: 1}, { sparse: true })
But it doesn't help, even when using hint(indexName).
Why does this happen, and how can I fix it?
The { $exists: true } query predicate is problematic, especially if there are documents in the collection for which that field does not exist.
When MongoDB creates an index entry for a document, it collects all of the field values according to the index spec, and concatenates them.
If a field is not present in the document, the index stores null in that field's position.
If the field is explicitly set to null, it also stores null in that field's position.
This means that these 2 documents will have identical entries in the index:
{ strField: 'AAA', dateTimeField: null}
{ strField: 'AAA'}
Note that even with the index being sparse, both documents will be indexed, since at least one of the index's fields exists in each document.
When testing { dateTimeField: { $exists: true } }, the first document will match, while the second will not.
When processing a count query using an index, if the query can be satisfied by scanning a single range of the index, the query executor can use a count_scan stage, and get the correct result without loading a single document from disk.
Because the executor cannot definitively tell from the index whether or not the field exists, it cannot use a count_scan, and must instead use an ordinary ixscan followed by a fetch stage, and load all of the matching documents from disk in order to arrive at the correct count.
In the case of the first query, the executor would have been able to use a count_scan, while the second would have had to examine all of the documents. You should be able to see this by running explain with the executionStats option on each query.
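For example (a sketch; the exact explain output varies by server version):
db.mycollection.explain("executionStats").count({ strField: 'AAA' })
// winning plan should include a COUNT_SCAN stage, with totalDocsExamined: 0
db.mycollection.explain("executionStats").count({ strField: 'AAA', dateTimeField: { $exists: true } })
// winning plan should fall back to IXSCAN + FETCH, with totalDocsExamined equal to the number of matches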
One way to avoid this pitfall is to take advantage of the fact that MongoDB query operators are type-sensitive. This means that the following query will match any document whose dateTimeField holds a date value greater than or equal to epoch 0:
db.mycollection.find({ strField: 'AAA', dateTimeField: { $gte: new ISODate("1970-01-01T00:00:00Z") }}).count()
This will allow the query executor to count all of the documents that have the matching string and contain a date, but will exclude documents that contain a dateTimeField with a numeric or string value.

How to optimize a mongo query with two parallel arrays?

I have a query like this:
xml_db.find(
    {
        'high_performer': {
            '$nin': [some_value]
        },
        'low_performer': {
            '$nin': [some_value]
        },
        'expiration_date': {
            '$gte': datetime.now().strftime('%Y-%m-%d')
        },
        'source': 'some_value'
    }
)
I have tried to create an index with those fields, but I am getting this error:
pymongo.errors.OperationFailure: cannot index parallel arrays [low_performer] [high_performer]
So, how can I run this query efficiently?
Compound indexing ordering should follow the equality --> sort --> range rule. A good description of this can be found in this response.
This means that the first field in the index would be source, followed by the range filters (expiration_date, low_performer and high_performer).
As you noticed, one of the "performer" fields cannot be included in the index since only a single array can be indexed. You should use your knowledge of the data set to determine which filter (low_performer or high_performer) would be more selective and choose that filter to be included in the index.
Assuming that high_performer is more selective, the only remaining step would be to determine the ordering between expiration_date and high_performer. Again, you should use your knowledge of the data set to make this determination based on selectivity.
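If you're unsure, a quick empirical check is to count how many documents each candidate filter matches on its own; the lower the count, the more selective the filter. A sketch in shell syntax, assuming the collection is named xml_db and with placeholder values:
db.xml_db.find({ "source": "some_value", "expiration_date": { "$gte": "2020-01-01" } }).count()
db.xml_db.find({ "source": "some_value", "high_performer": { "$nin": ["some_value"] } }).count()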
Assuming expiration_date is more selective, the index to create would then be:
{ "source" : 1, "expiration_date" : 1, "high_performer" : 1 }

Optimising queries in mongodb

I am working on optimising my queries in MongoDB.
In a normal SQL query there is an order in which the WHERE clauses are applied. For example, in select * from employees where department="dept1" and floor=2 and sex="male", first department="dept1" is applied, then floor=2, and lastly sex="male".
I was wondering whether something similar happens in MongoDB.
E.g.
DBObject search = new BasicDBObject("department", "dept1").append("floor", 2).append("sex", "male");
Here, which match clause will be applied first? Or does Mongo even work in this manner at all?
This question basically arises from my background with SQL databases.
Please help.
If there are no indexes, MongoDB has to scan the full collection (a collection scan) in order to find the required documents. In your case, if you want to filter on [department, floor, sex], you should create this compound index:
db.employees.createIndex( { "department": 1, "floor": 1, "sex" : 1 } )
From the documentation (https://docs.mongodb.org/manual/core/index-compound/):
db.products.createIndex( { "item": 1, "stock": 1 } )
The order of the fields in a compound index is very important. In the previous example, the index will contain references to documents sorted first by the values of the item field and, within each value of the item field, sorted by values of the stock field.
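Note that, unlike the SQL mental model in the question, the order of the predicates inside the query document does not matter; only the order of the fields in the index does. A quick way to confirm this (a sketch):
db.employees.find({ "sex": "male", "floor": 2, "department": "dept1" }).explain()
// despite the reversed predicate order, the winning plan should still be
// an IXSCAN on { department: 1, floor: 1, sex: 1 }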

MongoDB select subdocument with aggregation function

I have a MongoDB collection that looks something like this:
{
    _id: ObjectId('aabbccddeeff'),
    objectName: 'MyFirstObject',
    objectLength: 0xDEADBEEF,
    objectSource: 'Source1',
    accessCounter: {
        'firstLocationCode' : 283,
        'secondLocationCode' : 543,
        'ThirdLocationCode' : 564,
        'FourthLocationCode' : 12
    }
}
...
Now, assuming that this is not the only record in the collection, and that most/all of the documents contain the accessCounter subdocument/field, how would I go about selecting the first X documents with the most accesses from a specific location?
A sample "query" will be something like:
"Select the first 10 documents From myCollection where the accessCounter.firstLocationCode are the highest"
So a sample result will be X documents where the accessCounter. will be the greatest is the database.
Thank your for taking the time to read my question.
No need for an aggregation, that is a basic query:
db.collection.find().sort({"accessCounter.firstLocationCode": -1}).limit(10)
In order to speed this up, you should create a subdocument index on accessCounter first:
db.collection.ensureIndex({'accessCounter':-1})
assuming you want to do the same query for all locations. In case you only want to query firstLocationCode, create the index on accessCounter.firstLocationCode instead.
You can speed this up further in case you only need the accessCounter value by making this a so-called covered query, a query whose returned values come from the index itself. For example, when you have the field indexed and you query for the top secondLocationCode values, you should be able to do a covered query with:
db.collection.find({}, {_id: 0, "accessCounter.secondLocationCode": 1})
    .sort({"accessCounter.secondLocationCode": -1}).limit(10)
which translates to "get all documents ({}), don't return the _id field as you would by default (_id: 0), return only the accessCounter.secondLocationCode field (accessCounter.secondLocationCode: 1), sort the returned values in descending order, and give me the first ten."
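Note that for the query above to actually be covered, the index has to be on the specific dotted field, and covering fields inside embedded documents requires a reasonably recent server (MongoDB 3.6+). A sketch:
db.collection.createIndex({ "accessCounter.secondLocationCode": -1 })
// with this index in place, the find/sort/limit above can be answered
// from the index alone, without fetching any documents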

Map reduce to delete duplicates (mongodb)

I have created a map-reduce function to get all documents along with their counts.
I now need to remove all the duplicates. How should I do it?
res = col.map_reduce(map, reduce, "my_results")
Gives output like:
{u'_id': u'http://www.hardassetsinvestor.com/features/5485-soft-commodity-q4-report-low-inventories-buoy-cocoa-growing-stocks-weigh-on-coffee-cotton-a-sugar.html', u'value': 2.0}
{u'_id': u'http://www.hardassetsinvestor.com/market-monitor-archive/5490-week-in-review-gold-a-silver-kick-off-2014-strongly-oil-a-natgas-stall.html', u'value': 2.0}
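For reference, a typical counting map-reduce that produces output like this might look as follows in the shell (a sketch, assuming the duplicated field is called url and the source collection is realCollection):
var mapFn = function() { emit(this.url, 1); };
var reduceFn = function(key, values) { return Array.sum(values); };
db.realCollection.mapReduce(mapFn, reduceFn, { out: "my_results" });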
Assuming you don't care which duplicate gets removed, an easy approach is to ensure a unique index with dropDups:true.
For example, assuming a field name of url:
db.collection.ensureIndex( { url: 1 }, { unique: true, dropDups: true } )
Important note from the dropDups documentation:
As in all unique indexes, if a document does not have the indexed field, MongoDB will include it in the index with a “null” value.
If subsequent documents do not have the indexed field, and you have set {dropDups: true}, MongoDB will remove these documents from the collection when creating the index. If you combine dropDups with the sparse option, this index will only include documents in the index that have the value, and the documents without the field will remain in the database.
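For example, combining both options so that documents missing the field are left alone (a sketch; note that dropDups only applies while the index is being built, and the option was removed in MongoDB 3.0):
db.realCollection.ensureIndex({ "url": 1 }, { unique: true, dropDups: true, sparse: true })
// documents without a url field stay in the collection; among documents
// sharing the same url, only the first one encountered is kept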
You would write a small script to do this, e.g. in the shell:
db.my_results.find().forEach(function(doc){
    if (doc.value > 1)
        db.realCollection.remove({_id: doc._id}, true);
});
The final true makes remove delete at most one matching document.
Edit
Adding Python since the above code is hard to translate:
for doc in db.my_results.find():
    if doc['value'] > 1:
        # _id is unique, so this removes at most one document
        db.realCollection.remove({'_id': doc['_id']})