I'm new to MongoDB, but I really need to update some fields like these:
db_name: test
table_name: info
{
"_id" : ObjectId("54bf9ab4f8eda6747567b122"),
"archives" : [
{
"evalution" : {
"positive" : 0,
"undefine" : 0,
"negative" : 0
},
"source" : ObjectId("54cb6f455decd8037528756b")
}
]
}
I want to increase positive and undefine by 1, and if evalution doesn't exist,
"evalution" : {
"positive" : 0,
"undefine" : 0,
"negative" : 0
}
should be added to the object.
I don't know if I'm expressing myself clearly, but I really need some help.
If evalution doesn't exist, you can create a sub-document using the following query.
db.test.update(
{_id:ObjectId("54bf9ab4f8eda6747567b122"),
"archives.source" : ObjectId("54cb6f455decd8037528756b"),
"archives.evalution":{$exists:false}},
{$set:{"archives.$.evalution":{positive:1,negative:0,undefine:1}}}
)
NOTE: The positional operator only goes one level deep, so you cannot increment the values unless you know the index of the archives array element.
You can get the index of the archives array element and increment it using the following command. In this case the index is 0, where
source = ObjectId("54cb6f455decd8037528756b")
db.test.update(
{_id:ObjectId("54bf9ab4f8eda6747567b122")},
{$inc:{"archives.0.evalution.positive":1}}
)
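If you need to look that index up from the shell first, here is a rough sketch (the tojson comparison is just one shell-friendly way to compare the two ObjectIds):
var doc = db.test.findOne({ _id: ObjectId("54bf9ab4f8eda6747567b122") });
var idx = -1;
doc.archives.forEach(function(a, i) {
    // remember the position of the element whose source matches
    if (tojson(a.source) === tojson(ObjectId("54cb6f455decd8037528756b"))) idx = i;
});
// build the dotted paths dynamically from the index just found
var inc = {};
inc["archives." + idx + ".evalution.positive"] = 1;
inc["archives." + idx + ".evalution.undefine"] = 1;
db.test.update({ _id: doc._id }, { $inc: inc });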
It would help to know what the semantics of the update are. Why are you incrementing those two fields as if keeping a count, yet, if the fields don't exist, want the defaults set to 0 (as opposed to, say, 1)? Is there a semantic difference between "evaluation doesn't exist" and the value
{
"positive" : 0,
"undefine" : 0,
"negative" : 0
}
for evaluation? Or between "evaluation doesn't exist" and
{
"positive" : -1,
"undefine" : -1,
"negative" : 0
}
for the value? It seems like no, for at least one of those default values, and so you should set evaluation to have the appropriate default value on insert of the document or of the array element. If you do that, your update is simple:
db.info.update(
{
"_id" : ObjectId("54bf9ab4f8eda6747567b122"),
"archives.source" : ObjectId("54cb6f455decd8037528756b")
},
{
"$inc" : {
"archives.$.evalution.positive" : 1,
"archives.$.evalution.undefine" : 1,
}
}
)
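If the matching archives element itself might be missing, here is a hedged sketch of one way to seed it with zeroed counters first (the $ne guard is an assumption about how you want to avoid duplicate elements), so the $inc above always has fields to act on:
db.info.update(
    {
        "_id" : ObjectId("54bf9ab4f8eda6747567b122"),
        "archives.source" : { "$ne" : ObjectId("54cb6f455decd8037528756b") }
    },
    {
        "$push" : {
            "archives" : {
                "source" : ObjectId("54cb6f455decd8037528756b"),
                "evalution" : { "positive" : 0, "undefine" : 0, "negative" : 0 }
            }
        }
    }
)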
Also, you have some typos, I think:
evalution probably means evaluation or possibly evolution?
the proper complement to positive and negative is undefined
Hello, I have objects like this one in the db:
{ "_id" : ObjectId("56d9fc68bb9dcdcc9f73e6b7"), "ApplicationId" : 9, "CreatedDateUtc" : ISODate("2016-01-01T21:26:57.116Z"), "Message" : "yolo", "EventType" : 4 }
I will search by the ApplicationId and CreatedDateUtc (optionally EventType) fields, so I made a composite index (I kept the default index on the _id field intact):
{
"v" : 1,
"key" : {
"ApplicationId" : 1,
"CreatedDateUtc" : -1
},
"name" : "ApplicationId_1_CreatedDateUtc_-1",
"ns" : "test.testLogs"
}
Is it a good idea to use a field that is almost always unique (the date) as part of the index? If I understand the whole index idea correctly, this approach will bloat the index, making it harder to find things fast. Am I correct?
With 770k entries I have a ~19 MB index. I have no idea whether that is large or not, but it seems big.
> db.testLogs.count()
770999
> db.testLogs.totalIndexSize()
18952192
I was thinking about making a field that is unique for each hour (maybe the date with the minutes and smaller parts floored) and using it for indexing. Any better ideas?
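For reference, a rough sketch of what I have in mind (CreatedHourUtc is a made-up field name):
// copy the date floored to the hour into a separate field, then index that instead
db.testLogs.find().forEach(function(doc) {
    var hour = new Date(doc.CreatedDateUtc);
    hour.setUTCMinutes(0, 0, 0); // zero out minutes, seconds and milliseconds
    db.testLogs.update({ _id: doc._id }, { $set: { CreatedHourUtc: hour } });
});
db.testLogs.createIndex({ ApplicationId: 1, CreatedHourUtc: -1 });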
I am attempting to build a query, run from the Mongo shell client, that will allow access to the following element of a hash within a hash within a hash.
Here is the structure of the data:
"_id" : ObjectId("BSONID"),
"e1" : "value",
"e2" : "value",
"e3" : "value"),
"updated_at" : ISODate("2015-08-31T21:04:37.669Z"),
"created_at" : ISODate("2015-01-05T07:20:17.833Z"),
"e4" : 62,
"e5" : {
"sube1" : {
"26444745" : {
"subsube1" : "value",
"subsube2" : "value",
"subsube3" : "value I am looking for",
"subsube4" : "value",
"subsube5" : "value"
},
"40937803" : {
"subsube1" : "value",
"subsube2" : "value",
"subsube3" : "value I am looking for",
"subsube4" : "value",
"subsube5" : "value"
},
"YCPGF5SRTJV2TVVF" : {
"subsube1" : "value",
"subsube2" : "value",
"subsube3" : "value I am looking for",
"subsube4" : "value",
"subsube5" : "value"
}
}
}
}
So I have tried dotted notation, based on a suggestion for "diving" into a wildcard-named hash, using db.my_collection.find({"e5.sube1.subsube4": "value I am looking for"}), which keeps coming back with an empty result set. I have also tried the find with a regex match instead of an exact value, using /value I am lo/, and still get an empty result set. I know there is at least one document which has the "value I am looking for".
Any ideas? Note that I am restricted to using the Mongo shell client.
Thanks.
So, since this cannot be turned into a JavaScript/mongo shell array, I will go to plan B, which is to write some code (be it Perl or Ruby), pull the result set into an array of hashes, and walk each document/sub-document.
Thanks Mario for the help.
You have two issues:
You're missing one level.
You are checking subsube4 instead of subsube3
Depending on what subdocument of sube1 you want to check, you should do
db.my_collection.find({"e5.sube1.26444745.subsube4": "value I am looking for"})
or
db.my_collection.find({"e5.sube1.40937803.subsube4": "value I am looking for"})
or
db.my_collection.find({"e5.sube1.YCPGF5SRTJV2TVVF.subsube4": "value I am looking for"})
You could use the $or operator if you want to look in any one of the three.
If you don't know the keys of your documents, that's an issue with your schema design: you should use arrays instead of objects. Similar case: How to query a dynamic key - mongodb schema design
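For illustration, a hedged sketch of what an array-based shape might look like (the key field name here is made up); with that shape a single dotted query covers every element, whatever its key:
{
    "e5" : {
        "sube1" : [
            { "key" : "26444745", "subsube3" : "value I am looking for" },
            { "key" : "40937803", "subsube3" : "some other value" }
        ]
    }
}
// one query then matches any element:
db.my_collection.find({ "e5.sube1.subsube3" : "value I am looking for" })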
EDIT
Since you explain that you only need to know the count of "value I am looking for" once, we can run a map-reduce. You can run these commands in the shell.
Define map function
var iurMapFunction = function() {
    // skip documents that do not have the nested e5.sube1 object
    if (!this.e5 || !this.e5.sube1) return;
    for (var key in this.e5.sube1) {
        if (this.e5.sube1[key].subsube3 == "value I am looking for") {
            var value = {
                count: 1,
                subkey: key
            };
            emit(key, value);
        }
    }
};
Define reduce function
var iurReduceFunction = function(keys, countObjVals) {
    var reducedVal = {
        count: 0
    };
    // sum the per-emit counts for this key
    for (var idx = 0; idx < countObjVals.length; idx++) {
        reducedVal.count += countObjVals[idx].count;
    }
    return reducedVal;
};
Run mapreduce command
db.my_collection.mapReduce(
    iurMapFunction,
    iurReduceFunction,
    { out: { replace: "map_reduce_result" } }
);
Find your counts
db.map_reduce_result.find()
This should give you, for each dynamic key in your object, the number of times it had an embedded field subsube3 with the value "value I am looking for".
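Alternatively, if your server is 3.4.4 or newer, here is a hedged sketch of the same count using $objectToArray in the aggregation pipeline instead of map-reduce:
db.my_collection.aggregate([
    // turn the dynamic keys of e5.sube1 into an array of {k, v} pairs
    { $project : { pairs : { $objectToArray : "$e5.sube1" } } },
    { $unwind : "$pairs" },
    { $match : { "pairs.v.subsube3" : "value I am looking for" } },
    // count matches per dynamic key
    { $group : { _id : "$pairs.k", count : { $sum : 1 } } }
])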
I have a collection like this
{
"_id" : ObjectId("54368d9125c3dc7c1f43295f"),
"nome" : "John",
"eta" : 30,
"data" : ISODate("2014-10-09T10:30:00.000Z")
}
{
"_id" : ObjectId("54368d9c25c3dc7c1f432960"),
"nome" : "Paul",
"eta" : 31
}
And I run this query
db.coll.find({eta:{$gt:30}})
My result is one document (Paul)
db.coll.find({eta:{$gt:30}}).count() //1
If I do
db.coll.find({eta:{$gt:30}}).skip(1)
I get no result, and that's fine.
But if I do this
db.coll.find({eta:{$gt:30}}).skip(1).count()
my result is 1.
From the documentation for count():
By default, the count() method ignores the effects of the cursor.skip() and cursor.limit(). Set applySkipLimit to true to consider the effect of these methods.
So you can supply an optional parameter named applySkipLimit to count(), if you want the effect of skip() to be considered, like this:
db.coll.find({eta:{$gt:30}}).skip(1).count({applySkipLimit:1});
or simply
db.coll.find({eta:{$gt:30}}).skip(1).count(true);
Use size instead of count as it includes the effects of any skip and limit calls on the cursor:
db.coll.find({eta:{$gt:30}}).skip(1).size()
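For a quick sanity check in the shell, itcount() also iterates the cursor, so it respects skip() and limit() as well:
db.coll.find({eta:{$gt:30}}).skip(1).itcount() // 0 with the sample data above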
I have got records in my collection as shown below
{
"_id" : ObjectId("53722c39e4b04a53021cf3c6"),
"symbol" : "AIA",
"tbq" : 1356,
"tsq" : 0,
"tquan" : 6831336,
"tvol" : 17331.78,
"bquantity" : 1356,
"squantity" : 0
}
{
"_id" : ObjectId("53722c38e4b04a53021cf3c1"),
"symbol" : "SAA",
"tbq" : 0,
"tsq" : 9200,
"tquan" : 6036143,
"tvol" : 50207.43,
"bquantity" : 0,
"squantity" : 9200
}
I am displaying the results in ascending order of bquantity, and at the same time I want to display only certain columns in the result (symbol, bquantity, squantity) and ignore the rest.
I tried the query below, but it still displays all the fields.
Please tell me how I can eliminate those fields from the result.
db.stock.find().sort(
{"bquantity":-1},
{
symbol: 1,
bquantity: 1,
squantity:1 ,
_id:0,
tbq:0,
tsq:0,
tquan:0,
tvol:0
}
)
The field filter is a parameter to the find function, not to the sort function, i.e.:
db.stock.find({}, { _id:0,tbq:0, tsq:0,tquan:0,tvol:0}).sort({"bquantity":-1})
The empty hash used as the first parameter to find is required as an 'empty query' that matches all documents in stock.
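With the two sample documents above, this should print something like:
{ "symbol" : "AIA", "bquantity" : 1356, "squantity" : 0 }
{ "symbol" : "SAA", "bquantity" : 0, "squantity" : 9200 }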
I have a Mongo find query that works well to extract specific fields from a large document like...
db.profiles.find(
{ "profile.ModelID" : 'LZ241M4' },
{
_id : 0,
"profile.ModelID" : 1,
"profile.AVersion" : 2,
"profile.SVersion" : 3
}
);
...this produces the following output. Note how the SVersion comes before the AVersion in the document even though my projection asked for AVersion before SVersion.
{ "profile" : { "ModelID" : "LZ241M4", "SVersion" : "3.5", "AVersion" : "4.0.3" } }
{ "profile" : { "ModelID" : "LZ241M4", "SVersion" : "4.0", "AVersion" : "4.0.3" } }
...the problem is that I want the output to be...
{ "profile" : { "ModelID" : "LZ241M4", "AVersion" : "4.0.3", "SVersion" : "3.5" } }
{ "profile" : { "ModelID" : "LZ241M4", "AVersion" : "4.0.3", "SVersion" : "4.0" } }
What do I have to do get the Mongo JavaScript shell to present the results of my query in the field order that I specify?
I have achieved it by projecting the fields using aliases, instead of including and excluding with 0s and 1s.
Try this:
{
_id : 0,
"profile.ModelID" :"$profile.ModelID",
"profile.AVersion":"$profile.AVersion",
"profile.SVersion":"$profile.SVersion"
}
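For context, here is a hedged sketch of that projection inside an aggregation pipeline (expression-style projections such as "$profile.AVersion" are an aggregation feature; plain find() only accepts them on newer server versions):
db.profiles.aggregate([
    { $match : { "profile.ModelID" : "LZ241M4" } },
    { $project : {
        _id : 0,
        "profile.ModelID" : "$profile.ModelID",
        "profile.AVersion" : "$profile.AVersion",
        "profile.SVersion" : "$profile.SVersion"
    } }
])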
I get it now. You want the results ordered by "fields" rather than by the value of a field.
The simple answer is that you can't do this. Maybe it's possible with the new aggregation framework, but that seems overkill just to order fields.
The second object in a find query is for including or excluding returned fields, not for ordering them.
{
_id : 0, // 0 means exclude this field from results
"profile.ModelID" : 1, // 1 means include this field in the results
"profile.AVersion" :2, // 2 means nothing
"profile.SVersion" :3, // 3 means nothing
}
Last point: you shouldn't need to do this; who cares what order the fields come back in? Your application should be able to make use of the fields it needs regardless of the order they are in.
Another solution I applied to achieve this is the following:
db.profiles
.find({ "profile.ModelID" : 'LZ241M4' })
.toArray()
.map(doc => ({
profile: {
ModelID: doc.profile.ModelID,
AVersion: doc.profile.AVersion,
SVersion: doc.profile.SVersion
}
}))
Since version 2.6 (released in 2014), MongoDB preserves the order of document fields following write operations (source).
P.S. If you are using Python you might find this interesting.