I'm using MongoDB for the first time and I've hit a very weird bug. I have a 'games' collection and I can't search it with an _id query.
I tried directly in the mongo shell, and this is the result:
> db.games.count()
0
> db.games.insert({created:'ok'})
WriteResult({ "nInserted" : 1 })
> db.games.find()
{ "_id" : ObjectId("54f7364d1f2f9378d7a5ddde"), "created" : "ok" }
> db.games.findOne({_id:'54f7364d1f2f9378d7a5ddde'})
null
> db.games.find({_id:'54f7364d1f2f9378d7a5ddde'})
>
I really don't know what is going on. I suspected a weird index on _id, but I found nothing:
> db.games.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "cobra.games"
}
]
>
This may help:
> db.version()
2.6.5
Does anyone have an idea?
Try:
db.games.find({"_id" : ObjectId("54f7364d1f2f9378d7a5ddde")})
The value stored under the _id key is an ObjectId, but you were searching for a String, so the types did not match. MongoDB does not auto-cast a String to an ObjectId. The same error can also happen if you store your numbers as a Long but use an Integer in your query.
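For illustration, a minimal sketch of the fix when the id arrives as a string (from a URL parameter, say); idString is a hypothetical variable name:
var idString = "54f7364d1f2f9378d7a5ddde";      // hypothetical: a string id from app code
db.games.findOne({ _id: ObjectId(idString) });  // wrapping it in ObjectId makes the types match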
I have a collection whose documents look like {m:1, n:3}, and I want to update it so each document looks like {m:1, n:3, s:4}, where s = m + n. To do that, I tried
db.data.update({}, {$set: {s: (m + n)}});
but it doesn't work, and the other things I tried haven't produced a solution either. How can I achieve this?
With aggregation you can use $addFields and then $out the result back to the collection:
db.col.aggregate([
{$addFields : {s : {$add:["$m", "$n"]}}},
{$out : "col"}
])
For MongoDB versions below 3.4, where $addFields is not available, use $project instead:
db.col.aggregate([
{$project : {m:1, n:1, s : {$add:["$m", "$n"]}}},
{$out : "col"}
])
Result:
> db.col.drop()
true
> db.col.insert({m:2,n:4})
WriteResult({ "nInserted" : 1 })
> db.col.find()
{ "_id" : ObjectId("5a911efc6bb20635697c6b17"), "m" : 2, "n" : 4 }
> db.col.aggregate([{$addFields : {s : {$add:["$m", "$n"]}}},{$out : "col"}])
> db.col.find()
{ "_id" : ObjectId("5a911efc6bb20635697c6b17"), "m" : 2, "n" : 4, "s" : 6 }
>
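As an aside: on MongoDB 4.2 or newer (much newer than this answer requires), an update with an aggregation pipeline can compute the field in place, without rewriting the collection through $out; a minimal sketch:
// MongoDB 4.2+ only: the second argument is an aggregation pipeline
db.col.updateMany({}, [ { $set: { s: { $add: ["$m", "$n"] } } } ])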
Sorry for the basic question, I'm new to mongo and learning my way around.
I have run an aggregate query in Mongo:
> var result = db.urls.aggregate({$group : {_id : "$pagePath"} });
> result
{ "_id" : "/section1/page1" }
{ "_id" : "/section1/page2" }
...
Type "it" for more
I would now like to save the results of this aggregate query into a new collection. This is what I've tried:
> db.agg1.insert(result);
WriteResult({ "nInserted" : 1 })
But this seems to have inserted all the rows as just one row:
> db.agg1.count()
1
> db.agg1.findOne();
{ "_id" : "/section1/page1" }
{ "_id" : "/section1/page2" }
...
Type "it" for more
How can I insert these as separate rows?
I've tried inserting the _id directly, without success:
> db.agg1.insert(result._id);
2014-12-17T15:23:26.679+0000 no object passed to insert! at src/mongo/shell/collection.js:196
Use the $out pipeline operator for that:
db.urls.aggregate([
{$group : {_id : "$pagePath"} },
{$out: "agg1"}
]);
Note that $out was added in MongoDB 2.6.
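If you would rather append to an existing collection than replace it ($out overwrites its target collection), you can also iterate the cursor yourself; a minimal sketch, assuming a 2.6+ shell where aggregate returns a cursor:
db.urls.aggregate([ { $group: { _id: "$pagePath" } } ]).forEach(function (doc) {
    db.agg1.insert(doc);   // each grouped result becomes its own document
});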
Using MongoDB version 2.4.4, I have a profile collection containing profile documents.
I have the following query:
Query: { "loc" : { "$near" : [ 32.08290052711715 , 34.80888522811172] , "$maxDistance" : 0.0089992800575954}}
Fields: { "friendsCount" : 1 , "tappsCount" : 1 , "imageUrl" : 1 , "likesCount" : 1 , "lastActiveTime" : 1 , "smallImageUrl" : 1 , "loc" : 1 , "pid" : 1 , "firstName" : 1}
Sort: { "lastActiveTime" : -1}
Limited to 100 documents.
loc is an embedded document containing the keys lat and lon.
I am getting the exception:
org.springframework.data.mongodb.UncategorizedMongoDbException: too much data for sort() with no index. add an index or specify a smaller limit;
As the exception suggests, when I reduce the limit to 50 it works, but that is not an option for me.
I have the following two relevant indexes on the profile collection:
{'loc':'2d'}
{'lastActiveTime':-1}
I have also tried a compound index, as below, but without success:
{'loc':'2d', 'lastActiveTime':-1}
This is an example document (with the relevant keys):
{
"_id" : "5d5085601208aa918bea3c1ede31374d",
"gender" : "female",
"isCreated" : true,
"lastActiveTime" : ISODate("2013-04-08T11:30:56.615Z"),
"loc" : {
"lat" : 32.082230499955806,
"lon" : 34.813542940344945,
"locTime" : NumberLong(0)
}
}
There are other fields in the profile documents; the average profile document size is about 0.5 MB. Correct me if I am wrong, but since I am projecting only the relevant response fields (as above), document size should not be the cause of the problem.
I don't know if it helps, but when I reduce the limit to 50 and the query succeeds, I get the following explain information (via the MongoVUE client):
cursor : GeoSearchCursor
isMultiKey : False
n : 50
nscannedObjects : 50
nscanned : 50
nscannedObjectsAllPlans : 50
nscannedAllPlans : 50
scanAndOrder : True
indexOnly : False
nYields : 0
nChunkSkips : 0
millis : 10
indexBounds :
This is a blocker for me and I would appreciate your help. What am I doing wrong? How can I make the query run with the needed limit?
Try creating a compound index instead of two indexes.
db.collection.ensureIndex( { 'loc':'2d','lastActiveTime':-1 } )
You can also tell the query which index to use:
db.collection.find(...).hint('myIndexName')
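hint() also accepts the index key pattern instead of the index name; a minimal sketch combining it with the query from the question (profile is the assumed collection name):
db.profile.find(
    { loc: { $near: [32.08290052711715, 34.80888522811172], $maxDistance: 0.0089992800575954 } }
).sort({ lastActiveTime: -1 }).limit(100).hint({ loc: '2d', lastActiveTime: -1 })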
Cannot find an entry by specifying ts.t (ts is a Timestamp type)
Digging through the oplog, I want to figure out how many operations there are in a given second.
I cannot find an entry by specifying the timestamp field; queries on other fields work fine.
In mongo shell:
> db.oplog.rs.findOne()
{
"ts" : {
"t" : 1335200998000,
"i" : 540
},
"h" : NumberLong("4405509386688070776"),
"op" : "i",
"ns" : "new_insert",
"o" : {
"_id" : ObjectId("4f958fad55ba26db6a000a8b"),
"username" : "go9090",
"message" : "hello, test.",
}
}
> db.oplog.rs.find().count()
419583
> db.oplog.rs.test.find({"ts.t":1335200998000}).count()
0
> db.oplog.rs.test.find({"ts.t":/^1335200998/}).count()
0
> db.oplog.rs.test.find({ts:{ "t" : 1335200998000, "i" : 540 }}).count()
0
I believe the ts field is actually a Timestamp; the shell just tries to simplify its display for you (which does make it very misleading). You can run the query like this and it should work:
db.oplog.rs.find({ ts: Timestamp(1335200998000, 540)});
You can use $gte and $lte as normal:
db.oplog.rs.find({ ts: {$gte: Timestamp(1335100998000, 1)}});
db.oplog.rs.find({ ts: {$lte: Timestamp(1335900998000, 1)}});
The second argument is an incremental ordinal for operations within a given second.
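To count the operations within a single second, which was the original goal, you can bound ts between that second and the next; a minimal sketch that keeps the millisecond-style t values shown in the output above:
db.oplog.rs.find({
    ts: { $gte: Timestamp(1335200998000, 0), $lt: Timestamp(1335200999000, 0) }
}).count();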
You're simply using ".test" when you shouldn't be. The following works:
db.oplog.rs.find( {'ts.t': 1335200998000 } );
When I call ensureIndex from the mongo shell on a collection for a compound index an _id field of type ObjectId is auto-generated in the index object.
> db.system.indexes.find();
{ "name" : "_id_", "ns" : "database.coll", "key" : { "_id" : 1 } }
{ "_id" : ObjectId("4ea78d66413e9b6a64c3e941"), "ns" : "database.coll", "key" : { "a.b" : 1, "a.c" : 1 }, "name" : "a.b_1_a.c_1" }
This makes intuitive sense, as all documents in a collection need an _id field (even system.indexes, right?), but when I check the indexes generated by Morphia's ensureIndex call for the same collection, *there is no _id property*.
Looking at Morphia's source code, it's clear that it calls the same code the shell uses, but for some reason (whether it's the fact that I'm creating a compound index, indexing an embedded document, or both) they produce different results. Can anyone explain this behavior to me?
Not exactly sure how you managed to get an _id field in the indexes collection, but neither shell-originated nor Morphia-originated ensureIndex calls for compound indexes put an _id field in the index object:
> db.test.ensureIndex({'a.b':1, 'a.c':1})
> db.system.indexes.find({})
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "test.test", "name" : "_id_" }
{ "v" : 1, "key" : { "a.b" : 1, "a.c" : 1 }, "ns" : "test.test", "name" : "a.b_1_a.c_1" }
>
Upgrade to 2.x if you're running an older version, to avoid running into now-resolved issues. Judging from your output, you are running 1.8 or earlier.
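As a side note: on modern servers, querying system.indexes directly is deprecated; the getIndexes() helper, as used in the first question above, is the supported way to inspect a collection's indexes:
db.test.getIndexes()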