When I used MongoDB aggregation with BIRT, I got the error below:
org.eclipse.datatools.connectivity.oda.OdaException:
Unable to run the Aggregate command operation.
Check that your connected MongoDB server is version 2.2 or later. ;
com.mongodb.CommandResult$CommandFailure: command failed [aggregate]:
{ "serverUsed" : "/xxx.xx.xx.xxx:27017" ,
"errmsg" : "exception: aggregation result exceeds maximum document size (16MB)" ,
"code" : 16389 , "ok" : 0.0 ,
"$gleStats" : { "lastOpTime" : { "$ts" : 0 , "$inc" : 0} ,
"electionId" : { "$oid" : "557cd07784d145278edfba15"}}}
Yes, you can run MongoDB aggregation queries from BIRT. The error itself means the aggregation result exceeded the 16MB maximum document size: before MongoDB 2.6, the aggregate command returned the entire result set as a single document, so check your MongoDB server version.
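As an illustration, here is a minimal sketch from the mongo shell, assuming a MongoDB 2.6+ server and a placeholder collection name mycollection:
// Cursor output (MongoDB 2.6+) streams results in batches instead of
// one 16MB reply document; allowDiskUse lets large pipeline stages
// spill to temporary files.
db.mycollection.aggregate(
    [ { "$group" : { "_id" : "$status", "count" : { "$sum" : 1 } } } ],
    { "cursor" : { "batchSize" : 100 }, "allowDiskUse" : true }
)
The same options exist on the underlying aggregate database command, which is what the BIRT connector appears to issue, given the "Aggregate command operation" wording in the error.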
I have a MongoDB collection with schema validation.
I executed db.correspondence.validate({full:true}) and received "nInvalidDocuments" : NumberLong(0)
I am able to insert a document, but the update is failing.
MongoDB Enterprise > db.correspondence.find({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"}).count()
8
MongoDB Enterprise > db.correspondence.insert({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000",mdmContractIdentifier:'3334444444444444','name':'Vat'});
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise > db.correspondence.find({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"}).count()
9
MongoDB Enterprise > db.correspondence.find({mdmContractIdentifier:'3334444444444444'}).count()
2
MongoDB Enterprise > db.correspondence.updateOne({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"},{$set:{mdmContractIdentifier:'3334444444444444'}});
2020-08-05T20:50:28.164-0400 E QUERY [thread1] WriteError: Document failed validation :
WriteError({
"index" : 0,
"code" : 121,
"errmsg" : "Document failed validation",
"op" : {
"q" : {
"correspondenceIdentifier" : "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"
},
"u" : {
"$set" : {
"mdmContractIdentifier" : "3334444444444444"
}
},
"multi" : false,
"upsert" : false
}
})
WriteError#src/mongo/shell/bulk_api.js:466:48
Bulk/mergeBatchResults#src/mongo/shell/bulk_api.js:846:49
Bulk/executeBatch#src/mongo/shell/bulk_api.js:910:13
Bulk/this.execute#src/mongo/shell/bulk_api.js:1154:21
DBCollection.prototype.updateOne#src/mongo/shell/crud_api.js:572:17
#(shell):1:1
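A diagnostic step worth trying (my suggestion; the validator itself is not shown in the post): dump the collection's validator, validationLevel, and validationAction. With the default strict level, every update must leave the document fully valid, so an old document that no longer matches the current schema can reject a $set even though inserts of new, conforming documents succeed.
// Show the validator and validation settings for the collection
// (db.getCollectionInfos accepts a filter since MongoDB 3.0).
db.getCollectionInfos({ "name" : "correspondence" })[0].options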
I am trying to get the count of entries in the collection collectionUID, grouped by the field status, using the commands suggested at https://stackoverflow.com/a/23116396/2806163
Sample document in the collection:
{ "_id" : ObjectId("b....5"), "status" : "processing", "product_id" : "a...2", "destination" : { }, ..., "request_id" : "b...d", "timestamp" : "1536028784.0797083", "userID" : "Daksh" }
Error: TypeError: cmd.cursor is undefined
> db.collectionUID.aggregate([
... {"$group" : {_id:"$status", count:{$sum:1}}}
... ])
2018-09-18T18:57:41.983+0530 E QUERY [thread1] TypeError: cmd.cursor is undefined :
DBCollection.prototype.aggregate#src/mongo/shell/collection.js:1322:1
#(shell):1:1
>
PS:
mongo --version: MongoDB shell version v3.4.10-4-g67ee356c6b
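A likely cause, offered as an assumption rather than a confirmed diagnosis: the 3.4 shell expects the aggregate command to return a cursor document, and servers older than 2.6 reply with a plain result array instead, so this error usually means the shell is newer than the server it is connected to. You can compare the two from the same session:
version()      // version of the mongo shell itself
db.version()   // version of the server you are connected to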
I am getting the following exception when I try to fetch data from a MongoDB collection. The collection holds a very large amount of data.
The exception is:
com.mongodb.MongoQueryException: Query failed with error code 10334 and error message 'BSONObj size: 24020168 (0x16E84C8) is invalid. Size must be between 0 and 16793600(16MB)' on server 10.15.0.227:27017
The following is the query I used to get the data from MongoDB:
db.getCollection('triggered_policies').aggregate([
    { "$match" : { "policy_name" : "EIQSOC-1040-ec" } },
    { "$project" : {
        "cust_created_at" : { "$add" : [ "$created_at", 19800000 ] },
        "event_ids" : "$event_ids",
        "trigger_time" : "$trigger_time",
        "created_at" : "$created_at",
        "triggered_rules" : "$triggered_rules"
    } },
    { "$sort" : { "created_at" : -1 } },
    { "$group" : {
        "_id" : { "$hour" : "$cust_created_at" },
        "triggered_policies" : { "$addToSet" : {
            "trigger_time" : "$trigger_time",
            "created_at" : "$created_at",
            "event_ids" : "$event_ids",
            "triggered_rules" : "$triggered_rules"
        } }
    } },
    { "$sort" : { "_id" : 1 } }
])
The following is the exact exception we are getting:
Error: getMore command failed: {
"ok" : 0,
"errmsg" : "BSONObj size: 25994482 (0x18CA4F2) is invalid. Size must be between 0 and 16793600(16MB)",
"code" : 10334
}
Please help us to solve the issue.
It looks like a document created during the aggregation exceeds the 16MB size restriction in MongoDB. The $group stage uses $addToSet to accumulate every matching document for a given hour into a single result document, so you may have to change the aggregate query so that it does not pack that much data into one document.
Below is the relevant quote from the MongoDB documentation:
BSON Document Size
The maximum BSON document size is 16 megabytes.
The maximum document size helps ensure that a single document cannot use excessive amount of RAM or, during transmission, excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
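One way to stay under the limit, sketched under the assumption that the per-hour event IDs and a count are enough (field names are taken from the question): accumulate only the IDs instead of whole sub-documents.
db.getCollection('triggered_policies').aggregate([
    { "$match" : { "policy_name" : "EIQSOC-1040-ec" } },
    { "$project" : {
        "cust_created_at" : { "$add" : [ "$created_at", 19800000 ] },
        "event_ids" : 1
    } },
    { "$group" : {
        "_id" : { "$hour" : "$cust_created_at" },
        // Collect only the event IDs; accumulating whole sub-documents
        // is what pushes a group past the 16MB per-document limit.
        "event_ids" : { "$addToSet" : "$event_ids" },
        "count" : { "$sum" : 1 }
    } },
    { "$sort" : { "_id" : 1 } }
])
Note that allowDiskUse only relieves memory pressure inside pipeline stages; it does not raise the 16MB cap on any single result document, so the $group output itself still has to stay small.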
I am running the following query in MongoDB:
db.acollection.find({
"field.location": {
"$near": [19.0723058, 73.00067739999997]
},
$maxDistance : 100000
}).count()
and getting the following error:
uncaught exception: count failed: {
"shards" : {
},
"cause" : {
"errmsg" : "exception: unknown top level operator: $maxDistance",
"code" : 2,
"ok" : 0
},
"code" : 2,
"ok" : 0,
"errmsg" : "failed on : Shard ShardA"
}
You did it wrong. The $maxDistance argument is a "child" of the $near operator:
db.acollection.find({
"field.location": {
"$near": [19.0723058, 73.00067739999997],
"$maxDistance": 100000
}
}).count()
It has to be within the same expression.
Also look at GeoJSON when you are building a new application; it is the format you should be using to store location data going forward.
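For illustration, a sketch of the GeoJSON form (my example, not from the answer above): store the point as a GeoJSON Point, index it with 2dsphere, and $maxDistance is then expressed in meters. GeoJSON coordinates are ordered [longitude, latitude], so the pair from the question is flipped here on the assumption that it was [latitude, longitude].
// Assumes the same collection and field as the question.
db.acollection.ensureIndex({ "field.location" : "2dsphere" })
db.acollection.find({
    "field.location" : {
        "$near" : {
            "$geometry" : { "type" : "Point",
                            "coordinates" : [73.00067739999997, 19.0723058] },
            "$maxDistance" : 100000   // meters with a 2dsphere index
        }
    }
}).count()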
Using MongoDB version 2.4.4, I have a profile collection containing profile documents.
I have the following query:
Query: { "loc" : { "$near" : [ 32.08290052711715 , 34.80888522811172] , "$maxDistance" : 0.0089992800575954}}
Fields: { "friendsCount" : 1 , "tappsCount" : 1 , "imageUrl" : 1 , "likesCount" : 1 , "lastActiveTime" : 1 , "smallImageUrl" : 1 , "loc" : 1 , "pid" : 1 , "firstName" : 1}
Sort: { "lastActiveTime" : -1}
Limited to 100 documents.
loc is an embedded document containing the keys (lat, lon).
I am getting the exception:
org.springframework.data.mongodb.UncategorizedMongoDbException: too much data for sort() with no index. add an index or specify a smaller limit;
As stated in the exception, when I reduce the limit to 50 it works, but that is not an option for me.
I have the following 2 relevant indexes on the profile document:
{'loc':'2d'}
{'lastActiveTime':-1}
I have also tried a compound index, as below, but without success.
{'loc':'2d', 'lastActiveTime':-1}
This is example document (with the relevant keys):
{
"_id" : "5d5085601208aa918bea3c1ede31374d",
"gender" : "female",
"isCreated" : true,
"lastActiveTime" : ISODate("2013-04-08T11:30:56.615Z"),
"loc" : {
"lat" : 32.082230499955806,
"lon" : 34.813542940344945,
"locTime" : NumberLong(0)
}
}
There are other fields in the profile documents; the average profile document size is about 0.5 MB. Correct me if I am wrong, but since I am requesting only the relevant response fields (as shown above), the projection should not be the cause of the problem.
I don't know if it helps, but when I reduce the limit to 50 and the query succeeds, I get the following explain information (via the MongoVUE client):
cursor : GeoSearchCursor
isMultiKey : False
n : 50
nscannedObjects : 50
nscanned : 50
nscannedObjectsAllPlans : 50
nscannedAllPlans : 50
scanAndOrder : True
indexOnly : False
nYields : 0
nChunkSkips : 0
millis : 10
indexBounds :
This is a blocker for me and I would appreciate your help. What am I doing wrong? How can I make the query run with the needed limit?
Try creating a compound index instead of two indexes.
db.collection.ensureIndex( { 'loc':'2d','lastActiveTime':-1 } )
You can also hint the query as to which index to use:
db.collection.find(...).hint('myIndexName')
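Putting the two suggestions together, a sketch under assumptions (a collection named profile; hint given as the index key pattern rather than a name): create the compound index and hint it on the original query. Whether the 2d geo cursor can use the index to satisfy the sort depends on the server version, so treat this as something to try rather than a guaranteed fix.
db.profile.ensureIndex({ "loc" : "2d", "lastActiveTime" : -1 })
db.profile.find(
    { "loc" : { "$near" : [32.08290052711715, 34.80888522811172],
                "$maxDistance" : 0.0089992800575954 } },
    { "lastActiveTime" : 1, "loc" : 1, "pid" : 1, "firstName" : 1 }
).sort({ "lastActiveTime" : -1 }).limit(100)
 .hint({ "loc" : "2d", "lastActiveTime" : -1 })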