Aggregations work in standalone but not in sharded cluster - mongodb

I'm currently trying out aggregations with MongoDB, using the JSON dataset found here: http://media.mongodb.org/zips.json
So I imported it (thousands of documents) and then tried this command:
db.CO_villes.aggregate([{ $group: { _id: "$state", population: { $sum: "$pop" } } }])
And I got this error:
2019-04-24T13:49:19.579+0000 E QUERY [js] Error: command failed: {
    "ok" : 0,
    "errmsg" : "unrecognized field 'mergeByPBRT'",
    "code" : 9,
    "codeName" : "FailedToParse",
    "operationTime" : Timestamp(1556113758, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1556113758, 2),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
} : aggregate failed :
I have a sharded cluster with 3 MongoDB instances.
I also hit this issue when I try to list the indexes with Compass.
I tried exporting the data, removing the _id field with the sed command (because my ids were not all ObjectIds), and re-importing it, but I still face the same error.

I solved my issue by creating a 3.6 cluster instead of a 4.0.6 one, so I think this is a bug (or an incompatibility) in the newer MongoDB versions.
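For what it's worth, an "unrecognized field" error on an aggregate often points at a version mismatch between the mongos router and the shard mongod processes (mergeByPBRT is an internal field that newer routers attach to the commands they forward to shards). A hedged way to compare versions across the cluster, assuming shell access to the router:

```javascript
// Compare versions across the cluster; they should all match
db.version()                                  // version of the node you are connected to
db.adminCommand({ buildInfo: 1 }).version     // same, via a server command
db.adminCommand({ listShards: 1 })            // on a mongos: shard hosts to check next
```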

Related

"show dbs" fails on mongo shell

This is the first time I am trying to connect to this mongo instance, which was set up by a former colleague. When I run "show dbs", I see this message:
rs0:SECONDARY> show dbs
2022-01-05T18:33:11.282+0000 E QUERY [js] uncaught exception: Error: listDatabases failed:{
    "operationTime" : Timestamp(1641407590, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1641407590, 1),
        "signature" : {
            "hash" : BinData(0,"...="),
            "keyId" : NumberLong("...")
        }
    }
} :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs/<#src/mongo/shell/mongo.js:135:19
Mongo.prototype.getDBs#src/mongo/shell/mongo.js:87:12
shellHelper.show#src/mongo/shell/utils.js:906:13
shellHelper#src/mongo/shell/utils.js:790:15
#(shellhelp2):1:1
rs0:SECONDARY>
Any ideas what could be wrong?
Thanks,
Jack
Following is a screenshot of how I got that failure.
You need to execute:
rs.slaveOk()
on the SECONDARY to allow show dbs after ...
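The error (NotMasterNoSlaveOk) means the node is a secondary and secondary reads are disabled for this session. A hedged sketch of the options, since rs.slaveOk() is deprecated in newer shells:

```javascript
rs.slaveOk()        // legacy mongo shell
rs.secondaryOk()    // mongo shell 4.4+ replacement
// mongosh uses a read preference instead:
db.getMongo().setReadPref("secondaryPreferred")
```

After any of these, show dbs and reads on the secondary should work for the rest of the session.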

MongoDB IAM Authentication : Cannot Authenticate using IAM Roles

I am unable to authenticate using the IAM roles that were added to the cluster.
Currently there is a role created and attached to the EC2 instance. I am using that same role while connecting to the DB, and when running any command I get the following error:
error: {
    "operationTime" : Timestamp(1635767862, 1),
    "ok" : 0,
    "errmsg" : "command find requires authentication",
    "code" : 13,
    "codeName" : "Unauthorized",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1635767862, 1),
        "signature" : {
            "hash" : BinData(0,"WibtM8VK2aorci9mA6QNyP/ummU="),
            "keyId" : NumberLong("7023742477949468676")
        }
    }
}
Has anyone ever faced an issue like this with IAM roles and MongoDB Atlas?
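"command find requires authentication" means the connection was opened without authenticating at all, so the AWS credentials were never exchanged. For IAM authentication the client must explicitly request the MONGODB-AWS mechanism against the $external database; a hedged sketch (the cluster hostname is a placeholder):

```shell
# On an EC2 instance with an attached role, the driver/shell picks up
# the credentials from the instance metadata automatically
mongosh "mongodb+srv://cluster0.example.mongodb.net/test" \
  --authenticationMechanism MONGODB-AWS \
  --authenticationDatabase '$external'
```

The role's ARN also has to be added as a database user on the Atlas side; attaching it to the EC2 instance alone is not enough.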

MongoDB insert and find correctly when using PL but records not shown on mongo shell

I'm writing a dummy server that inserts into MongoDB. The connection string already matches the mongo command-line connection string, and so do the database and collection names. Inserting via the programming language (PL) driver works fine, and inserting via the mongo shell works fine too. But neither side sees the other's records: records inserted using the PL can only be seen when queried using the PL, and records inserted using the mongo shell can only be found from the mongo command line. What's the possible cause for this?
From the programming language (there is a bunch of records already; only the first 2 are shown)
From the mongo shell (only the 1 record which I inserted just now)
rs0:PRIMARY> db.stats()
{
    "db" : "test",
    "collections" : 1,
    "views" : 0,
    "objects" : 1,
    "avgObjSize" : 36,
    "dataSize" : 36,
    "storageSize" : 24576,
    "indexes" : 1,
    "indexSize" : 24576,
    "totalSize" : 49152,
    "scaleFactor" : 1,
    "fsUsedSize" : 138781839360,
    "fsTotalSize" : 233197473792,
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1598360409, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1598360409, 1)
}
rs0:PRIMARY> db.test1.points.find()
{ "_id" : ObjectId("5f4506c18fda02c740f2650d"), "test" : 1 }
What's the possible cause of this? Was it because I modified the systemd command-line argument from
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
to
ExecStart=/usr/bin/mongod --config /etc/mongod.conf --replSet rs0
? But this is on the server, so it should affect both clients equally, right?
Never mind, I just learned that this is NOT the correct way to access the database:
db.dbName.colName
db.test1.points.find()
In the shell, db.test1.points actually refers to a collection literally named "test1.points" inside the current database. This is the correct way (I don't know when this changed, or whether it has been like that since the beginning):
db = db.getSiblingDB('dbName')
db = db.getSiblingDB('test1')
db.points.find()
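For reference, a hedged sketch of equivalent ways to reach collection points in database test1 from the shell (names taken from the question):

```javascript
use test1                                   // interactive shell helper
db.points.find()

db.getSiblingDB('test1').points.find()      // scriptable one-liner

db.getSiblingDB('test1').getCollection('points').find()   // fully explicit
```

Drivers never have this ambiguity, because the database and collection are selected explicitly, e.g. client["test1"]["points"] in pymongo.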

MongoDB find the corrupted doc

I have a collection of 9 million records, and I found one index where, if I try to fetch all the documents through it, it throws the error below.
Error: error: {
    "ok" : 0,
    "errmsg" : "invalid bson type in element with field name '_contract_end_date' in object with unknown _id",
    "code" : 22,
    "codeName" : "InvalidBSON",
    "operationTime" : Timestamp(1585753324, 14),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1585753324, 14),
        "signature" : {
            "hash" : BinData(0,"2fEF+tGQoHsjvCCWph9YhkVajCs="),
            "keyId" : NumberLong("6756221167083716618")
        }
    }
}
So I tried to rename the field to contract_end_date using the $rename operator. When I tried updateMany, it threw the same error.
updateOne works, but that is not helpful: I just see the success message, and the 100-odd docs under that index are not actually updated. I wonder how I can see the corrupted doc itself, so its other fields can help me identify which application is corrupting the data.
Sample doc: it's a pretty simple flat structure; around 50 fields in each doc, no nested docs.
{
_id:
sys_contract_end_date:
customer_name:
location:
owner:
retailer:
seller:
}
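One hedged way to bracket the bad document is to walk the collection in _id order and catch the decode failure; the last _id read successfully sits just before the corrupted one (the collection name below is a placeholder). db.collection.validate({ full: true }) can also confirm the corruption independently.

```javascript
// Walk in _id order until decoding fails, remembering the last good _id
var lastGood = null;
try {
  db.mycoll.find().sort({ _id: 1 }).forEach(function (doc) {
    lastGood = doc._id;
  });
} catch (e) {
  print("decode failed just after _id: " + tojson(lastGood));
}
// Then inspect the next few _ids only, without touching the broken field:
db.mycoll.find({ _id: { $gt: lastGood } }, { _id: 1 }).sort({ _id: 1 }).limit(3)
```

This is a sketch: whether the projected find succeeds depends on where in the document the invalid element sits, so it may take a few iterations.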

Mongodb: db automatically drops after a while

My code was working fine until yesterday, performing simple insert and update operations. Since yesterday, the database drops automatically. On checking local.oplog.rs, I found the last entry is a drop every time, even though my code doesn't drop the db anywhere.
{
    "ts" : Timestamp(1484893960, 1),
    "t" : NumberLong(3),
    "h" : NumberLong(-945311492786762202),
    "v" : NumberInt(2),
    "op" : "c",
    "ns" : "randomdb.$cmd",
    "o" : {
        "dropDatabase" : NumberInt(1)
    }
}
MongoDB is deployed on a virtual machine (Bitnami) on Microsoft Azure.
Interestingly, the db drops at any time: sometimes while in use, sometimes while idle.
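Since the drops are already visible in the oplog, every occurrence can be pulled out with its timestamp to correlate against application logs. A hedged sketch, run from the shell on the replica set:

```javascript
// List recent dropDatabase entries recorded in the oplog, newest first
db.getSiblingDB("local").oplog.rs
  .find({ op: "c", "o.dropDatabase": { $exists: true } })
  .sort({ ts: -1 })
  .limit(10)
  .forEach(function (entry) {
    print(entry.ns + " dropped at " + tojson(entry.ts));
  });
```

If the instance is reachable from the internet without authentication enabled, an external client is a plausible source of the drops, so checking security.authorization and net.bindIp in mongod.conf is worth doing.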