MongoDB: setting feature compatibility version to 3.4 fails

I want to enable the features of MongoDB 3.4 after upgrading.
I have a sharded cluster environment.
I ran the following commands on a mongos instance:
use admin
db.adminCommand({setFeatureCompatibilityVersion: "3.4"})
and got the following output:
{"ok": 1}
But when I try to check whether it succeeded with the command:
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
I get the following output:
{"ok": 0, "errmsg": "no option found to get"}
And when I check in Ops Manager, I see that the command did not work.
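One thing worth checking (based on the 3.4 documentation, but verify against your deployment): setFeatureCompatibilityVersion is issued through mongos, while the getParameter read of featureCompatibilityVersion is supported when connected directly to the mongod members, not through mongos. A sketch, with a hypothetical host name:

```javascript
// Step 1: run the upgrade command through mongos (as in the question):
db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })

// Step 2: verify by connecting DIRECTLY to each shard primary and each
// config server (hypothetical host shown), not through mongos:
//   mongo --host shard01.example.net --port 27018
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// On a mongod this should report featureCompatibilityVersion "3.4".
```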

Related

How to set internalQueryMaxAddToSetBytes in mongo db version 3.6.9

Is there a way to set internalQueryMaxAddToSetBytes in MongoDB version 3.6.9? The admin command db.adminCommand({setParameter: 1, internalQueryMaxAddToSetBytes: newLimit}) is not supported in 3.6.9.
I got the following error message:
"errmsg" : "attempted to set unrecognized parameter [internalQueryMaxAddToSetBytes], use help:true to see options "
How can we configure this for 3.6.9?
Related - https://jira.mongodb.org/browse/SERVER-44869
https://jira.mongodb.org/browse/SERVER-44174
Per the server ticket, this feature requires server version 3.6.17 or newer.
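For reference, on a server new enough to carry the backport (3.6.17+ per the ticket), the parameter can be set at runtime or at startup. A sketch; the byte value here is purely illustrative:

```javascript
// At runtime, against a 3.6.17+ server (older builds return the same
// "unrecognized parameter" error shown above):
db.adminCommand({ setParameter: 1, internalQueryMaxAddToSetBytes: 335544320 })

// Or at startup:
//   mongod --setParameter internalQueryMaxAddToSetBytes=335544320
```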

Mongo shell not creating indexes on running databases

We dumped the index definitions using mongo-index-exporter and we try to apply them at each launch of our application, to achieve a reproducible environment, by executing:
mongo --verbose "mongodb://$MONGO_DB_USER:$MONGO_DB_PASSWORD@$MONGO_DB_HOST/$MONGO_DB_NAME?authSource=$MONGO_AUTH_SOURCE&authMode=scram-sha1" indexes.js
on a file like so:
print('Using db ' + db)
setVerboseShell(true)
print('Creating indexes on mycoll1')
db.mycoll1.createIndex({"_id":1}, {"name":"_id_", "background": true});
db.mycoll1.createIndex({"createdAt":1,"field1.field2.field3":1}, {"name":"coll1_field123_idx"});
print('Creating indexes on mycoll2')
db.mycoll2.createIndex({"_id":1}, {"name":"_id_", "background": true});
db.mycoll2.createIndex({"createdAt":1,"field1.field2.field3":1}, {"name":"coll1_field123_idx"});
To create indexes on a replicaSet with five nodes, we are performing the
following command:
mongo --verbose "mongodb://$MONGO_DB_USER:$MONGO_DB_PASSWORD@$MONGO_DB_HOST/$MONGO_DB_NAME?authSource=$MONGO_AUTH_SOURCE&authMode=scram-sha1" indexes.js
This works fine for us in QA, where MONGO_DB_HOST is a single node and the database is a fresh, empty one, but it doesn't work in production, where additionally the database already exists (and the collections have contents).
Additional relevant information:
Background mode doesn't make any difference.
Verbose mode doesn't work, i.e. although we launch the mongo client with --verbose and call setVerboseShell(true), nothing is logged to the console.
Our mongo shell is 3.6.11 and our mongo server is 3.2.9.
Copy-pasting the commands into a mongo shell executed on the server terminates immediately and results in the following output:
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 11,
"numIndexesAfter" : 11,
"note" : "all indexes already exist",
"ok" : 1
}
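One detail worth double-checking in the connection string: authMode is not a standard MongoDB URI option; the usual option name is authMechanism (e.g. authMechanism=SCRAM-SHA-1). A minimal sketch with hypothetical values:

```shell
# Hypothetical values for illustration only
MONGO_DB_USER=appuser
MONGO_DB_PASSWORD='s3cret'
MONGO_DB_HOST=db01.example.net:27017
MONGO_DB_NAME=mydb
MONGO_AUTH_SOURCE=admin

# '@' separates the credentials from the host; authMechanism is the
# standard option name for selecting SCRAM-SHA-1.
MONGO_URI="mongodb://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_HOST}/${MONGO_DB_NAME}?authSource=${MONGO_AUTH_SOURCE}&authMechanism=SCRAM-SHA-1"
echo "$MONGO_URI"
# mongo --verbose "$MONGO_URI" indexes.js
```

Unknown URI options may be silently ignored by some shell versions, so a misspelled option would not necessarily produce an error.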

How to interact with MongoDB via shell in OpenShift Online 3 Dev Preview?

I've got multiple issues with an application which I suspect are related to permissions on the database.
Everything seems locked down, however, and I can't run basic commands such as show dbs in order to troubleshoot the problems further and "see" what I'm working with. I've been stuck on this for two days now and it's really frustrating.
I've tried this both from the online console and on local terminal, both with and without user credentials supplied at login:
Online Console
Console > MongoDB Service > Deployment > Pod > mongodb-1-vs19d > Terminal:
sh-4.2$ mongo
MongoDB shell version: 2.6.9
connecting to: test
> show dbs
2016-07-13T04:33:10.809-0400 listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases:
1.0 }",
"code" : 13
} at src/mongo/shell/mongo.js:47
Local Terminal
me@my-computer:~$ oc rsh mongodb-1-vs19d
sh-4.2$ mongo
MongoDB shell version: 2.6.9
connecting to: test
> show dbs
2016-07-13T04:35:06.449-0400 listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13
} at src/mongo/shell/mongo.js:47
Local Terminal With User Credentials
me@my-computer:~$ oc rsh mongodb-1-vs19d
sh-4.2$ mongo -u $MONGODB_USER -p $MONGODB_PASSWORD $MONGODB_DATABASE
MongoDB shell version: 2.6.9
connecting to: users
> show dbs
2016-07-13T04:51:39.127-0400 listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13
} at src/mongo/shell/mongo.js:47
Troubleshooting envars are correct:
me#my-computer:~$ oc env pods mongodb-1-vs19d --list
# pods mongodb-1-vs19d, container mongodb
MONGODB_USER=admin
MONGODB_PASSWORD=secret
MONGODB_DATABASE=users
MONGODB_ADMIN_PASSWORD=very-secret
The database was created from local terminal with:
oc new-app mongodb-persistent -p MONGODB_USER=admin,MONGODB_PASSWORD=secret,MONGODB_ADMIN_PASSWORD=very-secret
As per official docs:
https://docs.openshift.com/online/getting_started/beyond_the_basics.html#btb-provisioning-a-database
https://docs.openshift.com/online/using_images/db_images/mongodb.html#running-mongodb-commands-in-containers
It looks like you're trying to run show dbs as a user who has not been granted the needed role. You can try authenticating as the admin as follows:
mongo -u admin -p $MONGODB_ADMIN_PASSWORD admin
Then you should be able to show dbs, show users, use other databases and show users there, etc.
It's a bit confusing that your $MONGODB_USER is named admin - this regular user only has access to $MONGODB_DATABASE. The image most likely created a separate database admin user as well.
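If the regular user itself needs to list databases, one option (a sketch, assuming the MongoDB 2.6 built-in roles and the env vars from the question) is to grant it a cluster-wide read role after authenticating as the database admin:

```javascript
// First authenticate as the admin, e.g.:
//   mongo -u admin -p $MONGODB_ADMIN_PASSWORD admin
// The regular user lives in the 'users' database per the question's
// env vars, so grant the role from there:
db.getSiblingDB("users").grantRolesToUser(
  "admin",                                    // the regular user's name
  [{ role: "readAnyDatabase", db: "admin" }]  // includes listDatabases
)
```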

Unrecognized pipeline stage name: '$sample'

When I run this aggregation pipeline in Robomongo:
db.getCollection('xyz').aggregate([{$match: {tyu: "asd", ghj: "qwe"}},
{$sample: {size: 5}}])
I receive this error:
assert: command failed: {
"errmsg" : "exception: Unrecognized pipeline stage name: '$sample'",
"code" : 16436,
"ok" : 0
I'm using MongoDB version 3.2.6, and $sample is supported from 3.2 onward
(https://docs.mongodb.com/manual/reference/operator/aggregation/sample/#pipe._S_sample),
so I'm a little confused as to why I receive this error message.
Maybe I'm just missing something small.
Thanks
As stated in the comments on the question, the mongo client was version 3.2.6 but the MongoDB server was version 3.0.6.
I used version() in the shell to get the client's version and
db.version() to get the server's version.
Version 3.0.6 is too old to support $sample, as stated in the MongoDB documentation:
https://docs.mongodb.com/manual/reference/operator/aggregation/sample/#pipe._S_sample
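Until the server is upgraded to 3.2+, one common workaround (a sketch; less uniform and O(n) in the skip distance, but it works on 3.0) is a find() with a random skip instead of $sample:

```javascript
// Pick one pseudo-random document matching the filter on a pre-3.2 server.
// Repeat the query for more documents.
var query = { tyu: "asd", ghj: "qwe" };
var n = db.getCollection("xyz").count(query);
var randomDoc = db.getCollection("xyz")
  .find(query)
  .skip(Math.floor(Math.random() * n))
  .limit(1)
  .next();
```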

mongodb sharded collection query failed: setShardVersion failed host

I encountered a problem after adding a shard to a MongoDB cluster.
I did the following operations:
1. Deployed a MongoDB cluster with a primary shard named 'shard0002' (10.204.8.155:27010) for all databases.
2. For some reason I removed it and, after migration finished, added a new shard on a different host (10.204.8.100:27010), which was automatically named shard0002 too.
3. Then added another shard (the one removed in step 1), named 'shard0003'.
4. Executed a query on a sharded collection.
5. The following errors appeared:
mongos> db.count.find()
error: {
"$err" : "setShardVersion failed host: 10.204.8.155:27010 { errmsg: \"exception: gotShardName different than what i had before before [shard0002] got [shard0003] \", code: 13298, ok: 0.0 }",
"code" : 10429
}
I tried to rename the shard, but it's not allowed:
mongos> use config
mongos> db.shards.update({_id: "shard0003"}, {$set: {_id: "shard_11"}})
Mod on _id not allowed
I also tried to remove it; draining started but the process seems to hang.
What can I do?
------------------------
Last update (24/02/2014 00:29)
I found the answer on Google. Since mongod has its own cache of the shard configuration, just restart the sharded mongod process and the problem will be fixed.
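The restart clears the mongod's cached identity; each mongos also keeps its own cached routing metadata, which can be refreshed without a restart. A sketch:

```javascript
// Run against each mongos to drop its cached routing/metadata table;
// it will be lazily reloaded from the config servers:
db.adminCommand({ flushRouterConfig: 1 })
```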