MongoDB dropDatabase Not Working

I have a 200 GB database on a four-node sharded cluster, and I would like to drop the database and delete all the files associated with it from the nodes. I am connecting to my mongos and calling dropDatabase on it. The system comes back with ok, but if I call show dbs it shows the database again, still occupying the 200 GB. What am I doing wrong?

I think you are running into this issue:
https://jira.mongodb.org/browse/SERVER-4804
In most cases it seems like the database is in fact removed but the mongos still reports it as being there. You can verify it is gone by either trying to use the DB and getting an error or by logging into the shards directly and checking.
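For example, a quick check from the mongos (dbname is a placeholder):
mongos> use <dbname>
switched to db <dbname>
mongos> db.getCollectionNames()
[ ]
An empty array (or an error) suggests the drop actually went through; connecting to each shard's primary and running show dbs is the more direct check.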
The bug refers to issues with dropping databases while a migration is happening. You can work around the cause of the issue by doing something like this (sub in your own dbname):
mongos> use config
switched to db config
// 1. stop the balancer
mongos> db.settings.update({_id: "balancer"}, {$set: {stopped: true}}, true)
// 2. wait for in-progress migrations to finish, this may take a few seconds
mongos> while (db.locks.findOne({_id: "balancer", state: {$ne: 0}}) != null) { sleep(1000); }
// 3. now you can safely drop the database
mongos> use <dbname>
switched to db <dbname>
mongos> db.dropDatabase()
{ "dropped" : "<dbname>", "ok" : 1 }
You may also want to run flushRouterConfig on the mongoses to refresh their cached config info:
mongos> use config
switched to db config
mongos> var mongoses = db.mongos.find()
mongos> while (mongoses.hasNext()) { new Mongo(mongoses.next()._id).getDB("admin").runCommand({flushRouterConfig: 1}) }
{ "flushed" : true, "ok" : 1 }
Of course, the real fix will only come along once the patch is committed - it looks like it is targeted for 2.1.
If you are in a broken state, you can try this, but it is tricky:
To "reset" the sharding metadata and recover from this issue, try the following:
First, stop the balancer (as above) and wait for migrations to finish (also as above)
Next, ensure there is no activity from the app servers on the database in question
Now, ensure there are no collection entries in config.collections for namespaces beginning with "TestCollection." (the example database name used here). If there are, remove those entries through a mongos:
mongos> use config
mongos> db.collections.find({_id: /^TestCollection\./})
// if any records found, delete them
mongos> db.collections.remove({_id: /^TestCollection\./})
Next, ensure there is no database entry in config.databases for "TestCollection", and if there is, remove it through a mongos:
mongos> use config
switched to db config
mongos> db.databases.find({_id: "TestCollection"})
// if any records found, delete them
mongos> db.databases.remove({_id: "TestCollection"})
Now, ensure there are no entries in config.chunks for any namespaces in the database (this example uses the default test namespace). If there are any, remove them through a mongos:
mongos> use config
switched to db config
mongos> db.chunks.find({ns: /^test\./})
// if any records found, delete them
mongos> db.chunks.remove({ns: /^test\./})
Then, flushRouterConfig on all mongoses:
mongos> use config
switched to db config
mongos> var mongoses = db.mongos.find()
mongos> while (mongoses.hasNext()) { new Mongo(mongoses.next()._id).getDB("admin").runCommand({flushRouterConfig: 1}) }
{ "flushed" : true, "ok" : 1 }
...
Finally, manually connect to each shard primary and drop the database on the shards (not all shards may have the database, but it's best to be thorough and issue the dropDatabase() call on all of them).
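As a sketch, assuming two shards with hypothetical primary hostnames (read your real ones from config.shards):
$ mongo shard0-primary.example.com:27018
> use <dbname>
> db.dropDatabase()
$ mongo shard1-primary.example.com:27018
> use <dbname>
> db.dropDatabase()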
Regarding in-progress migrations, you can use this snippet:
// 2. wait for in-progress migrations to finish, this may take a few seconds
mongos> while (db.locks.findOne({_id: "balancer", state: {$ne: 0}}) != null) { sleep(1000); }
When done, don't forget to re-enable the balancer:
mongos> use config
switched to db config
mongos> db.settings.update({_id: "balancer"}, {$set: {stopped: false}}, true)
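On shells that ship with the sh helpers, these config.settings updates can be done with the equivalent commands:
mongos> sh.stopBalancer()   // before dropping
mongos> sh.startBalancer()  // when done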

I had this exact same problem and discovered that issuing db.dropDatabase() as a regular user failed silently, but doing the same via sudo worked, in case that helps anyone.

Related

Mongo shell not creating indexes on running databases

We dumped the definitions of our indexes using mongo-index-exporter, and we try to apply them at each launch of our application to achieve a reproducible environment, by executing
mongo --verbose "mongodb://$MONGO_DB_USER:$MONGO_DB_PASSWORD@$MONGO_DB_HOST/$MONGO_DB_NAME?authSource=$MONGO_AUTH_SOURCE&authMode=scram-sha1" indexes.js
on a file like so:
print('Using db ' + db)
setVerboseShell(true)
print('Creating indexes on mycoll1')
db.mycoll1.createIndex({"_id":1}, {"name":"_id_", "background": true});
db.mycoll1.createIndex({"createdAt":1,"field1.field2.field3":1}, {"name":"coll1_field123_idx"});
print('Creating indexes on mycoll2')
db.mycoll2.createIndex({"_id":1}, {"name":"_id_", "background": true});
db.mycoll2.createIndex({"createdAt":1,"field1.field2.field3":1}, {"name":"coll1_field123_idx"});
To create indexes on a replicaSet with five nodes, we are performing the
following command:
mongo --verbose "mongodb://$MONGO_DB_USER:$MONGO_DB_PASSWORD@$MONGO_DB_HOST/$MONGO_DB_NAME?authSource=$MONGO_AUTH_SOURCE&authMode=scram-sha1" indexes.js
This works fine for us in QA, where MONGO_DB_HOST is a single node and we are using a fresh, empty database, but it doesn't work in production, where additionally the database already exists (and the collections have contents).
Additional relevant information:
background mode doesn't make any difference
Verbose mode doesn't work, i.e. although we launch the mongo client with --verbose and call setVerboseShell(true), nothing is logged to the console
Our mongo shell is 3.6.11 and our mongo server is 3.2.9
Copy-pasting the commands into a mongo shell executed on the server works: it terminates immediately and results in the following output
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 11,
    "numIndexesAfter" : 11,
    "note" : "all indexes already exist",
    "ok" : 1
}
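Side note: the indexes that actually exist on a collection (which the server compares new definitions against) can be listed with the standard helper:
db.mycoll1.getIndexes()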

mongodb sharded collection query failed: setShardVersion failed host

I have encountered a problem after adding a shard to mongodb cluster.
I did the following operations:
1. Deployed a mongodb cluster with a primary shard named 'shard0002' (10.204.8.155:27010) for all databases.
2. For some reason I removed it and added a new shard on a different host (10.204.8.100:27010, which was automatically named shard0002 too) after the migration finished.
3. Then added another shard (the one removed in step 1), named 'shard0003'.
4. Executed a query on a sharded collection.
5. The following errors appeared:
mongos> db.count.find()
error: {
"$err" : "setShardVersion failed host: 10.204.8.155:27010 { errmsg: \"exception: gotShardName different than what i had before before [shard0002] got [shard0003] \", code: 13298, ok: 0.0 }",
"code" : 10429
}
I tried to rename the shard name, but it's not allowed:
mongos> use config
mongos> db.shards.update({_id: "shard0003"}, {$set: {_id: "shard_11"}})
Mod on _id not allowed
I have also tried to remove it; draining started, but the process seems to hang.
What can I do?
------------------------
Last update (24/02/2014 00:29)
I found the answer on Google. Since mongod keeps its own cache of the shard configuration, just restart the sharded mongod process and the problem will be fixed.
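For reference, the restart is just the usual bounce of the mongod service on each shard host (the service name assumes a standard package install):
$ sudo service mongod stop
$ sudo service mongod start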

How to convert a MongoDB replica set to a standalone server

Consider: I have a replica set with 4 members, and the config is as follows:
{
"_id": "rs_0",
"version": 5,
"members" : [
{"_id": 1, "host": "127.0.0.1:27001"},
{"_id": 2, "host": "127.0.0.1:27002"},
{"_id": 3, "host": "127.0.0.1:27003"},
{"_id": 4, "host": "127.0.0.1:27004"}
]
}
I am able to connect to all sets using
mongo --port <port>
There is documentation on how to Convert a Standalone to a Replica Set, but can anyone tell me how to convert back from a replica set to a standalone?
Remove all secondary hosts from the replica set (rs.remove('host:port')), restart the mongo daemon without the replSet parameter (by editing /etc/mongo.conf), and the secondary hosts start in standalone mode again.
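A minimal sketch of that removal, using the hosts from the config above:
rs_0:PRIMARY> rs.remove('127.0.0.1:27002')
rs_0:PRIMARY> rs.remove('127.0.0.1:27003')
rs_0:PRIMARY> rs.remove('127.0.0.1:27004')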
The primary host is the tricky one, because you can't remove it from the replica set with rs.remove.
Once you have only the primary node in the replica set, you should exit mongo shell and stop mongo. Then you edit the /etc/mongo.conf and remove the replSet parameter and start mongo again.
Once you start mongo you are already in standalone mode, but the mongo shell will print a message like:
2015-07-31T12:02:51.112+0100 [initandlisten] ** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset
To remove the warning you can follow either of 2 procedures:
1) Dropping the local db and restarting mongo:
use local
db.dropDatabase();
/etc/init.d/mongod restart
2) Or, if you don't want to be so radical, you can do:
use local
db.system.replset.find()
and it will print a document like:
{ "_id" : "replicaSetName", "version" : 1, "members" : [ { "_id" : 0, "host" : "hostprimary:mongoport" } ] }
then you will erase it using:
db.system.replset.remove({ "_id" : "replicaSetName", "version" : 1, "members" : [ { "_id" : 0, "host" : "hostprimary:mongoport" } ] })
and it will probably print:
WriteResult({ "nRemoved" : 1 })
Now you can restart mongo, the warning should be gone, and you will have your mongo in standalone mode.
Just remove a host from the replica set (rs.remove('host:port')), relaunch it without the replSet parameter, and it's standalone again.
On an Ubuntu machine:
Stop your mongo server
Open /etc/mongod.conf
Comment out the replication and replSetName lines:
#replication:
#replSetName: rs0
Start your mongo server and go to the mongo shell
Drop the local database:
use local
db.dropDatabase()
Restart mongo
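Put together as shell commands, the sequence looks roughly like this (service name and config path assume the standard Ubuntu package):
$ sudo service mongod stop
# edit /etc/mongod.conf and comment out replication/replSetName as above
$ sudo service mongod start
$ mongo --eval 'db.getSiblingDB("local").dropDatabase()'
$ sudo service mongod restart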
The MongoDB documentation suggests the following to perform maintenance on a replica set member, which brings the replica set member into standalone mode for further operations. With a little modification it can be made permanently standalone:
If the node in question is the only node in a shard, drain the chunks to other shards as per the MongoDB documentation here, or else the sharded database will break, i.e.
Make sure the balancer is enabled by connecting to mongos and running sh.startBalancer(timeout, interval)
For the shard in question, go to the admin database and run db.adminCommand( { removeShard: "mongodb0" } )
Check the draining status by repeating the above removeShard command, and wait for draining to complete
If the node in question is the primary, do rs.stepDown(300)
Stop the node by running db.shutdownServer()
Change the yaml config by (see the sketch after this list):
commenting out replication.replSetName (--replSet on the command line)
commenting out sharding.clusterRole for shard or config server (--shardsvr and --configsvr on the command line)
changing net.port to a different port (--port on the command line)
Start the mongod instance
If the change is permanent, go to the other mongod instances and run rs.remove("host:port")
After this, the node in question should be up and running in standalone mode.
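As a sketch, the yaml edit from the "Change the yaml config" step could look like this (the port value is just an example):
#replication:
#  replSetName: rs0
#sharding:
#  clusterRole: shardsvr
net:
  port: 27217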
Follow the steps below:
Go to the mongo shell on the secondary servers
Stop the secondary servers using the command below:
use admin
db.shutdownServer()
Go to the Linux shell on the secondary servers and type the command below:
sudo service mongod stop
Starting the MongoDB servers -
Go to the Linux shell on the secondary servers and type the command below:
sudo service mongod start
Starting the MongoDB replication -
Go to the primary and type the commands below to start the replication:
a] rs.initiate()
b] rs.add("Secondary-1:port_no")
c] rs.add("Secondary-2:port_no")
d] rs.add({ "_id" : 3, "host" : "Hidden_member:port_no", "priority" : 0, "hidden" : true })
e] rs.status()

MongoDB logging all queries

The question is as basic as it is simple... How do you log all queries in a "tail"able log file in mongodb?
I have tried:
setting the profiling level
setting the slow ms parameter
starting mongod with the -vv option
The /var/log/mongodb/mongodb.log keeps showing just the current number of active connections...
You can log all queries:
$ mongo
MongoDB shell version: 2.4.9
connecting to: test
> use myDb
switched to db myDb
> db.getProfilingLevel()
0
> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 1, "ok" : 1 }
> db.getProfilingLevel()
2
> db.system.profile.find().pretty()
Source: http://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/
db.setProfilingLevel(2) means "log all operations".
I ended up solving this by starting mongod like this (hacky and ugly, yeah... but it works for a development environment):
mongod --profile=1 --slowms=1 &
This enables profiling and sets the threshold for "slow queries" as 1ms, causing all queries to be logged as "slow queries" to the file:
/var/log/mongodb/mongodb.log
Now I get continuous log outputs using the command:
tail -f /var/log/mongodb/mongodb.log
An example log:
Mon Mar 4 15:02:55 [conn1] query dendro.quads query: { graph: "u:http://example.org/people" } ntoreturn:0 ntoskip:0 nscanned:6 keyUpdates:0 locks(micros) r:73163 nreturned:6 reslen:9884 88ms
Because it's the first answer on Google...
For version 3:
$ mongo
MongoDB shell version: 3.0.2
connecting to: test
> use myDb
switched to db myDb
> db.setLogLevel(1)
http://docs.mongodb.org/manual/reference/method/db.setLogLevel/
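If the global log level is too noisy, the same helper also accepts a component name, so you can raise verbosity for just the query component:
> db.setLogLevel(1, "query")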
MongoDB has a sophisticated profiling feature. The logging happens in the system.profile collection, and the logs can be seen with:
db.system.profile.find()
There are 3 logging levels (source):
Level 0 - the profiler is off, does not collect any data. mongod always writes operations longer than the slowOpThresholdMs threshold to its log. This is the default profiler level.
Level 1 - collects profiling data for slow operations only. By default slow operations are those slower than 100 milliseconds.
You can modify the threshold for “slow” operations with the slowOpThresholdMs runtime option or the setParameter command. See the Specify the Threshold for Slow Operations section for more information.
Level 2 - collects profiling data for all database operations.
To see what profiling level the database is running in, use
db.getProfilingLevel()
and to see the status
db.getProfilingStatus()
To change the profiling status, use the command
db.setProfilingLevel(level, milliseconds)
where level refers to the profiling level and milliseconds is the threshold duration in ms above which operations are logged. To turn off logging, use
db.setProfilingLevel(0)
The query to search the system.profile collection for all queries that took longer than one second, ordered by timestamp descending, is:
db.system.profile.find( { millis : { $gt:1000 } } ).sort( { ts : -1 } )
I made a command-line tool to activate the profiler and see the logs in a "tail"able way --> "mongotail":
$ mongotail MYDATABASE
2020-02-24 19:17:01.194 QUERY [Company] : {"_id": ObjectId("548b164144ae122dc430376b")}. 1 returned.
2020-02-24 19:17:01.195 QUERY [User] : {"_id": ObjectId("549048806b5d3db78cf6f654")}. 1 returned.
2020-02-24 19:17:01.196 UPDATE [Activation] : {"_id": "AB524"}, {"_id": "AB524", "code": "f2cbad0c"}. 1 updated.
2020-02-24 19:17:10.729 COUNT [User] : {"active": {"$exists": true}, "firstName": {"$regex": "mac"}}
...
But the more interesting feature (also like tail) is to see the changes in "real time" with the -f option, and occasionally filter the result with grep to find a particular operation.
See documentation and installation instructions in: https://github.com/mrsarm/mongotail
(also runnable from Docker, specially if you want to execute it from Windows https://hub.docker.com/r/mrsarm/mongotail)
If you want the queries to be logged to the mongodb log file, you have to set both the log level and the profiling level, for example:
db.setLogLevel(1)
db.setProfilingLevel(2)
(see https://docs.mongodb.com/manual/reference/method/db.setLogLevel)
Setting only the profiling level would not log the queries to the file, so you would only be able to get them from
db.system.profile.find().pretty()
Once the profiling level is set using db.setProfilingLevel(2), the command below will print the most recently executed queries.
You may change limit(5) to see fewer/more queries.
$nin - filters out the profile and indexes collections
Also, use the query projection {'query':1} to view only the query field:
db.system.profile.find(
{
ns: {
$nin : ['meteor.system.profile','meteor.system.indexes']
}
}
).limit(5).sort( { ts : -1 } ).pretty()
Logs with only query projection
db.system.profile.find(
{
ns: {
$nin : ['meteor.system.profile','meteor.system.indexes']
}
},
{'query':1}
).limit(5).sort( { ts : -1 } ).pretty()
The profiler data is written to a collection in your DB, not to file. See http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
I would recommend using 10gen's MMS service, and feeding development profiler data there, where you can filter and sort it in the UI.
I think that while not elegant, the oplog could be partially used for this purpose: it logs all the writes - but not the reads...
You have to enable replication, if I'm right. The information is from this answer to this question: How to listen for changes to a MongoDB collection?
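If replication is enabled, a minimal sketch of peeking at the oplog from the shell (local.oplog.rs is the standard namespace) is:
> use local
> db.oplog.rs.find({ns: "myDb.myColl"}).sort({$natural: -1}).limit(5).pretty()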
Setting the profiling level to 2 is another option to log all queries:
db.setProfilingLevel(2,-1)
This worked! It logged all query info to the mongod log file (the second argument sets slowms to -1, so every operation exceeds the threshold and is written to the log).
I recommend checking out mongosniff. This tool can do everything you want and more. In particular it can help diagnose issues with larger-scale mongo systems, how queries are being routed and where they are coming from, since it works by listening to your network interface for all mongo-related communications.
http://docs.mongodb.org/v2.2/reference/mongosniff/
I wrote a script that will print out the system.profile log in real time as queries come in. You need to enable profiling first, as stated in other answers. I needed this because I'm using Windows Subsystem for Linux, for which tail still doesn't work.
https://github.com/dtruel/mongo-live-logger
To list the available in-memory log names:
db.adminCommand( { getLog: "*" } )
Then, to fetch the recent global log entries:
db.adminCommand( { getLog : "global" } )
This was asked a long time ago but this may still help someone:
MongoDB profiler logs all the queries in the capped collection system.profile. See this: database profiler
Start the mongod instance with the --profile=2 option, which enables logging of all queries,
OR, if the mongod instance is already running, from the mongo shell run db.setProfilingLevel(2) after selecting the database. (This can be verified with db.getProfilingLevel(), which should return 2.)
After this, I created a script which uses mongodb's tailable cursors to tail this system.profile collection and write the entries to a file.
To view the logs I just need to tail them: tail -f ../logs/mongologs.txt.
This script can be started in the background and it will log all the operations on the db to the file.
My code for a tailable cursor on the system.profile collection is in nodejs; it logs all the operations, along with the queries, happening in every collection of MyDb:
const MongoClient = require('mongodb').MongoClient;
const assert = require('assert');
const fs = require('fs');
const file = '../logs/mongologs'
// Connection URL
const url = 'mongodb://localhost:27017';
// Database Name
const dbName = 'MyDb';
//Mongodb connection
MongoClient.connect(url, function (err, client) {
assert.equal(null, err);
const db = client.db(dbName);
listen(db, {})
});
function listen(db, conditions) {
var filter = { ns: { $ne: 'MyDb.system.profile' } }; //filter for query
//e.g. if we need to log only insert queries, use {op:'insert'}
//e.g. if we need to log operation on only 'MyCollection' collection, use {ns: 'MyDb.MyCollection'}
//we can give a lot of filters, print and check the 'document' variable below
// set MongoDB cursor options
var cursorOptions = {
tailable: true,
awaitdata: true,
numberOfRetries: -1
};
// create stream and listen
var stream = db.collection('system.profile').find(filter, cursorOptions).stream();
// call the callback
stream.on('data', function (document) {
//this will run on every operation/query done on our database
//print 'document' to check the keys based on which we can filter
//delete data which we dont need in our log file
delete document.execStats;
delete document.keysExamined;
//-----
//-----
//append the log generated in our log file which can be tailed from command line
fs.appendFile(file, JSON.stringify(document) + '\n', function (err) {
if (err) (console.log('err'))
})
});
}
For a tailable cursor in python using pymongo, refer to the following code, which filters for MyCollection and only insert operations:
import pymongo
import time

client = pymongo.MongoClient()
oplog = client.MyDb.system.profile
first = oplog.find().sort('$natural', pymongo.ASCENDING).limit(-1).next()
ts = first['ts']
while True:
    cursor = oplog.find({'ts': {'$gt': ts}, 'ns': 'MyDb.MyCollection', 'op': 'insert'},
                        cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
    while cursor.alive:
        for doc in cursor:
            ts = doc['ts']
            print(doc)
            print('\n')
        time.sleep(1)
Note: tailable cursors only work with capped collections. They cannot be used to log operations on a collection directly; instead, filter on the namespace, e.g. 'ns': 'MyDb.MyCollection'.
Note: I understand that the above nodejs and python code may not be of much help to some. I have just provided the code for reference.
Use this link to find documentation for tailable cursors in your language/driver of choice: MongoDB Drivers
Another thing I added after this is logrotate.
Try out this package to tail all the queries (without oplog operations): https://www.npmjs.com/package/mongo-tail-queries
(Disclaimer: I wrote this package exactly for this need)

How to modify replica set config?

I have a two-node mongo cluster running with this replica set config:
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.100:15000'}]
}
I have to move both of these nodes to new servers. I have copied everything from the old to the new servers, but I'm running into issues while reconfiguring the replica config due to the IP change on the 2nd node.
I have tried this.
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.200:15000'}]
}
rs.reconfig(config)
{
"startupStatus" : 1,
"errmsg" : "loading local.system.replset config (LOADINGCONFIG)",
"ok" : 0
}
It shows the above message, but the change does not happen.
I also tried changing the replica set name while pointing to the same data dirs, and I get the following error:
rs.initiate()
{
"errmsg" : "local.oplog.rs is not empty on the initiating member. cannot initiate.",
"ok" : 0
}
What are the right steps to change the IP while keeping the data on the 2nd node, or do I need to recreate/resync the 2nd node?
Well, I had the same problem.
I had to delete the entire replication config and oplog:
use local
db.dropDatabase()
restart your mongo with the new set name, then:
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.100:15000'}]
}
rs.initiate(config)
I hope this works for you too
You can use force option when reconfiguring replica set:
rs.reconfig(config, {force: true})
Note that, as Adam already suggested in the comments, you should have at least 3 nodes: 2 full nodes and 1 arbiter (minimum supported configuration) or 3 full nodes (minimum recommended configuration), so that a primary can be elected.
I realise this is an old post, but I discovered I was getting this exact same error when trying to change the port used by secondaries in my replica set.
In my case, I needed to stop the secondary whose config I was changing, and bring it up on its new address and port BEFORE applying the changed config on the Primary.
This is in the mongo documentation, but the order in which I had to bring things up and down was something I'd misread on the first pass, so for clarity I've repeated it here:
Shut down the secondary member of the replica set you are moving.
Bring that secondary back up at its new address
Make the configuration change as detailed in the original post above
You can use rs.reconfig. First retrieve the current configuration with rs.conf(), modify the configuration document as needed, and then pass the modified document to rs.reconfig().
More info in docs.
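As a minimal sketch of that flow for the host change in this question (members[1] matches the config shown above):
> cfg = rs.conf()
> cfg.members[1].host = "192.168.2.200:15000"
> rs.reconfig(cfg)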