Mongo Shell No Method Find - mongodb

I am running Debian with MongoDB shell version: 2.4.3
I run
use dbname
db.stats.find()
And it outputs the following
> db.stats.find()
Mon May 13 17:55:20.933 JavaScript execution failed: TypeError: Object function (scale){
return this.runCommand( { dbstats : 1 , scale : scale } );
} has no method 'find'
However running it on other collections works fine.
This mongo instance is being used with nodejs.

If you really created a collection named stats in your database dbname, then I would advise you to rename it. In the shell, the db object has a stats() method for looking at database statistics, so db.stats resolves to that method rather than to your collection, and the method has no find() -- which is exactly the error you are seeing.
Meanwhile you can use slightly more complex syntax:
> db.getSiblingDB("dbname").getCollection("stats").find()
Fetched 0 record(s) in 4ms
Or if you are in dbname then:
> db.getCollection("stats").find()

I presume you want db.stats().
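For reference, db.stats() also takes an optional scale argument (visible in the function body in the error above), so something like the following should work in any database:
> db.stats()       // database statistics, sizes in bytes (the default)
> db.stats(1024)   // the same statistics with sizes reported in kilobytes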

Related

Mongo shell : can't update log level (setLogLevel is not a function)

I am trying to increase my mongo log level without success:
MongoDB shell version: 2.6.10
connecting to: test
> use TreeDB
switched to db TreeDB
> db.setLogLevel(5,"query")
2018-01-23T12:11:28.221+0100 TypeError: Property 'setLogLevel' of object TreeDB is not a function
How to correctly use this function?
> db.setLogLevel
TreeDB.setLogLevel
db.help() does not output this function.
The MongoDB docs have referenced it since 3.0: https://docs.mongodb.com/manual/reference/method/db.setLogLevel/
I am using ubuntu 16.04, mongoshell 2.6.10, mongo: 3.6.2 (via docker)
As explained in the comments, I had to install at least version 3.0 of the mongo shell.
This can be done by following https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
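Once a 3.0+ shell is connected, the original command should then work as documented, e.g.:
> db.setLogLevel(5, "query")   // verbosity 5 for the query component
> db.setLogLevel(-1, "query")  // later: revert the component to inherit the global level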

Robot framework : Database library keywords not getting executed

I recently started working with Robot Framework, and I had a requirement to connect to a Postgres db.
I am able to connect to the db, but when I try to execute queries the flow gets stuck; the test does not even fail. This is what I did:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${current_row_count} =    Row Count    Select * from xyz
The first statement is executing fine but then it gets stuck on second statement.
Can somebody help me out on this
To execute a query and get data from the result:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${output} =    Query    SELECT * from xyz;
Log    ${output}
${DataResults} =    Get From List    ${output}    0
${DataResults} =    Convert To List    ${DataResults}
${DataResults} =    Get From List    ${DataResults}    0
${DataResults} =    Convert To String    ${DataResults}
Disconnect From Database
You are not actually executing your query... read the bit of documentation and the example below ;)
The example shows placeholder variables; substitute your own data ;)
Name: Connect To Database Using Custom Params
Source: DatabaseLibrary
Arguments:
[ dbapiModuleName=None | db_connect_string= ]
Loads the DB API 2.0 module given dbapiModuleName then uses it to connect to the database using the map string db_custom_param_string.
Example usage:
Connect To Database Using Custom Params    pymssql    database='${db_database}', user='${db_user}', password='${db_password}', host='${db_host}'
${queryResults}    Query    ${query}
Disconnect From Database

Migration From Tokumx 1.5 To Percona Server For mongodb 3.11

Migrating data from TokuMX to Percona Server for MongoDB
Step 1 :
This guide describes how to upgrade an existing Percona TokuMX instance to Percona Server for MongoDB. The following JavaScript files are required to perform the upgrade:
• allDbStats.js
• tokumx_dump_indexes.js
• psmdb_restore_indexes.js
You can download those files from GitHub.
Step 2 :
Run the allDbStats.js script to record database state before migration:
$ mongo ./allDbStats.js > ~/allDbStats.before.out
Step 3 :
Perform a dump of the database:
$ mongodump --out /your/dump/path
Step 4 :
Perform a dump of the indexes:
$ ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json
Step 5 :
Restore the collections without indexes using “--noIndexRestore” switch:
$ mongorestore --noIndexRestore /your/dump/path
Step 6 :
Restore the indexes (this may take a while). This step will strip the TokuMX clustering options from the collections before inserting the indexes.
$ ./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json'"
Step 7 :
Run the allDbStats.js script to record database state after migration:
$ mongo ./allDbStats.js > ~/allDbStats.after.out
This is the guide I found for the migration from TokuMX to Percona Server for MongoDB. At step 6, when I try to restore the indexes, I get the error mentioned below:
/mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY SyntaxError: Unexpected identifier
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY Error: error loading js file: /mnt/tokumx-bkup/tokumxIndexes.json
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78
failed to load: /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js
Any help will be welcomed.
Thanks
Check the tokumxIndexes.json file. When tokumx_dump_indexes.js is run, the mongo shell parameter --quiet must be used, or the resulting JSON will contain the shell preamble at the beginning.
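For example, re-running the dump step with the flag in place (same paths as in the guide above) should then produce clean JSON:
$ mongo --quiet ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json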
And check the file using something like http://jsonlint.com/
Also, if the preamble is present, delete these two lines from the tokumxIndexes.json file:
"MongoDB shell version: 3.0.11-1.6
connecting to: 127.0.0.1:27017/test"
Then run the script again:
$./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json' "
Now the script will start the index build process.

MongoDB logging all queries

The question is as basic as it is simple... How do you log all queries in a "tail"able log file in mongodb?
I have tried:
setting the profiling level
setting the slow ms parameter
starting mongod with the -vv option
The /var/log/mongodb/mongodb.log keeps showing just the current number of active connections...
You can log all queries:
$ mongo
MongoDB shell version: 2.4.9
connecting to: test
> use myDb
switched to db myDb
> db.getProfilingLevel()
0
> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 1, "ok" : 1 }
> db.getProfilingLevel()
2
> db.system.profile.find().pretty()
Source: http://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/
db.setProfilingLevel(2) means "log all operations".
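For a rough tail-like view you can also filter the profile collection by timestamp; for example, to see only the last minute of operations (ts is a regular date field in the profiler documents):
> db.system.profile.find( { ts: { $gt: new Date(Date.now() - 60000) } } ).pretty()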
I ended up solving this by starting mongod like this (hacky and ugly, yeah... but it works for a development environment):
mongod --profile=1 --slowms=1 &
This enables profiling and sets the threshold for "slow queries" as 1ms, causing all queries to be logged as "slow queries" to the file:
/var/log/mongodb/mongodb.log
Now I get continuous log outputs using the command:
tail -f /var/log/mongodb/mongodb.log
An example log:
Mon Mar 4 15:02:55 [conn1] query dendro.quads query: { graph: "u:http://example.org/people" } ntoreturn:0 ntoskip:0 nscanned:6 keyUpdates:0 locks(micros) r:73163 nreturned:6 reslen:9884 88ms
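If you prefer a config file over command-line flags, the equivalent in a YAML mongod configuration file (supported in newer mongod versions) should look roughly like this:
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 1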
Because it's the first Google answer...
For version 3
$ mongo
MongoDB shell version: 3.0.2
connecting to: test
> use myDb
switched to db myDb
> db.setLogLevel(1)
http://docs.mongodb.org/manual/reference/method/db.setLogLevel/
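To check the current per-component verbosity settings, the companion method (also 3.0+) is:
> db.getLogComponents()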
MongoDB has a sophisticated feature of profiling. The logging happens in system.profile collection. The logs can be seen from:
db.system.profile.find()
There are 3 logging levels (source):
Level 0 - the profiler is off, does not collect any data. mongod always writes operations longer than the slowOpThresholdMs threshold to its log. This is the default profiler level.
Level 1 - collects profiling data for slow operations only. By default slow operations are those slower than 100 milliseconds.
You can modify the threshold for “slow” operations with the slowOpThresholdMs runtime option or the setParameter command. See the Specify the Threshold for Slow Operations section for more information.
Level 2 - collects profiling data for all database operations.
To see what profiling level the database is running in, use
db.getProfilingLevel()
and to see the status
db.getProfilingStatus()
To change the profiling status, use the command
db.setProfilingLevel(level, milliseconds)
Here level refers to the profiling level, and milliseconds is the threshold duration in ms above which operations are logged. To turn off the logging, use
db.setProfilingLevel(0)
The query to find all operations in the system profile collection that took longer than one second, ordered by timestamp descending, is:
db.system.profile.find( { millis : { $gt:1000 } } ).sort( { ts : -1 } )
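Similarly, to look only at, say, write operations against one collection (mydb.mycollection is a hypothetical namespace here), something like this should work:
> db.system.profile.find( { op: "update", ns: "mydb.mycollection" } ).sort( { ts : -1 } )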
I made a command line tool to activate the profiler activity and see the logs in a "tail"able way --> "mongotail":
$ mongotail MYDATABASE
2020-02-24 19:17:01.194 QUERY [Company] : {"_id": ObjectId("548b164144ae122dc430376b")}. 1 returned.
2020-02-24 19:17:01.195 QUERY [User] : {"_id": ObjectId("549048806b5d3db78cf6f654")}. 1 returned.
2020-02-24 19:17:01.196 UPDATE [Activation] : {"_id": "AB524"}, {"_id": "AB524", "code": "f2cbad0c"}. 1 updated.
2020-02-24 19:17:10.729 COUNT [User] : {"active": {"$exists": true}, "firstName": {"$regex": "mac"}}
...
But the more interesting feature (also like tail) is to see the changes in "real time" with the -f option, and optionally filter the result with grep to find a particular operation.
See documentation and installation instructions in: https://github.com/mrsarm/mongotail
(also runnable from Docker, especially if you want to execute it from Windows: https://hub.docker.com/r/mrsarm/mongotail)
If you want the queries to be logged to the mongodb log file, you have to set both
the log level and the profiling level, for example:
db.setLogLevel(1)
db.setProfilingLevel(2)
(see https://docs.mongodb.com/manual/reference/method/db.setLogLevel)
Setting only the profiling would not have the queries logged to file, so you can only get it from
db.system.profile.find().pretty()
Once the profiling level is set using db.setProfilingLevel(2), the command below will print the most recently executed queries.
You may change limit(5) to see fewer/more queries.
$nin filters out operations on the profile and indexes collections themselves.
Also, use the projection {'query': 1} to view only the query field (second example below).
db.system.profile.find(
    {
        ns: {
            $nin: ['meteor.system.profile', 'meteor.system.indexes']
        }
    }
).limit(5).sort( { ts : -1 } ).pretty()
Logs with only the query projection:
db.system.profile.find(
    {
        ns: {
            $nin: ['meteor.system.profile', 'meteor.system.indexes']
        }
    },
    { 'query': 1 }
).limit(5).sort( { ts : -1 } ).pretty()
The profiler data is written to a collection in your DB, not to file. See http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
I would recommend using 10gen's MMS service and feeding development profiler data there, where you can filter and sort it in the UI.
I think that while not elegant, the oplog could be partially used for this purpose: it logs all the writes - but not the reads...
You have to enable replication, if I'm right. The information is from this answer to the question: How to listen for changes to a MongoDB collection?
Setting the profiling level to 2 is another option to log all queries.
db.setProfilingLevel(2,-1)
This worked! It logged all query info in the mongod log file.
I recommend checking out mongosniff. This tool can do everything you want and more. In particular, it can help diagnose issues with larger-scale mongo systems, and show how queries are being routed and where they are coming from, since it works by listening to your network interface for all mongo-related communications.
http://docs.mongodb.org/v2.2/reference/mongosniff/
I wrote a script that will print out the system.profile log in real time as queries come in. You need to enable logging first as stated in other answers. I needed this because I'm using Windows Subsystem for Linux, for which tail still doesn't work.
https://github.com/dtruel/mongo-live-logger
db.adminCommand( { getLog: "*" } )
Then
db.adminCommand( { getLog : "global" } )
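The second command returns a document with the log lines in a log array; to print them one entry per line (a sketch):
> db.adminCommand( { getLog : "global" } ).log.forEach(function (line) { print(line); })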
This was asked a long time ago but this may still help someone:
MongoDB profiler logs all the queries in the capped collection system.profile. See this: database profiler
Start the mongod instance with the --profile=2 option, which enables logging of all queries,
OR if the mongod instance is already running, run db.setProfilingLevel(2) from the mongo shell after selecting the database. (This can be verified with db.getProfilingLevel(), which should return 2.)
After this, I have created a script which utilises mongodb's tailable cursor to tail this system.profile collection and write the entries in a file.
To view the logs I just need to tail the file: tail -f ../logs/mongologs.txt
This script can be started in background and it will log all the operation on the db in the file.
My code for a tailable cursor on the system.profile collection is in nodejs; it logs all the operations, along with the queries, happening in every collection of MyDb:
const MongoClient = require('mongodb').MongoClient;
const assert = require('assert');
const fs = require('fs');

const file = '../logs/mongologs';

// Connection URL
const url = 'mongodb://localhost:27017';

// Database Name
const dbName = 'MyDb';

// MongoDB connection
MongoClient.connect(url, function (err, client) {
    assert.equal(null, err);
    const db = client.db(dbName);
    listen(db, {});
});

function listen(db, conditions) {
    var filter = { ns: { $ne: 'MyDb.system.profile' } }; // filter for the query
    // e.g. if we need to log only insert queries, use {op: 'insert'}
    // e.g. if we need to log operations on only the 'MyCollection' collection, use {ns: 'MyDb.MyCollection'}
    // we can give a lot of filters; print and check the 'document' variable below

    // set MongoDB cursor options
    var cursorOptions = {
        tailable: true,       // keep the cursor open after the last result
        awaitData: true,      // block for new data instead of polling
        numberOfRetries: -1   // retry indefinitely
    };

    // create a stream from the tailable cursor and listen on it
    var stream = db.collection('system.profile').find(filter, cursorOptions).stream();

    stream.on('data', function (document) {
        // this will run on every operation/query done on our database;
        // print 'document' to check the keys based on which we can filter,
        // and delete the fields we don't need in our log file
        delete document.execStats;
        delete document.keysExamined;
        // append the entry to our log file, which can be tailed from the command line
        fs.appendFile(file, JSON.stringify(document) + '\n', function (err) {
            if (err) console.log(err);
        });
    });
}
For a tailable cursor in python using pymongo, refer to the following code, which filters for MyCollection and only insert operations:
import pymongo
import time

client = pymongo.MongoClient()
profile = client.MyDb.system.profile

# pick a starting timestamp from the existing profile entries
first = profile.find().sort('$natural', pymongo.ASCENDING).limit(-1).next()
ts = first['ts']

while True:
    cursor = profile.find({'ts': {'$gt': ts}, 'ns': 'MyDb.MyCollection', 'op': 'insert'},
                          cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
    while cursor.alive:
        for doc in cursor:
            ts = doc['ts']
            print(doc)
            print('\n')
        time.sleep(1)
Note: A tailable cursor only works with capped collections, so it cannot be used to log operations on an ordinary collection directly; instead, filter system.profile with 'ns': 'MyDb.MyCollection'.
Note: I understand that the above nodejs and python code may not be of much help for some. I have just provided the codes for reference.
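Regarding the capped-collection note above: system.profile is a capped collection by default, which is why it can be tailed at all. If it rolls over too quickly, the usual documented sequence to enlarge it is roughly as follows (profiling must be off while recreating it; the 4 MB size is just an example):
> db.setProfilingLevel(0)
> db.system.profile.drop()
> db.createCollection("system.profile", { capped: true, size: 4 * 1024 * 1024 })
> db.setProfilingLevel(2)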
Use this link to find documentation for tailable cursors in your language/driver of choice: MongoDB Drivers
Another thing I added after this was log rotation (logrotate).
Try out this package to tail all the queries (without oplog operations): https://www.npmjs.com/package/mongo-tail-queries
(Disclaimer: I wrote this package exactly for this need)

Mongodump and mongorestore; field not found

I'm trying to dump a database from another server (this works fine), then restore it on a new server (this does not work fine).
I first run:
mongodump --host <hostname> -d <dbname>
This creates a folder dump/db which contains all of the bson documents.
Then in the dump folder, I'm running:
mongorestore -d dbname db
This works and iterates through the files, but I get this error on dbname.system.users
Wed May 23 02:08:05 { key: { _id: 1 }, ns: "dbname.system.users", name: "_id_" }
Error creating index dbname.system.usersassertion: 13111 field not found, expected type 16
Any ideas how to resolve this?
If they really are different versions, use the --noIndexRestore option and create all the indexes afterward.
Any chance the source and destination are different versions?
In any case, to get around this, restore the collections individually using the -c flag to the target DB and then build the indexes afterward. The system collection is the one used for indexes, so it is fairly easy to recreate - try it last once everything else has been restored, and if it still fails you can always just recreate the relevant indexes.
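For example, restoring a single collection from the dump layout in the question (run from inside the dump folder; mycoll is a hypothetical collection name):
$ mongorestore -d dbname -c mycoll db/mycoll.bson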
The issue could also be caused by this bug in older versions of Mongo (in my case it was 2.0.8):
https://jira.mongodb.org/browse/SERVER-7181
Basically, you get 13111 field not found, expected type 16 error when it should actually be prompting you to enter your authentication details.
And example of how I fixed it:
root@precise64:/# mongorestore /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:48:15 going into namespace [test.system.indexes]
Fri May 24 11:48:15 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Error creating index test.system.usersassertion: 13111 field not found, expected type 16
# Error when not giving username and password
root@precise64:/# mongorestore -u fakeuser -p fakepassword /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:57:11 /backups/demand/ondemand.05-24-2013T114223/test/system.users.bson
Fri May 24 11:57:11 going into namespace [test.system.users]
1 objects found
# Works fine when giving username and password! :)
Hope that helps anyone whose issue doesn't get fixed by the previous 2 replies!
This can also happen if you are trying to mongorestore into MongoDB 2.6+ and the dump you are trying to restore contains a system.users collection in any database other than admin. In MongoDB 2.2 and 2.4 the system.users collections could occur in any database. The auth schema migration associated with MongoDB 2.6 moved all users into the system.users collection in the admin database, but left behind the system.users collections in the other databases (MongoDB 2.6 just ignores these). This seems to cause this assertion when importing into MongoDB 2.6.
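If you suspect this, you can check which auth schema the target server is using; the version document lives in the admin database (a sketch, the exact shape may vary by version):
> use admin
> db.system.version.find( { _id: "authSchema" } )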