Does MongoDB have logs for each insertion and removal? - mongodb

I am wondering whether MongoDB keeps a log of each insertion and removal, i.e. for monitoring or backup purposes?

You can actually create an extremely verbose log of writes, reads, and so on.
When you run mongod you can pass the --diaglog option (http://docs.mongodb.org/manual/reference/mongod/#cmdoption-mongod--diaglog), which, when set to 1, logs every single write operation, including insertions and deletions. (Note that this option was later deprecated; it was intended for diagnostics only.)

Look at the oplog; it could be what you are looking for.
See docs.mongodb.org and www.briancarpio.com for more on the oplog.

By default, only queries that take longer than slowms are logged. You can log every query with profiling level 2, however.
See here for more details

Related

Is it possible to see the incoming queries in mongodb to debug/trace issues?

I have mongo running on my macbook (OSX).
Is it possible to run some kind of 'monitor' that will display any incoming requests to my mongodb?
I need to trace if I have the correct query formatting from my application.
You will find the following tools (or utilities) useful for monitoring as well as diagnostic purposes. All of them except mtools are packaged with the MongoDB server (though sometimes they are installed separately).
1. Database Profiler
The profiler stores every CRUD operation coming into the database; it is off by default. Having it on is quite expensive: it turns every read into a read+insert, and every write into a write+insert. CAUTION: Keeping it on can quickly overwhelm the server with incoming operations, saturating the I/O.
But, it is a very useful tool when used for a short time to find what is going on with database operations. It is recommended to be used in development environments.
The current profiler setting can be read with the db.getProfilingLevel() command. To activate the profiler, use the db.setProfilingLevel(level) command. Verify what is captured by the profiler in the db.system.profile collection; you can query it like any other collection using the find or aggregate methods. The op field of a db.system.profile document specifies the type of database operation; e.g., for queries it is "query".
The profiler has three levels:
0 captures no info (the profiler is off; this is the default). 1 captures every query that takes longer than the slow-operation threshold (100 ms by default). 2 captures every query; this can be used to find the actual load that is coming in.
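As a quick mongo shell sketch (this must be run against a live mongod; the op filter shown is just an example):

```javascript
// read the current profiler level for this database
db.getProfilingLevel()

// level 2: capture every operation
db.setProfilingLevel(2)

// ...run some workload, then inspect what was captured;
// the `op` field distinguishes queries, inserts, updates, etc.
db.system.profile.find({ op: "query" }).sort({ ts: -1 }).limit(5).pretty()

// turn the profiler off again when done
db.setProfilingLevel(0)
```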
2. mongoreplay
mongoreplay is a traffic capture and replay tool for MongoDB that you can use to inspect and record commands sent to a MongoDB instance, and then replay those commands back onto another host at a later time. NOTE: Available for Linux and macOS.
3. mongostat
The mongostat command-line utility provides a quick overview of the status of a currently running mongod instance.
You can view the incoming operations in real time. The statistics are displayed every second by default. There are various options to customize the output, the time interval, etc.
4. mtools
mtools is a collection of helper scripts to parse, filter, and visualize (through graphs) MongoDB log files.
You will find the mlogfilter script useful; it reduces the amount of information from MongoDB log files using various command options. For example, mlogfilter mongod.log --operation query filters the log by query operations only.
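To illustrate the kind of filtering mlogfilter does, here is a small, hypothetical Node.js analog. It assumes the legacy plain-text mongod log format, where the operation name follows the [connN] context field; mlogfilter itself handles many more formats and options.

```javascript
// Keep only log lines whose operation field matches, e.g. "query" or "update".
// This is a toy analog of `mlogfilter mongod.log --operation query`.
function filterByOperation(logLines, operation) {
  const re = new RegExp("\\[conn\\d+\\]\\s+" + operation + "\\s");
  return logLines.filter(function (line) { return re.test(line); });
}

// sample lines in the assumed legacy log format
const lines = [
  "2019-06-20T10:00:00.000+0000 I COMMAND [conn12] query test.users ...",
  "2019-06-20T10:00:01.000+0000 I WRITE   [conn12] update test.users ...",
];
console.log(filterByOperation(lines, "query").length); // prints 1
```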

Is db.stats() a blocking call for MongoDB?

While researching how to check the size of a MongoDB, I found this comment:
Be warned that dbstats blocks your database while it runs, so it's not suitable in production. https://jira.mongodb.org/browse/SERVER-5714
Looking at the linked bug report (which is still open), it quotes the Mongo docs as saying:
Command takes some time to run, typically a few seconds unless the .ns file is very large (via use of --nssize). While running other operations may be blocked.
However, when I check the current Mongo docs, I don't find that text. Instead, they say:
The time required to run the command depends on the total size of the database. Because the command must touch all data files, the command may take several seconds to run.
For MongoDB instances using the WiredTiger storage engine, after an unclean shutdown, statistics on size and count may be off by up to 1000 documents as reported by collStats, dbStats, and count. To restore the correct statistics for the collection, run validate on the collection.
Does this mean the WiredTiger storage engine changed this to a non-blocking call by keeping ongoing stats?
A bit late to the game, but I found this question while looking for the answer, and the answer is: yes. Until 3.6.12 / 4.0.5 dbStats acquired a "shared" lock ("R"), which blocked all write requests during its execution. Since then it acquires an "intent shared" lock ("r"), which does not block write requests. Read requests were never impacted.
Source: https://jira.mongodb.org/browse/SERVER-36437

How to obtain incoming traffic/hits to MongoDB instance in real time?

I have a 3-member replica set.
When there is any read/write from an application, I need to see it live, the way we tail -f a log file in Unix.
Is there any method or command available?
One of the options is to enable profiling for all operations.
See details here: https://docs.mongodb.org/v3.0/tutorial/manage-the-database-profiler/
So, to enable profiling for specific database execute in shell:
db.setProfilingLevel(2)
After this, a record will appear in the system.profile collection for each read/write operation, with details about the query, time of execution, etc.
Also note that this affects performance significantly, so don't profile all queries in production environments.
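For a tail -f-like view from the shell, you can poll system.profile (a minimal sketch; switch to the database you want to watch first with use <dbname>):

```javascript
// profile everything on the current database
db.setProfilingLevel(2)

// show the most recent operations, newest first, much like tailing a log;
// re-run this (or wrap it in a loop) to watch new entries arrive
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty()

// restore the default when finished
db.setProfilingLevel(0)
```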

difference between db.runCommand({getlasterror:1,fsync:true}) and db.runCommand({getlasterror:1}) in MongoDB?

I understand that getlasterror guarantees that the write has been done to a file.
This means that, even if the computer powers off, the previous write is still OK.
But what is the use of fsync:true?
Essentially, getLastError checks for an error in the last database operation on the current connection. If you run the command with the fsync option, it will also flush data to the data files (by default MongoDB does this every 60 seconds).
You can find more details here and here.
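A quick shell sketch of the two forms (note that getLastError is a legacy command; in modern versions a write concern on the write itself, e.g. {w: 1, j: true}, replaces explicit getLastError calls):

```javascript
// check the outcome of the last write on this connection
db.runCommand({ getlasterror: 1 })

// the same check, but also flush the data files to disk before returning,
// rather than waiting for the periodic background flush
db.runCommand({ getlasterror: 1, fsync: true })
```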

How to track how long some Mongo queries take

I have a few Mongo queries in the JS format, such as:
db.hello.update(params,data);
How do I run them in such a way that I can see exactly how long they've taken to run later?
There are a couple of options:
Do your updates with safe=true, which will cause the update call to block until mongod has written the data (the exact syntax for this depends on the driver you're using). You can add timing code around your updates in your application code, and log as appropriate.
Enable verbose (or more-verbose) logging, and use the log files to determine the time spent during your updates. See the mongo docs on logging for more information.
Enable the profiler, which stores information about queries and updates in a capped collection, db.system.profile, including the time spent servicing the query or update. Note that enabling the profiler affects performance, though not severely. See the mongo docs on profiling for more information.
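For the first option, a minimal shell sketch (assuming params and data are defined as in the question):

```javascript
// time a single update from the mongo shell
var start = Date.now();
db.hello.update(params, data);
print("update took " + (Date.now() - start) + " ms");
```

In recent shell versions writes are acknowledged by default, so the elapsed time includes the server's work, not just the time to send the request.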