MongoDB performance monitoring in Grafana? - mongodb

Can you monitor the performance of MongoDB in Grafana?
I know there is this plugin https://grafana.com/grafana/plugins/grafana-mongodb-datasource/ that helps to "visualise the data"... But I'm wondering if there is a way to monitor MongoDB performance during a JMeter load test, for example, the number of connections or the number of rows written/deleted during a test? Visualisation in Grafana would be nice but I'd be happy to start with just seeing the output somewhere ...

You could try querying MongoDB as per this article:
https://www.blazemeter.com/blog/how-load-test-mongodb-jmeter
For monitoring connections, you could try something like db.serverStatus().connections, as per the detail in this thread:
Check the current number of connections to MongoDb
For other queries you can read the documentation here:
https://docs.mongodb.com/manual/reference/method/db.collection.count/#mongodb-method-db.collection.count
As for visualising in Grafana, I have only used that app for monitoring response-time info (avg, 95th percentile, etc.), so I'm not sure how counts and queries would be displayed.
You could possibly output the results to the jmeter.log file using log.info() so that you have a record of them.
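To sketch the connection-monitoring idea, here is a minimal example that summarizes the connections section of a db.serverStatus() document. The helper function and the sample document are illustrative, not part of the original answer; with a live server you would fetch the real document via the pymongo driver as shown in the comment.

```python
# Sketch: summarize the "connections" section of db.serverStatus().
# The sample document below is illustrative, not captured from a real server.

def connection_summary(server_status):
    """Extract current/available connection counts from a serverStatus document."""
    conns = server_status["connections"]
    return {"current": conns["current"], "available": conns["available"]}

# With a live mongod you would fetch the real document, e.g.:
#   from pymongo import MongoClient
#   status = MongoClient("mongodb://localhost:27017").admin.command("serverStatus")

# Illustrative serverStatus fragment:
status = {"connections": {"current": 12, "available": 51188, "totalCreated": 240}}
print(connection_summary(status))  # {'current': 12, 'available': 51188}
```

Polling this in a loop during the JMeter test would give you a time series of connection counts to log or push into Grafana.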

Related

How do you monitor a MongoDB database to verify its status during major database operations?

I will soon need to perform some operations on our (azure-deployed) production database:
2 large MongoRestore operations
2 index rebuilds
2 collection counts
I need to ensure that these operations will not crash our servers by using up too many resources. I currently don't know how resource-hungry my operations are or will be, nor how much load they will generate. I want to know how likely they are to crash the database.
My plan is to first execute these operations in our development environment and monitor the resulting load to get an idea of the "strain" that these operations will create.
I basically have two questions:
What tool(s) is/are best for monitoring the job? Specifically for monitoring the state/health of the database as these operations are running on it, to get an idea of how "safe" the operations are. I currently have a paid "Core" version of Studio 3T. I also have the free version of Mongo Compass.
What metrics should I be watching? I'm new to this, so I assume I should watch RAM/CPU usage to make sure it doesn't go above a certain threshold. How do I know how high is too high? What else should I look for?
Monitor RAM, CPU, IOPS, and storage usage at the infrastructure level.
mongotop and mongostat give a general idea of usage.
Monitor the mongod/mongos logs (the most valuable information is there).
For the mongorestore operations, some useful options:
--noIndexRestore (add this so your members are not blocked by index re-creation immediately after the load; you can build the indexes later in the background)
--numInsertionWorkersPerCollection=num1 (a bigger num1 means a faster restore, at the cost of more resource usage)
--numParallelCollections=num2 (default 4; a bigger num2 means a faster restore, at the cost of more resource usage)
--writeConcern="{w:'majority'}" (keep it for safety; reducing it will significantly improve load speed)
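Putting those flags together, a full mongorestore invocation might look like the following sketch; the dump path and worker counts are illustrative examples, not values from the original answer.

```shell
# Illustrative mongorestore invocation; adjust the dump path and counts to taste.
mongorestore /backups/dump \
  --noIndexRestore \
  --numInsertionWorkersPerCollection=4 \
  --numParallelCollections=4 \
  --writeConcern="{w:'majority'}"
```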

Is it possible to see the incoming queries in mongodb to debug/trace issues?

I have mongo running on my macbook (OSX).
Is it possible to run some kind of a 'monitor' that will display any income requests to my mongodb?
I need to trace if I have the correct query formatting from my application.
You will find these tools (or utilities) useful for monitoring as well as diagnostic purposes. All the tools except mtools are packaged with the MongoDB server (though they are sometimes installed separately).
1. Database Profiler
The profiler stores every CRUD operation coming into the database; it is off by default. Having it on is quite expensive: it turns every read into a read + insert, and every write into a write + insert. CAUTION: keeping it on can quickly overwhelm the server with incoming operations, saturating the I/O.
But it is a very useful tool when used for a short time to find out what is going on with database operations. It is recommended for use in development environments.
The current profiler setting can be read with the command db.getProfilingLevel(). To activate the profiler, use the db.setProfilingLevel(level) command. Verify what the profiler captures in the db.system.profile collection; you can query it like any other collection using the find or aggregate methods. The op field of a db.system.profile document specifies the type of database operation; e.g., for queries it is "query".
The profiler has three levels:
0 captures nothing (profiling is off; this is the default). 1 captures every operation that takes longer than the slow-operation threshold (100 ms by default). 2 captures every operation; this can be used to find the actual load that is coming in.
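As a sketch of working with the captured operations, the snippet below filters db.system.profile-style documents for slow operations, the way a find() on that collection would. The helper function and the sample documents are illustrative; with a real server you would query db.system.profile directly.

```python
# Sketch: filter profiler output the way db.system.profile.find({...}) would.
# The sample documents mimic db.system.profile entries: 'millis' is the
# operation duration and 'op' the operation type. All values are illustrative.

def slow_ops(profile_docs, threshold_ms=100):
    """Return profiled operations slower than threshold_ms, slowest first."""
    hits = [d for d in profile_docs if d.get("millis", 0) > threshold_ms]
    return sorted(hits, key=lambda d: d["millis"], reverse=True)

# Illustrative profiler entries:
docs = [
    {"op": "query",  "ns": "test.users",  "millis": 250},
    {"op": "insert", "ns": "test.users",  "millis": 3},
    {"op": "query",  "ns": "test.orders", "millis": 120},
]
for d in slow_ops(docs):
    print(d["op"], d["ns"], d["millis"])
# query test.users 250
# query test.orders 120
```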
2. mongoreplay
mongoreplay is a traffic capture and replay tool for MongoDB that you can use to inspect and record commands sent to a MongoDB instance, and then replay those commands back onto another host at a later time. NOTE: Available for Linux and macOS.
3. mongostat
The mongostat command-line utility provides a quick overview of the status of a currently running mongod instance.
You can view the incoming operations in real time. The statistics are displayed every second by default. There are various options to customize the output, the time interval, etc.
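Since mongostat prints a header row followed by per-interval data rows, its output can also be post-processed. Below is a minimal sketch that pairs header columns with values; the column names and sample line are illustrative, not captured from a real run.

```python
# Sketch: pair a mongostat header row with one of its data rows.
# The header and row below are illustrative examples.

def parse_mongostat_row(header_line, data_line):
    """Zip whitespace-separated mongostat columns with their values."""
    return dict(zip(header_line.split(), data_line.split()))

header = "insert query update delete conn time"
row    = "    10    55      7      0   12 10:30:01"
stats = parse_mongostat_row(header, row)
print(stats["query"], stats["conn"])  # 55 12
```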
4. mtools
mtools is a collection of helper scripts to parse, filter, and visualize (through graphs) MongoDB log files.
You will find the mlogfilter script useful; it reduces the amount of information from MongoDB log files using various command options. For example, mlogfilter mongod.log --operation query filters the log by query operations only.

FIWARE Orion/MongoDB Performance on AWS

I seem to be having real issues trying to get performance anywhere near that stated in the docs (~700-2000 tps with a VM of 2 vCPUs / 4 GB RAM). I have tried on a local VM, a local machine, and a few AWS VMs and I can't get anywhere close. The maximum I have achieved is 80 tps on an AWS VM.
I have tried changing the -dbPoolSize and the -reqPoolSize for orion and playing with ulimit to set it to that suggested by MongoDB - but everything I change doesn't seem to get me anywhere close.
I have set indexes on the _id.id, _id.type and _id.servicePath as suggested in the docs - the latter of which gave me an increase from 40 tps to 80 tps.
Are there any config options for Orion or Mongo that I should be setting away from the default which will get me any closer? Are there any other tips for performance? The link in the docs to the test scripts doesn't work so I haven't been able to see the examples.
I have created my own test scripts using Node.js and I have tested update and queries using a variable amount of concurrent connections and between 1 and 2 load injectors.
From looking at the output of "top", the load is on Mongo, as it almost maxes out the CPU, but adding more cores to the VM doesn't change the stats. The VM has 7.5 GB or 15 GB of RAM, so mongo should be able to fit all the data in memory for blazing-fast performance?
I have used mongostat to see that the connections from orion to mongo change with the -dbPoolSize option, but this doesn't yield any better performance.
Any help you can provide would be much appreciated.
I have tried using CentOS 6.5 and 6.7 with Orion 0.25 and 0.26 and MongoDB 2.6 with ~500,000 entities
My test scripts and data are on GitHub
I have only tested without subscriptions so far, but I have scripts ready to test with subscriptions - but I wanted to get a good baseline before adding subscriptions.
My data is modeled around parking spaces in the UK: countries, their regions, and their outcodes (the first part of the postcode). It uses servicePaths to split them down to the parking lots in an outcode.
Here is a gist with the requests and mongo shell output
Performance is a complex topic which depends on many factors (deployment setup of Orion and MongoDB, startup configuration of Orion and MongoDB, hardware profile of the systems hosting the processes, network communications, overprovisioning level in the case of virtualization, injected load, etc.), so there isn't any general answer for this kind of problem. However, I'll try to provide some hints and recommendations that I hope may help.
Regarding versions, Orion 0.26.0 (or 0.26.1) is recommended over 0.25.0. We have included a lot of performance-related improvements in Orion 0.26.0. Regarding MongoDB, we have also found that 3.0 can be much better than 2.6, especially in update-intensive scenarios.
Having said that, first of all you should locate the bottleneck. Useful tools for this are top, mongostat and mongotop. It could be either Orion, MongoDB, or the network connecting them. If the bottleneck is Orion (the Context Broker), the performance tuning hints provided in this document may help. Slow-query information in MongoDB could also point to bottlenecks at Orion. If the bottleneck is MongoDB, taking into account the large number of entities you have (500,000), maybe you should consider implementing sharding. If the bottleneck is the network, colocating Orion and MongoDB may help.
Finally, some things you can also try in order to get more insight into the problem:
Run some tests outside AWS (i.e. virtual machines on local premises) to compare. I don't know much about the overprovisioning policy in AWS, but based on my previous experience with other cloud providers, VM overprovisioning (especially if it varies over time) can impact performance.
Analyze whether the performance is related to the number of entities. E.g. run tests with 500, 5,000, 50,000 and 500,000 entities and get the performance figure in each case.
Analyze whether the performance is related to the usage of servicePath, e.g. put all 500,000 entities in the default service path / (moving the current content of the servicePath somewhere else, e.g. an entity attribute or part of the entity ID string) and test. Currently Orion uses a regex to filter by servicePath, and that could be slow.

Mongo response time monitor

Is there a way to monitor the response time from the moment mongo receives a command until it returns an answer?
I couldn't find anything in MMS or in Server Density.
Is there other service that can give me that information?
Thanks
MongoDB profiling: http://www.mongodb.org/display/DOCS/Database+Profiler should be able to profile your queries for you and tell you what the response times were.
To capture every operation, set the profiling level to 2 (by default, profiling is off at level 0):
db.setProfilingLevel(2);
This writes all operations to db.system.profile. You can also use level 1 to capture only slow queries, for more targeted testing.
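Once the profiler is on, response times can be read from the millis field of the documents in db.system.profile. A minimal sketch of averaging them, using illustrative sample entries rather than output from a real server:

```python
# Sketch: compute the average response time from db.system.profile entries.
# 'millis' is the duration the profiler records; the sample docs are illustrative.

def avg_millis(profile_docs):
    """Average operation duration in milliseconds over the profiled docs."""
    times = [d["millis"] for d in profile_docs if "millis" in d]
    return sum(times) / len(times) if times else 0.0

docs = [{"op": "query", "millis": 40}, {"op": "update", "millis": 10}]
print(avg_millis(docs))  # 25.0
```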

Query to check MongoDB Current progress?

I started using MongoDB a few days ago. Everything is fine with MongoDB, but I couldn't figure out the query to check the current progress of MongoDB, i.e. a command to check which query is currently in progress, or a command to list the overall processes of MongoDB. I tried executing "mongostat", but it doesn't show which query is in progress. Please provide a remedy for this case.
Thanks in advance,
Use db.currentOp(). If it's not enough, try profiling
You'll want to use MongoDB profiling:
Database Profiling
You need to first turn on the profiler by selecting your database, then run db.setProfilingLevel(2);
From there you can start tracking your queries. If you haven't done so, I'd recommend installing MMS, Mongo's monitoring system, which is an outstanding monitoring tool.
It's helped me a ton in watching queries come through.
Setting the profiling level to 2 increases the load on the server, like enabling the general log in MySQL.
It increases disk I/O.
So it's better to monitor with db.currentOp(). Otherwise, use profiling for a short period of time and disable it after logging a few queries.
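db.currentOp() returns a document whose inprog array lists the running operations. Below is a sketch that flags the long-running ones; the helper function and the sample document are illustrative, though the secs_running field name matches what currentOp reports for elapsed time.

```python
# Sketch: pick long-running operations out of a db.currentOp()-style document.
# 'secs_running' is the elapsed-time field currentOp reports; the sample
# document below is illustrative, not captured from a real server.

def long_running(current_op, min_secs=5):
    """Return in-progress operations running for at least min_secs seconds."""
    return [op for op in current_op.get("inprog", [])
            if op.get("secs_running", 0) >= min_secs]

sample = {"inprog": [
    {"opid": 101, "op": "query",  "ns": "shop.orders", "secs_running": 12},
    {"opid": 102, "op": "insert", "ns": "shop.items",  "secs_running": 0},
]}
for op in long_running(sample):
    print(op["opid"], op["ns"], op["secs_running"])  # 101 shop.orders 12
```

On a live system the equivalent one-off check in the shell would pass a filter to currentOp itself; this sketch just shows the shape of the data you get back.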