Is there a way to monitor the response time from the moment MongoDB receives a command until it returns an answer?
I couldn't find anything in MMS or in Server Density.
Is there another service that can give me that information?
Thanks
MongoDB's database profiler (http://www.mongodb.org/display/DOCS/Database+Profiler) should be able to profile your queries for you and tell you what the response times were.
To capture every operation, set the profiling level to 2 like so:
db.setProfilingLevel(2);
This writes all operations to db.system.profile. You can also use level 1, which records only slow queries and is useful for more targeted testing.
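As a minimal mongo-shell sketch (it uses only the standard profiler commands; the limit of 5 is arbitrary), you could capture a workload and read the per-operation execution time from the millis field:

// Profile all operations (level 2) and record them in db.system.profile.
db.setProfilingLevel(2);

// ... run the commands you want to measure ...

// Each profile document carries `millis`, the server-side execution time.
db.system.profile.find().sort({ ts: -1 }).limit(5).forEach(function (op) {
    print(op.op + " on " + op.ns + ": " + op.millis + " ms");
});

// Turn the profiler off again when you're done.
db.setProfilingLevel(0);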
Can you monitor the performance of MongoDB in Grafana?
I know there is this plugin https://grafana.com/grafana/plugins/grafana-mongodb-datasource/ that helps to "visualise the data"... But I'm wondering if there is a way to monitor MongoDB performance during a JMeter load test, for example, the number of connections or the number of rows written/deleted during a test? Visualisation in Grafana would be nice but I'd be happy to start with just seeing the output somewhere ...
You could try querying MongoDB as described in this article:
https://www.blazemeter.com/blog/how-load-test-mongodb-jmeter
For monitoring connections, you could try something like db.serverStatus().connections, as detailed in this thread:
Check the current number of connections to MongoDb
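For example, a quick mongo-shell sketch (the field names come from the standard serverStatus output):

var conns = db.serverStatus().connections;
print("current: " + conns.current);            // connections open right now
print("available: " + conns.available);        // remaining capacity
print("totalCreated: " + conns.totalCreated);  // connections created since startup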
For other queries you can read the documentation here:
https://docs.mongodb.com/manual/reference/method/db.collection.count/#mongodb-method-db.collection.count
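To see how many documents a test wrote or deleted, one rough approach is to count before and after the run; here mycollection is a placeholder for your own collection, and countDocuments assumes a MongoDB 4.0+ shell:

// Count before the JMeter run...
var before = db.mycollection.countDocuments({});
// ... run the load test, then count again.
var after = db.mycollection.countDocuments({});
print("documents added during the test: " + (after - before));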
As for visualising in Grafana, I have only used that app for monitoring response-time info (avg, 95th percentile, etc.), so I'm not sure how counts and queries would be displayed.
You could possibly output it to the jmeter.log file using log.info() so that you have a record of the result...
I am a new user of MongoDB and am currently running a stress test: 100,000 records are inserted every 5 seconds across 10 threads, and we have already stored x00 million records. The database is getting severely slow. Although it speeds up for a while when I restart the computer, it slows down again after a short period. Why is that? Can I do something to avoid it?
Please share more information. What queries do you have? Please don't take this as a complete answer, because it is just a suggestion (I cannot add comments, unfortunately). You may try this:
http://docs.mongodb.org/manual/reference/command/repairDatabase/
I started using MongoDB a few days ago. Everything is fine with MongoDB, but I couldn't figure out how to check its current activity, i.e. a command to see which query is currently in progress, or a command to list the overall processes running in MongoDB. I tried executing the command "mongostat", but it doesn't show which query is in progress. Please suggest a remedy for this.
Thanks in advance.
Use db.currentOp(). If that's not enough, try profiling.
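For example, something like this (a sketch built on the standard currentOp output) shows what each active operation is doing and for how long:

// List active operations, skipping idle connections.
db.currentOp({ active: true }).inprog.forEach(function (op) {
    // `op.op` is the operation type, `op.ns` the namespace it targets.
    print(op.opid + " " + op.op + " on " + op.ns +
          " running for " + op.secs_running + "s");
});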
You'll want to do mongo profiling:
Database Profiling
You first need to turn on the profiler by selecting your database, then run db.setProfilingLevel(2);
From there you can start tracking your queries. If you haven't done so, I'd recommend installing MMS, MongoDB's monitoring system, which is an outstanding monitoring tool.
It's helped me a ton in watching queries come through.
Setting the profiling level to 2 increases the load on the server, much like enabling the general query log in MySQL: it increases disk I/O.
So it's better to monitor with db.currentOp(). Otherwise, use profiling for a short period of time and disable it after a few queries have been logged.
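If you do profile, a time-boxed sketch like this keeps the overhead short-lived; level 1 records only operations slower than the given threshold (100 ms here, an arbitrary choice):

// Record only operations slower than 100 ms.
db.setProfilingLevel(1, 100);

// ... let the workload run for a short while ...

// Review what was captured, newest first.
db.system.profile.find().sort({ ts: -1 }).limit(10);

// Disable profiling again to stop the extra disk I/O.
db.setProfilingLevel(0);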
I am preparing a small app that will aggregate data on users of my website (via socket.io). I want to insert all the data into my MongoDB every hour.
What is the best way to do that? setInterval(60000) seems a little lame :)
You can use cron, for example, and run your Node.js app as a scheduled job.
EDIT:
In cases where the program has to run continuously, setTimeout is probably one of the few possible choices (and it is quite simple to implement). Otherwise, you can offload your data to a temporary storage system such as Redis and then regularly run another Node.js program to move the data; however, this introduces a dependency on another DB system and increases complexity, depending on your scenario. Redis could also serve as a failsafe in case your main Node.js app terminates unexpectedly and would otherwise lose part or all of the data batch.
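As a minimal Node.js sketch of the timer route (the mydb/events names, the in-memory buffer, and the use of the official mongodb driver are all assumptions):

const { MongoClient } = require("mongodb");

const buffer = [];                 // filled by your socket.io handlers
const ONE_HOUR = 60 * 60 * 1000;

async function flush(client) {
    if (buffer.length === 0) return;
    const batch = buffer.splice(0, buffer.length); // drain the buffer
    await client.db("mydb").collection("events").insertMany(batch);
}

async function main() {
    const client = await MongoClient.connect("mongodb://localhost:27017");
    // Flush the in-memory batch to MongoDB once per hour.
    setInterval(() => flush(client).catch(console.error), ONE_HOUR);
}

main().catch(console.error);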
You should aggregate in real time, not once per hour.
I'd take a look at this presentation by BuddyMedia to see how they are doing real time aggregation down to the minute. I am using an adapted version of this approach for my realtime metrics and it works wonderfully.
http://www.slideshare.net/pstokes2/social-analytics-with-mongodb
Why not just hit the server with a curl request that triggers the database write? You can put the command on an hourly cron job and listen on a local port.
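For example, a crontab entry like this (the URL and port are placeholders for whatever local endpoint your app listens on) would trigger the write at the top of every hour:

0 * * * * curl -s http://localhost:3000/flush-to-mongo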
You could have mongo store the last time you copied your data and each time any request comes in you could check to see how long it's been since you last copied your data.
Or you could try setInterval(checkRestore, 60000) for once-a-minute checks. checkRestore() would query the server to see if the last-updated time is more than an hour old. There are a few ways to do that.
An easy way to store the date is to store the value of Date.now() (https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Date) and then check for something like db.logs.find({lastUpdate: {$lt: Date.now() - 3600000}}), where 3600000 is one hour in milliseconds.
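Putting those pieces together, a rough Node.js sketch (checkRestore and copyData are placeholder names, and db is assumed to be a mongojs-style handle where collections are properties and find() takes a callback):

var ONE_HOUR = 60 * 60 * 1000;

function checkRestore() {
    // If the stored timestamp is more than an hour old, copy the data
    // and record the new "last updated" time.
    db.logs.find({ lastUpdate: { $lt: Date.now() - ONE_HOUR } }, function (err, docs) {
        if (err) return console.error(err);
        if (docs.length > 0) {
            copyData();                                   // your hourly copy step
            db.logs.update({}, { $set: { lastUpdate: Date.now() } });
        }
    });
}

setInterval(checkRestore, 60000); // check once a minute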
I think I confused a few different solutions there, but hopefully something like that will work!
If you're using Node, a nice cron-like tool to use is Forever. It uses the same cron patterns to handle job repetition.
I have a .NET Core 2.1 application that allows users to search a large database, possibly combining many parameters. Data access is done through ADO.NET. Some of the generated queries end up running for a very long time (several hours). Obviously, the user gives up on waiting, but the query chugs along in SQL Server.
I realize that the root cause is the design of the app, but I would like a quick solution for now, if possible.
I have tried many solutions, but none seem to work as expected.
What I have tried:
CommandTimeout
CommandTimeout works as expected with ExecuteNonQuery, but does not work with ExecuteReader, as discussed in this forum:
When you execute command.ExecuteReader(), you don't get this exception because the server responds on time. The application doesn't respond because it reads data to the memory, and the ExecuteReader() method doesn't return control until all the data is read.
I have also tried using SqlDataAdapter, but this does not work either.
SQL Server query governor
SQL Server's query governor works off of the estimated execution plan, and while it does work sometimes, it does not always catch inefficient queries.
SQL Server execution time-out
Tools > Options > Query Execution > SQL Server > General
I'm not sure what this does, but after entering a value of 1, SQL Server still allows queries to run as long as they need. I tried restarting the server instance, but that did not make any difference.
Again, I realize that the cause of this problem is the way the queries are generated, but with so many parameters and so much data, fine-tuning a solution in the application's design may take some time. For now, we are manually killing any SPID associated with this app that has run for more than 10 minutes or so.
EDIT:
I abandoned the hope of finding a simple solution. If you're having a similar issue, here is what we did to address it:
We created a .NET Core console app that polls the database for queries running over a certain allotted time. The app looks at the login name and how long the query has been running, and decides whether to kill the process.
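For reference, a minimal C# sketch of that watchdog idea (the connection string, the SearchAppUser login name, and the 10-minute threshold are assumptions; it uses the System.Data.SqlClient package):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class QueryWatchdog
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();

            // Find sessions from the app's login running longer than 10 minutes.
            var find = new SqlCommand(@"
                SELECT r.session_id
                FROM sys.dm_exec_requests r
                JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
                WHERE s.login_name = @login
                  AND r.total_elapsed_time > 10 * 60 * 1000", conn);
            find.Parameters.AddWithValue("@login", "SearchAppUser");

            var spids = new List<short>();
            using (var reader = find.ExecuteReader())
                while (reader.Read())
                    spids.Add(reader.GetInt16(0));

            // KILL cannot be parameterized, but the session id is an
            // integer we just read, so concatenation is safe here.
            foreach (var spid in spids)
                new SqlCommand("KILL " + spid, conn).ExecuteNonQuery();
        }
    }
}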
https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.cancel?view=netframework-4.7.2
Looking through the documentation on SqlCommand.Cancel, I think it might solve your issue.
If you were to create and start a Timer before you call ExecuteReader(), you could then keep track of how long the query is running, and eventually call the Cancel method yourself.
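A sketch of that approach (the query text, connection string, and 10-minute limit are placeholders):

using System;
using System.Data.SqlClient;
using System.Threading;

class CancelAfterTimeout
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=mydb;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT /* your search query */ 1", conn))
        {
            conn.Open();

            // SqlCommand.Cancel is safe to call from another thread, so a
            // timer can cancel the command after 10 minutes.
            using (new Timer(_ => cmd.Cancel(), null,
                             TimeSpan.FromMinutes(10), Timeout.InfiniteTimeSpan))
            {
                try
                {
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read()) { /* consume rows */ }
                }
                catch (SqlException ex)
                {
                    // A cancelled command surfaces here as a SqlException.
                    Console.WriteLine("Query cancelled or failed: " + ex.Message);
                }
            }
        }
    }
}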
(Note: I wanted to add this as a comment but I don't have the reputation to be allowed to yet)