After reading around on the internet I am still not sure.
Using different tools like the mongo command line or the Robo 3T GUI, I see that my query takes about 70ms to return results.
At the same time, if I use explain, it reports an executionTimeMillis of 14ms.
The connection is already established, so there should be no overhead there, and yet the difference is around 5x.
What are your thoughts?
explain.executionStats.executionTimeMillis
Total time in milliseconds required for query plan selection and query execution.
Response time:
The time between the start and the end of the query, as seen by the client. It includes the following:
wait time (processor cycles spent waiting, if any) + executionTimeMillis (plan selection and execution) + the time taken to return the response (up to the last byte of the response).
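As a rough illustration, here is a minimal sketch of that difference, assuming a local MongoDB instance, pymongo, and a hypothetical test.users collection with a hypothetical filter (neither comes from the question above):

    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed local instance
    coll = client["test"]["users"]                     # hypothetical collection
    query = {"age": {"$gte": 30}}                      # hypothetical filter

    # Client-side view: wall-clock time to run the query and pull every document.
    start = time.perf_counter()
    docs = list(coll.find(query))
    client_ms = (time.perf_counter() - start) * 1000

    # Server-side view: executionTimeMillis reported by the explain command.
    plan = client["test"].command(
        {"explain": {"find": "users", "filter": query},
         "verbosity": "executionStats"}
    )
    server_ms = plan["executionStats"]["executionTimeMillis"]

    print(f"client round trip: {client_ms:.1f} ms, server execution: {server_ms} ms")

The gap between the two numbers is roughly the client/driver overhead plus the time needed to ship and deserialize the result documents, which is exactly the part executionTimeMillis does not measure.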
Related
I am facing an issue with inserts in MongoDB. When I use insert_one to insert data (running the code from IDEA), each call takes a few seconds to respond on average, while using insert_many to insert all of the data at once only takes a few seconds in total. However, the interesting thing is that the same data inserted using Navicat takes milliseconds.
Another interesting point is that when I run the code on a MacBook, it is several times faster than on Windows, and the time difference between the two is obvious. In other high-spec Windows environments it is also very slow.
By the way, when I query data, the response is fast in any environment.
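For reference, a minimal sketch of the two insert patterns being compared, assuming a local MongoDB instance and a hypothetical test.items collection; the per-call round trip in the insert_one loop is usually what dominates:

    import time
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["test"]["items"]  # hypothetical collection
    docs = [{"n": i} for i in range(1000)]                            # sample data

    # Pattern 1: one round trip (and one acknowledgement) per document.
    start = time.perf_counter()
    for doc in docs:
        coll.insert_one(dict(doc))  # insert a copy so the originals stay unmodified
    print("insert_one loop:", time.perf_counter() - start, "s")

    coll.delete_many({})  # reset for the second measurement

    # Pattern 2: a single bulk round trip for the whole batch.
    start = time.perf_counter()
    coll.insert_many(docs)
    print("insert_many:    ", time.perf_counter() - start, "s")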
I've been trying to calculate the time my queries take to complete in PostgreSQL.
I've written a bash script to issue a query using the following command "psql < query1.txt > /dev/null". However, the time measured using EXPLAIN is significantly different than the time measured using my bash script.
For one of the queries that returns 200,000+ rows, using the bash script, I got 13 seconds average elapsed time. But when I use EXPLAIN, the JSON file shows that it should take 218.735 milliseconds.
Is there a way to find out where this extra time comes from?
I'm assuming that this happens because of the huge number of rows returned by the query. Is there a way to test a SELECT query without outputting its rows?
Note: I've also used a java application to measure the elapsed time. I got 1.2 seconds compared to 218.735 milliseconds with EXPLAIN command.
Can it be that EXPLAIN is not accurate?
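One way to separate the server-side execution time from the time spent shipping and printing rows is to compare EXPLAIN ANALYZE (which runs the query but returns no result rows) with a client that fetches the rows and discards them. A minimal sketch, assuming a local database and a hypothetical table big_table (not the actual query from the question), using psycopg2:

    import time
    import psycopg2

    cur = psycopg2.connect("dbname=test").cursor()  # assumed local database
    sql = "SELECT * FROM big_table"                 # hypothetical 200,000-row query

    # Server-side cost only: EXPLAIN ANALYZE executes the query on the server
    # but never sends the result rows to the client.
    cur.execute("EXPLAIN ANALYZE " + sql)
    for (line,) in cur.fetchall():
        print(line)

    # Client-side cost: execute, pull every row over the wire, and discard it
    # without formatting or printing it.
    start = time.perf_counter()
    cur.execute(sql)
    while cur.fetchmany(10_000):
        pass
    print("execute + fetch, no output:", (time.perf_counter() - start) * 1000, "ms")

Inside psql itself, one option is \timing on combined with \o /dev/null: the query still runs and the rows are still transferred, but nothing is written to the terminal.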
I'm running a query out of pgAdmin 4 against a Postgres 9.5 database. Is there any method to get an estimation on how long this query will run? It is running now for nearly 20 hours.
I only found info about logging and similar to get the actual execution time after the query finished.
The query orders about 300,000 PostGIS points using st_distance in a recursive CTE.
Does SQL or Postgres have any mechanism to prevent infinitely running queries? The recursion should stop at some point; is there maybe a way to peek at the last constructed row, which would give me a hint as to how far along the recursion is?
If your transaction is in a deadlock, PostgreSQL may resolve it by killing one (or some) of the transactions involved.
Otherwise, you have to know what you're doing.
When you use EXPLAIN (without ANALYZE), the planner only estimates the cost of your query, and this value has to be taken as relative, not absolute.
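To make that concrete, here is a minimal sketch, assuming a local database and a hypothetical table points: plain EXPLAIN prints only the planner's cost estimates (in arbitrary units), while EXPLAIN ANALYZE actually executes the statement and reports measured times in milliseconds:

    import psycopg2

    cur = psycopg2.connect("dbname=test").cursor()  # assumed local database

    # Planner estimates only: the statement is NOT executed, and the cost
    # numbers are in arbitrary planner units rather than milliseconds.
    cur.execute("EXPLAIN SELECT * FROM points ORDER BY id")
    for (line,) in cur.fetchall():
        print(line)

    # The statement is actually executed here; the "actual time" values in
    # the output are real milliseconds measured during this run.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM points ORDER BY id")
    for (line,) in cur.fetchall():
        print(line)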
I just want to know the reason for getting different timings while executing the same query in PostgreSQL.
For example: select * from datas;
The first time it takes 45ms.
The second time the same query takes 55ms, and the next time it takes some other value again. Can anyone say what the reason is for this non-constant timing?
Simple: every time, the database has to read the whole table and retrieve the rows. There might be 100 different things happening in the database which could cause a difference of a few milliseconds. There is no need to panic; this is bound to happen. You can expect the operation to take roughly the same time, give or take a few milliseconds. If there is a huge difference, then it is something that has to be looked into.
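A quick way to see this jitter for yourself is to run the identical statement several times in a row and record the timings. A minimal sketch, assuming a local database and the datas table from the question:

    import time
    import psycopg2

    cur = psycopg2.connect("dbname=test").cursor()  # assumed local database

    # Run the identical statement several times; the first (cold-cache) run is
    # often the slowest, and the remaining runs still vary by a few milliseconds
    # depending on what else the machine is doing at that moment.
    for i in range(5):
        start = time.perf_counter()
        cur.execute("SELECT * FROM datas")
        cur.fetchall()
        print(f"run {i + 1}: {(time.perf_counter() - start) * 1000:.1f} ms")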
Have you applied indexing to your table? It also increases speed a great deal!
Compiling the explanation from
Reference by matt b
The EXPLAIN statement helps us display the execution plan that the PostgreSQL planner generates for the supplied statement.
The execution plan shows how the table(s) referenced by the statement will be scanned — by plain sequential scan, index scan, etc. — and if multiple tables are referenced, what join algorithms will be used to bring together the required rows from each input table.
And Reference by Pablo Santa Cruz
You need to change your PostgreSQL configuration file.
Enable this property (the default of -1 disables it; set it to 0 or to a threshold in milliseconds):
log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
After that, execution times will be logged and you will be able to figure out exactly how badly (or how well) your queries are performing.
Well, that's the case with just about every app on every computer. Sometimes the operating system is busier than at other times, so it takes longer to get the memory you ask for, or your app gets fewer CPU time slices, or whatever.
I have an application written on Play Framework 1.2.4 with Hibernate(default C3P0 connection pooling) and PostgreSQL database (9.1).
Recently I turned on slow query logging (>= 100 ms) in postgresql.conf and found some issues.
But when I tried to analyze and optimize one particular query, I found that it is blazing fast in psql (0.5 - 1 ms) in comparison to 200-250 ms in the log. The same thing happened with the other queries.
The application and the database server are running on the same machine and communicate over the localhost interface.
JDBC driver - postgresql-9.0-801.jdbc4
I wonder what could be wrong, because the query duration in the log is calculated from database processing time only, excluding external things like network round trips, etc.
Possibility 1: If the slow queries occur occasionally or in bursts, it could be checkpoint activity. Enable checkpoint logging (log_checkpoints = on), make sure the log level (log_min_messages) is 'info' or lower, and see what turns up. Checkpoints that are taking a long time or happening too often suggest you probably need some checkpoint/WAL and bgwriter tuning. This isn't likely to be the cause if the same statements are always slow and others always perform well.
Possibility 2: Your query plans are different because you're running them directly in psql while Hibernate, via PgJDBC, will at least sometimes be doing a PREPARE and EXECUTE (at the protocol level so you won't see actual statements). For this, compare query performance with PREPARE test_query(...) AS SELECT ... then EXPLAIN ANALYZE EXECUTE test_query(...). The parameters in the PREPARE are type names for the positional parameters ($1,$2,etc); the parameters in the EXECUTE are values.
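As a minimal sketch of that comparison (assuming a hypothetical users table with an email column; the same statements can be typed straight into psql, here they are just issued through psycopg2):

    import psycopg2

    cur = psycopg2.connect("dbname=test").cursor()  # assumed local database

    # One-off plan: the planner sees the literal value and can use it to choose a plan.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'a@example.com'")
    print("\n".join(row[0] for row in cur.fetchall()))

    # Prepared-statement plan: text is the type of the positional parameter $1,
    # and the plan may be a generic one chosen without knowing the actual value.
    cur.execute("PREPARE test_query(text) AS SELECT * FROM users WHERE email = $1")
    cur.execute("EXPLAIN ANALYZE EXECUTE test_query('a@example.com')")
    print("\n".join(row[0] for row in cur.fetchall()))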
If the prepared plan is different to the one-off plan, you can set PgJDBC's prepare threshold via connection parameters to tell it never to use server-side prepared statements.
This difference between the plans of prepared and unprepared statements should go away in PostgreSQL 9.2. It's been a long-standing wart, but Tom Lane dealt with it for the upcoming release.
It's very hard to say for sure without knowing all the details of your system, but I can think of a couple of possibilities:
The data the query reads is cached. If you run the same query twice in a short space of time, it will almost always complete much more quickly on the second pass. PostgreSQL maintains a cache of recently retrieved data for just this purpose. If you are pulling the queries from the tail of your log and executing them immediately, this could be what's happening.
Other processes are interfering. The execution time for a query varies depending on what else is going on in the system. If the queries are taking 100ms during peak hour on your website when a lot of users are connected but only 1ms when you try them again late at night this could be what's happening.
The point is you are correct that the query duration isn't affected by which library or application is calling it, so the difference must be coming from something else. Keep looking, good luck!
There are several possible reasons. First, if the database was very busy when the slow queries executed, the queries may be slower. So you may need to observe the load on the OS at that moment for future analysis.
Second, the historical plan of the SQL may be different from the current session's plan. So you may need to install auto_explain to see the actual plan of the slow query.