Difference between total and execution time in PostgreSQL?

When I run any SQL in PostgreSQL manager I see "execution time: 328 ms; total time: 391 ms". I'm wondering what these two times are: execution time and
total time.

Not sure what PostgreSQL manager is, but it is most likely a combination of these:
Planning time: 0.430 ms
Execution time: 150.225 ms
Planning is how long it takes Postgres to decide how to get your data. You send a query and the server works out how best to run it; that takes time.
Execution is how long it took to actually run that plan.
You can verify it yourself by sending your query like this:
EXPLAIN (ANALYZE)
SELECT something FROM my_table WHERE whatever = 5;
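For instance, in psql you can put the two numbers side by side; this is only a sketch, and the table and column names are placeholders carried over from the example above. The Planning time / Execution time lines in the EXPLAIN output are measured on the server, while the "Time:" line printed by \timing is measured at the client and also includes the network round trip and result transfer:
\timing on
EXPLAIN (ANALYZE)
SELECT something FROM my_table WHERE whatever = 5;
-- EXPLAIN output: Planning time / Execution time (server side)
-- psql's "Time:" line: total round trip as seen by the client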

Related

slow query fetch time

I am using GCP Cloud SQL (MySQL 8.0.18) and I am trying to execute a query that returns only 5000 rows:
SELECT * FROM client_1079.c_crmleads ORDER BY LeadID DESC LIMIT 5000;
but the query is taking a long time to fetch the data.
Here are the timing details:
Affected rows: 0 Found rows: 5,000 Warnings: 0 Duration for 1 query: 0.797 sec. (+ 117.609 sec. network)
Instance configuration: 8 vCPUs, 20 GB RAM, 410 GB SSD.
[screenshot of the GCP Cloud SQL instance]
I am also facing issues with a high table_open_cache and high RAM utilization.
How do I reduce table_open_cache, and how do I increase instance performance?
Looks like the size of the data retrieved is quite large, and the time spent sending the data from the SQL instance to your app is the reason for the latency observed.
You may review your use case and retrieve less information (see the sketch below), try to parallelize queries, or improve the SQL instance's I/O performance (it is related to the DB disk size).
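As a rough illustration of the "retrieve less information" suggestion (the column names other than LeadID are hypothetical, since the real schema isn't shown), selecting only the fields the application actually needs can shrink the payload that has to cross the network compared to SELECT *:
-- Hypothetical column list; substitute the fields your app really uses
SELECT LeadID, LeadName, CreatedAt
FROM client_1079.c_crmleads
ORDER BY LeadID DESC
LIMIT 5000;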

Postgres EXPLAIN ANALYZE: Total time greatly exceeds sum of parts

I have a Python script that receives external events and writes them to a Postgres database.
Most of the time the INSERT is quite fast (< 0.3 sec), but sometimes query execution time exceeds 10-15 seconds! There are about 500 events per minute, and such slow behavior is unacceptable.
The query is simple:
INSERT INTO tests (...) VALUES (...)
The table (about 5 million records) is quite simple too.
I've added 'EXPLAIN ANALYZE' before 'INSERT' in my script and it gives me this:
Insert on tests (cost=0.00..0.01 rows=1 width=94) (actual time=0.051..0.051 rows=0 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=94) (actual time=0.010..0.010 rows=1 loops=1)
Planning time: 0.014 ms
Execution time: 15624.374 ms
How can this be possible? How can I find out what it is doing for those 15 seconds?
I'm using Windows server and Postgres 9.6. Script and Postgres are on the same machine.
Additionally, I've collected Windows Performance Counter data during these episodes (disk queue length, processor time) and it showed nothing unusual.
The server is a virtual machine on VMware ESXi, but I don't know what I could examine there about this situation.
Added
These queries are run from multiple threads (several parallel scripts do that).
The INSERT is done without an explicit transaction.
There are no triggers (there was a trigger, but I removed it and nothing changed) and no foreign keys.
What if you execute it a second time?
This query is executed by several scripts about 400 times a minute in total, and most of the time it executes quickly. I cannot reproduce the long execution time in a query tool.
I will definitely try to look into pg_stat_activity next time I see this, thanks.
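One way to catch it in the act (a sketch, assuming PostgreSQL 9.6, where pg_stat_activity exposes the wait_event columns) is to poll for backends that have been running for more than a second and see what they are waiting on, for example a lock held by another session:
-- Non-idle backends running longer than 1 second and what they are waiting for
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state != 'idle'
  AND now() - query_start > interval '1 second'
ORDER BY runtime DESC;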

postgres performance - query hanging - query analysis tools and configuration question

We have a query we have been using for several months. It recently started hanging. The query is a join across 4 tables: one of those tables has only a few thousand records, one about a hundred thousand, and two about 2 million each. It had been running in about 20 seconds for several months.
After several attempts to identify the issue by adding indexes to unindexed fields, to no avail, we changed one of the large tables into a subquery that yields about 100,000 records instead of 2 million. The query now runs in about 20 seconds.
Explain of the query that hangs produces:
Limit (cost=1714850.81..1714850.81 rows=1 width=79)
While explain of the query that runs in 20 seconds produces:
Limit (cost=1389451.40..1389451.40 rows=1 width=79)
The estimated cost of the query that hangs is larger, but the difference does not look significant.
Questions:
Are there thresholds in Postgres that cause it to use system resources differently, i.e. disk buffering versus memory buffering? The query that hangs shows one CPU at 100% usage. The system is Linux; iotop does not show extraordinary I/O usage. The system has 32 GB RAM and 8 processors, and Postgres does not appear to be loading the system heavily.
Are there other tools that can be applied? The subquery worked in this case, but we may not be able to reduce a join's dimensions in this way in the future.
As a note, the full EXPLAIN does not show a markedly different execution plan (a sketch using EXPLAIN (ANALYZE, BUFFERS) follows below).
Thanks, Dan
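One tool worth trying here (a sketch; the placeholder query merely stands in for the real 4-table join, which isn't shown): EXPLAIN with the ANALYZE and BUFFERS options actually executes the statement and reports per-node actual row counts, timings, and buffer/disk activity, which often shows where the planner's estimates diverge from reality. Since ANALYZE runs the query, set a statement_timeout or add a LIMIT before trying it on the variant that hangs.
-- Actually executes the statement; reports actual rows, per-node time, and buffer usage
EXPLAIN (ANALYZE, BUFFERS)
SELECT something FROM my_table WHERE whatever = 5;  -- placeholder for the real join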

Google Cloud PostgreSQL: Utilization remains at 100%

I am using Google Cloud PostgreSQL, and CPU utilization is at 100%. I upgraded the instance to use 2 cores; it is now running on 2 CPUs and 3.75 GB of RAM, yet it still uses 100% of the CPU. I then upgraded the instance again to 6 cores and 12 GB of RAM, but there was still no change in CPU utilization. Here are some stats metrics:
I would like any thoughts on why this is happening and how I can figure out a solution.
I have checked the number of queries running on PostgreSQL: the number of queries is less than 100 and execution time is less than 30 seconds. The PostgreSQL version is 9.6.
I've been doing this daily now, so I'll share how I debug this problem.
First of all, install the pg_stat_statements extension so that the server stores execution statistics for every SQL statement it runs (enabling it is sketched below).
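Roughly, enabling it looks like this (a sketch; on managed services such as Cloud SQL the preload step may already be handled for you or is set through database flags rather than postgresql.conf):
-- In postgresql.conf (requires a restart), if not already preloaded:
-- shared_preload_libraries = 'pg_stat_statements'
-- Then, in the database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;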
After that, it's easy...
This query will show the most "expensive" queries:
SELECT substring(query, 1, 50) AS short_query,
round(total_time::numeric, 2) AS total_time,
calls,
round(mean_time::numeric, 2) AS mean,
round(max_time::numeric, 2) AS max_time,
round((100 * total_time / sum(total_time::numeric) OVER ())::numeric, 2) AS percentage_cpu,
query
FROM pg_stat_statements
ORDER BY total_time DESC LIMIT 10
And this one to reset the statistics, useful when you want to debug a specific period:
SELECT pg_stat_statements_reset()
In order to see which queries are running currently on the server:
-- On PostgreSQL 9.2+ idle backends are filtered with the state column rather than query = '<IDLE>'
SELECT usename, pid, client_addr, query, query_start, NOW() - query_start AS elapsed
FROM pg_stat_activity
WHERE state != 'idle'
-- AND EXTRACT(EPOCH FROM (NOW() - query_start)) > 1
ORDER BY elapsed DESC;
If you have a better way to debug performance, please tell me!
Also, if any GCP engineers are reading this, please expose more metrics that would let us trace the problem. For example, per-process CPU on the server could tell us which database/schema is taking too much CPU (a rough per-database view is sketched below).
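In the meantime, pg_stat_database at least gives per-database activity counters (not CPU, but it helps show which database is busiest); this is just a sketch:
-- Rough per-database activity; high blks_read relative to blks_hit hints at I/O-heavy databases
SELECT datname, numbackends, xact_commit, xact_rollback,
       blks_read, blks_hit, tup_returned, tup_fetched
FROM pg_stat_database
ORDER BY xact_commit DESC;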
EDIT:
Google released Query Insights, and it's useful when you don't want to get your hands dirty!
I still use pg_stat_statements!

Long running PSQL queries timing out randomly - cannot analyse

I have some PostgreSQL queries running on RDS. 90% of the time these queries run fine, but occasionally they randomly time out and never complete. I have enabled logging and auto_explain; however, auto_explain only logs query plans for queries that complete. If I increase statement_timeout, the queries still continue to time out at random intervals with no explanation.
Has anyone seen this issue before or have any idea how to analyse queries that do not complete?
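I haven't solved this exact case, but one hedged starting point is to make the server log more detail around the timeouts. The settings below are standard PostgreSQL parameters (on RDS they go into the instance's parameter group): log_lock_waits reports queries that wait on a lock longer than deadlock_timeout, and statements cancelled by statement_timeout are logged at ERROR level together with their text, so the log at least shows which statements were killed and whether they were blocked.
-- Log any statement running longer than 5 seconds (value in milliseconds)
log_min_duration_statement = 5000
-- Log queries that wait longer than deadlock_timeout (default 1s) for a lock
log_lock_waits = on
-- Statements cancelled by statement_timeout appear in the log via log_min_error_statement,
-- which already defaults to 'error'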