PostgreSQL IN operator performance: list vs subquery

For a list of ~700 ids, the query is over 20x slower than passing a subquery that returns those same 700 ids. I would expect the opposite.
For example, the first query below takes under 400 ms, the second about 9600 ms:
select date_trunc('month', day) as month, sum(total)
from table_x
where y_id in (select id from table_y where prop = 'xyz')
and day between '2015-11-05' and '2016-11-04'
group by month
is 20x faster on my machine than passing the array directly:
select date_trunc('month', day) as month, sum(total)
from table_x
where y_id in (1625, 1871, ..., 1640, 1643, 13291, 1458, 13304, 1407, 1765)
and day between '2015-11-05' and '2016-11-04'
group by month
Any idea what could be the problem or how to optimize and obtain the same performance?

The difference is a simple filter vs a hash join:
explain analyze
select i
from t
where i in (500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Seq Scan on t (cost=0.00..140675.00 rows=101 width=4) (actual time=0.648..1074.567 rows=101 loops=1)
Filter: (i = ANY ('{500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600}'::integer[]))
Rows Removed by Filter: 999899
Planning time: 0.170 ms
Execution time: 1074.624 ms
explain analyze
select i
from t
where i in (select i from r);
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Hash Semi Join (cost=3.27..17054.40 rows=101 width=4) (actual time=0.382..240.389 rows=101 loops=1)
Hash Cond: (t.i = r.i)
-> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=4) (actual time=0.030..117.193 rows=1000000 loops=1)
-> Hash (cost=2.01..2.01 rows=101 width=4) (actual time=0.074..0.074 rows=101 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Seq Scan on r (cost=0.00..2.01 rows=101 width=4) (actual time=0.010..0.035 rows=101 loops=1)
Planning time: 0.245 ms
Execution time: 240.448 ms
To get the same performance, join against the unnested array:
explain analyze
select i
from
t
inner join
unnest(
array[500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600]::int[]
) u (i) using (i)
;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Hash Join (cost=2.25..18178.25 rows=100 width=4) (actual time=0.267..243.768 rows=101 loops=1)
Hash Cond: (t.i = u.i)
-> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=4) (actual time=0.022..118.709 rows=1000000 loops=1)
-> Hash (cost=1.00..1.00 rows=100 width=4) (actual time=0.063..0.063 rows=101 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Function Scan on unnest u (cost=0.00..1.00 rows=100 width=4) (actual time=0.028..0.041 rows=101 loops=1)
Planning time: 0.172 ms
Execution time: 243.816 ms
Or use the VALUES syntax:
explain analyze
select i
from t
where i = any (values (500),(501),(502),(503),(504),(505),(506),(507),(508),(509),(510),(511),(512),(513),(514),(515),(516),(517),(518),(519),(520),(521),(522),(523),(524),(525),(526),(527),(528),(529),(530),(531),(532),(533),(534),(535),(536),(537),(538),(539),(540),(541),(542),(543),(544),(545),(546),(547),(548),(549),(550),(551),(552),(553),(554),(555),(556),(557),(558),(559),(560),(561),(562),(563),(564),(565),(566),(567),(568),(569),(570),(571),(572),(573),(574),(575),(576),(577),(578),(579),(580),(581),(582),(583),(584),(585),(586),(587),(588),(589),(590),(591),(592),(593),(594),(595),(596),(597),(598),(599),(600))
;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Hash Semi Join (cost=2.53..17053.65 rows=101 width=4) (actual time=0.279..239.888 rows=101 loops=1)
Hash Cond: (t.i = "*VALUES*".column1)
-> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=4) (actual time=0.022..117.199 rows=1000000 loops=1)
-> Hash (cost=1.26..1.26 rows=101 width=4) (actual time=0.059..0.059 rows=101 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
-> Values Scan on "*VALUES*" (cost=0.00..1.26 rows=101 width=4) (actual time=0.002..0.027 rows=101 loops=1)
Planning time: 0.242 ms
Execution time: 239.933 ms

Try to change the critical line to this:
where y_id = any (values (1625), (1871), ..., (1640), (1643), (13291), (1458), (13304), (1407), (1765))
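Applied to the full query from the question, the rewrite would look roughly like this (the id list is abbreviated exactly as in the question):
select date_trunc('month', day) as month, sum(total)
from table_x
where y_id = any (values (1625), (1871), ..., (1640), (1643), (13291), (1458), (13304), (1407), (1765))
and day between '2015-11-05' and '2016-11-04'
group by month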

Related

How to get "memory usage" on postgres CTE query plan?

I'm comparing different query options in order to improve performance, and I cannot understand why my CTE query plan does not show the "Memory Usage" line when I run it with EXPLAIN ANALYZE.
Here's my query:
EXPLAIN ANALYZE WITH
CTE AS (
SELECT
id,
jsonb_agg(CASE WHEN each_result->>'another_key' = 'A'
THEN jsonb_set(each_result, '{another_key}', '"B"', false)
ELSE each_result
END
) AS result_updated
FROM my_table, jsonb_array_elements("column"->'key') AS each_result
GROUP BY id
)
UPDATE my_table
SET "column" = jsonb_set("column", '{each_result}', (SELECT result_updated FROM CTE WHERE CTE.id = my_table.id), false)
;
And here's the query plan result:
QUERY PLAN
-------------------------------------------------------------------------------------------------------
Update on my_table (cost=3305.32..25976.06 rows=1003 width=83) (actual time=4071.870..4071.870 rows=0 loops=1)
CTE cte
-> HashAggregate (cost=3292.78..3305.32 rows=1003 width=48) (actual time=536.656..618.699 rows=1003 loops=1)
Group Key: my_table_1.id
-> Nested Loop (cost=0.01..2039.04 rows=100300 width=48) (actual time=0.323..59.200 rows=78234 loops=1)
-> Seq Scan on my_table my_table_1 (cost=0.00..33.03 rows=1003 width=34) (actual time=0.023..0.289 rows=1003 loops=1)
-> Function Scan on jsonb_array_elements step (cost=0.01..1.00 rows=100 width=32) (actual time=0.044..0.049 rows=78 loops=1003)
-> Seq Scan on my_table (cost=0.00..22670.74 rows=1003 width=83) (actual time=629.743..3701.285 rows=1003 loops=1)
SubPlan 2
-> CTE Scan on cte (cost=0.00..22.57 rows=5 width=32) (actual time=1.992..3.482 rows=1 loops=1003)
Filter: (id = my_table.id)
Rows Removed by Filter: 1002
Planning time: 0.458 ms
Execution time: 4081.778 ms
(14 rows)
My other option is an UPDATE ... FROM:
EXPLAIN ANALYZE UPDATE my_table
SET "column" = jsonb_set("column", '{each_result}', q2.result_updated, false)
FROM (
SELECT
id,
jsonb_agg(CASE WHEN each_result->>'another_key' = 'A'
THEN jsonb_set(each_result, '{another_key}', '"B"', false)
ELSE each_result
END
) AS result_updated
FROM my_table, jsonb_array_elements("column"->'key') AS each_result
GROUP BY id
) q2
WHERE q2.id = my_table.id
;
When I run EXPLAIN ANALYZE for this query, I do get the "Memory Usage" info:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Update on my_table (cost=3748.30..4223.44 rows=1001 width=153) (actual time=992.003..992.003 rows=0 loops=1)
-> Hash Join (cost=3748.30..4223.44 rows=1001 width=153) (actual time=633.204..758.817 rows=1004 loops=1)
Hash Cond: (my_table.id = q2.id)
-> Seq Scan on my_table (cost=0.00..460.01 rows=1001 width=67) (actual time=0.031..0.550 rows=1004 loops=1)
-> Hash (cost=3735.79..3735.79 rows=1001 width=120) (actual time=632.951..632.951 rows=1004 loops=1)
Buckets: 1024 (originally 1024) Batches: 2 (originally 1) Memory Usage: 32791kB
-> Subquery Scan on q2 (cost=3713.26..3735.79 rows=1001 width=120) (actual time=537.310..605.349 rows=1004 loops=1)
-> HashAggregate (cost=3713.26..3725.78 rows=1001 width=48) (actual time=537.306..604.269 rows=1004 loops=1)
Group Key: my_table_1.id
-> Nested Loop (cost=0.01..2462.01 rows=100100 width=48) (actual time=0.373..47.725 rows=78312 loops=1)
-> Seq Scan on my_table my_table_1 (cost=0.00..460.01 rows=1001 width=34) (actual time=0.014..0.483 rows=1004 loops=1)
-> Function Scan on jsonb_array_elements step (cost=0.01..1.00 rows=100 width=32) (actual time=0.036..0.039 rows=78 loops=1004)
Planning time: 0.924 ms
Execution time: 998.818 ms
(14 rows)
How can I get the memory usage information on my CTE query too?

How can I speed up my PostgreSQL SELECT function that uses a list for its WHERE clause?

I have a SELECT function that takes in a list of symbols as a parameter.
CREATE OR REPLACE FUNCTION api.stats(p_stocks text[])
RETURNS TABLE(symbol character, industry text, adj_close money, week52high money, week52low money, marketcap money,
pe_ratio int, beta numeric, dividend_yield character)
as $$
SELECT DISTINCT ON (t1.symbol) t1.symbol,
t3.industry,
cast(t2.adj_close as money),
cast(t1.week52high as money),
cast(t1.week52low as money),
cast(t1.marketcap as money),
cast(t1.pe_ratio as int),
ROUND(t1.beta,2),
to_char(t1.dividend_yield * 100, '99D99%%')
FROM api.security_stats as t1
LEFT JOIN api.security_price as t2 USING (symbol)
LEFT JOIN api.security as t3 USING (symbol)
WHERE symbol = any($1) ORDER BY t1.symbol, t2.date DESC
$$ language sql
PARALLEL SAFE;
I'm trying to speed up the underlying query by adding indexes and other methods. That cut my query time in half, but only when the list has ONE value; it's still pretty slow with more than one value.
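The indexes only show up by name in the plans below (symbol_stats_idx, symbol_price_idx, symbol_security_idx); assuming they are plain b-tree indexes on the symbol columns, they would have been created along these lines:
CREATE INDEX symbol_stats_idx ON api.security_stats (symbol);
CREATE INDEX symbol_price_idx ON api.security_price (symbol);
CREATE INDEX symbol_security_idx ON api.security (symbol);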
For brevity, I've added the original select statement below, with only one symbol as a parameter, AAPL:
SELECT DISTINCT ON (t1.symbol) t1.symbol,
t3.industry,
cast(t2.adj_close as money),
cast(t1.week52high as money),
cast(t1.week52low as money),
cast(t1.marketcap as money),
cast(t1.pe_ratio as int),
ROUND(t1.beta,2),
to_char(t1.dividend_yield * 100, '99D99%%')
FROM api.security_stats as t1
LEFT JOIN api.security_price as t2 USING (symbol)
LEFT JOIN api.security as t3 USING (symbol)
WHERE symbol = 'AAPL' ORDER BY t1.symbol, t2.date DESC;
Here are the details on performance:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=71365.86..72083.62 rows=52 width=130) (actual time=828.301..967.263 rows=1 loops=1)
-> Sort (cost=71365.86..72083.62 rows=287101 width=130) (actual time=828.299..946.342 rows=326894 loops=1)
Sort Key: t2.date DESC
Sort Method: external merge Disk: 33920kB
-> Hash Right Join (cost=304.09..25710.44 rows=287101 width=130) (actual time=0.638..627.083 rows=326894 loops=1)
Hash Cond: ((t2.symbol)::text = (t1.symbol)::text)
-> Bitmap Heap Scan on security_price t2 (cost=102.41..16523.31 rows=5417 width=14) (actual time=0.317..2.658 rows=4478 loops=1)
Recheck Cond: ((symbol)::text = 'AAPL'::text)
Heap Blocks: exact=153
-> Bitmap Index Scan on symbol_price_idx (cost=0.00..101.06 rows=5417 width=0) (actual time=0.292..0.293 rows=4478 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
-> Hash (cost=201.02..201.02 rows=53 width=79) (actual time=0.290..0.295 rows=73 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 17kB
-> Nested Loop Left Join (cost=4.98..201.02 rows=53 width=79) (actual time=0.062..0.252 rows=73 loops=1)
Join Filter: ((t1.symbol)::text = (t3.symbol)::text)
-> Bitmap Heap Scan on security_stats t1 (cost=4.70..191.93 rows=53 width=57) (actual time=0.046..0.195 rows=73 loops=1)
Recheck Cond: ((symbol)::text = 'AAPL'::text)
Heap Blocks: exact=73
-> Bitmap Index Scan on symbol_stats_idx (cost=0.00..4.69 rows=53 width=0) (actual time=0.029..0.029 rows=73 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
-> Materialize (cost=0.28..8.30 rows=1 width=26) (actual time=0.000..0.000 rows=1 loops=73)
-> Index Scan using symbol_security_idx on security t3 (cost=0.28..8.29 rows=1 width=26) (actual time=0.011..0.011 rows=1 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
Planning Time: 0.329 ms
Execution Time: 973.894 ms
Now I will take the same SELECT statement above and change the WHERE clause to WHERE symbol IN ('AAPL','TSLA') to replicate the original function mentioned first.
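For reference, the work_mem change mentioned in the edit below can be applied per session like this:
SET work_mem = '10MB';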
EDIT: Here is the new test using multiple values, after I changed work_mem to 10MB:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=253542.02..255477.13 rows=101 width=130) (actual time=5239.415..5560.114 rows=2 loops=1)
-> Sort (cost=253542.02..254509.58 rows=387022 width=130) (actual time=5239.412..5507.122 rows=430439 loops=1)
Sort Key: t1.symbol, t2.date DESC
Sort Method: external merge Disk: 43056kB
-> Hash Left Join (cost=160938.84..191162.40 rows=387022 width=130) (actual time=2558.718..3509.201 rows=430439 loops=1)
Hash Cond: ((t1.symbol)::text = (t2.symbol)::text)
-> Hash Left Join (cost=50.29..400.99 rows=107 width=79) (actual time=0.617..0.864 rows=112 loops=1)
Hash Cond: ((t1.symbol)::text = (t3.symbol)::text)
-> Bitmap Heap Scan on security_stats t1 (cost=9.40..359.81 rows=107 width=57) (actual time=0.051..0.246 rows=112 loops=1)
Recheck Cond: ((symbol)::text = ANY ('{AAPL,TSLA}'::text[]))
Heap Blocks: exact=112
-> Bitmap Index Scan on symbol_stats_idx (cost=0.00..9.38 rows=107 width=0) (actual time=0.030..0.031 rows=112 loops=1)
Index Cond: ((symbol)::text = ANY ('{AAPL,TSLA}'::text[]))
-> Hash (cost=28.73..28.73 rows=973 width=26) (actual time=0.558..0.559 rows=973 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 64kB
-> Seq Scan on security t3 (cost=0.00..28.73 rows=973 width=26) (actual time=0.009..0.274 rows=973 loops=1)
-> Hash (cost=99479.91..99479.91 rows=3532691 width=14) (actual time=2537.403..2537.404 rows=3532691 loops=1)
Buckets: 262144 Batches: 32 Memory Usage: 6170kB
-> Seq Scan on security_price t2 (cost=0.00..99479.91 rows=3532691 width=14) (actual time=0.302..1347.778 rows=3532691 loops=1)
Planning Time: 1.409 ms
Execution Time: 5569.160 ms
I managed to solve the problem by removing adj_close from my original query. My function is now fast. Thank you for helping me pinpoint the problem in my query plan.
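For reference, dropping adj_close means the join against api.security_price (and the ORDER BY t2.date DESC that goes with it) can be removed entirely; the reworked function body presumably looks something like this sketch:
SELECT DISTINCT ON (t1.symbol) t1.symbol,
t3.industry,
cast(t1.week52high as money),
cast(t1.week52low as money),
cast(t1.marketcap as money),
cast(t1.pe_ratio as int),
ROUND(t1.beta,2),
to_char(t1.dividend_yield * 100, '99D99%%')
FROM api.security_stats as t1
LEFT JOIN api.security as t3 USING (symbol)
WHERE symbol = any($1) ORDER BY t1.symbol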

Planner not using index order to sort the records using CTE

I am trying to pass some ids into an IN clause on a sorted index, with an ORDER BY matching the index order, but the query planner explicitly sorts the data after performing the index scan. Below are my queries.
Generate a temporary table:
SELECT a.n/20 as n, md5(a.n::TEXT) as b INTO temp_table
From generate_series(1, 100000) as a(n);
Create an index:
CREATE INDEX idx_temp_table ON temp_table(n ASC, b ASC);
In the query below, the planner uses the index ordering and doesn't explicitly sort the data (expected):
EXPLAIN ANALYSE
SELECT * from
temp_table WHERE n = 10
ORDER BY n, b
limit 5;
Query plan:
QUERY PLAN
Limit (cost=0.42..16.07 rows=5 width=36) (actual time=0.098..0.101 rows=5 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..1565.17 rows=500 width=36) (actual time=0.095..0.098 rows=5 loops=1)
Index Cond: (n = 10)
Heap Fetches: 5
Planning time: 0.551 ms
Execution time: 0.128 ms
But when I take one or more ids from a CTE and pass them to the IN clause, the planner uses the index only to fetch the values and then explicitly sorts them afterwards (not expected).
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
Then the planner uses the query plan below:
QUERY PLAN
Limit (cost=85.18..85.20 rows=5 width=37) (actual time=0.073..0.075 rows=5 loops=1)
CTE cte
-> Values Scan on "*VALUES*" (cost=0.00..0.03 rows=2 width=4) (actual time=0.001..0.002 rows=2 loops=1)
-> Sort (cost=85.16..85.26 rows=40 width=37) (actual time=0.072..0.073 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.47..84.50 rows=40 width=37) (actual time=0.037..0.056 rows=40 loops=1)
-> Unique (cost=0.05..0.06 rows=2 width=4) (actual time=0.009..0.010 rows=2 loops=1)
-> Sort (cost=0.05..0.06 rows=2 width=4) (actual time=0.009..0.010 rows=2 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.04 rows=2 width=4) (actual time=0.004..0.005 rows=2 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..42.02 rows=20 width=37) (actual time=0.012..0.018 rows=20 loops=2)
Index Cond: (n = cte.x)
Heap Fetches: 40
Planning time: 0.166 ms
Execution time: 0.101 ms
I tried adding an explicit sort when passing the ids in the WHERE clause, so that the sorted order of the ids is maintained, but the planner still sorted explicitly:
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
Query plan:
QUERY PLAN
Limit (cost=42.62..42.63 rows=5 width=37) (actual time=0.042..0.044 rows=5 loops=1)
CTE cte
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
-> Sort (cost=42.61..42.66 rows=20 width=37) (actual time=0.042..0.042 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.46..42.28 rows=20 width=37) (actual time=0.025..0.033 rows=20 loops=1)
-> HashAggregate (cost=0.05..0.06 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)
Group Key: cte.x
-> Sort (cost=0.03..0.04 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.02 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..42.02 rows=20 width=37) (actual time=0.014..0.020 rows=20 loops=1)
Index Cond: (n = cte.x)
Heap Fetches: 20
Planning time: 0.167 ms
Execution time: 0.074 ms
Can anyone explain why the planner is using an explicit sort on the data? Is there a way to bypass this and make the planner use the index sort order, so the additional sort on the records can be saved? In production we have a similar case, but the size of our selection is much bigger, while only a handful of records needs to be fetched with pagination. Thanks in anticipation!
It is actually a decision made by the planner: with a larger set of VALUES, Postgres will switch to a smarter plan, with the sort done before the merge.
select version();
\echo +++++ Original
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo +++++ TEN Values
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10),(11),(12),(13),(14),(15),(16),(17),(18),(19)
)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo ++++++++ one row from table
EXPLAIN ANALYSE
WITH cte(x) AS (SELECT n FROM temp_table WHERE n = 10)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo ++++++++ one row from table TWO ctes
EXPLAIN ANALYSE
WITH val(x) AS (VALUES (10))
, cte(x) AS (
SELECT n FROM temp_table WHERE n IN (select x from val)
)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
Resulting plans:
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 11.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit
(1 row)
+++++ Original
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=13.72..13.73 rows=5 width=37) (actual time=0.197..0.200 rows=5 loops=1)
CTE cte
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
-> Sort (cost=13.71..13.76 rows=20 width=37) (actual time=0.194..0.194 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.44..13.37 rows=20 width=37) (actual time=0.083..0.097 rows=20 loops=1)
-> HashAggregate (cost=0.02..0.03 rows=1 width=4) (actual time=0.018..0.018 rows=1 loops=1)
Group Key: cte.x
-> CTE Scan on cte (cost=0.00..0.02 rows=1 width=4) (actual time=0.007..0.008 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..13.14 rows=20 width=37) (actual time=0.058..0.068 rows=20 loops=1)
Index Cond: (n = cte.x)
Heap Fetches: 20
Planning Time: 1.328 ms
Execution Time: 0.360 ms
(15 rows)
+++++ TEN Values
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.91..89.11 rows=5 width=37) (actual time=0.179..0.183 rows=5 loops=1)
CTE cte
-> Values Scan on "*VALUES*" (cost=0.00..0.12 rows=10 width=4) (actual time=0.001..0.007 rows=10 loops=1)
-> Merge Semi Join (cost=0.78..3528.72 rows=200 width=37) (actual time=0.178..0.181 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.030..0.123 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.37..0.39 rows=10 width=4) (actual time=0.023..0.023 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.20 rows=10 width=4) (actual time=0.003..0.013 rows=10 loops=1)
Planning Time: 0.197 ms
Execution Time: 0.226 ms
(13 rows)
++++++++ one row from table
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=14.39..58.52 rows=5 width=37) (actual time=0.168..0.173 rows=5 loops=1)
CTE cte
-> Index Only Scan using idx_temp_table on temp_table temp_table_1 (cost=0.42..13.14 rows=20 width=4) (actual time=0.010..0.020 rows=20 loops=1)
Index Cond: (n = 10)
Heap Fetches: 20
-> Merge Semi Join (cost=1.25..3531.24 rows=400 width=37) (actual time=0.167..0.170 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.025..0.101 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.83..0.88 rows=20 width=4) (actual time=0.039..0.039 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.40 rows=20 width=4) (actual time=0.012..0.031 rows=20 loops=1)
Planning Time: 0.243 ms
Execution Time: 0.211 ms
(15 rows)
++++++++ one row from table TWO ctes
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=14.63..58.76 rows=5 width=37) (actual time=0.224..0.229 rows=5 loops=1)
CTE val
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
CTE cte
-> Nested Loop (cost=0.44..13.37 rows=20 width=4) (actual time=0.038..0.052 rows=20 loops=1)
-> HashAggregate (cost=0.02..0.03 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=1)
Group Key: val.x
-> CTE Scan on val (cost=0.00..0.02 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table temp_table_1 (cost=0.42..13.14 rows=20 width=4) (actual time=0.029..0.038 rows=20 loops=1)
Index Cond: (n = val.x)
Heap Fetches: 20
-> Merge Semi Join (cost=1.25..3531.24 rows=400 width=37) (actual time=0.223..0.226 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.038..0.114 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.83..0.88 rows=20 width=4) (actual time=0.082..0.082 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.40 rows=20 width=4) (actual time=0.040..0.062 rows=20 loops=1)
Planning Time: 0.362 ms
Execution Time: 0.313 ms
(21 rows)
Beware of CTEs!
For the planner, CTEs are more or less black boxes, and very little is known about expected number of rows, statistics distribution, or ordering inside.
In cases where CTEs result in a bad plan (the original question is not such a case), a CTE can often be replaced by a (temp) view, which is seen by the planner in its full naked glory.
Update
Starting with version 12, CTEs are handled differently by the planner: if they have no side effects, they are candidates for being inlined into the main query (but it is still a good idea to check your query plans).
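Since version 12 the behavior can also be controlled explicitly with the MATERIALIZED / NOT MATERIALIZED keywords, e.g. forcing the CTE from the original query to be inlined:
EXPLAIN ANALYSE
WITH cte(x) AS NOT MATERIALIZED (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;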
The optimizer isn't aware that the CTE is sorted. If you scan an index for multiple values and have an ORDER BY, PostgreSQL will always sort.
The only thing that comes to my mind is to create a temporary table with the values from the IN list and put an index on that temporary table. Then, when you join with that table, PostgreSQL will be aware of the ordering and might for example choose a merge join that can use the indexes (see the sketch below).
Of course that means a lot of overhead, and it could easily be that the original sort wins out.
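A minimal sketch of that approach, reusing temp_table from the question (the id values are illustrative):
CREATE TEMP TABLE id_list (x integer);
INSERT INTO id_list VALUES (10), (11), (12);
CREATE INDEX ON id_list (x);
ANALYZE id_list;
-- assumes the ids in id_list are distinct, otherwise the join duplicates rows
SELECT t.*
FROM temp_table t
JOIN id_list l ON l.x = t.n
ORDER BY t.n, t.b
LIMIT 5;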

Analyze: Why could a query take so long when the costs seem low?

I am getting these results from EXPLAIN ANALYZE for a simple query that returns no more than 150 records, from tables most of which have fewer than 200 records each. I have one table that stores the latest value, and the other fields are foreign keys to the related data.
Update: see the new results from the same query some hours later. The site is not public and there should be no users right now, as it is still in development.
explain analyze
SELECT lv.station_id,
s.name AS station_name,
s.latitude,
s.longitude,
s.elevation,
lv.element_id,
e.symbol AS element_symbol,
u.symbol,
e.name AS element_name,
lv.last_datetime AS datetime,
lv.last_value AS valor,
s.basin_id,
s.municipality_id
FROM (((element_station lv /*350 records*/
JOIN stations s ON ((lv.station_id = s.id))) /*40 records*/
JOIN elements e ON ((lv.element_id = e.id))) /*103 records*/
JOIN units u ON ((e.unit_id = u.id))) /* 32 records */
WHERE s.id = lv.station_id AND e.id = lv.element_id AND lv.interval_id = 6 and
lv.last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)
I have already tried VACUUM; right after it some time is saved, but after a while the runtime goes up again. I have created an index on the fields.
Nested Loop (cost=0.29..2654.66 rows=1 width=92) (actual time=1219.390..35296.253 rows=157 loops=1)
Join Filter: (e.unit_id = u.id)
Rows Removed by Join Filter: 4867
-> Nested Loop (cost=0.29..2652.93 rows=1 width=92) (actual time=1219.383..35294.083 rows=157 loops=1)
Join Filter: (lv.element_id = e.id)
Rows Removed by Join Filter: 16014
-> Nested Loop (cost=0.29..2648.62 rows=1 width=61) (actual time=1219.301..35132.373 rows=157 loops=1)
-> Seq Scan on element_station lv (cost=0.00..2640.30 rows=1 width=20) (actual time=1219.248..1385.517 rows=157 loops=1)
Filter: ((interval_id = 6) AND (last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)))
Rows Removed by Filter: 168
-> Index Scan using stations_pkey on stations s (cost=0.29..8.31 rows=1 width=45) (actual time=3.471..214.941 rows=1 loops=157)
Index Cond: (id = lv.station_id)
-> Seq Scan on elements e (cost=0.00..3.03 rows=103 width=35) (actual time=0.003..0.999 rows=103 loops=157)
-> Seq Scan on units u (cost=0.00..1.32 rows=32 width=8) (actual time=0.002..0.005 rows=32 loops=157)
Planning time: 8.312 ms
Execution time: 35296.427 ms
Update: the same query run tonight; no changes:
Sort (cost=601.74..601.88 rows=55 width=92) (actual time=1.822..1.841 rows=172 loops=1)
Sort Key: lv.last_datetime DESC
Sort Method: quicksort Memory: 52kB
-> Nested Loop (cost=11.60..600.15 rows=55 width=92) (actual time=0.287..1.680 rows=172 loops=1)
-> Hash Join (cost=11.31..248.15 rows=55 width=51) (actual time=0.263..0.616 rows=172 loops=1)
Hash Cond: (e.unit_id = u.id)
-> Hash Join (cost=9.59..245.60 rows=75 width=51) (actual time=0.225..0.528 rows=172 loops=1)
Hash Cond: (lv.element_id = e.id)
-> Bitmap Heap Scan on element_station lv (cost=5.27..240.25 rows=75 width=20) (actual time=0.150..0.359 rows=172 loops=1)
Recheck Cond: ((last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)) AND (interval_id = 6))
Heap Blocks: exact=22
-> Bitmap Index Scan on element_station_latest (cost=0.00..5.25 rows=75 width=0) (actual time=0.136..0.136 rows=226 loops=1)
Index Cond: ((last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)) AND (interval_id = 6))
-> Hash (cost=3.03..3.03 rows=103 width=35) (actual time=0.062..0.062 rows=103 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 15kB
-> Seq Scan on elements e (cost=0.00..3.03 rows=103 width=35) (actual time=0.006..0.031 rows=103 loops=1)
-> Hash (cost=1.32..1.32 rows=32 width=8) (actual time=0.019..0.019 rows=32 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
-> Seq Scan on units u (cost=0.00..1.32 rows=32 width=8) (actual time=0.003..0.005 rows=32 loops=1)
-> Index Scan using stations_pkey on stations s (cost=0.29..6.39 rows=1 width=45) (actual time=0.005..0.006 rows=1 loops=172)
Index Cond: (id = lv.station_id)
Planning time: 2.390 ms
Execution time: 2.009 ms
The problem is the misestimation of the number of rows in the sequential scan on element_station. Either autoanalyze kicked in and calculated new statistics for the table, or the data changed.
The problem is probably that PostgreSQL doesn't know the result of
((now() - '06:00:00'::interval) - '01:00:00'::interval)
at query planning time.
If that is possible for you, do it in two steps: First, calculate the expression above (either in PostgreSQL or on the client side). Then run the query with the result as a constant. That will make it easier for PostgreSQL to estimate the result count.
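From psql, for example, the cutoff can be computed first into a client-side variable with \gset and then interpolated as a literal constant (a sketch; the joins and the full column list are left out for brevity):
SELECT now() - '06:00:00'::interval - '01:00:00'::interval AS cutoff \gset
EXPLAIN ANALYZE
SELECT lv.station_id, lv.element_id, lv.last_datetime, lv.last_value
FROM element_station lv
WHERE lv.interval_id = 6
AND lv.last_datetime >= :'cutoff';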

Explain Postgres query: why is the query so much slower with WHERE and LIMIT?

I'm using Postgres v9.6.5. I have a query that doesn't seem that complicated, and I was wondering why it is so "slow" (it's not really that slow, but I don't actually have a lot of data - just a few thousand rows).
Here is the query:
SELECT o0.*
FROM "orders" AS o0
JOIN "balances" AS b1 ON b1."id" = o0."balance_id"
JOIN "users" AS u3 ON u3."id" = b1."user_id"
WHERE (u3."partner_id" = 3)
ORDER BY o0."id" DESC LIMIT 10;
And that's the query plan:
Limit (cost=0.43..12.84 rows=10 width=148) (actual time=0.062..53.866 rows=4 loops=1)
-> Nested Loop (cost=0.43..4750.03 rows=3826 width=148) (actual time=0.061..53.864 rows=4 loops=1)
Join Filter: (b1.user_id = u3.id)
Rows Removed by Join Filter: 67404
-> Nested Loop (cost=0.43..3945.32 rows=17856 width=152) (actual time=0.025..38.457 rows=16852 loops=1)
-> Index Scan Backward using orders_pkey on orders o0 (cost=0.29..897.80 rows=17856 width=148) (actual time=0.016..11.558 rows=16852 loops=1)
-> Index Scan using balances_pkey on balances b1 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=16852)
Index Cond: (id = o0.balance_id)
-> Materialize (cost=0.00..1.19 rows=3 width=4) (actual time=0.000..0.000 rows=4 loops=16852)
-> Seq Scan on users u3 (cost=0.00..1.18 rows=3 width=4) (actual time=0.023..0.030 rows=4 loops=1)
Filter: (partner_id = 3)
Rows Removed by Filter: 12
Planning time: 0.780 ms
Execution time: 54.053 ms
I actually tried without LIMIT and got quite a different plan:
Sort (cost=874.23..883.80 rows=3826 width=148) (actual time=11.361..11.362 rows=4 loops=1)
Sort Key: o0.id DESC
Sort Method: quicksort Memory: 26kB
-> Hash Join (cost=3.77..646.55 rows=3826 width=148) (actual time=11.300..11.346 rows=4 loops=1)
Hash Cond: (o0.balance_id = b1.id)
-> Seq Scan on orders o0 (cost=0.00..537.56 rows=17856 width=148) (actual time=0.012..8.464 rows=16852 loops=1)
-> Hash (cost=3.55..3.55 rows=18 width=4) (actual time=0.125..0.125 rows=24 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Hash Join (cost=1.21..3.55 rows=18 width=4) (actual time=0.046..0.089 rows=24 loops=1)
Hash Cond: (b1.user_id = u3.id)
-> Seq Scan on balances b1 (cost=0.00..1.84 rows=84 width=8) (actual time=0.011..0.029 rows=96 loops=1)
-> Hash (cost=1.18..1.18 rows=3 width=4) (actual time=0.028..0.028 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on users u3 (cost=0.00..1.18 rows=3 width=4) (actual time=0.014..0.021 rows=4 loops=1)
Filter: (partner_id = 3)
Rows Removed by Filter: 12
Planning time: 0.569 ms
Execution time: 11.420 ms
And also without WHERE (but with LIMIT):
Limit (cost=0.43..4.74 rows=10 width=148) (actual time=0.023..0.066 rows=10 loops=1)
-> Nested Loop (cost=0.43..7696.26 rows=17856 width=148) (actual time=0.022..0.065 rows=10 loops=1)
Join Filter: (b1.user_id = u3.id)
Rows Removed by Join Filter: 139
-> Nested Loop (cost=0.43..3945.32 rows=17856 width=152) (actual time=0.009..0.029 rows=10 loops=1)
-> Index Scan Backward using orders_pkey on orders o0 (cost=0.29..897.80 rows=17856 width=148) (actual time=0.007..0.015 rows=10 loops=1)
-> Index Scan using balances_pkey on balances b1 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=10)
Index Cond: (id = o0.balance_id)
-> Materialize (cost=0.00..1.21 rows=14 width=4) (actual time=0.001..0.001 rows=15 loops=10)
-> Seq Scan on users u3 (cost=0.00..1.14 rows=14 width=4) (actual time=0.005..0.007 rows=16 loops=1)
Planning time: 0.286 ms
Execution time: 0.097 ms
As you can see, without the WHERE clause it's much faster. Can someone point me to information where I can look for explanations of those plans, to understand them better? And what can I do to make those queries faster? (Or shouldn't I worry, because with, say, 100 times more data they will still be fast enough? 50 ms is fine for me, to be honest.)
PostgreSQL thinks that it will be fastest if it scans orders in the correct order until it finds a matching users entry that satisfies the WHERE condition.
However, it seems that the data distribution is such that it has to scan almost 17000 orders before it finds a match.
Since PostgreSQL doesn't know how values correlate across tables, there is nothing much you can do to change that.
You can force PostgreSQL to plan the query without the LIMIT clause like this:
SELECT *
FROM (<your query without ORDER BY and LIMIT> OFFSET 0) q
ORDER BY id DESC LIMIT 10;
With a top-N-sort this should perform better.
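Applied to the query from the question, that would look like this:
SELECT *
FROM (SELECT o0.*
FROM "orders" AS o0
JOIN "balances" AS b1 ON b1."id" = o0."balance_id"
JOIN "users" AS u3 ON u3."id" = b1."user_id"
WHERE u3."partner_id" = 3
OFFSET 0) q
ORDER BY q."id" DESC LIMIT 10;
The OFFSET 0 keeps the subquery from being flattened into the outer query, so the planner can no longer push the LIMIT down into a backward index scan; instead it joins first and then does a top-N sort of only the matching rows.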