This is the query:
EXPLAIN (analyze, BUFFERS, SETTINGS)
SELECT
operation.id
FROM
operation
RIGHT JOIN (
SELECT uid, did FROM (
SELECT uid, did FROM operation where id = 993754
) t
) parts ON (operation.uid = parts.uid AND operation.did = parts.did)
and the EXPLAIN output:
Nested Loop Left Join (cost=0.85..29695.77 rows=100 width=8) (actual time=13.709..13.711 rows=1 loops=1)
Buffers: shared hit=4905
-> Unique (cost=0.42..8.45 rows=1 width=16) (actual time=0.011..0.013 rows=1 loops=1)
Buffers: shared hit=5
-> Index Only Scan using oi on operation operation_1 (cost=0.42..8.44 rows=1 width=16) (actual time=0.011..0.011 rows=1 loops=1)
Index Cond: (id = 993754)
Heap Fetches: 1
Buffers: shared hit=5
-> Index Only Scan using oi on operation (cost=0.42..29686.32 rows=100 width=24) (actual time=13.695..13.696 rows=1 loops=1)
Index Cond: ((uid = operation_1.uid) AND (did = operation_1.did))
Heap Fetches: 1
Buffers: shared hit=4900
Settings: max_parallel_workers_per_gather = '4', min_parallel_index_scan_size = '0', min_parallel_table_scan_size = '0', parallel_setup_cost = '0', parallel_tuple_cost = '0', work_mem = '256MB'
Planning Time: 0.084 ms
Execution Time: 13.728 ms
Why does the Nested Loop take more time than the sum of its child nodes? What can I do about that? The execution time should be less than 1 ms, right?
Update:
Nested Loop Left Join (cost=5.88..400.63 rows=101 width=8) (actual time=0.012..0.012 rows=1 loops=1)
Buffers: shared hit=8
-> Index Scan using oi on operation operation_1 (cost=0.42..8.44 rows=1 width=16) (actual time=0.005..0.005 rows=1 loops=1)
Index Cond: (id = 993754)
Buffers: shared hit=4
-> Bitmap Heap Scan on operation (cost=5.45..391.19 rows=100 width=24) (actual time=0.004..0.005 rows=1 loops=1)
Recheck Cond: ((uid = operation_1.uid) AND (did = operation_1.did))
Heap Blocks: exact=1
Buffers: shared hit=4
-> Bitmap Index Scan on ou (cost=0.00..5.42 rows=100 width=0) (actual time=0.003..0.003 rows=1 loops=1)
Index Cond: ((uid = operation_1.uid) AND (did = operation_1.did))
Buffers: shared hit=3
Settings: max_parallel_workers_per_gather = '4', min_parallel_index_scan_size = '0', min_parallel_table_scan_size = '0', parallel_setup_cost = '0', parallel_tuple_cost = '0', work_mem = '256MB'
Planning Time: 0.127 ms
Execution Time: 0.028 ms
Thanks, all of you. When I split the index into btree(id) and btree(uid, did), everything works perfectly. But why couldn't they be used together in one index? Any details or rules?
BTW, the SQL is used for real-time calculation; there is some window-function code not shown here.
The Nested Loop does not actually take much time. The actual time of 13.709..13.711 means that it took 13.709 ms until the first row was ready to be emitted from this node, and only 0.002 ms more until it was finished.
Note that the startup time of 13.709 ms includes the time spent in its two child nodes: both of them need to emit at least one row before the nested loop can start.
The Unique child emitted its first (and only) row after 0.011 ms. The Index Only Scan child, however, only emitted its first (and only) row after 13.695 ms. So most of the actual time is spent in that Index Only Scan.
There is a great answer here which explains the costs and actual times in depth.
Also there is a nice tool at https://explain.depesz.com which calculates an inclusive and exclusive time for each node. Here it is used for your query plan which clearly shows that most of the time is spent in the Index Only Scan.
Since the query is spending almost all of the time in this index only scan, optimizations there will have the most benefit. Creating a separate index for the columns uid and did on the operation table should improve query time a lot.
CREATE INDEX operation_uid_did ON operation(uid, did);
The current execution plan contains 2 index only scans.
A slow one:
-> Index Only Scan using oi on operation (cost=0.42..29686.32 rows=100 width=24) (actual time=13.695..13.696 rows=1 loops=1)
Index Cond: ((uid = operation_1.uid) AND (did = operation_1.did))
Heap Fetches: 1
Buffers: shared hit=4900
And a fast one:
-> Index Only Scan using oi on operation operation_1 (cost=0.42..8.44 rows=1 width=16) (actual time=0.011..0.011 rows=1 loops=1)
Index Cond: (id = 993754)
Heap Fetches: 1
Buffers: shared hit=5
Both of them use the index oi but with different index conditions. Note how the fast one, which uses id as its index condition, only needs to load 5 pages of data (Buffers: shared hit=5). The slow one needs to load 4900 pages instead (Buffers: shared hit=4900). This indicates that the index is well suited for queries on id, but not for queries on uid and did. Probably the index oi covers all 3 columns (id, uid, did) in that order.
A multi-column btree index can only be used efficiently when there are constraints on its leftmost columns. The official documentation about multi-column indexes explains this in depth.
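As a hedged illustration (assuming, as above, that oi covers (id, uid, did) in that order; the uid/did values below are placeholders):

-- Assumed existing index: CREATE INDEX oi ON operation (id, uid, did);
-- Can seek directly, because the leftmost column is constrained:
SELECT uid, did FROM operation WHERE id = 993754;
-- Cannot seek: with no constraint on id, this degenerates into a scan
-- over the whole index (hence the 4900 buffer hits above):
SELECT id FROM operation WHERE uid = 42 AND did = 7;
-- Splitting the index, as in your update, gives each lookup a usable
-- leading column:
CREATE INDEX operation_id ON operation (id);
CREATE INDEX operation_uid_did ON operation (uid, did);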
Why does the Nested Loop take more time than the sum of its child nodes?
Based on your example, it doesn't. Can you elaborate on what makes you think it does?
Anyway, it seems extravagant to visit 4900 pages to fetch 1 tuple. I'm guessing your tables are not getting vacuumed enough.
Although now I prefer Florian's suggestion that "uid" and "did" are not the leading columns of the index, and that this is why it is slow. It is basically doing a full index scan, using the index as a skinny version of the table. It is a shame that the EXPLAIN output doesn't make it clear when an index is being used in this fashion, rather than in the traditional "jump to a specific part of the index" way.
So you have a missing index.
Related
I have a product_details table with 30+ million records. Product attribute data of type text is stored in the column Value1.
Front-end (web) users search for product details, and the search is run against column Value1.
create table product_details(
  key serial primary key,
  product_key int,
  attribute_key int,
  Value1 text[],
  Value2 int[],
  status text);
I created a GIN index on column Value1 to improve search query performance.
Query execution improved a lot for many queries.
Tables and indexes are here
Below is one of the queries used by the application for search.
select p.key from (select x.product_key,
x.value1,
x.attribute_key,
x.status
from product_details x
where value1 IS NOT NULL
) as pr_d
join attribute_type at on at.key = pr_d.attribute_key
join product p on p.key = pr_d.product_key
where value1_search(pr_d.value1) ilike '%B s%'
and at.type = 'text'
and at.status = 'active'
and pr_d.status = 'active'
and 1 = 1
and p.product_type_key=1
and 1 = 1
group by p.key
The query executes in 2 or 3 seconds if we search for %B % or any single- or two-character word; below is the query plan:
Group (cost=180302.82..180302.83 rows=1 width=4) (actual time=49.006..49.021 rows=65 loops=1)
Group Key: p.key
-> Sort (cost=180302.82..180302.83 rows=1 width=4) (actual time=49.005..49.009 rows=69 loops=1)
Sort Key: p.key
Sort Method: quicksort Memory: 28kB
-> Nested Loop (cost=0.99..180302.81 rows=1 width=4) (actual time=3.491..48.965 rows=69 loops=1)
Join Filter: (x.attribute_key = at.key)
Rows Removed by Join Filter: 10051
-> Nested Loop (cost=0.99..180270.15 rows=1 width=8) (actual time=3.396..45.211 rows=69 loops=1)
-> Index Scan using products_product_type_key_status on product p (cost=0.43..4420.58 rows=1413 width=4) (actual time=0.024..1.473 rows=1630 loops=1)
Index Cond: (product_type_key = 1)
-> Index Scan using product_details_product_attribute_key_status on product_details x (cost=0.56..124.44 rows=1 width=8) (actual time=0.026..0.027 rows=0 loops=1630)
Index Cond: ((product_key = p.key) AND (status = 'active'))
Filter: ((value1 IS NOT NULL) AND (value1_search(value1) ~~* '%B %'::text))
Rows Removed by Filter: 14
-> Seq Scan on attribute_type at (cost=0.00..29.35 rows=265 width=4) (actual time=0.002..0.043 rows=147 loops=69)
Filter: ((value_type = 'text') AND (status = 'active'))
Rows Removed by Filter: 115
Planning Time: 0.732 ms
Execution Time: 49.089 ms
But if I search for %B s%, the query takes 75 seconds; below is the query plan (a second execution took 63 seconds).
In the plan below, the DB engine didn't use the indexes it used in the plan above. Not sure why?
Group (cost=8057.69..8057.70 rows=1 width=4) (actual time=62138.730..62138.737 rows=12 loops=1)
Group Key: p.key
-> Sort (cost=8057.69..8057.70 rows=1 width=4) (actual time=62138.728..62138.732 rows=14 loops=1)
Sort Key: p.key
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=389.58..8057.68 rows=1 width=4) (actual time=2592.685..62138.710 rows=14 loops=1)
-> Hash Join (cost=389.15..4971.85 rows=368 width=4) (actual time=298.280..62129.956 rows=831 loops=1)
Hash Cond: (x.attribute_type = at.key)
-> Bitmap Heap Scan on product_details x (cost=356.48..4937.39 rows=681 width=8) (actual time=298.117..62128.452 rows=831 loops=1)
Recheck Cond: (value1_search(value1) ~~* '%B s%'::text)
Rows Removed by Index Recheck: 26168889
Filter: ((value1 IS NOT NULL) AND (status = 'active'))
Rows Removed by Filter: 22
Heap Blocks: exact=490 lossy=527123
-> Bitmap Index Scan on product_details_value1_gin (cost=0.00..356.31 rows=1109 width=0) (actual time=251.596..251.596 rows=2846970 loops=1)
Index Cond: (value1_search(value1) ~~* '%B s%'::text)
-> Hash (cost=29.35..29.35 rows=265 width=4) (actual time=0.152..0.153 rows=269 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 18kB
-> Seq Scan on attribute_type at (cost=0.00..29.35 rows=265 width=4) (actual time=0.010..0.122 rows=269 loops=1)
Filter: ((value_type = 'text') AND (status = 'active'))
Rows Removed by Filter: 221
-> Index Scan using product_pkey on product p (cost=0.43..8.39 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=831)
Index Cond: (key = x.product_key)
Filter: (product_type_key = 1)
Rows Removed by Filter: 1
Planning Time: 0.668 ms
Execution Time: 62138.794 ms
Any suggestions to improve the query for the search %B s%?
Thanks.
ilike '%B %' has no usable trigrams in it. The planner knows this, and punishes the pg_trgm index plan so much that the planner then goes with an entirely different plan instead.
But ilike '%B s%' does have one usable trigram in it, ' s'. It turns out that this trigram sucks because it is extremely common in the searched data, but the planner currently has no way to accurately estimate how much it sucks.
Even worse, this large number of matches means your full bitmap can't fit in work_mem, so it goes lossy. Then it needs to recheck all the tuples in any page which contains even one tuple with the ' s' trigram in it, which looks like most of the pages in your table.
The first thing to do is to increase your work_mem to the point where you stop getting lossy blocks. If most of your time is spent in the CPU applying the recheck condition, this should help tremendously. If most of your time is spent reading product_details from disk (so that the recheck has the data it needs to run), then it won't help much. If you had run EXPLAIN (ANALYZE, BUFFERS) with track_io_timing turned on, we would already know which is which.
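A minimal sketch of that experiment, assuming you may change these settings in your session (the work_mem value is a placeholder to tune; setting track_io_timing per session requires superuser):

-- Raise work_mem until "lossy=..." disappears from the Heap Blocks line:
SET work_mem = '512MB';
-- Make EXPLAIN (ANALYZE, BUFFERS) report read/write times:
SET track_io_timing = on;
-- ...then rerun the EXPLAIN (ANALYZE, BUFFERS) from above and compare
-- the Heap Blocks and I/O Timings lines.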
Another thing you could do is have the application inspect the search parameter, and if it looks like two letters (with or without a space between), forcibly disable that index usage, or just throw an error if there is no good reason to allow that type of search. For example, changing that part of the query as follows will disable the index, because the concatenation no longer matches the indexed expression:
where value1_search(pr_d.value1)||'' ilike '%B s%'
Another thing would be to rethink your data representation. '%B s%' is a peculiar thing to search for. Why would anyone search for that? Does it have some special meaning within the context of your data, which is not obvious to the outside observer? Maybe you could represent it in a different way that gets along better with pg_trgm.
Finally, you could try to improve the planning for GIN indexes generally by explicitly estimating how many tuples are going to fail recheck (due to inherent lossiness of the index, not due to overrunning work_mem). This would be a major undertaking, and you would be unlikely to see it in production for at least a couple years, if ever.
I am having problems optimizing a query in PostgreSQL 9.5.14.
select *
from file as f
join product_collection pc on (f.product_collection_id = pc.id)
where pc.mission_id = 7
order by f.id asc
limit 100;
Takes about 100 seconds. If I drop the limit clause it takes about 0.5 seconds:
With limit:
explain (analyze,buffers) ... -- query exactly as above
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.84..859.32 rows=100 width=457) (actual time=102793.422..102856.884 rows=100 loops=1)
Buffers: shared hit=222430592
-> Nested Loop (cost=0.84..58412343.43 rows=6804163 width=457) (actual time=102793.417..102856.872 rows=100 loops=1)
Buffers: shared hit=222430592
-> Index Scan using file_pkey on file f (cost=0.57..23409008.61 rows=113831736 width=330) (actual time=0.048..28207.152 rows=55858772 loops=1)
Buffers: shared hit=55652672
-> Index Scan using product_collection_pkey on product_collection pc (cost=0.28..0.30 rows=1 width=127) (actual time=0.001..0.001 rows=0 loops=55858772)
Index Cond: (id = f.product_collection_id)
Filter: (mission_id = 7)
Rows Removed by Filter: 1
Buffers: shared hit=166777920
Planning time: 0.803 ms
Execution time: 102856.988 ms
Without limit:
=> explain (analyze,buffers) ... -- query as above, just without limit
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=20509671.01..20526681.42 rows=6804163 width=457) (actual time=456.175..510.596 rows=142055 loops=1)
Sort Key: f.id
Sort Method: quicksort Memory: 79392kB
Buffers: shared hit=37956
-> Nested Loop (cost=0.84..16494851.02 rows=6804163 width=457) (actual time=0.044..231.051 rows=142055 loops=1)
Buffers: shared hit=37956
-> Index Scan using product_collection_mission_id_index on product_collection pc (cost=0.28..46.13 rows=87 width=127) (actual time=0.017..0.101 rows=87 loops=1)
Index Cond: (mission_id = 7)
Buffers: shared hit=10
-> Index Scan using file_product_collection_id_index on file f (cost=0.57..187900.11 rows=169535 width=330) (actual time=0.007..1.335 rows=1633 loops=87)
Index Cond: (product_collection_id = pc.id)
Buffers: shared hit=37946
Planning time: 0.807 ms
Execution time: 569.865 ms
I have copied the database to a backup server so that I may safely manipulate the database without something else changing it on me.
Cardinalities:
Table file: 113,831,736 rows.
Table product_collection: 1370 rows.
The query without LIMIT: 142,055 rows.
SELECT count(*) FROM product_collection WHERE mission_id = 7: 87 rows.
What I have tried:
searching stack overflow
vacuum full analyze
creating a two-column index on file.product_collection_id & file.id (there are already single-column indexes on every field touched).
creating a two-column index on file.id & file.product_collection_id.
increasing the statistics on file.id & file.product_collection_id, then re-running vacuum analyze.
changing various query planner settings.
creating non-materialized views.
walking up and down the hallway while muttering to myself.
None of them seem to change the performance in a significant way.
Thoughts?
UPDATE from OP:
Tested this on PostgreSQL 9.6 & 10.4, and found no significant changes in plans or performance.
However, setting random_page_cost low enough is the only way to get faster performance for the query without LIMIT.
With the default random_page_cost = 4, the plan without LIMIT is:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=9270013.01..9287875.64 rows=7145054 width=457) (actual time=47782.523..47843.812 rows=145697 loops=1)
Sort Key: f.id
Sort Method: external sort Disk: 59416kB
Buffers: shared hit=3997185 read=1295264, temp read=7427 written=7427
-> Hash Join (cost=24.19..6966882.72 rows=7145054 width=457) (actual time=1.323..47458.767 rows=145697 loops=1)
Hash Cond: (f.product_collection_id = pc.id)
Buffers: shared hit=3997182 read=1295264
-> Seq Scan on file f (cost=0.00..6458232.17 rows=116580217 width=330) (actual time=0.007..17097.581 rows=116729984 loops=1)
Buffers: shared hit=3997169 read=1295261
-> Hash (cost=23.08..23.08 rows=89 width=127) (actual time=0.840..0.840 rows=87 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 15kB
Buffers: shared hit=13 read=3
-> Bitmap Heap Scan on product_collection pc (cost=4.97..23.08 rows=89 width=127) (actual time=0.722..0.801 rows=87 loops=1)
Recheck Cond: (mission_id = 7)
Heap Blocks: exact=10
Buffers: shared hit=13 read=3
-> Bitmap Index Scan on product_collection_mission_id_index (cost=0.00..4.95 rows=89 width=0) (actual time=0.707..0.707 rows=87 loops=1)
Index Cond: (mission_id = 7)
Buffers: shared hit=3 read=3
Planning time: 0.929 ms
Execution time: 47911.689 ms
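For reference, lowering it for a single session might look like this hedged sketch (the value is a placeholder; what is appropriate depends on your storage and cache hit rate):

-- Tell the planner that random I/O is only slightly more expensive than
-- sequential I/O (plausible for SSDs or a well-cached database):
SET random_page_cost = 1.1;  -- placeholder; the default is 4.0
-- ...then rerun the query without LIMIT and compare the chosen plan.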
User Erwin's answer below will take me some time to fully understand and generalize to all of the use cases needed. In the meantime we will probably use either a materialized view or just flatten our table structure.
This query is harder for the Postgres query planner than it might look. Depending on cardinalities, data distribution, value frequencies, sizes, ... completely different query plans can prevail and the planner has a hard time predicting which is best. Current versions of Postgres are better at this in several aspects, but it's still hard to optimize.
Since you retrieve only relatively few rows from product_collection, this equivalent query with LIMIT in a LATERAL subquery should avoid performance degradation:
SELECT *
FROM product_collection pc
CROSS JOIN LATERAL (
SELECT *
FROM file f -- big table
WHERE f.product_collection_id = pc.id
ORDER BY f.id
LIMIT 100
) f
WHERE pc.mission_id = 7
ORDER BY f.id
LIMIT 100;
Edit: This results in the following query plan, from explain (analyze, verbose) provided by the OP:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=30524.34..30524.59 rows=100 width=457) (actual time=13.128..13.167 rows=100 loops=1)
Buffers: shared hit=3213
-> Sort (cost=30524.34..30546.09 rows=8700 width=457) (actual time=13.126..13.152 rows=100 loops=1)
Sort Key: file.id
Sort Method: top-N heapsort Memory: 76kB
Buffers: shared hit=3213
-> Nested Loop (cost=0.57..30191.83 rows=8700 width=457) (actual time=0.060..9.868 rows=2880 loops=1)
Buffers: shared hit=3213
-> Seq Scan on product_collection pc (cost=0.00..69.12 rows=87 width=127) (actual time=0.024..0.336 rows=87 loops=1)
Filter: (mission_id = 7)
Rows Removed by Filter: 1283
Buffers: shared hit=13
-> Limit (cost=0.57..344.24 rows=100 width=330) (actual time=0.008..0.071 rows=33 loops=87)
Buffers: shared hit=3200
-> Index Scan using file_pc_id_index on file (cost=0.57..582642.42 rows=169535 width=330) (actual time=0.007..0.065 rows=33 loops=87)
Index Cond: (product_collection_id = pc.id)
Buffers: shared hit=3200
Planning time: 0.595 ms
Execution time: 13.319 ms
You need these indexes (will help your original query, too):
CREATE INDEX idx1 ON file (product_collection_id, id); -- crucial
CREATE INDEX idx2 ON product_collection (mission_id, id); -- helpful
You mentioned:
two column indexes on file.id & file.product_collection_id.
Etc. But we need it the other way round: id last. The order of index expressions is crucial. See:
Is a composite index also good for queries on the first field?
Rationale: With only 87 rows from product_collection, we only fetch a maximum of 87 x 100 = 8700 rows (fewer if not every pc.id has 100 rows in table file), which are then sorted before picking the top 100. Performance degrades with the number of rows you get from product_collection and with bigger LIMIT.
With the multicolumn index idx1 above, that's 87 fast index scans. The rest is not very expensive.
More optimization is possible, depending on additional information. Related:
Can spatial index help a “range - order by - limit” query
I have a large table (30M rows) which has ~10 jsonb B-tree indexes.
When I create a query using few conditions, the query is relatively fast.
When I add more conditions, especially one with a sparse jsonb index (e.g. an integer between 0 and 1,000,000), the query speed drops off dramatically.
I am wondering whether jsonb indexes are slower than native ones. Should I expect a performance boost from switching to native columns rather than JSON?
Table definition:
id integer
type text
data jsonb
company_index ARRAY
exchange_index ARRAY
eligible boolean
Example query:
SELECT id, data, type
FROM collection.bundles
WHERE ( (ARRAY['.X'] && bundles.exchange_index) AND
type IN ('discussion') AND
( ((data->>'sentiment_score')::bigint > 0 AND
(data->'display_tweet'->'stocktwit'->'id') IS NOT NULL) ) AND
( eligible = true ) AND
((data->'display_tweet'->'stocktwit')->>'id')::bigint IS NULL )
ORDER BY id DESC
LIMIT 50
Output:
Limit (cost=0.56..16197.56 rows=50 width=212) (actual time=31900.874..31900.874 rows=0 loops=1)
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
-> Index Scan using bundles2_id_desc_idx on bundles (cost=0.56..2401044.17 rows=7412 width=212) (actual time=31900.871..31900.871 rows=0 loops=1)
Filter: (eligible AND ('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text) AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 16093269
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
Planning time: 0.366 ms
Execution time: 31900.909 ms
Note:
There are jsonb B-tree indexes on every jsonb condition used in this query. exchange_index and company_index have GIN indexes.
UPDATE
After Laurenz's changed query:
Limit (cost=150634.15..150634.27 rows=50 width=211) (actual time=15925.828..15925.828 rows=0 loops=1)
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Sort (cost=150634.15..150652.53 rows=7352 width=211) (actual time=15925.827..15925.827 rows=0 loops=1)
Sort Key: bundles.id DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Bitmap Heap Scan on bundles (cost=56666.15..150316.40 rows=7352 width=211) (actual time=15925.816..15925.816 rows=0 loops=1)
Recheck Cond: (('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text))
Filter: (eligible AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 273230
Heap Blocks: exact=175975
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> BitmapAnd (cost=56666.15..56666.15 rows=23817 width=0) (actual time=1895.890..1895.890 rows=0 loops=1)
Buffers: shared hit=37488 read=85559
I/O Timings: read=325.535
-> Bitmap Index Scan on bundles2_exchange_index_ops_idx (cost=0.00..6515.57 rows=863703 width=0) (actual time=218.690..218.690 rows=892669 loops=1)
Index Cond: ('{.X}'::text[] && exchange_index)
Buffers: shared hit=7 read=313
I/O Timings: read=1.458
-> Bitmap Index Scan on bundles_eligible_idx (cost=0.00..23561.74 rows=2476877 width=0) (actual time=436.719..436.719 rows=2569331 loops=1)
Index Cond: (eligible = true)
Buffers: shared hit=37473
-> Bitmap Index Scan on bundles2_type_idx (cost=0.00..26582.83 rows=2706276 width=0) (actual time=1052.267..1052.267 rows=2794517 loops=1)
Index Cond: (type = 'discussion'::text)
Buffers: shared hit=8 read=85246
I/O Timings: read=324.077
Planning time: 0.433 ms
Execution time: 15928.959 ms
None of your fancy indexes are used at all, so the problem is not whether they are fast or not.
There are several things at play here:
Seeing the dirtied and the written pages during the index scan, I suspect that there are quite a lot of “dead tuples” in your table. When the index scan visits them and notices they are dead, it “kills” those index entries so that subsequent index scans don't have to repeat that work.
If you repeat the query, you will probably notice that the number of blocks touched and the execution time decrease.
You can reduce that problem by running VACUUM on the table or making sure autovacuum processes the table often enough.
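A hedged sketch of both options (the scale factor is a placeholder value):

-- One-off: remove dead tuples and refresh statistics:
VACUUM (ANALYZE) collection.bundles;
-- Ongoing: make autovacuum visit this table sooner than the default
-- 20% dead-tuple threshold:
ALTER TABLE collection.bundles
    SET (autovacuum_vacuum_scale_factor = 0.02);  -- placeholder value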
Your major problem, however, is that the LIMIT clause tempts PostgreSQL to use the following strategy:
Since you only want 50 result rows in an order for which you have an index, just examine the table rows in index order and discard all rows that do not match the complicated condition until you have 50 results.
Unfortunately it has to scan 16093319 rows until it has found its 50 hits. The rows at the “high id” end of the table don't match the condition. PostgreSQL does not know about that correlation.
The solution is to discourage PostgreSQL from going down that route. The easiest way would be to drop all indexes on id, but given its name that is probably unfeasible.
The other way is to keep PostgreSQL from “seeing” the LIMIT clause when it plans the scan:
SELECT id, data, type
FROM (SELECT id, data, type
FROM collection.bundles
WHERE /* all your complicated conditions */
OFFSET 0) subquery
ORDER BY id DESC
LIMIT 50;
Remark: You didn't show your index definitions, but it sounds like you have quite a lot of them, possibly too many. Indexes are expensive, so make sure you define only those that give you a clear benefit.
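One way to check that, as a hedged sketch: the cumulative statistics views record how often each index has been scanned since the counters were last reset:

-- Indexes that have never been used since statistics were last reset:
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname, indexrelname;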
I'm working with the HackerNews dataset in Postgres. There are about 17M rows about 14.5M of them are comments and about 2.5M are stories. There is a very active user named "rbanffy" who has 25k submissions, about equal split stories/comments. Both "by" and "type" have separate indices.
I have a query:
SELECT *
FROM "hn_items"
WHERE by = 'rbanffy'
and type = 'story'
ORDER BY id DESC
LIMIT 20 OFFSET 0
That runs quickly (it's using the 'by' index). If I change the type to "comment" then it's very slow. From the explain, it doesn't use either index and does a scan.
Limit (cost=0.56..56948.32 rows=20 width=1937)
-> Index Scan using hn_items_pkey on hn_items (cost=0.56..45823012.32 rows=16093 width=1937)
Filter: (((by)::text = 'rbanffy'::text) AND ((type)::text = 'comment'::text))
If I change the query to use type||''='comment', then it will use the 'by' index and execute quickly.
Why is this happening? I understand from https://stackoverflow.com/a/309814/214545 that having to do a hack like this implies something is wrong. But I don't know what.
EDIT:
This is the explain for the type='story'
Limit (cost=72553.07..72553.12 rows=20 width=1255)
-> Sort (cost=72553.07..72561.25 rows=3271 width=1255)
Sort Key: id DESC
-> Bitmap Heap Scan on hn_items (cost=814.59..72466.03 rows=3271 width=1255)
Recheck Cond: ((by)::text = 'rbanffy'::text)
Filter: ((type)::text = 'story'::text)
-> Bitmap Index Scan on hn_items_by_index (cost=0.00..813.77 rows=19361 width=0)
Index Cond: ((by)::text = 'rbanffy'::text)
EDIT:
EXPLAIN (ANALYZE,BUFFERS)
Limit (cost=0.56..59510.10 rows=20 width=1255) (actual time=20.856..545.282 rows=20 loops=1)
Buffers: shared hit=21597 read=2658 dirtied=32
-> Index Scan using hn_items_pkey on hn_items (cost=0.56..47780210.70 rows=16058 width=1255) (actual time=20.855..545.271 rows=20 loops=1)
Filter: (((by)::text = 'rbanffy'::text) AND ((type)::text = 'comment'::text))
Rows Removed by Filter: 46798
Buffers: shared hit=21597 read=2658 dirtied=32
Planning time: 0.173 ms
Execution time: 545.318 ms
EDIT: EXPLAIN (ANALYZE,BUFFERS) of type='story'
Limit (cost=72553.07..72553.12 rows=20 width=1255) (actual time=44.121..44.127 rows=20 loops=1)
Buffers: shared hit=20137
-> Sort (cost=72553.07..72561.25 rows=3271 width=1255) (actual time=44.120..44.123 rows=20 loops=1)
Sort Key: id DESC
Sort Method: top-N heapsort Memory: 42kB
Buffers: shared hit=20137
-> Bitmap Heap Scan on hn_items (cost=814.59..72466.03 rows=3271 width=1255) (actual time=6.778..37.774 rows=11630 loops=1)
Recheck Cond: ((by)::text = 'rbanffy'::text)
Filter: ((type)::text = 'story'::text)
Rows Removed by Filter: 12587
Heap Blocks: exact=19985
Buffers: shared hit=20137
-> Bitmap Index Scan on hn_items_by_index (cost=0.00..813.77 rows=19361 width=0) (actual time=3.812..3.812 rows=24387 loops=1)
Index Cond: ((by)::text = 'rbanffy'::text)
Buffers: shared hit=152
Planning time: 0.156 ms
Execution time: 44.422 ms
EDIT: latest test results
I was playing around with the type='comment' query and noticed that if I changed the limit to a higher number like 100, it used the by index. I played with the values until I found that the critical number was 47: with a limit of 47 the by index was used, with a limit of 46 it was a full scan. I assume that number isn't magical; it just happens to be the threshold for my dataset or some other variable I don't know about. I don't know if this helps.
Since there are many comments by rbanffy, PostgreSQL assumes that it will be fast enough if it searches the table in the order implied by the ORDER BY clause (which can use the primary key index) until it has found 20 rows that match the search condition.
Unfortunately it happens that the guy has grown lazy lately; at any rate, PostgreSQL has to scan the 46798 highest ids until it has found its 20 hits. (You really shouldn't have removed the Backwards from the plan; that confused me.)
The best way to work around that is to confuse PostgreSQL so that it doesn't choose the primary key index, perhaps like this:
SELECT *
FROM (SELECT * FROM hn_items
WHERE by = 'rbanffy'
AND type = 'comment'
OFFSET 0) q
ORDER BY id DESC
LIMIT 20;
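Alternatively, if this query pattern is common, a multicolumn index matching both equality columns plus the sort column would let PostgreSQL read exactly the 20 rows it needs; a hedged sketch (the index name is made up):

-- With equality conditions on (by, type), a backward scan of this index
-- satisfies ORDER BY id DESC and can stop after 20 rows:
CREATE INDEX hn_items_by_type_id ON hn_items (by, type, id);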
I have the following PostGIS/PostgreSQL query
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(sn.geom, luc.geom)
Query plan:
Nested Loop (cost=2.45..53.34 rows=8 width=236) (actual time=1.030..26.751 rows=1282 loops=1)
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.023..0.029 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Bitmap Heap Scan on lucas12 luc (cost=2.45..51.06 rows=8 width=236) (actual time=1.002..26.031 rows=1282 loops=1)
Recheck Cond: (germany_bld.geom ~ geom)
Filter: _st_contains(germany_bld.geom, geom)
Rows Removed by Filter: 499
Heap Blocks: exact=174
-> Bitmap Index Scan on lucas12_geom_idx (cost=0.00..2.45 rows=23 width=0) (actual time=0.419..0.419 rows=1781 loops=1)
Index Cond: (germany_bld.geom ~ geom)
Planning time: 0.536 ms
Execution time: 27.023 ms
which, thanks to an index on the geometry columns, is pretty fast. However, when I want to add a buffer around the sn polygon (one big polygon that represents a border line, hence a quite simple feature):
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT ST_Buffer(geom, 30000) geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(sn.geom, luc.geom)
Query plan:
Nested Loop (cost=0.00..13234.80 rows=7818 width=236) (actual time=6221.391..1338380.257 rows=2298 loops=1)
Join Filter: st_contains(st_buffer(germany_bld.geom, 30000::double precision), luc.geom)
Rows Removed by Join Filter: 22637
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.018..0.036 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Seq Scan on lucas12 luc (cost=0.00..1270.55 rows=23455 width=236) (actual time=0.005..25.623 rows=24935 loops=1)
Planning time: 0.271 ms
Execution time: 1338381.079 ms
the query takes forever! I blame it on the nonexistent index on the temporary table sn. The massive decrease in speed can't be caused by ST_Buffer() itself, as it is really fast and the buffered feature is simple.
Two Questions:
1) Am I right?
2) What can I do to reach a speed similar to that of the first query?
I've run into a trap. ST_Buffer() is not the right choice here; ST_DWithin() is, since it keeps using the indexes on the geometry columns while actually performing a bounding-box comparison. The help page for ST_Buffer() clearly states not to make the mistake of using ST_Buffer() for radius searches, but to use ST_DWithin() instead. Since the word buffer is used in a lot of GIS software, I didn't consider looking for alternatives.
SELECT luc.*
FROM spatial_derived.lucas12 luc
JOIN spatial_derived.germany_bld sn ON ST_DWithin(sn.geom, luc.geom, 30000)
WHERE sn.state = 'SN'
works and only takes a second (2300 points within that "buffer")!
To check if you're right, you can leave sn as is and apply ST_Buffer() in the join condition:
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(ST_Buffer(sn.geom, 30000), luc.geom)
Query plan:
Nested Loop (cost=0.00..13234.80 rows=7818 width=236) (actual time=6237.876..1340000.576 rows=2298 loops=1)
Join Filter: st_contains(st_buffer(germany_bld.geom, 30000::double precision), luc.geom)
Rows Removed by Join Filter: 22637
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.023..0.038 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Seq Scan on lucas12 luc (cost=0.00..1270.55 rows=23455 width=236) (actual time=0.004..24.525 rows=24935 loops=1)
Planning time: 0.453 ms
Execution time: 1340001.420 ms
This query will answer both your questions, or at least the first one, depending on the result.
Update
Your assumption seems to be wrong: ST_Buffer() does cause the speed drop.
You join against a much larger set when using ST_Buffer(), so the time increase is quite expected. You can run EXPLAIN ANALYZE for both queries, with and without ST_Buffer(); it will probably show the same plans, just with different row counts and different second cost values...
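If you wanted to keep the ST_Buffer() approach anyway, one hedged option (the table name is made up) is to materialize the buffer once, so it is not recomputed for every joined row and the planner can again drive the join through lucas12's geometry index:

-- Compute the buffered border once and store it:
CREATE TABLE spatial_derived.sn_buffered AS
SELECT ST_Buffer(geom, 30000) AS geom
FROM spatial_derived.germany_bld
WHERE state = 'SN';
-- The join condition now references a plain geometry column:
SELECT luc.*
FROM spatial_derived.lucas12 luc
JOIN spatial_derived.sn_buffered sn ON ST_Contains(sn.geom, luc.geom);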