Very slow Postgres ORDER BY performance despite indices

I have the following tables:
Collections (around 100 records)
Events (related to collections, around 6000 per collection)
Sales (related to events, in total around 2m records)
I need to get all sales for a certain collection, sorted either by sales.timestamp DESC (a datetime field) or by sales.id DESC (since id is already in insertion order). But to filter by collection_id I first need to join in the events table and then filter on e.collection_id.
To help with this I created a separate index on timestamp: idx_sales_timestamp_desc (timestamp DESC NULLS LAST), alongside the usual pkey index on sales.id
EXPLAIN (analyze,buffers) SELECT * from sales s
LEFT JOIN events e ON s.sales_event = e.sales_event
WHERE e.collection_id = 9
ORDER BY s.id DESC -- identical results with s.timestamp
LIMIT 10;
Without the ORDER BY:
Limit (cost=0.85..196.61 rows=10 width=619) (actual time=0.069..2.416 rows=10 loops=1)
Buffers: shared hit=172 read=3
I/O Timings: read=1.810
-> Nested Loop (cost=0.85..122231.34 rows=6244 width=619) (actual time=0.068..2.413 rows=10 loops=1)
Buffers: shared hit=172 read=3
I/O Timings: read=1.810
-> Index Scan using idx_events_collection_id on events e (cost=0.43..32359.71 rows=9551 width=206) (actual time=0.027..0.074 rows=47 loops=1)
Index Cond: (collection_id = 9)
Buffers: shared hit=24
-> Index Scan using idx_sales_sales_event on sales s (cost=0.42..9.39 rows=2 width=413) (actual time=0.049..0.049 rows=0 loops=47)
Index Cond: (sales_event = e.sales_event)
Buffers: shared hit=148 read=3
I/O Timings: read=1.810
Planning:
Buffers: shared hit=20
Planning Time: 0.418 ms
Execution Time: 2.444 ms
With the ORDER BY:
Limit (cost=1001.00..3353.78 rows=10 width=619) (actual time=1084.650..2353.191 rows=10 loops=1)
Buffers: shared hit=81908 read=6967 dirtied=1930
I/O Timings: read=3732.771
-> Gather Merge (cost=1001.00..1470076.56 rows=6244 width=619) (actual time=1084.649..2352.683 rows=10 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=81908 read=6967 dirtied=1930
I/O Timings: read=3732.771
-> Nested Loop (cost=0.98..1468355.82 rows=2602 width=619) (actual time=622.297..1768.414 rows=6 loops=3)
Buffers: shared hit=81908 read=6967 dirtied=1930
I/O Timings: read=3732.771
-> Parallel Index Scan Backward using sales_pkey on sales s (cost=0.42..58907.93 rows=303693 width=413) (actual time=0.237..301.251 rows=6008 loops=3)
Buffers: shared hit=9958 read=1094 dirtied=1479
I/O Timings: read=513.609
-> Index Scan using events_pkey on events e (cost=0.55..4.64 rows=1 width=206) (actual time=0.243..0.243 rows=0 loops=18024)
Index Cond: (sales_event = s.sales_event)
Filter: (collection_id = 9)
Rows Removed by Filter: 0
Buffers: shared hit=71950 read=5873 dirtied=451
I/O Timings: read=3219.161
Planning:
Buffers: shared hit=20
Planning Time: 0.268 ms
Execution Time: 2354.905 ms
sales DDL:
-- Table Definition ----------------------------------------------
CREATE TABLE sales (
id BIGSERIAL PRIMARY KEY,
transaction text,
sales_event text,
price bigint,
name text,
timestamp timestamp with time zone
);
-- Indices -------------------------------------------------------
CREATE UNIQUE INDEX sales_pkey ON sales(id int8_ops);
CREATE INDEX idx_sales_sales_event ON sales(sales_event text_ops);
CREATE INDEX idx_sales_timestamp_desc ON sales(timestamp timestamptz_ops DESC);
Events DDL:
-- Table Definition ----------------------------------------------
CREATE TABLE events (
created_at timestamp with time zone,
updated_at timestamp with time zone,
sales_event text PRIMARY KEY,
collection_id bigint REFERENCES collections(id)
);
-- Indices -------------------------------------------------------
CREATE UNIQUE INDEX events_pkey ON events(sales_event text_ops);
Without the ORDER BY, I'm at around 500 ms. With the ORDER BY, it easily ends up taking anywhere from 2 seconds to 3 minutes or longer, depending on DB load, despite all indices being used according to the EXPLAIN.
The default order when omitting ORDER BY altogether is not the one I want. I kept ANALYZE up to date as well.
How do I solve this in a good way?
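For what it's worth, the LATERAL pattern from the "LIMIT with ORDER BY makes query slow" answer further down can be adapted to this schema. A rough sketch (untested against this data; it assumes there are only a handful of sales per event, which the plans above suggest):
SELECT sl.*
FROM events e
CROSS JOIN LATERAL (
    SELECT s.*
    FROM sales s
    WHERE s.sales_event = e.sales_event
    ORDER BY s.id DESC
    LIMIT 10
) sl
WHERE e.collection_id = 9
ORDER BY sl.id DESC
LIMIT 10;
A composite index on sales (sales_event, id DESC) would let each lateral subquery run as one short index scan; with only the existing idx_sales_sales_event each small per-event batch gets sorted instead, which is still cheap.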

Related

Postgres function slower than same ad hoc query

I have had several cases where a Postgres function that returns a table result from a query is much slower than running the actual query. Why is that?
This is one example, but I've found that the function is slower than the plain query in many cases.
create function trending_names(date_start timestamp with time zone, date_end timestamp with time zone, gender_filter character, country_filter text)
returns TABLE(name_id integer, gender character, country text, score bigint, rank bigint)
language sql
as
$$
select u.name_id,
n.gender,
u.country,
count(u.rank) as score,
row_number() over (order by count(u.rank) desc) as rank
from babynames.user_scores u
inner join babynames.names n on u.name_id = n.id
where u.created_at between date_start and date_end
and u.rank > 0
and n.gender = gender_filter
and u.country = country_filter
group by u.name_id, n.gender, u.country
$$;
This is the query plan for a select from the function:
Function Scan on trending_names (cost=0.25..10.25 rows=1000 width=84) (actual time=1118.673..1118.861 rows=2238 loops=1)
Buffers: shared hit=216509 read=29837
Planning Time: 0.078 ms
Execution Time: 1119.083 ms
Query plan from just running the query. This takes less than half the time.
WindowAgg (cost=44834.98..45593.32 rows=43334 width=25) (actual time=383.387..385.223 rows=2238 loops=1)
Buffers: shared hit=100446 read=50220
-> Sort (cost=44834.98..44943.31 rows=43334 width=17) (actual time=383.375..383.546 rows=2238 loops=1)
Sort Key: (count(u.rank)) DESC
Sort Method: quicksort Memory: 271kB
Buffers: shared hit=100446 read=50220
-> HashAggregate (cost=41064.22..41497.56 rows=43334 width=17) (actual time=381.088..381.906 rows=2238 loops=1)
Group Key: u.name_id, u.country, n.gender
Buffers: shared hit=100446 read=50220
-> Hash Join (cost=5352.15..40630.88 rows=43334 width=13) (actual time=60.710..352.646 rows=36271 loops=1)
Hash Cond: (u.name_id = n.id)
Buffers: shared hit=100446 read=50220
-> Index Scan using user_scores_rank_ix on user_scores u (cost=0.43..35077.55 rows=76796 width=11) (actual time=24.193..287.393 rows=69770 loops=1)
Index Cond: (rank > 0)
Filter: ((created_at >= '2021-01-01 00:00:00+00'::timestamp with time zone) AND (country = 'sv'::text) AND (created_at <= now()))
Rows Removed by Filter: 106521
Buffers: shared hit=99417 read=46856
-> Hash (cost=5005.89..5005.89 rows=27667 width=6) (actual time=36.420..36.420 rows=27472 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 1330kB
Buffers: shared hit=1029 read=3364
-> Seq Scan on names n (cost=0.00..5005.89 rows=27667 width=6) (actual time=0.022..24.447 rows=27472 loops=1)
Filter: (gender = 'f'::bpchar)
Rows Removed by Filter: 21559
Buffers: shared hit=1029 read=3364
Planning Time: 2.512 ms
Execution Time: 387.403 ms
I'm also confused about why it does a seq scan on names n in the last step, since names.id is the primary key and gender is indexed.
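One experiment that is often suggested for this pattern (my suggestion, not something from the thread): declare the function STABLE. Among the documented prerequisites for inlining a single-SELECT LANGUAGE sql table function into the calling query is that it is not VOLATILE (which is the default volatility), so marking it STABLE may let the planner see the actual parameter values instead of planning the function call generically. A sketch with the body unchanged:
create or replace function trending_names(date_start timestamp with time zone, date_end timestamp with time zone, gender_filter character, country_filter text)
returns TABLE(name_id integer, gender character, country text, score bigint, rank bigint)
language sql
stable -- the function only reads data, so STABLE is safe and is a prerequisite for inlining
as
$$
select u.name_id,
n.gender,
u.country,
count(u.rank) as score,
row_number() over (order by count(u.rank) desc) as rank
from babynames.user_scores u
inner join babynames.names n on u.name_id = n.id
where u.created_at between date_start and date_end
and u.rank > 0
and n.gender = gender_filter
and u.country = country_filter
group by u.name_id, n.gender, u.country
$$;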

PostgreSQL slow order

I have a table (over 100 million records) on PostgreSQL 13.1:
CREATE TABLE report
(
id serial primary key,
license_plate_id integer,
datetime timestamp
);
Indexes (for testing I created both of them):
create index report_lp_datetime_index on report (license_plate_id, datetime);
create index report_lp_datetime_desc_index on report (license_plate_id desc, datetime desc);
So, my question is: why is a query like
select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75)
order by datetime desc
limit 100
very slow (~10 sec), while the same query without the ORDER BY clause is fast (milliseconds)?
Explain:
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34, 75,374,57123)
limit 100
Limit (cost=0.57..400.38 rows=100 width=316) (actual time=0.037..0.216 rows=100 loops=1)
Buffers: shared hit=103
-> Index Scan using report_lp_id_idx on report r (cost=0.57..44986.97 rows=11252 width=316) (actual time=0.035..0.202 rows=100 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=103
Planning Time: 0.228 ms
Execution Time: 0.251 ms
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75,374,57123)
order by datetime desc
limit 100
Limit (cost=44193.63..44193.88 rows=100 width=316) (actual time=4921.030..4921.047 rows=100 loops=1)
Buffers: shared hit=11455 read=671
-> Sort (cost=44193.63..44221.76 rows=11252 width=316) (actual time=4921.028..4921.035 rows=100 loops=1)
Sort Key: datetime DESC
Sort Method: top-N heapsort Memory: 128kB
Buffers: shared hit=11455 read=671
-> Bitmap Heap Scan on report r (cost=151.18..43763.59 rows=11252 width=316) (actual time=54.422..4911.927 rows=12148 loops=1)
Recheck Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Heap Blocks: exact=12063
Buffers: shared hit=11455 read=671
-> Bitmap Index Scan on report_lp_id_idx (cost=0.00..148.37 rows=11252 width=0) (actual time=52.631..52.632 rows=12148 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=59 read=4
Planning Time: 0.427 ms
Execution Time: 4921.128 ms
You seem to have rather slow storage, if reading 671 8kB-blocks from disk takes a couple of seconds.
The way to speed this up is to reorder the table in the same way as the index, so that you can find the required rows in the same or adjacent table blocks:
CLUSTER report USING report_lp_id_idx;
Be warned that rewriting the table in this way causes downtime – the table will not be available while it is being rewritten. Moreover, PostgreSQL does not maintain the table order, so subsequent data modifications will cause performance to gradually deteriorate, so that after a while you will have to run CLUSTER again.
But if you need this query to be fast no matter what, CLUSTER is the way to go.
Your two indices do exactly the same thing, so you can remove the second one; it's useless.
To optimize your query, the order of the fields inside the index must be reversed:
create index report_lp_datetime_index on report (datetime,license_plate_id);
BEGIN;
CREATE TABLE foo (d INTEGER, i INTEGER);
INSERT INTO foo SELECT random()*100000, random()*1000 FROM generate_series(1,1000000) s;
CREATE INDEX foo_d_i ON foo(d DESC,i);
COMMIT;
VACUUM ANALYZE foo;
EXPLAIN ANALYZE SELECT * FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100;
Limit (cost=0.42..343.92 rows=100 width=8) (actual time=0.076..9.359 rows=100 loops=1)
-> Index Only Scan Backward using foo_d_i on foo (cost=0.42..40976.43 rows=11929 width=8) (actual time=0.075..9.339 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9016
Heap Fetches: 0
Planning Time: 0.339 ms
Execution Time: 9.387 ms
Note the index is not used to optimize the WHERE clause. It is used here as a compact and fast way to store references to the rows ordered by date DESC, so the ORDER BY can do an index-only scan and avoid sorting. By adding column i (the stand-in for license_plate_id) to the index, the condition on i can be tested during the index-only scan without hitting the table for every row. Since there is a low LIMIT value it does not need to scan the whole index; it only scans it in date DESC order until it finds enough rows satisfying the WHERE condition to return the result.
It will be faster if you create the index in date DESC order; this could be useful if you use ORDER BY date DESC + LIMIT in other queries too.
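For reference, such a descending variant might look like this (a sketch; the index name is made up):
CREATE INDEX report_datetime_desc_lp_index ON report (datetime DESC, license_plate_id);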
You forget that OP's table has a third column, and he is using SELECT *. So that wouldn't be an index-only scan.
Easy to work around. The optimum way to do this query would be an index-only scan to filter on the WHERE conditions, then LIMIT, then hit the table to get the rows. For some reason, if SELECT * is used, Postgres takes the id column from the table instead of taking it from the index, which results in lots of unnecessary heap fetches for rows whose id is rejected by the WHERE condition. So let's do it manually; I've also added another bogus column to make sure the SELECT * hits the table.
EXPLAIN (ANALYZE,buffers) SELECT * FROM foo
JOIN (SELECT d,i FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (d,i)
ORDER BY d DESC LIMIT 100;
Limit (cost=0.85..1281.94 rows=1 width=17) (actual time=0.052..3.618 rows=100 loops=1)
Buffers: shared hit=453
-> Nested Loop (cost=0.85..1281.94 rows=1 width=17) (actual time=0.050..3.594 rows=100 loops=1)
Buffers: shared hit=453
-> Limit (cost=0.42..435.44 rows=100 width=8) (actual time=0.037..2.953 rows=100 loops=1)
Buffers: shared hit=53
-> Index Only Scan using foo_d_i on foo foo_1 (cost=0.42..51936.43 rows=11939 width=8) (actual time=0.037..2.935 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=53
-> Index Scan using foo_d_i on foo (cost=0.42..8.45 rows=1 width=17) (actual time=0.005..0.005 rows=1 loops=100)
Index Cond: ((d = foo_1.d) AND (i = foo_1.i))
Buffers: shared hit=400
Execution Time: 3.663 ms
Another option is to just add the primary key to the date,license_plate index.
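In the test schema that index might look like this (a sketch; it assumes foo has meanwhile gained a serial id primary key, which the plan below implies):
CREATE INDEX foo_d_i_id ON foo (d DESC, i, id);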
SELECT * FROM foo JOIN (SELECT id FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (id) ORDER BY d DESC LIMIT 100;
Limit (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.920..3.947 rows=100 loops=1)
Buffers: shared hit=473
-> Sort (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.919..3.931 rows=100 loops=1)
Sort Key: foo.d DESC
Sort Method: quicksort Memory: 32kB
Buffers: shared hit=473
-> Nested Loop (cost=0.85..1354.66 rows=100 width=17) (actual time=0.055..3.858 rows=100 loops=1)
Buffers: shared hit=473
-> Limit (cost=0.42..509.41 rows=100 width=8) (actual time=0.039..3.116 rows=100 loops=1)
Buffers: shared hit=73
-> Index Only Scan using foo_d_i_id on foo foo_1 (cost=0.42..60768.43 rows=11939 width=8) (actual time=0.039..3.093 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=73
-> Index Scan using foo_pkey on foo (cost=0.42..8.44 rows=1 width=17) (actual time=0.006..0.006 rows=1 loops=100)
Index Cond: (id = foo_1.id)
Buffers: shared hit=400
Execution Time: 3.972 ms
Edit
After thinking about it... since the LIMIT restricts the output to 100 rows ordered by date DESC, wouldn't it be nice if we could get the 100 most recent rows for each license_plate_id, put all of those into a top-N sort, and keep only the best 100 across all license_plate_ids? That would avoid reading and throwing away a lot of rows from the index: even though scanning and discarding index rows is much faster than hitting the table, it still loads those index pages into RAM and clogs up your buffers with data you don't actually need to keep in cache. Let's use a LATERAL JOIN:
EXPLAIN (ANALYZE,BUFFERS)
SELECT * FROM foo
JOIN (SELECT d,i FROM
(VALUES (1),(2),(4),(5),(6),(7),(8),(10),(15),(22),(34),(75)) idlist
CROSS JOIN LATERAL
(SELECT d,i FROM foo WHERE i=idlist.column1 ORDER BY d DESC LIMIT 100) f2
ORDER BY d DESC LIMIT 100
) f3 USING (d,i)
ORDER BY d DESC LIMIT 100;
It's even faster: 2 ms, and it uses the index on (license_plate_id, date) instead of the other way around. Also, and this is important: each subquery in the LATERAL hits only the index pages that contain rows that will actually be selected, while the previous queries hit many more index pages. So you save on RAM buffers.
If you don't need the index on (date,license_plate_id) and don't want to keep a useless index, that could be interesting since this query doesn't use it. On the other hand, if you need the index on (date,license_plate_id) for something else and want to keep it, then... maybe not.
Please post results for the winning query 🔥

PostgreSQL 11 goes for parallel seq scan on partitioned table where index should be enough

The problem is that I keep getting a seq scan on a rather simple query for a very trivial setup. What am I doing wrong?
Postgres 11 on Windows Server 2016
Config changes done: constraint_exclusion = partition
A single table partitioned into 200 subtables, dozens of millions of records per partition.
An index on the field in question (assuming it is partitioned as well)
Here's the create statement:
CREATE TABLE A (
K int NOT NULL,
X bigint NOT NULL,
Date timestamp NOT NULL,
fy smallint NOT NULL,
fz decimal(18, 8) NOT NULL,
fw decimal(18, 8) NOT NULL,
fv decimal(18, 8) NULL,
PRIMARY KEY (K, X)
) PARTITION BY LIST (K);
CREATE TABLE A_1 PARTITION OF A FOR VALUES IN (1);
CREATE TABLE A_2 PARTITION OF A FOR VALUES IN (2);
...
CREATE TABLE A_200 PARTITION OF A FOR VALUES IN (200);
CREATE TABLE A_Default PARTITION OF A DEFAULT;
CREATE INDEX IX_A_Date ON A (Date);
The query in question:
SELECT K, MIN(Date), MAX(Date)
FROM A
GROUP BY K
That always gives a sequential scan, which takes several minutes, even though it is clearly evident that no table data is needed at all: the Date field is indexed, and I'm just asking for the first and last leaves of its B-tree.
Originally the index was on (K, Date), and it quickly became clear that Postgres would not use it for any of the queries I expected it to. An index on (Date) alone did the trick for other queries, and Postgres does seem to partition indexes automatically. However, this specific simple query always goes for a seq scan.
Any thoughts appreciated!
UPDATE
Query plan (analyze, buffers) is as follows:
Finalize GroupAggregate (cost=4058360.99..4058412.66 rows=200 width=20) (actual time=148448.183..148448.189 rows=5 loops=1)
Group Key: a_16.k
Buffers: shared hit=5970 read=548034 dirtied=4851 written=1446
-> Gather Merge (cost=4058360.99..4058407.66 rows=400 width=20) (actual time=148448.166..148463.953 rows=8 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=5998 read=1919356 dirtied=4865 written=1454
-> Sort (cost=4057360.97..4057361.47 rows=200 width=20) (actual time=148302.271..148302.285 rows=3 loops=3)
Sort Key: a_16.k
Sort Method: quicksort Memory: 25kB
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
Buffers: shared hit=5998 read=1919356 dirtied=4865 written=1454
-> Partial HashAggregate (cost=4057351.32..4057353.32 rows=200 width=20) (actual time=148302.199..148302.203 rows=3 loops=3)
Group Key: a_16.k
Buffers: shared hit=5984 read=1919356 dirtied=4865 written=1454
-> Parallel Append (cost=0.00..3347409.96 rows=94658849 width=12) (actual time=1.678..116664.051 rows=75662243 loops=3)
Buffers: shared hit=5984 read=1919356 dirtied=4865 written=1454
-> Parallel Seq Scan on a_16 (cost=0.00..1302601.32 rows=42870432 width=12) (actual time=0.320..41625.766 rows=34283419 loops=3)
Buffers: shared hit=14 read=873883 dirtied=14 written=8
-> Parallel Seq Scan on a_19 (cost=0.00..794121.94 rows=26070794 width=12) (actual time=0.603..54017.937 rows=31276617 loops=2)
Buffers: shared read=533414
-> Parallel Seq Scan on a_20 (cost=0.00..447025.50 rows=14900850 width=12) (actual time=0.347..52866.404 rows=35762000 loops=1)
Buffers: shared hit=5964 read=292053 dirtied=4850 written=1446
-> Parallel Seq Scan on a_18 (cost=0.00..198330.23 rows=6450422 width=12) (actual time=4.504..27197.706 rows=15481014 loops=1)
Buffers: shared read=133826
-> Parallel Seq Scan on a_17 (cost=0.00..129272.31 rows=4308631 width=12) (actual time=3.014..18423.307 rows=10340224 loops=1)
Buffers: shared hit=6 read=86180 dirtied=1
...
-> Parallel Seq Scan on a_197 (cost=0.00..14.18 rows=418 width=12) (actual time=0.000..0.000 rows=0 loops=1)
-> Parallel Seq Scan on a_198 (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.002 rows=0 loops=1)
-> Parallel Seq Scan on a_199 (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.001 rows=0 loops=1)
-> Parallel Seq Scan on a_default (cost=0.00..14.18 rows=418 width=12) (actual time=0.001..0.002 rows=0 loops=1)
Planning Time: 16.893 ms
Execution Time: 148466.519 ms
UPDATE 2 Just to avoid future comments like “you should index on (K, Date)”:
The query plan with both indexes in place is exactly the same, analysis numbers are the same and even buffer hits/reads are almost the same.
Aggregate push-down into parallel plans can be enabled by setting enable_partitionwise_aggregate to on.
That will probably speed up your query somewhat, because PostgreSQL doesn't have to pass so much data between the parallel workers.
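For example (the parameter exists since PostgreSQL 11 and is off by default; it can also be set in postgresql.conf):
SET enable_partitionwise_aggregate = on;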
But it looks like PostgreSQL isn't smart enough to figure out it can use the index to speed up min and max for each partition, although it is smart enough to do that with a non-partitioned table.
There is no pretty way to work around that; you could resort to querying each partition:
SELECT k, min(min_date), max(max_date)
FROM (
SELECT 1 AS k, MIN(date) AS min_date, MAX(date) AS max_date FROM a_1
UNION ALL
SELECT 2, MIN(date), MAX(date) FROM a_2
UNION ALL
...
SELECT 200, MIN(date), MAX(date) FROM a_200
UNION ALL
SELECT k, MIN(date), MAX(date) FROM a_default
) AS all_a
GROUP BY k;
Yuck! There is clearly room for improvement here.
I dug into the code and found the reason in src/backend/optimizer/plan/planagg.c:
/*
* preprocess_minmax_aggregates - preprocess MIN/MAX aggregates
*
* Check to see whether the query contains MIN/MAX aggregate functions that
* might be optimizable via indexscans. If it does, and all the aggregates
* are potentially optimizable, then create a MinMaxAggPath and add it to
* the (UPPERREL_GROUP_AGG, NULL) upperrel.
[...]
*/
void
preprocess_minmax_aggregates(PlannerInfo *root, List *tlist)
{
[...]
/*
* Reject unoptimizable cases.
*
* We don't handle GROUP BY or windowing, because our current
* implementations of grouping require looking at all the rows anyway, and
* so there's not much point in optimizing MIN/MAX.
*/
if (parse->groupClause || list_length(parse->groupingSets) > 1 ||
parse->hasWindowFuncs)
return;
Basically, PostgreSQL punts when it sees a GROUP BY clause.
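If writing the big UNION ALL above by hand is too painful, the statement can be generated from the catalog. A rough sketch (it assumes each list partition holds a single K value, as in the DDL above; the default partition would still need the GROUP BY form):
SELECT string_agg(
         format('SELECT MIN(k) AS k, MIN(date) AS min_date, MAX(date) AS max_date FROM %I', c.relname),
         E'\nUNION ALL\n')
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'a'::regclass;
The result is the inner UNION ALL, which you can then wrap in the outer GROUP BY query shown above (or execute directly, e.g. with \gexec in psql).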

LIMIT with ORDER BY makes query slow

I am having problems optimizing a query in PostgreSQL 9.5.14.
select *
from file as f
join product_collection pc on (f.product_collection_id = pc.id)
where pc.mission_id = 7
order by f.id asc
limit 100;
Takes about 100 seconds. If I drop the LIMIT clause it takes about 0.5 seconds:
With limit:
explain (analyze,buffers) ... -- query exactly as above
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.84..859.32 rows=100 width=457) (actual time=102793.422..102856.884 rows=100 loops=1)
Buffers: shared hit=222430592
-> Nested Loop (cost=0.84..58412343.43 rows=6804163 width=457) (actual time=102793.417..102856.872 rows=100 loops=1)
Buffers: shared hit=222430592
-> Index Scan using file_pkey on file f (cost=0.57..23409008.61 rows=113831736 width=330) (actual time=0.048..28207.152 rows=55858772 loops=1)
Buffers: shared hit=55652672
-> Index Scan using product_collection_pkey on product_collection pc (cost=0.28..0.30 rows=1 width=127) (actual time=0.001..0.001 rows=0 loops=55858772)
Index Cond: (id = f.product_collection_id)
Filter: (mission_id = 7)
Rows Removed by Filter: 1
Buffers: shared hit=166777920
Planning time: 0.803 ms
Execution time: 102856.988 ms
Without limit:
=> explain (analyze,buffers) ... -- query as above, just without limit
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=20509671.01..20526681.42 rows=6804163 width=457) (actual time=456.175..510.596 rows=142055 loops=1)
Sort Key: f.id
Sort Method: quicksort Memory: 79392kB
Buffers: shared hit=37956
-> Nested Loop (cost=0.84..16494851.02 rows=6804163 width=457) (actual time=0.044..231.051 rows=142055 loops=1)
Buffers: shared hit=37956
-> Index Scan using product_collection_mission_id_index on product_collection pc (cost=0.28..46.13 rows=87 width=127) (actual time=0.017..0.101 rows=87 loops=1)
Index Cond: (mission_id = 7)
Buffers: shared hit=10
-> Index Scan using file_product_collection_id_index on file f (cost=0.57..187900.11 rows=169535 width=330) (actual time=0.007..1.335 rows=1633 loops=87)
Index Cond: (product_collection_id = pc.id)
Buffers: shared hit=37946
Planning time: 0.807 ms
Execution time: 569.865 ms
I have copied the database to a backup server so that I may safely manipulate the database without something else changing it on me.
Cardinalities:
Table file: 113,831,736 rows.
Table product_collection: 1370 rows.
The query without LIMIT: 142,055 rows.
SELECT count(*) FROM product_collection WHERE mission_id = 7: 87 rows.
What I have tried:
searching stack overflow
vacuum full analyze
creating two column indexes on file.product_collection_id & file.id. (there already are single column indexes on every field touched.)
creating two column indexes on file.id & file.product_collection_id.
increasing the statistics on file.id & file.product_collection_id, then re-vacuum analyze.
changing various query planner settings.
creating non-materialized views.
walking up and down the hallway while muttering to myself.
None of them seem to change the performance in a significant way.
Thoughts?
UPDATE from OP:
Tested this on PostgreSQL 9.6 & 10.4, and found no significant changes in plans or performance.
However, setting random_page_cost low enough is the only way to get faster performance for the query without LIMIT.
With the default random_page_cost = 4, the query without LIMIT:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=9270013.01..9287875.64 rows=7145054 width=457) (actual time=47782.523..47843.812 rows=145697 loops=1)
Sort Key: f.id
Sort Method: external sort Disk: 59416kB
Buffers: shared hit=3997185 read=1295264, temp read=7427 written=7427
-> Hash Join (cost=24.19..6966882.72 rows=7145054 width=457) (actual time=1.323..47458.767 rows=145697 loops=1)
Hash Cond: (f.product_collection_id = pc.id)
Buffers: shared hit=3997182 read=1295264
-> Seq Scan on file f (cost=0.00..6458232.17 rows=116580217 width=330) (actual time=0.007..17097.581 rows=116729984 loops=1)
Buffers: shared hit=3997169 read=1295261
-> Hash (cost=23.08..23.08 rows=89 width=127) (actual time=0.840..0.840 rows=87 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 15kB
Buffers: shared hit=13 read=3
-> Bitmap Heap Scan on product_collection pc (cost=4.97..23.08 rows=89 width=127) (actual time=0.722..0.801 rows=87 loops=1)
Recheck Cond: (mission_id = 7)
Heap Blocks: exact=10
Buffers: shared hit=13 read=3
-> Bitmap Index Scan on product_collection_mission_id_index (cost=0.00..4.95 rows=89 width=0) (actual time=0.707..0.707 rows=87 loops=1)
Index Cond: (mission_id = 7)
Buffers: shared hit=3 read=3
Planning time: 0.929 ms
Execution time: 47911.689 ms
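For reference, lowering the planner's random-I/O cost estimate can be done per session or per database; the value below is only illustrative (roughly what is often suggested for SSD storage), and mydb is a placeholder:
SET random_page_cost = 1.1;
ALTER DATABASE mydb SET random_page_cost = 1.1;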
User Erwin's answer below will take me some time to fully understand and generalize to all of the use cases needed. In the meantime we will probably use either a materialized view or just flatten our table structure.
This query is harder for the Postgres query planner than it might look. Depending on cardinalities, data distribution, value frequencies, sizes, ... completely different query plans can prevail and the planner has a hard time predicting which is best. Current versions of Postgres are better at this in several aspects, but it's still hard to optimize.
Since you retrieve only relatively few rows from product_collection, this equivalent query with LIMIT in a LATERAL subquery should avoid performance degradation:
SELECT *
FROM product_collection pc
CROSS JOIN LATERAL (
SELECT *
FROM file f -- big table
WHERE f.product_collection_id = pc.id
ORDER BY f.id
LIMIT 100
) f
WHERE pc.mission_id = 7
ORDER BY f.id
LIMIT 100;
Edit: This results in a query plan with explain (analyze,verbose) provided by the OP:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=30524.34..30524.59 rows=100 width=457) (actual time=13.128..13.167 rows=100 loops=1)
Buffers: shared hit=3213
-> Sort (cost=30524.34..30546.09 rows=8700 width=457) (actual time=13.126..13.152 rows=100 loops=1)
Sort Key: file.id
Sort Method: top-N heapsort Memory: 76kB
Buffers: shared hit=3213
-> Nested Loop (cost=0.57..30191.83 rows=8700 width=457) (actual time=0.060..9.868 rows=2880 loops=1)
Buffers: shared hit=3213
-> Seq Scan on product_collection pc (cost=0.00..69.12 rows=87 width=127) (actual time=0.024..0.336 rows=87 loops=1)
Filter: (mission_id = 7)
Rows Removed by Filter: 1283
Buffers: shared hit=13
-> Limit (cost=0.57..344.24 rows=100 width=330) (actual time=0.008..0.071 rows=33 loops=87)
Buffers: shared hit=3200
-> Index Scan using file_pc_id_index on file (cost=0.57..582642.42 rows=169535 width=330) (actual time=0.007..0.065 rows=33 loops=87)
Index Cond: (product_collection_id = pc.id)
Buffers: shared hit=3200
Planning time: 0.595 ms
Execution time: 13.319 ms
You need these indexes (will help your original query, too):
CREATE INDEX idx1 ON file (product_collection_id, id); -- crucial
CREATE INDEX idx2 ON product_collection (mission_id, id); -- helpful
You mentioned:
two column indexes on file.id & file.product_collection_id.
Etc. But we need it the other way round: id last. The order of index expressions is crucial. See:
Is a composite index also good for queries on the first field?
Rationale: With only 87 rows from product_collection, we only fetch a maximum of 87 x 100 = 8700 rows (fewer if not every pc.id has 100 rows in table file), which are then sorted before picking the top 100. Performance degrades with the number of rows you get from product_collection and with bigger LIMIT.
With the multicolumn index idx1 above, that's 87 fast index scans. The rest is not very expensive.
More optimization is possible, depending on additional information. Related:
Can spatial index help a “range - order by - limit” query

Are JSONB indexes slower than native indexes?

I have a large table (30M rows) which has ~10 jsonb B-tree indexes.
When I create a query using few conditions, the query is relatively fast.
When I add more conditions, especially one with a sparse jsonb index (e.g. an integer between 0 and 1,000,000), the query speed drops off dramatically.
I am wondering whether JSONB indexes are slower than native indexes. Should I expect a performance boost by switching to native columns rather than JSONB?
Table definition:
id integer
type text
data jsonb
company_index ARRAY
exchange_index ARRAY
eligible boolean
Example query:
SELECT id, data, type
FROM collection.bundles
WHERE ( (ARRAY['.X'] && bundles.exchange_index) AND
type IN ('discussion') AND
( ((data->>'sentiment_score')::bigint > 0 AND
(data->'display_tweet'->'stocktwit'->'id') IS NOT NULL) ) AND
( eligible = true ) AND
((data->'display_tweet'->'stocktwit')->>'id')::bigint IS NULL )
ORDER BY id DESC
LIMIT 50
Output:
Limit (cost=0.56..16197.56 rows=50 width=212) (actual time=31900.874..31900.874 rows=0 loops=1)
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
-> Index Scan using bundles2_id_desc_idx on bundles (cost=0.56..2401044.17 rows=7412 width=212) (actual time=31900.871..31900.871 rows=0 loops=1)
Filter: (eligible AND ('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text) AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 16093269
Buffers: shared hit=13713180 read=1267819 dirtied=34 written=713
I/O Timings: read=7644.206 write=7.294
Planning time: 0.366 ms
Execution time: 31900.909 ms
Note:
There are jsonb B-tree indexes on every jsonb condition used in this query. exchange_index and company_index have GIN indexes.
UPDATE
After Laurenz's changed query:
Limit (cost=150634.15..150634.27 rows=50 width=211) (actual time=15925.828..15925.828 rows=0 loops=1)
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Sort (cost=150634.15..150652.53 rows=7352 width=211) (actual time=15925.827..15925.827 rows=0 loops=1)
Sort Key: bundles.id DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> Bitmap Heap Scan on bundles (cost=56666.15..150316.40 rows=7352 width=211) (actual time=15925.816..15925.816 rows=0 loops=1)
Recheck Cond: (('{.X}'::text[] && exchange_index) AND (type = 'discussion'::text))
Filter: (eligible AND ((((data -> 'display_tweet'::text) -> 'stocktwit'::text) -> 'id'::text) IS NOT NULL) AND (((data ->> 'sentiment_score'::text))::bigint > 0) AND (((((data -> 'display_tweet'::text) -> 'stocktwit'::text) ->> 'id'::text))::bigint IS NULL))
Rows Removed by Filter: 273230
Heap Blocks: exact=175975
Buffers: shared hit=1137490 read=680349 written=2
I/O Timings: read=2896.702 write=0.038
-> BitmapAnd (cost=56666.15..56666.15 rows=23817 width=0) (actual time=1895.890..1895.890 rows=0 loops=1)
Buffers: shared hit=37488 read=85559
I/O Timings: read=325.535
-> Bitmap Index Scan on bundles2_exchange_index_ops_idx (cost=0.00..6515.57 rows=863703 width=0) (actual time=218.690..218.690 rows=892669 loops=1)
Index Cond: ('{.X}'::text[] && exchange_index)
Buffers: shared hit=7 read=313
I/O Timings: read=1.458
-> Bitmap Index Scan on bundles_eligible_idx (cost=0.00..23561.74 rows=2476877 width=0) (actual time=436.719..436.719 rows=2569331 loops=1)
Index Cond: (eligible = true)
Buffers: shared hit=37473
-> Bitmap Index Scan on bundles2_type_idx (cost=0.00..26582.83 rows=2706276 width=0) (actual time=1052.267..1052.267 rows=2794517 loops=1)
Index Cond: (type = 'discussion'::text)
Buffers: shared hit=8 read=85246
I/O Timings: read=324.077
Planning time: 0.433 ms
Execution time: 15928.959 ms
None of your fancy indexes are used at all, so the problem is not whether they are fast.
There are several things at play here:
Seeing the dirtied and the written pages during the index scan, I suspect that there are quite a lot of “dead tuples” in your table. When the index scan visits them and notices they are dead, it “kills” those index entries so that subsequent index scans don't have to repeat that work.
If you repeat the query, you will probably notice that the number of blocks and the execution time becomes less.
You can reduce that problem by running VACUUM on the table or making sure autovacuum processes the table often enough.
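A sketch of both options (the scale factor value is only an example; the default is 0.2, i.e. 20% of the table):
VACUUM (ANALYZE) collection.bundles;
ALTER TABLE collection.bundles SET (autovacuum_vacuum_scale_factor = 0.02);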
Your major problem, however, is that the LIMIT clause tempts PostgreSQL to use the following strategy:
Since you only want 50 result rows in an order for which you have an index, just examine the table rows in index order and discard all rows that do not match the complicated condition until you have 50 results.
Unfortunately it has to scan 16093319 rows until it has found its 50 hits. The rows at the “high id” end of the table don't match the condition. PostgreSQL does not know about that correlation.
The solution is to discourage PostgreSQL from going down that route. The easiest way would be to drop all indexes on id, but given its name that is probably unfeasible.
The other way is to keep PostgreSQL from “seeing” the LIMIT clause when it plans the scan:
SELECT id, data, type
FROM (SELECT id, data, type
FROM collection.bundles
WHERE /* all your complicated conditions */
OFFSET 0) subquery
ORDER BY id DESC
LIMIT 50;
Remark: You didn't show your index definitions, but it sounds like you have quite a lot of them, possibly too many. Indexes are expensive, so make sure you define only those that give you a clear benefit.
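To see which of the many indexes actually get used, the standard statistics views can help, for example:
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
Indexes showing idx_scan = 0 since the last statistics reset are candidates for dropping.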