Postgres partitioning order by performance - postgresql

I'm using a partitioned Postgres table, set up as described in the documentation using rules, with a partitioning scheme based on date ranges (my date column is an epoch integer).
The problem is that a simple query to select the row with the lowest (or highest) value of the partitioning column is not using the indexes:
First, some settings to coerce postgres to do what I want:
SET constraint_exclusion = on;
SET enable_seqscan = off;
The query on a single partition works:
explain (SELECT * FROM urls_0 ORDER BY date_created ASC LIMIT 1);
Limit (cost=0.00..0.05 rows=1 width=38)
-> Index Scan using urls_date_created_idx_0 on urls_0 (cost=0.00..436.68 rows=8099 width=38)
However, the same query on the entire table is seq scanning:
explain (SELECT * FROM urls ORDER BY date_created ASC LIMIT 1);
Limit (cost=50000000274.88..50000000274.89 rows=1 width=51)
-> Sort (cost=50000000274.88..50000000302.03 rows=10859 width=51)
Sort Key: public.urls.date_created
-> Result (cost=10000000000.00..50000000220.59 rows=10859 width=51)
-> Append (cost=10000000000.00..50000000220.59 rows=10859 width=51)
-> Seq Scan on urls (cost=10000000000.00..10000000016.90 rows=690 width=88)
-> Seq Scan on urls_15133 urls (cost=10000000000.00..10000000016.90 rows=690 width=88)
-> Seq Scan on urls_15132 urls (cost=10000000000.00..10000000016.90 rows=690 width=88)
-> Seq Scan on urls_15131 urls (cost=10000000000.00..10000000016.90 rows=690 width=88)
-> Seq Scan on urls_0 urls (cost=10000000000.00..10000000152.99 rows=8099 width=38)
Finally, a lookup by date_created does work correctly with constraint exclusion and index scans:
explain (SELECT * FROM urls where date_created = 1212)
Result (cost=10000000000.00..10000000052.75 rows=23 width=45)
-> Append (cost=10000000000.00..10000000052.75 rows=23 width=45)
-> Seq Scan on urls (cost=10000000000.00..10000000018.62 rows=3 width=88)
Filter: (date_created = 1212)
-> Index Scan using urls_date_created_idx_0 on urls_0 urls (cost=0.00..34.12 rows=20 width=38)
Index Cond: (date_created = 1212)
Does anyone know how to use partitioning so that this type of query will use an index scan?

PostgreSQL 9.1 knows how to optimize this out of the box.
In 9.0 or earlier, you need to decompose the query manually, UNIONing the per-partition subqueries, each with its own ORDER BY/LIMIT.
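For 9.0 and earlier, the manual decomposition looks roughly like this (a sketch using the partition names visible in the plans above; adjust to your actual partition list, and ONLY restricts the first branch to rows stored in the parent table itself):

SELECT *
FROM (
    (SELECT * FROM ONLY urls  ORDER BY date_created ASC LIMIT 1)
    UNION ALL
    (SELECT * FROM urls_0     ORDER BY date_created ASC LIMIT 1)
    UNION ALL
    (SELECT * FROM urls_15131 ORDER BY date_created ASC LIMIT 1)
    UNION ALL
    (SELECT * FROM urls_15132 ORDER BY date_created ASC LIMIT 1)
    UNION ALL
    (SELECT * FROM urls_15133 ORDER BY date_created ASC LIMIT 1)
) AS candidates
ORDER BY date_created ASC
LIMIT 1;

Each branch can use its own partition's index on date_created, so the outer sort only has to compare one candidate row per partition.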

Adding a where clause and order by slows down the view

I have a view that uses LATERAL against a function. The query runs fine and fast, but as soon as I add the WHERE clause and ORDER BY, it crawls.
CREATE OR REPLACE VIEW public.vw_top_info_v1_0
AS
SELECT pse.symbol,
pse.order_book,
pse.company_name,
pse.logo_url,
pse.display_logo,
pse.base_url,
stats.value::numeric(20,4) AS stock_value,
stats.volume::numeric(20,0) AS volume,
stats.last_trade_price,
stats.stock_date AS last_trade_date
FROM ( SELECT pse_1.symbol,
pse_1.company_name,
pse_1.order_book,
pse_1.display_logo,
pse_1.base_url,
pse_1.logo_url
FROM vw_pse_traded_companies pse_1
WHERE pse_1.group_name::text = 'N'::text) pse,
LATERAL iq_get_stats_security_for_top_data_v1_0(pse.order_book, (( SELECT date(d.added_date) AS date
FROM prod_itchbbo_p_small_message d
ORDER BY d.added_date DESC
LIMIT 1))::timestamp without time zone) stats(value, volume, stock_date, last_trade_price)
WHERE stats.value IS NOT NULL
ORDER BY stats.value DESC;
Here's the explain output.
Subquery Scan on vw_top_info_v1_0 (cost=161022.59..165450.34 rows=354220 width=192)
-> Sort (cost=161022.59..161908.14 rows=354220 width=200)
Sort Key: stats.value DESC
InitPlan 1 (returns $0)
-> Limit (cost=49734.18..49734.18 rows=1 width=12)
-> Sort (cost=49734.18..51793.06 rows=823553 width=12)
Sort Key: d.added_date DESC
-> Seq Scan on prod_itchbbo_p_small_message d (cost=0.00..45616.41 rows=823553 width=12)
-> Nested Loop (cost=188.59..10837.44 rows=354220 width=200)
-> Sort (cost=188.34..189.23 rows=356 width=2866)
Sort Key: info.order_book, listed.symbol
-> Hash Join (cost=18.19..173.25 rows=356 width=2866)
Hash Cond: ((info.symbol)::text = (listed.symbol)::text)
-> Seq Scan on prod_stock_information info (cost=0.00..151.85 rows=1220 width=12)
Filter: ((group_name)::text = 'N'::text)
-> Hash (cost=13.64..13.64 rows=364 width=128)
-> Seq Scan on prod_pse_listed_companies listed (cost=0.00..13.64 rows=364 width=128)
-> Function Scan on iq_get_stats_security_for_top_data_v1_0 stats (cost=0.25..10.25 rows=995 width=32)
Filter: (value IS NOT NULL)
Is there a way to improve the query?
I don't fully understand everything this plan is doing, but a significant part of the cost comes from the seq scan on prod_itchbbo_p_small_message used to sort by date and find the max.
You indicated the cost changes when you add the sort, so if you don't have one, I'd add a b-tree index on prod_itchbbo_p_small_message.added_date.
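Something along these lines should do it (a sketch; the index name is arbitrary):

CREATE INDEX prod_itchbbo_p_small_message_added_date_idx
    ON prod_itchbbo_p_small_message (added_date);

With that index, the ORDER BY d.added_date DESC LIMIT 1 subquery can read a single entry from the end of the index instead of seq-scanning and sorting ~800k rows.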

PostgreSQL - ORDER BY with LIMIT not using indexes as expected

We have two tables - event_deltas and deltas_to_retrieve - which both have BTREE indexes on the same two columns:
CREATE TABLE event_deltas
(
event_id UUID REFERENCES events(id) NOT NULL,
version INT NOT NULL,
json_patch JSONB NOT NULL,
PRIMARY KEY (event_id, version)
);
CREATE TABLE deltas_to_retrieve(event_id UUID NOT NULL, version INT NOT NULL);
CREATE UNIQUE INDEX event_id_version ON deltas_to_retrieve (event_id, version);
In terms of table size, deltas_to_retrieve is a tiny lookup table of ~500 rows. The event_deltas table contains ~7,000,000 rows. Due to the size of the latter table, we want to limit how much we retrieve at once. Therefore, the tables are queried as follows:
SELECT ed.event_id, ed.version
FROM deltas_to_retrieve zz, event_deltas ed
WHERE zz.event_id = ed.event_id
AND ed.version > zz.version
ORDER BY ed.event_id, ed.version
LIMIT 5000;
Without the LIMIT, for the example I'm looking at the query returns ~30,000 rows.
What's odd about this query is the impact of the ORDER BY. Due to the existing indexes, the data comes back in the order we want with or without it. I would rather keep the explicit ORDER BY there so we're future-proofed against future changes, as well as for readability etc. However, as things stand it has a significant negative impact on performance.
According to the docs:
An important special case is ORDER BY in combination with LIMIT n: an explicit sort will have to process all the data to identify the first n rows, but if there is an index matching the ORDER BY, the first n rows can be retrieved directly, without scanning the remainder at all.
This makes me think that, given the indexes we already have in place, the ORDER BY should not slow down the query at all. However, in practice I'm seeing execution times of ~10s with the ORDER BY and <1s without. I've included the plans outputted by EXPLAIN below:
Without ORDER BY
Just EXPLAIN:
QUERY PLAN
Limit (cost=0.56..20033.38 rows=5000 width=20)
-> Nested Loop (cost=0.56..331980.39 rows=82859 width=20)
-> Seq Scan on deltas_to_retrieve zz (cost=0.00..9.37 rows=537 width=20)
-> Index Only Scan using event_deltas_pkey on event_deltas ed (cost=0.56..616.66 rows=154 width=20)
Index Cond: ((event_id = zz.event_id) AND (version > zz.version))
More detailed EXPLAIN (ANALYZE, BUFFERS):
QUERY PLAN
Limit (cost=0.56..20039.35 rows=5000 width=20) (actual time=3.675..2083.063 rows=5000 loops=1)
" Buffers: shared hit=1450 read=4783, local hit=2"
-> Nested Loop (cost=0.56..1055082.88 rows=263260 width=20) (actual time=3.673..2080.745 rows=5000 loops=1)
" Buffers: shared hit=1450 read=4783, local hit=2"
-> Seq Scan on deltas_to_retrieve zz (cost=0.00..27.00 rows=1700 width=20) (actual time=0.022..0.307 rows=241 loops=1)
Buffers: local hit=2
-> Index Only Scan using event_deltas_pkey on event_deltas ed (cost=0.56..619.07 rows=155 width=20) (actual time=1.317..8.612 rows=21 loops=241)
Index Cond: ((event_id = zz.event_id) AND (version > zz.version))
Heap Fetches: 5000
Buffers: shared hit=1450 read=4783
Planning Time: 1.150 ms
Execution Time: 2084.647 ms
With ORDER BY
Just EXPLAIN:
QUERY PLAN
Limit (cost=0.84..929199.06 rows=5000 width=20)
-> Merge Join (cost=0.84..48924145.53 rows=263260 width=20)
Merge Cond: (ed.event_id = zz.event_id)
Join Filter: (ed.version > zz.version)
-> Index Only Scan using event_deltas_pkey on event_deltas ed (cost=0.56..48873353.76 rows=12318733 width=20)
-> Materialize (cost=0.28..6178.03 rows=1700 width=20)
-> Index Only Scan using event_id_version on deltas_to_retrieve zz (cost=0.28..6173.78 rows=1700 width=20)
More detailed EXPLAIN (ANALYZE, BUFFERS):
QUERY PLAN
Limit (cost=0.84..929199.06 rows=5000 width=20) (actual time=4457.770..506706.443 rows=5000 loops=1)
" Buffers: shared hit=78806 read=1071004 dirtied=148, local hit=63"
-> Merge Join (cost=0.84..48924145.53 rows=263260 width=20) (actual time=4457.768..506704.815 rows=5000 loops=1)
Merge Cond: (ed.event_id = zz.event_id)
Join Filter: (ed.version > zz.version)
" Buffers: shared hit=78806 read=1071004 dirtied=148, local hit=63"
-> Index Only Scan using event_deltas_pkey on event_deltas ed (cost=0.56..48873353.76 rows=12318733 width=20) (actual time=4.566..505443.407 rows=1813438 loops=1)
Heap Fetches: 1814767
Buffers: shared hit=78806 read=1071004 dirtied=148
-> Materialize (cost=0.28..6178.03 rows=1700 width=20) (actual time=0.063..2.524 rows=5000 loops=1)
Buffers: local hit=63
-> Index Only Scan using event_id_version on deltas_to_retrieve zz (cost=0.28..6173.78 rows=1700 width=20) (actual time=0.056..0.663 rows=78 loops=1)
Heap Fetches: 78
Buffers: local hit=63
Planning Time: 1.088 ms
Execution Time: 506709.819 ms
I'm not very experienced at reading these plans, but the planner is obviously deciding that it needs to retrieve everything, sort it, and then return the top N, rather than just grabbing the first N using the index. It's doing a Seq Scan on the smaller deltas_to_retrieve table rather than an Index Only Scan - is that the problem? That table is very small (~500 rows), so I wonder if it's just not bothering to use the index because of that?
Postgres version: 11.12
Upgrading to Postgres 13 fixed this for us, with the introduction of incremental sort. From some docs on the feature:
Incremental sorting: Sorting is a performance-intensive task, so every improvement in this area can make a difference. Now PostgreSQL 13 introduces incremental sorting, which leverages early-stage sorts of a query and sorts only the incremental unsorted fields, increasing the chances the sorted block will fit in memory and by that, improving performance.
The new query plan from EXPLAIN is as follows, with the query now completing in <500ms consistently:
QUERY PLAN
Limit (cost=71.06..820.32 rows=5000 width=20)
-> Incremental Sort (cost=71.06..15461.82 rows=102706 width=20)
" Sort Key: ed.event_id, ed.version"
Presorted Key: ed.event_id
-> Nested Loop (cost=0.84..6659.05 rows=102706 width=20)
-> Index Only Scan using event_id_version on deltas_to_retrieve zz (cost=0.28..1116.39 rows=541 width=20)
-> Index Only Scan using event_deltas_pkey on event_deltas ed (cost=0.56..8.35 rows=190 width=20)
Index Cond: ((event_id = zz.event_id) AND (version > zz.version))
Note:
- Start by running VACUUM ANALYZE on both tables.
- Since deltas_to_retrieve only needs to contain the lowest versions, it could be unique on event_id (see the sketch after the simplified query below).
- You can simplify the query to:
SELECT event_id, version
FROM event_deltas ed
WHERE EXISTS (
SELECT * FROM deltas_to_retrieve zz
WHERE zz.event_id = ed.event_id
AND zz.version < ed.version
)
ORDER BY event_id, version
LIMIT 5000;
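A sketch of the uniqueness point above, assuming the higher versions per event_id are redundant and can simply be dropped before adding the constraint:

-- keep only the lowest version for each event_id (assumption: higher versions are not needed)
DELETE FROM deltas_to_retrieve zz
WHERE version > (SELECT min(version)
                 FROM deltas_to_retrieve
                 WHERE event_id = zz.event_id);

-- then enforce one row per event_id
CREATE UNIQUE INDEX ON deltas_to_retrieve (event_id);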

Forcing Postgresql to use Merge Append

Say I have the following tables and indices:
create table inbound_messages(id int, user_id int, received_at timestamp);
create table outbound_messages(id int, user_id int, sent_at timestamp);
create index on inbound_messages(user_id, received_at);
create index on outbound_messages(user_id, sent_at);
Now I want to pull out the last 20 messages for a user, either inbound or outbound, in a specific time range. I can do the following, and from the explain it looks like PG walks backward through both indexes in 'parallel', so it minimises the number of rows it needs to scan.
explain select * from (select id, user_id, received_at as time from inbound_messages union all select id, user_id, sent_at as time from outbound_messages) x where user_id = 5 and time between '2018-01-01' and '2020-01-01' order by user_id,time desc limit 20;
Limit (cost=0.32..16.37 rows=2 width=16)
-> Merge Append (cost=0.32..16.37 rows=2 width=16)
Sort Key: inbound_messages.received_at DESC
-> Index Scan Backward using inbound_messages_user_id_received_at_idx on inbound_messages (cost=0.15..8.17 rows=1 width=16)
Index Cond: ((user_id = 5) AND (received_at >= '2018-01-01 00:00:00'::timestamp without time zone) AND (received_at <= '2020-01-01 00:00:00'::timestamp without time zone))
-> Index Scan Backward using outbound_messages_user_id_sent_at_idx on outbound_messages (cost=0.15..8.17 rows=1 width=16)
Index Cond: ((user_id = 5) AND (sent_at >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sent_at <= '2020-01-01 00:00:00'::timestamp without time zone))
It could have done something much worse, like finding all the matching rows in memory and then sorting them; if there were millions of matching rows, that could take a long time. But because it walks the indexes in the same order we want the results, this is a fast operation. It looks like the 'Merge Append' operation is done lazily and it doesn't actually materialize all the matching rows.
So Postgres supports this operation across two distinct tables, but is it possible to force Postgres to use this optimisation within a single table?
Let's say I wanted the last 20 inbound messages for user_id = 6 or user_id = 7.
explain select * from inbound_messages where user_id in (6,7) order by received_at desc limit 20;
Then we get a query plan that does a bitmap heap scan, and then does an in-memory sort. So if there are millions of messages that match then it will look at millions of rows even though theoretically it could use the same Merge trick to only look at a few rows.
Limit (cost=15.04..15.09 rows=18 width=16)
-> Sort (cost=15.04..15.09 rows=18 width=16)
Sort Key: received_at DESC
-> Bitmap Heap Scan on inbound_messages (cost=4.44..14.67 rows=18 width=16)
Recheck Cond: (user_id = ANY ('{6,7}'::integer[]))
-> Bitmap Index Scan on inbound_messages_user_id_received_at_idx (cost=0.00..4.44 rows=18 width=0)
Index Cond: (user_id = ANY ('{6,7}'::integer[]))
We could think of just adding an index on (received_at) alone, and then it would do the same backward scan. However, if we have a large number of users, we would miss out on a potentially large speedup, because we would be scanning lots of index entries that do not match the query.
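For reference, that discarded alternative would just be (a sketch of the option being ruled out, not a recommendation):

CREATE INDEX ON inbound_messages (received_at);

A backward scan of this index returns rows in received_at order, but it has to step over the entries of every other user, which is exactly the wasted work described above.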
The following approach should work as a way of forcing Postgres to use the "merge append" plan when you are interested in the most recent messages for two users from the same table.
[Note: I tested this on YugabyteDB (which is based on Postgres), so I expect the same to apply to Postgres as well.]
explain select * from (
(select * from inbound_messages where user_id = 6 order by received_at DESC)
union all
(select * from inbound_messages where user_id = 7 order by received_at DESC)
) AS result order by received_at DESC limit 20;
which produces:
Limit (cost=0.01..3.88 rows=20 width=16)
-> Merge Append (cost=0.01..38.71 rows=200 width=16)
Sort Key: inbound_messages.received_at DESC
-> Index Scan Backward using inbound_messages_user_id_received_at_idx on inbound_messages (cost=0.00..17.35 rows=100 width=16)
Index Cond: (user_id = 6)
-> Index Scan Backward using inbound_messages_user_id_received_at_idx on inbound_messages inbound_messages_1 (cost=0.00..17.35 rows=100 width=16)
Index Cond: (user_id = 7)

PostgreSQL Indexing Run Time

Suppose I have this table with an index.
CREATE TABLE sample
(
    table_date timestamp without time zone,
    description character varying  -- type not given in the question; the plans below cast it to text
);
CREATE INDEX dailyinv_index
ON sample
USING btree
(date(table_date));
And it has 33 million rows.
Why is it that running this query
select count(*) from sample where date(table_date) = '8/30/2017' and description = 'desc1'
yields a result in about 12 ms,
Using EXPLAIN to look at the query plan, this is what it does:
Aggregate (cost=288678.55..288678.56 rows=1 width=0)
->Bitmap Heap Scan on sample (cost=3119.63..288647.57 rows=12393 width=0)
Recheck Cond: (date(table_date) = '2017-08-30'::date)
Filter: ((description)::text = 'desc1'::text)
-> Bitmap Index Scan on dailyinv_index (cost=0.00..3116.54 rows=168529 width=0)
Index Cond: (date(table_date) = '2017-08-30'::date)
but this one
select date(table_date) from sample where date(table_date)<='8/30/2017' order by table_date desc limit 1
yields a result after 11,460 ms?
Query Plan
Limit (cost=798243.52..798243.52 rows=1 width=8)
-> Sort (cost=798243.52..826331.69 rows=11235271 width=8)
Sort Key: table_date
-> Bitmap Heap Scan on sample (cost=210305.92..742067.16 rows=11235271 width=8)
Recheck Cond: (date(table_date) <= '2017-08-30'::date)
-> Bitmap Index Scan on dailyinv_index (cost=0.00..207497.10 rows=11235271 width=0)
Index Cond: (date(table_date) <= '2017-08-30'::date)
PostgreSQL Version: 9.4
Maybe I'm doing the indexing wrong, I don't know. I'm really not familiar with indexing. Any help would be great. Thanks a lot!
Your problem is caused by sorting on table_date rather than date(table_date): the index is on the expression date(table_date), so it cannot satisfy an ORDER BY on the raw column. This can be corrected by modifying the query to:
SELECT DATE(table_date)
FROM sample
WHERE DATE(table_date) <= '8/30/2017'
ORDER BY DATE(table_date) DESC
LIMIT 1

Postgis ST_Intersects query doesn't use existing spatial index

I have a table of suburbs and each suburb has a geom value, representing its multipolygon on the map. There is another houses table where each house has a geom value of its point on the map.
Both the geom columns are indexed using gist, and suburbs table has the name column indexed as well. Suburbs table has 8k+ records while houses table has 300k+ records.
Now my task is to find all houses within a suburb named 'FOO'.
QUERY #1:
SELECT * FROM houses WHERE ST_INTERSECTS((SELECT geom FROM "suburbs" WHERE "suburb_name" = 'FOO'), geom);
Query Plan Result:
Seq Scan on houses (cost=8.29..86327.26 rows=102365 width=136)
Filter: st_intersects($0, geom)
InitPlan 1 (returns $0)
-> Index Scan using suburbs_suburb_name on suburbs (cost=0.28..8.29 rows=1 width=32)
Index Cond: ((suburb_name)::text = 'FOO'::text)
running the query took ~3.5s, returning 486 records.
QUERY #2: (prefix ST_INTERSECTS function with _ to explicitly ask it not to use index)
SELECT * FROM houses WHERE _ST_INTERSECTS((SELECT geom FROM "suburbs" WHERE "suburb_name" = 'FOO'), geom);
Query Plan Result: (exactly the same as Query #1)
Seq Scan on houses (cost=8.29..86327.26 rows=102365 width=136)
Filter: st_intersects($0, geom)
InitPlan 1 (returns $0)
-> Index Scan using suburbs_suburb_name on suburbs (cost=0.28..8.29 rows=1 width=32)
Index Cond: ((suburb_name)::text = 'FOO'::text)
running the query took ~1.7s, returning 486 records.
QUERY #3: (using the && operator to add a bounding-box overlap check before the ST_Intersects function)
SELECT * FROM houses WHERE (geom && (SELECT geom FROM "suburbs" WHERE "suburb_name" = 'FOO')) AND ST_INTERSECTS((SELECT geom FROM "suburbs" WHERE "suburb_name" = 'FOO'), geom);
Query Plan Result:
Bitmap Heap Scan on houses (cost=21.11..146.81 rows=10 width=136)
Recheck Cond: (geom && $0)
Filter: st_intersects($1, geom)
InitPlan 1 (returns $0)
-> Index Scan using suburbs_suburb_name on suburbs (cost=0.28..8.29 rows=1 width=32)
Index Cond: ((suburb_name)::text = 'FOO'::text)
InitPlan 2 (returns $1)
-> Index Scan using suburbs_suburb_name on suburbs suburbs_1 (cost=0.28..8.29 rows=1 width=32)
Index Cond: ((suburb_name)::text = 'FOO'::text)
-> Bitmap Index Scan on houses_geom_gist (cost=0.00..4.51 rows=31 width=0)
Index Cond: (geom && $0)
running the query took 0.15s, returning 486 records.
Apparently only query #3 benefits from the spatial index, which improves the performance significantly. However, the syntax is ugly and repeats itself to some extent. My questions are:
Why is PostGIS not smart enough to use the spatial index in query #1?
Why does query #2 perform (much) better than query #1, considering neither of them uses the index?
Any suggestions to make query #3 prettier? Or is there a better way to construct a query to do the same thing?
Try flattening it into a single query, without the unnecessary scalar sub-queries; with the intersection written as a join condition between the two tables, the planner can use the spatial index on houses.geom:
SELECT houses.*
FROM houses, suburbs
WHERE suburbs.suburb_name = 'FOO' AND ST_Intersects(houses.geom, suburbs.geom);