Postgres ORDER BY using a value from a joined table

I have a table of conversations, and one of messages. I want to return a list of conversations ordered by the transcription confidence stored in their messages:
select *
from "conversations"
left outer join "messages" on "conversations"."sender_id" = "messages"."sender_id" and event = 'user'
order by (cast("messages"."parse_data" -> 'transcription' ->> 'confidence' as double precision)) nulls last
fetch next 50 rows only
And here is the index that was created:
create index "messages_parse_data_transcription_index_bt"
on messages using btree (sender_id, event, (cast("messages"."parse_data" -> 'transcription' ->> 'confidence' as double precision)) asc NULLS LAST );
analyse messages;
Unfortunately, while the query works, on a large database (3 GB, about 7 million rows in messages) it takes a very long time: creating the index already takes about 3 minutes, and the query itself runs for 2 minutes 48 seconds.
Limit (cost=523667.88..523673.63 rows=50 width=3052) (actual time=132421.177..142355.335 rows=50 loops=1)
-> Gather Merge (cost=523667.88..560621.87 rows=321339 width=3052) (actual time=131738.458..141672.591 rows=50 loops=1)
Workers Planned: 1
Workers Launched: 1
-> Sort (cost=522667.87..523471.22 rows=321339 width=3052) (actual time=131626.455..131626.606 rows=48 loops=2)
Sort Key: ((((messages.parse_data -> 'transcription'::text) ->> 'confidence'::text))::double precision)
Sort Method: top-N heapsort Memory: 125kB
Worker 0: Sort Method: top-N heapsort Memory: 125kB
-> Parallel Hash Left Join (cost=369055.84..511993.22 rows=321339 width=3052) (actual time=101533.553..131406.086 rows=279394 loops=2)
Hash Cond: ((conversations.sender_id)::text = (messages.sender_id)::text)
-> Parallel Seq Scan on conversations (cost=0.00..2634.32 rows=69832 width=33) (actual time=0.018..22.318 rows=59357 loops=2)
-> Parallel Hash (cost=281743.66..281743.66 rows=227615 width=3011) (actual time=90093.869..90093.870 rows=276748 loops=2)
Buckets: 2048 Batches: 512 Memory Usage: 2352kB
-> Parallel Seq Scan on messages (cost=0.00..281743.66 rows=227615 width=3011) (actual time=404.291..52209.756 rows=276748 loops=2)
Filter: ((event)::text = 'user'::text)
Rows Removed by Filter: 3163558
Planning Time: 20.895 ms
JIT:
Functions: 27
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
" Timing: Generation 18.348 ms, Inlining 174.073 ms, Optimization 889.102 ms, Emission 424.619 ms, Total 1506.142 ms"
Execution Time: 142365.711 ms
How can I speed up this query?
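For context, and not from the original post: a btree index on (sender_id, event, expression) can deliver rows already sorted by the expression only when its leading columns are pinned by equality, which the join plus a global ORDER BY cannot provide, so the planner falls back to a sequential scan and an explicit sort. A minimal sketch of a query shape that could use the index (the sender value is a placeholder):
-- Hypothetical single-conversation query; with sender_id and event fixed,
-- the planner can read the btree index in order and stop after 50 rows.
select *
from "messages"
where "sender_id" = 'some-sender'   -- placeholder value
  and event = 'user'
order by (cast("messages"."parse_data" -> 'transcription' ->> 'confidence' as double precision)) nulls last
fetch next 50 rows only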

Related

PostgreSQL inefficient index is selected in sub-query

I have a query that, for each row from campaigns, gets the most recent row from effects:
SELECT
id,
(
SELECT
created
FROM
effects
WHERE
effects.campaignid = campaigns.id
ORDER BY
effects.created DESC
LIMIT 1
) AS last_activity
FROM
campaigns
WHERE
deleted_at IS NULL
AND id in(53, 54);
To optimize the performance for this query I'm using an index effects_campaign_created_idx over (campaignid,created).
Additionally, for another use case, there's the index effects_created_idx on (created).
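For reference, those two indexes correspond roughly to the following DDL (reconstructed from the description above, so an assumption rather than the actual schema):
-- Reconstructed index definitions; the question only describes them in prose.
CREATE INDEX effects_campaign_created_idx ON effects USING btree (campaignid, created);
CREATE INDEX effects_created_idx ON effects USING btree (created);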
For some reason, at some point the subquery stopped using the correct index effects_campaign_created_idx and started using effects_created_idx instead, which is highly inefficient and takes ~5 minutes to run instead of ~40 ms.
Whenever I execute the inner query on its own (with the same campaignid), the correct index is used.
What could be the reason for such behavior on the part of the query planner? Should I structure my query differently so that the right index is chosen?
What are more advanced ways to debug the query planner's behavior?
Here's explain analyze results from executing the offending query:
explain analyze SELECT
id,
(
SELECT
created
FROM
effects
WHERE
effects.campaignid = campaigns.id
ORDER BY
effects.created DESC
LIMIT 1) AS last_activity
FROM
campaigns
WHERE
deleted_at IS NULL
AND id in(53, 54);
-----------------------------------------
Seq Scan on campaigns (cost=0.00..4.56 rows=2 width=12) (actual time=330176.476..677186.438 rows=2 loops=1)
" Filter: ((deleted_at IS NULL) AND (id = ANY ('{53,54}'::integer[])))"
Rows Removed by Filter: 45
SubPlan 1
-> Limit (cost=0.43..0.98 rows=1 width=8) (actual time=338593.165..338593.166 rows=1 loops=2)
-> Index Scan Backward using effects_created_idx on effects (cost=0.43..858859.67 rows=1562954 width=8) (actual time=338593.160..338593.160 rows=1 loops=2)
Filter: (campaignid = campaigns.id)
Rows Removed by Filter: 14026092
Planning Time: 0.245 ms
Execution Time: 677195.239 ms
Following advice here, I tried moving to MAX(created) instead of the subquery with ORDER BY created DESC LIMIT 1. Unfortunately the results are still poor:
EXPLAIN ANALYZE SELECT
campaigns.id,
subquery.created
FROM
campaigns
LEFT JOIN (
SELECT
campaignid,
MAX(created) created
FROM
effects
GROUP BY
campaignid) subquery ON campaigns.id = subquery.campaignid
WHERE
campaigns.deleted_at IS NULL
AND campaigns.id in(53, 54);
Hash Right Join (cost=667460.06..667462.46 rows=2 width=12) (actual time=30516.620..30573.091 rows=2 loops=1)
Hash Cond: (effects.campaignid = campaigns.id)
-> Finalize GroupAggregate (cost=667457.45..667459.73 rows=9 width=16) (actual time=30251.920..30308.379 rows=23 loops=1)
Group Key: effects.campaignid
-> Gather Merge (cost=667457.45..667459.55 rows=18 width=16) (actual time=30251.832..30308.271 rows=49 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Sort (cost=666457.43..666457.45 rows=9 width=16) (actual time=30156.539..30156.544 rows=16 loops=3)
Sort Key: effects.campaignid
Sort Method: quicksort Memory: 25kB
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
-> Partial HashAggregate (cost=666457.19..666457.28 rows=9 width=16) (actual time=30155.951..30155.957 rows=16 loops=3)
Group Key: effects.campaignid
Batches: 1 Memory Usage: 24kB
Worker 0: Batches: 1 Memory Usage: 24kB
Worker 1: Batches: 1 Memory Usage: 24kB
-> Parallel Seq Scan on effects (cost=0.00..637166.13 rows=5858213 width=16) (actual time=220.784..28693.182 rows=4684157 loops=3)
-> Hash (cost=2.59..2.59 rows=2 width=4) (actual time=264.653..264.656 rows=2 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on campaigns (cost=0.00..2.59 rows=2 width=4) (actual time=264.612..264.640 rows=2 loops=1)
" Filter: ((deleted_at IS NULL) AND (id = ANY ('{53,54}'::integer[])))"
Rows Removed by Filter: 45
Planning Time: 0.354 ms
JIT:
Functions: 34
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 9.958 ms, Inlining 409.293 ms, Optimization 308.279 ms, Emission 206.936 ms, Total 934.465 ms
Execution Time: 30578.920 ms
Notes
It is not the case that the majority of effects rows have campaignid in (53, 54).
I've already reindexed and analyzed the tables.
[edit: the index was created with USING btree]
Try refactoring your query to eliminate the correlated subquery. Subqueries like that with LIMIT clauses in them can baffle query planners. What you want is the latest created date for each campaignid. So your subquery will be this:
SELECT campaignid, MAX(created) created
FROM effects
GROUP BY campaignid
You can then build it into your main query like this:
SELECT campaigns.id, subquery.created
FROM campaigns
LEFT JOIN (
SELECT campaignid, MAX(created) created
FROM effects
GROUP BY campaignid
) subquery ON campaigns.id = subquery.campaignid
WHERE campaigns.deleted_at IS NULL
AND campaigns.id in(53, 54);
This allows the query planner to run that subquery efficiently and just once. It will use your (campaignid,created) index.
Asking "why" about query planner output is a tricky business. Query planners are very complex beasts. And correlated subqueries present planning complexities. It's possible a growing table changed some sort of internal index-selectiveness metric.
Pro tip: avoid correlated subqueries whenever possible, especially in long-lived code in growing systems. It isn't always possible, though.
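As an aside that is not part of the original answer: another common way to express "the latest created per campaignid" without a correlated LIMIT 1 subquery is DISTINCT ON, which can likewise walk the (campaignid, created) index:
-- Illustrative alternative (a sketch, not the accepted answer): DISTINCT ON
-- keeps the first row per campaignid in the given sort order, i.e. the most
-- recent created value for each campaign.
SELECT DISTINCT ON (campaignid) campaignid, created
FROM effects
ORDER BY campaignid, created DESC;
The outer query can then LEFT JOIN to this derived table exactly as in the MAX() version above.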

PostgreSQL not using index for very big table

A table raw_data has an index ix_raw_data_timestamp:
CREATE TABLE IF NOT EXISTS public.raw_data
(
ts timestamp without time zone NOT NULL,
log_msg character varying COLLATE pg_catalog."default",
log_image bytea
)
CREATE INDEX IF NOT EXISTS ix_raw_data_timestamp
ON public.raw_data USING btree
(ts ASC NULLS LAST)
TABLESPACE pg_default;
For some reason the index is not used for the following query (and therefore is very slow):
SELECT ts,
log_msg
FROM raw_data
ORDER BY ts ASC
LIMIT 5e6;
The result of EXPLAIN (analyze, buffers, format text) for the query above:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=9752787.07..10336161.14 rows=5000000 width=50) (actual time=789124.600..859046.614 rows=5000000 loops=1)
Buffers: shared hit=12234 read=888521, temp read=2039471 written=2664654
-> Gather Merge (cost=9752787.07..18421031.89 rows=74294054 width=50) (actual time=789085.442..822547.099 rows=5000000 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=12234 read=888521, temp read=2039471 written=2664654
-> Sort (cost=9751787.05..9844654.62 rows=37147027 width=50) (actual time=788203.880..795491.054 rows=1667070 loops=3)
Sort Key: "ts"
Sort Method: external merge Disk: 1758904kB
Worker 0: Sort Method: external merge Disk: 1762872kB
Worker 1: Sort Method: external merge Disk: 1756216kB
Buffers: shared hit=12234 read=888521, temp read=2039471 written=2664654
-> Parallel Seq Scan on raw_data (cost=0.00..1272131.27 rows=37147027 width=50) (actual time=25.436..119352.861 rows=29717641 loops=3)
Buffers: shared hit=12141 read=888520
Planning Time: 5.240 ms
JIT:
Functions: 7
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 0.578 ms, Inlining 76.678 ms, Optimization 24.578 ms, Emission 13.060 ms, Total 114.894 ms
Execution Time: 877489.531 ms
(20 rows)
But it is used for this one:
SELECT ts,
log_msg
FROM raw_data
ORDER BY ts ASC
LIMIT 4e6;
EXPLAIN (analyze, buffers, format text) of the query above is:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.57..9408157.15 rows=4000000 width=50) (actual time=15.081..44747.127 rows=4000000 loops=1)
Buffers: shared hit=24775 read=61155
-> Index Scan using ix_raw_data_timestamp on raw_data (cost=0.57..209691026.73 rows=89152864 width=50) (actual time=2.218..16077.755 rows=4000000 loops=1)
Buffers: shared hit=24775 read=61155
Planning Time: 1.306 ms
JIT:
Functions: 3
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 0.406 ms, Inlining 1.121 ms, Optimization 7.917 ms, Emission 3.721 ms, Total 13.165 ms
Execution Time: 59028.951 ms
(10 rows)
Needless to say, the aim is to get all queries to use the index regardless of the LIMIT, but I cannot seem to find a solution.
PS:
There are about 89,152,922 rows in the table.
Edit:
After increasing the memory to 2 GB (SET work_mem = '2GB';), the query is a little faster (it no longer spills to disk) but still nowhere near as fast:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=5592250.54..6175624.61 rows=5000000 width=50) (actual time=215887.445..282393.743 rows=5000000 loops=1)
Buffers: shared hit=12224 read=888531
-> Gather Merge (cost=5592250.54..14260731.75 rows=74296080 width=50) (actual time=215874.072..247030.062 rows=5000000 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=12224 read=888531
-> Sort (cost=5591250.52..5684120.62 rows=37148040 width=50) (actual time=215854.323..221828.921 rows=1667147 loops=3)
Sort Key: "ts"
Sort Method: top-N heapsort Memory: 924472kB
Worker 0: Sort Method: top-N heapsort Memory: 924379kB
Worker 1: Sort Method: top-N heapsort Memory: 924281kB
Buffers: shared hit=12224 read=888531
-> Parallel Seq Scan on raw_data (cost=0.00..1272141.40 rows=37148040 width=50) (actual time=25.899..107034.903 rows=29717641 loops=3)
Buffers: shared hit=12130 read=888531
Planning Time: 0.058 ms
JIT:
Functions: 7
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 0.642 ms, Inlining 53.860 ms, Optimization 23.848 ms, Emission 11.768 ms, Total 90.119 ms
Execution Time: 300281.654 ms
(20 rows)
The problem here is that you're doing a Parallel Seq Scan followed by a Gather Merge. The Gather Merge takes in 74,294,054 rows to output 5,000,000, which makes sense: you say there are 89,152,922 rows in the table, and you have no condition with which to limit them.
Why would it choose this plan? Probably because the sort exceeds work_mem and is forced to materialize on disk. So increase your work_mem: if PostgreSQL thinks it can fit the whole sort in memory and doesn't have to go to disk, it will be massively faster.
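For reference, a sketch of the usual ways to raise it (the values are illustrative, not tuned recommendations):
-- Per session, only for the current connection:
SET work_mem = '2GB';
-- Or persistently for the whole server (requires superuser), followed by a
-- configuration reload:
ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();
Keep in mind that work_mem is allocated per sort or hash node and per parallel worker, so a server-wide value should be far more conservative than a one-off session setting.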

query taking nested loop instead of hash join

The query below uses a nested loop and runs for 21 minutes; after disabling nested loops it finishes in under a minute. Table statistics are up to date and the tables have been vacuumed. Is there any way to figure out why Postgres picks a nested loop instead of the more efficient hash join?
Also, to avoid the nested loop, is it better to set enable_nestloop to off or to increase random_page_cost? I think setting enable_nestloop to off would stop plans from using a nested loop even in places where it would be efficient. What would be a better alternative? Please advise.
SELECT DISTINCT ON (three.quebec_delta)
last_value(three.reviewed_by_nm) OVER wnd AS reviewed_by_nm,
last_value(three.reviewer_specialty_nm) OVER wnd AS reviewer_specialty_nm,
last_value(three.kilo) OVER wnd AS kilo,
last_value(three.review_reason_dscr) OVER wnd AS review_reason_dscr,
last_value(three.review_notes) OVER wnd AS review_notes,
last_value(three.seven_uniform_charlie) OVER wnd AS seven_uniform_charlie,
last_value(three.di_audit_source_system_cd) OVER wnd AS di_audit_source_system_cd,
last_value(three.di_audit_update_dtm) OVER wnd AS di_audit_update_dtm,
three.quebec_delta
FROM
ods_authorization.quebec_foxtrot seven_uniform_foxtrot
JOIN ods_authorization.golf echo ON seven_uniform_foxtrot.four = echo.oscar
JOIN ods_authorization.papa three ON echo.five = three.quebec_delta
AND three.xray = '0'::bpchar
WHERE
seven_uniform_foxtrot.two_india >= (zulu () - '2 years'::interval)
AND lima (three.kilo, 'ADVISOR'::character varying)::text = 'ADVISOR'::text
WINDOW wnd AS (PARTITION BY three.quebec_delta ORDER BY three.seven_uniform_charlie DESC ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
Plan that runs for 21 minutes and uses a nested loop:
Unique (cost=550047.63..550257.15 rows=5238 width=281) (actual time=1295000.966..1296128.356 rows=319863 loops=1)
-> WindowAgg (cost=550047.63..550244.06 rows=5238 width=281) (actual time=1295000.964..1296013.046 rows=461635 loops=1)
-> Sort (cost=550047.63..550060.73 rows=5238 width=326) (actual time=1295000.929..1295089.796 rows=461635 loops=1)
Sort Key: three.quebec_delta, three.seven_uniform_charlie DESC
Sort Method: quicksort Memory: 197021kB
-> Nested Loop (cost=1001.12..549724.06 rows=5238 width=326) (actual time=8.274..1292470.826 rows=461635 loops=1)
-> Gather (cost=1000.56..527782.84 rows=24896 width=391) (actual time=4.287..12701.687 rows=3484699 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=0.56..524293.24 rows=10373 width=391) (actual time=3.492..400998.923 rows=1161566 loops=3)
-> Parallel Seq Scan on papa three (cost=0.00..436912.84 rows=10373 width=326) (actual time=1.554..2455.626 rows=1161566 loops=3)
Filter: ((xray = 'november'::bpchar) AND ((lima_sierra(kilo, 'two_zulu'::character varying))::text = 'two_zulu'::text))
Rows Removed by Filter: 501723
-> Index Scan using five_tango on golf echo (cost=0.56..8.42 rows=1 width=130) (actual time=0.342..0.342 rows=1 loops=3484699)
Index Cond: (five_hotel = three.quebec_delta)
-> Index Scan using lima_alpha on quebec_foxtrot seven_uniform_foxtrot (cost=0.56..0.88 rows=1 width=65) (actual time=0.366..0.366 rows=0 loops=3484699)
Index Cond: (four = echo.oscar)
Filter: (two_india >= (zulu() - 'two_two'::interval))
Rows Removed by Filter: 1
Planning time: 0.777 ms
Execution time: 1296183.259 ms
Plan after setting enable_nestloop to off and work_mem to 8GB. I get the same plan when increasing random_page_cost to 1000.
Unique (cost=5933437.24..5933646.68 rows=5236 width=281) (actual time=19898.050..20993.124 rows=319980 loops=1)
-> WindowAgg (cost=5933437.24..5933633.59 rows=5236 width=281) (actual time=19898.049..20879.655 rows=461769 loops=1)
-> Sort (cost=5933437.24..5933450.33 rows=5236 width=326) (actual time=19898.022..19978.839 rows=461769 loops=1)
Sort Key: three.quebec_delta, three.seven_uniform_charlie DESC
Sort Method: quicksort Memory: 197056kB
-> Hash Join (cost=1947451.87..5933113.80 rows=5236 width=326) (actual time=11616.323..17931.146 rows=461769 loops=1)
Hash Cond: (echo.oscar = seven_uniform_foxtrot.four)
-> Gather (cost=438059.74..4423656.32 rows=24897 width=391) (actual time=1909.685..7291.289 rows=3484833 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Hash Join (cost=437059.74..4420166.62 rows=10374 width=391) (actual time=1904.546..7385.948 rows=1161611 loops=3)
Hash Cond: (echo.five = three.quebec_delta)
-> Parallel Seq Scan on golf echo (cost=0.00..3921922.09 rows=8152209 width=130) (actual time=0.003..1756.576 rows=6531668 loops=3)
-> Parallel Hash (cost=436930.07..436930.07 rows=10374 width=326) (actual time=1904.354..1904.354 rows=1161611 loops=3)
Buckets: 4194304 (originally 32768) Batches: 1 (originally 1) Memory Usage: 1135200kB
-> Parallel Seq Scan on papa three (cost=0.00..436930.07 rows=10374 width=326) (actual time=0.009..963.728 rows=1161611 loops=3)
Filter: ((xray = 'november'::bpchar) AND ((lima(kilo, 'two_zulu'::character varying))::text = 'two_zulu'::text))
Rows Removed by Filter: 502246
-> Hash (cost=1476106.74..1476106.74 rows=2662831 width=65) (actual time=9692.517..9692.517 rows=2685656 loops=1)
Buckets: 4194304 Batches: 1 Memory Usage: 287171kB
-> Seq Scan on quebec_foxtrot seven_uniform_foxtrot (cost=0.00..1476106.74 rows=2662831 width=65) (actual time=0.026..8791.556 rows=2685656 loops=1)
Filter: (two_india >= (zulu() - 'two_two'::interval))
Rows Removed by Filter: 9984069
Planning time: 0.742 ms
Execution time: 21218.770 ms
Try an index on papa(lima_sierra(kilo, 'two_zulu'::character varying)) and ANALYZE the table. With that index in place, PostgreSQL collects statistics on the expression, which should improve the estimate, so that you don't get a nested loop join.
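A sketch of that suggestion, spelled out with the obfuscated names from the plan (the real schema will differ):
-- Hypothetical DDL for the suggested expression index; once it exists,
-- ANALYZE collects statistics on the expression's values, which the planner
-- can use for a better row-count estimate.
CREATE INDEX papa_lima_sierra_kilo_idx
ON ods_authorization.papa (lima_sierra(kilo, 'two_zulu'::character varying));
ANALYZE ods_authorization.papa;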
If you just replace COALESCE(r_cd, 'ADVISOR') = 'ADVISOR' with
(r_cd = 'ADVISOR' or r_cd IS NULL)
That might use the current table statistics to improve the estimates enough to change the plan.

Search results using ts_vectors and ST_Distance

We have a page where we show a list of results, and the results must be relevant given 2 factors:
keyword similarity
location
We are using PostgreSQL with PostGIS and tsvectors; however, we don't know how to combine the scores coming from the tsvectors and ST_Distance in order to get the "best" search results. The queries seem to take between 30 seconds and 1 minute.
SELECT
ts_rank_cd(doc_vectors, plainto_tsquery('Uber '), 1 | 4 | 32) AS rank, ts_headline('english', short_job_description, plainto_tsquery('Uber '), 'MaxWords=80,MinWords=50'),
-- a bunch of fields omitted...
org.logo
FROM jobs.job as job
LEFT OUTER JOIN jobs.organization as org
ON job.organization_id = org.id
WHERE job.is_expired = 0 and deleted_at is NULL and doc_vectors @@ plainto_tsquery('Uber ') order by rank desc offset 80 limit 20;
Do you guys have suggestions for us?
EXPLAIN (ANALYZE, BUFFERS) for same Query:
----------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=886908.73..886908.81 rows=30 width=1108) (actual time=20684.508..20684.518 rows=30 loops=1)
Buffers: shared hit=1584 read=825114
-> Sort (cost=886908.68..889709.48 rows=1120318 width=1108) (actual time=20684.502..20684.509 rows=50 loops=1)
Sort Key: job.created_at DESC
Sort Method: top-N heapsort Memory: 75kB
Buffers: shared hit=1584 read=825114
-> Hash Left Join (cost=421.17..849692.52 rows=1120318 width=1108) (actual time=7.012..18887.816 rows=1111019 loops=1)
Hash Cond: (job.organization_id = org.id)
Buffers: shared hit=1581 read=825114
-> Seq Scan on job (cost=0.00..846329.53 rows=1120318 width=1001) (actual time=0.052..17866.594 rows=1111019 loops=1)
Filter: ((deleted_at IS NULL) AND (is_expired = 0) AND (is_hidden = 0))
Rows Removed by Filter: 196298
Buffers: shared hit=1564 read=824989
-> Hash (cost=264.41..264.41 rows=12541 width=107) (actual time=6.898..6.899 rows=12541 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 1037kB
Buffers: shared hit=14 read=125
-> Seq Scan on organization org (cost=0.00..264.41 rows=12541 width=107) (actual time=0.021..3.860 rows=12541 loops=1)
Buffers: shared hit=14 read=125
Planning time: 2.223 ms
Execution time: 20684.682 ms
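The thread does not include an answer; purely as an illustrative sketch (job.location and the reference point are hypothetical and do not appear in the original query), one common pattern is to fold the distance into the ordering expression rather than sorting by text rank alone:
-- Hypothetical combination of full-text rank and spatial distance into one
-- score; dividing the rank by (1 + distance in meters) makes nearer results
-- win among similarly relevant matches.
SELECT job.id,
       ts_rank_cd(doc_vectors, plainto_tsquery('Uber'), 1 | 4 | 32) AS rank,
       ST_Distance(job.location, ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326)::geography) AS dist
FROM jobs.job AS job
WHERE job.is_expired = 0
  AND deleted_at IS NULL
  AND doc_vectors @@ plainto_tsquery('Uber')
ORDER BY ts_rank_cd(doc_vectors, plainto_tsquery('Uber'), 1 | 4 | 32)
         / (1 + ST_Distance(job.location, ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326)::geography)) DESC
OFFSET 80 LIMIT 20;
How to weight the two factors is a product decision; normalizing both to a 0-1 range and taking a weighted sum is another common choice.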

PostgreSQL SELECT too slow

I am looking for an idea to optimize my query.
Currently, I have a table of 4M rows, and I only want to retrieve the last 1000 rows for a given reference:
SELECT *
FROM customers_material_events
WHERE reference = 'XXXXXX'
ORDER BY date DESC
LIMIT 1000;
This is the execution plan:
Limit (cost=12512155.48..12512272.15 rows=1000 width=6807) (actual time=8953.545..9013.658 rows=1000 loops=1)
Buffers: shared hit=16153 read=30342
-> Gather Merge (cost=12512155.48..12840015.90 rows=2810036 width=6807) (actual time=8953.543..9013.613 rows=1000 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=16153 read=30342
-> Sort (cost=12511155.46..12514668.00 rows=1405018 width=6807) (actual time=8865.186..8865.208 rows=632 loops=3)
Sort Key: date DESC
Sort Method: top-N heapsort Memory: 330kB
Worker 0: Sort Method: top-N heapsort Memory: 328kB
Worker 1: Sort Method: top-N heapsort Memory: 330kB
Buffers: shared hit=16153 read=30342
-> Parallel Seq Scan on customers_material_events (cost=0.00..64165.96 rows=1405018 width=6807) (actual time=0.064..944.029 rows=1117807 loops=3)
Filter: ((reference)::text = 'FFFEEE'::text)
Rows Removed by Filter: 17188
Buffers: shared hit=16091 read=30342
Planning Time: 0.189 ms
Execution Time: 9013.834 ms
(18 rows)
As you can see, the execution time is very slow...
The ideal index for this query would be:
CREATE INDEX ON customers_material_events (reference, date);
That would allow you to quickly find the values for a certain reference, automatically ordered by date, so no extra sort step is necessary.
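A typical follow-up (a sketch using the names from the question) is to refresh statistics after creating that index and re-run EXPLAIN to confirm the plan has switched from the parallel sequential scan to an index scan:
-- After creating the (reference, date) index suggested above:
ANALYZE customers_material_events;

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM customers_material_events
WHERE reference = 'XXXXXX'
ORDER BY date DESC
LIMIT 1000;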