SELECT count(e_id) AS count,
e_id
FROM test
WHERE created_at BETWEEN '2021-12-01 00:00:00' AND '2021-12-08 00:00:00'
AND std IN ( '1' )
AND section IN ( 'Sample' )
GROUP BY e_id
ORDER BY count DESC
LIMIT 4
The table has around 1M records. The query itself executes in under 40 ms, but the GROUP BY is where the computation takes a hit, and the query cost is high.
Limit (cost=26133.76..26133.77 rows=4 width=45) (actual time=52.300..52.303 rows=3 loops=1)
-> Sort (cost=26133.76..26134.77 rows=403 width=45) (actual time=52.299..52.301 rows=3 loops=1)
Sort Key: (count(e_id)) DESC
Sort Method: quicksort Memory: 25kB
-> GroupAggregate (cost=26120.66..26127.72 rows=403 width=45) (actual time=52.287..52.289 rows=3 loops=1)
Group Key: e_id
-> Sort (cost=26120.66..26121.67 rows=404 width=37) (actual time=52.281..52.283 rows=5 loops=1)
Sort Key: e_id
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on test (cost=239.19..26103.17 rows=404 width=37) (actual time=49.339..52.261 rows=5 loops=1)
Recheck Cond: ((section)::text = 'test'::text)
" Filter: ((created_at >= '2021-12-01 00:00:00'::timestamp without time zone) AND (created_at <= '2021-12-08 00:00:00'::timestamp without time zone) AND ((std)::text = ANY ('{1,2}'::text[])))"
Rows Removed by Filter: 38329
Heap Blocks: exact=33997
-> Bitmap Index Scan on index_test_on_section (cost=0.00..239.09 rows=7270 width=0) (actual time=6.815..6.815 rows=38334 loops=1)
Index Cond: ((section)::text = 'test'::text)
How can I optimize the group by and count, so that CPU does not shoot up?
The best index for this query is
CREATE INDEX ON test (section, created_at, std) INCLUDE (e_id);
Then VACUUM the table and try again. The VACUUM matters because INCLUDE (e_id) only pays off with an index-only scan, and that requires an up-to-date visibility map.
Unless you have shown us the wrong plan, the slow step is not the GROUP BY, but rather the Bitmap Heap Scan.
Your index on "section" returns 38334, of which all but 5 are filtered out. We can't tell if they are filtered out mostly by the "std" criterion or the "created_at" one. You need a more specific multicolumn index. The one i think is most likely to be effective is on (section, std, created_at).
I have a table (over 100 million records) on PostgreSQL 13.1:
CREATE TABLE report
(
id serial primary key,
license_plate_id integer,
datetime timestamp
);
Indexes (for testing, I created both of them):
create index report_lp_datetime_index on report (license_plate_id, datetime);
create index report_lp_datetime_desc_index on report (license_plate_id desc, datetime desc);
So, my question is why a query like
select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75)
order by datetime desc
limit 100
is very slow (~10 sec), while the same query without the ORDER BY clause is fast (milliseconds).
Explain:
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34, 75,374,57123)
limit 100
Limit (cost=0.57..400.38 rows=100 width=316) (actual time=0.037..0.216 rows=100 loops=1)
Buffers: shared hit=103
-> Index Scan using report_lp_id_idx on report r (cost=0.57..44986.97 rows=11252 width=316) (actual time=0.035..0.202 rows=100 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=103
Planning Time: 0.228 ms
Execution Time: 0.251 ms
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75,374,57123)
order by datetime desc
limit 100
Limit (cost=44193.63..44193.88 rows=100 width=316) (actual time=4921.030..4921.047 rows=100 loops=1)
Buffers: shared hit=11455 read=671
-> Sort (cost=44193.63..44221.76 rows=11252 width=316) (actual time=4921.028..4921.035 rows=100 loops=1)
Sort Key: datetime DESC
Sort Method: top-N heapsort Memory: 128kB
Buffers: shared hit=11455 read=671
-> Bitmap Heap Scan on report r (cost=151.18..43763.59 rows=11252 width=316) (actual time=54.422..4911.927 rows=12148 loops=1)
Recheck Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Heap Blocks: exact=12063
Buffers: shared hit=11455 read=671
-> Bitmap Index Scan on report_lp_id_idx (cost=0.00..148.37 rows=11252 width=0) (actual time=52.631..52.632 rows=12148 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=59 read=4
Planning Time: 0.427 ms
Execution Time: 4921.128 ms
You seem to have rather slow storage, if reading 671 8 kB blocks from disk takes a couple of seconds.
The way to speed this up is to reorder the table in the same way as the index, so that you can find the required rows in the same or adjacent table blocks:
CLUSTER report USING report_lp_id_idx;
Be warned that rewriting the table in this way causes downtime: the table will not be available while it is being rewritten. Moreover, PostgreSQL does not maintain the table order, so subsequent data modifications will cause performance to gradually deteriorate, and after a while you will have to run CLUSTER again.
But if you need this query to be fast no matter what, CLUSTER is the way to go.
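A sketch of the maintenance cycle, assuming the statement above; once a table has been clustered, PostgreSQL remembers which index was used, so later runs can omit it:
CLUSTER report USING report_lp_id_idx;  -- initial rewrite; locks the table
-- ...later, after enough modifications have degraded the ordering:
CLUSTER report;   -- re-clusters on the previously recorded index
ANALYZE report;   -- refresh statistics after the rewrite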
Your two indexes do exactly the same thing, so you can remove the second one; it's useless.
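Since a B-tree index can be scanned in either direction, the descending index answers exactly the same queries as the ascending one, so dropping it is safe:
DROP INDEX report_lp_datetime_desc_index;  -- fully redundant with the ascending index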
To optimize your query, the order of the columns inside the index must be reversed (under a new name, since report_lp_datetime_index already exists):
create index report_datetime_lp_index on report (datetime, license_plate_id);
BEGIN;
CREATE TABLE foo (d INTEGER, i INTEGER);
INSERT INTO foo SELECT random()*100000, random()*1000 FROM generate_series(1,1000000) s;
CREATE INDEX foo_d_i ON foo(d DESC,i);
COMMIT;
VACUUM ANALYZE foo;
EXPLAIN ANALYZE SELECT * FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100;
Limit (cost=0.42..343.92 rows=100 width=8) (actual time=0.076..9.359 rows=100 loops=1)
-> Index Only Scan Backward using foo_d_i on foo (cost=0.42..40976.43 rows=11929 width=8) (actual time=0.075..9.339 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9016
Heap Fetches: 0
Planning Time: 0.339 ms
Execution Time: 9.387 ms
Note the index is not used to optimize the WHERE clause. It is used here as a compact and fast way to walk references to the rows ordered by date DESC, so the ORDER BY can do an index-only scan and avoid sorting. By adding column i (standing in for license_plate_id) to the index, the condition on i can be tested by the index-only scan without hitting the table for every row. Since there is a low LIMIT value, it does not need to scan the whole index: it scans in date DESC order only until it finds enough rows satisfying the WHERE condition to return the result.
It will be faster if you create the index in date DESC order; this could be useful if you use ORDER BY date DESC + LIMIT in other queries too.
You forget that OP's table has a third column, and he is using SELECT *. So that wouldn't be an index-only scan.
Easy to work around. The optimal way to execute this query would be an index-only scan to filter on the WHERE conditions, then LIMIT, then hit the table to fetch only the surviving rows. For some reason, if "select *" is used, Postgres takes the columns from the table instead of taking them from the index, which results in lots of unnecessary heap fetches for rows that end up rejected by the WHERE condition.
Easy to work around by doing it manually. I've also added another bogus column to make sure the SELECT * hits the table.
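(The bogus column itself isn't shown; presumably something along these lines was run first. The column name here is hypothetical.)
-- hypothetical setup step, not shown in the answer
ALTER TABLE foo ADD COLUMN filler text DEFAULT 'x';
VACUUM ANALYZE foo;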
EXPLAIN (ANALYZE,buffers) SELECT * FROM foo
JOIN (SELECT d,i FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (d,i)
ORDER BY d DESC LIMIT 100;
Limit (cost=0.85..1281.94 rows=1 width=17) (actual time=0.052..3.618 rows=100 loops=1)
Buffers: shared hit=453
-> Nested Loop (cost=0.85..1281.94 rows=1 width=17) (actual time=0.050..3.594 rows=100 loops=1)
Buffers: shared hit=453
-> Limit (cost=0.42..435.44 rows=100 width=8) (actual time=0.037..2.953 rows=100 loops=1)
Buffers: shared hit=53
-> Index Only Scan using foo_d_i on foo foo_1 (cost=0.42..51936.43 rows=11939 width=8) (actual time=0.037..2.935 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=53
-> Index Scan using foo_d_i on foo (cost=0.42..8.45 rows=1 width=17) (actual time=0.005..0.005 rows=1 loops=100)
Index Cond: ((d = foo_1.d) AND (i = foo_1.i))
Buffers: shared hit=400
Execution Time: 3.663 ms
Another option is to just add the primary key to the date,license_plate index.
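The setup for this variant isn't shown either; judging from the plan below (foo_pkey and foo_d_i_id), it presumably looked something like:
-- assumed from the plan, not shown in the answer
ALTER TABLE foo ADD COLUMN id serial PRIMARY KEY;
CREATE INDEX foo_d_i_id ON foo (d DESC, i, id);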
SELECT * FROM foo JOIN (SELECT id FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (id) ORDER BY d DESC LIMIT 100;
Limit (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.920..3.947 rows=100 loops=1)
Buffers: shared hit=473
-> Sort (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.919..3.931 rows=100 loops=1)
Sort Key: foo.d DESC
Sort Method: quicksort Memory: 32kB
Buffers: shared hit=473
-> Nested Loop (cost=0.85..1354.66 rows=100 width=17) (actual time=0.055..3.858 rows=100 loops=1)
Buffers: shared hit=473
-> Limit (cost=0.42..509.41 rows=100 width=8) (actual time=0.039..3.116 rows=100 loops=1)
Buffers: shared hit=73
-> Index Only Scan using foo_d_i_id on foo foo_1 (cost=0.42..60768.43 rows=11939 width=8) (actual time=0.039..3.093 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=73
-> Index Scan using foo_pkey on foo (cost=0.42..8.44 rows=1 width=17) (actual time=0.006..0.006 rows=1 loops=100)
Index Cond: (id = foo_1.id)
Buffers: shared hit=400
Execution Time: 3.972 ms
Edit
After thinking about it... since the LIMIT restricts the output to 100 rows ordered by date DESC, wouldn't it be nice if we could get the 100 most recent rows for each license_plate_id, put all of them into a top-N sort, and keep only the best 100 across all license_plate_ids? That would avoid reading, and then throwing away, a lot of rows from the index. Even though scanning the index is much faster than hitting the table, it still loads those index pages into RAM and clogs up your buffers with data you don't actually need to keep in cache. Let's use a LATERAL join:
EXPLAIN (ANALYZE,BUFFERS)
SELECT * FROM foo
JOIN (SELECT d,i FROM
(VALUES (1),(2),(4),(5),(6),(7),(8),(10),(15),(22),(34),(75)) idlist
CROSS JOIN LATERAL
(SELECT d,i FROM foo WHERE i=idlist.column1 ORDER BY d DESC LIMIT 100) f2
ORDER BY d DESC LIMIT 100
) f3 USING (d,i)
ORDER BY d DESC LIMIT 100;
It's even faster: 2 ms, and it uses the index on (license_plate_id, date) instead of the other way around. Also, and this is important, each subquery in the lateral join hits only the index pages that contain rows that will actually be selected, while the previous queries hit many more index pages. So you save on RAM buffers.
If you don't need the index on (date, license_plate_id) for anything else and don't want to keep a useless index, this approach is interesting, since this query doesn't use that index. On the other hand, if you need the index on (date, license_plate_id) for something else and want to keep it, then... maybe not.
Please post results for the winning query 🔥
I have a query that is very fast for a large date filter:
EXPLAIN ANALYZE
SELECT "advertisings"."id",
"advertisings"."page_id",
"advertisings"."page_name",
"advertisings"."created_at",
"posts"."image_url",
"posts"."thumbnail_url",
"posts"."post_content",
"posts"."like_count"
FROM "advertisings"
INNER JOIN "posts" ON "advertisings"."post_id" = "posts"."id"
WHERE "advertisings"."created_at" >= '2020-01-01T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
ORDER BY "like_count" DESC LIMIT 20
And the query plan is:
Limit (cost=0.85..20.13 rows=20 width=552) (actual time=0.026..0.173 rows=20 loops=1)
-> Nested Loop (cost=0.85..951662.55 rows=987279 width=552) (actual time=0.025..0.169 rows=20 loops=1)
-> Index Scan using posts_like_count_idx on posts (cost=0.43..378991.65 rows=1053015 width=504) (actual time=0.013..0.039 rows=20 loops=1)
-> Index Scan using advertisings_post_id_index on advertisings (cost=0.43..0.53 rows=1 width=52) (actual time=0.005..0.006 rows=1 loops=20)
Index Cond: (post_id = posts.id)
Filter: ((created_at >= '2020-01-01 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
Planning Time: 0.365 ms
Execution Time: 0.199 ms
However, when I narrow the filter (changing it to "created_at" >= '2020-11-25T00:00:00Z'), which returns 9 records (fewer than the LIMIT of 20), the query is very slow:
EXPLAIN ANALYZE
SELECT "advertisings"."id",
"advertisings"."page_id",
"advertisings"."page_name",
"advertisings"."created_at",
"posts"."image_url",
"posts"."thumbnail_url",
"posts"."post_content",
"posts"."like_count"
FROM "advertisings"
INNER JOIN "posts" ON "advertisings"."post_id" = "posts"."id"
WHERE "advertisings"."created_at" >= '2020-11-25T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
ORDER BY "like_count" DESC LIMIT 20
Query plan:
Limit (cost=1000.88..8051.73 rows=20 width=552) (actual time=218.485..4155.336 rows=9 loops=1)
-> Gather Merge (cost=1000.88..612662.09 rows=1735 width=552) (actual time=218.483..4155.328 rows=9 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=0.85..611461.80 rows=723 width=552) (actual time=118.170..3786.176 rows=3 loops=3)
-> Parallel Index Scan using posts_like_count_idx on posts (cost=0.43..372849.07 rows=438756 width=504) (actual time=0.024..1542.094 rows=351005 loops=3)
-> Index Scan using advertisings_post_id_index on advertisings (cost=0.43..0.53 rows=1 width=52) (actual time=0.006..0.006 rows=0 loops=1053015)
Index Cond: (post_id = posts.id)
Filter: ((created_at >= '2020-11-25 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
Rows Removed by Filter: 1
Planning Time: 0.394 ms
Execution Time: 4155.379 ms
I spent hours googling but couldn't find the right solution. Any help would be greatly appreciated.
Updated
When I continue narrowing the filter to
WHERE "advertisings"."created_at" >= '2020-11-27T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
which also returns 9 records, just like the slow query above. This time, however, the query is really fast again:
Limit (cost=8082.99..8083.04 rows=20 width=552) (actual time=0.062..0.065 rows=9 loops=1)
-> Sort (cost=8082.99..8085.40 rows=962 width=552) (actual time=0.061..0.062 rows=9 loops=1)
Sort Key: posts.like_count DESC
Sort Method: quicksort Memory: 32kB
-> Nested Loop (cost=0.85..8057.39 rows=962 width=552) (actual time=0.019..0.047 rows=9 loops=1)
-> Index Scan using advertisings_created_at_index on advertisings (cost=0.43..501.30 rows=962 width=52) (actual time=0.008..0.012 rows=9 loops=1)
Index Cond: ((created_at >= '2020-11-27 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
-> Index Scan using posts_pkey on posts (cost=0.43..7.85 rows=1 width=504) (actual time=0.003..0.003 rows=1 loops=9)
Index Cond: (id = advertisings.post_id)
Planning Time: 0.540 ms
Execution Time: 0.096 ms
I have no idea what is happening.
PostgreSQL follows two different strategies: one in the first two queries, and another in the last one:
If there are many matching advertisings rows, it uses a nested loop join to fetch the rows in the order of the ORDER BY clause and discards rows that don't match the condition until it has found 20.
If there are few matching advertisings rows, it fetches those few rows, then the matching rows in posts, then sorts and takes the first 20 rows.
The second execution is slow because PostgreSQL overestimates the rows in advertisings that match the condition. See how it estimates 962 instead of 9 in the third query?
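To see the misestimate in isolation, you can run the range predicate on its own and compare the planner's row estimate with the actual count (a diagnostic sketch against the question's table):
EXPLAIN ANALYZE
SELECT count(*)
FROM advertisings
WHERE created_at >= '2020-11-25T00:00:00Z'
  AND created_at < '2020-12-02T23:59:59Z';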
The solution is to improve PostgreSQL's estimate:
If running
ANALYZE advertisings;
is enough to make the slow query fast, tell PostgreSQL to collect statistics more often:
ALTER TABLE advertisings SET (autovacuum_analyze_scale_factor = 0.05);
If that is not enough, try collecting more detailed statistics:
SET default_statistics_target = 1000;
ANALYZE advertisings;
You can experiment with values up to 10000. Once you have found the value that works, persist it:
ALTER TABLE advertisings ALTER created_at SET STATISTICS 1000;
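One caveat: the per-column setting only takes effect the next time statistics are gathered, so follow it with another ANALYZE:
ANALYZE advertisings;  -- re-gather stats so the new target takes effect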
I have a single small table (~400k rows). The table is indexed by collection_id and contains a JSON column with several GIN indexes defined; one of them is on the value tagline.id.
The query to get all the objects with a specific tagline.id is sometimes VERY slow:
explain (analyze, buffers)
SELECT "objects_object"."created",
"objects_object"."modified",
"objects_object"."_id",
"objects_object"."id",
"objects_object"."collection_id",
"objects_object"."data",
"objects_object"."search",
"objects_object"."location"::bytea
FROM "objects_object"
WHERE ("objects_object"."collection_id" IN (3381, 3321, 3312, 3262, 3068, 2684, 2508, 2159, 2158, 2154, 2157, 2156)
AND (("objects_object"."data" #>> ARRAY['tagline','id']))::float IN ('8')
AND ("objects_object"."data" -> 'tagline') ? 'id')
ORDER BY "objects_object"."created" DESC,
"objects_object"."id" ASC
LIMIT 101;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=8.46..8.47 rows=1 width=1239) (actual time=5513.374..5513.399 rows=101 loops=1)
Buffers: shared hit=4480 read=6261
-> Sort (cost=8.46..8.47 rows=1 width=1239) (actual time=5513.372..5513.389 rows=101 loops=1)
Sort Key: created DESC, id
Sort Method: top-N heapsort Memory: 247kB
Buffers: shared hit=4480 read=6261
-> Index Scan using index_tagline_id_float_51a27976 on objects_object (cost=0.42..8.45 rows=1 width=1239) (actual time=943.689..5513.002 rows=235 loops=1)
Index Cond: (((data #>> '{tagline,id}'::text[]))::double precision = '8'::double precision)
Filter: (collection_id = ANY ('{3381,3321,3312,3262,3068,2684,2508,2159,2158,2154,2157,2156}'::integer[]))
Rows Removed by Filter: 47295
Buffers: shared hit=4480 read=6261
Planning time: 0.244 ms
Execution time: 5513.439 ms
(13 rows)
If executed multiple times, the execution time drops to ~5 ms.
What is taking so long? Why does the execution time drop so much after the first run?
I don't think it's memory related, since the default work_mem (4MB) is much higher than what the sort required (247kB).
EDIT:
Index definitions:
SELECT indexdef FROM pg_indexes
WHERE indexname = 'index_tagline_id_float_51a27976';
indexdef
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE INDEX index_tagline_id_float_51a27976 ON public.objects_object USING btree ((((data #>> ARRAY['tagline'::text, 'id'::text]))::double precision)) WHERE ((data -> 'tagline'::text) ? 'id'::text)
(1 row)
SELECT indexdef FROM pg_indexes
WHERE indexname = 'objects_object_collection_id_6f1559f5';
indexdef
---------------------------------------------------------------------------------------------------------
CREATE INDEX objects_object_collection_id_6f1559f5 ON public.objects_object USING btree (collection_id)
(1 row)
EDIT:
After adding the index test:
select indexdef from pg_indexes where indexname='test';
indexdef
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE INDEX test ON public.objects_object USING btree ((((data #>> ARRAY['tagline'::text, 'id'::text]))::double precision), collection_id) WHERE ((data -> 'tagline'::text) ? 'id'::text)
(1 row)
The execution time decreased, but the shared buffer hits went up, so I'm not sure this actually improved performance:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=8.46..8.47 rows=1 width=1238) (actual time=1721.260..1721.281 rows=101 loops=1)
Buffers: shared hit=5460 read=5115
-> Sort (cost=8.46..8.47 rows=1 width=1238) (actual time=1721.257..1721.270 rows=101 loops=1)
Sort Key: created DESC, id
Sort Method: top-N heapsort Memory: 298kB
Buffers: shared hit=5460 read=5115
-> Index Scan using test on objects_object (cost=0.42..8.45 rows=1 width=1238) (actual time=1682.637..1720.793 rows=235 loops=1)
Index Cond: (((data #>> '{tagline,id}'::text[]))::double precision = '8'::double precision)
Filter: (collection_id = ANY ('{3381,3321,3312,3262,3068,2684,2508,2159,2158,2154,2157,2156}'::integer[]))
Rows Removed by Filter: 47295
Buffers: shared hit=5454 read=5115
Planning time: 238.364 ms
Execution time: 1762.996 ms
(13 rows)
The problem seems to be that collection_id should be part of the index condition rather than the filter; that would avoid fetching a large amount of data from (slow) storage.
Why is the index not working as expected?
UPDATE:
Apparently the order of the columns in the index impacted the query plan. I rewrote the index as:
select indexdef from pg_indexes where indexname='test';
indexdef
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE INDEX test ON public.objects_object USING btree (collection_id, (((data #>> ARRAY['tagline'::text, 'id'::text]))::double precision)) WHERE ((data -> 'tagline'::text) ? 'id'::text)
Running the query now, we can see the much lower number of reads:
Limit (cost=57.15..57.16 rows=1 width=1177) (actual time=1.043..1.059 rows=101 loops=1)
Buffers: shared hit=101 read=10
-> Sort (cost=57.15..57.16 rows=1 width=1177) (actual time=1.040..1.047 rows=101 loops=1)
Sort Key: created DESC, id
Sort Method: top-N heapsort Memory: 304kB
Buffers: shared hit=101 read=10
-> Index Scan using test on objects_object (cost=0.42..57.14 rows=1 width=1177) (actual time=0.094..0.670 rows=232 loops=1)
Index Cond: ((collection_id = ANY ('{3381,3321,3312,3262,3068,2684,2508,2159,2158,2154,2157,2156}'::integer[])) AND (((data #>> '{tagline,id}'::text[]))::double precision = '8'::double precision))
Buffers: shared hit=95 read=10
Planning time: 416.365 ms
Execution time: 43.463 ms
(11 rows)
This particular query could be sped up with the following index:
CREATE INDEX ON public.objects_object (
((data #>> ARRAY['tagline'::text, 'id'::text])::double precision),
collection_id
) WHERE (data -> 'tagline') ? 'id';
This will avoid the filter in the index scan, which is where most of the time is spent.