Speed up PostgreSQL COUNT request with ORDER BY and multiple INNER JOIN - postgresql

I have a "complex" request, used from a back office only (2 users), that takes around 5s to perform. I would like to know if there are some tips to reduce this delay.
There are 5M records in each table.
optimized_all is a varchar and it has a BTREE index.
The ORDER BY seems to be the main cause of the delay. When I remove it, it's 80ms...
The website is on a dedicated server.
work_mem is currently set to 10MB in postgresql.conf.
The request:
SELECT
optimized_all,
COUNT(optimized_all) AS count_optimized_all
FROM
"usr_drinks"
INNER JOIN usr_seasons ON usr_seasons.drink_id = usr_drinks.id
INNER JOIN usr_photos ON usr_photos.season_id = usr_seasons.id
AND(usr_photos.verified_kind = 1
OR usr_photos.verified_kind = 0)
WHERE
(usr_drinks.optimized_type_id = 1
AND usr_drinks.optimized_status = 1
AND usr_seasons.verified_at IS NULL
)
GROUP BY
usr_drinks.optimized_all
ORDER BY
count_optimized_all DESC
LIMIT 10;
Explain Analyze:
Limit (cost=150022.12..150022.12 rows=1 width=194) (actual time=4813.137..4923.631 rows=1 loops=1)
-> Sort (cost=150022.12..150111.98 rows=35945 width=194) (actual time=4813.136..4923.629 rows=1 loops=1)
Sort Key: (count(usr_drinks.optimized_all)) DESC
Sort Method: top-N heapsort Memory: 25kB
-> Finalize GroupAggregate (cost=144716.68..149842.39 rows=35945 width=194) (actual time=3675.407..4881.022 rows=314695 loops=1)
Group Key: usr_drinks.optimized_all
-> Gather Merge (cost=144716.68..149297.46 rows=37096 width=101) (actual time=3675.400..4799.409 rows=462144 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Partial GroupAggregate (cost=143716.62..143878.91 rows=9274 width=101) (actual time=3647.837..3914.241 rows=92429 loops=5)
Group Key: usr_drinks.optimized_all
-> Sort (cost=143716.62..143739.80 rows=9274 width=93) (actual time=3647.828..3867.945 rows=161362 loops=5)
Sort Key: usr_drinks.optimized_all
Sort Method: external merge Disk: 18848kB
Worker 0: Sort Method: external merge Disk: 16016kB
Worker 1: Sort Method: external merge Disk: 16016kB
Worker 2: Sort Method: external merge Disk: 16008kB
Worker 3: Sort Method: external merge Disk: 15752kB
-> Nested Loop (cost=1.30..143105.51 rows=9274 width=93) (actual time=12.400..3077.821 rows=161362 loops=5)
-> Nested Loop (cost=0.86..104531.30 rows=48751 width=109) (actual time=1.882..1242.603 rows=172132 loops=5)
-> Parallel Index Scan using usr_drinks_on_optimized_type_idx on usr_drinks (cost=0.43..35406.66 rows=44170 width=109) (actual time=0.097..216.641 rows=196036 loops=5)
Index Cond: (optimized_type_id = 1)
Filter: (optimized_status = 1)
Rows Removed by Filter: 9387
-> Index Scan using usr_seasons_on_drink_id_idx on usr_seasons (cost=0.43..1.54 rows=2 width=32) (actual time=0.005..0.005 rows=1 loops=980181)
Index Cond: (drink_id = usr_drinks.id)
Filter: (verified_at IS NULL)
Rows Removed by Filter: 0
-> Index Scan using usr_photos_on_season_id_idx on usr_photos (cost=0.43..0.78 rows=1 width=16) (actual time=0.008..0.010 rows=1 loops=860662)
Index Cond: (season_id = usr_seasons.id)
Filter: ((verified_kind = 1) OR (verified_kind = 0))
Rows Removed by Filter: 1
Planning Time: 1.120 ms
Execution Time: 4927.502 ms
Possible solution?
Storing the count in another table; but for my needs, it seems quite complicated to update the counters. Any new idea is welcome.
EDIT 1: I removed the 2 unnecessary INNER JOINs. Now there are only 2.
EDIT 2: I tried to replace the last 2 INNER JOINs with a double EXISTS condition. I saved only 1 second (the request now takes 4 seconds instead of 5).
SELECT
optimized_all,
COUNT(optimized_all) AS count_optimized_all
FROM
"usr_drinks"
WHERE (usr_drinks.optimized_type_id = 1
AND usr_drinks.optimized_status = 1)
AND EXISTS (
SELECT
*
FROM
usr_seasons
WHERE
usr_seasons.drink_id = usr_drinks.id
AND usr_seasons.verified_at IS NULL
AND EXISTS (
SELECT
*
FROM
usr_photos
WHERE
usr_photos.season_id = usr_seasons.id
AND(usr_photos.verified_kind = 1
OR usr_photos.verified_kind = 0)))
GROUP BY
usr_drinks.optimized_all
ORDER BY
count_optimized_all DESC
LIMIT 10;
EDIT 3: the current postgresql.conf settings are:
max_connections = 100
shared_buffers = 6GB
effective_cache_size = 18GB
maintenance_work_mem = 1536MB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 10485kB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 12
max_parallel_workers_per_gather = 6
max_parallel_workers = 12
Increasing work_mem, even to 256MB, doesn't help (perhaps because my disk is an SSD)?

You are facing a structural limitation of PostgreSQL, which is unable to optimize COUNT or SUM queries very well. This is due to PostgreSQL's internal architecture, in particular the way it handles MVCC (Multi-Version Concurrency Control).
Take a look at the article I wrote about it.
The only way around this problem is to use a materialized view.
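For example, such a materialized view could look like this (the view name is hypothetical; the SELECT is the query from the question, and you refresh it on whatever schedule your data tolerates):
-- Sketch: precompute the aggregation in a materialized view (hypothetical name)
CREATE MATERIALIZED VIEW drink_optimized_all_counts AS
SELECT
    usr_drinks.optimized_all,
    COUNT(usr_drinks.optimized_all) AS count_optimized_all
FROM usr_drinks
INNER JOIN usr_seasons ON usr_seasons.drink_id = usr_drinks.id
INNER JOIN usr_photos ON usr_photos.season_id = usr_seasons.id
    AND usr_photos.verified_kind IN (0, 1)
WHERE usr_drinks.optimized_type_id = 1
    AND usr_drinks.optimized_status = 1
    AND usr_seasons.verified_at IS NULL
GROUP BY usr_drinks.optimized_all;

-- Reading the top 10 is then cheap
SELECT optimized_all, count_optimized_all
FROM drink_optimized_all_counts
ORDER BY count_optimized_all DESC
LIMIT 10;

-- The expensive query runs again only when you refresh
REFRESH MATERIALIZED VIEW drink_optimized_all_counts;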

As I didn't find a way to speed up the request massively, due to the ORDER BY count delay, this is what I did:
I created a new table that stores the optimized_all field with the corresponding optimized_all_count. I didn't want to do this at first, but it was only about 3 hours of work for me.
I run a task once a day that fills this table with INSERT...SELECT... and the long request (it's a Rails app).
Now I just search in this new table... it takes just a few milliseconds, of course.
This is completely acceptable for my needs (an admin tool), but it might not suit other scenarios.
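In SQL terms, the daily task boils down to something like this (the table name is hypothetical and the SELECT is the long query from the question; the Rails task just wraps it):
-- Sketch: summary table rebuilt once a day
CREATE TABLE optimized_all_counts (
    optimized_all       varchar,
    count_optimized_all bigint NOT NULL
);

TRUNCATE optimized_all_counts;

INSERT INTO optimized_all_counts (optimized_all, count_optimized_all)
SELECT
    usr_drinks.optimized_all,
    COUNT(usr_drinks.optimized_all)
FROM usr_drinks
INNER JOIN usr_seasons ON usr_seasons.drink_id = usr_drinks.id
INNER JOIN usr_photos ON usr_photos.season_id = usr_seasons.id
    AND usr_photos.verified_kind IN (0, 1)
WHERE usr_drinks.optimized_type_id = 1
    AND usr_drinks.optimized_status = 1
    AND usr_seasons.verified_at IS NULL
GROUP BY usr_drinks.optimized_all;

-- The admin tool then reads only this small table
SELECT optimized_all, count_optimized_all
FROM optimized_all_counts
ORDER BY count_optimized_all DESC
LIMIT 10;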
Thanks to everybody for your suggestions.

Try creating the partial index below on usr_photos:
CREATE INDEX user_photos_session_id_partial_ix
ON usr_photos (season_id)
WHERE (verified_kind = 1) OR (verified_kind = 0);
It should reduce the query time by about 1.5 seconds.

PostgreSQL slow order

I have a table (over 100 million records) on PostgreSQL 13.1:
CREATE TABLE report
(
id serial primary key,
license_plate_id integer,
datetime timestamp
);
Indexes (for the test I created both of them):
create index report_lp_datetime_index on report (license_plate_id, datetime);
create index report_lp_datetime_desc_index on report (license_plate_id desc, datetime desc);
So, my question is why a query like
select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75)
order by datetime desc
limit 100
is very slow (~10 sec), while the query without the ORDER BY clause is fast (milliseconds).
Explain:
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34, 75,374,57123)
limit 100
Limit (cost=0.57..400.38 rows=100 width=316) (actual time=0.037..0.216 rows=100 loops=1)
Buffers: shared hit=103
-> Index Scan using report_lp_id_idx on report r (cost=0.57..44986.97 rows=11252 width=316) (actual time=0.035..0.202 rows=100 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=103
Planning Time: 0.228 ms
Execution Time: 0.251 ms
explain (analyze, buffers, format text) select * from report r
where r.license_plate_id in (1,2,4,5,6,7,8,10,15,22,34,75,374,57123)
order by datetime desc
limit 100
Limit (cost=44193.63..44193.88 rows=100 width=316) (actual time=4921.030..4921.047 rows=100 loops=1)
Buffers: shared hit=11455 read=671
-> Sort (cost=44193.63..44221.76 rows=11252 width=316) (actual time=4921.028..4921.035 rows=100 loops=1)
Sort Key: datetime DESC
Sort Method: top-N heapsort Memory: 128kB
Buffers: shared hit=11455 read=671
-> Bitmap Heap Scan on report r (cost=151.18..43763.59 rows=11252 width=316) (actual time=54.422..4911.927 rows=12148 loops=1)
Recheck Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Heap Blocks: exact=12063
Buffers: shared hit=11455 read=671
-> Bitmap Index Scan on report_lp_id_idx (cost=0.00..148.37 rows=11252 width=0) (actual time=52.631..52.632 rows=12148 loops=1)
Index Cond: (license_plate_id = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75,374,57123}'::integer[]))
Buffers: shared hit=59 read=4
Planning Time: 0.427 ms
Execution Time: 4921.128 ms
You seem to have rather slow storage, if reading 671 8kB-blocks from disk takes a couple of seconds.
The way to speed this up is to reorder the table in the same way as the index, so that you can find the required rows in the same or adjacent table blocks:
CLUSTER report USING report_lp_id_idx;
Be warned that rewriting the table in this way causes downtime – the table will not be available while it is being rewritten. Moreover, PostgreSQL does not maintain the table order, so subsequent data modifications will cause performance to gradually deteriorate, so that after a while you will have to run CLUSTER again.
But if you need this query to be fast no matter what, CLUSTER is the way to go.
Your two indexes do exactly the same thing, so you can remove the second one; it's useless.
To optimize your query, the order of the fields inside the index must be reversed:
create index report_datetime_lp_index on report (datetime, license_plate_id);
BEGIN;
CREATE TABLE foo (d INTEGER, i INTEGER);
INSERT INTO foo SELECT random()*100000, random()*1000 FROM generate_series(1,1000000) s;
CREATE INDEX foo_d_i ON foo(d DESC,i);
COMMIT;
VACUUM ANALYZE foo;
EXPLAIN ANALYZE SELECT * FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100;
Limit (cost=0.42..343.92 rows=100 width=8) (actual time=0.076..9.359 rows=100 loops=1)
-> Index Only Scan Backward using foo_d_i on foo (cost=0.42..40976.43 rows=11929 width=8) (actual time=0.075..9.339 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9016
Heap Fetches: 0
Planning Time: 0.339 ms
Execution Time: 9.387 ms
Note the index is not used to optimize the WHERE clause. It is used here as a compact and fast way to store references to the rows ordered by date DESC, so the ORDER BY can use an index-only scan and avoid sorting. By adding column i (the license_plate_id) to the index, the condition on it can be tested with an index-only scan, without hitting the table for every row. Since there is a low LIMIT value, it does not need to scan the whole index; it only scans it in date DESC order until it finds enough rows satisfying the WHERE condition to return the result.
It will be faster if you create the index in date DESC order; this could be useful if you use ORDER BY date DESC + LIMIT in other queries too.
You forget that OP's table has a third column, and he is using SELECT *. So that wouldn't be an index-only scan.
Easy to work around. The optimum way to do this query would be an index-only scan to filter on WHERE conditions, then LIMIT, then hit the table to get the rows. For some reason if "select *" is used postgres takes the id column from the table instead of taking it from the index, which results in lots of unnecessary heap fetches for rows whose id is rejected by the WHERE condition.
Easy to work around, by doing it manually. I've also added another bogus column to make sure the SELECT * hits the table.
EXPLAIN (ANALYZE,buffers) SELECT * FROM foo
JOIN (SELECT d,i FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (d,i)
ORDER BY d DESC LIMIT 100;
Limit (cost=0.85..1281.94 rows=1 width=17) (actual time=0.052..3.618 rows=100 loops=1)
Buffers: shared hit=453
-> Nested Loop (cost=0.85..1281.94 rows=1 width=17) (actual time=0.050..3.594 rows=100 loops=1)
Buffers: shared hit=453
-> Limit (cost=0.42..435.44 rows=100 width=8) (actual time=0.037..2.953 rows=100 loops=1)
Buffers: shared hit=53
-> Index Only Scan using foo_d_i on foo foo_1 (cost=0.42..51936.43 rows=11939 width=8) (actual time=0.037..2.935 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=53
-> Index Scan using foo_d_i on foo (cost=0.42..8.45 rows=1 width=17) (actual time=0.005..0.005 rows=1 loops=100)
Index Cond: ((d = foo_1.d) AND (i = foo_1.i))
Buffers: shared hit=400
Execution Time: 3.663 ms
Another option is to just add the primary key to the date,license_plate index.
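Presumably that means an index like the one below; the name foo_d_i_id matches the plan that follows, but its exact definition is my assumption:
-- Assumed definition: (date, license_plate_id) with the primary key appended
CREATE INDEX foo_d_i_id ON foo (d DESC, i, id);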
SELECT * FROM foo JOIN (SELECT id FROM foo WHERE i IN (1,2,4,5,6,7,8,10,15,22,34,75) ORDER BY d DESC LIMIT 100) f USING (id) ORDER BY d DESC LIMIT 100;
Limit (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.920..3.947 rows=100 loops=1)
Buffers: shared hit=473
-> Sort (cost=1357.98..1358.23 rows=100 width=17) (actual time=3.919..3.931 rows=100 loops=1)
Sort Key: foo.d DESC
Sort Method: quicksort Memory: 32kB
Buffers: shared hit=473
-> Nested Loop (cost=0.85..1354.66 rows=100 width=17) (actual time=0.055..3.858 rows=100 loops=1)
Buffers: shared hit=473
-> Limit (cost=0.42..509.41 rows=100 width=8) (actual time=0.039..3.116 rows=100 loops=1)
Buffers: shared hit=73
-> Index Only Scan using foo_d_i_id on foo foo_1 (cost=0.42..60768.43 rows=11939 width=8) (actual time=0.039..3.093 rows=100 loops=1)
Filter: (i = ANY ('{1,2,4,5,6,7,8,10,15,22,34,75}'::integer[]))
Rows Removed by Filter: 9010
Heap Fetches: 0
Buffers: shared hit=73
-> Index Scan using foo_pkey on foo (cost=0.42..8.44 rows=1 width=17) (actual time=0.006..0.006 rows=1 loops=100)
Index Cond: (id = foo_1.id)
Buffers: shared hit=400
Execution Time: 3.972 ms
Edit
After thinking about it... since the LIMIT restricts the output to 100 rows ordered by date desc, wouldn't it be nice if we could get the 100 most recent rows for each license_plate_id, put all that into a top-n sort, and only keep the best 100 for all license_plate_ids? That would avoid reading and throwing away a lot of rows from the index. Even if that's much faster than hitting the table, it will still load up these index pages in RAM and clog up your buffers with stuff you don't actually need to keep in cache. Let's use LATERAL JOIN:
EXPLAIN (ANALYZE,BUFFERS)
SELECT * FROM foo
JOIN (SELECT d,i FROM
(VALUES (1),(2),(4),(5),(6),(7),(8),(10),(15),(22),(34),(75)) idlist
CROSS JOIN LATERAL
(SELECT d,i FROM foo WHERE i=idlist.column1 ORDER BY d DESC LIMIT 100) f2
ORDER BY d DESC LIMIT 100
) f3 USING (d,i)
ORDER BY d DESC LIMIT 100;
It's even faster: 2 ms, and it uses the index on (license_plate_id, date) instead of the other way around. Also, and this is important, each subquery in the LATERAL join hits only the index pages that contain rows that will actually be selected, while the previous queries hit many more index pages. So you save on RAM buffers.
If you don't need the index on (date,license_plate_id) and don't want to keep a useless index, that could be interesting since this query doesn't use it. On the other hand, if you need the index on (date,license_plate_id) for something else and want to keep it, then... maybe not.
Please post results for the winning query 🔥

Postgres uses Hash Join with Seq Scan when Inner Select Index Cond is faster

Postgres is using a much heavier Seq Scan on table tracking when an index is available. The first query was the original attempt, which uses a Seq Scan and therefore has a slow query. I attempted to force an Index Scan with an Inner Select, but postgres converted it back to effectively the same query with nearly the same runtime. I finally copied the list from the Inner Select of query two to make the third query. Finally postgres used the Index Scan, which dramatically decreased the runtime. The third query is not viable in a production environment. What will cause postgres to use the last query plan?
(vacuum was used on both tables)
Tables
tracking (worker_id, localdatetime) total records: 118664105
project_worker (id, project_id) total records: 12935
INDEX
CREATE INDEX tracking_worker_id_localdatetime_idx ON public.tracking USING btree (worker_id, localdatetime)
Queries
SELECT worker_id, localdatetime FROM tracking t JOIN project_worker pw ON t.worker_id = pw.id WHERE project_id = 68475018
Hash Join (cost=29185.80..2638162.26 rows=19294218 width=16) (actual time=16.912..18376.032 rows=177681 loops=1)
Hash Cond: (t.worker_id = pw.id)
-> Seq Scan on tracking t (cost=0.00..2297293.86 rows=118716186 width=16) (actual time=0.004..8242.891 rows=118674660 loops=1)
-> Hash (cost=29134.80..29134.80 rows=4080 width=8) (actual time=16.855..16.855 rows=2102 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 115kB
-> Seq Scan on project_worker pw (cost=0.00..29134.80 rows=4080 width=8) (actual time=0.004..16.596 rows=2102 loops=1)
Filter: (project_id = 68475018)
Rows Removed by Filter: 10833
Planning Time: 0.192 ms
Execution Time: 18382.698 ms
SELECT worker_id, localdatetime FROM tracking t WHERE worker_id IN (SELECT id FROM project_worker WHERE project_id = 68475018 LIMIT 500)
Hash Semi Join (cost=6905.32..2923969.14 rows=27733254 width=24) (actual time=19.715..20191.517 rows=20530 loops=1)
Hash Cond: (t.worker_id = project_worker.id)
-> Seq Scan on tracking t (cost=0.00..2296948.27 rows=118698327 width=24) (actual time=0.005..9184.676 rows=118657026 loops=1)
-> Hash (cost=6899.07..6899.07 rows=500 width=8) (actual time=1.103..1.103 rows=500 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 28kB
-> Limit (cost=0.00..6894.07 rows=500 width=8) (actual time=0.006..1.011 rows=500 loops=1)
-> Seq Scan on project_worker (cost=0.00..28982.65 rows=2102 width=8) (actual time=0.005..0.968 rows=500 loops=1)
Filter: (project_id = 68475018)
Rows Removed by Filter: 4493
Planning Time: 0.224 ms
Execution Time: 20192.421 ms
SELECT worker_id, localdatetime FROM tracking t WHERE worker_id IN (322016383,316007840,...,285702579)
Index Scan using tracking_worker_id_localdatetime_idx on tracking t (cost=0.57..4766798.31 rows=21877360 width=24) (actual time=0.079..29.756 rows=22112 loops=1)
" Index Cond: (worker_id = ANY ('{322016383,316007840,...,285702579}'::bigint[]))"
Planning Time: 1.162 ms
Execution Time: 30.884 ms
... is in place of the 500 id entries used in the query
Same query ran on another set of 500 id's
Index Scan using tracking_worker_id_localdatetime_idx on tracking t (cost=0.57..4776714.91 rows=21900980 width=24) (actual time=0.105..5528.109 rows=117838 loops=1)
" Index Cond: (worker_id = ANY ('{286237712,286237844,...,216724213}'::bigint[]))"
Planning Time: 2.105 ms
Execution Time: 5534.948 ms
The distribution of "worker_id" within "tracking" seems very skewed. For one thing, the number of rows in one of your instances of query 3 returns over 5 times as many rows as the other instance of it. For another, the estimated number of rows is 100 to 1000 times higher than the actual number. This can certainly lead to bad plans (although it is unlikely to be the complete picture).
What is the actual number of distinct values for worker_id within tracking:
select count(distinct worker_id) from tracking;
And what does the planner think this value is:
select n_distinct from pg_stats where tablename = 'tracking' and attname = 'worker_id';
If those values are far apart, and you force the planner to use a more reasonable value with
alter table tracking alter column worker_id set (n_distinct = <real value>); analyze tracking;
does that change the plans?
If you want to nudge PostgreSQL towards a nested loop join, try the following:
Create an index on tracking that can be used for an index-only scan:
CREATE INDEX ON tracking (worker_id) INCLUDE (localdatetime);
Make sure that tracking is VACUUMed often, so that an index-only scan is effective.
Reduce random_page_cost and increase effective_cache_size so that the optimizer prices index scans lower (but don't use insane values); see the example settings at the end of this answer.
Make sure that you have good estimates on project_worker:
ALTER TABLE project_worker ALTER project_id SET STATISTICS 1000;
ANALYZE project_worker;
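For example (the values below are only an assumption for SSD-backed storage; adjust them to your own hardware and RAM):
-- Example settings, not a recommendation for every system
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_cache_size = '16GB';
SELECT pg_reload_conf();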

Postgresql count performance

I am doing a count query on a PostgreSQL table. The table name is simcards, containing the fields id, card_state and 10 more. The simcards table contains around 13 million records.
My query is
SELECT CAST(count(*) AS INT) FROM simcards WHERE card_state = 'ACTIVATED';
This is taking more than 6 seconds and I want to optimize it. I tried creating the partial index below:
CREATE INDEX activated_count on simcards (card_state) where card_state = 'ACTIVATED';
But no improvement. I think it is because I have more than 12 million records with card_state = 'ACTIVATED'. Note that card_state can be 'ACTIVATED', 'PREPROVISIONED' or 'TERMINATED'.
Anyone got an idea on how the count can be drastically improved?
Running EXPLAIN (ANALYZE, BUFFERS) SELECT CAST(count(*) AS INT) FROM simcards WHERE card_state = 'ACTIVATED'; gives
Finalize Aggregate (cost=540300.95..540300.96 rows=1 width=4) (actual time=7103.814..7103.814 rows=1 loops=1)
Buffers: shared hit=2295 read=155298
-> Gather (cost=540300.74..540300.95 rows=2 width=8) (actual time=7103.773..7103.810 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=2295 read=155298
-> Partial Aggregate (cost=539300.74..539300.75 rows=1 width=8) (actual time=7006.368..7006.368 rows=1 loops=3)
Buffers: shared hit=5983 read=455025
-> Parallel Seq Scan on simcards (cost=0.00..526282.77 rows=5207186 width=0) (actual time=2.677..6483.503 rows=4166620 loops=3)
Filter: (card_state = 'ACTIVATED'::text)
Rows Removed by Filter: 10965
Buffers: shared hit=5983 read=455025
Planning time: 0.333 ms
Execution time: 7123.739 ms
Counting is slow. Here are a few ideas how to improve it:
If you don't need exact results, use PostgreSQL's estimates:
/* this will improve the results */
ANALYZE simcards;
SELECT t.reltuples * freqs.freq AS count
FROM pg_class AS t
JOIN pg_stats AS s
ON t.relname = s.tablename
AND t.relnamespace::regnamespace::name = s.schemaname
CROSS JOIN
(LATERAL unnest(s.most_common_vals::text::text[]) WITH ORDINALITY AS vals(val,ord)
JOIN
LATERAL unnest(s.most_common_freqs::text::float8[]) WITH ORDINALITY AS freqs(freq,ord)
USING (ord)
)
WHERE s.tablename = 'simcards'
AND s.attname = 'card_state'
AND vals.val = 'ACTIVATED';
If you need exact counts, create an extra “counter table” and triggers on simcards that update the counter whenever rows are added, removed or modified.
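A rough sketch of such a counter table with triggers could look like this (names are hypothetical; it assumes card_state is NOT NULL and glosses over concurrency on the counter rows):
-- Hypothetical counter table, one row per card_state
CREATE TABLE simcards_counts (
    card_state text PRIMARY KEY,
    cnt        bigint NOT NULL
);

-- Seed it once from the existing data
INSERT INTO simcards_counts
SELECT card_state, count(*) FROM simcards GROUP BY card_state;

CREATE FUNCTION simcards_count_trig() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        INSERT INTO simcards_counts (card_state, cnt)
        VALUES (NEW.card_state, 1)
        ON CONFLICT (card_state)
        DO UPDATE SET cnt = simcards_counts.cnt + 1;
    END IF;
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE simcards_counts SET cnt = cnt - 1
        WHERE card_state = OLD.card_state;
    END IF;
    RETURN NULL;
END;$$;

CREATE TRIGGER simcards_count_trig
AFTER INSERT OR UPDATE OF card_state OR DELETE ON simcards
FOR EACH ROW EXECUTE PROCEDURE simcards_count_trig();  -- EXECUTE FUNCTION on v11+

-- The count then becomes a single-row lookup
SELECT cnt FROM simcards_counts WHERE card_state = 'ACTIVATED';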
For a more detailed discussion, read my blog post.
Did you test setting the max_parallel_workers_per_gather = 4 parameter?
It is probable that some extra workers would help here.
Regards

How to reduce the execution time of query since planning time is less than execution time?

EXPLAIN ANALYSE
SELECT "conversations".*
FROM "conversations"
INNER JOIN "messages"
ON "messages"."conversation_identifier" = "conversations"."conversation_identifier"
WHERE "conversations"."project_id" = 2
AND (person_messages_count > 0 and deleted IS NULL)
AND (conversations.status = 'closed')
AND ((messages.tsv_message_content)
@@ (to_tsquery('simple', ''' ' || 'help' || ' ''' || ':*')))
ORDER BY conversations.updated_at DESC LIMIT 30;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=6895.78..6895.85 rows=30 width=398) (actual time=197364.691..197364.730 rows=30 loops=1)
-> Sort (cost=6895.78..6895.86 rows=34 width=398) (actual time=197364.688..197364.702 rows=30 loops=1)
Sort Key: conversations.updated_at DESC
Sort Method: top-N heapsort Memory: 32kB
-> Nested Loop (cost=1.12..6894.91 rows=34 width=398) (actual time=9.625..197314.491 rows=24971 loops=1)
-> Index Scan using indexing_by_conversations_status on conversations (cost=0.56..704.27 rows=64 width=398) (actual time=2.832..14181.496 rows=25362 loops=1)
Index Cond: ((project_id = 2) AND (person_messages_count > 0) AND (deleted IS NULL) AND ((status)::text = 'closed'::text))
-> Index Scan using index_messages_on_conversation_identifier on messages (cost=0.56..96.63 rows=10 width=46) (actual time=3.709..7.217 rows=1 loops=25362)
Index Cond: ((conversation_identifier)::text = (conversations.conversation_identifier)::text)
Filter: (tsv_message_content @@ '''help'':*'::tsquery)
Rows Removed by Filter: 15
Planning time: 46.814 ms
Execution time: 197366.064 ms
Planning time seems to be much lower than the actual execution time. Is there any way to reduce the execution time?
You have two problems:
The estimates on conversations are woefully wrong:
ANALYZE conversations;
You should index the selective full text search condition:
CREATE INDEX ON messages USING gin (tsv_message_content);
If ANALYZE (even with raised default_statistics_target) doesn't improve the mis-estimate, it is probably caused by correlation between the columns.
Try extended statistics to improve that:
CREATE STATISTICS conversations_stats (dependencies)
ON project_id, deleted, status FROM conversations;
A subsequent ANALYZE should improve the estimate.

Why does postgres do a table scan instead of using my index?

I'm working with the HackerNews dataset in Postgres. There are about 17M rows about 14.5M of them are comments and about 2.5M are stories. There is a very active user named "rbanffy" who has 25k submissions, about equal split stories/comments. Both "by" and "type" have separate indices.
I have a query:
SELECT *
FROM "hn_items"
WHERE by = 'rbanffy'
and type = 'story'
ORDER BY id DESC
LIMIT 20 OFFSET 0
That runs quickly (it's using the 'by' index). If I change the type to "comment" then it's very slow. From the explain, it doesn't use either of those indexes and instead scans the table in primary-key order.
Limit (cost=0.56..56948.32 rows=20 width=1937)
-> Index Scan using hn_items_pkey on hn_items (cost=0.56..45823012.32 rows=16093 width=1937)
Filter: (((by)::text = 'rbanffy'::text) AND ((type)::text = 'comment'::text))
If I change the query to have type||''='comment', then it will use the 'by' index and execute quickly.
Why is this happening? I understand from https://stackoverflow.com/a/309814/214545 that having to do a hack like this implies something is wrong. But I don't know what.
EDIT:
This is the explain for the type='story'
Limit (cost=72553.07..72553.12 rows=20 width=1255)
-> Sort (cost=72553.07..72561.25 rows=3271 width=1255)
Sort Key: id DESC
-> Bitmap Heap Scan on hn_items (cost=814.59..72466.03 rows=3271 width=1255)
Recheck Cond: ((by)::text = 'rbanffy'::text)
Filter: ((type)::text = 'story'::text)
-> Bitmap Index Scan on hn_items_by_index (cost=0.00..813.77 rows=19361 width=0)
Index Cond: ((by)::text = 'rbanffy'::text)
EDIT:
EXPLAIN (ANALYZE,BUFFERS)
Limit (cost=0.56..59510.10 rows=20 width=1255) (actual time=20.856..545.282 rows=20 loops=1)
Buffers: shared hit=21597 read=2658 dirtied=32
-> Index Scan using hn_items_pkey on hn_items (cost=0.56..47780210.70 rows=16058 width=1255) (actual time=20.855..545.271 rows=20 loops=1)
Filter: (((by)::text = 'rbanffy'::text) AND ((type)::text = 'comment'::text))
Rows Removed by Filter: 46798
Buffers: shared hit=21597 read=2658 dirtied=32
Planning time: 0.173 ms
Execution time: 545.318 ms
EDIT: EXPLAIN (ANALYZE,BUFFERS) of type='story'
Limit (cost=72553.07..72553.12 rows=20 width=1255) (actual time=44.121..44.127 rows=20 loops=1)
Buffers: shared hit=20137
-> Sort (cost=72553.07..72561.25 rows=3271 width=1255) (actual time=44.120..44.123 rows=20 loops=1)
Sort Key: id DESC
Sort Method: top-N heapsort Memory: 42kB
Buffers: shared hit=20137
-> Bitmap Heap Scan on hn_items (cost=814.59..72466.03 rows=3271 width=1255) (actual time=6.778..37.774 rows=11630 loops=1)
Recheck Cond: ((by)::text = 'rbanffy'::text)
Filter: ((type)::text = 'story'::text)
Rows Removed by Filter: 12587
Heap Blocks: exact=19985
Buffers: shared hit=20137
-> Bitmap Index Scan on hn_items_by_index (cost=0.00..813.77 rows=19361 width=0) (actual time=3.812..3.812 rows=24387 loops=1)
Index Cond: ((by)::text = 'rbanffy'::text)
Buffers: shared hit=152
Planning time: 0.156 ms
Execution time: 44.422 ms
EDIT: latest test results
I was playing around with the type='comment' query and noticed that if I changed the limit to a higher number, like 100, it used the 'by' index. I played with the values until I found the critical number was 47: with a limit of 47 the 'by' index was used; with a limit of 46 it was a full scan. I assume that number isn't magical, it just happens to be the threshold for my dataset or some other variable I don't know. I don't know if this helps.
Since there are many comments by rbanffy, PostgreSQL assumes that it will be fast enough if it searches the table in the order implied by the ORDER BY clause (which can use the primary key index) until it has found 20 rows that match the search condition.
Unfortunately it happens that the user has grown lazy lately; at any rate, PostgreSQL has to scan the 46798 highest ids until it has found its 20 hits. (You really shouldn't have removed the Backwards; that confused me.)
The best way to work around that is to confuse PostgreSQL so that it doesn't choose the primary key index, perhaps like this:
SELECT *
FROM (SELECT * FROM hn_items
WHERE by = 'rbanffy'
AND type = 'comment'
OFFSET 0) q
ORDER BY id DESC
LIMIT 20;