I have a complex query generated by Hibernate for JBPM. I can't really modify it, and I'm trying to optimize it as much as possible.
I found out that ORDER BY ... DESC is way slower than ORDER BY ... ASC. Do you have any idea why?
PostgreSQL version: 9.4
Schema: https://pastebin.com/qNZhrbef
Query:
select
taskinstan0_.ID_ as ID1_27_,
taskinstan0_.VERSION_ as VERSION3_27_,
taskinstan0_.NAME_ as NAME4_27_,
taskinstan0_.DESCRIPTION_ as DESCRIPT5_27_,
taskinstan0_.ACTORID_ as ACTORID6_27_,
taskinstan0_.CREATE_ as CREATE7_27_,
taskinstan0_.START_ as START8_27_,
taskinstan0_.END_ as END9_27_,
taskinstan0_.DUEDATE_ as DUEDATE10_27_,
taskinstan0_.PRIORITY_ as PRIORITY11_27_,
taskinstan0_.ISCANCELLED_ as ISCANCE12_27_,
taskinstan0_.ISSUSPENDED_ as ISSUSPE13_27_,
taskinstan0_.ISOPEN_ as ISOPEN14_27_,
taskinstan0_.ISSIGNALLING_ as ISSIGNA15_27_,
taskinstan0_.ISBLOCKING_ as ISBLOCKING16_27_,
taskinstan0_.LOCKED as LOCKED27_,
taskinstan0_.QUEUE as QUEUE27_,
taskinstan0_.TASK_ as TASK19_27_,
taskinstan0_.TOKEN_ as TOKEN20_27_,
taskinstan0_.PROCINST_ as PROCINST21_27_,
taskinstan0_.SWIMLANINSTANCE_ as SWIMLAN22_27_,
taskinstan0_.TASKMGMTINSTANCE_ as TASKMGM23_27_
from JBPM_TASKINSTANCE taskinstan0_, JBPM_VARIABLEINSTANCE stringinst1_, JBPM_PROCESSINSTANCE processins2_, JBPM_VARIABLEINSTANCE variablein3_
where stringinst1_.CLASS_='S'
and taskinstan0_.PROCINST_=processins2_.ID_
and taskinstan0_.ID_=variablein3_.TASKINSTANCE_
and variablein3_.NAME_ = 'NIR'
and taskinstan0_.QUEUE = 'ERT_TPS'
and (processins2_.ORGAPATH_ like '/ERT%')
and taskinstan0_.ISOPEN_= 't'
and variablein3_.ID_=stringinst1_.ID_
order by stringinst1_.STRINGVALUE_ ASC limit '10';
Explain result for ASC:
Limit (cost=1.71..11652.93 rows=10 width=646) (actual time=6.588..82.407 rows=10 loops=1)
-> Nested Loop (cost=1.71..6215929.27 rows=5335 width=646) (actual time=6.587..82.402 rows=10 loops=1)
-> Nested Loop (cost=1.29..6213170.78 rows=5335 width=646) (actual time=6.578..82.363 rows=10 loops=1)
-> Nested Loop (cost=1.00..6159814.66 rows=153812 width=13) (actual time=0.537..82.130 rows=149 loops=1)
-> Index Scan Backward using totoidx10 on jbpm_variableinstance stringinst1_ (cost=0.56..558481.07 rows=11199905 width=13) (actual time=0.018..11.914 rows=40182 loops=1)
Filter: (class_ = 'S'::bpchar)
-> Index Scan using jbpm_variableinstance_pkey on jbpm_variableinstance variablein3_ (cost=0.43..0.49 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=40182)
Index Cond: (id_ = stringinst1_.id_)
Filter: ((name_)::text = 'NIR'::text)
Rows Removed by Filter: 1
-> Index Scan using jbpm_taskinstance_pkey on jbpm_taskinstance taskinstan0_ (cost=0.29..0.34 rows=1 width=641) (actual time=0.001..0.001 rows=0 loops=149)
Index Cond: (id_ = variablein3_.taskinstance_)
Filter: (isopen_ AND ((queue)::text = 'ERT_TPS'::text))
Rows Removed by Filter: 0
-> Index Only Scan using idx_procin_2 on jbpm_processinstance processins2_ (cost=0.42..0.51 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=10)
Index Cond: (id_ = taskinstan0_.procinst_)
Filter: ((orgapath_)::text ~~ '/ERT%'::text)
Heap Fetches: 0
Planning time: 2.598 ms
Execution time: 82.513 ms
Explain result for DESC:
Limit (cost=1.71..11652.93 rows=10 width=646) (actual time=8144.871..8144.986 rows=10 loops=1)
-> Nested Loop (cost=1.71..6215929.27 rows=5335 width=646) (actual time=8144.870..8144.984 rows=10 loops=1)
-> Nested Loop (cost=1.29..6213170.78 rows=5335 width=646) (actual time=8144.858..8144.951 rows=10 loops=1)
-> Nested Loop (cost=1.00..6159814.66 rows=153812 width=13) (actual time=8144.838..8144.910 rows=20 loops=1)
-> Index Scan using totoidx10 on jbpm_variableinstance stringinst1_ (cost=0.56..558481.07 rows=11199905 width=13) (actual time=0.066..2351.727 rows=2619671 loops=1)
Filter: (class_ = 'S'::bpchar)
Rows Removed by Filter: 906237
-> Index Scan using jbpm_variableinstance_pkey on jbpm_variableinstance variablein3_ (cost=0.43..0.49 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=2619671)
Index Cond: (id_ = stringinst1_.id_)
Filter: ((name_)::text = 'NIR'::text)
Rows Removed by Filter: 1
-> Index Scan using jbpm_taskinstance_pkey on jbpm_taskinstance taskinstan0_ (cost=0.29..0.34 rows=1 width=641) (actual time=0.002..0.002 rows=0 loops=20)
Index Cond: (id_ = variablein3_.taskinstance_)
Filter: (isopen_ AND ((queue)::text = 'ERT_TPS'::text))
-> Index Only Scan using idx_procin_2 on jbpm_processinstance processins2_ (cost=0.42..0.51 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=10)
Index Cond: (id_ = taskinstan0_.procinst_)
Filter: ((orgapath_)::text ~~ '/ERT%'::text)
Heap Fetches: 0
Planning time: 2.080 ms
Execution time: 8145.053 ms
Table row counts:
jbpm_variableinstance 12100592 rows
jbpm_taskinstance 69913 rows
jbpm_processinstance 97546 rows
If you have any ideas, thanks.
This typically only happens when OFFSET and/or LIMIT are involved (as is the case here).
The key difference is this line in the EXPLAIN output for the query with DESC:
Rows Removed by Filter: 906237
Meaning that while the first 10 rows in the index totoidx10 match when scanning backwards (which matches your ASC ordering, obviously), Postgres has to filter ~ 900k rows before it finally finds qualifying rows when scanning the same index forward.
A matching multicolumn index (with the right sort order) might help a lot.
Or, since Postgres chooses an unfavorable query plan, maybe just updated (or more detailed) table statistics or adjusted cost settings will help.
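For illustration, a hedged sketch of both ideas; the index name, the partial-index predicate, and the statistics target are my assumptions, not from the original:
-- An index matching the filter and the sort column, so a scan in either
-- direction returns pre-filtered rows already in STRINGVALUE_ order:
CREATE INDEX jbpm_varinst_s_stringvalue_idx
    ON jbpm_variableinstance (stringvalue_)
    WHERE class_ = 'S';
-- More detailed statistics on the sort column, then refresh them:
ALTER TABLE jbpm_variableinstance
    ALTER COLUMN stringvalue_ SET STATISTICS 1000;
ANALYZE jbpm_variableinstance;
Since a b-tree index can be scanned in both directions, the same index serves the ASC and the DESC variant of the query.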
Related:
Keep PostgreSQL from sometimes choosing a bad query plan
Optimizing queries on a range of timestamps (two columns)
Indexes on table:
create index shifts_start_at_idx
on shifts (start_at);
Query 1 with at time zone:
SELECT shifts.id
FROM shifts
JOIN stores ON shifts.store_id = stores.id AND stores.deleted_at IS NULL
JOIN cities ON stores.city_id = cities.id
WHERE TRUE
AND (shifts.start_at >= '2022-05-06 03:00:00'::timestamp AT TIME ZONE
(EXTRACT(timezone FROM cities.time_zone) * INTERVAL '1 second'))
ORDER BY shifts.start_at DESC, shifts.end_at DESC, shifts.id DESC
LIMIT 100;
Explain query 1:
Limit (cost=0.86..298.93 rows=100 width=24) (actual time=0.143..25.257 rows=100 loops=1)
-> Nested Loop (cost=0.86..1485256.59 rows=498300 width=24) (actual time=0.131..23.317 rows=100 loops=1)
Join Filter: (shifts.start_at >= timezone((date_part('timezone'::text, cities.time_zone) * '00:00:01'::interval), '2022-05-06 03:00:00'::timestamp without time zone))
-> Nested Loop (cost=0.72..1209695.67 rows=1494900 width=32) (actual time=0.096..17.621 rows=100 loops=1)
-> Index Scan Backward using shifts_admin_order_by_idx on shifts (cost=0.43..291132.79 rows=3000000 width=32) (actual time=0.036..6.780 rows=205 loops=1)
-> Index Scan using stores_id_deleted_at_null_idx on stores (cost=0.29..0.31 rows=1 width=16) (actual time=0.025..0.025 rows=0 loops=205)
Index Cond: (id = shifts.store_id)
-> Index Scan using cities_pkey on cities (cost=0.14..0.16 rows=1 width=20) (actual time=0.017..0.017 rows=1 loops=100)
Index Cond: (id = stores.city_id)
Planning Time: 0.632 ms
Execution Time: 26.436 ms
Postgres doesn't use an index condition here.
Query 2 without at time zone:
SELECT shifts.id
FROM shifts
JOIN stores ON shifts.store_id = stores.id AND stores.deleted_at IS NULL
JOIN cities ON stores.city_id = cities.id
WHERE TRUE
AND (shifts.start_at >= '2022-05-06 03:00:00')
ORDER BY shifts.start_at DESC, shifts.end_at DESC, shifts.id DESC
LIMIT 100;
Explain query 2:
Limit (cost=0.86..108.84 rows=100 width=24) (actual time=0.125..8.866 rows=100 loops=1)
-> Nested Loop (cost=0.86..898691.17 rows=832261 width=24) (actual time=0.115..7.886 rows=100 loops=1)
-> Nested Loop (cost=0.72..761958.37 rows=832261 width=32) (actual time=0.066..5.570 rows=100 loops=1)
-> Index Scan Backward using shifts_admin_order_by_idx on shifts (cost=0.43..248984.02 rows=1670200 width=32) (actual time=0.014..1.380 rows=205 loops=1)
Index Cond: (start_at >= '2022-05-06 03:00:00+00'::timestamp with time zone)
-> Index Scan using stores_id_deleted_at_null_idx on stores (cost=0.29..0.31 rows=1 width=16) (actual time=0.008..0.008 rows=0 loops=205)
Index Cond: (id = shifts.store_id)
-> Index Only Scan using cities_pkey on cities (cost=0.14..0.16 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=100)
Index Cond: (id = stores.city_id)
Heap Fetches: 100
Planning Time: 0.327 ms
Execution Time: 9.394 ms
It is not entirely clear to me why it does not want to use the index when the comparison time is converted with AT TIME ZONE.
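A plausible reading of the plan (my interpretation, not stated in the original): in query 1 the comparison value depends on cities.time_zone, a column of a joined row, so it cannot serve as a constant boundary for the index scan of shifts and is evaluated as a join filter instead. A hedged workaround sketch; the 14-hour bound is an assumption about the widest possible UTC offset:
SELECT shifts.id
FROM shifts
JOIN stores ON shifts.store_id = stores.id AND stores.deleted_at IS NULL
JOIN cities ON stores.city_id = cities.id
WHERE TRUE
  -- constant lower bound, usable as an index condition:
  AND shifts.start_at >= '2022-05-06 03:00:00'::timestamp AT TIME ZONE 'UTC' - INTERVAL '14 hours'
  -- exact per-city condition, re-checked as a join filter:
  AND shifts.start_at >= '2022-05-06 03:00:00'::timestamp AT TIME ZONE
      (EXTRACT(timezone FROM cities.time_zone) * INTERVAL '1 second')
ORDER BY shifts.start_at DESC, shifts.end_at DESC, shifts.id DESC
LIMIT 100;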
Table: Customer
Type:
telephone1 | character varying(255)
telephone2 | character varying(255)
location_id | integer
Indexes:
"idx_customers_location_id" btree (location_id)
"idx_customers_telephone1_txt" btree (telephone1 text_pattern_ops)
"idx_customers_trim_telephone_1" btree (btrim(telephone1::text))
"idx_customers_trim_telephone2" btree (btrim(telephone2::text))
I have a table called customers with 141182 rows in total. I was checking the values in two columns (telephone1, telephone2): every row has data in telephone1, but only 8 rows have a value in the column telephone2.
When I check for the value '1', I get the execution plan and time below.
SELECT customers.id, location_id, telephone1, telephone2
FROM "customers"
INNER JOIN "locations" ON "locations"."id" = "customers"."location_id"
WHERE (customers.location_id = 189 AND (telephone1 = '1' OR telephone2 = '1'))
GROUP BY customers.id
LIMIT 20 OFFSET 0;
Limit (cost=519.62..519.64 rows=4 width=125) (actual time=25.895..25.898 rows=1 loops=1)
-> GroupAggregate (cost=519.62..519.64 rows=4 width=125) (actual time=25.893..25.896 rows=1 loops=1)
Group Key: customers.id
-> Sort (cost=519.62..519.62 rows=4 width=127) (actual time=25.876..25.879 rows=1 loops=1)
Sort Key: customers.id
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=8.62..519.61 rows=4 width=127) (actual time=10.740..25.869 rows=1 loops=1)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.027..0.029 rows=1 loops=1)
Index Cond: (id = 189)
-> Bitmap Heap Scan on customers (cost=8.56..515.54 rows=4 width=61) (actual time=10.707..25.832 rows=1 loops=1)
Recheck Cond: (((telephone1)::text = '1'::text) OR ((telephone2)::text = '1'::text))
Filter: (location_id = 189)
Rows Removed by Filter: 1048
Heap Blocks: exact=1737
-> BitmapOr (cost=8.56..8.56 rows=259 width=0) (actual time=3.445..3.446 rows=0 loops=1)
-> Bitmap Index Scan on idx_customers_telephone1_txt (cost=0.00..2.10 rows=7 width=0) (actual time=0.065..0.066 rows=99 loops=1)
Index Cond: ((telephone1)::text = '1'::text)
-> Bitmap Index Scan on idx_customers_telephone2_txt (cost=0.00..6.47 rows=253 width=0) (actual time=3.378..3.378 rows=1664 loops=1)
Index Cond: ((telephone2)::text = '1'::text)
Planning Time: 0.419 ms
Execution Time: 25.995 ms
When I check for the value '0', there is a huge change in the execution time (7753.216 ms):
Limit (cost=0.14..2440.90 rows=10 width=125) (actual time=5900.924..7753.133 rows=4 loops=1)
-> GroupAggregate (cost=0.14..292402.20 rows=1198 width=125) (actual time=5900.922..7753.129 rows=4 loops=1)
Group Key: customers.id
-> Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
-> Index Scan using customers_pkey on customers (cost=0.09..292387.36 rows=1198 width=61) (actual time=4350.338..7753.054 rows=4 loops=1)
Filter: ((location_id = 189) AND (((telephone1)::text = '0'::text) OR ((telephone2)::text = '0'::text)))
Rows Removed by Filter: 8484280
-> Materialize (cost=0.06..4.06 rows=1 width=70) (actual time=0.005..0.005 rows=1 loops=4)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.013..0.013 rows=1 loops=1)
Index Cond: (id = 189)
Planning Time: 0.322 ms
Execution Time: 7753.216 ms
Is there any particular reason why it takes more time to execute for the value '0'? Or is anything wrong here?
One more thing I have noticed: this issue happens only with the column telephone2.
"but only 8 rows have value for the column telephone2"
Your explain plan indicates otherwise, finding 1664 rows with one specific value for telephone2. Now maybe most of those are not visible, but in that case you really need to VACUUM ANALYZE the table.
Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
With this second query, it thinks it will find 1198 rows (if run to completion). But it thinks it can stop after the first 20, so that would be 20 / 1198 ≈ 1.67% of the index. But instead there are only 4 matching rows, so it unexpectedly has to scan the entire index without getting to stop early.
Why are the estimates off by so much? I don't know; it could just be stale statistics (again, VACUUM ANALYZE the table), or there could be some interrelation between the columns that makes the estimation hard even with accurate statistics.
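A hedged sketch of both remedies; the statistics object name is an assumption, and CREATE STATISTICS requires PostgreSQL 10 or later:
-- Refresh visibility information and planner statistics:
VACUUM ANALYZE customers;
-- If the misestimate comes from correlated columns, extended statistics
-- can teach the planner about the dependency:
CREATE STATISTICS customers_loc_tel_stats (dependencies)
    ON location_id, telephone2 FROM customers;
ANALYZE customers;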
What is the point in joining to locations at all?
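Since no columns of locations appear in the output and customers.id is unique, here is a hedged sketch of the query without that join and without the GROUP BY (my rewrite, assuming location_id always references an existing location):
SELECT customers.id, location_id, telephone1, telephone2
FROM customers
WHERE customers.location_id = 189
  AND (telephone1 = '0' OR telephone2 = '0')
LIMIT 20 OFFSET 0;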
I have the following two tables.
person_addresses
address_normalization
The person_addresses table has a field named address_id as the primary key, and address_normalization has the corresponding field address_id, which has an index on it.
Now, when I explain the following query, I see a sequential scan.
SELECT
count(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1550692189468;
-- Result: 2654
(The full execution plan is shown in Update #2 below.)
You can see that there is a sequential scan after the hash join. I'm not sure I understand this part; why would a sequential scan be used together with a hash join?
And as seen in the query above, the set of records returned is also small.
Is this expected behaviour, or am I doing something wrong?
Update #1: I also have indexes on the sr_modification_time fields of both tables.
Update #2: Full execution plan
Aggregate (cost=206944.74..206944.75 rows=1 width=0) (actual time=2807.844..2807.844 rows=1 loops=1)
Buffers: shared hit=4629 read=82217
-> Hash Join (cost=2881.95..206825.15 rows=47836 width=0) (actual time=0.775..2807.160 rows=2654 loops=1)
Hash Cond: (pa.address_id = an.address_id)
Buffers: shared hit=4629 read=82217
-> Seq Scan on person_addresses pa (cost=0.00..135924.93 rows=4911993 width=8) (actual time=0.005..1374.610 rows=4911993 loops=1)
Buffers: shared hit=4588 read=82217
-> Hash (cost=2432.05..2432.05 rows=35992 width=18) (actual time=0.756..0.756 rows=1005 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 41kB
Buffers: shared hit=41
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..2432.05 rows=35992 width=18) (actual time=0.012..0.424 rows=1005 loops=1)
Index Cond: (sr_modification_time >= 1550692189468::bigint)
Buffers: shared hit=41
Planning time: 0.244 ms
Execution time: 2807.885 ms
Update #3: I tried with a newer timestamp and it used an index scan.
EXPLAIN (
ANALYZE
, buffers
, format TEXT
) SELECT
COUNT(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1557507300342;
-- count: 1364
Query Plan:
Aggregate (cost=295.48..295.49 rows=1 width=0) (actual time=2.770..2.770 rows=1 loops=1)
Buffers: shared hit=1404
-> Nested Loop (cost=4.89..295.43 rows=19 width=0) (actual time=0.038..2.491 rows=1364 loops=1)
Buffers: shared hit=1404
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..8.82 rows=14 width=18) (actual time=0.009..0.142 rows=341 loops=1)
Index Cond: (sr_modification_time >= 1557507300342::bigint)
Buffers: shared hit=14
-> Bitmap Heap Scan on person_addresses pa (cost=4.46..20.43 rows=4 width=8) (actual time=0.004..0.005 rows=4 loops=341)
Recheck Cond: (address_id = an.address_id)
Heap Blocks: exact=360
Buffers: shared hit=1390
-> Bitmap Index Scan on idx_mp_member2_person_addresses_address_id (cost=0.00..4.46 rows=4 width=0) (actual time=0.003..0.003 rows=4 loops=341)
Index Cond: (address_id = an.address_id)
Buffers: shared hit=1030
Planning time: 0.214 ms
Execution time: 2.816 ms
That is the expected behavior: the planner expects many rows (35992) to match the sr_modification_time condition, so it builds a hash from them, and then it has to scan the whole person_addresses table to check each row against that hash.
You should create:
an index on (sr_modification_time)
or a composite index on (address_id, sr_modification_time), as sketched below.
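A hedged rendering of those suggestions as SQL (the index names are my assumptions; note that Update #1 says an index on sr_modification_time already exists, so only the composite one would be new):
CREATE INDEX idx_an_mod_time
    ON mp_member2.address_normalization (sr_modification_time);
-- or the composite variant:
CREATE INDEX idx_an_addr_mod_time
    ON mp_member2.address_normalization (address_id, sr_modification_time);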
I'm using Postgres v9.6.5. I have a query which doesn't seem that complicated, and I was wondering why it is so "slow" (it's not really that slow, but I don't have a lot of data anyway, just a few thousand rows).
Here is the query:
SELECT o0.*
FROM "orders" AS o0
JOIN "balances" AS b1 ON b1."id" = o0."balance_id"
JOIN "users" AS u3 ON u3."id" = b1."user_id"
WHERE (u3."partner_id" = 3)
ORDER BY o0."id" DESC LIMIT 10;
And that's the query plan:
Limit (cost=0.43..12.84 rows=10 width=148) (actual time=0.062..53.866 rows=4 loops=1)
-> Nested Loop (cost=0.43..4750.03 rows=3826 width=148) (actual time=0.061..53.864 rows=4 loops=1)
Join Filter: (b1.user_id = u3.id)
Rows Removed by Join Filter: 67404
-> Nested Loop (cost=0.43..3945.32 rows=17856 width=152) (actual time=0.025..38.457 rows=16852 loops=1)
-> Index Scan Backward using orders_pkey on orders o0 (cost=0.29..897.80 rows=17856 width=148) (actual time=0.016..11.558 rows=16852 loops=1)
-> Index Scan using balances_pkey on balances b1 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=16852)
Index Cond: (id = o0.balance_id)
-> Materialize (cost=0.00..1.19 rows=3 width=4) (actual time=0.000..0.000 rows=4 loops=16852)
-> Seq Scan on users u3 (cost=0.00..1.18 rows=3 width=4) (actual time=0.023..0.030 rows=4 loops=1)
Filter: (partner_id = 3)
Rows Removed by Filter: 12
Planning time: 0.780 ms
Execution time: 54.053 ms
I actually tried it without LIMIT and got a quite different plan:
Sort (cost=874.23..883.80 rows=3826 width=148) (actual time=11.361..11.362 rows=4 loops=1)
Sort Key: o0.id DESC
Sort Method: quicksort Memory: 26kB
-> Hash Join (cost=3.77..646.55 rows=3826 width=148) (actual time=11.300..11.346 rows=4 loops=1)
Hash Cond: (o0.balance_id = b1.id)
-> Seq Scan on orders o0 (cost=0.00..537.56 rows=17856 width=148) (actual time=0.012..8.464 rows=16852 loops=1)
-> Hash (cost=3.55..3.55 rows=18 width=4) (actual time=0.125..0.125 rows=24 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Hash Join (cost=1.21..3.55 rows=18 width=4) (actual time=0.046..0.089 rows=24 loops=1)
Hash Cond: (b1.user_id = u3.id)
-> Seq Scan on balances b1 (cost=0.00..1.84 rows=84 width=8) (actual time=0.011..0.029 rows=96 loops=1)
-> Hash (cost=1.18..1.18 rows=3 width=4) (actual time=0.028..0.028 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on users u3 (cost=0.00..1.18 rows=3 width=4) (actual time=0.014..0.021 rows=4 loops=1)
Filter: (partner_id = 3)
Rows Removed by Filter: 12
Planning time: 0.569 ms
Execution time: 11.420 ms
And also without WHERE (but with LIMIT):
Limit (cost=0.43..4.74 rows=10 width=148) (actual time=0.023..0.066 rows=10 loops=1)
-> Nested Loop (cost=0.43..7696.26 rows=17856 width=148) (actual time=0.022..0.065 rows=10 loops=1)
Join Filter: (b1.user_id = u3.id)
Rows Removed by Join Filter: 139
-> Nested Loop (cost=0.43..3945.32 rows=17856 width=152) (actual time=0.009..0.029 rows=10 loops=1)
-> Index Scan Backward using orders_pkey on orders o0 (cost=0.29..897.80 rows=17856 width=148) (actual time=0.007..0.015 rows=10 loops=1)
-> Index Scan using balances_pkey on balances b1 (cost=0.14..0.16 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=10)
Index Cond: (id = o0.balance_id)
-> Materialize (cost=0.00..1.21 rows=14 width=4) (actual time=0.001..0.001 rows=15 loops=10)
-> Seq Scan on users u3 (cost=0.00..1.14 rows=14 width=4) (actual time=0.005..0.007 rows=16 loops=1)
Planning time: 0.286 ms
Execution time: 0.097 ms
As you can see, without WHERE it's much faster. Can someone point me to information that would help me understand these plans better? And also, what can I do to make these queries faster? (Or shouldn't I worry, because with, say, 100 times more data they would still be fast enough? 50 ms is fine for me, to be honest.)
PostgreSQL thinks that it will be fastest if it scans orders in the correct order until it finds a matching users entry that satisfies the WHERE condition.
However, it seems that the data distribution is such that it has to scan almost 17000 orders before it finds a match.
Since PostgreSQL doesn't know how values correlate across tables, there is nothing much you can do to change that.
You can force PostgreSQL to plan the query without the LIMIT clause like this:
SELECT *
FROM (<your query without ORDER BY and LIMIT> OFFSET 0) q
ORDER BY id DESC LIMIT 10;
With a top-N-sort this should perform better.
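Concretely, with the query from the question, that would look like the following sketch; the subquery's OFFSET 0 acts as an optimization fence that keeps the planner from pushing the LIMIT down into the join:
SELECT *
FROM (SELECT o0.*
      FROM "orders" AS o0
      JOIN "balances" AS b1 ON b1."id" = o0."balance_id"
      JOIN "users" AS u3 ON u3."id" = b1."user_id"
      WHERE u3."partner_id" = 3
      OFFSET 0) q
ORDER BY q."id" DESC
LIMIT 10;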
Trying to select the users with the most "followed_by", joining to posts to filter by "tag". Both tables have millions of records. Using DISTINCT to select only unique users.
select distinct u.*
from users u join posts p
on u.id=p.user_id
where p.tags #> ARRAY['love']
order by u.followed_by desc nulls last limit 21
It runs for over 16 s, seemingly because of the DISTINCT causing a seq scan over 6+ million rows. Here is the EXPLAIN ANALYZE output:
Limit (cost=15509958.30..15509959.09 rows=21 width=292) (actual time=16882.861..16883.753 rows=21 loops=1)
-> Unique (cost=15509958.30..15595560.30 rows=2282720 width=292) (actual time=16882.859..16883.749 rows=21 loops=1)
-> Sort (cost=15509958.30..15515665.10 rows=2282720 width=292) (actual time=16882.857..16883.424 rows=525 loops=1)
Sort Key: u.followed_by DESC NULLS LAST, u.id, u.username, u.fullname, u.follows, u.media, u.profile_pic_url_hd, u.is_private, u.is_verified, u.biography, u.external_url, u.updated, u.location_id, u.final_post
Sort Method: external merge Disk: 583064kB
-> Gather (cost=1000.57..14956785.06 rows=2282720 width=292) (actual time=0.377..11506.001 rows=1680890 loops=1)
Workers Planned: 9
Workers Launched: 9
-> Nested Loop (cost=0.57..14727513.06 rows=253636 width=292) (actual time=1.013..12031.634 rows=168089 loops=10)
-> Parallel Seq Scan on posts p (cost=0.00..13187797.79 rows=253636 width=8) (actual time=0.940..10872.630 rows=168089 loops=10)
Filter: (tags #> '{love}'::text[])
Rows Removed by Filter: 6251355
-> Index Scan using user_pk on users u (cost=0.57..6.06 rows=1 width=292) (actual time=0.006..0.006 rows=1 loops=1680890)
Index Cond: (id = p.user_id)
Planning time: 1.276 ms
Execution time: 16964.271 ms
Would appreciate tips on how to make this fast.
Update
Thanks to @a_horse_with_no_name, for the 'love' tag it became really fast:
Limit (cost=1.14..4293986.91 rows=21 width=292) (actual time=1.735..31.613 rows=21 loops=1)
-> Nested Loop Semi Join (cost=1.14..10959887484.70 rows=53600 width=292) (actual time=1.733..31.607 rows=21 loops=1)
-> Index Scan using idx_followed_by on users u (cost=0.57..322693786.19 rows=232404560 width=292) (actual time=0.011..0.103 rows=32 loops=1)
-> Index Scan using fki_user_fk1 on posts p (cost=0.57..1943.85 rows=43 width=8) (actual time=0.983..0.983 rows=1 loops=32)
Index Cond: (user_id = u.id)
Filter: (tags #> '{love}'::text[])
Rows Removed by Filter: 1699
Planning time: 1.322 ms
Execution time: 31.656 ms
However, for some other tags like 'beautiful' it's better than before, but still a little slow. It also takes a different execution path:
Limit (cost=3893365.84..3893365.89 rows=21 width=292) (actual time=2813.876..2813.892 rows=21 loops=1)
-> Sort (cost=3893365.84..3893499.84 rows=53600 width=292) (actual time=2813.874..2813.887 rows=21 loops=1)
Sort Key: u.followed_by DESC NULLS LAST
Sort Method: top-N heapsort Memory: 34kB
-> Nested Loop (cost=3437011.27..3891920.70 rows=53600 width=292) (actual time=1130.847..2779.928 rows=35230 loops=1)
-> HashAggregate (cost=3437010.70..3437546.70 rows=53600 width=8) (actual time=1130.809..1148.209 rows=35230 loops=1)
Group Key: p.user_id
-> Bitmap Heap Scan on posts p (cost=10484.20..3434173.21 rows=1134993 width=8) (actual time=268.602..972.390 rows=814919 loops=1)
Recheck Cond: (tags #> '{beautiful}'::text[])
Heap Blocks: exact=347002
-> Bitmap Index Scan on idx_tags (cost=0.00..10200.45 rows=1134993 width=0) (actual time=168.453..168.453 rows=814919 loops=1)
Index Cond: (tags #> '{beautiful}'::text[])
-> Index Scan using user_pk on users u (cost=0.57..8.47 rows=1 width=292) (actual time=0.045..0.046 rows=1 loops=35230)
Index Cond: (id = p.user_id)
Planning time: 1.388 ms
Execution time: 2814.132 ms
I already had a GIN index on 'tags' in place.
This should be faster:
select *
from users u
where exists (select *
from posts p
where u.id=p.user_id
and p.tags #> ARRAY['love'])
order by u.followed_by desc nulls last
limit 21;
If only a few (<10%) of the posts have that tag, an index on posts.tags should help as well:
create index on posts using gin (tags);