I have this very slow query:
SELECT DISTINCT et.id
FROM elementtype et
where et.id = any
(SELECT elementtypeid
FROM
(SELECT ic.elementtypeid
FROM
(SELECT categoryid
FROM issue
WHERE clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'
AND deleteddate IS NULL
GROUP BY categoryid) i
JOIN issuecategory ic ON ic.id = i.categoryid
UNION SELECT tc.elementtypeid
FROM
(SELECT categoryid
FROM task
WHERE clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'
AND deleteddate IS NULL
GROUP BY categoryid) t
JOIN taskcategory tc ON tc.id = t.categoryid) icc)
I have tried replacing the ANY operator with IN, and making a join instead of IN (in line 3 of the query), but it is still very slow when the result is not cached.
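The join variant looked roughly like this (a sketch; since UNION already removes duplicates, the inner GROUP BYs are folded into plain joins):
SELECT DISTINCT et.id
FROM elementtype et
JOIN (SELECT ic.elementtypeid
      FROM issue i
      JOIN issuecategory ic ON ic.id = i.categoryid
      WHERE i.clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'
        AND i.deleteddate IS NULL
      UNION
      SELECT tc.elementtypeid
      FROM task t
      JOIN taskcategory tc ON tc.id = t.categoryid
      WHERE t.clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'
        AND t.deleteddate IS NULL) icc ON icc.elementtypeid = et.id;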
I think the nested loop might be causing the problem - but I don't know if I can get rid of it - and I don't understand why it involves et only.
As you can see, I use a couple of indexes (_idx) and of course primary keys on every table.
the elementtype table has ~6000 rows
the issue sub-query with these conditions (without the group by) returns ~33000 rows
the task sub-query with these conditions (without the group by) returns ~148000 rows
Is there any way to optimize the query?
EDIT:
As requested by @a_horse_with_no_name, I've added the query plan, generated with the command they suggested:
QUERY PLAN
Unique (cost=473976.82..474453.63 rows=4453 width=16) (actual time=69897.728..69897.737 rows=1 loops=1)
Buffers: shared hit=61346 read=19651
-> Merge Join (cost=473976.82..474442.49 rows=4453 width=16) (actual time=69897.724..69897.731 rows=1 loops=1)
Merge Cond: (et.id = ic.elementtypeid)
Buffers: shared hit=61346 read=19651
-> Index Only Scan using elementtype_pkey on elementtype et (cost=0.28..384.47 rows=5879 width=16) (actual time=0.021..32.618 rows=1784 loops=1)
Heap Fetches: 1784
Buffers: shared hit=1699 read=54
-> Sort (cost=473976.54..473987.67 rows=4453 width=16) (actual time=69863.461..69863.464 rows=1 loops=1)
Sort Key: ic.elementtypeid
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=59647 read=19597
-> HashAggregate (cost=473617.61..473662.14 rows=4453 width=16) (actual time=69863.432..69863.436 rows=1 loops=1)
Group Key: ic.elementtypeid
Buffers: shared hit=59647 read=19597
-> Append (cost=107927.43..473606.48 rows=4453 width=16) (actual time=114.259..69863.317 rows=55 loops=1)
Buffers: shared hit=59647 read=19597
-> Hash Join (cost=107927.43..109170.43 rows=3625 width=16) (actual time=114.257..208.716 rows=46 loops=1)
Hash Cond: (ic.id = issue.categoryid)
Buffers: shared hit=15431
-> Seq Scan on issuecategory ic (cost=0.00..1100.36 rows=54336 width=32) (actual time=0.011..47.327 rows=54336 loops=1)
Buffers: shared hit=557
-> Hash (cost=107882.12..107882.12 rows=3625 width=16) (actual time=113.850..113.850 rows=46 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 35kB
Buffers: shared hit=14874
-> HashAggregate (cost=107809.62..107845.87 rows=3625 width=16) (actual time=113.738..113.795 rows=46 loops=1)
Group Key: issue.categoryid
Buffers: shared hit=14874
-> Bitmap Heap Scan on issue (cost=1801.41..107730.88 rows=31493 width=16) (actual time=7.279..81.266 rows=33670 loops=1)
Recheck Cond: (clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'::uuid)
Filter: (deleteddate IS NULL)
Rows Removed by Filter: 1362
Heap Blocks: exact=14636
Buffers: shared hit=14874
-> Bitmap Index Scan on issue_clientid_ix (cost=0.00..1793.54 rows=32681 width=0) (actual time=5.165..5.166 rows=35064 loops=1)
Index Cond: (clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'::uuid)
Buffers: shared hit=238
-> Nested Loop (cost=360635.19..364391.52 rows=828 width=16) (actual time=69603.779..69654.505 rows=9 loops=1)
Buffers: shared hit=44216 read=19597
-> HashAggregate (cost=360634.78..360643.06 rows=828 width=16) (actual time=69592.635..69592.657 rows=9 loops=1)
Group Key: task.categoryid
Buffers: shared hit=44198 read=19579
-> Bitmap Heap Scan on task (cost=3438.67..360280.46 rows=141728 width=16) (actual time=33.283..69416.182 rows=147931 loops=1)
Recheck Cond: (clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'::uuid)
Filter: (deleteddate IS NULL)
Rows Removed by Filter: 2329
Heap Blocks: exact=63193
Buffers: shared hit=44198 read=19579
-> Bitmap Index Scan on task_clientid_ix (cost=0.00..3403.24 rows=148091 width=0) (actual time=20.865..20.866 rows=150975 loops=1)
Index Cond: (clientid = '833e1f2f-ff44-4aca-bd12-0e4f67969a11'::uuid)
Buffers: shared hit=584
-> Index Scan using taskcategory_pkey on taskcategory tc (cost=0.42..4.52 rows=1 width=32) (actual time=6.865..6.865 rows=1 loops=9)
Index Cond: (id = task.categoryid)
Buffers: shared hit=18 read=18
Planning time: 1.173 ms
Execution time: 69899.380 ms
EDIT2:
issuecategory has indexes on id, clientid, elementtypeid
issue has indexes on clientid, deleteddate and categoryid
taskcategory has indexes on id, clientid, elementtypeid
task has indexes on clientid, id, deleteddate and categoryid
The problem is the bitmap heap scans. They seem to be jumping to a lot of different parts of the disk to fetch the data they need.
The best solution is probably to create indexes on (clientid, categoryid, deleteddate) on each table, or maybe (clientid, categoryid) where deleteddate is null. This will allow those bitmap heap scans to be replaced with index-only scans (assuming your tables are vacuumed well enough).
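For example (the index names here are illustrative):
CREATE INDEX issue_clientid_categoryid_ix ON issue (clientid, categoryid, deleteddate);
CREATE INDEX task_clientid_categoryid_ix ON task (clientid, categoryid, deleteddate);
-- or the partial variant, covering only live rows:
CREATE INDEX issue_clientid_categoryid_live_ix ON issue (clientid, categoryid) WHERE deleteddate IS NULL;
CREATE INDEX task_clientid_categoryid_live_ix ON task (clientid, categoryid) WHERE deleteddate IS NULL;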
Other approaches would be to CLUSTER the tables so that rows with the same clientid are physically grouped together, or increase effective_io_concurrency so more IO can be done at the same time (assuming your storage system has multiple spindles in RAID/JBOD, or whatever the SSD equivalent to that is).
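Sketches of those two approaches, using the existing index names from the plan (note that CLUSTER rewrites the table under an exclusive lock, and the setting value below is only a placeholder to tune for your storage):
CLUSTER issue USING issue_clientid_ix;
CLUSTER task USING task_clientid_ix;
SET effective_io_concurrency = 16;  -- placeholder; raise for RAID/SSD storage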
PostgreSQL 14.6 on x86_64-pc-linux-gnu, compiled by gcc, 64-bit
I have an organizations table and a (much smaller) partner_members table that associates some organizations with a partner_member_id.
There is also a convenience view to list organizations with their (potential) partner IDs, defined like this:
select
o.id,
o.name,
o.email,
o.created,
p.member_id AS partner_member_id
from organizations o
left join partner_members p on o.id = p.organization_id
However, this leads to an admin query that queries this view ending up like this:
select count(*) OVER (),"id","name","email","created"
from (
select
o.id,
o.name,
o.email,
o.created,
p.member_id AS partner_member_id
from organizations o
left join partner_members p on o.id = p.organization_id
) _
where ("name" ilike '%example#example.com%')
or ("email" ilike '%example#example.com%')
or ("partner_member_id" ilike '%example#example.com%')
or ("id" ilike '%example#example.com%')
order by "created" desc
offset 0 limit 50;
… which is super slow, since the partner_member_id constraint isn't “pushed down” into the subquery, which means the filtering happens way too late.
Is there a way to make a query such as this efficient, or is this convenience view a no-go here?
Here is the plan:
Limit (cost=12842.32..12848.77 rows=50 width=74) (actual time=2344.828..2385.234 rows=0 loops=1)
Buffers: shared hit=5246, temp read=3088 written=3120
-> WindowAgg (cost=12842.32..12853.80 rows=89 width=74) (actual time=2344.826..2385.232 rows=0 loops=1)
Buffers: shared hit=5246, temp read=3088 written=3120
-> Gather Merge (cost=12842.32..12852.69 rows=89 width=66) (actual time=2344.822..2385.226 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=5246, temp read=3088 written=3120
-> Sort (cost=11842.30..11842.39 rows=37 width=66) (actual time=2322.988..2323.050 rows=0 loops=3)
Sort Key: o.created DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=5246, temp read=3088 written=3120
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
-> Parallel Hash Left Join (cost=3368.61..11841.33 rows=37 width=66) (actual time=2322.857..2322.917 rows=0 loops=3)
Hash Cond: ((o.id)::text = p.organization_id)
Filter: (((o.name)::text ~~* '%example@example.com%'::text) OR ((o.email)::text ~~* '%example@example.com%'::text) OR (p.member_id ~~* '%example@example.com%'::text) OR ((o.id)::text ~~* '%example@example.com%'::text))
Rows Removed by Filter: 73800
Buffers: shared hit=5172, temp read=3088 written=3120
-> Parallel Seq Scan on organizations o (cost=0.00..4813.65 rows=92365 width=66) (actual time=0.020..200.111 rows=73800 loops=3)
Buffers: shared hit=3890
-> Parallel Hash (cost=1926.05..1926.05 rows=71005 width=34) (actual time=108.608..108.610 rows=40150 loops=3)
Buckets: 32768 Batches: 4 Memory Usage: 2432kB
Buffers: shared hit=1216, temp written=620
-> Parallel Seq Scan on partner_members p (cost=0.00..1926.05 rows=71005 width=34) (actual time=0.028..43.757 rows=40150 loops=3)
Buffers: shared hit=1216
Planning:
Buffers: shared hit=24
Planning Time: 1.837 ms
Execution Time: 2385.319 ms
I have a PostgreSQL database that I cloned.
Database 1 has varchar(36) as primary keys
Database 2 (the clone) has UUID as primary keys.
Both contain the same data. What I don't understand is why queries on Database 1 will use the index but Database 2 will not. Here's the query:
EXPLAIN (ANALYZE, BUFFERS)
select * from table1
INNER JOIN table2 on table1.id = table2.table1_id
where table1.id in (
'541edffc-7179-42db-8c99-727be8c9ffec',
'eaac06d3-e44e-4e4a-8e11-1cdc6e562996'
);
Database 1
Nested Loop (cost=16.13..7234.96 rows=14 width=803) (actual time=0.072..0.112 rows=8 loops=1)
Buffers: shared hit=23
-> Index Scan using table1_pk on table1 (cost=0.56..17.15 rows=2 width=540) (actual time=0.042..0.054 rows=2 loops=1)
" Index Cond: ((id)::text = ANY ('{541edffc-7179-42db-8c99-727be8c9ffec,eaac06d3-e44e-4e4a-8e11-1cdc6e562996}'::text[]))"
Buffers: shared hit=12
-> Bitmap Heap Scan on table2 (cost=15.57..3599.86 rows=904 width=263) (actual time=0.022..0.023 rows=4 loops=2)
Recheck Cond: ((table1_id)::text = (table1.id)::text)
Heap Blocks: exact=3
Buffers: shared hit=11
-> Bitmap Index Scan on table2_table1_id_fk (cost=0.00..15.34 rows=904 width=0) (actual time=0.019..0.019 rows=4 loops=2)
Index Cond: ((table1_id)::text = (table1.id)::text)
Buffers: shared hit=8
Planning:
Buffers: shared hit=416
Planning Time: 1.869 ms
Execution Time: 0.330 ms
Database 2
Gather (cost=1000.57..1801008.91 rows=14 width=740) (actual time=11.580..42863.893 rows=8 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=863 read=631539 dirtied=631979 written=2523
-> Nested Loop (cost=0.56..1800007.51 rows=6 width=740) (actual time=28573.119..42856.696 rows=3 loops=3)
Buffers: shared hit=863 read=631539 dirtied=631979 written=2523
-> Parallel Seq Scan on table1 (cost=0.00..678896.46 rows=1 width=519) (actual time=28573.112..42855.524 rows=1 loops=3)
" Filter: (id = ANY ('{541edffc-7179-42db-8c99-727be8c9ffec,eaac06d3-e44e-4e4a-8e11-1cdc6e562996}'::uuid[]))"
Rows Removed by Filter: 2976413
Buffers: shared hit=854 read=631536 dirtied=631979 written=2523
-> Index Scan using table2_table1_id_fk on table2 (cost=0.56..1117908.70 rows=320236 width=221) (actual time=1.736..1.745 rows=4 loops=2)
Index Cond: (table1_id = table1.id)
Buffers: shared hit=9 read=3
Planning:
Buffers: shared hit=376 read=15
Planning Time: 43.594 ms
Execution Time: 42864.044 ms
Some notes:
The query is orders of magnitude faster in Database 1
Having only one ID in the WHERE clause activates the index in both databases
Casting to ::uuid has no impact
I understand that these results occur because the query planner calculates that the cost of using the index in the UUID case (Database 2) is too high. But I'm trying to understand why it thinks that, and whether there's something I can do.
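One way to see what the planner estimates for the index path is to disable sequential scans for the session and re-run the EXPLAIN (purely a diagnostic, not a fix):
SET enable_seqscan = off;  -- session-local, diagnostic only
EXPLAIN (ANALYZE, BUFFERS)
select * from table1
INNER JOIN table2 on table1.id = table2.table1_id
where table1.id in (
    '541edffc-7179-42db-8c99-727be8c9ffec',
    'eaac06d3-e44e-4e4a-8e11-1cdc6e562996'
);
RESET enable_seqscan;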
So I have identical databases with identical sets of data; one is on a production instance with 4 GB RAM, the other is in a Docker image.
I'm running the same query:
select * from product p
inner join product_manufacturers pm on pm.product_id = p.id
inner join manufacturers m on pm.manufacturer_id = m.id
inner join brand b on p.brand_id = b.id
inner join tags t on t.product_id = p.id
inner join groups g on g.manufacturer_id = pm.id
inner join group_options gp on g.id = gp.group_id
inner join icons i on i.product_id = p.id
where pm.enabled = true
and pm.available = true
and pm.launched = true
and p.available = true
and p.enabled = true
and p.id in (49, 77, 6, 12, 36)
order by b.id
Locally, the query executes in 4 seconds and returns 120k rows.
But on production the execution time is 8 minutes, and the number of returned rows is the same (as the tables are identical).
What can be the issue? As I already said, the database structures are the same (same indexes etc.) and they hold identical data, so why is the execution time so different? Also, when I run the query on production directly through psql and store the result to a file, Postgres executes the query instantly but writing the data to the file takes 8 minutes... what can be wrong with my instance?
I know there are a lot of unknowns here, but that's all I have at the moment.
Local explain:
Nested Loop (cost=5.83..213.82 rows=1399 width=1820) (actual time=4.861..2352.400 rows=119503 loops=1)
Join Filter: (pm.product_id = t.product_id)
Buffers: shared hit=42646 read=85
-> Nested Loop (cost=5.55..107.33 rows=202 width=1797) (actual time=4.548..280.433 rows=12743 loops=1)
Buffers: shared hit=4420 read=82
-> Nested Loop (cost=5.26..81.32 rows=30 width=1763) (actual time=3.508..44.939 rows=950 loops=1)
Buffers: shared hit=931 read=38
-> Nested Loop (cost=4.98..77.05 rows=7 width=1697) (actual time=2.768..23.855 rows=166 loops=1)
Buffers: shared hit=436 read=35
-> Nested Loop (cost=4.71..75.14 rows=4 width=1677) (actual time=2.196..15.537 rows=83 loops=1)
Buffers: shared hit=195 read=26
-> Nested Loop (cost=4.56..74.36 rows=4 width=1399) (actual time=2.155..13.079 rows=83 loops=1)
Buffers: shared hit=30 read=25
-> Nested Loop (cost=0.27..40.11 rows=3 width=646) (actual time=1.803..3.986 rows=5 loops=1)
Join Filter: (p.brand_id = b.id)
Rows Removed by Join Filter: 158
Buffers: shared hit=17 read=4
-> Index Scan using product_pkey on product p (cost=0.27..34.93 rows=3 width=407) (actual time=0.830..1.995 rows=5 loops=1)
Index Cond: (id = ANY ('{49,77,6,12,36}'::integer[]))
Filter: (available AND enabled)
Buffers: shared hit=16 read=2
-> Materialize (cost=0.00..3.57 rows=38 width=239) (actual time=0.066..0.254 rows=33 loops=5)
Buffers: shared hit=1 read=2
-> Seq Scan on brand b (cost=0.00..3.38 rows=38 width=239) (actual time=0.298..0.461 rows=40 loops=1)
Buffers: shared hit=1 read=2
-> Bitmap Heap Scan on product_manufacturers pm (cost=4.29..11.41 rows=1 width=753) (actual time=0.674..1.615 rows=17 loops=5)
Recheck Cond: (product_id = p.id)
Filter: (available AND launched)
Rows Removed by Filter: 5
Heap Blocks: exact=21
Buffers: shared hit=13 read=21
-> Bitmap Index Scan on idx_e093f35a40e556f5 (cost=0.00..4.29 rows=2 width=0) (actual time=0.019..0.020 rows=27 loops=5)
Index Cond: (product_id = p.id)
Buffers: shared hit=9 read=1
-> Index Scan using manufacturer_pkey on manufacturers m (cost=0.14..0.20 rows=1 width=278) (actual time=0.011..0.011 rows=1 loops=83)
Index Cond: (id = pm.manufacturer_id)
Filter: enabled
Buffers: shared hit=165 read=1
-> Index Scan using idx_237d25c5d3e1ebb8 on groups g (cost=0.28..0.46 rows=2 width=20) (actual time=0.055..0.067 rows=2 loops=83)
Index Cond: (manufacturer_id = pm.id)
Buffers: shared hit=241 read=9
-> Index Scan using idx_4a5244b740e556f5 on icons i (cost=0.28..0.57 rows=4 width=66) (actual time=0.021..0.053 rows=6 loops=166)
Index Cond: (product_id = pm.product_id)
Buffers: shared hit=495 read=3
-> Index Scan using group_options_unique_idx on group_options gp (cost=0.29..0.80 rows=7 width=34) (actual time=0.023..0.104 rows=13 loops=950)
Index Cond: (group_id = g.id)
Buffers: shared hit=3489 read=44
-> Index Scan using idx_7edc9c5340e556f5 on tags t (cost=0.28..0.45 rows=6 width=19) (actual time=0.008..0.057 rows=9 loops=12743)
Index Cond: (product_id = i.product_id)
Buffers: shared hit=38226 read=3
Planning Time: 65.124 ms
Execution Time: 2926.918 ms
and here is the production explain:
Nested Loop (cost=1.81..201.75 rows=1344 width=1760) (actual time=0.170..115.850 rows=119503 loops=1)
Join Filter: (pm.product_id = t.product_id)
Buffers: shared hit=43045
-> Nested Loop (cost=1.53..88.32 rows=211 width=1737) (actual time=0.145..13.585 rows=12743 loops=1)
Buffers: shared hit=4816
-> Nested Loop (cost=1.25..64.37 rows=29 width=1703) (actual time=0.120..2.492 rows=950 loops=1)
Buffers: shared hit=954
-> Nested Loop (cost=0.97..60.22 rows=7 width=1637) (actual time=0.103..1.468 rows=166 loops=1)
Buffers: shared hit=456
-> Nested Loop (cost=0.69..58.32 rows=4 width=1617) (actual time=0.084..0.950 rows=83 loops=1)
Join Filter: (p.brand_id = b.id)
Rows Removed by Join Filter: 598
Buffers: shared hit=206
-> Nested Loop (cost=0.69..53.64 rows=4 width=1411) (actual time=0.072..0.705 rows=83 loops=1)
Buffers: shared hit=205
-> Nested Loop (cost=0.55..52.62 rows=5 width=1153) (actual time=0.058..0.338 rows=83 loops=1)
Buffers: shared hit=39
-> Index Scan using product_pkey on product p (cost=0.27..22.60 rows=4 width=403) (actual time=0.039..0.072 rows=5 loops=1)
Index Cond: (id = ANY ('{49,77,6,12,36}'::integer[]))
Filter: (available AND enabled)
Buffers: shared hit=16
-> Index Scan using idx_e093f35a40e556f5 on product_manufacturers pm (cost=0.28..7.50 rows=1 width=750) (actual time=0.012..0.043 rows=17 loops=5)
Index Cond: (product_id = p.id)
Filter: (available AND launched)
Rows Removed by Filter: 5
Buffers: shared hit=23
-> Index Scan using manufacturer_pkey on manufacturer m (cost=0.14..0.20 rows=1 width=258) (actual time=0.003..0.003 rows=1 loops=83)
Index Cond: (id = pm.manufacturer_id)
Filter: enabled
Buffers: shared hit=166
-> Materialize (cost=0.00..2.56 rows=37 width=206) (actual time=0.000..0.001 rows=8 loops=83)
Buffers: shared hit=1
-> Seq Scan on brand b (cost=0.00..2.37 rows=37 width=206) (actual time=0.006..0.012 rows=27 loops=1)
Buffers: shared hit=1
-> Index Scan using idx_237d25c5d3e1ebb8 on groups g (cost=0.28..0.45 rows=2 width=20) (actual time=0.003..0.004 rows=2 loops=83)
Index Cond: (manufacturer_id = pm.id)
Buffers: shared hit=250
-> Index Scan using idx_4a5244b740e556f5 on icons i (cost=0.28..0.55 rows=4 width=66) (actual time=0.002..0.003 rows=6 loops=166)
Index Cond: (product_id = pm.product_id)
Buffers: shared hit=498
-> Index Scan using group_options_unique_idx on group_options gp (cost=0.29..0.76 rows=7 width=34) (actual time=0.003..0.007 rows=13 loops=950)
Index Cond: (group_id = g.id)
Buffers: shared hit=3862
-> Index Scan using idx_7edc9c5340e556f5 on tags t (cost=0.28..0.46 rows=6 width=19) (actual time=0.003..0.004 rows=9 loops=12743)
Index Cond: (product_id = i.product_id)
Buffers: shared hit=38229
Planning Time: 11.001 ms
Execution Time: 129.339 ms
Regarding indexes: I have not added any specific indexes yet. This query runs in the background and I hadn't tried to optimize it, as 3-4 seconds was acceptable. Here are all the indexes I have:
CREATE UNIQUE INDEX product_pkey ON product USING btree (id);
CREATE INDEX idx_2645e26644f5d008 ON product USING btree (brand_id);
CREATE UNIQUE INDEX brand_pkey ON brand USING btree (id);
CREATE UNIQUE INDEX icon_pkey ON icons USING btree (id);
CREATE INDEX idx_4a5244b740e556f5 ON icons USING btree (product_id);
CREATE UNIQUE INDEX tags_pkey ON tags USING btree (id);
CREATE INDEX idx_7edc9c5340e556f5 ON tags USING btree (product_id);
CREATE INDEX tag_value_index ON tags USING btree (value);
CREATE UNIQUE INDEX manufacturer_pkey ON manufacturers USING btree (id);
CREATE UNIQUE INDEX groups_pkey ON groups USING btree (id);
CREATE INDEX idx_237d25c5d3e1ebb8 ON groups USING btree (manufacturer_id);
CREATE UNIQUE INDEX group_options_pkey ON group_options USING btree (id);
CREATE INDEX idx_2a964c28de23a8e3 ON group_options USING btree (group_id);
CREATE UNIQUE INDEX group_options_unique_idx ON group_options USING btree (group_id, option_id);
CREATE UNIQUE INDEX product_manufacturer_pkey ON product_manufacturers USING btree (id);
CREATE INDEX idx_e093f35a40e556f5 ON product_manufacturers USING btree (product_id);
CREATE UNIQUE INDEX manufacturer_unique_idx ON product_manufacturers USING btree (manufacturer_id, product_id);
CREATE INDEX idx_e093f35ad3e1ebb8 ON product_manufacturers USING btree (manufacturer_id);
I have the following two tables.
person_addresses
address_normalization
The person_addresses table has a field named address_id as the primary key and address_normalization has the corresponding field address_id which has an index on it.
Now, when I explain the following query, I see a sequential scan.
SELECT
count(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1550692189468;
-- Result: 2654
Please refer to the execution plan (the full plan is in Update #2 below).
You can see that there is a sequential scan below the hash join. I'm not sure I understand this part; why would a sequential scan feed a hash join?
And as seen in the query above, the set of records returned is also low.
Is this expected behaviour or am I doing something wrong?
Update #1: I also have indexes on the sr_modification_time fields of both tables.
Update #2: Full execution plan
Aggregate (cost=206944.74..206944.75 rows=1 width=0) (actual time=2807.844..2807.844 rows=1 loops=1)
Buffers: shared hit=4629 read=82217
-> Hash Join (cost=2881.95..206825.15 rows=47836 width=0) (actual time=0.775..2807.160 rows=2654 loops=1)
Hash Cond: (pa.address_id = an.address_id)
Buffers: shared hit=4629 read=82217
-> Seq Scan on person_addresses pa (cost=0.00..135924.93 rows=4911993 width=8) (actual time=0.005..1374.610 rows=4911993 loops=1)
Buffers: shared hit=4588 read=82217
-> Hash (cost=2432.05..2432.05 rows=35992 width=18) (actual time=0.756..0.756 rows=1005 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 41kB
Buffers: shared hit=41
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..2432.05 rows=35992 width=18) (actual time=0.012..0.424 rows=1005 loops=1)
Index Cond: (sr_modification_time >= 1550692189468::bigint)
Buffers: shared hit=41
Planning time: 0.244 ms
Execution time: 2807.885 ms
Update #3: I tried with a newer timestamp and it used an index scan.
EXPLAIN (
ANALYZE
, buffers
, format TEXT
) SELECT
COUNT(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1557507300342;
-- count: 1364
Query Plan:
Aggregate (cost=295.48..295.49 rows=1 width=0) (actual time=2.770..2.770 rows=1 loops=1)
Buffers: shared hit=1404
-> Nested Loop (cost=4.89..295.43 rows=19 width=0) (actual time=0.038..2.491 rows=1364 loops=1)
Buffers: shared hit=1404
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..8.82 rows=14 width=18) (actual time=0.009..0.142 rows=341 loops=1)
Index Cond: (sr_modification_time >= 1557507300342::bigint)
Buffers: shared hit=14
-> Bitmap Heap Scan on person_addresses pa (cost=4.46..20.43 rows=4 width=8) (actual time=0.004..0.005 rows=4 loops=341)
Recheck Cond: (address_id = an.address_id)
Heap Blocks: exact=360
Buffers: shared hit=1390
-> Bitmap Index Scan on idx_mp_member2_person_addresses_address_id (cost=0.00..4.46 rows=4 width=0) (actual time=0.003..0.003 rows=4 loops=341)
Index Cond: (address_id = an.address_id)
Buffers: shared hit=1030
Planning time: 0.214 ms
Execution time: 2.816 ms
That is the expected behavior, because you don't have an index for sr_modification_time: after building the hash, the database has to scan the whole set to check each row for the sr_modification_time value.
You should create:
an index on (sr_modification_time)
or a composite index on (address_id, sr_modification_time)
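In SQL (the index names are illustrative; the composite is assumed to go on address_normalization, the table being filtered):
CREATE INDEX address_normalization_mod_time_idx
    ON mp_member2.address_normalization (sr_modification_time);
-- or:
CREATE INDEX address_normalization_addr_mod_idx
    ON mp_member2.address_normalization (address_id, sr_modification_time);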
I'm using postgres 10, and have the following query:
select
count(task.id) over() as _total_ ,
json_agg(u.*) as users,
task.*
from task
left outer join taskuserlink_history tu on (task.id = tu.taskid)
left outer join "user" u on (tu.userId = u.id)
group by task.id offset 10 limit 10;
this query takes approx 800ms to execute
if I remove the count(task.id) over() as _total_ line, then it executes in 250ms
I have to confess being a complete sql noob, so the query itself may be completely borked
I was wondering if anyone could point to the flaws in the query, and make suggestions on how to speed it up.
The number of tasks is approx 15k, with an average of 5 users per task, linked through taskuserlink
I have looked at the pgadmin "explain" diagram
but to be honest can't really figure it out yet ;)
the table definitions are
task , with id (int) as primary column
taskuserlink_history, with taskId (int) and userId (int) (both as foreign key constraints, indexed)
user, with id (int) as primary column
the query plan is as follows
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=4.74..12.49 rows=10 width=44) (actual time=1178.016..1178.043 rows=10 loops=1)
Buffers: shared hit=3731, temp read=6655 written=6914
-> WindowAgg (cost=4.74..10248.90 rows=13231 width=44) (actual time=1178.014..1178.040 rows=10 loops=1)
Buffers: shared hit=3731, temp read=6655 written=6914
-> GroupAggregate (cost=4.74..10083.51 rows=13231 width=36) (actual time=0.417..1049.294 rows=13255 loops=1)
Group Key: task.id
Buffers: shared hit=3731
-> Nested Loop Left Join (cost=4.74..9586.77 rows=66271 width=36) (actual time=0.103..309.372 rows=66162 loops=1)
Join Filter: (taskuserlink_history.userid = user_archive.id)
Rows Removed by Join Filter: 1182904
Buffers: shared hit=3731
-> Merge Left Join (cost=0.58..5563.22 rows=66271 width=8) (actual time=0.044..73.598 rows=66162 loops=1)
Merge Cond: (task.id = taskuserlink_history.taskid)
Buffers: shared hit=3629
-> Index Only Scan using task_pkey on task (cost=0.29..1938.30 rows=13231 width=4) (actual time=0.026..7.683 rows=13255 loops=1)
Heap Fetches: 13255
Buffers: shared hit=1810
-> Index Scan using taskuserlink_history_task_fk_idx on taskuserlink_history (cost=0.29..2764.46 rows=66271 width=8) (actual time=0.015..40.109 rows=66162 loops=1)
Filter: (timeend IS NULL)
Rows Removed by Filter: 13368
Buffers: shared hit=1819
-> Materialize (cost=4.17..50.46 rows=4 width=36) (actual time=0.000..0.001 rows=19 loops=66162)
Buffers: shared hit=102
-> Bitmap Heap Scan on user_archive (cost=4.17..50.44 rows=4 width=36) (actual time=0.050..0.305 rows=45 loops=1)
Recheck Cond: (archived_at IS NULL)
Heap Blocks: exact=11
Buffers: shared hit=102
-> Bitmap Index Scan on user_unique_username (cost=0.00..4.16 rows=4 width=0) (actual time=0.014..0.014 rows=46 loops=1)
Buffers: shared hit=1
SubPlan 1
-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=45)
Buffers: shared hit=90
-> Index Scan using task_assignedto_idx on task task_1 (cost=0.29..8.30 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=45)
Index Cond: (assignedtoid = user_archive.id)
Buffers: shared hit=90
Planning time: 0.989 ms
Execution time: 1191.451 ms
(37 rows)
without the window function it is
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=4.74..12.36 rows=10 width=36) (actual time=0.510..1.763 rows=10 loops=1)
Buffers: shared hit=91
-> GroupAggregate (cost=4.74..10083.51 rows=13231 width=36) (actual time=0.509..1.759 rows=10 loops=1)
Group Key: task.id
Buffers: shared hit=91
-> Nested Loop Left Join (cost=4.74..9586.77 rows=66271 width=36) (actual time=0.073..0.744 rows=50 loops=1)
Join Filter: (taskuserlink_history.userid = user_archive.id)
Rows Removed by Join Filter: 361
Buffers: shared hit=91
-> Merge Left Join (cost=0.58..5563.22 rows=66271 width=8) (actual time=0.029..0.161 rows=50 loops=1)
Merge Cond: (task.id = taskuserlink_history.taskid)
Buffers: shared hit=7
-> Index Only Scan using task_pkey on task (cost=0.29..1938.30 rows=13231 width=4) (actual time=0.016..0.031 rows=11 loops=1)
Heap Fetches: 11
Buffers: shared hit=4
-> Index Scan using taskuserlink_history_task_fk_idx on taskuserlink_history (cost=0.29..2764.46 rows=66271 width=8) (actual time=0.009..0.081 rows=50 loops=1)
Filter: (timeend IS NULL)
Rows Removed by Filter: 11
Buffers: shared hit=3
-> Materialize (cost=4.17..50.46 rows=4 width=36) (actual time=0.001..0.009 rows=8 loops=50)
Buffers: shared hit=84
-> Bitmap Heap Scan on user_archive (cost=4.17..50.44 rows=4 width=36) (actual time=0.040..0.382 rows=38 loops=1)
Recheck Cond: (archived_at IS NULL)
Heap Blocks: exact=7
Buffers: shared hit=84
-> Bitmap Index Scan on user_unique_username (cost=0.00..4.16 rows=4 width=0) (actual time=0.012..0.012 rows=46 loops=1)
Buffers: shared hit=1
SubPlan 1
-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=38)
Buffers: shared hit=76
-> Index Scan using task_assignedto_idx on task task_1 (cost=0.29..8.30 rows=1 width=4) (actual time=0.003..0.003 rows=0 loops=38)
Index Cond: (assignedtoid = user_archive.id)
Buffers: shared hit=76
Planning time: 0.895 ms
Execution time: 1.890 ms
(35 rows)
I believe the LIMIT clause is making the difference. LIMIT limits the number of rows returned, not necessarily the work involved:
Your second query can be aborted early after 20 rows have been constructed (10 for OFFSET and 10 for LIMIT).
However, your first query needs to go through the whole set to calculate the count(task.id).
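A common workaround (assuming the total doesn't have to arrive in the same result set) is to fetch it once in a separate statement, so the paged query can stop early:
select count(*) as _total_ from task;

select json_agg(u.*) as users, task.*
from task
left outer join taskuserlink_history tu on (task.id = tu.taskid)
left outer join "user" u on (tu.userId = u.id)
group by task.id offset 10 limit 10;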
Not what you were asking, but I'll say it anyway:
"user" is not a table, but a view. That is where both queries actually get slower than they should be (the "Materialize" in the plan).
Using OFFSET for paging is asking for trouble, because it gets slow as the OFFSET increases; see the keyset sketch below.
Using OFFSET and LIMIT without an ORDER BY is most likely not what you want. The result sets might not be identical on consecutive calls.
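For example, a keyset-paging sketch of your query (using task.id as the paging key is my assumption; replace 0 with the last id seen on the previous page):
select json_agg(u.*) as users, task.*
from task
left outer join taskuserlink_history tu on (task.id = tu.taskid)
left outer join "user" u on (tu.userId = u.id)
where task.id > 0  -- last task.id of the previous page
group by task.id
order by task.id
limit 10;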