I am a newbie to database optimisation. My table has around 29 million rows and the query below takes about 13 seconds. What can I do to improve performance?
The "Properties" column is an int array, and I created a GIN index on F."Properties".
SELECT
    F."Id",
    F."Name",
    F."Url",
    F."CountryModel",
    F."Properties",
    F."PageRank",
    F."IsVerify",
    count(*) AS Counter
FROM
    public."Firms" F,
    LATERAL unnest(F."Properties") AS P
WHERE
    F."CountryId" = 1
    AND P = ANY (ARRAY[126,128])
    AND "Properties" && ARRAY[126,128]
    AND F."Deleted" = FALSE
GROUP BY
    F."Id"
ORDER BY
    F."IsVerify" DESC,
    Counter DESC,
    F."PageRank" DESC
OFFSET 0 ROWS FETCH FIRST 100 ROWS ONLY
That's my EXPLAIN ANALYZE query plan:
"Limit (cost=801718.65..801718.70 rows=20 width=368) (actual time=12671.277..12674.826 rows=20 loops=1)"
" -> Sort (cost=801718.65..802180.37 rows=184689 width=368) (actual time=12671.276..12674.824 rows=20 loops=1)"
" Sort Key: f.""IsVerify"" DESC, (count(*)) DESC, f.""PageRank"" DESC"
" Sort Method: top-N heapsort Memory: 47kB"
" -> GroupAggregate (cost=763260.63..796804.14 rows=184689 width=368) (actual time=12284.752..12592.010 rows=201352 loops=1)"
" Group Key: f.""Id"""
" -> Nested Loop (cost=763260.63..793110.36 rows=369378 width=360) (actual time=12284.734..12488.106 rows=205124 loops=1)"
" -> Gather Merge (cost=763260.62..784770.69 rows=184689 width=360) (actual time=12284.716..12389.961 rows=201352 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Sort (cost=762260.59..762452.98 rows=76954 width=360) (actual time=12258.175..12309.931 rows=67117 loops=3)"
" Sort Key: f.""Id"""
" Sort Method: external merge Disk: 35432kB"
" Worker 0: Sort Method: external merge Disk: 35536kB"
" Worker 1: Sort Method: external merge Disk: 35416kB"
" -> Parallel Bitmap Heap Scan on ""Firms"" f (cost=1731.34..743387.12 rows=76954 width=360) (actual time=57.500..12167.222 rows=67117 loops=3)"
" Recheck Cond: (""Properties"" && '{126,128}'::integer[])"
" Rows Removed by Index Recheck: 356198"
" Filter: ((NOT ""Deleted"") AND (""CountryId"" = 1))"
" Heap Blocks: exact=17412 lossy=47209"
" -> Bitmap Index Scan on ix_properties_gin (cost=0.00..1685.17 rows=184689 width=0) (actual time=61.628..61.628 rows=201354 loops=1)"
" Index Cond: (""Properties"" && '{126,128}'::integer[])"
" -> Memoize (cost=0.01..0.14 rows=2 width=0) (actual time=0.000..0.000 rows=1 loops=201352)"
" Cache Key: f.""Properties"""
" Hits: 179814 Misses: 21538 Evictions: 0 Overflows: 0 Memory Usage: 3076kB"
" -> Function Scan on unnest p (cost=0.00..0.13 rows=2 width=0) (actual time=0.001..0.001 rows=1 loops=21538)"
" Filter: (p = ANY ('{126,128}'::integer[]))"
" Rows Removed by Filter: 6"
"Planning Time: 2.542 ms"
"Execution Time: 12675.382 ms"
That's the EXPLAIN (ANALYZE, BUFFERS) result:
"Limit (cost=793826.15..793826.20 rows=20 width=100) (actual time=12879.468..12882.414 rows=20 loops=1)"
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" -> Sort (cost=793826.15..794287.87 rows=184689 width=100) (actual time=12879.468..12882.412 rows=20 loops=1)"
" Sort Key: f.""IsVerify"" DESC, (count(*)) DESC, f.""PageRank"" DESC"
" Sort Method: top-N heapsort Memory: 29kB"
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" -> GroupAggregate (cost=755368.13..788911.64 rows=184689 width=100) (actual time=12623.980..12845.122 rows=201352 loops=1)"
" Group Key: f.""Id"""
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" -> Nested Loop (cost=755368.13..785217.86 rows=369378 width=92) (actual time=12623.971..12785.946 rows=205124 loops=1)"
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" -> Gather Merge (cost=755368.12..776878.19 rows=184689 width=120) (actual time=12623.945..12680.899 rows=201352 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" -> Sort (cost=754368.09..754560.48 rows=76954 width=120) (actual time=12613.425..12624.658 rows=67117 loops=3)"
" Sort Key: f.""Id"""
" Sort Method: external merge Disk: 9848kB"
" Buffers: shared hit=108 read=194121 written=1, temp read=3685 written=3697"
" Worker 0: Sort Method: external merge Disk: 9824kB"
" Worker 1: Sort Method: external merge Disk: 9808kB"
" -> Parallel Bitmap Heap Scan on ""Firms"" f (cost=1731.34..743387.12 rows=76954 width=120) (actual time=42.098..12567.883 rows=67117 loops=3)"
" Recheck Cond: (""Properties"" && '{126,128}'::integer[])"
" Rows Removed by Index Recheck: 356198"
" Filter: ((NOT ""Deleted"") AND (""CountryId"" = 1))"
" Heap Blocks: exact=17323 lossy=47429"
" Buffers: shared hit=97 read=194118 written=1"
" -> Bitmap Index Scan on ix_properties_gin (cost=0.00..1685.17 rows=184689 width=0) (actual time=41.862..41.862 rows=201354 loops=1)"
" Index Cond: (""Properties"" && '{126,128}'::integer[])"
" Buffers: shared hit=4 read=74"
" -> Memoize (cost=0.01..0.14 rows=2 width=0) (actual time=0.000..0.000 rows=1 loops=201352)"
" Cache Key: f.""Properties"""
" Hits: 179814 Misses: 21538 Evictions: 0 Overflows: 0 Memory Usage: 3076kB"
" -> Function Scan on unnest p (cost=0.00..0.13 rows=2 width=0) (actual time=0.001..0.001 rows=1 loops=21538)"
" Filter: (p = ANY ('{126,128}'::integer[]))"
" Rows Removed by Filter: 6"
"Planning:"
" Buffers: shared hit=32 read=6 dirtied=1"
"Planning Time: 4.533 ms"
"Execution Time: 12883.604 ms"
You should increase work_mem to get rid of the lossy pages in the bitmap. I don't think this will make a big difference, because I suspect most of your time is going to read the pages from disk, and converting lossy pages to exact pages doesn't change how many pages get read (unless TOAST is involved, which I suspect is not--how large does the "Properties" array get?). But I might be wrong, so try it and see. Also, if you turn on track_io_timing and collect your plans with EXPLAIN (ANALYZE, BUFFERS), then we could immediately see if the IO read time was the problem.
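For example, something along these lines (a sketch: the work_mem value is just a starting point to experiment with, and track_io_timing normally requires superuser rights or a server-level setting):

SET track_io_timing = on;   -- makes EXPLAIN (ANALYZE, BUFFERS) report I/O read/write time
SET work_mem = '256MB';     -- raise until the "lossy" heap blocks in the bitmap scan disappear

EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;                -- the query above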
Beyond that, this looks very hard to optimize with traditional methods. You can usually optimize ORDER BY...LIMIT by using an index to read rows already in order, but since the 2nd column in your ordering is computed dynamically, this is unlikely here. Are values within "Properties" unique? So can 126 and 128 each exist and be counted at most once per row, or can they exist and be counted multiple times?
The easiest way to optimize this might be on the app or business end. Do we really need to run this query at all, and why? What if we queried only rows where "IsVerify" is true, rather than sorting by it? If that returns only 95 rows, is it really necessary to go back and fill in 5 more with "IsVerify" false, etc.?
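To illustrate, a sketch of that narrower query could look like the following (same table and filters as above; whether to fall back to unverified firms when fewer than 100 rows come back would be an application-level decision):

SELECT
    F."Id", F."Name", F."Url", F."CountryModel", F."Properties",
    F."PageRank", F."IsVerify", count(*) AS Counter
FROM public."Firms" F,
     LATERAL unnest(F."Properties") AS P
WHERE F."CountryId" = 1
  AND F."IsVerify"                      -- filter on it instead of sorting by it
  AND P = ANY (ARRAY[126,128])
  AND F."Properties" && ARRAY[126,128]
  AND F."Deleted" = FALSE
GROUP BY F."Id"
ORDER BY Counter DESC, F."PageRank" DESC
FETCH FIRST 100 ROWS ONLY;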
We're building a translation editor, and one of the main use cases is finding similar translations in the database. The main entities are segment, translation_record and user. A segment can be either a source or a target segment (text), with translation_record being the connecting entity.
For similarity we use pg_trgm.
We have these indices implemented:
CREATE INDEX IF NOT EXISTS segment_content_gin ON segments USING gin (content gin_trgm_ops);
CREATE INDEX IF NOT EXISTS segment_language_id_idx ON segments USING btree (language_id);
CREATE INDEX IF NOT EXISTS translation_record_language_combination_idx ON translation_records USING btree (language_combination);
This is the query we use (note the interpolated values, Ruby-style):
set pg_trgm.similarity_threshold TO 0.#{sim_score};
SELECT SIMILARITY(segments.content, '#{source_for_lookup}') AS similarity,
translation_records.id AS translation_record_id,
translation_records.source_segment_id AS source_segment_id,
segments.content AS source_segment_content,
translation_records.target_segment_id AS target_segment_id,
target_segments.content AS target_segment_content,
creators.username AS created_by_username,
updaters.username AS updated_by_username,
translation_records.created_at,
translation_records.updated_at,
translation_records.project_name,
translation_records.import_comment,
translation_records.style_id,
translation_records.domain_id,
segments.language_id AS source_language_id,
target_segments.language_id AS target_language_id
FROM segments
JOIN translation_records
ON segments.id = translation_records.source_segment_id
JOIN segments AS target_segments
ON translation_records.target_segment_id = target_segments.id
JOIN users AS creators
ON translation_records.created_by = creators.id
LEFT JOIN users AS updaters
ON translation_records.updated_by = updaters.id
WHERE segments.content % '#{source_for_lookup}'
AND translation_records.language_combination = '#{lang_lookup_combo}'
ORDER BY SIMILARITY(segments.content, '#{source_for_lookup}') DESC
LIMIT #{max_results};
The execution time on my dev laptop, against 4.7M segments, is around 400 ms. My question is: can I further optimize this query by structuring the joins and WHERE clauses differently, or by making any other changes?
EDIT: explain (buffers, analyze) with ordering by similarity
"Limit (cost=59749.56..59750.14 rows=5 width=356) (actual time=458.808..462.364 rows=2 loops=1)"
" Buffers: shared hit=15821 read=37693"
" I/O Timings: read=58.698"
" -> Gather Merge (cost=59749.56..59774.99 rows=218 width=356) (actual time=458.806..462.360 rows=2 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=15821 read=37693"
" I/O Timings: read=58.698"
" -> Sort (cost=58749.53..58749.81 rows=109 width=356) (actual time=434.602..434.606 rows=1 loops=3)"
" Sort Key: (similarity(segments.content, 'Coop Himmelb(l)au, Vienna, Austria'::text)) DESC"
" Sort Method: quicksort Memory: 25kB"
" Buffers: shared hit=15821 read=37693"
" I/O Timings: read=58.698"
" Worker 0: Sort Method: quicksort Memory: 25kB"
" Worker 1: Sort Method: quicksort Memory: 25kB"
" -> Hash Left Join (cost=4326.38..58747.72 rows=109 width=356) (actual time=433.628..434.588 rows=1 loops=3)"
" Hash Cond: (translation_records.updated_by = updaters.id)"
" Buffers: shared hit=15805 read=37693"
" I/O Timings: read=58.698"
" -> Nested Loop (cost=4309.86..58730.64 rows=109 width=324) (actual time=433.603..434.562 rows=1 loops=3)"
" Buffers: shared hit=15803 read=37693"
" I/O Timings: read=58.698"
" -> Nested Loop (cost=4309.70..58727.69 rows=109 width=296) (actual time=433.593..434.551 rows=1 loops=3)"
" Buffers: shared hit=15798 read=37693"
" I/O Timings: read=58.698"
" -> Hash Join (cost=4309.27..58658.80 rows=109 width=174) (actual time=433.578..434.535 rows=1 loops=3)"
" Hash Cond: (translation_records.source_segment_id = segments.id)"
" Buffers: shared hit=15789 read=37693"
" I/O Timings: read=58.698"
" -> Parallel Seq Scan on translation_records (cost=0.00..51497.78 rows=1086382 width=52) (actual time=0.024..145.197 rows=869773 loops=3)"
" Filter: (language_combination = '2_1'::text)"
" Buffers: shared hit=225 read=37693"
" I/O Timings: read=58.698"
" -> Hash (cost=4303.61..4303.61 rows=453 width=126) (actual time=229.792..229.793 rows=2 loops=3)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=15558"
" -> Bitmap Heap Scan on segments (cost=2575.51..4303.61 rows=453 width=126) (actual time=225.687..229.789 rows=2 loops=3)"
" Recheck Cond: (content % 'Coop Himmelb(l)au, Vienna, Austria'::text)"
" Rows Removed by Index Recheck: 63"
" Heap Blocks: exact=60"
" Buffers: shared hit=15558"
" -> Bitmap Index Scan on segment_content_gin (cost=0.00..2575.40 rows=453 width=0) (actual time=225.653..225.653 rows=65 loops=3)"
" Index Cond: (content % 'Coop Himmelb(l)au, Vienna, Austria'::text)"
" Buffers: shared hit=15378"
" -> Index Scan using segments_pkey on segments target_segments (cost=0.43..0.63 rows=1 width=126) (actual time=0.019..0.019 rows=1 loops=2)"
" Index Cond: (id = translation_records.target_segment_id)"
" Buffers: shared hit=9"
" -> Memoize (cost=0.16..0.18 rows=1 width=36) (actual time=0.012..0.013 rows=1 loops=2)"
" Cache Key: translation_records.created_by"
" Cache Mode: logical"
" Hits: 0 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB"
" Buffers: shared hit=5"
" Worker 0: Hits: 0 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB"
" -> Index Scan using users_pkey on users creators (cost=0.15..0.17 rows=1 width=36) (actual time=0.010..0.010 rows=1 loops=2)"
" Index Cond: (id = translation_records.created_by)"
" Buffers: shared hit=5"
" -> Hash (cost=12.90..12.90 rows=290 width=36) (actual time=0.014..0.014 rows=12 loops=2)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=2"
" -> Seq Scan on users updaters (cost=0.00..12.90 rows=290 width=36) (actual time=0.010..0.011 rows=12 loops=2)"
" Buffers: shared hit=2"
"Planning:"
" Buffers: shared hit=28"
"Planning Time: 5.739 ms"
"Execution Time: 462.490 ms"
END EDIT
EDIT: explain (buffers, analyze) without ordering
When I remove the ORDER BY line, the query actually slows down. Also, I went with a GIN index because of the % operator.
"Limit (cost=4310.00..5796.68 rows=5 width=356) (actual time=777.107..780.931 rows=2 loops=1)"
" Buffers: shared hit=5519 read=37597"
" I/O Timings: read=55.820"
" -> Nested Loop Left Join (cost=4310.00..81914.70 rows=261 width=356) (actual time=777.105..780.929 rows=2 loops=1)"
" Buffers: shared hit=5519 read=37597"
" I/O Timings: read=55.820"
" -> Nested Loop (cost=4309.85..81870.96 rows=261 width=324) (actual time=777.085..780.900 rows=2 loops=1)"
" Buffers: shared hit=5519 read=37597"
" I/O Timings: read=55.820"
" -> Nested Loop (cost=4309.70..81827.91 rows=261 width=296) (actual time=777.080..780.892 rows=2 loops=1)"
" Buffers: shared hit=5515 read=37597"
" I/O Timings: read=55.820"
" -> Hash Join (cost=4309.27..81662.97 rows=261 width=174) (actual time=777.062..780.869 rows=2 loops=1)"
" Hash Cond: (translation_records.source_segment_id = segments.id)"
" Buffers: shared hit=5507 read=37597"
" I/O Timings: read=55.820"
" -> Seq Scan on translation_records (cost=0.00..70509.48 rows=2607318 width=52) (actual time=0.019..387.974 rows=2609320 loops=1)"
" Filter: (language_combination = '2_1'::text)"
" Buffers: shared hit=321 read=37597"
" I/O Timings: read=55.820"
" -> Hash (cost=4303.61..4303.61 rows=453 width=126) (actual time=229.363..229.364 rows=2 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=5186"
" -> Bitmap Heap Scan on segments (cost=2575.51..4303.61 rows=453 width=126) (actual time=225.850..229.358 rows=2 loops=1)"
" Recheck Cond: (content % 'Coop Himmelb(l)au, Vienna, Austria'::text)"
" Rows Removed by Index Recheck: 63"
" Heap Blocks: exact=60"
" Buffers: shared hit=5186"
" -> Bitmap Index Scan on segment_content_gin (cost=0.00..2575.40 rows=453 width=0) (actual time=225.817..225.817 rows=65 loops=1)"
" Index Cond: (content % 'Coop Himmelb(l)au, Vienna, Austria'::text)"
" Buffers: shared hit=5126"
" -> Index Scan using segments_pkey on segments target_segments (cost=0.43..0.63 rows=1 width=126) (actual time=0.008..0.008 rows=1 loops=2)"
" Index Cond: (id = translation_records.target_segment_id)"
" Buffers: shared hit=8"
" -> Index Scan using users_pkey on users creators (cost=0.15..0.17 rows=1 width=36) (actual time=0.002..0.002 rows=1 loops=2)"
" Index Cond: (id = translation_records.created_by)"
" Buffers: shared hit=4"
" -> Index Scan using users_pkey on users updaters (cost=0.15..0.17 rows=1 width=36) (actual time=0.000..0.000 rows=0 loops=2)"
" Index Cond: (id = translation_records.updated_by)"
"Planning:"
" Buffers: shared hit=28"
"Planning Time: 4.569 ms"
"Execution Time: 781.066 ms"
EDIT 2
Based on the first answer and all the comments, I created two additional indices: a btree on translation_records.source_segment_id and one on translation_records.target_segment_id. Then I also switched to a GiST index for segments.content and used the <-> operator for ordering. The above query actually slowed down to 4.5 seconds.
It should be noted that the searched text is the 1908th row in the DB, and the similarity threshold is 0.45. The same query takes around 4.6 seconds for the last row in the DB. The ordering operator doesn't have any effect.
With the GIN index it went back to 407 ms, regardless of the ordering operator. It takes about 6 seconds for the last segment in the DB.
What I have overlooked before is that the similarity threshold has a huge impact on this. By changing it to 0.55 the time drops from 6 seconds to 1.2 seconds for the last row in the DB on my dev laptop.
END EDIT 2
Best, Seba
Because you are ordering by similarity, a GiST index should be used.
Then, in the ORDER BY, use the distance operator (<->) instead of the similarity function, as the former makes use of the GiST index.
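A sketch of what that might look like (the index name is a placeholder; the lookup string and limit are interpolated by the application, as in the original query):

CREATE INDEX IF NOT EXISTS segment_content_gist
    ON segments USING gist (content gist_trgm_ops);

-- ...same SELECT list, joins and WHERE as above, then:
ORDER BY segments.content <-> '#{source_for_lookup}'
LIMIT #{max_results};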
I have a query which runs fast during off-peak periods but runs very slowly under load.
In New Relic it sometimes shows as running for 5-8 minutes.
The query looks simple, but the view definition behind it is not, so I wanted to know whether there is any scope for optimization.
Database version - "PostgreSQL 10.14 on x86_64-pc-linux-gnu, compiled by x86_64-unknown-linux-gnu-gcc (GCC) 4.9.4, 64-bit"
The query which comes up in any monitoring tool is:
SELECT
esnpartvie0_.esn_id AS col_0_0_,
esnpartvie0_.esn AS col_1_0_,
esnpartvie0_.quarter_point AS col_2_0_,
esnpartvie0_.work_order_number AS col_3_0_,
esnpartvie0_.site AS col_4_0_,
sum(esnpartvie0_.critical) AS col_5_0_,
sum(esnpartvie0_.numshort) AS col_6_0_,
sum(esnpartvie0_.wa) AS col_7_0_,
esnpartvie0_.customer AS col_8_0_,
esnpartvie0_.adj_accum_date AS col_9_0_,
esnpartvie0_.g2_otr AS col_10_0_,
esnpartvie0_.induct_date AS col_11_0_,
min(esnpartvie0_.delta) AS col_12_0_,
esnpartvie0_.fiscal_week_bucket_date AS col_13_0_
FROM
moa.esn_part_view esnpartvie0_
WHERE
esnpartvie0_.esn_id = 140339
GROUP BY
esnpartvie0_.esn_id,
esnpartvie0_.esn,
esnpartvie0_.quarter_point,
esnpartvie0_.work_order_number,
esnpartvie0_.site,
esnpartvie0_.customer,
esnpartvie0_.adj_accum_date,
esnpartvie0_.g2_otr,
esnpartvie0_.induct_date,
esnpartvie0_.fiscal_week_bucket_date
The EXPLAIN (ANALYZE, BUFFERS) plan for it is below (also at https://explain.depesz.com/s/mr76#html):
"GroupAggregate (cost=69684.12..69684.17 rows=1 width=82) (actual time=976.163..976.228 rows=1 loops=1)"
" Group Key: esnpartvie0_.esn_id, esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Sort (cost=69684.12..69684.13 rows=1 width=70) (actual time=976.153..976.219 rows=14 loops=1)"
" Sort Key: esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Sort Method: quicksort Memory: 26kB"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Subquery Scan on esnpartvie0_ (cost=69684.02..69684.11 rows=1 width=70) (actual time=976.078..976.158 rows=14 loops=1)"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> GroupAggregate (cost=69684.02..69684.10 rows=1 width=2016) (actual time=976.077..976.155 rows=14 loops=1)"
" Group Key: e.esn_id, w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" CTE indexed_comments"
" -> WindowAgg (cost=40573.82..45076.80 rows=225149 width=118) (actual time=182.537..291.895 rows=216974 loops=1)"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Sort (cost=40573.82..41136.69 rows=225149 width=110) (actual time=182.528..215.549 rows=216974 loops=1)"
" Sort Key: part_comment.part_id, part_comment.created_at DESC"
" Sort Method: external merge Disk: 26552kB"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Seq Scan on part_comment (cost=0.00..7474.49 rows=225149 width=110) (actual time=0.014..38.209 rows=216974 loops=1)"
" Buffers: shared hit=5223"
" -> Sort (cost=24607.21..24607.22 rows=1 width=717) (actual time=976.069..976.133 rows=14 loops=1)"
" Sort Key: w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Sort Method: quicksort Memory: 28kB"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> Nested Loop (cost=1010.23..24607.20 rows=1 width=717) (actual time=442.381..976.017 rows=14 loops=1)"
" Buffers: shared hit=20287, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..24598.88 rows=1 width=697) (actual time=442.337..975.670 rows=14 loops=1)"
" Join Filter: (ic.part_id = p.part_id)"
" Rows Removed by Join Filter: 824838"
" Buffers: shared hit=20245, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..19518.95 rows=1 width=181) (actual time=56.148..57.676 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.81..19518.35 rows=1 width=183) (actual time=56.139..57.635 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.67..19517.67 rows=1 width=181) (actual time=56.133..57.598 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.55..19516.82 rows=1 width=179) (actual time=56.124..57.544 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.42..19516.04 rows=1 width=178) (actual time=56.105..57.439 rows=14 loops=1)"
" Buffers: shared hit=14991"
" -> Nested Loop Left Join (cost=1009.28..19515.37 rows=1 width=175) (actual time=56.089..57.335 rows=14 loops=1)"
" Buffers: shared hit=14963"
" -> Nested Loop Left Join (cost=1009.14..19514.77 rows=1 width=170) (actual time=56.068..57.206 rows=14 loops=1)"
" Join Filter: (e.work_scope_id = ws.work_scope_id)"
" Buffers: shared hit=14935"
" -> Nested Loop Left Join (cost=1009.14..19513.55 rows=1 width=166) (actual time=56.043..57.102 rows=14 loops=1)"
" Join Filter: (e.esn_id = p.esn_id)"
" Buffers: shared hit=14921"
" -> Nested Loop (cost=9.14..31.40 rows=1 width=125) (actual time=0.081..0.130 rows=1 loops=1)"
" Buffers: shared hit=15"
" -> Nested Loop (cost=8.87..23.08 rows=1 width=118) (actual time=0.069..0.117 rows=1 loops=1)"
" Buffers: shared hit=12"
" -> Nested Loop (cost=8.73..21.86 rows=1 width=108) (actual time=0.055..0.102 rows=1 loops=1)"
" Buffers: shared hit=10"
" -> Nested Loop (cost=8.60..21.65 rows=1 width=46) (actual time=0.046..0.091 rows=1 loops=1)"
" Buffers: shared hit=8"
" -> Hash Join (cost=8.31..13.34 rows=1 width=41) (actual time=0.036..0.081 rows=1 loops=1)"
" Hash Cond: (m.model_id = e.model_id)"
" Buffers: shared hit=5"
" -> Seq Scan on model m (cost=0.00..4.39 rows=239 width=17) (actual time=0.010..0.038 rows=240 loops=1)"
" Buffers: shared hit=2"
" -> Hash (cost=8.30..8.30 rows=1 width=28) (actual time=0.009..0.010 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=3"
" -> Index Scan using esn_pkey on esn e (cost=0.29..8.30 rows=1 width=28) (actual time=0.006..0.006 rows=1 loops=1)"
" Index Cond: (esn_id = 140339)"
" Filter: active"
" Buffers: shared hit=3"
" -> Index Scan using work_order_pkey on work_order w (cost=0.29..8.30 rows=1 width=13) (actual time=0.008..0.008 rows=1 loops=1)"
" Index Cond: (work_order_id = e.work_order_id)"
" Buffers: shared hit=3"
" -> Index Scan using engine_family_pkey on engine_family ef (cost=0.14..0.20 rows=1 width=66) (actual time=0.009..0.009 rows=1 loops=1)"
" Index Cond: (engine_family_id = m.engine_family_id)"
" Buffers: shared hit=2"
" -> Index Scan using site_pkey on site s (cost=0.14..1.15 rows=1 width=14) (actual time=0.013..0.013 rows=1 loops=1)"
" Index Cond: (site_id = ef.site_id)"
" Buffers: shared hit=2"
" -> Index Scan using customer_pkey on customer c (cost=0.27..8.29 rows=1 width=11) (actual time=0.012..0.012 rows=1 loops=1)"
" Index Cond: (customer_id = e.customer_id)"
" Buffers: shared hit=3"
" -> Gather (cost=1000.00..19481.78 rows=29 width=41) (actual time=55.958..56.949 rows=14 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=14906"
" -> Parallel Seq Scan on part p (cost=0.00..18478.88 rows=12 width=41) (actual time=51.855..52.544 rows=5 loops=3)"
" Filter: (active AND (esn_id = 140339))"
" Rows Removed by Filter: 226662"
" Buffers: shared hit=14906"
" -> Seq Scan on work_scope ws (cost=0.00..1.10 rows=10 width=12) (actual time=0.004..0.004 rows=1 loops=14)"
" Buffers: shared hit=14"
" -> Index Scan using source_pkey on source so (cost=0.14..0.57 rows=1 width=13) (actual time=0.005..0.005 rows=1 loops=14)"
" Index Cond: (p.source_id = source_id)"
" Buffers: shared hit=28"
" -> Index Scan using status_pkey on status st (cost=0.13..0.56 rows=1 width=11) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.status_id = status_id)"
" Buffers: shared hit=28"
" -> Index Scan using material_stream_pkey on material_stream mat (cost=0.13..0.56 rows=1 width=9) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.material_stream_id = material_stream_id)"
" Buffers: shared hit=28"
" -> Index Scan using dr_status_pkey on dr_status dr (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.dr_status_id = dr_status_id)"
" -> Index Scan using hpc_status_pkey on hpc_status hpc (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.hpc_status_id = hpc_status_id)"
" -> Index Scan using module_pkey on module (cost=0.14..0.57 rows=1 width=6) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.module_id = module_id)"
" -> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)"
" Filter: (comment_index = 1)"
" Rows Removed by Filter: 158057"
" Buffers: shared hit=5226, temp read=48936 written=6835"
" -> Index Scan using esn_dates_esn_id_key on esn_dates ed (cost=0.29..8.32 rows=1 width=20) (actual time=0.019..0.020 rows=1 loops=14)"
" Index Cond: (esn_id = 140339)"
" Filter: ((gate_3_stop_actual AND (gate_3_stop >= now())) OR (gate_3_stop IS NULL) OR ((NOT gate_3_stop_actual) AND (gate_3_stop IS NOT NULL) AND (gate_3_stop >= (now() - '730 days'::interval))))"
" Buffers: shared hit=42"
"Planning time: 6.564 ms"
"Execution time: 988.335 ms"
The actual view definition that the above select runs against:
with indexed_comments as (
select
part_comment.part_id,
part_comment.comment,
row_number() over (partition by part_comment.part_id
order by
part_comment.created_at desc) as comment_index
from
moa.part_comment
)
select
e.esn_id,
e.name as esn,
e.is_qp_engine as quarter_point,
w.number as work_order_number,
case
when (p.part_id is null) then 0
else p.part_id
end as part_id,
p.part_number,
p.part_description,
p.quantity,
st.name as status,
p.status_id,
mat.name as material_stream,
p.material_stream_id,
so.name as source,
p.source_id,
p.oem,
p.po_number,
p.manual_cso_commit,
p.auto_cso_commit,
coalesce(p.manual_cso_commit, p.auto_cso_commit) as calculated_cso_commit,
(coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) + p.accum_offset) as adjusted_accum,
dr.name as dr_status,
p.dr_status_id,
p.airway_bill,
p.core_material,
hpc.hpc_status_name as hpc_status,
p.hpc_status_id,
module.module_name,
p.module_id,
c.name as customer,
c.customer_id,
m.name as model,
m.model_id,
ef.name as engine_family,
ef.engine_family_id,
s.label as site,
s.site_id,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) > coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as critical,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) <= coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as numshort,
case
when ((p.esn_id is not null)
and (coalesce(p.manual_cso_commit, p.auto_cso_commit) is null)) then 1
else 0
end as wa,
ed.adj_accum_date,
(ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)) as g2_otr,
ed.gate_0_start as induct_date,
coalesce((coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0))) - max(coalesce(p.manual_cso_commit, p.auto_cso_commit))), 0) as delta,
coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) as fiscal_week_bucket_date,
p.po_line_num,
p.ship_out,
p.receipt,
p.crit_ship,
e.work_scope_id,
ws.name as work_scope,
p.late_call,
p.ex_esn,
p.accum_offset,
ic.comment as latest_comment
from
(((((((((((((((moa.esn e
join moa.work_order w
using (work_order_id))
join moa.model m
using (model_id))
join moa.engine_family ef on
((m.engine_family_id = ef.engine_family_id)))
join moa.site s on
((ef.site_id = s.site_id)))
join moa.customer c
using (customer_id))
left join moa.part p on
(((e.esn_id = p.esn_id)
and (p.active <> false))))
left join moa.work_scope ws on
((e.work_scope_id = ws.work_scope_id)))
left join moa.source so on
((p.source_id = so.source_id)))
left join moa.status st on
((p.status_id = st.status_id)))
left join moa.material_stream mat
using (material_stream_id))
left join moa.dr_status dr
using (dr_status_id))
left join moa.hpc_status hpc
using (hpc_status_id))
left join moa.module module
using (module_id))
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
join moa.esn_dates ed on
((e.esn_id = ed.esn_id)))
where
((e.active = true)
and (((ed.gate_3_stop_actual = true)
and (ed.gate_3_stop >= now()))
or (ed.gate_3_stop is null)
or ((ed.gate_3_stop_actual = false)
and (ed.gate_3_stop is not null)
and (ed.gate_3_stop >= (now() - '730 days'::interval)))))
group by
e.esn_id,
w.number,
s.label,
c.name,
p.active,
ed.adj_accum_date,
coalesce(ed.gate_2_otr, 0),
ed.gate_0_start,
ed.gate_1_stop,
p.part_id,
st.name,
mat.name,
so.name,
dr.name,
hpc.hpc_status_name,
module.module_name,
c.customer_id,
m.name,
m.model_id,
ef.name,
ef.engine_family_id,
s.site_id,
ws.name,
ic.comment;
What a horrific query.
Most of the time is going to this:
-> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)"
And the main culprit there is a misestimation in the upper sibling node. The planner thinks it will need to do the CTE Scan one time, but it actually needs to do it 14 times (although apparently returning the same answer each time). If it knew it would do it repeatedly, it would set up a hash table rather than just iterating through the CTE each time. But since setting up the hash requires one iteration through it, that doesn't seem to save anything if the planner thinks it only needs one iteration in the first place.
I don't know how to fix the estimation problem. But you could compute the ranks on the fly, rather than computing them all up front and then searching through them. You would do that with a LATERAL join.
Change
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
to
left join lateral (
    select comment
    from part_comment pc
    where p.part_id = pc.part_id
    order by created_at desc
    limit 1
) ic on true
and get rid of the with indexed_comments as ... CTE entirely.
For this to be fast, you would need an index ON part_comment (part_id, created_at).
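Something along these lines (a sketch; the index name is arbitrary, and the table lives in the moa schema per the view definition — plain ascending created_at also works, since PostgreSQL can scan the index backwards for the ORDER BY ... DESC):

CREATE INDEX part_comment_part_id_created_at_idx
    ON moa.part_comment (part_id, created_at DESC);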
I am quite new to optimizing the speed of a SELECT, but the one below is time-consuming. I would be grateful for suggestions to improve performance.
SELECT DISTINCT p.id "pub_id",
p.submission_year,
pip.level,
mv_s.integer_value "total_citations",
1 "count_pub"
FROM publication p
JOIN organisation_association "oa" ON (oa.publication_id = p.id
AND oa.organisation_id IN (249189578,
249189824))
JOIN bfi_2017 "pip" ON (p.uuid = pip.uuid
AND pip.bfi_score > 0
AND pip.bfi_score IS NOT NULL)
LEFT JOIN metric_value mv_s ON (mv_s.name = 'citations'
AND EXISTS
(SELECT *
FROM publication_metrics pm_s
JOIN metrics m_s ON (m_s.id = pm_s.metrics_id
AND m_s.source_id = 210247389
AND pm_s.publication_id = p.id
AND mv_s.metrics_id = m_s.id)))
WHERE p.peer_review = 'true'
AND (p.type_classification_id IN (57360320,
57360322,
57360324,
57360326,
57360350))
AND p.submission_year = 2017
Execution plan:
"Unique (cost=532129954.32..532286422.32 rows=4084080 width=24) (actual time=1549616.424..1549616.582 rows=699 loops=1)"
" Buffers: shared read=27411, temp read=1774656 written=2496"
" -> Sort (cost=532129954.32..532169071.32 rows=15646800 width=24) (actual time=1549616.422..1549616.445 rows=712 loops=1)"
" Sort Key: p.id, pip.level, mv_s.integer_value"
" Sort Method: quicksort Memory: 80kB"
" Buffers: shared read=27411, temp read=1774656 written=2496"
" -> Nested Loop Left Join (cost=393.40..529618444.45 rows=15646800 width=24) (actual time=1832.122..1549614.196 rows=712 loops=1)"
" Join Filter: (SubPlan 1)"
" Rows Removed by Join Filter: 607313310"
" Buffers: shared read=27411, temp read=1774656 written=2496"
" -> Nested Loop (cost=393.40..8704.01 rows=37 width=16) (actual time=5.470..125.773 rows=712 loops=1)"
" Buffers: shared hit=20313 read=4585"
" -> Hash Join (cost=392.97..7886.65 rows=72 width=16) (actual time=5.160..77.182 rows=3417 loops=1)"
" Hash Cond: ((p.uuid)::text = (pip.uuid)::text)"
" Buffers: shared hit=2 read=3670"
" -> Bitmap Heap Scan on publication p (cost=160.30..7643.44 rows=2618 width=49) (actual time=2.335..67.546 rows=4527 loops=1)"
" Recheck Cond: (submission_year = 2017)"
" Filter: (peer_review AND (type_classification_id = ANY ('{57360320,57360322,57360324,57360326,57360350}'::bigint[])))"
" Rows Removed by Filter: 3975"
" Heap Blocks: exact=3556"
" Buffers: shared hit=2 read=3581"
" -> Bitmap Index Scan on idx_in2ix3rvuzxxf76bsipgn4l4sy (cost=0.00..159.64 rows=8430 width=0) (actual time=1.784..1.784 rows=8502 loops=1)"
" Index Cond: (submission_year = 2017)"
" Buffers: shared read=27"
" -> Hash (cost=181.61..181.61 rows=4085 width=41) (actual time=2.787..2.787 rows=4085 loops=1)"
" Buckets: 4096 Batches: 1 Memory Usage: 324kB"
" Buffers: shared read=89"
" -> Seq Scan on bfi_2017 pip (cost=0.00..181.61 rows=4085 width=41) (actual time=0.029..2.034 rows=4085 loops=1)"
" Filter: ((bfi_score IS NOT NULL) AND (bfi_score > '0'::double precision))"
" Rows Removed by Filter: 3324"
" Buffers: shared read=89"
" -> Index Only Scan using org_ass_publication_idx on organisation_association oa (cost=0.43..11.34 rows=1 width=8) (actual time=0.011..0.012 rows=0 loops=3417)"
" Index Cond: ((publication_id = p.id) AND (organisation_id = ANY ('{249189578,249189824}'::bigint[])))"
" Heap Fetches: 712"
" Buffers: shared hit=20311 read=915"
" -> Materialize (cost=0.00..53679.95 rows=845773 width=12) (actual time=0.012..93.456 rows=852969 loops=712)"
" Buffers: shared read=20873, temp read=1774656 written=2496"
" -> Seq Scan on metric_value mv_s (cost=0.00..45321.09 rows=845773 width=12) (actual time=0.043..470.590 rows=852969 loops=1)"
" Filter: ((name)::text = 'citations'::text)"
" Rows Removed by Filter: 1102878"
" Buffers: shared read=20873"
" SubPlan 1"
" -> Nested Loop (cost=0.85..16.91 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=607313928)"
" Buffers: shared read=1953"
" -> Index Scan using idx_w4wbsbxcqvjmqu64ubjlmqywdy on publication_metrics pm_s (cost=0.43..8.45 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=607313928)"
" Index Cond: (metrics_id = mv_s.metrics_id)"
" Filter: (publication_id = p.id)"
" Rows Removed by Filter: 1"
" -> Index Scan using metrics_pkey on metrics m_s (cost=0.43..8.45 rows=1 width=8) (actual time=0.027..0.027 rows=0 loops=3108)"
" Index Cond: (id = mv_s.metrics_id)"
" Filter: (source_id = 210247389)"
" Rows Removed by Filter: 1"
" Buffers: shared hit=10496 read=1953"
"Planning Time: 1.833 ms"
"Execution Time: 1549621.523 ms"
We are using PostgreSQL 9.5.2
We have 11 tables with around 10K records in each table on average.
One of the tables contains a text column whose maximum content size is 12K characters.
When we exclude the text column from the select statement, the query completes in around 5 seconds; when we include the text column, it takes around 55 seconds. If we select any other column from the same table, it works fine, but as soon as we include the text column, performance falls apart.
All tables are inner joined.
Can you please suggest how to solve this?
The explain output shows 378 ms, but in reality it takes around 1 minute to get the data.
So when we exclude the text column from the "ic" table, we get results in 4-5 seconds.
"Nested Loop Left Join (cost=4.04..156.40 rows=10 width=616) (actual time=3.092..377.128 rows=24118 loops=1)"
" -> Nested Loop Left Join (cost=3.90..59.92 rows=7 width=603) (actual time=2.834..110.842 rows=14325 loops=1)"
" -> Nested Loop Left Join (cost=3.76..58.56 rows=7 width=604) (actual time=2.832..101.481 rows=12340 loops=1)"
" -> Nested Loop (cost=3.62..57.19 rows=7 width=590) (actual time=2.830..90.614 rows=8436 loops=1)"
" Join Filter: (i."Id" = ic."ImId")"
" -> Nested Loop (cost=3.33..51.42 rows=7 width=210) (actual time=2.807..65.782 rows=8436 loops=1)"
" -> Nested Loop (cost=3.19..50.21 rows=7 width=187) (actual time=2.424..54.596 rows=8436 loops=1)"
" -> Nested Loop (cost=2.77..46.16 rows=7 width=175) (actual time=1.944..32.056 rows=8436 loops=1)"
" -> Nested Loop (cost=2.35..23.66 rows=5 width=87) (actual time=1.750..1.877 rows=4 loops=1)"
" -> Hash Join (cost=2.22..22.84 rows=5 width=55) (actual time=1.492..1.605 rows=4 loops=1)"
" Hash Cond: (i."ImtypId" = it."Id")"
" -> Nested Loop (cost=0.84..21.29 rows=34 width=51) (actual time=1.408..1.507 rows=30 loops=1)"
" -> Nested Loop (cost=0.56..9.68 rows=34 width=35) (actual time=1.038..1.053 rows=30 loops=1)"
" -> Index Only Scan using ev_query on "table_Ev" e (cost=0.28..4.29 rows=1 width=31) (actual time=0.523..0.523 rows=1 loops=1)"
" Index Cond: ("Id" = 1301)"
" Heap Fetches: 0"
" -> Index Only Scan using asmitm_query on "table_AsmItm" ai (cost=0.28..5.07 rows=31 width=8) (actual time=0.499..0.508 rows=30 loops=1)"
" Index Cond: (("AsmId" = e."AsmId") AND ("IsActive" = true))"
" Filter: "IsActive""
" Heap Fetches: 0"
" -> Index Only Scan using itm_query on "table_Itm" i (cost=0.28..0.33 rows=1 width=16) (actual time=0.014..0.014 rows=1 loops=30)"
" Index Cond: ("Id" = ai."ImId")"
" Heap Fetches: 0"
" -> Hash (cost=1.33..1.33 rows=4 width=12) (actual time=0.026..0.026 rows=4 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" -> Seq Scan on "ItmTyp" it (cost=0.00..1.33 rows=4 width=12) (actual time=0.013..0.018 rows=4 loops=1)"
" Filter: ("ParentId" = 12)"
" Rows Removed by Filter: 22"
" -> Index Only Scan using jur_query on "table_Jur" j (cost=0.14..0.15 rows=1 width=36) (actual time=0.065..0.066 rows=1 loops=4)"
" Index Cond: ("Id" = i."JurId")"
" Heap Fetches: 4"
" -> Index Scan using pwsres_evid_ImId_canid_query on "table_PwsRes" p (cost=0.42..3.78 rows=72 width=92) (actual time=0.056..6.562 rows=2109 loops=4)"
" Index Cond: (("EvId" = 1301) AND ("ImId" = i."Id"))"
" -> Index Only Scan using user_query on "table_User" u (cost=0.42..0.57 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=8436)"
" Index Cond: ("Id" = p."CanId")"
" Heap Fetches: 0"
" -> Index Only Scan using ins_query on "table_Ins" ins (cost=0.14..0.16 rows=1 width=31) (actual time=0.001..0.001 rows=1 loops=8436)"
" Index Cond: ("Id" = u."InsId")"
" Heap Fetches: 0"
" -> Index Scan using "IX_ItmCont_ImId" on "table_ItmCont" ic (cost=0.29..0.81 rows=1 width=392) (actual time=0.002..0.002 rows=1 loops=8436)"
" Index Cond: ("ImId" = p."ImId")"
" Filter: ("ContTyp" = 'CP'::text)"
" Rows Removed by Filter: 1"
" -> Index Scan using "IX_FreDetail_FreId" on "table_FreDetail" f (cost=0.14..0.18 rows=2 width=22) (actual time=0.000..0.001 rows=1 loops=8436)"
" Index Cond: ("FreId" = p."FreId")"
" -> Index Scan using "IX_DurDetail_DurId" on "table_DurDetail" d (cost=0.14..0.17 rows=2 width=7) (actual time=0.000..0.000 rows=0 loops=12340)"
" Index Cond: ("DurId" = p."DurId")"
" -> Index Scan using "IX_DruConsRouteDetail_DruConsRouId" on "table_DruConsRouDetail" dr (cost=0.14..0.18 rows=2 width=21) (actual time=0.001..0.001 rows=1 loops=14325)"
" Index Cond: ("DruConsRouteId" = p."RouteId")"
" SubPlan 1"
" -> Index Only Scan using asm_query on "table_Asm" (cost=0.14..8.16 rows=1 width=26) (actual time=0.001..0.001 rows=1 loops=24118)"
" Index Cond: ("Id" = e."AsmId")"
" Heap Fetches: 24118"
" SubPlan 2"
" -> Seq Scan on "ItmTyp" ity (cost=0.00..1.33 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=24118)"
" Filter: ("Id" = it."ParentId")"
" Rows Removed by Filter: 25"
"Planning time: 47.056 ms"
"Execution time: 378.229 ms"
If the explain analyze output is taking 378 ms, that is how long the query is taking, and there's probably not a lot of room for improvement there. If it's taking 1 minute to transfer and load the data, you need to work on that end.
If you're trying to view very wide rows in psql or pgAdmin, it can take some time to calculate the row widths or render the HTML, but that has nothing to do with query performance.
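One way to separate the two (a sketch, using psql; the query text itself is elided here):

\timing on
EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;  -- server-side execution only, no wide rows sent to the client
SELECT ... ;                             -- includes transferring and rendering the text column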