Optimize scoped search query - postgresql

I use PostgreSQL 10.4 in a Linux environment.
I have a query that is very slow when I run it.
Searching by subject in Arabic is very slow, and the problem also exists for subjects in other languages.
I have about 1 million records per year.
I tried adding an index on the transfer table, but the result is the same:
CREATE INDEX subject
ON public.transfer
USING btree
(subject COLLATE pg_catalog."default" varchar_pattern_ops);
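(Editorial aside: a btree index, even with varchar_pattern_ops, can only serve LIKE patterns anchored at the start of the string; it cannot help a leading-wildcard search like '%رقم%'. The usual alternative is a trigram index. A sketch, assuming the pg_trgm extension can be installed and with an illustrative index name:)

```sql
-- Sketch: trigram GIN index to accelerate '%...%' searches on subject.
-- pg_trgm indexes three-character substrings, so it can serve LIKE
-- patterns with wildcards on both sides, in Arabic as well as Latin text.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX transfer_subject_trgm
    ON public.transfer
    USING gin (subject gin_trgm_ops);
```

After creating it, re-run EXPLAIN ANALYZE and check whether the filter on subject turns into a bitmap index scan on the new index.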
This is the query:
select * from ( with scope as (
select unit_id from public.sec_unit
where emp_id= 'EM-001'and app_type in ('S','E') )
select CAST (row_number() OVER (PARTITION BY advsearch.correspondenceid)
as VARCHAR(15)) as numline, advsearch.*
from (
SELECT Transfer.id AS id, CORRESP.id AS correspondenceId,
Transfer.correspondencecopy_id AS correspondencecopyId, Transfer.datesendjctransfer AS datesendjctransfer
FROM Transfer Transfer
Left outer JOIN correspondencecopy CORRESPCPY ON Transfer.correspondencecopy_id = CORRESPCPY.id
Left outer JOIN correspondence CORRESP ON CORRESP.id = CORRESPCPY.correspondence_id
LEFT OUTER JOIN scope sc on sc.unit_id = Transfer.unittransto_id or sc.unit_id='allorg'
LEFT OUTER JOIN employee emp on emp.id = 'EM-001'
WHERE transfer.status='1' AND (Transfer.docyearhjr='1441' )
AND (Transfer.subject like '%'||'رقم'||'%')
AND ( sc.unit_id is not null )
AND (coalesce(emp.confidentiel,'0') >= coalesce(Transfer.confidentiel,'0'))
)
advsearch ) Searchlist
WHERE Searchlist.numline='1'
ORDER BY Searchlist.datesendjctransfer
Can someone help me optimize the query?
Update:
I tried changing the query: I eliminated the use of the scope CTE and replaced it with a simple condition.
I get the same result (the same number of records), but the problem is the same: the query is still very slow.
select * from (
select CAST (row_number() OVER (PARTITION BY advsearch.correspondenceid)
as VARCHAR(15)) as numline, advsearch.*
from (
SELECT Transfer.id AS id, CORRESP.id AS correspondenceId,
Transfer.correspondencecopy_id AS correspondencecopyId, Transfer.datesendjctransfer AS datesendjctransfer
FROM Transfer Transfer
Left outer JOIN correspondencecopy CORRESPCPY ON Transfer.correspondencecopy_id = CORRESPCPY.id
Left outer JOIN correspondence CORRESP ON CORRESP.id = CORRESPCPY.correspondence_id
LEFT OUTER JOIN employee emp on emp.id = 'EM-001'
WHERE transfer.status='1' and ( Transfer.unittransto_id in (
select unit_id from public.security_employee_unit
where employee_id= 'EM-001'and app_type in ('E','S') )
or 'allorg' in ( select unit_id from public.security_employee_unit
where employee_id= 'EM-001'and app_type in ('S')))
AND (Transfer.docyearhjr='1441' )
AND (Transfer.subject like '%'||'رقم'||'%')
AND (coalesce(emp.confidentiel,'0') >= coalesce(Transfer.confidentiel,'0'))
)
advsearch ) Searchlist
WHERE Searchlist.numline='1'
ORDER BY Searchlist.datesendjctransfer
Update:
I analyzed the query using EXPLAIN ANALYZE; this is the result:
"Sort (cost=412139.09..412139.13 rows=17 width=87) (actual time=1481.951..1482.166 rows=4497 loops=1)"
" Sort Key: searchlist.datesendjctransfer"
" Sort Method: quicksort Memory: 544kB"
" -> Subquery Scan on searchlist (cost=412009.59..412138.74 rows=17 width=87) (actual time=1457.717..1480.381 rows=4497 loops=1)"
" Filter: ((searchlist.numline)::text = '1'::text)"
" Rows Removed by Filter: 38359"
" -> WindowAgg (cost=412009.59..412095.69 rows=3444 width=87) (actual time=1457.715..1477.146 rows=42856 loops=1)"
" CTE scope"
" -> Bitmap Heap Scan on security_employee_unit (cost=8.59..15.83 rows=2 width=7) (actual time=0.043..0.058 rows=2 loops=1)"
" Recheck Cond: (((employee_id)::text = 'EM-001'::text) AND ((app_type)::text = ANY ('{SE,I}'::text[])))"
" Heap Blocks: exact=2"
" -> Bitmap Index Scan on employeeidkey (cost=0.00..8.59 rows=2 width=0) (actual time=0.037..0.037 rows=2 loops=1)"
" Index Cond: (((employee_id)::text = 'EM-001'::text) AND ((app_type)::text = ANY ('{SE,I}'::text[])))"
" -> Sort (cost=411993.77..412002.38 rows=3444 width=39) (actual time=1457.702..1461.773 rows=42856 loops=1)"
" Sort Key: corresp.id"
" Sort Method: external merge Disk: 2440kB"
" -> Nested Loop Left Join (cost=18315.99..411791.43 rows=3444 width=39) (actual time=1271.209..1295.423 rows=42856 loops=1)"
" Filter: ((COALESCE(emp.confidentiel, '0'::character varying))::text >= (COALESCE(transfer.confidentiel, '0'::character varying))::text)"
" -> Nested Loop (cost=18315.71..411628.14 rows=10333 width=41) (actual time=1271.165..1283.365 rows=42856 loops=1)"
" Join Filter: (((sc.unit_id)::text = (transfer.unittransto_id)::text) OR ((sc.unit_id)::text = 'allorg'::text))"
" Rows Removed by Join Filter: 42856"
" -> CTE Scan on scope sc (cost=0.00..0.04 rows=2 width=48) (actual time=0.045..0.064 rows=2 loops=1)"
" Filter: (unit_id IS NOT NULL)"
" -> Materialize (cost=18315.71..411292.44 rows=10328 width=48) (actual time=53.970..635.651 rows=42856 loops=2)"
" -> Gather (cost=18315.71..411240.80 rows=10328 width=48) (actual time=107.919..1254.600 rows=42856 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Nested Loop Left Join (cost=17315.71..409208.00 rows=4303 width=48) (actual time=104.436..1250.461 rows=14285 loops=3)"
" -> Nested Loop Left Join (cost=17315.28..405979.02 rows=4303 width=48) (actual time=104.382..1136.591 rows=14285 loops=3)"
" -> Parallel Bitmap Heap Scan on transfer (cost=17314.85..377306.25 rows=4303 width=39) (actual time=104.287..996.609 rows=14285 loops=3)"
" Recheck Cond: ((docyearhjr)::text = '1441'::text)"
" Rows Removed by Index Recheck: 437299"
" Filter: (((subject)::text ~~ '%رقم%'::text) AND ((status)::text = '1'::text))"
" Rows Removed by Filter: 297178"
" Heap Blocks: exact=14805 lossy=44734"
" -> Bitmap Index Scan on docyearhjr (cost=0.00..17312.27 rows=938112 width=0) (actual time=96.028..96.028 rows=934389 loops=1)"
" Index Cond: ((docyearhjr)::text = '1441'::text)"
" -> Index Scan using pk_correspondencecopy on correspondencecopy correspcpy (cost=0.43..6.66 rows=1 width=21) (actual time=0.009..0.009 rows=1 loops=42856)"
" Index Cond: ((transfer.correspondencecopy_id)::text = (id)::text)"
" -> Index Only Scan using pk_correspondence on correspondence corresp (cost=0.42..0.75 rows=1 width=9) (actual time=0.007..0.007 rows=1 loops=42856)"
" Index Cond: (id = (correspcpy.correspondence_id)::text)"
" Heap Fetches: 14227"
" -> Materialize (cost=0.28..8.31 rows=1 width=2) (actual time=0.000..0.000 rows=1 loops=42856)"
" -> Index Scan using pk_employee on employee emp (cost=0.28..8.30 rows=1 width=2) (actual time=0.038..0.038 rows=1 loops=1)"
" Index Cond: ((id)::text = 'EM-001'::text)"
"Planning time: 1.595 ms"
"Execution time: 1487.303 ms"
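(Editorial aside: one detail worth noting in this plan is "Heap Blocks: exact=14805 lossy=44734" together with "Rows Removed by Index Recheck: 437299". Lossy blocks mean the bitmap no longer fit in work_mem and degraded to tracking whole pages, forcing hundreds of thousands of rows to be rechecked. Raising work_mem for the session is a cheap experiment; the value below is illustrative:)

```sql
-- Sketch: give the bitmap heap scan enough memory to stay exact.
SET work_mem = '64MB';
-- then re-run EXPLAIN ANALYZE and check whether "lossy" disappears
-- from the Heap Blocks line and the recheck count drops.
```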

Related

Postgres query performance improvement

I am a newbie to database optimization.
The table I have holds around 29 million rows.
I am running the select below from pgAdmin and it takes 9 seconds.
What can I do to optimize performance?
SELECT
F."Id",
F."Name",
F."Url",
F."CountryModel",
F."RegionModel",
F."CityModel",
F."Street",
F."Phone",
F."PostCode",
F."Images",
F."Rank",
F."CommentCount",
F."PageRank",
F."Description",
F."Properties",
F."IsVerify",
count(*) AS Counter
FROM
public."Firms" F,
LATERAL unnest(F."Properties") AS P
WHERE
F."CountryId" = 1
AND F."RegionId" = 7
AND F."CityId" = 4365
AND P = ANY (ARRAY[126, 128])
AND F."Deleted" = FALSE
GROUP BY
F."Id"
ORDER BY
Counter DESC,
F."IsVerify" DESC,
F."PageRank" DESC OFFSET 10 ROWS FETCH FIRST 20 ROW ONLY
That's my query plan:
" -> Sort (cost=11945.20..11948.15 rows=1178 width=369) (actual time=8981.514..8981.515 rows=30 loops=1)"
" Sort Key: (count(*)) DESC, f.""IsVerify"" DESC, f.""PageRank"" DESC"
" Sort Method: top-N heapsort Memory: 58kB"
" -> HashAggregate (cost=11898.63..11910.41 rows=1178 width=369) (actual time=8981.234..8981.305 rows=309 loops=1)"
" Group Key: f.""Id"""
" Batches: 1 Memory Usage: 577kB"
" -> Nested Loop (cost=7050.07..11886.85 rows=2356 width=360) (actual time=79.408..8980.167 rows=322 loops=1)"
" -> Bitmap Heap Scan on ""Firms"" f (cost=7050.06..11716.04 rows=1178 width=360) (actual time=78.414..8909.649 rows=56071 loops=1)"
" Recheck Cond: ((""CityId"" = 4365) AND (""RegionId"" = 7))"
" Filter: ((NOT ""Deleted"") AND (""CountryId"" = 1))"
" Heap Blocks: exact=55330"
" -> BitmapAnd (cost=7050.06..7050.06 rows=1178 width=0) (actual time=70.947..70.947 rows=0 loops=1)"
" -> Bitmap Index Scan on ""IX_Firms_CityId"" (cost=0.00..635.62 rows=58025 width=0) (actual time=11.563..11.563 rows=56072 loops=1)"
" Index Cond: (""CityId"" = 4365)"
" -> Bitmap Index Scan on ""IX_Firms_RegionId"" (cost=0.00..6413.60 rows=588955 width=0) (actual time=57.795..57.795 rows=598278 loops=1)"
" Index Cond: (""RegionId"" = 7)"
" -> Function Scan on unnest p (cost=0.00..0.13 rows=2 width=0) (actual time=0.001..0.001 rows=0 loops=56071)"
" Filter: (p = ANY ('{126,128}'::integer[]))"
" Rows Removed by Filter: 2"
"Planning Time: 0.351 ms"
"Execution Time: 8981.725 ms"
Create a GIN index on F."Properties":
create index on "Firms" using gin ("Properties");
then add a clause to your WHERE
...
AND P = ANY (ARRAY[126, 128])
AND "Properties" && ARRAY[126, 128]
....
That added clause is redundant to the one preceding it, but the planner is not smart enough to reason through that so you need to make it explicit.
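Put together, the relevant part of the query would look like this (a sketch: the select list is abbreviated, and the && overlap test assumes "Properties" is an integer array, as the plan's '{126,128}'::integer[] cast suggests):

```sql
-- Sketch: GIN index plus a redundant overlap test the planner can use.
CREATE INDEX ON "Firms" USING gin ("Properties");

SELECT F."Id", count(*) AS Counter   -- remaining output columns as before
FROM public."Firms" F,
     LATERAL unnest(F."Properties") AS P
WHERE F."CountryId" = 1
  AND F."RegionId" = 7
  AND F."CityId" = 4365
  AND P = ANY (ARRAY[126, 128])          -- original per-element filter, kept
  AND F."Properties" && ARRAY[126, 128]  -- redundant, but lets the planner
                                         -- use the GIN index to skip rows
  AND F."Deleted" = FALSE
GROUP BY F."Id";
```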

Postgresql Query performance optimization

I have a query that runs fast during off-peak periods but very slowly under load.
In New Relic it sometimes shows as running for 5-8 minutes.
The query looks simple, but the view definition may not be that simple,
so I wanted to know if there is any scope for optimization.
Database version: "PostgreSQL 10.14 on x86_64-pc-linux-gnu, compiled by x86_64-unknown-linux-gnu-gcc (GCC) 4.9.4, 64-bit"
The query that comes up in any monitoring tool is:
SELECT
esnpartvie0_.esn_id AS col_0_0_,
esnpartvie0_.esn AS col_1_0_,
esnpartvie0_.quarter_point AS col_2_0_,
esnpartvie0_.work_order_number AS col_3_0_,
esnpartvie0_.site AS col_4_0_,
sum(esnpartvie0_.critical) AS col_5_0_,
sum(esnpartvie0_.numshort) AS col_6_0_,
sum(esnpartvie0_.wa) AS col_7_0_,
esnpartvie0_.customer AS col_8_0_,
esnpartvie0_.adj_accum_date AS col_9_0_,
esnpartvie0_.g2_otr AS col_10_0_,
esnpartvie0_.induct_date AS col_11_0_,
min(esnpartvie0_.delta) AS col_12_0_,
esnpartvie0_.fiscal_week_bucket_date AS col_13_0_
FROM
moa.esn_part_view esnpartvie0_
WHERE
esnpartvie0_.esn_id = 140339
GROUP BY
esnpartvie0_.esn_id,
esnpartvie0_.esn,
esnpartvie0_.quarter_point,
esnpartvie0_.work_order_number,
esnpartvie0_.site,
esnpartvie0_.customer,
esnpartvie0_.adj_accum_date,
esnpartvie0_.g2_otr,
esnpartvie0_.induct_date,
esnpartvie0_.fiscal_week_bucket_date
The EXPLAIN (ANALYZE, BUFFERS) plan is below; it is also available at https://explain.depesz.com/s/mr76#html:
"GroupAggregate (cost=69684.12..69684.17 rows=1 width=82) (actual time=976.163..976.228 rows=1 loops=1)"
" Group Key: esnpartvie0_.esn_id, esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Sort (cost=69684.12..69684.13 rows=1 width=70) (actual time=976.153..976.219 rows=14 loops=1)"
" Sort Key: esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Sort Method: quicksort Memory: 26kB"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Subquery Scan on esnpartvie0_ (cost=69684.02..69684.11 rows=1 width=70) (actual time=976.078..976.158 rows=14 loops=1)"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> GroupAggregate (cost=69684.02..69684.10 rows=1 width=2016) (actual time=976.077..976.155 rows=14 loops=1)"
" Group Key: e.esn_id, w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" CTE indexed_comments"
" -> WindowAgg (cost=40573.82..45076.80 rows=225149 width=118) (actual time=182.537..291.895 rows=216974 loops=1)"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Sort (cost=40573.82..41136.69 rows=225149 width=110) (actual time=182.528..215.549 rows=216974 loops=1)"
" Sort Key: part_comment.part_id, part_comment.created_at DESC"
" Sort Method: external merge Disk: 26552kB"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Seq Scan on part_comment (cost=0.00..7474.49 rows=225149 width=110) (actual time=0.014..38.209 rows=216974 loops=1)"
" Buffers: shared hit=5223"
" -> Sort (cost=24607.21..24607.22 rows=1 width=717) (actual time=976.069..976.133 rows=14 loops=1)"
" Sort Key: w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Sort Method: quicksort Memory: 28kB"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> Nested Loop (cost=1010.23..24607.20 rows=1 width=717) (actual time=442.381..976.017 rows=14 loops=1)"
" Buffers: shared hit=20287, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..24598.88 rows=1 width=697) (actual time=442.337..975.670 rows=14 loops=1)"
" Join Filter: (ic.part_id = p.part_id)"
" Rows Removed by Join Filter: 824838"
" Buffers: shared hit=20245, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..19518.95 rows=1 width=181) (actual time=56.148..57.676 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.81..19518.35 rows=1 width=183) (actual time=56.139..57.635 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.67..19517.67 rows=1 width=181) (actual time=56.133..57.598 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.55..19516.82 rows=1 width=179) (actual time=56.124..57.544 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.42..19516.04 rows=1 width=178) (actual time=56.105..57.439 rows=14 loops=1)"
" Buffers: shared hit=14991"
" -> Nested Loop Left Join (cost=1009.28..19515.37 rows=1 width=175) (actual time=56.089..57.335 rows=14 loops=1)"
" Buffers: shared hit=14963"
" -> Nested Loop Left Join (cost=1009.14..19514.77 rows=1 width=170) (actual time=56.068..57.206 rows=14 loops=1)"
" Join Filter: (e.work_scope_id = ws.work_scope_id)"
" Buffers: shared hit=14935"
" -> Nested Loop Left Join (cost=1009.14..19513.55 rows=1 width=166) (actual time=56.043..57.102 rows=14 loops=1)"
" Join Filter: (e.esn_id = p.esn_id)"
" Buffers: shared hit=14921"
" -> Nested Loop (cost=9.14..31.40 rows=1 width=125) (actual time=0.081..0.130 rows=1 loops=1)"
" Buffers: shared hit=15"
" -> Nested Loop (cost=8.87..23.08 rows=1 width=118) (actual time=0.069..0.117 rows=1 loops=1)"
" Buffers: shared hit=12"
" -> Nested Loop (cost=8.73..21.86 rows=1 width=108) (actual time=0.055..0.102 rows=1 loops=1)"
" Buffers: shared hit=10"
" -> Nested Loop (cost=8.60..21.65 rows=1 width=46) (actual time=0.046..0.091 rows=1 loops=1)"
" Buffers: shared hit=8"
" -> Hash Join (cost=8.31..13.34 rows=1 width=41) (actual time=0.036..0.081 rows=1 loops=1)"
" Hash Cond: (m.model_id = e.model_id)"
" Buffers: shared hit=5"
" -> Seq Scan on model m (cost=0.00..4.39 rows=239 width=17) (actual time=0.010..0.038 rows=240 loops=1)"
" Buffers: shared hit=2"
" -> Hash (cost=8.30..8.30 rows=1 width=28) (actual time=0.009..0.010 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=3"
" -> Index Scan using esn_pkey on esn e (cost=0.29..8.30 rows=1 width=28) (actual time=0.006..0.006 rows=1 loops=1)"
" Index Cond: (esn_id = 140339)"
" Filter: active"
" Buffers: shared hit=3"
" -> Index Scan using work_order_pkey on work_order w (cost=0.29..8.30 rows=1 width=13) (actual time=0.008..0.008 rows=1 loops=1)"
" Index Cond: (work_order_id = e.work_order_id)"
" Buffers: shared hit=3"
" -> Index Scan using engine_family_pkey on engine_family ef (cost=0.14..0.20 rows=1 width=66) (actual time=0.009..0.009 rows=1 loops=1)"
" Index Cond: (engine_family_id = m.engine_family_id)"
" Buffers: shared hit=2"
" -> Index Scan using site_pkey on site s (cost=0.14..1.15 rows=1 width=14) (actual time=0.013..0.013 rows=1 loops=1)"
" Index Cond: (site_id = ef.site_id)"
" Buffers: shared hit=2"
" -> Index Scan using customer_pkey on customer c (cost=0.27..8.29 rows=1 width=11) (actual time=0.012..0.012 rows=1 loops=1)"
" Index Cond: (customer_id = e.customer_id)"
" Buffers: shared hit=3"
" -> Gather (cost=1000.00..19481.78 rows=29 width=41) (actual time=55.958..56.949 rows=14 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=14906"
" -> Parallel Seq Scan on part p (cost=0.00..18478.88 rows=12 width=41) (actual time=51.855..52.544 rows=5 loops=3)"
" Filter: (active AND (esn_id = 140339))"
" Rows Removed by Filter: 226662"
" Buffers: shared hit=14906"
" -> Seq Scan on work_scope ws (cost=0.00..1.10 rows=10 width=12) (actual time=0.004..0.004 rows=1 loops=14)"
" Buffers: shared hit=14"
" -> Index Scan using source_pkey on source so (cost=0.14..0.57 rows=1 width=13) (actual time=0.005..0.005 rows=1 loops=14)"
" Index Cond: (p.source_id = source_id)"
" Buffers: shared hit=28"
" -> Index Scan using status_pkey on status st (cost=0.13..0.56 rows=1 width=11) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.status_id = status_id)"
" Buffers: shared hit=28"
" -> Index Scan using material_stream_pkey on material_stream mat (cost=0.13..0.56 rows=1 width=9) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.material_stream_id = material_stream_id)"
" Buffers: shared hit=28"
" -> Index Scan using dr_status_pkey on dr_status dr (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.dr_status_id = dr_status_id)"
" -> Index Scan using hpc_status_pkey on hpc_status hpc (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.hpc_status_id = hpc_status_id)"
" -> Index Scan using module_pkey on module (cost=0.14..0.57 rows=1 width=6) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.module_id = module_id)"
" -> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)"
" Filter: (comment_index = 1)"
" Rows Removed by Filter: 158057"
" Buffers: shared hit=5226, temp read=48936 written=6835"
" -> Index Scan using esn_dates_esn_id_key on esn_dates ed (cost=0.29..8.32 rows=1 width=20) (actual time=0.019..0.020 rows=1 loops=14)"
" Index Cond: (esn_id = 140339)"
" Filter: ((gate_3_stop_actual AND (gate_3_stop >= now())) OR (gate_3_stop IS NULL) OR ((NOT gate_3_stop_actual) AND (gate_3_stop IS NOT NULL) AND (gate_3_stop >= (now() - '730 days'::interval))))"
" Buffers: shared hit=42"
"Planning time: 6.564 ms"
"Execution time: 988.335 ms"
The actual view definition that the above select runs against:
with indexed_comments as (
select
part_comment.part_id,
part_comment.comment,
row_number() over (partition by part_comment.part_id
order by
part_comment.created_at desc) as comment_index
from
moa.part_comment
)
select
e.esn_id,
e.name as esn,
e.is_qp_engine as quarter_point,
w.number as work_order_number,
case
when (p.part_id is null) then 0
else p.part_id
end as part_id,
p.part_number,
p.part_description,
p.quantity,
st.name as status,
p.status_id,
mat.name as material_stream,
p.material_stream_id,
so.name as source,
p.source_id,
p.oem,
p.po_number,
p.manual_cso_commit,
p.auto_cso_commit,
coalesce(p.manual_cso_commit, p.auto_cso_commit) as calculated_cso_commit,
(coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) + p.accum_offset) as adjusted_accum,
dr.name as dr_status,
p.dr_status_id,
p.airway_bill,
p.core_material,
hpc.hpc_status_name as hpc_status,
p.hpc_status_id,
module.module_name,
p.module_id,
c.name as customer,
c.customer_id,
m.name as model,
m.model_id,
ef.name as engine_family,
ef.engine_family_id,
s.label as site,
s.site_id,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) > coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as critical,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) <= coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as numshort,
case
when ((p.esn_id is not null)
and (coalesce(p.manual_cso_commit, p.auto_cso_commit) is null)) then 1
else 0
end as wa,
ed.adj_accum_date,
(ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)) as g2_otr,
ed.gate_0_start as induct_date,
coalesce((coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0))) - max(coalesce(p.manual_cso_commit, p.auto_cso_commit))), 0) as delta,
coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) as fiscal_week_bucket_date,
p.po_line_num,
p.ship_out,
p.receipt,
p.crit_ship,
e.work_scope_id,
ws.name as work_scope,
p.late_call,
p.ex_esn,
p.accum_offset,
ic.comment as latest_comment
from
(((((((((((((((moa.esn e
join moa.work_order w
using (work_order_id))
join moa.model m
using (model_id))
join moa.engine_family ef on
((m.engine_family_id = ef.engine_family_id)))
join moa.site s on
((ef.site_id = s.site_id)))
join moa.customer c
using (customer_id))
left join moa.part p on
(((e.esn_id = p.esn_id)
and (p.active <> false))))
left join moa.work_scope ws on
((e.work_scope_id = ws.work_scope_id)))
left join moa.source so on
((p.source_id = so.source_id)))
left join moa.status st on
((p.status_id = st.status_id)))
left join moa.material_stream mat
using (material_stream_id))
left join moa.dr_status dr
using (dr_status_id))
left join moa.hpc_status hpc
using (hpc_status_id))
left join moa.module module
using (module_id))
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
join moa.esn_dates ed on
((e.esn_id = ed.esn_id)))
where
((e.active = true)
and (((ed.gate_3_stop_actual = true)
and (ed.gate_3_stop >= now()))
or (ed.gate_3_stop is null)
or ((ed.gate_3_stop_actual = false)
and (ed.gate_3_stop is not null)
and (ed.gate_3_stop >= (now() - '730 days'::interval)))))
group by
e.esn_id,
w.number,
s.label,
c.name,
p.active,
ed.adj_accum_date,
coalesce(ed.gate_2_otr, 0),
ed.gate_0_start,
ed.gate_1_stop,
p.part_id,
st.name,
mat.name,
so.name,
dr.name,
hpc.hpc_status_name,
module.module_name,
c.customer_id,
m.name,
m.model_id,
ef.name,
ef.engine_family_id,
s.site_id,
ws.name,
ic.comment;
What a horrific query.
Most of the time is going to this:
-> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)
And the main culprit there is a misestimation in the upper sibling node. The planner thinks it will need to do the CTE Scan one time, but it actually needs to do it 14 times (although apparently returning the same answer each time). If it knew it would run repeatedly, it would set up a hash table rather than just iterating through the CTE each time. But since setting up the hash requires one iteration through it anyway, that doesn't seem worthwhile when the planner thinks only one iteration is needed in the first place.
I don't know how to fix the estimation problem. But you could compute the ranks on the fly, rather than computing them all up front and then searching through them. You would do that with a LATERAL join.
Change
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
to
left join lateral (
    select comment
    from part_comment pc
    where p.part_id = pc.part_id
    order by created_at desc
    limit 1
) ic on true
and get rid of the with indexed_comments as ... CTE entirely.
For this to be fast you would need an index on part_comment (part_id, created_at).
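That supporting index can be created like this (the index name is illustrative; the schema follows the view definition):

```sql
-- Sketch: lets the lateral subquery fetch the newest comment per part
-- with a single index descent instead of sorting all of that part's rows.
CREATE INDEX part_comment_part_created_idx
    ON moa.part_comment (part_id, created_at DESC);
```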

Optimisation on postgres query

I am looking for optimization suggestions for the query below on Postgres. I'm not a DBA, so I'm looking for some expert advice here.
The devices table holds device_id values, which are hexadecimal.
To achieve high throughput we run 6 instances of this query in parallel, with pattern matching on device_id
beginning with [0-2], [3-5], [6-9], [a-c], [d-f].
When we run just one instance of the query it works fine, but with 6 instances we get an error:
[6669]:FATAL: connection to client lost
explain analyze select notifications.id, notifications.status, events.alert_type,
events.id as event_id, events.payload, notifications.device_id as device_id,
device_endpoints.region, device_endpoints.device_endpoint as endpoint
from notifications
inner join events
on notifications.event_id = events.id
inner join devices
on notifications.device_id = devices.id
inner join device_endpoints
on devices.id = device_endpoints.device_id
where notifications.status = 'pending' AND notifications.region = 'ap-southeast-2'
AND devices.device_id ~ '[0-9a-f].*'
limit 10000;
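(Editorial aside: Postgres regex matches are unanchored, so '[0-9a-f].*' matches any device_id that contains a hex character anywhere, i.e. effectively every row, and all six instances end up fighting over the same rows. For the prefix partitioning described above, each instance's pattern would normally be anchored with ^; a sketch:)

```sql
-- Sketch: each worker claims a disjoint, anchored prefix range.
AND devices.device_id ~ '^[0-2]'   -- instance 1
-- instance 2 would use '^[3-5]', instance 3 '^[6-9]', and so on
```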
Output of EXPLAIN ANALYZE:
"Limit (cost=25.62..1349.23 rows=206 width=202) (actual time=0.359..0.359 rows=0 loops=1)"
" -> Nested Loop (cost=25.62..1349.23 rows=206 width=202) (actual time=0.357..0.357 rows=0 loops=1)"
" Join Filter: (notifications.device_id = devices.id)"
" -> Nested Loop (cost=25.33..1258.73 rows=206 width=206) (actual time=0.357..0.357 rows=0 loops=1)"
" -> Hash Join (cost=25.04..61.32 rows=206 width=52) (actual time=0.043..0.172 rows=193 loops=1)"
" Hash Cond: (notifications.event_id = events.id)"
" -> Index Scan using idx_notifications_status on notifications (cost=0.42..33.87 rows=206 width=16) (actual time=0.013..0.100 rows=193 loops=1)"
" Index Cond: (status = 'pending'::notification_status)"
" Filter: (region = 'ap-southeast-2'::text)"
" -> Hash (cost=16.50..16.50 rows=650 width=40) (actual time=0.022..0.022 rows=34 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 14kB"
" -> Seq Scan on events (cost=0.00..16.50 rows=650 width=40) (actual time=0.005..0.014 rows=34 loops=1)"
" -> Index Scan using idx_device_endpoints_device_id on device_endpoints (cost=0.29..5.80 rows=1 width=154) (actual time=0.001..0.001 rows=0 loops=193)"
" Index Cond: (device_id = notifications.device_id)"
" -> Index Scan using devices_pkey on devices (cost=0.29..0.43 rows=1 width=4) (never executed)"
" Index Cond: (id = device_endpoints.device_id)"
" Filter: (device_id ~ '[0-9a-f].*'::text)"
"Planning time: 0.693 ms"
"Execution time: 0.404 ms"

PostgreSQL Big Query optimization

select aa.id as attendance_approvals_id, al.id, al.check_in_time, al.check_out_time,
al.check_in_selfie, al.check_out_selfie, al.check_in_approval, al.check_out_approval,
al.check_in_distance_variation, al.check_out_distance_variation, al.updated_at,
ar.attendance_reason, a.start_date, a.end_date, e.first_name as employee_name,e.profile_picture,
aa.action as action, aa.updated_at, b.branch_name, e.emp_id, e.id as employee_id
from attendances a
inner join attendance_logs al on a.id = al.attendance_id
and a.delete_flag = 0 and a.start_date between '2018-11-21' and '2018-11-28'
inner join attendance_approvals aa on aa.attendance_log_id = al.id
and aa.approval_flag = 0 and aa.active_flag = 1
inner join attendance_reasons ar on ar.id = al.reason_id
inner join employees e on e.id = a.employee_id and e.manager_id = 16266
inner join branches b on b.id = e.branch_id
inner join employee_shift_mappings esm on esm.emp_id = e.id and esm.shift_date = a.start_date
group by aa.id, al.id, al.check_in_time, al.check_out_time, al.check_in_selfie,
al.check_out_selfie, al.check_in_approval,
al.check_out_approval, al.check_in_distance_variation, al.check_out_distance_variation,
al.updated_at, ar.attendance_reason,
a.start_date, a.end_date, e.first_name, e.profile_picture, aa.action, aa.updated_at,
b.branch_name,e.emp_id, e.id
order by aa.id desc
How can this query be optimized? It takes more than 35 seconds to execute.
The attendances table has 7,700,000 records.
The attendance_logs table has 6,400,000 records.
The attendance_approvals table has 3,900,000 records.
The attendance_reasons table has 570 records.
The employees table has 60,000 records.
The employee_shift_mappings table has 1,300,000 records.
Here is my query plan:
"Group (cost=94304.75..94304.77 rows=1 width=365) (actual time=23724.836..23724.859 rows=43 loops=1)"
" Group Key: aa.id, al.id, ar.attendance_reason, a.start_date, a.end_date, b.branch_name, e.id"
" -> Sort (cost=94304.75..94304.76 rows=1 width=365) (actual time=23724.832..23724.839 rows=43 loops=1)"
" Sort Key: aa.id DESC, al.id, ar.attendance_reason, a.start_date, a.end_date, b.branch_name, e.id"
" Sort Method: quicksort Memory: 47kB"
" -> Nested Loop (cost=2.15..94304.74 rows=1 width=365) (actual time=258.375..23724.602 rows=43 loops=1)"
" -> Nested Loop (cost=1.86..94297.49 rows=1 width=350) (actual time=258.364..23724.098 rows=43 loops=1)"
" Join Filter: (al.reason_id = ar.id)"
" Rows Removed by Join Filter: 24467"
" -> Nested Loop (cost=1.86..94277.15 rows=1 width=338) (actual time=258.344..23719.534 rows=43 loops=1)"
" -> Nested Loop (cost=1.42..5220.10 rows=2 width=322) (actual time=0.615..26.969 rows=344 loops=1)"
" -> Nested Loop (cost=0.99..5204.67 rows=2 width=127) (actual time=0.600..22.920 rows=394 loops=1)"
" Join Filter: (e.id = esm.emp_id)"
" -> Nested Loop (cost=0.56..5116.43 rows=11 width=131) (actual time=0.573..17.221 rows=394 loops=1)"
" -> Seq Scan on employees e (cost=0.00..4437.62 rows=79 width=111) (actual time=0.334..15.277 rows=84 loops=1)"
" Filter: (manager_id = 16266)"
" Rows Removed by Filter: 59110"
" -> Index Scan using uk_attendance_status on attendances a (cost=0.56..8.58 rows=1 width=20) (actual time=0.008..0.017 rows=5 loops=84)"
" Index Cond: ((employee_id = e.id) AND (start_date >= '2018-11-21 00:00:00'::timestamp without time zone) AND (start_date <= '2018-11-28 00:00:00'::timestamp without time zone) AND (delete_flag = 0))"
" -> Index Only Scan using idx_shift_mapping on employee_shift_mappings esm (cost=0.43..8.01 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=394)"
" Index Cond: ((emp_id = a.employee_id) AND (shift_date = a.start_date))"
" Heap Fetches: 394"
" -> Index Scan using index_attendance_logs on attendance_logs al (cost=0.43..7.71 rows=1 width=203) (actual time=0.007..0.008 rows=1 loops=394)"
" Index Cond: (attendance_id = a.id)"
" -> Index Scan using index_attendance_approvals on attendance_approvals aa (cost=0.43..44528.51 rows=1 width=20) (actual time=68.758..68.872 rows=0 loops=344)"
" Index Cond: ((attendance_log_id = al.id) AND (active_flag = 1))"
" Filter: (approval_flag = 0)"
" Rows Removed by Filter: 1"
" -> Seq Scan on attendance_reasons ar (cost=0.00..12.93 rows=593 width=20) (actual time=0.004..0.055 rows=570 loops=43)"
" -> Index Scan using branches_pkey on branches b (cost=0.29..7.24 rows=1 width=23) (actual time=0.008..0.008 rows=1 loops=43)"
" Index Cond: (id = e.branch_id)"
"Planning time: 3.099 ms"
"Execution time: 23725.179 ms"
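(Editorial aside: nearly all of the 23.7 s is in the scan on index_attendance_approvals, at roughly 68 ms per loop over 344 loops for at most one matching row each, with an estimated cost range of 0.43..44528.51. Such a wide range for a single-row lookup suggests attendance_log_id may not be the leading column of that index, so each probe walks a large part of it. A composite index is a plausible experiment; the name below is illustrative:)

```sql
-- Sketch: lead with the join key so each probe is a direct lookup,
-- then the filter columns so no heap recheck is needed for them.
CREATE INDEX idx_approvals_log_active
    ON attendance_approvals (attendance_log_id, active_flag, approval_flag);
```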

Postgres explain plans are different for the same query with different values

I have databases running Postgres 9.5.6 on Heroku.
I'm running the following SQL with different parameter values, but getting very different performance.
Query 1
SELECT COUNT(s), DATE_TRUNC('MONTH', t.departure)
FROM tk_seat s
LEFT JOIN tk_trip t ON t.trip_id = s.trip_id
WHERE DATE_PART('year', t.departure)= '2017'
AND t.trip_status = 'BOOKABLE'
AND t.route_id = '278'
AND s.seat_status_type != 'NONE'
AND s.operator_id = '15'
GROUP BY DATE_TRUNC('MONTH', t.departure)
ORDER BY DATE_TRUNC('MONTH', t.departure)
Query 2
SELECT COUNT(s), DATE_TRUNC('MONTH', t.departure)
FROM tk_seat s
LEFT JOIN tk_trip t ON t.trip_id = s.trip_id
WHERE DATE_PART('year', t.departure)= '2017'
AND t.trip_status = 'BOOKABLE'
AND t.route_id = '150'
AND s.seat_status_type != 'NONE'
AND s.operator_id = '15'
GROUP BY DATE_TRUNC('MONTH', t.departure)
ORDER BY DATE_TRUNC('MONTH', t.departure)
The only difference is the t.route_id value.
So I ran EXPLAIN, and it gives me very different results.
For Query 1
"GroupAggregate (cost=279335.17..279335.19 rows=1 width=298)"
" Group Key: (date_trunc('MONTH'::text, t.departure))"
" -> Sort (cost=279335.17..279335.17 rows=1 width=298)"
" Sort Key: (date_trunc('MONTH'::text, t.departure))"
" -> Nested Loop (cost=0.00..279335.16 rows=1 width=298)"
" Join Filter: (s.trip_id = t.trip_id)"
" -> Seq Scan on tk_trip t (cost=0.00..5951.88 rows=1 width=12)"
" Filter: (((trip_status)::text = 'BOOKABLE'::text) AND (route_id = '278'::bigint) AND (date_part('year'::text, departure) = '2017'::double precision))"
" -> Seq Scan on tk_seat s (cost=0.00..271738.35 rows=131594 width=298)"
" Filter: (((seat_status_type)::text <> 'NONE'::text) AND (operator_id = '15'::bigint))"
For Query 2
"Sort (cost=278183.94..278183.95 rows=1 width=298)"
" Sort Key: (date_trunc('MONTH'::text, t.departure))"
" -> HashAggregate (cost=278183.92..278183.93 rows=1 width=298)"
" Group Key: date_trunc('MONTH'::text, t.departure)"
" -> Hash Join (cost=5951.97..278183.88 rows=7 width=298)"
" Hash Cond: (s.trip_id = t.trip_id)"
" -> Seq Scan on tk_seat s (cost=0.00..271738.35 rows=131594 width=298)"
" Filter: (((seat_status_type)::text <> 'NONE'::text) AND (operator_id = '15'::bigint))"
" -> Hash (cost=5951.88..5951.88 rows=7 width=12)"
" -> Seq Scan on tk_trip t (cost=0.00..5951.88 rows=7 width=12)"
" Filter: (((trip_status)::text = 'BOOKABLE'::text) AND (route_id = '150'::bigint) AND (date_part('year'::text, departure) = '2017'::double precision))"
My question is: why, and how can I make them the same? The first query gives me very bad performance.
Query 1 Analyze
"GroupAggregate (cost=274051.28..274051.31 rows=1 width=8) (actual time=904682.606..904684.283 rows=7 loops=1)"
" Group Key: (date_trunc('MONTH'::text, t.departure))"
" -> Sort (cost=274051.28..274051.29 rows=1 width=8) (actual time=904682.432..904682.917 rows=13520 loops=1)"
" Sort Key: (date_trunc('MONTH'::text, t.departure))"
" Sort Method: quicksort Memory: 1018kB"
" -> Nested Loop (cost=0.42..274051.27 rows=1 width=8) (actual time=1133.925..904676.254 rows=13520 loops=1)"
" Join Filter: (s.trip_id = t.trip_id)"
" Rows Removed by Join Filter: 42505528"
" -> Index Scan using tk_trip_route_id_idx on tk_trip t (cost=0.42..651.34 rows=1 width=12) (actual time=0.020..2.720 rows=338 loops=1)"
" Index Cond: (route_id = '278'::bigint)"
" Filter: (((trip_status)::text = 'BOOKABLE'::text) AND (date_part('year'::text, departure) = '2017'::double precision))"
" Rows Removed by Filter: 28"
" -> Seq Scan on tk_seat s (cost=0.00..271715.83 rows=134728 width=8) (actual time=0.071..2662.102 rows=125796 loops=338)"
" Filter: (((seat_status_type)::text <> 'NONE'::text) AND (operator_id = '15'::bigint))"
" Rows Removed by Filter: 6782294"
"Planning time: 1.172 ms"
"Execution time: 904684.570 ms"
Query 2 Analyze
"Sort (cost=275018.88..275018.89 rows=1 width=8) (actual time=2153.843..2153.843 rows=9 loops=1)"
" Sort Key: (date_trunc('MONTH'::text, t.departure))"
" Sort Method: quicksort Memory: 25kB"
" -> HashAggregate (cost=275018.86..275018.87 rows=1 width=8) (actual time=2153.833..2153.834 rows=9 loops=1)"
" Group Key: date_trunc('MONTH'::text, t.departure)"
" -> Hash Join (cost=2797.67..275018.82 rows=7 width=8) (actual time=2.472..2147.093 rows=36565 loops=1)"
" Hash Cond: (s.trip_id = t.trip_id)"
" -> Seq Scan on tk_seat s (cost=0.00..271715.83 rows=134728 width=8) (actual time=0.127..2116.153 rows=125796 loops=1)"
" Filter: (((seat_status_type)::text <> 'NONE'::text) AND (operator_id = '15'::bigint))"
" Rows Removed by Filter: 6782294"
" -> Hash (cost=2797.58..2797.58 rows=7 width=12) (actual time=1.853..1.853 rows=1430 loops=1)"
" Buckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory Usage: 78kB"
" -> Bitmap Heap Scan on tk_trip t (cost=32.21..2797.58 rows=7 width=12) (actual time=0.176..1.559 rows=1430 loops=1)"
" Recheck Cond: (route_id = '150'::bigint)"
" Filter: (((trip_status)::text = 'BOOKABLE'::text) AND (date_part('year'::text, departure) = '2017'::double precision))"
" Rows Removed by Filter: 33"
" Heap Blocks: exact=333"
" -> Bitmap Index Scan on tk_trip_route_id_idx (cost=0.00..32.21 rows=1572 width=0) (actual time=0.131..0.131 rows=1463 loops=1)"
" Index Cond: (route_id = '150'::bigint)"
"Planning time: 0.211 ms"
"Execution time: 2153.972 ms"
You can possibly make them the same if you hint Postgres not to use nested loops:
SET enable_nestloop = 'off';
You can make it permanent by setting it at the database level, the role level, inside a function definition, or in the server configuration:
ALTER DATABASE postgres
SET enable_nestloop = 'off';
ALTER ROLE lkaminski
SET enable_nestloop = 'off';
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'select $1 + $2;'
LANGUAGE SQL
SET enable_nestloop = 'off'
IMMUTABLE
RETURNS NULL ON NULL INPUT;
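If you only want the hint for this one query rather than permanently, a less invasive option (a sketch using Query 1 from the question) is to scope the setting to a single transaction with SET LOCAL, which reverts automatically at COMMIT or ROLLBACK:

```sql
BEGIN;
-- only affects statements inside this transaction
SET LOCAL enable_nestloop = 'off';
SELECT COUNT(s), DATE_TRUNC('MONTH', t.departure)
FROM tk_seat s
LEFT JOIN tk_trip t ON t.trip_id = s.trip_id
WHERE DATE_PART('year', t.departure) = '2017'
  AND t.trip_status = 'BOOKABLE'
  AND t.route_id = '278'
  AND s.seat_status_type != 'NONE'
  AND s.operator_id = '15'
GROUP BY DATE_TRUNC('MONTH', t.departure)
ORDER BY DATE_TRUNC('MONTH', t.departure);
COMMIT;
```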
As for why: you changed the search condition, and the planner now estimates that tk_trip will return 1 row instead of 7, so it switches plans because a nested loop looks cheaper for a single row. The planner is sometimes wrong about those estimates, and then you get a slower execution time. But if you "force" it to avoid nested loops, then for a different parameter the second plan could be slower than the first one (with the nested loop).
You can make the planner's estimates more accurate by increasing how much statistics it gathers for the column. It might help:
ALTER TABLE tk_trip ALTER COLUMN route_id SET STATISTICS 1000;
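The new statistics target only takes effect once the table's statistics are regathered, so follow the ALTER with:

```sql
-- recompute planner statistics for tk_trip with the new target
ANALYZE tk_trip;
```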
As a side note, your LEFT JOIN is effectively an INNER JOIN, because you put the conditions on that table in the WHERE clause instead of the ON clause. You should get a different plan (and a different result) if you move them into ON, assuming you actually wanted a LEFT JOIN.
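For example, a sketch of Query 1 with the tk_trip conditions moved into the ON clause, which preserves LEFT JOIN semantics:

```sql
SELECT COUNT(s), DATE_TRUNC('MONTH', t.departure)
FROM tk_seat s
LEFT JOIN tk_trip t ON t.trip_id = s.trip_id
    -- conditions on the outer table belong in ON, not WHERE,
    -- or unmatched rows are filtered out and the join degrades to INNER
    AND DATE_PART('year', t.departure) = '2017'
    AND t.trip_status = 'BOOKABLE'
    AND t.route_id = '278'
WHERE s.seat_status_type != 'NONE'
  AND s.operator_id = '15'
GROUP BY DATE_TRUNC('MONTH', t.departure)
ORDER BY DATE_TRUNC('MONTH', t.departure);
```

Note that with a true LEFT JOIN, seats with no matching trip are kept and grouped under a NULL month, so the result differs from the original query; if the INNER JOIN result is what you actually want, write INNER JOIN explicitly.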