PostgreSQL big query optimization

select aa.id as attendance_approvals_id, al.id, al.check_in_time, al.check_out_time,
al.check_in_selfie, al.check_out_selfie, al.check_in_approval, al.check_out_approval,
al.check_in_distance_variation, al.check_out_distance_variation, al.updated_at,
ar.attendance_reason, a.start_date, a.end_date, e.first_name as employee_name, e.profile_picture,
aa.action as action, aa.updated_at, b.branch_name, e.emp_id, e.id as employee_id
from attendances a
inner join attendance_logs al on a.id = al.attendance_id
and a.delete_flag = 0 and a.start_date between '2018-11-21' and '2018-11-28'
inner join attendance_approvals aa on aa.attendance_log_id = al.id
and aa.approval_flag = 0 and aa.active_flag = 1
inner join attendance_reasons ar on ar.id = al.reason_id
inner join employees e on e.id = a.employee_id and e.manager_id = 16266
inner join branches b on b.id = e.branch_id
inner join employee_shift_mappings esm on esm.emp_id = e.id and esm.shift_date = a.start_date
group by aa.id, al.id, al.check_in_time, al.check_out_time, al.check_in_selfie,
al.check_out_selfie, al.check_in_approval,
al.check_out_approval, al.check_in_distance_variation, al.check_out_distance_variation,
al.updated_at, ar.attendance_reason,
a.start_date, a.end_date, e.first_name, e.profile_picture, aa.action, aa.updated_at,
b.branch_name, e.emp_id, e.id
order by aa.id desc
How can I optimize this query? It takes more than 35 seconds to execute.
attendances table has 7 700 000 records.
attendance_logs table has 6 400 000 records.
attendance_approvals table has 3 900 000 records.
attendance_reasons table has 570 records.
employees table has 60 000 records.
employee_shift_mappings table has 1 300 000 records.
Here is my query plan:
"Group (cost=94304.75..94304.77 rows=1 width=365) (actual time=23724.836..23724.859 rows=43 loops=1)"
" Group Key: aa.id, al.id, ar.attendance_reason, a.start_date, a.end_date, b.branch_name, e.id"
" -> Sort (cost=94304.75..94304.76 rows=1 width=365) (actual time=23724.832..23724.839 rows=43 loops=1)"
" Sort Key: aa.id DESC, al.id, ar.attendance_reason, a.start_date, a.end_date, b.branch_name, e.id"
" Sort Method: quicksort Memory: 47kB"
" -> Nested Loop (cost=2.15..94304.74 rows=1 width=365) (actual time=258.375..23724.602 rows=43 loops=1)"
" -> Nested Loop (cost=1.86..94297.49 rows=1 width=350) (actual time=258.364..23724.098 rows=43 loops=1)"
" Join Filter: (al.reason_id = ar.id)"
" Rows Removed by Join Filter: 24467"
" -> Nested Loop (cost=1.86..94277.15 rows=1 width=338) (actual time=258.344..23719.534 rows=43 loops=1)"
" -> Nested Loop (cost=1.42..5220.10 rows=2 width=322) (actual time=0.615..26.969 rows=344 loops=1)"
" -> Nested Loop (cost=0.99..5204.67 rows=2 width=127) (actual time=0.600..22.920 rows=394 loops=1)"
" Join Filter: (e.id = esm.emp_id)"
" -> Nested Loop (cost=0.56..5116.43 rows=11 width=131) (actual time=0.573..17.221 rows=394 loops=1)"
" -> Seq Scan on employees e (cost=0.00..4437.62 rows=79 width=111) (actual time=0.334..15.277 rows=84 loops=1)"
" Filter: (manager_id = 16266)"
" Rows Removed by Filter: 59110"
" -> Index Scan using uk_attendance_status on attendances a (cost=0.56..8.58 rows=1 width=20) (actual time=0.008..0.017 rows=5 loops=84)"
" Index Cond: ((employee_id = e.id) AND (start_date >= '2018-11-21 00:00:00'::timestamp without time zone) AND (start_date <= '2018-11-28 00:00:00'::timestamp without time zone) AND (delete_flag = 0))"
" -> Index Only Scan using idx_shift_mapping on employee_shift_mappings esm (cost=0.43..8.01 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=394)"
" Index Cond: ((emp_id = a.employee_id) AND (shift_date = a.start_date))"
" Heap Fetches: 394"
" -> Index Scan using index_attendance_logs on attendance_logs al (cost=0.43..7.71 rows=1 width=203) (actual time=0.007..0.008 rows=1 loops=394)"
" Index Cond: (attendance_id = a.id)"
" -> Index Scan using index_attendance_approvals on attendance_approvals aa (cost=0.43..44528.51 rows=1 width=20) (actual time=68.758..68.872 rows=0 loops=344)"
" Index Cond: ((attendance_log_id = al.id) AND (active_flag = 1))"
" Filter: (approval_flag = 0)"
" Rows Removed by Filter: 1"
" -> Seq Scan on attendance_reasons ar (cost=0.00..12.93 rows=593 width=20) (actual time=0.004..0.055 rows=570 loops=43)"
" -> Index Scan using branches_pkey on branches b (cost=0.29..7.24 rows=1 width=23) (actual time=0.008..0.008 rows=1 loops=43)"
" Index Cond: (id = e.branch_id)"
"Planning time: 3.099 ms"
"Execution time: 23725.179 ms"

Related

Postgres query performance improvement

I am a newbie to database optimisation.
The table I am querying has around 29 million rows,
and running the select below in pgAdmin takes about 9 seconds.
What can I do to improve performance?
SELECT
F."Id",
F."Name",
F."Url",
F."CountryModel",
F."RegionModel",
F."CityModel",
F."Street",
F."Phone",
F."PostCode",
F."Images",
F."Rank",
F."CommentCount",
F."PageRank",
F."Description",
F."Properties",
F."IsVerify",
count(*) AS Counter
FROM
public."Firms" F,
LATERAL unnest(F."Properties") AS P
WHERE
F."CountryId" = 1
AND F."RegionId" = 7
AND F."CityId" = 4365
AND P = ANY (ARRAY[126, 128])
AND F."Deleted" = FALSE
GROUP BY
F."Id"
ORDER BY
Counter DESC,
F."IsVerify" DESC,
F."PageRank" DESC OFFSET 10 ROWS FETCH FIRST 20 ROW ONLY
That's my query plan:
" -> Sort (cost=11945.20..11948.15 rows=1178 width=369) (actual time=8981.514..8981.515 rows=30 loops=1)"
" Sort Key: (count(*)) DESC, f.""IsVerify"" DESC, f.""PageRank"" DESC"
" Sort Method: top-N heapsort Memory: 58kB"
" -> HashAggregate (cost=11898.63..11910.41 rows=1178 width=369) (actual time=8981.234..8981.305 rows=309 loops=1)"
" Group Key: f.""Id"""
" Batches: 1 Memory Usage: 577kB"
" -> Nested Loop (cost=7050.07..11886.85 rows=2356 width=360) (actual time=79.408..8980.167 rows=322 loops=1)"
" -> Bitmap Heap Scan on ""Firms"" f (cost=7050.06..11716.04 rows=1178 width=360) (actual time=78.414..8909.649 rows=56071 loops=1)"
" Recheck Cond: ((""CityId"" = 4365) AND (""RegionId"" = 7))"
" Filter: ((NOT ""Deleted"") AND (""CountryId"" = 1))"
" Heap Blocks: exact=55330"
" -> BitmapAnd (cost=7050.06..7050.06 rows=1178 width=0) (actual time=70.947..70.947 rows=0 loops=1)"
" -> Bitmap Index Scan on ""IX_Firms_CityId"" (cost=0.00..635.62 rows=58025 width=0) (actual time=11.563..11.563 rows=56072 loops=1)"
" Index Cond: (""CityId"" = 4365)"
" -> Bitmap Index Scan on ""IX_Firms_RegionId"" (cost=0.00..6413.60 rows=588955 width=0) (actual time=57.795..57.795 rows=598278 loops=1)"
" Index Cond: (""RegionId"" = 7)"
" -> Function Scan on unnest p (cost=0.00..0.13 rows=2 width=0) (actual time=0.001..0.001 rows=0 loops=56071)"
" Filter: (p = ANY ('{126,128}'::integer[]))"
" Rows Removed by Filter: 2"
"Planning Time: 0.351 ms"
"Execution Time: 8981.725 ms"```
Create a GIN index on F."Properties",
create index on "Firms" using gin ("Properties");
then add a clause to your WHERE
...
AND P = ANY (ARRAY[126, 128])
AND "Properties" && ARRAY[126, 128]
....
That added clause is redundant with the one preceding it, but the planner is not smart enough to reason through that, so you need to make it explicit.
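Putting it together, the WHERE clause of the original query would then read as follows; the && (array overlap) test is what the GIN index can serve, while the P = ANY check on the unnested elements still does the exact per-property filtering:
WHERE
F."CountryId" = 1
AND F."RegionId" = 7
AND F."CityId" = 4365
AND P = ANY (ARRAY[126, 128])
AND F."Properties" && ARRAY[126, 128]
AND F."Deleted" = FALSE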

Postgresql Query performance optimization

I have a query which runs fast during off-peak periods, but when there is load it runs very slow.
In New Relic it sometimes shows as running for 5-8 minutes.
The query looks simple, but the view definition may not be that simple,
so I wanted to know whether there is any scope for optimization.
Database version - "PostgreSQL 10.14 on x86_64-pc-linux-gnu, compiled by x86_64-unknown-linux-gnu-gcc (GCC) 4.9.4, 64-bit"
The query which comes up in any monitoring tool is:
SELECT
esnpartvie0_.esn_id AS col_0_0_,
esnpartvie0_.esn AS col_1_0_,
esnpartvie0_.quarter_point AS col_2_0_,
esnpartvie0_.work_order_number AS col_3_0_,
esnpartvie0_.site AS col_4_0_,
sum(esnpartvie0_.critical) AS col_5_0_,
sum(esnpartvie0_.numshort) AS col_6_0_,
sum(esnpartvie0_.wa) AS col_7_0_,
esnpartvie0_.customer AS col_8_0_,
esnpartvie0_.adj_accum_date AS col_9_0_,
esnpartvie0_.g2_otr AS col_10_0_,
esnpartvie0_.induct_date AS col_11_0_,
min(esnpartvie0_.delta) AS col_12_0_,
esnpartvie0_.fiscal_week_bucket_date AS col_13_0_
FROM
moa.esn_part_view esnpartvie0_
WHERE
esnpartvie0_.esn_id = 140339
GROUP BY
esnpartvie0_.esn_id,
esnpartvie0_.esn,
esnpartvie0_.quarter_point,
esnpartvie0_.work_order_number,
esnpartvie0_.site,
esnpartvie0_.customer,
esnpartvie0_.adj_accum_date,
esnpartvie0_.g2_otr,
esnpartvie0_.induct_date,
esnpartvie0_.fiscal_week_bucket_date
The EXPLAIN ANALYZE plan with buffers is below; it is also available at https://explain.depesz.com/s/mr76#html:
"GroupAggregate (cost=69684.12..69684.17 rows=1 width=82) (actual time=976.163..976.228 rows=1 loops=1)"
" Group Key: esnpartvie0_.esn_id, esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Sort (cost=69684.12..69684.13 rows=1 width=70) (actual time=976.153..976.219 rows=14 loops=1)"
" Sort Key: esnpartvie0_.esn, esnpartvie0_.quarter_point, esnpartvie0_.work_order_number, esnpartvie0_.site, esnpartvie0_.customer, esnpartvie0_.adj_accum_date, esnpartvie0_.g2_otr, esnpartvie0_.induct_date, esnpartvie0_.fiscal_week_bucket_date"
" Sort Method: quicksort Memory: 26kB"
" Buffers: shared hit=20301, temp read=48936 written=6835"
" -> Subquery Scan on esnpartvie0_ (cost=69684.02..69684.11 rows=1 width=70) (actual time=976.078..976.158 rows=14 loops=1)"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> GroupAggregate (cost=69684.02..69684.10 rows=1 width=2016) (actual time=976.077..976.155 rows=14 loops=1)"
" Group Key: e.esn_id, w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" CTE indexed_comments"
" -> WindowAgg (cost=40573.82..45076.80 rows=225149 width=118) (actual time=182.537..291.895 rows=216974 loops=1)"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Sort (cost=40573.82..41136.69 rows=225149 width=110) (actual time=182.528..215.549 rows=216974 loops=1)"
" Sort Key: part_comment.part_id, part_comment.created_at DESC"
" Sort Method: external merge Disk: 26552kB"
" Buffers: shared hit=5226, temp read=3319 written=3327"
" -> Seq Scan on part_comment (cost=0.00..7474.49 rows=225149 width=110) (actual time=0.014..38.209 rows=216974 loops=1)"
" Buffers: shared hit=5223"
" -> Sort (cost=24607.21..24607.22 rows=1 width=717) (actual time=976.069..976.133 rows=14 loops=1)"
" Sort Key: w.number, ed.adj_accum_date, (COALESCE(ed.gate_2_otr, 0)), ed.gate_0_start, ed.gate_1_stop, p.part_id, st.name, mat.name, so.name, dr.name, hpc.hpc_status_name, module.module_name, c.customer_id, m.model_id, ef.engine_family_id, s.site_id, ws.name, ic.comment"
" Sort Method: quicksort Memory: 28kB"
" Buffers: shared hit=20290, temp read=48936 written=6835"
" -> Nested Loop (cost=1010.23..24607.20 rows=1 width=717) (actual time=442.381..976.017 rows=14 loops=1)"
" Buffers: shared hit=20287, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..24598.88 rows=1 width=697) (actual time=442.337..975.670 rows=14 loops=1)"
" Join Filter: (ic.part_id = p.part_id)"
" Rows Removed by Join Filter: 824838"
" Buffers: shared hit=20245, temp read=48936 written=6835"
" -> Nested Loop Left Join (cost=1009.94..19518.95 rows=1 width=181) (actual time=56.148..57.676 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.81..19518.35 rows=1 width=183) (actual time=56.139..57.635 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.67..19517.67 rows=1 width=181) (actual time=56.133..57.598 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.55..19516.82 rows=1 width=179) (actual time=56.124..57.544 rows=14 loops=1)"
" Buffers: shared hit=15019"
" -> Nested Loop Left Join (cost=1009.42..19516.04 rows=1 width=178) (actual time=56.105..57.439 rows=14 loops=1)"
" Buffers: shared hit=14991"
" -> Nested Loop Left Join (cost=1009.28..19515.37 rows=1 width=175) (actual time=56.089..57.335 rows=14 loops=1)"
" Buffers: shared hit=14963"
" -> Nested Loop Left Join (cost=1009.14..19514.77 rows=1 width=170) (actual time=56.068..57.206 rows=14 loops=1)"
" Join Filter: (e.work_scope_id = ws.work_scope_id)"
" Buffers: shared hit=14935"
" -> Nested Loop Left Join (cost=1009.14..19513.55 rows=1 width=166) (actual time=56.043..57.102 rows=14 loops=1)"
" Join Filter: (e.esn_id = p.esn_id)"
" Buffers: shared hit=14921"
" -> Nested Loop (cost=9.14..31.40 rows=1 width=125) (actual time=0.081..0.130 rows=1 loops=1)"
" Buffers: shared hit=15"
" -> Nested Loop (cost=8.87..23.08 rows=1 width=118) (actual time=0.069..0.117 rows=1 loops=1)"
" Buffers: shared hit=12"
" -> Nested Loop (cost=8.73..21.86 rows=1 width=108) (actual time=0.055..0.102 rows=1 loops=1)"
" Buffers: shared hit=10"
" -> Nested Loop (cost=8.60..21.65 rows=1 width=46) (actual time=0.046..0.091 rows=1 loops=1)"
" Buffers: shared hit=8"
" -> Hash Join (cost=8.31..13.34 rows=1 width=41) (actual time=0.036..0.081 rows=1 loops=1)"
" Hash Cond: (m.model_id = e.model_id)"
" Buffers: shared hit=5"
" -> Seq Scan on model m (cost=0.00..4.39 rows=239 width=17) (actual time=0.010..0.038 rows=240 loops=1)"
" Buffers: shared hit=2"
" -> Hash (cost=8.30..8.30 rows=1 width=28) (actual time=0.009..0.010 rows=1 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 9kB"
" Buffers: shared hit=3"
" -> Index Scan using esn_pkey on esn e (cost=0.29..8.30 rows=1 width=28) (actual time=0.006..0.006 rows=1 loops=1)"
" Index Cond: (esn_id = 140339)"
" Filter: active"
" Buffers: shared hit=3"
" -> Index Scan using work_order_pkey on work_order w (cost=0.29..8.30 rows=1 width=13) (actual time=0.008..0.008 rows=1 loops=1)"
" Index Cond: (work_order_id = e.work_order_id)"
" Buffers: shared hit=3"
" -> Index Scan using engine_family_pkey on engine_family ef (cost=0.14..0.20 rows=1 width=66) (actual time=0.009..0.009 rows=1 loops=1)"
" Index Cond: (engine_family_id = m.engine_family_id)"
" Buffers: shared hit=2"
" -> Index Scan using site_pkey on site s (cost=0.14..1.15 rows=1 width=14) (actual time=0.013..0.013 rows=1 loops=1)"
" Index Cond: (site_id = ef.site_id)"
" Buffers: shared hit=2"
" -> Index Scan using customer_pkey on customer c (cost=0.27..8.29 rows=1 width=11) (actual time=0.012..0.012 rows=1 loops=1)"
" Index Cond: (customer_id = e.customer_id)"
" Buffers: shared hit=3"
" -> Gather (cost=1000.00..19481.78 rows=29 width=41) (actual time=55.958..56.949 rows=14 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=14906"
" -> Parallel Seq Scan on part p (cost=0.00..18478.88 rows=12 width=41) (actual time=51.855..52.544 rows=5 loops=3)"
" Filter: (active AND (esn_id = 140339))"
" Rows Removed by Filter: 226662"
" Buffers: shared hit=14906"
" -> Seq Scan on work_scope ws (cost=0.00..1.10 rows=10 width=12) (actual time=0.004..0.004 rows=1 loops=14)"
" Buffers: shared hit=14"
" -> Index Scan using source_pkey on source so (cost=0.14..0.57 rows=1 width=13) (actual time=0.005..0.005 rows=1 loops=14)"
" Index Cond: (p.source_id = source_id)"
" Buffers: shared hit=28"
" -> Index Scan using status_pkey on status st (cost=0.13..0.56 rows=1 width=11) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.status_id = status_id)"
" Buffers: shared hit=28"
" -> Index Scan using material_stream_pkey on material_stream mat (cost=0.13..0.56 rows=1 width=9) (actual time=0.004..0.004 rows=1 loops=14)"
" Index Cond: (p.material_stream_id = material_stream_id)"
" Buffers: shared hit=28"
" -> Index Scan using dr_status_pkey on dr_status dr (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.dr_status_id = dr_status_id)"
" -> Index Scan using hpc_status_pkey on hpc_status hpc (cost=0.13..0.56 rows=1 width=10) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.hpc_status_id = hpc_status_id)"
" -> Index Scan using module_pkey on module (cost=0.14..0.57 rows=1 width=6) (actual time=0.001..0.001 rows=0 loops=14)"
" Index Cond: (p.module_id = module_id)"
" -> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)"
" Filter: (comment_index = 1)"
" Rows Removed by Filter: 158057"
" Buffers: shared hit=5226, temp read=48936 written=6835"
" -> Index Scan using esn_dates_esn_id_key on esn_dates ed (cost=0.29..8.32 rows=1 width=20) (actual time=0.019..0.020 rows=1 loops=14)"
" Index Cond: (esn_id = 140339)"
" Filter: ((gate_3_stop_actual AND (gate_3_stop >= now())) OR (gate_3_stop IS NULL) OR ((NOT gate_3_stop_actual) AND (gate_3_stop IS NOT NULL) AND (gate_3_stop >= (now() - '730 days'::interval))))"
" Buffers: shared hit=42"
"Planning time: 6.564 ms"
"Execution time: 988.335 ms"
The actual View definition on which the above select is running
with indexed_comments as (
select
part_comment.part_id,
part_comment.comment,
row_number() over (partition by part_comment.part_id
order by
part_comment.created_at desc) as comment_index
from
moa.part_comment
)
select
e.esn_id,
e.name as esn,
e.is_qp_engine as quarter_point,
w.number as work_order_number,
case
when (p.part_id is null) then 0
else p.part_id
end as part_id,
p.part_number,
p.part_description,
p.quantity,
st.name as status,
p.status_id,
mat.name as material_stream,
p.material_stream_id,
so.name as source,
p.source_id,
p.oem,
p.po_number,
p.manual_cso_commit,
p.auto_cso_commit,
coalesce(p.manual_cso_commit, p.auto_cso_commit) as calculated_cso_commit,
(coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) + p.accum_offset) as adjusted_accum,
dr.name as dr_status,
p.dr_status_id,
p.airway_bill,
p.core_material,
hpc.hpc_status_name as hpc_status,
p.hpc_status_id,
module.module_name,
p.module_id,
c.name as customer,
c.customer_id,
m.name as model,
m.model_id,
ef.name as engine_family,
ef.engine_family_id,
s.label as site,
s.site_id,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) > coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as critical,
case
when (coalesce(p.manual_cso_commit, p.auto_cso_commit) <= coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start)) then 1
else 0
end as numshort,
case
when ((p.esn_id is not null)
and (coalesce(p.manual_cso_commit, p.auto_cso_commit) is null)) then 1
else 0
end as wa,
ed.adj_accum_date,
(ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)) as g2_otr,
ed.gate_0_start as induct_date,
coalesce((coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0))) - max(coalesce(p.manual_cso_commit, p.auto_cso_commit))), 0) as delta,
coalesce(ed.adj_accum_date, (ed.gate_1_stop + coalesce(ed.gate_2_otr, 0)), ed.gate_0_start) as fiscal_week_bucket_date,
p.po_line_num,
p.ship_out,
p.receipt,
p.crit_ship,
e.work_scope_id,
ws.name as work_scope,
p.late_call,
p.ex_esn,
p.accum_offset,
ic.comment as latest_comment
from
(((((((((((((((moa.esn e
join moa.work_order w
using (work_order_id))
join moa.model m
using (model_id))
join moa.engine_family ef on
((m.engine_family_id = ef.engine_family_id)))
join moa.site s on
((ef.site_id = s.site_id)))
join moa.customer c
using (customer_id))
left join moa.part p on
(((e.esn_id = p.esn_id)
and (p.active <> false))))
left join moa.work_scope ws on
((e.work_scope_id = ws.work_scope_id)))
left join moa.source so on
((p.source_id = so.source_id)))
left join moa.status st on
((p.status_id = st.status_id)))
left join moa.material_stream mat
using (material_stream_id))
left join moa.dr_status dr
using (dr_status_id))
left join moa.hpc_status hpc
using (hpc_status_id))
left join moa.module module
using (module_id))
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
join moa.esn_dates ed on
((e.esn_id = ed.esn_id)))
where
((e.active = true)
and (((ed.gate_3_stop_actual = true)
and (ed.gate_3_stop >= now()))
or (ed.gate_3_stop is null)
or ((ed.gate_3_stop_actual = false)
and (ed.gate_3_stop is not null)
and (ed.gate_3_stop >= (now() - '730 days'::interval)))))
group by
e.esn_id,
w.number,
s.label,
c.name,
p.active,
ed.adj_accum_date,
coalesce(ed.gate_2_otr, 0),
ed.gate_0_start,
ed.gate_1_stop,
p.part_id,
st.name,
mat.name,
so.name,
dr.name,
hpc.hpc_status_name,
module.module_name,
c.customer_id,
m.name,
m.model_id,
ef.name,
ef.engine_family_id,
s.site_id,
ws.name,
ic.comment;
What a horrific query.
Most of the time is going to this:
-> CTE Scan on indexed_comments ic (cost=0.00..5065.85 rows=1126 width=520) (actual time=13.043..61.251 rows=58917 loops=14)
And the main culprit there is a misestimation in the upper sibling node. The planner thinks it will need to run the CTE Scan one time, but it actually needs to run it 14 times (although apparently returning the same answer each time). If it knew the scan would run repeatedly, it would set up a hash table rather than just iterating through the CTE each time. But since setting up the hash requires one iteration through it, that doesn't appear to save anything when the planner thinks only one iteration is needed in the first place.
I don't know how to fix the estimation problem. But you could compute the ranks on the fly, rather than computing them all up front and then searching through them. You would do that with a LATERAL join.
Change
left join indexed_comments ic on
(((ic.part_id = p.part_id)
and (ic.comment_index = 1))))
to
left join lateral (select comment from part_comment pc where p.part_id=pc.part_id order by created_at desc limit 1) ic on true
and get rid of the with indexed_comments as ... CTE.
For this to be fast you would need an index ON part_comment (part_id, created_at).
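A sketch of that index; a descending created_at matches the ORDER BY ... DESC LIMIT 1 in the lateral subquery, though PostgreSQL can just as well walk a plain ascending index backwards:
create index on moa.part_comment (part_id, created_at desc);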

postgres optimize scope search query

I use Postgres 10.4 in a Linux environment.
I have a query which is very slow when I run it.
Searching on subject in Arabic is very slow,
and the problem also exists with subjects in other languages.
I have about 1 million records per year.
I tried adding an index to the transfer table, but the result is the same:
CREATE INDEX subject
ON public.transfer
USING btree
(subject COLLATE pg_catalog."default" varchar_pattern_ops);
This is the query:
select * from ( with scope as (
select unit_id from public.sec_unit
where emp_id = 'EM-001' and app_type in ('S','E') )
select CAST (row_number() OVER (PARTITION BY advsearch.correspondenceid)
as VARCHAR(15)) as numline, advsearch.*
from (
SELECT Transfer.id AS id, CORRESP.id AS correspondenceId,
Transfer.correspondencecopy_id AS correspondencecopyId, Transfer.datesendjctransfer AS datesendjctransfer
FROM Transfer Transfer
Left outer JOIN correspondencecopy CORRESPCPY ON Transfer.correspondencecopy_id = CORRESPCPY.id
Left outer JOIN correspondence CORRESP ON CORRESP.id = CORRESPCPY.correspondence_id
LEFT OUTER JOIN scope sc on sc.unit_id = Transfer.unittransto_id or sc.unit_id='allorg'
LEFT OUTER JOIN employee emp on emp.id = 'EM-001'
WHERE transfer.status='1' AND (Transfer.docyearhjr='1441' )
AND (Transfer.subject like '%'||'رقم'||'%')
AND ( sc.unit_id is not null )
AND (coalesce(emp.confidentiel,'0') >= coalesce(Transfer.confidentiel,'0'))
)
advsearch ) Searchlist
WHERE Searchlist.numline='1'
ORDER BY Searchlist.datesendjctransfer
Can someone help me optimize the query?
Update:
I tried changing the query.
I eliminated the use of scope
and replaced it with a simple condition.
I get the same result (the same number of records),
but the problem is unchanged: the query is still very slow.
select * from (
select CAST (row_number() OVER (PARTITION BY advsearch.correspondenceid)
as VARCHAR(15)) as numline, advsearch.*
from (
SELECT Transfer.id AS id, CORRESP.id AS correspondenceId,
Transfer.correspondencecopy_id AS correspondencecopyId, Transfer.datesendjctransfer AS datesendjctransfer
FROM Transfer Transfer
Left outer JOIN correspondencecopy CORRESPCPY ON Transfer.correspondencecopy_id = CORRESPCPY.id
Left outer JOIN correspondence CORRESP ON CORRESP.id = CORRESPCPY.correspondence_id
LEFT OUTER JOIN employee emp on emp.id = 'EM-001'
WHERE transfer.status='1' and ( Transfer.unittransto_id in (
select unit_id from public.security_employee_unit
where employee_id = 'EM-001' and app_type in ('E','S') )
or 'allorg' in ( select unit_id from public.security_employee_unit
where employee_id = 'EM-001' and app_type in ('S')))
AND (Transfer.docyearhjr='1441' )
AND (Transfer.subject like '%'||'رقم'||'%')
AND (coalesce(emp.confidentiel,'0') >= coalesce(Transfer.confidentiel,'0'))
)
advsearch ) Searchlist
WHERE Searchlist.numline='1'
ORDER BY Searchlist.datesendjctransfer
Update:
I analyzed the query using EXPLAIN ANALYZE.
This is the result:
"Sort (cost=412139.09..412139.13 rows=17 width=87) (actual time=1481.951..1482.166 rows=4497 loops=1)"
" Sort Key: searchlist.datesendjctransfer"
" Sort Method: quicksort Memory: 544kB"
" -> Subquery Scan on searchlist (cost=412009.59..412138.74 rows=17 width=87) (actual time=1457.717..1480.381 rows=4497 loops=1)"
" Filter: ((searchlist.numline)::text = '1'::text)"
" Rows Removed by Filter: 38359"
" -> WindowAgg (cost=412009.59..412095.69 rows=3444 width=87) (actual time=1457.715..1477.146 rows=42856 loops=1)"
" CTE scope"
" -> Bitmap Heap Scan on security_employee_unit (cost=8.59..15.83 rows=2 width=7) (actual time=0.043..0.058 rows=2 loops=1)"
" Recheck Cond: (((employee_id)::text = 'EM-001'::text) AND ((app_type)::text = ANY ('{SE,I}'::text[])))"
" Heap Blocks: exact=2"
" -> Bitmap Index Scan on employeeidkey (cost=0.00..8.59 rows=2 width=0) (actual time=0.037..0.037 rows=2 loops=1)"
" Index Cond: (((employee_id)::text = 'EM-001'::text) AND ((app_type)::text = ANY ('{SE,I}'::text[])))"
" -> Sort (cost=411993.77..412002.38 rows=3444 width=39) (actual time=1457.702..1461.773 rows=42856 loops=1)"
" Sort Key: corresp.id"
" Sort Method: external merge Disk: 2440kB"
" -> Nested Loop Left Join (cost=18315.99..411791.43 rows=3444 width=39) (actual time=1271.209..1295.423 rows=42856 loops=1)"
" Filter: ((COALESCE(emp.confidentiel, '0'::character varying))::text >= (COALESCE(transfer.confidentiel, '0'::character varying))::text)"
" -> Nested Loop (cost=18315.71..411628.14 rows=10333 width=41) (actual time=1271.165..1283.365 rows=42856 loops=1)"
" Join Filter: (((sc.unit_id)::text = (transfer.unittransto_id)::text) OR ((sc.unit_id)::text = 'allorg'::text))"
" Rows Removed by Join Filter: 42856"
" -> CTE Scan on scope sc (cost=0.00..0.04 rows=2 width=48) (actual time=0.045..0.064 rows=2 loops=1)"
" Filter: (unit_id IS NOT NULL)"
" -> Materialize (cost=18315.71..411292.44 rows=10328 width=48) (actual time=53.970..635.651 rows=42856 loops=2)"
" -> Gather (cost=18315.71..411240.80 rows=10328 width=48) (actual time=107.919..1254.600 rows=42856 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Nested Loop Left Join (cost=17315.71..409208.00 rows=4303 width=48) (actual time=104.436..1250.461 rows=14285 loops=3)"
" -> Nested Loop Left Join (cost=17315.28..405979.02 rows=4303 width=48) (actual time=104.382..1136.591 rows=14285 loops=3)"
" -> Parallel Bitmap Heap Scan on transfer (cost=17314.85..377306.25 rows=4303 width=39) (actual time=104.287..996.609 rows=14285 loops=3)"
" Recheck Cond: ((docyearhjr)::text = '1441'::text)"
" Rows Removed by Index Recheck: 437299"
" Filter: (((subject)::text ~~ '%رقم%'::text) AND ((status)::text = '1'::text))"
" Rows Removed by Filter: 297178"
" Heap Blocks: exact=14805 lossy=44734"
" -> Bitmap Index Scan on docyearhjr (cost=0.00..17312.27 rows=938112 width=0) (actual time=96.028..96.028 rows=934389 loops=1)"
" Index Cond: ((docyearhjr)::text = '1441'::text)"
" -> Index Scan using pk_correspondencecopy on correspondencecopy correspcpy (cost=0.43..6.66 rows=1 width=21) (actual time=0.009..0.009 rows=1 loops=42856)"
" Index Cond: ((transfer.correspondencecopy_id)::text = (id)::text)"
" -> Index Only Scan using pk_correspondence on correspondence corresp (cost=0.42..0.75 rows=1 width=9) (actual time=0.007..0.007 rows=1 loops=42856)"
" Index Cond: (id = (correspcpy.correspondence_id)::text)"
" Heap Fetches: 14227"
" -> Materialize (cost=0.28..8.31 rows=1 width=2) (actual time=0.000..0.000 rows=1 loops=42856)"
" -> Index Scan using pk_employee on employee emp (cost=0.28..8.30 rows=1 width=2) (actual time=0.038..0.038 rows=1 loops=1)"
" Index Cond: ((id)::text = 'EM-001'::text)"
"Planning time: 1.595 ms"
"Execution time: 1487.303 ms"

PostgreSQL Calls All Data For Group By Limit Operation

I have a query like below:
SELECT
MAX(m.org_id) as orgId,
MAX(m.org_name) as orgName,
MAX(m.app_id) as appId,
MAX(r.country_or_region) as country,
MAX(r.local_spend_currency) as currency,
SUM(r.local_spend_amount) as spend,
SUM(r.impressions) as impressions
...
FROM report r
LEFT JOIN metadata m
ON m.org_id = r.org_id
AND m.campaign_id = r.campaign_id
AND m.ad_group_id = r.ad_group_id
WHERE (r.report_date BETWEEN '2019-01-01' AND '2019-10-10')
AND r.org_id = 1
GROUP BY r.country_or_region, r.ad_group_id, r.keyword_id, r.keyword, r.text
OFFSET 0
LIMIT 20
Explain Analyze:
"Limit (cost=1308.04..1308.14 rows=1 width=562) (actual time=267486.538..267487.067 rows=20 loops=1)"
" -> GroupAggregate (cost=1308.04..1308.14 rows=1 width=562) (actual time=267486.537..267487.061 rows=20 loops=1)"
" Group Key: r.country_or_region, r.ad_group_id, r.keyword_id, r.keyword, r.text"
" -> Sort (cost=1308.04..1308.05 rows=1 width=221) (actual time=267486.429..267486.536 rows=567 loops=1)"
" Sort Key: r.country_or_region, r.ad_group_id, r.keyword_id, r.keyword, r.text"
" Sort Method: external merge Disk: 667552kB"
" -> Nested Loop (cost=1.13..1308.03 rows=1 width=221) (actual time=0.029..235158.692 rows=2742789 loops=1)"
" -> Nested Loop Semi Join (cost=0.44..89.76 rows=1 width=127) (actual time=0.016..8.967 rows=1506 loops=1)"
" Join Filter: (m.org_id = (479360))"
" -> Nested Loop (cost=0.44..89.05 rows=46 width=123) (actual time=0.013..4.491 rows=1506 loops=1)"
" -> HashAggregate (cost=0.02..0.03 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)"
" Group Key: 479360"
" -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)"
" -> Index Scan using pmx_org_cmp_adg on metadata m (cost=0.41..88.55 rows=46 width=119) (actual time=0.008..1.947 rows=1506 loops=1)"
" Index Cond: (org_id = (479360))"
" -> Materialize (cost=0.00..0.03 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1506)"
" -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)"
" -> Index Scan using report_unx on search_term_report r (cost=0.69..1218.26 rows=1 width=118) (actual time=51.983..155.421 rows=1821 loops=1506)"
" Index Cond: ((org_id = m.org_id) AND (report_date >= '2019-07-01'::date) AND (report_date <= '2019-10-10'::date) AND (campaign_id = m.campaign_id) AND (ad_group_id = m.ad_group_id))"
"Planning Time: 0.988 ms"
"Execution Time: 267937.889 ms"
I have indexes on the metadata and report tables: metadata(org_id, campaign_id, ad_group_id) and report(org_id, report_date, campaign_id, ad_group_id).
I just want to fetch a random 20 items with a limit, but PostgreSQL takes a very long time to do it. How can I improve this?
You want to have 20 groups. But to build these groups (to be sure nothing is missing from any group), you need to fetch all the raw data.
When you say "random items", I assume you mean "random reports", as you have no item table.
with r as (select * from report WHERE report_date BETWEEN '2019-01-01' AND '2019-10-10' AND org_id = 1 order by random() limit 20)
select <whatever> from r left join <whatever>
You might need to tweak your aggregates a bit. Does every record in "metadata" belong to exactly one record in "report"?
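Spelled out against the tables from the question (a sketch; the metadata columns pulled through are illustrative, and the join keys are copied from the original query):
with r as (
    select *
    from report
    where report_date between '2019-01-01' and '2019-10-10'
      and org_id = 1
    order by random()
    limit 20
)
select r.*, m.org_name, m.app_id
from r
left join metadata m
    on m.org_id = r.org_id
    and m.campaign_id = r.campaign_id
    and m.ad_group_id = r.ad_group_id;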

Slow query ordering by a column in a joined table in postgresql

How can I optimize this query?
I tried increasing the work_mem value and creating an index on name, but it did not help.
EXPLAIN ANALYZE
SELECT b.* FROM book b
JOIN category c ON c.id = b.categoryid
ORDER BY c.name, b.name
LIMIT 20 OFFSET 1
"Limit (cost=328.82..328.87 rows=20 width=207) (actual time=11.942..11.955 rows=20 loops=1)"
" -> Sort (cost=328.81..341.64 rows=5132 width=207) (actual time=11.940..11.944 rows=21 loops=1)"
" Sort Key: c.name, b.name"
" Sort Method: top-N heapsort Memory: 34kB"
" -> Hash Join (cost=10.37..190.45 rows=5132 width=207) (actual time=0.143..4.963 rows=5132 loops=1)"
" Hash Cond: (b.categoryid = c.id)"
" -> Seq Scan on book b (cost=0.00..166.32 rows=5132 width=196) (actual time=0.007..2.070 rows=5132 loops=1)"
" -> Hash (cost=7.94..7.94 rows=194 width=27) (actual time=0.129..0.129 rows=194 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 20kB"
" -> Seq Scan on category c (cost=0.00..7.94 rows=194 width=27) (actual time=0.002..0.061 rows=194 loops=1)"
"Planning time: 0.283 ms"
"Execution time: 11.999 ms"