Why does Postgres choose the wrong execution plan? - postgresql

I have a simple query:
select count(*)
from taxi_order.ta_orders o
inner join public.t_bases b on b.id = o.id_base
where o.c_phone2 = '012356789'
and b.id_organization = 1
and o.c_date_end < '2017-12-01'::date
group by date_trunc('month', o.c_date_end);
Most of the time this query runs fast, in less than 100 ms, but for some c_phone2/id_organization combinations it becomes very slow, up to 4 seconds.
Execution plan for fast case:
HashAggregate (cost=7005.05..7005.62 rows=163 width=8)
Group Key: date_trunc('month'::text, o.c_date_end)
-> Hash Join (cost=94.30..7004.23 rows=163 width=8)
Hash Cond: (o.id_base = b.id)
-> Index Scan using ix_ta_orders_c_phone2 on ta_orders o (cost=0.57..6899.41 rows=2806 width=12)
Index Cond: ((c_phone2)::text = $3)
Filter: (c_date_end < $4)
-> Hash (cost=93.26..93.26 rows=133 width=4)
-> Bitmap Heap Scan on t_bases b (cost=4.71..93.26 rows=133 width=4)
Recheck Cond: (id_organization = $2)
-> Bitmap Index Scan on ix_t_bases_id_organization (cost=0.00..4.68 rows=133 width=0)
Index Cond: (id_organization = $2)
Execution plan for slow case:
HashAggregate (cost=6604.97..6604.98 rows=1 width=8)
Group Key: date_trunc('month'::text, o.c_date_end)
-> Nested Loop (cost=2195.33..6604.97 rows=1 width=8)
-> Bitmap Heap Scan on t_bases b (cost=2.29..7.78 rows=3 width=4)
Recheck Cond: (id_organization = $2)
-> Bitmap Index Scan on ix_t_bases_id_organization (cost=0.00..2.29 rows=3 width=0)
Index Cond: (id_organization = $2)
-> Bitmap Heap Scan on ta_orders o (cost=2193.04..2199.06 rows=3 width=12)
Recheck Cond: (((c_phone2)::text = $3) AND (id_base = b.id) AND (c_date_end < $4))
-> BitmapAnd (cost=2193.04..2193.04 rows=3 width=0)
-> Bitmap Index Scan on ix_ta_orders_c_phone2 (cost=0.00..58.84 rows=3423 width=0)
Index Cond: ((c_phone2)::text = $3)
-> Bitmap Index Scan on ix_ta_orders_id_base_date_end (cost=0.00..2133.66 rows=83472 width=0)
Index Cond: ((id_base = b.id) AND (c_date_end < $4))
Why does the query planner sometimes choose such a slow, ineffective plan?
EDIT
Schema for tables:
create table taxi_order.ta_orders (
id bigserial not null,
id_base integer not null,
c_phone2 character varying(30),
c_date_end timestamp with time zone,
...
CONSTRAINT pk_ta_orders PRIMARY KEY (id),
CONSTRAINT fk_ta_orders_t_bases FOREIGN KEY (id_base) REFERENCES public.t_bases (id)
);
create table public.t_bases (
id serial not null,
id_organization integer not null,
...
CONSTRAINT pk_t_bases PRIMARY KEY (id)
);
ta_orders ~ 100M rows, t_bases ~ 2K rows.
EDIT2
Explain analyze for slow case:
HashAggregate (cost=6355.29..6355.29 rows=1 width=8) (actual time=4075.847..4075.847 rows=1 loops=1)
Group Key: date_trunc('month'::text, o.c_date_end)
-> Nested Loop (cost=2112.10..6355.28 rows=1 width=8) (actual time=114.871..4075.803 rows=2 loops=1)
-> Bitmap Heap Scan on t_bases b (cost=2.29..7.78 rows=3 width=4) (actual time=0.061..0.375 rows=133 loops=1)
Recheck Cond: (id_organization = $2)
Heap Blocks: exact=45
-> Bitmap Index Scan on ix_t_bases_id_organization (cost=0.00..2.29 rows=3 width=0) (actual time=0.045..0.045 rows=133 loops=1)
Index Cond: (id_organization = $2)
-> Bitmap Heap Scan on ta_orders o (cost=2109.81..2115.83 rows=3 width=12) (actual time=30.638..30.638 rows=0 loops=133)
Recheck Cond: (((c_phone2)::text = $3) AND (id_base = b.id) AND (c_date_end < $4))
Heap Blocks: exact=2
-> BitmapAnd (cost=2109.81..2109.81 rows=3 width=0) (actual time=30.635..30.635 rows=0 loops=133)
-> Bitmap Index Scan on ix_ta_orders_c_phone2 (cost=0.00..58.85 rows=3427 width=0) (actual time=0.032..0.032 rows=6 loops=133)
Index Cond: ((c_phone2)::text = $3)
-> Bitmap Index Scan on ix_ta_orders_id_base_date_end (cost=0.00..2050.42 rows=80216 width=0) (actual time=30.108..30.108 rows=94206 loops=133)
Index Cond: ((id_base = b.id) AND (c_date_end < $4))
Explain analyze for fast case:
HashAggregate (cost=7005.05..7005.62 rows=163 width=8) (actual time=0.927..0.928 rows=1 loops=1)
Group Key: date_trunc('month'::text, o.c_date_end)
-> Hash Join (cost=94.30..7004.23 rows=163 width=8) (actual time=0.903..0.913 rows=2 loops=1)
Hash Cond: (o.id_base = b.id)
-> Index Scan using ix_ta_orders_c_phone2 on ta_orders o (cost=0.57..6899.41 rows=2806 width=12) (actual time=0.591..0.604 rows=4 loops=1)
Index Cond: ((c_phone2)::text = $3)
Filter: (c_date_end < $4)
Rows Removed by Filter: 2
-> Hash (cost=93.26..93.26 rows=133 width=4) (actual time=0.237..0.237 rows=133 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
-> Bitmap Heap Scan on t_bases b (cost=4.71..93.26 rows=133 width=4) (actual time=0.058..0.196 rows=133 loops=1)
Recheck Cond: (id_organization = $2)
Heap Blocks: exact=45
-> Bitmap Index Scan on ix_t_bases_id_organization (cost=0.00..4.68 rows=133 width=0) (actual time=0.044..0.044 rows=133 loops=1)
Index Cond: (id_organization = $2)
I know I could create a separate index for every query to speed it up. But I want to know the reason for the wrong plan choice. What is wrong with my statistics?

You'd have to give us EXPLAIN (ANALYZE, BUFFERS) output for a definitive answer.
The difference between the plans is that the second plan chooses a nested loop join because it estimates that only very few rows will be selected from t_bases. Since you complain that the query is slow, that estimate is probably wrong, resulting in too many loops over the inner table.
Try to improve your table statistics by running ANALYZE, perhaps after increasing default_statistics_target.
A multi-column index on ta_orders(c_phone2, id_base, c_date_end) would improve the execution time for the nested loop plan.
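A sketch of both suggestions (the index name and the statistics targets are illustrative, not prescriptive):

```sql
-- Refresh planner statistics with a larger sample; the default
-- default_statistics_target is 100, and the session-level setting
-- applies to ANALYZE runs in this session.
SET default_statistics_target = 500;
ANALYZE public.t_bases;
ANALYZE taxi_order.ta_orders;

-- Or raise the target only for the skewed column, then re-analyze:
ALTER TABLE public.t_bases
    ALTER COLUMN id_organization SET STATISTICS 1000;
ANALYZE public.t_bases;

-- Multi-column index that makes the inner side of the nested loop cheap:
CREATE INDEX ix_ta_orders_phone2_base_date_end
    ON taxi_order.ta_orders (c_phone2, id_base, c_date_end);
```

Even if the planner still picks the nested loop, each inner probe then resolves all three conditions with a single index lookup.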

Not sure, but I can suggest a possible improvement to your query: remove the inner join. You're not selecting anything from that table, so why bother querying it? You should be able to add where o.id_base = ? to your query.
If you want this query to run quickly every time you should add the following index to ta_orders: (id_base, c_phone2, c_date_end). It's important that the column with the > or < where clause is at the end (otherwise Postgres will not be able to make use of it).
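A sketch of both suggestions; the join-free variant assumes the caller already knows which id_base value corresponds to the organization:

```sql
-- Index with the range column (c_date_end) last, so all three
-- conditions can be resolved inside the index:
CREATE INDEX ix_ta_orders_base_phone2_date_end
    ON taxi_order.ta_orders (id_base, c_phone2, c_date_end);

-- Join-free variant, if the id_base value is known up front:
SELECT count(*)
FROM taxi_order.ta_orders o
WHERE o.id_base = $1            -- replaces the join on t_bases
  AND o.c_phone2 = '012356789'
  AND o.c_date_end < '2017-12-01'::date
GROUP BY date_trunc('month', o.c_date_end);
```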

Related

What effect does a string operation on a column in a filter condition of a PostgreSQL query have on the plan it chooses?

I was working on optimising a query; by dumb luck I tried something that improved it, but I am unable to explain why.
Below is the query with poor performance
with ctedata1 as(
select
sum(total_visit_count) as total_visit_count,
sum(sh_visit_count) as sh_visit_count,
sum(ec_visit_count) as ec_visit_count,
sum(total_like_count) as total_like_count,
sum(sh_like_count) as sh_like_count,
sum(ec_like_count) as ec_like_count,
sum(total_order_count) as total_order_count,
sum(sh_order_count) as sh_order_count,
sum(ec_order_count) as ec_order_count,
sum(total_sales_amount) as total_sales_amount,
sum(sh_sales_amount) as sh_sales_amount,
sum(ec_sales_amount) as ec_sales_amount,
sum(ec_order_online_count) as ec_order_online_count,
sum(ec_sales_online_amount) as ec_sales_online_amount,
sum(ec_order_in_store_count) as ec_order_in_store_count,
sum(ec_sales_in_store_amount) as ec_sales_in_store_amount,
table2.im_name,
table2.brand as kpibrand,
table2.id_region as kpiregion
from
table2
where
deleted_at is null
and id_region = any('{1}')
group by
im_name,
kpiregion,
kpibrand ),
ctedata2 as (
select
ctedata1.*,
rank() over (partition by (kpiregion,
kpibrand)
order by
coalesce(ctedata1.total_sales_amount, 0) desc) rank,
count(*) over (partition by (kpiregion,
kpibrand)) as total_count
from
ctedata1 )
select
table1.id_pf_item,
table1.product_id,
table1.color_code,
table1.l1_code,
table1.local_title as product_name,
table1.id_region,
table1.gender,
case
when table1.created_at is null then '1970/01/01 00:00:00'
else table1.created_at
end as created_at,
(
select
count(distinct id_outfit)
from
table3
left join table4 on
table3.id_item = table4.id_item
and table4.deleted_at is null
where
table3.deleted_at is null
and table3.id_pf_item = table1.id_pf_item) as outfit_count,
count(*) over() as total_matched,
case
when table1.v8_im_name = '' then table1.im_name
else table1.v8_im_name
end as im_name,
case
when table1.id_region != 1 then null
else
case
when table1.sales_start_at is null then '1970/01/01 00:00:00'
else table1.sales_start_at
end
end as sales_start_date,
table1.category_ids,
array_to_string(table1.intermediate_category_ids, ','),
table1.image_url,
table1.brand,
table1.pdp_url,
coalesce(ctedata2.total_visit_count, 0) as total_visit_count,
coalesce(ctedata2.sh_visit_count, 0) as sh_visit_count,
coalesce(ctedata2.ec_visit_count, 0) as ec_visit_count,
coalesce(ctedata2.total_like_count, 0) as total_like_count,
coalesce(ctedata2.sh_like_count, 0) as sh_like_count,
coalesce(ctedata2.ec_like_count, 0) as ec_like_count,
coalesce(ctedata2.total_order_count, 0) as total_order_count,
coalesce(ctedata2.sh_order_count, 0) as sh_order_count,
coalesce(ctedata2.ec_order_count, 0) as ec_order_count,
coalesce(ctedata2.total_sales_amount, 0) as total_sales_amount,
coalesce(ctedata2.sh_sales_amount, 0) as sh_sales_amount,
coalesce(ctedata2.ec_sales_amount, 0) as ec_sales_amount,
coalesce(ctedata2.ec_order_online_count, 0) as ec_order_online_count,
coalesce(ctedata2.ec_sales_online_amount, 0) as ec_sales_online_amount,
coalesce(ctedata2.ec_order_in_store_count, 0) as ec_order_in_store_count,
coalesce(ctedata2.ec_sales_in_store_amount, 0) as ec_sales_in_store_amount,
ctedata2.rank,
ctedata2.total_count,
table1.department,
table1.seasons
from
table1
left join ctedata2 on
table1.im_name = ctedata2.im_name
and table1.brand = ctedata2.kpibrand
where
table1.deleted_at is null
and table1.id_region = any('{1}')
and lower(table1.brand) = any('{"brand1","brand2"}')
and 'season1' = any(lower(seasons::text)::text[])
and table1.department = 'Department1'
order by
total_sales_amount desc offset 0
limit 100
The explain output for above query is
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=172326.55..173435.38 rows=1 width=952) (actual time=85664.201..85665.970 rows=100 loops=1)
CTE ctedata1
-> GroupAggregate (cost=0.42..80478.71 rows=43468 width=530) (actual time=0.063..708.069 rows=73121 loops=1)
Group Key: table2.im_name, table2.id_region, table2.brand
-> Index Scan using udx_table2_im_name_id_region_brand_target_date_key on table2 (cost=0.42..59699.18 rows=391708 width=146) (actual time=0.029..308.582 rows=391779 loops=1)
Filter: ((deleted_at IS NULL) AND (id_region = ANY ('{1}'::integer[])))
Rows Removed by Filter: 20415
CTE ctedata2
-> WindowAgg (cost=16104.06..17842.78 rows=43468 width=628) (actual time=1012.994..1082.057 rows=73121 loops=1)
-> WindowAgg (cost=16104.06..17082.09 rows=43468 width=620) (actual time=945.755..1014.656 rows=73121 loops=1)
-> Sort (cost=16104.06..16212.73 rows=43468 width=612) (actual time=945.747..963.254 rows=73121 loops=1)
Sort Key: ctedata1.kpiregion, ctedata1.kpibrand, (COALESCE(ctedata1.total_sales_amount, '0'::numeric)) DESC
Sort Method: external merge Disk: 6536kB
-> CTE Scan on ctedata1 (cost=0.00..869.36 rows=43468 width=612) (actual time=0.069..824.841 rows=73121 loops=1)
-> Result (cost=74005.05..75113.88 rows=1 width=952) (actual time=85664.199..85665.950 rows=100 loops=1)
-> Sort (cost=74005.05..74005.05 rows=1 width=944) (actual time=85664.072..85664.089 rows=100 loops=1)
Sort Key: (COALESCE(ctedata2.total_sales_amount, '0'::numeric)) DESC
Sort Method: top-N heapsort Memory: 76kB
-> WindowAgg (cost=10960.95..74005.04 rows=1 width=944) (actual time=85658.049..85661.393 rows=3151 loops=1)
-> Nested Loop Left Join (cost=10960.95..74005.02 rows=1 width=927) (actual time=1075.219..85643.595 rows=3151 loops=1)
Join Filter: (((table1.im_name)::text = ctedata2.im_name) AND ((table1.brand)::text = ctedata2.kpibrand))
Rows Removed by Join Filter: 230402986
-> Bitmap Heap Scan on table1 (cost=10960.95..72483.64 rows=1 width=399) (actual time=45.466..278.376 rows=3151 loops=1)
Recheck Cond: (id_region = ANY ('{1}'::integer[]))
Filter: ((deleted_at IS NULL) AND (department = 'Department1'::text) AND (lower((brand)::text) = ANY ('{brand1, brand2}'::text[])) AND ('season1'::text = ANY ((lower((seasons)::text))::text[])))
Rows Removed by Filter: 106335
Heap Blocks: exact=42899
-> Bitmap Index Scan on table1_im_name_id_region_key (cost=0.00..10960.94 rows=110619 width=0) (actual time=38.307..38.307 rows=109486 loops=1)
Index Cond: (id_region = ANY ('{1}'::integer[]))
-> CTE Scan on ctedata2 (cost=0.00..869.36 rows=43468 width=592) (actual time=0.325..21.721 rows=73121 loops=3151)
SubPlan 3
-> Aggregate (cost=1108.80..1108.81 rows=1 width=8) (actual time=0.018..0.018 rows=1 loops=100)
-> Nested Loop Left Join (cost=5.57..1108.57 rows=93 width=4) (actual time=0.007..0.016 rows=3 loops=100)
-> Bitmap Heap Scan on table3 (cost=5.15..350.95 rows=93 width=4) (actual time=0.005..0.008 rows=3 loops=100)
Recheck Cond: (id_pf_item = table1.id_pf_item)
Filter: (deleted_at IS NULL)
Heap Blocks: exact=107
-> Bitmap Index Scan on idx_id_pf_item (cost=0.00..5.12 rows=93 width=0) (actual time=0.003..0.003 rows=3 loops=100)
Index Cond: (id_pf_item = table1.id_pf_item)
-> Index Scan using index_table4_id_item on table4 (cost=0.42..8.14 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=303)
Index Cond: (table3.id_item = id_item)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 0
Planning time: 1.023 ms
Execution time: 85669.512 ms
I changed
and lower(table1.brand) = any('{"brand1","brand2"}')
in the query to
and table1.brand = any('{"Brand1","Brand2"}')
and the plan changed to
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=173137.44..188661.06 rows=14 width=952) (actual time=1444.123..1445.653 rows=100 loops=1)
CTE ctedata1
-> GroupAggregate (cost=0.42..80478.71 rows=43468 width=530) (actual time=0.040..769.982 rows=73121 loops=1)
Group Key: table2.im_name, table2.id_region, table2.brand
-> Index Scan using udx_table2_item_im_name_id_region_brand_target_date_key on table2 (cost=0.42..59699.18 rows=391708 width=146) (actual time=0.021..350.774 rows=391779 loops=1)
Filter: ((deleted_at IS NULL) AND (id_region = ANY ('{1}'::integer[])))
Rows Removed by Filter: 20415
CTE ctedata2
-> WindowAgg (cost=16104.06..17842.78 rows=43468 width=628) (actual time=1088.905..1153.749 rows=73121 loops=1)
-> WindowAgg (cost=16104.06..17082.09 rows=43468 width=620) (actual time=1020.017..1089.117 rows=73121 loops=1)
-> Sort (cost=16104.06..16212.73 rows=43468 width=612) (actual time=1020.011..1037.170 rows=73121 loops=1)
Sort Key: ctedata1.kpiregion, ctedata1.kpibrand, (COALESCE(ctedata1.total_sales_amount, '0'::numeric)) DESC
Sort Method: external merge Disk: 6536kB
-> CTE Scan on ctedata1 (cost=0.00..869.36 rows=43468 width=612) (actual time=0.044..891.653 rows=73121 loops=1)
-> Result (cost=74815.94..90339.56 rows=14 width=952) (actual time=1444.121..1445.635 rows=100 loops=1)
-> Sort (cost=74815.94..74815.98 rows=14 width=944) (actual time=1444.053..1444.065 rows=100 loops=1)
Sort Key: (COALESCE(ctedata2.total_sales_amount, '0'::numeric)) DESC
Sort Method: top-N heapsort Memory: 76kB
-> WindowAgg (cost=72207.31..74815.68 rows=14 width=944) (actual time=1439.128..1441.885 rows=3151 loops=1)
-> Hash Right Join (cost=72207.31..74815.40 rows=14 width=927) (actual time=1307.531..1437.246 rows=3151 loops=1)
Hash Cond: ((ctedata2.im_name = (table1.im_name)::text) AND (ctedata2.kpibrand = (table1.brand)::text))
-> CTE Scan on ctedata2 (cost=0.00..869.36 rows=43468 width=592) (actual time=1088.911..1209.646 rows=73121 loops=1)
-> Hash (cost=72207.10..72207.10 rows=14 width=399) (actual time=216.850..216.850 rows=3151 loops=1)
Buckets: 4096 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1249kB
-> Bitmap Heap Scan on table1 (cost=10960.95..72207.10 rows=14 width=399) (actual time=46.434..214.246 rows=3151 loops=1)
Recheck Cond: (id_region = ANY ('{1}'::integer[]))
Filter: ((deleted_at IS NULL) AND (department = 'Department1'::text) AND ((brand)::text = ANY ('{Brand1, Brand2}'::text[])) AND ('season1'::text = ANY ((lower((seasons)::text))::text[])))
Rows Removed by Filter: 106335
Heap Blocks: exact=42899
-> Bitmap Index Scan on table1_im_name_id_region_key (cost=0.00..10960.94 rows=110619 width=0) (actual time=34.849..34.849 rows=109486 loops=1)
Index Cond: (id_region = ANY ('{1}'::integer[]))
SubPlan 3
-> Aggregate (cost=1108.80..1108.81 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=100)
-> Nested Loop Left Join (cost=5.57..1108.57 rows=93 width=4) (actual time=0.006..0.014 rows=3 loops=100)
-> Bitmap Heap Scan on table3 (cost=5.15..350.95 rows=93 width=4) (actual time=0.004..0.006 rows=3 loops=100)
Recheck Cond: (id_pf_item = table1.id_pf_item)
Filter: (deleted_at IS NULL)
Heap Blocks: exact=107
-> Bitmap Index Scan on idx_id_pf_item (cost=0.00..5.12 rows=93 width=0) (actual time=0.003..0.003 rows=3 loops=100)
Index Cond: (id_pf_item = table1.id_pf_item)
-> Index Scan using index_table4_id_item on table4 (cost=0.42..8.14 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=303)
Index Cond: (table3.id_item = id_item)
Filter: (deleted_at IS NULL)
Rows Removed by Filter: 0
Planning time: 0.760 ms
Execution time: 1448.848 ms
My Observation
The join strategy for table1 left join ctedata2 changes after the lower() function is avoided. The strategy changes from nested loop left join to hash right join.
The CTE Scan node on ctedata2 is executed only once in the better performing query.
Postgres Version
9.6
Please help me to understand this behaviour. I will supply additional info if required.
It is almost not worthwhile taking a deep dive into the inner workings of a nearly-obsolete version. That time and energy is probably better spent jollying along an upgrade.
But the problem is pretty plain. Your scan on table1 is estimated dreadfully, although 14 times less dreadful in the better plan.
-> Bitmap Heap Scan on table1 (cost=10960.95..72483.64 rows=1 width=399) (actual time=45.466..278.376 rows=3151 loops=1)
-> Bitmap Heap Scan on table1 (cost=10960.95..72207.10 rows=14 width=399) (actual time=46.434..214.246 rows=3151 loops=1)
Your use of lower(), apparently without reason, surely contributes to the poor estimation. And dynamically converting a string into an array certainly doesn't help either. If it were stored as a real array in the first place, the statistics system could get its hands on it and generate more reasonable estimates.
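A hedged sketch of both fixes (the index name is made up, and the ALTER assumes seasons currently holds an array-like string that the query's own cast can convert):

```sql
-- An expression index makes ANALYZE collect statistics on lower(brand),
-- so a lower(brand) = ... predicate can be estimated instead of guessed:
CREATE INDEX idx_table1_lower_brand ON table1 (lower(brand));
ANALYZE table1;

-- Storing seasons as a real array (instead of casting per row) lets the
-- statistics system see element frequencies:
ALTER TABLE table1
    ALTER COLUMN seasons TYPE text[]
    USING lower(seasons::text)::text[];
```

With seasons as a native array, the filter simplifies to `'season1' = any(seasons)`.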

How can I speed up my PostgreSQL SELECT function that uses a list for its WHERE clause?

I have a SELECT function that takes a list of symbols as a parameter.
CREATE OR REPLACE FUNCTION api.stats(p_stocks text[])
RETURNS TABLE(symbol character, industry text, adj_close money, week52high money, week52low money, marketcap money,
pe_ratio int, beta numeric, dividend_yield character)
as $$
SELECT DISTINCT ON (t1.symbol) t1.symbol,
t3.industry,
cast(t2.adj_close as money),
cast(t1.week52high as money),
cast(t1.week52low as money),
cast(t1.marketcap as money),
cast(t1.pe_ratio as int),
ROUND(t1.beta,2),
to_char(t1.dividend_yield * 100, '99D99%%')
FROM api.security_stats as t1
LEFT JOIN api.security_price as t2 USING (symbol)
LEFT JOIN api.security as t3 USING (symbol)
WHERE symbol = any($1) ORDER BY t1.symbol, t2.date DESC
$$ language sql
PARALLEL SAFE;
I'm trying to speed up the query by adding indexes and other methods. That cut my query time in half, but only when the list has ONE value; it's still pretty slow with more than one value.
For brevity, I've added the original select statement below, with only one symbol as a parameter, AAPL:
SELECT DISTINCT ON (t1.symbol) t1.symbol,
t3.industry,
cast(t2.adj_close as money),
cast(t1.week52high as money),
cast(t1.week52low as money),
cast(t1.marketcap as money),
cast(t1.pe_ratio as int),
ROUND(t1.beta,2),
to_char(t1.dividend_yield * 100, '99D99%%')
FROM api.security_stats as t1
LEFT JOIN api.security_price as t2 USING (symbol)
LEFT JOIN api.security as t3 USING (symbol)
WHERE symbol = 'AAPL' ORDER BY t1.symbol, t2.date DESC;
Here are the details on performance:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=71365.86..72083.62 rows=52 width=130) (actual time=828.301..967.263 rows=1 loops=1)
-> Sort (cost=71365.86..72083.62 rows=287101 width=130) (actual time=828.299..946.342 rows=326894 loops=1)
Sort Key: t2.date DESC
Sort Method: external merge Disk: 33920kB
-> Hash Right Join (cost=304.09..25710.44 rows=287101 width=130) (actual time=0.638..627.083 rows=326894 loops=1)
Hash Cond: ((t2.symbol)::text = (t1.symbol)::text)
-> Bitmap Heap Scan on security_price t2 (cost=102.41..16523.31 rows=5417 width=14) (actual time=0.317..2.658 rows=4478 loops=1)
Recheck Cond: ((symbol)::text = 'AAPL'::text)
Heap Blocks: exact=153
-> Bitmap Index Scan on symbol_price_idx (cost=0.00..101.06 rows=5417 width=0) (actual time=0.292..0.293 rows=4478 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
-> Hash (cost=201.02..201.02 rows=53 width=79) (actual time=0.290..0.295 rows=73 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 17kB
-> Nested Loop Left Join (cost=4.98..201.02 rows=53 width=79) (actual time=0.062..0.252 rows=73 loops=1)
Join Filter: ((t1.symbol)::text = (t3.symbol)::text)
-> Bitmap Heap Scan on security_stats t1 (cost=4.70..191.93 rows=53 width=57) (actual time=0.046..0.195 rows=73 loops=1)
Recheck Cond: ((symbol)::text = 'AAPL'::text)
Heap Blocks: exact=73
-> Bitmap Index Scan on symbol_stats_idx (cost=0.00..4.69 rows=53 width=0) (actual time=0.029..0.029 rows=73 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
-> Materialize (cost=0.28..8.30 rows=1 width=26) (actual time=0.000..0.000 rows=1 loops=73)
-> Index Scan using symbol_security_idx on security t3 (cost=0.28..8.29 rows=1 width=26) (actual time=0.011..0.011 rows=1 loops=1)
Index Cond: ((symbol)::text = 'AAPL'::text)
Planning Time: 0.329 ms
Execution Time: 973.894 ms
Now, I will take the same SELECT statement above and change the where clause to WHERE symbol in ('AAPL','TLSA') to replicate my original FUNCTION first mentioned.
EDIT: Here is the new test using multiple values, after I changed work_mem to 10MB:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=253542.02..255477.13 rows=101 width=130) (actual time=5239.415..5560.114 rows=2 loops=1)
-> Sort (cost=253542.02..254509.58 rows=387022 width=130) (actual time=5239.412..5507.122 rows=430439 loops=1)
Sort Key: t1.symbol, t2.date DESC
Sort Method: external merge Disk: 43056kB
-> Hash Left Join (cost=160938.84..191162.40 rows=387022 width=130) (actual time=2558.718..3509.201 rows=430439 loops=1)
Hash Cond: ((t1.symbol)::text = (t2.symbol)::text)
-> Hash Left Join (cost=50.29..400.99 rows=107 width=79) (actual time=0.617..0.864 rows=112 loops=1)
Hash Cond: ((t1.symbol)::text = (t3.symbol)::text)
-> Bitmap Heap Scan on security_stats t1 (cost=9.40..359.81 rows=107 width=57) (actual time=0.051..0.246 rows=112 loops=1)
Recheck Cond: ((symbol)::text = ANY ('{AAPL,TSLA}'::text[]))
Heap Blocks: exact=112
-> Bitmap Index Scan on symbol_stats_idx (cost=0.00..9.38 rows=107 width=0) (actual time=0.030..0.031 rows=112 loops=1)
Index Cond: ((symbol)::text = ANY ('{AAPL,TSLA}'::text[]))
-> Hash (cost=28.73..28.73 rows=973 width=26) (actual time=0.558..0.559 rows=973 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 64kB
-> Seq Scan on security t3 (cost=0.00..28.73 rows=973 width=26) (actual time=0.009..0.274 rows=973 loops=1)
-> Hash (cost=99479.91..99479.91 rows=3532691 width=14) (actual time=2537.403..2537.404 rows=3532691 loops=1)
Buckets: 262144 Batches: 32 Memory Usage: 6170kB
-> Seq Scan on security_price t2 (cost=0.00..99479.91 rows=3532691 width=14) (actual time=0.302..1347.778 rows=3532691 loops=1)
Planning Time: 1.409 ms
Execution Time: 5569.160 ms
I've managed to solve the problem by removing adj_close from my original query. My function is now fast. Thank you for helping me pinpoint the problem in my query plan.

Subquery is very slow when adding another column to the query

SELECT a1.object_id,
(SELECT COALESCE(json_agg(b1), '[]')
FROM (
SELECT c1.root_id,
(SELECT COALESCE(json_agg(d1), '[]')
FROM (
SELECT e1.root_id,
(SELECT COALESCE(json_agg(f1), '[]')
FROM (
SELECT g1.root_id,
(SELECT SUM(h2.amount)
FROM table_5 AS h1
LEFT OUTER JOIN table_6 AS h2 on h1.hash_id = h2.hash_id
WHERE g1.object_id = h1.root_id
AND h2.mode = 'Real'
AND h2.type = 'Reject'
) AS amount
FROM table_4 AS g1
LEFT OUTER JOIN table_5 AS g2
ON g1.general_id = g2.object_id
LEFT OUTER JOIN table_6 AS g3
ON g1.properties_hash_id = g3.hash_id
WHERE e1.object_id = g1.root_id
) AS f1) AS tickets
FROM table_3 AS e1
WHERE c1.object_id = e1.root_id
) AS d1) AS blocks
FROM table_2 AS c1
WHERE a1.object_id = c1.root_id
) AS b1) AS sources
FROM table_1 AS a1
With the AND h2.type = 'Reject' line, the query takes ~5 seconds:
...
SubPlan 1
-> Aggregate (cost=22.77..22.78 rows=1 width=32) (actual time=0.296..0.296 rows=1 loops=16502)
-> Nested Loop (cost=10.72..22.76 rows=1 width=6) (actual time=0.184..0.295 rows=1 loops=16502)
-> Index Scan using t041_type_idx on table_6 h2 (cost=0.15..8.17 rows=1 width=10) (actual time=0.001..0.007 rows=20 loops=16502)
Index Cond: (type = 'Reject'::valid_optimized_segment_types)
Filter: (mode = 'Real'::valid_optimized_segment_modes)
Rows Removed by Filter: 20
-> Bitmap Heap Scan on table_5 h1 (cost=10.57..14.58 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=330040)
Recheck Cond: ((g1.object_id = root_id) AND (hash_id = h2.hash_id))
Heap Blocks: exact=12379
-> BitmapAnd (cost=10.57..10.57 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=330040)
-> Bitmap Index Scan on t031_root_id_fkey (cost=0.00..4.32 rows=3 width=0) (actual time=0.001..0.001 rows=2 loops=330040)
Index Cond: (root_id = g1.object_id)
-> Bitmap Index Scan on t031_md5_id_idx (cost=0.00..6.00 rows=228 width=0) (actual time=0.010..0.010 rows=619 loops=330040)
Index Cond: (hash_id = h2.hash_id)
Planning Time: 0.894 ms
Execution Time: 4925.776 ms
Without the AND h2.type = 'Reject' line, the query takes ~770 ms:
SubPlan 1
-> Aggregate (cost=25.77..25.78 rows=1 width=32) (actual time=0.045..0.046 rows=1 loops=16502)
-> Hash Join (cost=15.32..25.77 rows=2 width=6) (actual time=0.019..0.041 rows=1 loops=16502)
Hash Cond: (h2.hash_id = h1.hash_id)
-> Seq Scan on table_6 h2 (cost=0.00..8.56 rows=298 width=10) (actual time=0.001..0.026 rows=285 loops=16502)
Filter: (mode = 'Real'::valid_optimized_segment_modes)
Rows Removed by Filter: 80
-> Hash (cost=15.28..15.28 rows=3 width=4) (actual time=0.002..0.002 rows=2 loops=16502)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Index Scan using t031_root_id_fkey on table_5 h1 (cost=0.29..15.28 rows=3 width=4) (actual time=0.001..0.002 rows=2 loops=16502)
Index Cond: (root_id = g1.object_id)
Planning Time: 0.889 ms
Execution Time: 787.264 ms
Why does this single line cause such a large difference (h2.type has a BTREE index)?
How else can I achieve the same result more efficiently?
-> Index Scan using t041_type_idx on table_6 h2 (cost=0.15..8.17 rows=1 width=10) (actual time=0.001..0.007 rows=20 loops=16502)
It thinks there will be 1 row, but there are 20, meaning the next node is executed 20 times more often than the planner expected. If it knew there would be 20, it might have chosen a different plan.
You could try to create cross-column statistics so it can get a better estimate for that row count.
Or you could just speed up the next node by adding a multi-column index on table_5 (root_id, hash_id). It would still be executed far more times than the planner expects, but each execution would be faster.
Or you might just force it into a different plan by making one of those indexes unusable:
...JOIN table_6 AS h2 on h1.hash_id+0 = h2.hash_id
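The first two options might look like this (the statistics object requires PostgreSQL 10 or later, and the object/index names are made up):

```sql
-- Cross-column statistics, so the planner can see that mode and type
-- are correlated on table_6 (PostgreSQL 10+):
CREATE STATISTICS stats_table_6_mode_type (dependencies)
    ON mode, type FROM table_6;
ANALYZE table_6;

-- Multi-column index so each inner-loop probe of table_5 is a single
-- index lookup instead of a BitmapAnd of two bitmap scans:
CREATE INDEX t031_root_id_hash_id_idx ON table_5 (root_id, hash_id);
```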

Postgres Table Slow Performance

We have a Product table in a Postgres DB hosted on Heroku, with 8 GB RAM, 250 GB disk space, and 1,000 IOPS allowed.
We have proper indexes on the columns.
Platform
PostgreSQL 9.5.12 on x86_64-pc-linux-gnu (Ubuntu 9.5.12-1.pgdg14.04+1), compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit
We are running a keyword search query on this table, which has 2.8 million records. Our search query is too slow: it returns results in about 50 seconds.
Query
SELECT
P .sfid AS prodsfid,
P .image_url__c image,
P .productcode sku,
P .Short_Description__c shortDesc,
P . NAME pname,
P .category__c,
P .price__c price,
P .description,
P .vendor_name__c vname,
P .vendor__c supSfid
FROM
staging.product2 P
JOIN (
SELECT
p1.sfid
FROM
staging.product2 p1
WHERE
p1. NAME ILIKE '%s%'
OR p1.productcode ILIKE '%s%'
) AS TEMP ON (P .sfid = TEMP .sfid)
WHERE
P .status__c = 'Available'
AND LOWER (
P .vendor_shipping_country__c
) = ANY (
VALUES
('us'),
('usa'),
('united states'),
('united states of america')
)
AND P .vendor_catalog_tier__c = ANY (
VALUES
('a1c37000000oljnAAA'),
('a1c37000000oljQAAQ'),
('a1c37000000oljQAAQ'),
('a1c37000000pT7IAAU'),
('a1c37000000omDjAAI'),
('a1c37000000oljMAAQ'),
('a1c37000000oljaAAA'),
('a1c37000000pT7SAAU'),
('a1c0R000000AFcVQAW'),
('a1c0R000000A1HAQA0'),
('a1c0R0000000OpWQAU'),
('a1c0R0000005TZMQA2'),
('a1c37000000oljdAAA'),
('a1c37000000ooTqAAI'),
('a1c37000000omLBAAY'),
('a1c0R0000005N8GQAU')
)
Here is the explain plan:
Nested Loop (cost=31.85..33886.54 rows=3681 width=750)
-> Hash Join (cost=31.77..31433.07 rows=4415 width=750)
Hash Cond: (lower((p.vendor_shipping_country__c)::text) = "*VALUES*".column1)
-> Nested Loop (cost=31.73..31423.67 rows=8830 width=761)
-> HashAggregate (cost=0.06..0.11 rows=16 width=32)
Group Key: "*VALUES*_1".column1
-> Values Scan on "*VALUES*_1" (cost=0.00..0.06 rows=16 width=32)
-> Bitmap Heap Scan on product2 p (cost=31.66..1962.32 rows=552 width=780)
Recheck Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
Filter: ((status__c)::text = 'Available'::text)
-> Bitmap Index Scan on vendor_catalog_tier_prd_idx (cost=0.00..31.64 rows=1016 width=0)
Index Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
-> Hash (cost=0.03..0.03 rows=4 width=32)
-> Unique (cost=0.02..0.03 rows=4 width=32)
-> Sort (cost=0.02..0.02 rows=4 width=32)
Sort Key: "*VALUES*".column1
-> Values Scan on "*VALUES*" (cost=0.00..0.01 rows=4 width=32)
-> Index Scan using sfid_prd_idx on product2 p1 (cost=0.09..0.55 rows=1 width=19)
Index Cond: ((sfid)::text = (p.sfid)::text)
Filter: (((name)::text ~~* '%s%'::text) OR ((productcode)::text ~~* '%s%'::text))
It returns around 140,576 records, but we need only the top 5,000. Will adding a LIMIT help here?
Let me know how to make it fast and what is causing the slowness.
EXPLAIN ANALYZE
@RaymondNijland Here is the EXPLAIN ANALYZE:
Nested Loop (cost=31.83..33427.28 rows=4039 width=750) (actual time=1.903..4384.221 rows=140576 loops=1)
-> Hash Join (cost=31.74..30971.32 rows=4369 width=750) (actual time=1.852..1094.964 rows=164353 loops=1)
Hash Cond: (lower((p.vendor_shipping_country__c)::text) = "*VALUES*".column1)
-> Nested Loop (cost=31.70..30962.02 rows=8738 width=761) (actual time=1.800..911.738 rows=164353 loops=1)
-> HashAggregate (cost=0.06..0.11 rows=16 width=32) (actual time=0.012..0.019 rows=15 loops=1)
Group Key: "*VALUES*_1".column1
-> Values Scan on "*VALUES*_1" (cost=0.00..0.06 rows=16 width=32) (actual time=0.004..0.005 rows=16 loops=1)
-> Bitmap Heap Scan on product2 p (cost=31.64..1933.48 rows=546 width=780) (actual time=26.004..57.290 rows=10957 loops=15)
Recheck Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
Filter: ((status__c)::text = 'Available'::text)
Rows Removed by Filter: 645
Heap Blocks: exact=88436
-> Bitmap Index Scan on vendor_catalog_tier_prd_idx (cost=0.00..31.61 rows=1000 width=0) (actual time=24.811..24.811 rows=11601 loops=15)
Index Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
-> Hash (cost=0.03..0.03 rows=4 width=32) (actual time=0.032..0.032 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Unique (cost=0.02..0.03 rows=4 width=32) (actual time=0.026..0.027 rows=4 loops=1)
-> Sort (cost=0.02..0.02 rows=4 width=32) (actual time=0.026..0.026 rows=4 loops=1)
Sort Key: "*VALUES*".column1
Sort Method: quicksort Memory: 25kB
-> Values Scan on "*VALUES*" (cost=0.00..0.01 rows=4 width=32) (actual time=0.001..0.002 rows=4 loops=1)
-> Index Scan using sfid_prd_idx on product2 p1 (cost=0.09..0.56 rows=1 width=19) (actual time=0.019..0.020 rows=1 loops=164353)
Index Cond: ((sfid)::text = (p.sfid)::text)
Filter: (((name)::text ~~* '%s%'::text) OR ((productcode)::text ~~* '%s%'::text))
Rows Removed by Filter: 0
Planning time: 2.488 ms
Execution time: 4391.378 ms
Another version of the query, with an ORDER BY, is very slow as well (140 seconds):
SELECT
    p.sfid                 AS prodsfid,
    p.image_url__c         AS image,
    p.productcode          AS sku,
    p.short_description__c AS shortdesc,
    p.name                 AS pname,
    p.category__c,
    p.price__c             AS price,
    p.description,
    p.vendor_name__c       AS vname,
    p.vendor__c            AS supsfid
FROM staging.product2 p
WHERE p.status__c = 'Available'
  AND p.vendor_shipping_country__c IN (
    'us',
    'usa',
    'united states',
    'united states of america'
  )
  AND p.vendor_catalog_tier__c IN (
'a1c37000000omDQAAY',
'a1c37000000omDTAAY',
'a1c37000000omDXAAY',
'a1c37000000omDYAAY',
'a1c37000000omDZAAY',
'a1c37000000omDdAAI',
'a1c37000000omDfAAI',
'a1c37000000omDiAAI',
'a1c37000000oml6AAA',
'a1c37000000oljPAAQ',
'a1c37000000oljRAAQ',
'a1c37000000oljWAAQ',
'a1c37000000oljXAAQ',
'a1c37000000oljZAAQ',
'a1c37000000oljcAAA',
'a1c37000000oljdAAA',
'a1c37000000oljlAAA',
'a1c37000000oljoAAA',
'a1c37000000oljqAAA',
'a1c37000000olnvAAA',
'a1c37000000olnwAAA',
'a1c37000000olnxAAA',
'a1c37000000olnyAAA',
'a1c37000000olo0AAA',
'a1c37000000olo1AAA',
'a1c37000000olo4AAA',
'a1c37000000olo8AAA',
'a1c37000000olo9AAA',
'a1c37000000oloCAAQ',
'a1c37000000oloFAAQ',
'a1c37000000oloIAAQ',
'a1c37000000oloJAAQ',
'a1c37000000oloMAAQ',
'a1c37000000oloNAAQ',
'a1c37000000oloSAAQ',
'a1c37000000olodAAA',
'a1c37000000oloeAAA',
'a1c37000000olzCAAQ',
'a1c37000000om0xAAA',
'a1c37000000ooV1AAI',
'a1c37000000oog8AAA',
'a1c37000000oogDAAQ',
'a1c37000000oonzAAA',
'a1c37000000oluuAAA',
'a1c37000000pT7SAAU',
'a1c37000000oljnAAA',
'a1c37000000olumAAA',
'a1c37000000oljpAAA',
'a1c37000000pUm2AAE',
'a1c37000000olo3AAA',
'a1c37000000oo1MAAQ',
'a1c37000000oo1vAAA',
'a1c37000000pWxgAAE',
'a1c37000000pYJkAAM',
'a1c37000000omDjAAI',
'a1c37000000ooTgAAI',
'a1c37000000op2GAAQ',
'a1c37000000one0AAA',
'a1c37000000oljYAAQ',
'a1c37000000pUlxAAE',
'a1c37000000oo9SAAQ',
'a1c37000000pcIYAAY',
'a1c37000000pamtAAA',
'a1c37000000pd2QAAQ',
'a1c37000000pdCOAAY',
'a1c37000000OpPaAAK',
'a1c37000000OphZAAS',
'a1c37000000olNkAAI'
)
ORDER BY p.productcode ASC
LIMIT 5000;
Here is the EXPLAIN ANALYZE for this query:
Limit (cost=0.09..45271.54 rows=5000 width=750) (actual time=48593.355..86376.864 rows=5000 loops=1)
-> Index Scan using productcode_prd_idx on product2 p (cost=0.09..743031.39 rows=82064 width=750) (actual time=48593.353..86376.283 rows=5000 loops=1)
Filter: (((status__c)::text = 'Available'::text) AND ((vendor_shipping_country__c)::text = ANY ('{us,usa,"united states","united states of america"}'::text[])) AND ((vendor_catalog_tier__c)::text = ANY ('{a1c37000000omDQAAY,a1c37000000omDTAAY,a1c37000000omDXAAY,a1c37000000omDYAAY,a1c37000000omDZAAY,a1c37000000omDdAAI,a1c37000000omDfAAI,a1c37000000omDiAAI,a1c37000000oml6AAA,a1c37000000oljPAAQ,a1c37000000oljRAAQ,a1c37000000oljWAAQ,a1c37000000oljXAAQ,a1c37000000oljZAAQ,a1c37000000oljcAAA,a1c37000000oljdAAA,a1c37000000oljlAAA,a1c37000000oljoAAA,a1c37000000oljqAAA,a1c37000000olnvAAA,a1c37000000olnwAAA,a1c37000000olnxAAA,a1c37000000olnyAAA,a1c37000000olo0AAA,a1c37000000olo1AAA,a1c37000000olo4AAA,a1c37000000olo8AAA,a1c37000000olo9AAA,a1c37000000oloCAAQ,a1c37000000oloFAAQ,a1c37000000oloIAAQ,a1c37000000oloJAAQ,a1c37000000oloMAAQ,a1c37000000oloNAAQ,a1c37000000oloSAAQ,a1c37000000olodAAA,a1c37000000oloeAAA,a1c37000000olzCAAQ,a1c37000000om0xAAA,a1c37000000ooV1AAI,a1c37000000oog8AAA,a1c37000000oogDAAQ,a1c37000000oonzAAA,a1c37000000oluuAAA,a1c37000000pT7SAAU,a1c37000000oljnAAA,a1c37000000olumAAA,a1c37000000oljpAAA,a1c37000000pUm2AAE,a1c37000000olo3AAA,a1c37000000oo1MAAQ,a1c37000000oo1vAAA,a1c37000000pWxgAAE,a1c37000000pYJkAAM,a1c37000000omDjAAI,a1c37000000ooTgAAI,a1c37000000op2GAAQ,a1c37000000one0AAA,a1c37000000oljYAAQ,a1c37000000pUlxAAE,a1c37000000oo9SAAQ,a1c37000000pcIYAAY,a1c37000000pamtAAA,a1c37000000pd2QAAQ,a1c37000000pdCOAAY,a1c37000000OpPaAAK,a1c37000000OphZAAS,a1c37000000olNkAAI}'::text[])))
Rows Removed by Filter: 1707920
Planning time: 1.685 ms
Execution time: 86377.139 ms
Thanks
Aslam Bari
You might want to consider a GIN or GiST index on your staging.product2 table. Double-ended ILIKE filters ('%...%') are slow and difficult to improve substantially. I've seen a GIN index improve a similar query by 60-80%.
See this doc.
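In case the link goes stale: a GIN index for '%...%' ILIKE searches typically relies on the pg_trgm extension's trigram operator class. A minimal sketch, assuming the columns from the query above and that pg_trgm is available on your installation:

```sql
-- pg_trgm supplies the gin_trgm_ops operator class used below
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- one trigram index per ILIKE'd column; the planner can combine
-- them with a BitmapOr for the (name OR productcode) condition
CREATE INDEX product2_name_trgm_idx
    ON staging.product2 USING gin (name gin_trgm_ops);
CREATE INDEX product2_productcode_trgm_idx
    ON staging.product2 USING gin (productcode gin_trgm_ops);
```

After building the indexes and running ANALYZE, check EXPLAIN again to see whether the ILIKE conditions are answered from the trigram indexes instead of being applied as a per-row filter.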

postgres two column sort low performance

I've got a query that performs multiple joins, and I am trying to get only the latest position of each keyword.
Here is the query:
SELECT DISTINCT ON (p.keyword_id)
a.id AS account_id,
w.parent_id AS parent_id,
w.name AS name,
p.position AS position
FROM websites w
JOIN accounts a ON w.account_id = a.id
JOIN keywords k ON k.website_id = w.parent_id
JOIN positions p ON p.website_id = w.parent_id
WHERE a.amount > 0 AND w.parent_id NOTNULL AND (round((a.amount / a.payment_renewal_period), 2) BETWEEN 1 AND 19)
ORDER BY p.keyword_id, p.created_at DESC;
Plan with costs for that query is as follows:
Unique (cost=73673.65..76630.38 rows=264 width=40) (actual time=30777.117..49143.023 rows=259 loops=1)
-> Sort (cost=73673.65..75152.02 rows=591347 width=40) (actual time=30777.116..47352.373 rows=10891486 loops=1)
Sort Key: p.keyword_id, p.created_at DESC
Sort Method: external merge Disk: 512672kB
-> Merge Join (cost=219.59..812.26 rows=591347 width=40) (actual time=3.487..3827.028 rows=10891486 loops=1)
Merge Cond: (w.parent_id = k.website_id)
-> Nested Loop (cost=128.46..597.73 rows=1268 width=44) (actual time=3.378..108.915 rows=61582 loops=1)
-> Nested Loop (cost=2.28..39.86 rows=1 width=28) (actual time=0.026..0.216 rows=7 loops=1)
-> Index Scan using index_websites_on_parent_id on websites w (cost=0.14..15.08 rows=4 width=28) (actual time=0.004..0.023 rows=7 loops=1)
Index Cond: (parent_id IS NOT NULL)
-> Bitmap Heap Scan on accounts a (cost=2.15..6.18 rows=1 width=4) (actual time=0.019..0.020 rows=1 loops=7)
Recheck Cond: (id = w.account_id)
Filter: ((amount > '0'::numeric) AND (round((amount / (payment_renewal_period)::numeric), 2) >= '1'::numeric) AND (round((amount / (payment_renewal_period)::numeric), 2) <= '19'::numeric))
Heap Blocks: exact=7
-> Bitmap Index Scan on accounts_pkey (cost=0.00..2.15 rows=1 width=0) (actual time=0.006..0.006 rows=1 loops=7)
Index Cond: (id = w.account_id)
-> Bitmap Heap Scan on positions p (cost=126.18..511.57 rows=4631 width=16) (actual time=0.994..8.226 rows=8797 loops=7)
Recheck Cond: (website_id = w.parent_id)
Heap Blocks: exact=1004
-> Bitmap Index Scan on index_positions_on_5_columns (cost=0.00..125.02 rows=4631 width=0) (actual time=0.965..0.965 rows=8797 loops=7)
Index Cond: (website_id = w.parent_id)
-> Sort (cost=18.26..18.92 rows=264 width=4) (actual time=0.106..1013.966 rows=10891487 loops=1)
Sort Key: k.website_id
Sort Method: quicksort Memory: 37kB
-> Seq Scan on keywords k (cost=0.00..7.64 rows=264 width=4) (actual time=0.005..0.039 rows=263 loops=1)
Planning time: 1.081 ms
Execution time: 49184.222 ms
The thing is when I run query with w.id instead of w.parent_id in join positions part total cost decreases to
Unique (cost=3621.07..3804.99 rows=264 width=40) (actual time=128.430..139.550 rows=259 loops=1)
-> Sort (cost=3621.07..3713.03 rows=36784 width=40) (actual time=128.429..135.444 rows=40385 loops=1)
Sort Key: p.keyword_id, p.created_at DESC
Sort Method: external sort Disk: 2000kB
-> Merge Join (cost=128.73..831.59 rows=36784 width=40) (actual time=25.521..63.299 rows=40385 loops=1)
Merge Cond: (k.website_id = w.id)
-> Index Only Scan using index_keywords_on_website_id_deleted_at on keywords k (cost=0.27..24.23 rows=264 width=4) (actual time=0.137..0.274 rows=263 loops=1)
Heap Fetches: 156
-> Materialize (cost=128.46..606.85 rows=1268 width=44) (actual time=3.772..49.587 rows=72242 loops=1)
-> Nested Loop (cost=128.46..603.68 rows=1268 width=44) (actual time=3.769..30.530 rows=61582 loops=1)
-> Nested Loop (cost=2.28..45.80 rows=1 width=32) (actual time=0.047..0.204 rows=7 loops=1)
-> Index Scan using websites_pkey on websites w (cost=0.14..21.03 rows=4 width=32) (actual time=0.007..0.026 rows=7 loops=1)
Filter: (parent_id IS NOT NULL)
Rows Removed by Filter: 4
-> Bitmap Heap Scan on accounts a (cost=2.15..6.18 rows=1 width=4) (actual time=0.018..0.019 rows=1 loops=7)
Recheck Cond: (id = w.account_id)
Filter: ((amount > '0'::numeric) AND (round((amount / (payment_renewal_period)::numeric), 2) >= '1'::numeric) AND (round((amount / (payment_renewal_period)::numeric), 2) <= '19'::numeric))
Heap Blocks: exact=7
-> Bitmap Index Scan on accounts_pkey (cost=0.00..2.15 rows=1 width=0) (actual time=0.004..0.004 rows=1 loops=7)
Index Cond: (id = w.account_id)
-> Bitmap Heap Scan on positions p (cost=126.18..511.57 rows=4631 width=16) (actual time=0.930..2.341 rows=8797 loops=7)
Recheck Cond: (website_id = w.parent_id)
Heap Blocks: exact=1004
-> Bitmap Index Scan on index_positions_on_5_columns (cost=0.00..125.02 rows=4631 width=0) (actual time=0.906..0.906 rows=8797 loops=7)
Index Cond: (website_id = w.parent_id)
Planning time: 1.124 ms
Execution time: 157.167 ms
Indexes on websites
Indexes:
"websites_pkey" PRIMARY KEY, btree (id)
"index_websites_on_account_id" btree (account_id)
"index_websites_on_deleted_at" btree (deleted_at)
"index_websites_on_domain_id" btree (domain_id)
"index_websites_on_parent_id" btree (parent_id)
"index_websites_on_processed_at" btree (processed_at)
Indexes on positions
Indexes:
"positions_pkey" PRIMARY KEY, btree (id)
"index_positions_on_5_columns" UNIQUE, btree (website_id, keyword_id, created_at, engine_id, region_id)
"overlap_index" btree (keyword_id, created_at)
The second EXPLAIN output shows more than 200 times fewer rows, so it is hardly surprising that sorting is much faster.
You will notice that the sort spills to disk in both cases (Sort Method: external merge / external sort  Disk: ...kB). If you can keep the sort in memory by raising work_mem, it will be much faster.
But the first sort is so large that you won't be able to fit it in memory.
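The second plan's sort, by contrast, spilled only 2 MB to disk, so even a modest, session-local bump keeps it in memory (the value below is illustrative):

```sql
-- session-local setting; does not affect other connections
SET work_mem = '16MB';
-- re-run the query / EXPLAIN ANALYZE here and look for
-- "Sort Method: quicksort  Memory: ..." instead of
-- "Sort Method: external sort  Disk: 2000kB"
RESET work_mem;
```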
Ideas to speed up the query:
An index on (keyword_id, created_at) for positions. Not sure if that helps though.
Do the filtering first, like this:
SELECT
a.id AS account_id,
w.parent_id AS parent_id,
w.name AS name,
p.position AS position
FROM (SELECT DISTINCT ON (keyword_id)
          position,
          website_id,
          keyword_id,
          created_at
      FROM positions
      ORDER BY keyword_id, created_at DESC) p
JOIN ...
WHERE ...
ORDER BY p.keyword_id, p.created_at DESC;
Remark: The DISTINCT ON is somewhat strange, since you do not ORDER BY the values of the SELECT list, so the result values are not well defined.
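Spelled out against the original query, that rewrite might look like the following. This is a sketch only: table and column names are taken from the question, and the inner DISTINCT ON pins one latest row per keyword before the joins run:

```sql
SELECT
    a.id        AS account_id,
    w.parent_id AS parent_id,
    w.name      AS name,
    p.position  AS position
FROM (SELECT DISTINCT ON (keyword_id)
             position,
             website_id,
             keyword_id,
             created_at
      FROM positions
      ORDER BY keyword_id, created_at DESC) p
JOIN websites w ON p.website_id = w.parent_id
JOIN accounts a ON w.account_id = a.id
JOIN keywords k ON k.website_id = w.parent_id
WHERE a.amount > 0
  AND w.parent_id IS NOT NULL
  AND round(a.amount / a.payment_renewal_period, 2) BETWEEN 1 AND 19
ORDER BY p.keyword_id, p.created_at DESC;
```

Note that the joins can still multiply rows per keyword (e.g. via keywords), so you may still need DISTINCT ON (p.keyword_id) in the outer query; the point of the rewrite is that the expensive sort now runs over positions alone rather than over the multiplied join result.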