I have a query where Postgres performs a hash join with sequential scans instead of a nested loop with index scans when I use an OR condition. This causes the query to take about 2 seconds instead of completing in under 100 ms. I have run VACUUM ANALYZE and rebuilt the index on the PATIENTCHARTNOTE table (which is about 5 GB), but it still uses the hash join. Do you have any suggestions on how I can improve this?
explain analyze
SELECT Count (_pcn.id) AS total_open_note
FROM patientchartnote _pcn
INNER JOIN appointment _appt
ON _appt.id = _pcn.appointment_id
INNER JOIN patient _pt
ON _pt.id = _appt.patient_id
LEFT OUTER JOIN person _ps
ON _ps.id = _pt.appuser_id
WHERE _pcn.active = true
AND _pt.active = true
AND _appt.datecomplete IS NULL
AND _pcn.title IS NOT NULL
AND _pcn.title <> ''
AND ( _pt.assigned_to_user_id = '136964'
OR _pcn.createdby_id = '136964'
);
Aggregate (cost=237655.59..237655.60 rows=1 width=8) (actual time=1602.069..1602.069 rows=1 loops=1)
-> Hash Join (cost=83095.43..237645.30 rows=4117 width=4) (actual time=944.850..1602.014 rows=241 loops=1)
Hash Cond: (_appt.patient_id = _pt.id)
Join Filter: ((_pt.assigned_to_user_id = 136964) OR (_pcn.createdby_id = 136964))
Rows Removed by Join Filter: 94036
-> Hash Join (cost=46650.68..182243.64 rows=556034 width=12) (actual time=415.862..1163.812 rows=94457 loops=1)
Hash Cond: (_pcn.appointment_id = _appt.id)
-> Seq Scan on patientchartnote _pcn (cost=0.00..112794.20 rows=1073978 width=12) (actual time=0.016..423.262 rows=1073618 loops=1)
Filter: (active AND (title IS NOT NULL) AND ((title)::text <> ''::text))
Rows Removed by Filter: 22488
-> Hash (cost=35223.61..35223.61 rows=696486 width=8) (actual time=414.749..414.749 rows=692839 loops=1)
Buckets: 131072 Batches: 16 Memory Usage: 2732kB
-> Seq Scan on appointment _appt (cost=0.00..35223.61 rows=696486 width=8) (actual time=0.010..271.208 rows=692839 loops=1)
Filter: (datecomplete IS NULL)
Rows Removed by Filter: 652426
-> Hash (cost=24698.57..24698.57 rows=675694 width=12) (actual time=351.566..351.566 rows=674929 loops=1)
Buckets: 131072 Batches: 16 Memory Usage: 2737kB
-> Seq Scan on patient _pt (cost=0.00..24698.57 rows=675694 width=12) (actual time=0.013..197.268 rows=674929 loops=1)
Filter: active
Rows Removed by Filter: 17426
Planning time: 1.533 ms
Execution time: 1602.715 ms
When I replace "OR _pcn.createdby_id = '136964'" with "AND _pcn.createdby_id = '136964'", Postgres performs index scans with a nested loop:
Aggregate (cost=29167.56..29167.57 rows=1 width=8) (actual time=937.743..937.743 rows=1 loops=1)
-> Nested Loop (cost=1.28..29167.55 rows=7 width=4) (actual time=19.136..937.669 rows=37 loops=1)
-> Nested Loop (cost=0.85..27393.03 rows=1654 width=4) (actual time=2.154..910.250 rows=1649 loops=1)
-> Index Scan using patient_activeassigned_idx on patient _pt (cost=0.42..3075.00 rows=1644 width=8) (actual time=1.599..11.820 rows=1627 loops=1)
Index Cond: ((active = true) AND (assigned_to_user_id = 136964))
Filter: active
-> Index Scan using appointment_datepatient_idx on appointment _appt (cost=0.43..14.75 rows=4 width=8) (actual time=0.543..0.550 rows=1 loops=1627)
Index Cond: ((patient_id = _pt.id) AND (datecomplete IS NULL))
-> Index Scan using patientchartnote_activeappointment_idx on patientchartnote _pcn (cost=0.43..1.06 rows=1 width=8) (actual time=0.014..0.014 rows=0 loops=1649)
Index Cond: ((active = true) AND (createdby_id = 136964) AND (appointment_id = _appt.id) AND (title IS NOT NULL))
Filter: (active AND ((title)::text <> ''::text))
Planning time: 1.489 ms
Execution time: 937.910 ms
(13 rows)
Using OR in SQL queries usually results in bad performance.
That is because, unlike AND, it does not restrict but extends the number of rows in the query result. With AND, you can use an index scan for one part of the condition and further restrict the result set with a filter on the second condition. That is not possible with OR.
So PostgreSQL does the only thing left: it computes the whole join and then filters out all rows that do not match the condition. Of course that is very inefficient when you are joining three tables (I didn't count the outer join).
Assuming that all columns called id are primary keys, you could rewrite the query as follows:
SELECT count(*) FROM
(SELECT _pcn.id
FROM patientchartnote _pcn
INNER JOIN appointment _appt
ON _appt.id = _pcn.appointment_id
INNER JOIN patient _pt
ON _pt.id = _appt.patient_id
LEFT OUTER JOIN person _ps
ON _ps.id = _pt.appuser_id
WHERE _pcn.active = true
AND _pt.active = true
AND _appt.datecomplete IS NULL
AND _pcn.title IS NOT NULL
AND _pcn.title <> ''
AND _pt.assigned_to_user_id = '136964'
UNION
SELECT _pcn.id
FROM patientchartnote _pcn
INNER JOIN appointment _appt
ON _appt.id = _pcn.appointment_id
INNER JOIN patient _pt
ON _pt.id = _appt.patient_id
LEFT OUTER JOIN person _ps
ON _ps.id = _pt.appuser_id
WHERE _pcn.active = true
AND _pt.active = true
AND _appt.datecomplete IS NULL
AND _pcn.title IS NOT NULL
AND _pcn.title <> ''
AND _pcn.createdby_id = '136964'
) q;
Even though this is running the query twice, indexes can be used to filter out all but a few rows early on, so this query should perform better.
Related
We have a PostgreSQL query with multiple tables and left outer joins that is running very slowly.
It completes in 25-40 seconds, so we want to optimize it further and bring the run time down to 1-2 seconds.
select a.campaignid, b.campaign_name , case when b.message_type_id = 1 then 'Promotional'
when b.message_type_id = 2 then 'Transactional'
else 'Other' end as Campaign_type, c.username , aggregator_type,
e.cli_manager_id as senderID,
b.schedule_time as campaign_schedule_date,
count(a.mobile) as campaign_submitted_count, count(case when a.status = 'DELIVRD' then mobile end) as Delivered,
count(a.mobile) as Total_count,
count(case when a.status = 'FAILED' then mobile end) as failure_count,
count(case when a.status = 'DND_check_failed' then mobile end) as DND_count,
sum(credits_used) as credits_used
from tbl_cdr_test a left outer join tbl_campaign b
on a.campaignid = b.tbl_campaign_id left outer join tbl_users_master c
on b.user_id =c.user_master_id
left outer join tbl_cli_manager e on b.user_id = e.user_id
left outer join tbl_user_channel f on b.user_id =f.user_id
left outer join tbl_user_configurations g on b.user_id = g.user_id
where date(insert_datetime) between '2020-05-23' and '2020-06-23'
and c.username = coalesce(null, c.username)
and g.msg_cat_id = coalesce(null, g.msg_cat_id)
and a.campaignid = coalesce(null, a.campaignid)
and e.cli_manager_id = coalesce(null, e.cli_manager_id)
group by a.campaignid, b.campaign_name , b.message_type_id,c.username , b.schedule_time,
aggregator_type, e.cli_manager_id;
We have created appropriate indexes as well, but it is still taking time.
Moreover, the execution plan shows an "external merge Disk" sort method; to resolve this I set work_mem = 50MB, but it still sorts on disk instead of in memory. Please suggest.
Below is the execution plan:
GroupAggregate (cost=4872.01..4872.07 rows=1 width=543) (actual time=20564.239..27415.264 rows=8 loops=1)
Group Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
-> Sort (cost=4872.01..4872.01 rows=1 width=483) (actual time=19627.424..25020.702 rows=3206196 loops=1)
Sort Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
Sort Method: external merge Disk: 281456kB
-> Nested Loop (cost=22.03..4872.00 rows=1 width=483) (actual time=99.704..12086.244 rows=3206196 loops=1)
Join Filter: (b.user_id = g.user_id)
-> Nested Loop Left Join (cost=21.89..4871.79 rows=1 width=495) (actual time=99.688..4518.533 rows=3206196 loops=1)
-> Nested Loop (cost=21.75..4871.54 rows=1 width=77) (actual time=99.664..935.689 rows=356244 loops=1)
-> Nested Loop (cost=21.33..31.57 rows=1 width=65) (actual time=0.295..2.376 rows=588 loops=1)
Join Filter: (b.user_id = c.user_master_id)
-> Merge Join (cost=21.18..30.22 rows=6 width=46) (actual time=0.246..0.663 rows=588 loops=1)
Merge Cond: (e.user_id = b.user_id)
-> Index Scan using "idx_FK_7hc6agd_tbl_cli_ma_1592228110_32" on tbl_cli_manager e (cost=0.42..6281.84 rows=762 width=12) (actual time=0.014..0.035 rows=5 loops=1)
Filter: (cli_manager_id = COALESCE(cli_manager_id))
-> Sort (cost=20.76..21.13 rows=147 width=34) (actual time=0.225..0.333 rows=585 loops=1)
Sort Key: b.user_id
Sort Method: quicksort Memory: 36kB
-> Seq Scan on tbl_campaign b (cost=0.00..15.47 rows=147 width=34) (actual time=0.013..0.154 rows=147 loops=1)
-> Index Scan using ind_user_master_c_user on tbl_users_master c (cost=0.14..0.21 rows=1 width=19) (actual time=0.002..0.002 rows=1 loops=588)
Index Cond: (user_master_id = e.user_id)
Filter: ((username)::text = (COALESCE(username))::text)
-> Append (cost=0.42..4839.94 rows=3 width=20) (actual time=0.546..1.426 rows=606 loops=588)
-> Index Scan using testh11_campaignid_idx on testh11 a (cost=0.42..4253.99 rows=2 width=20) (actual time=0.543..0.543 rows=0 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
Rows Removed by Filter: 656
-> Index Scan using testh21_campaignid_idx on testh21 a_1 (cost=0.42..585.94 rows=1 width=20) (actual time=0.002..0.796 rows=606 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
-> Index Scan using idx_user_id_tbl_user_c_1592227657_19 on tbl_user_channel f (cost=0.14..0.24 rows=1 width=422) (actual time=0.002..0.004 rows=9 loops=356244)
Index Cond: (user_id = b.user_id)
-> Index Scan using "idx_FK_6958qvy_tbl_user_c_1592228774_151" on tbl_user_configurations g (cost=0.14..0.20 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=3206196)
Index Cond: (user_id = e.user_id)
Filter: (msg_cat_id = COALESCE(msg_cat_id))
Planning Time: 6.561 ms
Execution Time: 27477.860 ms
There is a gross underestimate of the result rows for the index scan on testh21. The consequence is that PostgreSQL chooses nested loop joins, which is where your time is spent.
Try the following:
New statistics:
ANALYZE testh21;
If that improves the estimate, make sure that autoanalyze treats the table more often.
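For example, you could lower the analyze threshold for this one table (the value here is only an illustration; tune it to your data):
ALTER TABLE testh21 SET (autovacuum_analyze_scale_factor = 0.02);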
Prevent bad estimates caused by correlation:
CREATE STATISTICS testh21_stat (dependencies)
ON campaignid, insert_datetime FROM testh21;
ANALYZE testh21;
Perhaps there is a correlation between the columns, and that improves the estimate.
More detailed statistics: try raising default_statistics_target before running ANALYZE on the table.
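For example (the target of 1000 is only an illustration; setting it per column makes it persistent):
SET default_statistics_target = 1000;
ANALYZE testh21;
-- or, per column, so that later automatic ANALYZE runs also use it:
ALTER TABLE testh21 ALTER COLUMN campaignid SET STATISTICS 1000;
ALTER TABLE testh21 ALTER COLUMN insert_datetime SET STATISTICS 1000;
ANALYZE testh21;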
If you cannot improve the estimates, take the hammer and set enable_nestloop = off for the duration of the query.
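To avoid affecting anything else, you can scope that setting to a single transaction, for example:
BEGIN;
SET LOCAL enable_nestloop = off;
-- run the problematic query here
COMMIT;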
I'm trying to take advantage of partitioning in one case:
I have a table "events" which is partitioned by list on the field "dt_pk", which is a foreign key to the table "dates".
-- Schema
drop schema if exists test cascade;
create schema test;
-- Tables
create table if not exists test.dates (
id bigint primary key,
dt date not null
);
create sequence test.seq_events_id;
create table if not exists test.events
(
id bigint not null,
dt_pk bigint not null,
content_int bigint,
foreign key (dt_pk) references test.dates(id) on delete cascade,
primary key (dt_pk, id)
)
partition by list (dt_pk);
-- Partitions
create table test.events_1 partition of test.events for values in (1);
create table test.events_2 partition of test.events for values in (2);
create table test.events_3 partition of test.events for values in (3);
-- Fill tables
insert into test.dates (id, dt)
select id, dt
from (
select 1 id, '2020-01-01'::date as dt
union all
select 2 id, '2020-01-02'::date as dt
union all
select 3 id, '2020-01-03'::date as dt
) t;
do $$
declare
dts record;
begin
for dts in (
select id
from test.dates
) loop
for k in 1..10000 loop
insert into test.events (id, dt_pk, content_int)
values (nextval('test.seq_events_id'), dts.id, random_between(1, 1000000));
end loop;
commit;
end loop;
end;
$$;
vacuum analyze test.dates, test.events;
I want to run a select like this:
select *
from test.events e
join test.dates d on e.dt_pk = d.id
where d.dt between '2020-01-02'::date and '2020-01-03'::date;
But in this case partition pruning doesn't work. That is understandable: I don't have a constant for the partition key. But from the documentation I know that there is partition pruning at execution time, which works with a value obtained from a subquery:
Partition pruning can be performed not only during the planning of a
given query, but also during its execution. This is useful as it can
allow more partitions to be pruned when clauses contain expressions
whose values are not known at query planning time, for example,
parameters defined in a PREPARE statement, using a value obtained from
a subquery, or using a parameterized value on the inner side of a
nested loop join.
So I rewrote my query like this and expected partition pruning:
select *
from test.events e
where e.dt_pk in (
select d.id
from test.dates d
where d.dt between '2020-01-02'::date and '2020-01-03'::date
);
But explain for this select says:
Hash Join (cost=1.07..833.07 rows=20000 width=24) (actual time=3.581..15.989 rows=20000 loops=1)
Hash Cond: (e.dt_pk = d.id)
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.005..6.361 rows=30000 loops=1)
-> Seq Scan on events_1 e (cost=0.00..164.00 rows=10000 width=24) (actual time=0.005..1.104 rows=10000 loops=1)
-> Seq Scan on events_2 e_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.005..1.127 rows=10000 loops=1)
-> Seq Scan on events_3 e_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.097 rows=10000 loops=1)
-> Hash (cost=1.04..1.04 rows=2 width=8) (actual time=0.006..0.006 rows=2 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on dates d (cost=0.00..1.04 rows=2 width=8) (actual time=0.004..0.004 rows=2 loops=1)
Filter: ((dt >= '2020-01-02'::date) AND (dt <= '2020-01-03'::date))
Rows Removed by Filter: 1
Planning Time: 0.206 ms
Execution Time: 17.237 ms
So we read all partitions. I even tried to force the planner to use a nested loop join, because I read in the documentation about a "parameterized value on the inner side of a nested loop join", but it didn't work:
set enable_hashjoin to off;
set enable_mergejoin to off;
And again:
Nested Loop (cost=0.00..1443.05 rows=20000 width=24) (actual time=9.160..25.252 rows=20000 loops=1)
Join Filter: (e.dt_pk = d.id)
Rows Removed by Join Filter: 30000
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.008..6.280 rows=30000 loops=1)
-> Seq Scan on events_1 e (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.105 rows=10000 loops=1)
-> Seq Scan on events_2 e_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.047 rows=10000 loops=1)
-> Seq Scan on events_3 e_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.007..1.082 rows=10000 loops=1)
-> Materialize (cost=0.00..1.05 rows=2 width=8) (actual time=0.000..0.000 rows=2 loops=30000)
-> Seq Scan on dates d (cost=0.00..1.04 rows=2 width=8) (actual time=0.004..0.004 rows=2 loops=1)
Filter: ((dt >= '2020-01-02'::date) AND (dt <= '2020-01-03'::date))
Rows Removed by Filter: 1
Planning Time: 0.202 ms
Execution Time: 26.516 ms
Then I noticed that in every example of "partition pruning at execution time" I see only an = condition, never IN.
And it really works that way:
explain (analyze) select * from test.events e where e.dt_pk = (select id from test.dates where id = 2);
Append (cost=1.04..718.04 rows=30000 width=24) (actual time=0.014..3.018 rows=10000 loops=1)
InitPlan 1 (returns $0)
-> Seq Scan on dates (cost=0.00..1.04 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
Filter: (id = 2)
Rows Removed by Filter: 2
-> Seq Scan on events_1 e (cost=0.00..189.00 rows=10000 width=24) (never executed)
Filter: (dt_pk = $0)
-> Seq Scan on events_2 e_1 (cost=0.00..189.00 rows=10000 width=24) (actual time=0.004..2.009 rows=10000 loops=1)
Filter: (dt_pk = $0)
-> Seq Scan on events_3 e_2 (cost=0.00..189.00 rows=10000 width=24) (never executed)
Filter: (dt_pk = $0)
Planning Time: 0.135 ms
Execution Time: 3.639 ms
And here is my final question: does partition pruning at execution time work only with a subquery returning one item, or is there a way to get the advantages of partition pruning with a subquery returning a list?
And why doesn't it work with a nested loop join? Did I misunderstand these words:
This includes values from subqueries and values from execution-time
parameters such as those from parameterized nested loop joins.
Or "parameterized nested loop joins" is something different from regular nested loop joins?
There is no partition pruning in your nested loop join because the partitioned table is on the outer side, which is always scanned completely. The inner side is scanned with the join key from the outer side as parameter (hence parameterized scan), so if the partitioned table were on the inner side of the nested loop join, partition pruning could happen.
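A minimal sketch of a join where test.events could end up as the parameterized inner side (disabling materialization and the other join methods, as mentioned further down); whether the planner actually picks such a plan and prunes at run time depends on the costs and on your PostgreSQL version (11 or later):
set enable_material = off;
set enable_hashjoin = off;
set enable_mergejoin = off;
explain (analyze)
select e.*
from test.dates d
join test.events e on e.dt_pk = d.id
where d.dt between '2020-01-02'::date and '2020-01-03'::date;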
Partition pruning with IN lists can take place if the list values are known at plan time:
EXPLAIN (COSTS OFF)
SELECT * FROM test.events WHERE dt_pk IN (1, 2);
QUERY PLAN
---------------------------------------------------
Append
-> Seq Scan on events_1
Filter: (dt_pk = ANY ('{1,2}'::bigint[]))
-> Seq Scan on events_2
Filter: (dt_pk = ANY ('{1,2}'::bigint[]))
(5 rows)
But no attempts are made to flatten a subquery, and PostgreSQL doesn't use partition pruning, even if you force the partitioned table to be on the inner side (enable_material = off, enable_hashjoin = off, enable_mergejoin = off):
EXPLAIN (ANALYZE)
SELECT * FROM test.events WHERE dt_pk IN (SELECT 1 UNION SELECT 2);
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.06..2034.09 rows=20000 width=24) (actual time=0.057..15.523 rows=20000 loops=1)
Join Filter: (events_1.dt_pk = (1))
Rows Removed by Join Filter: 40000
-> Unique (cost=0.06..0.07 rows=2 width=4) (actual time=0.026..0.029 rows=2 loops=1)
-> Sort (cost=0.06..0.07 rows=2 width=4) (actual time=0.024..0.025 rows=2 loops=1)
Sort Key: (1)
Sort Method: quicksort Memory: 25kB
-> Append (cost=0.00..0.05 rows=2 width=4) (actual time=0.006..0.009 rows=2 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.012..4.334 rows=30000 loops=2)
-> Seq Scan on events_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.011..1.057 rows=10000 loops=2)
-> Seq Scan on events_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.004..0.641 rows=10000 loops=2)
-> Seq Scan on events_3 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.002..0.594 rows=10000 loops=2)
Planning Time: 0.531 ms
Execution Time: 16.567 ms
(16 rows)
I am not certain, but it may be because the tables are so small. You might want to try with bigger tables.
If you care more about getting it to work than about the fine details, and you haven't tried this yet, you can rewrite the query to something like:
explain analyze select *
from test.dates d
join test.events e on e.dt_pk = d.id
where
d.dt between '2020-01-02'::date and '2020-01-03'::date
and e.dt_pk in (extract(day from '2020-01-02'::date)::int,
extract(day from '2020-01-03'::date)::int);
which will give the expected pruning (this relies on being able to derive the partition key values from the date constants at planning time, which happens to hold for this test data).
Query plan:
Finalize Aggregate (cost=20465.85..20465.86 rows=1 width=8) (actual time=21363.036..21363.037 rows=1 loops=1)
-> Gather (cost=20465.63..20465.84 rows=2 width=8) (actual time=21363.014..21363.108 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=19465.63..19465.64 rows=1 width=8) (actual time=19548.388..19548.388 rows=1 loops=3)
-> Nested Loop (cost=10766.23..19465.62 rows=6 width=0) (actual time=7204.795..19548.383 rows=1 loops=3)
-> Hash Join (cost=10765.81..15255.58 rows=1142 width=16) (actual time=115.799..19505.353 rows=3937 loops=3)
Hash Cond: (exc.packageid = ex.id)
Join Filter: (ex1.notificationcontent ~~* (('%'::text || (CASE WHEN (exc.packageuid IS NOT NULL) THEN exc.packageuid ELSE ex.packageuid END)::text) || '%'::text))
Rows Removed by Join Filter: 1044143
-> Merge Join (cost=9575.89..13397.74 rows=254444 width=496) (actual time=104.064..456.209 rows=1048080 loops=3)
Merge Cond: (ex1.correspondent_uid = exc.correspondentuid)
-> Sort (cost=8011.08..8014.32 rows=1295 width=492) (actual time=96.345..97.537 rows=4011 loops=3)
Sort Key: ex1.correspondent_uid
Sort Method: quicksort Memory: 5797kB
-> Parallel Seq Scan on ex_in ex1 (cost=0.00..7944.14 rows=1295 width=492) (actual time=0.082..90.635 rows=4011 loops=3)
Filter: ((doctypeid = ANY ('{201,140005}'::integer[])) AND (notificationcontent ~~* '%success%'::text))
Rows Removed by Filter: 12263
-> Sort (cost=1564.81..1596.49 rows=12675 width=36) (actual time=7.714..132.269 rows=1048164 loops=3)
Sort Key: exc.correspondentuid
Sort Method: quicksort Memory: 1376kB
-> Seq Scan on ex_out_correspondents exc (cost=0.00..701.03 rows=12675 width=36) (actual time=0.032..4.151 rows=12694 loops=3)
Filter: (exchangestate = 2)
Rows Removed by Filter: 20268
-> Hash (cost=884.00..884.00 rows=24473 width=36) (actual time=11.028..11.028 rows=24489 loops=3)
Buckets: 32768 Batches: 1 Memory Usage: 1874kB
-> Seq Scan on ex_out ex (cost=0.00..884.00 rows=24473 width=36) (actual time=0.025..6.597 rows=24489 loops=3)
Filter: ((doctypeid < 200) AND (exchangetype = 7))
Rows Removed by Filter: 2780
-> Index Scan using ipk__ds_documents__73ba3083 on ds_documents d (cost=0.42..3.69 rows=1 width=16) (actual time=0.010..0.010 rows=0 loops=11811)
Index Cond: (uid = ex.docuid)
Filter: ((NOT pruz) AND ((reg_date)::date >= '2019-02-01'::date) AND ((reg_date)::date <= '2019-02-28'::date))
Rows Removed by Filter: 1
Planning time: 4.169 ms
Execution time: 21363.604 ms
If I remove one of the LIKE conditions, the result is built in about 100 milliseconds, versus 30 seconds if I leave both conditions.
EXPLAIN(ANALYZE)
SELECT count(*)
FROM DS_DOCUMENTS d
INNER JOIN EX_Out ex ON d.UID = ex.DocUID and ExchangeType IN (7) and DocTypeID < 200 and d.pruz = false
INNER JOIN EX_Out_Correspondents exc ON ex.ID = exc.PackageID and exc.ExchangeState = 2
INNER JOIN Ex_in ex1 ON ex1.Correspondent_UID = exc.CorrespondentUID
and ex1.DocTypeID in (201, 140005)
and ex1.NotificationContent ilike '%' || (case when exc.PackageUID is not null then exc.PackageUID else ex.PackageUID end)|| '%'
and ex1.NotificationContent ilike '%success%'
WHERE cast(reg_date as date) between '01.02.2019' and '28.02.2019'
What am I doing wrong?
The problem is in this condition, but I do not know why.
and ex1.NotificationContent ilike '%' || (case when exc.PackageUID is not null then exc.PackageUID else ex.PackageUID end)|| '%'
and ex1.NotificationContent ilike '%success%'
Here is an example of building the query piecewise, with the idea of trimming the working set before adding the expensive JOIN condition (which should itself be optimized if possible). I also tried to handle the CASE expression before that JOIN. Is the 'success' string located in a predictable place in ex1.NotificationContent? If so, apply that in the first condition.
WITH d AS
(
SELECT *
FROM DS_DOCUMENTS d
WHERE reg_date::date BETWEEN '01.02.2019' AND '28.02.2019'
), ex_exc AS
(
SELECT d.* -- you may choose to select only the required fields in place of *
, ex.*
, exc.*
, (CASE WHEN exc.PackageUID IS NOT NULL THEN exc.PackageUID ELSE ex.PackageUID END) AS join_package_uid
FROM d
JOIN EX_Out ex ON (d.UID = ex.DocUID
AND ExchangeType = 7
AND DocTypeID < 200
AND d.pruz = FALSE
)
JOIN EX_Out_Correspondents exc ON (ex.ID = exc.PackageID
AND exc.ExchangeState = 2
)
)
SELECT count(*)
FROM ex_exc
JOIN Ex_in ex1 ON (ex1.Correspondent_UID = ex_exc.CorrespondentUID
AND ex1.DocTypeID IN (201, 140005)
AND ex1.NotificationContent ILIKE '%' || ex_exc.join_package_uid || '%'
AND ex1.NotificationContent ILIKE '%success%'
)
;
From your execution plan it seems like the first of these conditions is your main problem:
-> Hash Join (cost=10765.81..15255.58 rows=1142 width=16) (actual time=115.799..19505.353 rows=3937 loops=3)
Hash Cond: (exc.packageid = ex.id)
Join Filter: (ex1.notificationcontent ~~* (('%'::text || (CASE WHEN (exc.packageuid IS NOT NULL) THEN exc.packageuid ELSE ex.packageuid END)::text) || '%'::text))
Rows Removed by Join Filter: 1044143
All your time is spent there, and it must be the roughly one million executions of the pattern match that take so long.
Maybe you can come up with a cheaper test. Even an additional, cheaper condition that reduces the number of rows reaching the expensive test would be a win.
I have a very simple query which uses JSON data to join against the primary table:
WITH
timecode_range AS
(
SELECT
(t->>'table_id')::integer AS table_id,
(t->>'timecode_from')::bigint AS timecode_from,
(t->>'timecode_to')::bigint AS timecode_to
FROM (SELECT '{"table_id":1,"timecode_from":19890328,"timecode_to":119899328}'::jsonb t) rowset
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to
It works perfectly when "timecode_range" returns only 1 record:
Nested Loop (cost=0.43..4668.80 rows=1416 width=97) (actual time=0.352..0.352 rows=0 loops=1)
CTE timecode_range
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002 rows=1 loops=1)
-> CTE Scan on timecode_range r (cost=0.00..0.02 rows=1 width=20) (actual time=0.007..0.007 rows=1 loops=1)
-> Index Scan using json_notification_pkey on json_notification n (cost=0.42..4654.61 rows=1416 width=97) (actual time=0.322..0.322 rows=0 loops=1)
Index Cond: ((timecode > r.timecode_from) AND (timecode <= r.timecode_to))
Filter: (r.table_id = table_id)
Planning time: 2.292 ms
Execution time: 0.665 ms
But when I need to return several records:
WITH
timecode_range AS
(
SELECT
(t->>'table_id')::integer AS table_id,
(t->>'timecode_from')::bigint AS timecode_from,
(t->>'timecode_to')::bigint AS timecode_to
FROM (SELECT json_array_elements('[{"table_id":1,"timecode_from":19890328,"timecode_to":119899328}]') t) rowset
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to
It starts using a sequential scan and the execution time grows dramatically :(
Hash Join (cost=7.01..37289.68 rows=92068 width=97) (actual time=418.563..418.563 rows=0 loops=1)
Hash Cond: (n.table_id = r.table_id)
Join Filter: ((n.timecode > r.timecode_from) AND (n.timecode <= r.timecode_to))
Rows Removed by Join Filter: 14444
CTE timecode_range
-> Subquery Scan on rowset (cost=0.00..3.76 rows=100 width=32) (actual time=0.233..0.234 rows=1 loops=1)
-> Result (cost=0.00..0.51 rows=100 width=0) (actual time=0.218..0.218 rows=1 loops=1)
-> Seq Scan on json_notification n (cost=0.00..21703.36 rows=840036 width=97) (actual time=0.205..312.991 rows=840036 loops=1)
-> Hash (cost=2.00..2.00 rows=100 width=20) (actual time=0.239..0.239 rows=1 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> CTE Scan on timecode_range r (cost=0.00..2.00 rows=100 width=20) (actual time=0.235..0.236 rows=1 loops=1)
Planning time: 4.729 ms
Execution time: 418.937 ms
What am I doing wrong?
PostgreSQL has no way to estimate the number of rows returned from a table function, so it uses the ROWS value specified in CREATE FUNCTION (default 1000).
For json_array_elements this value is set to 100:
SELECT prorows FROM pg_proc WHERE proname = 'json_array_elements';
┌─────────┐
│ prorows │
├─────────┤
│ 100 │
└─────────┘
(1 row)
But in your case the function returns only 1 row.
This misestimate makes PostgreSQL choose another join strategy (hash join instead of nested loop), which causes the longer execution time.
If you can choose some other construct than such a table function (e.g. a VALUES statement) that PostgreSQL can estimate, you'll get a better plan.
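For example, the CTE from the question could be written with a VALUES list instead of json_array_elements (the literals are the ones from the example JSON; in a real application you would have to build that list yourself):
WITH timecode_range (table_id, timecode_from, timecode_to) AS
(
    VALUES (1, 19890328::bigint, 119899328::bigint)
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to;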
An alternative is to use a LIMIT clause on the CTE definition if you can safely specify an upper limit.
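A minimal sketch of the LIMIT variant, keeping the original CTE (the bound of 10 is only an illustration; use a limit you can safely guarantee):
WITH
timecode_range AS
(
SELECT
(t->>'table_id')::integer AS table_id,
(t->>'timecode_from')::bigint AS timecode_from,
(t->>'timecode_to')::bigint AS timecode_to
FROM (SELECT json_array_elements('[{"table_id":1,"timecode_from":19890328,"timecode_to":119899328}]') t) rowset
LIMIT 10
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to;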
If you think that PostgreSQL is wrong when it switches to a hash join beyond a certain row count, you can test as follows:
Run the query (using a sequential scan and a hash join) and measure the duration (psql's \timing command will help).
Force a nested loop join:
SET enable_hashjoin=off;
SET enable_mergejoin=off;
Run the query again (with a nested loop join) and measure the duration.
If PostgreSQL is indeed wrong, you could adjust the optimizer parameters by lowering random_page_cost to a value closer to seq_page_cost.
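For example, at the session level (1.1 is just an illustration, a value often used for SSD storage; the default is 4.0):
SET random_page_cost = 1.1;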
My query takes a long time to execute even though it is limited and ordered by an indexed integer value. From what I have read, the problem is the count(*) in the subquery, but I didn't find a solution.
POSTGRESQL 9.1
QUERY:
SELECT
sms.id,
(select count(*)
from sms_received, sms_recipient
where sms.id = sms_recipient.sms_id
and sms_recipient.id = sms_received.sms_recipient_id ) as pocet_resp
FROM "sms" WHERE done = true
ORDER BY "sms"."id" desc limit 100;
EXPLAIN ANALYZE Output:
Limit (cost=0.00..377992.17 rows=100 width=4) (actual time=58.566..5549.074 rows=100 loops=1)
-> Index Scan using sms_id on sms (cost=0.00..1701422117.01 rows=450121 width=4) (actual time=58.564..5548.913 rows=100 loops=1)
Filter: done
SubPlan 1
-> Aggregate (cost=3778.61..3778.62 rows=1 width=0) (actual time=55.471..55.471 rows=1 loops=100)
-> Hash Join (cost=660.83..3778.59 rows=6 width=0) (actual time=55.276..55.456 rows=0 loops=100)
Hash Cond: (sms_received.sms_recipient_id = sms_recipient.id)
-> Seq Scan on sms_received (cost=0.00..2656.33 rows=123033 width=4) (actual time=0.002..30.758 rows=123039 loops=100)
-> Hash (cost=658.73..658.73 rows=168 width=4) (actual time=0.060..0.060 rows=27 loops=100)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
-> Bitmap Heap Scan on sms_recipient (cost=5.92..658.73 rows=168 width=4) (actual time=0.036..0.047 rows=27 loops=100)
Recheck Cond: (sms.id = sms_id)
-> Bitmap Index Scan on sms_rec_sms_id (cost=0.00..5.87 rows=168 width=0) (actual time=0.026..0.026 rows=140 loops=100)
Index Cond: (sms.id = sms_id)
Total runtime: 5549.237 ms
Perhaps this will help:
select sms.id,
       count(sms_received.sms_recipient_id)
from sms
left join sms_recipient on sms_recipient.sms_id = sms.id
left join sms_received on sms_received.sms_recipient_id = sms_recipient.id
where sms.done = true and
      sms.id in (select id from sms where done = true order by id desc limit 100)
group by sms.id
order by sms.id desc
You might also try:
select sms.id,
       count(sms_received.sms_recipient_id)
from sms
left join sms_recipient on sms_recipient.sms_id = sms.id
left join sms_received on sms_received.sms_recipient_id = sms_recipient.id
where sms.done = true
group by sms.id
order by sms.id desc
limit 100
... but I'm not sure that it will be as efficient.
I've solved that issue with a trigger: the trigger counts inserted rows, so I don't need to use count(*).
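A minimal sketch of what such a trigger could look like on PostgreSQL 9.1, assuming a counter column is added to sms and that sms_received rows are only ever inserted or deleted (the column, function, and trigger names are made up):
ALTER TABLE sms ADD COLUMN received_count integer NOT NULL DEFAULT 0;
CREATE OR REPLACE FUNCTION sms_received_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE sms SET received_count = received_count + 1
        WHERE id = (SELECT sms_id FROM sms_recipient WHERE id = NEW.sms_recipient_id);
    ELSE  -- DELETE
        UPDATE sms SET received_count = received_count - 1
        WHERE id = (SELECT sms_id FROM sms_recipient WHERE id = OLD.sms_recipient_id);
    END IF;
    RETURN NULL;  -- AFTER trigger, the return value is ignored
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_sms_received_count
AFTER INSERT OR DELETE ON sms_received
FOR EACH ROW EXECUTE PROCEDURE sms_received_count();
The original report then needs no subquery at all:
SELECT id, received_count FROM sms WHERE done ORDER BY id DESC LIMIT 100;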