Subquery is very slow when adding another column to the query - PostgreSQL

SELECT a1.object_id,
       (SELECT COALESCE(json_agg(b1), '[]')
        FROM (
              SELECT c1.root_id,
                     (SELECT COALESCE(json_agg(d1), '[]')
                      FROM (
                            SELECT e1.root_id,
                                   (SELECT COALESCE(json_agg(f1), '[]')
                                    FROM (
                                          SELECT g1.root_id,
                                                 (SELECT SUM(h2.amount)
                                                  FROM table_5 AS h1
                                                  LEFT OUTER JOIN table_6 AS h2 ON h1.hash_id = h2.hash_id
                                                  WHERE g1.object_id = h1.root_id
                                                    AND h2.mode = 'Real'
                                                    AND h2.type = 'Reject'
                                                 ) AS amount
                                          FROM table_4 AS g1
                                          LEFT OUTER JOIN table_5 AS g2
                                               ON g1.general_id = g2.object_id
                                          LEFT OUTER JOIN table_6 AS g3
                                               ON g1.properties_hash_id = g3.hash_id
                                          WHERE e1.object_id = g1.root_id
                                         ) AS f1) AS tickets
                            FROM table_3 AS e1
                            WHERE c1.object_id = e1.root_id
                           ) AS d1) AS blocks
              FROM table_2 AS c1
              WHERE a1.object_id = c1.root_id
             ) AS b1) AS sources
FROM table_1 AS a1
With the AND h2.type = 'Reject' line, the query takes ~5 seconds:
...
SubPlan 1
-> Aggregate (cost=22.77..22.78 rows=1 width=32) (actual time=0.296..0.296 rows=1 loops=16502)
-> Nested Loop (cost=10.72..22.76 rows=1 width=6) (actual time=0.184..0.295 rows=1 loops=16502)
-> Index Scan using t041_type_idx on table_6 h2 (cost=0.15..8.17 rows=1 width=10) (actual time=0.001..0.007 rows=20 loops=16502)
Index Cond: (type = 'Reject'::valid_optimized_segment_types)
Filter: (mode = 'Real'::valid_optimized_segment_modes)
Rows Removed by Filter: 20
-> Bitmap Heap Scan on table_5 h1 (cost=10.57..14.58 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=330040)
Recheck Cond: ((g1.object_id = root_id) AND (hash_id = h2.hash_id))
Heap Blocks: exact=12379
-> BitmapAnd (cost=10.57..10.57 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=330040)
-> Bitmap Index Scan on t031_root_id_fkey (cost=0.00..4.32 rows=3 width=0) (actual time=0.001..0.001 rows=2 loops=330040)
Index Cond: (root_id = g1.object_id)
-> Bitmap Index Scan on t031_md5_id_idx (cost=0.00..6.00 rows=228 width=0) (actual time=0.010..0.010 rows=619 loops=330040)
Index Cond: (hash_id = h2.hash_id)
Planning Time: 0.894 ms
Execution Time: 4925.776 ms
Without the AND h2.type = 'Reject' line, the query takes ~770 ms:
SubPlan 1
-> Aggregate (cost=25.77..25.78 rows=1 width=32) (actual time=0.045..0.046 rows=1 loops=16502)
-> Hash Join (cost=15.32..25.77 rows=2 width=6) (actual time=0.019..0.041 rows=1 loops=16502)
Hash Cond: (h2.hash_id = h1.hash_id)
-> Seq Scan on table_6 h2 (cost=0.00..8.56 rows=298 width=10) (actual time=0.001..0.026 rows=285 loops=16502)
Filter: (mode = 'Real'::valid_optimized_segment_modes)
Rows Removed by Filter: 80
-> Hash (cost=15.28..15.28 rows=3 width=4) (actual time=0.002..0.002 rows=2 loops=16502)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Index Scan using t031_root_id_fkey on table_5 h1 (cost=0.29..15.28 rows=3 width=4) (actual time=0.001..0.002 rows=2 loops=16502)
Index Cond: (root_id = g1.object_id)
Planning Time: 0.889 ms
Execution Time: 787.264 ms
Why is this single line causing such a large difference (h2.type is B-tree indexed)?
How else can I achieve the same result with better efficiency?

-> Index Scan using t041_type_idx on table_6 h2 (cost=0.15..8.17 rows=1 width=10) (actual time=0.001..0.007 rows=20 loops=16502)
It thinks there will be 1 row, but there are 20, meaning the next node is executed 20 times more often than the planner expected. If it knew there would be 20, it might have chosen a different plan.
You could try to create cross-column statistics so the planner can get a better estimate for that row count.
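A minimal sketch of that, assuming the bad estimate comes from correlation between the mode and type columns of table_6 (requires PostgreSQL 10 or later; the statistics name is arbitrary):

CREATE STATISTICS table_6_mode_type (dependencies)
    ON mode, type FROM table_6;
ANALYZE table_6;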
Or you could just speed up the next node by adding a multi-column index on table_5 (root_id, hash_id). It would still be executed far more times than the planner thinks it will, but each execution will be faster.
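That index, as a sketch (the index name is arbitrary):

CREATE INDEX table_5_root_id_hash_id_idx
    ON table_5 (root_id, hash_id);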
Or you might just force it into a different plan by making one of those indexes unusable:
...JOIN table_6 AS h2 ON h1.hash_id+0 = h2.hash_id
The +0 does not change the result, but the expression no longer matches the index on table_5.hash_id for that join condition, so the planner has to pick a different plan.

Related

Improving performance of query

We have a PostgreSQL query with multiple tables and left outer joins, and it is running very slowly.
It completes in 25-40 s, so we want to optimize it further and bring the run time down to 1-2 seconds.
select a.campaignid,
       b.campaign_name,
       case when b.message_type_id = 1 then 'Promotional'
            when b.message_type_id = 2 then 'Transactional'
            else 'Other'
       end as Campaign_type,
       c.username,
       aggregator_type,
       e.cli_manager_id as senderID,
       b.schedule_time as campaign_schedule_date,
       count(a.mobile) as campaign_submitted_count,
       count(case when a.status = 'DELIVRD' then mobile end) as Delivered,
       count(a.mobile) as Total_count,
       count(case when a.status = 'FAILED' then mobile end) as failure_count,
       count(case when a.status = 'DND_check_failed' then mobile end) as DND_count,
       sum(credits_used) as credits_used
from tbl_cdr_test a
left outer join tbl_campaign b on a.campaignid = b.tbl_campaign_id
left outer join tbl_users_master c on b.user_id = c.user_master_id
left outer join tbl_cli_manager e on b.user_id = e.user_id
left outer join tbl_user_channel f on b.user_id = f.user_id
left outer join tbl_user_configurations g on b.user_id = g.user_id
where date(insert_datetime) between '2020-05-23' and '2020-06-23'
  and c.username = coalesce(null, c.username)
  and g.msg_cat_id = coalesce(null, g.msg_cat_id)
  and a.campaignid = coalesce(null, a.campaignid)
  and e.cli_manager_id = coalesce(null, e.cli_manager_id)
group by a.campaignid, b.campaign_name, b.message_type_id, c.username,
         b.schedule_time, aggregator_type, e.cli_manager_id;
We have created appropriate indexes as well, but it is still taking time.
Moreover, the execution plan shows an "external merge Disk" sort method. To resolve this I set work_mem = 50MB, but it still sorts on disk instead of in memory. Please suggest.
Below is the execution plan:
GroupAggregate (cost=4872.01..4872.07 rows=1 width=543) (actual time=20564.239..27415.264 rows=8 loops=1)
Group Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
-> Sort (cost=4872.01..4872.01 rows=1 width=483) (actual time=19627.424..25020.702 rows=3206196 loops=1)
Sort Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
Sort Method: external merge Disk: 281456kB
-> Nested Loop (cost=22.03..4872.00 rows=1 width=483) (actual time=99.704..12086.244 rows=3206196 loops=1)
Join Filter: (b.user_id = g.user_id)
-> Nested Loop Left Join (cost=21.89..4871.79 rows=1 width=495) (actual time=99.688..4518.533 rows=3206196 loops=1)
-> Nested Loop (cost=21.75..4871.54 rows=1 width=77) (actual time=99.664..935.689 rows=356244 loops=1)
-> Nested Loop (cost=21.33..31.57 rows=1 width=65) (actual time=0.295..2.376 rows=588 loops=1)
Join Filter: (b.user_id = c.user_master_id)
-> Merge Join (cost=21.18..30.22 rows=6 width=46) (actual time=0.246..0.663 rows=588 loops=1)
Merge Cond: (e.user_id = b.user_id)
-> Index Scan using "idx_FK_7hc6agd_tbl_cli_ma_1592228110_32" on tbl_cli_manager e (cost=0.42..6281.84 rows=762 width=12) (actual time=0.014..0.035 rows=5 loops=1)
Filter: (cli_manager_id = COALESCE(cli_manager_id))
-> Sort (cost=20.76..21.13 rows=147 width=34) (actual time=0.225..0.333 rows=585 loops=1)
Sort Key: b.user_id
Sort Method: quicksort Memory: 36kB
-> Seq Scan on tbl_campaign b (cost=0.00..15.47 rows=147 width=34) (actual time=0.013..0.154 rows=147 loops=1)
-> Index Scan using ind_user_master_c_user on tbl_users_master c (cost=0.14..0.21 rows=1 width=19) (actual time=0.002..0.002 rows=1 loops=588)
Index Cond: (user_master_id = e.user_id)
Filter: ((username)::text = (COALESCE(username))::text)
-> Append (cost=0.42..4839.94 rows=3 width=20) (actual time=0.546..1.426 rows=606 loops=588)
-> Index Scan using testh11_campaignid_idx on testh11 a (cost=0.42..4253.99 rows=2 width=20) (actual time=0.543..0.543 rows=0 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
Rows Removed by Filter: 656
-> Index Scan using testh21_campaignid_idx on testh21 a_1 (cost=0.42..585.94 rows=1 width=20) (actual time=0.002..0.796 rows=606 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
-> Index Scan using idx_user_id_tbl_user_c_1592227657_19 on tbl_user_channel f (cost=0.14..0.24 rows=1 width=422) (actual time=0.002..0.004 rows=9 loops=356244)
Index Cond: (user_id = b.user_id)
-> Index Scan using "idx_FK_6958qvy_tbl_user_c_1592228774_151" on tbl_user_configurations g (cost=0.14..0.20 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=3206196)
Index Cond: (user_id = e.user_id)
Filter: (msg_cat_id = COALESCE(msg_cat_id))
Planning Time: 6.561 ms
Execution Time: 27477.860 ms
There is a gross underestimate of the result rows for the index scan on testh21. The consequence is that PostgreSQL chooses nested loop joins, which is where your time is spent.
Try the following:
New statistics:
ANALYZE testh21;
If that improves the estimate, make sure that autoanalyze processes the table more often.
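One way to do that, as a sketch, is to lower the analyze scale factor for just this table so autoanalyze kicks in after fewer row changes (0.01 is only an illustrative value):

ALTER TABLE testh21 SET (autovacuum_analyze_scale_factor = 0.01);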
Prevent bad estimates caused by correlation:
CREATE STATISTICS testh21_stat (dependencies)
ON campaignid, insert_datetime FROM testh21;
ANALYZE testh21;
Perhaps there is a correlation between the columns, and that improves the estimate.
More detailed statistics: try raising default_statistics_target before running ANALYZE on the table.
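For example, at the session level (1000 is an arbitrary illustrative value; the default is 100):

SET default_statistics_target = 1000;
ANALYZE testh21;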
If you cannot improve the estimates, take the hammer and set enable_nestloop = off for the duration of the query.
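To confine that setting to a single query, a sketch:

BEGIN;
SET LOCAL enable_nestloop = off;
-- run the problematic query here
COMMIT;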

Query using WHERE on a fixed value performs badly. Why is this?

I use a simple query with a WHERE on a fixed value. This query does a left join on a temporary view. For some reason this query is performing very badly. I guess that the view is being executed for every row and not only for the selected rows. When I replace the fixed value with a value from a temporary table, the query performs MUCH better (about 15-20 times faster). Why is this?
I use PostgreSQL version 9.2.15.
I added a temporary table 'wrkovz91_selecties' with only one record to pass the selection value of the WHERE clause to the query.
The view 'view_wrk_012_000001' that is joined in the query is pretty 'heavy', because it contains a nested other view ('view_wrk_013_000006').
First create the temporary table and add one record:
CREATE TEMPORARY TABLE WRKOVZ91_SELECTIES
(
NR DECIMAL(006) NOT NULL,
KLANTNR DECIMAL(008),
CONSTRAINT WRKOVZ91_SELECTIES_KEY_001 PRIMARY KEY (NR)
);
INSERT INTO WRKOVZ91_SELECTIES
(NR, KLANTNR) VALUES (1, 1);
Then create the temporary view (with a nested view):
CREATE TEMPORARY VIEW view_wrk_013_000006 AS
SELECT vrr.klantnr AS klantnr,
UPPER(vrr.artnr) AS artnr,
vrr.partijnr AS partijnr,
SUM(vrr.kollo) AS colli_op_voorraad
FROM voorraad vrr
WHERE (vrr.status = 'A'::text)
GROUP BY 1,
2,
3;
CREATE TEMPORARY VIEW view_wrk_012_000001 AS
SELECT COALESCE(wrd.klantnr,0) AS klantnr,
COALESCE(wrd.wrkopdrnr,0) AS wrkopdrnr,
MIN(COALESCE(wrd.recept_benodigd_colli,0) *
COALESCE(wrk.te_produceren,0)) AS vrije_voorraad_ind,
MIN(COALESCE(v40.colli_op_voorraad,0)) AS hulpveld
FROM wrkopdr wrk
LEFT JOIN wrkopdrd wrd ON wrd.klantnr = wrk.klantnr
AND wrd.wrkopdrnr = wrk.wrkopdrnr
LEFT JOIN view_wrk_013_000006 v40 ON v40.klantnr =
COALESCE(wrd.klantnr_grondstof,0)
AND v40.artnr = COALESCE(wrd.artnr_grondstof,'')
AND v40.partijnr = COALESCE(wrd.partijnr_grondstof,0)
LEFT JOIN artikel art ON art.klantnr = COALESCE(wrd.klantnr_grondstof,0)
AND art.artnr = COALESCE(wrd.artnr_grondstof,'')
WHERE wrk.status = 'A'::text
AND art.voorraadhoudend_jn = 'J'::text
GROUP BY 1,
2;
The query that performs badly is simple:
SELECT WRK.KLANTNR,
WRK.WRKOPDRNR,
WRK.MIL_UITVOER_DATUM
FROM WRKOPDR WRK
LEFT JOIN VIEW_WRK_012_000001 V38 ON V38.KLANTNR = WRK.KLANTNR AND
V38.WRKOPDRNR = WRK.WRKOPDRNR
LEFT JOIN WRKOVZ91_SELECTIES S02 ON S02.NR = 1
WHERE WRK.KLANTNR = 1
LIMIT 9999;
The query that performs MUCH better is (note the small difference):
SELECT WRK.KLANTNR,
WRK.WRKOPDRNR,
WRK.MIL_UITVOER_DATUM
FROM WRKOPDR WRK
LEFT JOIN VIEW_WRK_012_000001 V38 ON V38.KLANTNR = WRK.KLANTNR AND
V38.WRKOPDRNR = WRK.WRKOPDRNR
LEFT JOIN WRKOVZ91_SELECTIES S02 ON S02.NR = 1
WHERE WRK.KLANTNR = S02.KLANTNR
LIMIT 9999;
I cannot understand why the slow query performs so badly. With my test data it takes about 219 seconds; the fast query takes only 12 seconds. The only difference between the two queries is the selection value in the WHERE clause.
Does anyone have an explanation for this behaviour?
In addition, the output of the explain analyse of the slow query is:
Limit (cost=19460.18..19972.73 rows=9999 width=18) (actual time=221573.343..221585.953 rows=9999 loops=1)
-> Hash Left Join (cost=19460.18..20913.07 rows=28344 width=18) (actual time=221573.341..221583.701 rows=9999 loops=1)
Hash Cond: ((wrk.klantnr = v38.klantnr) AND (wrk.wrkopdrnr = v38.wrkopdrnr))
-> Seq Scan on wrkopdr wrk (cost=0.00..1240.30 rows=28344 width=18) (actual time=0.055..5.490 rows=9999 loops=1)
Filter: (klantnr = 1::numeric)
-> Hash (cost=19460.17..19460.17 rows=1 width=64) (actual time=221573.254..221573.254 rows=2621 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 113kB
-> Subquery Scan on v38 (cost=19460.15..19460.17 rows=1 width=64) (actual time=221570.429..221572.158 rows=2621 loops=1)
-> HashAggregate (cost=19460.15..19460.16 rows=1 width=52) (actual time=221570.429..221571.499 rows=2621 loops=1)
-> Nested Loop Left Join (cost=14049.90..19460.14 rows=1 width=52) (actual time=225.848..221495.813 rows=6801 loops=1)
Join Filter: ((vrr.klantnr = COALESCE(wrd.klantnr_grondstof, 0::numeric)) AND ((upper(vrr.artnr)) = COALESCE(wrd.artnr_grondstof, ''::text)) AND (vrr.partijnr = COALESCE(wrd.partijnr_grondstof, 0::numeric)))
Rows Removed by Join Filter: 308209258
-> Nested Loop (cost=1076.77..6011.60 rows=1 width=43) (actual time=9.506..587.824 rows=6801 loops=1)
-> Hash Right Join (cost=1076.77..5828.70 rows=69 width=43) (actual time=9.428..204.601 rows=7861 loops=1)
Hash Cond: ((wrd.klantnr = wrk.klantnr) AND (wrd.wrkopdrnr = wrk.wrkopdrnr))
Filter: (COALESCE(wrd.klantnr, 0::numeric) = 1::numeric)
Rows Removed by Filter: 1
-> Seq Scan on wrkopdrd wrd (cost=0.00..3117.73 rows=116873 width=38) (actual time=0.013..65.472 rows=116873 loops=1)
-> Hash (cost=1026.34..1026.34 rows=3362 width=16) (actual time=9.324..9.324 rows=3362 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 161kB
-> Bitmap Heap Scan on wrkopdr wrk (cost=98.31..1026.34 rows=3362 width=16) (actual time=0.850..7.843 rows=3362 loops=1)
Recheck Cond: (status = 'A'::text)
-> Bitmap Index Scan on wrkopdr_key_002 (cost=0.00..97.47 rows=3362 width=0) (actual time=0.763..0.763 rows=3362 loops=1)
Index Cond: (status = 'A'::text)
-> Index Scan using artikel_key_001 on artikel art (cost=0.00..2.64 rows=1 width=17) (actual time=0.037..0.043 rows=1 loops=7861)
Index Cond: ((klantnr = COALESCE(wrd.klantnr_grondstof, 0::numeric)) AND (artnr = COALESCE(wrd.artnr_grondstof, ''::text)))
Filter: (voorraadhoudend_jn = 'J'::text)
Rows Removed by Filter: 0
-> HashAggregate (cost=12973.14..13121.70 rows=11885 width=25) (actual time=0.027..19.661 rows=45319 loops=6801)
-> Seq Scan on voorraad vrr (cost=0.00..12340.71 rows=63243 width=25) (actual time=0.456..122.855 rows=62655 loops=1)
Filter: (status = 'A'::text)
Rows Removed by Filter: 113953
Total runtime: 221587.386 ms
(33 rows)
The output of the explain analyse of the fast query is:
Limit (cost=56294.55..57997.06 rows=142 width=18) (actual time=445.371..12739.474 rows=9999 loops=1)
-> Nested Loop Left Join (cost=56294.55..57997.06 rows=142 width=18) (actual time=445.368..12736.035 rows=9999 loops=1)
Join Filter: ((v38.klantnr = wrk.klantnr) AND (v38.wrkopdrnr = wrk.wrkopdrnr))
Rows Removed by Join Filter: 26206807
-> Nested Loop (cost=0.00..1532.01 rows=142 width=18) (actual time=0.055..18.652 rows=9999 loops=1)
Join Filter: (wrk.klantnr = s02.klantnr)
-> Index Scan using wrkovz91_selecties_key_001 on wrkovz91_selecties s02 (cost=0.00..8.27 rows=1 width=14) (actual time=0.021..0.021 rows=1 loops=1)
Index Cond: (nr = 1::numeric)
-> Seq Scan on wrkopdr wrk (cost=0.00..1169.44 rows=28344 width=18) (actual time=0.026..11.905 rows=9999 loops=1)
-> Materialize (cost=56294.55..56296.25 rows=68 width=64) (actual time=0.044..0.380 rows=2621 loops=9999)
-> Subquery Scan on v38 (cost=56294.55..56295.91 rows=68 width=64) (actual time=441.797..443.503 rows=2621 loops=1)
-> HashAggregate (cost=56294.55..56295.23 rows=68 width=52) (actual time=441.795..442.848 rows=2621 loops=1)
-> Hash Left Join (cost=14525.30..56293.70 rows=68 width=52) (actual time=255.847..433.386 rows=6801 loops=1)
Hash Cond: ((COALESCE(wrd.klantnr_grondstof, 0::numeric) = v40.klantnr) AND (COALESCE(wrd.artnr_grondstof, ''::text) = v40.artnr) AND (COALESCE(wrd.partijnr_grondstof, 0::numeric) = v40.partijnr))
-> Nested Loop (cost=1076.77..42541.70 rows=68 width=43) (actual time=10.356..171.502 rows=6801 loops=1)
-> Hash Right Join (cost=1076.77..5794.04 rows=13863 width=43) (actual time=10.286..91.471 rows=7862 loops=1)
Hash Cond: ((wrd.klantnr = wrk.klantnr) AND (wrd.wrkopdrnr = wrk.wrkopdrnr))
-> Seq Scan on wrkopdrd wrd (cost=0.00..3117.73 rows=116873 width=38) (actual time=0.014..38.276 rows=116873 loops=1)
-> Hash (cost=1026.34..1026.34 rows=3362 width=16) (actual time=10.179..10.179 rows=3362 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 161kB
-> Bitmap Heap Scan on wrkopdr wrk (cost=98.31..1026.34 rows=3362 width=16) (actual time=0.835..8.667 rows=3362 loops=1)
Recheck Cond: (status = 'A'::text)
-> Bitmap Index Scan on wrkopdr_key_002 (cost=0.00..97.47 rows=3362 width=0) (actual time=0.748..0.748 rows=3362 loops=1)
Index Cond: (status = 'A'::text)
-> Index Scan using artikel_key_001 on artikel art (cost=0.00..2.64 rows=1 width=17) (actual time=0.009..0.009 rows=1 loops=7862)
Index Cond: ((klantnr = COALESCE(wrd.klantnr_grondstof, 0::numeric)) AND (artnr = COALESCE(wrd.artnr_grondstof, ''::text)))
Filter: (voorraadhoudend_jn = 'J'::text)
Rows Removed by Filter: 0
-> Hash (cost=13240.55..13240.55 rows=11885 width=74) (actual time=245.430..245.430 rows=45319 loops=1)
Buckets: 2048 Batches: 1 Memory Usage: 2645kB
-> Subquery Scan on v40 (cost=12973.14..13240.55 rows=11885 width=74) (actual time=186.156..222.042 rows=45319 loops=1)
-> HashAggregate (cost=12973.14..13121.70 rows=11885 width=25) (actual time=186.154..209.633 rows=45319 loops=1)
-> Seq Scan on voorraad vrr (cost=0.00..12340.71 rows=63243 width=25) (actual time=0.453..126.361 rows=62655 loops=1)
Filter: (status = 'A'::text)
Rows Removed by Filter: 113953
Total runtime: 12742.125 ms
(36 rows)

Analyze: Why could a query take so long when the costs seem low?

I am getting the EXPLAIN ANALYZE results below for a simple query that returns no more than 150 records, from tables most of which have fewer than 200 records; one table stores the latest values and the other fields are foreign keys to the related data.
Update: see the new results from the same query some hours later. The site is not public and there should be no users right now, as it is in development.
explain analyze
SELECT lv.station_id,
s.name AS station_name,
s.latitude,
s.longitude,
s.elevation,
lv.element_id,
e.symbol AS element_symbol,
u.symbol,
e.name AS element_name,
lv.last_datetime AS datetime,
lv.last_value AS valor,
s.basin_id,
s.municipality_id
FROM (((element_station lv /*350 records*/
JOIN stations s ON ((lv.station_id = s.id))) /*40 records*/
JOIN elements e ON ((lv.element_id = e.id))) /*103 records*/
JOIN units u ON ((e.unit_id = u.id))) /* 32 records */
WHERE s.id = lv.station_id AND e.id = lv.element_id AND lv.interval_id = 6 and
lv.last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)
I have already tried VACUUM, which saves some time, but after a while the runtime goes up again. I have created an index on the fields.
Nested Loop (cost=0.29..2654.66 rows=1 width=92) (actual time=1219.390..35296.253 rows=157 loops=1)
Join Filter: (e.unit_id = u.id)
Rows Removed by Join Filter: 4867
-> Nested Loop (cost=0.29..2652.93 rows=1 width=92) (actual time=1219.383..35294.083 rows=157 loops=1)
Join Filter: (lv.element_id = e.id)
Rows Removed by Join Filter: 16014
-> Nested Loop (cost=0.29..2648.62 rows=1 width=61) (actual time=1219.301..35132.373 rows=157 loops=1)
-> Seq Scan on element_station lv (cost=0.00..2640.30 rows=1 width=20) (actual time=1219.248..1385.517 rows=157 loops=1)
Filter: ((interval_id = 6) AND (last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)))
Rows Removed by Filter: 168
-> Index Scan using stations_pkey on stations s (cost=0.29..8.31 rows=1 width=45) (actual time=3.471..214.941 rows=1 loops=157)
Index Cond: (id = lv.station_id)
-> Seq Scan on elements e (cost=0.00..3.03 rows=103 width=35) (actual time=0.003..0.999 rows=103 loops=157)
-> Seq Scan on units u (cost=0.00..1.32 rows=32 width=8) (actual time=0.002..0.005 rows=32 loops=157)
Planning time: 8.312 ms
Execution time: 35296.427 ms
Update: the same query run tonight, with no changes:
Sort (cost=601.74..601.88 rows=55 width=92) (actual time=1.822..1.841 rows=172 loops=1)
Sort Key: lv.last_datetime DESC
Sort Method: quicksort Memory: 52kB
-> Nested Loop (cost=11.60..600.15 rows=55 width=92) (actual time=0.287..1.680 rows=172 loops=1)
-> Hash Join (cost=11.31..248.15 rows=55 width=51) (actual time=0.263..0.616 rows=172 loops=1)
Hash Cond: (e.unit_id = u.id)
-> Hash Join (cost=9.59..245.60 rows=75 width=51) (actual time=0.225..0.528 rows=172 loops=1)
Hash Cond: (lv.element_id = e.id)
-> Bitmap Heap Scan on element_station lv (cost=5.27..240.25 rows=75 width=20) (actual time=0.150..0.359 rows=172 loops=1)
Recheck Cond: ((last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)) AND (interval_id = 6))
Heap Blocks: exact=22
-> Bitmap Index Scan on element_station_latest (cost=0.00..5.25 rows=75 width=0) (actual time=0.136..0.136 rows=226 loops=1)
Index Cond: ((last_datetime >= ((now() - '06:00:00'::interval) - '01:00:00'::interval)) AND (interval_id = 6))
-> Hash (cost=3.03..3.03 rows=103 width=35) (actual time=0.062..0.062 rows=103 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 15kB
-> Seq Scan on elements e (cost=0.00..3.03 rows=103 width=35) (actual time=0.006..0.031 rows=103 loops=1)
-> Hash (cost=1.32..1.32 rows=32 width=8) (actual time=0.019..0.019 rows=32 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
-> Seq Scan on units u (cost=0.00..1.32 rows=32 width=8) (actual time=0.003..0.005 rows=32 loops=1)
-> Index Scan using stations_pkey on stations s (cost=0.29..6.39 rows=1 width=45) (actual time=0.005..0.006 rows=1 loops=172)
Index Cond: (id = lv.station_id)
Planning time: 2.390 ms
Execution time: 2.009 ms
The problem is the misestimate of the number of rows in the sequential scan on element_station. Either autoanalyze has kicked in and calculated new statistics for the table or the data changed.
The problem is probably that PostgreSQL doesn't know the result of
((now() - '06:00:00'::interval) - '01:00:00'::interval)
at query planning time.
If that is possible for you, do it in two steps: First, calculate the expression above (either in PostgreSQL or on the client side). Then run the query with the result as a constant. That will make it easier for PostgreSQL to estimate the result count.
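A sketch of the two-step approach (the literal below is just a placeholder for whatever value step 1 returns):

-- step 1: compute the cutoff once
SELECT now() - interval '7 hours' AS cutoff;
-- step 2: re-run the query with that result as a constant, e.g.
--   ... AND lv.last_datetime >= '2018-06-20 10:00:00+00'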

Postgres Table Slow Performance

We have a Product table in a Postgres DB hosted on Heroku, with 8 GB RAM, 250 GB disk space, and 1,000 IOPS allowed.
We have proper indexes on the columns.
Platform
PostgreSQL 9.5.12 on x86_64-pc-linux-gnu (Ubuntu 9.5.12-1.pgdg14.04+1), compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit
We are running a keyword search query on this table, which has 2.8 million records. Our search query is too slow: it returns results in about 50 seconds.
Query
SELECT p.sfid AS prodsfid,
       p.image_url__c image,
       p.productcode sku,
       p.Short_Description__c shortDesc,
       p.name pname,
       p.category__c,
       p.price__c price,
       p.description,
       p.vendor_name__c vname,
       p.vendor__c supSfid
FROM staging.product2 p
JOIN (
    SELECT p1.sfid
    FROM staging.product2 p1
    WHERE p1.name ILIKE '%s%'
       OR p1.productcode ILIKE '%s%'
) AS TEMP ON (p.sfid = TEMP.sfid)
WHERE p.status__c = 'Available'
AND LOWER(p.vendor_shipping_country__c) = ANY (
    VALUES ('us'), ('usa'), ('united states'), ('united states of america')
)
AND p.vendor_catalog_tier__c = ANY (
VALUES
('a1c37000000oljnAAA'),
('a1c37000000oljQAAQ'),
('a1c37000000oljQAAQ'),
('a1c37000000pT7IAAU'),
('a1c37000000omDjAAI'),
('a1c37000000oljMAAQ'),
('a1c37000000oljaAAA'),
('a1c37000000pT7SAAU'),
('a1c0R000000AFcVQAW'),
('a1c0R000000A1HAQA0'),
('a1c0R0000000OpWQAU'),
('a1c0R0000005TZMQA2'),
('a1c37000000oljdAAA'),
('a1c37000000ooTqAAI'),
('a1c37000000omLBAAY'),
('a1c0R0000005N8GQAU')
)
Here is the explain plan:
Nested Loop (cost=31.85..33886.54 rows=3681 width=750)
-> Hash Join (cost=31.77..31433.07 rows=4415 width=750)
Hash Cond: (lower((p.vendor_shipping_country__c)::text) = "*VALUES*".column1)
-> Nested Loop (cost=31.73..31423.67 rows=8830 width=761)
-> HashAggregate (cost=0.06..0.11 rows=16 width=32)
Group Key: "*VALUES*_1".column1
-> Values Scan on "*VALUES*_1" (cost=0.00..0.06 rows=16 width=32)
-> Bitmap Heap Scan on product2 p (cost=31.66..1962.32 rows=552 width=780)
Recheck Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
Filter: ((status__c)::text = 'Available'::text)
-> Bitmap Index Scan on vendor_catalog_tier_prd_idx (cost=0.00..31.64 rows=1016 width=0)
Index Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
-> Hash (cost=0.03..0.03 rows=4 width=32)
-> Unique (cost=0.02..0.03 rows=4 width=32)
-> Sort (cost=0.02..0.02 rows=4 width=32)
Sort Key: "*VALUES*".column1
-> Values Scan on "*VALUES*" (cost=0.00..0.01 rows=4 width=32)
-> Index Scan using sfid_prd_idx on product2 p1 (cost=0.09..0.55 rows=1 width=19)
Index Cond: ((sfid)::text = (p.sfid)::text)
Filter: (((name)::text ~~* '%s%'::text) OR ((productcode)::text ~~* '%s%'::text))
It returns around 140,576 records, but we need only the top 5,000 records. Will adding a LIMIT help here?
Let me know how to make it fast and what is causing the slowness.
EXPLAIN ANALYZE
@RaymondNijland Here is the EXPLAIN ANALYZE output:
Nested Loop (cost=31.83..33427.28 rows=4039 width=750) (actual time=1.903..4384.221 rows=140576 loops=1)
-> Hash Join (cost=31.74..30971.32 rows=4369 width=750) (actual time=1.852..1094.964 rows=164353 loops=1)
Hash Cond: (lower((p.vendor_shipping_country__c)::text) = "*VALUES*".column1)
-> Nested Loop (cost=31.70..30962.02 rows=8738 width=761) (actual time=1.800..911.738 rows=164353 loops=1)
-> HashAggregate (cost=0.06..0.11 rows=16 width=32) (actual time=0.012..0.019 rows=15 loops=1)
Group Key: "*VALUES*_1".column1
-> Values Scan on "*VALUES*_1" (cost=0.00..0.06 rows=16 width=32) (actual time=0.004..0.005 rows=16 loops=1)
-> Bitmap Heap Scan on product2 p (cost=31.64..1933.48 rows=546 width=780) (actual time=26.004..57.290 rows=10957 loops=15)
Recheck Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
Filter: ((status__c)::text = 'Available'::text)
Rows Removed by Filter: 645
Heap Blocks: exact=88436
-> Bitmap Index Scan on vendor_catalog_tier_prd_idx (cost=0.00..31.61 rows=1000 width=0) (actual time=24.811..24.811 rows=11601 loops=15)
Index Cond: ((vendor_catalog_tier__c)::text = "*VALUES*_1".column1)
-> Hash (cost=0.03..0.03 rows=4 width=32) (actual time=0.032..0.032 rows=4 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Unique (cost=0.02..0.03 rows=4 width=32) (actual time=0.026..0.027 rows=4 loops=1)
-> Sort (cost=0.02..0.02 rows=4 width=32) (actual time=0.026..0.026 rows=4 loops=1)
Sort Key: "*VALUES*".column1
Sort Method: quicksort Memory: 25kB
-> Values Scan on "*VALUES*" (cost=0.00..0.01 rows=4 width=32) (actual time=0.001..0.002 rows=4 loops=1)
-> Index Scan using sfid_prd_idx on product2 p1 (cost=0.09..0.56 rows=1 width=19) (actual time=0.019..0.020 rows=1 loops=164353)
Index Cond: ((sfid)::text = (p.sfid)::text)
Filter: (((name)::text ~~* '%s%'::text) OR ((productcode)::text ~~* '%s%'::text))
Rows Removed by Filter: 0
Planning time: 2.488 ms
Execution time: 4391.378 ms
Another version of the query, with an ORDER BY, but it seems very slow as well (~140 seconds):
SELECT p.sfid AS prodsfid,
       p.image_url__c image,
       p.productcode sku,
       p.Short_Description__c shortDesc,
       p.name pname,
       p.category__c,
       p.price__c price,
       p.description,
       p.vendor_name__c vname,
       p.vendor__c supSfid
FROM staging.product2 p
WHERE p.status__c = 'Available'
AND p.vendor_shipping_country__c IN (
    'us',
    'usa',
    'united states',
    'united states of america'
)
AND p.vendor_catalog_tier__c IN (
'a1c37000000omDQAAY',
'a1c37000000omDTAAY',
'a1c37000000omDXAAY',
'a1c37000000omDYAAY',
'a1c37000000omDZAAY',
'a1c37000000omDdAAI',
'a1c37000000omDfAAI',
'a1c37000000omDiAAI',
'a1c37000000oml6AAA',
'a1c37000000oljPAAQ',
'a1c37000000oljRAAQ',
'a1c37000000oljWAAQ',
'a1c37000000oljXAAQ',
'a1c37000000oljZAAQ',
'a1c37000000oljcAAA',
'a1c37000000oljdAAA',
'a1c37000000oljlAAA',
'a1c37000000oljoAAA',
'a1c37000000oljqAAA',
'a1c37000000olnvAAA',
'a1c37000000olnwAAA',
'a1c37000000olnxAAA',
'a1c37000000olnyAAA',
'a1c37000000olo0AAA',
'a1c37000000olo1AAA',
'a1c37000000olo4AAA',
'a1c37000000olo8AAA',
'a1c37000000olo9AAA',
'a1c37000000oloCAAQ',
'a1c37000000oloFAAQ',
'a1c37000000oloIAAQ',
'a1c37000000oloJAAQ',
'a1c37000000oloMAAQ',
'a1c37000000oloNAAQ',
'a1c37000000oloSAAQ',
'a1c37000000olodAAA',
'a1c37000000oloeAAA',
'a1c37000000olzCAAQ',
'a1c37000000om0xAAA',
'a1c37000000ooV1AAI',
'a1c37000000oog8AAA',
'a1c37000000oogDAAQ',
'a1c37000000oonzAAA',
'a1c37000000oluuAAA',
'a1c37000000pT7SAAU',
'a1c37000000oljnAAA',
'a1c37000000olumAAA',
'a1c37000000oljpAAA',
'a1c37000000pUm2AAE',
'a1c37000000olo3AAA',
'a1c37000000oo1MAAQ',
'a1c37000000oo1vAAA',
'a1c37000000pWxgAAE',
'a1c37000000pYJkAAM',
'a1c37000000omDjAAI',
'a1c37000000ooTgAAI',
'a1c37000000op2GAAQ',
'a1c37000000one0AAA',
'a1c37000000oljYAAQ',
'a1c37000000pUlxAAE',
'a1c37000000oo9SAAQ',
'a1c37000000pcIYAAY',
'a1c37000000pamtAAA',
'a1c37000000pd2QAAQ',
'a1c37000000pdCOAAY',
'a1c37000000OpPaAAK',
'a1c37000000OphZAAS',
'a1c37000000olNkAAI'
)
ORDER BY p.productcode asc
LIMIT 5000
Here is the EXPLAIN ANALYZE for this:
Limit (cost=0.09..45271.54 rows=5000 width=750) (actual time=48593.355..86376.864 rows=5000 loops=1)
-> Index Scan using productcode_prd_idx on product2 p (cost=0.09..743031.39 rows=82064 width=750) (actual time=48593.353..86376.283 rows=5000 loops=1)
Filter: (((status__c)::text = 'Available'::text) AND ((vendor_shipping_country__c)::text = ANY ('{us,usa,"united states","united states of america"}'::text[])) AND ((vendor_catalog_tier__c)::text = ANY ('{a1c37000000omDQAAY,a1c37000000omDTAAY,a1c37000000omDXAAY,a1c37000000omDYAAY,a1c37000000omDZAAY,a1c37000000omDdAAI,a1c37000000omDfAAI,a1c37000000omDiAAI,a1c37000000oml6AAA,a1c37000000oljPAAQ,a1c37000000oljRAAQ,a1c37000000oljWAAQ,a1c37000000oljXAAQ,a1c37000000oljZAAQ,a1c37000000oljcAAA,a1c37000000oljdAAA,a1c37000000oljlAAA,a1c37000000oljoAAA,a1c37000000oljqAAA,a1c37000000olnvAAA,a1c37000000olnwAAA,a1c37000000olnxAAA,a1c37000000olnyAAA,a1c37000000olo0AAA,a1c37000000olo1AAA,a1c37000000olo4AAA,a1c37000000olo8AAA,a1c37000000olo9AAA,a1c37000000oloCAAQ,a1c37000000oloFAAQ,a1c37000000oloIAAQ,a1c37000000oloJAAQ,a1c37000000oloMAAQ,a1c37000000oloNAAQ,a1c37000000oloSAAQ,a1c37000000olodAAA,a1c37000000oloeAAA,a1c37000000olzCAAQ,a1c37000000om0xAAA,a1c37000000ooV1AAI,a1c37000000oog8AAA,a1c37000000oogDAAQ,a1c37000000oonzAAA,a1c37000000oluuAAA,a1c37000000pT7SAAU,a1c37000000oljnAAA,a1c37000000olumAAA,a1c37000000oljpAAA,a1c37000000pUm2AAE,a1c37000000olo3AAA,a1c37000000oo1MAAQ,a1c37000000oo1vAAA,a1c37000000pWxgAAE,a1c37000000pYJkAAM,a1c37000000omDjAAI,a1c37000000ooTgAAI,a1c37000000op2GAAQ,a1c37000000one0AAA,a1c37000000oljYAAQ,a1c37000000pUlxAAE,a1c37000000oo9SAAQ,a1c37000000pcIYAAY,a1c37000000pamtAAA,a1c37000000pd2QAAQ,a1c37000000pdCOAAY,a1c37000000OpPaAAK,a1c37000000OphZAAS,a1c37000000olNkAAI}'::text[])))
Rows Removed by Filter: 1707920
Planning time: 1.685 ms
Execution time: 86377.139 ms
Thanks
Aslam Bari
You might want to consider a GIN or GiST index on your staging.product2 table. Double-sided ILIKEs are slow and difficult to improve substantially. I've seen a GIN index improve a similar query by 60-80%.
See this doc.
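A sketch using the pg_trgm extension, whose trigram operator classes can back '%...%' ILIKE searches (the index names are arbitrary):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX product2_name_trgm_idx
    ON staging.product2 USING gin (name gin_trgm_ops);
CREATE INDEX product2_productcode_trgm_idx
    ON staging.product2 USING gin (productcode gin_trgm_ops);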

query with IN expression running slower than query with OR

I have two queries: one uses an OR expression and runs very fast; the other is similar but uses an IN expression instead of OR and runs very slowly. I would appreciate it if you could let me know how to make the query using IN as fast as the one using OR. The table has 15 million records.
SELECT e.id
FROM events e,
resources r
WHERE e.resource_id = r.id
AND resource_type_id IN (19872817,
282)
ORDER BY occurrence_date DESC LIMIT 100
Limit (cost=0.85..228363.80 rows=100 width=12) (actual time=238.668..57470.017 rows=19 loops=1)
-> Nested Loop (cost=0.85..26211499.28 rows=11478 width=12) (actual time=238.667..57470.010 rows=19 loops=1)
Join Filter: (e.resource_id = r.id)
Rows Removed by Join Filter: 507548495
-> Index Scan using eventoccurrencedateindex on events e (cost=0.43..603333.83 rows=15380258 width=16) (actual time=0.023..2798.538 rows=15380258 loops=1)
-> Materialize (cost=0.42..36.16 rows=111 width=4) (actual time=0.000..0.001 rows=33 loops=15380258)
-> Index Scan using resources_type_fk_index on resources r (cost=0.42..35.60 rows=111 width=4) (actual time=0.014..0.107 rows=33 loops=1)
Index Cond: (resource_type_id = ANY ('{19872817,282}'::integer[]))
Total runtime: 57470.057 ms
SELECT e.id
FROM events e,
resources r
WHERE e.resource_id = r.id
AND (resource_type_id = '19872817' OR resource_type_id = '282')
ORDER BY occurrence_date DESC LIMIT 100
Limit (cost=10.17..14.22 rows=100 width=12) (actual time=0.060..0.181 rows=100 loops=1)
-> Nested Loop (cost=10.17..34747856.23 rows=858030913 width=12) (actual time=0.059..0.167 rows=100 loops=1)
Join Filter: (((e.resource_id = r.id) AND (r.resource_type_id = 19872817)) OR (r.resource_type_id = 282))
-> Index Scan using eventoccurrencedateindex on events e (cost=0.43..603333.83 rows=15380258 width=16) (actual time=0.018..0.019 rows=4 loops=1)
-> Materialize (cost=9.74..349.92 rows=111 width=8) (actual time=0.009..0.023 rows=25 loops=4)
-> Bitmap Heap Scan on resources r (cost=9.74..349.36 rows=111 width=8) (actual time=0.034..0.081 rows=33 loops=1)
Recheck Cond: ((resource_type_id = 19872817) OR (resource_type_id = 282))
-> BitmapOr (cost=9.74..9.74 rows=111 width=0) (actual time=0.023..0.023 rows=0 loops=1)
-> Bitmap Index Scan on resources_type_fk_index (cost=0.00..4.84 rows=56 width=0) (actual time=0.009..0.009 rows=0 loops=1)
Index Cond: (resource_type_id = 19872817)
-> Bitmap Index Scan on resources_type_fk_index (cost=0.00..4.84 rows=56 width=0) (actual time=0.014..0.014 rows=33 loops=1)
Index Cond: (resource_type_id = 282)
Total runtime: 0.242 ms
This is strange in the OR version:
Join Filter: (
((e.resource_id = r.id) AND (r.resource_type_id = 19872817))
OR
(r.resource_type_id = 282)
)
It applies e.resource_id = r.id AND r.resource_type_id = 19872817 first and then ORs r.resource_type_id = 282, which is wrong. Are you sure you issued the correct condition in that query? Notice that there must be parentheses wrapping the OR:
e.resource_id = r.id
AND
(r.resource_type_id = 19872817 OR r.resource_type_id = 282)
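Remember that AND binds more tightly than OR, so without the parentheses

e.resource_id = r.id AND resource_type_id = '19872817' OR resource_type_id = '282'

is parsed as

(e.resource_id = r.id AND resource_type_id = '19872817') OR resource_type_id = '282'

which is exactly the Join Filter shown in the plan, and it can return different rows than the IN version.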