PostgreSQL shows slow query but low execution time in EXPLAIN ANALYZE output

PostgreSQL version: 13.6
Server size: 2cx4g (2 CPUs × 4 GB RAM)
The PostgreSQL log shows a duration of 6215.926 ms, but EXPLAIN shows that planning time and execution time together come to less than 200 ms.
Could slow read speed on the client side be the cause of the problem?
2022-04-18 16:28:20.491 CST [1159] LOG: duration: 6215.926 ms statement: EXPLAIN (ANALYZE ON, BUFFERS ON) select 1 from information_schema.tables where table_schema = 'pg_catalog' and table_name = 'pg_file_settings'
Nested Loop (cost=0.68..17.88 rows=1 width=4) (actual time=0.126..0.128 rows=1 loops=1)
Join Filter: (c.relnamespace = nc.oid)
Buffers: shared hit=8 read=2
I/O Timings: read=0.027
-> Nested Loop Left Join (cost=0.68..16.81 rows=1 width=4) (actual time=0.085..0.087 rows=1 loops=1)
Buffers: shared hit=3 read=2
I/O Timings: read=0.027
-> Index Scan using pg_class_relname_nsp_index on pg_class c (cost=0.27..8.33 rows=1 width=8) (actual time=0.065..0.066 rows=1 loops=1)
Index Cond: (relname = 'pg_file_settings'::name)
Filter: ((relkind = ANY ('{r,v,f,p}'::"char"[])) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))
Buffers: shared hit=1 read=2
I/O Timings: read=0.027
-> Nested Loop (cost=0.40..8.47 rows=1 width=4) (actual time=0.017..0.018 rows=0 loops=1)
Buffers: shared hit=2
-> Index Scan using pg_type_oid_index on pg_type t (cost=0.27..8.29 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=1)
Index Cond: (oid = c.reloftype)
Buffers: shared hit=2
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.13..0.17 rows=1 width=4) (never executed)
Index Cond: (oid = t.typnamespace)
Heap Fetches: 0
-> Seq Scan on pg_namespace nc (cost=0.00..1.06 rows=1 width=4) (actual time=0.038..0.038 rows=1 loops=1)
Filter: ((NOT pg_is_other_temp_schema(oid)) AND (nspname = 'pg_catalog'::name))
Rows Removed by Filter: 1
Buffers: shared hit=5
Planning:
Buffers: shared hit=168 read=52 written=5
I/O Timings: read=103.513 write=43.230
Planning Time: 184.492 ms
Execution Time: 0.438 ms
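Planning time (184.492 ms) plus execution time (0.438 ms) account for well under the 6215.926 ms the log reports, so most of the logged duration is spent outside the plan itself. One way to test the client-side theory is to time the same statement from psql, which measures the full round trip, and compare that with the plan's totals. A minimal sketch (\timing is a standard psql meta-command):
\timing on
EXPLAIN (ANALYZE ON, BUFFERS ON)
select 1 from information_schema.tables
where table_schema = 'pg_catalog' and table_name = 'pg_file_settings';
-- psql prints "Time: ..." including network and client-side time;
-- compare it with the Planning Time + Execution Time inside the plan.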
Edit:
After running ANALYZE on the database, the results are the same:
2022-04-18 17:53:04.563 CST [16398] LOG: duration: 5987.077 ms statement: EXPLAIN (ANALYZE ON, BUFFERS ON) select 1 from information_schema.tables where table_schema = 'pg_catalog' and table_name = 'pg_file_settings'
Nested Loop (cost=0.68..17.88 rows=1 width=4) (actual time=0.125..0.127 rows=1 loops=1)
Join Filter: (c.relnamespace = nc.oid)
Buffers: shared hit=8 read=2
I/O Timings: read=0.025
-> Nested Loop Left Join (cost=0.68..16.81 rows=1 width=4) (actual time=0.088..0.090 rows=1 loops=1)
Buffers: shared hit=3 read=2
I/O Timings: read=0.025
-> Index Scan using pg_class_relname_nsp_index on pg_class c (cost=0.27..8.33 rows=1 width=8) (actual time=0.069..0.070 rows=1 loops=1)
Index Cond: (relname = 'pg_file_settings'::name)
Filter: ((relkind = ANY ('{r,v,f,p}'::"char"[])) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))
Buffers: shared hit=1 read=2
I/O Timings: read=0.025
-> Nested Loop (cost=0.40..8.47 rows=1 width=4) (actual time=0.016..0.016 rows=0 loops=1)
Buffers: shared hit=2
-> Index Scan using pg_type_oid_index on pg_type t (cost=0.27..8.29 rows=1 width=8) (actual time=0.014..0.014 rows=0 loops=1)
Index Cond: (oid = c.reloftype)
Buffers: shared hit=2
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.13..0.17 rows=1 width=4) (never executed)
Index Cond: (oid = t.typnamespace)
Heap Fetches: 0
-> Seq Scan on pg_namespace nc (cost=0.00..1.06 rows=1 width=4) (actual time=0.035..0.035 rows=1 loops=1)
Filter: ((NOT pg_is_other_temp_schema(oid)) AND (nspname = 'pg_catalog'::name))
Rows Removed by Filter: 1
Buffers: shared hit=5
Planning:
Buffers: shared hit=170 read=59 written=9
I/O Timings: read=105.688 write=79.773
Planning Time: 262.826 ms
Execution Time: 0.366 ms
Edit:
EXPLAIN ANALYZE output after increasing shared_buffers to 25% of memory:
Nested Loop (cost=0.68..17.88 rows=1 width=4) (actual time=0.096..0.098 rows=1 loops=1)
Join Filter: (c.relnamespace = nc.oid)
Buffers: shared hit=10
-> Nested Loop Left Join (cost=0.68..16.81 rows=1 width=4) (actual time=0.057..0.059 rows=1 loops=1)
Buffers: shared hit=5
-> Index Scan using pg_class_relname_nsp_index on pg_class c (cost=0.27..8.33 rows=1 width=8) (actual time=0.040..0.042 rows=1 loops=1)
Index Cond: (relname = 'pg_file_settings'::name)
Filter: ((relkind = ANY ('{r,v,f,p}'::"char"[])) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))
Buffers: shared hit=3
-> Nested Loop (cost=0.40..8.47 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)
Buffers: shared hit=2
-> Index Scan using pg_type_oid_index on pg_type t (cost=0.27..8.29 rows=1 width=8) (actual time=0.012..0.012 rows=0 loops=1)
Index Cond: (oid = c.reloftype)
Buffers: shared hit=2
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.13..0.17 rows=1 width=4) (never executed)
Index Cond: (oid = t.typnamespace)
Heap Fetches: 0
-> Seq Scan on pg_namespace nc (cost=0.00..1.06 rows=1 width=4) (actual time=0.036..0.036 rows=1 loops=1)
Filter: ((NOT pg_is_other_temp_schema(oid)) AND (nspname = 'pg_catalog'::name))
Rows Removed by Filter: 1
Buffers: shared hit=5
Planning:
Buffers: shared hit=229
Planning Time: 7.866 ms
Execution Time: 0.306 ms
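For reference, a sketch of the configuration change described in this edit; the exact value is an assumption based on a 4 GB server, and shared_buffers only takes effect after a restart:
ALTER SYSTEM SET shared_buffers = '1GB';  -- roughly 25% of 4 GB RAM
-- restart PostgreSQL, then verify:
SHOW shared_buffers;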

Related

Bitmap heap scan slow with same condition as index scan

I have a query with joins to rather large tables, but I do not understand why it performs so slowly.
Especially this part of the query plan seems weird to me (complete plan and query below):
-> Bitmap Heap Scan on order_line (cost=65.45..11521.37 rows=3228 width=20) (actual time=22.555..7764.120 rows=6250 loops=12)
Recheck Cond: (product_id = catalogue_product.id)
Heap Blocks: exact=71735
Buffers: shared hit=55299 read=16686
-> Bitmap Index Scan on order_line_product_id_e620902d (cost=0.00..64.65 rows=3228 width=0) (actual time=21.532..21.532 rows=6269 loops=12)
Index Cond: (product_id = catalogue_product.id)
Buffers: shared hit=143 read=107
Why does it need to recheck product_id = catalogue_product.id, which is the same condition as in the index, and why does that take so much time?
As far as I understand, a recheck is needed if (a) only part of the condition can be covered by the index, or (b) the bitmap is too big and must be compressed, but in that case there should be a lossy=x entry, right?
Complete query:
SELECT ("order_order"."date_placed" AT TIME ZONE 'UTC')::date, "partner_partner"."odoo_id", "catalogue_product"."odoo_id", SUM("order_line"."quantity") AS "orders"
FROM "order_line"
INNER JOIN "order_order" ON ("order_line"."order_id" = "order_order"."id")
INNER JOIN "catalogue_product" ON ("order_line"."product_id" = "catalogue_product"."id")
INNER JOIN "partner_stockrecord" ON ("order_line"."stockrecord_id" = "partner_stockrecord"."id")
INNER JOIN "partner_partner" ON ("partner_stockrecord"."partner_id" = "partner_partner"."id")
WHERE (("order_order"."date_placed" AT TIME ZONE 'UTC')::date IN ('2022-11-22'::DATE)
AND "catalogue_product"."odoo_id" IN (6241, 6499, 6500, 49195, 44753, 44754, 53427, 6452, 44755, 44787, 6427, 6428)
AND "partner_partner"."odoo_id" IS NOT NULL AND NOT ("order_order"."status" IN ('Pending', 'PaymentDeclined', 'Canceled')))
GROUP BY ("order_order"."date_placed" AT TIME ZONE 'UTC')::date, "partner_partner"."odoo_id", "catalogue_product"."odoo_id", "order_line"."id"
ORDER BY "order_line"."id" ASC
Complete plan:
GroupAggregate (cost=141002.93..141003.41 rows=16 width=24) (actual time=93629.346..93629.369 rows=52 loops=1)
Group Key: order_line.id, ((timezone('UTC'::text, order_order.date_placed))::date), partner_partner.odoo_id, catalogue_product.odoo_id
Buffers: shared hit=56537 read=16693
-> Sort (cost=141002.93..141002.97 rows=16 width=20) (actual time=93629.331..93629.335 rows=52 loops=1)
Sort Key: order_line.id, partner_partner.odoo_id, catalogue_product.odoo_id
Sort Method: quicksort Memory: 29kB
Buffers: shared hit=56537 read=16693
-> Hash Join (cost=2319.22..141002.61 rows=16 width=20) (actual time=859.917..93629.204 rows=52 loops=1)
Hash Cond: (partner_stockrecord.partner_id = partner_partner.id)
Buffers: shared hit=56537 read=16693
-> Nested Loop (cost=2318.11..141001.34 rows=16 width=24) (actual time=859.853..93628.903 rows=52 loops=1)
Buffers: shared hit=56536 read=16693
-> Hash Join (cost=2317.69..140994.41 rows=16 width=24) (actual time=859.824..93627.791 rows=52 loops=1)
Hash Cond: (order_line.order_id = order_order.id)
Buffers: shared hit=56328 read=16693
-> Nested Loop (cost=108.94..138731.32 rows=20700 width=20) (actual time=1.566..93206.434 rows=74999 loops=1)
Buffers: shared hit=55334 read=16686
-> Bitmap Heap Scan on catalogue_product (cost=43.48..87.52 rows=12 width=8) (actual time=0.080..0.183 rows=12 loops=1)
Recheck Cond: (odoo_id = ANY ('{6241,6499,6500,49195,44753,44754,53427,6452,44755,44787,6427,6428}'::integer[]))
Heap Blocks: exact=11
Buffers: shared hit=35
-> Bitmap Index Scan on catalogue_product_odoo_id_c5e41bad (cost=0.00..43.48 rows=12 width=0) (actual time=0.072..0.072 rows=12 loops=1)
Index Cond: (odoo_id = ANY ('{6241,6499,6500,49195,44753,44754,53427,6452,44755,44787,6427,6428}'::integer[]))
Buffers: shared hit=24
-> Bitmap Heap Scan on order_line (cost=65.45..11521.37 rows=3228 width=20) (actual time=22.555..7764.120 rows=6250 loops=12)
Recheck Cond: (product_id = catalogue_product.id)
Heap Blocks: exact=71735
Buffers: shared hit=55299 read=16686
-> Bitmap Index Scan on order_line_product_id_e620902d (cost=0.00..64.65 rows=3228 width=0) (actual time=21.532..21.532 rows=6269 loops=12)
Index Cond: (product_id = catalogue_product.id)
Buffers: shared hit=143 read=107
-> Hash (cost=2194.42..2194.42 rows=1147 width=12) (actual time=365.766..365.766 rows=1313 loops=1)
Buckets: 2048 Batches: 1 Memory Usage: 73kB
Buffers: shared hit=994 read=7
-> Index Scan using order_date_placed_utc_date_idx on order_order (cost=0.43..2194.42 rows=1147 width=12) (actual time=0.050..365.158 rows=1313 loops=1)
Index Cond: ((timezone('UTC'::text, date_placed))::date = '2022-11-22'::date)
Filter: ((status)::text <> ALL ('{Pending,PaymentDeclined,Canceled}'::text[]))
Rows Removed by Filter: 253
Buffers: shared hit=994 read=7
-> Index Scan using partner_stockrecord_pkey on partner_stockrecord (cost=0.41..0.43 rows=1 width=8) (actual time=0.017..0.017 rows=1 loops=52)
Index Cond: (id = order_line.stockrecord_id)
Buffers: shared hit=208
-> Hash (cost=1.05..1.05 rows=5 width=8) (actual time=0.028..0.028 rows=5 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
Buffers: shared hit=1
-> Seq Scan on partner_partner (cost=0.00..1.05 rows=5 width=8) (actual time=0.013..0.015 rows=5 loops=1)
Filter: (odoo_id IS NOT NULL)
Buffers: shared hit=1
Planning time: 3.275 ms
Execution time: 93629.781 ms
It doesn't have to do any rechecks. That line in the plan comes from the planner, not from the run-time part (you can tell because the line still appears if you run EXPLAIN without ANALYZE). At planning time it doesn't know whether any of the bitmap will overflow work_mem, so it has to be prepared to do the recheck, even if that turns out not to be necessary at run time. The slowness almost certainly comes from the time spent reading 16686 random pages, which could be made clear by turning on track_io_timing.
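A sketch of that suggestion; track_io_timing is a standard setting (enabling it may require superuser), and with it on, EXPLAIN (ANALYZE, BUFFERS) adds "I/O Timings" lines to the plan:
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- re-run the complete query from above
-- plan nodes now report e.g. "I/O Timings: read=...", making the time spent
-- waiting on those 16686 page reads visible.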

Differing PSQL Planners with same Indexes

I have been trying to speed up my PostgreSQL queries to squeeze out as much speed as possible. With a few indexes installed on my local system I got good speeds. I installed the same indexes on the remote system but got different results. The query plans follow:
Local Planner:
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=19.54..67.37 rows=12 width=133) (actual time=0.771..0.862 rows=12 loops=1)
Hash Cond: ((sensor_lookup.sensorid)::text = (sensor.sensorid)::text)
Buffers: shared hit=25
-> Nested Loop (cost=3.01..50.81 rows=12 width=119) (actual time=0.193..0.271 rows=12 loops=1)
Buffers: shared hit=19
-> Nested Loop (cost=2.60..26.10 rows=1 width=320) (actual time=0.163..0.228 rows=1 loops=1)
Buffers: shared hit=15
-> Nested Loop (cost=2.60..25.02 rows=1 width=98) (actual time=0.156..0.217 rows=1 loops=1)
Buffers: shared hit=14
-> Nested Loop (cost=0.27..13.80 rows=1 width=68) (actual time=0.097..0.151 rows=1 loops=1)
Buffers: shared hit=7
-> Index Scan using meta_pkey on meta (cost=0.27..4.29 rows=1 width=45) (actual time=0.029..0.031 rows=1 loops=1)
Index Cond: (stationid = 'WYTOR02'::bpchar)
Buffers: shared hit=3
-> Seq Scan on meta_lookup (cost=0.00..9.50 rows=1 width=31) (actual time=0.064..0.116 rows=1 loops=1)
Filter: ((stationid)::bpchar = 'WYTOR02'::bpchar)
Rows Removed by Filter: 439
Buffers: shared hit=4
-> Bitmap Heap Scan on datetime_lookup (cost=2.33..11.21 rows=1 width=38) (actual time=0.054..0.060 rows=1 loops=1)
Recheck Cond: (stationid = 'WYTOR02'::bpchar)
Filter: ((productid)::text = 'qc60'::text)
Rows Removed by Filter: 5
Heap Blocks: exact=5
Buffers: shared hit=7
-> Bitmap Index Scan on idx_16 (cost=0.00..2.32 rows=6 width=0) (actual time=0.033..0.033 rows=6 loops=1)
Index Cond: (stationid = 'WYTOR02'::bpchar)
Buffers: shared hit=2
-> Seq Scan on product (cost=0.00..1.07 rows=1 width=222) (actual time=0.006..0.008 rows=1 loops=1)
Filter: ((productid)::text = 'qc60'::text)
Rows Removed by Filter: 5
Buffers: shared hit=1
-> Index Scan using idx_15 on sensor_lookup (cost=0.41..24.59 rows=12 width=30) (actual time=0.027..0.034 rows=12 loops=1)
Index Cond: ((stationid = 'WYTOR02'::bpchar) AND ((productid)::text = 'qc60'::text))
Buffers: shared hit=4
-> Hash (cost=10.68..10.68 rows=468 width=27) (actual time=0.547..0.548 rows=468 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 34kB
Buffers: shared hit=6
-> Seq Scan on sensor (cost=0.00..10.68 rows=468 width=27) (actual time=0.013..0.208 rows=468 loops=1)
Buffers: shared hit=6
Planning time: 1.655 ms
Execution time: 1.106 ms
Remote Planner:
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=26.67..102.51 rows=12 width=133) (actual time=0.644..0.719 rows=12 loops=1)
Hash Cond: ((sensor_lookup.sensorid)::text = (sensor.sensorid)::text)
Buffers: shared hit=29
-> Nested Loop (cost=9.14..84.82 rows=12 width=119) (actual time=0.161..0.227 rows=12 loops=1)
Buffers: shared hit=19
-> Nested Loop (cost=4.60..38.12 rows=1 width=108) (actual time=0.128..0.187 rows=1 loops=1)
Buffers: shared hit=15
-> Nested Loop (cost=4.60..37.03 rows=1 width=98) (actual time=0.116..0.173 rows=1 loops=1)
Buffers: shared hit=14
-> Nested Loop (cost=0.27..17.80 rows=1 width=68) (actual time=0.081..0.132 rows=1 loops=1)
Buffers: shared hit=7
-> Index Scan using meta_pkey on meta (cost=0.27..8.29 rows=1 width=45) (actual time=0.011..0.012 rows=1 loops=1)
Index Cond: (stationid = 'WYTOR02'::bpchar)
Buffers: shared hit=3
-> Seq Scan on meta_lookup (cost=0.00..9.50 rows=1 width=31) (actual time=0.067..0.117 rows=1 loops=1)
Filter: ((stationid)::bpchar = 'WYTOR02'::bpchar)
Rows Removed by Filter: 439
Buffers: shared hit=4
-> Bitmap Heap Scan on datetime_lookup (cost=4.33..19.22 rows=1 width=38) (actual time=0.031..0.036 rows=1 loops=1)
Recheck Cond: (stationid = 'WYTOR02'::bpchar)
Filter: ((productid)::text = 'qc60'::text)
Rows Removed by Filter: 5
Heap Blocks: exact=5
Buffers: shared hit=7
-> Bitmap Index Scan on idx_16 (cost=0.00..4.33 rows=6 width=0) (actual time=0.019..0.019 rows=6 loops=1)
Index Cond: (stationid = 'WYTOR02'::bpchar)
Buffers: shared hit=2
-> Seq Scan on product (cost=0.00..1.07 rows=1 width=10) (actual time=0.010..0.012 rows=1 loops=1)
Filter: ((productid)::text = 'qc60'::text)
Rows Removed by Filter: 5
Buffers: shared hit=1
-> Bitmap Heap Scan on sensor_lookup (cost=4.54..46.58 rows=12 width=30) (actual time=0.030..0.032 rows=12 loops=1)
Recheck Cond: ((stationid = 'WYTOR02'::bpchar) AND ((productid)::text = 'qc60'::text))
Heap Blocks: exact=1
Buffers: shared hit=4
-> Bitmap Index Scan on idx_15 (cost=0.00..4.54 rows=12 width=0) (actual time=0.021..0.021 rows=12 loops=1)
Index Cond: ((stationid = 'WYTOR02'::bpchar) AND ((productid)::text = 'qc60'::text))
Buffers: shared hit=3
-> Hash (cost=11.68..11.68 rows=468 width=27) (actual time=0.440..0.440 rows=468 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 34kB
Buffers: shared hit=7
-> Seq Scan on sensor (cost=0.00..11.68 rows=468 width=27) (actual time=0.004..0.174 rows=468 loops=1)
Buffers: shared hit=7
Planning time: 2.572 ms
Execution time: 0.947 ms
Even though the difference is only 1 ms, these calls are made thousands of times, so the difference adds up. The difference seems to be that the remote server is doing a Bitmap Heap Scan instead of an Index Scan. I'm not sure these differences account for the difference in planning time, but it is a difference between supposedly matching systems. The settings in postgresql.conf are the same, so what can I look at to see why the plans differ?
Both the local and remote servers have the same PostgreSQL and Ubuntu versions:
Ubuntu 18.04.1
psql (PostgreSQL) 10.15 (Ubuntu 10.15-0ubuntu0.18.04.1)
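Since postgresql.conf matches, one thing worth comparing is the physical statistics the planner reads from pg_class, which feed directly into the cost estimates that differ between the two plans (e.g. 2.32 vs 4.33 for the same bitmap index scan on idx_16). A sketch, with relation names taken from the plans above:
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('sensor_lookup', 'idx_15', 'datetime_lookup', 'idx_16');
-- differing relpages/reltuples (e.g. from bloat, or a fresh ANALYZE on only
-- one side) produce different cost estimates and thus different plans.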

select max(id) from joined table inefficient query plan

I have a view req_res that inner-joins two tables, Request and Response, on requestId. Request has a primary key, the ID column.
When I query (Query 1):
select max(ID) from req_res where ID > 1000000 and ID < 2000000;
Explain plan: hash join, with an index scan of Request.ID and a sequential scan of Response.request_id
query duration: 30s
When I narrow the range to 900k (Query 2):
select max(ID) from req_res where ID > 1000000 and ID < 1900000;
Plan: nested loop, with an index scan of Request.ID and an index-only scan of Response.request_id
query duration: 3s
When I play with the first query and disable hash joins (set enable_hashjoin=off;), I get a merge join plan. When I also disable merge joins (set enable_mergejoin=off;), I get a nested loop, which completes in 3 seconds instead of 30 (see the sketch below).
The Request table holds ~70 million records. Most requests have a response counterpart, but some don't.
Version: PostgreSQL 10.10
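A sketch of that session-level experiment; enable_hashjoin and enable_mergejoin are standard planner settings, useful for diagnosis but not recommended as a permanent fix:
SET enable_hashjoin = off;   -- planner falls back to a merge join
SET enable_mergejoin = off;  -- now falls back to a nested loop
EXPLAIN (ANALYZE, BUFFERS)
select max(ID) from req_res where ID > 1000000 and ID < 2000000;
RESET enable_hashjoin;
RESET enable_mergejoin;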
req_res DDL:
CREATE OR REPLACE VIEW public.req_res
AS SELECT req.id,
res.req_id,
res.body::character varying(500),
res.time,
res.duration,
res.balance,
res.header::character varying(100),
res.created_at
FROM response res
JOIN request req ON req.req_id = res.req_id;
Query 1 Plan:
Aggregate (cost=2834115.70..2834115.71 rows=1 width=8) (actual time=154709.729..154709.730 rows=1 loops=1)
Buffers: shared hit=467727 read=685320 dirtied=214, temp read=240773 written=239751
-> Hash Join (cost=2493060.64..2831172.33 rows=1177346 width=8) (actual time=143800.101..154147.080 rows=1198706 loops=1)
Hash Cond: (req.req_id = res.req_id)
Buffers: shared hit=467727 read=685320 dirtied=214, temp read=240773 written=239751
-> Append (cost=0.44..55619.59 rows=1177346 width=16) (actual time=0.957..2354.648 rows=1200001 loops=1)
Buffers: shared hit=438960 read=32014
-> Index Scan using "5_5_req_pkey" on _hyper_2_5_chunk rs (cost=0.44..19000.10 rows=399803 width=16) (actual time=0.956..546.231 rows=399999 loops=1)
Index Cond: ((id >= 49600001) AND (id <= 50800001))
Buffers: shared hit=178872 read=10742
-> Index Scan using "7_7_req_pkey" on _hyper_2_7_chunk rs_1 (cost=0.44..36619.50 rows=777543 width=16) (actual time=0.037..767.744 rows=800002 loops=1)
Index Cond: ((id >= 49600001) AND (id <= 50800001))
Buffers: shared hit=260088 read=21272
-> Hash (cost=1367864.98..1367864.98 rows=68583298 width=8) (actual time=143681.850..143681.850 rows=68568554 loops=1)
Buckets: 262144 Batches: 512 Memory Usage: 7278kB
Buffers: shared hit=28764 read=653306 dirtied=214, temp written=233652
-> Append (cost=0.00..1367864.98 rows=68583298 width=8) (actual time=0.311..99590.021 rows=68568554 loops=1)
Buffers: shared hit=28764 read=653306 dirtied=214
-> Seq Scan on _hyper_3_2_chunk wt (cost=0.00..493704.44 rows=24941244 width=8) (actual time=0.309..14484.420 rows=24950147 loops=1)
Buffers: shared hit=661 read=243631
-> Seq Scan on _hyper_3_6_chunk wt_1 (cost=0.00..503935.04 rows=24978804 width=8) (actual time=0.334..14487.931 rows=24963020 loops=1)
Buffers: shared hit=168 read=253979
-> Seq Scan on _hyper_3_8_chunk wt_2 (cost=0.00..370225.50 rows=18663250 width=8) (actual time=0.327..10837.291 rows=18655387 loops=1)
Buffers: shared hit=27935 read=155696 dirtied=214
Planning time: 3.986 ms
Execution time: 154709.859 ms
Query 2 Plan:
Finalize Aggregate (cost=2634042.50..2634042.51 rows=1 width=8) (actual time=5525.626..5525.627 rows=1 loops=1)
Buffers: shared hit=8764620 read=12779
-> Gather (cost=2634042.29..2634042.50 rows=2 width=8) (actual time=5525.609..5525.705 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=8764620 read=12779
-> Partial Aggregate (cost=2633042.29..2633042.30 rows=1 width=8) (actual time=5515.507..5515.508 rows=1 loops=3)
Buffers: shared hit=8764620 read=12779
-> Nested Loop (cost=0.88..2632023.83 rows=407382 width=8) (actual time=5.383..5261.979 rows=332978 loops=3)
Buffers: shared hit=8764620 read=12779
-> Append (cost=0.44..40514.98 rows=407383 width=16) (actual time=0.035..924.498 rows=333334 loops=3)
Buffers: shared hit=446706
-> Parallel Index Scan using "5_5_req_pkey" on _hyper_2_5_chunk rs (cost=0.44..16667.91 rows=166585 width=16) (actual time=0.033..169.854 rows=133333 loops=3)
Index Cond: ((id >= 49600001) AND (id <= 50600001))
Buffers: shared hit=190175
-> Parallel Index Scan using "7_7_req_pkey" on _hyper_2_7_chunk rs_1 (cost=0.44..23847.07 rows=240798 width=16) (actual time=0.039..336.091 rows=200001 loops=3)
Index Cond: ((id >= 49600001) AND (id <= 50600001))
Buffers: shared hit=256531
-> Append (cost=0.44..6.33 rows=3 width=8) (actual time=0.011..0.011 rows=1 loops=1000001)
Buffers: shared hit=8317914 read=12779
-> Index Only Scan using "2_2_response_pkey" on _hyper_3_2_chunk wt (cost=0.44..2.11 rows=1 width=8) (actual time=0.003..0.003 rows=0 loops=1000001)
Index Cond: (req_id = req.req_id)
Heap Fetches: 0
Buffers: shared hit=3000005
-> Index Only Scan using "6_6_response_pkey" on _hyper_3_6_chunk wt_1 (cost=0.44..2.11 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1000001)
Index Cond: (req_id = req.req_id)
Heap Fetches: 192906
Buffers: shared hit=3551440 read=7082
-> Index Only Scan using "8_8_response_pkey" on _hyper_3_8_chunk wt_2 (cost=0.44..2.10 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=443006)
Index Cond: (req_id = req.req_id)
Heap Fetches: 162913
Buffers: shared hit=1766469 read=5697
Planning time: 0.839 ms
Execution time: 5525.814 ms

Search queries slow on Materialized Views of a restored dump as compared to the original DB

I have a PostgreSQL DB containing various materialized views. Running a query, query_a, completes in 2800 ms, and re-running the same query gives an execution time of 53 ms. This can be explained by PostgreSQL's caching. Now comes the tricky part: I create a dump of this DB and restore it into NewDB. When I re-run query_a I get an execution time of 2253 ms, and on re-running I get the same time again, i.e., the caching does not seem to help on NewDB.
I conducted various experiments and noticed that there is no improvement when I explicitly refresh the views, but if I drop the views and re-create them in NewDB, I get the original performance back.
Note that the data is identical in DB and NewDB, and I used the commands mentioned here for dump creation and restore.
The result of re-running the query on DB was shown in a screenshot; the results for running the same query on NewDB for the first and second time are as follows:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=113790614477.61..113790614477.62 rows=1 width=8) (actual time=2284.605..2284.605 rows=1 loops=1)
Buffers: shared hit=3540872
CTE t
-> Merge Join (cost=40600.92..11846650.56 rows=763041594 width=425) (actual time=3.693..1909.916 rows=6005 loops=1)
Merge Cond: (n.node_id = nd.node_id)
Buffers: shared hit=3524063
-> Index Scan using nodes_node_id on nodes n (cost=0.43..350865.91 rows=3824099 width=389) (actual time=0.014..1651.025 rows=3598491 loops=1)
Buffers: shared hit=3523372
-> Sort (cost=40600.49..40700.26 rows=39907 width=40) (actual time=3.668..4.227 rows=6005 loops=1)
Sort Key: nd.node_id
Sort Method: quicksort Memory: 623kB
Buffers: shared hit=691
-> Bitmap Heap Scan on nodes_depths nd (cost=1153.11..37550.73 rows=39907 width=40) (actual time=0.627..2.846 rows=6005 loops=1)
Recheck Cond: ((ancestor_1 = 1) OR (ancestor_2 = 1))
Heap Blocks: exact=658
Buffers: shared hit=691
-> BitmapOr (cost=1153.11..1153.11 rows=40007 width=0) (actual time=0.547..0.547 rows=0 loops=1)
Buffers: shared hit=33
-> Bitmap Index Scan on nodes_depths_1 (cost=0.00..566.58 rows=20003 width=0) (actual time=0.032..0.032 rows=156 loops=1)
Index Cond: (ancestor_1 = 1)
Buffers: shared hit=4
-> Bitmap Index Scan on nodes_depths_2 (cost=0.00..566.58 rows=20003 width=0) (actual time=0.515..0.515 rows=5849 loops=1)
Index Cond: (ancestor_2 = 1)
Buffers: shared hit=29
-> Merge Right Join (cost=169565733.26..97549168801.28 rows=6491839610305 width=0) (actual time=1915.721..2284.175 rows=6005 loops=1)
Merge Cond: (nodes_fts.node_id = t.node_id)
Buffers: shared hit=3540872
-> Index Only Scan using nodes_fts_idx on nodes_fts (cost=0.43..97055.96 rows=1701569 width=4) (actual time=0.041..277.890 rows=1598712 loops=1)
Heap Fetches: 1598712
Buffers: shared hit=16805
-> Materialize (cost=169565732.84..173380940.81 rows=763041594 width=4) (actual time=1915.675..1916.583 rows=6005 loops=1)
Buffers: shared hit=3524067
-> Sort (cost=169565732.84..171473336.82 rows=763041594 width=4) (actual time=1915.672..1916.057 rows=6005 loops=1)
Sort Key: t.node_id
Sort Method: quicksort Memory: 474kB
Buffers: shared hit=3524067
-> CTE Scan on t (cost=0.00..15260831.88 rows=763041594 width=4) (actual time=3.698..1914.771 rows=6005 loops=1)
Buffers: shared hit=3524063
Planning time: 68.064 ms
Execution time: 2285.084 ms
(40 rows)
and for the second run:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=113790614477.61..113790614477.62 rows=1 width=8) (actual time=2295.319..2295.319 rows=1 loops=1)
Buffers: shared hit=3540868
CTE t
-> Merge Join (cost=40600.92..11846650.56 rows=763041594 width=425) (actual time=15.324..1926.744 rows=6005 loops=1)
Merge Cond: (n.node_id = nd.node_id)
Buffers: shared hit=3524063
-> Index Scan using nodes_node_id on nodes n (cost=0.43..350865.91 rows=3824099 width=389) (actual time=0.027..1648.277 rows=3598491 loops=1)
Buffers: shared hit=3523372
-> Sort (cost=40600.49..40700.26 rows=39907 width=40) (actual time=15.254..15.903 rows=6005 loops=1)
Sort Key: nd.node_id
Sort Method: quicksort Memory: 623kB
Buffers: shared hit=691
-> Bitmap Heap Scan on nodes_depths nd (cost=1153.11..37550.73 rows=39907 width=40) (actual time=3.076..10.752 rows=6005 loops=1)
Recheck Cond: ((ancestor_1 = 1) OR (ancestor_2 = 1))
Heap Blocks: exact=658
Buffers: shared hit=691
-> BitmapOr (cost=1153.11..1153.11 rows=40007 width=0) (actual time=2.524..2.525 rows=0 loops=1)
Buffers: shared hit=33
-> Bitmap Index Scan on nodes_depths_1 (cost=0.00..566.58 rows=20003 width=0) (actual time=0.088..0.088 rows=156 loops=1)
Index Cond: (ancestor_1 = 1)
Buffers: shared hit=4
-> Bitmap Index Scan on nodes_depths_2 (cost=0.00..566.58 rows=20003 width=0) (actual time=2.434..2.435 rows=5849 loops=1)
Index Cond: (ancestor_2 = 1)
Buffers: shared hit=29
-> Merge Right Join (cost=169565733.26..97549168801.28 rows=6491839610305 width=0) (actual time=1933.113..2294.894 rows=6005 loops=1)
Merge Cond: (nodes_fts.node_id = t.node_id)
Buffers: shared hit=3540868
-> Index Only Scan using nodes_fts_idx on nodes_fts (cost=0.43..97055.96 rows=1701569 width=4) (actual time=0.077..271.313 rows=1598712 loops=1)
Heap Fetches: 1598712
Buffers: shared hit=16805
-> Materialize (cost=169565732.84..173380940.81 rows=763041594 width=4) (actual time=1933.030..1933.903 rows=6005 loops=1)
Buffers: shared hit=3524063
-> Sort (cost=169565732.84..171473336.82 rows=763041594 width=4) (actual time=1933.026..1933.375 rows=6005 loops=1)
Sort Key: t.node_id
Sort Method: quicksort Memory: 474kB
Buffers: shared hit=3524063
-> CTE Scan on t (cost=0.00..15260831.88 rows=763041594 width=4) (actual time=15.336..1932.145 rows=6005 loops=1)
Buffers: shared hit=3524063
Planning time: 1.154 ms
Execution time: 2295.801 ms
(40 rows)
The estimated number of rows is off from the actual numbers by orders of magnitude:
CTE Scan on t (cost=0.00..15260831.88 rows=763041594 width=4)
(actual time=15.336..1932.145 rows=6005 loops=1)
When Postgres can't accurately estimate how much work one way of executing your query involves compared to another, it will generate inefficient query plans. That is why the same query can be slow even when all the data is in RAM.
When you back up a table, the dump does not contain the statistics used by the optimizer, so after restoring from the dump you need to wait for the autovacuum daemon or run ANALYZE manually.
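A sketch of that post-restore step; ANALYZE is standard SQL in PostgreSQL, and the bare form covers every table in the current database:
-- after the restore completes:
ANALYZE;              -- rebuild planner statistics for the whole database
-- or target specific tables, e.g. those appearing in the plans above:
ANALYZE nodes;
ANALYZE nodes_depths;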

postgres window function trebles query time

I'm using Postgres 10 and have the following query:
select
count(task.id) over() as _total_ ,
json_agg(u.*) as users,
task.*
from task
left outer join taskuserlink_history tu on (task.id = tu.taskid)
left outer join "user" u on (tu.userId = u.id)
group by task.id offset 10 limit 10;
This query takes approx. 800 ms to execute.
If I remove the count(task.id) over() as _total_ line, it executes in 250 ms.
I have to confess to being a complete SQL noob, so the query itself may be completely borked.
I was wondering if anyone could point out the flaws in the query and suggest how to speed it up.
The number of tasks is approx 15k, with an average of 5 users per task, linked through taskuserlink
I have looked at the pgAdmin "explain" diagram, but to be honest I can't really figure it out yet ;)
The table definitions are:
task, with id (int) as the primary key column
taskuserlink_history, with taskId (int) and userId (int) (both foreign key constraints, indexed)
user, with id (int) as the primary key column
The query plan is as follows:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=4.74..12.49 rows=10 width=44) (actual time=1178.016..1178.043 rows=10 loops=1)
Buffers: shared hit=3731, temp read=6655 written=6914
-> WindowAgg (cost=4.74..10248.90 rows=13231 width=44) (actual time=1178.014..1178.040 rows=10 loops=1)
Buffers: shared hit=3731, temp read=6655 written=6914
-> GroupAggregate (cost=4.74..10083.51 rows=13231 width=36) (actual time=0.417..1049.294 rows=13255 loops=1)
Group Key: task.id
Buffers: shared hit=3731
-> Nested Loop Left Join (cost=4.74..9586.77 rows=66271 width=36) (actual time=0.103..309.372 rows=66162 loops=1)
Join Filter: (taskuserlink_history.userid = user_archive.id)
Rows Removed by Join Filter: 1182904
Buffers: shared hit=3731
-> Merge Left Join (cost=0.58..5563.22 rows=66271 width=8) (actual time=0.044..73.598 rows=66162 loops=1)
Merge Cond: (task.id = taskuserlink_history.taskid)
Buffers: shared hit=3629
-> Index Only Scan using task_pkey on task (cost=0.29..1938.30 rows=13231 width=4) (actual time=0.026..7.683 rows=13255 loops=1)
Heap Fetches: 13255
Buffers: shared hit=1810
-> Index Scan using taskuserlink_history_task_fk_idx on taskuserlink_history (cost=0.29..2764.46 rows=66271 width=8) (actual time=0.015..40.109 rows=66162 loops=1)
Filter: (timeend IS NULL)
Rows Removed by Filter: 13368
Buffers: shared hit=1819
-> Materialize (cost=4.17..50.46 rows=4 width=36) (actual time=0.000..0.001 rows=19 loops=66162)
Buffers: shared hit=102
-> Bitmap Heap Scan on user_archive (cost=4.17..50.44 rows=4 width=36) (actual time=0.050..0.305 rows=45 loops=1)
Recheck Cond: (archived_at IS NULL)
Heap Blocks: exact=11
Buffers: shared hit=102
-> Bitmap Index Scan on user_unique_username (cost=0.00..4.16 rows=4 width=0) (actual time=0.014..0.014 rows=46 loops=1)
Buffers: shared hit=1
SubPlan 1
-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=45)
Buffers: shared hit=90
-> Index Scan using task_assignedto_idx on task task_1 (cost=0.29..8.30 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=45)
Index Cond: (assignedtoid = user_archive.id)
Buffers: shared hit=90
Planning time: 0.989 ms
Execution time: 1191.451 ms
(37 rows)
Without the window function it is:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=4.74..12.36 rows=10 width=36) (actual time=0.510..1.763 rows=10 loops=1)
Buffers: shared hit=91
-> GroupAggregate (cost=4.74..10083.51 rows=13231 width=36) (actual time=0.509..1.759 rows=10 loops=1)
Group Key: task.id
Buffers: shared hit=91
-> Nested Loop Left Join (cost=4.74..9586.77 rows=66271 width=36) (actual time=0.073..0.744 rows=50 loops=1)
Join Filter: (taskuserlink_history.userid = user_archive.id)
Rows Removed by Join Filter: 361
Buffers: shared hit=91
-> Merge Left Join (cost=0.58..5563.22 rows=66271 width=8) (actual time=0.029..0.161 rows=50 loops=1)
Merge Cond: (task.id = taskuserlink_history.taskid)
Buffers: shared hit=7
-> Index Only Scan using task_pkey on task (cost=0.29..1938.30 rows=13231 width=4) (actual time=0.016..0.031 rows=11 loops=1)
Heap Fetches: 11
Buffers: shared hit=4
-> Index Scan using taskuserlink_history_task_fk_idx on taskuserlink_history (cost=0.29..2764.46 rows=66271 width=8) (actual time=0.009..0.081 rows=50 loops=1)
Filter: (timeend IS NULL)
Rows Removed by Filter: 11
Buffers: shared hit=3
-> Materialize (cost=4.17..50.46 rows=4 width=36) (actual time=0.001..0.009 rows=8 loops=50)
Buffers: shared hit=84
-> Bitmap Heap Scan on user_archive (cost=4.17..50.44 rows=4 width=36) (actual time=0.040..0.382 rows=38 loops=1)
Recheck Cond: (archived_at IS NULL)
Heap Blocks: exact=7
Buffers: shared hit=84
-> Bitmap Index Scan on user_unique_username (cost=0.00..4.16 rows=4 width=0) (actual time=0.012..0.012 rows=46 loops=1)
Buffers: shared hit=1
SubPlan 1
-> Aggregate (cost=8.30..8.31 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=38)
Buffers: shared hit=76
-> Index Scan using task_assignedto_idx on task task_1 (cost=0.29..8.30 rows=1 width=4) (actual time=0.003..0.003 rows=0 loops=38)
Index Cond: (assignedtoid = user_archive.id)
Buffers: shared hit=76
Planning time: 0.895 ms
Execution time: 1.890 ms
(35 rows)
I believe the LIMIT clause is making the difference. LIMIT limits the number of rows returned, not necessarily the work involved:
Your second query can be aborted early after 20 rows have been constructed (10 for OFFSET and 10 for LIMIT).
However, your first query needs to go through the whole set to calculate the count(task.id).
Not what you were asking, but I'll say it anyway:
"user" is not a table but a view. That is where both queries actually get slower than they should be (the "Materialize" node in the plan).
Using OFFSET for paging invites trouble because it gets slower as the OFFSET increases; keyset pagination avoids this (see the sketch below).
Using OFFSET and LIMIT without an ORDER BY is most likely not what you want: the result sets might not be identical on consecutive calls.
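A sketch of keyset pagination for this query; it assumes task.id is the paging key, and last_seen_id stands for the highest id on the previous page (the :last_seen_id placeholder is illustrative, bound by the application). The window-function count is omitted, since computing the total forces a pass over the whole set:
select json_agg(u.*) as users, task.*
from task
left outer join taskuserlink_history tu on (task.id = tu.taskid)
left outer join "user" u on (tu.userId = u.id)
where task.id > :last_seen_id  -- 0 for the first page
group by task.id
order by task.id
limit 10;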