Suggestions to reduce memory usage on table partitioning (PostgreSQL 11)

I have a few tables with 20-40 million rows, and my queries take a long time to execute. Are there any suggestions for troubleshooting/analyzing the queries in detail, e.g. to see where most of the memory goes, or anything else worth trying before going for partitioning?
Also, I have a few queries that are used for analysis, and these run over the whole range of dates (they have to go through all of the data).
So I need an overall solution that keeps my basic queries fast and ensures the analysis queries neither fail by running out of memory nor crash the DB.
One table is nearly 120 GB; the other tables just have a huge number of rows.
I tried partitioning the tables on a weekly and a monthly date basis, but then the queries run out of memory, and the number of locks increases by a huge factor: a query on the normal table took 13 locks, while queries on the partitioned tables take 250 locks (monthly partitions) and 1000 locks (weekly partitions).
I have read that there is an overhead that adds up when you have partitions.
Analysis query:
SELECT id
from TABLE1
where id NOT IN (
SELECT DISTINCT id
FROM TABLE2
);
TABLE1 and TABLE2 are partitioned, the first by event_data_timestamp and the second by event_timestamp.
The analysis queries run out of memory and consume a huge number of locks; the date-based queries are pretty fast, though.
QUERY:
EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM Table1_monthly WHERE event_timestamp > '2019-01-01' and id NOT IN (SELECT DISTINCT id FROM Table2_monthly where event_data_timestamp > '2019-01-01');
Append (cost=32731.14..653650.98 rows=4656735 width=16) (actual time=2497.747..15405.447 rows=10121827 loops=1)
Buffers: shared hit=3 read=169100
-> Seq Scan on TABLE1_monthly_2019_01_26 (cost=32731.14..77010.63 rows=683809 width=16) (actual time=2497.746..3489.767 rows=1156382 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 462851
Buffers: shared read=44559
SubPlan 1
-> HashAggregate (cost=32728.64..32730.64 rows=200 width=16) (actual time=248.084..791.054 rows=1314570 loops=6)
Group Key: TABLE2_monthly_2019_01_26.cid
Buffers: shared read=24568
-> Append (cost=0.00..32277.49 rows=180458 width=16) (actual time=22.969..766.903 rows=1314570 loops=1)
Buffers: shared read=24568
-> Seq Scan on TABLE2_monthly_2019_01_26 (cost=0.00..5587.05 rows=32135 width=16) (actual time=22.965..123.734 rows=211977 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Rows Removed by Filter: 40282
Buffers: shared read=4382
-> Seq Scan on TABLE2_monthly_2019_02_25 (cost=0.00..5573.02 rows=32054 width=16) (actual time=0.700..121.657 rows=241977 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Buffers: shared read=4371
-> Seq Scan on TABLE2_monthly_2019_03_27 (cost=0.00..5997.60 rows=34496 width=16) (actual time=0.884..123.043 rows=253901 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Buffers: shared read=4704
-> Seq Scan on TABLE2_monthly_2019_04_26 (cost=0.00..6581.55 rows=37855 width=16) (actual time=0.690..129.537 rows=282282 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Buffers: shared read=5162
-> Seq Scan on TABLE2_monthly_2019_05_26 (cost=0.00..6585.38 rows=37877 width=16) (actual time=1.248..122.794 rows=281553 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Buffers: shared read=5165
-> Seq Scan on TABLE2_monthly_2019_06_25 (cost=0.00..999.60 rows=5749 width=16) (actual time=0.750..23.020 rows=42880 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
Buffers: shared read=784
-> Seq Scan on TABLE2_monthly_2019_07_25 (cost=0.00..12.75 rows=73 width=16) (actual time=0.007..0.007 rows=0 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
-> Seq Scan on TABLE2_monthly_2019_08_24 (cost=0.00..12.75 rows=73 width=16) (actual time=0.003..0.004 rows=0 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
-> Seq Scan on TABLE2_monthly_2019_09_23 (cost=0.00..12.75 rows=73 width=16) (actual time=0.003..0.004 rows=0 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
-> Seq Scan on TABLE2_monthly_2019_10_23 (cost=0.00..12.75 rows=73 width=16) (actual time=0.007..0.007 rows=0 loops=1)
Filter: (event_data_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone)
-> Seq Scan on TABLE1_monthly_2019_02_25 (cost=32731.14..88679.16 rows=1022968 width=16) (actual time=1008.738..2341.807 rows=1803957 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 241978
Buffers: shared hit=1 read=25258
-> Seq Scan on TABLE1_monthly_2019_03_27 (cost=32731.14..97503.58 rows=1184315 width=16) (actual time=1000.795..2474.769 rows=2114729 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 253901
Buffers: shared hit=1 read=29242
-> Seq Scan on TABLE1_monthly_2019_04_26 (cost=32731.14..105933.54 rows=1338447 width=16) (actual time=892.820..2405.941 rows=2394619 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 282282
Buffers: shared hit=1 read=33048
-> Seq Scan on TABLE1_monthly_2019_05_26 (cost=32731.14..87789.65 rows=249772 width=16) (actual time=918.397..2614.059 rows=2340789 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 281553
Buffers: shared read=32579
-> Seq Scan on TABLE1_monthly_2019_06_25 (cost=32731.14..42458.60 rows=177116 width=16) (actual time=923.367..1141.672 rows=311351 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 42880
Buffers: shared read=4414
-> Seq Scan on TABLE1_monthly_2019_07_25 (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.008..0.008 rows=0 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
-> Seq Scan on TABLE1_monthly_2019_08_24 (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
-> Seq Scan on TABLE1_monthly_2019_09_23 (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
-> Seq Scan on TABLE1_monthly_2019_10_23 (cost=32731.14..32748.04 rows=77 width=16) (actual time=0.003..0.003 rows=0 loops=1)
Filter: ((event_timestamp > '2019-01-01 00:00:00+00'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Planning Time: 244.669 ms
Execution Time: 15959.111 ms
(69 rows)

A query that joins two large partitioned tables to produce 10 million rows is going to consume resources; there is no way around that.
You can trade speed for memory by reducing work_mem: smaller values will make your queries slower, but consume less memory.
I'd say that the best thing would be to leave work_mem high but reduce max_connections so that you don't run out of memory so fast. Also, putting more RAM into the machine is one of the cheapest hardware tuning techniques.
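If you want to experiment with that trade-off, these are the knobs involved; a minimal sketch with placeholder values, not recommendations:
ALTER SYSTEM SET work_mem = '256MB';       -- per sort/hash node and per backend; keep it generous for the hashed subplan
ALTER SYSTEM SET max_connections = 50;     -- fewer backends means less total memory in flight; takes effect after a restart
SELECT pg_reload_conf();                   -- new sessions pick up the work_mem change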
You can improve the query slightly:
Remove the DISTINCT, which is useless, consumes CPU resources and throws your estimates off.
ANALYZE table2 so that you get better estimates.
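As a sketch, both suggestions combined, using the table names from the query above:
ANALYZE table2_monthly;   -- refresh statistics so the subquery is no longer estimated at 200 rows
EXPLAIN (ANALYZE, BUFFERS)
SELECT id
FROM table1_monthly
WHERE event_timestamp > '2019-01-01'
  AND id NOT IN (SELECT id              -- DISTINCT removed; NOT IN hashes and de-duplicates anyway
                 FROM table2_monthly
                 WHERE event_data_timestamp > '2019-01-01');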
About partitioning: if these queries scan all partitions, the query will be slower with partitioned tables.
Whether partitioning is a good idea for you depends on whether you have other queries that benefit from it:
First and foremost, mass deletion, which becomes painless when you can simply drop a partition (see the sketch after this list).
Sequential scans where the partitioning key is part of the scan filter.
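For example, getting rid of an old month of data could look like this (partition names taken from the plan above; a minimal sketch, not something tested against your schema):
-- either detach the partition to keep its data around as a standalone table ...
ALTER TABLE table1_monthly DETACH PARTITION table1_monthly_2019_01_26;
-- ... or simply drop it, data and all:
DROP TABLE table1_monthly_2019_01_26;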
Contrary to popular belief, partitioning is not something you automatically benefit from just because you have large tables: many queries become slower with partitioning.
The locks are your least worry: just increase max_locks_per_transaction.
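That is a server-level setting and needs a restart; the value below is only illustrative:
ALTER SYSTEM SET max_locks_per_transaction = 256;   -- default is 64; each partition touched by a query takes a lock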

Related

Slow performance and index not taken into consideration in PostgreSQL

I'll post my query plan, view and indexes at the bottom of the page so that I can keep this question as clean as possible.
The issue I have is slow performance for a view that does not use indexes the way I would expect. I have a table with around 7 million rows that I use as the source for the view below.
I have added an index on eventdate which is being used as expected, but why is the index on manufacturerkey ignored? Which indexes would be more efficient?
Also, could it be the to_char(fe.eventdate, 'HH24:MI'::text) AS hourminutes part that hurts performance?
Query plan: https://explain.dalibo.com/plan/Pvw
CREATE OR REPLACE VIEW public.v_test
AS SELECT df.facilityname,
dd.date,
dt.military_hour AS hour,
to_char(fe.eventdate, 'HH24:MI'::text) AS hourminutes,
df.tenantid,
df.tenantname,
dev.name AS event_type_name,
dtt.name AS ticket_type_name,
dde.name AS device_type_name,
count(*) AS count,
dl.country,
dl.state,
dl.district,
ds.systemmanufacturer
FROM fact_entriesexits fe
JOIN dim_facility df ON df.key = fe.facilitykey
JOIN dim_date dd ON dd.key = fe.datekey
JOIN dim_time dt ON dt.key = fe.timekey
LEFT JOIN dim_device dde ON dde.key = fe.devicekey
JOIN dim_eventtype dev ON dev.key = fe.eventtypekey
JOIN dim_tickettype dtt ON dtt.key = fe.tickettypekey
JOIN dim_licenseplate dl ON dl.key = fe.licenseplatekey
LEFT JOIN dim_systeminterface ds ON ds.key = fe.systeminterfacekey
WHERE fe.manufacturerkey = ANY (ARRAY[2, 1])
AND fe.eventdate >= '2022-01-01'
GROUP BY df.tenantname, df.tenantid, dl.region, dl.country, dl.state,
dl.district, df.facilityname, dev.name, dtt.name, dde.name,
ds.systemmanufacturer, dd.date, dt.military_hour, (to_char(fe.eventdate, 'HH24:MI'::text)), fe.licenseplatekey;
Here are the indexes the table fact_entriesexits contains:
CREATE INDEX idx_devicetype_fact_entriesexits_202008 ON public.fact_entriesexits_202008 USING btree (devicetype)
CREATE INDEX idx_etlsource_fact_entriesexits_202008 ON public.fact_entriesexits_202008 USING btree (etlsource)
CREATE INDEX idx_eventdate_fact_entriesexits_202008 ON public.fact_entriesexits_202008 USING btree (eventdate)
CREATE INDEX idx_fact_entriesexits_202008 ON public.fact_entriesexits_202008 USING btree (datekey)
CREATE INDEX idx_manufacturerkey_202008 ON public.fact_entriesexits_202008 USING btree (manufacturerkey)
Query plan:
Subquery Scan on v_lpr2 (cost=505358.60..508346.26 rows=17079 width=340) (actual time=85619.542..109797.440 rows=3008065 loops=1)
Buffers: shared hit=91037 read=366546, temp read=83669 written=83694
-> Finalize GroupAggregate (cost=505358.60..508175.47 rows=17079 width=359) (actual time=85619.539..109097.943 rows=3008065 loops=1)
Group Key: df.tenantname, df.tenantid, dl.region, dl.country, dl.state, dl.district, df.facilityname, dev.name, dtt.name, dde.name, ds.systemmanufacturer, dd.date, dt.military_hour, (to_char(fe.eventdate, 'HH24:MI'::text)), fe.licenseplatekey
Buffers: shared hit=91037 read=366546, temp read=83669 written=83694
-> Gather Merge (cost=505358.60..507392.70 rows=14232 width=359) (actual time=85619.507..105395.429 rows=3308717 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=91037 read=366546, temp read=83669 written=83694
-> Partial GroupAggregate (cost=504358.57..504749.95 rows=7116 width=359) (actual time=85169.770..94043.715 rows=1102906 loops=3)
Group Key: df.tenantname, df.tenantid, dl.region, dl.country, dl.state, dl.district, df.facilityname, dev.name, dtt.name, dde.name, ds.systemmanufacturer, dd.date, dt.military_hour, (to_char(fe.eventdate, 'HH24:MI'::text)), fe.licenseplatekey
Buffers: shared hit=91037 read=366546, temp read=83669 written=83694
-> Sort (cost=504358.57..504376.36 rows=7116 width=351) (actual time=85169.748..91995.088 rows=1500405 loops=3)
Sort Key: df.tenantname, df.tenantid, dl.region, dl.country, dl.state, dl.district, df.facilityname, dev.name, dtt.name, dde.name, ds.systemmanufacturer, dd.date, dt.military_hour, (to_char(fe.eventdate, 'HH24:MI'::text)), fe.licenseplatekey
Sort Method: external merge Disk: 218752kB
Buffers: shared hit=91037 read=366546, temp read=83669 written=83694
-> Hash Left Join (cost=3904.49..503903.26 rows=7116 width=351) (actual time=52.894..46338.295 rows=1500405 loops=3)
Hash Cond: (fe.systeminterfacekey = ds.key)
Buffers: shared hit=90979 read=366546
-> Hash Join (cost=3886.89..503848.87 rows=7116 width=321) (actual time=52.458..44551.012 rows=1500405 loops=3)
Hash Cond: (fe.licenseplatekey = dl.key)
Buffers: shared hit=90943 read=366546
-> Hash Left Join (cost=3849.10..503792.31 rows=7116 width=269) (actual time=51.406..43869.673 rows=1503080 loops=3)
Hash Cond: (fe.devicekey = dde.key)
Buffers: shared hit=90870 read=366546
-> Hash Join (cost=3405.99..503330.51 rows=7116 width=255) (actual time=47.077..43258.069 rows=1503080 loops=3)
Hash Cond: (fe.timekey = dt.key)
Buffers: shared hit=90021 read=366546
-> Hash Join (cost=570.97..500476.80 rows=7116 width=257) (actual time=6.869..42345.723 rows=1503080 loops=3)
Hash Cond: (fe.datekey = dd.key)
Buffers: shared hit=87348 read=366546
-> Hash Join (cost=166.75..500053.90 rows=7116 width=257) (actual time=2.203..41799.463 rows=1503080 loops=3)
Hash Cond: (fe.facilitykey = df.key)
Buffers: shared hit=86787 read=366546
-> Hash Join (cost=2.72..499871.14 rows=7116 width=224) (actual time=0.362..41103.372 rows=1503085 loops=3)
Hash Cond: (fe.tickettypekey = dtt.key)
Buffers: shared hit=86427 read=366546
-> Hash Join (cost=1.14..499722.81 rows=54741 width=214) (actual time=0.311..40595.537 rows=1503085 loops=3)
Hash Cond: (fe.eventtypekey = dev.key)
Buffers: shared hit=86424 read=366546
-> Append (cost=0.00..494830.25 rows=1824733 width=40) (actual time=0.266..40015.860 rows=1503085 loops=3)
Buffers: shared hit=86421 read=366546
-> Parallel Seq Scan on fact_entriesexits fe (cost=0.00..0.00 rows=1 width=40) (actual time=0.001..0.001 rows=0 loops=3)
Filter: ((manufacturerkey = ANY ('{2,1}'::integer[])) AND (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone))
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202101 on fact_entriesexits_202101 fe_25 (cost=0.42..4.28 rows=1 width=40) (actual time=0.005..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202102 on fact_entriesexits_202102 fe_26 (cost=0.42..4.27 rows=1 width=40) (actual time=0.005..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202103 on fact_entriesexits_202103 fe_27 (cost=0.42..4.24 rows=1 width=40) (actual time=0.007..0.007 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202104 on fact_entriesexits_202104 fe_28 (cost=0.42..4.05 rows=1 width=40) (actual time=0.006..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202105 on fact_entriesexits_202105 fe_29 (cost=0.43..4.12 rows=1 width=40) (actual time=0.006..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202106 on fact_entriesexits_202106 fe_30 (cost=0.43..4.19 rows=1 width=40) (actual time=0.005..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202107 on fact_entriesexits_202107 fe_31 (cost=0.43..4.28 rows=1 width=40) (actual time=0.005..0.006 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202108 on fact_entriesexits_202108 fe_32 (cost=0.43..3.83 rows=1 width=40) (actual time=0.007..0.007 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202109 on fact_entriesexits_202109 fe_33 (cost=0.43..3.40 rows=1 width=40) (actual time=0.006..0.007 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202110 on fact_entriesexits_202110 fe_34 (cost=0.43..2.77 rows=1 width=40) (actual time=0.005..0.005 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202111 on fact_entriesexits_202111 fe_35 (cost=0.43..3.21 rows=1 width=40) (actual time=0.005..0.005 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Index Scan using idx_eventdate_fact_entriesexits_202112 on fact_entriesexits_202112 fe_36 (cost=0.43..3.45 rows=1 width=40) (actual time=0.004..0.004 rows=0 loops=3)
Index Cond: (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone)
Filter: (manufacturerkey = ANY ('{2,1}'::integer[]))
Buffers: shared hit=3
-> Parallel Seq Scan on fact_entriesexits_202201 fe_37 (cost=0.00..382550.76 rows=445931 width=40) (actual time=0.032..39090.092 rows=379902 loops=3)
Filter: ((manufacturerkey = ANY ('{2,1}'::integer[])) AND (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 298432
Buffers: shared hit=3286 read=366546
-> Parallel Seq Scan on fact_entriesexits_202204 fe_38 (cost=0.00..39567.99 rows=469653 width=40) (actual time=0.015..242.895 rows=375639 loops=3)
Filter: ((manufacturerkey = ANY ('{2,1}'::integer[])) AND (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 158868
Buffers: shared hit=29546
-> Parallel Seq Scan on fact_entriesexits_202202 fe_39 (cost=0.00..30846.99 rows=437343 width=40) (actual time=0.019..230.952 rows=357451 loops=3)
Filter: ((manufacturerkey = ANY ('{2,1}'::integer[])) AND (eventdate >= '2022-01-01 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 98708
Buffers: shared hit=22294
I think you'll get the most benefit out of creating a composite index for querying with both eventdate and manufacturerkey; e.g.:
CREATE INDEX idx_manufacturerkey_eventdate_202008
ON public.fact_entriesexits_202008 USING btree (manufacturerkey, eventdate)
Since it's a composite index, put whatever column you're more likely to query by alone on the left side. You can remove the other index for that column, since it will be covered by the composite index.
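For example (assuming no other query relies on the single-column index by itself):
DROP INDEX public.idx_manufacturerkey_202008;   -- covered by the new (manufacturerkey, eventdate) index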
As for the to_char on eventdate: while you could create a special index for that expression, you might get better performance by splitting the query up into a grouped CTE and a join. In other words, limit the GROUP BY to the columns that actually define your unique groups, and then join that query with the tables you need to get the final selection of columns.
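A rough sketch of that idea, aggregating on the fact table's keys first and joining the dimensions afterwards (the column list is abbreviated, and whether the grouping stays equivalent depends on your dimension keys mapping one-to-one to the names you group by, so treat it as illustrative only):
WITH agg AS (
    SELECT fe.facilitykey, fe.datekey, fe.timekey, fe.devicekey, fe.eventtypekey,
           fe.tickettypekey, fe.licenseplatekey, fe.systeminterfacekey,
           to_char(fe.eventdate, 'HH24:MI') AS hourminutes,
           count(*) AS count
    FROM fact_entriesexits fe
    WHERE fe.manufacturerkey = ANY (ARRAY[2, 1])
      AND fe.eventdate >= '2022-01-01'
    GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9
)
SELECT df.facilityname, dd.date, dt.military_hour AS hour, agg.hourminutes,
       dev.name AS event_type_name, dtt.name AS ticket_type_name, agg.count
FROM agg
JOIN dim_facility df ON df.key = agg.facilitykey
JOIN dim_date dd ON dd.key = agg.datekey
JOIN dim_time dt ON dt.key = agg.timekey
JOIN dim_eventtype dev ON dev.key = agg.eventtypekey
JOIN dim_tickettype dtt ON dtt.key = agg.tickettypekey;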
Your slowest seq scan step is returning over half the rows of its partition, removing 298432 and returning 379902 (times roughly 3 each, due to the parallel workers). An index is unlikely to be helpful when you return that large a share of the table's rows anyway.
Note that that partition also seems to be massively bloated. It is hard to see why else it would be so slow and require so many buffer reads compared to the number of rows.
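One way to check that is to compare the partition's on-disk size with its row counts (a sketch using the statistics views; the partition name is taken from the plan above):
SELECT pg_size_pretty(pg_relation_size('fact_entriesexits_202201')) AS size,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'fact_entriesexits_202201';
-- if it really is bloated, rewriting the partition reclaims the space (takes an exclusive lock):
VACUUM FULL fact_entriesexits_202201;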

Understanding why my Postgres queries are faster without indexes

We've inherited a PostgreSQL-driven data warehouse with some serious performance issues. We've selected a database for one of our customers and are benchmarking queries. We've dug into the queries and found a couple of common tables which are selected from in all queries, and which we believe are at the heart of our poor performance. We're concentrating particularly on cold-start performance, when no data is loaded into the shared buffers, as this is a common scenario for our customers. We took a large query and stripped it down to its slowest part:
explain (analyze, buffers, costs, format json)
select
poi."SKUId" as "SKUId",
poi."ConvertedLineTotal" as "totalrevenue",
poi."TotalDispachCost" as "totaldispachcost",
poi."Quantity" as "quantitysold"
from public.processedorder o
join public.processedorder_item poi on poi."OrderId" = o."OrderId"
WHERE o."ReceivedDate" >= '2020-09-01' and o."ReceivedDate" <= '2021-01-01';
And in fact, we stripped this down further to a single table that seems particularly slow:
explain (analyze, buffers, costs, format json)
select o."OrderId", o."ChannelId"
from public.processedorder o
WHERE o."ReceivedDate" >= '2020-09-01' and o."ReceivedDate" <= '2021-01-01'
This processedorder table has around 2.1 million rows, with around 35-40k rows added each month.
The table looks like this (it's fairly wide):
CREATE TABLE public.processedorder (
"OrderId" int4 NOT NULL,
"ChannelId" int4 NOT NULL,
"ShippingId" int4 NOT NULL,
"CountryId" int4 NOT NULL,
"LocationId" int4 NOT NULL,
"PackagingId" int4 NOT NULL,
"ConvertedTotal" numeric(18, 6) NOT NULL,
"ConvertedSubtotal" numeric(18, 6) NOT NULL,
"ConvertedShippingCost" numeric(18, 6) NOT NULL,
"ConvertedShippingTax" numeric(18, 6) NOT NULL,
"ConvertedTax" numeric(18, 6) NOT NULL,
"ConvertedDiscount" numeric(18, 6) NOT NULL,
"ConversionRate" numeric(18, 6) NOT NULL,
"Currency" varchar(3) NOT NULL,
"OriginalTotal" numeric(18, 6) NOT NULL,
"OriginalSubtotal" numeric(18, 6) NOT NULL,
"OriginalShippingCost" numeric(18, 6) NOT NULL,
"OrignalShippingTax" numeric(18, 6) NOT NULL,
"OriginalTax" numeric(18, 6) NOT NULL,
"OriginalDiscount" numeric(18, 6) NOT NULL,
"ReceivedDate" timestamp NOT NULL,
"DispatchByDate" timestamp NOT NULL,
"ProcessedDate" timestamp NOT NULL,
"HoldOrCancel" bool NOT NULL,
"CustomerHash" varchar(100) NOT NULL,
"EmailHash" varchar(100) NOT NULL,
"GetPostalCode" varchar(10) NOT NULL,
"TagId" uuid NOT NULL,
"timestamp" timestamp NOT NULL,
"IsRMA" bool NOT NULL DEFAULT false,
"ConversionType" int4 NOT NULL DEFAULT 0,
"ItemWeight" numeric(18, 6) NULL,
"TotalWeight" numeric(18, 6) NULL,
"PackageWeight" numeric(18, 6) NULL,
"PackageCount" int4 NULL,
CONSTRAINT processedorder_tagid_unique UNIQUE ("TagId")
)
WITH (
fillfactor=50
);
What confuses us is this: on a local copy of the database we run the smallest query with a simple index on ReceivedDate and it returns the results in 4 seconds:
create INDEX if not exists ix_processedorder_btree_receieveddate ON public.processedorder USING btree ("ReceivedDate" DESC);
execution plan can be seen here https://explain.tensor.ru/archive/explain/639d403ef7bf772f698502ed98ae3f63:0:2021-12-08#explain
Hash Join (cost=166953.18..286705.36 rows=201855 width=19) (actual time=3078.441..4176.251 rows=198552 loops=1)
Hash Cond: (poi."OrderId" = o."OrderId")
Buffers: shared hit=3 read=160623
-> Seq Scan on processedorder_item poi (cost=0.00..108605.28 rows=2434228 width=23) (actual time=0.158..667.435 rows=2434228 loops=1)
Buffers: shared read=84263
-> Hash (cost=164773.85..164773.85 rows=174346 width=4) (actual time=3077.990..3077.991 rows=173668 loops=1)
Buckets: 262144 (originally 262144) Batches: 1 (originally 1) Memory Usage: 8154kB
Buffers: shared read=76360
-> Bitmap Heap Scan on processedorder o (cost=3703.48..164773.85 rows=174346 width=4) (actual time=27.285..3028.721 rows=173668 loops=1)
Recheck Cond: (("ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND ("ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Heap Blocks: exact=75882
Buffers: shared read=76360
-> Bitmap Index Scan on ix_receiveddate (cost=0.00..3659.89 rows=174346 width=0) (actual time=17.815..17.815 rows=173668 loops=1)
Index Cond: (("ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND ("ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Buffers: shared read=478
We then apply this index to our staging server (the same copy of the DB) and run the query, but this time it takes 44 seconds:
https://explain.tensor.ru/archive/explain/9610c603972ba89aac4e223072f27575:0:2021-12-08
Gather (cost=112168.21..275565.45 rows=174360 width=19) (actual time=42401.776..44549.996 rows=145082 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=149058 read=50771
-> Hash Join (cost=111168.21..257129.45 rows=72650 width=19) (actual time=42397.903..44518.824 rows=48361 loops=3)
Hash Cond: (poi."OrderId" = o."OrderId")
Inner Unique: true
Buffers: shared hit=445001 read=161024
-> Parallel Seq Scan on processedorder_item poi (cost=0.00..117223.50 rows=880850 width=23) (actual time=0.302..1426.753 rows=702469 loops=3)
Filter: ((NOT "ContainsComposites") AND ("SKUId" <> 0))
Rows Removed by Filter: 108940
Buffers: shared read=84260
-> Hash (cost=105532.45..105532.45 rows=173408 width=4) (actual time=42396.156..42396.156 rows=173668 loops=3)
Buckets: 262144 (originally 262144) Batches: 1 (originally 1) Memory Usage: 8154kB
Buffers: shared hit=444920 read=76764
-> Index Scan using ix_processedorder_receieveddate on processedorder o (cost=0.43..105532.45 rows=173408 width=4) (actual time=0.827..42152.428 rows=173668 loops=3)
Index Cond: (("ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND ("ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Buffers: shared hit=444920 read=76764
Finally, with common sense apparently not working, we just remove the index on the staging server and find that the query returns the data in 4 seconds (just like our local machine with the index):
https://explain.tensor.ru/archive/explain/486512e19b45d5cbe4b893fdecc434b8:0:2021-12-08
Gather (cost=1000.00..200395.65 rows=177078 width=4) (actual time=2.556..4695.986 rows=173668 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared read=41868
-> Parallel Seq Scan on processedorder o (cost=0.00..181687.85 rows=73782 width=4) (actual time=0.796..4663.511 rows=57889 loops=3)
Filter: (("ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND ("ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 642941
Buffers: shared read=151028
Before each query run I execute from SSH:
echo 3 > /proc/sys/vm/drop_caches; service postgresql restart;
We also ran a vacuum full; analyze; before testing.
Is anyone able to explain why this is happening? It makes no sense to us: I would expect the query to perform fastest with the index, given that we are querying a small portion of the data (order records span 9 years and we are selecting just 3 months).
The server itself is Postgres 10.4 running on an Amazon AWS EC2 i3.2xlarge instance with a couple of io2 EBS block store drives running in RAID 0 hosting the Postgres data.
work_mem is 150MB
shared_buffers is set to 15GB (the server has 60GB of RAM in total)
effective_io_concurrency = 256
effective_cache_size=45GB
----- Update 1
As per Frank's suggestion we tried adding a new index, which didn't seem to help:
Gather (cost=1000.86..218445.54 rows=201063 width=19) (actual time=4.412..61616.676 rows=198552 loops=1)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=239331 read=45293
-> Nested Loop (cost=0.86..197339.24 rows=83776 width=19) (actual time=3.214..61548.378 rows=66184 loops=3)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Buffers: shared hit=748815 read=136548
Worker 0: actual time=2.494..61658.415 rows=65876 loops=1
Buffers: shared hit=247252 read=45606
Worker 1: actual time=3.033..61667.490 rows=69100 loops=1
Buffers: shared hit=262232 read=45649
-> Parallel Index Only Scan using ix_processedorder_btree_receieveddate_orderid on public.processedorder o (cost=0.43..112293.34 rows=72359 width=4) (actual time=1.811..40429.474 rows=57889 loops=3)
Output: o."ReceivedDate", o."OrderId"
Index Cond: ((o."ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND (o."ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Buffers: shared hit=97131 read=76571
Worker 0: actual time=1.195..40625.809 rows=57420 loops=1
Buffers: shared hit=31847 read=25583
Worker 1: actual time=1.850..40463.813 rows=60469 loops=1
Buffers: shared hit=34898 read=25584
-> Index Scan using ix_processedorder_item_orderid on public.processedorder_item poi (cost=0.43..1.12 rows=2 width=23) (actual time=0.316..0.361 rows=1 loops=173668)
Output: poi."OrderItemId", poi."OrderId", poi."SKUId", poi."Quantity", poi."ConvertedLineTotal", poi."ConvertedLineSubtotal", poi."ConvertedLineTax", poi."ConvertedLineDiscount", poi."ConversionRate", poi."OriginalLineTotal", poi."OriginalLineSubtotal", poi."OriginalLineTax", poi."OriginalLineDiscount", poi."Currency", poi."ContainsComposites", poi."TotalDispachCost", poi."TagId", poi."timestamp"
Index Cond: (poi."OrderId" = o."OrderId")
Buffers: shared hit=651684 read=59977
Worker 0: actual time=0.316..0.362 rows=1 loops=57420
Buffers: shared hit=215405 read=20023
Worker 1: actual time=0.303..0.347 rows=1 loops=60469
Buffers: shared hit=227334 read=20065
----- Update 2
We've rerun the larger of the two queries:
explain (analyze, buffers, verbose, costs, format json)
select
poi."SKUId" as "SKUId",
poi."ConvertedLineTotal" as "totalrevenue",
poi."TotalDispachCost" as "totaldispachcost",
poi."Quantity" as "quantitysold"
from public.processedorder o
join public.processedorder_item poi on poi."OrderId" = o."OrderId"
WHERE o."ReceivedDate" >= '2020-09-01' and o."ReceivedDate" <= '2021-01-01';
This time we had track_io_timing enabled. We ran the query twice, once with the index and once without it; these are the plans.
With Index:
https://explain.tensor.ru/archive/explain/d763d7e1754c4ddac8bb61e403b135d2:0:2021-12-09
Gather (cost=111153.17..252254.44 rows=201029 width=19) (actual time=36978.100..39083.705 rows=198552 loops=1)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=138354 read=60159
I/O Timings: read=16453.227
-> Hash Join (cost=110153.17..231151.54 rows=83762 width=19) (actual time=36974.257..39044.116 rows=66184 loops=3)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Hash Cond: (poi."OrderId" = o."OrderId")
Buffers: shared hit=444230 read=160640
I/O Timings: read=37254.816
Worker 0: actual time=36972.516..39140.086 rows=79787 loops=1
Buffers: shared hit=155175 read=48507
I/O Timings: read=9382.648
Worker 1: actual time=36972.438..39138.754 rows=77496 loops=1
Buffers: shared hit=150701 read=51974
I/O Timings: read=11418.941
-> Parallel Seq Scan on public.processedorder_item poi (cost=0.00..114682.68 rows=1014089 width=23) (actual time=0.262..1212.102 rows=811409 loops=3)
Output: poi."OrderItemId", poi."OrderId", poi."SKUId", poi."Quantity", poi."ConvertedLineTotal", poi."ConvertedLineSubtotal", poi."ConvertedLineTax", poi."ConvertedLineDiscount", poi."ConversionRate", poi."OriginalLineTotal", poi."OriginalLineSubtotal", poi."OriginalLineTax", poi."OriginalLineDiscount", poi."Currency", poi."ContainsComposites", poi."TotalDispachCost", poi."TagId", poi."timestamp"
Buffers: shared read=84260
I/O Timings: read=1574.316
Worker 0: actual time=0.073..1258.453 rows=870378 loops=1
Buffers: shared read=30133
I/O Timings: read=535.914
Worker 1: actual time=0.021..1256.769 rows=841299 loops=1
Buffers: shared read=29126
I/O Timings: read=538.814
-> Hash (cost=104509.15..104509.15 rows=173662 width=4) (actual time=36972.511..36972.511 rows=173668 loops=3)
Output: o."OrderId"
Buckets: 262144 (originally 262144) Batches: 1 (originally 1) Memory Usage: 8154kB
Buffers: shared hit=444149 read=76380
I/O Timings: read=35680.500
Worker 0: actual time=36970.996..36970.996 rows=173668 loops=1
Buffers: shared hit=155136 read=18374
I/O Timings: read=8846.735
Worker 1: actual time=36970.783..36970.783 rows=173668 loops=1
Buffers: shared hit=150662 read=22848
I/O Timings: read=10880.127
-> Index Scan using ix_processedorder_btree_receieveddate on public.processedorder o (cost=0.43..104509.15 rows=173662 width=4) (actual time=0.617..36736.881 rows=173668 loops=3)
Output: o."OrderId"
Index Cond: ((o."ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND (o."ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Buffers: shared hit=444149 read=76380
I/O Timings: read=35680.500
Worker 0: actual time=0.018..36741.158 rows=173668 loops=1
Buffers: shared hit=155136 read=18374
I/O Timings: read=8846.735
Worker 1: actual time=0.035..36733.684 rows=173668 loops=1
Buffers: shared hit=150662 read=22848
I/O Timings: read=10880.127
And then the faster run, without the index:
https://explain.tensor.ru/archive/explain/d0815f4b0c9baf3bdd512bc94051e768:0:2021-12-09
Gather (cost=231259.20..372360.47 rows=201029 width=19) (actual time=4829.302..7920.614 rows=198552 loops=1)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=106720 read=69983
I/O Timings: read=2431.673
-> Hash Join (cost=230259.20..351257.57 rows=83762 width=19) (actual time=4825.285..7877.981 rows=66184 loops=3)
Output: poi."SKUId", poi."ConvertedLineTotal", poi."TotalDispachCost", poi."Quantity"
Hash Cond: (poi."OrderId" = o."OrderId")
Buffers: shared hit=302179 read=235288
I/O Timings: read=7545.729
Worker 0: actual time=4823.181..7978.501 rows=81394 loops=1
Buffers: shared hit=87613 read=93424
I/O Timings: read=2556.079
Worker 1: actual time=4823.645..7979.241 rows=76022 loops=1
Buffers: shared hit=107846 read=71881
I/O Timings: read=2557.977
-> Parallel Seq Scan on public.processedorder_item poi (cost=0.00..114682.68 rows=1014089 width=23) (actual time=0.267..2122.529 rows=811409 loops=3)
Output: poi."OrderItemId", poi."OrderId", poi."SKUId", poi."Quantity", poi."ConvertedLineTotal", poi."ConvertedLineSubtotal", poi."ConvertedLineTax", poi."ConvertedLineDiscount", poi."ConversionRate", poi."OriginalLineTotal", poi."OriginalLineSubtotal", poi."OriginalLineTax", poi."OriginalLineDiscount", poi."Currency", poi."ContainsComposites", poi."TotalDispachCost", poi."TagId", poi."timestamp"
Buffers: shared read=84260
I/O Timings: read=4135.860
Worker 0: actual time=0.034..2171.677 rows=865244 loops=1
Buffers: shared read=29949
I/O Timings: read=1394.407
Worker 1: actual time=0.068..2174.408 rows=827170 loops=1
Buffers: shared read=28639
I/O Timings: read=1395.723
-> Hash (cost=224615.18..224615.18 rows=173662 width=4) (actual time=4823.318..4823.318 rows=173668 loops=3)
Output: o."OrderId"
Buckets: 262144 (originally 262144) Batches: 1 (originally 1) Memory Usage: 8154kB
Buffers: shared hit=302056 read=151028
I/O Timings: read=3409.869
Worker 0: actual time=4820.884..4820.884 rows=173668 loops=1
Buffers: shared hit=87553 read=63475
I/O Timings: read=1161.672
Worker 1: actual time=4822.104..4822.104 rows=173668 loops=1
Buffers: shared hit=107786 read=43242
I/O Timings: read=1162.254
-> Seq Scan on public.processedorder o (cost=0.00..224615.18 rows=173662 width=4) (actual time=0.744..4644.291 rows=173668 loops=3)
Output: o."OrderId"
Filter: ((o."ReceivedDate" >= '2020-09-01 00:00:00'::timestamp without time zone) AND (o."ReceivedDate" <= '2021-01-01 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 1928823
Buffers: shared hit=302056 read=151028
I/O Timings: read=3409.869
Worker 0: actual time=0.040..4651.203 rows=173668 loops=1
Buffers: shared hit=87553 read=63475
I/O Timings: read=1161.672
Worker 1: actual time=0.035..4637.415 rows=173668 loops=1
Buffers: shared hit=107786 read=43242
I/O Timings: read=1162.254
From your last two execution plans, I gather the following:
the slow plan using the index reads 22848 blocks from disk for processedorder, which takes 10.88 seconds
the fast plan using the sequential scan reads 43242 blocks from disk for processedorder, which takes 1.162 seconds
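Back-of-the-envelope, that is roughly 10880 ms / 22848 blocks ≈ 0.48 ms per block for the index scan's reads versus roughly 1162 ms / 43242 blocks ≈ 0.027 ms per block for the sequential scan, well over an order of magnitude per block.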
So there can be two explanations:
in the fast case, these blocks are actually cached in the kernel cache, so it is a caching effect
Experiment by clearing the kernel cache.
random I/O is more expensive than the optimizer thinks it is
In that case, you could consider raising random_page_cost.
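For instance, you could test a higher setting per session before changing anything globally (the value is only illustrative):
SET random_page_cost = 6;    -- the default is 4.0
EXPLAIN (ANALYZE, BUFFERS)
SELECT o."OrderId", o."ChannelId"
FROM public.processedorder o
WHERE o."ReceivedDate" >= '2020-09-01' AND o."ReceivedDate" <= '2021-01-01';
RESET random_page_cost;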
That first query would benefit from an index on the date and the id in a single index:
CREATE INDEX ix_processedorder_btree_receieveddate ON public.processedorder USING btree ("ReceivedDate" DESC, "OrderId");
Does that change the query plan?

PostgreSQL query plan changed

I recently did a VACUUM FULL on a range of tables, and a specific monitoring query suddenly became really slow. It is a query we use for monitoring, so it had happily run every 10 seconds for the past 2 months, but with the performance hit that came after the vacuum, most dashboards using it are down, and load ramps up until the server runs out of connections or resources.
Unfortunately I do not have the explain output of the previous one.
By not specifying a date limitation:
explain (analyze,timing) select min(id) from iqsim_cdrs;
Result (cost=0.64..0.65 rows=1 width=8) (actual time=6.222..6.222 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Limit (cost=0.57..0.64 rows=1 width=8) (actual time=6.216..6.217 rows=1 loops=1)
-> Index Only Scan using iqsim_cdrs_pkey on iqsim_cdrs (cost=0.57..34265771.63 rows=531041357 width=8) (actual time=6.213..6.213 rows=1 loops=1)
Index Cond: (id IS NOT NULL)
Heap Fetches: 1
Planning time: 1.876 ms
Execution time: 6.313 ms
(8 rows)
By limiting the date:
explain (analyze,timing) select min(id) from iqsim_cdrs where timestamp < '2019-01-01 00:00:00';
Result (cost=7.38..7.39 rows=1 width=8) (actual time=363763.144..363763.145 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Limit (cost=0.57..7.38 rows=1 width=8) (actual time=363763.137..363763.138 rows=1 loops=1)
-> Index Scan using iqsim_cdrs_pkey on iqsim_cdrs (cost=0.57..35593384.68 rows=5227047 width=8) (actual time=363763.133..363763.133 rows=1 loops=1)
Index Cond: (id IS NOT NULL)
Filter: ("timestamp" < '2019-01-01 00:00:00'::timestamp without time zone)
Rows Removed by Filter: 488693105
Planning time: 7.707 ms
Execution time: 363763.219 ms
(9 rows)
I'm not sure what could have caused this; I can only presume that before the full vacuum it used the index on timestamp?
* UPDATE *
As per jjanes's recommendation, here is the id+0 variant (the +0 prevents the planner from using the primary-key index to find the minimum):
explain (analyze,timing) select min(id+0) from iqsim_cdrs where timestamp < '2019-01-01 00:00:00';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=377400.34..377400.35 rows=1 width=8) (actual time=109.176..109.177 rows=1 loops=1)
-> Index Scan using index_iqsim_cdrs_on_timestamp on iqsim_cdrs (cost=0.57..351196.84 rows=5240699 width=8) (actual time=0.131..108.911 rows=126 loops=1)
Index Cond: ("timestamp" < '2019-01-01 00:00:00'::timestamp without time zone)
Planning time: 4.756 ms
Execution time: 109.405 ms
(5 rows)

Postgres Optimizer: Why it lies about costs? [EDIT] How to pick random_page_cost?

I've got the following issue with Postgres:
I have two tables, A and B:
A has 64 million records
B has 16 million records
A has a b_id field which is indexed --> ix_A_b_id
B has a datetime_field which is indexed --> ix_B_datetime
I have the following query:
SELECT
    A.id,
    B.some_field
FROM A
JOIN B ON A.b_id = B.id
WHERE B.datetime_field BETWEEN 'from' AND 'to'
This query is fine when the difference between from and to is small; in that case Postgres uses both indexes and I get results quite fast.
When the difference between the dates is bigger, the query slows down a lot, because Postgres decides to use ix_B_datetime only and then does a full scan on the table with 64M records, which is simply stupid.
I found the point at which the optimizer decides that using the full scan is faster.
For dates between 2019-03-10 17:05:00 and 2019-03-15 01:00:00 it computes a cost similar to that for 2019-03-10 17:00:00 and 2019-03-15 01:00:00, but the fetching time for the first query is about 50 ms and for the second almost 2 minutes.
The plans are below:
Nested Loop (cost=1.00..3484455.17 rows=113057 width=8)
-> Index Scan using ix_B_datetime on B (cost=0.44..80197.62 rows=28561 width=12)
Index Cond: ((datetime_field >= '2019-03-10 17:05:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
-> Index Scan using ix_A_b_id on A (cost=0.56..112.18 rows=701 width=12)
Index Cond: (b_id = B.id)
Hash Join (cost=80615.72..3450771.89 rows=113148 width=8)
Hash Cond: (A.b_id = B.id)
-> Seq Scan on spot (cost=0.00..3119079.50 rows=66652050 width=12)
-> Hash (cost=80258.42..80258.42 rows=28584 width=12)
-> Index Scan using ix_B_datetime on B (cost=0.44..80258.42 rows=28584 width=12)
Index Cond: ((datetime_field >= '2019-03-10 17:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
So my question is: why does Postgres lie about costs? Why does it calculate something as more expensive than it actually is? How do I fix that?
Temporarily I had to rewrite the query to always use the index on table A, but I don't like the following solution, because it's hacky and unclear, and it's slower for small chunks of data but much faster for bigger chunks:
with cc as (
select id, some_field from B WHERE B.datetime_field >= '2019-03-08'
AND B.datetime_field < '2019-03-15'
)
SELECT X.id, Y.some_field
FROM (SELECT b_id, id from A where b_id in (SELECT id from cc)) X
JOIN (SELECT id, some_field FROM cc) Y ON X.b_id = Y.id
EDIT:
So, as a_horse_with_no_name suggested, I've played with random_page_cost.
I modified the query to count the number of entries, because fetching all rows was unnecessary, so the query now looks as follows:
SELECT count(*) FROM (
    SELECT
        A.id,
        B.some_field
    FROM A
    JOIN B ON A.b_id = B.id
    WHERE B.datetime_field BETWEEN '2019-03-01 00:00:00' AND '2019-03-15 01:00:00'
) A
And I've tested different cost levels:
RANDOM_PAGE_COST=0.25
Aggregate (cost=3491773.34..3491773.35 rows=1 width=8) (actual time=4166.998..4166.999 rows=1 loops=1)
Buffers: shared hit=1939402
-> Nested Loop (cost=1.00..3490398.51 rows=549932 width=0) (actual time=0.041..3620.975 rows=2462836 loops=1)
Buffers: shared hit=1939402
-> Index Scan using ix_B_datetime_field on B (cost=0.44..24902.79 rows=138927 width=8) (actual time=0.013..364.018 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=311461
-> Index Only Scan using A_b_id_index on A (cost=0.56..17.93 rows=701 width=8) (actual time=0.004..0.007 rows=8 loops=313399)
Index Cond: (b_id = B.id)
Heap Fetches: 2462836
Buffers: shared hit=1627941
Planning time: 0.316 ms
Execution time: 4167.040 ms
RANDOM_PAGE_COST=1
Aggregate (cost=3918191.39..3918191.40 rows=1 width=8) (actual time=281236.100..281236.101 rows=1 loops=1)
" Buffers: shared hit=7531789 read=2567818, temp read=693 written=693"
-> Merge Join (cost=102182.07..3916816.56 rows=549932 width=0) (actual time=243755.551..280666.992 rows=2462836 loops=1)
Merge Cond: (A.b_id = B.id)
" Buffers: shared hit=7531789 read=2567818, temp read=693 written=693"
-> Index Only Scan using A_b_id_index on A (cost=0.56..3685479.55 rows=66652050 width=8) (actual time=0.010..263635.124 rows=64700055 loops=1)
Heap Fetches: 64700055
Buffers: shared hit=7220328 read=2567818
-> Materialize (cost=101543.05..102237.68 rows=138927 width=8) (actual time=523.618..1287.145 rows=2503965 loops=1)
" Buffers: shared hit=311461, temp read=693 written=693"
-> Sort (cost=101543.05..101890.36 rows=138927 width=8) (actual time=523.616..674.736 rows=313399 loops=1)
Sort Key: B.id
Sort Method: external merge Disk: 5504kB
" Buffers: shared hit=311461, temp read=693 written=693"
-> Index Scan using ix_B_datetime_field on B (cost=0.44..88589.92 rows=138927 width=8) (actual time=0.013..322.016 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=311461
Planning time: 0.314 ms
Execution time: 281237.202 ms
RANDOM_PAGE_COST=2
Aggregate (cost=4072947.53..4072947.54 rows=1 width=8) (actual time=166896.775..166896.776 rows=1 loops=1)
" Buffers: shared hit=696849 read=2067171, temp read=194524 written=194516"
-> Hash Join (cost=175785.69..4071572.70 rows=549932 width=0) (actual time=29321.835..166332.812 rows=2462836 loops=1)
Hash Cond: (A.B_id = B.id)
" Buffers: shared hit=696849 read=2067171, temp read=194524 written=194516"
-> Seq Scan on A (cost=0.00..3119079.50 rows=66652050 width=8) (actual time=0.008..108959.789 rows=64700055 loops=1)
Buffers: shared hit=437580 read=2014979
-> Hash (cost=173506.11..173506.11 rows=138927 width=8) (actual time=29321.416..29321.416 rows=313399 loops=1)
Buckets: 131072 (originally 131072) Batches: 8 (originally 2) Memory Usage: 4084kB
" Buffers: shared hit=259269 read=52192, temp written=803"
-> Index Scan using ix_B_datetime_field on B (cost=0.44..173506.11 rows=138927 width=8) (actual time=1.676..29158.413 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=259269 read=52192
Planning time: 7.367 ms
Execution time: 166896.824 ms
Still, it's unclear to me: a cost of 0.25 is best for me, but everywhere I read that for SSDs it should be 1-1.5 (I'm using an AWS instance with SSD storage).
What is weird is that the plan at cost 1 is worse than at 2 or 0.25.
So what value should I pick? Is there any way to calculate it?
In this case the efficiency order is 0.25 > 2 > 1, but what about other cases? How can I be sure that 0.25, which is good for this query, won't break other queries? Do I need to write performance tests for every query I have?

PostgreSQL performance difference with datetime comparison

I'm trying to optimize the performance of my PostgreSQL queries. I noticed a big change in the time required to execute my query when I change the datetime in my query by one second. I'm trying to figure out why there's this drastic change in performance with such a small change in the query. I ran an explain(analyze, buffers) and see there's a difference in how they are operating but I don't understand enough to determine what to do about it. Any help?
Here is the first query
SELECT avg(travel_time_all)
FROM tt_data
WHERE date_time >= '2014-01-01 08:00:00' and
date_time < '2014-01-01 8:14:13' and
(tmc = '118P04252' or tmc = '118P04253' or tmc = '118P04254' or tmc = '118P04255' or tmc = '118P04256')
group by tmc order by tmc
If I increase the later date_time by one second to 2014-01-01 8:14:14 and rerun the query, it drastically increases the execution time.
Here are the results of the explain (analyze, buffers) on the two queries. First query:
GroupAggregate (cost=6251.99..6252.01 rows=1 width=14) (actual time=0.829..0.829 rows=1 loops=1)
Buffers: shared hit=506
-> Sort (cost=6251.99..6252.00 rows=1 width=14) (actual time=0.823..0.823 rows=1 loops=1)
Sort Key: tmc
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=506
-> Bitmap Heap Scan on tt_data (cost=36.29..6251.98 rows=1 width=14) (actual time=0.309..0.817 rows=1 loops=1)
Recheck Cond: ((date_time >= '2014-01-01 08:00:00'::timestamp without time zone) AND (date_time < '2014-01-01 08:14:13'::timestamp without time zone))
Filter: ((tmc = '118P04252'::text) OR (tmc = '118P04253'::text) OR (tmc = '118P04254'::text) OR (tmc = '118P04255'::text) OR (tmc = '118P04256'::text))
Rows Removed by Filter: 989
Buffers: shared hit=506
-> Bitmap Index Scan on tt_data_2_date_time_idx (cost=0.00..36.29 rows=1572 width=0) (actual time=0.119..0.119 rows=990 loops=1)
Index Cond: ((date_time >= '2014-01-01 08:00:00'::timestamp without time zone) AND (date_time < '2014-01-01 08:14:13'::timestamp without time zone))
Buffers: shared hit=7
Total runtime: 0.871 ms
Below is the second query:
GroupAggregate (cost=6257.31..6257.34 rows=1 width=14) (actual time=52.444..52.444 rows=1 loops=1)
Buffers: shared hit=2693
-> Sort (cost=6257.31..6257.32 rows=1 width=14) (actual time=52.438..52.438 rows=1 loops=1)
Sort Key: tmc
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=2693
-> Bitmap Heap Scan on tt_data (cost=6253.28..6257.30 rows=1 width=14) (actual time=52.427..52.431 rows=1 loops=1)
Recheck Cond: ((date_time >= '2014-01-01 08:00:00'::timestamp without time zone) AND (date_time < '2014-01-01 08:14:14'::timestamp without time zone) AND ((tmc = '118P04252'::text) OR (tmc = '118P04253'::text) OR (tmc = '118P04254'::text) OR (...)
Rows Removed by Index Recheck: 5
Buffers: shared hit=2693
-> BitmapAnd (cost=6253.28..6253.28 rows=1 width=0) (actual time=52.410..52.410 rows=0 loops=1)
Buffers: shared hit=2689
-> Bitmap Index Scan on tt_data_2_date_time_idx (cost=0.00..36.31 rows=1574 width=0) (actual time=0.132..0.132 rows=990 loops=1)
Index Cond: ((date_time >= '2014-01-01 08:00:00'::timestamp without time zone) AND (date_time < '2014-01-01 08:14:14'::timestamp without time zone))
Buffers: shared hit=7
-> BitmapOr (cost=6216.71..6216.71 rows=271178 width=0) (actual time=52.156..52.156 rows=0 loops=1)
Buffers: shared hit=2682
-> Bitmap Index Scan on tt_data_2_tmc_idx (cost=0.00..1243.34 rows=54236 width=0) (actual time=8.439..8.439 rows=125081 loops=1)
Index Cond: (tmc = '118P04252'::text)
Buffers: shared hit=483
-> Bitmap Index Scan on tt_data_2_tmc_idx (cost=0.00..1243.34 rows=54236 width=0) (actual time=10.257..10.257 rows=156115 loops=1)
Index Cond: (tmc = '118P04253'::text)
Buffers: shared hit=602
-> Bitmap Index Scan on tt_data_2_tmc_idx (cost=0.00..1243.34 rows=54236 width=0) (actual time=6.867..6.867 rows=102318 loops=1)
Index Cond: (tmc = '118P04254'::text)
Buffers: shared hit=396
-> Bitmap Index Scan on tt_data_2_tmc_idx (cost=0.00..1243.34 rows=54236 width=0) (actual time=13.371..13.371 rows=160566 loops=1)
Index Cond: (tmc = '118P04255'::text)
Buffers: shared hit=619
-> Bitmap Index Scan on tt_data_2_tmc_idx (cost=0.00..1243.34 rows=54236 width=0) (actual time=13.218..13.218 rows=150709 loops=1)
Index Cond: (tmc = '118P04256'::text)
Buffers: shared hit=582
Total runtime: 52.507 ms
Any advice on how to make the second query as fast as the first? I'd like to increase this time interval by a greater amount but don't want the performance to decrease.