Postgres indexing for timestamp range does not work - postgresql

I have the following table with 1,000,000 rows:
create table event
(
id serial
constraint event_pk
primary key,
type text not null,
start_date timestamp not null,
end_date timestamp not null,
val text
);
and I need to execute the following SQL query:
EXPLAIN (analyse, buffers, format text)
SELECT *
from event
WHERE end_date >= '2010-01-12T18:00:00'::timestamp
AND start_date <= '2010-01-13T00:00:00'::timestamp;
Please note that end_date is being compared with a date that is earlier than the one for start_date.
The question is: what index should I create for such a query?
I've tried the following one:
create index my_index
on event (end_date, start_date desc);
But it doesn't work; I can see that a sequential scan is being used:
Seq Scan on event (cost=0.00..53040.01 rows=1967249 width=57) (actual time=0.142..149.163 rows=1971694 loops=1)
Filter: ((end_date >= '2010-01-12 18:00:00'::timestamp without time zone) AND (start_date <= '2010-01-13 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 28307
Buffers: shared hit=15762 read=7278
Planning:
Buffers: shared hit=4
Planning Time: 0.127 ms
Execution Time: 201.610 ms
I cannot understand why my index is not used, because if we just try the following index and query:
create index simple
on event (start_date, end_date DESC);
SELECT *
from event
WHERE event.start_date >= '2010-01-12T18:00:00'::timestamp
AND event.end_date <= '2011-01-13T00:00:00'::timestamp;
the index works just fine:
Bitmap Heap Scan on event (cost=466.91..23418.68 rows=18035 width=57) (actual time=1.954..8.551 rows=15944 loops=1)
Recheck Cond: ((start_date >= '2010-01-12 00:00:00'::timestamp without time zone) AND (end_date <= '2011-01-13 00:00:00'::timestamp without time zone))
Heap Blocks: exact=7694
Buffers: shared hit=7734 read=26
-> Bitmap Index Scan on simple (cost=0.00..462.40 rows=18035 width=0) (actual time=1.314..1.314 rows=15944 loops=1)
Index Cond: ((start_date >= '2010-01-12 00:00:00'::timestamp without time zone) AND (end_date <= '2011-01-13 00:00:00'::timestamp without time zone))
Buffers: shared hit=55 read=11
Planning:
Buffers: shared hit=8
Planning Time: 0.133 ms
Execution Time: 9.025 ms
This query does not do what I need, but I'm just wondering why this index works for this query, while my index above does not work for the query I actually need.

The answer is right there in the execution plan:
Seq Scan on event (...) (actual ... rows=1971694 ...)
Filter: ((end_date >= '2010-01-12 18:00:00'::timestamp without time zone) AND (start_date <= '2010-01-13 00:00:00'::timestamp without time zone))
Rows Removed by Filter: 28307
The query returns almost two million rows, and the filter removes only about 28,000 of them. It is more efficient to use a sequential scan than an index scan in that case.
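You can confirm that the planner's choice is sound with a quick experiment: disable sequential scans for the session and compare the plans. This is only a diagnostic sketch using the standard enable_seqscan setting; if the forced index plan comes out slower, the planner was right.
-- Force the planner away from sequential scans, then compare timings:
SET enable_seqscan = off;
EXPLAIN (analyse, buffers, format text)
SELECT *
from event
WHERE end_date >= '2010-01-12T18:00:00'::timestamp
AND start_date <= '2010-01-13T00:00:00'::timestamp;
RESET enable_seqscan;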

Related

postgres NOW() function taking too long vs string equivalent

This is my first question on StackOverflow, so forgive me if it is not properly structured.
I have a table t_table with a datetime column d_datetime, and I need to filter data from the past 5 days. All of the following work locally, where I have less data:
query 1.
SELECT * FROM t_table
WHERE d_datetime
BETWEEN '2020-08-28T00:00:00.024Z' AND '2020-09-02T00:00:00.024Z';
query 2.
SELECT * FROM t_table
WHERE d_datetime
BETWEEN (NOW() - INTERVAL '5 days') AND NOW();
query 3.
SELECT * FROM t_table
WHERE d_datetime > NOW() - INTERVAL '5 days';
However, when I move to the live database, only the first query runs to completion, in about 10 seconds. I cannot tell why, but the other two just hang, consuming too much processing power, and I haven't once seen them run to completion, even after waiting up to 5 minutes.
I have tried automatically generating the d_datetime strings used in the first query with:
query 4.
SELECT * FROM t_table
WHERE d_datetime
BETWEEN
(TO_CHAR(NOW() - INTERVAL '5 days', 'YYYY-MM-ddThh:MI:SS.024Z'))
AND
(TO_CHAR(NOW(), 'YYYY-MM-ddThh:MI:SS.024Z'))
but it throws the following error:
operator does not exist: timestamp without time zone >= text
My questions are:
Is there any particular reason why query 1 is so fast while the rest take so much longer to run on a large dataset?
Why does query 4 fail when it practically generates the same string format as query 1 ('YYYY-MM-ddThh:mm:ss.024Z')?
The following is the EXPLAIN result for the first query:
EXPLAIN SELECT * FROM t_table
WHERE d_datetime
BETWEEN '2020-08-28T00:00:00.024Z' AND '2020-09-02T00:00:00.024Z';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize HashAggregate (cost=31346.37..31788.13 rows=35341 width=22) (actual time=388.622..388.845 rows=6 loops=1)
Output: count(_hyper_12_67688_chunk.octets), _hyper_12_67688_chunk.application, (date_trunc('day'::text, _hyper_12_67688_chunk.entry_time))
Group Key: (date_trunc('day'::text, _hyper_12_67688_chunk.entry_time)), _hyper_12_67688_chunk.application
Buffers: shared hit=17193
-> Gather (cost=27105.45..31081.31 rows=35341 width=22) (actual time=377.109..398.285 rows=11 loops=1)
Output: _hyper_12_67688_chunk.application, (date_trunc('day'::text, _hyper_12_67688_chunk.entry_time)), (PARTIAL count(_hyper_12_67688_chunk.octets))
Workers Planned: 1
Workers Launched: 1
Buffers: shared hit=17193
-> Partial HashAggregate (cost=26105.45..26547.21 rows=35341 width=22) (actual time=174.272..174.535 rows=6 loops=2)
Output: _hyper_12_67688_chunk.application, (date_trunc('day'::text, _hyper_12_67688_chunk.entry_time)), PARTIAL count(_hyper_12_67688_chunk.octets)
Group Key: date_trunc('day'::text, _hyper_12_67688_chunk.entry_time), _hyper_12_67688_chunk.application
Buffers: shared hit=17193
Worker 0: actual time=27.942..28.206 rows=5 loops=1
Buffers: shared hit=579
-> Result (cost=1.73..25272.75 rows=111027 width=18) (actual time=0.805..141.094 rows=94662 loops=2)
Output: _hyper_12_67688_chunk.application, date_trunc('day'::text, _hyper_12_67688_chunk.entry_time), _hyper_12_67688_chunk.octets
Buffers: shared hit=17193
Worker 0: actual time=1.576..23.928 rows=6667 loops=1
Buffers: shared hit=579
-> Parallel Append (cost=1.73..23884.91 rows=111027 width=18) (actual time=0.800..114.488 rows=94662 loops=2)
Buffers: shared hit=17193
Worker 0: actual time=1.572..20.204 rows=6667 loops=1
Buffers: shared hit=579
-> Parallel Bitmap Heap Scan on _timescaledb_internal._hyper_12_67688_chunk (cost=1.73..11.23 rows=8 width=17) (actual time=1.570..1.618 rows=16 loops=1)
Output: _hyper_12_67688_chunk.octets, _hyper_12_67688_chunk.application, _hyper_12_67688_chunk.entry_time
Recheck Cond: ((_hyper_12_67688_chunk.entry_time >= '2020-08-28 05:45:03.024'::timestamp without time zone) AND (_hyper_12_67688_chunk.entry_time <= '2020-09-02 11:45:03.024'::timestamp without time zone))
Filter: ((_hyper_12_67688_chunk.application)::text = 'dns'::text)
Rows Removed by Filter: 32
Buffers: shared hit=11
Worker 0: actual time=1.570..1.618 rows=16 loops=1
Buffers: shared hit=11
-> Bitmap Index Scan on _hyper_12_67688_chunk_dpi_applications_entry_time_idx (cost=0.00..1.73 rows=48 width=0) (actual time=1.538..1.538 rows=48 loops=1)
Index Cond: ((_hyper_12_67688_chunk.entry_time >= '2020-08-28 05:45:03.024'::timestamp without time zone) AND (_hyper_12_67688_chunk.entry_time <= '2020-09-02 11:45:03.024'::timestamp without time zone))
Buffers: shared hit=2
Worker 0: actual time=1.538..1.538 rows=48 loops=1
Buffers: shared hit=2
-> Parallel Index Scan Backward using _hyper_12_64752_chunk_dpi_applications_entry_time_idx on _timescaledb_internal._hyper_12_64752_chunk (cost=0.14..2.36 rows=1 width=44) (actual time=0.040..0.076 rows=52 loops=1)
Output: _hyper_12_64752_chunk.octets, _hyper_12_64752_chunk.application, _hyper_12_64752_chunk.entry_time
Index Cond: ((_hyper_12_64752_chunk.entry_time >= '2020-08-28 05:45:03.024'::timestamp without time zone) AND (_hyper_12_64752_chunk.entry_time <= '2020-09-02 11:45:03.024'::timestamp without time zone))
Filter: ((_hyper_12_64752_chunk.application)::text = 'dns'::text)
Rows Removed by Filter: 52
Buffers: shared hit=
-- cut logs
-> Parallel Seq Scan on _timescaledb_internal._hyper_12_64814_chunk (cost=0.00..2.56 rows=14 width=17) (actual time=0.017..0.038 rows=32 loops=1)
Output: _hyper_12_64814_chunk.octets, _hyper_12_64814_chunk.application, _hyper_12_64814_chunk.entry_time
Filter: ((_hyper_12_64814_chunk.entry_time >= '2020-08-28 05:45:03.024'::timestamp without time zone) AND (_hyper_12_64814_chunk.entry_time <= '2020-09-02 11:45:03.024'::timestamp without time zone) AND ((_hyper_12_64814_chunk.application)::text = 'dns'::text))
Rows Removed by Filter: 40
Buffers: shared hit=2
-> Parallel Seq Scan on _timescaledb_internal._hyper_12_62262_chunk (cost=0.00..2.54 rows=9 width=19) (actual time=0.027..0.039 rows=15 loops=1)
Output: _hyper_12_62262_chunk.octets, _hyper_12_62262_chunk.application, _hyper_12_62262_chunk.entry_time
Filter: ((_hyper_12_62262_chunk.entry_time >= '2020-08-28 05:45:03.024'::timestamp without time zone) AND (_hyper_12_62262_chunk.entry_time <= '2020-09-02 11:45:03.024'::timestamp without time zone) AND ((_hyper_12_62262_chunk.application)::text = 'dns'::text))
Rows Removed by Filter: 37
Buffers: shared hit=2
Planning Time: 3367.445 ms
Execution Time: 417.245 ms
(7059 rows)
The Parallel Index Scan Backward using... log continues for all hypertable chunks in the table.
The other three queries mentioned above as unsuccessful still do not run to completion and eventually fill up the memory, so I cannot post their EXPLAIN results, sorry.
Please let me know if my question has not been properly structured. Thanks.
You are using a partitioned table that likely has a lot of partitions, because planning the query takes over 3 seconds.
You are probably using PostgreSQL v11 or earlier. v12 introduced partition pruning at execution time, while v11 can only exclude partitions at query planning time.
In your first query the WHERE condition contains constants, so that works. In the other queries, the function now() is used, whose result value is only known at query execution time (it is STABLE, not IMMUTABLE), so partition pruning cannot take place at query planning time. Query planning and execution need not happen at the same time – think of prepared statements.
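(As for query 4: to_char() returns text, and there is no implicit cast from text to timestamp for the comparison, hence the error.)
A sketch of a workaround on v11: resolve now() outside the query and send the boundaries back as literal constants, so that pruning can happen at plan time. The literal values below are placeholders:
-- Step 1: compute the boundaries (in the application or a separate query).
SELECT NOW() - INTERVAL '5 days', NOW();
-- Step 2: use the results as constants in the actual query.
SELECT * FROM t_table
WHERE d_datetime
BETWEEN '2020-08-28 00:00:00' AND '2020-09-02 00:00:00';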

Return all requests from a specific date that are not finished postgresql

I need to write a query that returns all requests that are not finished or canceled, from the beginning of the records up to a specific date. The way I'm doing it right now takes too much time and returns an error: 'User query might have needed to see row versions that must be removed' (my guess is that it's due to a lack of RAM).
Below is the query I'm using, and here is some information:
T1, where each new entry is saved, with an ID, creation date, status (open, closed) and other keys for several tables.
T2, where each change made to each request is saved (in progress, waiting, rejected and closed), with the date of the change and other keys for other tables.
SELECT T1.id_request,
T1.dt_created,
T1.status
FROM T1
LEFT JOIN T2
ON T1.id_request = T2.id_request
WHERE (T1.dt_created >= '2012-01-01 00:00:00' AND T1.dt_created <= '2020-05-31 23:59:59')
AND T1.id_request NOT IN (SELECT T2.id_request
FROM T2
WHERE ((T2.dt_change >= '2012-01-01 00:00:00'
AND T2.dt_change <= '2020-05-31 23:59:59')
OR T2.dt_change IS NULL)
AND T2.status IN ('Closed','Canceled','rejected'))
My idea was to get everything that came in through T1 (I can't just retrieve what is open; that would only work for today, not for a specific past date, which is what I want) between the beginning of the records and, let's say, the end of May, and then use WHERE T1.ID NOT IN (T2.ID with status 'closed' in the same period). But as I've said, it takes forever and returns an error.
I use this same code to get what was open for a specific month (1st to 30th) and it works perfectly fine.
Maybe this is not the best approach, but I couldn't think of any other way (I'm not an expert with SQL). If there's not enough information to provide an answer, feel free to ask.
As requested by @MikeOrganek, here is the EXPLAIN ANALYZE output:
Nested Loop Left Join (cost=27985.55..949402.48 rows=227455 width=20) (actual time=2486.433..54832.280 rows=47726 loops=1)
Buffers: shared hit=293242 read=260670
Seq Scan on T1 (cost=27984.99..324236.82 rows=73753 width=20) (actual time=2467.499..6202.970 rows=16992 loops=1)
Filter: ((dt_created >= '2020-05-01 00:00:00-03'::timestamp with time zone) AND (dt_created <= '2020-05-31 23:59:59-03'::timestamp with time zone) AND (NOT (hashed SubPlan 1)))
Rows Removed by Filter: 6085779
Buffers: shared hit=188489 read=250098
SubPlan 1
Nested Loop (cost=7845.36..27983.13 rows=745 width=4) (actual time=129.379..1856.518 rows=168690 loops=1)
Buffers: shared hit=60760
Seq Scan on T3 (cost=0.00..5.21 rows=3 width=8) (actual time=0.057..0.104 rows=3 loops=1)
Filter: ((status_request)::text = ANY ('{Closed,Canceled,rejected}'::text[]))
Rows Removed by Filter: 125
Buffers: shared hit=7
Bitmap Heap Scan on T2 (cost=7845.36..9321.70 rows=427 width=8) (actual time=477.324..607.171 rows=56230 loops=3)
Recheck Cond: ((dt_change >= '2020-05-01 00:00:00-03'::timestamp with time zone) AND (dt_change <= '2020-05-31 23:59:59-03'::timestamp with time zone) AND (T2.ID_status = T3.ID_status))
Rows Removed by Index Recheck: 87203
Heap Blocks: exact=36359
Buffers: shared hit=60753
BitmapAnd (cost=7845.36..7845.36 rows=427 width=0) (actual time=473.864..473.864 rows=0 loops=3)
Buffers: shared hit=24394
Bitmap Index Scan on idx_ix_T2_dt_change (cost=0.00..941.81 rows=30775 width=0) (actual time=47.380..47.380 rows=306903 loops=3)
Index Cond: ((dt_change >= '2020-05-01 00:00:00-03'::timestamp with time zone) AND (dt_change <= '2020-05-31 23:59:59-03'::timestamp with time zone))
Buffers: shared hit=2523
Bitmap Index Scan on idx_T2_ID_status (cost=0.00..6895.49 rows=262724 width=0) (actual time=418.942..418.942 rows=2105165 loops=3)
Index Cond: (ID_status = T3.ID_status)
Buffers: shared hit=21871
Index Only Scan using idx_ix_T2_id_request on T2 (cost=0.56..8.30 rows=18 width=4) (actual time=0.369..2.859 rows=3 loops=16992)
Index Cond: (id_request = T1.id_request)
Heap Fetches: 44807
Buffers: shared hit=104753 read=10572
Planning time: 23.424 ms
Execution time: 54841.261 ms
And here is the main difference with dt_change IS NULL:
Planning time: 34.320 ms
Execution time: 230683.865 ms
Thanks
It looks like the OR T2.dt_change IS NULL is very costly, in that it increases overall execution time by a factor of five.
The only option I can see is changing the NOT IN to a NOT EXISTS, as below. Unlike NOT IN, NOT EXISTS is not tripped up by NULLs in the subquery and can be planned as an anti-join.
SELECT T1.id_request,
T1.dt_created,
T1.status
FROM T1
LEFT JOIN T2
ON T1.id_request = T2.id_request
WHERE T1.dt_created >= '2012-01-01 00:00:00'
AND T1.dt_created <= '2020-05-31 23:59:59'
AND NOT EXISTS (SELECT 1
FROM T2
WHERE id_request = T1.id_request
AND ( ( dt_change >= '2012-01-01 00:00:00'
AND dt_change <= '2020-05-31 23:59:59')
OR dt_change IS NULL)
AND status IN ('Closed','Canceled','rejected'))
But I expect that to give you only a marginal improvement. Can you please see how much this change helps?
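If that is not enough, a supporting index might help the probe. This is only a sketch, with names assumed from the simplified schema in the question; it lets the NOT EXISTS subquery check the status and date conditions without visiting the heap for most rows:
-- Hypothetical supporting index for the NOT EXISTS probe:
CREATE INDEX idx_t2_request_status_change
ON T2 (id_request, status, dt_change);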

Postgresql Index Only Scan Doesn't Properly Work On Group By

I have a table like:
CREATE TABLE summary
(
id serial NOT NULL,
user_id bigint NOT NULL,
country character varying(5),
product_id bigint NOT NULL,
category_id bigint NOT NULL,
text_id bigint NOT NULL,
text character varying(255),
product_type integer NOT NULL,
event_name character varying(255),
report_date date NOT NULL,
currency character varying(5),
revenue double precision,
last_event_time timestamp
);
My table size is 1786 MB (excluding indexes). I've created the following index:
CREATE INDEX "idx_as_type_usr_productId_eventTime"
ON summary USING btree
(product_type, user_id, product_id, last_event_time)
INCLUDE(event_name);
And my simple query looks like this:
select
event_name,
max(last_event_time)
from summary s
where s.user_id = ? and s.product_id = ? and s.product_type = ?
and s.last_event_time > '2020-03-01' and s.last_event_time < '2020-03-25'
group by event_name;
When I explain it, it looks like this:
HashAggregate (cost=93.82..96.41 rows=259 width=25) (actual time=9187.533..9187.536 rows=10 loops=1)
Group Key: event_name
Buffers: shared hit=70898 read=10579 dirtied=22650
I/O Timings: read=3876.367
-> Index Only Scan using "idx_as_type_usr_productId_eventTime" on summary s (cost=0.56..92.36 rows=292 width=25) (actual time=0.485..9153.812 rows=87322 loops=1)
Index Cond: ((product_type = 2) AND (user_id = ?) AND (product_id = ?) AND (last_event_time > '2020-03-01 00:00:00'::timestamp without time zone) AND (last_event_time < '2020-03-25 00:00:00'::timestamp without time zone))
Heap Fetches: 35967
Buffers: shared hit=70898 read=10579 dirtied=22650
I/O Timings: read=3876.367
Planning Time: 0.452 ms
Execution Time: 9187.583 ms
Everything looks fine here. But when I execute the query, it takes more than 10 seconds, sometimes more than 30 seconds.
If I execute it without GROUP BY, it returns quickly, in less than 2 seconds. What can be the effect of GROUP BY? The result set is small (around 500 rows).
This table gets about 30 insert/update operations per second. Could that be related to this indexing problem?
Update:
Query without GROUP BY:
select
event_name,
last_event_time
from summary s
where s.user_id = ? and s.product_id = ? and s.product_type = ?
and s.last_event_time > '2020-03-01' and s.last_event_time < '2020-03-25';
EXPLAIN without GROUP BY:
Index Only Scan using "idx_as_type_usr_productId_eventTime" on summary s (cost=0.56..92.36 rows=292 width=25) (actual time=0.023..79.138 rows=87305 loops=1)
Index Cond: ((product_type = ?) AND (user_id = ?) AND (product_id = ?) AND (last_event_time > '2020-03-01 00:00:00'::timestamp without time zone) AND (last_event_time < '2020-03-25 00:00:00'::timestamp without time zone))
Heap Fetches: 22949
Buffers: shared hit=37780 read=12143 dirtied=15156
I/O Timings: read=4418.930
Planning Time: 0.639 ms
Execution Time: 4625.213 ms
There are several problems:
PostgreSQL had to set hint bits, which dirty the pages and cause writes.
PostgreSQL has to fetch table rows from disk to check their visibility.
PostgreSQL has to scan 80000 pages to get 87000 rows, so the index must be totally bloated.
The first two can be taken care of by running
VACUUM summary;
which is always a good idea after a bulk load, and the bloat can be cured by
REINDEX INDEX "idx_as_type_usr_productId_eventTime";
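Given the roughly 30 writes per second, it may also be worth making autovacuum more aggressive on this table so the problem does not build up again. A sketch using standard storage parameters (the thresholds are illustrative, not a recommendation):
-- Vacuum/analyze after ~1% of the table changes instead of the defaults:
ALTER TABLE summary SET (
autovacuum_vacuum_scale_factor = 0.01,
autovacuum_analyze_scale_factor = 0.01
);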

Efficiently counting rows by date, adjusted for timezone

I have a table with a schema that looks like this:
id (uuid; pk)
timestamp (timestamp)
category (bpchar)
flagged_as_spam (bool)
flagged_as_bot (bool)
... (other metadata)
I have an index on this table that looks like this:
CREATE INDEX time_index ON events_table USING btree (flagged_as_bot, flagged_as_spam, category, "timestamp") WHERE ((flagged_as_bot = false) AND (flagged_as_spam = false))
I run queries against this table to generate line charts representing the number of events that occurred each day. However, I want the line chart to be adjusted for the user's timezone. Currently, I have a query that looks like this:
SELECT
date_trunc('day', timestamp + INTERVAL '-5 hour') AS ts,
category,
COUNT(*) AS count
FROM
events_table
WHERE
category = 'the category'
AND flagged_as_bot = FALSE
AND flagged_as_spam = FALSE
AND timestamp >= '2018-05-04T00:00:00'::timestamp
AND timestamp < '2018-10-31T17:57:59.661664'::timestamp
GROUP BY
ts,
category
ORDER BY
1 ASC
In most cases, for categories with under 100,000 records, this is quite fast:
GroupAggregate (cost=8908.56..8958.18 rows=1985 width=70) (actual time=752.886..753.301 rows=124 loops=1)
Group Key: (date_trunc('day'::text, ("timestamp" + '-05:00:00'::interval))), category
-> Sort (cost=8908.56..8913.52 rows=1985 width=62) (actual time=752.878..752.983 rows=797 loops=1)
Sort Key: (date_trunc('day'::text, ("timestamp" + '-05:00:00'::interval)))
Sort Method: quicksort Memory: 137kB
-> Bitmap Heap Scan on listens (cost=552.79..8799.83 rows=1985 width=62) (actual time=748.683..752.568 rows=797 loops=1)
Recheck Cond: ((category = '7248c3b8-727e-4357-a267-e9b0e3e36d4b'::bpchar) AND ("timestamp" >= '2018-05-04 00:00:00'::timestamp without time zone) AND ("timestamp" < '2018-10-31 17:57:59.661664'::timestamp without time zone))
Filter: ((NOT flagged_as_bot) AND (NOT flagged_as_spam))
Rows Removed by Filter: 1576
Heap Blocks: exact=1906
-> Bitmap Index Scan on time_index (cost=0.00..552.30 rows=2150 width=0) (actual time=748.324..748.324 rows=2373 loops=1)
Index Cond: ((category = '7248c3b8-727e-4357-a267-e9b0e3e36d4b'::bpchar) AND ("timestamp" >= '2018-05-04 00:00:00'::timestamp without time zone) AND ("timestamp" < '2018-10-31 17:57:59.661664'::timestamp without time zone))
Planning time: 0.628 ms
Execution time: 753.362 ms
For categories with a very large number of records (>100,000), the index is not used and the query is very slow:
GroupAggregate (cost=1232229.95..1287491.60 rows=2126204 width=70) (actual time=14649.671..17178.955 rows=181 loops=1)
Group Key: (date_trunc('day'::text, ("timestamp" + '-05:00:00'::interval))), category
-> Sort (cost=1232229.95..1238072.10 rows=2336859 width=62) (actual time=14643.887..16031.031 rows=3070695 loops=1)
Sort Key: (date_trunc('day'::text, ("timestamp" + '-05:00:00'::interval)))
Sort Method: external merge Disk: 216200kB
-> Seq Scan on listens (cost=0.00..809314.38 rows=2336859 width=62) (actual time=0.015..9572.722 rows=3070695 loops=1)
Filter: ((NOT flagged_as_bot) AND (NOT flagged_as_spam) AND ("timestamp" >= '2018-05-04 00:00:00'::timestamp without time zone) AND ("timestamp" < '2018-10-31 17:57:59.661664'::timestamp without time zone) AND (category = '3b634b32-bb82-4f56-ada4-f4b7bc4288a5'::bpchar))
Rows Removed by Filter: 8788028
Planning time: 0.239 ms
Execution time: 17228.314 ms
My assumption is that the index is not used because the overhead of using it is far higher than simply performing a table scan. And of course, I imagine this is because of the use of date_trunc to calculate the date to group by.
I've considered what could be done here. Here are some of my thoughts:
Most simply, I could create an expression index for each timezone offset that I care about (generally GMT/EST/CST/MST/PST). This would take up a good deal of space and each index would be infrequently used, but it would theoretically allow for index-only scans (see the sketch after this list).
I could create an expression index truncated by hour. I don't know if this would help Postgres to optimize the query, though.
I could calculate each of the date ranges in advance and using some subquery magic, query the event count per range. I also don't know if this will yield any improvement.
Before I go down a rabbit hole, I figured I'd reach out to see if anyone has any thoughts.
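For what it's worth, a sketch of the first option for a single offset (EST here, with the names taken from the question): date_trunc over a timestamp without time zone plus a fixed interval is immutable, so the expression can be indexed, and the predicate matches the existing partial index.
-- One expression index per supported offset (EST shown):
CREATE INDEX time_index_est ON events_table USING btree
(category, (date_trunc('day', "timestamp" + INTERVAL '-5 hour')))
WHERE flagged_as_bot = false AND flagged_as_spam = false;
For the planner to use it, the query's GROUP BY expression would have to match the indexed expression exactly.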

postgres how to debug why planning time is too long?

version - postgres 9.6.
I was not very clear in a question I asked in the past, and someone has already answered there, so I thought it best to post a new question with clearer information and be more specific.
I am trying to join an event table with a dimension table.
The event table is a daily-partitioned table (3k children) with check constraints. It has 72 columns (I suspect this is the issue).
I simplified the query to demonstrate the problem (in practice the range is wider and I query fields from both tables).
You can see that for this simple query, planning takes almost 10 seconds (my question is about planning time, not execution time).
If I query the child table directly (please don't suggest a UNION over all children in the range), planning takes a few ms.
explain analyze select campaign_id , spent as spent from events_daily r left join report_campaigns c on r.campaign_id = c.c_id where date >= '20170720' and date < '20170721' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Left Join (cost=0.29..28.88 rows=2 width=26) (actual time=0.021..0.021 rows=0 loops=1)
-> Append (cost=0.00..12.25 rows=2 width=26) (actual time=0.003..0.003 rows=0 loops=1)
-> Seq Scan on events_daily r (cost=0.00..0.00 rows=1 width=26) (actual time=0.002..0.002 rows=0 loops=1)
Filter: ((date >= '2017-07-20 00:00:00'::timestamp without time zone) AND (date < '2017-07-21 00:00:00'::timestamp without time zone))
-> Seq Scan on events_daily_20170720 r_1 (cost=0.00..12.25 rows=1 width=26) (actual time=0.000..0.000 rows=0 loops=1)
Filter: ((date >= '2017-07-20 00:00:00'::timestamp without time zone) AND (date < '2017-07-21 00:00:00'::timestamp without time zone))
-> Index Only Scan using report_campaigns_campaign_idx on report_campaigns c (cost=0.29..8.31 rows=1 width=8) (never executed)
Index Cond: (c_id = r.campaign_id)
Heap Fetches: 0
Planning time: 8393.337 ms
Execution time: 0.132 ms
(11 rows)
explain analyze select campaign_id , spent as spent from events_daily_20170720 r left join report_campaigns c on r.campaign_id = c.c_id where date >= '20170720' and date < '20170721' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Left Join (cost=0.29..20.57 rows=1 width=26) (actual time=0.008..0.008 rows=0 loops=1)
-> Seq Scan on events_daily_20170720 r (cost=0.00..12.25 rows=1 width=26) (actual time=0.007..0.007 rows=0 loops=1)
Filter: ((date >= '2017-07-20 00:00:00'::timestamp without time zone) AND (date < '2017-07-21 00:00:00'::timestamp without time zone))
-> Index Only Scan using report_campaigns_campaign_idx on report_campaigns c (cost=0.29..8.31 rows=1 width=8) (never executed)
Index Cond: (c_id = r.campaign_id)
Heap Fetches: 0
Planning time: 0.242 ms
Execution time: 0.059 ms
\d events_daily_20170720
date | timestamp without time zone |
Check constraints:
"events_daily_20170720_date_check" CHECK (date >= '2017-07-20 00:00:00'::timestamp without time zone AND date < '2017-07-21 00:00:00'::timestamp without time zone)
Inherits: events_daily
show constraint_exclusion;
constraint_exclusion
----------------------
on
When running ltrace, it seems that it runs this thousands of times for each field (a hint that the planner visits all partition tables):
strlen("process") = 7
memcpy(0x0b7aac10, "process", 8) = 0x0b7aac10
strlen("channel") = 7
memcpy(0x0b7aac68, "channel", 8) = 0x0b7aac68
strlen("deleted") = 7
memcpy(0x0b7aacc0, "deleted", 8) = 0x0b7aacc0
strlen("old_spent") = 9
memcpy(0x0b7aad18, "old_spent", 10)
The problem is that you have too many partitions.
As the documentation warns:
All constraints on all partitions of the master table are examined during constraint exclusion,
so large numbers of partitions are likely to increase query planning time considerably.
Partitioning using these techniques will work well with up to perhaps a hundred partitions;
don't try to use many thousands of partitions.
You should try to reduce the number of partitions by using a longer time interval for each partition.
Alternatively, you could try to change the application code to directly access the correct partition if possible, but that might prove difficult and it removes many advantages that partitioning should bring.
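As a sketch of the first suggestion, assuming the same inheritance-based partitioning and a hypothetical monthly naming scheme: roughly 36 monthly children would replace about 3,000 daily ones, keeping constraint exclusion cheap.
-- Hypothetical monthly child replacing ~30 daily ones:
CREATE TABLE events_201707 (
CHECK (date >= '2017-07-01 00:00:00'::timestamp AND date < '2017-08-01 00:00:00'::timestamp)
) INHERITS (events_daily);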