Very slow query on GROUP BY - PostgreSQL

I have a really slow query (~100 min). I have omitted a lot of the inner child nodes, denoting them with a trailing ...
HashAggregate (cost=6449635645.84..6449635742.59 rows=1290 width=112) (actual time=5853093.882..5853095.159 rows=785 loops=1)
Group Key: p.processid
-> Nested Loop (cost=10851145.36..6449523319.09 rows=832050 width=112) (actual time=166573.289..5853043.076 rows=3904 loops=1)
Join Filter: (SubPlan 2)
Rows Removed by Join Filter: 617040
-> Merge Left Join (cost=5425572.68..5439530.95 rows=1290 width=799) (actual time=80092.782..80114.828 rows=788 loops=1) ...
-> Materialize (cost=5425572.68..5439550.30 rows=1290 width=112) (actual time=109.689..109.934 rows=788 loops=788) ...
SubPlan 2
-> Limit (cost=3869.12..3869.13 rows=5 width=8) (actual time=9.155..9.156 rows=5 loops=620944) ...
Planning time: 1796.764 ms
Execution time: 5853316.418 ms
(2836 rows)
The above plan is from a query executed against the view, whose (simplified) schema is below:
create or replace view foo_bar_view(processid, column_1, run_count) as
SELECT
q.processid,
q.column_1,
q.run_count
FROM
(
SELECT
r.processid,
avg(h.some_column) AS column_1,
-- many more aggregate function on many more columns
count(1) AS run_count
FROM
foo_bar_table r,
foo_bar_table h
WHERE (h.processid IN (SELECT p.processid
FROM process p
LEFT JOIN bar i ON p.barid = i.id
LEFT JOIN foo ii ON i.fooid = ii.fooid
JOIN foofoobar pt ON p.typeid = pt.typeid AND pt.displayname ~~
((SELECT ('%'::text || property.value) || '%'::text
FROM property
WHERE property.name = 'something'::text))
WHERE p.processid < r.processid
AND (ii.name = r.foo_name OR ii.name IS NULL AND r.foo_name IS NULL)
ORDER BY p.processid DESC
LIMIT 5))
GROUP BY r.processid
) q;
I would just like to understand: does this mean that most of the time is spent performing the GROUP BY processid?
If not, what is causing the issue? I can't think of a reason why this query is so slow.
The aggregate functions used are avg, min, max, stddev.
A total of 52 of them were used, 4 on each of the 13 columns.
Update: Expanding the child node of SubPlan 2, we can see that the Bitmap Index Scan on process_pkey is the bottleneck.
-> Bitmap Heap Scan on process p_30 (cost=1825.89..3786.00 rows=715 width=24) (actual time=8.642..8.833 rows=394 loops=620944)
Recheck Cond: ((typeid = pt_30.typeid) AND (processid < p.processid))
Heap Blocks: exact=185476288
-> BitmapAnd (cost=1825.89..1825.89 rows=715 width=0) (actual time=8.611..8.611 rows=0 loops=620944)
-> Bitmap Index Scan on ix_process_typeid (cost=0.00..40.50 rows=2144 width=0) (actual time=0.077..0.077 rows=788 loops=620944)
Index Cond: (typeid = pt_30.typeid)
-> Bitmap Index Scan on process_pkey (cost=0.00..1761.20 rows=95037 width=0) (actual time=8.481..8.481 rows=145093 loops=620944)
Index Cond: (processid < p.processid)
What I am unable to figure out is why it is using a Bitmap Index Scan rather than an Index Scan. From what I can tell, there should only be 788 rows that need to be compared. Wouldn't that be faster? If not, how can I optimise this query?
processid is of bigint type and has an index.
The complete execution plan is here.

You conveniently left out the names of the tables in the execution plan, but I assume that the nested loop join is between foo_bar_table r and foo_bar_table h, and the subplan is the IN condition.
The high execution time is caused by the subplan, which is executed once per potential join result, that is 788 * 788 = 620944 times; 620944 * 9.156 ms accounts for 5685363 milliseconds, almost the entire execution time.
Create this index:
CREATE INDEX ON process (typeid, processid, installationid);
And run VACUUM:
VACUUM process;
That should give you a fast index-only scan.
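To verify, you could run the correlated part of the subquery on its own and check the plan (a sketch; the literal values are placeholders standing in for one outer row):
-- After creating the index and vacuuming, this should show an
-- Index Only Scan on the new index instead of the BitmapAnd of
-- ix_process_typeid and process_pkey.
EXPLAIN (ANALYZE, BUFFERS)
SELECT p.processid
FROM process p
WHERE p.typeid = 42          -- placeholder typeid
  AND p.processid < 1000000  -- placeholder bound from the outer row
ORDER BY p.processid DESC
LIMIT 5;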

Related

Why does this query not trigger partition pruning in Postgres?

I just noticed Postgres behavior (checked on versions 13 and 14) that surprised me. I have a simple table volume with an id and a unique text column name. A second table dir has 3 columns: id, volume_id and path. This table is partitioned by hash on the volume_id column. Here is the full table schema with sample data:
CREATE TABLE dir (
id BIGSERIAL,
volume_id BIGINT,
path TEXT
) PARTITION BY HASH (volume_id);
CREATE TABLE dir_0
PARTITION OF dir FOR VALUES WITH (modulus 3, remainder 0);
CREATE TABLE dir_1
PARTITION OF dir FOR VALUES WITH (modulus 3, remainder 1);
CREATE TABLE dir_2
PARTITION OF dir FOR VALUES WITH (modulus 3, remainder 2);
CREATE TABLE volume(
id BIGINT,
name TEXT UNIQUE
);
INSERT INTO volume (id, name) VALUES (1, 'vol1'), (2, 'vol2'), (3, 'vol3');
INSERT INTO dir (volume_id, path) SELECT i % 3 + 1, 'path_' || i FROM generate_series(1,1000) AS i;
Now, given the volume name, I need to find all the rows from the dir table on that volume. I can do that in 2 different ways.
Query #1
EXPLAIN ANALYZE
SELECT * FROM dir AS d
INNER JOIN volume AS v ON d.volume_id = v.id
WHERE v.name = 'vol1';
Which produces query plan:
QUERY PLAN
Hash Join (cost=1.05..31.38 rows=333 width=37) (actual time=0.186..0.302 rows=333 loops=1)
Hash Cond: (d.volume_id = v.id)
-> Append (cost=0.00..24.00 rows=1000 width=24) (actual time=0.006..0.154 rows=1000 loops=1)
-> Seq Scan on dir_0 d_1 (cost=0.00..6.34 rows=334 width=24) (actual time=0.006..0.032 rows=334 loops=1)
-> Seq Scan on dir_1 d_2 (cost=0.00..6.33 rows=333 width=24) (actual time=0.006..0.029 rows=333 loops=1)
-> Seq Scan on dir_2 d_3 (cost=0.00..6.33 rows=333 width=24) (actual time=0.004..0.026 rows=333 loops=1)
-> Hash (cost=1.04..1.04 rows=1 width=13) (actual time=0.007..0.007 rows=1 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on volume v (cost=0.00..1.04 rows=1 width=13) (actual time=0.003..0.004 rows=1 loops=1)
Filter: (name = 'vol1'::text)
Rows Removed by Filter: 2
Planning Time: 0.500 ms
Execution Time: 0.364 ms
As you can see this query leads to a sequential scan on all 3 partitions of the dir table.
Alternatively, we can write this query like this:
Query #2
EXPLAIN ANALYZE
SELECT * FROM dir AS d
WHERE volume_id = (SELECT id FROM volume AS v WHERE v.name = 'vol1');
In that case we get following query plan:
QUERY PLAN
Append (cost=1.04..27.54 rows=1000 width=24) (actual time=0.010..0.066 rows=333 loops=1)
InitPlan 1 (returns $0)
-> Seq Scan on volume v (cost=0.00..1.04 rows=1 width=8) (actual time=0.003..0.004 rows=1 loops=1)
Filter: (name = 'vol1'::text)
Rows Removed by Filter: 2
-> Seq Scan on dir_0 d_1 (cost=0.00..7.17 rows=334 width=24) (never executed)
Filter: (volume_id = $0)
-> Seq Scan on dir_1 d_2 (cost=0.00..7.16 rows=333 width=24) (never executed)
Filter: (volume_id = $0)
-> Seq Scan on dir_2 d_3 (cost=0.00..7.16 rows=333 width=24) (actual time=0.004..0.037 rows=333 loops=1)
Filter: (volume_id = $0)
Planning Time: 0.063 ms
Execution Time: 0.093 ms
Here we can see that partitions dir_0 and dir_1 carry the never executed annotation.
My question is:
Why is there no partition pruning in the first case? Postgres already knows that the volume.name column is unique and that it will translate into a single volume_id. I would like to get a good intuition on when partition pruning can happen during query execution.
To get partition pruning with a hash join, you'd need to add a condition on d.volume_id to your query. No inference is made from the join with volume.
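For example, with the volume_id supplied as a literal (a sketch; the constant would have to come from the application):
-- Plan-time pruning: volume_id is compared to a constant,
-- so only the matching partition of dir is scanned.
EXPLAIN ANALYZE
SELECT * FROM dir AS d
INNER JOIN volume AS v ON d.volume_id = v.id
WHERE v.name = 'vol1'
  AND d.volume_id = 1;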
Your second query shows partition pruning; the "never executed" means that the query executor pruned the scan of certain partitions.
An alternative method that forces a nested loop join and prunes partitions would be:
SELECT *
FROM volume AS v
CROSS JOIN LATERAL (SELECT * FROM dir
WHERE dir.volume_id = v.id
OFFSET 0) AS d
WHERE v.name = 'vol1';
OFFSET 0 is an optimization fence that prevents the subquery from being flattened, so nothing but a nested loop join is possible. Unlike your query, this should also work if volume.name is not unique.

Can PostgreSQL 12 do partition pruning at execution time with a subquery returning a list?

I'm trying to take advantage of partitioning in one case:
I have a table "events" which is partitioned by list on the field "dt_pk", a foreign key to the table "dates".
-- Schema
drop schema if exists test cascade;
create schema test;
-- Tables
create table if not exists test.dates (
id bigint primary key,
dt date not null
);
create sequence test.seq_events_id;
create table if not exists test.events
(
id bigint not null,
dt_pk bigint not null,
content_int bigint,
foreign key (dt_pk) references test.dates(id) on delete cascade,
primary key (dt_pk, id)
)
partition by list (dt_pk);
-- Partitions
create table test.events_1 partition of test.events for values in (1);
create table test.events_2 partition of test.events for values in (2);
create table test.events_3 partition of test.events for values in (3);
-- Fill tables
insert into test.dates (id, dt)
select id, dt
from (
select 1 id, '2020-01-01'::date as dt
union all
select 2 id, '2020-01-02'::date as dt
union all
select 3 id, '2020-01-03'::date as dt
) t;
do $$
declare
dts record;
begin
for dts in (
select id
from test.dates
) loop
for k in 1..10000 loop
insert into test.events (id, dt_pk, content_int)
values (nextval('test.seq_events_id'), dts.id, random_between(1, 1000000)); -- random_between() is a user-defined helper, not built into PostgreSQL
end loop;
commit;
end loop;
end;
$$;
vacuum analyze test.dates, test.events;
I want to run select like this:
select *
from test.events e
join test.dates d on e.dt_pk = d.id
where d.dt between '2020-01-02'::date and '2020-01-03'::date;
But in this case partition pruning doesn't work. That is expected: I don't have a constant for the partition key. But from the documentation I know that there is partition pruning at execution time, which works with a value obtained from a subquery:
Partition pruning can be performed not only during the planning of a
given query, but also during its execution. This is useful as it can
allow more partitions to be pruned when clauses contain expressions
whose values are not known at query planning time, for example,
parameters defined in a PREPARE statement, using a value obtained from
a subquery, or using a parameterized value on the inner side of a
nested loop join.
So I rewrote my query like this, expecting partition pruning:
select *
from test.events e
where e.dt_pk in (
select d.id
from test.dates d
where d.dt between '2020-01-02'::date and '2020-01-03'::date
);
But EXPLAIN for this select says:
Hash Join (cost=1.07..833.07 rows=20000 width=24) (actual time=3.581..15.989 rows=20000 loops=1)
Hash Cond: (e.dt_pk = d.id)
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.005..6.361 rows=30000 loops=1)
-> Seq Scan on events_1 e (cost=0.00..164.00 rows=10000 width=24) (actual time=0.005..1.104 rows=10000 loops=1)
-> Seq Scan on events_2 e_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.005..1.127 rows=10000 loops=1)
-> Seq Scan on events_3 e_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.097 rows=10000 loops=1)
-> Hash (cost=1.04..1.04 rows=2 width=8) (actual time=0.006..0.006 rows=2 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on dates d (cost=0.00..1.04 rows=2 width=8) (actual time=0.004..0.004 rows=2 loops=1)
Filter: ((dt >= '2020-01-02'::date) AND (dt <= '2020-01-03'::date))
Rows Removed by Filter: 1
Planning Time: 0.206 ms
Execution Time: 17.237 ms
So, we read all partitions. I even tried to force the planner to use a nested loop join, because I read in the documentation about a "parameterized value on the inner side of a nested loop join", but it didn't work:
set enable_hashjoin to off;
set enable_mergejoin to off;
And again:
Nested Loop (cost=0.00..1443.05 rows=20000 width=24) (actual time=9.160..25.252 rows=20000 loops=1)
Join Filter: (e.dt_pk = d.id)
Rows Removed by Join Filter: 30000
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.008..6.280 rows=30000 loops=1)
-> Seq Scan on events_1 e (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.105 rows=10000 loops=1)
-> Seq Scan on events_2 e_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.008..1.047 rows=10000 loops=1)
-> Seq Scan on events_3 e_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.007..1.082 rows=10000 loops=1)
-> Materialize (cost=0.00..1.05 rows=2 width=8) (actual time=0.000..0.000 rows=2 loops=30000)
-> Seq Scan on dates d (cost=0.00..1.04 rows=2 width=8) (actual time=0.004..0.004 rows=2 loops=1)
Filter: ((dt >= '2020-01-02'::date) AND (dt <= '2020-01-03'::date))
Rows Removed by Filter: 1
Planning Time: 0.202 ms
Execution Time: 26.516 ms
Then I noticed that every example of "partition pruning at execution time" uses an = condition, not IN.
And it really works that way:
explain (analyze) select * from test.events e where e.dt_pk = (select id from test.dates where id = 2);
Append (cost=1.04..718.04 rows=30000 width=24) (actual time=0.014..3.018 rows=10000 loops=1)
InitPlan 1 (returns $0)
-> Seq Scan on dates (cost=0.00..1.04 rows=1 width=8) (actual time=0.007..0.008 rows=1 loops=1)
Filter: (id = 2)
Rows Removed by Filter: 2
-> Seq Scan on events_1 e (cost=0.00..189.00 rows=10000 width=24) (never executed)
Filter: (dt_pk = $0)
-> Seq Scan on events_2 e_1 (cost=0.00..189.00 rows=10000 width=24) (actual time=0.004..2.009 rows=10000 loops=1)
Filter: (dt_pk = $0)
-> Seq Scan on events_3 e_2 (cost=0.00..189.00 rows=10000 width=24) (never executed)
Filter: (dt_pk = $0)
Planning Time: 0.135 ms
Execution Time: 3.639 ms
And here is my final question: does partition pruning at execution time work only with a subquery returning one item, or is there a way to take advantage of partition pruning with a subquery returning a list?
And why doesn't it work with a nested loop join? Did I misunderstand something in these words:
This includes values from subqueries and values from execution-time
parameters such as those from parameterized nested loop joins.
Or "parameterized nested loop joins" is something different from regular nested loop joins?
There is no partition pruning in your nested loop join because the partitioned table is on the outer side, which is always scanned completely. The inner side is scanned with the join key from the outer side as parameter (hence parameterized scan), so if the partitioned table were on the inner side of the nested loop join, partition pruning could happen.
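As an illustration (a sketch against the question's schema, not a tested guarantee), a lateral subquery puts the partitioned table on the inner, parameterized side:
-- Each outer row from test.dates supplies a concrete dt_pk,
-- so the executor can prune test.events partitions at run time.
SELECT ev.*
FROM test.dates d
CROSS JOIN LATERAL (
    SELECT *
    FROM test.events e
    WHERE e.dt_pk = d.id   -- parameterized by the outer row
    OFFSET 0               -- fence: keeps the subquery unflattened
) AS ev
WHERE d.dt BETWEEN '2020-01-02'::date AND '2020-01-03'::date;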
Partition pruning with IN lists can take place if the list values are known at plan time:
EXPLAIN (COSTS OFF)
SELECT * FROM test.events WHERE dt_pk IN (1, 2);
QUERY PLAN
---------------------------------------------------
Append
-> Seq Scan on events_1
Filter: (dt_pk = ANY ('{1,2}'::bigint[]))
-> Seq Scan on events_2
Filter: (dt_pk = ANY ('{1,2}'::bigint[]))
(5 rows)
But no attempts are made to flatten a subquery, and PostgreSQL doesn't use partition pruning, even if you force the partitioned table to be on the inner side (enable_material = off, enable_hashjoin = off, enable_mergejoin = off):
EXPLAIN (ANALYZE)
SELECT * FROM test.events WHERE dt_pk IN (SELECT 1 UNION SELECT 2);
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.06..2034.09 rows=20000 width=24) (actual time=0.057..15.523 rows=20000 loops=1)
Join Filter: (events_1.dt_pk = (1))
Rows Removed by Join Filter: 40000
-> Unique (cost=0.06..0.07 rows=2 width=4) (actual time=0.026..0.029 rows=2 loops=1)
-> Sort (cost=0.06..0.07 rows=2 width=4) (actual time=0.024..0.025 rows=2 loops=1)
Sort Key: (1)
Sort Method: quicksort Memory: 25kB
-> Append (cost=0.00..0.05 rows=2 width=4) (actual time=0.006..0.009 rows=2 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
-> Append (cost=0.00..642.00 rows=30000 width=24) (actual time=0.012..4.334 rows=30000 loops=2)
-> Seq Scan on events_1 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.011..1.057 rows=10000 loops=2)
-> Seq Scan on events_2 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.004..0.641 rows=10000 loops=2)
-> Seq Scan on events_3 (cost=0.00..164.00 rows=10000 width=24) (actual time=0.002..0.594 rows=10000 loops=2)
Planning Time: 0.531 ms
Execution Time: 16.567 ms
(16 rows)
I am not certain, but it may be because the tables are so small. You might want to try with bigger tables.
If you care more about getting it working than about the fine details, and you haven't tried this yet: you can rewrite the query to something like
explain analyze select *
from test.dates d
join test.events e on e.dt_pk = d.id
where
d.dt between '2020-01-02'::date and '2020-01-03'::date
and e.dt_pk in (extract(day from '2020-01-02'::date)::int,
extract(day from '2020-01-03'::date)::int);
which will give the expected pruning (this relies on the sample data, where each dt_pk happens to equal the day of month of its date).

PostgreSQL very slow query on indexed columns

I have a very simple query which uses JSON data for joining on the primary table:
WITH
timecode_range AS
(
SELECT
(t->>'table_id')::integer AS table_id,
(t->>'timecode_from')::bigint AS timecode_from,
(t->>'timecode_to')::bigint AS timecode_to
FROM (SELECT '{"table_id":1,"timecode_from":19890328,"timecode_to":119899328}'::jsonb t) rowset
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to
It works perfectly when "timecode_range" returns only 1 record:
Nested Loop (cost=0.43..4668.80 rows=1416 width=97) (actual time=0.352..0.352 rows=0 loops=1)
CTE timecode_range
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002 rows=1 loops=1)
-> CTE Scan on timecode_range r (cost=0.00..0.02 rows=1 width=20) (actual time=0.007..0.007 rows=1 loops=1)
-> Index Scan using json_notification_pkey on json_notification n (cost=0.42..4654.61 rows=1416 width=97) (actual time=0.322..0.322 rows=0 loops=1)
Index Cond: ((timecode > r.timecode_from) AND (timecode <= r.timecode_to))
Filter: (r.table_id = table_id)
Planning time: 2.292 ms
Execution time: 0.665 ms
But when I need to return several records:
WITH
timecode_range AS
(
SELECT
(t->>'table_id')::integer AS table_id,
(t->>'timecode_from')::bigint AS timecode_from,
(t->>'timecode_to')::bigint AS timecode_to
FROM (SELECT json_array_elements('[{"table_id":1,"timecode_from":19890328,"timecode_to":119899328}]') t) rowset
)
SELECT n.*
FROM partition.json_notification n
INNER JOIN timecode_range r ON n.table_id = r.table_id AND n.timecode > r.timecode_from AND n.timecode <= r.timecode_to
It starts using a sequential scan, and execution time grows dramatically :(
Hash Join (cost=7.01..37289.68 rows=92068 width=97) (actual time=418.563..418.563 rows=0 loops=1)
Hash Cond: (n.table_id = r.table_id)
Join Filter: ((n.timecode > r.timecode_from) AND (n.timecode <= r.timecode_to))
Rows Removed by Join Filter: 14444
CTE timecode_range
-> Subquery Scan on rowset (cost=0.00..3.76 rows=100 width=32) (actual time=0.233..0.234 rows=1 loops=1)
-> Result (cost=0.00..0.51 rows=100 width=0) (actual time=0.218..0.218 rows=1 loops=1)
-> Seq Scan on json_notification n (cost=0.00..21703.36 rows=840036 width=97) (actual time=0.205..312.991 rows=840036 loops=1)
-> Hash (cost=2.00..2.00 rows=100 width=20) (actual time=0.239..0.239 rows=1 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> CTE Scan on timecode_range r (cost=0.00..2.00 rows=100 width=20) (actual time=0.235..0.236 rows=1 loops=1)
Planning time: 4.729 ms
Execution time: 418.937 ms
What am I doing wrong?
PostgreSQL has no way to estimate the number of rows returned from a table function, so it uses the ROWS value specified in CREATE FUNCTION (default 1000).
For json_array_elements this value is set to 100:
SELECT prorows FROM pg_proc WHERE proname = 'json_array_elements';
┌─────────┐
│ prorows │
├─────────┤
│ 100 │
└─────────┘
(1 row)
But in your case the function returns only 1 row.
This misestimate makes PostgreSQL choose another join strategy (hash join instead of nested loop), which causes the longer execution time.
If you can choose some other construct than such a table function (e.g. a VALUES statement) that PostgreSQL can estimate, you'll get a better plan.
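For example, when the ranges are known at the time the query is built, a VALUES list gives the planner an exact row count (a sketch following the question's tables):
-- The planner sees exactly one row here, so it can pick
-- the nested loop join with an index scan again.
WITH timecode_range (table_id, timecode_from, timecode_to) AS (
    VALUES (1::integer, 19890328::bigint, 119899328::bigint)
)
SELECT n.*
FROM partition.json_notification n
JOIN timecode_range r
  ON n.table_id = r.table_id
 AND n.timecode > r.timecode_from
 AND n.timecode <= r.timecode_to;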
An alternative is to use a LIMIT clause on the CTE definition if you can safely specify an upper limit.
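Another workaround, if you control the schema, is a wrapper function with an explicit ROWS estimate (a hypothetical sketch; the wrapper name is made up):
-- Tells the planner to expect about 1 row instead of 100.
CREATE FUNCTION json_array_elements_few(j json)
RETURNS SETOF json
LANGUAGE sql
ROWS 1
AS $$ SELECT json_array_elements(j) $$;
The CTE would then call json_array_elements_few(...) instead of json_array_elements(...).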
If you think that PostgreSQL is wrong when it switches to a hash join beyond a certain row count, you can test as follows:
Run the query (using a sequential scan and a hash join) and measure the duration (psql's \timing command will help).
Force a nested loop join:
SET enable_hashjoin=off;
SET enable_mergejoin=off;
Run the query again (with a nested loop join) and measure the duration.
If PostgreSQL is indeed wrong, you could adjust the optimizer parameters by lowering random_page_cost to a value closer to seq_page_cost.
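For example (the value is illustrative; the right setting depends on your storage):
-- On SSDs, random reads cost little more than sequential ones;
-- the default random_page_cost is 4.0, seq_page_cost is 1.0.
SET random_page_cost = 1.1;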

PostgreSQL index on temporary table

I have the following PostGIS/PostgreSQL query:
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(sn.geom, luc.geom)
Query plan:
Nested Loop (cost=2.45..53.34 rows=8 width=236) (actual time=1.030..26.751 rows=1282 loops=1)
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.023..0.029 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Bitmap Heap Scan on lucas12 luc (cost=2.45..51.06 rows=8 width=236) (actual time=1.002..26.031 rows=1282 loops=1)
Recheck Cond: (germany_bld.geom ~ geom)
Filter: _st_contains(germany_bld.geom, geom)
Rows Removed by Filter: 499
Heap Blocks: exact=174
-> Bitmap Index Scan on lucas12_geom_idx (cost=0.00..2.45 rows=23 width=0) (actual time=0.419..0.419 rows=1781 loops=1)
Index Cond: (germany_bld.geom ~ geom)
Planning time: 0.536 ms
Execution time: 27.023 ms
which, thanks to an index on the geometry columns, is pretty fast. However, when I want to add a buffer to the sn polygon (one big polygon representing a border line, hence a fairly simple feature):
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT ST_Buffer(geom, 30000) geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(sn.geom, luc.geom)
Query plan:
Nested Loop (cost=0.00..13234.80 rows=7818 width=236) (actual time=6221.391..1338380.257 rows=2298 loops=1)
Join Filter: st_contains(st_buffer(germany_bld.geom, 30000::double precision), luc.geom)
Rows Removed by Join Filter: 22637
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.018..0.036 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Seq Scan on lucas12 luc (cost=0.00..1270.55 rows=23455 width=236) (actual time=0.005..25.623 rows=24935 loops=1)
Planning time: 0.271 ms
Execution time: 1338381.079 ms
the query takes forever! I blame it on the missing index on the temporary table sn. The massive slowdown can't be caused by ST_Buffer() itself, as it is really fast and the buffered feature is simple.
Two Questions:
1) Am I right?
2) What can I do to reach a speed similar to the first query?
I ran into a trap. ST_Buffer() is not the right choice here; ST_DWithin() is, because it can use the index on each geometry column while actually performing a bounding box comparison. The help page for ST_Buffer() clearly states not to make the mistake of using ST_Buffer() for radius searches, but to use ST_DWithin() instead. Since the word "buffer" is used in a lot of GIS software, I didn't consider looking for alternatives.
SELECT luc.*
FROM spatial_derived.lucas12 luc
JOIN spatial_derived.germany_bld sn ON ST_DWithin(sn.geom, luc.geom, 30000)
WHERE sn.state = 'SN'
works and only takes a second (2300 points within that "buffer")!
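For ST_DWithin() to use bounding-box index searches, both geometry columns need a spatial index (a sketch; lucas12_geom_idx already exists according to the first plan, and the other index name is made up):
-- GiST indexes let ST_DWithin() search by bounding box.
CREATE INDEX IF NOT EXISTS germany_bld_geom_idx
    ON spatial_derived.germany_bld USING gist (geom);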
To check whether you are right, you can leave sn as is and apply ST_Buffer() in the join condition:
SELECT luc.*
FROM spatial_derived.lucas12 luc,
(SELECT geom
FROM spatial_derived.germany_bld
WHERE state = 'SN') sn
WHERE ST_Contains(ST_Buffer(sn.geom, 30000), luc.geom)
Query plan:
Nested Loop (cost=0.00..13234.80 rows=7818 width=236) (actual time=6237.876..1340000.576 rows=2298 loops=1)
Join Filter: st_contains(st_buffer(germany_bld.geom, 30000::double precision), luc.geom)
Rows Removed by Join Filter: 22637
-> Seq Scan on germany_bld (cost=0.00..2.20 rows=1 width=18399) (actual time=0.023..0.038 rows=1 loops=1)
Filter: ((state)::text = 'SN'::text)
Rows Removed by Filter: 15
-> Seq Scan on lucas12 luc (cost=0.00..1270.55 rows=23455 width=236) (actual time=0.004..24.525 rows=24935 loops=1)
Planning time: 0.453 ms
Execution time: 1340001.420 ms
This query will answer both your questions, or at least the first one, depending on the result.
Update
Your assumption seems to be wrong: it is ST_Buffer() that causes the slowdown.
You join on a much larger set when using ST_Buffer(), so the time increase is quite expected. You can run EXPLAIN ANALYZE for both queries, with and without ST_Buffer(); it will probably show the same plans with different row counts and costs...

Need help optimizing this simple query

My end goal is to optimize my query using indexes, but I'm having trouble adding the right index. Everything I've tried results in the same cost in the EXPLAIN diagram, with no indication that it's even using any indexes.
I have two tables:
event that has two date columns: start_date and end_date (can be null).
fiscal_date that has:
two date columns start_date and end_date (cannot be null)
a fiscal_year column of type char(4)
a fiscal_quarter column of type char(1)
There's another table address that's in a one-to-one relationship with event via a foreign key in event. There are no indexes on it save the primary key.
I have a query that I can't change that figures out what fiscal quarter and year the event starts in:
SELECT
e.*,
(select 'Q' || fd.fiscal_quarter || ' FY' || fd.fiscal_year
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_quarter_year,
(select 'Q' || fd.fiscal_quarter
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_quarter,
(select 'FY' || fd.fiscal_year
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_year,
a.street1, a.street2, a.street3, a.city, a.state, a.country, a.postal_code
FROM event AS e
LEFT OUTER JOIN address a ON e.address_id=a.address_id;
Here's an EXPLAIN diagram of the query (omitted here; it showed several expensive seq scans).
As requested, here's the output of EXPLAIN ANALYZE:
Hash Left Join (cost=115.78..2846.64 rows=1649 width=5087) (actual time=18.334..134.279 rows=1649 loops=1)
Hash Cond: (e.address_id = a.address_id)
-> Seq Scan on event e (cost=0.00..323.49 rows=1649 width=5031) (actual time=0.223..19.808 rows=1649 loops=1)
-> Hash (cost=68.68..68.68 rows=3768 width=60) (actual time=17.797..17.797 rows=3768 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 248kB
-> Seq Scan on address a (cost=0.00..68.68 rows=3768 width=60) (actual time=0.004..9.071 rows=3768 loops=1)
SubPlan 1
-> Limit (cost=0.00..0.49 rows=1 width=28) (actual time=0.011..0.014 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.46 rows=3 width=28) (actual time=0.006..0.006 rows=1 loops=1649)
Filter: (($0 >= start_date) AND ($0 <= end_date))
SubPlan 2
-> Limit (cost=0.00..0.48 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.43 rows=3 width=8) (actual time=0.006..0.006 rows=1 loops=1649)
Filter: (($1 >= start_date) AND ($1 <= end_date))
SubPlan 3
-> Limit (cost=0.00..0.48 rows=1 width=20) (actual time=0.010..0.012 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.43 rows=3 width=20) (actual time=0.005..0.005 rows=1 loops=1649)
Filter: (($2 >= start_date) AND ($2 <= end_date))
Total runtime: 138.008 ms
I've tried adding indexes on event's start and end dates (together and individually) and adding an index on fiscal_date's date columns, but nothing seems to reduce the estimated cost of this query.
How do I optimize this query, or isn't it possible?
OK, so your problem isn't the fiscal_date sequential scans; the number of rows is so small that a sequential scan is probably the correct thing to do. You probably want an index on address_id in both tables.
If address_id is the primary key of the address table, it is already indexed.
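If it is not, a plain b-tree index would do (a sketch; names follow the question):
-- Index the joining foreign key column in event; the address side
-- is covered by its primary key if address_id is that key.
CREATE INDEX ON event (address_id);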
Also, just to be sure, run VACUUM FULL and VACUUM ANALYZE on all tables.
EDIT:
The performance seems really bad considering there are so few rows (under 10,000 is nothing). Are the tables really big, or is the hardware ancient? If not, you should probably take a serious look at the configuration (work_mem etc.).
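For a quick experiment (the value is illustrative; try it per session before changing postgresql.conf):
SHOW work_mem;           -- often still the 4MB default
SET work_mem = '64MB';   -- then re-run the query and compare timings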