When joining on a table and then limiting the result (LIMIT 30, for instance), Postgres will apply the JOIN to all rows, even if the columns from the joined table are only used in the returned columns and not as a filtering predicate.
This would be understandable for an INNER JOIN (PG has to know if the row will be returned or not) or for a LEFT JOIN without a unique constraint (PG has to know if more than one row will be returned or not), but for a LEFT JOIN on a UNIQUE column, this seems wasteful: if the query matches 10k rows, then 10k joins will be performed, and then only 30 will be returned.
It would seem more efficient to "delay", or defer, the join, as much as possible, and this is something that I've seen happen on some other queries.
Splitting this into a subquery (SELECT * FROM (SELECT * FROM main WHERE x LIMIT 30) LEFT JOIN secondary) works, by ensuring that only 30 items are returned from the main table before joining them, but it feels like I'm missing something, and the "standard" form of the query should also apply the same optimization.
Looking at the EXPLAIN plans, however, I can see that the number of rows joined is always the total number of rows, without "early bailing out" as you could see when, for instance, running a Seq Scan with a LIMIT 5.
Example schema, with a main table and a secondary one: secondary columns will only be returned, never filtered on.
drop table if exists secondary;
drop table if exists main;
create table main(id int primary key not null, main_column int);
create index main_column on main(main_column);
insert into main(id, main_column) SELECT i, i % 3000 from generate_series( 1, 1000000, 1) i;
create table secondary(id serial primary key not null, main_id int references main(id) not null, secondary_column int);
create unique index secondary_main_id on secondary(main_id);
insert into secondary(main_id, secondary_column) SELECT i, (i + 17) % 113 from generate_series( 1, 1000000, 1) i;
analyze main;
analyze secondary;
Example query:
explain analyze verbose select main.id, main_column, secondary_column
from main
left join secondary on main.id = secondary.main_id
where main_column = 5
order by main.id
limit 50;
This is the most "obvious" way of writing the query, takes on average around 5ms on my computer.
Explain:
Limit (cost=3742.93..3743.05 rows=50 width=12) (actual time=5.010..5.322 rows=50 loops=1)
Output: main.id, main.main_column, secondary.secondary_column
-> Sort (cost=3742.93..3743.76 rows=332 width=12) (actual time=5.006..5.094 rows=50 loops=1)
Output: main.id, main.main_column, secondary.secondary_column
Sort Key: main.id
Sort Method: top-N heapsort Memory: 27kB
-> Nested Loop Left Join (cost=11.42..3731.90 rows=332 width=12) (actual time=0.123..4.446 rows=334 loops=1)
Output: main.id, main.main_column, secondary.secondary_column
Inner Unique: true
-> Bitmap Heap Scan on public.main (cost=11.00..1036.99 rows=332 width=8) (actual time=0.106..1.021 rows=334 loops=1)
Output: main.id, main.main_column
Recheck Cond: (main.main_column = 5)
Heap Blocks: exact=334
-> Bitmap Index Scan on main_column (cost=0.00..10.92 rows=332 width=0) (actual time=0.056..0.057 rows=334 loops=1)
Index Cond: (main.main_column = 5)
-> Index Scan using secondary_main_id on public.secondary (cost=0.42..8.12 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=334)
Output: secondary.id, secondary.main_id, secondary.secondary_column
Index Cond: (secondary.main_id = main.id)
Planning Time: 0.761 ms
Execution Time: 5.423 ms
explain analyze verbose select m.id, main_column, secondary_column
from (
select main.id, main_column
from main
where main_column = 5
order by main.id
limit 50
) m
left join secondary on m.id = secondary.main_id
where main_column = 5
order by m.id
limit 50
This returns the same results, in 2ms.
The total EXPLAIN cost is also about three times lower, in line with the performance gain we're seeing.
Limit (cost=1048.44..1057.21 rows=1 width=12) (actual time=1.219..2.027 rows=50 loops=1)
Output: m.id, m.main_column, secondary.secondary_column
-> Nested Loop Left Join (cost=1048.44..1057.21 rows=1 width=12) (actual time=1.216..1.900 rows=50 loops=1)
Output: m.id, m.main_column, secondary.secondary_column
Inner Unique: true
-> Subquery Scan on m (cost=1048.02..1048.77 rows=1 width=8) (actual time=1.201..1.515 rows=50 loops=1)
Output: m.id, m.main_column
Filter: (m.main_column = 5)
-> Limit (cost=1048.02..1048.14 rows=50 width=8) (actual time=1.196..1.384 rows=50 loops=1)
Output: main.id, main.main_column
-> Sort (cost=1048.02..1048.85 rows=332 width=8) (actual time=1.194..1.260 rows=50 loops=1)
Output: main.id, main.main_column
Sort Key: main.id
Sort Method: top-N heapsort Memory: 27kB
-> Bitmap Heap Scan on public.main (cost=11.00..1036.99 rows=332 width=8) (actual time=0.054..0.753 rows=334 loops=1)
Output: main.id, main.main_column
Recheck Cond: (main.main_column = 5)
Heap Blocks: exact=334
-> Bitmap Index Scan on main_column (cost=0.00..10.92 rows=332 width=0) (actual time=0.029..0.030 rows=334 loops=1)
Index Cond: (main.main_column = 5)
-> Index Scan using secondary_main_id on public.secondary (cost=0.42..8.44 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=50)
Output: secondary.id, secondary.main_id, secondary.secondary_column
Index Cond: (secondary.main_id = m.id)
Planning Time: 0.161 ms
Execution Time: 2.115 ms
This is a toy dataset here, but on a real DB, the IO difference is significant (no need to fetch 1000 rows when 30 are enough), and the timing difference also quickly adds up (up to an order of magnitude slower).
So my question: is there any way to get the planner to understand that the JOIN can be applied much later in the process?
It seems like something that could be applied automatically to gain a sizeable performance boost.
Deferred joins are good. It's usually helpful to run the limit operation on a subquery that yields only the id values. That way the order by ... limit step sorts far less data that will just be discarded anyway.
select main.id, main.main_column, secondary.secondary_column
from main
join (
select id
from main
where main_column = 5
order by id
limit 50
) selection on main.id = selection.id
left join secondary on main.id = secondary.main_id
order by main.id
limit 50
It's also possible that adding id to your main_column index will help. With a BTREE index the query planner knows it can get the id values in ascending order from the index, so it may be able to skip the sort step entirely and just scan the first 50 values.
create index main_column on main(main_column, id);
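Note that the example schema above already has an index named main_column, so as a sketch you would drop the old one first (or give the new index a different name):
drop index if exists main_column;
create index main_column on main(main_column, id);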
Edit: In a large table, the heavy lifting of your query will be the selection of the 50 main.id values to process. To get those 50 id values as cheaply as possible you can use a scan of the covering index I proposed, inside the subquery I proposed. Once you've got your 50 id values, looking up 50 rows' worth of details from your various tables by main.id and secondary.main_id is trivial: you have the correct indexes in place, and it's a limited number of rows, so it won't take much time.
It looks like your table sizes are too small for various optimizations to have much effect, though. Query plans change a lot when tables are larger.
Alternative query, using row_number() instead of LIMIT (I think you could even omit LIMIT here):
-- prepare q3 AS
select m.id, main_column, secondary_column
from (
select id, main_column
, row_number() OVER (ORDER BY id, main_column) AS rn
from main
where main_column = 5
) m
left join secondary on m.id = secondary.main_id
WHERE m.rn <= 50
ORDER BY m.id
LIMIT 50
;
Putting the subsetting into a CTE can prevent it from being merged into the main query:
PREPARE q6 AS
WITH
-- MATERIALIZED -- not needed before version 12
xxx AS (
SELECT DISTINCT x.id
FROM main x
WHERE x.main_column = 5
ORDER BY x.id
LIMIT 50
)
select m.id, m.main_column, s.secondary_column
from main m
left join secondary s on m.id = s.main_id
WHERE EXISTS (
SELECT *
FROM xxx x WHERE x.id = m.id
)
order by m.id
-- limit 50
;
Related
I have a table with around 3 million rows.
I created a single GIN index on multiple columns of the table.
CREATE INDEX search_idx ON customer USING gin (name gin_trgm_ops, id gin_trgm_ops, data gin_trgm_ops)
I am running the following query (simplified to use a single column in the criteria) but it takes around 4 seconds:
EXPLAIN ANALYSE
SELECT c.id, similarity(c.name, 'john') sml
FROM customer c WHERE
c.name % 'john'
ORDER BY sml DESC
LIMIT 10
The output query plan is:
Limit (cost=9255.12..9255.14 rows=10 width=30) (actual time=3771.661..3771.665 rows=10 loops=1)
-> Sort (cost=9255.12..9260.43 rows=2126 width=30) (actual time=3771.659..3771.661 rows=10 loops=1)
Sort Key: (similarity((name)::text, 'john'::text)) DESC
Sort Method: top-N heapsort Memory: 26kB
-> Bitmap Heap Scan on customer c (cost=1140.48..9209.18 rows=2126 width=30) (actual time=140.665..3770.478 rows=3605 loops=1)
Recheck Cond: ((name)::text % 'john'::text)
Rows Removed by Index Recheck: 939598
Heap Blocks: exact=158055 lossy=132577
-> Bitmap Index Scan on search_idx (cost=0.00..1139.95 rows=2126 width=0) (actual time=105.609..105.610 rows=458131 loops=1)
Index Cond: ((name)::text % 'john'::text)
Planning Time: 0.102 ms
I fail to understand why the rows are not SORTED and LIMITed to 10 when they are retrieved from search_idx in the first step, so that afterwards only 10 rows (instead of 2126) would need to be fetched from the customer table.
Any ideas how this query can be made faster?
I tried a GiST index but I see no performance gains.
I also tried increasing work_mem from 4MB to 32MB and I can see an improvement of about 1 second, but not more.
Also I noticed that even if I remove c.id from the SELECT clause, Postgres does not perform an index-only scan and still joins with the main table.
Thanks for the help.
Update 1:
After Laurenz Albe's suggestion below, query performance improved and it is now around 600 ms. The plan looks like this now:
Subquery Scan on q (cost=0.41..78.29 rows=1 width=12) (actual time=63.150..574.536 rows=10 loops=1)
Filter: ((q.name)::text % 'john'::text)
-> Limit (cost=0.41..78.16 rows=10 width=40) (actual time=63.148..574.518 rows=10 loops=1)
-> Index Scan using search_name_idx on customer c (cost=0.41..2232864.76 rows=287182 width=40) (actual time=63.146..574.513 rows=10 loops=1)
Order By: ((name)::text <-> 'john'::text)
Planning Time: 42.671 ms
Execution Time: 585.554 ms
To get the 10 closest matches with index support, you should create a GiST index and query like this:
SELECT id, sml
FROM (SELECT c.id,
c.name,
similarity(c.name, 'john') sml
FROM customer c
ORDER BY c.name <-> 'john'
LIMIT 10) AS q
WHERE name % 'john';
The subquery can use the GiST index, and the outer query eliminates all results that do not exceed the pg_trgm.similarity_threshold.
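The GiST index itself isn't shown above; a minimal sketch of it, assuming the pg_trgm extension is installed and using the index name that appears in the updated plan:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX search_name_idx ON customer USING gist (name gist_trgm_ops);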
UPDATE: Referring to https://alexklibisz.com/2022/02/18/optimizing-postgres-trigram-search.html
I updated the query to use word_similarity with a multicolumn GiST index and it improved performance a lot.
EXPLAIN (analyze, buffers)
WITH input AS (SELECT 'new york' AS search)
SELECT *
FROM customer, input
WHERE
(
input.search <% id
OR input.search <% name
OR input.search <% city
)
ORDER BY
input.search <<-> id,
input.search <<-> name,
input.search <<-> city
LIMIT 10
The explain plan shows an index-only scan.
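The multicolumn GiST index itself isn't shown; a sketch of what it might look like (the index name is made up, and it assumes id, name and city are text-like columns):
CREATE INDEX customer_trgm_gist_idx ON customer
    USING gist (id gist_trgm_ops, name gist_trgm_ops, city gist_trgm_ops);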
I have a really slow query (~100 mins). I have omitted a lot of the inner child nodes, denoting them with the suffix ...
HashAggregate (cost=6449635645.84..6449635742.59 rows=1290 width=112) (actual time=5853093.882..5853095.159 rows=785 loops=1)
Group Key: p.processid
-> Nested Loop (cost=10851145.36..6449523319.09 rows=832050 width=112) (actual time=166573.289..5853043.076 rows=3904 loops=1)
Join Filter: (SubPlan 2)
Rows Removed by Join Filter: 617040
-> Merge Left Join (cost=5425572.68..5439530.95 rows=1290 width=799) (actual time=80092.782..80114.828 rows=788 loops=1) ...
-> Materialize (cost=5425572.68..5439550.30 rows=1290 width=112) (actual time=109.689..109.934 rows=788 loops=788) ...
SubPlan 2
-> Limit (cost=3869.12..3869.13 rows=5 width=8) (actual time=9.155..9.156 rows=5 loops=620944) ...
Planning time: 1796.764 ms
Execution time: 5853316.418 ms
(2836 rows)
The above query plan comes from a query executed against the view, whose (simplified) definition is below:
create or replace view foo_bar_view(processid, column_1, run_count) as
SELECT
q.processid,
q.column_1,
q.run_count
FROM
(
SELECT
r.processid,
avg(h.some_column) AS column_1,
-- many more aggregate function on many more columns
count(1) AS run_count
FROM
foo_bar_table r,
foo_bar_table h
WHERE (h.processid IN (SELECT p.processid
FROM process p
LEFT JOIN bar i ON p.barid = i.id
LEFT JOIN foo ii ON i.fooid = ii.fooid
JOIN foofoobar pt ON p.typeid = pt.typeid AND pt.displayname ~~
((SELECT ('%'::text || property.value) || '%'::text
FROM property
WHERE property.name = 'something'::text))
WHERE p.processid < r.processid
AND (ii.name = r.foo_name OR ii.name IS NULL AND r.foo_name IS NULL)
ORDER BY p.processid DESC
LIMIT 5))
GROUP BY r.processid
) q;
I would just like to understand, does this mean that most of the time is spent performing the GROUP BY processid?
If not, what is causing the issue? I can't think of a reason why is this query so slow.
The aggregate functions used are avg, min, max, stddev.
A total of 52 of them were used, 4 on each of the 13 columns.
Update: Expanding on the child node of SubPlan 2, we can see that the Bitmap Index Scan on process_pkey is the bottleneck.
-> Bitmap Heap Scan on process p_30 (cost=1825.89..3786.00 rows=715 width=24) (actual time=8.642..8.833 rows=394 loops=620944)
Recheck Cond: ((typeid = pt_30.typeid) AND (processid < p.processid))
Heap Blocks: exact=185476288
-> BitmapAnd (cost=1825.89..1825.89 rows=715 width=0) (actual time=8.611..8.611 rows=0 loops=620944)
-> Bitmap Index Scan on ix_process_typeid (cost=0.00..40.50 rows=2144 width=0) (actual time=0.077..0.077 rows=788 loops=620944)
Index Cond: (typeid = pt_30.typeid)
-> Bitmap Index Scan on process_pkey (cost=0.00..1761.20 rows=95037 width=0) (actual time=8.481..8.481 rows=145093 loops=620944)
Index Cond: (processid < p.processid)
What I am unable to figure out is why it is using a Bitmap Index Scan and not an Index Scan. From what it seems, there should only be 788 rows that need to be compared? Wouldn't that be faster? If not, how can I optimise this query?
processid is of bigint type and has an index
The complete execution plan is here.
You conveniently left out the names of the tables in the execution plan, but I assume that the nested loop join is between foo_bar_table r and foo_bar_table h, and the subplan is the IN condition.
The high execution time is caused by the subplan, which is executed for each potential join result, that is 788 * 788 = 620944 times. 620944 * 9.156 accounts for 5685363 milliseconds.
Create this index:
CREATE INDEX ON process (typeid, processid, installationid);
And run VACUUM:
VACUUM process;
That should give you a fast index-only scan.
I have a table with 3 columns and a composite primary key over all 3 columns. All the individual columns have lots of duplicates and I have a separate btree index on each of them. The table has around 10 million records.
My query, with just a condition on a single column with a hardcoded value, always returns more than a million records. It takes more than 40 secs, whereas it takes only a few seconds if I limit the query to 1 or 2 million rows without any condition.
Any help to optimize it, given that there is no bitmap index in Postgres? All 3 columns have lots of duplicates; would it help if I dropped the btree indexes on them?
SELECT t1.filterid,
t1.filterby,
t1.filtertype
FROM echo_sm.usernotificationfilters t1
WHERE t1.filtertype = 9
UNION
SELECT t1.filterid, '-1' AS filterby, 9 AS filtertype
FROM echo_sm.usernotificationfilters t1
WHERE NOT EXISTS (SELECT 1
FROM echo_sm.usernotificationfilters t2
WHERE t2.filtertype = 9 AND t2.filterid = t1.filterid);
Filtertype column is integer and the rest 2 are varchar(50). All 3 columns have separate btree indexes on them.
Explain plan:
Unique (cost=2168171.15..2201747.47 rows=3357632 width=154) (actual time=32250.340..36371.928 rows=3447159 loops=1)
-> Sort (cost=2168171.15..2176565.23 rows=3357632 width=154) (actual time=32250.337..35544.050 rows=4066447 loops=1)
Sort Key: usernotificationfilters.filterid, usernotificationfilters.filterby, usernotificationfilters.filtertype
Sort Method: external merge Disk: 142696kB
-> Append (cost=62854.08..1276308.41 rows=3357632 width=154) (actual time=150.155..16025.874 rows=4066447 loops=1)
-> Bitmap Heap Scan on usernotificationfilters (cost=62854.08..172766.46 rows=3357631 width=25) (actual time=150.154..574.297 rows=3422522 loops=1)
Recheck Cond: (filtertype = 9)
Heap Blocks: exact=39987
-> Bitmap Index Scan on index_sm_usernotificationfilters_filtertype (cost=0.00..62014.67 rows=3357631 width=0) (actual time=143.585..143.585 rows=3422522 loops=1)
Index Cond: (filtertype = 9)
-> Gather (cost=232131.85..1069965.63 rows=1 width=50) (actual time=3968.492..15133.812 rows=643925 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Hash Anti Join (cost=231131.85..1068965.53 rows=1 width=50) (actual time=4135.235..12945.029 rows=214642 loops=3)
Hash Cond: ((usernotificationfilters_1.filterid)::text = (usernotificationfilters_1_1.filterid)::text)
-> Parallel Seq Scan on usernotificationfilters usernotificationfilters_1 (cost=0.00..106879.18 rows=3893718 width=14) (actual time=0.158..646.432 rows=3114974 loops=3)
-> Hash (cost=172766.46..172766.46 rows=3357631 width=14) (actual time=4133.991..4133.991 rows=3422522 loops=3)
Buckets: 131072 Batches: 64 Memory Usage: 3512kB
-> Bitmap Heap Scan on usernotificationfilters usernotificationfilters_1_1 (cost=62854.08..172766.46 rows=3357631 width=14) (actual time=394.775..1891.931 rows=3422522 loops=3)
Recheck Cond: (filtertype = 9)
Heap Blocks: exact=39987
-> Bitmap Index Scan on index_sm_usernotificationfilters_filtertype (cost=0.00..62014.67 rows=3357631 width=0) (actual time=383.635..383.635 rows=3422522 loops=3)
Index Cond: (filtertype = 9)
Planning time: 0.467 ms
Execution time: 36531.763 ms
The second subquery in your UNION takes about 15 seconds all by itself, and that could possibly be optimized separately from the rest of the query.
The sort to implement the duplicate removal implied by UNION takes about 20 seconds all by itself. It spills to disk. You could increase "work_mem" until it either stops spilling to disk, or starts using a hash rather than a sort. Of course you do need to have the RAM to back up your setting of "work_mem".
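For example (a sketch; the value is just a guess, large enough to cover the ~140 MB external sort shown in the plan):
SET work_mem = '256MB';  -- per session; an in-memory sort needs somewhat more than the on-disk size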
A third possibility would be not to treat these steps in isolation. If you had an index which would allow the data to be read from the 2nd branch of the union already in order, then it might not have to re-sort the whole thing. That would probably be an index on (filterid, filterby, filtertype).
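A sketch of that index:
CREATE INDEX ON echo_sm.usernotificationfilters (filterid, filterby, filtertype);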
This is a separate independent way to approach it.
I think your
WHERE NOT EXISTS (SELECT 1...
could be correctly changed to
WHERE t1.filtertype <> 9 AND NOT EXISTS (SELECT 1...
because the case where t1.filtertype=9 would filter itself out. Is that correct? If so, you could try writing it that way, as the planner is probably not smart enough to make that transformation on its own. Once you have done that, then maybe a filtered index, something like the below, would come in useful.
create index on echo_sm.usernotificationfilters (filterid, filterby, filtertype)
where filtertype <> 9
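Putting that rewrite together with the partial index, the second branch of the UNION might look like this (a sketch, valid only if the transformation above is indeed equivalent):
SELECT t1.filterid, '-1' AS filterby, 9 AS filtertype
FROM echo_sm.usernotificationfilters t1
WHERE t1.filtertype <> 9
  AND NOT EXISTS (SELECT 1
                  FROM echo_sm.usernotificationfilters t2
                  WHERE t2.filtertype = 9 AND t2.filterid = t1.filterid)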
But, unless you get rid of or speed up that sort, there is only so much improvement you can get with other things.
It appears you only want to retrieve one record per filterid: a record with filtertype = 9 if available, or otherwise just any record, with dummy values for the other columns. This can be done by ordering by (filtertype<>9), filtertype and picking only the first row via row_number() = 1:
-- EXPLAIN ANALYZE
SELECT xx.filterid
, case(xx.filtertype) when 9 then xx.filterby ELSE '-1' END AS filterby
, 9 AS filtertype -- xx.filtertype
-- , xx.rn
FROM (
SELECT t1.filterid , t1.filterby , t1.filtertype
, row_number() OVER (PARTITION BY t1.filterid ORDER BY (filtertype<>9), filtertype ) AS rn
FROM userfilters t1
) xx
WHERE xx.rn = 1
-- ORDER BY xx.filterid, xx.rn
;
This query can be supported by an index on the same expression:
CREATE INDEX ON userfilters ( filterid , (filtertype<>9), filtertype ) ;
But, on my machine the UNION ALL version is faster (using the same index):
EXPLAIN ANALYZE
SELECT t1.filterid
, t1.filterby
, t1.filtertype
FROM userfilters t1
WHERE t1.filtertype = 9
UNION ALL
SELECT DISTINCT t1.filterid , '-1' AS filterby ,9 AS filtertype
FROM userfilters t1
WHERE NOT EXISTS (
SELECT *
FROM userfilters t2
WHERE t2.filtertype = 9 AND t2.filterid = t1.filterid
)
;
Even simpler (and faster!) is to use DISTINCT ON() , supported by the same conditional index:
-- EXPLAIN ANALYZE
SELECT DISTINCT ON (t1.filterid)
t1.filterid
, case(t1.filtertype) when 9 then t1.filterby ELSE '-1' END AS filterby
, 9 AS filtertype -- t1.filtertype
FROM userfilters t1
ORDER BY t1.filterid , (t1.filtertype<>9), t1.filtertype
;
If I want to select 0.5% rows, or even 5% rows from the following table via a PK, the query planner correctly chooses to use the PK index. Here is the table:
create table weather as
with numbers as(
select generate_series as id from generate_series(0,1048575))
select id,
50 + 50*sin(id) as temperature_in_f,
50 + 50*sin(id) as humidity_in_percent
from numbers;
alter table weather
add constraint pk_weather primary key(id);
vacuum analyze weather;
The stats are up-to-date, and the following query does use the PK index:
explain analyze select sum(w.id), sum(humidity_in_percent), count(*)
from weather as w
where w.id between 1 and 66720;
Suppose, however, that we need to join this table with another, much smaller, one:
create table lightnings
as
select id as weather_id
from weather
where humidity_in_percent between 99.99 and 100;
alter table lightnings
add constraint pk_lightnings
primary key(weather_id);
analyze lightnings;
Here is my join, in four logically equivalent forms:
explain analyze select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
explain analyze select sum(w.id), count(*)
from weather as w
join lightnings as l
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
explain analyze select sum(w.id), count(*)
from lightnings as l
join weather as w
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
-- replaced explicit join with where clause
explain analyze select sum(w.id), count(*)
from lightnings as l, weather as w
where w.humidity_in_percent between 99.99 and 100
and l.weather_id=w.id;
Unfortunately the query planner resorts to scanning the whole weather table:
"Aggregate (cost=22645.68..22645.69 rows=1 width=4) (actual time=167.427..167.427 rows=1 loops=1)"
" -> Hash Join (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)"
" Hash Cond: (w.id = l.weather_id)"
" -> Seq Scan on weather w (cost=0.00..22407.64 rows=5106 width=4) (actual time=0.013..158.593 rows=6672 loops=1)"
" Filter: ((humidity_in_percent >= 99.99::double precision) AND (humidity_in_percent <= 100::double precision))"
" Rows Removed by Filter: 1041904"
" -> Hash (cost=96.72..96.72 rows=6672 width=4) (actual time=2.479..2.479 rows=6672 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 235kB"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.009..0.908 rows=6672 loops=1)"
"Planning time: 0.326 ms"
"Execution time: 167.581 ms"
The query planner's estimate of how many rows in the weather table will be selected is rows=5106. This is more or less close to the exact value of 6672. If I select this small number of rows from the weather table via id, the PK index is used. If I select the same amount via a join with another table, the query planner goes for scanning the table.
What am I missing?
select version()
"PostgreSQL 9.4.0"
Edit: if I remove the condition on humidity, the query planner correctly recognizes that the condition on weather.id is quite selective, and chooses to use the index on PK:
explain analyze select sum(w.id), count(*) from weather as w
where exists(select * from lightnings as l
where l.weather_id=w.id);
"Aggregate (cost=14677.84..14677.85 rows=1 width=4) (actual time=37.200..37.200 rows=1 loops=1)"
" -> Nested Loop (cost=0.42..14644.48 rows=6672 width=4) (actual time=0.022..36.189 rows=6672 loops=1)"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.011..0.868 rows=6672 loops=1)"
" -> Index Only Scan using pk_weather on weather w (cost=0.42..2.17 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=6672)"
" Index Cond: (id = l.weather_id)"
" Heap Fetches: 0"
"Planning time: 0.321 ms"
"Execution time: 37.254 ms"
Yet adding a condition totally confuses the query planner.
Expecting the optimiser to use an index on the PK of the larger table implies that you expect the query to be driven from the smaller table. Of course, you know that the rows that the smaller table will join to in the larger one are the same as those selected by the predicate on it, but the optimiser does not.
Look at the line on the plan:
Hash Join (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)"
It expects 32 rows to result from the join, but 6672 actually result.
Anyway, it pretty much has the option of:
A full scan on the smaller table, and an index lookup on the larger, with the predicate being used to filter out rows subsequent to the join (and expecting most of the rows to then be filtered out).
A full scan on both tables, with rows being removed by the predicate on the larger table, and a hash join of the result.
A scan of the larger table with rows being removed by the predicate, and an index lookup on the smaller table that may fail to find a value.
The second of these has been judged to be the lowest cost, and I think it is correct to do so based on the evidence it has, as hash joins are very efficient for joining many rows.
Of course it would probably be more efficient to place an index on weather(humidity_in_percent,id) in this particular case, but I suspect that this is a modified version of your real situation (the sum of the id column?) so specific advice may not be applicable.
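For reference, that index would be (a sketch):
create index on weather (humidity_in_percent, id);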
I believe the difference you're seeing between the first query, which uses the index, and the other 3, which don't, is in the where clause.
In the first query, your where clause is on w.id, which is indexed.
In the other 3, the effective where clause is on w.humidity_in_percent. I tested the following ...
create index wh_idx on weather(humidity_in_percent);
explain analyse select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
and get a much better plan. I tried to post the actual plan returned, but I'm having trouble formatting it for proper display, sorry.
My end goal is to optimize my query using indexes, but I'm having trouble adding the right index. Everything I've tried results in the same cost in the Explain diagram, and no indication that it's even using any indexes.
I have two tables:
event that has two date columns: start_date and end_date (can be null).
fiscal_date that has:
two date columns start_date and end_date (cannot be null)
a fiscal_year column of type char(4)
a fiscal_quarter column of type char(1)
There's another table address that's just a one-to-one with a foreign key in event. There are no indexes on it save the primary key.
I have a query that I can't change that figures out what fiscal quarter and year the event starts in:
SELECT
e.*,
(select 'Q' || fd.fiscal_quarter || ' FY' || fd.fiscal_year
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_quarter_year,
(select 'Q' || fd.fiscal_quarter
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_quarter,
(select 'FY' || fd.fiscal_year
from fiscal_date fd
where e.start_date between fd.start_date and fd.end_date
limit 1) as fiscal_year,
a.street1, a.street2, a.street3, a.city, a.state, a.country, a.postal_code
FROM event AS e
LEFT OUTER JOIN address a ON e.address_id=a.address_id;
Here's an EXPLAIN of the query (notice all the expensive seq scans on the left):
As requested, here's the output of explain analyze:
Hash Left Join (cost=115.78..2846.64 rows=1649 width=5087) (actual time=18.334..134.279 rows=1649 loops=1)
Hash Cond: (e.address_id = a.address_id)
-> Seq Scan on event e (cost=0.00..323.49 rows=1649 width=5031) (actual time=0.223..19.808 rows=1649 loops=1)
-> Hash (cost=68.68..68.68 rows=3768 width=60) (actual time=17.797..17.797 rows=3768 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 248kB
-> Seq Scan on address a (cost=0.00..68.68 rows=3768 width=60) (actual time=0.004..9.071 rows=3768 loops=1)
SubPlan 1
-> Limit (cost=0.00..0.49 rows=1 width=28) (actual time=0.011..0.014 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.46 rows=3 width=28) (actual time=0.006..0.006 rows=1 loops=1649)
Filter: (($0 >= start_date) AND ($0 <= end_date))
SubPlan 2
-> Limit (cost=0.00..0.48 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.43 rows=3 width=8) (actual time=0.006..0.006 rows=1 loops=1649)
Filter: (($1 >= start_date) AND ($1 <= end_date))
SubPlan 3
-> Limit (cost=0.00..0.48 rows=1 width=20) (actual time=0.010..0.012 rows=1 loops=1649)
-> Seq Scan on fiscal_date fd (cost=0.00..1.43 rows=3 width=20) (actual time=0.005..0.005 rows=1 loops=1649)
Filter: (($2 >= start_date) AND ($2 <= end_date))
Total runtime: 138.008 ms
I've tried adding indexes to event that index the start and end dates (both and individually), adding an index to fiscal_date's date columns, but nothing seems to be reducing the cost calculation of this query.
How do I optimize this query, or isn't it possible?
Ok, so your problem isn't the fiscal_date sequential scans; the number of rows there is so small that a sequential scan is probably the correct thing to do. You probably want an index on address_id in both tables.
If address_id is the primary key of the address table, it is already indexed.
Also, just to be sure, run vacuum full and vacuum analyze on all tables.
EDIT:
The performance seems really bad considering there are so few rows (under 10,000 is nothing). Are the tables really big, or is the hardware ancient? If not, you should probably take a serious look at the configuration (work_mem etc.).
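A quick way to inspect the relevant settings (a sketch):
SHOW work_mem;
SHOW shared_buffers;
SHOW effective_cache_size;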