Simple WHERE EXISTS ... ORDER BY ... query very slow in PostgreSQL - postgresql

I have this very simple query, generated by my ORM (Entity Framework Core):
SELECT *
FROM "table1" AS "t1"
WHERE EXISTS (
    SELECT 1
    FROM "table2" AS "t2"
    WHERE ("t2"."is_active" = TRUE) AND ("t1"."table2_id" = "t2"."id"))
ORDER BY "t1"."table2_id"
There are only 2 rows in table2 where "is_active" is TRUE. The other involved columns ("id") are the primary keys. The query returns exactly 4 rows.
Table 1 is 96 million records.
Table 2 is 30 million records.
The 3 columns involved in this query are indexed (is_active, id, table2_id).
The C#/LINQ code that generates this simple query is: Table2.Where(t => t.IsActive).Include(t => t.Table1).ToList();
SET STATISTICS 10000 was applied to all 3 of these columns.
VACUUM FULL ANALYZE was run on both tables.
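For reference, a sketch of how those per-column statistics targets are set, followed by a fresh ANALYZE:
ALTER TABLE table2 ALTER COLUMN is_active SET STATISTICS 10000;
ALTER TABLE table2 ALTER COLUMN id SET STATISTICS 10000;
ALTER TABLE table1 ALTER COLUMN table2_id SET STATISTICS 10000;
ANALYZE table2;
ANALYZE table1;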
WITHOUT the ORDER BY clause, the query returns within a few milliseconds, which is exactly what I'd expect for 4 returned records. EXPLAIN output:
Nested Loop  (cost=1.13..13.42 rows=103961024 width=121)
  ->  Index Scan using table2_is_active_idx on table2  (cost=0.56..4.58 rows=1 width=8)
        Index Cond: (is_active = true)
        Filter: is_active
  ->  Index Scan using table1_table2_id_fkey on table1 t1  (cost=0.57..8.74 rows=10 width=121)
        Index Cond: (table2_id = table1.id)
WITH the ORDER BY clause, the query takes 5 minutes to complete! EXPLAIN output:
Merge Semi Join  (cost=10.95..4822984.67 rows=103961040 width=121)
  Merge Cond: (t1.table2_id = t2.id)
  ->  Index Scan using table1_table2_id_fkey on table1 t1  (cost=0.57..4563070.61 rows=103961040 width=121)
  ->  Sort  (cost=4.59..4.59 rows=2 width=8)
        Sort Key: t2.id
        ->  Index Scan using table2_is_active_idx on table2 a  (cost=0.56..4.58 rows=2 width=8)
              Index Cond: (is_active = true)
              Filter: is_active
The index scan on table2 should return no more than 2 rows. But the index scan on table1 doesn't make any sense with its cost of 4563070 and 103961040 rows - it only has to match 2 rows in table2 with 4 rows in table1!
This is a very simple query with very few records to return. Why is Postgres failing to execute it properly?

OK, I solved my problem in the most unexpected way: I upgraded PostgreSQL from 9.6.1 to 9.6.3. And that was it. After restarting the service, the explain plan looked good and the query ran just fine this time. I did not change anything else - no new index, nothing. The only explanation I can think of is that there was a query planner bug in 9.6.1 that was fixed in 9.6.3. Thank you all for your answers!

Add an index:
CREATE INDEX _index
ON table2
USING btree (id)
WHERE is_active IS TRUE;
And rewrite the query like this:
SELECT table1.*
FROM table2
INNER JOIN table1 ON (table1.table2_id = table2.id)
WHERE table2.is_active IS TRUE
ORDER BY table2.id
It is necessary to take into account that "is_active IS TRUE" and "is_active = TRUE" are processed by PostgreSQL in different ways. So the expression in the index predicate and in the query must match.
If you can't rewrite the query, try adding an index:
CREATE INDEX _index
ON table2
USING btree (id)
WHERE is_active = TRUE;
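A minimal demonstration of why the two spellings differ for NULL input:
SELECT NULL::boolean = TRUE;  -- yields NULL
SELECT NULL::boolean IS TRUE; -- yields false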

Your guess is right, there is a bug in Postgres 9.6.1 that fits your use case exactly. And upgrading was the right thing to do. Upgrading to the latest point-release is always the right thing to do.
Quoting the release notes for Postgres 9.6.2:
Fix foreign-key-based join selectivity estimation for semi-joins and
anti-joins, as well as inheritance cases (Tom Lane)
The new code for taking the existence of a foreign key relationship
into account did the wrong thing in these cases, making the estimates
worse not better than the pre-9.6 code.
You should still create that partial index like Dima advised. But keep it simple:
is_active = TRUE and is_active IS TRUE subtly differ in that the second returns FALSE instead of NULL for NULL input. But none of this matters in a WHERE clause where only TRUE qualifies. And both expressions are just noise. In Postgres you can use boolean values directly:
CREATE INDEX t2_id_idx ON table2 (id) WHERE is_active; -- that's all
And do not rewrite your query with a LEFT JOIN. This would add rows consisting of NULL values to the result for "active" rows in table2 without any siblings in table1. To match your current logic it would have to be an [INNER] JOIN:
SELECT t1.*
FROM table2 t2
JOIN table1 t1 ON t1.table2_id = t2.id -- and no parentheses needed
WHERE t2.is_active -- that's all
ORDER BY t1.table2_id;
But there is no need to rewrite your query that way at all. The EXISTS semi-join you have is just as good, and it results in the same query plan once you have the partial index.
SELECT *
FROM table1 t1
WHERE EXISTS (
   SELECT 1 FROM table2
   WHERE is_active -- that's all
   AND   id = t1.table2_id
   )
ORDER BY table2_id;
BTW, since you fixed the bug by upgrading, and once you have created that partial index (and ANALYZE or VACUUM ANALYZE has run on the table at least once - possibly via autovacuum), you will never again get a bad query plan for this: Postgres maintains separate estimates for the partial index, which are unambiguous for your numbers. Details:
Get count estimates from pg_class.reltuples for given conditions
Index that is not used, yet influences query
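For example, the planner's separate row estimate for the partial index can be inspected directly (a sketch, using the index name from the CREATE INDEX above):
SELECT reltuples FROM pg_class WHERE relname = 't2_id_idx';
-- for a partial index, reltuples counts only the rows the index covers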

Related

Postgresql planner won't select index for 'NOT' queries

[Title updated to reflect updates in description]
I am running PostgreSQL 9.6.
I have a complex query that isn't using the indexes that I expect, when I break it down to this small example I am lost as to why the index isn't being used.
These examples run on a table with 1 million records, and currently all records have the value 'COMPLETED' for column state. State is a text column and I have a btree index on it.
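For reference, the index in question - named request_state_index in the EXPLAIN output below - would have been created along these lines:
CREATE INDEX request_state_index ON request.request (state);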
The following query uses my index as I'd expect:
explain analyze
SELECT * FROM (
    SELECT
        q.state = 'COMPLETED'::text AS completed_successfully
    FROM request.request q
) a WHERE NOT completed_successfully;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using request_state_index on request q (cost=0.43..88162.19 rows=11200 width=1) (actual time=200.554..200.554 rows=0 loops=1)
Filter: (state <> 'COMPLETED'::text)
Rows Removed by Filter: 1050005
Heap Fetches: 198150
Planning time: 0.272 ms
Execution time: 200.579 ms
(6 rows)
But if I add anything else to the select that references my table, then the planner chooses to do a sequential scan instead.
explain analyze
SELECT * FROM (
    SELECT
        q.state = 'COMPLETED'::text AS completed_successfully,
        q.type
    FROM request.request q
) a WHERE NOT completed_successfully;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Seq Scan on request q (cost=0.00..234196.06 rows=11200 width=8) (actual time=407.713..407.713 rows=0 loops=1)
Filter: (state <> 'COMPLETED'::text)
Rows Removed by Filter: 1050005
Planning time: 0.113 ms
Execution time: 407.733 ms
(5 rows)
Even this simpler example has the same issue.
Uses Index:
SELECT
q.state
FROM request.request q
WHERE q.state = 'COMPLETED';
Doesn't use Index:
SELECT
q.state,
q.type
FROM request.request q
WHERE q.state = 'COMPLETED';
[UPDATE]
I now understand (for this case) that the scan it's using there is an INDEX ONLY scan, and that it stops using it in this case because type isn't also in the index. So the question perhaps is why it won't use the index in the 'NOT' cases below:
When I use a different value that isn't in the table, it knows to use the index (which makes sense):
SELECT
q.state,
q.type
FROM request.request q
WHERE q.state = 'CREATED';
But if I negate it, it doesn't:
SELECT
q.state,
q.type
FROM request.request q
WHERE q.state != 'COMPLETED';
Why is my index not being used?
What can I do to ensure it gets used?
Most of the time, I expect nearly all the records in this table to be in one of several end states (matched with the IN operator). So when running my more complex query, I expect these records to be excluded from the more expensive part of the query early and quickly.
[UPDATES]
It looks like 'NOT' is not a supported B-Tree operation. I'll need some kind of unique approach: https://www.postgresql.org/docs/current/indexes-types.html#INDEXES-TYPES-BTREE
I tried adding the following partial indexes but they didn't seem to work:
CREATE INDEX request_incomplete_state_index ON request.request (state) WHERE state NOT IN('COMPLETED', 'FAILED', 'CANCELLED');
CREATE INDEX request_complete_state_index ON request.request (state) WHERE state IN('COMPLETED', 'FAILED', 'CANCELLED');
This partial index does work, but is not an ideal solution.
CREATE INDEX request_incomplete_state_exact_index ON request.request (state) WHERE state != 'COMPLETED';
explain analyze SELECT q.state, q.type FROM request.request q WHERE q.state != 'COMPLETED';
I also tried this expression index; while also not ideal, it didn't work either:
CREATE OR REPLACE FUNCTION request.request_is_done(in_state text)
RETURNS BOOLEAN
LANGUAGE sql
IMMUTABLE -- must be IMMUTABLE (not STABLE) for use in an index expression
AS $function$
SELECT in_state IN ('COMPLETED', 'FAILED', 'CANCELLED');
$function$
;
CREATE INDEX request_is_done_index ON request.request (request.request_is_done(state));
explain analyze select * from request.request q where NOT request.request_is_done(state);
Using a list (IN clause) of states with equality works. So I may have to restructure my larger query to just not use the NOT.
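A minimal sketch of that rewrite - equality (and therefore an IN list) is B-tree-indexable, unlike != ('CREATED' and 'RUNNING' are stand-ins for the actual non-end states):
SELECT q.state, q.type
FROM request.request q
WHERE q.state IN ('CREATED', 'RUNNING');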

UPDATE query for 180k rows in 10M row table unexpectedly slow

I have a table that is getting too big and I want to reduce its size
with an UPDATE query. Some of the data in this table is redundant, and
I should be able to reclaim a lot of space by setting the redundant
"cells" to NULL. However, my UPDATE queries are taking excessive
amounts of time to complete.
Table details
-- table1 10M rows (estimated)
-- 45 columns
-- Table size 2200 MB
-- Toast Table size 17 GB
-- Indexes Size 1500 MB
-- columns in query:
-- id integer primary key
-- testid integer foreign key
-- band integer
-- date timestamptz indexed
-- data1 real[]
-- data2 real[]
-- data3 real[]
This was my first attempt at an update query. I broke it up with some
temporary tables just to get the ids to update. Further, to reduce the
scope of the query, I selected a date range of June 2020:
CREATE TEMP TABLE A as
SELECT testid
FROM table1
WHERE date BETWEEN '2020-06-01' AND '2020-07-01'
AND band = 3;
CREATE TEMP TABLE B as -- this table has 180k rows
SELECT id
FROM table1
WHERE date BETWEEN '2020-06-01' AND '2020-07-01'
AND testid in (SELECT testid FROM A)
AND band > 1;

UPDATE table1
SET data1 = Null, data2 = Null, data3 = Null
WHERE id in (SELECT id FROM B);
Queries for creating TEMP tables execute in under 1 sec. I ran the UPDATE query for an hour(!) before I finally killed it. Only 180k
rows needed to be updated. It doesn't seem like it should take that much
time to update that many rows. Temp table B identifies exactly which
rows to update.
Here is the EXPLAIN from the above UPDATE query. One of the odd features of this explain is that it shows 4.88M rows, but there are only 180k rows to update.
Update on table1  (cost=3212.43..4829.11 rows=4881014 width=309)
  ->  Nested Loop  (cost=3212.43..4829.11 rows=4881014 width=309)
        ->  HashAggregate  (cost=3212.00..3214.00 rows=200 width=10)
              ->  Seq Scan on b  (cost=0.00..2730.20 rows=192720 width=10)
        ->  Index Scan using table1_pkey on table1  (cost=0.43..8.07 rows=1 width=303)
              Index Cond: (id = b.id)
Another way to run this query is in one shot:
WITH t AS (
    SELECT id FROM table1
    WHERE testid IN (
        SELECT testid
        FROM table1
        WHERE date BETWEEN '2020-06-01' AND '2020-07-01'
        AND band = 3
    )
)
UPDATE table1 a
SET data1 = Null, data2 = Null, data3 = Null
FROM t
WHERE a.id = t.id
I only ran this one for about 10 minutes before I killed it. It feels like I should be able to run this query in much less time if I just knew the tricks. Its EXPLAIN is below; it shows 195k rows, which is closer to what I expected, but the cost is much higher, at 1.3M to 1.7M:
Update on testlog a  (cost=1337986.60..1740312.98 rows=195364 width=331)
  CTE t
    ->  Hash Join  (cost=8834.60..435297.00 rows=195364 width=4)
          Hash Cond: (testlog.testid = testlog_1.testid)
          ->  Seq Scan on testlog  (cost=0.00..389801.27 rows=9762027 width=8)
          ->  Hash  (cost=8832.62..8832.62 rows=158 width=4)
                ->  HashAggregate  (cost=8831.04..8832.62 rows=158 width=4)
                      ->  Index Scan using amptest_testlog_date_idx on testlog testlog_1  (cost=0.43..8820.18 rows=4346 width=4)
                            Index Cond: ((date >= '2020-06-01 00:00:00-07'::timestamp with time zone) AND (date <= '2020-07-01 00:00:00-07'::timestamp with time zone))
                            Filter: (band = 3)
  ->  Hash Join  (cost=902689.61..1305015.99 rows=195364 width=331)
        Hash Cond: (t.id = a.id)
        ->  CTE Scan on t  (cost=0.00..3907.28 rows=195364 width=32)
        ->  Hash  (cost=389801.27..389801.27 rows=9762027 width=303)
              ->  Seq Scan on testlog a  (cost=0.00..389801.27 rows=9762027 width=303)
Edit: one of the suggestions in the accepted answer was to drop any indexes before the update and then add them back later. This is what I went with, with a twist: I needed another table to hold indexed data from the dropped indexes to make the A and B queries faster:
CREATE TABLE tempid AS
SELECT id, testid, band, date
FROM table1
I made indexes on this table for id, testid, and date. Then I replaced table1 in the A and B queries with tempid. It still went slower than I would have liked, but it did get the job done.
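For reference, the indexes described would have looked something like this (the names are assumptions):
CREATE INDEX tempid_id_idx ON tempid (id);
CREATE INDEX tempid_testid_idx ON tempid (testid);
CREATE INDEX tempid_date_idx ON tempid (date);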
You might have another table with a foreign key referencing one or more of the columns you are setting to NULL, and that foreign table might not have an index on the referencing column.
Each time you set a row's value to NULL, the database has to check the foreign table - maybe it has a row that references the value you are removing.
If this is the case, you should be able to speed it up by adding an index on this remote table.
For example if you have a table like this:
create table table2 (
    id serial primary key,
    band integer references table1(data1)
);
Then you can create the index:
CREATE INDEX table2_band_nnull_idx ON table2 (band) WHERE band IS NOT NULL;
But you suggested that all columns you are setting to NULL have array type. This means that it is unlikely that they are referenced. Still it is worth checking.
Another possibility is that you have a trigger on the table that works slowly.
Another possibility is that you have a lot of indexes on the table. Each index has to be updated for each row you update and it can use only a single processor core.
Sometimes it is faster to drop all indexes, do the bulk update, and then recreate them all. Creating indexes can use multiple cores - one core per index.
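A sketch of that pattern for this case (the secondary index name is hypothetical; the primary key stays in place so the WHERE id IN (...) lookup remains fast):
DROP INDEX IF EXISTS table1_date_idx; -- hypothetical name of the date index
UPDATE table1
SET data1 = NULL, data2 = NULL, data3 = NULL
WHERE id IN (SELECT id FROM B);
CREATE INDEX table1_date_idx ON table1 (date);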
Another possibility is that your query is waiting for some other query to finish and release its locks. You should check with:
select now()-query_start, * from pg_stat_activity where state<>'idle' order by 1;

PostgreSQL: should a LEFT JOIN subquery use WHERE, or is ON enough?

When you join to a subquery, should you use a WHERE inside it, or is the outer ON s.id = t.id enough? I want to understand whether a subquery without a WHERE selects all the rows and then filters them, or whether it selects only the rows that match the condition ON add.id = table.id.
SELECT * FROM table
left join (
    select *
    from add
    /* where add.id = 1 */ -- do I need this?
    group by add.id
) add on add.id = table.id
WHERE table.id = 1
As I understand from EXPLAIN:
Nested Loop Left Join  (cost=2.95..13.00 rows=10 width=1026)
  Join Filter: (add.id = table.id)
It loads all rows and then does a filter. Is that bad?
I'm not sure if your example is too simple, but you shouldn't need a subquery at all for this one - and definitely not the GROUP BY.
Supposing you do need a subquery: for this specific example, it leads to exactly the same query plan whether you add the WHERE clause or not. The idea of the query planner is that it tries to find a way to make your query as fast as possible. Oftentimes this means ordering the execution of joins and WHERE clauses in such a way that the result set is reduced sooner rather than later. I generated exactly the same query, only with reservations and customers; I hope that's okay.
EXPLAIN
SELECT *
FROM reservations
LEFT OUTER JOIN (
SELECT *
FROM customers
) AS customers ON customers.id = reservations.customer_id
WHERE customer_id = 1;
Nested Loop Left Join  (cost=0.17..183.46 rows=92 width=483)
  Join Filter: (customers.id = reservations.customer_id)
  ->  Index Scan using index_reservations_on_customer_id on reservations  (cost=0.09..179.01 rows=92 width=255)
        Index Cond: (customer_id = 1)
  ->  Materialize  (cost=0.08..4.09 rows=1 width=228)
        ->  Index Scan using customers_pkey on customers  (cost=0.08..4.09 rows=1 width=228)
              Index Cond: (id = 1)
The deepest arrows are executed first. This means that even though I didn't have the equivalent of where add.id = 1 in my subquery, the planner still knew that the equality customers.id = customer_id = 1 should hold, so it decided to filter on customers.id = 1 before even attempting to join anything.

Lock one table at update and another in subquery, which one will be locked first?

I have a query like this:
UPDATE table1 SET
col = 'some value'
WHERE id = X
RETURNING col1, (SELECT col2 FROM table2 WHERE id = table1.table2_id FOR UPDATE);
So, this query will lock both tables, table1 and table2, right? But which one will be locked first?
The execution plan for the query will probably look like this:
QUERY PLAN
-------------------------------------------------------------------------------------------
Update on laurenz.table1
  Output: table1.col1, (SubPlan 1)
  ->  Index Scan using table1_pkey on laurenz.table1
        Output: table1.id, table1.table2_id, 'some value'::text, table1.col1, table1.ctid
        Index Cond: (table1.id = 42)
  SubPlan 1
    ->  LockRows
          Output: table2.col2, table2.ctid
          ->  Index Scan using table2_pkey on laurenz.table2
                Output: table2.col2, table2.ctid
                Index Cond: (table2.id = table1.table2_id)
That suggests that the row in table1 is locked first.
Looking into the code, I see that ExecUpdate first calls EvalPlanQual, where the updated tuple is locked, and only after that calls ExecProcessReturning where the RETURNING clause is processed.
So yes, the row in table1 is locked first.
So far, I have treated row locks, but there are also the ROW EXCLUSIVE locks on the tables themselves:
The tables are all locked in InitPlan in execMain.c, and it seems to me that again table1 will be locked before table2 here.
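One way to observe those table-level locks from a second session, while the UPDATE's transaction is still open (a sketch):
SELECT relation::regclass AS locked_table, mode, granted
FROM pg_locks
WHERE locktype = 'relation'
AND relation IN ('table1'::regclass, 'table2'::regclass);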

Why can't PostgreSQL do this simple FULL JOIN?

Here's a minimal setup with 2 tables a and b each with 3 rows:
CREATE TABLE a (
    id SERIAL PRIMARY KEY,
    value TEXT
);
CREATE INDEX ON a (value);

CREATE TABLE b (
    id SERIAL PRIMARY KEY,
    value TEXT
);
CREATE INDEX ON b (value);

INSERT INTO a (value) VALUES ('x'), ('y'), (NULL);
INSERT INTO b (value) VALUES ('y'), ('z'), (NULL);
Here is a LEFT JOIN that works fine as expected:
SELECT * FROM a
LEFT JOIN b ON a.value IS NOT DISTINCT FROM b.value;
with output:
 id | value | id | value
----+-------+----+-------
  1 | x     |    |
  2 | y     |  1 | y
  3 |       |  3 |
(3 rows)
Changing "LEFT JOIN" to "FULL JOIN" gives an error:
SELECT * FROM a
FULL JOIN b ON a.value IS NOT DISTINCT FROM b.value;
ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditions
Can someone please answer:
What is a "merge-joinable or hash-joinable join condition", and why does joining on a.value IS NOT DISTINCT FROM b.value not fulfill this condition, while a.value = b.value is perfectly fine?
It seems that the only difference is how NULL values are handled. Since the value column is indexed in both tables, an EXPLAIN shows that a NULL lookup is just as efficient as looking up non-NULL values:
EXPLAIN SELECT * FROM a WHERE value = 'x';
QUERY PLAN
--------------------------------------------------------------------------
Bitmap Heap Scan on a (cost=4.20..13.67 rows=6 width=36)
Recheck Cond: (value = 'x'::text)
-> Bitmap Index Scan on a_value_idx (cost=0.00..4.20 rows=6 width=0)
Index Cond: (value = 'x'::text)
EXPLAIN SELECT * FROM a WHERE value ISNULL;
QUERY PLAN
--------------------------------------------------------------------------
Bitmap Heap Scan on a (cost=4.20..13.65 rows=6 width=36)
Recheck Cond: (value IS NULL)
-> Bitmap Index Scan on a_value_idx (cost=0.00..4.20 rows=6 width=0)
Index Cond: (value IS NULL)
This has been tested with PostgreSQL 9.6.3 and 10beta1.
There has been discussion about this issue, but it doesn't directly answer the above question.
PostgreSQL implements FULL OUTER JOIN with either a hash or a merge join.
To be eligible for such a join, the join condition has to have the form
<expression using only left table> <operator> <expression using only right table>
Now your join condition does look like this, but PostgreSQL does not have a special IS NOT DISTINCT FROM operator, so it parses your condition into:
(NOT ($1 IS DISTINCT FROM $2))
And such an expression cannot be used for hash or merge joins, hence the error message.
I can think of a way to work around it:
SELECT a_id, NULLIF(a_value, '<null>'),
       b_id, NULLIF(b_value, '<null>')
FROM (SELECT id AS a_id,
             COALESCE(value, '<null>') AS a_value
      FROM a
     ) x
FULL JOIN
     (SELECT id AS b_id,
             COALESCE(value, '<null>') AS b_value
      FROM b
     ) y
ON x.a_value = y.b_value;
That works if <null> does not appear anywhere in the value columns.
I just solved such a case by replacing the ON condition with TRUE and moving the original ON condition into a WHERE clause. I don't know the performance impact of this, though.