PostgreSQL: should a LEFT JOIN use WHERE, or is ON enough?

When you write a subquery in a join, do you need a WHERE clause inside it, or is 'ON s.id = t.id' enough? I want to understand whether a subquery without a WHERE selects all the rows and filters them afterwards, or whether it selects only the rows that match the condition 'ON add.id = table.id'.
SELECT * FROM table
LEFT JOIN (
    SELECT *
    FROM add
    /* WHERE add.id = 1 -- do I need this? */
    GROUP BY add.id
) add ON add.id = table.id
WHERE table.id = 1
As I understand from EXPLAIN:
Nested Loop Left Join  (cost=2.95..13.00 rows=10 width=1026)
  Join Filter: (add.id = table.id)
it loads all the rows and then filters them. Is that bad?

I'm not sure if your example is too simple, but you shouldn't need a subquery at all for this one, and definitely not the GROUP BY.
Suppose you do need a subquery. For this specific example, it leads to exactly the same query plan whether you add the WHERE clause or not. The idea of the query planner is that it tries to find a way to make your query as fast as possible. Often this means ordering the execution of joins and WHERE clauses in such a way that the result set is reduced sooner rather than later. I generated exactly the same query, only with reservations and customers; I hope that's okay.
EXPLAIN
SELECT *
FROM reservations
LEFT OUTER JOIN (
    SELECT *
    FROM customers
) AS customers ON customers.id = reservations.customer_id
WHERE customer_id = 1;
Nested Loop Left Join  (cost=0.17..183.46 rows=92 width=483)
  Join Filter: (customers.id = reservations.customer_id)
  ->  Index Scan using index_reservations_on_customer_id on reservations  (cost=0.09..179.01 rows=92 width=255)
        Index Cond: (customer_id = 1)
  ->  Materialize  (cost=0.08..4.09 rows=1 width=228)
        ->  Index Scan using customers_pkey on customers  (cost=0.08..4.09 rows=1 width=228)
              Index Cond: (id = 1)
The deepest arrows are executed first. This means that even though I didn't have the equivalent of WHERE add.id = 1 in my subquery, the planner still knew that the equality customers.id = customer_id = 1 must hold, so it decided to filter on customers.id = 1 before even attempting to join anything.
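To see this for yourself, you can add the equivalent filter inside the subquery; as stated above, it should produce exactly the same plan (a quick check, reusing the reservations/customers example):
EXPLAIN
SELECT *
FROM reservations
LEFT OUTER JOIN (
    SELECT *
    FROM customers
    WHERE id = 1  -- the equivalent of "where add.id = 1" from the question
) AS customers ON customers.id = reservations.customer_id
WHERE customer_id = 1;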

PostgreSQL - Slow Count

I need to write a one-time query. It will be run once, and the data will be moved to another system (AWS Personalize). It certainly does not need to be fully optimized, but it at least needs to be sped up a bit, so that migrating the data is possible at all.
Coming from MySQL, I thought it would not be a problem. But after reading a lot, it seems the COUNT function is handled differently in PostgreSQL. With all that said, this is the query, reduced in size. There are several other joins (removed from this example), but they do not present an issue, at least judging by the query plan.
EXPLAIN
SELECT DISTINCT ON (p.id)
    'plan_progress' AS EVENT_TYPE,
    '-1' AS EVENT_VALUE,
    extract(EPOCH FROM p.created_at) AS CREATION_TIMESTAMP
FROM plans p
INNER JOIN schedules sch ON p.id = sch.plan_id
WHERE p.status = 'ENDED' AND p.continuous IS NOT TRUE
AND (SELECT count(id) FROM schedules s WHERE s.plan_id = sch.plan_id AND s.status = 'DONE') = 1
The issue is here:
select Count(id) FROM schedules s WHERE s.plan_id = sch.plan_id AND s.status = 'DONE'
The id field in the schedules table is uuid.
I have tried lots of things, but they all end up the same. Same or worse.
I have read somewhere that it is possible to use a row estimate in cases like this, but I honestly have no idea how to do that here.
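For what it's worth, a minimal sketch of reading the planner's row estimate for a whole table is below; note this is a single table-wide number, not the per-plan_id count the query needs, so it is not a drop-in replacement here:
-- the planner's estimated row count, maintained by ANALYZE/autovacuum
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'schedules';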
This is the query plan:
Unique  (cost=0.99..25152516038.36 rows=100054 width=88)
  ->  Nested Loop  (cost=0.99..25152515788.22 rows=100054 width=88)
        ->  Index Only Scan using idx_schedules_plan_id_done_date on schedules sch  (cost=0.56..25152152785.84 rows=107641 width=16)
              Filter: ((SubPlan 1) = 1)
              SubPlan 1
                ->  Aggregate  (cost=1168.28..1168.29 rows=1 width=8)
                      ->  Bitmap Heap Scan on schedules s  (cost=14.78..1168.13 rows=58 width=16)
                            Recheck Cond: (plan_id = sch.plan_id)
                            Filter: ((status)::text = 'DONE'::text)
                            ->  Bitmap Index Scan on idx_schedules_plan_id_done_date  (cost=0.00..14.77 rows=294 width=0)
                                  Index Cond: (plan_id = sch.plan_id)
        ->  Index Scan using plans_pkey on plans p  (cost=0.42..3.37 rows=1 width=24)
              Index Cond: (id = sch.plan_id)
              Filter: ((continuous IS NOT TRUE) AND ((status)::text = 'ENDED'::text))
You are not selecting any column from the schedules table, so it can be omitted from the main query and put into an EXISTS() term.
DISTINCT is probably not needed, assuming id is a PK.
Maybe you don't need the COUNT() to be exactly one, but just > 0.
SELECT DISTINCT ON (p.id)
    'plan_progress' AS EVENT_TYPE
    , '-1' AS EVENT_VALUE
    , extract(EPOCH FROM p.created_at) AS CREATION_TIMESTAMP
FROM plans p
WHERE p.status = 'ENDED' AND p.continuous IS NOT TRUE
AND EXISTS (
    SELECT *
    FROM schedules sch
    WHERE p.id = sch.plan_id
)
AND EXISTS (
    SELECT *
    FROM schedules s
    WHERE s.plan_id = p.id
    AND s.status = 'DONE' -- <<-- Must there be EXACTLY ONE schedules record?
);
Now you can see that the first EXISTS() is actually not needed: if the second one yields TRUE, the first EXISTS() must yield TRUE, too.
SELECT -- DISTINCT ON (p.id)
    'plan_progress' AS EVENT_TYPE
    , '-1' AS EVENT_VALUE
    , extract(EPOCH FROM p.created_at) AS CREATION_TIMESTAMP
FROM plans p
WHERE p.status = 'ENDED' AND p.continuous IS NOT TRUE
AND EXISTS (
    SELECT *
    FROM schedules s
    WHERE s.plan_id = p.id
    AND s.status = 'DONE'
);
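If the count really must be exactly one, a sketch of an EXISTS-only variant (table and column names from the question) is to demand at least one DONE row and forbid a second, distinct one:
SELECT
    'plan_progress' AS EVENT_TYPE
    , '-1' AS EVENT_VALUE
    , extract(EPOCH FROM p.created_at) AS CREATION_TIMESTAMP
FROM plans p
WHERE p.status = 'ENDED' AND p.continuous IS NOT TRUE
AND EXISTS (
    SELECT 1
    FROM schedules s
    WHERE s.plan_id = p.id
    AND s.status = 'DONE'
)
AND NOT EXISTS ( -- no two distinct DONE rows for the same plan
    SELECT 1
    FROM schedules s1
    JOIN schedules s2 ON s2.plan_id = s1.plan_id
                     AND s2.status = 'DONE'
                     AND s2.id <> s1.id
    WHERE s1.plan_id = p.id
    AND s1.status = 'DONE'
);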

Why is "order by" on the primary key changing the query plan so that it ignores an useful index?

After investigating why a multi-column index doesn't help speed up a query when I was expecting it to, I realized that it's because of a simple ORDER BY clause.
I reduced the query to this simple form (first without the ORDER BY, then with it):
somedb=# explain select * from user_resource where resource_id = 943 and status = 2 limit 10;
                                                     QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.56..39.29 rows=10 width=44)
   ->  Index Scan using user_resource_resource_id_status on user_resource  (cost=0.56..5422.22 rows=1400 width=44)
         Index Cond: ((resource_id = 943) AND (status = 2))
(3 rows)

Time: 0.409 ms

somedb=# explain select * from user_resource where resource_id = 943 and status = 2 order by id desc limit 10;
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1000.46..4984.60 rows=10 width=44)
   ->  Gather Merge  (cost=1000.46..558780.31 rows=1400 width=44)
         Workers Planned: 2
         ->  Parallel Index Scan Backward using idx_121518_primary on user_resource  (cost=0.44..557618.69 rows=583 width=44)
               Filter: ((resource_id = 943) AND (status = 2))
Once I add the ORDER BY, you can see the user_resource_resource_id_status index is not used anymore and the query becomes ~10 times slower.
Why is this? And is there a way to fix it? I would think sorting by a simple integer field shouldn't make an index useless. Thank you.
It is related to the LIMIT: with ORDER BY id DESC LIMIT 10, the planner bets that scanning the primary key backward will quickly stumble over 10 rows matching the filter, which it estimates to be cheaper than fetching all ~1400 matching rows and sorting them. When the matching rows are rare or clustered at the far end, that bet loses badly.
You can run the query without the LIMIT clause and with an OFFSET 0 to prevent inlining the subquery, then apply the LIMIT:
select * from (
    select * from user_resource
    where resource_id = 943
    and status = 2
    offset 0
) sub
order by id desc
limit 10;
It also depends on how you created the index, e.g. with ASC or DESC ordering and NULLS FIRST or NULLS LAST.
See https://www.postgresql.org/docs/current/indexes-ordering.html, which explains how indexes interact with ORDER BY.
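Following that documentation, another option is a multicolumn index whose trailing column matches the ORDER BY, so the planner can satisfy both the filter and the ordering from a single index and stop after 10 rows. A sketch (the index name is made up):
CREATE INDEX user_resource_rid_status_id_idx
    ON user_resource (resource_id, status, id DESC);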

Lock one table in an UPDATE and another in a subquery, which one will be locked first?

I have a query like this:
UPDATE table1
SET col = 'some value'
WHERE id = X
RETURNING col1,
    (SELECT col2 FROM table2 WHERE id = table1.table2_id FOR UPDATE);
So, this query will lock both tables, table1 and table2, right? But which one will be locked first?
The execution plan for the query will probably look like this:
                                         QUERY PLAN
-------------------------------------------------------------------------------------------
 Update on laurenz.table1
   Output: table1.col1, (SubPlan 1)
   ->  Index Scan using table1_pkey on laurenz.table1
         Output: table1.id, table1.table2_id, 'some value'::text, table1.col1, table1.ctid
         Index Cond: (table1.id = 42)
   SubPlan 1
     ->  LockRows
           Output: table2.col2, table2.ctid
           ->  Index Scan using table2_pkey on laurenz.table2
                 Output: table2.col2, table2.ctid
                 Index Cond: (table2.id = table1.table2_id)
That suggests that the row in table1 is locked first.
Looking into the code, I see that ExecUpdate first calls EvalPlanQual, where the updated tuple is locked, and only after that calls ExecProcessReturning where the RETURNING clause is processed.
So yes, the row in table1 is locked first.
So far I have treated row locks, but there are also the ROW EXCLUSIVE locks on the tables themselves: the tables are all locked in InitPlan in execMain.c, and it seems to me that here, too, table1 will be locked before table2.
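If you want to observe the table-level locks yourself, a sketch (the id value and pid are placeholders): run the UPDATE in one session inside an open transaction, then inspect pg_locks from a second session:
-- session 1:
BEGIN;
UPDATE table1
SET col = 'some value'
WHERE id = 42
RETURNING col1,
    (SELECT col2 FROM table2 WHERE id = table1.table2_id FOR UPDATE);

-- session 2: table-level locks held by session 1 (substitute its pid)
SELECT relation::regclass, mode, granted
FROM pg_locks
WHERE locktype = 'relation' AND pid = 12345;
-- table1 should appear with RowExclusiveLock (from the UPDATE),
-- table2 with RowShareLock (from the FOR UPDATE)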

Simple WHERE EXISTS ... ORDER BY ... query very slow in PostgreSQL

I have this very simple query, generated by my ORM (Entity Framework Core):
SELECT *
FROM "table1" AS "t1"
WHERE EXISTS (
SELECT 1
FROM "table2" AS "t2"
WHERE ("t2"."is_active" = TRUE) AND ("t1"."table2_id" = "t2"."id"))
ORDER BY "t1"."table2_id"
There are 2 records with "is_active" = TRUE. The other involved columns ("id") are the primary keys. The query returns exactly 4 rows.
Table 1 is 96 million records.
Table 2 is 30 million records.
The 3 columns involved in this query are indexed (is_active, id, table2_id).
The C#/LINQ code that generates this simple query is: Table2.Where(t => t.IsActive).Include(t => t.Table1).ToList();
SET STATISTICS 10000 was applied to all 3 columns.
VACUUM FULL ANALYZE was run on both tables.
WITHOUT the ORDER BY clause, the query returns within a few milliseconds, and I’d expect nothing else for 4 records to return. EXPLAIN output:
Nested Loop  (cost=1.13..13.42 rows=103961024 width=121)
  ->  Index Scan using table2_is_active_idx on table2  (cost=0.56..4.58 rows=1 width=8)
        Index Cond: (is_active = true)
        Filter: is_active
  ->  Index Scan using table1_table2_id_fkey on table1 t1  (cost=0.57..8.74 rows=10 width=121)
        Index Cond: (table2_id = table2.id)
WITH the ORDER BY clause, the query takes 5 minutes to complete! EXPLAIN output:
Merge Semi Join  (cost=10.95..4822984.67 rows=103961040 width=121)
  Merge Cond: (t1.table2_id = t2.id)
  ->  Index Scan using table1_table2_id_fkey on table1 t1  (cost=0.57..4563070.61 rows=103961040 width=121)
  ->  Sort  (cost=4.59..4.59 rows=2 width=8)
        Sort Key: t2.id
        ->  Index Scan using table2_is_active_idx on table2 t2  (cost=0.56..4.58 rows=2 width=8)
              Index Cond: (is_active = true)
              Filter: is_active
The inner, first index scan should return no more than 2 rows. Then the outer, second index scan doesn't make any sense with its cost of 4563070 and 103961040 rows. It only has to match 2 rows in table2 with 4 rows in table1!
This is a very simple query with very few records to return. Why is Postgres failing to execute it properly?
OK, I solved my problem in the most unexpected way: I upgraded PostgreSQL from 9.6.1 to 9.6.3, and that was it. After restarting the service, the query plan looked good and the query ran just fine. I did not change anything else, no new index, nothing. The only explanation I can think of is that there was a query planner bug in 9.6.1 that was fixed by 9.6.3. Thank you all for your answers!
Add an index:
CREATE INDEX _index
    ON table2
    USING btree (id)
    WHERE is_active IS TRUE;
And rewrite the query like this:
SELECT table1.*
FROM table2
INNER JOIN table1 ON (table1.table2_id = table2.id)
WHERE table2.is_active IS TRUE
ORDER BY table2.id
It is necessary to take into account that "is_active IS TRUE" and "is_active = TRUE" are processed by PostgreSQL in different ways. So the expression in the index predicate and in the query must match.
If you can't rewrite the query, try adding an index:
CREATE INDEX _index
    ON table2
    USING btree (id)
    WHERE is_active = TRUE;
Your guess is right, there is a bug in Postgres 9.6.1 that fits your use case exactly. And upgrading was the right thing to do. Upgrading to the latest point-release is always the right thing to do.
Quoting the release notes for Postgres 9.6.2:
Fix foreign-key-based join selectivity estimation for semi-joins and
anti-joins, as well as inheritance cases (Tom Lane)
The new code for taking the existence of a foreign key relationship
into account did the wrong thing in these cases, making the estimates
worse not better than the pre-9.6 code.
You should still create that partial index like Dima advised. But keep it simple:
is_active = TRUE and is_active IS TRUE subtly differ in that the second returns FALSE instead of NULL for NULL input. But none of this matters in a WHERE clause where only TRUE qualifies. And both expressions are just noise. In Postgres you can use boolean values directly:
CREATE INDEX t2_id_idx ON table2 (id) WHERE is_active; -- that's all
And do not rewrite your query with a LEFT JOIN. This would add rows consisting of NULL values to the result for "active" rows in table2 without any siblings in table1. To match your current logic it would have to be an [INNER] JOIN:
SELECT t1.*
FROM table2 t2
JOIN table1 t1 ON t1.table2_id = t2.id -- and no parentheses needed
WHERE t2.is_active -- that's all
ORDER BY t1.table2_id;
But there is no need to rewrite your query that way at all. The EXISTS semi-join you have is just as good. Results in the same query plan once you have the partial index.
SELECT *
FROM table1 t1
WHERE EXISTS (
    SELECT 1 FROM table2
    WHERE is_active -- that's all
    AND id = t1.table2_id
)
ORDER BY table2_id;
BTW, since you fixed the bug by upgrading, and once you have created that partial index (and ANALYZE or VACUUM ANALYZE has run on the table at least once, possibly via autovacuum), you will never again get a bad query plan for this: Postgres maintains separate estimates for the partial index, and those are unambiguous for your numbers. Details:
Get count estimates from pg_class.reltuples for given conditions
Index that is not used, yet influences query
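As a quick sanity check, that separate estimate can be read straight from the catalog (a sketch reusing the index name from above); for an index, reltuples counts the entries it contains, i.e. the rows satisfying the partial predicate:
SELECT relname, reltuples::bigint AS estimated_active_rows
FROM pg_class
WHERE relname = 't2_id_idx';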

Why can't PostgreSQL do this simple FULL JOIN?

Here's a minimal setup with 2 tables a and b each with 3 rows:
CREATE TABLE a (
id SERIAL PRIMARY KEY,
value TEXT
);
CREATE INDEX ON a (value);
CREATE TABLE b (
id SERIAL PRIMARY KEY,
value TEXT
);
CREATE INDEX ON b (value);
INSERT INTO a (value) VALUES ('x'), ('y'), (NULL);
INSERT INTO b (value) VALUES ('y'), ('z'), (NULL);
Here is a LEFT JOIN that works fine as expected:
SELECT * FROM a
LEFT JOIN b ON a.value IS NOT DISTINCT FROM b.value;
with output:
 id | value | id | value
----+-------+----+-------
  1 | x     |    |
  2 | y     |  1 | y
  3 |       |  3 |
(3 rows)
Changing "LEFT JOIN" to "FULL JOIN" gives an error:
SELECT * FROM a
FULL JOIN b ON a.value IS NOT DISTINCT FROM b.value;
ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditions
Can someone please answer:
What is a "merge-joinable or hash-joinable join condition" and why joining on a.value IS NOT DISTINCT FROM b.value doesn't fulfill this condition, but a.value = b.value is perfectly fine?
It seems that the only difference is how NULL values are handled. Since the value column is indexed in both tables, running an EXPLAIN on a NULL lookup is just as efficient as looking up values that are non-NULL:
EXPLAIN SELECT * FROM a WHERE value = 'x';
                                QUERY PLAN
--------------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=4.20..13.67 rows=6 width=36)
   Recheck Cond: (value = 'x'::text)
   ->  Bitmap Index Scan on a_value_idx  (cost=0.00..4.20 rows=6 width=0)
         Index Cond: (value = 'x'::text)

EXPLAIN SELECT * FROM a WHERE value ISNULL;
                                QUERY PLAN
--------------------------------------------------------------------------
 Bitmap Heap Scan on a  (cost=4.20..13.65 rows=6 width=36)
   Recheck Cond: (value IS NULL)
   ->  Bitmap Index Scan on a_value_idx  (cost=0.00..4.20 rows=6 width=0)
         Index Cond: (value IS NULL)
This has been tested with PostgreSQL 9.6.3 and 10beta1.
There has been discussion about this issue, but it doesn't directly answer the above question.
PostgreSQL implements FULL OUTER JOIN with either a hash or a merge join.
To be eligible for such a join, the join condition has to have the form
<expression using only left table> <operator> <expression using only right table>
Now your join condition does look like this, but PostgreSQL does not have a special IS NOT DISTINCT FROM operator, so it parses your condition into:
(NOT ($1 IS DISTINCT FROM $2))
And such an expression cannot be used for hash or merge joins, hence the error message.
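You can check which operators are merge- or hash-joinable in the system catalog: for example, = on text qualifies for both, while IS NOT DISTINCT FROM has no pg_operator entry at all, precisely because it is not an operator:
SELECT oprname, oprcanmerge, oprcanhash
FROM pg_operator
WHERE oprname = '='
  AND oprleft = 'text'::regtype
  AND oprright = 'text'::regtype;
-- oprcanmerge = t, oprcanhash = t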
I can think of a way to work around it:
SELECT a_id, NULLIF(a_value, '<null>'),
b_id, NULLIF(b_value, '<null>')
FROM (SELECT id AS a_id,
COALESCE(value, '<null>') AS a_value
FROM a
) x
FULL JOIN
(SELECT id AS b_id,
COALESCE(value, '<null>') AS b_value
FROM b
) y
ON x.a_value = y.b_value;
That works if <null> does not appear anywhere in the value columns.
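For the sample data above, this should produce the full outer result, including the unmatched z row from b (column headers simplified):
 a_id | value | b_id | value
------+-------+------+-------
    1 | x     |      |
    2 | y     |    1 | y
    3 |       |    3 |
      |       |    2 | z
(4 rows)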
I just solved such a case by replacing the ON condition with "TRUE", and moving the original "ON" condition into a WHERE clause. I don't know the performance impact of this, though.
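A literal sketch of that rewrite is below, with a caveat: with ON TRUE, every row of a pairs with every row of b (whenever both tables are non-empty), so the join itself never produces NULL-extended rows, and the WHERE then keeps only matching pairs. The result therefore has inner-join semantics, not those of a true FULL JOIN:
SELECT *
FROM a
FULL JOIN b ON TRUE
WHERE a.value IS NOT DISTINCT FROM b.value;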