For the simplified query:
Select t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2
ON
(t1.c1 = t2.d2)
By the symmetric property of equality, it seems this would be exactly the same as if one reversed the ON clause to:
Select t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2
ON
(t2.d2 = t1.c1)
Is this always true in T-SQL, or is there some exception where one could get more rows if the query were subtly changed as described above?
I've learned that very subtle join changes (in queries much more complicated than this simple example) can affect row counts greatly.
Also, in addition to the rows returned (which I think should be EXACTLY THE SAME for both queries, always), I would suppose that one ordering of the "ON" clause could give the query better performance. Could someone verify that?
The equality in the ON condition of any sort of JOIN is commutative, as you have observed. You can write ON a = b or ON b = a and have them mean precisely the same thing; they're nothing but Boolean expressions.
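If you want to convince yourself against your own schema, a minimal check (a sketch using the placeholder table and column names from the question) is to capture the plan text for both orderings and compare:

SET SHOWPLAN_TEXT ON;
GO
-- ordering 1: t1 column on the left
SELECT t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2 ON t1.c1 = t2.d2;

-- ordering 2: t2 column on the left
SELECT t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2 ON t2.d2 = t1.c1;
GO
SET SHOWPLAN_TEXT OFF;
GO

You should see identical plans for both, which also answers the performance question: the optimizer treats the two spellings of the predicate the same, so neither ordering should be faster.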
Related
I've been trying to optimize this simple query on Postgres 12 that joins several tables to a base relation. They each have a 1-to-1 relation to the base and have anywhere from 10 thousand to 10 million rows.
SELECT *
FROM base
LEFT JOIN t1 ON t1.id = base.t1_id
LEFT JOIN t2 ON t2.id = base.t2_id
LEFT JOIN t3 ON t3.id = base.t3_id
LEFT JOIN t4 ON t4.id = base.t4_id
LEFT JOIN t5 ON t5.id = base.t5_id
LEFT JOIN t6 ON t6.id = base.t6_id
LEFT JOIN t7 ON t7.id = base.t7_id
LEFT JOIN t8 ON t8.id = base.t8_id
LEFT JOIN t9 ON t9.id = base.t9_id
(the actual relations are a bit more complicated than this, but for demonstration purposes this is fine)
I noticed that the query is still very slow even when I only do SELECT base.id, which seems odd, because then the query planner should know that the joins are unnecessary and they shouldn't affect performance.
Then I noticed that 8 seems to be some kind of magic number. If I remove any single one of the joins, the query time goes from 500 ms to 1 ms. With EXPLAIN I was able to see that Postgres does index-only scans when joining 8 tables, but with 9 tables it starts doing sequential scans.
That happens even when I only do SELECT base.id, so somehow the number of tables is tripping up the query planner.
We finally found out that there is indeed a configuration setting in Postgres called join_collapse_limit, which is set to 8 by default.
https://www.postgresql.org/docs/current/runtime-config-query.html
The planner will rewrite explicit JOIN constructs (except FULL JOINs) into lists of FROM items whenever a list of no more than this many items would result. Smaller values reduce planning time but might yield inferior query plans. By default, this variable is set the same as from_collapse_limit, which is appropriate for most uses. Setting it to 1 prevents any reordering of explicit JOINs. Thus, the explicit join order specified in the query will be the actual order in which the relations are joined. Because the query planner does not always choose the optimal join order, advanced users can elect to temporarily set this variable to 1, and then specify the join order they desire explicitly.
After reading this documentation we decided to increase the limit, along with related settings such as from_collapse_limit and geqo_threshold. Beware that query planning time increases exponentially with the number of joins, so the limit is there for a reason and should not be increased carelessly.
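For reference, the change looks roughly like this (the values are illustrative only, not a recommendation; benchmark against your own workload):

-- per session, for experimenting:
SET join_collapse_limit = 12;
SET from_collapse_limit = 12;
SET geqo_threshold = 14;

-- or cluster-wide, once you have settled on values:
ALTER SYSTEM SET join_collapse_limit = 12;
SELECT pg_reload_conf();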
From the PostgreSQL documentation, in the section explaining the basics of the EXPLAIN command:
When dealing with outer joins, you might see join plan nodes with both “Join Filter” and plain “Filter” conditions attached. Join Filter conditions come from the outer join's ON clause, so a row that fails the Join Filter condition could still get emitted as a null-extended row. But a plain Filter condition is applied after the outer-join rules and so acts to remove rows unconditionally. In an inner join there is no semantic difference between these types of filters.
"Join Filter conditions come from the outer join's ON clause". Then in outer join, where does a plain filter condition come from?
Could you give some examples?
Thanks.
The term "plain Filter condition" isn't used anywhere else in the Postgres documentation, so I suspect the author meant the word "plain" literally: not decorated or elaborate; simple or ordinary in character.
So really they are saying: "When a filter is applied in an OUTER JOIN's ON clause, the table or derived table being joined is simply filtered before the join occurs. This can lead to the columns from that table or derived table being null in the result set."
Here is a little example that might enlighten you:
CREATE TABLE a(a_id) AS VALUES (1), (3), (4);
CREATE TABLE b(b_id) AS VALUES (1), (2), (5);
Now we have to force a nested loop join:
SET enable_hashjoin = off;
SET enable_mergejoin = off;
Our query is:
SELECT *
FROM a
LEFT JOIN b ON a_id = b_id
WHERE a_id > coalesce(b_id, 0);
a_id | b_id
------+------
3 |
4 |
(2 rows)
The plan is:
QUERY PLAN
------------------------------------------
Nested Loop Left Join
Join Filter: (a.a_id = b.b_id)
Filter: (a.a_id > COALESCE(b.b_id, 0))
-> Seq Scan on a
-> Materialize
-> Seq Scan on b
The “plain filter” is a condition that is applied after the join.
It is a frequent mistake to believe that conditions in the WHERE clause are the same as conditions in a JOIN … ON clause. That is only the case for inner joins. For outer joins, rows from the outer side that don't satisfy the ON condition are still included in the result, extended with NULLs.
That makes it necessary to have two different filters.
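As a hedged variation on the example above (not part of the original answer), move the same condition from the WHERE clause into the ON clause and every row of a survives:

SELECT *
FROM a
LEFT JOIN b ON a_id = b_id AND a_id > coalesce(b_id, 0);

Now all three rows of a come back with b_id NULL: rows 3 and 4 because there is no matching b_id at all, and row 1 because the extra condition 1 > 1 fails during the join, so its match is discarded and the row is null-extended anyway. That is exactly the behavior the two filter types in the plan distinguish.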
I have a query that uses a subquery and I am having a problem returning the expected results. The error I receive is..."Only one expression can be specified in the select list when the subquery is not introduced with EXISTS." How can I rewrite this to work?
SELECT
a.Part,
b.Location,
b.LeadTime
FROM
dbo.Parts a
LEFT OUTER JOIN dbo.Vendor b ON b.Part = a.Part
WHERE
b.Location IN ('A','B','C')
AND
Date IN (SELECT Location, MAX(Date) FROM dbo.Vendor GROUP BY Location)
GROUP BY
a.Part,
b.Location,
b.LeadTime
ORDER BY
a.Part
I think something like this may be what you're looking for. You didn't say which version of SQL Server you're using; this works in SQL Server 2005 and up:
SELECT
p.Part,
p.Location, -- from *p*, otherwise if no match we'll get a NULL
v.LeadTime
FROM
dbo.Parts p
OUTER APPLY (
SELECT TOP (1) * -- * here is okay because we specify columns outside
FROM dbo.Vendor v
WHERE p.Location = v.Location -- the correlation part
ORDER BY v.Date DESC
) v
WHERE
p.Location IN ('A','B','C')
ORDER BY
p.Part
;
Now, your query can be repaired as-is by adding the "correlation" part to turn your subquery into a correlated subquery, as demonstrated in Kory's answer (you'd also remove the GROUP BY clause). However, that method still requires an additional and unnecessary join, hurting performance, plus it can only pull one column at a time. This method allows you to pull all the columns from the other table and has no extra join.
Note: this gives logically the same results as Lamak's answer; however, I prefer it for a few reasons:
When there is an index on the correlation columns (Location, here) this can be satisfied with seeks, but the Row_Number solution has to scan (I believe).
I prefer the way this expresses the intent of the query more directly and succinctly. In the Row_Number method, one must get out to the outer condition to see that we are only grabbing the rn = 1 values, then bop back into the CTE to see what that is.
Using CROSS APPLY or OUTER APPLY, all the other tables not involved in the single-inner-row-per-outer-row selection are outside where (to me) they belong. We aren't squishing concerns together. Using Row_Number feels a bit like throwing a DISTINCT on a query to fix duplication rather than dealing with the underlying issue. I guess this is basically the same issue as the previous point worded in a different way.
The moment you have TWO tables from which you wish to pull the most recent value, the Row_Number() solution blows up completely. With this syntax, you just add another APPLY clause and it's crystal clear what you're doing (see the sketch after these points). There is a way to use Row_Number for the multiple-tables scenario by moving the other tables outside, but I still don't prefer that syntax.
Using this syntax allows you to perform additional joins based on whether the selected row exists or not (in the case that no matching row was found). In the Row_Number solution, you can only reasonably do that NOT NULL checking in the outer query--so you are forced to split up the query into multiple, separated parts (you don't want to be joining to values you will be discarding!).
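To illustrate the multiple-tables point, here is a sketch of what adding a second lookup looks like. Note that dbo.PriceHistory and its columns are invented purely for illustration; only dbo.Parts and dbo.Vendor come from the original question:

SELECT
    p.Part,
    p.Location,
    v.LeadTime,
    ph.Price
FROM
    dbo.Parts p
    OUTER APPLY (
        SELECT TOP (1) *
        FROM dbo.Vendor v
        WHERE p.Location = v.Location      -- the correlation, as in the answer above
        ORDER BY v.Date DESC
    ) v
    OUTER APPLY (
        SELECT TOP (1) *
        FROM dbo.PriceHistory ph           -- hypothetical second table
        WHERE ph.Part = p.Part
        ORDER BY ph.EffectiveDate DESC     -- hypothetical date column
    ) ph
WHERE
    p.Location IN ('A','B','C')
ORDER BY
    p.Part;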
P.S. I strongly encourage you to use aliases that hint at the table they represent. Please don't use a and b. I used p for Parts and v for Vendor--this helps you and others make sense of the query more quickly in the future.
If I understood you correctly, you want the rows with the max date for locations A, B and C. Now, assuming SQL Server 2005+, you can do this:
;WITH CTE AS
(
SELECT
a.Part,
b.Location,
b.LeadTime,
RN = ROW_NUMBER() OVER(PARTITION BY a.Part ORDER BY [Date] DESC)
FROM
dbo.Parts a
LEFT OUTER JOIN dbo.Vendor b ON b.Part = a.Part
WHERE
b.Location IN ('A','B','C')
)
SELECT Part,
Location,
LeadTime
FROM CTE
WHERE RN = 1
ORDER BY Part
In your subquery you need to correlate the Location and Part to the outer query.
Example:
Date = (SELECT MAX(Date)
FROM dbo.Vendor v
WHERE v.Location = b.Location
AND v.Part = b.Part
)
So this will bring back one date for each location and part.
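Put together, the repaired query would look roughly like this (a sketch, assuming the Date column lives on dbo.Vendor as the original subquery implies, and with the GROUP BY removed as suggested in the other answer):

SELECT
    a.Part,
    b.Location,
    b.LeadTime
FROM
    dbo.Parts a
    LEFT OUTER JOIN dbo.Vendor b ON b.Part = a.Part
WHERE
    b.Location IN ('A','B','C')
    AND b.Date = (SELECT MAX(v.Date)
                  FROM dbo.Vendor v
                  WHERE v.Location = b.Location
                    AND v.Part = b.Part)
ORDER BY
    a.Part;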
Should SQL Server yield the same results for both of the queries below? The primary difference is that in the former the condition is placed in the WHERE clause, while in the latter it is placed as a condition on the join itself.
SELECT *
FROM cars c
INNER JOIN parts p
ON c.CarID = p.CarID
WHERE p.Desc LIKE '%muffler%'
SELECT *
FROM cars c
INNER JOIN parts p
ON c.CarID = p.CarID
AND p.Desc LIKE '%muffler%'
Thanks in advance for any help that I receive upon this!
For INNER JOINs it will make no difference to semantics or performance. Both will give the same plan. For OUTER JOINs it does make a difference, though.
/*Will return all rows from cars*/
SELECT c.*
FROM cars c
LEFT JOIN parts p
ON c.CarID = p.CarID AND c.CarID <> c.CarID
/*Will return no rows*/
SELECT c.*
FROM cars c
LEFT JOIN parts p
ON c.CarID = p.CarID
WHERE c.CarID <> c.CarID
For inner joins the only issue is clarity. The JOIN condition should (IMO) only contain predicates concerned with how the two tables in the JOIN are related. Other unrelated filters should go in the WHERE clause.
For inner joins the two queries should yield exactly the same results. Are you seeing a difference?
Yes, they both give the same results. The difference is when the condition is checked: during the join or afterwards.
The execution plan will be identical in your example. Next to the Parse button there should be a "Show execution plan" button; it will give you a clearer picture.
I think in a more complex query with many joins, whether a condition is evaluated during the join or afterwards can become an efficiency issue, as stated above.
EDIT: sorry, assuming you're using SQL Server Management Studio.
My recommendation for this kind of situation would be:
put the JOIN condition (what establishes the "link" between the two tables) - and only that JOIN condition - after the JOIN operator
any additional conditions for one of the two joined tables belongs in the regular WHERE clause
So based on that, I would always recommend to write your query this way:
SELECT
(list of columns)
FROM
dbo.cars c
INNER JOIN
dbo.parts p ON c.CarID = p.CarID
WHERE
p.Desc LIKE '%muffler%'
It seem "cleaner" and more expressive that way - don't "hide" additional conditions behind a JOIN clause if they don't really belong there (e.g. help establish the link between the two tables being joined).
I have a Notes table with a uniqueidentifier column that I use as a FK for a variety of other tables in the database (don't worry, the uniqueidentifier columns on the other tables aren't clustered PKs). These other tables represent something of a hierarchy of business objects. As a simple representation, let's say I have two other tables:
Leads (PK LeadID)
Quotes (PK QuoteID, FK LeadID)
In the display of a Lead in the application, I need to show all notes related to the lead, including those tagged to any Quote that belongs to that lead. I have two options as far as I can see — either a UNION ALL or several LEFT JOIN statements. Here's how they'd look:
SELECT N.*
FROM Notes N
JOIN Leads L ON N.TargetUniqueID = L.UniqueID
WHERE L.LeadID = @LeadID
UNION ALL
SELECT N.*
FROM Notes N
JOIN Quotes Q ON N.TargetUniqueID = Q.UniqueID
WHERE Q.LeadID = @LeadID
Or...
SELECT N.*
FROM Notes N
LEFT JOIN Leads L ON N.TargetUniqueID = L.UniqueID
LEFT JOIN Quotes Q ON N.TargetUniqueID = Q.UniqueID
WHERE L.LeadID = @LeadID OR Q.LeadID = @LeadID
In real life I have a total of five tables that the notes could be attached to, and that number could grow as the application grows. I already have non-clustered indexes set up on the uniqueidentifier columns I'm using, and SQL Profiler says I can't make any more improvements, but when I do a performance test on a realistically-sized test data set, I get the following numbers:
UNION ALL — 0.010 sec
LEFT JOIN — 0.744 sec
I had always heard that using UNION was bad, and that UNION ALL was only marginally better, but the performance numbers don't seem to bear that out. Granted, the UNION ALL SQL code might be more of a pain to maintain, but at that kind of performance difference it's probably worth it.
So is UNION ALL really better here or am I missing something on the LEFT JOIN code that is slowing things down?
The UNION ALL version would probably be satisfied quite easily by 2 index seeks. OR can lead to scans. What do the execution plans look like?
Also have you tried this to avoid accessing Notes twice?
;WITH J AS
(
SELECT UniqueID FROM Leads WHERE LeadID = @LeadID
UNION ALL
SELECT UniqueID FROM Quotes WHERE LeadID = @LeadID
)
SELECT N.* /*Don't use * though!*/
FROM Notes N
JOIN J ON N.TargetUniqueID = J.UniqueID
I may be wrong, but I think that you will get better performance if you rewrite your JOIN version to:
SELECT N.*
FROM Notes N
LEFT JOIN Leads L ON N.TargetUniqueID = L.UniqueID AND L.LeadID = @LeadID
LEFT JOIN Quotes Q ON N.TargetUniqueID = Q.UniqueID AND Q.LeadID = @LeadID
WHERE Q.LeadID IS NOT NULL OR L.LeadID IS NOT NULL
In my experience, SQL Server is really bad with join conditions containing OR. I also use UNIONs in that case, and I got similar results like you (maybe half a second instead of 20).
Who said UNIONs are bad? Especially if you use UNION ALL there should not be a performance hit, since it is UNION that has to go through the result set to keep only unique records (effectively doing something like a DISTINCT or GROUP BY).
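A tiny sketch of that difference:

-- UNION deduplicates, which requires extra work (a sort or hash over the combined result)
SELECT 1 AS x
UNION
SELECT 1;          -- returns one row

-- UNION ALL simply concatenates the two results
SELECT 1 AS x
UNION ALL
SELECT 1;          -- returns two rows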
Your second query wouldn't even give correct results, as it would convert the left joins to inner joins; see here for an explanation as to why your syntax is bad:
http://wiki.lessthandot.com/index.php/WHERE_conditions_on_a_LEFT_JOIN
UNION is slower, but UNION ALL should be pretty quick, right?