NATURAL FULL OUTER JOIN or USING, if common attribute is NULL - postgresql

So if table A is:
no | username
1 | admin
2 | chicken
And table B is:
id | no
a | 1
b | 3
c | 4
Then I do a NATURAL FULL OUTER JOIN like so:
SELECT no
FROM A NATURAL FULL OUTER JOIN B;
Then, what is the result? And is the result the same for all PostgreSQL implementations?
Because the 'no' could come from either table A or table B, it seems ambiguous. I know NATURAL joins merge the 'no' columns, but what if only one side is NULL, i.e. A.no IS NOT NULL but B.no IS NULL: which 'no' does it pick? And what if A.no and B.no are both NULL?
TL;DR: So the question is, WHAT is the value of no in SELECT no: is it A.no, B.no, or the COALESCE of the two?
SELECT no
FROM A NATURAL FULL OUTER JOIN B;

First, don't use natural for joins. It is a bug waiting to happen. As you note in your question, natural chooses the join keys based on the names of columns. It doesn't take types into account. It doesn't even take explicitly declared foreign key relationships into account.
The particularly insidious problem, though, is that someone reading the query does not see the join keys. That makes it much harder to debug queries or to modify/enhance them.
So, my advice is to use using instead.
SELECT no
FROM A FULL OUTER JOIN B
USING (no);
What does a full join return? It returns all rows from both tables, whether or not the join condition matches. Because NULL = NULL does not evaluate to true, NULL keys never match in the join condition.
For example, the following query returns 4 rows (not 2), each containing a NULL value:
with x as (
    select NULL::int as id
    union all
    select NULL as id
)
select id
from x full join x y using (id);
You would get the same result with a natural join, but I simply don't use that construct.
I'm not 100% sure, but I'm pretty sure that all versions of Postgres that support full join would work the same way. This behavior is derived specifically from the ANSI definitions of joins and join conditions.
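To make the TL;DR concrete, here is a self-contained sketch (the tables are rebuilt as VALUES CTEs purely for demonstration). In FULL JOIN ... USING (no), and likewise in the NATURAL variant, the merged output column behaves like COALESCE(A.no, B.no), so it is NULL only when both sides are NULL; with your data that never happens:
WITH a (no, username) AS (VALUES (1, 'admin'), (2, 'chicken')),
     b (id, no)       AS (VALUES ('a', 1), ('b', 3), ('c', 4))
SELECT no
FROM a FULL JOIN b USING (no)
ORDER BY no;
-- returns 1, 2, 3, 4: no = 1 matches, 2 comes from a alone, 3 and 4 from b alone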

Related

Postgresql optional join

Is it possible in Postgres to have an optional join?
My use case is something like
select ...
from a
inner join b using (b_id)
where b.type in (...)
a is a very large reporting table. b is used to filter a, BUT the most common use case is that we will want all b.types, and therefore all the b records in the join. In other words, in most cases we don't want to filter by b at all, and would not need the join in that case, but the filtering optionality still needs to be there in cases when the user wants to filter by type.
So is it possible to invoke the join optionally, and save the join effort in cases when we just want all of a?
If not, what's my next best option? IF ... THEN or CTE with a union of separate queries?
If you don't need any of b's columns, there is no need to JOIN table b; you can filter using EXISTS (SELECT ... FROM b WHERE ...).
If you want to conditionally exclude a part of the WHERE clause, you could use the following construct (the $ignore_b boolean will function as an on/off switch):
-- $ignore_b is a Boolean flag;
-- when TRUE, the EXISTS(...) is effectively switched off (TRUE OR ... is always true)
SELECT ...
FROM a
WHERE ( $ignore_b OR EXISTS (
          SELECT *
          FROM b
          WHERE b.b_id = a.some_id
            AND b.type IN (1,2,3,4,5)
        )
      );
In your example you are still filtering based on b, namely on whether a row with that b_id exists in b at all.
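For illustration, here is one sketch of how that switch might be bound as a parameter. The prepared-statement wrapper, the a.* select list, and the integer b.type column are assumptions made for the demo, not something from the question:
-- sketch: bind the on/off switch and the type list as parameters
PREPARE report_a (boolean, int[]) AS
SELECT a.*
FROM a
WHERE ( $1 OR EXISTS (
          SELECT 1
          FROM b
          WHERE b.b_id = a.b_id
            AND b.type = ANY ($2)
        )
      );
EXECUTE report_a (true, NULL);           -- no filtering by b at all
EXECUTE report_a (false, ARRAY[1,2,3]);  -- keep only rows of a with a matching b.type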
PostgreSQL will remove unneeded joins under very specific circumstances. You write the join as a left join, so that no rows of A can be removed due to the absence of corresponding rows in B. The column B.b_id is declared unique or is the primary key, so that no rows of A can be duplicated due to duplicate matches in B. And of course, no column of B can be referenced in the query (except the reference to the key column in the left join condition).
In those cases, you can just always write the LEFT JOIN, and PostgreSQL will figure out that it can skip it.
You can argue that if you have a declared foreign key constraint on the join condition, then you shouldn't need the JOIN to be a LEFT JOIN in order to implement this optimization. I think that that argument is correct, but PostgreSQL does not implement it that way.
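A minimal sketch of that pattern, assuming b.b_id is declared PRIMARY KEY or UNIQUE:
EXPLAIN
SELECT a.*                       -- no column of b is referenced
FROM a
LEFT JOIN b ON b.b_id = a.b_id;  -- the planner can remove this join entirely,
                                 -- so the plan should show only a scan of a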
I would just do it programmatically. If you are already programmatically adding references to B in the WHERE clause, you should be able to do the same for the join.

TSQL -- Does ordering of on clause matter

For the simplified query:
Select t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2
ON
(t1.c1 = t2.d2)
It seems, by the symmetric property of equality, that this would be exactly the same as if one reversed the ON to:
Select t1.c1, t1.c2, t2.d1
FROM table1 t1
LEFT JOIN table2 t2
ON
(t2.d2 = t1.c1)
Is this always true in T-SQL, or is there some exception where one could get more rows if the query were subtly changed as described above?
I've learned that very subtle join changes (in queries much more complicated than this simple example) can affect row counts greatly.
Also, in addition to the rows returned (which I think should be EXACTLY THE SAME for both queries, always), I would suppose that one ordering of the "ON" clause could give the query better performance. Could someone verify that?
The equality in an ON condition is commutative, as you have observed. You can write ON a = b or ON b = a and have them mean precisely the same thing: they are nothing but Boolean expressions, and the optimizer treats the two forms the same, so you should not see a performance difference either.

SQL with table as becomes ambiguous

Perhaps I'm approaching this all wrong, in which case feel free to point out a better way to solve the overall question, which is: "How do I use an intermediate table for future queries?"
Let's say I've got tables foo and bar, which join on some baz_id, and I want to combine them into an intermediate table to be fed into upcoming queries. I know of the WITH .. AS (...) statement, but am running into problems as such:
WITH foobar AS (
SELECT *
FROM foo
INNER JOIN bar ON bar.baz_id = foo.baz_id
)
SELECT
baz_id
-- some other things as well
FROM
foobar
The issue is that Postgres (9.4) tells me baz_id is ambiguous. I understand this happens because SELECT * includes all the columns in both tables, so baz_id shows up twice; but I'm not sure how to get around it. I was hoping to avoid copying the column names out individually, like
SELECT
foo.var1, foo.var2, foo.var3, ...
bar.other1, bar.other2, bar.other3, ...
FROM foo INNER JOIN bar ...
because there are hundreds of columns in these tables.
Is there some way around this I'm missing, or some altogether different way to approach the question at hand?
WITH foobar AS (
SELECT *
FROM foo
INNER JOIN bar USING(baz_id)
)
SELECT
baz_id
-- some other things as well
FROM
foobar
Joining with USING (baz_id) leaves only one instance of the baz_id column in the join's output, so the outer query can reference baz_id without qualification.
From the documentation:
The USING clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison for each one. For example, joining T1 and T2 with USING (a, b) produces the join condition ON T1.a = T2.a AND T1.b = T2.b.
Furthermore, the output of JOIN USING suppresses redundant columns: there is no need to print both of the matched columns, since they must have equal values. While JOIN ON produces all columns from T1 followed by all columns from T2, JOIN USING produces one output column for each of the listed column pairs (in the listed order), followed by any remaining columns from T1, followed by any remaining columns from T2.
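As a quick illustration of that suppression, using throwaway one-row tables:
WITH foo (baz_id, v1) AS (VALUES (1, 'x')),
     bar (baz_id, v2) AS (VALUES (1, 'y'))
SELECT *
FROM foo JOIN bar USING (baz_id);
-- output columns: baz_id, v1, v2 (baz_id appears only once)
-- with JOIN ... ON foo.baz_id = bar.baz_id it would be: baz_id, v1, baz_id, v2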

Left outer join using 2 of 3 tables in Postgresql

I need to show all clients entered into the system for a date range.
All clients are assigned to a group, but not necessarily to a staff.
When I run the query as such:
SELECT
clients.name_lastfirst_cs,
to_char (clients.date_intake,'MM/DD/YY')AS Date_Created,
clients.client_id,
clients.display_intake,
staff.staff_name_cs,
groups.name
FROM
public.clients,
public.groups,
public.staff,
public.link_group
WHERE
clients.zrud_staff = staff.zzud_staff AND
clients.zzud_client = link_group.zrud_client AND
groups.zzud_group = link_group.zrud_group AND
clients.date_intake BETWEEN (now() - '8 days'::interval)::timestamp AND now()
ORDER BY
groups.name ASC,
clients.client_id ASC,
staff.staff_name_cs ASC
I get 121 entries.
If I comment out:
SELECT
clients.name_lastfirst_cs,
to_char (clients.date_intake,'MM/DD/YY')AS Date_Created,
clients.client_id,
clients.display_intake,
-- staff.staff_name_cs, -- Line Commented out
groups.name
FROM
public.clients,
public.groups,
public.staff,
public.link_group
WHERE
-- clients.zrud_staff = staff.zzud_staff AND --Line commented out
clients.zzud_client = link_group.zrud_client AND
groups.zzud_group = link_group.zrud_group AND
clients.date_intake BETWEEN (now() - '8 days'::interval)::timestamp AND now()
ORDER BY
groups.name ASC,
clients.client_id ASC,
staff.staff_name_cs ASC
I get 173 entries.
I know I need to do an outer join to capture all clients regardless of whether a staff is assigned, but each attempt has failed. I have done outer joins with two tables, but adding a third has twisted my brain.
Thanks for any suggestions.
I have no way of testing this (or of knowing that it is right) but what I read in your query is that you want something similar to this:
SELECT -- I just used short aliases; I chose something other than the table name so I know it is an alias: "c" for clients, etc.
c.name_lastfirst_cs,
to_char (c.date_intake,'MM/DD/YY')AS Date_Created,
c.client_id,
c.display_intake,
s.staff_name_cs,
g.name,
l.zrud_client AS "link_client", -- I'm selecting some data here so that I can debug later; you can just filter this out with another select if you need to
l.zrud_group AS "link_group" -- again, so I can see these relationships
FROM
public.clients c
LEFT OUTER JOIN staff s ON --is staff required? If it isn't then outer join (optional)
s.zzud_staff = c.zrud_staff --so we linked staff to clients here
LEFT OUTER JOIN public.link_group l ON --this looks like a lookup table to me so we select the lookup record
l.zrud_client = c.zzud_client -- this is how I define the lookup, a client id
LEFT OUTER JOIN public.groups g ON --then we use that to look up a group
g.zzud_group = l.zrud_group --which is defined by this data here
WHERE -- the following must be true
c.date_intake BETWEEN (now() - '8 days'::interval)::timestamp AND now()
Now for the why: I've basically moved your WHERE clause into JOIN x ON y = z syntax. In my experience this is a better way to write and maintain queries, as it lets you specify the relationships between tables instead of doing one big join and trying to filter the result with the WHERE clause. Keep in mind that each condition in your original WHERE clause is required, not optional, so (if I read this right, which I may not, since I don't have the schema in front of me) a client that is missing a link-table record OR a staff member gets filtered out.
Alternatively (possibly significantly slower), you can SELECT from anything, so you can chain queries like:
SELECT *
FROM (
    SELECT *
    FROM public.clients
    WHERE x condition
) AS sub
WHERE y condition
OR
SELECT * FROM x WHERE x.some_column IN (SELECT some_column FROM y)
In your case this tactic probably won't be easier than the standard join syntax.
And some serious opinion here: I recommend you use the join syntax I outlined above. It is functionally the same as joining and specifying a WHERE clause, but as you noted, if you don't understand the relationships, that style can easily produce a Cartesian join: http://www.tutorialspoint.com/sql/sql-cartesian-joins.htm . Lastly, I tend to specify what type of join I want. I write INNER JOIN and OUTER JOIN a lot in my queries because it helps the next person (usually me) figure out what the heck I meant. If the relationship is optional use an outer join; if it is required use an inner join (the default).
Good luck! There are much better SQL developers out there and there's probably another way to do it.

OR statement in select part of query in Postgres

I have a query
SELECT
cd.signoffdate,
min(cmp.dsignoff) as dsignoff
FROM clients AS c
LEFT JOIN campaigns AS cmp ORDER BY dsignoff;
If I want to build something like the following into the Postgres query, will it work, and how do I do it?
If cd.signoffdate is empty, it should take min(cmp.dsignoff) as dsignoff as the value and then order by this column; in other words, it should order by dsignoff and cd.signoffdate treated as one column. Is this possible, and how?
Your query could look like this:
SELECT c.client_id, COALESCE(c.signoffdate, min(cmp.dsignoff)) AS signoff
FROM clients c
LEFT JOIN campaigns cmp ON cmp.client_id = c.client_id -- join condition!
GROUP BY c.client_id, c.signoffdate -- group by!
ORDER BY COALESCE(c.signoffdate, min(cmp.dsignoff));
Or, with simplified syntax:
SELECT c.client_id, COALESCE(c.signoffdate, min(cmp.dsignoff)) AS signoff
FROM clients c
LEFT JOIN campaigns cmp USING (client_id)
GROUP BY 1, c.signoffdate
ORDER BY 2;
Major points:
Used alias c, but referenced as cd.
No join condition leads to cross join, probably not intended.
Missing GROUP BY.
I assume that you want to group by the primary key column of clients and call it client_id.
I also assume that client_id links the two tables together.
COALESCE() serves as a fallback in case signoffdate IS NULL.
ORDER BY 2 in the second query is short for ordering by the second output column, i.e.:
ORDER BY COALESCE(c.signoffdate, min(cmp.dsignoff));
But don't you need some GROUP BY in your original query?
You can use COALESCE
SELECT COALESCE(cd.signoffdate, min(cmp.dsignoff)) as dsignoff
I'm not sure if you can order by coalesce in Postgres - might be worth just ordering by both columns
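For what it's worth, Postgres does accept an expression such as COALESCE(...) in ORDER BY; a throwaway sketch with made-up dates:
SELECT signoffdate, dsignoff,
       COALESCE(signoffdate, dsignoff) AS effective_date
FROM (VALUES ('2020-01-05'::date, '2020-01-01'::date),
             (NULL,               '2020-02-01'::date)) AS t (signoffdate, dsignoff)
ORDER BY COALESCE(signoffdate, dsignoff);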