DB2 Query Structure Using User-Defined Function as a Table

I'm a little new to DB2 and am having trouble developing a query. I have created a user-defined function that returns a table of data, which I then want to join and select from in a larger select statement. I'm working on a sensitive db, so the query below isn't literally what I'm running, but it's almost exactly like it (without the other 10 joins I have to do, lol).
select
A.customerId,
A.firstname,
A.lastname,
B.orderId,
B.orderDate,
F.currentLocationDate,
F.currentLocation
from
customer A
INNER JOIN order B
on A.customerId = B.customerId
INNER JOIN table(getShippingHistory(B.customerId)) as F
on B.orderId = F.orderId
where B.orderId = 35
This works great if I run this query without the where clause (or some other where clause that doesn't check for an ID). When I include the where clause, I get the following error:
Error during Prepare 58004(-901)[IBM][CLI Driver][DB2/LINUXX8664]
SQL0901N The SQL statement failed because of a non-severe system
error. Subsequent SQL statements can be processed. (Reason "Bad Plan;
Unresolved QNC found".) SQLSTATE=58004
I have tracked the issue down to the fact that I'm using one of the join columns (B.customerId) as the function parameter. I have validated this by replacing B.customerId with a hard-coded valid customerId, and the query works great. Problem is, I don't know the customerId when calling this query; I only know the orderId (in this example).
Any thoughts on how to restructure this so I can make only one call to get all the info? I know the plan is the problem because the customerId isn't getting resolved before the function is called.

So if I understand correctly, the function getShippingHistory(customerId) returns a table.
And if you call it with a single customer Id, that table gets joined in your query above with no problem at all.
But the way you have the query written, you are asking db2 to call the function for every row returned by your query (i.e. every B.customerId that matches your join and where conditions).
So I'm not sure what behaviour you are expecting, because what you're asking for is a table back for every row in your query, and neither db2 nor I can figure out what the result is supposed to look like.
So in terms of restructuring your query, think about how you can change the getShippingHistory logic when multiple customer Ids are involved.
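For example, something along these lines might work (getShippingHistoryByOrder is a hypothetical variant of the function keyed on the order rather than the customer; the point is that its argument is the value you already know when you issue the query, so nothing correlated has to be resolved inside the plan):
select A.customerId, A.firstname, A.lastname,
B.orderId, B.orderDate,
F.currentLocationDate, F.currentLocation
from customer A
INNER JOIN order B on A.customerId = B.customerId
-- hypothetical variant keyed on the order; 35 is the orderId you already know,
-- so the function argument is a literal rather than a correlated column
INNER JOIN table(getShippingHistoryByOrder(35)) as F
on B.orderId = F.orderId
where B.orderId = 35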

I found the best solution (given the current query structure) is to use a LEFT join instead of an INNER join, in order to force the LEFT part of the join to happen first, which resolves the customerId to a value by the time it gets to the function call.
select
A.customerId,
A.firstname,
A.lastname,
B.orderId,
B.orderDate,
F.currentLocationDate,
F.currentLocation
from
customer A
INNER JOIN order B
on A.customerId = B.customerId
LEFT JOIN table(getShippingHistory(B.customerId)) as F
on B.orderId = F.orderId
where B.orderId = 35
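One caveat I haven't verified against the bad-plan issue: with the LEFT join, rows where getShippingHistory returns no matching orderId now come back with NULLs in the F columns instead of being dropped. If you need the original INNER join semantics back, you can filter them out, though a NULL-rejecting predicate like this may let the optimizer turn the join back into an inner join, so re-check the plan:
where B.orderId = 35
and F.orderId is not null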

Related

Postgresql get references from a dictionary

I'm trying to build a request to get the data from a table, but some of those columns have foreign keys I would like to replace by the associated keyword in one request.
Basically there's
table A with column 1:PKA-ID and column 2:name.
table B with column 1:PKB-ID, column 2:FKA-ID, column 3:amount.
I want to get all the lines in table B but with all foreign keys replaced by the associated names in table A.
I started building a request with a subrequest + alias to get that, but of course I have more than one result per subrequest, and I can't find a way to link that subrequest to the ID of table B from the main request [might be exhausted, dumb or both]. I did something like that:
SELECT (SELECT "NAME" FROM A JOIN B ON ID = FKA-ID) AS name, amount FROM TABLEB;
it feels so simple of a request yet...
You don't need a join in the subselect.
SELECT pkb_id,
(SELECT name FROM a WHERE a.pka_id = b.fka_id),
amount
FROM b;
(See it live in SQL Fiddle).
The subselect query runs for each and every row of its parent select and has the parent row available from the context.
You can also use a simple join.
SELECT b.pkb_id, a.name, b.amount
FROM b
JOIN a ON a.pka_id = b.fka_id;
Note that the join version puts fewer restrictions on the PostgreSQL query optimizer, so in some cases the join version might work faster. (For example, in PostgreSQL 9.6 the join might utilize multiple CPU cores, cf. Parallel Query.)
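One difference to keep in mind, though: the correlated subselect returns NULL for any row of b that has no match in a, so the join that is exactly equivalent is a LEFT join rather than an inner one:
SELECT b.pkb_id, a.name, b.amount
FROM b
LEFT JOIN a ON a.pka_id = b.fka_id;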

Updating for each row in a table

I have this query here which returns an error because of too many rows returned:
UPDATE tmp_rsl2 SET comm_percent=( SELECT c2.comm_percent
FROM tmp_rsl2 t1
INNER JOIN gn_salesperson g1 ON t1.sales_person=g1.sales_person
INNER JOIN comm_schema c1 ON g1.comm_schema=c1.comm_schema
INNER JOIN comm_schema_dt c2 ON c1.comm_schema_id=c2.comm_schema_id AND (t1.balance_amount::numeric <= (COALESCE(c2.value_amount,0)) );
Basically, for each row of the comm_percent column, I want to update all of them using the subquery SELECT statement. I imagine using a FOR loop or something, but I'd like to hear ideas or to know a proper way to do this.
The TOO_MANY_ROWS error is about assigning to something that can take only one value, whereas the SELECT query is returning more than one row.
Without a reference schema, it's difficult to give SQL that'd work (not to say that the issue lies with the schema), but you need to ensure that the SELECT statement assigning a value to comm_percent returns only one row. Below is a very blind attempt at how it 'might' work in your case, but again, without knowing the schema it's difficult to gauge whether it'd work.
UPDATE tmp_rsl2
SET comm_percent = c2.comm_percent
FROM gn_salesperson g1
INNER JOIN comm_schema c1 ON g1.comm_schema = c1.comm_schema
INNER JOIN comm_schema_dt c2 ON c1.comm_schema_id = c2.comm_schema_id
AND (tmp_rsl2.balance_amount::NUMERIC <= (COALESCE(c2.value_amount, 0)))
WHERE tmp_rsl2.sales_person = g1.sales_person
UPDATE:
As per the comments below, I have given an unrelated SQLFiddle example that should give an idea of how to perform an UPDATE of all rows of a table, looking up corresponding values from another table.
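Since the SQLFiddle itself isn't reproduced here, the general shape of that pattern is roughly this (made-up table and column names):
-- hypothetical tables: pull val from source into target for matching ids
UPDATE target t
SET val = s.val
FROM source s
WHERE s.id = t.id;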

Does inner join affect order by?

I have a function a() which gives result in a specific order.
I want to do:
select final.*,tablex.name
from a() as final
inner join tablex on (final.key = tablex.key2)
My question is, can I guarantee that the join won't affect the order of rows as a() set it?
a() is:
select ....
from....
joins...
order by x,y,z
The short version:
The order of rows returned by a SQL query is not guaranteed in any way unless you use an order by.
Any order you see without an order by is pure coincidence and cannot be relied upon.
So how did I always get the correct order so far, when I did select * from a()?
If your function is a SQL function, then the query inside the function is executed "as is" (it's essentially "inlined"), so you only run a single query, which does have an order by. If it's a PL/pgSQL function and the only thing it does is a RETURN QUERY ..., then you again have only a single query being executed, and it does have an order by.
Assuming you do use a SQL function, then running:
select final.*,tablex.name
from a() as final
join tablex on final.key = tablex.key2
is equivalent to:
select final.*,tablex.name
from (
-- this is your query inside the function
select ...
from ...
join ...
order by x,y,z
) as final
join tablex on final.key = tablex.key2;
In this case the order by inside the derived table doesn't make sense, as it might be "overruled" by an order by on the overall query. In fact, some databases would outright reject this query (and I sometimes wish Postgres would as well).
Without an order by on the overall query, the database is free to choose any order of rows that it wants.
So to get back to the initial question:
can I guarantee that the join won't affect the order of rows as a() set it?
The answer to that is a clear NO: the order of the rows for that query is in no way guaranteed. If you need an order that you can rely on, you have to specify an order by.
I would even go so far as to remove the order by from the function; what if someone runs select * from a() order by z,y,x? I don't think Postgres will be smart enough to remove the order by inside the function.
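To make that concrete, this is roughly what I'd run instead (a sketch assuming the ordering columns x, y, z are among the columns a() returns): drop the order by inside a() and order in the outer query.
select final.*, tablex.name
from a() as final
join tablex on final.key = tablex.key2
order by final.x, final.y, final.z;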

Postgresql Faulty Syntax on select/join/group

What about the following is not proper syntax for Postgresql?
select p.*, SUM(vote) as votes_count
FROM votes v, posts p
where p.id = v.`voteable_id`
AND v.`voteable_type` = 'Post'
group by v.voteable_id
order by votes_count DESC limit 20
I am in the process of installing postgresql locally but wanted to get this out sooner :)
Thank you
MySQL is a lot looser in its interpretation of standard SQL than PostgreSQL is. There are two issues with your query:
Backtick quoting is a MySQL thing.
Your GROUP BY is invalid.
The first one can be fixed by simply removing the offending quotes. The second one requires more work; from the fine manual:
When GROUP BY is present, it is not valid for the SELECT list expressions to refer to ungrouped columns except within aggregate functions, since there would be more than one possible value to return for an ungrouped column.
This means that every column mentioned in your SELECT either has to appear in an aggregate function or in the GROUP BY clause. So you have to expand your p.* and make sure that all of those columns are in the GROUP BY; you should end up with something like this, but with real columns in place of p.column...:
select p.id, p.column..., sum(v.vote) as votes_count
from votes v, posts p
where p.id = v.voteable_id
and v.voteable_type = 'Post'
group by p.id, p.column...
order by votes_count desc
limit 20
This is a pretty common problem when moving from MySQL to anything else.
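As a side note (assuming posts.id is the primary key of posts): PostgreSQL 9.1 and later recognize that the other columns of posts are functionally dependent on its primary key, so you can keep p.* in the select list and group by p.id alone:
select p.*, sum(v.vote) as votes_count
from posts p
join votes v on v.voteable_id = p.id and v.voteable_type = 'Post'
group by p.id
order by votes_count desc
limit 20;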

Inconsistent results between remote and local query

I have an interesting case of result sets returned by SQL Server being different depending on whether a query is executed locally or remotely.
In essence, if I execute the following query:
SELECT p.ID AS Id
FROM csdb.users.People AS p
LEFT JOIN csdb.users.Accounts AS a ON p.ID = a.IdentityId
then the left join is performed, and I obtain 633 rows back. However, if I execute the query remotely as:
SELECT p.ID AS Id
FROM quantdb.csdb.users.People AS p
LEFT JOIN quantdb.csdb.users.Accounts AS a ON p.ID = a.IdentityId
then the left join is not performed, an index scan of the People table is performed, and only 564 records are returned. In essence it seems as if in the remote case the optimiser is saying 'well, you are not requiring any of the columns from the Accounts table so I won't do the join' whereas in the other case it is saying 'I will do the join'.
Is this difference in behaviour of the optimiser for remote and local queries a known 'feature' of SQL Server, or is this a bug?
We are using SQL Server 2008, SP1
Not doing the join is a valid optimization if there is a unique constraint on Accounts.IdentityId, because the join cannot add any new row (a non-unique column would cause p.ID to repeat for each corresponding duplicate in Accounts.IdentityId).
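For reference, the kind of declaration that makes the join provably redundant looks like this (the constraint name here is made up; adjust to your schema):
-- one Accounts row per IdentityId, so the left join can never multiply rows
ALTER TABLE users.Accounts
ADD CONSTRAINT UQ_Accounts_IdentityId UNIQUE (IdentityId);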
Can you post the exact schema of the tables, including all indexes and constraints? I want to see if it reproduces.