sp_executesql vs user defined scalar function - tsql

In the first table I am storing some conditions like this:
Then, generally, in the second table, I have the following records:
and what I need is to compare these values using the right condition and store the result (let's say '0' for false and '1' for true) in an additional column.
I am going to do this in a stored procedure, and basically I am going to compare anywhere from several to hundreds of records.
One possible solution is to use sp_executesql for each row, building dynamic statements; the other is to create my own scalar function and call it for each row using CROSS APPLY.
Could anyone tell me which is the more efficient way?
Note: I know that the best way to answer this is to build both solutions and test, but I am hoping there might be an answer based on things like caching and SQL Server's internal optimizations, which would save me a lot of time, because this is only part of a bigger problem.

I don't see the need for sp_executesql in this case. You can obtain the result for all records at once in a single statement:
select Result = case
when ct.Abbreviation='=' and t.ValueOne=t.ValueTwo then 1
when ct.Abbreviation='>' and t.ValueOne>t.ValueTwo then 1
when ct.Abbreviation='>=' and t.ValueOne>=t.ValueTwo then 1
when ct.Abbreviation='<=' and t.ValueOne<=t.ValueTwo then 1
when ct.Abbreviation='<>' and t.ValueOne<>t.ValueTwo then 1
when ct.Abbreviation='<' and t.ValueOne<t.ValueTwo then 1
else 0 end
from YourTable t
join ConditionType ct on ct.ID = t.ConditionTypeID
and update the additional column with something like:
;with cte as (
select t.AdditionalColumn, Result = case
when ct.Abbreviation='=' and t.ValueOne=t.ValueTwo then 1
when ct.Abbreviation='>' and t.ValueOne>t.ValueTwo then 1
when ct.Abbreviation='>=' and t.ValueOne>=t.ValueTwo then 1
when ct.Abbreviation='<=' and t.ValueOne<=t.ValueTwo then 1
when ct.Abbreviation='<>' and t.ValueOne<>t.ValueTwo then 1
when ct.Abbreviation='<' and t.ValueOne<t.ValueTwo then 1
else 0 end
from YourTable t
join ConditionType ct on ct.ID = t.ConditionTypeID
)
update cte
set AdditionalColumn = Result
If the above logic is supposed to be applied in many places, not just over one table, then yes, you may think about a function. I would, however, use an inline table-valued function rather than a scalar one, because user-defined scalar functions impose a per-row overhead (the call and return), and the more rows are processed, the more time is wasted.
create function ftComparison
(
@v1 float,
@v2 float,
@cType int
)
returns table
as return
select
Result = case
when ct.Abbreviation='=' and @v1=@v2 then 1
when ct.Abbreviation='>' and @v1>@v2 then 1
when ct.Abbreviation='>=' and @v1>=@v2 then 1
when ct.Abbreviation='<=' and @v1<=@v2 then 1
when ct.Abbreviation='<>' and @v1<>@v2 then 1
when ct.Abbreviation='<' and @v1<@v2 then 1
else 0
end
from ConditionType ct
where ct.ID = @cType
which can then be applied as:
select f.Result
from YourTable t
cross apply ftComparison(ValueOne, ValueTwo, t.ConditionTypeID) f
or
select f.Result
from YourAnotherTable t
cross apply ftComparison(SomeValueColumn, SomeOtherValueColumn, @someConditionType) f
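And since the question's goal is to persist the flag in an additional column, the inline function fits straight into an UPDATE as well. A minimal sketch, reusing the YourTable/AdditionalColumn names from above:
update t
set AdditionalColumn = f.Result
from YourTable t
cross apply ftComparison(t.ValueOne, t.ValueTwo, t.ConditionTypeID) f;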

Related

Returning values based on delimited string entries

In TSQL, the string in the database record is 'A/A/A' or 'A/B/A' (examples). I want to parse the string and for the first instance return '1'; in the 2nd instance, return '2'. That is, if all the values between the separators are the same, return a value; otherwise return another value. What is the best way to do this?
A bit of a blind answer:
Read the whole value into one variable and the first part into another:
declare @entire nvarchar(max), @single nvarchar(max)
select/set @entire=....
set @single=left(@entire,charindex('/',@entire)-1)
Compare @entire with @single replicated, after removing the slashes:
set @entire=replace(@entire,'/','')
select case when replicate(@single,len(@entire)/len(@single))=@entire
then 1 else 0 end as [What you want]
Something like this should work:
SELECT
x.*,
CASE
WHEN N > 1 THEN 0
ELSE 1
END Result
FROM (
SELECT
t.Column1,
t.Column2,
t.Column3,
t.SomeColumn,
COUNT(DISTINCT s.value) N
FROM dbo.YourTable t
OUTER APPLY STRING_SPLIT(t.SomeColumn,'/') s
GROUP BY
t.Column1,
t.Column2,
t.Column3,
t.SomeColumn
) x
;
Based on your simple example (no edge cases accounted for) the following should work for you:
select string, iif(replace(s,v,'')='',1,0) as Result
from t
cross apply (
values(left(string,charindex('/', string)-1),(replace(string,'/','')))
)s(v,s);
Example Fiddle

T-SQL Join on foreign key that has leading zero

I need to link various tables that each have a common key (a serial number in this case). In some tables the key has a leading zero, e.g. '037443', and in others it doesn't, e.g. '37443'. In both cases the serial refers to the same product. To confound things, serial 'numbers' are not always purely numeric, e.g. they may be "BDO1234"; in these cases there is never a leading zero.
I'd prefer to use the WHERE statement (WHERE a.key = b.key) but could use joins if required. Is there any way to do this?
I'm still learning so please keep it simple if possible. Many thanks.
Based on the accepted answer in this link, I've written a small T-SQL sample to show you what I meant by 'the right direction':
Create the test table:
CREATE TABLE tblTempTest
(
keyCol varchar(20)
)
GO
Populate it:
INSERT INTO tblTempTest VALUES
('1234'), ('01234'), ('10234'), ('0k234'), ('k2304'), ('00034')
Select values:
SELECT keyCol,
SUBSTRING(keyCol, PATINDEX('%[^0]%', keyCol + '.'), LEN(keyCol)) As trimmed
FROM tblTempTest
Results:
keyCol trimmed
-------------------- --------------------
1234 1234
01234 1234
10234 10234
0k234 k234
k2304 k2304
00034 34
Cleanup:
DROP TABLE tblTempTest
Note that the values are alpha-numeric, and only leading zeroes are trimmed.
One possible drawback is that a 0 after leading whitespace will not be trimmed, but that's an easy fix - just add LTRIM:
SUBSTRING(LTRIM(keyCol), PATINDEX('%[^0]%', LTRIM(keyCol + '.')), LEN(keyCol)) As trimmed
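To apply this to the join itself (which is what the question asks for), one option is to normalize both keys in the join predicate. This is only a sketch; tableA, tableB, and serial are placeholder names, and note that expressions on both sides of the join will prevent index seeks:
SELECT a.*, b.*
FROM tableA a
JOIN tableB b
ON SUBSTRING(a.serial, PATINDEX('%[^0]%', a.serial + '.'), LEN(a.serial))
 = SUBSTRING(b.serial, PATINDEX('%[^0]%', b.serial + '.'), LEN(b.serial))
If this runs often, a persisted computed column holding the trimmed key would let you index it instead.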
You need to create a function:
CREATE FUNCTION CompareSerialNumbers(@SerialA varchar(max), @SerialB varchar(max))
RETURNS bit
AS
BEGIN
DECLARE @ReturnValue AS bit
IF (ISNUMERIC(@SerialA) = 1 AND ISNUMERIC(@SerialB) = 1)
SELECT @ReturnValue =
CASE
WHEN CAST(@SerialA AS int) = CAST(@SerialB AS int) THEN 1
ELSE 0
END
ELSE
SELECT @ReturnValue =
CASE
WHEN @SerialA = @SerialB THEN 1
ELSE 0
END
RETURN @ReturnValue
END;
GO
If both are numeric then it compares them as integers otherwise it compares them as strings.
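For completeness, a usage sketch (tableA, tableB, and serial are placeholder names, not from the question; scalar UDFs must be called with their schema prefix):
SELECT a.*, b.*
FROM tableA a
JOIN tableB b
ON dbo.CompareSerialNumbers(a.serial, b.serial) = 1
Note that, as discussed in the first question above, a scalar UDF in a join predicate is evaluated row by row, so on large tables the inline SUBSTRING/PATINDEX expression will usually be cheaper.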

Can someone help me figure out Oracle's (10g) AND/OR short circuitry?

Consider the following query and notice the CALCULATE_INCENTIVE function:
SELECT EMP.* FROM EMPLOYEES EMP
WHERE
EMP.STATUS = 1 AND
EMP.HIRE_DATE > TO_DATE('1/1/2010') AND
EMP.FIRST_NAME = 'JOHN' AND
CALCULATE_INCENTIVE(EMP.ID) > 1000
ORDER BY EMP.ID DESC;
I was under the impression that Oracle uses the same (or similar) short-circuitry that .NET uses in its and/or logic. For example, if EMP.STATUS = 2, it won't bother evaluating the rest of the expression since the entire expression would return false anyway.
In my case, the CALCULATE_INCENTIVE function is being called for every employee in the db rather than only for the 9 records that the first three WHERE expressions return. I've even tried putting parentheses around the specific expressions that I want to group together for short-circuit evaluation, but I can't figure it out.
Anyone have any ideas how to get the CALCULATE_INCENTIVE not to be evaluated if any of the previous expressions return false?
One way is to put the primary criteria into a subquery that Oracle can't optimize away, then put the secondary criteria into the outer query. The easiest way to ensure that Oracle doesn't optimize out the subquery is to include rownum in the select statement:
SELECT * FROM (
SELECT EMP.*, ROWNUM
FROM EMPLOYEES EMP
WHERE
EMP.STATUS = 1
AND EMP.HIRE_DATE > TO_DATE('1/1/2010')
AND EMP.FIRST_NAME = 'JOHN')
WHERE CALCULATE_INCENTIVE(ID) > 1000
ORDER BY ID DESC;
Oracle supports short-circuit evaluation in PL/SQL. In SQL, however, the optimizer is free to evaluate the predicates in whatever order it desires, to push predicates into views and subqueries, and to otherwise transform the SQL statement as it sees fit. This means that you should not rely on predicates being applied in a particular order, and it makes the order in which predicates appear in the WHERE clause essentially irrelevant. The indexes that are available, the optimizer statistics that are present, the optimizer parameters, and system statistics are all vastly more important than the order of predicates in the WHERE clause.
In PL/SQL, for example, you can demonstrate this with a function that throws an error if it's actually called.
SQL> ed
Wrote file afiedt.buf
1 create function throw_error( p_parameter IN NUMBER )
2 return number
3 as
4 begin
5 raise_application_error( -20001, 'The function was called' );
6 return 1;
7* end;
SQL> /
Function created.
SQL> ed
Wrote file afiedt.buf
1 declare
2 l_num NUMBER;
3 begin
4 l_num := 1;
5 if( l_num = 2 and throw_error( l_num ) = 2 )
6 then
7 null;
8 else
9 dbms_output.put_line( 'Short-circuited the AND' );
10 end if;
11 if( l_num = 1 or throw_error( l_num ) = 2 )
12 then
13 dbms_output.put_line( 'Short-circuited the OR' );
14 end if;
15* end;
16 /
Short-circuited the AND
Short-circuited the OR
PL/SQL procedure successfully completed.
In SQL, on the other hand, the order of operations is determined by the optimizer, not by you, so the optimizer is free to short-circuit or not short-circuit however it wants. Jonathan Gennick has a great article, Subquery Madness!, that discusses this in some detail. In your particular case, if you had a composite index on (FIRST_NAME, HIRE_DATE, STATUS) along with appropriate statistics, the optimizer would almost certainly use the index to evaluate the first three conditions and then call the CALCULATE_INCENTIVE function only for the IDs that met those three criteria. If you created a function-based index on CALCULATE_INCENTIVE(id), the optimizer would likely use that rather than calling the function at all at runtime. But the optimizer would be perfectly free to decide to call the function for every row in either case if it decided that it would be more efficient to do so.
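For illustration, the two indexes mentioned might look like the following. This is a sketch with made-up index names, and the function-based index is only possible if CALCULATE_INCENTIVE is declared DETERMINISTIC:
CREATE INDEX emp_filter_ix ON employees (first_name, hire_date, status);
-- requires CALCULATE_INCENTIVE to be declared DETERMINISTIC
CREATE INDEX emp_incentive_ix ON employees (calculate_incentive(id));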

Unexpected SQL results: string vs. direct SQL

Working SQL
The following code works as expected, returning two columns of data (a row number and a valid value):
sql_amounts := '
SELECT
row_number() OVER (ORDER BY taken)::integer,
avg( amount )::double precision
FROM
x_function( '|| id || ', 25 ) ca,
x_table m
WHERE
m.category_id = 1 AND
m.location_id = ca.id AND
extract( month from m.taken ) = 1 AND
extract( day from m.taken ) = 1
GROUP BY
m.taken
ORDER BY
m.taken';
FOR r, amount IN EXECUTE sql_amounts LOOP
SELECT array_append( v_row, r::integer ) INTO v_row;
SELECT array_append( v_amount, amount::double precision ) INTO v_amount;
END LOOP;
Non-Working SQL
The following code does not work as expected; the first column is a row number, the second column is NULL.
FOR r, amount IN
SELECT
row_number() OVER (ORDER BY taken)::integer,
avg( amount )::double precision
FROM
x_function( id, 25 ) ca,
x_table m
WHERE
m.category_id = 1 AND
m.location_id = ca.id AND
extract( month from m.taken ) = 1 AND
extract( day from m.taken ) = 1
GROUP BY
m.taken
ORDER BY
m.taken
LOOP
SELECT array_append( v_row, r::integer ) INTO v_row;
SELECT array_append( v_amount, amount::double precision ) INTO v_amount;
END LOOP;
Question
Why does the non-working code return a NULL value for the second column when the query itself returns two valid columns? (This question is mostly academic; if there is a way to express the query without resorting to wrapping it in a text string, that would be great to know.)
Full Code
http://pastebin.com/hgV8f8gL
Software
PostgreSQL 8.4
Thank you.
The two statements aren't strictly equivalent.
Assuming id = 4, the first one gets planned/prepared on each pass, and behaves like:
prepare dyn_stmt as '... x_function( 4, 25 ) ...'; execute dyn_stmt;
The other gets planned/prepared on the first pass only, and behaves more like:
prepare stc_stmt as '... x_function( $1, 25 ) ...'; execute stc_stmt(4);
(The loop will actually make it prepare a cursor for the above, but that's beside the point here.)
A number of factors can make the two yield different results.
Search path changes before calling the procedure will be ignored by the second call. In particular if this makes x_table point to something different.
Constants of all kinds and calls to immutable functions are "hard-wired" in the second call's plan.
Consider this as an illustration of these side-effects:
deallocate all;
begin;
prepare good as select now();
prepare bad as select current_timestamp;
execute good; -- yields the current timestamp
execute bad; -- yields the current timestamp
commit;
execute good; -- yields the current timestamp
execute bad; -- yields the timestamp at which it was prepared
Why the two aren't returning the same results in your case would depend on the context (you only posted part of your pl/pgsql function, so it's hard to tell), but my guess is you're running into a variation of the above kind of problem.
From Tom Lane:
I think the problem is that you're assuming "amount" will refer to a table column of the query, when actually it's a local variable of the plpgsql function. The second interpretation will take precedence unless you qualify the column reference with the table's name/alias.
Note: PG 9.0 will throw an error by default when there is an ambiguity of this type.
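Concretely, the fix Tom Lane describes is a one-token change to the non-working loop: qualify the column with the table's alias so it cannot be captured by the local variable. A sketch (only the avg() line changes):
FOR r, amount IN
SELECT
row_number() OVER (ORDER BY taken)::integer,
avg( m.amount )::double precision -- qualified: the table column, not the plpgsql variable
FROM
x_function( id, 25 ) ca,
x_table m
WHERE
m.category_id = 1 AND
m.location_id = ca.id AND
extract( month from m.taken ) = 1 AND
extract( day from m.taken ) = 1
GROUP BY
m.taken
ORDER BY
m.taken
LOOP
-- loop body unchanged
END LOOP;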

How can I query 'between' numeric data on a not numeric field?

I've just found a query in the database that is failing and causing a report to fall over. The basic gist of the query:
Select *
From table
Where IsNull(myField, '') <> ''
And IsNumeric(myField) = 1
And Convert(int, myField) Between @StartRange And @EndRange
Now, myField doesn't contain numeric data in all the rows [it is of nvarchar type]... but this query was obviously designed such that it only cares about rows where the data in this field is numeric.
The problem with this is that T-SQL (as near as I understand) doesn't short-circuit the WHERE clause, thus causing it to error out on records where the data is not numeric, with the exception:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the nvarchar value '/A' to data type int.
Short of dumping all the rows where myField is numeric into a temporary table and then querying that for rows where the field is in the specified range, what can I do that is optimal?
My first attempt, purely to analyse the returned data and see what was going on, was:
Select *
From (
Select *
From table
Where IsNull(myField, '') <> ''
And IsNumeric(myField) = 1
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
But I get the same error as I did for the first query, which I'm not sure I understand, as I shouldn't be converting any non-numeric data at this point. The subquery should only have returned rows where myField contains numeric data.
Maybe I need my morning tea, but does this make sense to anyone? Another set of eyes would help.
Thanks in advance
IsNumeric only tells you that the string can be converted to one of the numeric types in SQL Server. It may be able to convert it to money, or to a float, but may not be able to convert it to an int.
Change your
IsNumeric(myField) = 1
to be:
not myField like '%[^0-9]%' and LEN(myField) < 9
(that is, you want myField to contain only digits, and fit in an int)
Edit examples:
select ISNUMERIC('.'),ISNUMERIC('£'),ISNUMERIC('1d9')
result:
----------- ----------- -----------
1 1 1
(1 row(s) affected)
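As an aside (not part of the original answer): on SQL Server 2012 and later, TRY_CONVERT sidesteps the evaluation-order problem entirely, because it returns NULL instead of raising an error, and NULL never falls BETWEEN the bounds:
Select *
From table
Where Try_Convert(int, myField) Between @StartRange And @EndRange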
You'd have to force SQL to evaluate the expressions in a certain order.
Here is one solution:
Select *
From (
Select TOP 2000000000 *
From table
Where IsNumeric(myField) = 1
And IsNull(myField, '') <> ''
ORDER BY Key
) t0
Where Convert(int, myField) Between @StartRange And @EndRange
and another:
Select *
From table
Where
CASE
WHEN IsNumeric(myField) = 1 And IsNull(myField, '') <> ''
THEN Convert(int, myField) ELSE @StartRange-1
END Between @StartRange And @EndRange
The first technique is "intermediate materialisation": it forces a sort on a working table.
The second relies on CASE evaluation order being guaranteed.
Neither is pretty or whizzy
SQL is declarative: you tell the optimiser what you want, not how to do it. The tricks above force things to be done in a certain order.
Not sure if this helps you, but I did read somewhere that an invalid conversion using CONVERT will always generate an error in SQL Server. So I think it would be better to use CASE in the WHERE clause to avoid having CONVERT run on all rows.
Use a CASE statement.
declare @StartRange int
declare @EndRange int
set @StartRange = 1
set @EndRange = 3
select *
from TestData
WHERE Case WHEN ISNUMERIC(Value) = 0 THEN 0
WHEN Value IS NULL THEN 0
WHEN Value = '' THEN 0
WHEN CONVERT(int, Value) BETWEEN @StartRange AND @EndRange THEN 1
END = 1