Using one DECLARE statement for several variables - tsql

Is there any difference between using one DECLARE statement to declare all your variables in a stored procedure or function, and using a separate statement for each one?
For example:
DECLARE @INST1 INT
,@INST2 BIGINT
,@INST3 DATETIME
.......
,@INSTN INT
and
DECLARE @INST1 INT
DECLARE @INST2 BIGINT
DECLARE @INST3 DATETIME
..................
DECLARE @INSTN INT
I am asking about differences like performance, reduced SQL Server cache size, and other internal server behaviour that I am not familiar with and that could make the server's job easier.

IMHO, there's no difference, because the engine allocates the same memory for the variables in either case. The first form is only a shorter way to write the code; I prefer a separate DECLARE for each variable because the code reads better.

Related

Can a Firebird SQL procedure know the parent procedure/trigger from which it is called?

I have a SQL procedure which should return a slightly different result if it is called from one specific procedure. Is it possible for the SQL procedure to detect that it is called from one particular other SQL procedure?
Maybe monitoring mon$... table data can give the answer?
The question applies to Firebird 2.1.
E.g. there is the mon$call_stack table, but most mon$... tables are empty in Firebird 2.1; they are only populated in later versions of Firebird.
Hidden data dependencies are a bad idea. There is a reason why programmers see a "pure function" as a good thing to pursue. Perhaps not in all situations and not at all costs, but when other factors are not affected, it had better be so.
https://en.wikipedia.org/wiki/Pure_function
So, Mark is correct that if there is something that affects your procedure's logic, then it had better be explicitly documented by becoming an explicit function parameter. Unless your explicit goal was exactly to create a hidden backdoor.
This, however, means that all the "clients" of that procedure, all the places it can be called from, would have to be changed as well, and this would have to be done in concert, both during development and during upgrades at client deployment sites. Which can be complicated.
So I rather would propose creating a new procedure and moving all the actual logic into it.
https://en.wikipedia.org/wiki/Adapter_pattern
Assuming you have some
create procedure old_proc(param1 type1, param2 type2, param3 type3) as
begin
....some real work and logic here....
end;
transform it into something like
create procedure new_proc(param1 type1, param2 type2, param3 type3,
new_param smallint not null = 0) as
begin
....some real work and logic here....
....using new parameter for behavior fine-tuning...
end;
create procedure old_proc(param1 type1, param2 type2, param3 type3) as
begin
execute procedure new_proc(param1, param2, param3)
end;
...and then you explicitly make "one specific procedure" call new_proc(...., 1). Then gradually, one place after another, you would move ALL your programs from calling old_proc to calling new_proc, and eventually you would retire old_proc when all dependencies have moved to the new API.
https://www.firebirdsql.org/rlsnotesh/rnfbtwo-psql.html#psql-default-args
There is one more option for passing a "hidden backdoor parameter": context variables, introduced in Firebird 2.0:
https://www.firebirdsql.org/rlsnotesh/rlsnotes20.html#dml-dsql-context
and then your callee would check it like this:
.....normal execution
if ( rdb$get_context('USER_TRANSACTION','my_caller') is not null) THEN BEGIN
....new behavior...
end;
However, you would have to make that "one specific procedure" set this variable before the call (which is tedious but not hard) AND properly delete it after the call (and this should be framed so it happens even in case of errors/exceptions, which is also tedious and not easy).
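A sketch of that caller side in Firebird PSQL (procedure names here are made up for illustration; the re-raise via a bare EXCEPTION statement inside the WHEN block assumes a Firebird version that supports re-throwing):

```sql
-- Hypothetical caller: set the flag, call the procedure, then clear
-- the flag again, even on errors, via a WHEN ANY handler.
create procedure one_specific_procedure
as
begin
  rdb$set_context('USER_TRANSACTION', 'my_caller', '1');
  execute procedure some_callee;
  rdb$set_context('USER_TRANSACTION', 'my_caller', null);
  when any do
  begin
    -- clear the flag before letting the error propagate
    rdb$set_context('USER_TRANSACTION', 'my_caller', null);
    exception;  -- re-raise the original error
  end
end
```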
I'm not aware of any such option. If your procedure should exhibit special behaviour when called from a specific procedure, I'd recommend that you make it explicit by adding an extra parameter specifying the type of behaviour, or separating this into two different procedures.
That way, you can also test the behaviour directly.
Although I agree that the best approach would probably be to add a parameter to the procedure to help identify where it is being called from, sometimes we don't have that luxury. Consider the scenario where the procedure signature can't change because it is in a legacy system and is called in many places. In that scenario I would consider the following example;
The stored procedure that needs to know who called it will be called SPROC_A in this example.
First we create a Global Temp Table
CREATE GLOBAL TEMPORARY TABLE GTT_CALLING_PROC
( PKEY INTEGER primary key,
CALLING_PROC VARCHAR(31))
ON COMMIT DELETE ROWS;
Next we create another Stored procedure called SPROC_A_WRAPPER that will wrap the calling to SPROC_A
CREATE OR ALTER PROCEDURE SPROC_A_WRAPPER
AS
DECLARE CALLING_SPROC VARCHAR(31);
BEGIN
DELETE FROM GTT_CALLING_PROC
WHERE GTT_CALLING_PROC.PKEY = 1;
INSERT INTO GTT_CALLING_PROC (
PKEY,
CALLING_PROC)
VALUES (
1,
'SPROC_A_WRAPPER');
EXECUTE PROCEDURE SPROC_A;
DELETE FROM GTT_CALLING_PROC
WHERE GTT_CALLING_PROC.PKEY = 1;
END
and finally we have SPROC_A
CREATE OR ALTER PROCEDURE SPROC_A
AS
DECLARE CALLING_SPROC VARCHAR(31);
BEGIN
SELECT FIRST 1 CALLING_PROC
FROM GTT_CALLING_PROC
WHERE GTT_CALLING_PROC.PKEY = 1
INTO :CALLING_SPROC;
IF (:CALLING_SPROC = 'SPROC_A_WRAPPER') THEN
BEGIN
/* Do Something */
END
ELSE
BEGIN
/* Do Something Else */
END
END
SPROC_A_WRAPPER will populate the temp table, call SPROC_A, and then delete the row from the temp table, so that if SPROC_A is called from someplace else within the same transaction, it won't think SPROC_A_WRAPPER called it.
Although somewhat crude, I believe this would satisfy your need.

How to use original Postgres input/output_function in CREATE TYPE?

I have a table with a column of type smallint and want to provide a CAST from varchar to smallint to implement some conversions for that column only. So, to be able to create a specific CAST for my needs, I need a type for that special column. I already tried using a domain, but Postgres warns that domains are ignored in a CAST... So it looks like I'm stuck with CREATE TYPE, but I don't want to implement the required input/output_function on my own, as in the end I only need whatever is already available for smallint in Postgres.
The problem is I don't know the names of those functions, in which lib those are stored, if I need to provide paths which can vary upon installation on different OS or if those are available at all.
So, is it possible to CREATE TYPE something like smallint that only uses built-in Postgres functions, in a platform/path-independent manner?
I didn't find anyone doing something like that. Thanks!
You can create a type that is just like smallint like this:
CREATE TYPE myint;
CREATE FUNCTION myintin(cstring) RETURNS myint
LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int2in';
CREATE FUNCTION myintout(myint) RETURNS cstring
LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int2out';
CREATE FUNCTION myintrecv(internal) RETURNS myint
LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int2recv';
CREATE FUNCTION myintsend(myint) RETURNS bytea
LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int2send';
CREATE TYPE myint (
INPUT = myintin,
OUTPUT = myintout,
RECEIVE = myintrecv,
SEND = myintsend,
LIKE = smallint,
CATEGORY = 'N',
PREFERRED = FALSE,
DELIMITER = ',',
COLLATABLE = FALSE
);
You'd have to define casts to other numeric types if you want to use it in arithmetic expressions.
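For instance, since myint shares smallint's internal representation (int2), casts between the two could be declared binary-coercible, with no conversion function (a sketch; note that creating binary-coercible casts requires superuser privileges):

```sql
-- myint has the same internal representation as smallint (int2),
-- so these casts need no conversion function.
CREATE CAST (myint AS smallint) WITHOUT FUNCTION AS IMPLICIT;
CREATE CAST (smallint AS myint) WITHOUT FUNCTION AS ASSIGNMENT;
```

With the cast to smallint in place, myint values can participate in arithmetic through the existing smallint operators.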
You can also add casts from varchar (or text), but beware that creating too many casts can lead to ambiguities and surprising behaviour during type resolution. This is the reason why many type casts were removed in PostgreSQL 8.3; see the release notes.
I'd recommend that you look for a simpler solution to your problem, such as explicit type casts.

PostgreSQL - Auto Cast for types?

I'm working on porting database from Firebird to PostgreSQL and have many errors related to type cast. For example let's take one simple function:
CREATE OR REPLACE FUNCTION f_Concat3 (
s1 varchar, s2 varchar, s3 varchar
)
RETURNS varchar AS
$body$
BEGIN
return s1||s2||s3;
END;
$body$ LANGUAGE 'plpgsql' IMMUTABLE CALLED ON NULL INPUT SECURITY INVOKER LEAKPROOF COST 100;
As Firebird is quite flexible with types, this function was called with varying argument types: some of the arguments might be integer/double precision/timestamp. And of course in Postgres the call f_Concat3 ('1', 2, 345.345) causes an error like:
function f_Concat3(unknown, integer, numeric) not found.
The documentation recommends using an explicit cast like:
f_Concat3 ('1'::varchar, 2::varchar, 345.345::varchar)
I could also create function clones for all possible combinations of types that might occur, and it would work. An example that resolves the error:
CREATE OR REPLACE FUNCTION f_Concat3 (
s1 varchar, s2 integer, s3 numeric
)
RETURNS varchar AS
$body$
BEGIN
return s1::varchar||s2::varchar||s3::varchar;
END;
However, this is very bad and ugly, and it won't work for big functions.
Important: We have one general code base for all databases and use our own language to create application objects (forms, reports, etc.) which contain select queries. It is not possible to use explicit casts in function calls because we would lose compatibility with other databases.
I am confused that an integer argument cannot be cast to numeric or double precision, or a date/number to a string. I even face problems going from integer to smallint, and vice versa. Most databases do not act like this.
Is there any best practice for such situation?
Is there any alternatives for explicit cast?
SQL is a typed language, and PostgreSQL takes that more seriously than other relational databases. Unfortunately that means extra effort when porting an application with sloppy coding.
It is tempting to add implicit casts, but the documentation warns you against creating casts between built-in data types:
Additional casts can be added by the user with the CREATE CAST command. (This is usually done in conjunction with defining new data types. The set of casts between built-in types has been carefully crafted and is best not altered.)
This is not an idle warning, because function resolution and other things may suddenly fail or misbehave if you create new casts between existing types.
I think that if you really don't want to clean up the code (which would make it more portable for the future), you have no choice but to add more versions of your functions.
Fortunately PostgreSQL has function overloading which makes that possible.
You can make the job easier by using one argument with a polymorphic type in your function definition, like this:
CREATE OR REPLACE FUNCTION f_concat3 (
s1 text, s2 integer, s3 anyelement
) RETURNS text
LANGUAGE sql IMMUTABLE LEAKPROOF AS
'SELECT f_concat3(s1, s2::text, s3::text)';
You cannot use more than one anyelement argument though, because that will only work if all such parameters are of the same type.
If you use function overloading, be careful that you don't create ambiguities that would make function resolution fail.

Should I use the largest string data type when creating a user defined function which operates on strings?

When creating a user defined function is it "bad" to just automatically use the largest string possible?
For example, given the following UDF, I've used nvarchar(max) for my input string, even though I know perfectly well that the function currently isn't going to need to accept nvarchar(max). Maybe I'm just thinking too far ahead, but I supposed there would always be the possibility that an nvarchar(max) would be passed to this function.
By "doing something bad", I'm wondering whether, by declaring that this function could receive an actual nvarchar(max), I am doing anything to cripple performance?
CREATE FUNCTION [dbo].[IsMasNull] (@value nvarchar(max))
RETURNS BIT
AS
BEGIN
RETURN
CASE
WHEN @value IS NULL THEN 1
WHEN CHARINDEX(char(0), @value) > 0 THEN 1
ELSE 0
END
END
NVARCHAR(MAX) will affect performance if it's a database column. As a parameter to a stored procedure or function it should make no difference. If there is degraded performance at all, it is because of the sheer size of the data and not the datatype.

Return multiple values from a SQL Server function

How would I return multiple values (say, a number and a string) from a user-defined function in SQL Server?
Change it to a table-valued function
Please refer to the following link, for example.
Another option would be to use a procedure with output parameters - Using a Stored Procedure with Output Parameters
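As a minimal sketch of the table-valued-function approach (names here are made up for illustration), an inline function can return a number and a string in a single row:

```sql
-- Inline table-valued function returning two values as one row
CREATE FUNCTION dbo.GetNumberAndString ()
RETURNS TABLE
AS
RETURN
    SELECT 42 AS SomeNumber, N'hello' AS SomeString;
GO

-- Usage: query it like a table
SELECT SomeNumber, SomeString
FROM dbo.GetNumberAndString();
```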
Here's the Query Analyzer template for an in-line function - it returns 2 values by default:
-- =============================================
-- Create inline function (IF)
-- =============================================
IF EXISTS (SELECT *
FROM sysobjects
WHERE name = N'<inline_function_name, sysname, test_function>')
DROP FUNCTION <inline_function_name, sysname, test_function>
GO
CREATE FUNCTION <inline_function_name, sysname, test_function>
(<@param1, sysname, @p1> <data_type_for_param1, , int>,
<@param2, sysname, @p2> <data_type_for_param2, , char>)
RETURNS TABLE
AS
RETURN SELECT @p1 AS c1,
@p2 AS c2
GO
-- =============================================
-- Example to execute function
-- =============================================
SELECT *
FROM <owner, , dbo>.<inline_function_name, sysname, test_function>
(<value_for_@param1, , 1>,
<value_for_@param2, , 'a'>)
GO
Erland Sommarskog has an exhaustive post about passing data in SQL Server located here:
http://www.sommarskog.se/share_data.html
He covers SQL Server 2000, 2005, and 2008, and it should probably be read in full detail, as there is ample coverage of each method's advantages and drawbacks. However, here are the highlights of the article (frozen in time as of July 2015) for the sake of providing search terms that can be used to look up greater detail:
This article tackles two related questions:
How can I use the result set from one stored procedure in another, also expressed as How can I use the result set from a stored procedure in a SELECT statement?
How can I pass table data in a parameter from one stored procedure to another?
OUTPUT Parameters
Not generally applicable, but sometimes overlooked.
Table-valued Functions
Often the best choice for output-only, but there are several restrictions.
Examples:
Inline Functions: Use this to reuse a single SELECT.
Multi-statement Functions: When you need to encapsulate more complex logic.
Using a Table
The most general solution. My favoured choice for input/output scenarios.
Examples:
Sharing a Temp Table: Mainly for a single pair of caller/callee.
Process-keyed Table: Best choice for many callers to the same callee.
Global Temp Tables: A variation of process-keyed.
Table-valued Parameters
Req. Version: SQL 2008
Mainly useful when passing data from a client.
INSERT-EXEC
Deceivingly appealing, but should be used sparingly.
Using the CLR
Req. Version: SQL 2005
Complex, but useful as a last resort when INSERT-EXEC does not work.
OPENQUERY
Tricky with many pitfalls. Discouraged.
Using XML
Req. Version: SQL 2005
A bit of a kludge, but not without advantages.
Using Cursor Variables
Not recommendable.
Example of using a stored procedure with multiple out parameters
As user Mr. Brownstone suggested, you can use a stored procedure; to make it easy for everyone, I created a minimalist example. First create a stored procedure:
Create PROCEDURE MultipleOutParameter
@Input int,
@Out1 int OUTPUT,
@Out2 int OUTPUT
AS
BEGIN
Select @Out1 = @Input + 1
Select @Out2 = @Input + 2
Select 'this returns your normal Select-Statement' as Foo
, 'amazing is it not?' as Bar
-- Return can be used to get even more (afaik only int) values
Return(@Out1+@Out2+@Input)
END
Calling the stored procedure
To execute the stored procedure, a few local variables are needed to receive the values:
DECLARE @GetReturnResult int, @GetOut1 int, @GetOut2 int
EXEC @GetReturnResult = MultipleOutParameter
@Input = 1,
@Out1 = @GetOut1 OUTPUT,
@Out2 = @GetOut2 OUTPUT
To see the values' content, you can do the following:
Select @GetReturnResult as ReturnResult, @GetOut1 as Out_1, @GetOut2 as Out_2
This will be the result: