Use of brackets around parameters when creating a stored procedure - tsql

I've recently spotted a few stored procedures declared with () brackets around the parameters:
CREATE PROCEDURE [Schema].[Table]
(
    @Param1 char(3),
    @Param2 bit,
    @Param3 datetimeoffset(0)
)
I immediately went to the MS docs site, but it says nothing about this, and I can't find anything in a web search either.
Enclosing the parameters doesn't seem to cause any issues, and when I script the stored procedure out in SSMS, it happily emits it with the brackets (obviously because it stores the original definition, just saying):
CREATE PROCEDURE [Schema].[Table]
(
    @Param1 char(3),
    @Param2 bit,
    @Param3 datetimeoffset(0)
)
Whilst it's "allowed", yet undocumented, how valid is this declaration, and is it good or bad practice? Apparently it was done to standardise the way both functions and stored procedures are declared.
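In practice both forms parse and behave the same for procedures, while functions do require the parentheses, which is presumably the standardisation argument. A minimal sketch (object names invented for illustration):
-- Procedure: parentheses are optional
CREATE PROCEDURE dbo.DemoProc
(
    @Param1 char(3),
    @Param2 bit
)
AS
BEGIN
    SELECT @Param1 AS P1, @Param2 AS P2;
END
GO

-- Function: parentheses are mandatory
CREATE FUNCTION dbo.DemoFunc (@Param1 char(3), @Param2 bit)
RETURNS TABLE
AS
RETURN SELECT @Param1 AS P1, @Param2 AS P2;
GO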

Related

Can Firebird SQL procedure know the parent procedure/trigger from which it is called from?

I have an SQL procedure which should return a slightly different result if it is called from one specific procedure. Is it possible for the SQL procedure to detect that it is being called from one particular other SQL procedure?
Maybe monitoring mon$... table data can give the answer?
The question applies to Firebird 2.1.
E.g. there is the mon$call_stack table, but most mon$... tables are empty in Firebird 2.1; they are only populated in later versions of Firebird.
Hidden data dependencies are a bad idea. There is a reason why programmers see a "pure function" as a good thing to pursue. Perhaps not in all situations and not at all costs, but when other factors are not affected, it had better be so.
https://en.wikipedia.org/wiki/Pure_function
So, Mark is correct that if there is something that affects your procedure's logic, then it had better be explicitly documented by becoming an explicit function parameter. Unless your explicit goal was exactly to create a hidden backdoor.
This, however, means that all the "clients" of that procedure, all the places it can be called from, have to be changed as well, and this has to be done in concert, both during development and during upgrades at client deployment sites. Which can be complicated.
So I would rather propose creating a new procedure and moving all the actual logic into it.
https://en.wikipedia.org/wiki/Adapter_pattern
Assuming you have something like
create procedure old_proc(param1 type1, param2 type2, param3 type3) as
begin
  -- ...some real work and logic here...
end;
transform it into something like
create procedure new_proc(param1 type1, param2 type2, param3 type3,
                          new_param smallint not null = 0) as
begin
  -- ...some real work and logic here...
  -- ...using the new parameter for behaviour fine-tuning...
end;
create procedure old_proc(param1 type1, param2 type2, param3 type3) as
begin
  execute procedure new_proc(param1, param2, param3);
end;
...and then you explicitly make the "one specific procedure" call new_proc(...., 1). Then gradually, one place after another, you move ALL your programs from calling old_proc to calling new_proc, and eventually you retire old_proc when all dependencies have moved to the new API.
https://www.firebirdsql.org/rlsnotesh/rnfbtwo-psql.html#psql-default-args
There is one more option for passing a "hidden backdoor parameter": context variables, introduced in Firebird 2.0:
https://www.firebirdsql.org/rlsnotesh/rlsnotes20.html#dml-dsql-context
and then your callee would check it like this:
-- ...normal execution...
if (rdb$get_context('USER_TRANSACTION', 'my_caller') is not null) then
begin
  -- ...new behaviour...
end
However, you would have to make that "one specific procedure" properly set this variable before the call (which is tedious but not hard) AND properly delete it after the call, and the deletion has to be framed so that it properly happens even in case of errors/exceptions, which is also tedious and not easy.
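A rough sketch of that set/clear discipline (a sketch only, assuming Firebird 2.0+; SPECIFIC_PROC and TARGET_PROC are hypothetical names):
create or alter procedure specific_proc
as
  declare dummy integer;
begin
  -- announce ourselves to the callee
  dummy = rdb$set_context('USER_TRANSACTION', 'my_caller', 'SPECIFIC_PROC');
  begin
    execute procedure target_proc;
    when any do
    begin
      -- clear the flag even on error, then re-raise the exception
      dummy = rdb$set_context('USER_TRANSACTION', 'my_caller', null);
      exception;
    end
  end
  -- clear the flag on the success path too
  dummy = rdb$set_context('USER_TRANSACTION', 'my_caller', null);
end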
I'm not aware of any such option. If your procedure should exhibit special behaviour when called from a specific procedure, I'd recommend that you make it explicit by adding an extra parameter specifying the type of behaviour, or separating this into two different procedures.
That way, you can also test the behaviour directly.
Although I agree that the best way would probably be to add a parameter to the procedure to help identify where it is being called from, sometimes we don't have that luxury. Consider the scenario where the procedure signature can't change because it is in a legacy system and called in many places. In that scenario I would consider the following example:
The stored procedure that needs to know who called it will be called SPROC_A in this example.
First we create a Global Temp Table
CREATE GLOBAL TEMPORARY TABLE GTT_CALLING_PROC
( PKEY INTEGER primary key,
CALLING_PROC VARCHAR(31))
ON COMMIT DELETE ROWS;
Next we create another stored procedure, called SPROC_A_WRAPPER, that wraps the call to SPROC_A:
CREATE OR ALTER PROCEDURE SPROC_A_WRAPPER
AS
BEGIN
  DELETE FROM GTT_CALLING_PROC
  WHERE GTT_CALLING_PROC.PKEY = 1;

  INSERT INTO GTT_CALLING_PROC (
    PKEY,
    CALLING_PROC)
  VALUES (
    1,
    'SPROC_A_WRAPPER');  -- must match the literal tested in SPROC_A

  EXECUTE PROCEDURE SPROC_A;

  DELETE FROM GTT_CALLING_PROC
  WHERE GTT_CALLING_PROC.PKEY = 1;
END
and finally we have SPROC_A
CREATE OR ALTER PROCEDURE SPROC_A
AS
DECLARE CALLING_SPROC VARCHAR(31);
BEGIN
SELECT FIRST 1 CALLING_PROC
FROM GTT_CALLING_PROC
WHERE GTT_CALLING_PROC.PKEY = 1
INTO :CALLING_SPROC;
IF (CALLING_SPROC = 'SPROC_A_WRAPPER') THEN
BEGIN
/* Do Something */
END
ELSE
BEGIN
/* Do Something Else */
END
END
SPROC_A_WRAPPER populates the temp table, calls SPROC_A, and then deletes the row from the temp table, so that if SPROC_A is called from someplace else within the same transaction, it won't think SPROC_A_WRAPPER called it.
Although somewhat crude, I believe this would satisfy your need.

Lack of user-defined table types for passing data between stored procedures in PostgreSQL

So I know there are already similar questions on this, but most of them are very old, or they have non-answers like "why would you even want to do this?", or "table types aren't performant and we don't want them here", or even "you need to rethink your whole approach".
So what I would ideally want to do is to declare a user-defined table type like this:
CREATE TYPE my_table AS TABLE (
a int,
b date);
Then use this in a procedure as a parameter, like this:
CREATE PROCEDURE my_procedure (
my_table_parameter my_table)
Then be able to do stuff like this:
INSERT INTO
my_temp_table
SELECT
m.a,
m.b,
o.useful_data
FROM
my_table m
INNER JOIN my_schema.my_other_table o ON o.a = m.a;
This is for a billing system, let's make it a mobile phone billing system (it isn't, but it's similar enough to work). Here are several ways I might call my procedure:
I sometimes want to call my procedure for one row, to create an adhoc bill for one customer. I want to do this while they are on the phone and get them an immediate result. Maybe I just fixed something wrong with their bill, and they're angry!
I sometimes want to bill everyone who's due a bill on a specific date. Maybe this is their preferred billing date, and they're on a monthly billing cycle?
I sometimes want to bill in bulk, but my data comes from a CSV file. Maybe this is a custom billing run that I have no way of understanding the motivation for?
Maybe I want to final bill customers who recently left?
Sometimes I might need to rebill customers because a mistake was made. Maybe a tariff rate was uploaded incorrectly, and everyone on that tariff needs their bill regenerating?
I want to split my code up into modules, it's easier to work like this, and it allows a large degree of reusability. So what I don't want to do is to write n billing systems, where each one handles one of the use cases above. I want a generic billing engine that is a stored procedure, uses set-based queries where possible, and works just about as well for one customer as it does for 1,000,000. If anything I want it optimised for lots of customers, as long as it only takes a few seconds to run for one customer.
If I had SQL Server I would create a user-defined table type, and this would contain a list of customers, the date they need billing to, and maybe anything else that would be useful. But let's just leave it at the simplest case possible, an integer representing a customer and a date to say what date I want to bill them up to, like my example above.
I've spent some days now looking at the options available in PostgreSQL, and these are the conclusions I have reached. I would be extremely grateful for any help with this, correcting my incorrect assumptions, or telling me of another way I have overlooked.
Process-Keyed Table
Create a table that looks like this:
CREATE TABLE customer_list (
process_key int,
customer_id int,
bill_to_date date);
When I want to call my billing system I get a unique key (probably from a sequence), load up the rows with my list of customers/ dates to bill them to, and add the unique key to every row. Now I can simply pass the unique key to my billing engine, and it can scoop up the data at the other side.
This seems the most viable way to proceed, but it's clunky, like something I would have done in SQL Server 20 years ago when there weren't better options. It's prone to leaving data lying around, and it doesn't seem like it would be optimal, as the data has to be written to physical storage and read back into memory.
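To make the handoff concrete, a sketch (assuming PostgreSQL 11+ for CALL; the sequence billing_run_seq and procedure billing_engine exist only for illustration):
CREATE SEQUENCE billing_run_seq;

DO $$
DECLARE
    v_key int := nextval('billing_run_seq');
BEGIN
    -- load this run's rows, tagged with the unique key
    INSERT INTO customer_list (process_key, customer_id, bill_to_date)
    VALUES (v_key, 1, current_date);

    CALL billing_engine(v_key);  -- the engine selects WHERE process_key = v_key

    -- clean up so no data is left lying around
    DELETE FROM customer_list WHERE process_key = v_key;
END;
$$;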
Use a Temporary Table
So I'm thinking that I create a temporary table, call it customer_temp, and make it ON COMMIT DROP. When I call my stored procedure to bill customers it picks the data out of the temporary table, does what it needs to do, and then when it ends the table is vacuumed away.
But this doesn't work if I call the billing engine more than once at a time. So I need to give my temporary tables unique names, and also pass this name into my billing engine, which has to use some vile dynamic SQL to get the data into some usable area (probably another temporary table?).
Use a TYPE
When I first saw this I thought I had the answer, but it turns out to not work for multidimensional arrays (or I'm doing something wrong). I quickly learned that for a single dimensional array I could get this working by just pretending that a PostgreSQL TYPE was a user defined table type. But it obviously isn't.
So passing in an array of integers, e.g. customer_list int[]; works fine, and I can use the ARRAY command to populate that array from a query, and then it's easy to access it with =ANY(customer_list) at the other side. It's not ideal, and I bet it's awful for large data sets, but it's neat.
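For instance, a minimal sketch of the single-dimension approach (assuming PostgreSQL 11+ for procedures; table and column names invented):
CREATE OR REPLACE PROCEDURE bill_customers(customer_list int[])
LANGUAGE plpgsql AS
$$
BEGIN
    UPDATE customer
    SET    billed = true
    WHERE  customer_id = ANY(customer_list);
END;
$$;

-- populate the array from a query and call the engine
CALL bill_customers(ARRAY(SELECT customer_id FROM pending_bill));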
However, it doesn't work for multidimensional arrays, the ARRAY command can't cope with a query that has more than one column in it, and the other side becomes more awkward, needing an UNNEST command to unpack the data into a format where it's usable.
I defined my type like this:
CREATE TYPE customer_list AS (
customer_id int,
bill_to_date date);
...and then used it in my procedure parameter list as customer_list[], which seems to work, but I have no nice way to populate this structure from the calling procedure.
I feel I'm missing something here, as I never got it to work properly as a prototype, but I also feel this is a potential dead end anyway, as it won't cope with large numbers of rows in a performant way, and this isn't what arrays are meant for. The first thing I do with the array at the other side, is unpack it back into a table again, which seems counterintuitive.
Ref Cursors
I read that you can use REF CURSORs, but I'm not quite sure how this works. It seems that you open a cursor in one procedure, and then pass a handle to it to another procedure. It doesn't seem like this is going to be set-based, but I could be wrong, and I just haven't found a way to convert a cursor back into a table again?
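For reference, the handoff looks roughly like this (a sketch; function and cursor names invented), and the consumer does have to FETCH from the cursor rather than treat it as a table:
CREATE OR REPLACE FUNCTION open_customers()
RETURNS refcursor
LANGUAGE plpgsql AS
$$
DECLARE
    cur refcursor := 'customer_cur';  -- fixed portal name for the caller
BEGIN
    OPEN cur FOR SELECT customer_id, bill_to_date FROM customer_list;
    RETURN cur;
END;
$$;

-- In the same transaction, the caller fetches from the named portal:
-- BEGIN;
-- SELECT open_customers();
-- FETCH ALL FROM customer_cur;
-- COMMIT;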
Write Everything as One Massive Procedure
I'm not ruling this out, as PostgreSQL seems to be leading me this way. If I write one enormous billing engine that copes with every eventuality, then my only issue will be when this is called using an externally provided list. Every other issue can be solved by just not having to pass data between procedures.
I can probably cope with this by loading the data into a batch table, and feeding this in as one of the options. It's awful, and it's like going back to the 1990s, but if this is what PostgreSQL wants me to do, then so be it.
Final Thoughts
I'm sure I'm going to be asked for code examples, which I will happily provide, but I avoided them because this post is already uber-long, and what I'm trying to achieve is actually quite simple, I feel.
Having typed all of this out, I'm still feeling that there must be a way of working around the "temporary table names must be unique" restriction, as this approach would work nicely if I found a way to let it be called in a multithreaded way.
Okay, taking the bits I was missing I came up with this, which seems to work:
CREATE TYPE bill_list AS (
customer_id int,
bill_date date);
CREATE TABLE IF NOT EXISTS billing.pending_bill (
customer_id int,
bill_date date);
CREATE TABLE IF NOT EXISTS billing.customer (
customer_id int,
billed boolean,
last_billed date);
INSERT INTO billing.customer
VALUES
(1, false, NULL::date),
(2, false, NULL::date),
(3, false, NULL::date);
INSERT INTO billing.pending_bill
VALUES
(1, '20210108'::date),
(2, '20210105'::date),
(3, '20210104'::date);
CREATE OR REPLACE PROCEDURE billing.bill_customer_list (
pending bill_list[])
LANGUAGE PLPGSQL
AS
$$
BEGIN
UPDATE
billing.customer c
SET
billed = true,
last_billed = p.bill_date
FROM
UNNEST(pending) p
WHERE
p.customer_id = c.customer_id;
END;
$$;
CREATE OR REPLACE PROCEDURE billing.test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE pending bill_list[];
BEGIN
pending := ARRAY(SELECT p FROM billing.pending_bill p);
CALL billing.bill_customer_list (pending);
END;
$$;
Your select in the procedure returns multiple columns. But you want to create an array of a custom type. So your SELECT list needs to return the type, not *.
You don't need the bill_list type either, as every table has a corresponding type and you can simply pass an array of the table's type.
So you can use the following:
CREATE PROCEDURE bill_customer_list (
pending pending_bill[])
LANGUAGE PLPGSQL
AS
$$
BEGIN
UPDATE
customer c
SET
billed = true
FROM unnest(pending) p --<< treat the array as a table
WHERE
p.customer_id = c.customer_id;
END;
$$
;
CREATE PROCEDURE test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE
pending pending_bill[];
BEGIN
pending := ARRAY(SELECT p FROM pending_bill p);
CALL bill_customer_list (pending);
END;
$$
;
The select p returns a record (of the same type as the table) as a single column in the result.
The := is the assignment operator in PL/pgSQL and typically much faster than a SELECT .. INTO variable. Although in this case the performance difference wouldn't matter much I guess.
If you do want to keep the extra type bill_list around because it e.g. contains fewer columns than pending_bill, you need to select only those columns that match the type's columns and create a record by enclosing them in parentheses: (a,b) is a single column with an anonymous record type (and two fields), while a,b are two distinct columns.
CREATE PROCEDURE test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE
pending bill_list[];
BEGIN
pending := ARRAY(SELECT (id, customer_id) FROM pending_bill p);
CALL bill_customer_list (pending);
END;
$$
;
You should also note that DECLARE starts a block in PL/pgSQL in which multiple variables can be defined. There is no need to write one DECLARE for each variable (your formatting of the DECLARE block leads me to think you assumed you need one DECLARE per variable, as is the case in T-SQL).
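That is, something like this is enough (a trivial sketch):
DO $$
DECLARE
    pending  int[];                  -- several variables can share
    run_date date := current_date;   -- a single DECLARE section
BEGIN
    pending := ARRAY[1, 2, 3];
    RAISE NOTICE '% rows pending as of %', array_length(pending, 1), run_date;
END;
$$;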

PostgreSQL - Auto Cast for types?

I'm working on porting a database from Firebird to PostgreSQL and have many errors related to type casts. For example, let's take one simple function:
CREATE OR REPLACE FUNCTION f_Concat3 (
    s1 varchar, s2 varchar, s3 varchar
)
RETURNS varchar AS
$body$
BEGIN
    return s1 || s2 || s3;
END;
$body$
LANGUAGE plpgsql IMMUTABLE CALLED ON NULL INPUT SECURITY INVOKER LEAKPROOF COST 100;
As Firebird is quite flexible about types, this function was called in different ways: some of the arguments might be of another type (integer/double precision/timestamp). And of course in Postgres the function call f_Concat3('1', 2, 345.345) causes an error like:
function f_concat3(unknown, integer, numeric) does not exist
The documentation recommends using an explicit cast like:
f_Concat3('1'::varchar, 2::varchar, 345.345::varchar)
Also, I can create function clones for all possible combinations of types that might occur, and it will work. An example that resolves the error:
CREATE OR REPLACE FUNCTION f_Concat3 (
    s1 varchar, s2 integer, s3 numeric
)
RETURNS varchar AS
$body$
BEGIN
    return s1::varchar || s2::varchar || s3::varchar;
END;
$body$
LANGUAGE plpgsql;
However, this is very bad and ugly, and it won't scale to big functions.
Important: we have one general code base for all DBs and use our own language to create application objects (forms, reports, etc.) that contain select queries. It is not possible to use explicit casts in the function calls because we would lose compatibility with the other DBs.
I am confused that an integer argument cannot be cast to numeric or double precision, or a date/number to a string. I even face problems going from integer to smallint, and vice versa. Most databases don't behave like this.
Is there any best practice for such situation?
Is there any alternatives for explicit cast?
SQL is a typed language, and PostgreSQL takes that more seriously than other relational databases. Unfortunately that means extra effort when porting an application with sloppy coding.
It is tempting to add implicit casts, but the documentation warns you against creating casts between built-in data types:
Additional casts can be added by the user with the CREATE CAST command. (This is usually done in conjunction with defining new data types. The set of casts between built-in types has been carefully crafted and is best not altered.)
This is not an idle warning, because function resolution and other things may suddenly fail or misbehave if you create new casts between existing types.
I think that if you really don't want to clean up the code (which would make it more portable for the future), you have no choice but to add more versions of your functions.
Fortunately PostgreSQL has function overloading which makes that possible.
You can make the job easier by using one argument with a polymorphic type in your function definition, like this:
CREATE OR REPLACE FUNCTION f_concat3 (
s1 text, s2 integer, s3 anyelement
) RETURNS text
LANGUAGE sql IMMUTABLE LEAKPROOF AS
'SELECT f_concat3(s1, s2::text, s3::text)';
You cannot use more than one anyelement argument though, because that will only work if all such parameters are of the same type.
If you use function overloading, be careful that you don't create ambiguities that would make function resolution fail.
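With the polymorphic wrapper in place alongside the original varchar version, the call from the question should resolve without touching the calling code; a quick check (assuming both overloads exist):
-- '1' (unknown) binds to text, 2 to integer, 345.345 (numeric) to anyelement
SELECT f_Concat3('1', 2, 345.345);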

Stored procedure hangs seemingly without explanation

We have a stored procedure that ran fine until 10 minutes ago, and now it just hangs after you call it.
Observations:
Copying the code into a query window yields the query result in 1 second
SP takes > 2.5 minutes until I cancel it
Activity Monitor shows it's not being blocked by anything, it's just doing a SELECT.
Running sp_recompile on the SP doesn't help
Dropping and recreating the SP doesn't help
Setting LOCK_TIMEOUT to 1 second does not help
What else can be going on?
UPDATE: I'm guessing it had to do with parameter sniffing. I used Adam Machanic's routine to find out which subquery was hanging. I found problems with the query plan thanks to the hint by Martin Smith. I learned about EXEC ... WITH RECOMPILE, OPTION (RECOMPILE) for subqueries within the SP, and OPTION (OPTIMIZE FOR (@parameter = 1)) to attack parameter sniffing. I still don't know what was wrong in this particular case, but I came out of this battle seasoned and much better armed. I know what to do next time. So here's the points!
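For reference, a sketch of where those hints go (procedure, table, and parameter names are made up):
CREATE PROCEDURE dbo.GetOrders @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (RECOMPILE);  -- or: OPTION (OPTIMIZE FOR (@CustomerID = 1))
END;
GO

-- Recompile for a single call only:
EXEC dbo.GetOrders @CustomerID = 1 WITH RECOMPILE;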
I think this is related to parameter sniffing and the need to copy your input parameters into local variables within the SP. Adding WITH RECOMPILE causes the execution plan to be recreated and eliminates much of the benefit of having an SP. We were using WITH RECOMPILE on many reports in an attempt to eliminate this hanging issue, and it occasionally resulted in hanging SPs that may have been related to other locks and/or transactions accessing the same tables simultaneously. See this link for more details:
Parameter Sniffing (or Spoofing) in SQL Server
and change your SPs to the following to fix it:
CREATE PROCEDURE [dbo].[SPNAME] @p1 int, @p2 int
AS
DECLARE @localp1 int, @localp2 int
SET @localp1 = @p1
SET @localp2 = @p2
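-- From here on, reference @localp1/@localp2 in the procedure's queries instead
-- of @p1/@p2, so the optimizer cannot sniff the call-time values.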
Run Adam Machanic's excellent sp_WhoIsActive stored proc while your query is running. It'll give you the wait information - meaning, what the stored proc is waiting on - plus things like the execution plan:
http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/
If you want the outer command (like a calling stored procedure's full text), use the @get_outer_command = 1 parameter as well.
First things first:
Please check whether there are any uncommitted transactions, i.e. a BEGIN TRANSACTION without a COMMIT TRANSACTION.
Thanks for all comments.
I still haven't found the answer, but I will post the progress here.
I failed to reproduce the problem before, but today I chanced upon another stored procedure with the same problem. Again the same symptoms appeared:
Hanging piece of query runs fine and quick (3 secs) in normal query window (hanging piece identified with sp_whoisactive)
No locks, according to Activity Monitor SPID is doing SELECT
Stored procedure runs for over 6 hours without response
Parameters passed to SP and variables declared in window are the same
Using the above hints, I found the SP execution plan, and it showed nothing out of the ordinary (to me, at least). Creating a new stored procedure with the same contents did not solve the problem either. So I started stripping the SP down to less and less content until I encountered a UDF call to another database. When I removed that (replaced the call with the inline contents of the function, a CASE statement), it ran fine again.
So this COULD have been the problem, but I am not very certain, as last time the problem disappeared by itself and I also changed a lot of other things while stripping this SP.
When we add new data, sometimes the execution plan becomes invalid or out of date, and then the stored procedure starts going into this limbo phase. Run the following commands on your database:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
They will flush the buffer cache and the plan cache, so the execution plan is rebuilt the next time you run the stored proc.
I think I had the same problem. I removed my parameters from the subqueries. It ran fine after that. Not sure if this is possible in your script but that is what solved it for me.
Brent Ozar's answer might work, but it returns only the active command text by default. For example, it returns WAITFOR DELAY '00:00:05' for a query like:
CREATE PROCEDURE spGetChangeNotifications
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@actionType TINYINT;
WHILE @actionType IS NULL
BEGIN
WAITFOR DELAY '00:00:05';
SELECT TOP 1
@actionType = [ActionType]
FROM
TableChangeNotifications;
END;
SELECT
TOP 1000 [RecordID], [Component], [TableName], [ActionType], [Key1], [Key2], [Key3]
FROM
TableChangeNotifications;
END;
Thus, check the @get_outer_command parameter as described above.
Also, try this one instead (a slightly modified procedure from the MS docs):
DECLARE
@sessions TABLE
(
[SPID] INT,
STATUS VARCHAR(MAX),
[Login] VARCHAR(MAX),
[HostName] VARCHAR(MAX),
[BlkBy] VARCHAR(MAX),
[DBName] VARCHAR(MAX),
[Command] VARCHAR(MAX),
[CPUTime] INT,
[DiskIO] INT,
[LastBatch] VARCHAR(MAX),
[ProgramName] VARCHAR(MAX),
[SPID_1] INT,
[REQUESTID] INT
);
INSERT INTO @sessions
EXEC sp_who2;
SELECT
[req].[session_id],
[A].[Login] AS 'login',
[A].[HostName] AS 'hostname',
[req].[start_time],
[cpu_time] AS 'cpu_time_ms',
OBJECT_NAME([st].[objectid], [st].[dbid]) AS 'object_name',
SUBSTRING(REPLACE(REPLACE(SUBSTRING([ST].text, ([req].[statement_start_offset] / 2) + 1, ((CASE [statement_end_offset]
WHEN -1
THEN DATALENGTH([ST].text)
ELSE [req].[statement_end_offset]
END - [req].[statement_start_offset]) / 2) + 1), CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS [statement_text],
[ST].text AS 'full_query_text'
FROM
sys.dm_exec_requests AS req
CROSS APPLY
sys.dm_exec_sql_text(req.sql_handle) AS ST
LEFT JOIN @sessions AS A
ON A.SPID = req.session_id
ORDER BY
[cpu_time] DESC;
Of course, it's possible to modify the code from Brent Ozar's answer so that it selects the full query text too. Nearly the same technique is used there (link to the code as of 2020-07-18, so it might change over time).
I had the same problem today, and I don't know what causes it, but I found a solution. I took the input parameter and saved it into a new local variable, i.e.
declare @parameter2 as x = @parameter
(where x stands for the parameter's type). Then I changed the references to the parameter in the queries from @parameter to @parameter2.

Return multiple values from a SQL Server function

How would I return multiple values (say, a number and a string) from a user-defined function in SQL Server?
Change it to a table-valued function
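For example, a minimal inline table-valued function that returns a number and a string in one row (names are illustrative):
CREATE FUNCTION dbo.GetNumberAndString ()
RETURNS TABLE
AS
RETURN
    SELECT 42 AS TheNumber, 'forty-two' AS TheString;
GO

SELECT TheNumber, TheString
FROM dbo.GetNumberAndString();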
Another option would be to use a procedure with output parameters - Using a Stored Procedure with Output Parameters
Here's the Query Analyzer template for an in-line function - it returns 2 values by default:
-- =============================================
-- Create inline function (IF)
-- =============================================
IF EXISTS (SELECT *
FROM sysobjects
WHERE name = N'<inline_function_name, sysname, test_function>')
DROP FUNCTION <inline_function_name, sysname, test_function>
GO
CREATE FUNCTION <inline_function_name, sysname, test_function>
(<@param1, sysname, @p1> <data_type_for_param1, , int>,
<@param2, sysname, @p2> <data_type_for_param2, , char>)
RETURNS TABLE
AS
RETURN SELECT @p1 AS c1,
@p2 AS c2
GO
-- =============================================
-- Example to execute function
-- =============================================
SELECT *
FROM <owner, , dbo>.<inline_function_name, sysname, test_function>
(<value_for_@param1, , 1>,
<value_for_@param2, , 'a'>)
GO
Erland Sommarskog has an exhaustive post about passing data in SQL Server located here:
http://www.sommarskog.se/share_data.html
He covers SQL Server 2000, 2005, and 2008, and it should probably be read in its full detail, as there is ample coverage of each method's advantages and drawbacks. However, here are the highlights of the article (frozen in time as of July 2015) for the sake of providing search terms that can be used to look up greater detail:
This article tackles two related questions:
How can I use the result set from one stored procedure in another? (Also expressed as: how can I use the result set from a stored procedure in a SELECT statement?)
How can I pass table data in a parameter from one stored procedure to another?
OUTPUT Parameters
Not generally applicable, but sometimes overlooked.
Table-valued Functions
Often the best choice for output-only, but there are several restrictions.
Examples:
Inline Functions: Use this to reuse a single SELECT.
Multi-statement Functions: When you need to encapsulate more complex logic.
Using a Table
The most general solution. My favoured choice for input/output scenarios.
Examples:
Sharing a Temp Table: Mainly for a single pair of caller/callee.
Process-keyed Table: Best choice for many callers to the same callee.
Global Temp Tables: A variation of process-keyed.
Table-valued Parameters
Req. Version: SQL 2008
Mainly useful when passing data from a client.
INSERT-EXEC
Deceptively appealing, but should be used sparingly.
Using the CLR
Req. Version: SQL 2005
Complex, but useful as a last resort when INSERT-EXEC does not work.
OPENQUERY
Tricky with many pitfalls. Discouraged.
Using XML
Req. Version: SQL 2005
A bit of a kludge, but not without advantages.
Using Cursor Variables
Not recommendable.
Example of using a stored procedure with multiple out parameters
As user Mr. Brownstone suggested, you can use a stored procedure; to make it easy for everyone, I created a minimalist example. First create a stored procedure:
CREATE PROCEDURE MultipleOutParameter
    @Input int,
    @Out1 int OUTPUT,
    @Out2 int OUTPUT
AS
BEGIN
    SELECT @Out1 = @Input + 1
    SELECT @Out2 = @Input + 2
    SELECT 'this returns your normal Select-Statement' as Foo
         , 'amazing is it not?' as Bar
    -- RETURN can be used to get even more (afaik only int) values
    RETURN(@Out1 + @Out2 + @Input)
END
Calling the stored procedure
To execute the stored procedure a few local variables are needed to receive the value:
DECLARE @GetReturnResult int, @GetOut1 int, @GetOut2 int
EXEC @GetReturnResult = MultipleOutParameter
    @Input = 1,
    @Out1 = @GetOut1 OUTPUT,
    @Out2 = @GetOut2 OUTPUT
To see the values' content, you can do the following:
SELECT @GetReturnResult as ReturnResult, @GetOut1 as Out_1, @GetOut2 as Out_2