Create an ID loop for a stored procedure - tsql

I have a list of about 500 ModuleIDs that I'd like to run through the stored procedure below. Rather than copy and paste each one individually, I'm assuming there's a way I can create a loop and do it programmatically.
USE [mydb]
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[DeleteModule]
@ModuleId = 3363
SELECT 'Return Value' = @return_value
GO

There are two ways to do this: write a script that generates the list of queries, which you then copy, paste and execute; or write a query that generates dynamic SQL and executes it.
The first option is easier - just write a simple select statement like this:
select CONCAT('exec [dbo].[DeleteModule] ', ModuleId)
from [Table with module IDs]
where ModuleId is not null
When you execute the script above, it will give you a list of statements, which you can copy, paste into a query window and execute. This is convenient for one-time or maintenance tasks.
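For example, if the table held the hypothetical ModuleIds 3363 and 3364, the output would be two ready-to-run statements along these lines:
exec [dbo].[DeleteModule] 3363
exec [dbo].[DeleteModule] 3364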
The other option is to generate these queries and execute them dynamically. The easiest way to do that is to use a cursor to loop through the data, generate the query and execute it with exec or sp_executesql:
declare @sql nvarchar(max), @ParamsDefinition nvarchar(max), @CurrentModuleId int
set @sql = N'exec [dbo].[DeleteModule] @ModuleId'
set @ParamsDefinition = N'@ModuleId int'
declare cModules cursor local fast_forward for
select ModuleId
from [Table with module IDs]
where ModuleId is not null
open cModules
fetch next from cModules into @CurrentModuleId
while @@FETCH_STATUS = 0
begin
exec sp_executesql @sql, @ParamsDefinition, @ModuleId = @CurrentModuleId
fetch next from cModules into @CurrentModuleId
end
close cModules
deallocate cModules
The body of the loop (between begin and end) is executed once for each row returned by the cursor, i.e. for each ModuleId. It runs the statement in @sql with the value currently stored in @CurrentModuleId.
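For a hypothetical @CurrentModuleId of 3363, each iteration is therefore equivalent to running:
exec [dbo].[DeleteModule] @ModuleId = 3363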

Value not Stored in Dynamic SQL

I have several tables that store data by category, and a log table where all transaction logs are recorded,
e.g. 1) VoucherNO, Add, ...
2) VoucherNO, Delete, ..
I back up the database and restore it on another server for reporting purposes. At that point I want to make sure every logged transaction is still available in TestDB; if it is not, I remove the corresponding log row from 'AUD_USER_ACTIVITY'.
To check whether a transaction exists, I build a dynamic SQL select statement and test whether the record is there.
Based on the @RecExist value I then act: if the record is not available in TestDB the log row is deleted; if the record exists I break out of the loop and move on to the next procedure.
But the @RecExist variable is never updated by the dynamic SQL execution. Please guide me.
declare @MvDocNo varchar(50)
DECLARE @SCtr as DECIMAL(10,0)
declare @LocationCode varchar(4)
declare @UName Nvarchar(40)
declare @toe varchar(30)
declare @QryTxt as nvarchar(MAX);
Declare @RecExist as INT =0;
SET @RecExist=0
WHILE @RecExist=0
BEGIN
select top 1 @MvDocNo=DOCNO, @SCtr=SrlNo, @LocationCode=DMLTYPE, @UName=TABLENAME
FROM R_AUDDB..AUD_USER_ACTIVITY
WHERE DBNAME='TestDB' and DMLTYPE not in ('AD','D','PD') ORDER BY SRLNO DESC;
select top 1 @toe=docno from TestDB..M_TYPEOFENTRY where TBLNAME=@UName;
set @QryTxt='Select @RecExist=1 From R_TestDB..'+@UName+ ' Where '+@toe+'='''+@MvDocNo+''''
exec (@QryTxt)
IF @RecExist=0
BEGIN
DELETE R_AUDDB..AUD_USER_ACTIVITY WHERE SRLNO=@SCtr
END
END
The following code sample demonstrates how to check for a row in a table with a specific column and value using dynamic SQL. You ought to be able to change the values of the first three variables to reference a table and column in your database for testing.
Note that SQL injection is still possible: there is no validation of the table or column names.
-- Define the table to check and the target column name and value.
declare @TableName as SysName = 'Things';
declare @ColumnName as SysName = 'ThingName';
declare @TestValue as NVarChar(32) = 'Beth';
-- Create a SQL statement to check for a row in the target table with the specified column name and value.
declare @SQL as NVarChar(1024);
declare @Result as Bit;
-- Note that only object names are substituted into the statement at this point and QuoteName() is used to reduce problems.
set @SQL = N'select @iResult = case when exists ( select 42 from dbo.' + QuoteName( @TableName ) +
N' where ' + QuoteName( @ColumnName ) + N' = @iTestValue ) then 1 else 0 end;'
select @SQL as SQL;
-- Execute the SQL statement.
-- Note that parameters are used for all values, i.e. the target value and return value.
execute sp_executesql @stmt = @SQL,
@params = N'@iTestValue NVarChar(32), @iResult Bit output',
@iTestValue = @TestValue, @iResult = @Result output
-- Display the result.
select @Result as Result;
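To tighten the injection note above, here is a minimal sketch (assuming the same @TableName and @ColumnName variables) that verifies the names actually exist in the current database before building the statement:
-- Hypothetical pre-check: abort unless the table/column pair exists.
if not exists ( select 1
from sys.columns as c
inner join sys.tables as t on t.object_id = c.object_id
where t.name = @TableName and c.name = @ColumnName )
begin
raiserror( 'Unknown table or column name.', 16, 1 );
return;
end;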

How do I fetch multiple columns for use in a cursor loop?

When I try to run the following SQL snippet inside a cursor loop,
set @cmd = N'exec sp_rename ' + @test + N',' +
RIGHT(@test,LEN(@test)-3) + '_Pct' + N',''COLUMN'''
I get the following message,
Msg 15248, Level 11, State 1, Procedure sp_rename, Line 213
Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong.
What is wrong and how do I fix it? I tried wrapping the column name in brackets [] and double quotes "" like some of the search results suggested.
Edit 1 -
Here is the entire script. How do I pass the table name to the rename SP? I'm not sure how to do that, since the column names come from one of many tables.
BEGIN TRANSACTION
declare @cnt int
declare @test nvarchar(128)
declare @cmd nvarchar(500)
declare Tests cursor for
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME LIKE 'pct%' AND TABLE_NAME LIKE 'TestData%'
open Tests
fetch next from Tests into @test
while @@fetch_status = 0
BEGIN
set @cmd = N'exec sp_rename ' + @test + N',' + RIGHT(@test,LEN(@test)-3) + '_Pct' + N', column'
print @cmd
EXEC sp_executesql @cmd
fetch next from Tests into @test
END
close Tests
deallocate Tests
ROLLBACK TRANSACTION
--COMMIT TRANSACTION
Edit 2 -
The script is designed to rename columns whose names match a pattern, in this case with a "pct" prefix. The columns occur in a variety of tables within the database. All table names are prefixed with "TestData".
Here is a slightly modified version. Changes are noted as code commentary.
BEGIN TRANSACTION
declare @cnt int
declare @test nvarchar(128)
-- variable to hold table name
declare @tableName nvarchar(255)
declare @cmd nvarchar(500)
-- local means the cursor name is private to this code
-- fast_forward enables some speed optimizations
declare Tests cursor local fast_forward for
SELECT COLUMN_NAME, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE 'pct%'
AND TABLE_NAME LIKE 'TestData%'
open Tests
-- Instead of fetching twice, I rather set up a no-exit loop
while 1 = 1
BEGIN
-- And then fetch
fetch next from Tests into @test, @tableName
-- And then, if no row is fetched, exit the loop
if @@fetch_status <> 0
begin
break
end
-- Quotename is needed if you ever use special characters
-- in table/column names. Spaces, reserved words etc.
-- Other changes add apostrophes at the right places.
set @cmd = N'exec sp_rename '''
+ quotename(@tableName)
+ '.'
+ quotename(@test)
+ N''','''
+ RIGHT(@test,LEN(@test)-3)
+ '_Pct'''
+ N', ''column'''
print @cmd
EXEC sp_executesql @cmd
END
close Tests
deallocate Tests
ROLLBACK TRANSACTION
--COMMIT TRANSACTION
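For a hypothetical column pctComplete in a table TestData1, the printed @cmd would look roughly like this:
exec sp_rename '[TestData1].[pctComplete]','Complete_Pct', 'column'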

How to get row count from EXEC() in a TSQL SPROC?

I have a TSQL sproc that builds a query and executes it as follows:
EXEC (@sqlTop + @sqlBody + @sqlBottom)
@sqlTop contains something like SELECT TOP(x) col1, col2, col3...
TOP(x) limits the rows returned, so later I want to know the actual number of rows in the table that match the query.
I then replace @sqlTop with something like:
EXEC ('SELECT @ActualNumberOfResults = COUNT(*) ' + @sqlBody)
I can see why this is not working, and why a "variable not declared" error occurs, but I think it adequately describes what I'm trying to accomplish.
Any ideas?
Use sp_executesql and an output parameter.
Example:
DECLARE @sqlBody VARCHAR(500), @TableCount INT, @SQL NVARCHAR(1000)
SELECT @sqlBody = 'from sysobjects'
SELECT @SQL = N'SELECT @TableCount = COUNT(*) ' + @sqlBody
EXEC sp_executesql @SQL, N'@TableCount INT OUTPUT', @TableCount OUTPUT
SELECT @TableCount
GO
You could instead have the dynamic query return the result as a row set, which you would then insert into a table variable (could be a temporary or ordinary table as well) using the INSERT ... EXEC syntax. Afterwards you can just read the saved value into a variable using SELECT @var = ...:
DECLARE @rowcount TABLE (Value int);
INSERT INTO @rowcount
EXEC('SELECT COUNT(*) ' + @sqlBody);
SELECT @ActualNumberOfResults = Value FROM @rowcount;
Late in the day, but I found this method much simpler:
-- test setup
DECLARE @sqlBody nvarchar(max) = N'SELECT MyField FROM dbo.MyTable WHERE MyOtherField = ''x''';
DECLARE @ActualNumberOfResults int;
-- the goods
EXEC sp_executesql @sqlBody;
SET @ActualNumberOfResults = @@ROWCOUNT;
SELECT @ActualNumberOfResults;
After executing your actual query, store the result of @@ROWCOUNT in a variable that you can use later:
EXEC sp_executesql N'SELECT TOP 10 * FROM ABX'
SET @TotRecord = @@ROWCOUNT
@TotRecord now holds the row count for later use.
Keep in mind that dynamic SQL has its own scope. Any variable declared/modified there will go out of scope after your EXEC or your sp_executesql.
Suggest writing to a temp table, which will be in scope both inside your dynamic SQL statement and outside it.
Perhaps put it in your sqlBottom:
CREATE TABLE ##tempCounter(MyNum int);
EXEC('DECLARE @ActualNumberOfResults int; SELECT @ActualNumberOfResults = COUNT(*) ' + @sqlBody +
'; INSERT INTO ##tempCounter(MyNum) VALUES(@ActualNumberOfResults);');
SELECT MyNum FROM ##tempCounter;
You can use an output variable with sp_executesql:
DECLARE @SQL NVARCHAR(MAX);
DECLARE @ParamDefinition NVARCHAR(100) = N'@ROW_SQL INT OUTPUT';
DECLARE @AFFECTED_ROWS INT;
SELECT @SQL = N'SELECT 1 UNION ALL SELECT 2';
SELECT @SQL += N'; SELECT @ROW_SQL = @@ROWCOUNT;';
EXEC sp_executesql @SQL, @ParamDefinition, @ROW_SQL = @AFFECTED_ROWS OUTPUT;
PRINT 'Number of affected rows: ' + CAST(@AFFECTED_ROWS AS VARCHAR(20));
Output:
Number of affected rows: 2
Thanks Jesus Fernandez!
The only problem with the answers that create temporary tables (either using "DECLARE @rowcount TABLE" or "CREATE TABLE ##tempCounter(MyNum int)") is that you have to read the affected records off disk into memory. If you're expecting a large number of records, this may take some time.
So if the result is likely to be large, the "use sp_executesql and an output parameter" solution is the more efficient answer. And it does appear to work.

How to handle an empty result set from an OpenQuery call to linked analysis server in dynamic SQL?

I have a number of stored procedures structured similarly to this:
DECLARE @sql NVARCHAR(MAX)
DECLARE @mdx NVARCHAR(MAX)
CREATE table #result
(
[col1] NVARCHAR(50),
[col2] INT,
[col3] INT
)
SET @mdx = '{some dynamic MDX}'
SET @sql = 'SELECT a.* FROM OpenQuery(LinkedAnalysisServer, ''' + @mdx + ''') AS a'
INSERT INTO #result
EXEC sp_executesql @sql
SELECT * FROM #result
This works quite well when results exist in the cube. However, when the OpenQuery results are empty, the INSERT fails with this error:
Column name or number of supplied values does not match table definition.
My question is, what is the best way to handle this scenario? I'm using the results in a static report file (.rdlc), so the explicit typing of the temp table is (I'm pretty sure) required.
Use TRY/CATCH in your stored procedure. You'll notice there is a specific error number for your problem, so check the error number and, if it matches, return an empty result set. As you already have the table defined, that will be easier.
Pseudocode looks something like this:
SET @mdx = '{some dynamic MDX}'
SET @sql = 'SELECT a.* FROM OpenQuery(LinkedAnalysisServer, ''' + @mdx + ''') AS a'
BEGIN TRY
INSERT INTO #result
EXEC sp_executesql @sql
END TRY
BEGIN CATCH
IF ERROR_NUMBER() <> 'The error number you are seeing'
BEGIN
RAISERROR('Something happened that was not an empty result set', 16, 1)
END
END CATCH
SELECT * FROM #result
You'll want to check for that particular error, so that you don't just return empty result sets if your SSAS server crashes for example.
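If you're not sure which error number to look for, a quick sketch along these lines (run once against an MDX query that returns nothing) will show it; only ERROR_NUMBER() and ERROR_MESSAGE() are used, the rest follows the pseudocode above:
BEGIN TRY
INSERT INTO #result
EXEC sp_executesql @sql
END TRY
BEGIN CATCH
-- Inspect these values once, then hard-code the number in the production check.
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
END CATCH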
There is another solution to this issue, similar to the accepted answer, which involves using an IF statement instead of TRY...CATCH.
http://www.triballabs.net/2011/11/overcoming-openquery-mdx-challenges/
IF (SELECT COUNT(*)
FROM OPENQUERY("SSAS1",
'SELECT [Measures].[Target Places] ON COLUMNS
FROM [ebs4BI_FactEnrolment]
WHERE [DimFundingYear].[Funding Year].&[17]')) > 0
EXEC sp_executesql N'SELECT CONVERT(varchar(20),
"[DimPAPSCourse].[Prog Area].[Prog Area].[MEMBER_CAPTION]")
as ProgArea,
convert(float, "[Measures].[Target Places]") as Target
FROM OPENQUERY("SSAS1",
''SELECT [Measures].[Target Places] ON COLUMNS,
[DimPAPSCourse].[Prog Area].[Prog Area] ON ROWS
FROM [ebs4BI_FactEnrolment]
WHERE [DimFundingYear].[Funding Year].&[17]'')'
ELSE
SELECT '' as ProgArea, 0 as Target
WHERE 1=0

Array-like access to variables in T-SQL

In my stored procedure I have multiple similar variables @V1, @V2 ... @V20 (let's say 20 of them), FETCHed from a record. How would I use dynamic SQL to make 20 calls to another stored procedure, using those variables as parameters?
Of course the @V[i] syntax is incorrect, but it expresses the intent:
fetch next from maincursor into @status, @V1, @V2, ...
while @i < 21
begin
-- ??? execute sp_executesql 'SecondSP', '@myParam int', @myParam = @V[i]
-- or
-- ??? execute SecondSP @V[i]
set @i = @i + 1
end
As others have said, set up a temporary table and insert the values that you need into it. Then "iterate" through it, executing the necessary SQL with those values. This allows you to handle anywhere from 0 to MANY values, so you don't have to set up a variable for each.
The following is a complete sample of how you may go about doing that without cursors.
SET NOCOUNT ON
DECLARE @dict TABLE (
id INT IDENTITY(1,1), -- a unique identity column for reference later
value VARCHAR(50), -- your parameter value to be passed into the procedure
executed BIT -- BIT to mark a record as being executed later
)
-- INSERT YOUR VALUES INTO @dict HERE
-- Set executed to 0 (so that the execution process will pick it up later)
-- This may be a SELECT statement from another table in your database to load the values into @dict
INSERT @dict
SELECT 'V1Value', 0 UNION ALL
SELECT 'V2Value', 0
DECLARE @currentid INT
DECLARE @currentvalue VARCHAR(50)
WHILE EXISTS(SELECT * FROM @dict WHERE executed = 0)
BEGIN
-- Get the next record to execute
SELECT TOP 1 @currentid = id
FROM @dict
WHERE executed = 0
-- Get the parameter value
SELECT @currentvalue = value
FROM @dict
WHERE id = @currentid
-- EXECUTE THE SQL HERE
--sp_executesql 'SecondSP', '@myParam int', @myParam =
PRINT 'SecondSP ' + '@myParam int ' + '@myParam = ' + @currentvalue
-- Mark record as having been executed
UPDATE d
SET executed = 1
FROM @dict d
WHERE id = @currentid
END
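Where the sample prints the command, a real call might look like this (a sketch assuming SecondSP takes a single int parameter, to which the varchar value is implicitly converted):
exec sp_executesql N'exec SecondSP @myParam', N'@myParam int', @myParam = @currentvalue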
Use a #TempTable
If you are on SQL Server 2005, you can create a #TempTable in the parent stored procedure, and it is available in the child stored procedure that it calls.
CREATE TABLE #TempTable
(col1 datatype
,col2 datatype
,col3 datatype
)
INSERT INTO #TempTable
(col1, col2, col3)
SELECT
col1, col2, col3
FROM ...
EXEC @ReturnCode = YourOtherProcedure
Within the other procedure, you have access to #TempTable to select from, delete from, etc.
Make that child procedure work on a set of data, not on one element at a time.
Remember: in SQL, loops suck performance away!
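A minimal sketch of that pattern (the child procedure name, the Modules table and the DELETE are illustrative assumptions, not part of the question):
-- Hypothetical child procedure: it can see the caller's #TempTable
-- and works on the whole set at once instead of row by row.
CREATE PROCEDURE dbo.YourOtherProcedure
AS
BEGIN
DELETE m
FROM dbo.Modules AS m
INNER JOIN #TempTable AS t ON t.ModuleId = m.ModuleId
END
GO
-- Parent: create and fill the temp table, then call the child.
CREATE TABLE #TempTable (ModuleId int)
INSERT INTO #TempTable (ModuleId)
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
DECLARE @ReturnCode int
EXEC @ReturnCode = dbo.YourOtherProcedure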
Why not just use a table variable instead, and then loop through the table getting each value?
Basically, treat each row in a single-column table as your array cell.
Just a thought. :)
This seems like an odd request - will you always have a fixed set of variables? What if the number changes from 20 to 21, and so on - are you constantly going to have to declare new variables?
Is it possible, instead of retrieving the values into separate variables, to return them each as individual rows and just loop through them in a cursor?
If not, and you have to use the individual variables as explained, here's one solution:
declare @V1 nvarchar(100)
set @V1 = 'hi'
declare @V2 nvarchar(100)
set @V2 = 'bye'
declare @V3 nvarchar(100)
set @V3 = 'test3'
declare @V4 nvarchar(100)
set @V4 = 'test4'
declare @V5 nvarchar(100)
set @V5 = 'end'
declare aCursor cursor for
select @V1
union select @V2 union select @V3
union select @V4 union select @V5
open aCursor
declare @V nvarchar(100)
fetch next from aCursor into @V
while @@FETCH_STATUS = 0
begin
exec TestParam @V
fetch next from aCursor into @V
end
close aCursor
deallocate aCursor
I don't really like this solution; it seems messy and unscalable. Also, as a side note, the way you phrased your question seems to be asking whether there are arrays in T-SQL. By default there aren't, although a quick search on Google can point you in the direction of workarounds if you absolutely need them.