Is it possible to conditionally display the result of a query in the results window of SSMS (T-SQL)?
For example, display column-header and result of:
SELECT COUNT(1) AS ourCount FROM [ourDatabase].[dbo].[ourTable]
only if it is > 0
NOTE: We use SQL Server 2008 R2.
This is in the context of a larger system of queries with many results. I don't want to clutter the results if this particular query has a zero-value. Of course, the concept could be generalized for other situations.
So I am monitoring the query output, and one could think of the results as an 'alert' to myself (informally).
This pushes the result into a variable and then displays it only if it is greater than zero; you could also use PRINT, etc.
DECLARE @Count INT;
SELECT @Count = COUNT(1) FROM [ourDatabase].[dbo].[ourTable];
IF @Count > 0
BEGIN
    SELECT @Count AS ourCount;
END;
If the count is zero, you will see nothing but the row count in the Messages tab of SSMS. You can even stop this by adding:
SET NOCOUNT ON;
...at the top of your script, but remember to add:
SET NOCOUNT OFF;
...at the end.
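Putting it together, a minimal sketch of the whole pattern (the PRINT line is shown as an alternative to the result grid):
SET NOCOUNT ON;

DECLARE @Count INT;
SELECT @Count = COUNT(1) FROM [ourDatabase].[dbo].[ourTable];

IF @Count > 0
BEGIN
    -- The result grid only appears when there is something to report
    SELECT @Count AS ourCount;
    -- Or, as a message instead of a grid:
    -- PRINT 'ourTable has ' + CAST(@Count AS VARCHAR(20)) + ' rows';
END;

SET NOCOUNT OFF;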
I have two queries which split a comma-separated list into rows and insert the result into a table variable.
For the first query I used a custom user-defined function for splitting:
CREATE FUNCTION [dbo].[Split_S]
(
    @sInputList VARCHAR(MAX)
    ,@sDelimiter VARCHAR(8)
)
RETURNS @List TABLE ([item] VARCHAR(8000))
AS
BEGIN
    DECLARE @sItem VARCHAR(MAX)
    WHILE CHARINDEX(@sDelimiter, @sInputList, 0) <> 0
    BEGIN
        SELECT
            @sItem = RTRIM(LTRIM(SUBSTRING(@sInputList, 1, CHARINDEX(@sDelimiter, @sInputList, 0) - 1)))
            ,@sInputList = RTRIM(LTRIM(SUBSTRING(@sInputList, CHARINDEX(@sDelimiter, @sInputList, 0) + LEN(@sDelimiter), LEN(@sInputList))))
        IF LEN(@sItem) > 0
            INSERT INTO @List SELECT @sItem
    END
    IF LEN(@sInputList) > 0
        INSERT INTO @List SELECT @sInputList -- put the last item in
    RETURN
END
Query 1:
DECLARE @F TABLE(F BIGINT)
INSERT INTO @F
SELECT [item] FROM [dbo].[Split_S](N'82,13,51,68,6', ',')
Query 2:
DECLARE @F2 TABLE(F BIGINT)
INSERT INTO @F2
SELECT value
FROM STRING_SPLIT(N'82,13,51,68,6', ',')
Query plan of both queries:
Why does the Split_S query show 37% while the STRING_SPLIT query shows 63%? But if I compare only the SELECT statements, the query cost of STRING_SPLIT is 1%.
Which query has better performance, and why?
If you check only the part of the query that includes the SELECT, the execution plan (EP) shows that STRING_SPLIT gives much better performance: the result will be 99% vs 1%.
But when we use the data returned by the STRING_SPLIT function (for example "SELECT ... INTO", or, as in your case, "INSERT ... SELECT"), you might notice that the server uses a "Table Spool (Eager Spool)" operator, which makes the difference. This operator takes the rows and stores them in a hidden temporary object in the tempdb database (the idea behind this logic is that the spooled data can be reused later in the execution plan). The "eager" spool takes ALL rows from the previous operator at one time, which means it is a "blocking operator".
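Plan cost percentages are only estimates, so it is worth checking actual runtime numbers as well. A minimal sketch, reusing the two queries above:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @F TABLE(F BIGINT);
INSERT INTO @F
SELECT [item] FROM [dbo].[Split_S](N'82,13,51,68,6', ',');

DECLARE @F2 TABLE(F BIGINT);
INSERT INTO @F2
SELECT value FROM STRING_SPLIT(N'82,13,51,68,6', ',');

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- Compare the elapsed times and logical reads reported in the Messages tab
-- instead of relying on the estimated cost percentages.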
I have a database that gets populated daily with incremental data, and then at the end of each month a full download of the month's data is put into the system. Our business wants each day put into the system, and then at the end of the month the daily data removed so that only the full month's data is left. I have written the query below; if you could help, I'd appreciate it.
DECLARE @looper INT
DECLARE @totalindex INT;

select name, substring(name,17,8) as Attempt, substring(name,17,4) as [year], substring(name,21,2) as [month], create_date
into #work_to_do_for
from sys.databases d
where name like 'Snapshot%'
  and d.database_id > 4
  and substring(name,21,2) = DATEPART(m, DATEADD(m, -1, getdate()))
  and substring(name,17,4) = DATEPART(yyyy, DATEADD(m, -1, getdate()))
order by d.create_date asc

SELECT @totalindex = COUNT(*) from #work_to_do_for

SET @looper = 1 -- reset and reuse counter
WHILE (@looper < @totalindex)
BEGIN
    -- purge logic for each daily snapshot would go here
    set @looper = @looper + 1
END;

DROP TABLE #work_to_do_for;
I'd need to perform the purge on several tables.
Thanks in advance.
When I delete large numbers of records, I always do it in batches and off-hours so as not to use up resources during production processes. To accomplish this, you incorporate a loop and some testing to find the optimal number to delete at a time.
begin transaction del -- I always use transactions as a safeguard

declare @count int = 1
while @count > 0
begin
    delete top (100000) t
    from dbo.MyTable t -- JOIN if necessary
    -- WHERE if necessary
    set @count = @@ROWCOUNT
end
Run this manually (without the WHILE loop) once with 100000 in the TOP clause and see what your execution time is. Write it down. Run it again with 200000; check the time and write it down. Run it with 500000. What you're looking for is a trend in the execution time: as long as the time per 100000 records deleted keeps decreasing as you increase the batch size, keep increasing it. You might end at 500k, but this method will help you find the optimal number to delete per batch. Then, run it as a loop.
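A rough way to capture each timing, as a sketch (SYSDATETIME assumes SQL Server 2008 or later; the table and filter are the same placeholders as above):
DECLARE @t0 DATETIME2 = SYSDATETIME();

DELETE TOP (100000) t
FROM dbo.MyTable t -- JOIN if necessary
-- WHERE if necessary

-- @@ROWCOUNT here still reflects the DELETE above
SELECT @@ROWCOUNT AS rows_deleted,
       DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS elapsed_ms;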
That being said, if you are literally deleting MILLIONS of records, it might make more sense to drop and recreate the table, as long as you aren't going to interfere with other processes. If you needed to save some of the data, you could insert what you needed into a new table (e.g. MyTable_New), drop the original table (MyTable), and rename MyTable_New to MyTable.
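A sketch of that swap; MyTable_New and the keep-filter are illustrative:
-- Keep only the rows you need
SELECT *
INTO dbo.MyTable_New
FROM dbo.MyTable
WHERE SomeKeepCondition = 1; -- hypothetical filter

DROP TABLE dbo.MyTable;

-- Rename the new table into place
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

-- NOTE: indexes, constraints, triggers, and permissions do not follow
-- SELECT ... INTO; recreate them on the renamed table.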
The script you've posted iterating through with a while loop to delete the rows should be changed to a set-based operation if at all possible. Relational database engines excel at set-based operations like
Delete dbo.table WHERE yourcolumn = 5
as opposed to iterating through one at a time. Especially if it will be for "several million" rows as you indicated in the comments above.
@rwking where are you putting the COMMIT for the transaction? I mean, are you keeping all eligible deletes in a single transaction and doing one final COMMIT?
I have a similar type of requirement where I have to delete in batches and also track the total number of rows affected at the end.
My sample code is as follows:
Declare @count int
Declare @rc int
Declare @deletecount int
set @count = 0

While (1=1)
BEGIN
    BEGIN TRY
        BEGIN TRAN
        DELETE TOP (1000) FROM -- CONDITION
        SET @rc = @@ROWCOUNT -- capture immediately; a later statement would change @@ROWCOUNT
        SET @count = @count + @rc
        IF @rc = 0
        BEGIN
            COMMIT
            BREAK
        END
        COMMIT
    END TRY
    BEGIN CATCH
        ROLLBACK;
    END CATCH
END
set @deletecount = @count
The above code works fine, but how do I keep track of @deletecount if a rollback happens in one of the batches?
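One way to keep the total honest, as a sketch (dbo.MyTable stands in for your table): add the batch's row count to the running total only after the COMMIT succeeds, so a rolled-back batch never contributes.
DECLARE @count INT = 0, @rc INT;

WHILE (1 = 1)
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        DELETE TOP (1000) FROM dbo.MyTable; -- WHERE condition as needed
        SET @rc = @@ROWCOUNT; -- capture before anything else changes it
        COMMIT;
        SET @count = @count + @rc; -- counted only after a successful commit
        IF @rc = 0 BREAK;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        -- @count is left untouched for the failed batch;
        -- consider logging or breaking here to avoid retrying a persistent failure
    END CATCH
END

SELECT @count AS total_committed_deletes;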
This query works great in SQL Server 2005 and 2008. How would I write it in SQL Server 2000?
UPDATE TOP (10) myTable
SET myBooleanColumn = 1
OUTPUT inserted.*
Is there any way to do it besides running multiple queries?
To be honest, your query doesn't really make sense, and I have a hard time understanding your criteria for "great." Sure, it updates 10 rows, and doesn't give an error. But do you really not care which 10 rows it updates? Your current TOP without ORDER BY suggests that you want SQL Server to decide which rows to update (and that's exactly what it will do).
To accomplish this in SQL Server 2000 (without using a trigger), I think you would want to do something like this:
SET NOCOUNT ON;
SELECT TOP 10 key_column
INTO #foo
FROM dbo.myTable
ORDER BY some_logical_ordering_clause;
UPDATE dbo.MyTable
SET myBooleanColumn = 1
FROM #foo AS f
WHERE f.key_column = dbo.MyTable.key_column;
SELECT * FROM dbo.MyTable AS t
INNER JOIN #foo AS f
ON t.key_column = f.key_column;
If you want a simple query, then you can have this trigger:
CREATE TRIGGER dbo.upd_tr_myTable
ON dbo.myTable
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM inserted;
END
GO
Note that this trigger can't tell if you're doing your TOP 10 update or something else, so all users will get this resultset when they perform an update. Even if you filter on IF UPDATE(myBooleanColumn), other users may still update that column.
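For illustration, that filter would look something like this in the trigger body (a sketch; it narrows when the resultset is returned, not who receives it):
ALTER TRIGGER dbo.upd_tr_myTable
ON dbo.myTable
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Return rows only when myBooleanColumn was named in the UPDATE statement;
    -- any user updating that column still gets the resultset.
    IF UPDATE(myBooleanColumn)
        SELECT * FROM inserted;
END
GO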
In any case, you'll still want to fix your update statement so that you know which rows you're updating. (You may even consider a WHERE clause.)
I have a T-SQL sproc that does three loops in order to find relevant data. If the first loop returns no results, then the second one normally does. I append another table that has multiple values that I can use later on.
So at most I should only have two tables returned in the dataset from the sproc.
The issue is that if the first loop is blank, I then end up with three data tables in my data set.
In my C# code, I can remove this empty table, but would rather not have it returned at all from the sproc.
Is there a way to remove the empty table from within the sproc, given the following:
EXEC (@sqlTop + @sqlBody + @sqlBottom)
SET @NumberOfResultsReturned = @@ROWCOUNT;
...
IF @NumberOfResultsReturned = 0
BEGIN
    SET @searchLoopCount = @searchLoopCount + 1
END
ELSE
BEGIN
    -- we have data, so no need to run again
    BREAK
END
The process goes as follows: On the first loop there could be no results. Thus the rowcount will be zero because the EXEC executes a dynamically created SQL query. That's one table.
In the next iteration, results are returned, making that two data tables in the dataset output, plus my third one added on the end.
I didn't want to do a COUNT(*) and then run the query only if the count is > 0, as I want to minimize the number of queries.
Thanks.
You can put the result from your SP in a table variable and then check whether the table variable has any data in it.
Something like this, with the dynamic SQL standing in for an SP named GetData that returns one integer column.
declare @T table(ID int)
declare @SQL varchar(25)

-- Create dynamic SQL
set @SQL = 'select 1'

-- Insert result from @SQL to @T
insert into @T
exec (@SQL)

-- Check for data
if not exists(select * from @T)
begin
    -- No data, continue loop
    set @searchLoopCount = @searchLoopCount + 1
end
else
begin
    -- Have data, so we need to query the data
    select *
    from @T

    -- Terminate loop
    break
end
Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human-readable: it's an alphanumeric code.
What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values, and then update the result set with the newly deciphered values:
-- @lookup_code and @xref_code are assumed to be declared earlier in the procedure
declare c_lookup_codes cursor for
select distinct lookup_code
from #workinprogress

open c_lookup_codes

while (1=1)
begin
    fetch c_lookup_codes into @lookup_code
    if @@sqlstatus <> 0
    begin
        break
    end
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    update #workinprogress
    set xref = @xref_code
    where lookup_code = @lookup_code
end

close c_lookup_codes
deallocate cursor c_lookup_codes
Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing?
_NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows, that there are 100 distinct values of lookup_code, and finally, that it is not possible to have a table with the xref values in it, as the logic in proc_code_xref is too arcane._
You have to have an xref table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table.
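A sketch of that one-off build, assuming proc_code_xref's two-parameter signature from the question; the permanent table name xref_lookup and the @xref_code type are illustrative:
declare @lookup_code char(8)
declare @xref_code varchar(40) -- type assumed

select distinct lookup_code
into #codes
from #workinprogress

while 1=1
begin
    select @lookup_code = lookup_code from #codes
    if @@rowcount = 0 break
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    insert into xref_lookup (lookup_code, xref) -- hypothetical permanent table
    values (@lookup_code, @xref_code)
    delete #codes where lookup_code = @lookup_code
end

-- From then on the cursor becomes a single set-based update:
update #workinprogress
set xref = x.xref
from #workinprogress, xref_lookup x
where #workinprogress.lookup_code = x.lookup_code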
Unless you are willing to duplicate the code in the xref proc, there is no way to avoid using a cursor.
They say that if you must use a cursor, then you must have done something wrong ;-) Here's a solution without a cursor:
declare @lookup_code char(8)
declare @xref_code varchar(40) -- match proc_code_xref's OUTPUT parameter (type assumed)

select distinct lookup_code
into #lookup_codes
from #workinprogress

while 1=1
begin
    select @lookup_code = lookup_code from #lookup_codes
    if @@rowcount = 0 break
    exec proc_code_xref @lookup_code, @xref_code OUTPUT
    update #workinprogress -- apply the xref, as in the original loop
    set xref = @xref_code
    where lookup_code = @lookup_code
    delete #lookup_codes
    where lookup_code = @lookup_code
end