When running code, SSMS 2012 writes (1 row(s) affected) into the Messages window for every row in my dataset. This isn't unexpected (see details below), but is there any way to suppress these messages while still getting important error messages?
I am executing code that uses a cursor to WHILE loop through a table and do some rather complex comparisons to prior records and manipulations, then collect the results in a #Temp table before writing it out to the database:
WHILE @@FETCH_STATUS = 0
BEGIN
    --do stuff here, then collect the results
    INSERT #Temptable(value)
    SELECT @value;
    FETCH NEXT FROM c INTO @value
END
SSMS 2012 writes (1 row(s) affected) into the messages window for every INSERT, which makes sense, but is annoying in this case, and since I'm on a lousy VPN where bandwidth is precious, the chattering back and forth has some impact.
Put SET NOCOUNT ON; at the beginning of your script, or wherever you want to begin suppressing the "x rows affected" messages. To resume seeing them later in the script (if desired), put in SET NOCOUNT OFF;
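For example, applied to the cursor loop in the question (same cursor c, variable @value, and #Temptable as above):
SET NOCOUNT ON; -- suppresses the "(1 row(s) affected)" message for each INSERT

WHILE @@FETCH_STATUS = 0
BEGIN
    --do stuff here, then collect the results
    INSERT #Temptable(value)
    SELECT @value;
    FETCH NEXT FROM c INTO @value
END

SET NOCOUNT OFF; -- optional: resume row-count messages afterwards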
I have a database that gets populated daily with incremental data, and then at the end of each month a full download of the month's data is loaded into the system. Our business wants each day's data loaded as it arrives, and then at the end of the month the daily data removed so that only the full month's data remains. I have written the query below; if you could help I'd appreciate it.
DECLARE @looper INT
DECLARE @totalindex int;
select name, (substring(name,17,8)) as Attempt, substring(name,17,4) as [year], substring(name,21,2) as [month], create_date
into #work_to_do_for
from sys.databases d
where name like 'Snapshot%' and
d.database_id >4 and
(substring(name,21,2) = DATEPART(m, DATEADD(m, -1, getdate()))) AND (substring(name,17,4) = DATEPART(yyyy, DATEADD(m, -1, getdate())))
order by d.create_date asc
SELECT @totalindex = COUNT(*) from #work_to_do_for
SET @looper = 1 -- reset and reuse counter
WHILE (@looper < @totalindex)
BEGIN
    set @looper = @looper + 1
END;
DROP TABLE #work_to_do_for;
I'd need to perform the purge on several tables.
Thanks in advance.
When I delete large numbers of records, I always do it in batches and off-hours so as not to use up resources during production processes. To accomplish this, you incorporate a loop and some testing to find the optimal number to delete at a time.
begin transaction del -- I always use transactions as a safeguard
declare @count int = 1
while @count > 0
begin
    delete top (100000) t
    from dbo.MyTable t -- JOIN if necessary
    -- WHERE if necessary
    set @count = @@ROWCOUNT
end
Run the DELETE manually (without the WHILE loop) once with 100000 in the TOP clause and see what your execution time is. Write it down. Run it again with 200000; check the time and write it down. Run it with 500000. What you're looking for is a trend in the execution time: as long as the time per 100000 rows keeps decreasing as you increase the batch size, keep increasing it. You might end up at 500k, but this method will help you find the optimal number to delete per batch. Then run it as a loop.
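One simple way to capture those timings (a sketch; dbo.MyTable is the same placeholder as above) is to wrap a single manual batch in time statistics:
SET STATISTICS TIME ON; -- elapsed and CPU time appear in the Messages tab

DELETE TOP (100000) t
FROM dbo.MyTable t -- JOIN if necessary
-- WHERE if necessary

SET STATISTICS TIME OFF;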
That being said, if you are literally deleting MILLIONS of records, it might make more sense to drop and recreate the table as long as you aren't going to interfere with other processes. If you needed to save some of the data, you could insert what you needed into a new table (eg MyTable_New), drop the original table (MyTable), and rename MyTable_New to MyTable.
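A rough sketch of that copy-and-rename approach (the table names and the filter are placeholders, and you would need to re-create any indexes, constraints, and permissions on the new table yourself):
SELECT *
INTO dbo.MyTable_New
FROM dbo.MyTable
WHERE KeepThisRow = 1 -- placeholder filter for the rows you want to keep

DROP TABLE dbo.MyTable

EXEC sp_rename 'dbo.MyTable_New', 'MyTable'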
The script you've posted iterating through with a while loop to delete the rows should be changed to a set-based operation if at all possible. Relational database engines excel at set-based operations like
Delete dbo.table WHERE yourcolumn = 5
as opposed to iterating through one at a time. Especially if it will be for "several million" rows as you indicated in the comments above.
@rwking, where are you putting the COMMIT for the transaction? I mean, are you keeping all of the eligible deletes in a single transaction and doing one final COMMIT?
I have a similar kind of requirement where I have to delete in batches and also track the total number of rows affected at the end.
My sample code is as follows:
DECLARE @count int
DECLARE @deletecount int
DECLARE @rows int -- rows deleted by the current batch
SET @count = 0
WHILE (1 = 1)
BEGIN
    BEGIN TRY
        BEGIN TRAN
        DELETE TOP (1000) FROM -- table and WHERE condition here
        SET @rows = @@ROWCOUNT -- capture it before later statements reset it
        SET @count = @count + @rows
        IF @rows = 0
        BEGIN
            COMMIT
            BREAK;
        END
        COMMIT
    END TRY
    BEGIN CATCH
        ROLLBACK;
    END CATCH
END
SET @deletecount = @count
The code above works fine, but how do I keep track of @deletecount if a ROLLBACK happens in one of the batches?
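One possible approach (a sketch only, not a tested answer): add each batch's row count to the running total only after its COMMIT has succeeded, so a batch that rolls back is never counted:
DELETE TOP (1000) FROM -- table and WHERE condition here
SET @rows = @@ROWCOUNT
IF @rows = 0
BEGIN
    COMMIT
    BREAK;
END
COMMIT
SET @count = @count + @rows -- only counted once this batch is committed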
I have big stored procedures that handle user actions.
They consist of multiple SELECT statements. These are filtered, most of the time returning only one row. The results of the SELECTs are copied into temp tables or otherwise evaluated.
Finally, a MERGE statement makes the needed changes in the database.
All of this is encapsulated in a transaction.
I have concurrent input from users, and the rows read by the SELECT statements should be locked to preserve data integrity.
How can I lock the selected Rows of all select statements, so that they aren't updated through other transactions while the current transaction is in process?
Does a table hint combination of ROWLOCK and HOLDLOCK work in a way that only the selected rows are locked, or are the whole tables locked because of the HOLDLOCK?
SELECT *
FROM dbo.Test
WITH (ROWLOCK, HOLDLOCK)
WHERE id = @testId
Can I instead use
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
right after the start of the transaction? Or does this lock the whole tables?
I am using SQL2008 R2, but would also be interested if things work differently in SQL2012.
PS: I just read about the table hints UPDLOCK and SERIALIZABLE. UPDLOCK seems to be a way to lock only the selected row, and it seems that UPDLOCK always takes locks, whereas ROWLOCK only specifies that any locks that are taken are row-level. I am still confused about the best way to solve this...
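For reference, a SELECT using those hints might look like this (just a sketch, reusing the dbo.Test table and @testId variable from above; the update lock on the matching row is held until the transaction ends):
SELECT *
FROM dbo.Test WITH (UPDLOCK, ROWLOCK)
WHERE id = @testId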
Changing the isolation level fixed the problem (and locked on row level):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Here is how I tested it.
I created a statement in a blank page of the SQL Management Studio:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
WAITFOR DELAY '0:00:05'
update dbo.MessageBody set [message] = 'message1'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
While executing this statement (which takes at least 5 seconds due to the delay), I ran the second query in another window:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
update dbo.MessageBody set [message] = 'message2'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
and I was rather surprised that it executed instantaneously. This was due to SQL Server's default transaction isolation level, READ COMMITTED (http://technet.microsoft.com/en-us/library/ms173763.aspx). Since the update in the first script happens after the delay, there are no uncommitted changes yet while the second script runs, so row 28 is read and updated.
Changing the isolation level to SERIALIZABLE prevented this, but it also prevented concurrency: both scripts were executed consecutively.
That was OK, since both scripts read and changed the same row (headerId = 28). After changing headerId to another value in the second script, the statements executed in parallel. So the locking under SERIALIZABLE seems to be at row level.
Adding the table hint
WITH (SERIALIZABLE)
in the first SELECT of the first statement also prevents further reads of the selected row.
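Applied to the test script above, that first SELECT would look like this:
select
*
into #message
from dbo.MessageBody with (SERIALIZABLE)
where MessageBody.headerId = 28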
I have a table DB.DATA_FEED that I update using a T-SQL procedure. Every minute, the procedure below is executed 100 times for different data.
ALTER PROCEDURE [DB].[UPDATE_DATA_FEED]
    @P_MARKET_DATE varchar(max),
    @P_CURR1 int,
    @P_CURR2 int,
    @P_PERIOD float(53),
    @P_MID float(53)
AS
BEGIN
    BEGIN TRY
        UPDATE DB.DATA_FEED
        SET
            MID = @P_MID,
            MARKET_DATE = convert(datetime, @P_MARKET_DATE, 103)
        WHERE
            cast(MARKET_DATE as date) =
                cast(convert(datetime, @P_MARKET_DATE, 103) as date) AND
            CURR1 = @P_CURR1 AND
            CURR2 = @P_CURR2 AND
            PERIOD = @P_PERIOD

        IF @@TRANCOUNT > 0
            COMMIT WORK
    END TRY
    BEGIN CATCH
        --error code
    END CATCH
END
When users use the application, they also read from this table, as per the SQL below. Potentially this SELECT can run thousands of times in one minute. (Question marks are replaced by the parser with appropriate dates/numbers.)
DECLARE @MYDATE AS DATE;
SET @MYDATE = '?'
SELECT *
FROM DB.DATA_FEED
WHERE MARKET_DATE >= @MYDATE AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
AND CURR1 = ?
AND CURR2 = ?
AND PERIOD = ?
ORDER BY PERIOD
I have sometimes, albeit rarely, encountered a database lock.
Using the script from http://sqlserverplanet.com/troubleshooting/blocking-processes-lead-blocker I saw it was SPID = 58. I then ran DECLARE @SPID INT; SET @SPID = 58; DBCC INPUTBUFFER(@SPID) to find the SQL script, which turned out to be my SELECT statement.
Is there something wrong with my SQL code? What can I do to prevent such locks from happening in the future?
Thanks
Writers block readers, so while someone is writing, the readers have to wait for the write to finish. There are two table hints you can try: one is NOLOCK, which reads uncommitted rows (dirty reads), and the other is READPAST, which skips locked rows and reads only committed data. In both cases the readers never block the table and therefore cannot deadlock with a writer.
Writers can block other writers, but if I understood correctly there is only one write per execution, so the reads will interleave with the writes, reducing the deadlocks.
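For illustration, the hint would be applied to the read query roughly like this (a sketch only; whether dirty reads or skipped rows are acceptable is a business decision):
SELECT *
FROM DB.DATA_FEED WITH (READPAST) -- or WITH (NOLOCK) to allow dirty reads
WHERE MARKET_DATE >= @MYDATE AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
AND CURR1 = ?
AND CURR2 = ?
AND PERIOD = ?
ORDER BY PERIOD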
Hope it helps.
This query works great in SQL Server 2005 and 2008. How would I write it in SQL Server 2000?
UPDATE TOP (10) myTable
SET myBooleanColumn = 1
OUTPUT inserted.*
Is there any way to do it besides running multiple queries?
To be honest, your query doesn't really make sense, and I have a hard time understanding your criteria for "great." Sure, it updates 10 rows, and doesn't give an error. But do you really not care which 10 rows it updates? Your current TOP without ORDER BY suggests that you want SQL Server to decide which rows to update (and that's exactly what it will do).
To accomplish this in SQL Server 2000 (without using a trigger), I think you would want to do something like this:
SET NOCOUNT ON;
SELECT TOP 10 key_column
INTO #foo
FROM dbo.myTable
ORDER BY some_logical_ordering_clause;
UPDATE dbo.MyTable
SET myBooleanColumn = 1
FROM #foo AS f
WHERE f.key_column = dbo.MyTable.key_column;
SELECT * FROM dbo.MyTable AS t
INNER JOIN #foo AS f
ON t.key_column = f.key_column;
If you want a simple query, then you can have this trigger:
CREATE TRIGGER dbo.upd_tr_myTable
ON dbo.myTable
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM inserted;
END
GO
Note that this trigger can't tell if you're doing your TOP 10 update or something else, so all users will get this resultset when they perform an update. Even if you filter on IF UPDATE(myBooleanColumn), other users may still update that column.
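For completeness, that filter would look something like this (a sketch of altering the trigger above; it still returns the resultset to anyone whose update touches myBooleanColumn, which is exactly the limitation just described):
ALTER TRIGGER dbo.upd_tr_myTable
ON dbo.myTable
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(myBooleanColumn)
        SELECT * FROM inserted;
END
GO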
In any case, you'll still want to fix your update statement so that you know which rows you're updating. (You may even consider a WHERE clause.)
I have a stored procedure that retrieves sensitive information from an SQL Server 2008 database. I would like to modify the procedure so that any time it is called, it records information about who called it in a separate table.
I thought something like the following would work:
declare @account varchar(255);
set @account = (SELECT SYSTEM_USER);
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(@account, getdate())
;
--Now fetch data
SELECT x,y,z from sensitive_info;
My issue is that the client application can call this stored procedure inside a transaction, read the sensitive information, and then never commit, so the INSERT never actually takes effect!
Is there some way to force the INSERT to happen before the SELECT?
I am using SQL Server 2008.
Thanks,
Carl
You only COMMIT if a transaction has been started.
So you can test for an open transaction first and disallow the read. This will ensure that no transaction is open to be rolled back. I've used XACT_STATE() here.
Using SET XACT_ABORT ON and TRY/CATCH as well means that the logging INSERT must succeed before the read happens. Any error at all on the INSERT will go to the CATCH block, so no read occurs and the logging failure can itself be logged.
So: this is your guarantee of "read only if logged".
Having an explicit transaction doesn't help: the INSERT is an atomic action anyway, and if the caller opens a transaction the log entry can be rolled back.
CREATE PROC getSecretStuff
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN TRY
IF XACT_STATE() <> 0
RAISERROR ('Call not allowed in an active transaction', 16, 1);
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(SYSTEM_USER, getdate());
--Now fetch data
SELECT x,y,z from sensitive_info;
END TRY
BEGIN CATCH
-- error handling etc
END CATCH
GO
Why not use the built-in auditing functionality?
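For example, a minimal sketch using SQL Server Audit (available in SQL Server 2008, though database audit specifications require Enterprise Edition there; the audit name, file path, and database name below are placeholders):
USE master;
GO
CREATE SERVER AUDIT SensitiveInfoAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\'); -- folder must already exist
GO
ALTER SERVER AUDIT SensitiveInfoAudit WITH (STATE = ON);
GO
USE YourDatabase;
GO
CREATE DATABASE AUDIT SPECIFICATION SensitiveInfoAuditSpec
FOR SERVER AUDIT SensitiveInfoAudit
ADD (EXECUTE ON OBJECT::dbo.getSecretStuff BY public) -- log every execution of the proc
WITH (STATE = ON);
GO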
Have you tried using explicit transactions and doing the SELECT after the COMMIT statement?
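A sketch of what that would look like inside the procedure (note, as mentioned above, that if the caller has its own outer transaction this COMMIT only decrements @@TRANCOUNT and the insert can still be rolled back):
BEGIN TRAN;
INSERT INTO AUDIT_LOG (ACCOUNT, TSTAMP)
VALUES (SYSTEM_USER, getdate());
COMMIT TRAN;

-- only fetch the data after the audit row has been committed
SELECT x, y, z from sensitive_info;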
Once you insert a record into a table you should be able to get the SCOPE_IDENTITY() of the last inserted value. Before doing SELECT x,y,z from sensitive_info; you can check that SCOPE_IDENTITY() > 0 and only then execute the SELECT statement.
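As a sketch (this assumes AUDIT_LOG has an IDENTITY column, which the question does not state):
INSERT INTO AUDIT_LOG (ACCOUNT, TSTAMP)
VALUES (SYSTEM_USER, getdate());

IF SCOPE_IDENTITY() > 0 -- an identity value was generated, so the audit row exists
    SELECT x, y, z from sensitive_info;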