I run a script which SELECTs data from several databases on the same server.
USE db1
SELECT x from tbl1
USE db2
SELECT y from tbl2
... etc.
If one of the databases is being restored from a backup, the script errors out on the USE statement. How can I handle these errors? TRY...CATCH doesn't work.
This is on 2008R2.
Edit: The error returned is:
Msg 927, Level 14, State 2, Line 4
Database 'db2' cannot be opened. It is in the middle of a restore.
You can obtain the current status of a database using DATABASEPROPERTYEX:
SELECT DATABASEPROPERTYEX('db_name', 'Status')
This returns the current status of your database; while a restore is in progress, the status is 'RESTORING'.
Before executing your USE, do the following:
DECLARE @Status SQL_VARIANT
SELECT @Status = DATABASEPROPERTYEX('db1', 'Status')
IF (@Status = 'ONLINE')
BEGIN
USE db1
-- Do stuff
END
ELSE
BEGIN
-- Do other stuff
END
I'm looking to get around the limitations described in "how to force a postgres function to not be in a transaction".
I'd like to execute two separate transactions within a single function and am wondering whether using dblink calls on loopback will solve this.
A dblink query uses its own connection, created explicitly by dblink_connect or implicitly when a connection info string is passed to dblink_exec. In terms of transactions, you can consider it independent of the transaction in which it runs.
Example setup:
create table test(id int, str text);
insert into test values(1, '');
Two transactions inside an outer one:
begin;
select dblink_connect('db1', 'dbname=test');
select dblink_connect('db2', 'dbname=test');
select dblink_exec('db1', 'begin');
select dblink_exec('db1', 'update test set str = ''db1'' where id = 1');
select dblink_exec('db1', 'commit'); -- UPDATE!
select dblink_exec('db2', 'begin');
select dblink_exec('db2', 'update test set str = ''db2'' where id = 1');
select dblink_exec('db2', 'rollback');
select dblink_disconnect('db1');
select dblink_disconnect('db2');
rollback;
Despite the last rollback, the update in transaction db1 was successful:
select *
from test;
id | str
----+-----
1 | db1
(1 row)
Note that while transactions are essentially independent, sessions are not. The commands in two internal sessions are executed one after the other in a linear manner, and no concurrency can be achieved. A possible conflict would create a deadlock.
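For illustration, here is a hedged sketch of such a conflict, reusing the connections and test table from the example above (don't run this anywhere you care about, since it simply hangs):
select dblink_exec('db1', 'begin');
select dblink_exec('db1', 'update test set str = ''db1'' where id = 1');
select dblink_exec('db2', 'begin');
-- the next call blocks forever: db2 waits for db1's row lock, while the
-- outer session is stuck inside dblink_exec and can never let db1 commit
select dblink_exec('db2', 'update test set str = ''db2'' where id = 1');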
Below is a simplified set of scripts that accurately reproduces an issue that exists in more complex scripts being written for Prod.
When simulation.bat is run before the sandbox database exists, it works fine: the database is created along with the one populated table and one view. Here is the terminal output from that run:
However, after the initial execution of the batch file, subsequent executions cause a database error message to surface even though set NOEXEC on; was used in the if block. It appears to choke on the view creation because the table doesn't exist. While it makes sense that the table doesn't exist, why is it trying to create the view at all when set NOEXEC on has been set? How can the logic be modified so that it does not try to create the view if the database already exists?
4 files -
simulation.bat
@echo off
sqlcmd -S TheServerName -E -d master -i .\Simulation.sql -v db_name=sandbox
pause
Simulation.sql
print 'database name $(db_name)';
--if database already exists then print a message and exit script, otherwise create database
if exists (select 1 from sys.databases where [name] = '$(db_name)')
begin
print '--- Database $(db_name) already exists - script execution aborted ---';
set NOEXEC on; --prevent execution of statements in this batch and batches that follow it
end;
if not exists (select 1 from sys.databases where [name] = '$(db_name)')
begin
create database $(db_name);
print '--- $(db_name) database created ---';
end;
go
use $(db_name);
go
:r .\Tally.sql
go
print 'Table creation is complete.';
go
:r .\vw_TallyRows.sql
go
print 'View creation is complete.';
go
set NOEXEC off; --allow execution of statements that follow
go
print 'Reached end of script.';
go
Tally.sql
create table Tally (n int not null primary key);
insert into Tally values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
vw_TallyRows.sql
create view vw_TallyRows as
select rows = count(1) from dbo.Tally;
At some point, it occurred to me that if the database existed (as indicated by the line printed to the terminal in the 2nd screenshot) then the dbo.Tally table should also exist. Based on that, it didn't make sense that an invalid object name error was appearing. The root cause: in Simulation.sql, use $(db_name); came after the block of code that checks whether the database exists. set NOEXEC on still compiles every batch that follows, and since the use $(db_name); statement was itself skipped, the view was compiled in the master database context, where dbo.Tally does not exist.
Therefore, the solution was to rearrange the order of the statements at the beginning of the Simulation.sql script and add another existence check:
print 'database name $(db_name)';
if not exists (select 1 from sys.databases where [name] = '$(db_name)')
begin
create database $(db_name);
print '--- $(db_name) database created ---';
end;
go
use $(db_name);
go
if exists (select 1 from sys.databases where [name] = '$(db_name)') and exists (select distinct 1 from sys.all_objects where is_ms_shipped = 0)
begin
print '--- Database $(db_name) already exists - script execution aborted ---';
set NOEXEC on; --prevent execution of statements in this batch and batches that follow it
end;
go
--no change to remaining logic in script
Terminal output when sandbox database does not exist yet:
Terminal output when sandbox database does exist:
I have a stored procedure that retrieves sensitive information from an SQL Server 2008 database. I would like to modify the procedure so that any time it is called, it records information about who called it in a separate table.
I thought something like the following would work:
declare @account varchar(255);
set @account = (SELECT SYSTEM_USER);
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(@account, getdate())
;
--Now fetch data
SELECT x,y,z from sensitive_info;
My issue is that the client application can issue a call to this stored procedure and get the sensitive information, but if it never commits the transaction, the INSERT never occurs!
Is there some way to force the INSERT to happen before the SELECT?
I am using SQL Server 2008.
Thanks,
Carl
You only COMMIT if a transaction has been started.
So you can test for an open transaction first and disallow the read; this ensures that no transaction is open to be rolled back. I've used XACT_STATE() here.
Using SET XACT_ABORT ON and TRY/CATCH as well means that the INSERT for logging must succeed before the read happens. Any error at all on the INSERT will go to the CATCH block, so there is no read, and the logging failure can itself be logged.
So: this is your guarantee of "read only if logged".
Having an explicit transaction doesn't help: the INSERT is an atomic action anyway. And if the caller opens a transaction, the log entry can be rolled back.
CREATE PROC getSecretStuff
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN TRY
IF XACT_STATE() <> 0
RAISERROR ('Call not allowed in an active transaction', 16, 1);
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(SYSTEM_USER, getdate());
--Now fetch data
SELECT x,y,z from sensitive_info;
END TRY
BEGIN CATCH
-- error handling etc
END CATCH
GO
Why not use the built-in auditing functionality?
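For reference, a minimal sketch of that approach (assuming SQL Server 2008 Enterprise, since database audit specifications require it; the audit names, database name, and file path are all hypothetical):
USE master;
GO
CREATE SERVER AUDIT SensitiveReadAudit
TO FILE (FILEPATH = 'C:\AuditLogs\'); -- hypothetical folder; must exist and be writable
GO
ALTER SERVER AUDIT SensitiveReadAudit WITH (STATE = ON);
GO
USE YourDb;
GO
CREATE DATABASE AUDIT SPECIFICATION SensitiveReadSpec
FOR SERVER AUDIT SensitiveReadAudit
ADD (EXECUTE ON OBJECT::dbo.getSecretStuff BY public)
WITH (STATE = ON);
GO
Every execution of the procedure then lands in the audit file, regardless of whether the caller's transaction commits.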
Have you tried using explicit transactions and doing the select after the commit statement?
Once you insert a record into a table, you should be able to get SCOPE_IDENTITY() for the last inserted value. Before doing SELECT x,y,z from sensitive_info; you can check whether SCOPE_IDENTITY() > 0 and only then execute the SELECT statement.
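A rough sketch of that idea (hedged: it assumes AUDIT_LOG has an IDENTITY column starting above zero, so SCOPE_IDENTITY() is only non-NULL after a successful insert in this scope):
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(SYSTEM_USER, getdate());
IF SCOPE_IDENTITY() > 0
SELECT x, y, z FROM sensitive_info;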
I want to see how many rows my delete query affects so I know it's correct.
Is this possible using pgadmin?
Start a transaction, delete and then rollback;
In psql :
test1=> begin;
BEGIN
test1=> delete from test1 where test1_id = 1;
DELETE 2
test1=> rollback;
ROLLBACK
In pgAdmin (in the "History" tab on the "Output pane"):
-- Executing query:
begin;
Query returned successfully with no result in 16 ms.
-- Executing query:
delete from test1 where test1_id = 1;
Query returned successfully: 2 rows affected, 16 ms execution time.
-- Executing query:
rollback;
Query returned successfully with no result in 16 ms.
I'm not sure how to automatically do this but you can always do a select then a delete.
SELECT COUNT(*) FROM foo WHERE delete_me=true;
DELETE FROM foo WHERE delete_me=true;
As Andrew said, when doing interactive administration, you can just replace DELETE by SELECT COUNT(*).
If you want this information in a program of yours (after executing the DELETE), many programming languages provide a construct for it. For example, in PHP it's pg_affected_rows and in .NET it's the return value of ExecuteNonQuery.
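If you are inside the database itself, PL/pgSQL offers the same facility via GET DIAGNOSTICS. A minimal sketch (the function name is made up; it reuses the test1 table from the psql example above):
create or replace function delete_and_count() returns bigint as $$
declare
n bigint;
begin
delete from test1 where test1_id = 1;
get diagnostics n = row_count; -- rows affected by the previous statement
return n;
end;
$$ language plpgsql;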
Use RETURNING and fetch the result like you would fetch a SELECT-result:
DELETE FROM test1 WHERE test1_id = 1 RETURNING test1_id;
This works as of version 8.2.
I'm currently in the process of detaching a development database on the production server. Since this is a production server, I don't want to restart the SQL service; that is the worst-case scenario.
Obviously I tried detaching it through SSMS. It told me there was an active connection, so I disconnected it. When detaching a second time, it told me that was impossible since the database was in use.
I tried EXEC sp_detach_db 'DB' with no luck.
I tried taking the database offline. That ran for about 15 minutes before I got bored and cancelled it.
Anyway, I tried everything... I made sure all connections were killed using the connections indicator in the detach database dialog in SSMS.
The following returned 0 results:
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('DB')
And the following has been running for 18 minutes now:
ALTER DATABASE DB SET OFFLINE WITH ROLLBACK IMMEDIATE
I did restart SSMS regularly during all this to make sure SSMS wasn't the culprit by locking something invisibly.
Isn't there a way to brute-force it? The database schema is something I'm pretty fond of, but the data is expendable.
Hopefully there is some sort of a quick fix? :)
The DBA will try to reset the process tonight but I'd like to know the fix for this just in case.
Thx!
ps: I'm using DTC ... so perhaps this might explain why my database got locked up all of a sudden?
edit:
I'm now doing the following, which results in the final part running indefinitely. The first query even returns 0 rows, so I suppose killing the users won't even matter.
USE [master]
GO
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('Database')
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[usp_KillUsers]
@p_DBName = 'Database'
SELECT 'Return Value' = @return_value
GO
ALTER DATABASE [Database] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
How are you connecting to SQL Server? Is it possible that you're trying to detach the database while you yourself are connected to it? This can block a Detach, depending on the version of SQL Server involved.
You can try using the DAC for stuff like this.
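For example, a DAC connection from the command line might look like this ("TheServerName" is a placeholder; the -A switch requests the dedicated administrator connection, of which only one can be active per server):
sqlcmd -S TheServerName -E -A
From that session you can query sys.sysprocesses and issue the ALTER DATABASE ... SET OFFLINE, which can often work even when regular connections are stuck.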
Try killing all connections before detaching the database, e.g.:
USE [master]
GO
/****** Object: StoredProcedure [dbo].[usp_KillUsers] Script Date: 08/18/2009 10:42:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_KillUsers]
@p_DBName SYSNAME = NULL
AS
/* Check Parameters */
/* Check for a DB name */
IF (@p_DBName IS NULL)
BEGIN
PRINT 'You must supply a DB Name'
RETURN
END -- DB is NULL
IF (@p_DBName = 'master')
BEGIN
PRINT 'You cannot run this process against the master database!'
RETURN
END -- Master supplied
IF (@p_DBName = DB_NAME())
BEGIN
PRINT 'You cannot run this process against your connection''s database!'
RETURN
END -- your database supplied
SET NOCOUNT ON
/* Declare Variables */
DECLARE @v_spid INT,
@v_SQL NVARCHAR(255)
/* Declare the Table Cursor (Identity) */
DECLARE c_Users CURSOR
FAST_FORWARD FOR
SELECT spid
FROM master..sysprocesses (NOLOCK)
WHERE db_name(dbid) LIKE @p_DBName
OPEN c_Users
FETCH NEXT FROM c_Users INTO @v_spid
WHILE (@@FETCH_STATUS <> -1)
BEGIN
IF (@@FETCH_STATUS <> -2)
BEGIN
SELECT @v_SQL = 'KILL ' + CONVERT(NVARCHAR, @v_spid)
-- PRINT @v_SQL
EXEC (@v_SQL)
END -- -2
FETCH NEXT FROM c_Users INTO @v_spid
END -- While
CLOSE c_Users
DEALLOCATE c_Users
This is a script to kill all user connections to a database; just pass the database name and it will close them. Then you can try to detach the database. This script is one I found a while back and I cannot claim it as my own. I do not mean this as any sort of plagiarism; I just don't have the source.
SELECT DISTINCT req_transactionUOW FROM syslockinfo
KILL '<req_transactionUOW value returned>' (for the row(s) whose req_spid is -2; a spid of -2 marks an orphaned DTC transaction)
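For example (the GUID below is purely illustrative; substitute the req_transactionUOW value your query actually returns):
KILL 'D5499C66-E398-45CA-BF7E-DC9C194B48CF' -- hypothetical UOW value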
The cause was DTC being a little bit annoying and locking up the database completely with a failed transaction. Now I would like to know the reason why this happened. But at least it gives me the ability to reset the broken transactions when the problem re-occurs.
I'm posting it here since I'm sure it'll help some people who are experiencing the same issues.