How to flush a buffer in an ADS stored procedure? - stored-functions

I've inherited a stored procedure that works perfectly when we are running single-user, but when more than one user is running the procedure, all hell breaks loose. I know this is poor structure, but like I said, I inherited it and have to live with it.
Is there any way to flush the buffer after the set line? I just want to make sure the generated number is always unique.
begin transaction;
try
  -- increment the shared counter
  update s
  set s.NEXT_YMMPC = s.NEXT_YMMPC + 1
  from system s;
  -- read the counter back; another session may have incremented it in between
  -- (this update/select gap is the race the question describes)
  @YMMPC_Key = (select top 1 cast(s.NEXT_YMMPC as sql_char(10)) from system s);
  @YMMPC_Key = right('0000000' + trim(@YMMPC_Key), 10);
  insert into YMMProductCatalog (Year, MakeCode, ModelCode, ProductCode, CatalogCode, YMMPC_Key)
  values (@Year, @MakeCode, @ModelCode, @ProductCode, @CatalogCode, @YMMPC_Key);
  commit work;
  -- return the generated key to the caller
  insert into __output select @YMMPC_Key from system.iota;
catch all
  rollback work;
  raise;
end;
TIA - Jay

Related

How to prevent or avoid running update and delete statements without where clauses in PostgreSQL

How to prevent or avoid running update or delete statements without where clauses in PostgreSQL?
I need the equivalent of MySQL's SQL_SAFE_UPDATES setting for PostgreSQL.
For example:
UPDATE table_name SET active=1; -- Prevent this statement or throw error message.
UPDATE table_name SET active=1 WHERE id=1; -- This is allowed
My company's database has many users with INSERT and UPDATE privileges, and any one of them could run such an unsafe update.
How can this scenario be handled? Is there any way, for example a trigger or an extension, to catch unsafe updates in PostgreSQL?
I have switched off autocommit to avoid these errors, so I always have a transaction that I can roll back. All you have to do is modify .psqlrc:
\set AUTOCOMMIT off
\echo AUTOCOMMIT = :AUTOCOMMIT
\set PROMPT1 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT2 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT3 '>> '
You don't have to insert the PROMPT statements. But they are helpful because they change the psql prompt to show the transaction status.
Another advantage of this approach is that it gives you a chance to prevent any erroneous changes.
Example (psql):
database=# SELECT * FROM my_table; -- implicit start transaction; see prompt
-- output result
database*# UPDATE my_table SET my_column = 1; -- missed where clause
UPDATE 525125 -- Oh, no!
database*# ROLLBACK; -- Puh! revert wrong changes
ROLLBACK
database=# -- I'm completely operational and all of my circuits working perfectly
There actually was a discussion on the hackers list about this very feature. It had a mixed reception, but might have been accepted if the author had persisted.
As it is, the best you can do is a statement-level trigger that bleats if you modify too many rows:
CREATE TABLE deleteme
AS SELECT i FROM generate_series(1, 1000) AS i;
CREATE FUNCTION stop_mass_deletes() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   IF (SELECT count(*) FROM OLD) > TG_ARGV[0]::bigint THEN
      RAISE EXCEPTION 'must not modify more than % rows', TG_ARGV[0];
   END IF;
   RETURN NULL;
END;$$;

CREATE TRIGGER stop_mass_deletes AFTER DELETE ON deleteme
   REFERENCING OLD TABLE AS old FOR EACH STATEMENT
   EXECUTE FUNCTION stop_mass_deletes(10);
DELETE FROM deleteme WHERE i < 100;
ERROR: must not modify more than 10 rows
CONTEXT: PL/pgSQL function stop_mass_deletes() line 1 at RAISE
DELETE FROM deleteme WHERE i < 10;
DELETE 9
This will have a certain performance impact on deletes.
This works from v10 on, when transition tables were introduced.
If you can afford to make it a little less convenient for your users, you might try revoking the UPDATE privilege from all "standard" users and creating a stored procedure like this:
CREATE FUNCTION update(table_name, col_name, new_value, condition) RETURNS void
/*
Check if condition is acceptable, create and run UPDATE statement
*/
LANGUAGE plpgsql SECURITY DEFINER
Because of SECURITY DEFINER, your users will be able to UPDATE this way despite not having the UPDATE privilege.
I'm not sure if this is a good approach, but this way you can enforce UPDATE (or anything else) requirements as strict as you wish.
Of course, the more complicated the required UPDATEs are, the more complicated your procedure has to be; but if this is mostly about updating a single row by ID (as in your example), this might be worth a try.
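A minimal sketch of such a wrapper, assuming a hypothetical accounts table where only single-row updates by primary key are allowed (the function name, table, and column whitelist are illustrative, not from the question):
CREATE FUNCTION safe_update(p_id bigint, p_col text, p_value text)
RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS
$$BEGIN
    -- whitelist the updatable columns; %I below quotes the identifier safely
    IF p_col NOT IN ('active', 'name') THEN
        RAISE EXCEPTION 'column % may not be updated', p_col;
    END IF;
    -- the WHERE clause is hard-coded, so a mass update is impossible
    EXECUTE format('UPDATE accounts SET %I = $1 WHERE id = $2', p_col)
        USING p_value, p_id;
END;$$;

REVOKE UPDATE ON accounts FROM PUBLIC;
GRANT EXECUTE ON FUNCTION safe_update(bigint, text, text) TO PUBLIC;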

Postgresql insert stopped working, duplicate key value violations

About 8 months ago I used a suggestion to set up a holding table, then push to the formal table and prevent duplicate entries, per this post: Best way to prevent duplicate data on copy csv postgresql
It's been working very nicely, but today I noticed some errors and gaps in the data.
Here's my insert statement:
And here's how the index is set up:
And here's an example of the error I'm getting, although on the next cron insert, it went through.
Here's it going through fine:
I haven't noticed any large changes in the incoming data. Here's what the data that's coming in now looks like:
In summary, I've noticed recent oddities with the insert statements, and success is erratic, resulting in large data gaps in the database. Thanks for any help, and I'm happy to provide more details, but I wanted to see if my information sounds like something someone else has already dealt with.
Thanks very much for any help,
S
As Gordon pointed out in his answer to your previous question, this approach only works if you have exclusive access to the table. There is a delay between the existence check and the insert itself, and if another process modifies the table during this window, you may end up with duplicates.
If you're on Postgres 9.5+, the best approach is to skip the existence check and simply use an INSERT ... ON CONFLICT DO NOTHING statement.
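For example, assuming the same holding and ltg_data tables used in the loop below, the whole import collapses to a single statement:
INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
SELECT pulsecount, intensity, time, lon, lat, ltg_geom
FROM holding
ON CONFLICT DO NOTHING;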
On earlier versions, the simplest solution (if you can afford to do so) would be to lock the table for the duration of the import. Otherwise, you can emulate ON CONFLICT DO NOTHING (albeit less efficiently) using a loop and an exception handler:
DO $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN (SELECT * FROM holding) LOOP
        BEGIN
            INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
            VALUES (r.pulsecount, r.intensity, r.time, r.lon, r.lat, r.ltg_geom);
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- duplicate row: skip it and carry on
        END;
    END LOOP;
END
$$;
As an aside, DELETE FROM holding is time-consuming and probably unnecessary. Create your staging table as TEMP, and it will be cleaned up automatically at the end of your session. You can easily build a table which matches the structure of your import target:
CREATE TEMP TABLE holding (LIKE ltg_data);

Conflict between UPDATE and SELECT

I have a table DB.DATA_FEED that I update using a T-SQL procedure. Every minute, the procedure below is executed 100 times for different data.
ALTER PROCEDURE [DB].[UPDATE_DATA_FEED]
    @P_MARKET_DATE varchar(max),
    @P_CURR1 int,
    @P_CURR2 int,
    @P_PERIOD float(53),
    @P_MID float(53)
AS
BEGIN
    BEGIN TRY
        UPDATE DB.DATA_FEED
        SET
            MID = @P_MID,
            MARKET_DATE = convert(datetime, @P_MARKET_DATE, 103)
        WHERE
            cast(MARKET_DATE as date) =
                cast(convert(datetime, @P_MARKET_DATE, 103) as date) AND
            CURR1 = @P_CURR1 AND
            CURR2 = @P_CURR2 AND
            PERIOD = @P_PERIOD
        IF @@TRANCOUNT > 0
            COMMIT WORK
    END TRY
    BEGIN CATCH
        --error code
    END CATCH
END
When users use the application, they also read from this table, as per the SQL below. Potentially this select can run thousands of times in one minute. (Question marks are replaced by the parser with appropriate dates/numbers.)
DECLARE @MYDATE AS DATE;
SET @MYDATE = '?'
SELECT *
FROM DB.DATA_FEED
WHERE MARKET_DATE >= @MYDATE AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
  AND CURR1 = ?
  AND CURR2 = ?
  AND PERIOD = ?
ORDER BY PERIOD
I have sometimes, albeit rarely, got a database lock.
Using the script from http://sqlserverplanet.com/troubleshooting/blocking-processes-lead-blocker I saw it was SPID=58. I then ran DECLARE @SPID INT; SET @SPID = 58; DBCC INPUTBUFFER(@SPID) to find the SQL script, which turned out to be my select statement.
Is there something wrong with my SQL code? What can I do to prevent such locks happening in the future?
Thanks
Under the default isolation level, writers block readers: while someone is writing, readers have to wait for the write to finish. There are two table hints you can try: NOLOCK, which reads uncommitted rows (dirty reads), and READPAST, which skips rows that are currently locked and returns only committed data. In both cases the readers never lock the table and therefore cannot deadlock a writer.
Writers can still block other writers but, if I understood correctly, you only perform one write per execution, so the reads will interleave with the writes, diminishing the deadlocks.
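A minimal sketch of the READPAST variant, applied to the select from the question (NOLOCK would go in the same place; which of the two is acceptable depends on whether dirty reads or skipped rows hurt you more):
DECLARE @MYDATE AS DATE;
SET @MYDATE = '?';

SELECT *
FROM DB.DATA_FEED WITH (READPAST)  -- skip rows the writer currently has locked
WHERE MARKET_DATE >= @MYDATE AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
  AND CURR1 = ?
  AND CURR2 = ?
  AND PERIOD = ?
ORDER BY PERIOD;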
Hope it helps.

Guarantee T-SQL Stored Procedure INSERTs before SELECT?

I have a stored procedure that retrieves sensitive information from an SQL Server 2008 database. I would like to modify the procedure so that any time it is called, it records information about who called it in a separate table.
I thought something like the following would work:
declare @account varchar(255);
set @account = (SELECT SYSTEM_USER);
INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
VALUES(@account, getdate());
--Now fetch data
SELECT x,y,z from sensitive_info;
My issue is that the client application can issue a call to this stored procedure and get the sensitive information, but never commit the transaction, so the INSERT never occurs!
Is there some way to force the INSERT to happen before the SELECT?
I am using SQL Server 2008.
Thanks,
Carl
You only COMMIT if a transaction has been started.
So you can test for an open transaction first and disallow the read. This ensures that no transaction is open that could be rolled back. I've used XACT_STATE() here.
Using SET XACT_ABORT ON and TRY/CATCH as well means that the logging INSERT must succeed before the read happens. Any error at all on the INSERT goes to the CATCH block, so there is no read, and the logging failure can itself be logged.
So: this is your guarantee of "read only if logged".
Having an explicit transaction doesn't help: the INSERT is an atomic action anyway. And if the caller opens a transaction, the log entry can be rolled back.
CREATE PROC getSecretStuff
AS
SET NOCOUNT, XACT_ABORT ON;

BEGIN TRY
    IF XACT_STATE() <> 0
        RAISERROR ('Call not allowed in an active transaction', 16, 1);

    INSERT into AUDIT_LOG(ACCOUNT, TSTAMP)
    VALUES (SYSTEM_USER, getdate());

    --Now fetch data
    SELECT x, y, z from sensitive_info;
END TRY
BEGIN CATCH
    -- error handling etc
END CATCH
GO
Why not use the built-in auditing functionality?
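A minimal sketch of that route (SQL Server Audit; note that database audit specifications require Enterprise edition on SQL Server 2008, and the names and file path here are illustrative):
USE master;
CREATE SERVER AUDIT SensitiveAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT SensitiveAudit WITH (STATE = ON);

USE YourDb;
CREATE DATABASE AUDIT SPECIFICATION SensitiveInfoAccess
    FOR SERVER AUDIT SensitiveAudit
    ADD (SELECT ON dbo.sensitive_info BY public)
    WITH (STATE = ON);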
Have you tried using explicit transactions and doing the select after the commit statement?
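A minimal sketch of that suggestion (note the caveat from the answer above: if the caller has already opened an outer transaction, the log entry can still be rolled back):
BEGIN TRANSACTION;
INSERT INTO AUDIT_LOG (ACCOUNT, TSTAMP)
VALUES (SYSTEM_USER, getdate());
COMMIT TRANSACTION;

-- the read happens only after the log entry is committed
SELECT x, y, z FROM sensitive_info;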
Once you insert a record into a table, you should be able to get the SCOPE_IDENTITY() of the last inserted value. Before running SELECT x,y,z from sensitive_info; you can check whether SCOPE_IDENTITY() > 0 and only then execute the SELECT statement.
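A minimal sketch of that check, assuming AUDIT_LOG has an identity column (otherwise SCOPE_IDENTITY() stays NULL and the read is skipped):
INSERT INTO AUDIT_LOG (ACCOUNT, TSTAMP)
VALUES (SYSTEM_USER, getdate());

-- SCOPE_IDENTITY() is non-NULL only if the insert above assigned an identity value
IF SCOPE_IDENTITY() > 0
    SELECT x, y, z FROM sensitive_info;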

detach database/take offline fails

I'm currently in the process of detaching a development database on the production server. Since this is a production server, I don't want to restart the SQL Server service. That would be the worst-case scenario.
Obviously I tried detaching it through SSMS first. It told me there was an active connection, so I disconnected it. When I tried detaching a second time, it told me that was impossible since the database was in use.
I tried EXEC sp_detach_db 'DB' with no luck.
I tried taking the database offline. That ran for about 15 minutes, at which point I got bored and cancelled it.
Anyway, I tried everything ... I made sure all connections were killed using the connections indicator in SSMS's Detach Database dialog.
The following returned 0 results:
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('DB')
And the following is running for 18 minutes now:
ALTER DATABASE DB SET OFFLINE WITH ROLLBACK IMMEDIATE
I did restart SSMS regularly during all this to make sure SSMS wasn't the culprit by locking something invisibly.
Isn't there a way to brute force it? The database schema is something I'm pretty fond of but the data is expendable.
Hopefully there is some sort of a quick fix? :)
The DBA will try to reset the process tonight but I'd like to know the fix for this just in case.
Thx!
PS: I'm using DTC ... so perhaps this explains why my database got locked up all of a sudden?
edit:
I'm now doing the following, which results in the final statement running indefinitely. The first query even returns 0 rows, so I suppose killing the users won't even matter.
USE [master]
GO
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('Database')
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[usp_KillUsers]
    @p_DBName = 'Database'
SELECT 'Return Value' = @return_value
GO
ALTER DATABASE [Database] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
How are you connecting to SQL Server? Is it possible that you're trying to detach the database while you yourself are connected to it? This can block a Detach, depending on the version of SQL Server involved.
You can try using the DAC (Dedicated Administrator Connection) for stuff like this.
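For example (a sketch; sqlcmd's -A switch opens the Dedicated Administrator Connection, and in SSMS you get the same by prefixing the server name with admin:):
-- from a command prompt on the server: sqlcmd -S YourServer -A
-- then, over the DAC session:
ALTER DATABASE [DB] SET OFFLINE WITH ROLLBACK IMMEDIATE;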
Try killing all connections before detaching the database, e.g.:
USE [master]
GO
/****** Object: StoredProcedure [dbo].[usp_KillUsers] Script Date: 08/18/2009 10:42:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_KillUsers]
@p_DBName SYSNAME = NULL
AS
/* Check parameters */
/* Check for a DB name */
IF (@p_DBName IS NULL)
BEGIN
PRINT 'You must supply a DB Name'
RETURN
END -- DB is NULL
IF (@p_DBName = 'master')
BEGIN
PRINT 'You cannot run this process against the master database!'
RETURN
END -- Master supplied
IF (@p_DBName = DB_NAME())
BEGIN
PRINT 'You cannot run this process against your connection''s database!'
RETURN
END -- your database supplied
SET NOCOUNT ON
/* Declare Variables */
DECLARE @v_spid INT,
        @v_SQL NVARCHAR(255)
/* Declare the Table Cursor (Identity) */
DECLARE c_Users CURSOR
FAST_FORWARD FOR
SELECT spid
FROM master..sysprocesses (NOLOCK)
WHERE db_name(dbid) LIKE @p_DBName
OPEN c_Users
FETCH NEXT FROM c_Users INTO @v_spid
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        SELECT @v_SQL = 'KILL ' + CONVERT(NVARCHAR, @v_spid)
        -- PRINT @v_SQL
        EXEC (@v_SQL)
    END -- -2
    FETCH NEXT FROM c_Users INTO @v_spid
END -- While
CLOSE c_Users
DEALLOCATE c_Users
This is a script to kill all user connections to a database; just pass the database name, and it will close them. Then you can try to detach the database. This script is one I found a while back, and I cannot claim it as my own. I do not mean this as any sort of plagiarism; I just don't have the source.
SELECT DISTINCT req_transactionUOW FROM syslockinfo
KILL 'UOW_returned'  -- for the orphaned transaction(s), i.e. the one(s) with SPID -2
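A sketch of the sequence, with a made-up UOW GUID standing in for whatever the query actually returns:
SELECT DISTINCT req_transactionUOW
FROM syslockinfo
WHERE req_spid = -2;  -- orphaned DTC transactions show up as SPID -2

-- KILL accepts a unit-of-work GUID for orphaned distributed transactions
KILL 'D5499C66-E398-45CA-BF7F-DC9F19B7633A';  -- illustrative GUID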
The cause was DTC being a little bit annoying and locking up the database completely with a failed transaction. Now I would like to know why this happened, but at least it gives me the ability to reset the broken transactions when the problem recurs.
I'm posting it here since I'm sure it'll help some people who are experiencing the same issues.