MS-SQL 2000: Turn off logging during stored procedure - tsql

Here's my scenario:
I have a simple stored procedure that removes a specific set of rows from a table (we'll say about 30k rows), and then inserts about the same amount of rows. This generally should only take a few seconds; however, the table has a trigger on it that watches for inserts/deletes, and tries to mimic what happened to a linked table on another server.
This process in turn is unbearably slow due to the trigger, and the table is also locked during the whole process. So here are my two questions:
I'm guessing a decent part of the slowdown is due to the transaction log. Is there a way for me to specify in my stored procedure that I do not want what's in the procedure to be logged?
Is there a way for me to do my 'DELETE FROM' and 'INSERT INTO' commands without me locking the table during the entire process?
Thanks!
edit - Thanks for the answers; I figured this was the case (that neither of the above is possible), but wanted to make sure. The trigger was created a long time ago and doesn't look very efficient, so my next step will be to go into it, find out what's actually needed, and see how it can be improved. Thanks!

1) No. You are also not doing a minimally logged operation such as TRUNCATE or BULK INSERT, so the deletes and inserts will be fully logged regardless.
2) No. How else would the database prevent corruption?

I wouldn't automatically assume that the performance problem is due to logging. In fact, it's likely that the trigger is written in a way that causes your performance problems. I encourage you to edit your original question and show the code for the trigger.

You can't turn off transactional integrity when modifying data. You can ignore locks when you select data, e.g. select * from table (nolock); however, you need to be very careful and make sure your application can handle dirty reads.
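For example, a dirty read might look like this (the table and column names are placeholders, not from the original question):
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)  -- reads uncommitted rows; data may be dirty or disappear on rollback
WHERE Status = 'Pending'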

It doesn't help with your trigger, but the solution to the locking issue is to perform the transactions in smaller batches.
Instead of
DELETE FROM Table WHERE <Condition>
Do something like
WHILE EXISTS (SELECT * FROM Table WHERE <Condition>)
BEGIN
    SET ROWCOUNT 1000
    DELETE FROM Table WHERE <Condition>
    SET ROWCOUNT 0
END

You can temporarily disable the trigger, run your proc, then do whatever the trigger was doing in a more efficient manner.
-- disable trigger
ALTER TABLE [Table] DISABLE TRIGGER [Trigger]
GO
-- execute your proc
EXEC spProc
GO
-- do more stuff to clean up / sync with other server
GO
-- enable trigger
ALTER TABLE [Table] ENABLE TRIGGER [Trigger]
GO

Related

Cannot drop previously modified new table in execute block

I'm not well acquainted with the Firebird database and its subtleties.
When executing the script, the following problem occurs:
EXECUTE ibeblock
AS
BEGIN
    -- 1. Create temporary table
    execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit delete rows*/;';
    commit;
    -- 2. Dummy fill of the temporary table
    insert into tmptbl (ID)
    values (0xFE);
    commit; -- not necessary
    -- 3. Perform some actions...
    -- 4. Delete temporary table
    execute statement 'drop table TMPTBL;';
    commit; -- FAILURE!
END
The idea of the script is simple: 1) create a temporary table; 2) fill it with records; 3) perform actions on other DB objects using the populated records; 4) drop the temp table.
For this simulation, step 3 is useless and is skipped. Step 4 leads to an error on commit: "This operation is not defined for system tables. unsuccessful metadata update. object TABLE "TMPTBL" is in use.".
Neither triggers nor constraints are defined on the table, so obviously there should be nothing locking the temp table.
Please help with a resolution; hopefully I have just missed something.
P.S.: FB 2.5, IBExpert 2017.12.13.1 used as DB managing tool
There are a number of problems with your code:
A global temporary table is intended as a permanent object; it is just the content that is temporary (either for the duration of the transaction or of the connection). So normally you would create a global temporary table once, and not drop it, but instead reuse its definition.
Although you technically can execute DDL using execute statement, you are not supposed to, and it is not guaranteed to work. Your code is specifically an example of one of the things that will not work.
The problem here, is that you are trying to drop the table in the same transaction that used it (though to be honest, I'm surprised the insert even worked, because normally you can't insert into a table that was created in the same transaction).
The insert you executed on TMPTBL will mark the table in use, and given the transaction isn't committed yet, you can't drop the table: it is in use.
You shouldn't call commit in PSQL code (to be honest, I thought this wasn't even possible).
In short, you need to rethink how you use global temporary tables: define it once, and do not use execute statement to create it, but create it separately.
If you do want to create and drop it and not retain the definition of the global temporary table, then create it before the execute block, commit, then the execute block (with only the inserts and the 'perform some actions'), commit, and then drop it (and commit).
Alternatively, you might get away with executing the create using execute statement ... with autonomous transaction, the inserts and the 'perform some actions' in another execute statement ... with autonomous transaction, and finally the drop in yet another execute statement ... with autonomous transaction. However, that makes your code very brittle, and it is not a recommended approach.
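For illustration, here is a minimal sketch of the first recommendation: define the global temporary table once and simply reuse it. ON COMMIT DELETE ROWS clears the rows automatically at commit, 254 stands in for the 0xFE literal, and SET TERM is only needed for script tools like isql:
-- One-time definition; the table stays, only its rows are temporary
CREATE GLOBAL TEMPORARY TABLE TMPTBL (ID BIGINT) ON COMMIT DELETE ROWS;
COMMIT;

-- Any later script just uses it; no drop is needed
SET TERM #;
EXECUTE BLOCK
AS
    DECLARE VARIABLE xid BIGINT;
BEGIN
    INSERT INTO TMPTBL (ID) VALUES (254);
    FOR SELECT tt.ID FROM TMPTBL tt INTO :xid DO
    BEGIN
        -- perform some actions with :xid
    END
END#
SET TERM ;#
COMMIT;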
I have been forced again by the devops guys to find a robust solution for DB structure upgrades. Requirements: safely combine DDL and DML statements; ability to create temporary tables (for heavy selections); leave no garbage behind. Of course, the upgrade is handled within a single connection.
Following the clues given by Mark, I dug deeper and ran a lot of experiments.
Here is a template script that actually worked (using the native isql utility):
SET TERM #;

-- 1. Create temporary table
EXECUTE BLOCK
AS
BEGIN
    execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit preserve rows*/;';
END#
commit#

-- Data manipulations
EXECUTE BLOCK
AS
    declare xid bigint;
BEGIN
    -- 2. Dummy fill of the temporary table
    begin
        insert into TMPTBL (ID) values (0xFE);
    end
    -- 3. Perform some actions...
    for
        select tt.ID
        from TMPTBL tt
        into :xid
    do
    begin
        -- use :xid var
    end
END#
commit#

-- 4. Delete temporary table
EXECUTE BLOCK
AS
BEGIN
    execute statement 'drop table TMPTBL;';
END#
commit#

SET TERM ;#
Might be useful for someone.
Damn, Firebird does drive me crazy!

Move truncated records to another table in Postgresql 9.5

The problem is the following: remove all records from one table, and insert them into another.
I have a table that is partitioned by a date criterion. To avoid partitioning each record one by one, I'm collecting the data in one table and periodically moving it to another table. The copied records have to be removed from the first table. I'm using a DELETE query with RETURNING, but the side effect is that autovacuum has a lot of work to do to clean up the mess in the original table.
I'm trying to achieve the same effect (copy and remove records), but without creating additional work for the vacuum mechanism.
As I'm removing all rows (a delete without a where condition), I was thinking about TRUNCATE, but it does not support a RETURNING clause. Another idea was to somehow configure the table to automatically remove the tuple from its page on delete, without waiting for vacuum, but I did not find whether that is possible.
Can you suggest something, that I could use to solve my problem?
You need to use something like:
--Open your transaction
BEGIN;
--Prevent concurrent writes, but allow concurrent data access
LOCK TABLE table_a IN SHARE MODE;
--Copy the data from table_a to table_b (you could also use CREATE TABLE ... AS to do this)
INSERT INTO table_b SELECT * FROM table_a;
--Empty table_a
TRUNCATE TABLE table_a;
--Commit and release the lock
COMMIT;

INSERT statement that does not fire an INSERT trigger

I am using PostgreSQL 9.2 and I need to write an INSERT statement which copies data from table A to table B without firing the INSERT trigger defined on table B (maybe some sort of bulk insertion operation??).
On this specific table (table B) many INSERT, UPDATE and DELETE operations are executed. During each and every one of these executions, a trigger must fire.
I cannot temporarily disable the triggers because of the standard, day-to-day DML operations.
Can anyone help me with the syntax for this non-trigger-firing INSERT statement?
Run your "privileged" inserts as a different user. That way your trigger can check the current user and exit if it shouldn't do anything.
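A sketch of that idea in PL/pgSQL (the role name bulk_loader and the function/trigger names are invented for illustration; on 9.2 the trigger is created with EXECUTE PROCEDURE):
CREATE OR REPLACE FUNCTION table_b_audit() RETURNS trigger AS $$
BEGIN
    -- Skip the normal trigger work when the dedicated bulk-load role is inserting
    IF current_user = 'bulk_loader' THEN
        RETURN NEW;
    END IF;
    -- ... normal day-to-day trigger logic goes here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_b_audit_trg
    BEFORE INSERT ON table_b
    FOR EACH ROW EXECUTE PROCEDURE table_b_audit();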

db2 reorganize a table

When I alter a table in DB2, I have to reorganize it,
so I execute the following query:
Call Sysproc.admin_cmd ('reorg Table myTable');
I'm searching for an appropriate solution to reorganize a table when it is altered, or to reorganize the whole schema after making various modifications.
You can determine when tables will require a REORG by looking at SYSIBMADM.ADMINTABINFO:
select tabschema, tabname
from sysibmadm.admintabinfo
where reorg_pending = 'Y'
You may also want to look at the NUM_REORG_REC_ALTERS column, as this may show you additional tables that may need reorganization due to various ALTER TABLE statements.
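A query sketch that surfaces both kinds of candidates (assuming NUM_REORG_REC_ALTERS is populated on your DB2 version):
select tabschema, tabname, num_reorg_rec_alters
from sysibmadm.admintabinfo
where reorg_pending = 'Y'
   or num_reorg_rec_alters > 0
order by tabschema, tabname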
The reorg operation is similar to defragmenting a hard disk. It frees empty space in pages, and it can also reorder the data according to an index. Depending on the options used, it creates the compression dictionary and compresses the data.
As you can see, reorg is an administrative task, and it is not necessary each time data is modified. A database can run without reorgs.
To ease this, DB2 includes autonomic features such as automatic backup; however, this doesn't answer your own question, as automatic maintenance will only trigger a reorg on tables that need it.
To reorg a table explicitly you need to execute the REORG command: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0001966.html
or run it via ADMIN_CMD: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.sql.rtn.doc/doc/r0023582.html
In the DB2 configuration we have:
Automatic reorganization (AUTO_REORG) = OFF
We can set AUTO_REORG to ON.
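For example, from the DB2 command line (mydb is a placeholder database name; AUTO_REORG only takes effect if the parent switches AUTO_MAINT and AUTO_TBL_MAINT are also ON):
UPDATE DB CFG FOR mydb USING AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_REORG ON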

wrapping postgresql commands in a transaction: truncate vs delete or upsert/merge

I am using the commands below in PostgreSQL 9.1.3 to move data from a temp staging table to a table being used in a webapp (GeoServer), all in the same DB, and then drop the temp table.
TRUNCATE table_foo;
INSERT INTO table_foo
SELECT * FROM table_temp;
DROP TABLE table_temp;
I want to wrap this in a transaction to allow for concurrency. The data set is small (less than 2000 rows) and truncating is faster than deleting.
What is the best way to run these commands in a transaction?
Is creating a function advisable, or should I write an UPSERT/MERGE etc. in a CTE?
Would it be better to DELETE all rows and then bulk INSERT from the temp table, instead of TRUNCATE?
In Postgres, which of TRUNCATE or DELETE allows a rollback?
The temp table is delivered daily via an ETL process scripted in arcpy; how could I automate the truncate/delete/bulk-insert steps within Postgres?
I am open to using PL/pgsql, PL/python (or the recommended py for postgres)
Currently I am manually executing the sql commands after the temp staging table is imported into my DB.
Both truncate and delete can be rolled back (which is clearly documented in the manual).
truncate, due to its nature, has some oddities regarding visibility.
See the manual for details: http://www.postgresql.org/docs/current/static/sql-truncate.html (the warning at the bottom)
If your application can live with the fact that table_foo is "empty" during that process, truncate is probably better (again, see the big red box in the manual for an explanation). If you don't want the application to notice, you need to use delete.
To run these statements in a transaction simply put them into one:
begin transaction;
delete from table_foo;
insert into table_foo select * from table_temp;
drop table table_temp;
commit;
Whether you do that in a function or not is up to you.
truncate/insert will be faster (than delete/insert) as that minimizes the amount of WAL generated.
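A sketch of the truncate variant, using the table names from the question (note that TRUNCATE takes an ACCESS EXCLUSIVE lock, so concurrent readers of table_foo will block until the commit):
begin;
truncate table_foo;
insert into table_foo select * from table_temp;
drop table table_temp;
commit;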