Db2 stored procedure error: "not valid in the context where it is used"

My issue is that when I call the stored procedure, it does not work:
drop procedure DELETE_WITH_COMMIT_COUNT1
DB20000I The SQL command completed successfully.
CREATE PROCEDURE DELETE_WITH_COMMIT_COUNT1(IN v_TABLE_NAME VARCHAR(24), IN v_COMMIT_COUNT INTEGER)
NOT DETERMINISTIC
LANGUAGE SQL
P1 : BEGIN
    -- DECLARE statements
    DECLARE SQLCODE INTEGER;
    DECLARE v_DELETE_QUERY VARCHAR(1024);
    DECLARE v_DELETE_STATEMENT STATEMENT;
    P2 : BEGIN
        DECLARE V1 CHAR(24) FOR BIT DATA;
        DECLARE V2 CHAR(24) FOR BIT DATA;
        DECLARE cur1 CURSOR WITH RETURN TO CLIENT FOR
            select min(MESSAGE_ID), max(MESSAGE_ID) from TESTING
            where TIMESTAMP between (select TIMESTAMP(date(min(timestamp))) from TESTING with ur)
                                and (select TIMESTAMP(date(min(timestamp))) + 1 day from TESTING with ur);
        OPEN cur1;
        FETCH FROM cur1 INTO V1, V2;
        SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between V1 and V2 '
            || ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
        PREPARE v_DELETE_STATEMENT FROM v_DELETE_QUERY;
        DEL_LOOP:
        LOOP
            EXECUTE v_DELETE_STATEMENT;
            IF SQLCODE = 100 THEN
                LEAVE DEL_LOOP;
            END IF;
            COMMIT;
        END LOOP;
        COMMIT;
    END P2;
END P1
DB20000I The SQL command completed successfully.
My procedure was created successfully, but when I call it the following error occurs:
db2 "call DELETE_WITH_COMMIT_COUNT1 ('TESTING',50)" SQL0206N "V1" is
not valid in the context where it is used. SQLSTATE=42703
More information:
db2 "select min(MESSAGE_ID),max(MESSAGE_ID) from TESTING where TIMESTAMP between (select TIMESTAMP(date(min(timestamp))) from TESTING with ur) and (select TIMESTAMP(date(min(timestamp))) + 1 day from TESTING with ur)"
1 2
--------------------------------------------------- ---------------------------------------------------
x'4B5753313032353039313133392020202020202020202020' x'4B5753313032353039313230302020202020202020202020'
1 record(s) selected.
I want to delete the records between these values; there are currently 99 records between the minimum and maximum MESSAGE_ID.
The MESSAGE_ID column is defined as CHAR(24) FOR BIT DATA on the table.

Your problem is with this statement:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between V1 and V2 '
|| ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
In this statement, it looks like you want to use the values of your previously declared variables, V1 and V2:
' WHERE MESSAGE_ID between V1 and V2 '
DB2 sees this as a string literal. Instead, try changing this part of the statement like so:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME || ' WHERE MESSAGE_ID between ' || V1 || ' and ' || V2
|| ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
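Note that MESSAGE_ID is CHAR(24) FOR BIT DATA, so splicing the binary values of V1 and V2 into the statement text may still trip over literal formatting. A variant that avoids building literals altogether (a sketch, untested) keeps parameter markers in the dynamic SQL and supplies the variables when the statement is executed:
SET v_DELETE_QUERY = 'DELETE FROM (SELECT 1 FROM ' || v_TABLE_NAME
    || ' WHERE MESSAGE_ID BETWEEN ? AND ?'
    || ' FETCH FIRST ' || RTRIM(CHAR(v_COMMIT_COUNT)) || ' ROWS ONLY) AS DELETE_TABLE';
PREPARE v_DELETE_STATEMENT FROM v_DELETE_QUERY;
-- inside the delete loop, the binary values are passed as parameters:
EXECUTE v_DELETE_STATEMENT USING V1, V2;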

Related

Update Null columns to Zero dynamically in Redshift

Here is the code in SAS. It finds the numeric columns with missing values and replaces them with 0s:
DATA dummy_table;
    SET dummy_table;
    ARRAY DUMMY _NUMERIC_;
    DO OVER DUMMY;
        IF DUMMY=. THEN DUMMY=0;
    END;
RUN;
I am trying to replicate this in Redshift; here is what I tried:
create or replace procedure sp_replace_null_to_zero(IN tbl_nm varchar) as $$
Begin
    Execute 'declare ' ||
        'tot_cnt int := (select count(*) from information_schema.columns where table_name = ' || tbl_nm || ');' ||
        'init_loop int := 0; ' ||
        'cn_nm varchar; '
    Begin
        While init_loop <= tot_cnt
        Loop
            Raise info 'init_loop = %', Init_loop;
            Raise info 'tot_cnt = %', tot_cnt;
            Execute 'Select column_name into cn_nm from information_schema.columns ' ||
                'where table_name ='|| tbl_nm || ' and ordinal_position = init_loop ' ||
                'and data_type not in (''character varying'',''date'',''text''); '
            Raise info 'cn_nm = %', cn_nm;
            if cn_nm is not null then
                Execute 'Update ' || tbl_nm ||
                    'Set ' || cn_nm = 0 ||
                    'Where ' || cn_nm is null or cn_nm =' ';
            end if;
            init_loop = init_loop + 1;
        end loop;
    End;
End;
$$ language plpgsql;
Issues I am facing:
When I pass the input parameter here, I get a count of 0:
tot_cnt int := (select count(*) from information_schema.columns where table_name = ' || tbl_nm || ');'
For testing purposes I tried hardcoding the table name inside the proc, and I get the error: amazon invalid operation: value for domain information_schema.cardinal_number violates check constraint "cardinal_number_domain_check"
Is this even possible in Redshift? How can I implement this logic, or is there another workaround?
Expert advice needed here!
You can simply run an UPDATE over the table(s) using the NVL(cn_nm, 0) function:
UPDATE tbl_raw
SET col2 = NVL(col2,0);
However, UPDATE is a fairly expensive operation. Consider instead a view over your table that wraps the columns in NVL(cn_nm, 0):
CREATE VIEW tbl_clean
AS
SELECT col1
, NVL(col2,0) col2
FROM tbl_raw;
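If there are many numeric columns, you can generate that NVL list from the catalog instead of typing it out; a sketch (the table name tbl_raw is a placeholder):
SELECT 'NVL(' || column_name || ', 0) AS ' || column_name || ','
FROM information_schema.columns
WHERE table_name = 'tbl_raw'
  AND data_type NOT IN ('character varying', 'date', 'text')
ORDER BY ordinal_position;
Paste the generated expressions into the CREATE VIEW column list.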

PostgreSQL ERROR: EXECUTE of SELECT ... INTO is not implemented

When I run the following command from a function I defined, I get the error "EXECUTE of SELECT ... INTO is not implemented". Does this mean the specific command is not allowed (i.e. SELECT ... INTO)? Or does it just mean I'm doing something wrong? The actual code causing the error is below. I apologize if the answer is already out there, but I looked and could not find this specific error. Thanks in advance. For whatever it's worth, I'm running 8.4.7.
vCommand = 'select ' || stmt.column_name || ' as id ' ||
    ', count(*) as nCount
    INTO tmpResults
    from ' || stmt.table_name || '
    WHERE ' || stmt.column_name || ' IN (select distinct primary_id from anyTable
    WHERE primary_id = ' || stmt.column_name || ')
    group by ' || stmt.column_name || ';';
EXECUTE vCommand;
INTO is ambiguous in this context (PL/pgSQL's SELECT ... INTO variable vs. SQL's SELECT ... INTO new_table), so it is prohibited there.
You can use CREATE TABLE AS SELECT instead:
CREATE OR REPLACE FUNCTION public.f1(tablename character varying)
RETURNS integer
LANGUAGE plpgsql
AS $function$
begin
    execute 'create temp table xx on commit drop as select * from '
            || quote_ident(tablename);
    return (select count(*) from xx);
end;
$function$
postgres=# select f1('omega');
f1
────
2
(1 row)
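If you only need the result in a PL/pgSQL variable rather than a new table, note that EXECUTE itself accepts an INTO clause; the restriction is only on INTO inside the executed string. A minimal sketch:
CREATE OR REPLACE FUNCTION public.f2(tablename character varying)
RETURNS bigint
LANGUAGE plpgsql
AS $function$
declare
    n bigint;
begin
    -- the INTO belongs to EXECUTE, not to the query string
    execute 'select count(*) from ' || quote_ident(tablename) into n;
    return n;
end;
$function$;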

Self-managing PostgreSQL partition tables

I am trying to make a self-managing partition table setup with Postgres. It all revolves around this function but I can't seem to get Postgres to accept my table names. Any ideas or examples of self-managing partition table trigger functions?
My current function:
DECLARE
    day integer;
    year integer;
    tablename text;
    startdate text;
    enddate text;
BEGIN
    day := date_part('doy', to_timestamp(NEW.date));
    year := date_part('year', to_timestamp(NEW.date));
    tablename := 'pings_' || year || '_' || day || '_' || NEW.id;
    -- RAISE EXCEPTION 'tablename=%',tablename;
    PERFORM 'tablename' FROM pg_tables WHERE 'schemaname' = tablename;
    -- RAISE EXCEPTION 'found=%',FOUND;
    IF FOUND <> TRUE THEN
        startdate := date_part('year', to_timestamp(NEW.date)) || '-' || date_part('month', to_timestamp(NEW.date)) || '-' || date_part('day', to_timestamp(NEW.date));
        enddate := startdate::timestamp + INTERVAL '1 day';
        EXECUTE 'CREATE TABLE $1 (
            CHECK ( date >= DATE $2 AND date < DATE $3 )
        ) INHERITS (pings)' USING quote_ident(tablename), startdate, enddate;
    END IF;
    EXECUTE 'INSERT INTO $1 VALUES (NEW.*)' USING quote_ident(tablename);
    RETURN NULL;
END;
I want it to auto-create a table called pings_YEAR_DOY_ID but it always fails with:
2011-10-24 13:39:04 CDT [15804]: [1-1] ERROR: invalid input syntax for type double precision: "-" at character 45
2011-10-24 13:39:04 CDT [15804]: [2-1] QUERY: SELECT date_part('year',to_timestamp( $1 ))+'-'+date_part('month',to_timestamp( $2 ))+'-'+date_part('day',to_timestamp( $3 ))
2011-10-24 13:39:04 CDT [15804]: [3-1] CONTEXT: PL/pgSQL function "ping_partition" line 15 at assignment
2011-10-24 13:39:04 CDT [15804]: [4-1] STATEMENT: INSERT INTO pings VALUES (0,0,5);
TRY 2
After applying the changes and modifying it some more (date is a unix timestamp column; my thinking is that an integer column is faster than a timestamp column when selecting), I get the error below. I am not sure if I am using the proper syntax for USING NEW.
Updated function:
CREATE FUNCTION ping_partition() RETURNS trigger
    LANGUAGE plpgsql
    AS $_$
DECLARE
    day integer;
    year integer;
    tablename text;
    startdate text;
    enddate text;
BEGIN
    day := date_part('doy', to_timestamp(NEW.date));
    year := date_part('year', to_timestamp(NEW.date));
    tablename := 'pings_' || year || '_' || day || '_' || NEW.id;
    -- RAISE EXCEPTION 'tablename=%',tablename;
    PERFORM 'tablename' FROM pg_tables WHERE 'schemaname' = tablename;
    -- RAISE EXCEPTION 'found=%',FOUND;
    IF FOUND <> TRUE THEN
        startdate := to_char(to_timestamp(NEW.date), 'YYYY-MM-DD');
        enddate := startdate::timestamp + INTERVAL '1 day';
        EXECUTE 'CREATE TABLE ' || quote_ident(tablename) || ' (
            CHECK ( date >= EXTRACT(EPOCH FROM DATE ' || quote_literal(startdate) || ')
                AND date < EXTRACT(EPOCH FROM DATE ' || quote_literal(enddate) || ') )
        ) INHERITS (pings)';
    END IF;
    EXECUTE 'INSERT INTO ' || quote_ident(tablename) || ' SELECT $1' USING NEW;
    RETURN NULL;
END;
$_$;
My statement:
INSERT INTO pings VALUES (0,0,5);
SQL error:
ERROR: column "date" is of type integer but expression is of type pings
LINE 1: INSERT INTO pings_1969_365_0 SELECT $1
^
HINT: You will need to rewrite or cast the expression.
QUERY: INSERT INTO pings_1969_365_0 SELECT $1
CONTEXT: PL/pgSQL function "ping_partition" line 22 at EXECUTE statement
Note: Since Postgres 10 declarative partitioning is typically superior to partitioning by inheritance as used in this case.
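For reference, a minimal declarative sketch (Postgres 10+, using the table and column names of the demo further down):
CREATE TABLE ping (ping_id integer, the_date date) PARTITION BY RANGE (the_date);

CREATE TABLE ping_2011_10_24 PARTITION OF ping
    FOR VALUES FROM ('2011-10-24') TO ('2011-10-25');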
You are mixing the double precision output of date_part() with the text '-'. That doesn't make sense to PostgreSQL; you would need an explicit cast to text. But there is a much simpler way to do all of this. You have:
startdate:=date_part('year',to_timestamp(NEW.date))
||'-'||date_part('month',to_timestamp(NEW.date))
||'-'||date_part('day',to_timestamp(NEW.date));
Use instead:
startdate := to_char(NEW.date, 'YYYY-MM-DD');
This makes no sense either:
EXECUTE 'CREATE TABLE $1 (
CHECK (date >= DATE $2 AND date < DATE $3 )
) INHERITS (pings)' USING quote_ident(tablename),startdate,enddate;
You can only supply values with the USING clause. Read the manual here. Try instead:
EXECUTE 'CREATE TABLE ' || quote_ident(tablename) || ' (
CHECK ("date" >= ''' || startdate || ''' AND
"date" < ''' || enddate || '''))
INHERITS (ping)';
Or better yet, use format(). See below.
Also, as @a_horse answered: you need to put your text values in single quotes.
Similar here:
EXECUTE 'INSERT INTO $1 VALUES (NEW.*)' USING quote_ident(tablename);
Instead:
EXECUTE 'INSERT INTO ' || quote_ident(tablename) || ' VALUES ($1.*)'
USING NEW;
Related answer:
How to dynamically use TG_TABLE_NAME in PostgreSQL 8.2?
Aside: While "date" is allowed as a column name in PostgreSQL, it is a reserved word in every SQL standard. Don't name your column "date"; it leads to confusing syntax errors.
Complete working demo
CREATE TABLE ping (ping_id integer, the_date date);

CREATE OR REPLACE FUNCTION trg_ping_partition()
    RETURNS trigger
    LANGUAGE plpgsql SET client_min_messages = 'WARNING' AS
$func$
DECLARE
    _schema text := 'public'; -- double-quoted if necessary
    _tbl    text := to_char(NEW.the_date, '"ping_"YYYY_DDD_') || NEW.ping_id;
BEGIN
    EXECUTE format('CREATE TABLE IF NOT EXISTS %1$s.%2$s
                    (CHECK (the_date >= %3$L
                        AND the_date <  %4$L)) INHERITS (%1$s.ping)'
                  , _schema                                  -- %1$s
                  , _tbl        -- %2$s -- legal(!) name needs no quotes
                  , to_char(NEW.the_date,     'YYYY-MM-DD')  -- %3$L
                  , to_char(NEW.the_date + 1, 'YYYY-MM-DD')  -- %4$L
                  );
    EXECUTE 'INSERT INTO ' || _tbl || ' VALUES ($1.*)'
    USING NEW;
    RETURN NULL;
END
$func$;

CREATE TRIGGER insbef
BEFORE INSERT ON ping
FOR EACH ROW EXECUTE FUNCTION trg_ping_partition();
Postgres 9.1 added the clause IF NOT EXISTS for CREATE TABLE. See:
PostgreSQL create table if not exists
Postgres 11 added the more appropriate syntax variant EXECUTE FUNCTION for triggers. Use EXECUTE PROCEDURE in older versions.
See:
Trigger uses a procedure or a function?
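For Postgres 10 or older, the trigger above would be created with:
CREATE TRIGGER insbef
BEFORE INSERT ON ping
FOR EACH ROW EXECUTE PROCEDURE trg_ping_partition();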
to_char() can take a date as $1. That's converted to timestamp automatically. See:
The manual on date / time functions.
I SET client_min_messages = 'WARNING' for the scope of the function to silence the flood of notices that would otherwise be raised on conflict by IF NOT EXISTS.
Multiple other simplifications and improvements. Compare the code.
Tests:
INSERT INTO ping VALUES (1, now()::date);
INSERT INTO ping VALUES (2, now()::date);
INSERT INTO ping VALUES (2, now()::date + 1);
INSERT INTO ping VALUES (2, now()::date + 1);
fiddle
OLD sqlfiddle
Dynamic partitioning in PostgreSQL is just a bad idea. Your code is not safe in a multi-user environment. For it to be safe you would have to use locks, which slows down execution. The optimal number of partitions is about one hundred. You can easily create that many well in advance to dramatically simplify the logic necessary for partitioning.
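To pre-create a run of daily partitions, a DO block can generate the DDL; a sketch under the inheritance layout of the demo above (the date range and naming scheme are assumptions):
DO $$
DECLARE
    d date;
BEGIN
    FOR d IN SELECT generate_series(date '2011-10-01', date '2012-01-08', interval '1 day')::date LOOP
        -- one child table per day, named like the demo's ping_YYYY_DDD pattern
        EXECUTE format('CREATE TABLE IF NOT EXISTS %I
                        (CHECK (the_date >= %L AND the_date < %L)) INHERITS (ping)'
                      , 'ping_' || to_char(d, 'YYYY_DDD'), d, d + 1);
    END LOOP;
END
$$;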
You need to put your date literals in single quotes. Currently you are executing something like this:
CHECK ( date >= DATE 2011-10-25 AND date < DATE 2011-11-25 )
which is invalid. In this case 2011-10-25 is interpreted as 2011 minus 10 minus 25
Your code needs to create the SQL using single quotes around the date literal:
CHECK ( date >= DATE '2011-10-25' AND date < DATE '2011-11-25' )
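In PL/pgSQL you can let format() with %L (or quote_literal()) add those quotes for you, e.g.:
EXECUTE format('CREATE TABLE %I (
    CHECK ( "date" >= DATE %L AND "date" < DATE %L )
) INHERITS (pings)', tablename, startdate, enddate);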
I figured out the whole thing and it works great; it even auto-deletes partitions after 30 days. I hope this helps future people looking for an auto-partitioning trigger function.
CREATE FUNCTION ping_partition() RETURNS trigger
    LANGUAGE plpgsql
    AS $_$
DECLARE
    _keepdate text;
    _tablename text;
    _startdate text;
    _enddate text;
    _result record;
BEGIN
    _keepdate := to_char(to_timestamp(NEW.date) - interval '30 days', 'YYYY-MM-DD');
    _startdate := to_char(to_timestamp(NEW.date), 'YYYY-MM-DD');
    _tablename := 'pings_' || NEW.id || '_' || _startdate;
    PERFORM 1
    FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND c.relname = _tablename
      AND n.nspname = 'pinglog';
    IF NOT FOUND THEN
        _enddate := _startdate::timestamp + INTERVAL '1 day';
        EXECUTE 'CREATE TABLE pinglog.' || quote_ident(_tablename) || ' (
            CHECK ( date >= EXTRACT(EPOCH FROM DATE ' || quote_literal(_startdate) || ')
                AND date < EXTRACT(EPOCH FROM DATE ' || quote_literal(_enddate) || ')
                AND id = ' || quote_literal(NEW.id) || '
            )
        ) INHERITS (pinglog.pings)';
        EXECUTE 'CREATE INDEX ' || quote_ident(_tablename || '_indx1') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (microseconds) WHERE microseconds IS NULL';
        EXECUTE 'CREATE INDEX ' || quote_ident(_tablename || '_indx2') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (date, id)';
        EXECUTE 'CREATE INDEX ' || quote_ident(_tablename || '_indx3') || ' ON pinglog.' || quote_ident(_tablename) || ' USING btree (date, id, microseconds) WHERE microseconds IS NULL';
    END IF;
    EXECUTE 'INSERT INTO ' || quote_ident(_tablename) || ' VALUES ($1.*)' USING NEW;
    FOR _result IN SELECT * FROM pg_tables WHERE schemaname = 'pinglog' LOOP
        IF char_length(substring(_result.tablename from '[0-9-]*$')) <> 0
           AND (to_timestamp(NEW.date) - interval '30 days') > to_timestamp(substring(_result.tablename from '[0-9-]*$'), 'YYYY-MM-DD') THEN
            -- RAISE EXCEPTION 'timestamp=%,table=%,found=%',to_timestamp(substring(_result.tablename from '[0-9-]*$'),'YYYY-MM-DD'),_result.tablename,char_length(substring(_result.tablename from '[0-9-]*$'));
            -- could have it check for non-existent ids as well, or for an archive bit and only delete if the archive bit is not set
            EXECUTE 'DROP TABLE ' || quote_ident(_result.tablename);
        END IF;
    END LOOP;
    RETURN NULL;
END;
$_$;

How can I measure the amount of space taken by blobs on a Firebird 2.1 database?

I have a production database, using Firebird 2.1, where I need to find out how much space is used by each table, including the blobs. The blob part is the tricky one, because it is not covered by the standard statistical report.
I do not have easy access to the server's desktop, so installing UDFs etc. is not a good solution.
How can I do this easily?
You can count the total size of all BLOB fields in a database with the following statement:
EXECUTE BLOCK RETURNS (BLOB_SIZE BIGINT)
AS
    DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
    DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
    DECLARE VARIABLE S BIGINT;
BEGIN
    BLOB_SIZE = 0;
    FOR
        SELECT r.rdb$relation_name, r.rdb$field_name
        FROM rdb$relation_fields r JOIN rdb$fields f
            ON r.rdb$field_source = f.rdb$field_name
        WHERE f.rdb$field_type = 261
        INTO :RN, :FN
    DO BEGIN
        EXECUTE STATEMENT
            'SELECT SUM(OCTET_LENGTH(' || :FN || ')) FROM ' || :RN ||
            ' WHERE NOT ' || :FN || ' IS NULL'
            INTO :S;
        BLOB_SIZE = :BLOB_SIZE + COALESCE(:S, 0);
    END
    SUSPEND;
END
I modified Andrej's code example to show the size of each blob field, not only the sum of all blobs, and used SET TERM so you can copy & paste this snippet directly into tools like FlameRobin.
SET TERM #;
EXECUTE BLOCK
RETURNS (BLOB_SIZE BIGINT, TABLENAME CHAR(31), FIELDNAME CHAR(31))
AS
    DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
    DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
    DECLARE VARIABLE S BIGINT;
BEGIN
    BLOB_SIZE = 0;
    FOR
        SELECT r.rdb$relation_name, r.rdb$field_name
        FROM rdb$relation_fields r JOIN rdb$fields f
            ON r.rdb$field_source = f.rdb$field_name
        WHERE f.rdb$field_type = 261
        INTO :RN, :FN
    DO BEGIN
        EXECUTE STATEMENT
            'SELECT SUM(OCTET_LENGTH(' || :FN || ')) AS BLOB_SIZE, ''' || :RN || ''', ''' || :FN || '''
             FROM ' || :RN ||
            ' WHERE NOT ' || :FN || ' IS NULL'
            INTO :BLOB_SIZE, :TABLENAME, :FIELDNAME;
        SUSPEND;
    END
END
#
SET TERM ;#
This example doesn't work with ORDER BY; maybe a more elegant solution without EXECUTE BLOCK exists.
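One workaround, sketched here untested: wrap the same logic in a selectable stored procedure (the name BLOB_SIZES is arbitrary), which an outer query can then sort:
SET TERM #;
CREATE PROCEDURE BLOB_SIZES
RETURNS (BLOB_SIZE BIGINT, TABLENAME CHAR(31), FIELDNAME CHAR(31))
AS
    DECLARE VARIABLE RN CHAR(31) CHARACTER SET UNICODE_FSS;
    DECLARE VARIABLE FN CHAR(31) CHARACTER SET UNICODE_FSS;
BEGIN
    FOR
        SELECT r.rdb$relation_name, r.rdb$field_name
        FROM rdb$relation_fields r JOIN rdb$fields f
            ON r.rdb$field_source = f.rdb$field_name
        WHERE f.rdb$field_type = 261
        INTO :RN, :FN
    DO BEGIN
        EXECUTE STATEMENT
            'SELECT SUM(OCTET_LENGTH(' || :FN || ')) FROM ' || :RN ||
            ' WHERE NOT ' || :FN || ' IS NULL'
            INTO :BLOB_SIZE;
        TABLENAME = :RN;
        FIELDNAME = :FN;
        SUSPEND;
    END
END#
SET TERM ;#

SELECT * FROM BLOB_SIZES ORDER BY BLOB_SIZE DESC;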

How do I delete the data from all my tables in ORACLE 10g

I have an ORACLE schema containing hundreds of tables. I would like to delete the data from all the tables (but don't want to DROP the tables).
Is there an easy way to do this, or do I have to write an SQL script that retrieves all the table names and runs the TRUNCATE command on each?
I would like to delete the data using commands in an SQL*Plus session.
If you have any referential integrity constraints (foreign keys) then truncate won't work; you cannot truncate the parent table if any child tables exist, even if the children are empty.
The following PL/SQL should work (it's untested, but I've run similar code in the past): it iterates over the tables, disabling all the foreign keys, truncating the tables, then re-enabling all the foreign keys. If a table in another schema has an RI constraint against your table, this script will fail.
set serveroutput on size unlimited
declare
    l_sql varchar2(2000);
    l_debug number := 1; -- will output results if non-zero,
                         -- will execute sql if 0
    l_drop_user varchar2(30) := ''; -- set the user whose tables you're dropping
begin
    for i in (select table_name, constraint_name from dba_constraints
              where owner = l_drop_user
                and constraint_type = 'R'
                and status = 'ENABLED')
    loop
        l_sql := 'alter table ' || l_drop_user || '.' || i.table_name ||
                 ' disable constraint ' || i.constraint_name;
        if l_debug = 0 then
            execute immediate l_sql;
        else
            dbms_output.put_line(l_sql);
        end if;
    end loop;
    for i in (select table_name from dba_tables
              where owner = l_drop_user
              minus
              select view_name from dba_views
              where owner = l_drop_user)
    loop
        l_sql := 'truncate table ' || l_drop_user || '.' || i.table_name;
        if l_debug = 0 then
            execute immediate l_sql;
        else
            dbms_output.put_line(l_sql);
        end if;
    end loop;
    for i in (select table_name, constraint_name from dba_constraints
              where owner = l_drop_user
                and constraint_type = 'R'
                and status = 'DISABLED')
    loop
        l_sql := 'alter table ' || l_drop_user || '.' || i.table_name ||
                 ' enable constraint ' || i.constraint_name;
        if l_debug = 0 then
            execute immediate l_sql;
        else
            dbms_output.put_line(l_sql);
        end if;
    end loop;
end;
/
Probably the easiest way is to export the schema without data, then drop and re-import it.
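With Data Pump (10g+) that could look roughly like this; a sketch where the directory object, credentials, and schema name are placeholders:
expdp scott/tiger SCHEMAS=scott CONTENT=METADATA_ONLY DIRECTORY=dpump_dir DUMPFILE=scott_meta.dmp
impdp scott/tiger SCHEMAS=scott DIRECTORY=dpump_dir DUMPFILE=scott_meta.dmp
Drop the schema's objects between the two steps; the import then recreates the empty tables.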
I was looking at this too.
Seems like you do need to go through all the table names.
Have you seen this? Seems to do the trick.
I had to do this recently and wrote a stored procedure which you can run via exec sp_truncate;. Most of the code is based on this answer on disabling constraints:
CREATE OR REPLACE PROCEDURE sp_truncate AS
BEGIN
    -- Disable all constraints
    FOR c IN
        (SELECT c.owner, c.table_name, c.constraint_name
         FROM user_constraints c, user_tables t
         WHERE c.table_name = t.table_name
           AND c.status = 'ENABLED'
         ORDER BY c.constraint_type DESC)
    LOOP
        DBMS_UTILITY.EXEC_DDL_STATEMENT('ALTER TABLE ' || c.owner || '.' || c.table_name || ' disable constraint ' || c.constraint_name);
        DBMS_OUTPUT.PUT_LINE('Disabled constraints for table ' || c.table_name);
    END LOOP;
    -- Truncate data in all tables
    FOR i IN (SELECT table_name FROM user_tables)
    LOOP
        EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || i.table_name;
        DBMS_OUTPUT.PUT_LINE('Truncated table ' || i.table_name);
    END LOOP;
    -- Enable all constraints
    FOR c IN
        (SELECT c.owner, c.table_name, c.constraint_name
         FROM user_constraints c, user_tables t
         WHERE c.table_name = t.table_name
           AND c.status = 'DISABLED'
         ORDER BY c.constraint_type)
    LOOP
        DBMS_UTILITY.EXEC_DDL_STATEMENT('ALTER TABLE ' || c.owner || '.' || c.table_name || ' enable constraint ' || c.constraint_name);
        DBMS_OUTPUT.PUT_LINE('Enabled constraints for table ' || c.table_name);
    END LOOP;
    COMMIT;
END sp_truncate;
/
Putting the details from the OTN Discussion Forums thread "truncating multiple tables with single query" into one SQL script gives the following, which can be run in an SQL*Plus session:
SET SERVEROUTPUT ON
BEGIN
    -- Disable constraints
    DBMS_OUTPUT.PUT_LINE('Disabling constraints');
    FOR reg IN (SELECT uc.table_name, uc.constraint_name FROM user_constraints uc) LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || reg.table_name || ' ' || 'DISABLE' ||
                          ' CONSTRAINT ' || reg.constraint_name || ' CASCADE';
    END LOOP;
    -- Truncate tables
    DBMS_OUTPUT.PUT_LINE('Truncating tables');
    FOR reg IN (SELECT table_name FROM user_tables) LOOP
        EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || reg.table_name;
    END LOOP;
    -- Enable constraints
    DBMS_OUTPUT.PUT_LINE('Enabling constraints');
    FOR reg IN (SELECT uc.table_name, uc.constraint_name FROM user_constraints uc) LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || reg.table_name || ' ' || 'ENABLE' ||
                          ' CONSTRAINT ' || reg.constraint_name;
    END LOOP;
END;
/