Why can 'create table' in Postgres take several seconds? - postgresql

In my project we sometimes have to copy all the data from one schema into another. I automated this with a simple truncate / insert into select * script, but soon realized that this approach is not tolerant to changes in the source schema (adding or deleting tables required modifying the script). So today I decided to change it to a PL/pgSQL script which creates the tables and copies the data using dynamic queries. My first implementation was something like this:
do
$$
declare
    source_schema text := 'source_schema';
    dest_schema   text := 'dest_schema';
    obj_name      text;
    source_table  text;
    dest_table    text;
    alter_columns text;
begin
    for dest_table in
        select table_schema || '.' || table_name
        from information_schema.tables
        where table_schema = dest_schema
        order by table_name
    loop
        execute 'drop table ' || dest_table;
    end loop;
    raise notice 'Data cleared';

    for obj_name in
        select table_name
        from information_schema.tables
        where table_schema = source_schema
        order by table_name
    loop
        source_table := source_schema || '.' || obj_name;
        dest_table   := dest_schema || '.' || obj_name;

        execute 'create unlogged table ' || dest_table
            || ' (like ' || source_table || ' including comments)';

        alter_columns := (
            select string_agg('alter column ' || column_name || ' drop not null', ', ')
            from information_schema.columns
            where table_schema = dest_schema and table_name = obj_name
              and is_nullable = 'NO');

        if alter_columns is not null then
            execute 'alter table ' || dest_table || ' ' || alter_columns;
        end if;

        execute 'insert into ' || dest_table || ' select * from ' || source_table;
        raise notice '% done', obj_name;
    end loop;
end;
$$
language plpgsql;
As the destination schema is read-only, I create it without constraints to get maximum performance. I don't think the NOT NULL constraints are a big deal, but I decided to leave everything here as it was.
This solution worked, but I noticed that it took longer to copy the data than the static script did. Not dramatically, but consistently 20-30 seconds longer.
I decided to investigate. My first step was to comment out the insert into ... select * statement to find out how long everything else takes. It turned out that clearing and recreating all the tables takes only half a second. My guess was that the INSERT statements somehow run longer in a procedural context.
Then I added measurement of the execution time of each insert:
ts := clock_timestamp();
execute 'insert into ...';
raise notice '%: %', obj_name, clock_timestamp() - ts;
I also ran the old static script with \timing in psql. This showed that my assumption was wrong: all insert statements took more or less the same time, and were mostly even faster in the dynamic script (I suppose due to autocommit and the network round trip after each statement in psql). However, the overall time of the dynamic script was again longer than that of the static script.
Mysticism?
Then I added very verbose logging with timestamps like this:
raise notice '%: %', clock_timestamp()::timestamp(3), 'label';
I discovered that sometimes create table executes immediately, but sometimes it takes several seconds to finish. OK, but how come all these statements for all tables took just milliseconds to complete in my first experiment?
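While chasing this, a second psql session can show what such a slow statement is actually waiting on; the query below is only a generic sketch, and the wait_event columns assume PostgreSQL 9.6 or later:
select pid, state, wait_event_type, wait_event, now() - query_start as running_for, query
from pg_stat_activity
where state <> 'idle'
order by query_start;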
Then I basically split the single loop into two: the first one creates all the tables (which, as we now know, takes just milliseconds) and the second one only inserts the data:
do
$$
declare
    source_schema text := 'onto_oper';
    dest_schema   text := 'onto';
    obj_name      text;
    source_table  text;
    dest_table    text;
    alter_columns text;
begin
    raise notice 'Clearing data...';
    for dest_table in
        select table_schema || '.' || table_name
        from information_schema.tables
        where table_schema = dest_schema
        order by table_name
    loop
        execute 'drop table ' || dest_table;
    end loop;
    raise notice 'Data cleared';

    for obj_name in
        select table_name
        from information_schema.tables
        where table_schema = source_schema
        order by table_name
    loop
        source_table := source_schema || '.' || obj_name;
        dest_table   := dest_schema || '.' || obj_name;

        execute 'create unlogged table ' || dest_table
            || ' (like ' || source_table || ' including comments)';

        alter_columns := (
            select string_agg('alter column ' || column_name || ' drop not null', ', ')
            from information_schema.columns
            where table_schema = dest_schema and table_name = obj_name
              and is_nullable = 'NO');

        if alter_columns is not null then
            execute 'alter table ' || dest_table || ' ' || alter_columns;
        end if;
    end loop;
    raise notice 'All tables created';

    for obj_name in
        select table_name
        from information_schema.tables
        where table_schema = source_schema
        order by table_name
    loop
        source_table := source_schema || '.' || obj_name;
        dest_table   := dest_schema || '.' || obj_name;

        execute 'insert into ' || dest_table || ' select * from ' || source_table;
        raise notice '% done', obj_name;
    end loop;
end;
$$
language plpgsql;
Surprisingly, this fixed everything! This version even runs faster than the old static script!
So we arrive at a very weird conclusion: a create table that runs after inserts may sometimes take a long time. This is very frustrating. Even though I solved my problem, I don't understand why it happens. Does anybody have any idea?
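One more thing that might show where the time goes, although I have not verified it, is asking the server to log long lock waits while the script runs:
alter system set log_lock_waits = on;   -- logs a message when a statement waits longer than deadlock_timeout
select pg_reload_conf();                -- apply the change without a restart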

Related

How to move all timestamptz dates on the postgresql database?

I have a postgresql dump of a seed database. The dump was created a few months ago, so all the data in it is from the past. It is not very convenient to develop against old data because I always have to scroll the UI back to that past date.
I was thinking of automatically shifting every timestamptz field in the database by a specific offset. It sounds doable with a script that goes through the database schema, finds every timestamptz field, and builds an SQL UPDATE for each of them.
So, are there any ready-made solutions for this?
I solved it using this SQL query:
--
-- This SQL query shifts all timestamptz fields in the database
--
BEGIN;
DO $$
declare
sql_query text;
table_row record;
column_row record;
trigger_row record;
BEGIN
FOR table_row IN (
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema = 'public'
) LOOP
sql_query := '';
RAISE NOTICE 'Checking %', table_row.table_name;
FOR column_row IN (
SELECT column_name
FROM information_schema.columns
WHERE
table_schema = table_row.table_schema
AND table_name = table_row.table_name
AND udt_name = 'timestamptz'
AND is_updatable = 'YES'
) LOOP
sql_query := sql_query ||
'"' || column_row.column_name || '" = "' || column_row.column_name || '" + interval ''100'' day,';
END LOOP;
IF sql_query != '' THEN
sql_query := substr(sql_query,1, length(sql_query)-1); -- Remove last ","
sql_query := 'UPDATE ' || table_row.table_schema || '.' || table_row.table_name || ' SET ' || sql_query || ';';
-- There might be triggers on the table, so let's disable them before the update
FOR trigger_row IN (
SELECT trigger_name FROM information_schema.triggers WHERE
trigger_schema = table_row.table_schema
AND event_object_table = table_row.table_name
AND event_manipulation = 'UPDATE' and
(action_timing = 'BEFORE' or action_timing = 'AFTER')
) LOOP
sql_query := 'alter table ' || table_row.table_schema || '.' || table_row.table_name ||
' disable trigger ' || trigger_row.trigger_name || ';' ||
sql_query ||
'alter table ' || table_row.table_schema || '.' || table_row.table_name ||
' enable trigger ' || trigger_row.trigger_name || ';';
END LOOP;
-- Same for the row level security, disable it if it was enabled
IF (SELECT pg_class.oid FROM pg_class
LEFT JOIN pg_catalog.pg_namespace ON pg_catalog.pg_namespace.oid = pg_class.relnamespace
WHERE relname = table_row.table_name AND
pg_catalog.pg_namespace.nspname = table_row.table_schema AND relrowsecurity) IS NOT NULL THEN
sql_query := 'alter table ' || table_row.table_schema || '.' || table_row.table_name ||
' disable row level security;' ||
sql_query ||
'alter table ' || table_row.table_schema || '.' || table_row.table_name ||
' enable row level security;';
END IF;
RAISE NOTICE ' %', sql_query;
EXECUTE sql_query;
RAISE NOTICE '---------------------------';
END IF;
END LOOP;
END$$;
COMMIT;
Just add things to the database and then update them with this query; change the column name, table name, and the number of days you want the values incremented by:
UPDATE table_name
SET timestamptz = timestamptz + interval '1' day
WHERE 1 = 1;

Update Null columns to Zero dynamically in Redshift

Here is the code in SAS. It finds the numeric columns with blanks and replaces them with 0s:
DATA dummy_table;
SET dummy_table;
ARRAY DUMMY _NUMERIC_;
DO OVER DUMMY;
IF DUMMY=. THEN DUMMY=0;
END;
RUN;
I am trying to replicate this in Redshift; here is what I tried:
create or replace procedure sp_replace_null_to_zero(IN tbl_nm varchar) as $$
Begin
Execute 'declare ' ||
'tot_cnt int := (select count(*) from information_schema.columns where table_name = ' || tbl_nm || ');' ||
'init_loop int := 0; ' ||
'cn_nm varchar; '
Begin
While init_loop <= tot_cnt
Loop
Raise info 'init_loop = %', Init_loop;
Raise info 'tot_cnt = %', tot_cnt;
Execute 'Select column_name into cn_nm from information_schema.columns ' ||
'where table_name ='|| tbl_nm || ' and ordinal_position = init_loop ' ||
'and data_type not in (''character varying'',''date'',''text''); '
Raise info 'cn_nm = %', cn_nm;
if cn_nm is not null then
Execute 'Update ' || tbl_nm ||
'Set ' || cn_nm = 0 ||
'Where ' || cn_nm is null or cn_nm =' ';
end if;
init_loop = init_loop + 1;
end loop;
End;
End;
$$ language plpgsql;
Issues I am facing
When I pass the input parameter here, I get a count of 0:
tot_cnt int := (select count(*) from information_schema.columns where table_name = ' || tbl_nm || ');'
For testing purposes I tried hardcoding the table name inside the proc, and I get the error: amazon invalid operation: value for domain information_schema.cardinal_number violates check constraint "cardinal_number_domain_check"
Is this even possible in Redshift? How can I implement this logic, or is there another workaround?
Need expert advice here!
You can simply run an UPDATE over the table(s) using the NVL(cn_nm,0) function
UPDATE tbl_raw
SET col2 = NVL(col2,0);
However, UPDATE is a fairly expensive operation. Consider just using a view over your table that wraps the columns in NVL(cn_nm, 0):
CREATE VIEW tbl_clean
AS
SELECT col1
, NVL(col2,0) col2
FROM tbl_raw;

In postgres how can I delete all columns that share the same prefix

I have been using the following code for dropping all tables that share the same prefix (in this case, dropping all tables whose names start with 'supenh_'):
DO
$do$
DECLARE
_tbl text;
BEGIN
FOR _tbl IN
SELECT quote_ident(table_schema) || '.'
|| quote_ident(table_name) -- escape identifier and schema-qualify!
FROM information_schema.tables
WHERE table_name LIKE 'supenh_' || '%' -- your table name prefix
AND table_schema NOT LIKE 'pg_%' -- exclude system schemas
LOOP
-- RAISE NOTICE '%',
EXECUTE
'DROP TABLE ' || _tbl;
END LOOP;
END
$do$;
Is there a way to amend this code, or to use a different script, in order to drop from one specific table all the columns that start with the same prefix (for example, 'patient1_')?
You could write it as a PL/pgSQL function:
CREATE OR REPLACE FUNCTION drop_columns_with_prefix(tbl_name TEXT, column_prefix TEXT) RETURNS VOID AS
$BODY$
DECLARE
_column TEXT;
BEGIN
FOR _column IN
SELECT quote_ident(column_name)
FROM information_schema.columns
WHERE table_name = tbl_name
AND column_name LIKE column_prefix || '%'
AND table_schema NOT LIKE 'pg_%'
LOOP
-- RAISE NOTICE '%',
EXECUTE
'ALTER TABLE ' || tbl_name || ' DROP COLUMN ' || _column;
END LOOP;
END
$BODY$
LANGUAGE plpgsql VOLATILE;
Call it using:
SELECT drop_columns_with_prefix('tbl_name', 'prefix_');
Or if you don't want to use it as a function:
DO
$do$
DECLARE
_column TEXT;
BEGIN
FOR _column IN
SELECT quote_ident(column_name)
FROM information_schema.columns
WHERE table_name = 'tbl_name'
AND column_name LIKE 'prefix_%'
AND table_schema NOT LIKE 'pg_%'
LOOP
-- RAISE NOTICE '%',
EXECUTE
'ALTER TABLE tbl_name DROP COLUMN ' || _column;
END LOOP;
END
$do$;

Copy schema and create new schema with different name in the same data base

Is there a way to copy an existing schema and generate a new schema with another name in the same database in Postgres?
Use pg_dump to dump your current schema to an SQL-formatted file. Open the file, replace the schema name with the new name, and execute the script in your database to create the new schema and all the other objects inside it.
Check out this PostgreSQL wiki page. It contains a clone_schema function as you require, but that function only clones tables. The page refers to this post, which contains a function that clones everything you need for the schema. This function worked well for me; I managed to execute it through the JDBC API.
But I had some problems when the schema names contained - or capital letters. After some research I found out that the source of the problem is the quote_ident() function; the short sketch below shows what I mean. I changed the clone_schema function to work with any schema name and share the new version right after the sketch, hope it will help somebody.
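A short illustration of the quote_ident() behaviour, as I understand the problem (the schema names here are made up):
select quote_ident('My-Schema');   -- returns "My-Schema" (quoted, because of '-' and the capitals)
select quote_ident('myschema');    -- returns myschema (no quoting needed)
-- Catalog columns such as pg_namespace.nspname store the raw name (My-Schema), so a
-- comparison like  nspname = quote_ident(source_schema)  never matches a name that
-- needs quoting, which is why the function below builds '"' || schema || '"' explicitly.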
-- Function: clone_schema(text, text)
-- DROP FUNCTION clone_schema(text, text);
CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
RETURNS void AS
$BODY$
-- This function will clone all sequences, tables, data, views & functions from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);
DECLARE
src_oid oid;
tbl_oid oid;
func_oid oid;
object text;
buffer text;
srctbl text;
default_ text;
column_ text;
qry text;
dest_qry text;
v_def text;
seqval bigint;
sq_last_value bigint;
sq_max_value bigint;
sq_start_value bigint;
sq_increment_by bigint;
sq_min_value bigint;
sq_cache_value bigint;
sq_log_cnt bigint;
sq_is_called boolean;
sq_is_cycled boolean;
sq_cycled char(10);
BEGIN
-- Check that source_schema exists
SELECT oid INTO src_oid
FROM pg_namespace
WHERE nspname = source_schema;
IF NOT FOUND
THEN
RAISE EXCEPTION 'source schema % does not exist!', source_schema;
RETURN ;
END IF;
-- Check that dest_schema does not yet exist
PERFORM nspname
FROM pg_namespace
WHERE nspname = dest_schema;
IF FOUND
THEN
RAISE EXCEPTION 'dest schema % already exists!', dest_schema;
RETURN ;
END IF;
EXECUTE 'CREATE SCHEMA "' || dest_schema || '"';
-- Create sequences
-- TODO: Find a way to make this sequence's owner is the correct table.
FOR object IN
SELECT sequence_name::text
FROM information_schema.sequences
WHERE sequence_schema = source_schema
LOOP
EXECUTE 'CREATE SEQUENCE "' || dest_schema || '".' || quote_ident(object);
srctbl := '"' || source_schema || '".' || quote_ident(object);
EXECUTE 'SELECT last_value, max_value, start_value, increment_by, min_value, cache_value, log_cnt, is_cycled, is_called
FROM "' || source_schema || '".' || quote_ident(object) || ';'
INTO sq_last_value, sq_max_value, sq_start_value, sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, sq_is_called ;
IF sq_is_cycled
THEN
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;
EXECUTE 'ALTER SEQUENCE "' || dest_schema || '".' || quote_ident(object)
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH ' || sq_start_value
|| ' RESTART ' || sq_min_value
|| ' CACHE ' || sq_cache_value
|| sq_cycled || ' ;' ;
buffer := '"' || dest_schema || '".' || quote_ident(object);
IF include_recs
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || ', ' || sq_is_called || ');' ;
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value || ', ' || sq_is_called || ');' ;
END IF;
END LOOP;
-- Create tables
FOR object IN
SELECT TABLE_NAME::text
FROM information_schema.tables
WHERE table_schema = source_schema
AND table_type = 'BASE TABLE'
LOOP
buffer := '"' || dest_schema || '".' || quote_ident(object);
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE "' || source_schema || '".' || quote_ident(object)
|| ' INCLUDING ALL)';
IF include_recs
THEN
-- Insert records from source table
EXECUTE 'INSERT INTO ' || buffer || ' SELECT * FROM "' || source_schema || '".' || quote_ident(object) || ';';
END IF;
FOR column_, default_ IN
SELECT column_name::text,
REPLACE(column_default::text, source_schema, dest_schema)
FROM information_schema.COLUMNS
WHERE table_schema = dest_schema
AND TABLE_NAME = object
AND column_default LIKE 'nextval(%"' || source_schema || '"%::regclass)'
LOOP
EXECUTE 'ALTER TABLE ' || buffer || ' ALTER COLUMN ' || column_ || ' SET DEFAULT ' || default_;
END LOOP;
END LOOP;
-- add FK constraint
FOR qry IN
SELECT 'ALTER TABLE "' || dest_schema || '".' || quote_ident(rn.relname)
|| ' ADD CONSTRAINT ' || quote_ident(ct.conname) || ' ' || pg_get_constraintdef(ct.oid) || ';'
FROM pg_constraint ct
JOIN pg_class rn ON rn.oid = ct.conrelid
WHERE connamespace = src_oid
AND rn.relkind = 'r'
AND ct.contype = 'f'
LOOP
EXECUTE qry;
END LOOP;
-- Create views
FOR object IN
SELECT table_name::text,
view_definition
FROM information_schema.views
WHERE table_schema = source_schema
LOOP
buffer := '"' || dest_schema || '".' || quote_ident(object);
SELECT view_definition INTO v_def
FROM information_schema.views
WHERE table_schema = source_schema
AND table_name = quote_ident(object);
EXECUTE 'CREATE OR REPLACE VIEW ' || buffer || ' AS ' || v_def || ';' ;
END LOOP;
-- Create functions
FOR func_oid IN
SELECT oid
FROM pg_proc
WHERE pronamespace = src_oid
LOOP
SELECT pg_get_functiondef(func_oid) INTO qry;
SELECT replace(qry, source_schema, dest_schema) INTO dest_qry;
EXECUTE dest_qry;
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION clone_schema(text, text, boolean)
OWNER TO postgres;
I ran a few tests and found that the result still references the source schema. So here's my improved version:
-- Function: clone_schema(source text, dest text, include_records boolean default true, show_details boolean default false)
-- DROP FUNCTION clone_schema(text, text, boolean, boolean);
CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean DEFAULT true,
show_details boolean DEFAULT false)
RETURNS void AS
$BODY$
-- This function will clone all sequences, tables, data, views & functions from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema');
-- SELECT clone_schema('public', 'new_schema', TRUE);
-- SELECT clone_schema('public', 'new_schema', TRUE, TRUE);
DECLARE
src_oid oid;
tbl_oid oid;
func_oid oid;
object text;
buffer text;
srctbl text;
default_ text;
column_ text;
qry text;
xrec record;
dest_qry text;
v_def text;
seqval bigint;
sq_last_value bigint;
sq_max_value bigint;
sq_start_value bigint;
sq_increment_by bigint;
sq_min_value bigint;
sq_cache_value bigint;
sq_log_cnt bigint;
sq_is_called boolean;
sq_is_cycled boolean;
sq_cycled char(10);
rec record;
source_schema_dot text = source_schema || '.';
dest_schema_dot text = dest_schema || '.';
BEGIN
-- Check that source_schema exists
SELECT oid INTO src_oid
FROM pg_namespace
WHERE nspname = quote_ident(source_schema);
IF NOT FOUND
THEN
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
END IF;
-- Check that dest_schema does not yet exist
PERFORM nspname
FROM pg_namespace
WHERE nspname = quote_ident(dest_schema);
IF FOUND
THEN
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
END IF;
EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
-- Defaults search_path to destination schema
PERFORM set_config('search_path', dest_schema, true);
-- Create sequences
-- TODO: Find a way to make this sequence's owner is the correct table.
FOR object IN
SELECT sequence_name::text
FROM information_schema.sequences
WHERE sequence_schema = quote_ident(source_schema)
LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);
EXECUTE 'SELECT last_value, max_value, start_value, increment_by, min_value, cache_value, log_cnt, is_cycled, is_called
FROM ' || quote_ident(source_schema) || '.' || quote_ident(object) || ';'
INTO sq_last_value, sq_max_value, sq_start_value, sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, sq_is_called ;
IF sq_is_cycled
THEN
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;
EXECUTE 'ALTER SEQUENCE ' || quote_ident(dest_schema) || '.' || quote_ident(object)
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH ' || sq_start_value
|| ' RESTART ' || sq_min_value
|| ' CACHE ' || sq_cache_value
|| sq_cycled || ' ;' ;
buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || ', ' || sq_is_called || ');' ;
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value || ', ' || sq_is_called || ');' ;
END IF;
IF show_details THEN RAISE NOTICE 'Sequence created: %', object; END IF;
END LOOP;
-- Create tables
FOR object IN
SELECT TABLE_NAME::text
FROM information_schema.tables
WHERE table_schema = quote_ident(source_schema)
AND table_type = 'BASE TABLE'
LOOP
buffer := dest_schema || '.' || quote_ident(object);
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || quote_ident(source_schema) || '.' || quote_ident(object)
|| ' INCLUDING ALL)';
IF include_recs
THEN
-- Insert records from source table
EXECUTE 'INSERT INTO ' || buffer || ' SELECT * FROM ' || quote_ident(source_schema) || '.' || quote_ident(object) || ';';
END IF;
FOR column_, default_ IN
SELECT column_name::text,
REPLACE(column_default::text, source_schema, dest_schema)
FROM information_schema.COLUMNS
WHERE table_schema = dest_schema
AND TABLE_NAME = object
AND column_default LIKE 'nextval(%' || quote_ident(source_schema) || '%::regclass)'
LOOP
EXECUTE 'ALTER TABLE ' || buffer || ' ALTER COLUMN ' || column_ || ' SET DEFAULT ' || default_;
END LOOP;
IF show_details THEN RAISE NOTICE 'base table created: %', object; END IF;
END LOOP;
-- add FK constraint
FOR xrec IN
SELECT ct.conname as fk_name, rn.relname as tb_name, 'ALTER TABLE ' || quote_ident(dest_schema) || '.' || quote_ident(rn.relname)
|| ' ADD CONSTRAINT ' || quote_ident(ct.conname) || ' ' || replace(pg_get_constraintdef(ct.oid), source_schema_dot, '') || ';' as qry
FROM pg_constraint ct
JOIN pg_class rn ON rn.oid = ct.conrelid
WHERE connamespace = src_oid
AND rn.relkind = 'r'
AND ct.contype = 'f'
LOOP
IF show_details THEN RAISE NOTICE 'Creating FK constraint %.%...', xrec.tb_name, xrec.fk_name; END IF;
--RAISE NOTICE 'DEF: %', xrec.qry;
EXECUTE xrec.qry;
END LOOP;
-- Create functions
FOR xrec IN
SELECT proname as func_name, oid as func_oid
FROM pg_proc
WHERE pronamespace = src_oid
LOOP
IF show_details THEN RAISE NOTICE 'Creating function %...', xrec.func_name; END IF;
SELECT pg_get_functiondef(xrec.func_oid) INTO qry;
SELECT replace(qry, source_schema_dot, '') INTO dest_qry;
EXECUTE dest_qry;
END LOOP;
-- add Table Triggers
FOR rec IN
SELECT
trg.tgname AS trigger_name,
tbl.relname AS trigger_table,
CASE
WHEN trg.tgenabled='O' THEN 'ENABLED'
ELSE 'DISABLED'
END AS status,
CASE trg.tgtype::integer & 1
WHEN 1 THEN 'ROW'::text
ELSE 'STATEMENT'::text
END AS trigger_level,
CASE trg.tgtype::integer & 66
WHEN 2 THEN 'BEFORE'
WHEN 64 THEN 'INSTEAD OF'
ELSE 'AFTER'
END AS action_timing,
CASE trg.tgtype::integer & cast(60 AS int2)
WHEN 16 THEN 'UPDATE'
WHEN 8 THEN 'DELETE'
WHEN 4 THEN 'INSERT'
WHEN 20 THEN 'INSERT OR UPDATE'
WHEN 28 THEN 'INSERT OR UPDATE OR DELETE'
WHEN 24 THEN 'UPDATE OR DELETE'
WHEN 12 THEN 'INSERT OR DELETE'
WHEN 32 THEN 'TRUNCATE'
END AS trigger_event,
'EXECUTE PROCEDURE ' || (SELECT nspname FROM pg_namespace where oid = pc.pronamespace )
|| '.' || proname || '('
|| regexp_replace(replace(trim(trailing '\000' from encode(tgargs,'escape')), '\000',','),'{(.+)}','''{\1}''','g')
|| ')' as action_statement
FROM pg_trigger trg
JOIN pg_class tbl on trg.tgrelid = tbl.oid
JOIN pg_proc pc ON pc.oid = trg.tgfoid
WHERE trg.tgname not like 'RI_ConstraintTrigger%'
AND trg.tgname not like 'pg_sync_pg%'
AND tbl.relnamespace = (SELECT oid FROM pg_namespace where nspname = quote_ident(source_schema) )
LOOP
buffer := dest_schema || '.' || quote_ident(rec.trigger_table);
IF show_details THEN RAISE NOTICE 'Creating trigger % % % ON %...', rec.trigger_name, rec.action_timing, rec.trigger_event, rec.trigger_table; END IF;
EXECUTE 'CREATE TRIGGER ' || rec.trigger_name || ' ' || rec.action_timing
|| ' ' || rec.trigger_event || ' ON ' || buffer || ' FOR EACH '
|| rec.trigger_level || ' ' || replace(rec.action_statement, source_schema_dot, '');
END LOOP;
-- Create views
FOR object IN
SELECT table_name::text,
view_definition
FROM information_schema.views
WHERE table_schema = quote_ident(source_schema)
LOOP
buffer := dest_schema || '.' || quote_ident(object);
SELECT replace(view_definition, source_schema_dot, '') INTO v_def
FROM information_schema.views
WHERE table_schema = quote_ident(source_schema)
AND table_name = quote_ident(object);
IF show_details THEN RAISE NOTICE 'Creating view % AS %', object, regexp_replace(v_def, '[\n\r]+', ' ', 'g'); END IF;
EXECUTE 'CREATE OR REPLACE VIEW ' || buffer || ' AS ' || v_def || ';' ;
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
You can use this simple script:
DO LANGUAGE plpgsql
$body$
DECLARE
old_schema NAME = 'src_schema';
new_schema NAME = 'dst_schema';
tbl TEXT;
sql TEXT;
BEGIN
EXECUTE format('CREATE SCHEMA IF NOT EXISTS %I', new_schema);
FOR tbl IN
SELECT table_name
FROM information_schema.tables
WHERE table_schema=old_schema
LOOP
sql := format(
'CREATE TABLE IF NOT EXISTS %I.%I '
'(LIKE %I.%I INCLUDING INDEXES INCLUDING CONSTRAINTS)'
, new_schema, tbl, old_schema, tbl);
EXECUTE sql;
sql := format(
'INSERT INTO %I.%I '
'SELECT * FROM %I.%I'
, new_schema, tbl, old_schema, tbl);
EXECUTE sql;
END LOOP;
END
$body$;
If by copying a schema you mean copying a database, then just use the TEMPLATE option to create a copy: CREATE DATABASE dbname_target TEMPLATE dbname_source;
This will copy the data too, so you might want to create your own template if you need many copies. See Template Databases.
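A reusable template could look something like this (just a sketch; the database names are placeholders):
CREATE DATABASE my_template TEMPLATE dbname_source IS_TEMPLATE true;  -- keep this one around as a template
CREATE DATABASE dbname_copy TEMPLATE my_template;                     -- further copies can be created from it at any time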
If you only need the schema, then I suggest you put your DB DDL scripts under source control (which is a good idea anyway) and have a separate (templated) script that creates the schema for you, as sketched below. Basically you have one SQL file in which you replace ${schema_name} with your new schema name and then execute the script against the database. If you later change the schema, you can also have scripts that migrate it to a new version, which you will then have to run for every user schema.
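A minimal sketch of such a templated file (the table inside is made up purely for illustration):
-- schema_template.sql: replace ${schema_name} with the real schema name before running
CREATE SCHEMA ${schema_name};
CREATE TABLE ${schema_name}.customer (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);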
Using @IdanDavidi's solution, I was able to solve the case where sequences were owned by, and referred to, the source schema instead of the destination schema.
-- Function: clone_schema(text, text)
-- DROP FUNCTION clone_schema(text, text);
CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
RETURNS void AS
$BODY$
-- This function will clone all sequences, tables, data, views & functions from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);
DECLARE
src_oid oid;
tbl_oid oid;
func_oid oid;
table_rec record;
seq_rec record;
object text;
sequence_ text;
table_ text;
buffer text;
seq_buffer text;
table_buffer text;
srctbl text;
default_ text;
column_ text;
qry text;
dest_qry text;
v_def text;
seqval bigint;
sq_last_value bigint;
sq_max_value bigint;
sq_start_value bigint;
sq_increment_by bigint;
sq_min_value bigint;
sq_cache_value bigint;
sq_log_cnt bigint;
sq_is_called boolean;
sq_is_cycled boolean;
sq_cycled char(10);
BEGIN
-- Check that source_schema exists
SELECT oid INTO src_oid
FROM pg_namespace
WHERE nspname = source_schema;
IF NOT FOUND
THEN
RAISE EXCEPTION 'source schema % does not exist!', source_schema;
RETURN ;
END IF;
-- Check that dest_schema does not yet exist
PERFORM nspname
FROM pg_namespace
WHERE nspname = dest_schema;
IF FOUND
THEN
RAISE EXCEPTION 'dest schema % already exists!', dest_schema;
RETURN ;
END IF;
EXECUTE 'CREATE SCHEMA "' || dest_schema || '"';
-- Create tables
FOR object IN
SELECT TABLE_NAME::text
FROM information_schema.tables
WHERE table_schema = source_schema
AND table_type = 'BASE TABLE'
LOOP
buffer := '"' || dest_schema || '".' || quote_ident(object);
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE "' || source_schema || '".' || quote_ident(object)
|| ' INCLUDING ALL);';
IF include_recs
THEN
-- Insert records from source table
EXECUTE 'INSERT INTO ' || buffer || ' SELECT * FROM "' || source_schema || '".' || quote_ident(object) || ';';
END IF;
END LOOP;
-- add FK constraint
FOR qry IN
SELECT 'ALTER TABLE "' || dest_schema || '".' || quote_ident(rn.relname)
|| ' ADD CONSTRAINT ' || quote_ident(ct.conname) || ' ' || pg_get_constraintdef(ct.oid) || ';'
FROM pg_constraint ct
JOIN pg_class rn ON rn.oid = ct.conrelid
WHERE connamespace = src_oid
AND rn.relkind = 'r'
AND ct.contype = 'f'
LOOP
EXECUTE qry;
END LOOP;
-- Create sequences
FOR seq_rec IN
SELECT
s.sequence_name::text,
table_name,
column_name
FROM information_schema.sequences s
JOIN (
SELECT
substring(column_default from E'^nextval\\(''(?:[^"'']?.*["'']?\\.)?([^'']*)''(?:::text|::regclass)?\\)')::text as seq_name,
table_name,
column_name
FROM information_schema.columns
WHERE column_default LIKE 'nextval%'
AND table_schema = source_schema
) c ON c.seq_name = s.sequence_name
WHERE sequence_schema = source_schema
LOOP
seq_buffer := quote_ident(dest_schema) || '.' || quote_ident(seq_rec.sequence_name);
EXECUTE 'CREATE SEQUENCE ' || seq_buffer || ';';
qry := 'SELECT last_value, max_value, start_value, increment_by, min_value, cache_value, log_cnt, is_cycled, is_called
FROM "' || source_schema || '".' || quote_ident(seq_rec.sequence_name) || ';';
EXECUTE qry INTO sq_last_value, sq_max_value, sq_start_value, sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, sq_is_called ;
IF sq_is_cycled
THEN
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;
EXECUTE 'ALTER SEQUENCE ' || seq_buffer
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH ' || sq_start_value
|| ' RESTART ' || sq_min_value
|| ' CACHE ' || sq_cache_value
|| ' OWNED BY ' || quote_ident(dest_schema ) || '.'
|| quote_ident(seq_rec.table_name) || '.'
|| quote_ident(seq_rec.column_name) || ' '
|| sq_cycled || ' ;' ;
IF include_recs
THEN
EXECUTE 'SELECT setval( ''' || seq_buffer || ''', ' || sq_last_value || ', ' || sq_is_called || ');' ;
ELSE
EXECUTE 'SELECT setval( ''' || seq_buffer || ''', ' || sq_start_value || ', ' || sq_is_called || ');' ;
END IF;
table_buffer := quote_ident(dest_schema) || '.' || quote_ident(seq_rec.table_name);
FOR table_rec IN
SELECT column_name::text AS column_,
REPLACE(column_default::text, source_schema, quote_ident(dest_schema)) AS default_
FROM information_schema.COLUMNS
WHERE table_schema = dest_schema
AND TABLE_NAME = seq_rec.table_name
AND column_default LIKE 'nextval(%' || seq_rec.sequence_name || '%::regclass)'
LOOP
EXECUTE 'ALTER TABLE ' || table_buffer || ' ALTER COLUMN ' || table_rec.column_ || ' SET DEFAULT nextval(' || quote_literal(seq_buffer) || '::regclass);';
END LOOP;
END LOOP;
-- Create views
FOR object IN
SELECT table_name::text,
view_definition
FROM information_schema.views
WHERE table_schema = source_schema
LOOP
buffer := '"' || dest_schema || '".' || quote_ident(object);
SELECT view_definition INTO v_def
FROM information_schema.views
WHERE table_schema = source_schema
AND table_name = quote_ident(object);
EXECUTE 'CREATE OR REPLACE VIEW ' || buffer || ' AS ' || v_def || ';' ;
END LOOP;
-- Create functions
FOR func_oid IN
SELECT oid
FROM pg_proc
WHERE pronamespace = src_oid
LOOP
SELECT pg_get_functiondef(func_oid) INTO qry;
SELECT replace(qry, source_schema, dest_schema) INTO dest_qry;
EXECUTE dest_qry;
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
This seems to be the best solution I have come up with.
The idea is to use pg_dump with the -O (no owner) and -o (oids) options to get plain-text output without the source schema and owner information.
I then filter that output through sed, replacing the default entry
SET search_path = source_schema, pg_catalog;
with a command to create the new schema and set the default search path to it:
CREATE SCHEMA new_schema;
SET search_path = new_schema, pg_catalog;
After that I pipe the stream to psql, logging in as the desired user and connecting to the database into which the copy of the schema will be transferred.
The final command to copy schema 'public' to schema '2016' in the same database 'b1' looks like this:
pg_dump -U postgres -Oo -n public -d b1 | sed 's/SET search_path = public, pg_catalog;/CREATE SCHEMA "2016";SET search_path = "2016", pg_catalog;/' | psql -U postgres -d b1
Please note that GRANTs are not transferred from the source schema to the new one.
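If the applications need them, privileges have to be granted on the new schema by hand afterwards, for example (the role name is just a placeholder):
GRANT USAGE ON SCHEMA "2016" TO app_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "2016" TO app_user;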
In case you are OK with only tables and columns (without constraints, keys, etc.), this simple script could be helpful:
DO LANGUAGE plpgsql
$body$
DECLARE
old_schema NAME = 'src_schema';
new_schema NAME = 'dst_schema';
tbl TEXT;
sql TEXT;
BEGIN
EXECUTE format('CREATE SCHEMA IF NOT EXISTS %I', new_schema);
FOR tbl IN
SELECT table_name
FROM information_schema.tables
WHERE table_schema=old_schema
LOOP
sql := format(
'CREATE TABLE IF NOT EXISTS %I.%I '
'AS '
'SELECT * FROM %I.%I'
, new_schema, tbl, old_schema, tbl);
raise notice 'Sql: %', sql;
EXECUTE sql;
END LOOP;
END
$body$;

How do I delete the data from all my tables in ORACLE 10g

I have an ORACLE schema containing hundreds of tables. I would like to delete the data from all the tables (but don't want to DROP the tables).
Is there an easy way to do this, or do I have to write an SQL script that retrieves all the table names and runs the TRUNCATE command on each?
I would like to delete the data using commands in an SQL*Plus session.
If you have any referential integrity constraints (foreign keys) then truncate won't work; you cannot truncate the parent table if any child tables exist, even if the children are empty.
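A minimal illustration of that restriction (the table and constraint names here are made up):
truncate table parent_t;
-- fails while child_t has an enabled foreign key referencing parent_t
alter table child_t disable constraint fk_child_parent;
truncate table parent_t;   -- now succeeds
alter table child_t enable constraint fk_child_parent;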
The following PL/SQL should (it's untested, but I've run similar code in the past) iterate over the tables, disabling all the foreign keys, truncating them, then re-enabling all the foreign keys. If a table in another schema has an RI constraint against your table, this script will fail.
set serveroutput on size unlimited
declare
l_sql varchar2(2000);
l_debug number := 1; -- will output results if non-zero
-- will execute sql if 0
l_drop_user varchar2(30) := ''; -- set the user whose tables you're dropping
begin
for i in (select table_name, constraint_name from dba_constraints
where owner = l_drop_user
and constraint_type = 'R'
and status = 'ENABLED')
loop
l_sql := 'alter table ' || l_drop_user || '.' || i.table_name ||
' disable constraint ' || i.constraint_name;
if l_debug = 0 then
execute immediate l_sql;
else
dbms_output.put_line(l_sql);
end if;
end loop;
for i in (select table_name from dba_tables
where owner = l_drop_user
minus
select view_name from dba_views
where owner = l_drop_user)
loop
l_sql := 'truncate table ' || l_drop_user || '.' || i.table_name ;
if l_debug = 0 then
execute immediate l_sql;
else
dbms_output.put_line(l_sql);
end if;
end loop;
for i in (select table_name, constraint_name from dba_constraints
where owner = l_drop_user
and constraint_type = 'R'
and status = 'DISABLED')
loop
l_sql := 'alter table ' || l_drop_user || '.' || i.table_name ||
' enable constraint ' || i.constraint_name;
if l_debug = 0 then
execute immediate l_sql;
else
dbms_output.put_line(l_sql);
end if;
end loop;
end;
/
Probably the easiest way is to export the schema without data, then drop and re-import it.
I was looking at this too.
Seems like you do need to go through all the table names.
Have you seen this? Seems to do the trick.
I had to do this recently and wrote a stored procedure which you can run via exec sp_truncate;. Most of the code is based on this answer on disabling constraints:
CREATE OR REPLACE PROCEDURE sp_truncate AS
BEGIN
-- Disable all constraints
FOR c IN
(SELECT c.owner, c.table_name, c.constraint_name
FROM user_constraints c, user_tables t
WHERE c.table_name = t.table_name
AND c.status = 'ENABLED'
ORDER BY c.constraint_type DESC)
LOOP
DBMS_UTILITY.EXEC_DDL_STATEMENT('ALTER TABLE ' || c.owner || '.' || c.table_name || ' disable constraint ' || c.constraint_name);
DBMS_OUTPUT.PUT_LINE('Disabled constraints for table ' || c.table_name);
END LOOP;
-- Truncate data in all tables
FOR i IN (SELECT table_name FROM user_tables)
LOOP
EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || i.table_name;
DBMS_OUTPUT.PUT_LINE('Truncated table ' || i.table_name);
END LOOP;
-- Enable all constraints
FOR c IN
(SELECT c.owner, c.table_name, c.constraint_name
FROM user_constraints c, user_tables t
WHERE c.table_name = t.table_name
AND c.status = 'DISABLED'
ORDER BY c.constraint_type)
LOOP
DBMS_UTILITY.EXEC_DDL_STATEMENT('ALTER TABLE ' || c.owner || '.' || c.table_name || ' enable constraint ' || c.constraint_name);
DBMS_OUTPUT.PUT_LINE('Enabled constraints for table ' || c.table_name);
END LOOP;
COMMIT;
END sp_truncate;
/
Putting the details from the OTN Discussion Forums thread 'truncating multiple tables with single query' into one SQL script gives the following, which can be run in an SQL*Plus session:
SET SERVEROUTPUT ON
BEGIN
-- Disable constraints
DBMS_OUTPUT.PUT_LINE ('Disabling constraints');
FOR reg IN (SELECT uc.table_name, uc.constraint_name FROM user_constraints uc) LOOP
EXECUTE IMMEDIATE 'ALTER TABLE ' || reg.table_name || ' ' || 'DISABLE' ||
' CONSTRAINT ' || reg.constraint_name || ' CASCADE';
END LOOP;
-- Truncate tables
DBMS_OUTPUT.PUT_LINE ('Truncating tables');
FOR reg IN (SELECT table_name FROM user_tables) LOOP
EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || reg.table_name;
END LOOP;
-- Enable constraints
DBMS_OUTPUT.PUT_LINE ('Enabling constraints');
FOR reg IN (SELECT uc.table_name, uc.constraint_name FROM user_constraints uc) LOOP
EXECUTE IMMEDIATE 'ALTER TABLE ' || reg.table_name || ' ' || 'ENABLE' ||
' CONSTRAINT ' || reg.constraint_name;
END LOOP;
END;
/