DB2: Extract definition of a global temp table created using "as select"

In a Perl DBI script, I need to create a DB2 temp table on database A using "as select" to define the columns.
Then, I need to redefine the temp table on another database, B.
Do any of you have hints on how to generate the DDL of the global temp table on database A?
Thanks,
Mike

You can always build DDL by accessing the catalog tables
SELECT
NAME || ' ' || COLTYPE || '(' || CHAR(LENGTH) || ') ,'
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = ?
AND TBCREATOR = USER
Set the ? parameter to the GTT's name.
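If your DB2 release has LISTAGG, you can assemble the whole statement in one query along the same lines. This is only a sketch: whether you need DECLARE or CREATE GLOBAL TEMPORARY TABLE, the SESSION qualifier, and the ON COMMIT clause are assumptions to adjust for your setup, and like the query above it only handles simple length-typed columns (DECIMAL and friends need extra handling):
SELECT 'DECLARE GLOBAL TEMPORARY TABLE SESSION.' || TBNAME || ' (' ||
       LISTAGG(NAME || ' ' || COLTYPE || '(' || CHAR(LENGTH) || ')', ', ')
         WITHIN GROUP (ORDER BY COLNO) ||
       ') ON COMMIT PRESERVE ROWS'
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = ?
  AND TBCREATOR = USER
GROUP BY TBNAME
In the Perl DBI script you would run this against database A, fetch the single string it returns, and execute that string against database B.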

Related

postgresql - how to use a cursor or select statement to generate multiple DML statements

New to postgres and I'm using Postgresql 9.3. Is there a way with postgresql to generate a file with multiple DML statements?
For example, I want to select table names where tablename like '_foo%' and then rename all those tables to '_bar%'. Do I need to do this in a cursor, or can I do it within a select statement (like in Oracle)?
ALTER TABLE tst1_foo RENAME TO tst1_bar;
ALTER TABLE tst2_foo RENAME TO tst2_bar;
ALTER TABLE tst3_foo RENAME TO tst3_bar;
I'd like to print those out to a .sql file.
Please provide a basic example if possible. Thanks.
You can use psql and the pg_tables system view. Set the output to unaligned mode:
\a
Set the output to show only rows:
\t on
Send output to your file:
\o yourfile.sql
Run the query:
SELECT 'ALTER TABLE ' || tablename || ' RENAME TO ' ||
REGEXP_REPLACE ( tablename, '_foo$', '_bar' ) || ';'
FROM pg_tables
WHERE tablename LIKE '%_foo';
Close the file:
\o
and/or close psql:
\q
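Put together, the whole thing can be a single psql script (the file name and the schemaname filter are assumptions; the backslash in the LIKE pattern keeps the underscore from acting as a wildcard, and the final \i runs the generated statements):
\a
\t on
\o yourfile.sql
SELECT 'ALTER TABLE ' || tablename || ' RENAME TO ' ||
       REGEXP_REPLACE ( tablename, '_foo$', '_bar' ) || ';'
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename LIKE '%\_foo';
\o
\i yourfile.sql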

Executing dynamic SQL in Postgres 8.2

I am trying to create a table inside a function using dynamic SQL and immediately copy it into another table.
execute 'create table week_temp as
select w.*, ww.*
from employer_weekly w
left join employer_weekly_' || $1 || '_2 ww
on w.w_employer::int = ww.emp_' || $1 || '::int';
drop table if exists employer_weekly;
create table employer_weekly as select * from week_temp;
I am receiving the following error:
Error : ERROR: relation with OID 9288742 does not exist
CONTEXT: SQL statement "create table employer_weekly as select * from
week_temp"
Checking manually, I can see week_temp and can access it correctly.
Would appreciate any clues!
What #a_horse_with_no_name said.
But the problem you are seeing is that plpgsql caches plans for the non-dynamic statements in your function, and those plans reference tables by OID. This means the "FROM week_temp" is looking at an older version of week_temp that no longer exists (hence the stale OID in the error). Make that statement dynamic SQL too and it should work.
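Concretely, a minimal sketch with every statement that touches week_temp run through EXECUTE (the statements themselves are unchanged from the question):
execute 'create table week_temp as
select w.*, ww.*
from employer_weekly w
left join employer_weekly_' || $1 || '_2 ww
on w.w_employer::int = ww.emp_' || $1 || '::int';
execute 'drop table if exists employer_weekly';
execute 'create table employer_weekly as select * from week_temp';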

How to duplicate schemas in PostgreSQL

I have a database with schemas public and schema_A. I need to create a new schema, schema_B, with the same structure as schema_A.
I found the function below; the problem is that it does not copy the foreign key constraints.
CREATE OR REPLACE FUNCTION clone_schema(source_schema text, dest_schema text)
RETURNS void AS
$BODY$
DECLARE
    object   text;
    buffer   text;
    default_ text;
    column_  text;
BEGIN
    EXECUTE 'CREATE SCHEMA ' || dest_schema;

    -- TODO: find a way to make each sequence owned by the correct table.
    FOR object IN
        SELECT sequence_name::text
        FROM information_schema.sequences
        WHERE sequence_schema = source_schema
    LOOP
        EXECUTE 'CREATE SEQUENCE ' || dest_schema || '.' || object;
    END LOOP;

    FOR object IN
        SELECT table_name::text
        FROM information_schema.tables
        WHERE table_schema = source_schema
    LOOP
        buffer := dest_schema || '.' || object;
        EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || source_schema || '.' || object
             || ' INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING DEFAULTS)';

        FOR column_, default_ IN
            SELECT column_name::text,
                   REPLACE(column_default::text, source_schema, dest_schema)
            FROM information_schema.columns
            WHERE table_schema = dest_schema
              AND table_name = object
              AND column_default LIKE 'nextval(%' || source_schema || '%::regclass)'
        LOOP
            EXECUTE 'ALTER TABLE ' || buffer || ' ALTER COLUMN ' || column_
                 || ' SET DEFAULT ' || default_;
        END LOOP;
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql;
How can I clone/copy schema_A with the foreign key constraints?
You can probably do it from the command line without using files:
pg_dump -U user --schema='fromschema' database | sed 's/fromschema/toschema/g' | psql -U user -d database
Note that this searches and replaces all occurrences of the string that is your schema name, so it may affect your data.
I would use pg_dump to dump the schema without data:
-s
--schema-only
Dump only the object definitions (schema), not data.
This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data.
(Do not confuse this with the --schema option, which uses the word "schema" in a different meaning.)
To exclude table data for only a subset of tables in the database, see --exclude-table-data.
pg_dump $DB -p $PORT -n $SCHEMA -s -f filename.pgsql
Then rename the schema in the dump (search & replace) and restore it with psql.
psql $DB -f filename.pgsql
Foreign key constraints that reference tables in other schemas are copied unchanged and keep pointing at those schemas.
References to tables within the dumped schema end up pointing at the corresponding tables in the copied schema.
I will share the solution to my problem, which was the same but with a small addition: I needed to clone a schema, create a new database user, and assign ownership of all objects in the new schema to that user.
For the following example let's assume that the reference schema is called ref_schema and the target schema new_schema. The reference schema and all the objects within are owned by a user called ref_user.
1. dump the reference schema with pg_dump:
pg_dump -n ref_schema -f dump.sql database_name
2. create a new database user with the name new_user:
CREATE USER new_user
3. rename the schema ref_schema to new_schema:
ALTER SCHEMA ref_schema RENAME TO new_schema
4. change ownership of all objects in the renamed schema to the new user
REASSIGN OWNED BY ref_user TO new_user
5. restore the original reference schema from the dump
psql -f dump.sql database_name
I hope someone finds this helpful.
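For reference, steps 2 to 4 are plain SQL and can be run in one psql session against database_name before restoring the dump. Note that REASSIGN OWNED affects every object ref_user owns in that database, not only the renamed schema:
CREATE USER new_user;
ALTER SCHEMA ref_schema RENAME TO new_schema;
REASSIGN OWNED BY ref_user TO new_user;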
A bit late to the party, but some SQL here could help you along your way:
get schema oid:
namespace_id = SELECT oid
FROM pg_namespace
WHERE nspname = '<schema name>';
get table's oid:
table_id = SELECT oid
FROM pg_class
WHERE relnamespace = '<namespace_id>' AND relname = '<table_name>'
get foreign key constraints:
SELECT con.conname, pg_catalog.pg_get_constraintdef(con.oid) AS condef
FROM pg_catalog.pg_constraint AS con
JOIN pg_class AS cl ON cl.oid = con.conrelid
WHERE con.conrelid = '<table_id>'::pg_catalog.oid AND con.contype = 'f';
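Building on that, you can let the catalog generate the missing ALTER TABLE ... ADD CONSTRAINT statements for a cloned schema. A sketch, assuming the schemas are called source_schema and dest_schema; pg_get_constraintdef() may or may not schema-qualify the referenced tables depending on your search_path, so the REPLACE is only a rough fix to double-check:
SELECT 'ALTER TABLE dest_schema.' || cl.relname ||
       ' ADD CONSTRAINT ' || con.conname || ' ' ||
       REPLACE(pg_catalog.pg_get_constraintdef(con.oid),
               'source_schema.', 'dest_schema.') || ';'
FROM pg_catalog.pg_constraint AS con
JOIN pg_catalog.pg_class AS cl ON cl.oid = con.conrelid
JOIN pg_catalog.pg_namespace AS ns ON ns.oid = cl.relnamespace
WHERE ns.nspname = 'source_schema'
  AND con.contype = 'f';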
A good resource for PostgreSQL system tables can be found here. Additionally, you can learn more about the internal queries pg_dump makes to gather dump information by viewing its source code.
Probably the easiest way to see how pg_dump gathers all your data would be to use strace on it, like so:
$ strace -f -e sendto -s8192 -o pg_dump.trace pg_dump -s -n <schema>
$ grep -oP '(SET|SELECT)\s.+(?=\\0)' pg_dump.trace
You'll still have to sort through the morass of statements, but it should help you piece together a cloning tool programmatically and avoid having to shell out to invoke pg_dump.
Just ran into the same problem. Sometimes I miss remap_schema :)
The problem: none of the approaches above handles the -Fc (custom) dump format, which is crucial for large schemas.
So I came up with something that uses it.
Roughly, the commands look like this (adjust names and connection options); it does require renaming the source schema for the duration of the pg_dump, which, of course, might not be an option :(
Source:
pg_dump -n source_schema --section=pre-data -f pre-data.sql "$DB"
psql "$DB" -c 'ALTER SCHEMA source_schema RENAME TO target_schema'
pg_dump -n target_schema -Fc --data-only -f data.dump "$DB"
psql "$DB" -c 'ALTER SCHEMA target_schema RENAME TO source_schema'
pg_dump -n source_schema --section=post-data -f post-data.sql "$DB"
Target:
sed 's/source_schema/target_schema/g' pre-data.sql | psql "$DB"
pg_restore -d "$DB" data.dump
sed 's/source_schema/target_schema/g' post-data.sql | psql "$DB"
The sed step will usually also include other manipulations (say, different user names between source and target), but it is much faster this way because the data is not part of the files being edited.

Exporting sequences in PostgreSQL

I want to export ONLY the sequences created in a PostgreSQL database.
Is there any option to do that?
Thank you!
You could write a query to generate a script that will create your existing sequence objects by querying this information schema view.
select *
from information_schema.sequences;
Something like this.
SELECT 'CREATE SEQUENCE ' || sequence_name || ' START ' || start_value || ';'
from information_schema.sequences;
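If you also want the increment, bounds, and cycle option carried over, a slightly fuller sketch from the same view (note that start_value is the declared start, not the sequence's current position):
SELECT 'CREATE SEQUENCE ' || sequence_schema || '.' || sequence_name ||
       ' INCREMENT BY ' || increment ||
       ' MINVALUE ' || minimum_value ||
       ' MAXVALUE ' || maximum_value ||
       ' START WITH ' || start_value ||
       CASE WHEN cycle_option = 'YES' THEN ' CYCLE' ELSE '' END || ';'
FROM information_schema.sequences;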
I know it's an old question, but today I had a similar requirement, so I tried to solve it the same way: by generating a series of "CREATE SEQUENCE" statements which can be used to re-create the sequences on another DB after a bad import (missing sequences).
Here is the SQL I used:
SELECT
'CREATE SEQUENCE '||c.relname||
' START '||(select setval(c.relname::text, nextval(c.relname::text)-1))
AS "CREATE SEQUENCE SQLs"
FROM
pg_class c
WHERE
c.relkind = 'S'
Maybe that can be helpful for someone.
Using DBeaver, you can
open a schema
select its sequences
Ctrl-F to search for the sequences you're interested in
Ctrl-A to select all of them
Right-click and select Generate SQL -> DDL
You will be given SQL statements to create all of the sequences selected.

Drop all functions from Postgres database

I have a database with an old broken version of PostGIS installed in it. I would like to easily drop all functions in the database (they're all from PostGIS). Is there a simple way to do this? Even simply extracting a list of function names would be acceptable as I could just make a large DROP FUNCTION statement.
A fine answer to this question can be found here:
SELECT 'DROP FUNCTION ' || ns.nspname || '.' || proname
|| '(' || oidvectortypes(proargtypes) || ');'
FROM pg_proc INNER JOIN pg_namespace ns ON (pg_proc.pronamespace = ns.oid)
WHERE ns.nspname = 'my_messed_up_schema' order by proname;
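On 8.4 or later, pg_get_function_identity_arguments() builds the argument list in the form DROP FUNCTION expects; a variant sketch of the same query:
SELECT 'DROP FUNCTION ' || ns.nspname || '.' || p.proname
    || '(' || pg_get_function_identity_arguments(p.oid) || ');'
FROM pg_proc p INNER JOIN pg_namespace ns ON (p.pronamespace = ns.oid)
WHERE ns.nspname = 'my_messed_up_schema' ORDER BY p.proname;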
Just as there was a postgis.sql enabler install script, there is also an uninstall_postgis.sql uninstall script.
psql -d [yourdatabase] -f /path/to/uninstall_postgis.sql
Warning: Be prepared to see your geometry/geography columns and data disappear!