I have a database with schemas public and schema_a. I need to create a new schema, schema_b, with the same structure as schema_a.
I found the function below; the problem is that it does not copy foreign key constraints.
CREATE OR REPLACE FUNCTION clone_schema(source_schema text, dest_schema text)
RETURNS void AS
$BODY$
DECLARE
object text;
buffer text;
default_ text;
column_ text;
BEGIN
EXECUTE 'CREATE SCHEMA ' || dest_schema ;
-- TODO: find a way to set each sequence's owner to the correct table.
FOR object IN
SELECT sequence_name::text FROM information_schema.SEQUENCES WHERE sequence_schema = source_schema
LOOP
EXECUTE 'CREATE SEQUENCE ' || dest_schema || '.' || object;
END LOOP;
FOR object IN
SELECT table_name::text FROM information_schema.TABLES WHERE table_schema = source_schema
LOOP
buffer := dest_schema || '.' || object;
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || source_schema || '.' || object || ' INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING DEFAULTS)';
FOR column_, default_ IN
SELECT column_name::text, REPLACE(column_default::text, source_schema, dest_schema) FROM information_schema.COLUMNS WHERE table_schema = dest_schema AND table_name = object AND column_default LIKE 'nextval(%' || source_schema || '%::regclass)'
LOOP
EXECUTE 'ALTER TABLE ' || buffer || ' ALTER COLUMN ' || column_ || ' SET DEFAULT ' || default_;
END LOOP;
END LOOP;
END;
$BODY$ LANGUAGE plpgsql;
How can I clone/copy schema_a including the foreign key constraints?
You can probably do it from the command line without using files:
pg_dump -U user --schema='fromschema' database | sed 's/fromschema/toschema/g' | psql -U user -d database
Note that this searches and replaces all occurrences of the string that is your schema name, so it may affect your data.
I would use pg_dump to dump the schema without data:
-s
--schema-only
Dump only the object definitions (schema), not data.
This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data.
(Do not confuse this with the --schema option, which uses the word "schema" in a different meaning.)
To exclude table data for only a subset of tables in the database, see --exclude-table-data.
pg_dump $DB -p $PORT -n $SCHEMA -s -f filename.pgsql
Then rename the schema in the dump (search & replace) and restore it with psql.
psql $DB -f filename.pgsql
Foreign key constraints referencing tables in other schemas are copied to point to the same schema.
References to tables within the same schema point to the respective tables within the copied schema.
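Put together, the whole round trip might look like this (a sketch; oldschema and newschema are placeholder names, and even with -s the global sed can still touch occurrences of the schema name inside function bodies or comments):
pg_dump $DB -p $PORT -n oldschema -s -f filename.pgsql
sed 's/oldschema/newschema/g' filename.pgsql > renamed.pgsql
psql $DB -p $PORT -f renamed.pgsql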
I will share a solution for my problem, which was the same but with a small addition: I needed to clone a schema, create a new database user, and assign ownership of all objects in the new schema to that user.
For the following example, let's assume that the reference schema is called ref_schema and the target schema new_schema. The reference schema and all the objects within it are owned by a user called ref_user.
1. dump the reference schema with pg_dump:
pg_dump -n ref_schema -f dump.sql database_name
2. create a new database user with the name new_user:
CREATE USER new_user;
3. rename the schema ref_schema to new_schema:
ALTER SCHEMA ref_schema RENAME TO new_schema;
4. change ownership to the new user (note: REASSIGN OWNED reassigns every object owned by ref_user in the whole database, not only those in the renamed schema):
REASSIGN OWNED BY ref_user TO new_user;
5. restore the original reference schema from the dump
psql -f dump.sql database_name
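The five steps as one shell sequence (a sketch; it assumes you connect as a user with sufficient privileges):
pg_dump -n ref_schema -f dump.sql database_name
psql -d database_name -c 'CREATE USER new_user'
psql -d database_name -c 'ALTER SCHEMA ref_schema RENAME TO new_schema'
psql -d database_name -c 'REASSIGN OWNED BY ref_user TO new_user'
psql -f dump.sql database_name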
I hope someone finds this helpful.
A bit late to the party, but some SQL here could help you along your way:
get schema oid:
SELECT oid AS namespace_id
FROM pg_namespace
WHERE nspname = '<schema name>';
get table's oid:
SELECT oid AS table_id
FROM pg_class
WHERE relnamespace = '<namespace_id>' AND relname = '<table_name>';
get foreign key constraints:
SELECT con.conname, pg_catalog.pg_get_constraintdef(con.oid) AS condef
FROM pg_catalog.pg_constraint AS con
WHERE con.conrelid = '<table_id>'::pg_catalog.oid AND con.contype = 'f';
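Building on that query, a sketch of re-creating the foreign keys on cloned tables could look like the block below (source_schema and dest_schema are placeholder names, and the replace() is a heuristic: pg_get_constraintdef only schema-qualifies referenced tables that are outside the current search_path):
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT cl.relname, con.conname,
               pg_catalog.pg_get_constraintdef(con.oid) AS condef
        FROM pg_catalog.pg_constraint AS con
        JOIN pg_catalog.pg_class AS cl ON cl.oid = con.conrelid
        WHERE con.connamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'source_schema')
          AND con.contype = 'f'
    LOOP
        -- add each foreign key to the same-named table in dest_schema
        EXECUTE format('ALTER TABLE dest_schema.%I ADD CONSTRAINT %I %s',
                       r.relname, r.conname,
                       replace(r.condef, 'source_schema.', 'dest_schema.'));
    END LOOP;
END $$;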
A good resource for the PostgreSQL system tables is the System Catalogs chapter of the documentation. Additionally, you can learn more about the internal queries pg_dump makes to gather dump information by viewing its source code.
Probably the easiest way to see how pg_dump gathers all your data would be to use strace on it, like so:
$ strace -f -e sendto -s8192 -o pg_dump.trace pg_dump -s -n <schema> <database>
$ grep -oP '(SET|SELECT)\s.+(?=\\0)' pg_dump.trace
You'll still have to sort through the morass of statements, but it should help you piece together a cloning tool programmatically and avoid having to drop to a shell to invoke pg_dump.
Just ran into the same problem. Sometimes I find myself missing a remap_schema option :)
The problem: none of the approaches above handles the -Fc (custom) dump format, which is crucial for large schemas.
So I came up with something that uses it.
Pseudocode below; it should work. It requires renaming the source schema for the duration of pg_dump, which, of course, might not be an option :(
Source:
1. pg_dump --section=pre-data in SQL format
2. psql: rename source schema to target
3. pg_dump -Fc --data-only
4. psql: rename back
5. pg_dump --section=post-data in SQL format
Target:
1. sed source_schema -> target_schema on the pre-data SQL | psql
2. pg_restore the -Fc data dump
3. sed source_schema -> target_schema on the post-data SQL | psql
The sed step above can usually also fold in any other manipulations (say, different user names between source and target). And it will be much faster, since the data is not part of the text files.
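Spelled out as concrete commands, it could look like this (a sketch; $DB, source_schema, and target_schema are placeholders, and the source schema really is renamed while the data dump runs):
pg_dump -d "$DB" -n source_schema --section=pre-data -f pre.sql
psql -d "$DB" -c 'ALTER SCHEMA source_schema RENAME TO target_schema'
pg_dump -d "$DB" -n target_schema -Fc --data-only -f data.dump
psql -d "$DB" -c 'ALTER SCHEMA target_schema RENAME TO source_schema'
pg_dump -d "$DB" -n source_schema --section=post-data -f post.sql
sed 's/source_schema/target_schema/g' pre.sql | psql -d "$DB"
pg_restore -d "$DB" data.dump
sed 's/source_schema/target_schema/g' post.sql | psql -d "$DB"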
My question is pretty simple, but I can't seem to find a concrete answer anywhere.
I need to update all tables inside my PostgreSQL schema to include a timestamp column with default NOW(). I'm wondering how I can do this via a query instead of having to go to each individual table. There are several hundred tables in the schema and they all just need the one column added with the default value.
Any help would be greatly appreciated!
The easy way is with psql: run a query to generate the commands, then save and run the results.
-- Turn off headers:
\t
-- Use SQL to build SQL:
SELECT 'ALTER TABLE public.' || table_name || ' add fecha timestamp not null default now();'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema='public';
-- If the output looks good, write it to a file and run it:
\g out.tmp
\i out.tmp
-- or, if you don't want the temporary file, use \gexec to run it:
\gexec
I'm new to Postgres and am using PostgreSQL 9.3. Is there a way with PostgreSQL to generate a file with multiple DDL statements?
For example, I want to select table names where tablename is like '%_foo' and then rename all those tables so they end in '_bar'. Do I need to do this in a cursor, or can I do it within a select statement (like in Oracle)?
ALTER TABLE tst1_foo RENAME TO tst1_bar;
ALTER TABLE tst2_foo RENAME TO tst2_bar;
ALTER TABLE tst3_foo RENAME TO tst3_bar;
I'd like to print those out to a .sql file.
Please provide a basic example if possible. Thanks.
You can use psql and the pg_tables system view. Set the output to unaligned mode:
\a
Set the output to show only rows:
\t on
Send output to your file:
\o yourfile.sql
Run the query:
SELECT 'ALTER TABLE ' || tablename || ' RENAME TO ' ||
REGEXP_REPLACE ( tablename, '_foo$', '_bar' ) || ';'
FROM pg_tables
WHERE tablename LIKE '%\_foo';
Close the file:
\o
and/or close psql:
\q
I'm trying to create a multi-tenant app using Symfony 2.6 and PostgreSQL schemas (namespaces). I would like to know how I can change an entity's schema in the prePersist event.
I know that it's possible to set the schema with the @Table(schema="schema") annotation, but that is a static solution; I need something more dynamic!
The point of using PostgreSQL is to take advantage of the schema feature, like:
CREATE TABLE tenant_1.users (
    -- table columns
);
CREATE TABLE tenant_2.users (
    -- table columns
);
So, if I want only users from tenant_2, my query will be something like SELECT * FROM tenant_2.users;
This way my data will be separated and I will have only one database to connect and maintain.
You can switch the schema at runtime by setting the connection's search_path, for example from an event listener:
$schema = sprintf('tenant_%d', $id);
$em->getConnection()->exec('SET search_path TO ' . $schema);
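At the SQL level this amounts to the following (a sketch for tenant 2; unqualified table names then resolve against that schema first):
SET search_path TO tenant_2;
SELECT * FROM users; -- now reads tenant_2.users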
You might also want to look into PostgreSQL's row level security instead; that way you can actually prevent a tenant from accessing the data, rather than just hiding it behind a schema prefix in the search path.
Check this one out: https://www.tangramvision.com/blog/hands-on-with-postgresql-authorization-part-2-row-level-security. I just set up a working tenant separation with the information on that page and I'm quite excited about it.
In my case, my tenants are called organisations, and some (not all) tables have an organisation_id that permanently binds a row to it.
Here is a version of the script I run during a schema update. It finds all tables with an organisation_id column and enables row level security on them, with a policy that only shows rows the org owns when the org role is set:
CREATE ROLE "org";
-- Find all tables with organisation_id and enable the row level security
DO $$ DECLARE
r RECORD;
BEGIN
FOR r IN (
SELECT
t.table_name, t.table_schema, c.column_name
FROM
information_schema.tables t
INNER JOIN
information_schema.columns c ON
c.table_name = t.table_name
AND c.table_schema = t.table_schema
AND c.column_name = 'organisation_id'
WHERE
t.table_type = 'BASE TABLE'
AND t.table_schema != 'information_schema'
AND t.table_schema NOT LIKE 'pg_%'
) LOOP
EXECUTE 'ALTER TABLE ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name) || ' ENABLE ROW LEVEL SECURITY';
EXECUTE 'DROP POLICY IF EXISTS org_separation ON ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name);
EXECUTE 'CREATE POLICY org_separation ON ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name) || ' FOR ALL TO org USING (organisation_id = substr(current_user, 5)::int)';
END LOOP;
END $$;
-- Grant usage on all tables in all schemas to the org role
DO $do$
DECLARE
sch text;
BEGIN
FOR sch IN (
SELECT
schema_name
FROM
information_schema.schemata
WHERE
schema_name != 'information_schema'
AND schema_name NOT LIKE 'pg_%'
) LOOP
EXECUTE format($$ GRANT USAGE ON SCHEMA %I TO org $$, sch);
EXECUTE format($$ GRANT SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA %I TO org $$, sch);
EXECUTE format($$ GRANT SELECT, UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA %I TO org $$, sch);
EXECUTE format($$ ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT SELECT, UPDATE ON SEQUENCES TO org $$, sch);
EXECUTE format($$ ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT INSERT, SELECT, UPDATE, DELETE ON TABLES TO org $$, sch);
END LOOP;
END;
$do$;
Step two, when I create a new organisation, I also create a role for it:
CREATE ROLE "org:86" LOGIN;
GRANT org TO "org:86";
Step three, at the beginning of every request that should be scoped to a particular organisation, I call SET ROLE "org:86"; to enable the restrictions.
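A minimal sketch of that per-request flow, using organisation 86 from the example:
SET ROLE "org:86"; -- substr(current_user, 5) yields '86'
SELECT * FROM some_table; -- the policy filters to organisation_id = 86
RESET ROLE; -- return to the session's original role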
There is much more happening around what we do with all of this, but the code above should be complete enough to help people get started.
Good luck!
I want to export a Postgres database into a CSV file. Is this possible?
If it is possible, how can I do it? I have seen that we can convert a particular table into a CSV file, but I don't know how to do it for a whole database.
I made this PL/pgSQL function to create one .csv file per table (excluding views, thanks to @tarikki):
CREATE OR REPLACE FUNCTION db_to_csv(path TEXT) RETURNS void AS $$
declare
tables RECORD;
statement TEXT;
begin
FOR tables IN
SELECT (table_schema || '.' || table_name) AS schema_table
FROM information_schema.tables t INNER JOIN information_schema.schemata s
ON s.schema_name = t.table_schema
WHERE t.table_schema NOT IN ('pg_catalog', 'information_schema')
AND t.table_type NOT IN ('VIEW')
ORDER BY schema_table
LOOP
statement := 'COPY ' || tables.schema_table || ' TO ''' || path || '/' || tables.schema_table || '.csv' ||''' DELIMITER '';'' CSV HEADER';
EXECUTE statement;
END LOOP;
return;
end;
$$ LANGUAGE plpgsql;
And I use it this way:
SELECT db_to_csv('/home/user/dir');
-- this will create one csv file per table, in /home/user/dir/
You can use this at psql console:
\copy (SELECT foo,bar FROM whatever) TO '/tmp/file.csv' DELIMITER ',' CSV HEADER
Or in a bash console:
psql -P format=unaligned -P tuples_only -P fieldsep=\, -c "SELECT foo,bar FROM whatever" > output_file
I modified jlldoras's brilliant answer by adding one line to prevent the script from trying to copy views:
CREATE OR REPLACE FUNCTION db_to_csv(path TEXT) RETURNS void AS $$
declare
tables RECORD;
statement TEXT;
begin
FOR tables IN
SELECT (table_schema || '.' || table_name) AS schema_table
FROM information_schema.tables t INNER JOIN information_schema.schemata s
ON s.schema_name = t.table_schema
WHERE t.table_schema NOT IN ('pg_catalog', 'information_schema', 'configuration')
AND t.table_type NOT IN ('VIEW')
ORDER BY schema_table
LOOP
statement := 'COPY ' || tables.schema_table || ' TO ''' || path || '/' || tables.schema_table || '.csv' ||''' DELIMITER '';'' CSV HEADER';
EXECUTE statement;
END LOOP;
return;
end;
$$ LANGUAGE plpgsql;
If you want to specify the database and user while exporting, you can modify the answer given by Piotr as follows:
psql -P format=unaligned -P tuples_only -P fieldsep=\, -U <USER> -d <DB_NAME> -c "select * from tableName" > tableName_exp.csv
Do you want one big CSV file with data from all tables?
Probably not. More likely you want separate files for each table, or one big file with more information than can be expressed in a CSV file header.
Separate files
Other answers show how to create separate files for each table. You can query the database to list all tables with a query like this:
SELECT DISTINCT table_name
FROM information_schema.columns
WHERE table_schema='public'
AND position('_' in table_name) <> 1
ORDER BY 1;
One big file
One big file with all tables in the format used by the PostgreSQL COPY command can be created with pg_dump. The output will also contain all the CREATE TABLE, CREATE FUNCTION, etc. statements, but with Python, Perl, or a similar language you can easily extract just the data.
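For example (a sketch; mydb is a placeholder, and each table's data appears as a COPY ... FROM stdin block that a script can slice out):
pg_dump mydb > mydb.sql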
I downloaded a copy of RazorSQL, opened the database server, right-clicked on the database, and selected Export Tables; it gave me the option of CSV, Excel, SQL, etc.
I have a database with an old broken version of PostGIS installed in it. I would like to easily drop all functions in the database (they're all from PostGIS). Is there a simple way to do this? Even simply extracting a list of function names would be acceptable as I could just make a large DROP FUNCTION statement.
A fine answer to this question is the following query, which generates a DROP FUNCTION statement for every function in the schema:
SELECT 'DROP FUNCTION ' || ns.nspname || '.' || proname
|| '(' || oidvectortypes(proargtypes) || ');'
FROM pg_proc INNER JOIN pg_namespace ns ON (pg_proc.pronamespace = ns.oid)
WHERE ns.nspname = 'my_messed_up_schema' ORDER BY proname;
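With psql 9.6 or newer you can also skip the copy-paste step and execute the generated statements directly by terminating the query with \gexec instead of a semicolon:
SELECT 'DROP FUNCTION ' || ns.nspname || '.' || proname
|| '(' || oidvectortypes(proargtypes) || ');'
FROM pg_proc INNER JOIN pg_namespace ns ON (pg_proc.pronamespace = ns.oid)
WHERE ns.nspname = 'my_messed_up_schema'
\gexec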
Just as there was a postgis.sql enabler install script, there is also an uninstall_postgis.sql uninstall script.
psql -d [yourdatabase] -f /path/to/uninstall_postgis.sql
Warning: Be prepared to see your geometry/geography columns and data disappear!