I want to restore the database with a different schema - postgresql

I have taken a dump of a database named temp1, using the following command:
$ pg_dump -i -h localhost -U postgres -F c -b -v -f pub.backup temp1
Now I want to restore the dump into a different database called "db_temp", but I want all the tables to be created in a "temp_schema" in the "db_temp" database (not in the default schema they used in the temp1 database).
Is there any way to do this using the pg_restore command?
Any other method would also be appreciated!

A quick and dirty way:
1) rename default schema:
alter schema public rename to public_save;
2) create new schema as default schema:
create schema public;
3) restore data
pg_restore -d db_temp pub.backup [and whatever other options]
4) rename schemas according to need:
alter schema public rename to temp_schema;
alter schema public_save rename to public;
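Put together, the whole sequence might look like this (a sketch using the dump file and database name from the question, and assuming the dumped tables live in public):
psql -d db_temp -c 'ALTER SCHEMA public RENAME TO public_save;'
psql -d db_temp -c 'CREATE SCHEMA public;'
# restore into the fresh public schema
pg_restore -d db_temp pub.backup
psql -d db_temp -c 'ALTER SCHEMA public RENAME TO temp_schema;'
psql -d db_temp -c 'ALTER SCHEMA public_save RENAME TO public;'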

There is a simple solution:
Create your backup dump in plain SQL format (format "p" using the parameter --format=p or -F p)
Edit your pub.backup.sql dump with your favorite editor and add the following two lines at the top of your file:
create schema myschema;
SET search_path TO myschema;
Now you can restore your backup dump with the command
psql -f pub.backup.sql
The set search_path to <schema> command will set myschema as the default, so that new tables and other objects are created in this schema, independently of the "default" schema where they lived before.
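The same edit can be scripted from the shell; a minimal sketch, assuming the plain dump is pub.backup.sql and the target database is db_temp (if the dump itself sets search_path further down, adjust that line too):
# prepend the two lines, then load the patched dump
{ echo 'CREATE SCHEMA myschema;'; echo 'SET search_path TO myschema;'; cat pub.backup.sql; } > pub.backup.patched.sql
psql -d db_temp -f pub.backup.patched.sql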

There's no way in pg_restore itself. What you can do is use pg_restore to generate SQL output, and then send this through for example a sed script to change it. You need to be careful about how you write that sed script though, so it doesn't match and change things inside your data.
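A minimal sketch of such a pipeline, assuming the source objects live in public and should end up in temp_schema (\b is a GNU sed word boundary; review the pattern against your data before trusting it):
psql -d db_temp -c 'CREATE SCHEMA temp_schema;'
pg_restore pub.backup \
  | sed -e 's/^SET search_path = public/SET search_path = temp_schema/' \
        -e 's/\bpublic\./temp_schema./g' \
  | psql -d db_temp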

Probably the easiest method would be to simply rename the schema after the restore, i.e. with the following SQL:
ALTER SCHEMA my_schema RENAME TO temp_schema;
I believe that because you're using the compressed archive format for the output of pg_dump, you can't alter it before restoring. The alternative would be to use the plain output and do a search and replace on the schema name, but that would be risky and could corrupt data if you were not careful.

If you only have a few tables then you can restore one table at a time, pg_restore accepts -d database when you specify -t tablename. Of course, you'll have to set up the schema before restoring the tables and then sort out the indexes and constraints when you're done restoring the tables.
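For example, with the dump and database names from the original question (mytable is a placeholder):
# restore one table at a time into db_temp; repeat for each table
pg_restore -d db_temp -t mytable pub.backup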
Alternatively, set up another server on a different port, restore using the new PostgreSQL server, rename the schema, dump it, and restore into your original database. This is a bit of a kludge of course but it will get the job done.
If you're adventurous you might be able to change the database name in the dump file using a hex editor. I think it is only mentioned in one place in the dump, and as long as the new and old database names are the same length it should work. YMMV, don't do anything like this in a production environment, don't blame me if this blows up and levels your home town, and all the rest of the usual disclaimers.

Rename the schema in a temporary database.
Export the schema:
pg_dump --schema-only --schema=prod > prod.sql
Create a new database. Restore the export:
psql -f prod.sql
ALTER SCHEMA prod RENAME TO somethingelse;
pg_dump --schema-only --schema=somethingelse > somethingelse.sql
(delete the database)
For the data you can just modify the set search_path at the top.
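For the data pass, a sketch under the same assumptions as above (prod is the old schema, somethingelse the new one; mydb and mynewdb are placeholder database names, and sed -i is GNU sed):
pg_dump --data-only --schema=prod mydb > prod_data.sql
# point the search_path at the renamed schema before loading
sed -i 's/^SET search_path = prod/SET search_path = somethingelse/' prod_data.sql
psql -d mynewdb -f prod_data.sql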

As noted, there's no direct support in pg_dump, psql or pg_restore to change the schema name during a dump/restore process. But it's fairly straightforward to export using "plain" format then modify the .sql file. This Bash script does the basics:
rename_schema () {
    # Change the search path so that by default everything goes into the new schema
    perl -pi -e "s/SET search_path = $2, pg_catalog/SET search_path = $3, pg_catalog, $2/" "$1"
    # Change 'ALTER FUNCTION foo.' to 'ALTER FUNCTION bar.' (and similar two-word prefixes)
    perl -pi -e 's/^([A-Z]+ [A-Z]+) '$2'\./$1 '$3'./' "$1"
    # Change the final GRANT ALL ON SCHEMA foo TO PUBLIC
    perl -pi -e 's/SCHEMA '$2'/SCHEMA '$3'/' "$1"
}
Usage:
pg_dump --format plain --schema=foo --file dump.sql MYDB
rename_schema dump.sql foo bar
psql -d MYDB -c 'CREATE SCHEMA bar;'
psql -d MYDB -f dump.sql

The question is pretty old, but maybe this can help someone.
Stream the output of pg_restore through sed, replacing the schema name, in order to import the dump into a different schema.
Something like:
pg_restore ${dumpfile} | \
sed -e "s/OWNER TO ${source_owner}/OWNER TO ${target_owner}/" \
-e "s/${source_schema}/${target_schema}/" | \
psql -h ${pgserver} -d ${dbname} -U ${pguser}

pg_dump custom format file contains 'DROP DATABASE'

I wish to pg_dump a specific schema from one database and pg_restore it into an already existing database without dropping it.
The command I have been using for the pg_dump is as follows:
pg_dump -n mySchema -Z 9 -b -f mySchema.sql.gz -F c -U ${db_user} -h ${db_host} ${db_name}
-F c generates a custom format file suitable for pg_restore.
I'm aware that if I included the --clean flag it would 'clean (drop) database objects prior to outputting the commands for creating them.' according to the documentation here.
I did not include this flag.
However, when I run head on the generated file, I can see the following within it: DROP DATABASE myDatabase. Why is it here? I'm afraid that if I do a pg_restore with this file that it will drop my existing database.
Here is what head returns:
PGDMP
ymyDatabase11.8"11.10 (Ubuntu 11.10-1.pgdg18.04+1)�0ENCODINENCODINGSET client_encoding = 'UTF8';
false�00
STDSTRINGS
STDSTRINGS(SET standard_conforming_strings = 'on';
false�00
SEARCHPATH
SEARCHPATH8SELECT pg_catalog.set_config('search_path', '', false);
false�126221356smarDATABASEwCREATE DATABASE myDatabase WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
DROP DATABASE myDatabase;
myUserfalse�00DATABASE smartACL/GRANT CONNECT ON DATABASE myDatabase TO readaccess;
A custom format dump includes the DDL to drop the listed objects.
That is because with a custom format dump, you can specify the --clean option with pg_restore; that is, you don't have to decide whether you need the DROP statements until the dump is restored.
However, if you don't use --clean with pg_restore, the DROP statements won't be executed, so you don't have to worry.
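In other words (a sketch using the dump file from the question; myDatabase is the target database):
# without --clean, the DROP statements stored in the dump are not executed
pg_restore -d myDatabase mySchema.sql.gz
# with --clean, pg_restore emits the DROP statements before recreating the objects
pg_restore --clean -d myDatabase mySchema.sql.gz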

How to restore output from pg_dump into a new table name

I am dumping a large Postgres table like this:
pg_dump -h myserver -U mt_user --table=mytable -Fc -Z 9 --file mytable.dump mydb
The above creates a mytable.dump file. I now want to restore this dump into a new table called mytable_restored.
How can I use the pg_restore command to do this?
There is no pg_restore option to rename tables.
I would do it like this:
-- create a table that is defined like the original
CREATE TABLE mytable_restored (LIKE mytable INCLUDING ALL);
-- copy the table contents
COPY mytable TO '/tmp/mytable.dmp';
COPY mytable_restored FROM '/tmp/mytable.dmp';
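COPY ... TO/FROM a file runs on the server and needs the corresponding privileges there; if you only have client access, psql's \copy meta-command is a drop-in alternative (a sketch, using the same table names as above):
-- run inside psql; the file is written on and read from the client machine
\copy mytable TO 'mytable.dmp'
\copy mytable_restored FROM 'mytable.dmp'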
You can also pipe pg_dump through sed so that, as pg_dump outputs the table name, sed replaces it with the new one. Note that this only works with a plain-format dump:
pg_dump -t mytable mydb | sed 's/mytable/mytable_restored/g' > mytable_restored.dump
This may not be the exact answer to the question, but in my case (a directory-format dump with many tables) I improved on Laurenz's answer, and it may be useful.
In my case I only needed two tables restored to another location.
First I created the tables in the new location.
Then I located the table data referenced in the toc.dat file:
pg_restore -Fd <dir_name> -l | grep <table_name>
This will probably return more than a single line, so look for the "TABLE DATA" entry. That gives you the [file_number].dat.gz file, which you can then use in a COPY command like the one below in psql:
COPY <table_name> FROM PROGRAM 'zcat /<path>/<file_number>.dat.gz';

Postgres: Copying particular tables from remote db, interrelated with foreign key relationships

I have a Postgres RDS instance for one of my apps.
I need to copy 3 tables from it to a similar clone of the database.
I see mytable_id_seq tables also, which now I know are called sequences in postgres terminology.
When I created a dump of those three tables, and restore them, do I have to do anything with the _id_seq sequences ?
Do I have to restore them too, for the dump data to work as it did in the original table?
When you restore the entire database from a dump, it contains CREATE SEQUENCE statements by default, and these initialize the sequences to the proper state. But if you make a partial dump with only selected tables, you must set the sequences manually.
Assuming, that your table's name is "clip", you can check the current value using this query:
SELECT last_value FROM clip_id_seq
And if you want to update the sequence after restore, you can do it with this simple query:
SELECT setval('clip_id_seq', (SELECT MAX(id) FROM clip));
pg_dump -d database_name -t mg_cnd -F c > database_backup.sql
pg_restore -U database_user --data-only -d database_name -t mg_cnd -F c <file_location>
-d = database name
-t = table name
-F = Format
c = custom format
-U = User
--data-only = transfer only table data
pg_dump documentation
pg_restore documentation

How can I specify the schema to run an sql file against in the Postgresql command line

I run scripts against my database like this...
psql -d myDataBase -a -f myInsertFile.sql
The only problem is I want to be able to specify in this command what schema to run the script against. I could call set search_path='my_schema_01' but the files are supposed to be portable. How can I do this?
You can create one file that contains the set schema ... statement and then include the actual file you want to run:
Create a file run_insert.sql:
set schema 'my_schema_01';
\i myInsertFile.sql
Then call this using:
psql -d myDataBase -a -f run_insert.sql
A more universal way is to set search_path (should work in PostgreSQL 7.x and above):
SET search_path TO myschema;
Note that SET SCHEMA 'myschema' is an alias for the above command, and it is not available in 8.x.
See also: http://www.postgresql.org/docs/9.3/static/ddl-schemas.html
Main Example
The example below will run myfile.sql on database mydatabase using schema myschema.
psql "dbname=mydatabase options=--search_path=myschema" -a -f myfile.sql
The way this works is the first argument to the psql command is the dbname argument. The docs mention a connection string can be provided.
If this parameter contains an = sign or starts with a valid URI prefix
(postgresql:// or postgres://), it is treated as a conninfo string
The dbname keyword specifies the database to connect to and the options keyword lets you specify command-line options to send to the server at connection startup. Those options are detailed in the server configuration chapter. The option we are using to select the schema is search_path.
Another Example
The example below will connect to host myhost on database mydatabase using schema myschema. The = special character must be url escaped with the escape sequence %3D.
psql postgres://myuser@myhost?options=--search_path%3Dmyschema
The PGOPTIONS environment variable may be used to achieve this in a flexible way.
In an Unix shell:
PGOPTIONS="--search_path=my_schema_01" psql -d myDataBase -a -f myInsertFile.sql
If there are several invocations in the script or sub-shells that need the same options, it's simpler to set PGOPTIONS only once and export it.
PGOPTIONS="--search_path=my_schema_01"
export PGOPTIONS
psql -d somebase
psql -d someotherbase
...
or invoke the top-level shell script with PGOPTIONS set from the outside
PGOPTIONS="--search_path=my_schema_01" ./my-upgrade-script.sh
In Windows CMD environment, set PGOPTIONS=value should work the same.
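For example (a sketch; cmd.exe keeps the variable for the rest of the session):
set PGOPTIONS=--search_path=my_schema_01
psql -d myDataBase -a -f myInsertFile.sql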
I'm using something like this and it works very well :-)
(echo "set schema 'acme';" ; \
cat ~/git/soluvas-framework/schedule/src/main/resources/org/soluvas/schedule/tables_postgres.sql) \
| psql -Upostgres -hlocalhost quikdo_app_dev
Note: Linux/Mac/Bash only, though probably there's a way to do that in Windows/PowerShell too.
This works for me:
psql postgresql://myuser:password@myhost/my_db -f myInsertFile.sql
In my case, I wanted to add the schema to a file dynamically, so that whatever schema name the user provides on the CLI, the SQL file is run with that schema name.
For this, I replaced some text in the SQL file. First I added {{schema}} in the file like this:
CREATE OR REPLACE FUNCTION {{schema}}.usp_dailygaintablereportdata(
then replaced {{schema}} dynamically with the user-provided schema name with the help of the sed command:
sed -i "s/{{schema}}/$pgSchemaName/" $filename
result=$(psql -U $user -h $host -p $port -d $dbName -f "$filename" 2>&1)
sed -i "s/$pgSchemaName/{{schema}}/" $filename
The first replacement is applied, then the target file is run, and then the replacement is reverted.
I was facing similar problems trying to do a data import into an intermediate schema (which we later move to the final one). As we rely on things like extensions (for example PostGIS), the "run_insert" SQL file did not fully solve the problem.
After a while, we found that, at least with Postgres 9.3, the solution is far easier... just write your SQL script always specifying the schema when referring to the tables:
CREATE TABLE "my_schema"."my_table" (...);
COPY "my_schema"."my_table" (...) FROM stdin;
This way using psql -f xxxxx works perfectly, and you don't need to change search_paths nor use intermediate files (and won't hit extension schema problems).

How to exclude PL/pgSQL functions in export?

I use the following command to dump some structures from the server's database so that I can create a sample of the data on my local hard drive.
pg_dump -h myserver.com -U product_user -s -f ./data/base.structure.postgresql.sql -F p -v -T public.* -T first_product.* -T second_product.* -T another_product.locales mydatabase
I need to exclude some schemas, otherwise it would end up with permission or other errors. Even though I exclude the schema public, it dumps all the functions in that schema, like this:
REVOKE ALL ON FUNCTION gin_extract_trgm(text, internal) FROM PUBLIC;
psql:./data/base.structure.postgresql.sql:8482: ERROR: function gin_extract_trgm(text, internal) does not exist
I know this comes from the fulltext or similarity plugin in PostgreSQL, but I don't use it and don't need it on my machine, so I'd like to exclude these functions.
How could I do that?
There is a way to do it. Say your backup is named backup.dump. What you need to do is:
$ pg_restore -l -f out.txt backup.dump
That will create a file out.txt that contains a list of objects that are in the dump. You need to edit the file and delete the items you don't want restored. Then you do this:
$ pg_restore -L out.txt -h your.host.name -U username .... backup.dump
This will use a file out.txt (that you edited) to select the things that will be restored. Pretty handy especially in case the dump is large and you cannot re-dump the database.
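Instead of deleting lines from out.txt you can also comment them out by prefixing them with a semicolon; the entry below is made up purely for illustration:
;123; 1255 16389 FUNCTION public gin_extract_trgm(text, internal) postgres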
I need to exclude some schemas
pg_dump has a switch to exclude schemas:
pg_dump -N schema ...
I quote the manual about pg_dump:
-N schema
--exclude-schema=schema
Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be given
more than once to exclude schemas matching any of several patterns.
...
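Applied to the dump command from the question, that could look like this (a sketch; keep -T for the single excluded table and adjust the schema list to what you actually need to skip):
pg_dump -h myserver.com -U product_user -s -F p -v \
  -N public -N first_product -N second_product \
  -T another_product.locales \
  -f ./data/base.structure.postgresql.sql mydatabase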
With PostgreSQL 9.1 or later you have new options to move extensions into a separate schema - even pre-installed old-style modules. You can register old objects with your (new-style) extension and then use the new tools. With fulltext and similarity you probably mean fuzzystrmatch and tsearch2. Example:
Register existing old-style objects for the extension fuzzystrmatch:
CREATE EXTENSION fuzzystrmatch SCHEMA public FROM unpackaged;
Drop the extension:
DROP EXTENSION fuzzystrmatch;
Install it to another schema:
CREATE EXTENSION fuzzystrmatch SCHEMA my_schema;
Of course, you cannot drop the extension, if objects from it are in use.
Also, if you install to another schema, you need to schema-qualify its functions in use or add the schema to the search_path.
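For example (a sketch; levenshtein is one of the functions fuzzystrmatch provides):
-- either qualify the call ...
SELECT my_schema.levenshtein('kitten', 'sitting');
-- ... or put the schema on the search_path
SET search_path TO public, my_schema;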
In addition to the answer from Bartosz above, you can use the following sed command to remove e.g. a certain FUNCTION from the list before restoring:
sed -r -i -e '/FUNCTION public plpgsql_call_handler\(\) postgres/d' /var/backup/${DBNAME}.list