Postgres: taking a backup of master data and schema for a few of the tables - postgresql

In my database I have master tables whose names start with m_*, plus other tables. I want to take a backup of the tables in the following scenario:
Backup schema + data for the master tables, i.e. tables whose names start with m_*.
Backup only the schema structure for the rest of the tables.
I read the following command somewhere:
pg_dump -U "postgres" -h "local" -p "5432"
-d dbName -F c -b -v -f c:\uti\backup.dmp
--exclude-table-data '*.table_name_pattern_*'
--exclude-table-data 'some_schema.another_*_pattern_*'
But I have so many tables that I find it tedious to list each table name. Is there a tidy way to get around this?

Using Linux:
File foo.sh (adjust filtering conditions):
psql <connection and other parameters> -c "copy (select format('--exclude-table-data=%s.%s', schemaname, tablename) from pg_tables where schemaname in ('public', 'foo') and tablename<>'t') to stdout;"
Command (note the backticks, which splice foo.sh's output into pg_dump's argument list):
pg_dump <connection and other parameters> `./foo.sh`
Note that this is a very flexible approach.
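Applied to the question above, the same trick can invert the filter: emit an --exclude-table-data switch for every table that does not start with m_, so those tables keep their schema but drop their data. A hedged sketch, assuming everything lives in the public schema and the table names are shell-safe:
psql <connection and other parameters> -At -c "select format('--exclude-table-data=%I.%I', schemaname, tablename) from pg_tables where schemaname = 'public' and tablename not like 'm\_%';" > excludes.txt
pg_dump <connection and other parameters> -F c -f backup.dmp `cat excludes.txt`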


Restore multiple tables postgresql pg_dump

I have a database with several tables, and I want to add the data and structure of a selection of them to a different database that already contains other tables.
I have created the dump file in the following way:
"C:\Program Files\PostgreSQL\9.1\bin\pg_dump.exe" -U postgres -w DBName -t table1 -t Table2 -t Table3 > "E:\Dump.sql"
This works fine and I have E:\Dump.sql with the dump of the three tables in it.
I have tried to make a restore with the following code:
"C:\Program Files\PostgreSQL\9.6\bin\pg_dump.exe" -C -U User -t table1 -t Table2 -t Table4 destdb < Dump.sql
but I get the error:
no matching tables were found
Where am I going wrong?
pg_dump is meant for taking backups (https://www.postgresql.org/docs/current/static/backup-dump.html); the next section of the manual covers restoring a backup:
Text files created by pg_dump are intended to be read in by the psql
program. The general command form to restore a dump is
psql dbname < infile
Also, you mention three tables, two of which have mixed-case names; for those to match, you have to double-quote them, I believe. But anyway, restoring your file would be:
psql YOURDBNAME -f E:\Dump.sql
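If Table2 and Table3 were created with double-quoted, mixed-case names, the -t patterns need the quotes preserved through the shell so pg_dump matches the exact case. A hedged sketch of the dump side (POSIX-shell quoting shown):
pg_dump -U postgres -w -t table1 -t '"Table2"' -t '"Table3"' DBName > Dump.sql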

pg_dump ignoring schema specified with -n

I am using pg_dump (PostgreSQL) 9.2.4.
I have a database with two different schemas that have identical table names.
- mydatabase
  - schema_a
    - mytable
    - someothertable
  - schema_b
    - mytable
    - another table
I want to copy both schema_a.mytable and schema_b.mytable from orig_host to new_host. I log into new_host and type:
% psql -c "drop schema schema_a cascade" mydatabase
% psql -c "create schema schema_a" musicbrainz_db
% pg_dump -h orig_host -n schema_a -t mytable mydatabase | psql mydatabase
No problem, but when I do the same for schema_b, I get conflicts:
% psql -c "drop schema schema_b cascade" mydatabase
% psql -c "create schema schema_b" musicbrainz_db
% pg_dump -h orig_host -n schema_b -t mytable mydatabase | psql mydatabase
ERROR: relation "artist" already exists
By dumping the output of this last command to a file, I confirmed that it sets the search path to schema_a, which causes the failure. It does seem to work if I do
% pg_dump -h orig_host -t schema_b.mytable mydatabase | psql mydatabase
But shouldn't the -n switch work here?
From the manual:
The -n and -N switches have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.
You might have schema_a in your search_path, which is why the first command works.
Take a look at the SQL produced. I believe what you should see is:
- Everything in schema_a
- All tables named *.mytable
That is, the options are additive.
I'm guessing the intent is that schema S might well depend on one or two other tables so you can dump them all together.
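A quick way to verify which objects a given combination of switches will emit, without touching the target database, is to inspect the generated SQL; a small hedged sketch:
pg_dump -h orig_host -n schema_b -t mytable --schema-only mydatabase | grep -E 'SET search_path|CREATE TABLE'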

Creating a database dump for specific tables and entries Postgres

I have a database with hundreds of tables. What I need to do is export specified tables, plus insert statements for their data, to one SQL file.
The only statement I know of that can achieve this is
pg_dump -D -a -t zones_seq interway > /tmp/zones_seq.sql
Should I run this statement for each and every table, or is there a way to run a similar statement to export all selected tables into one big SQL file? The pg_dump above does not export the table schema, only inserts; I need both.
Any help will be appreciated.
Right from the manual: "Multiple tables can be selected by writing multiple -t switches"
So you need to list all of your tables
pg_dump --column-inserts -a -t zones_seq -t interway -t table_3 ... > /tmp/zones_seq.sql
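One caveat: -a (--data-only) is exactly what suppresses the CREATE TABLE statements, so since the question asks for both schema and data, drop it. A hedged sketch of the adjusted command:
pg_dump --column-inserts -t zones_seq -t interway -t table_3 ... > /tmp/zones_seq.sql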
Note that if you have several tables with the same prefix (or suffix) you can also use wildcards to select them with the -t parameter:
"Also, the table parameter is interpreted as a pattern according to the same rules used by psql's \d commands"
If those specific tables match a particular pattern, you can use that with the -t option in pg_dump.
pg_dump -D -a -t zones_seq -t interway -t "<pattern>" -f /tmp/zones_seq.sql <DBNAME>
For example, to dump tables whose names start with "test", you can use
pg_dump -D -a -t zones_seq -t interway -t "test*" -f /tmp/zones_seq.sql <DBNAME>

I want to restore the database with a different schema

I have taken a dump of a database named temp1, using the following command
$ pg_dump -i -h localhost -U postgres -F c -b -v -f pub.backup temp1
Now I want to restore the dump into a different database called "db_temp", but I want all the tables to be created in a "temp_schema" in the "db_temp" database, not the default schema they lived in inside temp1.
Is there any way to do this using pg_restore command?
Any other method would also be appreciated!
A quick and dirty way:
1) rename default schema:
alter schema public rename to public_save;
2) create new schema as default schema:
create schema public;
3) restore data
pg_restore -d db_temp pub.backup [and whatever other options]
4) rename schemas according to need:
alter schema public rename to temp_schema;
alter schema public_save rename to public;
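Run from a shell, the whole sequence might look like this (a sketch; note that pg_restore takes the target database with -d, while -f names an output file):
psql -d db_temp -c "alter schema public rename to public_save;"
psql -d db_temp -c "create schema public;"
pg_restore -d db_temp pub.backup
psql -d db_temp -c "alter schema public rename to temp_schema;"
psql -d db_temp -c "alter schema public_save rename to public;"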
There is a simple solution:
Create your backup dump in plain SQL format (format "p" using the parameter --format=p or -F p)
Edit your pub.backup.sql dump with your favorite editor and add the following two lines at the top of your file:
create schema myschema;
SET search_path TO myschema;
Now you can restore your backup dump with the command
psql -d db_temp -f pub.backup.sql
The SET search_path TO <schema> command sets myschema as the default, so that new tables and other objects are created in this schema, independently of the "default" schema where they lived before.
There's no way to do it in pg_restore itself. What you can do is use pg_restore to generate SQL output, and then pipe it through, for example, a sed script that changes the schema name. You need to be careful about how you write that sed script, though, so it doesn't match and change things inside your data.
Probably the easiest method would be to simply rename the schema after restore, ie with the following SQL:
ALTER SCHEMA my_schema RENAME TO temp_schema
I believe that because you're using the compressed archive format for the output of pg_dump, you can't alter it before restoring. An option would be to use the plain-text output and do a search and replace on the schema name, but that would be risky and could corrupt data if you were not careful.
If you only have a few tables then you can restore one table at a time, pg_restore accepts -d database when you specify -t tablename. Of course, you'll have to set up the schema before restoring the tables and then sort out the indexes and constraints when you're done restoring the tables.
Alternatively, set up another server on a different port, restore using the new PostgreSQL server, rename the schema, dump it, and restore into your original database. This is a bit of a kludge of course but it will get the job done.
If you're adventurous you might be able to change the database name in the dump file using a hex editor. I think it is only mentioned in one place in the dump and as long as the new and old database names are the same it should work. YMMV, don't do anything like this in a production environment, don't blame me if this blows up and levels your home town, and all the rest of the usual disclaimers.
Rename the schema in a temporary database.
Export the schema:
pg_dump --schema-only --schema=prod > prod.sql
Create a new database. Restore the export:
psql -d <tempdb> -f prod.sql
ALTER SCHEMA prod RENAME TO somethingelse;
pg_dump --schema-only --schema=somethingelse > somethingelse.sql
(delete the database)
For the data you can just modify the set search_path at the top.
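That edit can be scripted; a hedged sketch with GNU sed (data.sql is a placeholder file name, and the exact search_path line varies between pg_dump versions, so check the file first):
sed -i 's/^SET search_path = prod/SET search_path = somethingelse/' data.sql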
As noted, there's no direct support in pg_dump, psql or pg_restore to change the schema name during a dump/restore process. But it's fairly straightforward to export using "plain" format then modify the .sql file. This Bash script does the basics:
rename_schema () {
# Change search path so by default everything will go into the specified schema
perl -pi -e "s/SET search_path = $2, pg_catalog/SET search_path = $3, pg_catalog, $2;/" "$1"
# Change 'ALTER FUNCTION foo.' to 'ALTER FUNCTION bar.'
perl -pi -e 's/^([A-Z]+ [A-Z]+) '$2'\./$1 '$3'./' "$1"
# Change the final GRANT ALL ON SCHEMA foo TO PUBLIC
perl -pi -e 's/SCHEMA '$2'/SCHEMA '$3'/' "$1"
}
Usage:
pg_dump --format plain --schema=foo --file dump.sql MYDB
rename_schema dump.sql foo bar
psql -d MYDB -c 'CREATE SCHEMA bar;'
psql -d MYDB -f dump.sql
The question is pretty old, but maybe this can help someone.
Stream the output of pg_restore to sed and replace the schema name, in order to import the dump into a different schema.
Something like:
pg_restore ${dumpfile} | \
sed -e "s/OWNER TO ${source_owner}/OWNER TO ${target_owner}/" \
-e "s/${source_schema}/${target_schema}/" | \
psql -h ${pgserver} -d ${dbname} -U ${pguser}
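To lower the chance of the second substitution rewriting row data, the expressions can be anchored to the DDL lines where the schema name actually appears; a hedged sketch (line formats differ between pg_dump versions, so inspect the output first):
pg_restore ${dumpfile} | \
sed -e "s/^CREATE SCHEMA ${source_schema};/CREATE SCHEMA ${target_schema};/" \
    -e "s/^SET search_path = ${source_schema}/SET search_path = ${target_schema}/" \
    -e "s/^CREATE TABLE ${source_schema}\./CREATE TABLE ${target_schema}./" \
    -e "s/^ALTER TABLE ${source_schema}\./ALTER TABLE ${target_schema}./" | \
psql -h ${pgserver} -d ${dbname} -U ${pguser}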

Generate DDL programmatically on Postgresql

How can I generate the DDL of a table programmatically on Postgresql? Is there a system query or command to do it? Googling the issue returned no pointers.
Use pg_dump with these options:
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
Description:
-s or --schema-only: dump only the DDL / object definitions (schema), without data.
-t or --table: dump only the tables (or views, or sequences) whose names match the given pattern.
Examples:
-- dump the DDL of each table elon built:
$ pg_dump -U elon -h localhost -s -t spacex -t tesla -t solarcity -t boring > companies.sql
Sorry if off topic. Just want to help whoever is googling "psql dump ddl" and lands on this thread.
You can use the pg_dump command to dump the contents of the database (both schema and data). The --schema-only switch will dump only the DDL for your table(s).
Why would shelling out to psql not count as "programmatically"? It'll dump the entire schema very nicely.
Anyhow, you can get data types (and much more) from the information_schema (8.4 docs referenced here, but this is not a new feature):
=# select column_name, data_type from information_schema.columns
-# where table_name = 'config';
    column_name     | data_type
--------------------+-----------
 id                 | integer
 default_printer_id | integer
 master_host_enable | boolean
(3 rows)
The answer is to check the source code for pg_dump and follow the switches it uses to generate the DDL. Somewhere inside the code there's a number of queries used to retrieve the metadata used to generate the DDL.
Here is a good article on how to get the meta information from the information schema: http://www.alberton.info/postgresql_meta_info.html
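Besides information_schema, the pg_catalog functions can hand back DDL fragments directly; a hedged sketch against the config table from the example above:
-- index definitions for the table
select indexrelid::regclass as index_name, pg_get_indexdef(indexrelid) as ddl
from pg_index where indrelid = 'config'::regclass;
-- constraint definitions for the same table
select conname, pg_get_constraintdef(oid)
from pg_constraint where conrelid = 'config'::regclass;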
I saved four functions to partially mock up pg_dump -s behaviour, based on the \d+ metacommand. The usage would be something like:
\pset format unaligned
select get_ddl_t(schemaname,tablename) as "--" from pg_tables where tableowner <> 'postgres';
Of course you have to create the functions first.
Working sample here at rextester