How to do a pg_dump for only tables and not triggers and functions? - postgresql

What I want:
I want a pg_dump of a database (let's call the database 'test').
In this pg_dump I want only the tables without the following: data, triggers, functions, sequences, etc.
What I am doing to get what I want:
The command I run is as follows:
pg_dump -U postgres -s test > dump_test.sql
What I am observing:
Then when I try to restore this dump on another server as follows:
psql -U postgres new_amazing_test < dump_test.sql
I notice that part of the output of running the above command says the following:
CREATE TRIGGER
CREATE FUNCTION
CREATE SEQUENCE
CREATE INDEX
What I actually want:
All I want is the tables themselves and not these triggers, functions, sequences and indexes. How do I get only the tables?
Other things I have tried/considered:
I have tried doing this:
pg_dump -U postgres -s --schema=\dtmvE test > dump_test.sql
but it didn't work because the pattern needs to be a schema name, not a \d pattern.
See here: https://www.postgresql.org/docs/13/app-pgdump.html for information on the -n pattern option.
One thing that may solve it is to use multiple switches like this:
pg_dump -t mytable1 -t mytable2 -t mytable3 ... -t mytableN test > dump_test.sql
However, the above solution is impractical because I have some 70+ tables on my database.
Other relevant info:
PostgreSQL version is 13.1
Ubuntu version v16.04 (I have also tried this on Ubuntu v18.04)

I would dump everything with a custom format schema-only dump (-F c -s) and run pg_restore -l on the resulting dump. That gives you a table of contents. Delete everything except the tables from that file and use it as input to pg_restore -L to restore exactly those items from the archive that you need.
This may not be as simple as you had hoped for, but it is certainly simpler than writing tons of -t options, and you may be able to automate it.
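A minimal sketch of that workflow, assuming the database is called test as in the question (the dump and list file names are just placeholders):
pg_dump -U postgres -F c -s -f test_schema.dump test
pg_restore -l test_schema.dump > toc.list
pg_restore -L toc.list -U postgres -d new_amazing_test test_schema.dump
Before the last step, edit toc.list and delete (or comment out with a leading semicolon) every entry that is not a TABLE entry.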

You can use the --section flag, as described in the PostgreSQL documentation:
--section=sectionname
Only dump the named section. The section name can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.
The data section contains actual table data, large-object contents, and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.
example:
pg_dump --schema-only --section=pre-data
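A complete command along those lines, assuming the database names from the question (the dump is then restored with an ordinary psql run):
pg_dump -U postgres --schema-only --section=pre-data test > dump_test.sql
psql -U postgres -d new_amazing_test -f dump_test.sql
Per the documentation quoted above, pre-data still includes all non-table definitions other than indexes, triggers, rules and constraints, so functions and sequence definitions in the dumped schemas will still appear in the output.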

Related

How To Restore Specific Schema From Dump file in PostgreSQL?

I have a dump file (around 5 GB in size) which was taken via this command:
pg_dump -U postgres -p 5440 MYPRODDB > MYPRODDB_2022.dmp
The database consists of multiple schemas (let's say schemas A, B, C and D), but I need to restore only one schema (schema A).
How can I achieve that? The command below didn't work and gave an error:
pg_restore -U postgres -d MYPRODDB -n A -p 5440 < MYPRODDB_2022.dmp
pg_restore: error: input file appears to be a text format dump. Please use psql.
You cannot do that with a plain format dump. That's one of the reasons why you always use a different format unless you need an SQL script.
If you want to stick with a plain text dump:
pg_dump -U postgres -p 5440 -n A MYPRODDB > MYPRODDB_2022.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022.dmp
Though restoring back over the same database as above will throw errors unless you use --clean (or its short form -c) to create commands that drop existing objects before recreating them:
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)
This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.
Probably also a good idea to throw in --if-exists:
--if-exists
Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.
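If you can take the dump again, a sketch of the archive-format route (the file name is just a placeholder):
pg_dump -U postgres -p 5440 -F c -f MYPRODDB_2022.dump MYPRODDB
pg_restore -U postgres -p 5440 -d MYPRODDB --clean --if-exists -n A MYPRODDB_2022.dump
With a custom-format archive, pg_restore can pick out schema A with -n, and --clean/--if-exists are then given to pg_restore instead of pg_dump.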

Constraints missing after pg_restore

After dumping a table and importing it into another Postgres DB, the constraints are missing.
I'm using this to dump:
pg_dump --host=local --username=user -W --encoding=UTF-8 -j 10 --file=dump_test --format=d -s --dbname=mydb -t addendum
This to import:
pg_restore -d myOtherdb --host=local -n public --username=user -W --exit-on-error --format=d -j 10 -t addendum dump_test/
What I can see in the resulting toc.dat is something like this:
ADD CONSTRAINT pk_addendum PRIMARY KEY (addendum_id);
> ALTER TABLE ONLY public.addendum DROP CONSTRAINT pk_addendum;
That looks like it's creating and destroying the PK, but I'm not sure if my interpretation is correct, as the file is binary.
edit: I'm using PostgreSQL 9.3
From the documentation:
Note: When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
You thus have some admittedly unattractive choices:
You can rebuild the constraints manually, especially if you still have the DDL which created them.
You can do a database-wide pg_dump to text and obtain the constraint DDL from there, then proceed as in option 1 (a sketch follows this list).
You can do a database-wide pg_dump, and restore it fully.
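A sketch of the second option, using the host, user and database names from the question (the grep pattern is only illustrative):
pg_dump --host=local --username=user -s mydb > mydb_schema.sql
grep -B1 'ADD CONSTRAINT pk_addendum' mydb_schema.sql
The ALTER TABLE ... ADD CONSTRAINT statements found this way can then be run against the target database with psql.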
I had the situation where the table already exists but using pg_restore deleted the constraints of the table.
There is an accepted answer already, but I will try to provide an answer for the case where the table to be restored already exists. In that case the constraints are dropped only if you are trying to drop and recreate the table (-c or -C). If you only want the data from the dump, you can delete all records from the table (DELETE FROM tableName) and then use pg_restore with the -a flag, leaving the -c or -C flag out of your pg_restore command.
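A sketch of that approach, reusing the names from the question (it assumes the dump actually contains data; the dump command in the question was schema-only):
psql -d myOtherdb --host=local --username=user -c 'DELETE FROM public.addendum'
pg_restore -d myOtherdb --host=local --username=user -a --format=d -t addendum dump_test/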
A little late to the party but here's something that may help.
If you're restoring a single table from a large dump file and having trouble getting the indexes with pg_restore (-t doesn't do indexes and constraints):
pg_restore db_dump_file.dump | awk '/table_name/{nr[NR]; nr[NR+1]}; NR in nr' > table_name_indexes_tmp.psql
For indexes and constraints you also need the line after each match; the awk command above prints every matching line plus the one that follows it.
This output file should contain your indexes (assuming the dump file actually contains them, plus data). Then you can apply them back to the table you restored as individual commands.
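For example, assuming the restored table lives in a database called mydb, the extracted statements can then be applied with:
psql -d mydb -f table_name_indexes_tmp.psql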
Not a perfect solution but better than trying to re-create them manually.

How to import data from a particular table from a large postgres file?

I want to import data just for particular tables in Postgres. How can I do that?
I've tried the following command, but it didn't work:
pg_dump -U postgres -a -d -t data_pptlconfig db_name > db_file
Assuming you mean "a dump" when you say a "postgres file":
If it's an SQL format dump, you'd have to extract the part you want with a text editor and run just that part. The dump is essentially an SQL "program" to re-create the database, so there's really no other way to selectively restore bits of it.
If it's a custom-format dump, you can use pg_restore with the -t flag.
Run file the-dump-file to find out which it is if you do not know. Or look at the file with a text editor: if the first five bytes are PGDMP, it's a PostgreSQL custom-format dump; otherwise it'll be an SQL format dump.
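For example (the dump file name is a placeholder, the table and database names are taken from the question):
file the-dump-file
pg_restore -U postgres -d db_name -a -t data_pptlconfig the-dump-file
The first command reports the file type; the second restores only the data (-a) for that one table (-t) from a custom-format dump.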

PostgreSQL: How do I backup database with name A and load it to database with name B?

I have two databases on the same server, one named A and one named B. Both databases have the same structure. I want to empty database B and load it with data from database A. What is the best way to do this?
I have tried taking a backup of database A in plain format, then opening the resulting SQL file, replacing every occurrence of 'A' with 'B', and running the SQL script. This worked, but I think there should be an easier way to move data from one database to another. Is there?
I use 'pgAdmin III' as my tool, but this is not necessary.
This is my first post here; I hope the question is relevant and structured well enough. I tried Google first but found it hard to find anyone with the same question.
Thanks in advance!
/David
SOLUTION: After help from Craig, this is how I did it:
pg_dump -Fc -a -f a.dbbackup A
psql -d B -c 'TRUNCATE table1, table2, ..., tableX CASCADE'
pg_restore a.dbbackup -d B -c (not sure if -c was necessary)
Backup:
pg_dump -Fc -f a.dbbackup A
Restore:
psql -c 'CREATE DATABASE b;'
pg_restore --dbname b a.dbbackup
Use the -U, -h etc options as required to connect to the correct host as the correct user with permissions to dump, create and restore the DB. See the docs for psql, pg_dump and pg_restore for more info (they all take the same options for connection control).
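For example, a sketch with placeholder host and user names:
pg_dump -h db.example.com -U dbadmin -Fc -f a.dbbackup A
psql -h db.example.com -U dbadmin -c 'CREATE DATABASE b;'
pg_restore -h db.example.com -U dbadmin --dbname b a.dbbackup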

How to exclude PL/pgSQL functions in export?

I use the following command to dump some structures from the server's database, to be able to create a sample of the data on my local hard drive.
pg_dump -h myserver.com -U product_user -s -f ./data/base.structure.postgresql.sql -F p -v -T public.* -T first_product.* -T second_product.* -T another_product.locales mydatabase
I need to exclude some schemas, otherwise it would end up with permission or other errors. Even though I exclude the schema public, it dumps all functions in that schema, like this:
REVOKE ALL ON FUNCTION gin_extract_trgm(text, internal) FROM PUBLIC;
psql:./data/base.structure.postgresql.sql:8482: ERROR: function gin_extract_trgm(text, internal) does not exist
I know this comes from the fulltext or similarity plugin in PostgreSQL, but I don't use it and don't need it on my machine, so I'd like to exclude these functions.
How could I do that?
There is a way to do it. Say your backup is named backup.dump. What you need to do is:
$ pg_restore -l -f out.txt backup.dump
That will create a file out.txt that contains a list of objects that are in the dump. You need to edit the file and delete the items you don't want restored. Then you do this:
$ pg_restore -L out.txt -h your.host.name -U username .... backup.dump
This will use a file out.txt (that you edited) to select the things that will be restored. Pretty handy especially in case the dump is large and you cannot re-dump the database.
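If the list is long, the unwanted entries can also be filtered out non-interactively; for example (the pattern is only illustrative), to drop all FUNCTION entries:
$ grep -v ' FUNCTION ' out.txt > out_filtered.txt
$ pg_restore -L out_filtered.txt -h your.host.name -U username .... backup.dump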
I need to exclude some schemas
pg_dump has a switch to exclude schemas:
pg_dump -N schema ...
I quote the manual about pg_dump:
-N schema
--exclude-schema=schema
Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be given more than once to exclude schemas matching any of several patterns.
...
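Applied to the command from the question, that might look like this (the schema list is only an example):
pg_dump -h myserver.com -U product_user -s -f ./data/base.structure.postgresql.sql -F p -v -N public -N first_product -N second_product mydatabase
Unlike -T, which skips only the matching tables, -N skips everything in the excluded schemas, including their functions.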
With PostgreSQL 9.1 or later you have new options to move extensions into a separate schema - even pre-installed old-style modules. You can register old objects with your (new-style) extension and then use the new tools. With fulltext and similarity you probably mean fuzzystrmatch and tsearch2. Example:
Register existing old-style objects for the extension fuzzystrmatch:
CREATE EXTENSION fuzzystrmatch SCHEMA public FROM unpackaged;
Drop the extension:
DROP EXTENSION fuzzystrmatch;
Install it to another schema:
CREATE EXTENSION fuzzystrmatch SCHEMA my_schema;
Of course, you cannot drop the extension, if objects from it are in use.
Also, if you install to another schema, you need to schema-qualify its functions in use or add the schema to the search_path.
In addition to the answer from Bartosz above, you can use the following sed command to remove e.g. a certain FUNCTION from the list before restoring:
sed -r -i -e '/FUNCTION public plpgsql_call_handler\(\) postgres/d' /var/backup/${DBNAME}.list