After dumping a table and importing it to another postgres db constraints are missing.
I'm using this to dump:
pg_dump --host=local --username=user -W --encoding=UTF-8 -j 10 --file=dump_test --format=d -s --dbname=mydb -t addendum
This to import:
pg_restore -d myOtherdb --host=local -n public --username=user -W --exit-on-error --format=d -j 10 -t addendum dump_test/
What I can see in the resulting toc.dat is something like this:
ADD CONSTRAINT pk_addendum PRIMARY KEY (addendum_id);
> ALTER TABLE ONLY public.addendum DROP CONSTRAINT pk_addendum;
That looks like it's creating and then dropping the PK, but I'm not sure if my interpretation is correct, as the file is binary.
edit: I'm using PostgreSQL 9.3
From the documentation:
Note: When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
You thus have some admittedly unattractive choices:
You can rebuild the constraints manually, especially if you still have the DDL which created them.
You can do a database-wide pg_dump to text, obtain the constraint DDL from there, then proceed as in option 1 (see the sketch after this list).
You can do a database-wide pg_dump, and restore it fully.
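A minimal sketch of option 2, reusing the connection details from the question (adjust names and paths as needed):
pg_dump --host=local --username=user -W -s --dbname=mydb > mydb_schema.sql
grep -B1 'ADD CONSTRAINT' mydb_schema.sql   # shows the ALTER TABLE ... ADD CONSTRAINT statements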
I had a situation where the table already existed, but running pg_restore deleted the table's constraints.
There is an accepted answer already, but I will try to provide an answer for cases where the table to be restored already exists. In such cases the constraints are deleted only if you ask pg_restore to drop and recreate the table (-c or -C). If you only want the data from the dump, you can instead delete all records from the table (DELETE FROM tableName) and then run pg_restore with the -a (data-only) flag, leaving -c and -C out of your pg_restore command.
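For example, a sketch using the table and databases from the question above (flags assume the directory-format dump shown there):
psql -d myOtherdb --username=user -c 'DELETE FROM addendum'
pg_restore -d myOtherdb --host=local --username=user -W -a --format=d -t addendum dump_test/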
A little late to the party but here's something that may help.
If you're restoring a single table from a large dump file and having trouble getting the indexes with pg_restore (-t doesn't restore indexes and constraints), try:
pg_restore db_dump_file.dump | awk '/table_name/{nr[NR]; nr[NR+1]}; NR in nr' > table_name_indexes_tmp.psql
For indexes and constraints you also need the line that follows each match; the awk command above prints every matching line plus the one after it.
This output file should contain your indexes (assuming the dump file actually contains them, plus data). Then you can apply them back to the table you restored as individual commands.
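For example, after reviewing the generated file you could apply it with psql (the database name here is just a placeholder):
psql -U postgres -d restored_db -f table_name_indexes_tmp.psql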
Not a perfect solution but better than trying to re-create them manually.
I have a dump file (size around 5 GB) which is taken via this command:
pg_dump -U postgres -p 5440 MYPRODDB > MYPRODDB_2022.dmp
The database consists of multiple schemas (let's say schemas A, B, C and D), but I need to restore only one schema (schema A).
How can I achieve that? The command below didn't work and gave an error:
pg_restore -U postgres -d MYPRODDB -n A -p 5440 < MYPRODDB_2022.dmp
pg_restore: error: input file appears to be a text format dump. Please use psql.
You cannot do that with a plain format dump. That's one of the reasons why you always use a different format unless you need an SQL script.
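For example, with a custom-format dump you can restore a single schema directly. A sketch reusing the names from the question:
pg_dump -U postgres -p 5440 -F c -f MYPRODDB_2022.dump MYPRODDB
pg_restore -U postgres -p 5440 -d MYPRODDB -n A MYPRODDB_2022.dump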
If you want to stick with a plain text dump:
pg_dump -U postgres -p 5440 -n A MYPRODDB > MYPRODDB_2022.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022.dmp
Though dumping back over the same database as above will throw errors unless you use --clean (or its short form -c) to emit commands that drop existing objects before restoring them:
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)
This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.
Probably also a good idea to throw in --if-exists:
--if-exists
Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.
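Putting it together, a sketch of the plain-text round trip with those options:
pg_dump -U postgres -p 5440 -n A --clean --if-exists MYPRODDB > MYPRODDB_2022.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022.dmp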
What I want:
I want a pg_dump of a database (let's call the database 'test').
In this pg_dump I want only the tables, without the following: data, triggers, functions, sequences, etc.
What I am doing to get what I want:
The command I run is as follows:
pg_dump -U postgres -s test > dump_test.sql
What I am observing:
Then when I try to restore this dump on another server as follows:
psql -U postgres new_amazing_test < dump_test.sql
I notice that part of the output of running the above command says the following:
CREATE TRIGGER
CREATE FUNCTION
CREATE SEQUENCE
CREATE INDEX
What I actually want:
All I want is the tables themselves, not these triggers, functions, sequences and indexes. How do I get only the tables?
Other things I have tried/considered:
I have tried doing this:
pg_dump -U postgres -s --schema=\dtmvE test > dump_test.sql
but it didn't work because the pattern needs to be a name, not a \d pattern.
See here: https://www.postgresql.org/docs/13/app-pgdump.html for information on -n pattern option.
One thing that may solve it is to use multiple switches like this:
pg_dump -t mytable1 -t mytable2 -t mytable3 ... -t mytableN > dump_test.sql
However, the above solution is impractical because I have some 70+ tables on my database.
Other relevant info:
PostgreSQL version is 13.1
Ubuntu version v16.04 (I have also tried this on Ubuntu v18.04)
I would dump everything with a custom format schema-only dump (-F c -s) and run pg_restore -l on the resulting dump. That gives you a table of contents. Delete everything except the tables from that file and use it as input to pg_restore -L to restore exactly those items from the archive that you need.
This may not be as simple as you had hoped for, but it is certainly simpler than writing tons of -t options, and you may be able to automate it.
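A sketch of that workflow, using the database name from the question (file names are just examples):
pg_dump -U postgres -F c -s -f test_schema.dump test
pg_restore -l test_schema.dump > toc.list
# edit toc.list so that only the TABLE entries remain, for example (double-check the result by hand):
grep ' TABLE ' toc.list > tables.list
pg_restore -U postgres -d new_amazing_test -L tables.list test_schema.dump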
You can use the --section flag, as described in the PostgreSQL documentation:
--section=sectionname
Only dump the named section. The section name can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.
The data section contains actual table data, large-object contents, and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.
example:
pg_dump --schema-only --section=pre-data
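For the database from the question, that would be something like:
pg_dump -U postgres --schema-only --section=pre-data test > dump_test.sql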
I'm trying to restore a large table:
pg_restore.exe -U postgres -d db_name --clean --if-exists --single-transaction F:\Backups\PostgreSQL\data.dump.gz
So the table is locked against reads for a few minutes. How can I restore the data with zero downtime for reading? I only need reads to keep working.
You would need to not do the --clean and instead do --data-only, but then do a DELETE from tablename inside the same transaction, before the COPY. I don't think there is a way to make pg_restore do this for you, but you could dump the output of pg_restore to a file and edit it, or use something like sed or perl to inject the DELETE.
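If you go the edit-a-file route, a rough sketch (file names are examples):
pg_restore --data-only -f restore_data.sql your_dump_file
# edit restore_data.sql and add "DELETE FROM tablename;" just before the COPY statement
psql -U postgres -d db_name -1 -f restore_data.sql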
The one-liner below should work for table names that don't need to be quoted, and assuming none of the data being copied has a first column starting with 'COPY ':
pg_restore --data-only --single-transaction dmp.dmp -f - | perl -pe 's/^COPY ([\w.]+)/delete from $1; copy $1/' | psql -U postgres -d db_name
However, your schema-changing method doesn't seem so dirty to me. It still requires a momentary ACCESS EXCLUSIVE lock, so it isn't really zero downtime, but it might be unnoticeable downtime if it can acquire that lock quickly enough.
So I made a backup of a table using pg_dump:
pg_dump -U bob -F c -d commerce -t orders > orders.dump
This table had several indexes listed, such as a primary key.
However, when I restore this table into a development database on another system using pg_restore:
pg_restore -U bob -d commerce -t orders < orders.dump
No primary key or indexes are listed
What am I doing wrong?
You are doing nothing wrong; unfortunately, pg_restore -t restores only the table and nothing else, regardless of how you created the dump and what is inside it.
This was clarified somewhat in the PostgreSQL 12 docs, which state:
This flag does not behave identically to the -t flag of pg_dump. There is not currently any provision for wild-card matching in pg_restore, nor can you include a schema name within its -t. And, while pg_dump's -t flag will also dump subsidiary objects (such as indexes) of the selected table(s), pg_restore's -t flag does not include such subsidiary objects.
The only way to make sure that restoring a table also carries over its indexes is to address them by name, something like:
pg_restore -U bob -d commerce -t orders -I index1 -I index2 -I index3 < orders.dump
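If you don't remember the index names, you can list them from the archive's table of contents first (a sketch; primary keys show up as CONSTRAINT entries):
pg_restore -l orders.dump | grep -E 'INDEX|CONSTRAINT'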
I have two databases on the same server, one named A and one named B. Both databases have the same structure. I want to empty database B and load it with data from database A. What is the best way to do this?
I have tried taking a backup of database A in plain format, then opening the resulting SQL file, replacing every occurrence of 'A' with 'B' and running the SQL script. This worked, but I think there should be an easier way to move data from one database to another. Is there?
I use 'pgAdmin III' as my tool, but this is not necessary.
This is my first post here, hope the question is relevant and structured well enough. I tried google first but found it hard to find anyone with the same question.
Thanks in advance!
/David
SOLUTION: After help from Craig, this is how I did it:
pg_dump -Fc -a -f a.dbbackup A
psql -c 'TRUNCATE table1, table2, ..., tableX CASCADE'
pg_restore a.dbbackup -d B -c (not sure if -c was necessary)
Backup:
pg_dump -Fc -f a.dbbackup A
Restore:
psql -c 'CREATE DATABASE b;'
pg_restore --dbname b a.dbbackup
Use the -U, -h etc options as required to connect to the correct host as the correct user with permissions to dump, create and restore the DB. See the docs for psql, pg_dump and pg_restore for more info (they all take the same options for connection control).
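For example, a sketch with explicit connection options (host and user here are placeholders):
pg_dump -h localhost -U postgres -Fc -f a.dbbackup A
psql -h localhost -U postgres -c 'CREATE DATABASE b;'
pg_restore -h localhost -U postgres --dbname b a.dbbackup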