Copy only constraint keys from one database to another in pgAdmin

I have two servers (and, in pgAdmin, one database on each server):
The first server is for testing, and the second is the real server that the client will work on.
The two databases have the same tables: 9 tables on the first one and the same 9 tables on the second.
But on the test server I have constraints (check, foreign key, unique, not null) that I have tested and that are ready to be deployed on server 2.
How can I copy only those constraints?
Thank you so much.

The simplest way to do this is to use the command-line pg_dump tool, which is included in every PostgreSQL installation.
Create a backup of the target database:
$ pg_dump -h /var/run/postgresql/ -d prod -Fd -f prod.pg_dump
Create a backup of the source database schema:
$ pg_dump -h /var/run/postgresql/ -d test --section=pre-data --section=post-data -Fd -f test_schema.pg_dump
Drop the target database:
postgres=# drop database prod;
Recreate empty target database:
postgres=# create database prod;
Import to the target database:
test table definitions:
$ pg_restore -d prod --section=pre-data test_schema.pg_dump
prod data:
$ pg_restore -d prod --section=data prod.pg_dump
test constraints and indexes:
$ pg_restore -d prod --section=post-data test_schema.pg_dump
Analyze the target database:
prod=> analyze;
This can be sped up considerably by restoring in parallel with pg_restore's -j option.
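Putting the steps above together as one session (a sketch reusing the database names and dump paths from the commands above; the -j 4 parallelism level is an arbitrary example):

```shell
# 1. Full backup of the target (schema + data, directory format)
$ pg_dump -h /var/run/postgresql/ -d prod -Fd -f prod.pg_dump

# 2. Schema-only backup of the source (pre-data and post-data sections)
$ pg_dump -h /var/run/postgresql/ -d test --section=pre-data --section=post-data -Fd -f test_schema.pg_dump

# 3. Drop and recreate the target database
$ psql -d postgres -c 'DROP DATABASE prod'
$ psql -d postgres -c 'CREATE DATABASE prod'

# 4. Restore: tested table definitions, old data, then tested constraints and indexes
$ pg_restore -d prod --section=pre-data test_schema.pg_dump
$ pg_restore -d prod -j 4 --section=data prod.pg_dump
$ pg_restore -d prod -j 4 --section=post-data test_schema.pg_dump

# 5. Refresh planner statistics
$ psql -d prod -c 'ANALYZE'
```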
You might consider a proper schema change management solution for the future.

Related

How to create a table from backup file in Postgres?

After making a schema change to the table message in a PostgreSQL 13 database, the table was backed up in pgAdmin 4 to a file named message.sql. The schema change needs to be propagated to another database. What I did was drop the table message in that database. But I am having a hard time re-creating the table message from the backup file messages.sql (not by using CREATE TABLE message ...). I used pg_restore to restore the data, but was not successful in re-creating the table schema. Here are the commands I tried, with no luck:
pg_restore --data-only -h localhost -U postgres -W -d dynamo -t messages /home/download/messages.sql
pg_restore --clean -h localhost -U postgres -W -d dynamo -t messages /home/download/messages.sql
I also tried creating a blank table (no columns) called messages in the database and repeating the above commands, again without luck.
Here is how a new table is added to the current database:
Back up the current db on the server with pg_dump.
Create the new database on the dev PC, with the new table added.
On the dev PC, use pgAdmin to restore the backup file from step 1 into the new database created in step 2.
In pgAdmin, back up the db from step 3 to a .backup file.
On the server, use pg_restore to overwrite the current db with the backup file created in step 4:
pg_restore --clean -U postgres --dbname=mydb -W -h localhost --verbose /home/download/mydb.backup
Open a psql terminal to verify the new table.
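The server-side steps of that workflow look like this as commands (a sketch; mydb and the file paths are placeholders, and steps 2–4 happen in pgAdmin's GUI on the dev PC):

```shell
# Step 1: back up the current db on the server (custom format)
$ pg_dump -U postgres -h localhost -F c -f /home/download/mydb.backup mydb

# Steps 2-4: on the dev PC, create the new database with the added table,
# restore the step-1 backup into it, then back it up again to mydb.backup.

# Step 5: on the server, overwrite the current db from the new backup
$ pg_restore --clean -U postgres --dbname=mydb -W -h localhost --verbose /home/download/mydb.backup

# Step 6: verify the new table is there
$ psql -U postgres -d mydb -c '\dt'
```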

If I drop my heroku table, can I later restore the database with a Heroku backup?

I'm having some issues with my migrations in Heroku. I've tried several things, but none seem to work.
I feel like I should just drop the table that's causing the issue, which will delete all the data in production.
If I drop the table and re-create it, will I be able to restore all of the data I lost? I will back up my database on Heroku before I drop the table.
Thanks!
You should run a backup with
pg_dump -h hostname -p 5432 -U username -F c -t mytable -f dumpfile mydatabase
Then, after you have dropped and re-created the table, you can restore the data with
pg_restore -h hostname -p 5432 -U username -a -d mydatabase dumpfile
However, this will not work if the table structure has changed.
In that case, you might want to use COPY directly to write the data to a file and restore them from there.
Let's for example assume you plan to add another column. Then you could dump with
COPY (SELECT *, NULL FROM mytable) TO '/file/on/dbserver';
After the table was created with the new column, you can
COPY mytable FROM '/file/on/dbserver';
The new column will be filled with NULL values.
Modify this basic recipe for fancier requirements.
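For example, if the planned change is one added column (say a text column `note` — the name and types here are made up for illustration), the recipe looks like this:

```sql
-- dump the existing rows, with a NULL placeholder for the future column
COPY (SELECT *, NULL FROM mytable) TO '/file/on/dbserver';

-- drop and re-create the table with the extra column
DROP TABLE mytable;
CREATE TABLE mytable (id integer, payload text, note text);

-- reload; the trailing NULL lands in the new column
COPY mytable FROM '/file/on/dbserver';
```

Note that COPY TO/FROM a server-side file path runs as the database server user, so the file must be readable and writable by the server process.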

How to pg_restore one table and its schema from a Postgres dump?

I am having some difficulty restoring the schema of a table. I dumped my Heroku Postgres db and used pg_restore to restore one table from it into my local db (which has more than 20 tables). It was successfully restored, but I ran into issues when I tried to insert new data into the table.
When I opened the database in psql, I found that the restored table is there with all its data, but its schema has zero rows. Is there any way I can import both the table and its schema from the dump? Thank you very much.
This is how I restored the table into my local db:
pg_restore -U postgres --dbname my_db --table=message latest.dump
Edit:
I tried something like the following, based on the official docs, but it just blocks and nothing happens. My db is small, no more than a couple of megabytes, and the table whose schema I am trying to restore has no more than 100 rows.
pg_restore -U postgres --dbname mydb --table=message --schema=message_id_seq latest.dump
As a more general answer (I needed to restore a single table from a huge backup), you may want to take a look at this post: https://thequantitative.medium.com/restoring-individual-tables-from-postgresql-pg-dump-using-pg-restore-options-ef3ce2b41ab6
# run the schema-only restore as root
pg_restore -U postgres --schema-only -d new_db /directory/path/db-dump-name.dump
# Restore per table data using something like
pg_restore -U postgres --data-only -d target-db-name -t table_name /directory/path/dump-name.dump
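If only that one table's definition and data are needed (rather than the whole schema first), the two passes above can be narrowed with -t — a sketch reusing the names from the question. Be aware that with -t, pg_restore makes no attempt to restore objects the table depends on, so an owned sequence such as message_id_seq may need a separate restore or manual re-creation:

```shell
# restore just the table definition
$ pg_restore -U postgres --schema-only -d my_db -t message latest.dump
# then just its rows
$ pg_restore -U postgres --data-only -d my_db -t message latest.dump
```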
From the Heroku DevCenter:
Heroku Postgres is integrated directly into the Heroku CLI and offers
many helpful commands that simplify common database tasks
You can check here if your environment is correctly configured.
In this way, you can use the Heroku CLI pg:pull command to pull remote data from a Heroku Postgres database to a local database on your machine.
For example:
$ heroku pg:pull HEROKU_POSTGRESQL_MAGENTA mylocaldb --app sushi

Can I force usage of a specific tablespace for a single pg_restore task?

Is there some clever way to force a specific tablespace for a pg_restore task, in a situation where I need to run several pg_restore tasks independently in parallel and need to direct some of them into a tablespace on SSD and others into specific tablespaces on other standard disks?
Our use case: every night we need to copy daily partitions into a new warehouse database. Standard pg_dump / pg_restore is currently used (logical replication is currently not possible for internal policy reasons, although it would be highly desirable).
Several pg_restore tasks run in parallel on the target database, and I need to set a specific target tablespace per task, so the global "default_tablespace" setting does not help. I also cannot reconfigure the source database to have the proper tablespace directly in the dump, not to mention that as the warehouse DB grows I would need to change target tablespaces from time to time.
Originally I thought the PG environment variable "PGTARGETSESSIONATTRS" could help me set "default_tablespace" for a specific pg_restore session, but it looks like this variable cannot do that.
Databases are PG 10 (source) and PG 11 (target).
Based on a comment from @a_horse_with_no_name, I tested this sequence of commands, which does what I need:
pg_dump -U user1 -v -F c --no-tablespaces -t schema.table database1 -f exportfile
pg_restore -U user2 -v --no-tablespaces -d database2 --schema-only -c exportfile
psql -U user2 -d database2 -c "alter table schema.table set tablespace xxxx"
pg_restore -U user2 -v --no-tablespaces -d database2 --data-only exportfile
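Wrapped into a small shell function, the same sequence can be launched once per partition, each run targeting its own tablespace — a sketch; the function name, dump file names, partition names, and tablespace names are all made up for illustration:

```shell
# restore_to_tablespace <dumpfile> <schema.table> <tablespace>
# schema restore -> move table to the wanted tablespace -> data restore
restore_to_tablespace() {
    pg_restore -U user2 -v --no-tablespaces -d database2 --schema-only -c "$1"
    psql -U user2 -d database2 -c "ALTER TABLE $2 SET TABLESPACE $3"
    pg_restore -U user2 -v --no-tablespaces -d database2 --data-only "$1"
}

# e.g. run several restores in parallel, each into its own tablespace:
#   restore_to_tablespace export_hot  schema.part_hot  ssd_space &
#   restore_to_tablespace export_cold schema.part_cold hdd_space &
#   wait
```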

postgres copy tables from another database using pg_dump?

I would like to copy two tables from database A to database B, in postgres
how can I do it using pg_dump without losing the previous tables and data in database B ?
I read some answers on Stack Overflow suggesting pg_dump, but on the documentation page I read:
The idea behind this dump method is to generate a text file with SQL
commands that, when fed back to the server, will recreate the database
in the same state as it was at the time of the dump
Doesn't that mean it will delete the previous data in database B?
If someone could give me a step-by-step solution for moving two tables from database A to database B without losing any previous data in database B, it would be helpful.
I found the answer to my question:
sudo -u OWNER_USER pg_dump -t users databasename1 | sudo -u OWNER_USER psql databasename2
If you pg_restore database a into database b, a will of course replace b. Instead, pick the specific tables you would like to restore using pg_restore -t.
You can also pg_restore into a different schema using -n, and skip ownership with -O (--no-owner).
So, let's say:
pg_dump -Fc -f dump.dmp -v -h host -U user_login -n schema_to_dump
you can
pg_restore -v -h host -U user_login -n schema_to_import -a --disable-triggers dump.dmp
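To copy exactly two tables (as in the original question), -t can be given to pg_dump twice and the plain-format output piped straight into the target database — a sketch extending the one-table pipe above, with `users` and `orders` as example table names:

```shell
# dump only the two tables from database A and load them into database B;
# existing tables and data in B are left untouched
$ pg_dump -t users -t orders databasename1 | psql databasename2
```

Because the dump contains only those two tables' CREATE TABLE and COPY statements, nothing else in database B is dropped or overwritten.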