Move data from a postgres database to another database with a different owner - postgresql

I have two postgres databases with the same structure and tables, hosted on the same server. The databases are owned by different users:
database1: owner1
database2: owner2
I want to know the best way to copy the content of database1 into database2 (overriding the original content of database2).
I tried pg_dump & pg_restore, but the dump explicitly specifies owner1 as the owner of the tables, so I get permission issues when trying to read the data from database2 as owner2. I had to manually re-grant all privileges on database2 to owner2 and set the owner of all tables back to owner2.
My approach:
pg_dump database1 > database1.psql
postgres=# drop database database2;
postgres=# create database database2;
psql -d database2 -f database1.psql
Is there a simpler way to copy the data from database1 to database2 without having to update the user permissions manually after the restore?

Yes, you can tell pg_dump not to export ownership or privileges:
-O (or --no-owner): do not output commands to set object ownership
-x (or --no-privileges): do not dump access privileges (GRANT/REVOKE commands)
pg_dump db_name -O -x > output_file
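Putting it together for the scenario in the question, a minimal sketch (database and role names are taken from the question; it assumes owner2 can connect and that nobody else is connected to database2 when it is dropped):

```shell
# Dump database1 without ownership or privilege commands
pg_dump -O -x database1 > database1.psql

# Recreate database2 owned by owner2
dropdb database2
createdb -O owner2 database2

# Restore while connected as owner2; since the dump contains no
# ALTER ... OWNER or GRANT/REVOKE statements, all restored objects
# end up owned by the connecting role, owner2
psql -U owner2 -d database2 -f database1.psql
```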

Related

If I drop my heroku table, can I later restore the database with a Heroku backup?

I'm having some issue with my migrations in Heroku. I've tried several things but none seem to work.
I feel like I should just drop the table that's causing the issue, which will delete all the data in production.
If I drop the table and re-create it, will I be able to restore all of the data I lost? I will back up my database on Heroku before I drop the table.
Thanks!
You should run a backup with
pg_dump -h hostname -p 5432 -U username -F c -t mytable -f dumpfile mydatabase
Then, after you have dropped and re-created the table, you can restore the data with
pg_restore -h hostname -p 5432 -U username -a -d mydatabase dumpfile
However, this will not work if the table structure has changed.
In that case, you might want to use COPY directly to write the data to a file and restore them from there.
Let's for example assume you plan to add another column. Then you could dump with
COPY (SELECT *, NULL FROM mytable) TO '/file/on/dbserver';
After the table was created with the new column, you can
COPY mytable FROM '/file/on/dbserver';
The new column will be filled with the NULL values.
Modify this basic recipe for more fancy requirements.
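As a concrete sketch of that recipe (the table definition and file path are hypothetical; server-side COPY writes files on the database host and requires superuser rights):

```shell
psql -d mydatabase <<'SQL'
-- dump the existing rows plus a NULL placeholder for the future column;
-- this assumes the original table has columns (id integer, name text)
COPY (SELECT *, NULL FROM mytable) TO '/tmp/mytable.copy';

-- recreate the table with the new column
DROP TABLE mytable;
CREATE TABLE mytable (id integer, name text, new_col text);

-- reload; new_col is filled from the NULL placeholder
COPY mytable FROM '/tmp/mytable.copy';
SQL
```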

postgres: taking backup and restore as a different user: should I use both --username and --role along with --no-owner?

I have read a few posts regarding this topic.
Currently I back up as follows; the owner of the database is owner1:
pg_dump --username=postgres -Fc dbname -f db_name.dump
I heard that any superuser (here postgres), or even a user with read permission, can take a backup. It need not be the owner of the database.
Now I want to restore with a different user who is also a superuser (we will call him owner2).
I have a database called owner2_db, created by owner2:
pg_restore -v -d owner2_db --no-owner --username=owner2 --role=owner2 db_name.dump
In some places I saw that they don't use --role=owner2, so what's the correct way in my case?
If you connect to the database as role x, there is no need to run an extra SET ROLE x, so you can omit the --role option. It is useful if you want the objects to be owned by a NOLOGIN role.
When restoring as a different user, and the original roles don't exist on the destination cluster, you may want to use the -x and -O options.
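For the scenario above, that suggests a sketch without the redundant --role (names are from the question):

```shell
# Restore as owner2; --no-owner makes restored objects owned by the
# connecting role (owner2), and -x skips GRANT/REVOKE commands that
# may reference roles that don't exist on this cluster
pg_restore -v -d owner2_db --no-owner -x --username=owner2 db_name.dump
```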

can I force usage of a specific tablespace for a single pg_restore task?

Is there some clever way to force a specific tablespace for a pg_restore task, in a situation where I need to run several pg_restore tasks independently in parallel and would need to direct some of them into a tablespace on SSD and others into specific tablespaces on other standard disks?
Our use case: every night we need to copy daily partitions into a new warehouse database. Standard pg_dump / pg_restore is currently used (logical replication is currently not possible for internal policy reasons, although it would be highly desirable).
Multiple pg_restore tasks run in parallel on the target database, and I need to set a specific target tablespace for each task, so the global "default_tablespace" setting does not help. I also cannot reconfigure the source database to have the proper tablespace directly in the dump, not to mention that as the warehouse DB grows I would need to change target tablespaces from time to time.
Originally I thought the PG environment variable "PGTARGETSESSIONATTRS" could help me set "default_tablespace" for a specific pg_restore session, but it looks like this variable cannot do that.
Databases are PG 10 (source) and PG 11 (target).
Based on a comment from a_horse_with_no_name, I tested this sequence of commands, which does what I need:
pg_dump -U user1 -v -F c --no-tablespaces -t schema.table database1 -f exportfile
pg_restore -U user2 -v --no-tablespaces -d database2 --schema-only -c exportfile
psql -U user2 -d database2 -c "alter table schema.table set tablespace xxxx"
pg_restore -U user2 -v --no-tablespaces -d database2 --data-only exportfile
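The same sequence can be wrapped so that each parallel restore task targets its own tablespace (a sketch; the script name, arguments, and database/user names are illustrative):

```shell
#!/bin/sh
# restore_task.sh: $1 = table (schema.table), $2 = target tablespace
TABLE="$1"
TS="$2"

# Dump one partition without any tablespace assignments
pg_dump -U user1 -F c --no-tablespaces -t "$TABLE" database1 -f "$TABLE.dump"

# Restore the schema, move the empty table to the desired tablespace,
# then load the data
pg_restore -U user2 --no-tablespaces -d database2 --schema-only -c "$TABLE.dump"
psql -U user2 -d database2 -c "ALTER TABLE $TABLE SET TABLESPACE $TS"
pg_restore -U user2 --no-tablespaces -d database2 --data-only "$TABLE.dump"
```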

Postgres generate user grant statements for all objects

We have a dev Postgres DB in which one of the developers has created an application. Is there an existing query that will pull information from the role_table_grants view and generate all the correct statements to move into production? pgAdmin will generate scripts for certain things, but I haven't found a less manual way than writing all the statements by hand based on role_table_grants. I'm not asking anyone to dump time into creating it; I just thought I would ask whether there are existing migration scripts out there that would help.
Thanks.
Dump the schema to a file; use pg_dump or pg_dumpall with the --schema-only option.
Then use grep to get all the GRANT and REVOKE statements.
On my dev machine, I might do something like this.
$ pg_dump -h localhost -p 5435 -U postgres --schema-only sandbox > sandbox.sql
$ grep "^GRANT\|^REVOKE" sandbox.sql
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
[snip]
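The filtering step works on any schema-only dump file. A self-contained sketch with a tiny stand-in for the dump (the sample lines are illustrative, not real pg_dump output):

```shell
# A tiny stand-in for a schema-only dump
cat > sandbox.sql <<'EOF'
CREATE TABLE t (i integer);
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO postgres;
EOF

# Keep only the GRANT/REVOKE statements in a separate script
grep -E '^(GRANT|REVOKE)' sandbox.sql > grants.sql
cat grants.sql
```

After reviewing grants.sql, it could be applied on the production database with psql -f grants.sql.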
Perhaps pg_dumpall is what you need, probably with the --schema-only option in order to dump just the schema, not the development data.
If you don't need to move all databases, you can use pg_dumpall --globals-only to dump the roles (which don't belong to any particular database), and then use pg_dump to dump one particular database.
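A sketch of that combination (host and database names are illustrative):

```shell
# On the source cluster: dump cluster-wide roles, then one database
pg_dumpall -U postgres --globals-only > roles.sql
pg_dump -U postgres -F c mydb -f mydb.dump

# On the target cluster: create the roles first, then restore the
# database; --create recreates mydb, so connect to postgres initially
psql -U postgres -f roles.sql
pg_restore -U postgres -d postgres --create mydb.dump
```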

Create a copy of a postgres database while other users are connected to it

I know two methods of copying a postgres database, but both of them require exclusive access to the database, which you do not have when trying to copy a database from production in order to test something like a software upgrade/migration.
psql>create database mydb_test with template mydb owner dbuser;
ERROR: source database "mydb" is being accessed by other users
>createdb -O dbuser -T mydb mydb_test
createdb: database creation failed: ERROR: source database "mydb" is being accessed by other users
That worked:
psql
create database mydb_test owner dbuser;
\q
pg_dump mydb|psql -d mydb_test
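pg_dump does not need exclusive access, which is why this works while users are connected. The same idea with a custom-format dump also allows a parallel restore (a sketch; the job count is illustrative):

```shell
# Create the empty target database, then dump and restore;
# -F c produces a custom-format archive, -j 4 restores with 4 jobs
createdb -O dbuser mydb_test
pg_dump -F c mydb -f mydb.dump
pg_restore -d mydb_test -j 4 mydb.dump
```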