Postgres: generate user grant statements for all objects

We have a dev Postgres DB in which one of the developers has created an application. Is there an existing query that will pull information from the role_table_grants view and generate all the correct GRANT statements to move into production? pgAdmin will generate scripts for certain things, but I haven't found a less manual way than writing all the statements by hand based on role_table_grants. I'm not asking anyone to dump time into creating it; I just thought I would ask whether there are existing migration scripts out there that would help.
Thanks.

Dump the schema to a file; use pg_dump or pg_dumpall with the --schema-only option.
Then use grep to get all the GRANT and REVOKE statements.
On my dev machine, I might do something like this.
$ pg_dump -h localhost -p 5435 -U postgres --schema-only sandbox > sandbox.sql
$ grep "^GRANT\|^REVOKE" sandbox.sql
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
[snip]
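If you'd rather build the statements straight from information_schema.role_table_grants (the view mentioned in the question), a query sketch like the following could generate GRANT statements for table privileges. It only covers tables (not sequences or functions), and rows where the grantee is PUBLIC would need special handling:
SELECT format(
    'GRANT %s ON TABLE %I.%I TO %I;',
    privilege_type, table_schema, table_name, grantee
)
FROM information_schema.role_table_grants
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')  -- skip system schemas
ORDER BY table_schema, table_name, grantee;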

Perhaps pg_dumpall is what you need, probably with the --schema-only option so that you dump just the schema, not development data.
If you don't need to move all databases, you can use pg_dumpall --globals-only to dump roles (which don't belong to any particular database), and then use pg_dump to dump one particular database.
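For example (a sketch; mydb stands in for the database you want to move):
$ pg_dumpall --globals-only > globals.sql
$ pg_dump --schema-only mydb > mydb_schema.sql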

PSQL prevent "COMMENT ON" on the DB dump

We are migrating some products, one of the steps is to migrate the product databases.
I have steps to
export the existing DB pg_dump --no-owner --clean --blobs --no-privileges -U dbuser old_dbname -f bkpfile.sql
import the dump to a different DB psql -U dbuser2 new_dbname -f bkpfile.sql
The problem is the old database contains statement COMMENT ON DATABASE old_dbname IS 'Rxxxxx';
The new DB user must not have permissions on the old database, and imho it's not good to reference the old database name in the dump anyway.
Is there a way to create a complete DB dump without the COMMENT ON DATABASE statement?
Edit:
PostgreSQL 9.6
Steps to reproduce:
CREATE DATABASE testdb;
COMMENT ON DATABASE testdb IS 'some comment';
CREATE TABLE xx (id int);
and then dump the database. The dump contains a reference to the database name (COMMENT ON DATABASE testdb IS 'some comment';), which prevents importing the backup into a new database:
pg_dump --no-owner --clean --blobs --no-privileges testdb
We could manually remove the comment statement or filter the comment using different tools (grep), but manual intervention or text-based filtering on top of the backup could cause data corruption.
This comment is only dumped in PostgreSQL versions below v11. See this entry in the release notes:
pg_dump and pg_restore, without --create, no longer dump/restore database-level comments and security labels; those are now treated as properties of the database.
9.6 will go out of support soon anyway, so this is a good opportunity to upgrade.
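If upgrading isn't an option right away, one workaround sketch on 9.6 is to dump in custom format and filter the comment entry out of the pg_restore TOC instead of editing SQL text (the exact TOC label may vary, so check the pg_restore -l output first; file and database names are placeholders):
$ pg_dump -Fc -f testdb.dump testdb
$ pg_restore -l testdb.dump | grep -v 'COMMENT - DATABASE' > restore.list
$ pg_restore -L restore.list --no-owner --no-privileges -d new_dbname testdb.dump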

pg_restore --clean is not dropping and clearing the database

I am having an issue with pg_restore --clean not clearing the database.
Or do I misunderstand what --clean does? I am expecting it to truncate the database tables and reinitialize the indexes/primary keys.
I am using 9.5 on RDS.
This is the full command we use
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U superuser -d mydatabase backup.dump
Basically what is happening is this.
I do a nightly backup of my production db, and restore it to an analytics db for the analyst to churn and run their reports.
I found out recently that the rails application used to view the reports was complaining that the primary keys were missing from the restored analytics database.
So I started investigating the production db, the analytics db, etc., which was when I realized that multiple rows with the same primary key existed in the analytics database.
I ran a few short experiments and realized that every time the pg_restore script is run it inserts duplicate data into the tables, which leads me to think that --clean is not dropping the data before restoring it, because if I drop the schema beforehand, I don't get duplicate data.
To remove all tables from a database (but keep the database itself), you have two options.
Option 1: Drop the entire schema
You will need to re-create the schema and its permissions. This is usually good enough for development machines only.
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO public;
Applications usually use the "public" schema. You may encounter other schema names when working with a (legacy) application's database.
Note that for Rails applications, dropping and recreating the database itself is usually fine in development. You can use bin/rake db:drop db:create for that.
Option 2: Drop each table individually
Prefer this for production or staging servers. Permissions may be managed by your operations team, and you do not want to be the one who messed up permissions on a shared database cluster.
The following SQL code will find all table names and execute a DROP TABLE statement for each.
DO $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN (SELECT tablename FROM pg_tables WHERE schemaname = current_schema()) LOOP
        -- DROP TABLE IF EXISTS instead of plain DROP TABLE (thanks to Yaroslav Schekin for the clarification)
        EXECUTE 'DROP TABLE IF EXISTS ' || quote_ident(r.tablename) || ' CASCADE';
    END LOOP;
END $$;
Original:
https://makandracards.com/makandra/62111-how-to-drop-all-tables-in-postgresql
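A sketch of how that fits the nightly workflow from the question (the DO block is saved to a file first; host and role names are placeholders):
# save the DO block above as drop_all_tables.sql, then clear and restore:
$ psql -h localhost -U superuser -d mydatabase -f drop_all_tables.sql
$ pg_restore --verbose --clean --no-acl --no-owner -h localhost -U superuser -d mydatabase backup.dump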

Can I stop the "public" schema from being dropped with pg_dump?

I am doing a pg_dump command as follows:
/Library/PostgreSQL/8.4/bin/pg_dump --host localhost --port 5432 --username xxx --format plain --clean --inserts --verbose --file /Users/xxx/documents/output/SYSTEM_admin_20131203101809.sql --exclude-table public.dbmirror_mirroredtransaction --exclude-table public.dbmirror_mirrorhost --exclude-table public.dbmirror_pending --exclude-table public.dbmirror_pendingdata --exclude-table public.mdflog --exclude-table public.fcpersistentstore --exclude-table public.backup_restore --exclude-table public.mdflogeventcode testdb
The problem I have is that in the plain SQL file that is created, it adds a command that tries to DROP the whole public schema, as shown in this snippet:
...
DROP FUNCTION public.f_updateeventlog();
DROP FUNCTION public.f_updateadmindata();
DROP PROCEDURAL LANGUAGE plpgsql;
DROP SCHEMA public;
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
COMMENT ON SCHEMA public IS 'standard public schema';
...
I DO want to drop all the other objects I have not excluded with the --exclude-table parameters I have provided, but I DON'T want it to DROP the schema.
I have tried adding the schema as an exclude-table parameter but that did not work.
I am using Postgresql 8.4 for the pg_dump.
EDIT: I wanted to update this question and say I believe it is not possible to get pg_dump to exclude the DROP/CREATE public schema commands in the plain format. I believe you have to use the custom format and then pg_restore in order to stop that from happening. As I am using psql to restore and pg_dump with plain format, I simply remove the commands I don't want from the SQL file after it's created, automatically as part of the Java process I am creating, and I can get around this. I am leaving the question in case someone does find a way of doing this.
I think your best bet is to post-process the dump after it has been made. You could do this with grep or the like, or just search and delete in the dump file.
The fact is that you are using the --clean option which drops the restored database objects. There isn't a ready way to tell pg_dump to drop only some restored database objects, which is probably a good thing.
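A sketch of that post-processing with grep (dump.sql is a placeholder for the generated file; extend the pattern with the ALTER SCHEMA / COMMENT ON SCHEMA lines if you want those gone too):
$ grep -v -E '^(DROP|CREATE) SCHEMA public;' dump.sql > dump_filtered.sql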

pg_dump vs pg_dumpall? Which one to use for database backups?

I tried pg_dump, and then on a separate machine I tried to import the SQL and populate the database. I see:
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
WARNING: no privileges could be revoked for "public"
REVOKE
ERROR: role "postgres" does not exist
ERROR: role "postgres" does not exist
WARNING: no privileges were granted for "public"
GRANT
which means my users, roles, and grant information are not in the pg_dump output.
On the other hand we have pg_dumpall; I read a conversation about it, but that did not lead me anywhere.
Question
- Which one should I be using for database backups, pg_dump or pg_dumpall?
- The requirement is that I can take the backup, import it on any machine, and it should work just fine.
The usual process is:
pg_dumpall --globals-only to get users/roles/etc
pg_dump -Fc for each database to get a nice compressed dump suitable for use with pg_restore.
Yes, this kind of sucks. I'd really like to teach pg_dump to embed pg_dumpall output into -Fc dumps, but right now unfortunately it doesn't know how so you have to do it yourself.
Up until PostgreSQL 11 there was also a nasty caveat with this approach: Neither pg_dump, nor pg_dumpall in --globals-only mode would dump user access GRANTs on DATABASEs. So you pretty much had to extract them from the catalogs or filter a pg_dumpall. This is fixed in PostgreSQL 11; see the release notes.
Make pg_dump dump the properties of a database, not just its contents (Haribabu Kommi)
Previously, attributes of the database itself, such as database-level GRANT/REVOKE permissions and ALTER DATABASE SET variable settings, were only dumped by pg_dumpall. Now pg_dump --create and pg_restore --create will restore these database properties in addition to the objects within the database. pg_dumpall -g now only dumps role- and tablespace-related attributes. pg_dumpall's complete output (without -g) is unchanged.
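Concretely, a sketch of that process and the matching restore (database and file names are placeholders; pg_restore --create needs to connect to an existing database such as postgres first):
$ pg_dumpall --globals-only > globals.sql
$ pg_dump -Fc -f mydb.dump mydb
# on the target cluster:
$ psql -f globals.sql postgres
$ pg_restore --create -d postgres mydb.dump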
You should also know about physical backups: pg_basebackup, PgBarman, WAL archiving, PITR, etc. These offer much "finer-grained" recovery, down to the minute or an individual transaction. The downside is that they take up more space, are only restorable to the same PostgreSQL version on the same platform, and back up all tables in all databases with no ability to exclude anything.
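For reference, a minimal pg_basebackup invocation might look like this (a sketch; the host, user, and target directory are placeholders, and the server must allow replication connections):
$ pg_basebackup -h db.example.com -U replication_user -D /var/backups/base -Fp -Xs -P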

How to solve privilege issues when restoring a PostgreSQL database

I have dumped a clean, no-owner backup of a Postgres database with the command
pg_dump sample_database -O -c -U
Later, when I restore the database with
psql -d sample_database -U app_name
However, I encountered several errors which prevent me from restoring the data:
ERROR: must be owner of extension plpgsql
ERROR: must be owner of schema public
ERROR: schema "public" already exists
ERROR: must be owner of schema public
CREATE EXTENSION
ERROR: must be owner of extension plpgsql
I dug into the plain-text SQL that pg_dump generates and found it contains this SQL:
CREATE SCHEMA public;
COMMENT ON SCHEMA public IS 'standard public schema';
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
I think the cause is that the user app_name doesn't have the privileges to alter the public schema and the plpgsql extension.
How could I solve this issue?
To solve the issue you must assign the proper ownership permissions. Try the approach below, which should resolve all permission-related issues for specific users, but as stated in the comments this should not be used in production:
root#server:/var/log/postgresql# sudo -u postgres psql
psql (8.4.4)
Type "help" for help.
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------------+-------------+-----------
<user-name> | Superuser | {}
: Create DB
postgres | Superuser | {}
: Create role
: Create DB
postgres=# alter role <user-name> superuser;
ALTER ROLE
postgres=#
So connect to the database under a superuser account (sudo -u postgres psql) and execute an ALTER ROLE <user-name> SUPERUSER; statement.
Keep in mind this is not the best solution on a multi-site hosting server, so take a look at assigning individual roles instead: https://www.postgresql.org/docs/current/static/sql-set-role.html and https://www.postgresql.org/docs/current/static/sql-alterrole.html.
AWS RDS users: if you are getting this, it is because you are not a superuser, and according to the AWS documentation you cannot be one. I have found I have to ignore these errors.
For people using Google Cloud Platform, any error will stop the import process.
Personally I encountered two different errors depending on the pg_dump command I issued:
1- The input is a PostgreSQL custom-format dump. Use the pg_restore command-line client to restore this dump to a database.
This occurs when you've dumped your DB in a non-plain-text format, i.e. when the command lacks the -Fp or --format=plain parameter. However, if you add it to your command, you may then encounter the following error:
2- SET SET SET SET SET SET CREATE EXTENSION ERROR: must be owner of extension plpgsql
This is a permission issue I have been unable to fix using the command provided in the GCP docs, the tips from this current thread, or the advice from the Google Postgres team here, which recommended issuing the following command:
pg_dump -Fp --no-acl --no-owner -U myusername myDBName > mydump.sql
The only thing that did the trick in my case was manually editing the dump file and commenting out all commands relating to plpgsql.
I hope this helps GCP-reliant souls.
Update:
It's easier to strip the extension statements while dumping than to comment them out by hand, especially since some dumps can be huge:
pg_dump ... | grep -v -E '(CREATE\ EXTENSION|COMMENT\ ON)' > mydump.sql
This can be narrowed down to plpgsql:
pg_dump ... | grep -v -E '(CREATE\ EXTENSION\ IF\ NOT\ EXISTS\ plpgsql|COMMENT\ ON\ EXTENSION\ plpgsql)' > mydump.sql
Try using the -L flag with pg_restore, specifying a list file generated from the archive created by pg_dump -Fc.
-L list-file
--use-list=list-file
Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that if filtering switches such as -n or -t are used with -L, they will further restrict the items restored.
list-file is normally created by editing the output of a previous -l operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the start of the line. See below for examples.
https://www.postgresql.org/docs/9.5/app-pgrestore.html
pg_dump -Fc -f pg.dump db_name
pg_restore -l pg.dump | grep -v 'COMMENT - EXTENSION' > pg_restore.list
pg_restore -L pg_restore.list pg.dump
Here you can see the inverse is true, by outputting only the comment entry:
pg_dump -Fc -f pg.dump db_name
pg_restore -l pg.dump | grep 'COMMENT - EXTENSION' > pg_restore_inverse.list
pg_restore -L pg_restore_inverse.list pg.dump
--
-- PostgreSQL database dump
--
-- Dumped from database version 9.4.15
-- Dumped by pg_dump version 9.5.14
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
--
-- PostgreSQL database dump complete
--
You can probably safely ignore the error messages in this case. Failing to add a comment to the public schema and installing plpgsql (which should already be installed) aren't going to cause any real problems.
However, if you want to do a complete re-install you'll need a user with appropriate permissions. That shouldn't be the user your application routinely runs as of course.
Shorter answer: ignore it.
This module (plpgsql) is the part of Postgres that processes the PL/pgSQL procedural language. The error will often pop up as part of copying a remote database, such as with a heroku pg:pull. The restore does not overwrite your SQL processor; the error just warns you about that.
For people using AWS, COMMENT ON EXTENSION is possible only as a superuser, and as we know from the docs, RDS instances are managed by Amazon. As such, to prevent you from breaking things like replication, your users - even the root user you set up when you create the instance - will not have full superuser privileges:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
When you create a DB instance, the master user system account that you
create is assigned to the rds_superuser role. The rds_superuser role
is a pre-defined Amazon RDS role similar to the PostgreSQL superuser
role (customarily named postgres in local instances), but with some
restrictions. As with the PostgreSQL superuser role, the rds_superuser
role has the most privileges on your DB instance and you should not
assign this role to users unless they need the most access to the DB
instance.
In order to fix this error, just use -- to comment out the lines of SQL that contain COMMENT ON EXTENSION.
EDIT 1: As suggested by Dmitrii I., you can also omit comments when dumping: pg_dump --no-comments
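For example (a sketch; --no-comments is available in pg_dump 11 and later, and mydb is a placeholder):
$ pg_dump --no-comments --no-owner --no-acl -Fp mydb > mydump.sql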
For people who have narrowed down the issue to the COMMENT ON statements (as per various answers below) and who have superuser access to the source database from which the dump file is created, the simplest solution might be to prevent the comments from being included in the dump file in the first place, by removing them from the source database being dumped...
COMMENT ON EXTENSION postgis IS NULL;
COMMENT ON EXTENSION plpgsql IS NULL;
COMMENT ON SCHEMA public IS NULL;
Future dumps then won't include the COMMENT ON statements.
Use the postgres (admin) user to dump the schema, recreate it, and grant privileges for use before you do your restore.
In one command:
sudo -u postgres psql -c "DROP SCHEMA public CASCADE;
create SCHEMA public;
grant usage on schema public to public;
grant create on schema public to public;" myDBName
For me, I was setting up a database with pgAdmin and it seems setting the owner during database creation was not enough. I had to navigate down to the 'public' schema and set the owner there as well (was originally 'postgres').
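The SQL equivalent of that pgAdmin change would be something like this (a sketch; app_user stands in for whatever role should own the schema):
-- run as a sufficiently privileged user, connected to the target database
ALTER SCHEMA public OWNER TO app_user;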
Some of the answers have already provided various approaches to getting rid of the CREATE EXTENSION and COMMENT ON EXTENSION statements. For me, the following command line seemed to be the simplest approach that solved the problem:
cat /tmp/backup.sql.gz | gunzip - | \
grep -v -E '(CREATE\ EXTENSION|COMMENT\ ON)' | \
psql --set ON_ERROR_STOP=on -U db_user -h localhost my_db
Some notes
The first line just uncompresses my backup; you may need to adjust accordingly.
The second line uses grep to get rid of the offending lines.
The third line is my psql command; adjust it as you normally would when using psql for a restore.