I have dumped a clean, no-owner backup of a Postgres database with the command
pg_dump sample_database -O -c -U
Later, when I restore the database with
psql -d sample_database -U app_name
I encountered several errors that prevent me from restoring the data:
ERROR: must be owner of extension plpgsql
ERROR: must be owner of schema public
ERROR: schema "public" already exists
ERROR: must be owner of schema public
CREATE EXTENSION
ERROR: must be owner of extension plpgsql
I dug into the plain-text SQL that pg_dump generates and found it contains the following SQL:
CREATE SCHEMA public;
COMMENT ON SCHEMA public IS 'standard public schema';
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
I think the cause is that the user app_name doesn't have the privileges to alter the public schema and the plpgsql extension.
How could I solve this issue?
To solve the issue you must assign the proper ownership permissions. The following should resolve all permission-related issues for specific users, but as stated in the comments it should not be used in production:
root@server:/var/log/postgresql# sudo -u postgres psql
psql (8.4.4)
Type "help" for help.
postgres=# \du
                 List of roles
 Role name       | Attributes   | Member of
-----------------+--------------+-----------
 <user-name>     | Superuser    | {}
                 : Create DB
 postgres        | Superuser    | {}
                 : Create role
                 : Create DB
postgres=# alter role <user-name> superuser;
ALTER ROLE
postgres=#
So connect to the database with a superuser account (sudo -u postgres psql) and execute an ALTER ROLE <user-name> SUPERUSER; statement.
Keep in mind this is not the best solution on a multi-site hosting server, so take a look at assigning individual roles instead: https://www.postgresql.org/docs/current/static/sql-set-role.html and https://www.postgresql.org/docs/current/static/sql-alterrole.html.
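If you would rather not hand out superuser at all, a narrower option in the spirit of those docs (a sketch; owner_role is a hypothetical role that owns the schema, and app_name is the restoring user from the question) is to make the restoring user a member of the owning role and switch to it before restoring; the extension errors can usually just be ignored:
-- run as a superuser, or as a role with ADMIN OPTION on owner_role
GRANT owner_role TO app_name;
-- then, at the start of the restore session
SET ROLE owner_role;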
AWS RDS users: if you are getting this, it is because you are not a superuser, and according to the AWS documentation you cannot be one. I have found I have to ignore these errors.
For people using Google Cloud Platform, any error will stop the import process.
Personally, I encountered two different errors depending on the pg_dump command I issued:
1- The input is a PostgreSQL custom-format dump. Use the pg_restore command-line client to restore this dump to a database.
This occurs when you've tried to dump your DB in a non-plain-text format, i.e. when the command lacks the -Fp or --format=plain parameter. However, if you add it to your command, you may then encounter the following error:
2- SET SET SET SET SET SET CREATE EXTENSION ERROR: must be owner of extension plpgsql
This is a permission issue I have been unable to fix using the command provided in the GCP docs, the tips from this current thread, or the advice from the Google Postgres team here, which recommended issuing the following command:
pg_dump -Fp --no-acl --no-owner -U myusername myDBName > mydump.sql
The only thing that did the trick in my case was manually editing the dump file and commenting out all commands relating to plpgsql.
I hope this helps GCP-reliant souls.
Update:
It's easier to strip out the extension statements while dumping, especially since some dumps can be huge:
pg_dump ... | grep -v -E '(CREATE\ EXTENSION|COMMENT\ ON)' > mydump.sql
This can be narrowed down to plpgsql:
pg_dump ... | grep -v -E '(CREATE\ EXTENSION\ IF\ NOT\ EXISTS\ plpgsql|COMMENT\ ON\ EXTENSION\ plpgsql)' > mydump.sql
Try using the -L flag with pg_restore, specifying a list file generated from the archive produced by pg_dump -Fc:
-L list-file
--use-list=list-file
Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that if filtering switches such as -n or -t are used with -L, they will further restrict the items restored.
list-file is normally created by editing the output of a previous -l operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the start of the line. See below for examples.
https://www.postgresql.org/docs/9.5/app-pgrestore.html
pg_dump -Fc -f pg.dump db_name
pg_restore -l pg.dump | grep -v 'COMMENT - EXTENSION' > pg_restore.list
pg_restore -L pg_restore.list pg.dump
Here you can see the inverse is also possible, by outputting only the comment:
pg_dump -Fc -f pg.dump db_name
pg_restore -l pg.dump | grep 'COMMENT - EXTENSION' > pg_restore_inverse.list
pg_restore -L pg_restore_inverse.list pg.dump
--
-- PostgreSQL database dump
--
-- Dumped from database version 9.4.15
-- Dumped by pg_dump version 9.5.14
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
--
-- PostgreSQL database dump complete
--
You can probably safely ignore the error messages in this case. Failing to add a comment to the public schema and installing plpgsql (which should already be installed) aren't going to cause any real problems.
However, if you want to do a complete re-install you'll need a user with appropriate permissions. That shouldn't be the user your application routinely runs as, of course.
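For example, a full re-install run as the postgres superuser might look like this (a minimal sketch, assuming a plain-text dump saved as dump.sql and the database name from the question):
sudo -u postgres psql -d sample_database -f dump.sql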
Shorter answer: ignore it.
This module is the part of Postgres that processes the SQL language. The error will often pop up when copying a remote database, such as with a heroku pg:pull. The restore does not overwrite your SQL processor; it just warns you about that.
For people using AWS: COMMENT ON EXTENSION is possible only as a superuser, and as the docs explain, RDS instances are managed by Amazon. As such, to prevent you from breaking things like replication, your users - even the root user you set up when you create the instance - will not have full superuser privileges:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
When you create a DB instance, the master user system account that you
create is assigned to the rds_superuser role. The rds_superuser role
is a pre-defined Amazon RDS role similar to the PostgreSQL superuser
role (customarily named postgres in local instances), but with some
restrictions. As with the PostgreSQL superuser role, the rds_superuser
role has the most privileges on your DB instance and you should not
assign this role to users unless they need the most access to the DB
instance.
In order to fix this error, just use -- to comment out the lines of SQL that contain COMMENT ON EXTENSION.
EDIT 1: As suggested by Dmitrii I., you can also omit comments when dumping: pg_dump --no-comments
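If you'd rather not edit the dump by hand, a small sed pass can comment those lines out (a sketch using GNU sed; mydump.sql is a placeholder file name):
sed -i 's/^COMMENT ON EXTENSION/-- COMMENT ON EXTENSION/' mydump.sql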
For people who have narrowed down the issue to the COMMENT ON statements (as per various answers below) and who have superuser access to the source database from which the dump file is created, the simplest solution might be to prevent the comments from being included in the dump file in the first place, by removing them from the source database being dumped:
COMMENT ON EXTENSION postgis IS NULL;
COMMENT ON EXTENSION plpgsql IS NULL;
COMMENT ON SCHEMA public IS NULL;
Future dumps then won't include the COMMENT ON statements.
Use the postgres (admin) user to dump the schema, recreate it, and grant privileges for use before you do your restore.
In one command:
sudo -u postgres psql -c "DROP SCHEMA public CASCADE;
create SCHEMA public;
grant usage on schema public to public;
grant create on schema public to public;" myDBName
For me, I was setting up a database with pgAdmin and it seems setting the owner during database creation was not enough. I had to navigate down to the 'public' schema and set the owner there as well (was originally 'postgres').
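The SQL equivalent, if you prefer doing it outside pgAdmin, would be something like this (a sketch; app_name stands in for whatever role should own the schema):
ALTER SCHEMA public OWNER TO app_name;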
Some of the answers have already provided various approaches to getting rid of the CREATE EXTENSION and COMMENT ON statements. For me, the following command line seemed to work and was the simplest approach to solve the problem:
cat /tmp/backup.sql.gz | gunzip - | \
grep -v -E '(CREATE\ EXTENSION|COMMENT\ ON)' | \
psql --set ON_ERROR_STOP=on -U db_user -h localhost my_db
Some notes:
The first line just uncompresses my backup; you may need to adjust accordingly.
The second line uses grep to get rid of the offending lines.
The third line is my psql command; you may need to adjust it as you normally would when using psql for a restore.
Related
I'm having trouble performing a restore from a dump. The scenario is as follows: I am migrating an environment from GCP to AWS, and at the moment I am working on migrating the database.
A partner dumped the DB that is in GCP and placed the file on AWS S3 (I don't know the command he used to perform the dump).
I created an EC2 in the AWS environment and copied the dump from S3 to EC2 (the file is 13 GB). I also created the RDS to host the new db with all the correct security group settings.
Here comes the problem: I connect to the RDS from the server (EC2) without problems, but when doing the restore using pg_restore I get the following error message: pg_restore: too many command line arguments (first is "dbclient.dump").
The complete command I used was this:
pg_restore -h client-aurora-cluster-hmg-legado-instance-1.c23ltjbbz7ms.us-east-1.rds.amazonaws.com -U postgres -d db_hmg_legado dbclient.dump -W
OK, I changed the approach. I tried with psql instead of pg_restore and then the command was like this:
psql -h client-aurora-cluster-hmg-legado-instance-1.c23ltjbbz7ms.us-east-1.rds.amazonaws.com -U postgres -d db_hmg_legado dbclient.dump -W
This time it worked! But I received some error messages while performing the restore, which I put below:
psql: dbclient.dump: 23: ERROR: schema "dw" already exists
CREATE EXTENSION
psql: dbclient.dump: 37: ERROR: must be owner of extension hstore
CREATE EXTENSION
psql: dbclient.dump: 51: ERROR: must be owner of extension intarray
CREATE EXTENSION
psql: dbclient.dump: 65: ERROR: must be owner of extension pg_trgm
CREATE EXTENSION
psql: dbclient.dump: 79: ERROR: must be owner of extension unaccent
But the restore takes a long time and is partially finished.
In general I wanted to understand why pg_restore didn't work. Has anyone ever experienced this?
And about these owner errors does anyone know how to resolve this using psql?
As documented in the manual, the file to be restored is the last parameter and is specified without a "switch". But you are using -W after the dump file. Move the -W parameter somewhere before the file name (although it's usually not necessary to begin with).
So you need something like this:
pg_restore -W -h ... -U postgres -d db_hmg_legado dbclient.dump
However, if the restore worked when using psql then the dump file is a "plain text" dump which can't be restored using pg_restore to begin with.
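A quick way to tell which kind of dump you have (a sketch; custom-format archives begin with the magic bytes PGDMP, while plain dumps begin with ordinary SQL text):
head -c 5 dbclient.dump
# prints PGDMP for a custom-format archive (use pg_restore);
# anything else suggests a plain-text SQL dump (use psql -f)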
Concerning the errors:
You should restore the dump into an empty database that doesn't contain any schemas except the default ones.
You need a superuser for CREATE EXTENSION, which you don't have in a hosted database. So pre-install these extensions with the techniques that Amazon provides, then restore the dump and ignore the errors.
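For example, before restoring you could connect as the RDS master user and pre-create the extensions the errors above complain about (a sketch, assuming these extensions are on RDS's supported list, which hstore, intarray, pg_trgm, and unaccent normally are):
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION IF NOT EXISTS intarray;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS unaccent;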
We have a dev Postgres DB that one of the developers has created an application in. Is there an existing query that will pull information from the role_table_grants table and generate all the correct statements to move into production? pgAdmin will generate scripts for certain things, but I haven't found a less manual way than writing all the statements by hand based on the role_table_grants table. I'm not asking anyone to dump time into creating it; I just thought I would ask if there are some existing migration scripts out there that would help.
Thanks.
Dump the schema to a file; use pg_dump or pg_dumpall with the --schema-only option.
Then use grep to get all the GRANT and REVOKE statements.
On my dev machine, I might do something like this.
$ pg_dump -h localhost -p 5435 -U postgres --schema-only sandbox > sandbox.sql
$ grep "^GRANT\|^REVOKE" sandbox.sql
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
[snip]
Perhaps pg_dumpall is what you need. Probably with --schema-only option in order to dump just schema, not development data.
If you don't need to move all databases, you can use pg_dumpall --globals-only to dump roles (which don't belong to any particular database), and then use pg_dump to dump specific databases individually.
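A minimal sketch of that split (db_name and the output file names are placeholders):
pg_dumpall --globals-only > globals.sql
pg_dump --schema-only db_name > db_name_schema.sql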
I'm trying to restore a database from backup but I can't connect to postgresql.
namespace :db do
task import: :environment do
import_path = "~/backups"
sql_file = "PostgreSQL.sql"
database_config = Rails.configuration.database_configuration[Rails.env]
system "psql --username=#{database_config['username']} -no-password # {database_config['database']} < #{import_path}/#{sql_file}"
end
end
I tried changing the pg_hba.conf file (peer to md5).
In the console I tried the same thing with the super user postgres, but it still fails.
BTW, does anyone know a better way to restore a database? I used the backup gem.
EDIT:
I restarted the postgresql server and authentication then passed, but it didn't restore the db. I reverted the changes in the file and just added -h localhost to the psql command. The database restores now. The only errors I get now are:
must be owner of extension plpgsql //and
no privileges could be revoked for "public"
After changing pg_hba.conf, you should reload or send a SIGHUP signal to the postmaster pid so that the change is applied.
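For example, either of these applies the change without a full restart (a sketch; the service name and data directory are placeholders that vary by installation):
sudo systemctl reload postgresql
# or: pg_ctl reload -D /path/to/data
# or, from psql as a superuser: SELECT pg_reload_conf();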
Why not use psql -f to execute the backup SQL file?
Or you can back up with pg_dump and restore with pg_restore, or back up and restore with the COPY command.
For example:
digoal=# copy tbl_join_1 to '/home/pg93/tbl_join_1.dmp';
COPY 10
digoal=# delete from tbl_join_1;
DELETE 10
digoal=# copy tbl_join_1 from '/home/pg93/tbl_join_1.dmp';
COPY 10
OR
pg93@db-172-16-3-150-> pg_dump -f ./tbl_join_1.dmp -t tbl_join_1
pg93@db-172-16-3-150-> psql
psql (9.3.3)
Type "help" for help.
digoal=# drop table tbl_join_1;
DROP TABLE
digoal=# \q
pg93@db-172-16-3-150-> psql -f ./tbl_join_1.dmp
SET
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
ALTER TABLE
I am doing a pg_dump command as follows:
/Library/PostgreSQL/8.4/bin/pg_dump --host localhost --port 5432 --username xxx --format plain --clean --inserts --verbose --file /Users/xxx/documents/output/SYSTEM_admin_20131203101809.sql --exclude-table public.dbmirror_mirroredtransaction --exclude-table public.dbmirror_mirrorhost --exclude-table public.dbmirror_pending --exclude-table public.dbmirror_pendingdata --exclude-table public.mdflog --exclude-table public.fcpersistentstore --exclude-table public.backup_restore --exclude-table public.mdflogeventcode testdb
The problem I have is that in the plain sql file that is created it adds a command to try and DROP the whole of the PUBLIC schema as shown in this snippet:
...
DROP FUNCTION public.f_updateeventlog();
DROP FUNCTION public.f_updateadmindata();
DROP PROCEDURAL LANGUAGE plpgsql;
DROP SCHEMA public;
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
COMMENT ON SCHEMA public IS 'standard public schema';
...
I DO want to drop all the other objects I have not excluded with the exclude-table parameters I have provided, but I DON'T want it to DROP the schema.
I have tried adding the schema as an exclude-table parameter but that did not work.
I am using Postgresql 8.4 for the pg_dump.
EDIT: I wanted to update this question and say I believe it is not possible to get pg_dump to exclude the DROP / CREATE public schema commands in the plain format. I believe you have to use the custom format and then pg_restore in order to stop that from happening. As I am using psql to restore and pg_dump with plain format, I simply remove the commands I don't want from the SQL file after it's created, automatically as part of the Java process I am creating, and I can get around this. I am leaving the question in case someone does find a way of doing this.
I think your best bet is to post-process the dump after it has been made. You could do this with grep or the like or just search and delete from the dump after it has been made.
The fact is that you are using the --clean option which drops the restored database objects. There isn't a ready way to tell pg_dump to drop only some restored database objects, which is probably a good thing.
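For example, a grep pass like this could strip the schema-level statements while keeping the rest of the --clean output (a sketch; dump.sql is a placeholder and the pattern may need adjusting to match your dump exactly):
grep -v -E '^(DROP SCHEMA public|CREATE SCHEMA public|ALTER SCHEMA public OWNER|COMMENT ON SCHEMA public)' dump.sql > dump_filtered.sql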
I tried pg_dump, and then on a separate machine I tried to import the SQL and populate the database. I see:
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
CREATE TABLE
ERROR: role "prod" does not exist
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
WARNING: no privileges could be revoked for "public"
REVOKE
ERROR: role "postgres" does not exist
ERROR: role "postgres" does not exist
WARNING: no privileges were granted for "public"
GRANT
which means my users, roles, and grant information are not in the pg_dump output.
On the other hand, we have pg_dumpall. I read this conversation, but it doesn't lead me anywhere.
Question
- Which one should I be using for database backups? pg_dump or pg_dumpall?
- The requirement is that I can take the backup and import it on any machine, and it should work just fine.
The usual process is:
pg_dumpall --globals-only to get users/roles/etc
pg_dump -Fc for each database to get a nice compressed dump suitable for use with pg_restore.
Yes, this kind of sucks. I'd really like to teach pg_dump to embed pg_dumpall output into -Fc dumps, but right now unfortunately it doesn't know how so you have to do it yourself.
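Put together, the cycle might look like this (a sketch; roles, file names, and db_name are placeholders):
# on the source server
pg_dumpall --globals-only -U postgres > globals.sql
pg_dump -Fc -U postgres db_name > db_name.dump
# on the target server
psql -U postgres -f globals.sql
pg_restore -U postgres --create -d postgres db_name.dump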
Up until PostgreSQL 11 there was also a nasty caveat with this approach: Neither pg_dump, nor pg_dumpall in --globals-only mode would dump user access GRANTs on DATABASEs. So you pretty much had to extract them from the catalogs or filter a pg_dumpall. This is fixed in PostgreSQL 11; see the release notes.
Make pg_dump dump the properties of a database, not just its contents (Haribabu Kommi)
Previously, attributes of the database itself, such as database-level GRANT/REVOKE permissions and ALTER DATABASE SET variable settings, were only dumped by pg_dumpall. Now pg_dump --create and pg_restore --create will restore these database properties in addition to the objects within the database. pg_dumpall -g now only dumps role- and tablespace-related attributes. pg_dumpall's complete output (without -g) is unchanged.
You should also know about physical backups - pg_basebackup, PgBarman and WAL archiving, PITR, etc. These offer much "finer grained" recovery, down to the minute or individual transaction. The downside is that they take up more space, are only restorable to the same PostgreSQL version on the same platform, and back up all tables in all databases with no ability to exclude anything.
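As a quick illustration of the physical approach (a sketch; the host, role, and target directory are placeholders, and the server needs a replication-capable role plus a matching pg_hba.conf entry):
pg_basebackup -h db.example.com -U replicator -D /var/backups/pg_base -Fp -Xs -P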