I am getting this error while restoring data from the dump file.
nishant#nishant-Lenovo-G50-70:~/Documents$ psql sortation_gor1 < dump.sql
psql: FATAL: role "nishant" does not exist
I have followed the PostgreSQL Ubuntu documentation, but when I try to restore the database I get this error.
Any idea?
pg_dump doesn't save roles. Roles in PostgreSQL belong to the database cluster, not to a single database; pg_dumpall dumps them with the -r option. You should either create the missing roles manually with the SQL statement CREATE ROLE name LOGIN, or export the roles with pg_dumpall -r.
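For example, a minimal sketch, assuming the missing role from the error above is nishant and that you can connect as the postgres superuser:
CREATE ROLE nishant LOGIN;
Or export the roles from the source cluster and load them into the target before restoring the dump:
# dump only the roles (pg_dumpall -r / --roles-only)
pg_dumpall -U postgres --roles-only -f roles.sql
psql -U postgres -f roles.sql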
Then I did it with psql -U postgres -d database_name -f dump.sql
I am having some difficulties restoring the schema of a table. I dumped my Heroku Postgres db and used pg_restore to restore one table from it into my local db (it has more than 20 tables). It was restored successfully, but I ran into issues when I tried to insert new data into the table.
When I opened the database with psql, I found that the restored table is there with all its data, but its schema has zero rows. Is there any way I can import both the table and its schema from the dump? Thank you very much.
This is how I restored the table into my local db:
pg_restore -U postgres --dbname my_db --table=message latest.dump
Edit:
I tried something like this following the official docs, but it just hangs and nothing happens. My db is small, no more than a couple of megabytes, and the table whose schema I am trying to restore has no more than 100 rows.
pg_restore -U postgres --dbname mydb --table=message --schema=message_id_seq latest.dump
As a more general answer (I needed to restore a single table from a huge backup), you may want to take a look at this post: https://thequantitative.medium.com/restoring-individual-tables-from-postgresql-pg-dump-using-pg-restore-options-ef3ce2b41ab6
# run the schema-only restore as the postgres user
pg_restore -U postgres --schema-only -d new_db /directory/path/db-dump-name.dump
# restore per-table data using something like
pg_restore -U postgres --data-only -d target-db-name -t table_name /directory/path/dump-name.dump
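If the piece that didn't come across is the table's sequence (the message_id_seq name from the question), one option is to check that it is present in the archive and restore it as well; a rough sketch, not tested against this particular dump:
# list the archive's contents and look for the sequence entry
pg_restore -l latest.dump | grep -i message
# on recent PostgreSQL versions --table also matches sequences
pg_restore -U postgres --dbname my_db --table=message_id_seq latest.dump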
From the Heroku DevCenter here
Heroku Postgres is integrated directly into the Heroku CLI and offers
many helpful commands that simplify common database tasks
You can check here if your environment is correctly configured.
In this way, you can use the Heroku CLI pg:pull command to pull remote data from a Heroku Postgres database to a local database on your machine.
For example:
$ heroku pg:pull HEROKU_POSTGRESQL_MAGENTA mylocaldb --app sushi
I am trying to migrate a bunch of databases from an AWS RDS PostgreSQL server to GCP Cloud SQL.
Since both use the PostgreSQL engine, I thought it would be simple to take a pg_dump from AWS and import it into GCP.
However, I was surprised when the Cloud SQL import failed with errors complaining that some roles are missing.
Below are the steps I tried.
Dump the database in AWS RDS:
pg_dump -h <connection_endpoint> -U root -f db_dump.sql <db_name>
Then I tried to import it into GCP Cloud SQL with the command below:
instance-1:~$ PGPASSWORD=<passwprd> psql --host=<host_name> --port=5432 --username=postgres --dbname=<db_name> < db_dump.sql
SET
SET
SET
SET
SET
set_config
------------
(1 row)
SET
SET
SET
CREATE SCHEMA
ERROR: must be member of role "rdsadmin"
CREATE SCHEMA
ERROR: must be member of role "root"
CREATE SCHEMA
ERROR: must be member of role "root"
CREATE SCHEMA
ERROR: must be member of role "root"
CREATE SCHEMA
ERROR: must be member of role "root"
CREATE EXTENSION
ERROR: must be owner of extension plpgsql
CREATE EXTENSION
COMMENT
SET
SET
CREATE TABLE...
As you can see, the rdsadmin and root roles are missing.
How do I make sure these missing roles are present in GCP Cloud SQL with the correct settings? Even after creating roles with the same names in Cloud SQL, the import doesn't succeed.
Any solution, please?
Answering my own question as I found a solution for it.
The rdsadmin user is created by AWS for administrative tasks on the RDS cluster. Taking the pg_dump without owners and restoring it without owners does the trick, and I am able to restore into Cloud SQL.
pg_dump -Fc -O -h <rds-host> -U <user> -d <db> > db.dump
pg_restore -U postgres -d <db> -h <cloudsql-host> -v --no-owner db.dump
pg_restore needs an archive (non-plain-text) dump to restore from; that is why -Fc (custom format) is used in the pg_dump command.
Backup command: pg_dump -U username backupdbname -f backupfilename.sql
Restore Command: psql -v ON_ERROR_STOP=1 -f backupfilename.sql -d newdbname;
I actually tried this command. The backup works, but while restoring it throws the error psql:pr_staging.sql:7624: ERROR: relation "res_company" already exists. For the restore we need a new db, so I am creating the new db from the browser manually; that is why I am facing the error.
If I create a new db using a terminal command, it does not show up in the browser at localhost:8069/web/database/selector.
How do I restore the backed-up db?
If you create a db using Odoo's db manager (interface), there will already be basic tables (the base module is installed automatically).
There are several ways to restore the db. For example (template0 is a default template db in Postgres):
createdb -T template0 newdbname
cat backupfilename | psql newdbname
You shouldn't have an Odoo server running while doing this.
You could also use Odoo's database interface to backup and restore/duplicate databases.
I'm looking to load a database from a backup.gz. The backup is raw sql generated from pg_dump -U postgres app_development -f backup.gz -Z9.
I've tried dropping the db with psql -Upostgres -c "drop database app_development" but I get:
ERROR: database "app_development" is being accessed by other users
DETAIL: There are 3 other sessions using the database.
The same thing happens when I use dropdb.
I don't want to dump to a non-ascii version so I don't think I can use pg_restore.
Also, I'm not sure if it helps, but all this is happening in docker.
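A gzip-compressed plain-SQL dump can, in principle, be piped straight into psql rather than pg_restore; a rough sketch using the names above, assuming the target database already exists:
# restore the gzipped plain-SQL dump
gunzip -c backup.gz | psql -U postgres app_development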
I moved my PostgreSQL database from one hard drive to another using
pg_dump -U postgres db_name > db_name.dump
and then
psql -U postgres db_name < db_name.dump
I created the database db_name the same way in both instances. In the new database when I run my Java program with a JPA query (or a JDBC query) I get this error:
"ERROR: relation "table1" does not exist"
The query is:
select count(0) from table1
I know I've got a connection because if I change the password in the connection parameters I get an error.
For some reason the new PostgreSQL instance thinks that table1 does not exist in the imported schema.
If I change the query to
select count(0) from myschema.table1
Then it complains about permissions:
"ERROR: permission denied for schema myschema"
Why would the permissions be different?
The table table1 exists in myschema because I can see it in the pgAdmin tool. All the rows were imported into the new PostgreSQL instance.
When I query from Java, though, the combination of pg_dump and psql seems to have created a problem.
What do I need to do to solve this issue?
Are you moving to the same version of PostgreSQL? There might be issues if you make a dump with pg_dump 8.3 and try to restore it in PostgreSQL 8.4. Anyway, assuming it is the same version, try the following:
Dump all global objects, such as users and groups (don't know if they were missing in your dump):
pg_dumpall -g -U postgres > globals.sql
Dump schema of database:
pg_dump -Fp -s -v -f db-schema.sql -U postgres dbname
Dump contents of database:
pg_dump -Fc -v -f full.dump -U postgres dbname
Now restore.
psql -f globals.sql
psql -f db-schema.sql dbname
pg_restore -a -d dbname -Fc full.dump
That is my $0.02. Hope it helps.
I encountered this problem. Then I realized that I had forgotten to install the postgis extension.
Don't forget to install the extensions you use.
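For example, assuming PostGIS is what the dump depends on (and the PostGIS packages are installed on the server), run this in the target database before restoring:
CREATE EXTENSION IF NOT EXISTS postgis;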
I was able to solve it by changing the database privileges to allow public CONNECT, and the schema privileges for public and postgres to USAGE and CREATE.
My backup scripts apparently didn't preserve the privileges, at least not when moving from 8.3 to 8.4.
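For reference, a rough sketch of the grants I mean, reusing the db_name and myschema names from the question (run as a superuser; your names will differ):
GRANT CONNECT ON DATABASE db_name TO PUBLIC;
GRANT USAGE, CREATE ON SCHEMA myschema TO PUBLIC;
GRANT USAGE, CREATE ON SCHEMA myschema TO postgres;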