I am trying to restore a database from a backup on Heroku, but it fails.
I am following their instructions:
heroku pg:backups:restore b101 DATABASE_URL --app example-app
But I get the following error:
Restoring... !
An error occurred and the backup did not finish.
pg_restore: creating FK CONSTRAINT "public.user_disciplines_usertoken_tokens usertoken_id_refs_id_a11f5e9d"
pg_restore: warning: errors ignored on restore: 16
pg_restore finished with errors
waiting for download to complete
download finished successfully
Run heroku pg:backups:info r2192 for more details.
When I run the suggested command, I get the following:
2022-08-31 15:09:55 +0000 pg_restore: connecting to database for restore
2022-08-31 15:09:55 +0000 pg_restore: creating EXTENSION "hstore"
2022-08-31 15:09:55 +0000 pg_restore: while PROCESSING TOC:
2022-08-31 15:09:55 +0000 pg_restore: from TOC entry 2; 3079 95267 EXTENSION hstore (no owner)
2022-08-31 15:09:55 +0000 pg_restore: error: could not execute query: ERROR: Extensions can only be created on heroku_ext schema
2022-08-31 15:09:55 +0000 CONTEXT: PL/pgSQL function inline_code_block line 7 at RAISE
2022-08-31 15:09:55 +0000 Command was: CREATE EXTENSION IF NOT EXISTS "hstore" WITH SCHEMA "public";
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000 pg_restore: creating COMMENT "EXTENSION "hstore""
2022-08-31 15:09:55 +0000 pg_restore: from TOC entry 5891; 0 0 COMMENT EXTENSION "hstore"
2022-08-31 15:09:55 +0000 pg_restore: error: could not execute query: ERROR: extension "hstore" does not exist
2022-08-31 15:09:55 +0000 Command was: COMMENT ON EXTENSION "hstore" IS 'data type for storing sets of (key, value) pairs';
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000 pg_restore: creating EXTENSION "pg_stat_statements"
2022-08-31 15:09:55 +0000 pg_restore: from TOC entry 3; 3079 95394 EXTENSION pg_stat_statements (no owner)
2022-08-31 15:09:55 +0000 pg_restore: error: could not execute query: ERROR: Extensions can only be created on heroku_ext schema
2022-08-31 15:09:55 +0000 CONTEXT: PL/pgSQL function inline_code_block line 6 at RAISE
2022-08-31 15:09:55 +0000 Command was: CREATE EXTENSION IF NOT EXISTS "pg_stat_statements" WITH SCHEMA "public";
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000
2022-08-31 15:09:55 +0000 pg_restore: creating COMMENT "EXTENSION "pg_stat_statements""
2022-08-31 15:09:55 +0000 pg_restore: from TOC entry 5892; 0 0 COMMENT EXTENSION "pg_stat_statements"
2022-08-31 15:09:55 +0000 pg_restore: error: could not execute query: ERROR: extension "pg_stat_statements" does not exist
2022-08-31 15:09:55 +0000 Command was: COMMENT ON EXTENSION "pg_stat_statements" IS 'track planning and execution statistics of all SQL statements executed';
So it appears the issue is with the hstore extension; Heroku recently made some changes: https://devcenter.heroku.com/changelog-items/2446
I contacted Heroku support but haven't received a reply yet. Maybe someone has an idea how to deal with this issue?
The following solution works:
Restoring a backup that includes extensions installed in the public schema:
heroku pg:backups:restore <BACKUP_ID> <DATABASE_NAME> --extensions '<LIST_OF_EXTENSIONS>' -a <APP_NAME>
e.g.:
heroku pg:backups:restore b010 STAGING_DATABASE_URL --extensions 'pg_stat_statements,hstore,pgcrypto' -a example_app
More details here:
https://help.heroku.com/ZOFBHJCJ/heroku-postgres-extension-changes-faq
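As a quick sanity check after the restore (a sketch, assuming you can connect with heroku pg:psql), you can confirm the extensions were recreated and see which schema each one landed in:
heroku pg:psql -a example-app
-- list installed extensions and the schema each one lives in
SELECT e.extname, n.nspname AS schema
FROM pg_extension e
JOIN pg_namespace n ON n.oid = e.extnamespace;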
Related
We have a PostgreSQL custom-format (-F c) database backup, ~1GB in size, that could not be restored on two of our users' machines.
The error that occurs is:
pg_restore: [archiver (db)] error returned by PQputCopyData
and in the server logs there are errors from the COPY command.
All the reports we found of COPY errors during pg_restore were related to textual (SQL) backups, which is not the case here.
Any ideas?
Below is the information that describes the issue in more detail:
1. File integrity is OK, verified with "Microsoft File Checksum Integrity Verifier".
2. Backup and restore are performed with PostgreSQL 9.6.5, 64-bit.
3. The pg_dump backup command is (see the note after the logs below):
pg_dump -U username -F c -Z 9 mydatabase > myarchive
4. Database on client is created with:
CREATE DATABASE mydatabase WITH TEMPLATE = template0 ENCODING = 'UTF8' OWNER=user;
5. pg_restore call:
pg_restore.exe -U user --dbname=mydatabase --verbose --no-owner --role=user
6. Example of the logs; there are repeating rows with errors on random tables:
2020-12-07 13:40:56 GMT LOG: checkpoints are occurring too frequently (21 seconds apart)
2020-12-07 13:40:56 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 13:40:57 GMT ERROR: extra data after last expected column
2020-12-07 13:40:57 GMT CONTEXT: COPY substance, line 21511: "21743 \N 2 1d8c29d2d4dc17ccec4a29710c2f190a e98906e08d4cf1ac23bc4a5a26f83e73 1d8c29d2d4dc17ccec4a297..."
2020-12-07 13:40:57 GMT STATEMENT: COPY substance (id, text_id, storehouse_id, i_tb_id, i_twod_tb_id, tb_id, twod_tb_id, o_smiles, i_smiles_id, i_twod_smiles_id, smiles_id, twod_smiles_id, substance_type)
2020-12-07 13:40:57 GMT FATAL: invalid frontend message type 48
2020-12-07 13:40:57 GMT LOG: PID 105976 in cancel request did not match any process
or
2020-12-07 14:35:42 GMT LOG: checkpoints are occurring too frequently (16 seconds apart)
2020-12-07 14:35:42 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 14:35:59 GMT LOG: checkpoints are occurring too frequently (17 seconds apart)
2020-12-07 14:35:59 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 14:36:09 GMT ERROR: invalid byte sequence for encoding "UTF8": 0x00
2020-12-07 14:36:09 GMT CONTEXT: COPY scalar_calculation, line 3859209
2020-12-07 14:36:09 GMT STATEMENT: COPY scalar_calculation (calculator_id, smiles_id, mean_value, remark) FROM stdin;
2020-12-07 14:36:09 GMT FATAL: invalid frontend message type 49
2020-12-07 14:36:10 GMT LOG: PID 109816 in cancel request did not match any process
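A note on the backup command in point 3 (an assumption worth ruling out, not something the logs confirm): if the dump was created on Windows, shell output redirection can corrupt a custom-format archive, because some shells (PowerShell's > in particular) are not binary-safe and re-encode the stream; stray null bytes and shifted columns during COPY are exactly the symptoms that produces. pg_dump's -f option writes the file directly and sidesteps redirection entirely:
pg_dump -U username -F c -Z 9 -f myarchive mydatabase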
I am seeing similar behavior on Windows 10 Pro machines with PG 11.x.
I used pg_dump as suggested above and restored to those machines with psql, with no errors.
I also noted that the error shifted around when using pg_restore with different -j settings. For instance, without the option, or with -j 1, pg_restore always fails on the same table and record. Changing to -j 4 lets that table apply the record without error, but the failure then occurs on another table.
Changing a particular column to null in that record lets the entire restore succeed.
Using pgAdmin 4 to run the restore never produces the error.
Copying the exact command displayed in pgAdmin reproduces the same error:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 32780; 0 5435293 TABLE DATA REDACTED_TABLE_NAME postgres
pg_restore: [archiver (db)] COPY failed for table "REDACTED_TABLE_NAME": ERROR: extra data after last expected column
CONTEXT: COPY mi_gmrfutil, line 117: "REDACTED PLAIN TEXT \N REDACTED PLAIN TEXT \N \N \N \N \N \N REDACTED PLAIN TEXT \N \N REDACTED PLAIN TEXT \N ..."
pg_restore: FATAL: invalid frontend message type 49
I tried using pg_restore version 14 with the same outcome.
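For anyone debugging the same COPY failure: one way to inspect the offending row without a full restore (a sketch, assuming the archive file is named myarchive; the table name is taken from the redacted log above) is to have pg_restore write that table's data out as plain SQL and look at the reported line:
pg_restore -a -t REDACTED_TABLE_NAME -f table_data.sql myarchive
The output contains the raw COPY data, so you can check line 117 directly for stray tabs or null bytes.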
The problem has already been approached here and here, but without any outcome for me to work with.
I am backing up as required by the Heroku documentation:
pg_dump -Fc --no-acl --no-owner --no-privileges -h [HOST] -U [DB_USER] [DB_NAME] > backup.dump
When I restore like so (the file's size is ~100MB):
heroku pg:backups --app [APP_NAME] restore 'https://[DUMP_URL]' [HEROKU_DATABASE_URL]
I receive an error message after the upload progress stops at 19.4MB, and the logs state:
$ heroku pg:backups --app [APP_NAME] info r081
=== Backup info: r081
Database: BACKUP
Started: 2016-04-29 10:10:37 +0000
Finished: 2016-04-29 10:11:31 +0000
Status: Failed
Type: Manual
Backup Size: 19.4MB
=== Backup Logs
2016-04-29 10:10:39 +0000: pg_restore: connecting to database for restore
2016-04-29 10:10:41 +0000: pg_restore: creating SCHEMA public
2016-04-29 10:10:41 +0000: pg_restore: creating EXTENSION plpgsql
2016-04-29 10:10:41 +0000: pg_restore: creating COMMENT EXTENSION plpgsql
2016-04-29 10:10:41 +0000: pg_restore: [archiver (db)] Error while PROCESSING TOC:
2016-04-29 10:10:41 +0000: pg_restore: [archiver (db)] Error from TOC entry 2474; 0 0 COMMENT EXTENSION plpgsql
2016-04-29 10:10:41 +0000: pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
2016-04-29 10:10:41 +0000: Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
2016-04-29 10:10:41 +0000:
2016-04-29 10:10:41 +0000:
2016-04-29 10:10:41 +0000:
2016-04-29 10:10:41 +0000: pg_restore: creating SEQUENCE answer_seq
[...]
2016-04-29 10:10:48 +0000: pg_restore: executing SEQUENCE SET category_seq
2016-04-29 10:10:49 +0000: pg_restore: processing data for table "file"
2016-04-29 10:11:31 +0000: out of memory
2016-04-29 10:11:31 +0000: waiting for restore to complete
2016-04-29 10:11:31 +0000: restore done
2016-04-29 10:11:31 +0000: waiting for download to complete
2016-04-29 10:11:31 +0000: download done
I first thought it could be due to the file table, which contains BLOBs, but I can't see why that should be a problem; the largest file is 4MB.
The logs also state that only 19.4MB of the 100MB have been imported.
I use the following version of the Heroku Toolbelt:
heroku-toolbelt/3.43.0 (x86_64-linux-gnu) ruby/2.1.5
heroku-cli/4.30.0-2dfc0f4 (amd64-linux) go1.6.2
=== Installed Plugins
heroku-apps#2.0.3
heroku-cli-addons#0.3.0
heroku-fork#4.1.3
heroku-git#2.5.1
heroku-local#5.0.2
heroku-orgs#1.1.0
heroku-pg-extras
heroku-pipelines#1.1.5
heroku-run#3.2.3
heroku-spaces#2.1.2
heroku-status#2.1.4
I suppose my database plan was not big enough to support the operation. I can exclude the other issues stated in the Heroku FAQ.
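For reference, a quick way to check whether the plan is the bottleneck (assuming the Heroku CLI) is pg:info, which reports the plan along with the current data size and row count:
heroku pg:info --app [APP_NAME]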
I ran pg_dump on my VPS server, and it threw this error:
pg_dump: [archiver (db)] query failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_dump: [archiver (db)] query was: SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
Then I noticed the SQL in the above error:
SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
So I tried to run SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) in psql, and it threw this error:
pzz_development=# SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q
How can I figure out the problem, and dump my data properly?
EDIT:
Then I checked the PostgreSQL log at /var/log/postgresql/postgresql-9.3-main.log:
2015-08-10 16:22:49 CST LOG: server process (PID 4029) was terminated by signal 11: Segmentation fault
2015-08-10 16:22:49 CST DETAIL: Failed process was running: SELECT
( SELECT alias FROM pg_catalog.ts_token_type('22171'::pg_catalog.oid) AS t
WHERE t.tokid = m.maptokentype ) AS tokenname,
m.mapdict::pg_catalog.regdictionary AS dictname
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = '22172'
ORDER BY m.mapcfg, m.maptokentype, m.mapseqno
2015-08-10 16:22:49 CST LOG: terminating any other active server processes
2015-08-10 16:22:49 CST WARNING: terminating connection because of crash of another server process
2015-08-10 16:22:49 CST DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2015-08-10 16:22:49 CST HINT: In a moment you should be able to reconnect to the database and repeat your command.
2015-08-10 16:22:49 CST LOG: all server processes terminated; reinitializing
2015-08-10 16:22:49 CST LOG: database system was interrupted; last known up at 2015-08-10 16:22:45 CST
2015-08-10 16:22:50 CST LOG: database system was not properly shut down; automatic recovery in progress
2015-08-10 16:22:50 CST LOG: unexpected pageaddr 0/2AE6000 in log segment 000000010000000000000004, offset 11427840
2015-08-10 16:22:50 CST LOG: redo is not required
2015-08-10 16:22:50 CST LOG: MultiXact member wraparound protections are now enabled
2015-08-10 16:22:50 CST LOG: autovacuum launcher started
2015-08-10 16:22:50 CST LOG: database system is ready to accept connections
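A diagnostic idea (a sketch, not a confirmed fix): the OIDs in the crashing query point at a text search parser (22171) and configuration (22172), so looking them up in the catalogs may tell you which custom text search object the backend chokes on; catalog lookups alone do not invoke the parser, so they should be safe to run. If the object is expendable, dropping and recreating it might let pg_dump proceed:
-- identify the text search configuration and its parser
SELECT c.oid, c.cfgname, p.prsname
FROM pg_catalog.pg_ts_config AS c
JOIN pg_catalog.pg_ts_parser AS p ON p.oid = c.cfgparser
WHERE c.oid = 22172;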
I'm getting the error below while restoring a dump into MongoDB:
Mon Jul 27 14:08:52.936 going into namespace [test.BC_2022_tmp]
Mon Jul 27 14:08:52.936 warning: Restoring to viacrm.BC_2022_tmp without dropping. Restored data will be inserted without raising errors; check your server log
Mon Jul 27 14:08:52.937 file /home/dev/test/BC_2022_tmp.bson empty, skipping
Mon Jul 27 14:08:52.937 Creating index: { key: { _id: 1 }, ns: "viacrm.BC_2022_tmp", name: "_id_" }
Mon Jul 27 14:08:52.938 ERROR: Error creating index test.BC_2022_tmp: 13347 err: "local.oplog.rs missing. did you drop it? if so restart server"
Abandon (core dumped)
Dump restore command:
mongorestore -d test /home/dev/crm
Can anyone please help me resolve this issue?
I want to import the OSM planet file planet-131113.osm.pbf with osm2pgsql.
But when it is processing relations, a window pops up saying "osm2pgsql has stopped working", and then the process is killed. What is the problem?
My hardware is:
CPU: Intel Xeon E5-2620
RAM: 24GB
Mainboard: Supermicro X9DR3-F-O
HDD: Seagate 2TB Barracuda
My OS is Windows Server 2008 R2, with PostgreSQL 9.3 and the PostGIS 2.1 bundle.
PostgreSQL config:
effective_cache_size = 28GB
shared_buffers = 2GB
maintenance_work_mem = 2GB
work_mem = 512MB
checkpoint_segments = 100
checkpoint_completion_target = 0.9
autovacuum = off
fsync = off
synchronous_commit = off
full_page_writes = off
and the osm2pgsql command:
F:\OSM\x64>osm2pgsql -d OSMPlanet -s -S default.style -C 16000 -U postgres -r pbf -k -v --number-processes 12 planet-131113.osm.pbf
osm2pgsql SVN version af61cae663 (64bit id space)
release notes: 'Windows version built by Dominik Perpeet (http://www.customdebug
.com/osm2pgsql/index.html)'
WARNING: osm2pgsql was compiled without fork, only using one process!
Using projection SRS 900913 (Spherical Mercator)
Setting up table: planet_osm_point
NOTICE: table "planet_osm_point" does not exist, skipping
NOTICE: table "planet_osm_point_tmp" does not exist, skipping
Setting up table: planet_osm_line
NOTICE: table "planet_osm_line" does not exist, skipping
NOTICE: table "planet_osm_line_tmp" does not exist, skipping
Setting up table: planet_osm_polygon
NOTICE: table "planet_osm_polygon" does not exist, skipping
NOTICE: table "planet_osm_polygon_tmp" does not exist, skipping
Setting up table: planet_osm_roads
NOTICE: table "planet_osm_roads" does not exist, skipping
NOTICE: table "planet_osm_roads_tmp" does not exist, skipping
Allocating memory for sparse node cache
Node-cache: cache=16000MB, maxblocks=2048001*zd, allocation method=8192
Mid: pgsql, scale=100 cache=16000
Setting up table: planet_osm_nodes
NOTICE: table "planet_osm_nodes" does not exist, skipping
Setting up table: planet_osm_ways
NOTICE: table "planet_osm_ways" does not exist, skipping
Setting up table: planet_osm_rels
NOTICE: table "planet_osm_rels" does not exist, skipping
Reading in file: planet-131113.osm.pbf
Processing: Node(2084251k 142.6k/s) Way(204149k 0.84k/s) Relation(95360 5.64/s)
PostgreSQL log:
2013-12-09 00:46:19 PST ERROR: unexpected EOF on client connection with an open transaction
2013-12-09 00:46:19 PST CONTEXT: COPY planet_osm_nodes, line 1
2013-12-09 00:46:19 PST STATEMENT: COPY planet_osm_nodes FROM STDIN;
2013-12-09 00:46:19 PST ERROR: unexpected EOF on client connection with an open transaction
2013-12-09 00:46:19 PST CONTEXT: COPY planet_osm_rels, line 1
2013-12-09 00:46:19 PST STATEMENT: COPY planet_osm_rels FROM STDIN;
2013-12-09 00:46:19 PST LOG: could not send data to client: No connection could be made because the target machine actively refused it.
2013-12-09 00:46:19 PST STATEMENT: COPY planet_osm_ways FROM STDIN;
2013-12-09 00:46:19 PST LOG: could not send data to client: No connection could be made because the target machine actively refused it.
2013-12-09 00:46:19 PST STATEMENT: COPY planet_osm_nodes FROM STDIN;
2013-12-09 00:46:19 PST LOG: could not send data to client: No connection could be made because the target machine actively refused it.
2013-12-09 00:46:19 PST STATEMENT: COPY planet_osm_rels FROM STDIN;
2013-12-09 00:46:19 PST FATAL: connection to client lost
2013-12-09 00:46:19 PST FATAL: connection to client lost
2013-12-09 00:46:19 PST FATAL: connection to client lost
2013-12-09 00:46:19 PST FATAL: connection to client lost
2013-12-09 00:48:53 PST LOG: received fast shutdown request
2013-12-09 00:48:53 PST LOG: aborting any active transactions
2013-12-09 00:48:53 PST LOG: shutting down
2013-12-09 00:48:53 PST LOG: database system is shut down
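An observation on the numbers above (an assumption, not a confirmed diagnosis): -C 16000 asks osm2pgsql for a 16GB node cache on a 24GB machine that is also running PostgreSQL with 2GB shared_buffers and 2GB maintenance_work_mem, and the Windows build warns it was compiled without fork, so --number-processes 12 has no effect anyway. The crash comes during the memory-heavy relation phase, so a smaller cache leaves more headroom; a sketch of the same command with a reduced cache:
F:\OSM\x64>osm2pgsql -d OSMPlanet -s -S default.style -C 8000 -U postgres -r pbf -k planet-131113.osm.pbf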