I installed Postgres/PostGIS on Mac OS X. I followed all the steps needed to import OSM data into the database (this includes setting up a user and a database and adding the spherical Mercator projection), but for some reason the tables planet_osm_line, planet_osm_roads, planet_osm_polygon, etc. are not showing up in the database I created. Here is the success message from importing the OSM data into the database:
Using projection SRS 900913 (Spherical Mercator)
Setting up table: planet_osm_point
NOTICE: table "planet_osm_point_tmp" does not exist, skipping
Setting up table: planet_osm_line
NOTICE: table "planet_osm_line_tmp" does not exist, skipping
Setting up table: planet_osm_polygon
NOTICE: table "planet_osm_polygon_tmp" does not exist, skipping
Setting up table: planet_osm_roads
NOTICE: table "planet_osm_roads_tmp" does not exist, skipping
Using built-in tag processing pipeline
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=800MB, maxblocks=102400*8192, allocation method=3
Mid: Ram, scale=100
Reading in file: map.osm
Processing: Node(0k 0.9k/s) Way(0k 0.09k/s) Relation(0 0.00/s) parse time: 0s
Node stats: total(923), max(3044483353) in 0s
Way stats: total(86), max(291932583) in 0s
Relation stats: total(0), max(0) in 0s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
...
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_roads created in 0s
Completed planet_osm_roads
Osm2pgsql took 0s overall
I followed the steps listed here: http://skipperkongen.dk/2011/01/20/how-to-import-open-street-map-data-into-postgresql/
and here: https://wiki.archlinux.org/index.php/GpsDrive
but the relations above still do not show up. I just get an error saying those relations do not exist when I query my database. What else am I missing here?
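For reference, this is the check I intend to run from psql to see where (if anywhere) the tables ended up; it assumes I am connected to the same database I pointed osm2pgsql at:
-- list any planet_osm tables and the schema they live in
SELECT schemaname, tablename FROM pg_tables WHERE tablename LIKE 'planet_osm%';
-- if they sit outside the current search_path, unqualified names will not resolve
SHOW search_path;
If the first query returns nothing at all, I am presumably connected to a different database than the one the import wrote to.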
I am new to Postgres, and one of my reports, which uses SELECT to extract JSON, returns the following error.
ERROR: unexpected chunk number 0 (expected 1) for toast value 12599063 in pg_toast_16687
SQL state: XX000
I do not know how to proceed with fixing my query. Any ideas?
Run this command:
select reltoastrelid::regclass from pg_class where relname = 'table_name';
where table_name is the table where the error occurs. Then check whether the result matches the TOAST relation named in the error, e.g. pg_toast.pg_toast_XXXXX. Mine happened to be 16687.
Then run these commands to reindex:
REINDEX table table_name;
REINDEX table pg_toast.pg_toast_16687;
VACUUM ANALYZE table_name;
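If the error persists after the REINDEX, you can also try to locate the damaged row(s). A rough sketch with placeholder names (problem_column stands for whichever column your report extracts the JSON from):
-- forces detoasting of every value in the column; it aborts at the first
-- value whose TOAST data is still corrupt
SELECT max(length(problem_column::text)) FROM table_name;
-- narrow the failure down by adding WHERE ranges on your primary key,
-- then exclude or delete the offending row(s) after taking a backup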
That is data corruption:
restore from backup
upgrade to the latest PostgreSQL minor release
check the hardware
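For the second point, it helps to confirm which release you are actually running before planning the minor upgrade; a quick check:
-- full version string, including platform and compiler
SELECT version();
-- just the version number
SHOW server_version;
Compare the result with the latest minor release of your major version.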
I'm trying to create a schema with the query:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
but following error occurred:
2019-09-10 13:47:37.025 UTC [129] ERROR: duplicate key value violates unique constraint "pg_namespace_nspname_index"
2019-09-10 13:47:37.025 UTC [129] DETAIL: Key (nspname)=(hdb_catalog) already exists.
2019-09-10 13:47:37.025 UTC [129] STATEMENT:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
How is that possible with IF NOT EXISTS?
That looks like you have catalog corruption.
With some luck, only the index is affected. You can try to repair it using
REINDEX TABLE pg_catalog.pg_namespace;
As in all cases of corruption, it is advisable to create a new cluster with initdb and use pg_dump/pg_restore to copy the database there. There might be more problems.
Also, try to find out what caused the corruption. Often it is bad hardware.
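Once the REINDEX succeeds, a sanity check worth running (a sketch; it only covers this particular symptom, not the catalog as a whole) is to look for duplicate schema names that the broken unique index may have let through:
-- any row returned means pg_namespace itself holds duplicates,
-- and only a dump/restore into a fresh cluster will clean that up
SELECT nspname, count(*) FROM pg_namespace GROUP BY nspname HAVING count(*) > 1;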
I've looked at previous posts on this topic but couldn't understand why the import process died, so I cannot start it again until I understand what the errors in the log mean (the import took 12 days).
Here's the famous Nominatim setup log:
CREATE TABLE
CREATE TABLE
CREATE TYPE
Import
osm2pgsql version 0.93.0-dev (64 bit id space)
Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=2800MB, maxblocks=44800*65536, allocation method=11
Mid: loading persistent node cache from /srv/nominatim/data/flatnode.file
Mid: pgsql, cache=2800
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /srv/nominatim/data/europe-latest.osm.pbf
Using PBF parser.
Processing: Node(1916319k 644.1k/s) Way(234287k 0.25k/s) Relation(3519490 21.55/s) parse time: 1109177s
Node stats: total(1916319008), max(4937556462) in 2975s
Way stats: total(234287257), max(503447033) in 942887s
Relation stats: total(3519494), max(7358761) in 163315s
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Going over pending ways...
0 ways are pending
Using 1 helper-processes
Finished processing 0 ways in 0 s
Going over pending relations...
0 relations are pending
Using 1 helper-processes
Finished processing 0 relations in 0 s
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Stopped table: planet_osm_ways in 0s
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
ERROR: Error executing external command: /srv/nominatim/Nominatim/build/osm2pgsql/osm2pgsql --flat-nodes /srv/nominatim/data/flatnode.file -lsc -O gazetteer --hstore --number-processes 1 -C 2800 -P 5432 -d nominatim /srv/nominatim/data/europe-latest.osm.pbf
Error executing external command: /srv/nominatim/Nominatim/build/osm2pgsql/osm2pgsql --flat-nodes /srv/nominatim/data/flatnode.file -lsc -O gazetteer --hstore --number-processes 1 -C 2800 -P 5432 -d nominatim /srv/nominatim/data/europe-latest.osm.pbf
Can anyone help?
Thanks in advance
I am using a Docker project on a rather powerful server (120 GB RAM and plenty of disk space).
When trying to run an import on the Postgres server, I get the following error:
Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=1207MB, maxblocks=154496*8192, allocation method=11
Mid: pgsql, scale=10000000 cache=1207
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /app/src/data.osm.pbf
Using PBF parser.
node cache: stored: 0(-nan%), storage efficiency: -nan% (dense blocks: 0, sparse nodes: 0), hit rate: -nan%
Osm2pgsql failed due to ERROR: PBF error: invalid BlobHeader size (> max_blob_header_size)
ERROR: Error executing external command: /app/src/osm2pgsql/osm2pgsql -lsc -O gazetteer --hstore --number-processes 1 -C 1207 -P 5432 -d nominatim /app/src/data.osm.pbf
How could I increase the max_blob_header_size?
I stumbled upon the same issue while feeding an S3-hosted PBF file into a Nominatim Docker container.
Unfortunately, I had failed to configure access to the PBF file properly, so the Docker container saved the XML error response as /app/src/data.osm.pbf. That is why the file header check fails.
While taking a backup of my PostgreSQL database, it shows:
pg_dump: Dumping the contents of table "gtab17" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: invalid page header in block 9576 of relation base/17779/758869
pg_dump: The command was: COPY public.gtab17 (jrdetid, jrmid, acid, dr, cr, narr, ageamt) TO stdout;
I think my table gtab17 is corrupt.
I tried VACUUM FULL on this table and got this error:
INFO: vacuuming "public.gtab17" ; ERROR: row is too big: size 3256104, maximum size 8160
ANALYZE also fails:
INFO: analyzing "public.gtab17" ;
ERROR: invalid page header in block 9576 of relation base/17779/758869
Database: PostgreSQL 9.2
OS: Windows XP SP3; filesystem: NTFS
I have googled but didn't find any solution for this.
It means your data file is corrupted. The fix is relatively difficult; the best approach is to restore from an older backup. You can try to repair it by replacing the broken data page with zeroes, but then you lose some data, and without deeper knowledge you can destroy more than is already damaged.
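If you decide to try the zero-out route anyway, do it only after taking a file-level copy of the whole data directory, and accept that the rows on the damaged pages are gone for good. The usual sketch looks like this:
-- DANGEROUS: pages with invalid headers are replaced with zeroes
SET zero_damaged_pages = on;
-- a plain VACUUM (not FULL) is enough to read every page of the table
VACUUM public.gtab17;
SET zero_damaged_pages = off;
Afterwards, retry pg_dump; whatever survives can be restored into a fresh database.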