I have a VPS with 6 cores, 60 GB of RAM and 1 TB of SSD space.
I installed Nominatim on Ubuntu 18.04 following the official installation guide (Nominatim Installation).
I tried to import the OpenStreetMap planet PBF file from Planet OpenStreetMap:
./utils/setup.php --osm-file planet-latest.osm.pbf --all --osm2pgsql-cache 28000 2>&1 | tee setup.log
After a few hours I got the error "Segmentation fault ERROR: No Data", see below:
> 2019-11-30 16:50:55 == Setup DB
> Postgres version found: 10
> Postgis version found: 2.4
> set_config
> ------------
>
> (1 row)
>
> 2019-11-30 16:51:04 == Import data
> osm2pgsql version 1.2.0 (64 bit id space)
>
> Allocating memory for dense node cache
> Processing: Node(618450k 2425.3k/s) Way(0k 0.00k/s) Relation(0 0.00/s)
> Allocating memory for sparse node cache
> Sharing dense sparse
> Node-cache: cache=28000MB, maxblocks=448000*65536, allocation method=11
> Mid: loading persistent node cache from /srv/nominatim/flatnode/flatnode.file
> Mid: pgsql, cache=28000
> Setting up table: planet_osm_nodes
> Setting up table: planet_osm_ways
> Setting up table: planet_osm_rels
> Processing: Node(621390k 2427.3k/s) Way(0k 0.00k/s) Relation(0 0.00/s)import-full.style'.
> Using projection SRS 4326 (Latlong)
> NOTICE: table "place" does not exist, skipping
>
> Reading in file: /srv/nominatim/Nominatim-3.4.0/build/planet-latest.osm.pbf
> Using PBF parser.
> Processing: Node(5612922k 3148.0k/s) Way(437617k 33.05k/s) Relation(0 0.00/s)
> Segmentation fault ERROR: No Data
> string(7) "No Data"
Any idea why I got this error?
Thanks.
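One detail worth noting: the log shows a flat-node file in use ("Mid: loading persistent node cache ..."), which already stores node locations on disk, so a 28 GB in-memory node cache on a 60 GB machine mostly duplicates that work and can push the box into memory pressure alongside PostgreSQL. A sketch of a retry with a much smaller cache (the value 2000 is an example, not a tested recommendation):

```shell
# Same import command as above, but with a small osm2pgsql cache,
# relying on the flat-node file for node storage instead of RAM.
./utils/setup.php --osm-file planet-latest.osm.pbf --all \
  --osm2pgsql-cache 2000 2>&1 | tee setup.log
```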
I am trying to setup a Nominatim server following this tutorial:
https://www.linuxbabe.com/ubuntu/osm-nominatim-geocoding-server-ubuntu-20-04
Shortly after I start the import of the OSM data into the PostgreSQL database using the following command:
/srv/nominatim/build$ /srv/nominatim/build/utils/setup.php --osm-file /home/zineb/data/great-britain-latest.osm.pbf --all 2>&1 | tee setup.log
I get the following error:
2022-03-02 11:49:27 == module path: /srv/nominatim/build/module
2022-03-02 11:49:27 == Create DB
2022-03-02 11:49:27 == Setup DB
Postgres version found: 12
Postgis version found: 3.2
PHP Fatal error: Uncaught Nominatim\DatabaseError: [500]: Database server failed to load /srv/nominatim/build/module/nominatim.so module
  thrown in /srv/nominatim/Nominatim-3.5.1/lib/DB.php on line 61
What can be the source of this issue?
Thanks in advance.
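A common cause of this particular error is that the PostgreSQL server process, which runs as the postgres system user, cannot read nominatim.so or traverse one of the directories on the way to it. A sketch of how to check, using the path from the error message above:

```shell
# Can the postgres user actually see the module file?
sudo -u postgres ls -l /srv/nominatim/build/module/nominatim.so

# Every directory on the path needs execute (traverse) permission
# for the postgres user; namei -l shows the permissions level by level.
namei -l /srv/nominatim/build/module/nominatim.so
```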
I am creating a PostGIS database and want to use filtered OpenStreetMap data.
For this I have tried the following process:
1. Downloaded the planet.osm.bz2 file from https://planet.osm.org/
2. Unpacked to *.osm using bzip2
3. Filtered the file using osmfilter through the command prompt
4. Uploaded the filtered *.osm file to my database using osm2pgsql in the command prompt
For my first attempt I filtered for land area only.
However, in step 4 using osm2pgsql, I receive the following error in the command prompt: "Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)"
As shown in the command prompt on my Windows computer:
Z:\OpenStreetMap>osm2pgsql -U postgres -W -m -d osm -p filteredland -S "C:\Program Files (x86)\HOTOSM\share\default.style" filteredland2.osm
osm2pgsql version 0.92.0 (64 bit id space)
Password:
Using built-in tag processing pipeline
Using projection SRS 3857 (Spherical Mercator)
Setting up table: filteredland_point
Setting up table: filteredland_line
Setting up table: filteredland_polygon
Setting up table: filteredland_roads
Allocating memory for sparse node cache
Node-cache: cache=800MB, maxblocks=12800*65536, allocation method=1
Mid: Ram, scale=100
Reading in file: filteredland2.osm
Using XML parser.
Processing: Node(1230k 61.5k/s) Way(0k 0.00k/s) Relation(0 0.00/s)
node cache: stored: 1233078(100.00%), storage efficiency: 50.00% (dense blocks: 0, sparse nodes: 1233078), hit rate: -nan(ind)%
Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)
I have also attempted two alternate routes, which also failed:
1. Downloading the planet.pbf -> Converting to .o5m using osmconvert -> Filtering using osmfilter
2. Downloading the planet.pbf -> Converting to .osm using osmconvert -> Filtering using osmfilter (gave warnings) -> Using osm2pgsql to transfer to the database
Does anyone know how to avoid this error, or have experience with filtering the planet.osm file and uploading it to PostGIS?
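Before re-running anything, it can help to look at what osmfilter actually wrote at the position the parser rejected; the file name comes from the command above and the line number from the error message:

```shell
# Print the offending line plus two lines of context around the
# reported XML parse error (line 3137102 of the filtered file).
sed -n '3137100,3137104p' filteredland2.osm
```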
I suggest using Osmium instead of osmfilter: it doesn't require converting the planet to a different format first, and it can natively write PBF output, which osm2pgsql can process directly. It's faster, too.
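As a sketch of that pipeline (the filter expression and file names are examples, not taken from the question; substitute your own land-area filter):

```shell
# Filter the planet directly in PBF format - no .osm/.o5m conversion step.
# "natural=coastline" is an example tag filter; replace with your own.
osmium tags-filter planet-latest.osm.pbf natural=coastline -o filteredland.osm.pbf

# Load the filtered PBF straight into PostGIS with osm2pgsql.
osm2pgsql -U postgres -W -d osm -p filteredland -S default.style filteredland.osm.pbf
```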
I've looked at previous posts on this topic but couldn't understand why the import process died, and I cannot start it again until I understand what the errors in the log mean (the import took 12 days).
Here's the famous Nominatim setup log:
CREATE TABLE
CREATE TABLE
CREATE TYPE
Import
osm2pgsql version 0.93.0-dev (64 bit id space)
Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=2800MB, maxblocks=44800*65536, allocation method=11
Mid: loading persistent node cache from /srv/nominatim/data/flatnode.file
Mid: pgsql, cache=2800
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /srv/nominatim/data/europe-latest.osm.pbf
Using PBF parser.
Processing: Node(1916319k 644.1k/s) Way(234287k 0.25k/s) Relation(3519490 21.55/s) parse time: 1109177s
Node stats: total(1916319008), max(4937556462) in 2975s
Way stats: total(234287257), max(503447033) in 942887s
Relation stats: total(3519494), max(7358761) in 163315s
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Going over pending ways...
0 ways are pending
Using 1 helper-processes
Finished processing 0 ways in 0 s
Going over pending relations...
0 relations are pending
Using 1 helper-processes
Finished processing 0 relations in 0 s
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Stopped table: planet_osm_ways in 0s
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
ERROR: Error executing external command: /srv/nominatim/Nominatim/build/osm2pgsql/osm2pgsql --flat-nodes /srv/nominatim/data/flatnode.file -lsc -O gazetteer --hstore --number-processes 1 -C 2800 -P 5432 -d nominatim /srv/nominatim/data/europe-latest.osm.pbf
Error executing external command: /srv/nominatim/Nominatim/build/osm2pgsql/osm2pgsql --flat-nodes /srv/nominatim/data/flatnode.file -lsc -O gazetteer --hstore --number-processes 1 -C 2800 -P 5432 -d nominatim /srv/nominatim/data/europe-latest.osm.pbf
Can anyone help?
Thanks in advance
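Since both parse phases completed and the failure came while building the index on planet_osm_rels, the PostgreSQL server log usually records the underlying cause (running out of disk space or memory during index builds is common). A sketch of where to look, assuming a default Ubuntu PostgreSQL layout (paths may differ on your system):

```shell
# The last lines of the PostgreSQL server log often name the real error
# behind "Error executing external command".
tail -n 100 /var/log/postgresql/postgresql-*-main.log

# Free space for the database cluster is worth checking as well.
df -h /var/lib/postgresql
```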
I am using a Docker project on a rather powerful server (120 GB RAM and plenty of disk space).
When trying to run an import on the Postgres server, I get the following error:
Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=1207MB, maxblocks=154496*8192, allocation method=11
Mid: pgsql, scale=10000000 cache=1207
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /app/src/data.osm.pbf
Using PBF parser.
node cache: stored: 0(-nan%), storage efficiency: -nan% (dense blocks: 0, sparse nodes: 0), hit rate: -nan%
Osm2pgsql failed due to ERROR: PBF error: invalid BlobHeader size (> max_blob_header_size)
ERROR: Error executing external command: /app/src/osm2pgsql/osm2pgsql -lsc -O gazetteer --hstore --number-processes 1 -C 1207 -P 5432 -d nominatim /app/src/data.osm.pbf
How could I increase the max_blob_header_size?
I stumbled upon the same issue while feeding an S3 hosted PBF file into a Nominatim docker container.
Unfortunately I had failed to configure access to the PBF file properly, so the Docker container saved the XML error response as /app/src/data.osm.pbf. That's why the file header check failed.
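A quick way to catch this failure mode before starting an import: a real OSM PBF begins with a 4-byte big-endian BlobHeader length followed by binary protobuf data, while a saved error response is plain text that typically starts with "<" ("<?xml", "<Error>", "<html>", ...). A minimal check (the path is the one from this thread):

```shell
# Sanity-check the download: a text file starting with "<" is almost
# certainly a saved XML/HTML error page, not a PBF.
PBF_FILE=/app/src/data.osm.pbf
if head -c 1 "$PBF_FILE" | grep -q '<'; then
    echo "$PBF_FILE looks like a saved XML/HTML error page, not a PBF"
else
    echo "$PBF_FILE does not start with '<'; plausibly a real PBF"
fi
```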
I installed Postgres/PostGIS on macOS. I followed all the steps needed to import OSM data into the database (this includes setting up a user and a database, and adding the spherical Mercator projection), but for some reason the tables planet_osm_line, planet_osm_roads, planet_osm_polygon, etc. are not showing up in the database I created. Following is the success message for importing OSM data into the database:
Using projection SRS 900913 (Spherical Mercator)
Setting up table: planet_osm_point
NOTICE: table "planet_osm_point_tmp" does not exist, skipping
Setting up table: planet_osm_line
NOTICE: table "planet_osm_line_tmp" does not exist, skipping
Setting up table: planet_osm_polygon
NOTICE: table "planet_osm_polygon_tmp" does not exist, skipping
Setting up table: planet_osm_roads
NOTICE: table "planet_osm_roads_tmp" does not exist, skipping
Using built-in tag processing pipeline
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=800MB, maxblocks=102400*8192, allocation method=3
Mid: Ram, scale=100
Reading in file: map.osm
Processing: Node(0k 0.9k/s) Way(0k 0.09k/s) Relation(0 0.00/s) parse time: 0s
Node stats: total(923), max(3044483353) in 0s
Way stats: total(86), max(291932583) in 0s
Relation stats: total(0), max(0) in 0s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
...
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_roads created in 0s
Completed planet_osm_roads
Osm2pgsql took 0s overall
I followed the steps listed here: http://skipperkongen.dk/2011/01/20/how-to-import-open-street-map-data-into-postgresql/
and here: https://wiki.archlinux.org/index.php/GpsDrive
but still the tables above do not show up. I just get an error saying such relations do not exist when I query my database. What else am I missing here?
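One quick check is whether the tables landed in a different database than the one being queried: osm2pgsql writes into the database named by its -d option, and "relation does not exist" is exactly what Postgres reports when you query another database. A sketch, where "gis" is an example database name to substitute with the one passed to osm2pgsql:

```shell
# List the tables osm2pgsql created, in the database it imported into.
psql -d gis -c '\dt planet_osm_*'
```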