Filtering OpenStreetMap data for PostGIS - PostgreSQL

I am creating a PostGIS database and want to use filtered OpenStreetMap data.
For this I have tried the following process:
1. Downloaded the planet.osm.bz2 file from https://planet.osm.org/
2. Unpacked it to *.osm using bzip2
3. Filtered the file using osmfilter through the command prompt
4. Uploaded the filtered *.osm file to my database using osm2pgsql in the command prompt
For my first attempt I have filtered for land area only.
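The commands for steps 2 and 3 looked roughly like this (the --keep expression here is just a placeholder for my land-area filter):
bzip2 -d planet.osm.bz2
osmfilter planet.osm --keep="natural=coastline" -o=filteredland2.osm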
However, in step 4 using osm2pgsql, I receive the following error in the command prompt: "Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)"
As shown in the command prompt on my Windows computer:
Z:\OpenStreetMap>osm2pgsql -U postgres -W -m -d osm -p filteredland -S "C:\Program Files (x86)\HOTOSM\share\default.style" filteredland2.osm
osm2pgsql version 0.92.0 (64 bit id space)
Password:
Using built-in tag processing pipeline
Using projection SRS 3857 (Spherical Mercator)
Setting up table: filteredland_point
Setting up table: filteredland_line
Setting up table: filteredland_polygon
Setting up table: filteredland_roads
Allocating memory for sparse node cache
Node-cache: cache=800MB, maxblocks=12800*65536, allocation method=1
Mid: Ram, scale=100
Reading in file: filteredland2.osm
Using XML parser.
Processing: Node(1230k 61.5k/s) Way(0k 0.00k/s) Relation(0 0.00/s)node cache: stored: 1233078(100.00%), storage efficiency: 50.00% (dense blocks: 0, sparse nodes: 1233078), hit rate: -nan(ind)%
Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)
I have also attempted two alternate routes, which also failed:
1. Downloading the planet.pbf -> Converting to .o5m using osmconvert -> Filtering using osmfilter
2. Downloading the planet.pbf -> Converting to .osm using osmconvert -> Filtering using osmfilter (gave warnings) -> Using osm2pgsql to transfer to database
Does anyone know how to avoid this error, or have experience with filtering the planet.osm file and uploading it to PostGIS?

I suggest using Osmium instead of osmfilter. It doesn't require converting the planet file to a different format first, and it can natively output PBF data, which osm2pgsql can process directly. It's faster, too.
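A minimal sketch of that pipeline (natural=coastline is only an illustrative stand-in for whatever land-area filter you actually need):
osmium tags-filter planet-latest.osm.pbf natural=coastline -o filteredland.osm.pbf
osm2pgsql -U postgres -W -m -d osm -p filteredland -S "C:\Program Files (x86)\HOTOSM\share\default.style" filteredland.osm.pbf
Since osmium tags-filter reads and writes PBF directly, the bzip2 and osmconvert steps disappear entirely.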

Related

Data corrupted in Postgres - right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx"

I am new to Postgres. We are using it for test reports. We had an issue with our environment that inserted duplicate keys into one of the tables, and since then we get this message when trying to run migration scripts:
error: migration failed: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx" in line 0: UPDATE log SET project_id = (SELECT project_id FROM item_project WHERE item_project.item_id=log.item_id LIMIT 1); (details: pq: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx")
I tried to run pg_dump and got this error:
pg_dump: error: query was: SELECT pg_catalog.pg_get_viewdef('457544'::pg_catalog.oid) AS viewdef
pg_dumpall: error: pg_dump failed on database "reportportal", exiting
Can anyone help here?
Restore your backup, and research what parameters you changed and what you did to end up with data corruption in the first place.
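That said, since the error points at an index rather than at table data, rebuilding that index sometimes clears this class of error. This is a sketch, not a guaranteed fix, and you should only try it after securing a file-level copy of the data directory:
REINDEX INDEX log_attach_id_idx;
If pg_dump still fails afterwards (the pg_get_viewdef error suggests the corruption may reach the system catalogs), restoring from backup really is the only safe option.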

Error cloning collection using Cosmic Clone

I'm trying to clone an existing MongoDB collection that is running on Azure Cosmos DB to another collection on the same DB using Cosmic Clone.
Access validation succeeds, but the process fails with the following error message:
Collection Copy log
Begin Document Migration.
Source Database: myDB Source Collection: X
Target Database: myDB Target Collection: Y
LogError
Error: Error reading JObject from JsonReader. Path '', line 0, position 0., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Main process exits with error
LogError
Error: One or more errors occurred., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Any ideas are appreciated.
I've not used this tool, but I took a quick look at its source and I'm fairly certain it is not designed to work with MongoDB collections in Cosmos DB.
If you're looking to copy a MongoDB collection, you're better off using the native Mongo tools mongodump and mongorestore.
More details here: https://docs.mongodb.com/database-tools/
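A rough sketch of that copy (the host, account name, and key are placeholders for your Cosmos DB account's Mongo connection settings):
mongodump --host <account>.mongo.cosmos.azure.com --port 10255 -u <account> -p <key> --ssl --db myDB --collection X --out dump/
mongorestore --host <account>.mongo.cosmos.azure.com --port 10255 -u <account> -p <key> --ssl --nsFrom "myDB.X" --nsTo "myDB.Y" dump/
The --nsFrom/--nsTo pair is what renames collection X to Y on restore.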

Cannot create a backup of a Firebird database because of errors

When backing up a Firebird database (gbak -g -ig), I get the following error:
gbak: writing data for table ORDERS
gbak: ERROR:message length error (encountered 532, expected 528)
gbak: ERROR:gds_$receive failed
gbak:Exiting before completion due to errors
When I run gfix with different parameters (-v -full, -mend, -ignore), I get this message:
Summary of validation errors
Number of index page errors : 540
In firebird.log file I see the lines:
PC (Server) Thu Sep 20 08:37:01 2018
Database: E:\...GDB
Index 2 is corrupt on page 134706 level 1. File: ..\..\..\src\jrd\validation.cpp, line: 1699
in table COMPONENTS (197)
However, the database itself works without problems.
Please help me fix the error and make a backup.
(I need the backup to migrate to a 64-bit server.)
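One commonly suggested workaround, assuming the corruption really is confined to the index (and the index does not back a primary key or unique constraint, which cannot be deactivated): look the index up by its id, deactivate it, take the backup, then reactivate it so Firebird rebuilds it. A sketch in isql:
SELECT RDB$INDEX_NAME FROM RDB$INDICES
WHERE RDB$RELATION_NAME = 'COMPONENTS' AND RDB$INDEX_ID = 2;
ALTER INDEX <index_name_from_above> INACTIVE;
-- run gbak here, then:
ALTER INDEX <index_name_from_above> ACTIVE;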

Osm2pgsql failed due to ERROR: PBF error: invalid BlobHeader size (> max_blob_header_size)

I am running a Docker project on a rather powerful server (120 GB RAM and plenty of disk space).
When trying to run an import on the Postgres server, I get the following error:
Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=1207MB, maxblocks=154496*8192, allocation method=11
Mid: pgsql, scale=10000000 cache=1207
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: /app/src/data.osm.pbf
Using PBF parser.
node cache: stored: 0(-nan%), storage efficiency: -nan% (dense blocks: 0, sparse nodes: 0), hit rate: -nan%
Osm2pgsql failed due to ERROR: PBF error: invalid BlobHeader size (> max_blob_header_size)
ERROR: Error executing external command: /app/src/osm2pgsql/osm2pgsql -lsc -O gazetteer --hstore --number-processes 1 -C 1207 -P 5432 -d nominatim /app/src/data.osm.pbf
How could I increase the max_blob_header_size?
I stumbled upon the same issue while feeding an S3-hosted PBF file into a Nominatim Docker container.
Unfortunately, I had failed to configure access to the PBF file properly, so the Docker container saved the XML error response as /app/src/data.osm.pbf. That's why the file header check failed.
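An easy way to check whether you are in the same situation (the path is taken from the log above): inspect the first bytes of the file. A genuine PBF file is binary, while a failed download usually turns out to be readable XML or HTML:
file /app/src/data.osm.pbf
head -c 200 /app/src/data.osm.pbf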

Error while taking backup in PostgreSQL (Could not read Block X of relation base/Y/Z)

While taking a backup of my PostgreSQL database, it shows:
pg_dump: Dumping the contents of table "gtab17" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: invalid page header in block 9576 of relation base/17779/758869
pg_dump: The command was: COPY public.gtab17 (jrdetid, jrmid, acid, dr, cr, narr, ageamt) TO stdout;
I think my table gtab17 is corrupt.
I tried:
VACUUM FULL, which gives an error on this table:
INFO: vacuuming "public.gtab17" ; ERROR: row is too big: size 3256104, maximum size 8160
ANALYZE, which also errors:
INFO: analyzing "public.gtab17" ;
ERROR: invalid page header in block 9576 of relation base/17779/758869
Database : PostgreSQL 9.2
OS : Windows XP SP3 ; FILESYSTEM : NTFS
I have googled but didn't find any solution for this.
It means your data file is corrupted. A solution is relatively difficult; the best way is recovery from an older backup. You can try to fix it by replacing the broken data page with zeroes, but you will lose some data, and without deeper knowledge you can destroy more than is currently damaged.
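The zero-page approach uses PostgreSQL's zero_damaged_pages setting (superuser only). It irreversibly overwrites pages with invalid headers as they are read, so take a file-level copy of the data directory first. A sketch:
SET zero_damaged_pages = on;
VACUUM public.gtab17;  -- reading the table zeroes the broken page(s)
SET zero_damaged_pages = off;
After that, retry the pg_dump of the table; whatever rows lived on the zeroed pages are gone.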