Restore postgis backup from 1.5 to 2.1.2 - postgresql

I have a database backup from a PostgreSQL 9.1 server with PostGIS 1.5. Many of the databases were built from a spatially enabled database, created using:
createdb spacial
psql -d spacial -f postgis.sql
psql -d spacial -f spatial_ref_sys.sql
psql -d spacial -f postgis_comments.sql
Now I have PostgreSQL 9.3 and PostGIS 2.1. None of the spatial databases were restored correctly, because my spacial database (created with this new version) contains only one of the two tables that the old version created.
Does anyone know what is going on? Thanks.
I have followed the steps I needed, or at least I think I have. After running SELECT postgis_full_version(); I got this result: "POSTGIS="2.1.2 r12389" GEOS="3.3.3-CAPI-1.7.4" PROJ="Rel. 4.7.1, 23 September 2009" LIBXML="2.9.1" LIBJSON="UNKNOWN""
As PostGIS 2.1.2 is not covered in the link provided, I figured I should run only the script postgis_upgrade_21_minor.sql, as I don't have topology installed. But a few tables were not imported at all. During the import I got a few messages: type "geometry" is undefined.
Any new ideas?

Follow the hard upgrade instructions in the manual. These steps, in brief, are:
Dump the old database to a file
Spatially enable a new empty database (there are several different ways to do this)
Use the special utility postgis_restore.pl that ships with your new version of PostGIS to translate the old dump to the new format - as sketched below.
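A minimal sketch of those three steps, assuming a PostGIS 2.x target with extension support (database names and paths are illustrative, and the location of postgis_restore.pl varies - it ships in the utils directory of the PostGIS source and with many binary packages):
pg_dump -Fc -b -U postgres -f olddb.backup spacial
createdb -U postgres spacial_new
psql -U postgres -d spacial_new -c "CREATE EXTENSION postgis;"
perl postgis_restore.pl olddb.backup | psql -U postgres -d spacial_new 2> errors.txt
Errors collected in errors.txt are mostly old PostGIS internals that the script intentionally skips, but review the file before trusting the restore.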

Related

Syntax to make pg_dump target an older version of pg_restore

I have a Postgres 12 database in use on Heroku, with Postgres 11 installed on my macOS workstation. When I try to restore the file provided to me by Heroku, I get:
$ pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump
pg_restore: [archiver] unsupported version (1.14) in file header
According to Heroku's documentation, it sounds like the only option for a Heroku user who wants to access their data locally is to run Postgres 12. That seems silly.
Digging into the Postgres docs on this topic, they say:
pg_dump can also dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.)
That certainly sounds as though it should be possible to tell pg_dump to target a particular version of pg_restore. But nowhere on the internet does there seem to be an example of this in action, including the Postgres docs themselves, which offer no clues about the syntax that would be used to target dump versions "back to version 8.0".
Has anyone ever managed to use the pg_restore installed with postgres 11 to import a dump from the pg_dump installed with postgres 12?
The best answer I found was to upgrade via brew upgrade libpq. This upgrades psql, pg_dump, and pg_restore to the latest version (to link them I had to use brew link --force libpq). Once that upgrade was in place, I was able to dump from the Postgres 12 databases on Heroku and import into my Postgres 11 database locally. I thought I might need to dump to raw SQL for that to work, but thankfully the pg-12-based pg_restore was able to import into my Postgres 11 database without issue.
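For reference, the sequence that worked was (assuming Homebrew on macOS):
brew upgrade libpq
brew link --force libpq
pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump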
pg_restore will refuse to handle a dump with a later version than itself - basically, if it encounters a file "from the future", it cannot know how to handle it.
So if you dump a database with pg_dump from v12, pg_restore from v11 cannot handle it.
The solution, as you have found out, is to upgrade the v11 installation.
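A quick way to confirm the mismatch on each machine (both tools accept --version):
pg_dump --version
pg_restore --version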

Postgresql - unrecognized configuration parameter

I exported a PostgreSQL database from an external server and attempted to import it into my local server, but got this error:
unrecognized configuration parameter "idle_in_transaction_session_timeout"
Does this kind of error mean that the two servers are using different versions of postgresql? I looked into that, and the external server is running:
version
PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
and my server is running:
version
PostgreSQL 9.5.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609, 64-bit
Pretty much the same thing. Is there a site where you can see all of the valid config parameters for each version? And is there a way to sync up two databases like this, so incompatibilities like this get patched up automatically?
According to the PostgreSQL 9.6 Release Notes, the idle_in_transaction_session_timeout parameter was introduced in version 9.6.
E.2.3.1.10. Server Configuration
Allow sessions to be terminated automatically if they are in
idle-in-transaction state for too long (Vik Fearing)
This behavior is controlled by the new configuration parameter
idle_in_transaction_session_timeout. It can be useful to prevent
forgotten transactions from holding locks or preventing vacuum cleanup
for too long.
Since you are using version 9.5 on the server, the parameter is not recognized.
It's possible that you used a 9.6 version of the PostgreSQL client tools to export data from the source 9.5 server, and the parameter was introduced into the dump file. If this was the case, I would recommend using a 9.5 client version to export and import the data.
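For example, on Debian/Ubuntu systems that install client versions side by side, you can call the 9.5 binaries explicitly (the paths and names below are illustrative and vary by platform):
/usr/lib/postgresql/9.5/bin/pg_dump -h source_host mydb > mydb.sql
/usr/lib/postgresql/9.5/bin/psql -d mydb -f mydb.sql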
The accepted answer is the way to go, but if for some reason you cannot upgrade, here is a workaround.
Export using the plain-text format. You probably want to use compression too:
pg_dump -F p -Z 9 dbname > file.sql.gz
Before importing, we need to remove the offending parameter. To do that we can use zcat and grep:
zcat file.sql.gz | grep -vw "idle_in_transaction_session_timeout" | psql -d newdb
Note that there are drawbacks to using psql instead of pg_restore. For instance, you cannot use -j to restore concurrently.

Import OSM file to PostGis on Windows10

Can you help me with importing a planet.osm file into my PostGIS database? I am new to this and have found tutorials only for Linux.
There are some commands, but I do not know how to use them. I would be grateful for a step-by-step tutorial. I'm using GeoServer, if that is important information. Thanks for any advice.
edit:
I used osm2pgsql -s -U postgres -d nameofdatabase name.osm
but it was unsuccessful because I get an error saying the database was not found.
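For reference: osm2pgsql expects the target database to already exist and to be spatially enabled, so an error like this usually means those steps are missing. A sketch, reusing the names from the command above:
createdb -U postgres nameofdatabase
psql -U postgres -d nameofdatabase -c "CREATE EXTENSION postgis;"
osm2pgsql -s -U postgres -d nameofdatabase name.osm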
I used ogr2ogr to import OSM data in PBF format on Windows (Windows 10, Postgres 9.6 with PostGIS 2.3). You can run ogr2ogr from the "OSGeo4W shell", which comes with QGIS, or you can get OSGeo4W separately. The steps are something like this:
Create a new database: create database db_for_osm
Create the PostGIS extension in your database. In SQL: create extension postgis
Now you can run ogr2ogr. Open the "OSGeo4W shell". This will open a command window with all the environment variables set. The command will be something like:
ogr2ogr -f PostgreSQL PG:"dbname='db_for_osm' host='localhost' port='5432' user='myuser' password='mypassword'" planet.osm.pbf
My large upload took a couple of days to complete, so be prepared for this to take a long time. I suggest you do a test with a small region first; for the test I did for this answer, I downloaded a city extract from Mapzen.

Copy data between postgres databases

I need, as a one-off, to copy data from one table in a PostgreSQL database to the corresponding table in a different database. There's not that much data: about 2500 rows, 8 columns (some numeric, some varchar).
My first thought was to simply pg_dump -a -t table -f output.file and then pg_restore into the other database. However, as it turned out, the versions of pg_dump and the source server do not match, and I have no control over the versions, so upgrading is not an option:
pg_dump: server version: 9.1.2; pg_dump version: 9.0.5
pg_dump: aborting because of server version mismatch
Unfortunately, with version 9 of Postgres, the -i (ignore version) option is no longer available. I do know what I am doing, but it still wouldn't let me (naturally).
What other options do I have?
I would use COPY TO and COPY FROM. It works in both versions and is the optimal tool for this.
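A minimal sketch using psql's client-side \copy (table and database names are illustrative; the data passes through the client, so the two server versions don't need to match):
psql -d sourcedb -c "\copy mytable TO 'mytable.dat'"
psql -d targetdb -c "\copy mytable FROM 'mytable.dat'"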
If you want to use pg_dump, you have to use the appropriate version. There are separate executables for each version. On Linux you can get the path of the currently used executable with which pg_dump.

Use pg_restore to restore from a newer version of PostgreSQL

I have a (production) DB server running PostgreSQL v9.0 and a development machine running PostgreSQL v8.4. I would like to take a dump of the production DB and use it on the development machine. I cannot upgrade the postgres on the dev machine.
On the production machine, I run:
pg_dump -f nvdls.db -F p -U nvdladmin nvdlstats
On the development machine, I run:
pg_restore -d nvdlstats -U nvdladmin nvdls.db
And I got this error:
pg_restore: [archiver] unsupported version (1.12) in file header
This occurs regardless of whether I choose the custom, tar, or plain_text format when dumping.
I found one discussion online which suggests that I should use a newer version of pg_restore on the dev machine. I tried this by simply copying the 9.0 binary to the dev machine, but this fails (not unexpectedly) due to linking problems.
I thought that the point of using a plain_text dump was that it would be raw, portable SQL. Apparently not.
How can I get the 9.0 DB into my 8.4 install?
pg_restore is only for restoring dumps taken in an archive format such as "custom" or "tar".
If you do a "plain text" dump you have to use psql to run the generated SQL script:
psql -f nvdls.db dbname username
Using pg_dump/pg_restore to move from 9.0 to 8.4 is not supported - only moving forward is supported.
However, you can usually get the data across (in a data-only dump), and in some cases you can get the schema - but that's mostly luck, it depends on which features you're using.
You should normally use the target version of pg_dump and pg_restore - meaning in this case you should use the binaries from 8.4. But you should use the same version of pg_dump and pg_restore. Both tools will work fine across the network, so there should be no need to copy the binaries around.
And as a_horse_with_no_name says, you may be better off using pg_dump in plain-text mode - that will allow you to hand-edit the dump if necessary. In particular, you can make a schema-only dump (with -s) and a data-only dump (with -a) - only the schema dump is likely to require any editing.
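For instance, the schema/data split would look something like this, reusing the names from the question (the production hostname is illustrative):
pg_dump -s -h prod_host -U nvdladmin nvdlstats > schema.sql
pg_dump -a -h prod_host -U nvdladmin nvdlstats > data.sql
schema.sql is the part you may need to hand-edit before feeding both files to psql on the 8.4 side.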
If the 9.0 database contains any bytea columns, then bigger problems await.
These columns will be exported by pg_dump using the "hex" representation and appear in your dump file like:
SELECT pg_catalog.lowrite(0, '\x0a2')
Any version of the postgres backend below 9.0 can't grok the hex representation of bytea, and I can't find an option to tell pg_dump on the 9.0 side to not use it. Setting the default "bytea_output" setting to ESCAPE for either the database or the whole server is seemingly ignored by pg_dump.
I suppose it would be possible to post-process the dump file and actually change every hex-encoded bytea value to an escaped one, but the risk of untraceably corrupting the kind of things normally stored in a bytea (images, PDFs etc) does not excite me.
I solved this by upgrading PostgreSQL from 8.x to 9.2.4. If you're using brew on Mac OS X, use -
brew upgrade postgresql
Once this is done, just make sure your new Postgres installation is at the top of your PATH. It'll look something like this (depending on the version and installation path) -
export PATH=/usr/local/Cellar/postgresql/9.2.4/bin:$PATH
I had the same issue. I used pg_dump and psql to export/import the DB.
1. Set PGPASSWORD
export PGPASSWORD='h0ld1tn0w';
2. Export the DB with pg_dump
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Then set PGPASSWORD again for your other host. If the host or the password is the same, this is not required.
4. Import the DB on your other host
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, find and replace it with your local username in the db.out file. And make sure only the username is replaced, not data.
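If you do need that find-and-replace, restricting it to the ownership statements is safer than a global substitution (usernames are illustrative; GNU sed syntax shown):
sed -i 's/OWNER TO remoteuser/OWNER TO localuser/g' /opt/db.out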