I'm very new to raster2pgsql so please bear with me. I'm trying to load a 60 MB .tif (from the High Resolution Settlement Layer project) into my PostGIS-enabled database with the following command:
raster2pgsql -s 5235 -C -F [path to the .tif] public.hrsl_lka | psql -h localhost -U postgres -p 5432 -d project
However, I get the following error:
ERROR: insert_records: Could not allocate memory for INSERT statement
ERROR: process_rasters: Could not convert raster tiles into INSERT or COPY statements
ERROR: Unable to process rasters
However, loading smaller .tifs of around 3 MB from other sources into the same database works fine.
Is there a size limit with raster2pgsql? I'm on PostgreSQL 12.4.
With many thanks,
Gregor
Have you tried setting the tile size with -t?
According to the documentation:
-t: Tile size - expressed as width x height. If not provided, a default is worked out automatically in the range of 32-100 so it best matches the raster dimensions. It is worth remembering that when importing multiple files, tiles will be computed for the first raster and then applied to others.
Alternatively, you can let the tool compute it for you by setting -t to auto, e.g.:
raster2pgsql -s 5235 -t auto -C -F file.tif public.hrsl_lka | psql -d db
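If auto alone doesn't solve it, an explicit, modest tile size keeps each generated INSERT statement small; the 256x256 below is just an illustration, not a tuned value:
raster2pgsql -s 5235 -t 256x256 -C -F [path to the .tif] public.hrsl_lka | psql -h localhost -U postgres -p 5432 -d project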
Related answer: Are there limitations using a PostGIS out-db raster?
I have a dump file (around 5 GB) which was taken via this command:
pg_dump -U postgres -p 5440 MYPRODDB > MYPRODDB_2022.dmp
The database consists of multiple schemas (let's say schemas A, B, C and D), but I need to restore only one schema (schema A).
How can I achieve that? The command below didn't work and gave an error:
pg_restore -U postgres -d MYPRODDB -n A -p 5440 < MYPRODDB_2022.dmp
pg_restore: error: input file appears to be a text format dump. Please use psql.
You cannot do that with a plain format dump. That's one of the reasons why you always use a different format unless you need an SQL script.
If you want to stick with a plain text dump:
pg_dump -U postgres -p 5440 -n A MYPRODDB > MYPRODDB_2022.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022.dmp
Though restoring back over the same database as above will throw errors unless you use --clean, or its short form -c, to create commands that drop existing objects before recreating them:
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)
This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.
Probably also a good idea to throw in --if-exists:
--if-exists
Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.
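Putting it together for the setup in the question, a sketch of the full round trip might look like this (the output file name is illustrative):
pg_dump -U postgres -p 5440 -n A --clean --if-exists MYPRODDB > MYPRODDB_2022_schemaA.dmp
psql -U postgres -d MYPRODDB -p 5440 -f MYPRODDB_2022_schemaA.dmp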
I have been trying to import my SRTM raster data into PostGIS using the raster2pgsql command, but it has generated the following error (I tried multiple times). Is there anything missing? I'd appreciate any help.
Error message:
ERROR: relation "test" already exists
ERROR: current transaction is aborted, commands ignored until end of transaction block
Welcome to SO.
The error message says you're trying to create a relation that already exists. Either drop it in your database ..
DROP TABLE test;
.. or tell raster2pgsql to do it for you by adding the parameter -d to your command.
-d Drops the table, then recreates it and populates it with the current raster data set
Something like
raster2pgsql -s 4326 -d -I -C -F -t 10x10 file.hgt public.test | psql ...
An alternative is to use -a to append the data to an existing table:
-a Appends raster into current table, must be exactly the same table schema.
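A sketch of the append variant, reusing the file and table names from above:
raster2pgsql -s 4326 -a file.hgt public.test | psql ...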
OK, I have tried several things but nothing seems to work for me. I have set the max upload file size to 750M and increased the execution time in php.ini. I have set autocommit to 0. And I have used this:
LOAD DATA INFILE 'YOUR_FILE_COMPLETE_PATH' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n'
It did not work. I get:
0 rows inserted. (Query took 0.0002 seconds.)
When I use the import form in phpMyAdmin it times out (even though the limit is set to 5000 in php.ini) and only adds about 3,000 rows at a time (not good for a file with 273,000 rows).
Any suggestions?
You will have problems importing through phpMyAdmin due to memory and bandwidth limits; even if you increase the PHP configuration values and get the data imported, you risk timeouts. I can suggest three options:
1) Use BigDump, a staggered import script built for exactly this type of job.
2) Convert your data to .sql and import it using the mysql client:
mysql -u username -p database_name < file.sql
3) You can also use mysqlimport to import the CSV directly:
mysqlimport --ignore-lines=1 \
--fields-terminated-by=, \
--local -u root \
-p Database \
TableName.csv
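On the original LOAD DATA attempt: "0 rows inserted" often means the LINES TERMINATED BY clause didn't match the file's actual line endings. If the CSV was produced on a Unix-like system, lines end with \n rather than \r\n; a sketch under that assumption (the path is a placeholder), also skipping a header row:
LOAD DATA LOCAL INFILE '/path/to/your_file.csv' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;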
Test data
1 TIF file (159KB)
My goal is threefold:
Load raster into PostGIS using raster2pgsql and visualize in QGIS
In my IPython Notebook connect to PostGIS and load raster into NumPy array
In my IPython Notebook use Pandas to load a time series of one pixel from rasters with different time steps stored in PostGIS
Process so far
I've managed to get one raster image into PostGIS Raster using the raster2pgsql command and visualize it in QGIS using the DB Manager:
raster2pgsql -s 4326 -d -I -C -M -R -l 4 D:\Downloads\raster\modis.tif -F -t 100x100 public.ndvi | psql -U postgres -d rastertest -h localhost -p 5432
But how to access/query this raster from within IPython Notebook?
I found this presentation about SQLAlchemy and GeoAlchemy2, which mentions that PostGIS Raster is supported as well. It seems very interesting, but from the documentation I don't see how I can apply it to raster data.
I think I can make a connection to my PostGIS database using the following code, where postgres=user, password=admin and database=rastertest:
from sqlalchemy import create_engine
engine = create_engine('postgresql://postgres:admin@localhost/rastertest', echo=True)
But then what?
Any advice is very much appreciated.
You should use the psycopg2 module to connect to a Postgres database from Python. Some code samples:
import psycopg2

def connect_db():
    try:
        conn = psycopg2.connect("dbname='rastertest' user='admin' host='localhost' password='password'")
        # For a beginner I would enable autocommit, so updates take effect
        # without being in a transaction and requiring an explicit commit.
        conn.set_session(autocommit=True)
        return conn
    except psycopg2.Error:
        print("I am unable to connect to the database")
        return None

def get_raster(raster_id, conn):
    # Parameterized query instead of string formatting (avoids SQL injection)
    query = "SELECT ST_AsText(geom) FROM raster_table WHERE id = %s"
    cur = conn.cursor()
    cur.execute(query, (raster_id,))
    res = cur.fetchall()
    return res[0][0]
Maybe the text representation of the raster is something you can use.
Alternatively, take a look here http://postgis.net/docs/RT_reference.html to see if any of the functions return what you want for your numpy array and replace the query in get_raster accordingly. (Possibly this http://postgis.net/docs/RT_ST_DumpValues.html)
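For getting a tile into NumPy specifically, ST_DumpValues is probably the most direct route: it returns one band of a tile as a 2-D double precision array, which psycopg2 hands back as nested Python lists. A minimal sketch, assuming the public.ndvi table created above with raster2pgsql's default rid and rast columns (note that the -R flag registers the raster out-of-db, so the PostgreSQL server must still be able to read the original file path):
import numpy as np
import psycopg2

conn = psycopg2.connect("dbname='rastertest' user='postgres' host='localhost' password='admin'")

def tile_to_array(rid, band=1):
    # Fetch one band of one tile; nodata cells come back as SQL NULL / Python None
    cur = conn.cursor()
    cur.execute("SELECT ST_DumpValues(rast, %s) FROM public.ndvi WHERE rid = %s",
                (band, rid))
    values = cur.fetchone()[0]  # nested lists of floats (and None for nodata)
    return np.array([[np.nan if v is None else v for v in row] for row in values])

arr = tile_to_array(1)
print(arr.shape, np.nanmean(arr))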
I used this command to backup 200GB database (postgres 9.1, win7 x64):
pg_dump -Z 1 db_name > backup
It created a 16 GB file, which I think is fine because previous backups that worked (and were packed by external tools) had a similar size. Now, when I try to restore into PG 9.2 using pg_restore, I get the error:
input file does not appear to be a valid archive
With pg_restore -Ft:
[tar archiver] corrupt tar header found in ▼ (expected 13500752, computed 78268) file position 512
Gzip also shows it's corrupted. When I open the backup file in Total Commander, the inner file is only 1.8 GB.
While looking for a solution, I read that the dump should probably have been done with the -Fc parameter.
What format is the file in right now? Is it only tar or gzip (WinRAR shows gzip)?
Is there any way to restore this properly, or is it corrupted somehow (there was no error when dumping)? Could it be due to file size limitations of tar or gzip?
What you have as output in "backup" is just gzipped plain SQL.
You can check it by running:
gzip -l backup
Unfortunately, pg_restore does not provide the ability to restore plain SQL, so you just need to decompress the file and use the psql -f <FILE> command:
zcat backup > backup.sql
psql -f backup.sql
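Since the question mentions Windows 7, zcat may not be available there; assuming a Windows build of gzip is installed, the same two steps look like:
gzip -dc backup > backup.sql
psql -f backup.sql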
It is not possible to make the dump with pg_dump -Fc from Postgres 9.1 as proposed by "Frank Heikens", because dump formats are not compatible between major versions, like 9.0 -> 9.1 -> 9.2, and pg_restore will give you an error on 9.2.
Mostly this error means that your restore action used an invalid format.
From the manual of pg_dump (pg_dump --help):
-F, --format=c|d|t|p    output file format (custom, directory, tar, plain text (default))
This means that if you create a dump with pg_dump without the --format / -F option, your dump will be created in plain text format.
NOTE: The plain text format cannot be restored with the pg_restore tool. Use psql < dump.sql instead.
Examples:
# plain text export/import
pg_dump -Fp -d postgres://<db_user>:<db_password>@<db_host>:<db_port>/<db_name> > dump.sql
psql -d postgres://<target_db_user>:<target_db_password>@<target_db_host>:<target_db_port>/<target_db_name> -f dump.sql
# custom format
pg_dump -Fc -d postgres://<db_user>:<db_password>@<db_host>:<db_port>/<db_name> > dump.sql.custom
pg_restore -Fc -d postgres://<target_db_user>:<target_db_password>@<target_db_host>:<target_db_port>/<target_db_name> dump.sql.custom
# tar format
pg_dump -Ft -d postgres://<db_user>:<db_password>@<db_host>:<db_port>/<db_name> > dump.sql.tar
pg_restore -Ft -d postgres://<target_db_user>:<target_db_password>@<target_db_host>:<target_db_port>/<target_db_name> dump.sql.tar
The error from the subject can also occur when the format specified on restore does not match the backup: for example, the dump was created in custom format but tar was specified for pg_restore.
Your dump is plain SQL; it's not the tar format you're trying to use with pg_restore. Use --format=custom or -Fc when you want a compressed format, and use this setting in pg_restore as well. Check the manual.
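A minimal sketch of that round trip (database and file names are placeholders):
pg_dump -Fc db_name > db_name.dump
pg_restore -d target_db db_name.dump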
This is an old thread though I had the exact same issue and managed to fix the somewhat corrupted dump with fixgz:
Short answer: run fixgz (http://www.gzip.org/fixgz.zip) on the compressed dump:
fixgz.exe bad.gz fixed.gz
Long answer:
So if you used pg_dump with --compress or -Z without specifying the custom format option (-Fc), what you actually get is a compressed file in ASCII mode instead of BINARY mode.
Quoting from http://www.gzip.org/#faq1
If you have transferred a file in ASCII mode and you no longer have
access to the original, you can try the program fixgz to remove the
extra CR (carriage return) bytes inserted by the transfer. A Windows
9x/NT/2000/ME/XP binary is here. But there is absolutely no guarantee
that this will actually fix your file. Conclusion: never transfer
binary files in ASCII mode.
I got this problem when restoring using pgAdmin III. The problem doesn't occur with pgAdmin 4.