I would like to import a file into my PostgreSQL system (specifically Redshift). I have found an argument for COPY that allows importing a gzip file, but the provider for the data I am trying to include in my system only produces the data in a .zip. Are there any built-in Postgres commands for opening a .zip?
From within Postgres:
COPY table_name FROM PROGRAM 'unzip -p input.csv.zip' DELIMITER ',';
From the man page for unzip -p:
-p  extract files to pipe (stdout). Nothing but the file data is sent to stdout, and the files are always extracted in binary format, just as they are stored (no conversions).
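If the zipped file is a CSV with a header row, the same approach works with explicit format options; a sketch, assuming the table is named table_name:
COPY table_name FROM PROGRAM 'unzip -p input.csv.zip' WITH (FORMAT csv, HEADER);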
Can you just do something like
unzip -p myfile.zip | gzip > myfile.gz
Easy enough to automate if you have enough files.
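If there are many archives, a small loop takes care of it; a sketch assuming each .zip in the current directory holds a single data file:
for f in *.zip; do
    unzip -p "$f" | gzip > "${f%.zip}.gz"
done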
This might only work when loading Redshift from S3, but you can actually just include a gzip flag when copying data to Redshift tables, as described in the Redshift COPY documentation.
This is the format that works for me if my s3 bucket contains a gzipped .csv.
copy <table> from 's3://mybucket/<foldername>' '<aws-auth-args>' delimiter ',' gzip;
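For example, with an IAM role attached to the cluster (the table name, S3 path and role ARN below are just placeholders):
copy mytable from 's3://mybucket/data/part' iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole' delimiter ',' gzip;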
unzip -p /path/to/.zip | psql -U user
The 'user' must have superuser rights, or else you will get a
ERROR: must be superuser to COPY to or from a file
To learn more about this, see
https://www.postgresql.org/docs/8.0/static/backup.html
Basically, this approach is used for handling large database dumps.
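If superuser rights are not an option, one workaround is to keep all file handling on the client and stream the data in, so the server never opens a file itself. A sketch, assuming the archive contains a single CSV destined for a table named my_table (user, database and path are placeholders):
unzip -p /path/to/file.zip | psql -U user -d mydb -c "COPY my_table FROM STDIN WITH (FORMAT csv)"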
I recently tried copying my database contents to a csv file with the following command inside my containerized Postgres database:
\copy ${TABLE} TO ${FILE} DELIMITER ',' CSV HEADER;
I got a response indicating the file was successfully copied; however, I can't find where it was copied to. When I try specifying a different path to output the file, I get the response directory/file.csv: No such file or directory.
Does anyone know where containerized databases output files and how I can direct them to accessible locations?
I am on macOS, and this is the relevant part of the docker-compose file with which the database was initiated:
db:
  image: kartoza/postgis:12.0
  volumes:
    - postgis:/var/lib/postgresql
Docker containers store their data internally, in what Docker calls volumes. You can read more about that in Use volumes.
Regarding your particular issue, you've got some options:
Copy to a temporary file and pull it from the container:
\copy ${TABLE} TO /tmp/file.csv DELIMITER ',' CSV HEADER;
Then run docker ps, find your container ID and run:
docker cp container_id:/tmp/file.csv file.csv
And you will have file.csv with the data in your current folder.
Another, simpler way is to export to stdout, if the output is going to be short:
\copy ${TABLE} TO STDOUT DELIMITER ',' CSV HEADER;
This will dump all the data through the terminal. Only use it if there are few enough rows that the output doesn't scroll past your terminal's scrollback buffer.
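If the container is already running, roughly the same export can also be done in one shot from the host, without an interactive session. A sketch; the container name, user, database and table are placeholders:
docker exec my_postgres_container psql -U postgres -d mydb -c "\copy mytable TO STDOUT DELIMITER ',' CSV HEADER" > file.csv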
Third option, because two are never enough: you could temporarily publish port 5432 and connect from your local machine using psql; running the copy command will then dump to your local machine. (Or use a third-party tool like pgAdmin or DataGrip to dump the information.)
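A quick sketch of that third option: temporarily add a port mapping to the db service, then run psql and \copy from the host. The user and database name here are assumptions (adjust to whatever your image uses), and the output file lands on your local machine:
ports:
  - "5432:5432"
psql -h localhost -p 5432 -U postgres -d gis -c "\copy mytable TO 'file.csv' DELIMITER ',' CSV HEADER"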
I want to export a PostgreSQL table to a csv file.
I have tried two ways; however, both were unsuccessful for different reasons.
In the first case, you can see what I run and what I get below:
COPY demand.das_april18_pathprocess TO '/home/katerina/das_april18_pathprocess.csv' DELIMITER ',' CSV HEADER;
No such file or directory
SQL state: 58P01
I need to mention that in the location /home/katerina/ I have created an empty file named das_april18_pathprocess.csv, for which I modified the Permission settings to allow Read and Write.
In my second try, the query is executed without any errors but I cannot see the csv file. The command that I run is the following:
COPY demand.das_april18_pathprocess TO '/tmp/das_april18_pathprocess.csv' DELIMITER ',' CSV HEADER;
In the /tmp directory there is no csv file.
Any advice on how to export the table to a csv file, in whatever way works, is really appreciated!
Ah, you've run into a common problem -- you're creating a file on the server's filesystem, not your local filesystem. That can be a pain.
You can, however, COPY TO STDOUT, then redirect the result.
If you're using Linux or another Unix, the easiest way to do this is from the command line:
$ psql <connection options> -c "COPY demand.das_april18_pathprocess TO STDOUT (FORMAT CSV)" > das_april18_pathprocess.csv
COPY (SELECT * FROM demand.das_april18_pathprocess) TO '/home/katerina/das_april18_pathprocess.csv' WITH CSV HEADER;
In a tar dump
$ tar -tf dvdrental.tar
toc.dat
2163.dat
...
2189.dat
restore.sql
After extraction
$ file *
2163.dat: ASCII text
...
2189.dat: ASCII text
restore.sql: ASCII text, with very long lines
toc.dat: PostgreSQL custom database dump - v1.12-0
What is the purpose of restore.sql?
toc.dat is binary, but I can open it and it looks like a SQL script too. How do the purposes of restore.sql and toc.dat differ?
The following quote from the documentation doesn't answer my question:
with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects
in a machine-readable format that pg_restore can read.
Since a tar dump contains restore.sql besides the .dat files, what is the difference between restore.sql and toc.dat in a tar dump, compared to a plain dump (which has only one SQL script file)?
Thanks.
restore.sql is not used by pg_restore. See this comment from src/bin/pg_dump/pg_backup_tar.c:
* The tar format also includes a 'restore.sql' script which is there for
* the benefit of humans. This script is never used by pg_restore.
toc.dat is the table of contents. It contains commands to create and drop each object in the dump and is used by pg_restore to create the objects. It also contains the COPY statements that load the data from the *.dat files.
You can extract the table of contents in human-readable form with pg_restore -l, and you can edit the result to restore only specific objects with pg_restore -L.
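For example, a rough sketch of a selective restore with those two switches (the target database name is a placeholder):
$ pg_restore -l dvdrental.tar > toc.list
$ # edit toc.list, keeping only the entries you want restored
$ pg_restore -L toc.list -d mydb dvdrental.tar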
The <number>.dat files contain the table data; they are used by the COPY statements in toc.dat and restore.sql.
This looks like a script to restore the data to PostgreSQL; the script was created using pg_dump.
If you'd like to restore, please have a look at pg_restore.
The .dat files contain the data that is restored by the \copy commands in the SQL script.
The toc.dat file is not referenced inside the SQL file. If you peek inside it using cat toc.dat | strings, you'll find that it contains data very similar to the SQL file, but with a few more internal ids.
I think it might have been intended to work without the SQL at some point, but that's not how it works right now. See the code that generates the TOC here.
We can use
copy (select * from mytbl) to 'D:/products.csv' with csv header
to export the data in mytbl to local disk D.
So, is it possible to use the same method to upload the file directly to an FTP server?
I tried this:
copy (select * from mytbl) to 'ftp://usrname:mypasswrd#ftp.drivehq.com/masters/3/product/products.csv' with csv header
but I got this error:
ERROR: relative path not allowed for COPY to file
SQL state: 42602
I am using PostgreSQL 9.2.
PostgreSQL does not support any source/destination for COPY other than a file or stdin/stdout.
What you can do is COPY to stdout and pipe that to a program that writes the data to the ftp dir. psql's \copy is useful for this:
psql -c "\copy mytable to stdout with (format csv, header)" | ncftpput -c my.ftp.host /path/on/host
You can use any tool that accepts the input data on a pipe to write to the remote ftp file; ncftpput is just one option.
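For instance, curl can also read the upload from a pipe; a sketch with placeholder host, credentials and path:
psql -c "\copy mytable to stdout with (format csv, header)" | curl -T - ftp://my.ftp.host/path/on/host/products.csv --user usrname:mypasswrd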
A future PostgreSQL version may add support for invoking COPY with a pipe, e.g. COPY ... TO '|/some/command', but there are serious security concerns with running programs under the PostgreSQL user that would make this a superuser-only operation and of questionable safety even then. It's much safer to run the program client-side, and psql is ideal for that.
I'm trying to import a (rather large) .txt file into a table geonames in PostgreSQL 9.1. I'm in the /~ directory of my server, with a file named US.txt placed in that directory. I set the search_path variable to geochat, the name of the database I'm working in. I then enter this query:
COPY geonames
FROM 'US.txt',
DELIMITER E'\t',
NULL 'NULL');
I then receive this error:
ERROR: could not open file "US.txt" for reading: No such file or directory.
Do I have to type in \i US.txt or something similar first, or should it just get it from the present working directory?
Maybe a bit late, but hopefully useful:
Use \copy instead
https://wiki.postgresql.org/wiki/COPY
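A minimal sketch of that, assuming US.txt is tab-delimited and sits in the directory you start psql from:
\copy geonames FROM 'US.txt'
Unlike COPY, \copy resolves the path on the client side, so the present working directory works.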
A couple of misconceptions:
1.
I'm in the /~ directory of my server
There is no directory /~. It's either / (root directory) or ~ (home directory of current user). It's also irrelevant to the problem.
2.
I set the search_path variable to geochat, the name of the database I'm working in
The search_path has nothing to do with the name of the database. It's for schemas inside the current database. You probably need to reset this.
3.
You are required to use the absolute path for your file. As documented in the manual here:
filename
The absolute path name of the input or output file.
4.
DELIMITER: just noise.
The default is a tab character in text format
5.
NULL: It's rather uncommon to use the actual string 'NULL' for a NULL value. Are you sure?
The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format.
My guess (after resetting search_path -- or schema-qualifying the table name):
COPY geonames FROM '/path/to/file/US.txt';
The path is interpreted on the PostgreSQL server, not by the psql client.
Assuming you are running PostgreSQL 9.4, you can put US.txt in the directory /var/lib/postgresql/9.4/main/.
Another option is to pipe it in from stdin:
cat US.txt | psql -c "copy geonames from STDIN WITH (FORMAT csv);"
If you're running your COPY command from a script, you can have a step in the script that creates the COPY command with the correct absolute path.
MYPWD=$(pwd)
echo "COPY geonames FROM '$MYPWD/US.txt' DELIMITER E'\t';"
MYPWD=
You can then redirect this output to a file and execute it:
./step_to_create_COPY_with_abs_path.sh >COPY_abs_path.sql
psql -f COPY_abs_path.sql -d your_db_name