I am trying to import a PostgreSQL data file into Amazon Redshift from the command line. I was able to import the schema file, but I cannot import the data file; it seems that data insertion in Amazon Redshift works a bit differently.
I want to know all the ways to import a data file into Redshift from the command line.
UPDATE
My data file looks like this:
COPY actor (actor_id, first_name, last_name, last_update) FROM stdin;
0 Chad Murazik 2014-12-03 10:54:44
1 Nelle Sauer 2014-12-03 10:54:44
2 Damien Ritchie 2014-12-03 10:54:44
3 Casimer Wiza 2014-12-03 10:54:44
4 Dana Crist 2014-12-03 10:54:44
....
I typed the following command from the CLI:
PGPASSWORD=**** psql -h testredshift.cudmvpnjzyyy.us-west-2.redshift.amazonaws.com -p 5439 -U abcd -d pagila -f /home/jamy/Desktop/pag_data.sql
And then got an error like:
ERROR: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed)
Dump your table to CSV format:
\copy <your_table_name> TO 'dump_filename.csv' csv header NULL AS '\N'
Upload it to S3, and load it from S3 into Redshift using:
COPY schema.table FROM 's3://...' WITH CREDENTIALS '...' CSV;
Source: Importing Data into Redshift from MySQL and Postgres
Trying to use pg_dump directly is a common mistake: you can't. Unload your data to S3 and use the COPY command to load it into Redshift.
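Putting it together, a minimal sketch of the full round trip (the bucket name and credentials are placeholders, and the first psql call is assumed to run against the source PostgreSQL database):
# 1. dump the table from PostgreSQL on the client side
psql -h <postgres_host> -d pagila -c "\copy actor TO 'actor.csv' CSV HEADER NULL AS '\N'"
# 2. upload the file to S3
aws s3 cp actor.csv s3://my-bucket/actor.csv
# 3. load it into Redshift
psql -h testredshift.cudmvpnjzyyy.us-west-2.redshift.amazonaws.com -p 5439 -U abcd -d pagila -c "COPY actor FROM 's3://my-bucket/actor.csv' WITH CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>' CSV IGNOREHEADER 1 NULL AS '\N';"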
Related
I have a requirement to take a backup of a table into S3 in CSV format. I already tried taking a pg_dump of the tables, but those come out in .sql format. I also tried gzipping the output, but the files still contain a SQL dump of the data. I have tried this command:
pg_dump -v -h <hostname> -d <dbname> -t <tablename> -U <username>| gzip | aws s3 cp - s3://xxx/xxx/tablename.csv.gz
Is it possible to export the tables exactly in CSV format, so that it's easy to build QuickSight reports from them? We can't use pg_dump output directly to create the QuickSight reports.
Any help would be much appreciated.
Note: the table sizes are between 50 and 100 GB.
You can use the COPY command to take a data dump in CSV.
Server-side export:
copy table_name to '/absolute/path/to/filename.csv' csv header;
Client-side export:
\copy table_name to 'relative/path/to/filename.csv' csv header;
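Since the tables are 50-100 GB, you may not want to stage the CSV on local disk at all. A sketch that streams it straight to S3, reusing your original pipeline (host, database, table, and bucket names are placeholders):
psql -h <hostname> -d <dbname> -c "\copy <tablename> TO STDOUT CSV HEADER" | gzip | aws s3 cp - s3://xxx/xxx/tablename.csv.gz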
I'm new to PostgreSQL and the psql CLI. My bandwidth is extremely limited, so it takes hours to download each table from an AWS instance; the tables are 1-5 GB each. The command I currently use, after logging into the DB with psql:
\copy (SELECT * FROM table) TO table.csv CSV DELIMITER ','
Is it possible to query a table, similar to the above, but zip the CSV file ON the Amazon PostgreSQL instance before downloading and saving it locally, reducing the 1-5 GB downloads to under 1 GB and significantly cutting the download times?
Something like:
\copy (SELECT * FROM table) TO csv.zip CSV DELIMITER ',' TO table.csv.zip
I came across this gist, but the commands listed appear to do a complete dump of all tables / the entire DB. I would like the ability to do the same for single tables and subset queries.
EDIT: Solution = \copy (SELECT * FROM table) TO PROGRAM 'gzip > /Users/username/folder/folder/my_table.gz' DELIMITER ',' after logging into psql
Using psql and STDOUT: this command returns the output to the client and compresses it on the fly:
psql yourdb -c "\COPY (SELECT * FROM table) TO STDOUT;" | gzip > output.gz
Or directly at the database server (also into a compressed file), using a client of your choice:
COPY (SELECT * FROM table) TO PROGRAM 'gzip > /var/lib/postgresql/my_table.gz' DELIMITER ',';
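The same pattern also covers the single tables and subset queries that the gist you found doesn't; a sketch (the WHERE clause and column name are made up):
psql yourdb -c "\COPY (SELECT * FROM table WHERE created_at > '2020-01-01') TO STDOUT CSV;" | gzip > subset.csv.gz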
I have DB2 installed locally, where I have a table named INCIDENTS. I can run "db2 describe table INCIDENTS" to list the columns and their types. Is it possible to get a CREATE TABLE statement, or some script which, when run on another server, creates a table with the same schema?
If you are using Db2 for Linux/Unix/Windows, you can use the db2look command-line tool, which can extract the DDL to a text file; you can then copy that file to the other server and run it against a database there.
Example:
db2look -d <your database> -z <your schema> -t <your table> -e -o script.sql
-d = Database name
-z = Schema name
-t = Table name(s)
-e = Extract DDL statements
-o = Output file
If you are happy to have both the DDL and the data, then you can use the command line to export the contents of the table to an IXF file, which you can then copy to the target server and use with IMPORT ... CREATE INTO ... to replicate the DDL along with the data, indexes, etc., as sketched below.
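A hedged sketch of that round trip (database, schema, and file names are placeholders):
db2 connect to SRCDB
db2 "EXPORT TO incidents.ixf OF IXF SELECT * FROM MYSCHEMA.INCIDENTS"
# then copy incidents.ixf to the target server and run:
db2 connect to TGTDB
db2 "IMPORT FROM incidents.ixf OF IXF CREATE INTO MYSCHEMA.INCIDENTS"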
Use the Db2 Knowledge Center to find the details.
If you prefer to use GUI tools, IBM Data Studio also lets you extract DDL to a file, as do other tools such as DbVisualizer.
I am trying to take a backup of a table as a SQL file in Postgres with the command below, but the command is not executing properly. Can someone help?
pg_dump -U postgres-a test_1 > test_1_dump.sql
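A likely culprit is the missing space after the username: as written, pg_dump treats postgres-a as the user name. Assuming -a (data only, database test_1) was the intent, the corrected command would be:
pg_dump -U postgres -a test_1 > test_1_dump.sql
And if test_1 is actually a table rather than a database, -t plus the database name (a placeholder here) may be what's wanted:
pg_dump -U postgres -t test_1 <dbname> > test_1_dump.sql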
I have a table (tst2) in a database (tweets) in PostgreSQL, and I need to get a plain-text file out of it. I was wondering if there is any possible solution with pg_dump, something like:
pg_dump -t tst2 tweets -f plain >...
Also, if I'm going about this the wrong way, please let me know!
There are a couple of ways to dump a table into a text file.
First, you can use pg_dump, as you intended. In that case you'll get a SQL script to restore the table. Just fix your command a bit: -F (capital F) selects the output format, and the database name goes last as a positional argument:
pg_dump -t tst2 -F plain tweets > ...
Second, you can dump the contents of a table with the copy command. There is a SQL version of the command (files will be created on the server):
copy tst2 to 'tst2.txt';
copy tweets to 'tweets.txt';
And a client-side psql version (files will be created on your client computer):
\copy tst2 to 'tst2.txt';
\copy tweets to 'tweets.txt';
pg_dump works for me, though there is some clutter before and after the table data (after all, dumps are meant to fill the table back up at recovery time).
I'm not sure what you use the > operator for; with -f plain, the dump goes to a file named plain.
Your error message would help, of course.
On the other hand, what's wrong with using psql with a .pgpass password file and setting the PGDATABASE, PGHOST, PGPORT, and PGUSER environment variables, e.g.:
export PGDATABASE=tweets
# PGHOST, PGPORT, and PGUSER as per your setup
psql -c 'select * from tst2'
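To land that output in a plain-text file, you could add psql's unaligned (-A) and tuples-only (-t) flags and redirect (the file name is arbitrary):
psql -A -t -c 'select * from tst2' > tst2.txt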