There's one column that contains commas. When I output my query to CSV, these commas break the CSV format. What I've been doing to avoid this is a simple
replace(A."Sales Rep",',','')
Is there a better way of doing this so that I can actually get the commas in the final output without breaking the csv file?
Thanks!
You can use the COPY command to get PostgreSQL to build the CSV for you:
COPY -- copy data between a file and a table
Something like one of these:
copy your_table to 'filename' csv
copy your_table to 'filename' csv force quote *
copy your_table to stdout csv force quote *
copy your_table to stdout csv force quote * header
...
You have to be a superuser to COPY to a filename, though. If you're inside psql, you can use the \copy command:
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system.
The syntax is pretty much the same:
\copy your_table to 'filename.csv' csv force quote * header
...
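For example, from a regular shell (a sketch, assuming a hypothetical database named mydb; your_table is from the examples above):
psql -d mydb -c "\copy your_table to 'filename.csv' csv force quote * header"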
Quote the fields with "
a,this has a , in it,b
would become
a,"this has a, in it",b
and if the fields have BOTH a , and a ", double the quotes:
a,this has a " and , in it,b
becomes
a,"this has a "" and , in it",b
I'm new to this very interesting blog. This is my problem: I have to load a CSV file with three columns (field1, field2, and field3) into a PostgreSQL table.
In the string contained in the field1 column there are new line characters.
I use sql statements:
COPY test (regexp_replace(field1, E'[\\n\\r]+', '', 'g'),
           field2, field3)
FROM 'D:\zzz\aaa20.csv' WITH DELIMITER '|';
but it gives me an error.
How can I remove the newline characters?
If the newlines are properly escaped by quoting the value, this should not be a problem.
If your data are corrupted CSV files with unescaped newlines, you will have to do some pre-processing. If you are willing to give the database user permission to execute programs on the database server, you could use
COPY mytable FROM PROGRAM 'demangle D:\zzz\aaa20.csv' (FORMAT 'csv');
Here, demangle is a program or script that reads the file, fixes the data and outputs them to standard output. Since you are on Windows, you probably don't have access to tools like sed and awk that can be used for such purposes, and you may have to write your own.
So, this is the syntax of the COPY command:
COPY table_name [ ( column_name [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ] ( option [, ...] ) ]
You can only add an optional list of column names, not function calls (regexp_replace in your case) or other complex constructions.
You can create a temporary table, import the data into it, and then copy the data into your table using an ordinary INSERT ... SELECT query, as sketched below.
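A minimal sketch of that approach, assuming the file can be loaded as-is and reusing the names from the question (test_staging is hypothetical):
-- Stage the raw rows first.
CREATE TEMP TABLE test_staging (field1 text, field2 text, field3 text);
COPY test_staging FROM 'D:\zzz\aaa20.csv' WITH DELIMITER '|';
-- Clean the data while moving it into the real table.
INSERT INTO test (field1, field2, field3)
SELECT regexp_replace(field1, E'[\\n\\r]+', '', 'g'), field2, field3
FROM test_staging;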
I am using pgAdminIII and I want to copy data from a .txt file to my database. Let's say that we have a file called Address.txt and it has these values:
1,1970 Napa Ct.,Bothell,98011
2,9833 Mt. Dias Blv.,Bothell,98011
3,"7484, Roundtree Drive",Bothell,98011
4,9539 Glenside Dr,Bothell,98011
If I type
COPY myTable FROM 'C:\Address.txt' (DELIMITER(','));
I will get
ERROR: extra data after last expected column
CONTEXT: COPY address, line 3: "7484, Roundtree Drive",Bothell,98011
What do I need to add to the COPY command so that the , inside the " " is treated as part of the value rather than as a column separator?
You need to specify the quote character, like so:
COPY mytable FROM 'C:\Address.txt' DELIMITER ',' QUOTE '"' csv;
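Note that csv mode already defaults to ',' as the delimiter and '"' as the quote character, so this shorter form should behave the same:
COPY mytable FROM 'C:\Address.txt' WITH (FORMAT csv);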
For example, say I've got the output of:
SELECT
$text$col1, col2, col3
0,my-value,text
7,value2,string
0,also a value,fort
$text$;
Would it be possible to populate a table directly from it with the COPY command?
Sort of. You would have to strip the first two and last lines of your example in order to use the data with COPY. You could do this by using the PROGRAM keyword:
COPY table_name FROM PROGRAM 'sed -e ''1,2d;$d'' inputfile' (FORMAT csv);
This is direct in that you are doing everything from the COPY command, and indirect in that you are setting up an outside program to filter your input.
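Keep in mind that COPY ... FROM PROGRAM requires superuser rights on the server. If that's not an option, a client-side sketch with the same sed filter (assuming psql and the table_name from above):
sed -e '1,2d;$d' inputfile | psql -c "COPY table_name FROM STDIN (FORMAT csv)"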
We are exporting data from Postgres 9.3 into a text file for ingestion by Spark.
We would like to use the ASCII 31 field separator character as a delimiter instead of \t so that we don't have to worry about escaping issues.
We can do so in a shell script like this:
#!/bin/bash
DELIMITER=$'\x1F'
echo "copy ( select * from table limit 1) to STDOUT WITH DELIMITER '${DELIMITER}'" | (psql ...) > /tmp/ascii31
But we're wondering, is it possible to specify a non-printable glyph as a delimiter in "pure" postgres?
edit: we attempted to use the postgres escaping convention per http://www.postgresql.org/docs/9.3/static/sql-syntax-lexical.html
warehouse=> copy ( select * from table limit 1) to STDOUT WITH DELIMITER '\x1f';
and received
ERROR: COPY delimiter must be a single one-byte character
Try prepending E before the sequence you're trying to use as a delimiter. For example, E'\x1f' instead of '\x1f'. Without the E, PostgreSQL will read '\x1f' as four separate characters and not a hexadecimal escape sequence, hence the error message.
See the PostgreSQL manual on "String Constants with C-style Escapes" for more information.
From my testing, both of the following work:
echo "copy (select 1 a, 2 b) to stdout with delimiter u&'\\001f'"| psql;
echo "copy (select 1 a, 2 b) to stdout with delimiter e'\\x1f'"| psql;
I've extracted a small file from Actian Matrix (the former ParAccel database, from which Amazon Redshift also derives; both are Postgres offshoots), using this notation for ASCII character code 30, "Record Separator".
unload ('SELECT btrim(class_cd) as class_cd, btrim(class_desc) as class_desc
FROM transport.stg.us_fmcsa_carrier_classes')
to '/tmp/us_fmcsa_carrier_classes_mk4.txt'
delimiter as '\036' leader;
This is an example of how this file looks in VI:
C^^Private Property
D^^Private Passenger Business
E^^Private Passenger Non-Business
I then moved this file over to a machine hosting PostgreSQL 9.5 via sftp, and used the following copy command, which seems to work well:
copy fmcsa.carrier_classes
from '/tmp/us_fmcsa_carrier_classes_mk4.txt'
delimiter u&'\001E';
Each derivative of Postgres, and Postgres itself, seems to prefer a slightly different notation. Too bad we don't have a single standard!
I have an input CSV file containing something like:
SD-32MM-1001,"100.00",4/11/2012
SD-32MM-1001,"1,000.00",4/12/2012
I was trying to import that with COPY into a PostgreSQL table (varchar, float8, date) and ran into an error:
# copy foo from '/tmp/foo.csv' with header csv;
ERROR: invalid input syntax for type double precision: "1,000.00"
Time: 1.251 ms
Aside from preprocessing the input file, is there some setting in PG that will have it read a file like the one above and convert to numeric form in COPY? Something other than COPY?
If preprocessing is required, can it be set as part of the COPY command? (Not the psql \copy)?
Thanks a lot.
One preprocessing option is to first COPY into a temporary table as text. From there, insert into the definitive table using the to_number function:
select to_number('1,000.00', 'FM000,009.99')::double precision;
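A minimal sketch of that flow, with hypothetical staging and target table names (foo_staging and foo):
-- Load everything as plain text first.
CREATE TEMP TABLE foo_staging (c1 text, c2 text, c3 text);
COPY foo_staging FROM '/tmp/foo.csv' WITH (FORMAT csv, HEADER);
-- Convert while inserting into the real table.
INSERT INTO foo
SELECT c1,
       to_number(c2, 'FM999,999.99')::double precision,
       to_date(c3, 'MM/DD/YYYY')
FROM foo_staging;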
It's an odd CSV file that surrounds numeric values with double quotes, but leaves values like SD-32MM-1001 unquoted. In fact, I'm not sure I've ever seen a CSV file like that.
If I were in your shoes, I'd try copy against a file formatted like this.
"SD-32MM-1001",100.00,4/11/2012
"SD-32MM-1001",1000.00,4/12/2012
Note that the numbers have no commas. I was able to import that file successfully with:
copy test from '/fullpath/test.dat' with csv
I think your best bet is to get better formatted output from your source.