Quote strings and dates in psql query results output

Under the \pset [ option [ value ] ] section of the psql docs, I can set various options to make my query result output convenient for me.
I can, for example, approach CSV-like output with:
\pset fieldsep ','
\pset footer off
\pset format unaligned
\pset null 'NULL'
Resulting in output like:
> WITH foo_tbl(foo,bar,baz)
> AS
> (
> VALUES
> ('foo', NULL, 1),
> (NULL, 'bar', 1)
> )
> SELECT * FROM foo_tbl;
foo,bar,baz
foo,NULL,1
NULL,bar,1
This is great, but I'd like strings and dates to be quoted, like this:
foo,bar,baz
'foo',NULL,1
NULL,'bar',1
Is this not possible with psql?
p.s. I know this kind of thing can be done with SQL clients like DBeaver, but that isn't in the scope of this question.

To generate CSV output, you can use the copy command rather than trying to tweak the output of a regular SELECT statement.
copy (
WITH foo_tbl (foo,bar,baz,dt) AS
(
VALUES
('foo', NULL, 1, date '2020-01-02'),
(NULL, 'bar', 1, date '2020-03-04')
)
SELECT *
FROM foo_tbl
) to stdout
with (format csv, quote '''', header, null 'NULL', force_quote (foo, dt) );
This will generate the following output:
foo,bar,baz,dt
'foo',NULL,1,'2020-01-02'
NULL,bar,1,'2020-03-04'
I am not aware of an option that will quote only dates and strings but not numbers, so using force_quote with an explicit list of the columns to quote is the only way, and those columns are then always quoted.
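If quoting in the output layer is not available, it can be done in SQL instead: PostgreSQL's quote_nullable() wraps a non-NULL value in single quotes and renders NULL as the unquoted word NULL, which happens to match the desired output here. A sketch against the question's CTE (a rewritten query, not a psql setting):
WITH foo_tbl(foo,bar,baz) AS
(
VALUES
('foo', NULL, 1),
(NULL, 'bar', 1)
)
SELECT quote_nullable(foo) AS foo,
       quote_nullable(bar) AS bar,
       baz
FROM foo_tbl;
With the \pset settings from the question, this should print:
foo,bar,baz
'foo',NULL,1
NULL,'bar',1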
copy (...) to stdout is easier to use than its psql sibling \copy because it allows multi-line queries.
To write everything into a file, you can use the \o command in psql
postgres=> \o data.csv
postgres=> copy (...) to stdout with (...);
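Query output then goes to data.csv until a bare \o switches it back to the terminal. A minimal sketch of the whole sequence (foo_tbl standing in for the query above):
postgres=> \o data.csv
postgres=> copy (select * from foo_tbl) to stdout with (format csv, header);
postgres=> \o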

Related

Trying to create query to export data as csv

I have a Postgresql table I wish to export as CSV on demand using a query, without superuser.
I tried:
COPY myapp_currencyprice to STDOUT WITH (DELIMITER ',', FORMAT CSV, HEADER) \g /tmp/prices.csv
But I get a syntax error at "\g"
So I tried:
\copy myapp_currencyprice to '/tmp/prices.csv' with (DELIMITER ',', FORMAT CSV, HEADER)
But I also get a syntax error from "\copy"
You can do the following in psql.
SELECT 1 as one, 2 as two \g /tmp/1.csv
then in psql
\! cat /tmp/1.csv
or you can
copy (SELECT 1 as one, 2 as two) to '/tmp/1.csv' with (format csv , delimiter '|');
But you can't have both STDOUT and a filename, because the manual (https://www.postgresql.org/docs/current/sql-copy.html) gives:
COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
TO { 'filename' | PROGRAM 'command' | STDOUT }
[ [ WITH ] ( option [, ...] ) ]
the vertical line | means you must choose one alternative (source: https://www.postgresql.org/docs/14/notation.html).
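Note that copy ... to '/tmp/1.csv' writes the file on the database server and requires superuser rights (or membership in pg_write_server_files), which the question rules out. A sketch that avoids this, assuming the database is reachable as mydb: send the output to STDOUT and let the shell write the file.
psql -d mydb -c "COPY myapp_currencyprice TO STDOUT WITH (DELIMITER ',', FORMAT CSV, HEADER)" > /tmp/prices.csv
Alternatively, psql 12 and newer can produce CSV client-side: \pset format csv, then run the query with \g /tmp/prices.csv.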

How to properly insert from stdin in postgresql?

normal insert:
insert into tfreeze(id,s) values(1,'foo');
I tried the following ways, both not working:
copy tfreeze(id,s ) from stdin;
1 foo
\.
copy tfreeze(id,s ) from stdin;
1 'foo'
\.
There are only a few related questions about stdin on Stack Overflow: https://stackoverflow.com/search?q=Postgres+Insert+statements+from+stdin
--
error code:
ERROR: 22P02: invalid input syntax for type integer: "1 foo"
CONTEXT: COPY tfreeze, line 1, column id: "1 foo"
LOCATION: pg_strtoint32, numutils.c:320
I got the code from this book (https://postgrespro.ru/education/books/internals).
Code source: https://prnt.sc/eEsRZ5AK-tjQ
So far I tried:
1, foo, 1\t'foo', 1\tfoo
First, you have to use psql for that (you are already doing that).
You get that error because you use the default text format, which requires that the values are separated by tabulator characters (ASCII 9).
I recommend that you use the CSV format and separate the values with commas:
COPY tfreeze (id, s) FROM STDIN (FORMAT 'csv', FREEZE);
1,foo
\.
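Note that the FREEZE option requires the table to have been created or truncated in the current (sub)transaction; otherwise COPY fails, and the option can simply be dropped. For completeness, a minimal sketch of the default text format, where a literal tab separates the values (column types are assumed):
CREATE TABLE tfreeze (id integer, s text);
COPY tfreeze (id, s) FROM STDIN;
1	foo
\.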

Snowflake null values quoted in CSV break PostgreSQL load

I am trying to move data from Snowflake to PostgreSQL, and to do so I first unload it to S3 in CSV format. Commas can appear in the table's text fields, so I use Snowflake's FIELD_OPTIONALLY_ENCLOSED_BY unload option to quote the content of the problematic cells. However, when this is combined with null values, I can't manage to produce a CSV that is valid for PostgreSQL.
I created a simple table for you to understand the issue. Here it is :
CREATE OR REPLACE TABLE PUBLIC.TEST(
TEXT_FIELD VARCHAR,
NUMERIC_FIELD INT
);
INSERT INTO PUBLIC.TEST VALUES
('A', 1),
(NULL, 2),
('B', NULL),
(NULL, NULL),
('Hello, world', NULL)
;
COPY INTO @STAGE/test
FROM PUBLIC.TEST
FILE_FORMAT = (
COMPRESSION = NONE,
TYPE = CSV,
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
NULL_IF = ('')
)
OVERWRITE = TRUE;
From that, Snowflake will create the following CSV:
"A",1
"",2
"B",""
"",""
"Hello, world",""
But after that, it is impossible for me to copy this CSV into a PostgreSQL table as it is.
Even though the PostgreSQL documentation says, next to the NULL option:
Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format.
Not setting the NULL option in the PostgreSQL COPY results in a failed load. It also won't work without specifying the quote character using QUOTE; here it'll be QUOTE '"'.
Therefore, during the PostgreSQL load, using:
FORMAT csv, HEADER false, QUOTE '"' will give :
DataError: invalid input syntax for integer: "" CONTEXT: COPY test, line 3, column numeric_field: ""
FORMAT csv, HEADER false, NULL '""', QUOTE '"' will give :
NotSupportedError: CSV quote character must not appear in the NULL specification
FYI, to test the load from S3 I use this command in PostgreSQL:
CREATE TABLE IF NOT EXISTS PUBLIC.TEST(
TEXT_FIELD VARCHAR,
NUMERIC_FIELD INT
);
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
SELECT aws_s3.table_import_from_s3(
'PUBLIC.TEST',
'',
'(FORMAT csv, HEADER false, NULL ''""'', QUOTE ''"'')',
'bucket',
'test_0_0_0.csv',
'aws_region'
)
Thanks a lot for any ideas on how to make this work. I would love to find a solution that doesn't require modifying the CSV between Snowflake and PostgreSQL. I think the issue is more on the Snowflake side, as it doesn't really make sense to quote null values. But PostgreSQL is not helping either.
When you set the NULL_IF value to '', you are actually telling Snowflake to convert NULLs to a blank string, which then gets quoted. When you are copying out of Snowflake, the copy options are "backwards" in a sense, and NULL_IF acts more like an IFNULL.
This is the code that I'd use on the Snowflake side, which will result in an unquoted empty string in your CSV file:
FILE_FORMAT = (
COMPRESSION = NONE,
TYPE = CSV,
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
NULL_IF = ()
)
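With NULL_IF = (), NULLs come out as unquoted empty strings, which the PostgreSQL CSV default already treats as NULL, so the NULL '""' option can be dropped from the import. A sketch of the adjusted call from the question (same placeholder bucket, file, and region):
SELECT aws_s3.table_import_from_s3(
'PUBLIC.TEST',
'',
'(FORMAT csv, HEADER false, QUOTE ''"'')',
'bucket',
'test_0_0_0.csv',
'aws_region'
);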

How to quote column names on the commandline of fbexport

As the title says, how am I going to deal with a column name in FBExport that looks like a keyword?
This is what my statement looks like:
-Q "SELECT a.ID, a.USERID, a.`WHEN`, a.INOUT FROM ATTENDANT a"
then I get this error:
Engine Code : 335544569
Engine Message :
Dynamic SQL Error
SQL error code = -104
Token unknown - line 1, column 26
WHEN
When I use "WHEN", I get:
Error: Switches must begin with -
Then I tried 'WHEN':
-Q "SELECT a.ID, a.USERID, a.'WHEN', a.INOUT FROM ATTENDANT a;"
SQL Message : -104
Invalid token
Engine Code : 335544569
Engine Message :
Dynamic SQL Error
SQL error code = -104
Token unknown - line 1, column 26
'WHEN'
Error: Switches must begin with -
What are the correct escape characters?
For dialect 3 databases, Firebird allows quoting object names using double quotes ("<objectname>"). Be aware that quoting object names makes them case-sensitive, so "WHEN" is not the same as "when". If your database is dialect 1, then this is not possible, and you should first convert your database to dialect 3.
However, the problem is that this is a command-line option, meaning that
-Q "SELECT a.ID, a.USERID, a."WHEN", a.INOUT FROM ATTENDANT a"
is split by your shell into the arguments:
-Q
SELECT a.ID, a.USERID, a.
WHEN
, a.INOUT FROM ATTENDANT a
While you want:
-Q
SELECT a.ID, a.USERID, a."WHEN", a.INOUT FROM ATTENDANT a
To achieve that, you need to escape the double quote inside the second argument, so:
-Q "SELECT a.ID, a.USERID, a.\"WHEN\", a.INOUT FROM ATTENDANT a"
or - as indicated by a_horse_with_no_name in the comments - wrap the argument in single quotes:
-Q 'SELECT a.ID, a.USERID, a."WHEN", a.INOUT FROM ATTENDANT a'
This doesn't really have anything to do with Firebird or FBExport, but is a result of how your shell (e.g. bash) parses command-line arguments.
It looks like someone else is dealing with external access to the Firebird database from Safescan TimeAttendant.
It was a little bit stupid of Safescan to name a column WHEN, because this is a keyword in Firebird.
In the context of an insert statement, I had no success with a column list like:
insert into attendant (ID, USERID, DEVICEID, WHEN, INOUT, VERIFYMODE, WORKCODE) values (1092, 1, 1, '28.08.2017 08:00', 0, 4, 3);
"WHEN", \"WHEN\", \'WHEN\', ... no success
Remedy - Insert all data without column list, ex:
insert into attendant values (1034, 2, 1, '28.08.2017 08:00', 0, 4, null, null, null, null, null, 3);
Query is much easier: select * from attendant;
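As the accepted answer explains, in a dialect 3 database the column list should work when the identifier is double-quoted in the SQL itself (e.g. in isql, where no shell strips the quotes); a hypothetical sketch based on the values above:
insert into attendant (ID, USERID, DEVICEID, "WHEN", INOUT, VERIFYMODE, WORKCODE)
values (1092, 1, 1, '28.08.2017 08:00', 0, 4, 3);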

Complex sed Command for Insert Command

I have a bunch of PHP files which contain many insert commands.
Into each query, I want to insert a column admin_id = '$admin_id',
i.e., if the query is
insert into users (ch_id, num_value) values ('2', '100')
the query should be converted to
insert into users (admin_id, ch_id, num_value) values ($admin_id, '2', '100')
To do this, I have executed the following command
sed -i 's/\(insert.*into.*\) (\(.*values\)/\1 (admin_id, \2/' *.php
and
sed -i "s/\(insert.*into.*\) values (/\1 values ('\$admin_id', /" *.php
The above has worked successfully, but I am still facing problems with queries that have no values list or no where clause, i.e.,
insert into abctable (id,no)
to
insert into tablename (admin_id, id, no)
and
insert into abctable select $column from $tableperiod
to
insert into abctable select $column from $tableperiod where admin_id='$admin_id'
and
insert into abctable select $column from $tableperiod where abc != 'xyz'
to
insert into abctable select $column from $tableperiod where admin_id = '$admin_id' and abc != 'xyz'
How can I insert admin_id in these queries as well?
The queries in the PHP files are executed by passing the query to a function in the following way:
execute_query("insert * from $table order by username");
I can find the queries that are still left to be modified by executing:
grep 'execute_query' *| grep insert| grep -v admin_id > stillleft.txt
I have solved it by using the following command:
sed -e "s/\(query.*insert.*select.*where\)/& admin_id='\$admin_id' and /g" -e t \
-e "s/\(query.*insert.*select.*\)\")/\1 where admin_id='\$admin_id\")'/g" -e t \
-e "s/\(query.*insert.*\)(\(.*\)values (/\1(admin_id, \2values ('\$admin_id', /g" -e t \
-e "s/\(query.*insert.*(\)/& admin_id, /g" \
-i *.php
I'm not sure my test cases are right, but I think this could help you.
I changed the first statement because I think it's easier this way, and it matches the first and second commands of your sed:
sed -i 's/\(insert into .* (\)\(.*) values (\)\(.*\)) /\1admin_id, \2\$admin_id, \3/' *.php
The second (the first you are looking for) should work with the following
sed -i 's/\(insert into .* (\)\(.*) \)/\1admin_id, \2/' *.php
And the last two should work with this:
sed -i "s/\(insert into \w* select \$column from \$tableperiod\)/\1 where admin_id='\$admin_id'/" *.php
I hope this works for you. If not, please send a little bit more test data; I tested the commands with the text of your question as input.
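To illustrate the last command on a throwaway file (test.php is hypothetical here, and \w requires GNU sed):
$ echo 'insert into abctable select $column from $tableperiod' > test.php
$ sed -i "s/\(insert into \w* select \$column from \$tableperiod\)/\1 where admin_id='\$admin_id'/" test.php
$ cat test.php
insert into abctable select $column from $tableperiod where admin_id='$admin_id'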
If you use multiple sed commands, you'll traverse the complete file each time. You can do it in a single pass. Assuming an input file infile that looks like this:
insert into users (ch_id, num_value) values ('2', '100')
insert into abctable (id, no)
insert into abctable select $column from $tableperiod
insert into abctable select $column from $tableperiod where abc != 'xyz'
we can use the following sed script, sedscr:
/^insert into/ {
s/\(([^)]*)\)(.*)\(([^)]*)\)/(admin_id, \1)\2($admin_id, \3)/
s/^([^(]+)\(([^)]*)\)$/\1(admin_id, \2)/
/\(.*\)/! {
/where/s/$/ and admin_id ='$admin_id'/
/where/!s/$/ where admin_id='$admin_id'/
}
}
It does the following:
if a line starts with insert into, then
for all lines with two pairs of parentheses, insert admin_id into the first one and $admin_id into the second one
for lines with one pair of parentheses at the end, insert admin_id
if there are no parentheses, then
if there is a "where" clause, append and admin_id = '$admin_id'
else append where admin_id='$admin_id'
This can be called as follows:
$ sed -rf sedscr infile
insert into users (admin_id, ch_id, num_value) values ($admin_id, '2', '100')
insert into abctable (admin_id, id, no)
insert into abctable select $column from $tableperiod where admin_id='$admin_id'
insert into abctable select $column from $tableperiod where abc != 'xyz' and admin_id ='$admin_id'
If you can't use extended regular expressions (-r), the quoting of parentheses has to be inverted (all \( become ( etc.) and the + has to be replaced by \{1,\}; see the sketch below.
The cumbersome regexes such as \(([^)]*)\) stand for "between literal parentheses, capture zero or more characters that are not a closing parenthesis"; this emulates non-greedy matching.
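For reference, a sketch of that conversion applied to the script above (derived mechanically from the ERE version; not tested as thoroughly):
/^insert into/ {
s/(\([^)]*\))\(.*\)(\([^)]*\))/(admin_id, \1)\2($admin_id, \3)/
s/^\([^(]\{1,\}\)(\([^)]*\))$/\1(admin_id, \2)/
/(.*)/! {
/where/s/$/ and admin_id ='$admin_id'/
/where/!s/$/ where admin_id='$admin_id'/
}
}
It is run without -r:
$ sed -f sedscr infile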