How do I escape a single quote in a command-line query of psql? - postgresql

I googled a lot but couldn't find an answer.
How do I escape a single quote in a command-line query of psql?
psql -t -A -F $'\t' postgresql://zzzz:5432/casedb -U qqqq -c 'select id,ext_ids ->> 'qwe' as qwe from data ORDER BY qwe' > /jdata/qwe.tab
This results in an error:
ERROR: column "qwe" does not exist
LINE 1: select id,ext_ids ->> qwe as qwe from data...

In Postgres you can use dollar-quoted strings:
select id,ext_ids ->> $$qwe$$ as qwe from data ORDER BY qwe;
-- or
select id,ext_ids ->> $anything$qwe$anything$ as qwe from data ORDER BY qwe;
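Because dollar quoting avoids single quotes entirely, the whole command from the question should then work with plain single-quote shell quoting (a sketch, reusing the connection details above):
psql -t -A -F $'\t' postgresql://zzzz:5432/casedb -U qqqq -c 'select id,ext_ids ->> $$qwe$$ as qwe from data ORDER BY qwe' > /jdata/qwe.tab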

You could just use double quotes (") for the shell quoting and single quotes (') for the SQL quoting:
psql -t -A -F $'\t' postgresql://zzzz:5432/casedb -U qqqq -c "select id,ext_ids ->> 'qwe' as qwe from data ORDER BY qwe" > /jdata/qwe.tab
# Here ------------------------------------------------------^---------------------------------------------------------^
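Another option, if you want to keep the outer single quotes, is the usual shell idiom of closing the quoted string, adding an escaped quote, and reopening it ('\'' yields a literal '):
psql -t -A -F $'\t' postgresql://zzzz:5432/casedb -U qqqq -c 'select id,ext_ids ->> '\''qwe'\'' as qwe from data ORDER BY qwe' > /jdata/qwe.tab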

Related

sed: add semicolon after each sentence

I'm using this grep to extract the SQL statements from my log file:
grep -oPzZ ' SQL "\K[^"]+' log.log
After that, I need to format the output by adding ; at the end of each extracted SQL statement:
grep -oPzZ ' SQL "\K[^"]+' log.log | sed -E '$s/$/\n/; s/\x0/;/; s/^[[:blank:]]+//'
Nevertheless, it doesn't seem to work at all. I'm getting:
alter table HFJ_RES_LINK modify ( SRC_PATH varchar2(200) );create index IDX_VALUESET_EXP_STATUS on TRM_VALUESET(EXPANSION_STATUS)drop index IDX_VALUESET_EXP_STATUS
As you can see, a ; is added after the first SQL statement, but not after the later ones.
log.log is similar to:
3_6_0.20180929.1: SQL "alter table HFJ_RES_LINK modify ( SRC_PATH varchar2(200) )" returned 0
4_0_0.20190722.37: SQL "create index IDX_VALUESET_EXP_STATUS on TRM_VALUESET(EXPANSION_STATUS)" returned 0
Any ideas?
It would be better to use awk here, as you can combine the grep and sed operations into a single command. Consider this GNU awk solution:
awk -v RS='SQL "[^"]+"' 'RT {gsub(/^SQL "|"|\n/, "", RT); print RT ";"}' file
alter table HFJ_RES_LINK modify ( SRC_PATH varchar2(200) );
create index IDX_VALUESET_EXP_STATUS on TRM_VALUESET(EXPANSION_STATUS);
Details:
This awk command uses a custom record separator: SQL followed by a single space and then a double-quoted string.
The matched text is available to awk in the internal variable RT.
When RT is non-empty, we remove the leading SQL ", the remaining " characters, and all line breaks from RT, and finally print it with a trailing ;.
A simple sed can replace your grep + sed:
sed -nr 's/.*SQL "([^"]*)".*/\1;/p' log.log
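Given the sample log above, this prints the same output as the awk version:
alter table HFJ_RES_LINK modify ( SRC_PATH varchar2(200) );
create index IDX_VALUESET_EXP_STATUS on TRM_VALUESET(EXPANSION_STATUS);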

psql \copy "No such file or directory" if file is a variable

I want to copy a .csv file into a postgresql table, where the file name is a variable. It fails with a "no such file or directory" error if \COPY and a user other than postgres is used. However, the copy succeeds if COPY and the postgres user is used.
The failing script:
martin@opensuse1:~> ./test1.sh
Null display is "¤".
'/home/martin/20180423.csv'
psql:load.sql:2: :load_csv: No such file or directory
martin@opensuse1:~> cat test1.sh
load_csv=/home/martin/20180423.csv
psql -d test1 -e -f load.sql --variable=load_csv="'$load_csv'"
martin@opensuse1:~> cat load.sql
\echo :load_csv
\copy test_table (col1, col2, col3) FROM :load_csv delimiter ';' encoding 'LATIN1' NULL '';
martin@opensuse1:~>
The working script:
martin@opensuse1:~> ./test1.sh
Null display is "¤".
'/home/martin/20180423.csv'
copy test_table (col1, col2, col3) FROM '/home/martin/20180423.csv' delimiter ';' encoding 'LATIN1' NULL '';
COPY 3
martin@opensuse1:~> cat test1.sh
load_csv=/home/martin/20180423.csv
psql -w postgres -d test1 -e -f load.sql --variable=load_csv="'$load_csv'"
martin@opensuse1:~> cat load.sql
\echo :load_csv
copy test_table (col1, col2, col3) FROM :load_csv delimiter ';' encoding 'LATIN1' NULL '';
martin@opensuse1:~>
What can I do to make this script run without having to use the postgres user?
Martin
It seems that the psql variables are not substituted in the \copy command.
A solution is to write the \copy command to a file and execute that file.
The relevant part of my script (it loads the table par from the .tsv file whose name is stored in :input_file) is:
-- Tuples only:
\t on
-- Output file:
\o load_cmd.sql
select concat('\copy par from ''', :'input_file', '.tsv'';');
-- Standard output again:
\o
-- Normal decoration of tables:
\t off
-- Now execute the file with the \copy command:
\i load_cmd.sql
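For example, assuming input_file was set to /home/martin/20180423 (borrowing the path from the question), the generated load_cmd.sql would contain the single line:
\copy par from '/home/martin/20180423.tsv';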

psql non-select: how to remove formatting and show only certain columns?

I'm looking to remove all line drawing characters from:
PGPASSWORD="..." psql -d postgres -h "1.2.3.4" -p 9432 -c 'show pool_nodes' -U owner
 node_id |   hostname    | port | status | lb_weight |  role
---------+---------------+------+--------+-----------+---------
 0       | 10.20.30.40   | 5432 | 2      | 0.500000  | primary
 1       | 10.20.30.41   | 5432 | 2      | 0.500000  | standby
(2 rows)
Adding the -t option gets rid of the header and footer, but the vertical bars are still present:
PGPASSWORD="..." psql -t -d postgres -h "1.2.3.4" -p 9432 -c 'show pool_nodes' -U owner
0 | 10.20.30.40 | 5432 | 2 | 0.500000 | primary
1 | 10.20.30.41 | 5432 | 2 | 0.500000 | standby
Note that this question is specific to show pool_nodes and other similar non-select SQL statements.
My present workaround is to involve the Linux cut command:
<previous command> | cut -d '|' -f 4
The question has two parts:
Using psql only, how can the vertical bars above be removed?
Using psql only, how can a specific column (for example, status) or set of columns be shown? For example, the result might be just two lines, each showing the number 2.
I'm using psql (PostgreSQL) 9.2.18 on a CentOS 7 server.
For scripting psql use psql -qAtX:
-q: quiet (suppress informational messages)
-A: unaligned output
-t: tuples only (no header or row-count footer)
-X: do not read the .psqlrc startup file
To filter columns you must name them in the SELECT list. psql always outputs the full result set it gets from the server. E.g. SELECT status FROM pool_nodes.
Or you can use cut to extract columns by ordinal number, e.g.
psql -qAtX -c 'whatever' | cut -d '|' -f 1,2-4
(I have no idea how show pool_nodes can produce the output you show here, since SHOW returns a single scalar value...)
To change the delimiter from a pipe | to something else, use -F, e.g. -F ','. But be warned: the delimiter is not escaped when it appears in the output, so this isn't real CSV. You might want to consider a tab as a useful option; you have to enter a quoted literal tab to do this. (If doing it in an interactive shell, search for "how to enter a literal tab in bash" when you get stuck.)
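Alternatively, bash's ANSI-C quoting gives you a tab without typing a literal one; this is the same $'\t' trick used in the first question above:
psql -qAtX -F $'\t' -c 'whatever'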
Example showing all the above, given dummy data:
CREATE TABLE dummy_table (
    a integer,
    b integer,
    c text,
    d text
);
INSERT INTO dummy_table
VALUES
    (1,1,'chicken','turkey'),
    (2,2,'goat','cow'),
    (3,3,'mantis','cricket');
Query, with a single space as the column delimiter (so you'd better not have spaces in your data!):
psql -qAtX -F ' ' -c 'SELECT a, b, d FROM dummy_table'
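Given the dummy data above, this prints:
1 1 turkey
2 2 cow
3 3 cricket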
If for some reason you cannot generate a column-list for SELECT you can instead filter by column-ordinal with cut:
psql -qAtX -F '^' -c 'TABLE dummy_table' | cut -d '^' -f 1-2,4
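Given the dummy data above, this prints (cut keeps the input delimiter ^ in its output):
1^1^turkey
2^2^cow
3^3^cricket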

psql - read SQL file and output to CSV

I have a SQL file my_query.sql:
select * from my_table
Using psql, I can read in this sql file:
\i my_query.sql
Or pass it in as an arg:
psql -f my_query.sql
And I can output the results of a query string to a csv:
\copy (select * from my_table) to 'output.csv' with csv header
Is there a way to combine these so I can output the results of a query from a SQL file to a CSV?
Unfortunately there's no baked-in functionality for this, so you need a little bash-fu to get this to work properly.
CONN="psql -U my_user -d my_db"
QUERY="$(sed 's/;//g;/^--/ d;s/--.*//g;' my_query.sql | tr '\n' ' ')"
echo "\\copy ($QUERY) to 'out.csv' with CSV HEADER" | $CONN
The sed command removes all semicolons, comment lines, and end-of-line comments, and tr converts newlines to spaces (as mentioned in a comment by @abelisto):
-- my_query.sql
select *
from my_table
where timestamp < current_date -- only want today's records
limit 10;
becomes:
select * from my_table where timestamp < current_date limit 10
which then gets passed in to the valid psql command:
\copy (select * from my_table where timestamp < current_date limit 10) to 'out.csv' with csv header
Here's a script:
sql_to_csv.sh
#!/bin/bash
# sql_to_csv.sh
CONN="psql -U my_user -d my_db"
QUERY="$(sed 's/;//g;/^--/ d;s/--.*//g;' $1 | tr '\n' ' ')"
echo "$QUERY"
echo "\\copy ($QUERY) to '$2' with csv header" | $CONN > /dev/null
./sql_to_csv.sh my_query.sql out.csv
I think the simplest way is to take advantage of the shell's command substitution:
psql -U my_user -d my_db -c "COPY ($(cat my_query.sql)) TO STDOUT WITH CSV HEADER" > my_query_results.csv
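For example, if my_query.sql contains only select * from my_table, the command expands to:
psql -U my_user -d my_db -c "COPY (select * from my_table) TO STDOUT WITH CSV HEADER" > my_query_results.csv
Note that this breaks if the file ends with a semicolon, since the semicolon would land inside the COPY (...) parentheses.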
You could do it using a bash script.
dump_query_to_csv.sh:
#!/bin/bash
# Takes an sql query file as an argument and dumps its results
# to a CSV file using psql \copy command.
#
# Usage:
#
# dump_query_to_csv.sh <sql_query_file> [<csv_output_filesname>]
SQL_FILE=$1
[ -z "$SQL_FILE" ] && echo "Must supply query file" && exit
shift
OUT_FILE=$1
[ -z "$OUT_FILE" ] && OUT_FILE="output.csv" # default to "output.csv" if no argument is passed
TMP_TABLE=ttt_temp_table_xx # some table name that will not collide with existing tables
## Build a psql script to do the work
PSQL_SCRIPT=temp.psql
# create a temporary database table using the SQL from the query file
echo "DROP TABLE IF EXISTS $TMP_TABLE;CREATE TABLE $TMP_TABLE AS" > $PSQL_SCRIPT
cat $SQL_FILE >> $PSQL_SCRIPT
echo ";" >> $PSQL_SCRIPT
# copy the temporary table to the output CSV file
echo "\copy (select * from $TMP_TABLE) to '$OUT_FILE' with csv header" >> $PSQL_SCRIPT
# drop the temporary table
echo "DROP TABLE IF EXISTS $TMP_TABLE;" >> temp.sql
## Run psql script using psql
psql my_database < $PSQL_SCRIPT # replace my_database and add user login credentials as necessary
## Remove the psql script
rm $PSQL_SCRIPT
You'll need to edit the psql line in the script to connect to your database. The script could also be enhanced to take the database and account credentials as arguments.
The accepted solution is correct, but I was on Windows and had to make it run via a batch (command) file. Posting it here in case someone needs it.
@echo off
echo 'Reading file %1'
set CONN="C:\Program Files\PostgreSQL\11\bin\psql.exe" -U dbusername -d mydbname
"C:\Program Files\Git\usr\bin\sed.exe" 's/;//g;/^--/ d;s/--.*//g;' %1 | "C:\Program Files\Git\usr\bin\tr.exe" '\n' ' ' > c:\temp\query.txt
set /p QUERY=<c:\temp\query.txt
echo %QUERY%
echo \copy (%QUERY%) to '%2' WITH (FORMAT CSV, HEADER) | %CONN%
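Usage mirrors the bash version; assuming the batch file is saved as sql_to_csv.bat (the name is arbitrary):
sql_to_csv.bat my_query.sql out.csv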

Export from PostgreSQL multiple times to same file

Is it possible to export a table to CSV, appending multiple selections to the same file?
I would like to export (for instance):
SELECT * FROM TABLE WHERE a > 5
Then, later:
SELECT * FROM TABLE WHERE b > 2
This must go to the same file.
Thanks in advance!
The only way that I know of to do this is from the command-line, redirecting output.
psql -d dbname -t -A -F"," -c "SELECT * FROM TABLE WHERE a > 5" >> output.csv
then later
psql -d dbname -t -A -F"," -c "SELECT * FROM TABLE WHERE b > 2" >> output.csv
You can look up the command-line options here:
http://www.postgresql.org/docs/9.0/static/app-psql.html
Use \o <filename> to output to a file. All query output after using \o will be appended to <filename> until you set \o to something else.
Using \o in combination with \copy to STDOUT seems to work. For example:
db=> \o /tmp/test.csv
db=> \copy (select 'foo','bar') to STDOUT with CSV;
db=> \copy (select 'foo','bar') to STDOUT with CSV;
db=> \q
$ cat /tmp/test.csv
foo,bar
foo,bar