Postgres query result formatting in a bash script that sends mail - postgresql

I have a bash script where I execute a Postgres query using the psql client, store the result in a variable, and send the result in an email using the mail utility.
Below are the bash scripts I have tried.
1:
#!/bin/bash
varresult=`psql -H -A -h host_name -p port -U usr -d mydb -t -c "select rpad(sch_name, 10, ' '), rpad(trs_name, 63, ' '), rpad(trs_type, 11, ' '), comments from table1
;"`
mail -s "testing email" someone#example.com <<< $varresult
2:
#!/bin/bash
mail -s "testing email" someone#example.com <<< `psql -H -A -h host_name -p port -U usr -d mydb -t -c "select rpad(sch_name, 10, ' '), rpad(trs_name, 63, ' '), rpad(trs_type, 11, ' '), comments from table1
;"`
Both of the above scripts send the same unformatted result set in the email. The query result has 4 columns, which appear without any formatting in the receiver's email.
So I used the rpad function in the query to align them properly, but in the email the result still looks unformatted. When I copy the email content to a text file, the content looks perfectly formatted.
How the result looks in email:
COL1 | COL2 | COL3
val_t | test_rgt | test cmts apr
val_t | test_t | test cmts
val_tr | test_wgdt | test cmts
val_t | test_hty | test cmts stgr
val_t | test_sf | test cmts gedf
How the result looks in notepad after copy/pasting the above result:
COL1       | COL2          | COL3
val_t      | test_rgt      | test cmts apr
val_t      | test_t        | test cmts
val_tr     | test_wgdt     | test cmts
val_t      | test_hty      | test cmts stgr
val_t      | test_sf       | test cmts gedf
Can someone help me with how to make the result look properly formatted in the email?
Below is the mail version:
Heirloom Mail version 12.5 7/5/10.

You are effectively asking the shell to remove all the spaces from the output from psql; see When to wrap quotes around a shell variable. Besides, that's basically a useless use of echo, though with the here string it's somewhat in disguise.
#!/bin/sh
psql -H -A -h host_name -p port -U usr -d mydb -t \
-c "select rpad(sch_name, 10, ' '), rpad(trs_name, 63, ' '), rpad(trs_type, 11, ' '), comments from table1;" |
mail -s "testing email" someone#example.com
Capturing a large amount of data into a variable just so you can echo that variable once is also a waste of memory.
http://shellcheck.net/ basically identifies both of these problems, though you might need to do some digging to uncover exactly why they are wrong. Going forward, probably try this free service before asking for human assistance.
Without the <<< here string syntax, there is no longer any Bash-specific code in this script, so I changed the shebang to /bin/sh which on many platforms is a smaller, faster, and more secure alternative; perhaps see also Difference between sh and bash.
The HTML output from psql -H should also be marked in the email headers as text/html (the default is text/plain). If your email client displays the HTML anyway, that's probably not strictly necessary, provided you can ensure that all recipients have a client which behaves like this, in spite of the pertinent Internet RFCs.
Heirloom mailx does not seem to support overriding the Content-type: header from the command line, though it does contain logic to heuristically infer the content type if you send the body as an attachment instead.
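If an attachment is acceptable to your recipients, a rough sketch of that workaround might look like this (untested; /tmp/report.html is just a placeholder name, and this assumes your mail really is Heirloom mailx, where -a attaches a file rather than adding a header):
# sketch: write the HTML to a file and let mailx attach it and guess the content type
psql -H -A -h host_name -p port -U usr -d mydb -t \
    -c "select rpad(sch_name, 10, ' '), rpad(trs_name, 63, ' '), rpad(trs_type, 11, ' '), comments from table1;" > /tmp/report.html
mail -s "testing email" -a /tmp/report.html someone@example.com < /dev/null
rm -f /tmp/report.html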
Alternatively, you could assemble a simple MIME message yourself and pass it to sendmail. This is brittle for other reasons (mainly, you have to know what you are doing if you want to modify the code, and the location of sendmail is system-dependent; look in /usr/sbin or /usr/libexec if you can't find it on your PATH).
( printf '%s\n' \
    "From: you <sender@example.org>" \
    "To: victim <someone@example.com>" \
    "Subject: testing email" \
    "MIME-Version: 1.0" \
    'Content-type: text/html; charset="utf-8"' \
    "Content-transfer-encoding: quoted-printable" \
    ""
  psql -H -A -h host_name -p port -U usr -d mydb -t \
    -c "select rpad(sch_name, 10, ' '), rpad(trs_name, 63, ' '), rpad(trs_type, 11, ' '), comments from table1;" |
  # really cheap quoted-printable encoding
  fold -w72 |
  sed -e 's/=/=3D/g;s/$/=/' ) |
sendmail -oi -t

Related

Check number of active meetings in Big Blue Button from command line

I want to check how many active meetings there are on the BBB server at any one time from the command line. I have tried
$ bbb-conf --network
but am not getting anywhere. I have also checked the number of active connections to ports 80 and 443
$ netstat -anp | grep :443 | grep ESTABLISHED | wc -l
but I'm not sure if I can trust that figure.
I know I can use the isMeetingRunning call from the API, but I'm just looking for a command-line solution.
Any ideas would be appreciated.
The following bash script, which can be run from command line on the same machine as the BigBlueButton server, will process the response to the BBB API getMeetings call.
#!/bin/bash
APICallName="getMeetings"
APIQueryString=""

# Extract the API endpoint URL and the shared secret from bbb-conf output
X=$( bbb-conf --secret | fgrep URL: )
APIEndPoint=${X##* }
Y=$( bbb-conf --secret | fgrep Secret: )
Secret=${Y##* }

# The checksum is the SHA-1 of call name + query string + secret
S=$APICallName$APIQueryString$Secret
Checksum=$( echo -n "$S" | sha1sum | cut -f 1 -d ' ' )

if [[ "$APIQueryString" == "" ]]
then
    URL="${APIEndPoint}api/$APICallName?checksum=$Checksum"
else
    URL="${APIEndPoint}api/$APICallName?$APIQueryString&checksum=$Checksum"
fi

# Count the <meetingID> tags in the XML response
wget -q -O - "$URL" | grep -o '<meetingID>' | wc -w
Tested on a live BBB machine.
Note:
The APICallName and APIQueryString can be modified to provide an interface to other BBB API calls. See https://docs.bigbluebutton.org/dev/api.html
The command-line sha1sum will output a different result if a newline is appended to its input. This is the reason echo -n is used instead of echo.
In the last line, the script processes the XML output from the API call in a very naïve way, simply counting the number of occurrences of the <meetingID> tag. More elaborate processing would require parsing the XML, as sketched below.
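For example, if xmllint (from libxml2) happens to be installed, a rough sketch of a more XML-aware count, replacing the last line of the script, could be:
# count <meetingID> elements with an XML parser instead of grep (sketch, assumes xmllint is available)
wget -q -O - "$URL" | xmllint --xpath 'count(//meetingID)' -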

I want to update a file with rollno. and name from a csv file where name and roll no are separated by comma

My input file is a CSV file containing details such as:
2233,anish sharma
2234,azad khan
2235,birbal singh
2236,chaitanya kumar
My expected output is a display of the two details in two separate columns.
I executed the following code. The full name is not getting displayed; the part after the space doesn't appear. What changes should be done?
echo "Roll no updation"
tput cup 10 10
echo "Key in file name (rollno,name separated by comma)"
tput cup 12 10
read infile
for i in `cat $infile`
do
rollno=`echo $i|cut -d , -f1`
name=`echo $i|cut -d , -f2`
psql -U postgres -A -t -F, -c "update student set name = '$name' where rollno = '$rollno' current record" >bq
done
Your loop should be written in this fashion:
# a comma separates the two fields
while IFS=, read -r rollno name; do
    psql -U postgres -A -t -F, -c \
        "update student set name = '$name'
         where rollno = '$rollno'" >bq
done < "$infile"
But you should be aware that this code is susceptible to SQL injection. Only use it if you can trust the source of the data!
Any ' in the file will cause errors, or worse. A somewhat safer variant is sketched below.
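This sketch (placeholder connection options, and it assumes a psql new enough to interpolate variables) passes the values as psql variables and lets :'name' / :'rollno' do the quoting:
while IFS=, read -r rollno name; do
  # psql quotes the :'...' variables itself, so a ' in the data no longer breaks the statement
  psql -U postgres -v name="$name" -v rollno="$rollno" <<'SQL'
UPDATE student SET name = :'name' WHERE rollno = :'rollno';
SQL
done < "$infile"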

psql non-select: how to remove formatting and show only certain columns?

I'm looking to remove all line drawing characters from:
PGPASSWORD="..." psql -d postgres -h "1.2.3.4" -p 9432 -c 'show pool_nodes' -U owner
 node_id |   hostname    | port | status | lb_weight |  role
---------+---------------+------+--------+-----------+---------
 0       | 10.20.30.40   | 5432 | 2      | 0.500000  | primary
 1       | 10.20.30.41   | 5432 | 2      | 0.500000  | standby
(2 rows)
Adding the -t option gets rid of the header and footer, but the vertical bars are still present:
PGPASSWORD="..." psql -t -d postgres -h "1.2.3.4" -p 9432 -c 'show pool_nodes' -U owner
0 | 10.20.30.40 | 5432 | 2 | 0.500000 | primary
1 | 10.20.30.41 | 5432 | 2 | 0.500000 | standby
Note that this question is specific to show pool_nodes and other similar non-select SQL statements.
My present workaround is to involve the Linux cut command:
<previous command> | cut -d '|' -f 4
The question has two parts:
How, using psql only, can the vertical bars above be removed?
How, using psql only, can a specific column (for example, status) or set of columns be shown? For example, the result might be just two lines, each showing the number 2.
I'm using psql version psql (PostgreSQL) 9.2.18 on a CentOS 7 server.
For scripting psql use psql -qAtX:
-q: quiet
-A: unaligned output
-t: tuples only
-X: do not read .psqlrc
To filter columns you must name them in the SELECT list. psql always outputs the full result set it gets from the server. E.g. SELECT status FROM pool_nodes.
Or you can use cut to extract columns by ordinal number, e.g.
psql -qAtX -c 'whatever' | cut -d '|' -f 1,2-4
(I have no idea how show pool_nodes can produce the output you show here, since SHOW returns a single scalar value...)
To change the delimiter from a pipe | to something else, use -F, e.g. -F ','. But be warned: the delimiter is not escaped when it appears in the output; this isn't CSV. You might want to consider a tab as a useful option; you have to enter a quoted literal tab to do this (if doing it in an interactive shell, search for "how to enter a literal tab in bash" when you get stuck; a bash sketch follows).
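In bash, ANSI-C quoting gives you a literal tab without having to type one; a quick sketch reusing the connection options from the question:
# $'\t' expands to a literal tab character in bash
psql -qAtX -F $'\t' -d postgres -h "1.2.3.4" -p 9432 -U owner -c 'show pool_nodes'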
Example showing all the above, given dummy data:
CREATE TABLE dummy_table (
a integer,
b integer,
c text,
d text
);
INSERT INTO dummy_table
VALUES
(1,1,'chicken','turkey'),
(2,2,'goat','cow'),
(3,3,'mantis','cricket');
Query, with a single space as the column delimiter (so you'd better not have spaces in your data!):
psql -qAtX -F ' ' -c 'SELECT a, b, d FROM dummy_table'
If for some reason you cannot generate a column list for SELECT, you can instead filter by column ordinal with cut:
psql -qAtX -F '^' -c 'TABLE dummy_table' | cut -d '^' -f 1-2,4
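With the dummy data above, that last pipeline should print something like:
1^1^turkey
2^2^cow
3^3^cricket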

psql - read SQL file and output to CSV

I have a SQL file my_query.sql:
select * from my_table
Using psql, I can read in this sql file:
\i my_query.sql
Or pass it in as an arg:
psql -f my_query.sql
And I can output the results of a query string to a csv:
\copy (select * from my_table) to 'output.csv' with csv header
Is there a way to combine these so I can output the results of a query from a SQL file to a CSV?
Unfortunately there's no baked-in functionality for this, so you need a little bash-fu to get this to work properly.
CONN="psql -U my_user -d my_db"
QUERY="$(sed 's/;//g;/^--/ d;s/--.*//g;' my_query.sql | tr '\n' ' ')"
echo "\\copy ($QUERY) to 'out.csv' with CSV HEADER" | $CONN
The sed invocation removes all semicolons, comment lines, and end-of-line comments, and tr converts newlines to spaces (as mentioned in a comment by @abelisto):
-- my_query.sql
select *
from my_table
where timestamp < current_date -- only want today's records
limit 10;
becomes:
select * from my_table where timestamp < current_date limit 10
which then gets passed in to the valid psql command:
\copy (select * from my_table where timestamp < current_date limit 10) to 'out.csv' with csv header
Here's a script:
sql_to_csv.sh
#!/bin/bash
# sql_to_csv.sh
CONN="psql -U my_user -d my_db"
QUERY="$(sed 's/;//g;/^--/ d;s/--.*//g;' $1 | tr '\n' ' ')"
echo "$QUERY"
echo "\\copy ($QUERY) to '$2' with csv header" | $CONN > /dev/null
./sql_to_csv.sh my_query.sql out.csv
I think the simplest way is to take advantage of the shell's variable expansion capabilities:
psql -U my_user -d my_db -c "COPY ($(cat my_query.sql)) TO STDOUT WITH CSV HEADER" > my_query_results.csv
You could do it using a bash script.
dump_query_to_csv.sh:
#!/bin/bash
# Takes an sql query file as an argument and dumps its results
# to a CSV file using psql \copy command.
#
# Usage:
#
# dump_query_to_csv.sh <sql_query_file> [<csv_output_filesname>]
SQL_FILE=$1
[ -z "$SQL_FILE" ] && echo "Must supply query file" && exit
shift
OUT_FILE=$1
[ -z "$OUT_FILE" ] && OUT_FILE="output.csv" # default to "output.csv" if no argument is passed
TMP_TABLE=ttt_temp_table_xx # some table name that will not collide with existing tables
## Build a psql script to do the work
PSQL_SCRIPT=temp.psql
# create a temporary database table using the SQL from the query file
echo "DROP TABLE IF EXISTS $TMP_TABLE;CREATE TABLE $TMP_TABLE AS" > $PSQL_SCRIPT
cat $SQL_FILE >> $PSQL_SCRIPT
echo ";" >> $PSQL_SCRIPT
# copy the temporary table to the output CSV file
echo "\copy (select * from $TMP_TABLE) to '$OUT_FILE' with csv header" >> $PSQL_SCRIPT
# drop the temporary table
echo "DROP TABLE IF EXISTS $TMP_TABLE;" >> temp.sql
## Run psql script using psql
psql my_database < $PSQL_SCRIPT # replace my_database and add user login credentials as necessary
## Remove the psql script
rm $PSQL_SCRIPT
You'll need to edit the psql line in the script to connect to your database. The script could also be enhanced to take the database and account credentials as arguments; a sketch of that follows.
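For example, a rough sketch of that enhancement (hypothetical argument positions, with made-up defaults) would replace the psql line with something like:
# hypothetical: take the database name and user as optional 3rd and 4th arguments
DB_NAME=${3:-my_database}
DB_USER=${4:-postgres}
psql -U "$DB_USER" "$DB_NAME" < $PSQL_SCRIPT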
The accepted solution is correct, but I was on Windows and had to run it via a batch (command) file. Posting it here in case someone needs it.
@echo off
echo 'Reading file %1'
set CONN="C:\Program Files\PostgreSQL\11\bin\psql.exe" -U dbusername -d mydbname
"C:\Program Files\Git\usr\bin\sed.exe" 's/;//g;/^--/ d;s/--.*//g;' %1 | "C:\Program Files\Git\usr\bin\tr.exe" '\n' ' ' > c:\temp\query.txt
set /p QUERY=<c:\temp\query.txt
echo %QUERY%
echo \copy (%QUERY%) to '%2' WITH (FORMAT CSV, HEADER) | %CONN%

How to send special characters via mail from a shell script?

I have a script that runs on cron that outputs some text which we send to the 'mail' program. The general line is like this:
./command.sh | mail -s "My Subject" destination@address.com -- -F "Sender Name" -f sender@address.com
The problem is that the text generated by the script has some special characters - é, ã, ç - since it is not in English. When the e-mail is received, each character is replaced by ??.
Now I understand that this is most likely due to the encoding that is not set correctly. What is the easiest way to fix this?
My /usr/bin/mail is symlinked to /etc/alternatives/mail which is also symlinked to /usr/bin/bsd-mailx
I had to specify the encoding in the mail header myself. (The -S option is not supported here.)
cat myutf8-file | mail -a "Content-Type: text/plain; charset=UTF-8" -s "My Subject" me@mail.com
You're right in assuming this is a charset issue. You need to add the appropriate environment variables at the beginning of your crontab.
Something like this should work:
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
Optionally use LC_ALL in place of LC_CTYPE.
Reference: http://opengroup.org/onlinepubs/007908799/xbd/envvar.html
Edit: The reason it displays fine when you run it in your shell is probably because the above env vars are set in your shell.
To verify, execute 'locale' in your shell, then compare it to the output of a cron job that runs the same command (see the example below).
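For example, a throwaway cron entry like this (the output path is arbitrary) captures the cron environment's locale so you can diff it against your shell's:
* * * * * locale > /tmp/cron-locale.txt 2>&1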
Re-Edit: Ok, so it's not an env var problem.
I am assuming you're using mailx, as it is the most common nowadays. Its man page says:
The character set for outgoing messages is not necessarily the same as the one used on the terminal. If an outgoing text message contains characters not representable in US-ASCII, the character set being used must be declared within its header. Permissible values can be declared using the sendcharsets variable,
So, try adding the following arguments when calling mail:
-S sendcharsets=utf-8,iso-8859-1
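Applied to the command from the question, that would look roughly like this (assuming your mail really is mailx and accepts -S):
./command.sh | mail -S sendcharsets=utf-8,iso-8859-1 -s "My Subject" destination@address.com -- -F "Sender Name" -f sender@address.com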
Just to give additional information to KumZ's answer:
If you need to specify more headers with the -a switch, feel free to add them, like this (note the repeated use of -a):
echo /path/to/file | mail -s "Some subject" recipient@theirdomain.com -a "From: Human Name <noreply@mydomain.com>" -a "Content-Type: text/plain; charset=UTF-8"
I've written a bash function to send an email to recipients. The function sends UTF-8 encoded mails and handles UTF-8 characters in both the subject and the content by base64-encoding them.
To send a plain text email:
send_email "plain" "from#domain.com" "subject" "contents" "to#domain.com" "to2#domain.com" "to3#domain.com" ...
To send a HTML email:
send_email "html" "from#domain.com" "subject" "contents" "to#domain.com" "to2#domain.com" "to3#domain.com" ...
Here is the function code.
# Send an email to recipients.
#
# @param string $content_type Email content MIME type: 'html' or 'plain'.
# @param string $from_address Sender email.
# @param string $subject Email subject.
# @param string $contents Email contents.
# @param array $recipients Email recipients.
function send_email() {
    [[ ${#} -lt 5 ]] && exit 1
    local content_type="${1}"
    local from_address="${2}"
    local subject="${3}"
    local contents="${4}"
    # Remove all args but recipients.
    shift 4
    local encoded_contents="$(base64 <<< "${contents}")"
    local encoded_subject="=?utf-8?B?$(base64 --wrap=0 <<< "${subject}")?="
    # Loop over the remaining arguments (the recipients).
    for recipient in "${@}"; do
        if [[ -n "${recipient}" ]]; then
            # Note the blank line separating the headers from the base64 body.
            sendmail -f "${from_address}" "${recipient}" \
<<< "Subject: ${encoded_subject}
MIME-Version: 1.0
From: ${from_address}
To: ${recipient}
Content-Type: text/${content_type}; charset=\"utf-8\"
Content-Transfer-Encoding: base64
Content-Disposition: inline

${encoded_contents}"
        fi
    done
    return 0
} # send_email()
You may use the sendmail command directly, without the mail wrapper/helper.
It allows you to generate all the headers required for a "raw" UTF-8 body
(UTF-8 is mentioned in the asker's comments).
WARNING-1:
Non-7bit/ASCII characters in headers (e.g. Subject:, From:, To:) require special encoding.
WARNING-2:
sendmail may break long lines (>990 bytes).
SENDER_ADDR=sender@address.com
SENDER_NAME="Sender Name"
RECIPIENT_ADDR=destination@address.com
(
# BEGIN of mail generation chain of commands
# "HERE" document with all headers and headers-body separator
cat << END
Subject: My Subject
From: $SENDER_NAME <$SENDER_ADDR>
To: $RECIPIENT_ADDR
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
END
# custom script to generate email body
./command.sh
# END of mail generation chain of commands
) | /usr/sbin/sendmail -i -f$SENDER_ADDR -F"$SENDER_NAME" $RECIPIENT_ADDR
RFC 2045, rule (5) (Soft Line Breaks): the Quoted-Printable encoding REQUIRES that encoded lines be no more than 76 characters long. Bash shell script code:
#!/bin/bash
subject_encoder(){
echo -n "$1" | xxd -ps -c3 |awk -Wposix 'BEGIN{
BASE64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
printf " =?UTF-8?B?"; bli=8
}
function encodeblock (strin){
b1=sprintf("%d","0x" substr(strin,1,2))
b2=sprintf("%d","0x" substr(strin,3,2))
b3=sprintf("%d","0x" substr(strin,5,2))
o=substr(BASE64,b1/4 + 1,1) substr(BASE64,(b1%4)*16 + b2/16 + 1,1)
len=length(strin)
if(len>1) o=o substr(BASE64,(b2%16)*4 + b3/64 + 1,1); else o=o"="
if(len>2) o=o substr(BASE64,b3%64 +1 ,1); else o=o"="
return o
}{
bs=encodeblock($0)
bl=length(bs)
if((bl+bli)>64){
printf "?=\n =?UTF-8?B?"
bli=bl
}
printf bs
bli+=bl
}END{
printf "?=\n"
}'
}
SUBJECT="Relatório de utilização"
SUBJECT=`subject_encoder "${SUBJECT}"`
echo '<html>test</html>'| mail -a "Subject:${SUBJECT}" -a "MIME-Version: 1.0" -a "Content-Type: text/html; charset=UTF-8" you@domain.net
This is probably not a command line issue, but a character set problem. Usually when sending E-Mails, the character set will be iso-8859-1. Most likely the text you are putting into the process is not iso-8859-1 encoded. Check out what the encoding is of whatever data source you are getting the text from.
Obligatory "good reading" link: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Re your update: in that case, if you enter the special characters manually, your terminal may be using UTF-8 encoding. You should be able to convert the file's character set using iconv, for example (a sketch follows). The alternative would be to tell mail to use UTF-8 encoding, but IIRC that is not entirely trivial.
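A minimal iconv sketch, assuming the script's output is UTF-8 and the receiving side expects ISO-8859-1 (é, ã and ç are all representable in both):
./command.sh | iconv -f UTF-8 -t ISO-8859-1 | mail -s "My Subject" destination@address.com -- -F "Sender Name" -f sender@address.com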
Use the option -o message-charset="utf-8", like this:
sendemail -f your_email -t destination_email -o message-charset="utf-8" -u "Subject" -m "Message" -s smtp-mail.outlook.com:587 -xu your_mail -xp your_password
I'm a bit late but none of the previous solutions worked for me.
Locating mail command (CentOS)
# locate mail | grep -v www | grep -v yum | grep -v share
# ls -l /bin/mail
lrwxrwxrwx. 1 root root 22 jul 21 2016 /bin/mail -> /etc/alternatives/mail
# ls -l /etc/alternatives/mail
lrwxrwxrwx. 1 root root 10 jul 21 2016 /etc/alternatives/mail -> /bin/mailx
# ls -l /bin/mailx
-rwxr-xr-x. 1 root root 390744 dic 16 2014 /bin/mailx
So the mail command is in fact mailx. This helped with the search that finally took me to an answer on Unix & Linux Stack Exchange, which states:
Mailx expects input text to be in Unix format, with lines separated by newline (^J, \n) characters only. Non-Unix text files that use carriage return (^M, \r) characters in addition will be treated as binary data; to send such files as text, strip these characters e. g. by tr -d '\015'
(from the man page) and:
If there are other control characters in the file, mailx will treat the data as binary and attach it instead of using it as the body. The following will strip all special characters and place the contents of the file into the message body.
So the solution is to use the tr command to remove those special characters. Something like this:
./command.sh \
| tr -cd "[:print:]\n" \
| mail -s "My Subject" destination#address.com -- -F "Sender Name" -f sender#address.com
I've used this solution with my command:
grep -v "pattern" $file \
| grep -v "another pattern" \
| ... several greps more ... \
| tr -cd "[:print:]\n" \
| mail -s "$subject" -a $file -r '$sender' $destination_email