How to use postgres user in a shell script on Ubuntu 16 - postgresql

I am trying to adapt a set of shell scripts written for Debian 7 to work on Ubuntu 16.
I managed to change everything successfully except a part that executes PostgreSQL database commands.
The former version of the script has these lines:
service postgresql restart
psql -q -U postgres -c "DROP DATABASE IF EXISTS db_crm" -o $log &> /dev/null
psql -q -U postgres -c "CREATE DATABASE db_crm" -o $log &> /dev/null
When I tried to run psql as above on Ubuntu 16, the OS didn't recognize the command. It is important to say that the script is called with sudo.
I managed to get just the database part of the script running on Ubuntu 16 by changing the code to:
service postgresql restart
su postgres <<EOF
psql -c "DROP DATABASE IF EXISTS db_crm" -o $log &> /dev/null
psql -c "CREATE DATABASE db_crm" -o $log &> /dev/null
However, this same script doesn't work when it is called by the main script. The following messages are presented:
here-document at line 41 delimited by end-of-file (wanted 'EOF')
syntax error: unexpected end of file
Even moving EOF to the beginning of the next line, the error continues.
If there is a way to use psql in a shell script without using EOF, that would be better.

The reason your script is failing is that you forgot the EOF at the end of the input:
su postgres <<EOF
psql -c "DROP DATABASE IF EXISTS db_crm" -o $log &> /dev/null
psql -c "CREATE DATABASE db_crm" -o $log &> /dev/null
EOF
Note that the closing EOF must be on a line by itself - no leading whitespace and nothing after it (not even a comment).
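To see the delimiter rule in isolation, here is a minimal, database-free sketch using cat as a stand-in for the su postgres command; the same syntax applies to the heredoc block above:

```shell
# The here-document body is everything up to a line consisting solely of
# the delimiter word. Capture it with cat to inspect what gets fed in:
sql=$(cat <<EOF
DROP DATABASE IF EXISTS db_crm;
CREATE DATABASE db_crm;
EOF
)
echo "$sql"
```

If the closing EOF is indented or followed by anything, the shell keeps reading until end-of-file and reports exactly the "here-document ... delimited by end-of-file" error shown above.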
An easy way to do this is to put your commands into a temporary file, then redirect that into psql. Obviously you don't want this to stop at a password prompt; in that case either use a user that doesn't need one, set $PGPASSWORD, or prompt at the beginning of the script - there are lots of ways around it.
#! /usr/bin/env bash
# PGPASSWORD='' #(set this to stop password prompts, but it's insecure)
PSQL="psql -q -U postgres -o $log" #TODO define $log
TMPFILE="/tmp/sql-tmp.`date +%Y%m%d_%H%M%S_%N`.sql"
# TODO - check $TMPFILE does not exist already
echo "DROP DATABASE IF EXISTS db_crm;" > "$TMPFILE"
echo "CREATE DATABASE db_crm;" >> "$TMPFILE"
# ... etc.
# run the command, throw away any stdout, stderr
$PSQL < "$TMPFILE" > /dev/null 2>&1
# exit with the psql error code
err_code=$?
exit $err_code
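On the TODO about $TMPFILE already existing: mktemp (available on any modern Linux) creates a unique file atomically, so the date-based name and the existence check can both go away. A minimal sketch of that part of the script:

```shell
# mktemp creates a unique, empty temp file and prints its name,
# avoiding collisions with any pre-existing file of the same name.
TMPFILE=$(mktemp /tmp/sql-tmp.XXXXXX)
{
  echo "DROP DATABASE IF EXISTS db_crm;"
  echo "CREATE DATABASE db_crm;"
} > "$TMPFILE"
cat "$TMPFILE"   # inspect what will be fed to psql; remove the file when done
```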

Related

shell script having embedded password inside it is not working as expected

I have developed a shell script whose job is to take the dump of postgres DB. Below is the snippet:
#!/bin/sh
today=$(date +"%Y-%m-%d")
yes "password" | sudo -S sudo su - postgres <<EOF
/usr/pgsql-11/bin/pg_dump -U postgres -d db_name > /home/db_backup/db_name_$today.sql
EOF
exit
However, this script is NOT running, because of the following:
[sudo] password for user: Sorry, try again
However, when I run sudo su - postgres manually and then provide the password, it works as expected. And interestingly, if I then run the above shell script after that login, it runs absolutely fine.
What am I missing here?
It is dangerous to store passwords in scripts, so please do not do it.
Modify your /etc/sudoers file by running sudo visudo and adding a line like this at the bottom:
%sudo ALL=(postgres) NOPASSWD: /usr/pgsql-11/bin/pg_dump
This allows anyone with sudo permission to run /usr/pgsql-11/bin/pg_dump as postgres on any host (ALL) with no password. (The command in the sudoers rule has to match the one your script actually runs.)
Now your script should work this way:
#!/bin/sh
today=$(date +"%Y-%m-%d")
sudo -b -n -H -u postgres /usr/pgsql-11/bin/pg_dump -U postgres -d db_name > /home/db_backup/db_name_$today.sql
Make sure postgres can write to the directory /home/db_backup/.

Postgresql Auto backup script runs in terminal but does not run in CRON job on MacOS

I used the common PostgreSQL backup script from Automated_Backup_on_Linux:
#!/bin/bash
if [ ! $HOSTNAME ]; then HOSTNAME="localhost"; fi
if [ ! $USERNAME ]; then USERNAME="postgres"; fi
BACKUP_DIRECTORY="/Users/xeranta/Documents/AW/"
CURRENT_DATE=$(date "+%Y%m%d")
if [ -z "$1" ]; then
pg_dump -U postgres -h localhost -p 5432 db_gocampus_unmul \
> $BACKUP_DIRECTORY/db_gocampus_unmul.sql
else
pg_dump $1 | gzip - > $BACKUP_DIRECTORY/$1_$CURRENT_DATE.sql.gz
fi
It runs in the terminal:
$ ~/Documents/AW/./dbbackup.sh
But it does not run when I set it up in CRONTAB on MacOS Sierra version 10.12.6.
I get this error:
/Users/xeranta/Documents/AW/./backup.sh: line 36: pg_dumpall: command
not found
The cause of the problem is that pg_dump is on your PATH in the interactive shell, but not in the cron job.
You should use an absolute path similar to this:
PGPATH=/wherever/your/postgres/binaries/are
"$PGPATH"/pg_dump ...
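Alternatively, cron jobs can be given a usable PATH directly in the crontab. A hypothetical crontab fragment (the binary directory and the schedule are placeholders; adjust them to your installation):

```shell
# Edit with `crontab -e`. Cron's default PATH is minimal, so either
# extend it here or call pg_dump by absolute path inside the script.
PATH=/usr/local/bin:/usr/bin:/bin:/Library/PostgreSQL/12/bin
# run the backup script every night at 02:30
30 2 * * * /Users/xeranta/Documents/AW/dbbackup.sh
```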

pg_dumpall not working when started with QProcess

I want to copy my data and tables from one Postgres installation to the other; the source version listens on port 5432, the destination server on port 5433. User myUser is a superuser on both versions.
Postgres "pg_dumpall" does not work when I start it with QProcess, but the command works in Windows cmd:
pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433
But not from Qt code using QProcess:
QProcess *startProgram = new QProcess();
startProgram->start("pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433");
startProgram->waitForFinished() returns true.
startProgram->exitCode() returns 1.
startProgram->exitStatus() returns 0.
Anyway, my data and tables are not copied to the destination.
Creating db with QProcess works by using:
startProgram->start("createdb -p 5433 -U myUser myDbName");
Yeah, it's a bit annoying; I was trying to do the same thing with ls | grep <pattern> type commands, which spawn off multiple processes...
I came up with this for linux:
if (QProcess::startDetached("xfce4-terminal -x bash -c \"ls -l | grep main > out\""))
{
qDebug("ok\n");
}
else
{
qDebug("failed\n");
}
So basically if I break that down:
QProcess runs xfce4-terminal (or which ever term you want) with the execute parameter -x:
xfce4-terminal -x <command to execute>
This then executes bash with the command parameter -c (in escaped quotes):
bash -c \"bash command\"
Finally the bash command:
ls -l | grep main > out
So for your application you could substitute the final command (part 3) with:
pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433
I am assuming you are using Linux? (There is a similar possibility for Windows, which uses cmd instead of a terminal.) Also you can probably just replace xfce4-terminal with gnome-terminal, which is perhaps more common, but you might need to check that the -x is the same... IIRC it is.
There is probably a nicer way to do this.... but I wanted to harness the power of bash, so this seemed the logical way to do it.
Further: I think you can just do this:
QProcess::startDetached("bash -c \"ls -l | grep main > out\"")
And get rid of the terminal part (works for simple stuff like ls), but I am not sure if all the paths and what-not are set up... worth a go, as it is a little neater and removes your reliance on any particular terminal...
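The underlying issue can be shown without Qt or postgres at all: a pipeline is shell syntax, so it only works when some shell parses it. A minimal sketch using echo and grep as stand-ins:

```shell
# Handing "cmd1 | cmd2" to a program as one argument string fails,
# because the pipe character is interpreted by a shell, not by the
# program itself. Wrapping the pipeline in `sh -c` makes a shell parse it:
sh -c 'echo hello | grep hell > out.txt'
cat out.txt
```

The same wrapping is what the bash -c (or cmd /c on Windows) trick above does for the pg_dumpall | psql pipeline.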
Thank you! Yes, the pipe was the problem.
In windows this works for me:
QProcess *startProgram = new QProcess();
startProgram->start("cmd /c \"pg_dumpall -p 5432 -U myUser | psql -U myUser -d myDbName -p 5433\"");

How to execute multiple sql files in postgreSQL linux?

I have many .sql files in a folder (/home/myHSH/scripts) on Linux Debian. I want to know the command to execute or run all the sql files in that folder against a PostgreSQL v9.1 database.
PostgreSQL information:
Database name=coolDB
User name=coolUser
Nice to have: if you know how to execute multiple sql files through GUI tools too, like pgAdmin3.
From your command line, assuming you're using either Bash or ZSH (in short, anything but csh/tcsh):
for f in *.sql;
do
psql coolDB coolUser -f "$f"
done
The find command combined with -exec or xargs can make this really easy.
If you want to execute psql once per file, you can use the exec command like this
find . -iname "*.sql" -exec psql -U username -d databasename -q -f {} \;
-exec will execute the command once per result.
The psql command allows you to specify multiple files by calling each file with a new -f argument. e.g. you could build a command such as
psql -U username -d databasename -q -f file1 -f file2
This can be accomplished by piping the result of the find to an xargs command twice: once to format the files with the -f argument, and then again to execute the command itself.
find . -iname "*.sql" | xargs printf -- ' -f %s' | xargs -t psql -U username -d databasename -q
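To see what the two xargs stages actually build, substitute echo for psql; this prints the final command line without touching a database. (The scratch directory, file names, and the username/databasename placeholders below are all invented for the demonstration.)

```shell
# Stage 1 (xargs printf) turns each filename into " -f <name>";
# stage 2 (xargs echo) appends them all to a single command invocation.
demo_dir=$(mktemp -d)
cd "$demo_dir"
echo 'SELECT 1;' > a.sql
echo 'SELECT 2;' > b.sql
find . -iname "*.sql" | sort | xargs printf -- ' -f %s' \
  | xargs echo psql -U username -d databasename -q
```

With the real psql in place of echo, the database receives the files in the order find emits them, so add a sort (as above) if execution order matters.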

psql -o not what I expected (how to output db response to an output file)

I am creating a PostgreSQL database from the command line (i.e. using psql).
There are some errors in my SQL statements and I want to find out where the errors are occurring (too many objects to fit in the screen buffer, so I need to save this to a file).
I have tried just about everything, from using the -o option, the -L option, and using tee - I still can't capture the information that scrolls past on the screen.
How do I log this?
This is what I have tried so far:
psql -U -o dbcreate.log -f file.sql
psql -U -L dbcreate.log -f file.sql
psql -U -a -f file.sql | tee dbcreate.log
NONE of which results in the data flashing across the screen being logged to a file - how do I do this?
You need to redirect stderr. On Un*x and Linux:
psql ... 2>error.log
or both stdout and stderr:
psql ... &>error.log
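The difference between the two redirections is easy to check with a stand-in function that writes to both streams (the portable spelling of the bash-only &>file is >file 2>&1):

```shell
# demo writes one line to stdout and one line to stderr
demo() { echo "normal output"; echo "an error" >&2; }

demo 2> only-err.log    # stderr to file; stdout still goes to the screen
demo > both.log 2>&1    # both streams to file (same as &>both.log in bash)
```

psql prints error messages on stderr, which is why -o and tee (which only see stdout) miss them.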
On the other hand if you like to investigate the errors one by one:
psql -v ON_ERROR_STOP=1 ...