pg_dump through ssh stops after a few seconds when used in a script - postgresql

I back up three PostgreSQL servers with pg_dump launched by a script through ssh. The command line in the script is:
sudo -u barman ssh postgres@$SERVER 'pg_dump -Fc -b $database 2> ~/dump_error.txt' | gzip > $DUMP_ROOT/$SERVER-$BACKUPDATE.gz
But the dump size is always about 1K, for all servers. When I execute this line in a shell, just replacing the variables with their values, it works perfectly. I executed it as root (sudo -u barman ssh postgres@server ...) and directly as user barman (ssh postgres@server ...), and the dump is correct.
When I open the dump, I see the start of the dump, but then it suddenly stops.
The dump_error.txt file on the servers is empty.
There is nothing in the logs (PostgreSQL log and syslog) on either the backup server or the PostgreSQL servers.
The user barman can connect to the servers as user postgres without a password.
The shell limits are high enough not to block the script (open files 1024, file size unlimited, max user processes 13098).
I tried changing the cron hour of the script, thinking that some process might be consuming all the resources, but the result is always the same, and ps -e shows nothing unusual.
The PostgreSQL version is 9.1.
Why does this line never produce a complete dump when executed from the script, but works when executed in an interactive shell?
Thanks for your help, Denis

Your problem is related to bad quoting. Single quotes prevent the string from being expanded, while double quotes expand what's inside. For instance:
>MYVARIABLE=test
>echo '$MYVARIABLE'
$MYVARIABLE
>echo "$MYVARIABLE"
test
In your case, ssh postgres@$SERVER 'pg_dump -Fc -b $database 2> ~/dump_error.txt' will execute the command on the remote computer without expanding the variables. This means ssh will pass the expression pg_dump -Fc -b $database, and bash will interpret the variable $database on the remote computer. If this variable doesn't exist there, it will be treated as an empty string.
You can see the difference when you do ssh user@server 'echo $PWD' and ssh user@server "echo $PWD".
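A minimal sketch of the corrected script line, assuming $database (along with $SERVER, $DUMP_ROOT and $BACKUPDATE) is defined on the backup host: switching to double quotes lets the local shell expand the variables before ssh sends the command, while the 2> redirection still runs remotely:
sudo -u barman ssh postgres@$SERVER "pg_dump -Fc -b $database 2> ~/dump_error.txt" | gzip > $DUMP_ROOT/$SERVER-$BACKUPDATE.gz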

Related

postgresql db: take backup and restore with data

I would like to take a backup of my PostgreSQL database and restore it automatically:
take the backup automatically from PostgreSQL with a cron trigger on a Windows PC,
and later restore it whenever required.
I have the bat commands below to take a backup of my database:
F:
cd F:\softwares\postgresql-12.1-3-windows-x64-binaries\pgsql\bin
pg_dump.exe -U postgres -s fuelman > E:\fuel_man_prod_backup\prod.sql
cmd /k
but I am getting
pg_dump: error: too many command-line arguments (first is "-s")
Also, I need to dump both the schema structure and the data.
Edit:
removed the password option -W
"Too many command line" probably is due to -W option; the string "postgres" after -W is interpreted as db name, so following -s gives error. Anyway, when running pg_dump from a script you must not use -W; use .pgpass file or set PGPASSWORD environment variable (look at pg_dump: too many command line arguments for more details).
As for Frank Heikens comment, if you need to dump both object definition and data, avoid -s option. pg_dump documentation is quite clear.
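A minimal corrected sketch of the bat file, assuming a hypothetical password supplied via PGPASSWORD (a .pgpass file avoids putting it in the script at all); -W is dropped and -s is removed so both schema and data are dumped:
F:
cd F:\softwares\postgresql-12.1-3-windows-x64-binaries\pgsql\bin
REM hypothetical password; never use -W in a script
set PGPASSWORD=yourpassword
REM no -s, so object definitions and data are both dumped
pg_dump.exe -U postgres fuelman > E:\fuel_man_prod_backup\prod.sql
cmd /k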

psql, can't copy db content to another - cannot run inside a transaction block

I'd like to copy the content of my local database to my remote one (inside a docker).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this "ERROR: CREATE DATABASE cannot run inside a transaction block".
OK... so I went into my docker container and added the rule \set AUTOCOMMIT. But I still get this error.
These are the commands I ran:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then in my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get this "CREATE DATABASE cannot run inside a transaction block", no matter what I try, even if I set autocommit to "on".
Here is my problem: I don't know what a transaction block is, and I don't understand why copying one db to another needs to be such a pain: my remote db is empty, so why is there so much fuss, and why can't psql just force what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement with the -C option and then run psql with --single-transaction, so the contents of the script are wrapped in BEGIN; ... COMMIT;, inside which you can't use CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction for psql. Make the decision based on what you really need... (both variants are sketched after the man page excerpts below)
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
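A minimal sketch of both remedies on the remote side, reusing the container (waf-postgres), archive (backup.xz) and database (woof) names from the question:
# Option 1: keep the CREATE DATABASE emitted by -C, but drop --single-transaction
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on
# Option 2: take the dump without -C, create the database first, then restore in one transaction
docker exec -u postgres waf-postgres createdb woof
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction -d woof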

Deploy postgres via jenkins - continuous integration/deployment

I have my database dump (tables, functions, triggers, etc.) in *.sql files.
At the moment I am deploying them via Jenkins by passing an execute-shell command:
sudo -u postgres psql -d my_db < /[path_to_my_file].sql
The problem is that if something is wrong in my SQL file, the build still finishes as SUCCESS. I would like to be informed immediately when something fails, without looking into the log and checking whether every command executed successfully.
Is it possible (and how, if the answer is 'yes') to deploy a Postgres database via Jenkins some other way?
I changed my execution command to:
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < [path_to_file].sql
Make sure you have
set -e
set before running the command.
If that does not work, I'd look at the return code from the command above. That can be done by running
echo $?
right after the command.
If that gives you a zero when it fails, it's Postgres's fault (since it should return something other than 0 on failure).
Perhaps there is a postgres flag to fail on wrong input.
EDIT:
-v ON_ERROR_STOP=1
As a flag to psql, this should make psql fail on errors.
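A minimal sketch of the execute-shell step combining both suggestions, with the same database and file placeholders as above; either mechanism alone is enough to turn an SQL error into a failed Jenkins build:
# abort the shell step on the first failing command
set -e
# ON_ERROR_STOP=1 makes psql exit non-zero on the first SQL error
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < /[path_to_my_file].sql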

How to enable quiet mode for Postgres commands on Heroku

When using the psql command line utility on my local machine, I have the option to use the -q or --quiet switch to tell Postgres to do its work quietly - i.e. it won't print every single INSERT statement to the console if you're doing a large import.
Here's an example of how I'm using it:
psql -q -d <SOME_DATABASE> -f <SOME_SQL_FILE>
However, when using the pg:psql command line utility in Heroku, that option doesn't seem to be available. So I'm currently having to use it like so:
heroku pg:psql DATABASE -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
which produces a lot of output to my console (hundreds of thousands of lines), because of the large size of the SQL file I'm importing. Whenever I try to use the -q or --quiet option, something like this:
heroku pg:psql DATABASE -q -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
it'll throw an error saying that -q is not a valid option.
Is there some way to enable quiet mode when running Postgres commands in Heroku?
heroku pg:psql is just a wrapper around your local psql binary (https://github.com/heroku/heroku/blob/master/lib/heroku/command/pg.rb#L151)
So, given this, you are able to do:
psql `heroku config:get DATABASE_URL -a <yourappname>`
to get a psql connection and consequently pass -q and other options accordingly.
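A minimal sketch of a quiet import along these lines, reusing the placeholders from the question; psql accepts the connection URL printed by heroku config:get as its database argument:
psql -q -f <SOME_SQL_FILE> "$(heroku config:get DATABASE_URL -a <SOME_HEROKU_APP>)"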

Out of memory exception when running data-only script

When I run a data-only script in SQL Server 2008 R2, it shows this error:
Cannot execute script
Additional information:
Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)
The size of the script file is 115 MB and it's only data.
When I open this script file, it shows:
Document contains one or more extremely long lines of text.
These lines cause the editor to respond slowly when you open the file.
Do you still want to open the file?
I run the schema-only script first and then the data-only script.
Is there any way to fix this error?
I solved it by using the sqlcmd utility.
sqlcmd -S "Server\InstanceName" -U "userName" -P "password" -i FilePathForScriptFile
For example :
sqlcmd -S .\SQLEXPRESS -U sa -P 123 -i D:\myScript.sql
Zey's answer was helpful for me, but for completeness:
If you want to use Windows Authentication just omit the user and password.
And don't forget the quotes before and after the path if you have spaces.
sqlcmd -S .\SQLEXPRESS -i "C:\Users\Stack Overflow\Desktop\script.sql"
If you're logged into the domain with the correct privileges and there's only one instance running, you also don't have to provide the user/pw/instance command args shown above. I was able to just execute:
sqlcmd -i myfile.sql