Deploy postgres via jenkins - continuous integration/deployment - postgresql

I have my database dump (tables, functions, triggers, etc.) in *.sql files.
At the moment I am deploying them via Jenkins by running this execute-shell command:
sudo -u postgres psql -d my_db < /[path_to_my_file].sql
The problem is that if something is wrong in my SQL file, the build still finishes as SUCCESS. I would like to be told immediately if something fails, without digging through the log and checking whether every command executed successfully.
Is it possible (and if so, how) to deploy a Postgres database via Jenkins in some other way?

I changed my execution command to:
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < [path_to_file].sql

Make sure you have
set -e
before running the command.
If that does not work, I'd look at the return code from the command above. That can be done by running
echo $?
right after the command.
If that gives you a zero when it fails, it's Postgres's fault (since it should return something other than 0 on failure).
Perhaps there is a postgres flag to fail on wrong input.
EDIT:
-v ON_ERROR_STOP=1
Passing this flag to psql should make it stop and fail on errors.
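Combined into a single Jenkins execute-shell step, it could look something like this (a sketch; the path and database name are the placeholders from the question):
set -e
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < /[path_to_my_file].sql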

Related

psql script execution not returning error

I am trying to execute multiple SQL scripts using psql. I created one master script that includes all the scripts to be executed, as below. I want all the scripts to succeed or fail together.
master.sql
BEGIN;
\i one.sql
\i two.sql
\i three.sql
COMMIT;
I am trying to catch the exit code of psql to determine whether it succeeded. When I check $? it always returns 0. I found that when I add ON_ERROR_STOP=1 to the command it returns a proper error code. The problem with this approach is that if the error happens in three.sql, the changes from one.sql and two.sql are not rolled back.
psql -U postgres -h localhost -d test -v ON_ERROR_STOP=1 -f master.sql
What would be the correct approach to find out whether the script executed successfully?
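One approach worth sketching (not taken from an answer in this thread): drop the explicit BEGIN/COMMIT from master.sql and let psql wrap the whole run in one transaction itself, so a failure in any included file rolls everything back and psql exits with a non-zero code:
psql -U postgres -h localhost -d test -v ON_ERROR_STOP=1 --single-transaction -f master.sql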

psql, can't copy db content to another - cannot run inside a transaction block

I'd like to copy the content of my local database to my remote one (inside a Docker container).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this "ERROR: CREATE DATABASE cannot run inside a transaction block".
OK... so I went into my Docker container and added \set AUTOCOMMIT there. But I still get this error.
This is the command I did:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then in my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get "CREATE DATABASE cannot run inside a transaction block", no matter what I try, even if I set autocommit to "on".
Here is my problem: I don't know what a transaction block is, and I don't understand why copying one db to another needs to be such a pain: my remote db is empty. So why is there so much fuss, and why can't psql just do what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement with the -C flag and then run psql with --single-transaction, so the contents of the script are wrapped in BEGIN;...COMMIT;, inside which you can't run CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction from psql. Decide based on what you really need; a sketch of both options follows the man page excerpts below.
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
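Concretely, the two options might look like this (a sketch reusing the database and container names from the question):
# Option 1: keep -C so the dump creates the database, and drop --single-transaction
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on
# Option 2: keep --single-transaction, dump without -C, and restore into an existing (empty) database
pg_dump -h localhost -U postgres woof | xz >backup.xz
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql -d woof --set ON_ERROR_STOP=on --single-transaction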

How to enable quiet mode for Postgres commands on Heroku

When using the psql command line utility on my local machine, I have the option to use the -q or --quiet switch to tell Postgres to do its work quietly - i.e. it won't print every single INSERT statement to the console if you're doing a large import.
Here's an example of how I'm using it:
psql -q -d <SOME_DATABASE> -f <SOME_SQL_FILE>
However, when using the pg:psql command line utility in Heroku, that option doesn't seem to be available. So I'm currently having to use it like so:
heroku pg:psql DATABASE -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
which produces a lot of output to my console (hundreds of thousands of lines), because of the large size of the SQL file I'm importing. Whenever I try to use the -q or --quiet option, something like this:
heroku pg:psql DATABASE -q -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
it'll throw an error saying that -q is not a valid option.
Is there some way to enable quiet mode when running Postgres commands in Heroku?
heroku pg:psql is just a wrapper around your local psql binary (https://github.com/heroku/heroku/blob/master/lib/heroku/command/pg.rb#L151)
So, given this - you are able to do:
psql `heroku config:get DATABASE_URL -a <yourappname>`
to get a psql connection and then pass -q and other options accordingly.
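For example, the quiet import from the question could then be run through the local psql like this (a sketch; the app name and SQL file are the placeholders from the question):
psql -q "$(heroku config:get DATABASE_URL -a <SOME_HEROKU_APP>)" -f <SOME_SQL_FILE>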

Loading PostgreSQL Database Backup Into Docker/Initial Docker Data

I am migrating an application into Docker. One of the issues I am bumping into is the correct way to load initial data into PostgreSQL running in Docker. My typical methods of restoring a database backup file are not working. I have tried the following:
gunzip -c mydbbackup.sql.gz | psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
That does not work, because PostgreSQL prompts for a password, and I cannot enter one because psql is reading data from STDIN. I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I also tried a similar command to the one above, except using the -f flag and specifying the path to an SQL backup file. This does not work because the file is not in my container. I could copy the file into my container with an ADD statement in my Dockerfile, but that does not seem right.
So, I ask the community. What is the preferred method on loading PostgreSQL database backups into Docker containers?
I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I don't use Docker, but your container looks like a remote host in the command shown, with psql running locally. So PGPASSWORD never has to be set on the remote host, only locally.
If the problem boils down to adding a password to this command:
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
you may supply it using one of several methods (in all cases, don't use the -W option to psql):
hardcoded in the invocation:
gunzip -c mydbbackup.sql.gz |
PGPASSWORD=something psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
typed on the keyboard:
echo -n "Enter password:"
read -s PGPASSWORD
export PGPASSWORD
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
Note about the -W or --password option to psql.
The point of this option is to ask for a password to be typed first thing, even if the context makes it unnecessary.
It's frequently misunderstood as the equivalent of the -p option of mysql. This is a mistake: while -p is required for password-protected connections in mysql, -W is never required and actually gets in the way when scripting.
-W, --password
Force psql to prompt for a password before connecting to a database.
This option is never essential, since psql will automatically prompt for a password if the server demands password authentication. However, psql will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
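An alternative not covered in the answer above: keep the password out of the command line entirely with a ~/.pgpass file (a sketch; host, port, and credentials are placeholders):
echo "<docker_host>:<docker_port>:<db>:<dbuser>:<password>" >> ~/.pgpass
chmod 600 ~/.pgpass
gunzip -c mydbbackup.sql.gz | psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>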

Unable to restore the postgresql data through command prompt

I am trying to restore PostgreSQL data from a file, but it is not importing.
Here is the command I am using:
postgres-# psql -hlocalhost -p5432 -u postgres -d test -f C:/wamp/www/test/database_backups/backup004.sql
Please help me figure out what I am doing wrong.
I am using Windows, and the above command does not throw any error, but it does not import the data.
Regards
Surjan
The only immediate thing I can see there is the capitalisation of -u for username (should be -U).
Correction: You're typing the command line into the psql shell.
You should exit to the CMD.EXE shell and try the command there, with the correct capitalisation of -U, by the way.
OR, use this to replay the script into that psql shell:
\i C:/wamp/www/test/database_backups/backup004.sql
The forward slashes don't cause a problem on my Windows machine.
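For reference, run from CMD.EXE (not from inside psql), the corrected command would look like this (a sketch based on the command in the question):
psql -h localhost -p 5432 -U postgres -d test -f C:/wamp/www/test/database_backups/backup004.sql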