Run postgres query in jenkins after build - postgresql

I installed Jenkins and Postgres on the same CentOS 7 server.
I also installed and configured the "Database" and "PostgreSQL Database Plugin" plugins as shown in this image:
I want to insert data into my database jenkinsdb (the table I want to work on is "builds") after a build is successful, so I can track the history of builds, deployments, etc.
How can I run a query against my database from Jenkins?

Create a script file, let's say build_complete.sh, with the PostgreSQL commands:
#!/bin/bash
# Updated command that solves the quoting bug. Courtesy: YoussefBoudaya's comment.
export PGPASSWORD='postgres'
sudo -u postgres -H -- psql -d jenkinsdb -c "SELECT * FROM builds" postgres
Please confirm the psql path on your server; it will be similar to /usr/lib/postgresql/10/bin/psql.
Add an "Execute shell" step at the end of your pipeline and simply run your script.
A similar solution can be read here.
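Since the goal is to track build history, the same script can run an INSERT instead of a SELECT. A minimal sketch using Jenkins' built-in JOB_NAME and BUILD_NUMBER environment variables (the column names here are hypothetical; match them to your builds table):
#!/bin/bash
# job_name/build_number/status are assumed columns; adjust to your schema.
export PGPASSWORD='postgres'
sudo -u postgres -H -- psql -d jenkinsdb -c \
  "INSERT INTO builds (job_name, build_number, status) VALUES ('$JOB_NAME', $BUILD_NUMBER, 'SUCCESS')"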

creating a postgresql database back end for a new Label Studio project

I am creating a local Label Studio server to host images to annotate in our office. I would like the database back end to be PostgreSQL rather than SQLite, and located in a particular directory (not the default, and not the same as the 'data-dir'). I have got a test server working across the network with various machines annotating images on the server, but the back end for that test was SQLite.
Everything I've tried to get a PostgreSQL back-end DB has failed for various reasons. Some commands result in an SQLite DB (occasionally with the name 'postgresql') located in my required directory; others produce postgres/psycopg2 errors, but I think those are leading me up a garden path.
The host machine is running Ubuntu 20.04 LTS and already serves another PostgreSQL DB over the network using other APIs. The PostgreSQL version running is 12.9.
I have created a conda environment and pip installed Label Studio as the documentation suggested.
Here's what I've tried:
Start the conda environment. Follow the instructions to assign environment variables from https://labelstud.io/guide/storedata.html#PostgreSQL-database, which at the time of writing are:
DJANGO_DB=default
POSTGRE_NAME=postgres
POSTGRE_USER=postgres
POSTGRE_PASSWORD=
POSTGRE_PORT=5432
POSTGRE_HOST=db
Then a few variations on the start command (I didn't include the backslashes when running; they are just here for readability/comparability):
label-studio start --init \
-db postgresql \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: db is where expected, but:
file newdb
gives "newdb: SQLite 3.x database, last written using SQLite version 3038002"
label-studio start --init \
--database /path/to/label-studio/databases/newdb \
-db postgresql \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: a DB at the specified path named 'postgresql', and still an SQLite DB. This seems to mirror the mistake mentioned at: https://github.com/heartexlabs/label-studio/issues/1660
I have also tried the above two commands with the '--init' argument omitted, with the same results.
Then I tried adding something to the front of the command, as suggested at the same link above:
DJANGO_DB=default label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: FATAL: password authentication failed for user "postgres"
FATAL: password authentication failed for user "postgres"
DJANGO_DB=default POSTGRE_PASSWORD= label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: fe_sendauth: no password supplied
Any help and resolution would be highly appreciated.
Also, I can't tag this with 'label-studio' because I'm not quite at the required reputation to create a new tag, so if anyone who can feels like doing so, please and thank you!
Your last option was closer than all the others. Have you tried running LS like this:
DJANGO_DB=default POSTGRE_NAME=<postgres_name> POSTGRE_USER=<postgres_user> POSTGRE_PASSWORD=<password> POSTGRE_PORT=<db_port> POSTGRE_HOST=<db_host> label-studio
Of course, you have to run the Postgres service yourself, configure it properly, create the DB <postgres_name> and the user <postgres_user>, set the password <password>, and grant access rights to that user. Also don't forget to specify <db_host> (localhost?) and <db_port> (5432?).
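Putting it together, a minimal sketch, assuming a local Postgres 12 server and hypothetical labelstudio/secret credentials (substitute your own), plus the --data-dir from the question:
# One-time DB setup (names and password are placeholders)
sudo -u postgres psql -c "CREATE USER labelstudio WITH PASSWORD 'secret';"
sudo -u postgres psql -c "CREATE DATABASE labelstudio OWNER labelstudio;"
# Launch Label Studio against that database
DJANGO_DB=default POSTGRE_NAME=labelstudio POSTGRE_USER=labelstudio \
POSTGRE_PASSWORD=secret POSTGRE_PORT=5432 POSTGRE_HOST=localhost \
label-studio start --data-dir /path/to/label-studio/media_dirs/test_proj
Note that --database names an SQLite file, which would explain why the earlier attempts kept producing SQLite databases; once DJANGO_DB=default selects the Postgres backend it should be unnecessary.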

Configuration issue Postgres on Ubuntu?

I have installed Postgres 12 on Ubuntu by building it from source, and I am facing two issues:
Although I followed the installation manual from Postgres, every time I restart my computer, my Postgres server stops and is no longer seen as a running process.
To start it the first time after install, I do this from the terminal:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
After a restart, when I run /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data again to start the DB, it throws this error:
initdb: error: directory "/usr/local/pgsql/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/usr/local/pgsql/data" or run initdb
with an argument other than "/usr/local/pgsql/data".
Does that mean that every time I start Postgres after a restart, I have to create a new /data directory?
Upon installing Postgres using pip or pip3, one can just switch user to postgres and run psql to enter Postgres; however, now I have to run "/usr/local/bin/psql". Please note I have exported all the paths per https://www.postgresql.org/docs/12/installation.html. How can I fix this? Can an alias be set for this?
"After a restart, to start DB again when I run /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data, it throws this error: [...] Does that mean that every time I start Postgres after a restart, I have to create a new /data directory?"
No, quite the opposite. You don't need to initdb after the first time, you just need to start. It is your attempt to initdb when you don't need to which is causing the error message. Note that attempting to initdb isn't doing any harm, because it refused to run. It just generates log/console noise.
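In other words, after a reboot the only command you need is the start command you already used the first time:
# Do NOT re-run initdb; the cluster in /usr/local/pgsql/data already exists.
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start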
"Upon installing Postgres using pip or pip3, one can just switch user to postgres and run psql to enter Postgres; however, now I have to run "/usr/local/bin/psql". Please note I have exported all the paths per https://www.postgresql.org/docs/12/installation.html. How can I fix this?"
I don't know what your first sentence means, as you don't use pip or pip3 to install PostgreSQL (or at least, the docs don't describe doing so) although you might use them to install psycopg2 to enable python to talk to PostgreSQL.
You could use an alias, but it would probably make more sense to edit ~/.bash_profile to set the PATH, as described on the page you linked to, under "Environment Variables".
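A sketch of that ~/.bash_profile addition, using the install prefix from the question:
# Put the source-built binaries first so plain "psql" resolves correctly
PATH=/usr/local/pgsql/bin:$PATH
export PATH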
You have to register PostgreSQL as a service. Note that pg_ctl register is only available on Windows; on Linux a systemd unit serves the same purpose (see the sketch after the notes below).
Run this:
pg_ctl register [-N servicename] [-U username] [-P password] [-D datadir] [-S a[uto] | d[emand] ] [-w] [-t seconds] [-s] [-o options]
Example:
pg_ctl register -N postgresql -U OS_username -P OS_password -D '/etc/postgresql/12/data' -w
More info in the manual: pg_ctl
Notes:
The username and password are OS credentials, not PostgreSQL ones.
If you have doubts, read the manual.
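On Ubuntu, a minimal sketch of an equivalent systemd unit (a hypothetical /etc/systemd/system/postgresql.service, with paths matching the source install above) might be:
[Unit]
Description=PostgreSQL database server
After=network.target

[Service]
Type=forking
User=postgres
ExecStart=/usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data -l /usr/local/pgsql/data/logfile
ExecStop=/usr/local/pgsql/bin/pg_ctl stop -D /usr/local/pgsql/data

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now postgresql so the server comes back after each reboot.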
/usr/local/pgsql/bin/pg_ctl start -D '/usr/local/pgsql/data'
Export the following in the postgres user account's ~/.bashrc:
LD_LIBRARY_PATH=/usr/local/pgsql/lib
export LD_LIBRARY_PATH
PATH=/usr/local/pgsql/bin:$PATH
export PATH

Artifactory upgrade fail, postgres 9.5 -> 9.6 upgrade instructions needed

I had planned an upgrade of Artifactory from 6.7.5 to 6.8.1. As part of the upgrade I checked JFrog's repo on GitHub, and it looks like they have a new recommended Nginx and Postgres version.
The current docker-compose is using Postgres 9.5, and the new default version is 9.6. Simply pulling down Postgres 9.6, however, does not do an in-place upgrade:
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.11.
The upgrade instructions do not mention anything about how to do the upgrade.
The examples provided on GitHub (https://github.com/jfrog/artifactory-docker-examples) are just examples; using them in production could cause issues, and backwards compatibility is not guaranteed.
To get over the PostgreSQL matter when upgrading, I would suggest:
$ docker-compose -f yml-file-name.yml stop
edit the yml-file-name.yml and change the docker.bintray.io/postgres:9.6.11 to docker.bintray.io/postgres:9.5.2
$ docker-compose -f yml-file-name.yml up -d
Artifactory should be upgraded after following this; however, it will keep using the previous version of the PostgreSQL DB.
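For reference, the edit in step 2 amounts to pinning the old image in the compose file (the postgresql service name is assumed from the examples repo):
  postgresql:
    image: docker.bintray.io/postgres:9.5.2  # was docker.bintray.io/postgres:9.6.11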
I have been able to upgrade the database using the following approach:
Dump all databases to an SQL script using the old database image; store it in a volume for future import:
# Override PostgreSQL image used to export using old binaries
printf "version: '2.1'\nservices:\n postgresql:\n image: docker.bintray.io/postgres:9.5.2\n" > image_override.yml
started_container=$(docker-compose -f artifactory-pro.yml -f image_override.yml run -d -v sql_dump_volume:/tmp/dump --no-deps postgresql)
# Dump database to a text file in a volume (to make it available for import)
docker exec "${started_container}" bash -c "until pg_isready -q; do sleep 1; done"
docker exec "${started_container}" bash -c "pg_dumpall --clean --if-exists --username=\${POSTGRES_USER} > /tmp/dump/dump.sql"
docker stop "${started_container}"
docker rm --force "${started_container}"
Back up the old database directory and prepare a new one:
mv -fv /data/postgresql /data/postgresql.old
mkdir -p /data/postgresql
chown --reference=/data/postgresql.old /data/postgresql
chmod --reference=/data/postgresql.old /data/postgresql
Run a new database image, mounting the dump script from step 1. The postgres image processes SQL scripts from /docker-entrypoint-initdb.d on startup when it initializes a new database, provided the container command starts with postgres. We just don't need to leave the server running afterwards, so I passed --version to make the entrypoint initialize the database, import the data, and exit:
docker-compose -f artifactory-pro.yml run --rm --no-deps -e POSTGRES_DB=postgres -e POSTGRES_USER=root -v sql_dump_volume:/docker-entrypoint-initdb.d postgresql postgres --version
After all this was done, I was able to start Artifactory normally with docker-compose -f artifactory-pro.yml up -d, and it started up fine, applying the rest of the schema and file upgrade procedure as usual.
I have also prepared a script that basically does the above steps along with some additional checks and cleanup. Feel free to use it if you find it useful.
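A quick sanity check after the upgrade (assuming the postgresql service name and the root user from the import step above; adjust to your compose file):
docker-compose -f artifactory-pro.yml exec postgresql psql --username=root --dbname=postgres -c "SELECT version();"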

Automating pg_basebackup command for recovery using cron and script

I have a 2-node system running Pacemaker, Corosync and PostgreSQL 9.4. I am carrying out PostgreSQL replication using a virtual IP and am able to successfully recover a downed machine using manual commands. Now, to automate things, I want to run the following commands in a script to get my recovered master back into the cluster.
#su -postgres
$ rm -rf /var/lib/pgsql/9.4/data/*   # delete old data files
$ pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P   # recover the latest data from the standby PC running the latest entries
$ rm /var/lib/pgsql/tmp/PGSQL.lock
$ exit   # exit from the postgres shell
# pcs resource cleanup msPostgresql
Now when I run these commands as a script, it hangs right after the first command, i.e. su -postgres, and the cursor blinks at the bash$ prompt without executing the commands below it.
I want to automate this process using cron, but the script itself is not working for me. Can someone help me out here?
Regards
As far as I know, "su -postgres" is wrong. You can use either "su postgres" or "sudo -i -u postgres".
Regarding the scripts, here you can find tested, working scripts. The one you are interested in is called "initiate_replication.sh" there.
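For a non-interactive script, one way is to feed the whole recovery sequence to a shell running as the postgres user via a heredoc; a sketch reusing the commands from the question (the standby IP stays as the question's placeholder):
#!/bin/bash
# Run the recovery steps as the postgres user in one non-interactive shell
sudo -i -u postgres bash <<'EOF'
rm -rf /var/lib/pgsql/9.4/data/*
pg_basebackup -h 192.XX.XX.XX -U postgres -D /var/lib/pgsql/9.4/data -X stream -P
rm -f /var/lib/pgsql/tmp/PGSQL.lock
EOF
# Back as root: clear the failed resource state in Pacemaker
pcs resource cleanup msPostgresql
This avoids the hang: a bare su in a script opens an interactive shell and waits for input instead of executing the lines that follow it.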

How can I download db from heroku?

I'm using Heroku and I want to download the database from my app so I can make some changes to it. I've installed pgbackups, but using heroku pgbackups:url downloads a .dump file.
How can I download a PostgreSQL file, or translate that .dump into one?
If you're using Heroku's pgbackups (which you probably should be using):
$ heroku pg:backups capture
$ curl -o latest.dump `heroku pg:backups public-url`
"Translate" it into a postgres db with
$ pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump
See https://devcenter.heroku.com/articles/heroku-postgres-import-export
There's a command for this in the CLI - heroku db:pull - which will do this for you. db:pull can be a bit slow, mind you, so you may be better off using the next option.
If you are using complex Postgres data types (hstore, arrays, etc.) then you need to use the pgtransfer plugin (https://github.com/ddollar/heroku-pg-transfer), which basically does a backup on Heroku and a restore locally.
UPDATE: db:pull and db:push have been deprecated and should be replaced with pg:pull and pg:push - read more at https://devcenter.heroku.com/articles/heroku-postgresql#pg-push-and-pg-pull
I found the first method suggested in the documentation, pg:pull/pg:push, even easier. No password or username needed.
pg:pull
pg:pull can be used to pull remote data from a Heroku Postgres database to a database on your local machine. The command looks like this:
$ heroku pg:pull HEROKU_POSTGRESQL_MAGENTA mylocaldb --app sushi
This command will create a new local database named "mylocaldb" and then pull data from the database at DATABASE_URL from the app "sushi". In order to prevent accidental data overwrites and loss, the local database must not exist. You will be prompted to drop an already existing local database before proceeding.
At first I had an error: /bin/sh: createdb: command not found, which I solved by following this SO post.
An alternative, also described in the documentation (I have not tried it yet), is:
To export the data from your Heroku Postgres database, create a new backup and download it.
$ heroku pg:backups:capture
$ heroku pg:backups:download
Source: Importing and Exporting Heroku Postgres Databases with PG Backups
To export the data from a Heroku Postgres database, just follow the steps below:
1. Log in to Heroku.
2. Go to App -> Settings -> Reveal Config Vars.
3. Copy DATABASE_URL.
4. Run pg_dump DATABASE_URL_COPIED_IN_STEP_3 > database_dump_file
Note: this produces a plain SQL file; a dump file can instead be downloaded directly from the Postgres add-on interface.
I think the easiest way to download and replicate the database on a local server:
PGUSER=LOCAL_USER_NAME PGPASSWORD=LOCAL_PASSWORD heroku pg:pull --app APP_NAME HEROKU_POSTGRESQL_DB_NAME LOCAL_DB_NAME
Go through this document for more info:
https://devcenter.heroku.com/articles/heroku-postgresql#pg-push-and-pg-pull
This is the script that I like to use.
namespace :heroku do
  desc "Import most recent database dump"
  task :import_from_prod => :environment do
    puts 'heroku run pg:backups capture --app APPNAME'
    restore_backup 'APPNAME'
  end

  def path_to_heroku
    # Locate the heroku binary in the usual install locations
    ['/usr/local/heroku/bin/heroku', '/usr/local/bin/heroku'].detect { |path| File.exists?(path) }
  end

  def heroku(command, site)
    # Run a heroku CLI command with a clean Ruby environment
    `GEM_HOME='' BUNDLE_GEMFILE='' GEM_PATH='' RUBYOPT='' #{path_to_heroku} #{command} -a #{site}`
  end

  def restore_backup(site = 'APPNAME')
    dump_file = "#{Rails.root}/tmp/postgres.dump"
    unless File.exists?(dump_file)
      # Download the latest backup unless we already have one cached
      pgbackups_url = heroku('pg:backups public-url -q', site).chomp
      puts "curl -o #{dump_file} #{pgbackups_url}"
      system "curl -o #{dump_file} '#{pgbackups_url}'"
    end
    # Restore the dump into the database configured for the current Rails env
    database_config = YAML.load(File.open("#{Rails.root}/config/database.yml")).with_indifferent_access
    dev_db = database_config[Rails.env]
    system "pg_restore -d #{dev_db[:database]} -c #{dump_file}".gsub(/\s+/, ' ')
    puts
    puts "'rm #{dump_file}' to redownload the postgres dump."
    puts "Done!"
  end
end
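To use it, save the task somewhere Rails will load it (lib/tasks/heroku.rake is the conventional spot, assumed here), replace APPNAME with your app's name, and run:
bundle exec rake heroku:import_from_prod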