backup postgresql from azure container instance - postgresql

I created an Azure Container Instance and ran PostgreSQL in it, with an Azure storage account mounted. How can I set up backups, possibly via a scheduler?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get an error:
error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or pass command arguments like in echo FOO.
Perhaps there is a way to run this as a scheduled task? Thanks.

Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
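For example, reusing the resource group and container name from your command:
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "/bin/bash"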
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup container with the correct environment variables in order to connect to your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
With these variables set, a simple pg_dumpall should do what you want.
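For illustration, a minimal sketch (host, user and password are placeholders) of what the backup container would effectively run once those variables are set; without a redirect, the dump simply goes to stdout:
export PGHOST=your-db-host
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD=your-password
pg_dumpall -c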
Hope that helps.
UPDATE:
Yikes, even when configuring the connection via environment variables, you won't be able to specify the desired output file with az container exec... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
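For example, dump_my_db.sh could be as simple as the following sketch, assuming the connection settings come from the environment variables listed above and the Azure file share is mounted at a placeholder path /mnt/backup:
#!/bin/sh
# dump_my_db.sh - hypothetical backup script baked into the image
# Connection details (PGHOST, PGPORT, PGUSER, PGPASSWORD) are read from the environment.
# /mnt/backup is assumed to be the mount path of the Azure file share.
pg_dumpall -c > /mnt/backup/dump_$(date +%Y%m%d_%H%M%S).sql
In your Dockerfile you would COPY the script to /usr/local/bin and mark it executable so that --exec-command dump_my_db.sh can find it.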

Related

Docker image wait for postgres image to start - without Bash

I have a standard Python docker image that needs to start after postgres is properly started in its standard image.
I understand that I can add this Bash command in the docker-compose file:
command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
depends_on:
- mypostgres
But I don't have bash installed in the standard python docker image, and I'm trying to keep the installation minimal.
Is there a way to wait for postgres without having bash installed in my image?
I have a standard Python docker image that needs to start after postgres is properly started in its standard image.
You mentioned "Python docker image", but you appear to be calling npm start, which is a node.js application, not a Python application.
The standard Python images do have bash installed (as do the official Node images):
$ docker run -it --rm python:3.10 bash
root@c9bdac2e23f9:/#
However, just checking for the port to be available may be insufficient in any case, so really what you want is to execute a query against the database and only continue once the query is successful.
A common solution is to install the postgres cli and run psql in a loop, like this:
until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' >/dev/null 2>&1; do
  echo 'Waiting for database...'
  sleep 1
done
You can use environment variables or a .pgpass file to provide the appropriate password.
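For example, a .pgpass sketch with placeholder values (the file has one hostname:port:database:username:password entry per line and must only be readable by its owner):
echo "db:5432:mydb:myuser:mypassword" > ~/.pgpass
chmod 600 ~/.pgpass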
If you are building a custom image, it may be better to place this logic in your ENTRYPOINT script rather than embedding it in the command field of your docker-compose.yaml.
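A minimal entrypoint sketch along those lines, using the same placeholder variables as the loop above:
#!/bin/sh
# entrypoint.sh (hypothetical): block until the database answers, then hand off to the real command
until psql -h "$HOST" -U "$USER" -d "$DB_NAME" -c 'select 1' >/dev/null 2>&1; do
  echo 'Waiting for database...'
  sleep 1
done
exec "$@"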
If you don't want to use psql, you can write the same logic in Python or Node using whatever Postgres bindings are available (e.g., something like psycopg2 for Python).
A better solution is to make your application robust in the face of database failures, because this allows your application to continue running if the database is briefly unavailable during a restart.

Docker Compose - Container Bash Forking

I am trying to run netbox based on their standard guide on Docker Hub, with the slight difference that I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches, like defining a command option in the docker-compose file (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
sleep is required to prevent the psql command from running before the postgres service has started.
Or defining a bash script that does the database restore, but all these approaches cause the container to exit after that command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of docker-compose, the container starts up fine and I can navigate and ls the directory in the error message, but that is not what I really need, because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but I wrote a bash script, placed it in the /docker-entrypoint-initdb.d directory, and it got triggered.
The bash script was a very simple one: it would cd to the directory containing the SQL dump and then restore it by running psql:
psql -U netbox -f netbox.sql
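For reference, a sketch of the full script (the file name is arbitrary; /home matches the volume mount from the compose file above):
#!/bin/bash
# /docker-entrypoint-initdb.d/restore-netbox.sh (name is arbitrary)
# The official postgres image runs every *.sh and *.sql file in this directory
# once, when the data directory is first initialised.
set -e
cd /home
psql -U netbox -f netbox.sql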

Why are psql commands in my script suddenly being killed by jenkins / hudson?

I have an existing Jenkins job that kicks off a shell script to copy my prod environment into QA.
We added a lot of data to prod (the gzip dump went from 2 GB to 15 GB) and all of a sudden my Jenkins jobs started failing.
We are running Postgres 9.5 in AWS and Jenkins 2.171. All Jenkins jobs are executed on the master, which is the same server, with 6 executors. There are no memory/CPU/disk space issues.
Tried a few things: statement_timeout on the Postgres instance is already 0. Switching from bash to sh for some reason helped on some scripts but not others. In particular, this one still has various psql statements Killed. The script works fine when run from an interactive shell.
Also tried disabling the Process Tree Killer https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller. No go.
Here's the code from two of the more innocuous commands that should run pretty quickly. $POSTGRES_HOST_OPTS only has the db name and port:
echo -e "Running POSTGIS command"
psql $POSTGRES_HOST_OPTS -U $POSTGRES_ENV_POSTGRES_USER_PROD -d postgres -c "CREATE EXTENSION postgis;"
echo -e "Creating temporary user dv3_qa_tmp so we can rename the $POSTGRES_ENV_POSTGRES_USER_PROD user\n"
psql $POSTGRES_HOST_OPTS -U $POSTGRES_ENV_POSTGRES_USER_PROD -d postgres -c "create role dv3_qa_tmp password '$PGPASSWORD_QA' createdb createrole inherit login;"
Here's the output from jenkins console:
Waiting for new instance to be available...
-e Renaming database dv3_prod to dv3_qa
Killed
-e Running POSTGIS command
Killed
-e Creating temporary user dv3_qa_tmp so we can rename the dv3_prod_user user
Killed
-e Renaming user dv3_prod_user to dv3_qa_user
Killed
Killed
-e
All done
In jenkins.log there is something about file descriptors, but I'm not sure how that is related. I've also tried redirecting stderr, which gets rid of this message but doesn't stop the commands from being killed.
Apr 10, 2019 4:23:31 PM hudson.Proc$LocalProc join
WARNING: Process leaked file descriptors. See https://jenkins.io/redirect/troubleshooting/process-leaked-file-descriptors for more information
java.lang.Exception
at hudson.Proc$LocalProc.join(Proc.java:334)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1818)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)

Run SQL script after start of SQL Server on docker

I have a Dockerfile with the code below:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
I can create the image, but when I run a container from it I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
Can I make the script execute after the SQL Server service has started?
RUN gets used to build the layers in an image. CMD is the command that is run when you launch an instance (a "container") of the built image.
Also, if your script depends on those environment variables and you're on an older version of Docker, it might fail because those variables are not defined the way you want them defined!
In older versions of Docker, the Dockerfile ENV command uses spaces instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps, by using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
Which is a modified version of the CMD line that is inherited from the upstream image.
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following
mssql:
  image: microsoft/mssql-server-windows-express
  environment:
    - SA_PASSWORD=##$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql script on pc to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
        do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity

Backup postgresql database from 4D

I am using 4D for the front-end and PostgreSQL for the back-end, so I have a requirement to take database backups from the front-end.
Here is what I have done so far for taking backups in 4D.
C_LONGINT(i_pg_connection)
i_pg_connection:=PgSQL Connect ("localhost";"admin";"admin";"test_db")
LAUNCH EXTERNAL PROCESS("C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db")
PgSQL Close (i_pg_connection)
But it's not taking the backup.
The backup command is OK because it works perfectly when run from the command prompt.
What's wrong in my code?
Thanks in advance.
Unneeded commands in your code
If you are using LAUNCH EXTERNAL PROCESS to do the backup then you do not need the PgSQL CONNECT and PgSQL CLOSE.
These plug-in commands do not execute in the same context as LAUNCH EXTERNAL PROCESS so they are unneeded in this situation.
Make sure you have write access
If the 4D Database is running as a Service, or more specifically as a user that does not have write access to C:\Users\Admin_user\..., then it could be failing due to a permissions issue.
Make sure that you are writing to a location that you have write access to, and also be sure to check the $out and $err parameters to see what the Standard Output and Error Streams are.
You need to specify a password for pg_dump
Another problem is that you are not specifying the password.
You could either use the PGPASSWORD environment variable or use a pgpass.conf file in the user's profile directory.
Regarding the PGPASSWORD environment variable, the documentation has the following warning:
Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file
Example using pgpass.conf
The following example assumes you have a pgpass.conf file in place:
C_TEXT($c;$in;$out;$err)
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
TRACE
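On Windows, pg_dump looks for the password file at %APPDATA%\postgresql\pgpass.conf rather than ~/.pgpass. Each line uses the format hostname:port:database:username:password, so an entry matching the command above could look like this (the password is a placeholder):
localhost:5432:test_db:admin:your_postgresql_password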
Example using PGPASSWORD environment variable
The following example sets the PGPASSWORD environment variable before the call to pg_dump and then clears the variable after the call:
C_TEXT($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "your postgreSQL password" )
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "" ) // clear password for security
TRACE
Debugging
Make sure to use the debugger to check the $out and $err to see what the underlying issue is.