How to dump DB via cron inside container? - postgresql

I use docker-compose, which brings up a stack.
The relevant code:
db:
  build: ./dockerfiles/postgres
  container_name: postgres-container
  volumes:
    - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
    - ./dockerfiles/postgres/backups:/pg_backups
Dockerfile for Postgres:
FROM postgres:latest
RUN mkdir /pg_backups && > /etc/cron.d/pg_backup-cron && echo "00 22 * * * /backup.sh" >> /etc/cron.d/pg_backup-cron
ADD ./backup.sh /
RUN chmod +x /backup.sh
backup.sh
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/pg_backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /pg_backups
find /pg_backups -name "*.sql" -type f -mtime +30 -delete
exit 0
Cron simply does not launch the script. How to fix that?
FINAL VERSION
Based on Farhad Farahi's answer, below is the final result:
On host I made a script:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/pg_backups/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
    echo -e 'Please provide one of the arguments (example: ./run_after_install.sh add-cron-db-backup):
1) add-cron-db-backup
2) remove-cron-db-backup'
# To avoid task duplication in cron, the script checks whether a backup job is already in cron
elif [[ $1 == add-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
    echo "==>>> Backup task added to Cron"
# Remove the backup job from cron
elif [[ $1 == remove-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
    echo "==>>> Backup task removed from Cron"
fi
This script adds a cron task on the host which launches the script backup.sh (see above) inside the container.
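Typical usage on the host (note that cron jobs run without a TTY, so if the dump never appears, dropping the -t flag from the docker exec command in croncmd_backup is worth trying):
chmod +x run_after_install.sh
./run_after_install.sh add-cron-db-backup
# verify that the job landed in the host crontab
crontab -l | grep postgres-container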
For this implementation there is no need for a Dockerfile for Postgres, so the relevant part of docker-compose.yml should look like:
version: '2'
services:
  db:
    image: postgres:latest
    container_name: postgres-container
    volumes:
      - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
      - ./dockerfiles/postgres/backups:/pg_backups
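To verify the wiring before waiting for 22:00, the backup can be run once by hand and the mounted host directory checked (paths as in the compose file above):
docker exec postgres-container bash -c '/pg_backups/backup.sh'
ls -l ./dockerfiles/postgres/backups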

Things you should know:
The cron service is not started by default in the postgres library image.
When you change the cron config, you need to reload the cron service.
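For reference, if you did want cron inside the container, it would have to be installed and started explicitly; a minimal entrypoint wrapper sketch (assuming cron has been added to the image via apt-get and the cron file from the Dockerfile above is in place) could look like:
#!/bin/bash
# hypothetical wrapper entrypoint: start the cron daemon, then hand off to postgres
service cron start          # or simply: cron
# note: /etc/cron.d entries also need a user field, e.g. "00 22 * * * root /backup.sh"
exec docker-entrypoint.sh postgres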
Recommendation:
Use the docker host's cron and use docker exec to launch the periodic tasks.
Advantages of this approach:
Unified configuration for all containers.
Avoids running multiple cron services in multiple containers (better use of system resources as well as less management overhead).
Honors Microservices Philosophy.

Based on Farhad's answer, I created a file postgres_backup.sh on the host with the following content:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/db_backups/script/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
    echo -e 'Please provide one of the arguments (example: ./postgres_backup.sh add-cron-db-backup):
1 > add-cron-db-backup
2 > remove-cron-db-backup'
elif [[ $1 == add-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
    echo "==>>> Backup task added to Local (not container) Cron"
elif [[ $1 == remove-cron-db-backup ]]; then
    ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
    echo "==>>> Backup task removed from Cron"
fi
and I added a file /db_backups/script/backup.sh to the Docker Postgres image with the content:
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/db_backups/backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /db_backups/backups
find /db_backups/backups -name "*.sql" -type f -mtime +30 -delete
exit 0
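For completeness, a plain-SQL dump produced by this script can be restored with psql; a hedged example run against the same container (the file name is illustrative):
docker exec postgres-container psql -U postgres -d postgres -f /db_backups/backups/db_dump_31-12-2020_22-00.sql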

Related

run conditional migration sh script via docker compose yml

Been googling around for this but I can't get the exact answer.
I am building a mobile app and I want to run additional migration scripts when the environment is "local".
I have a docker-compose-local.yml which builds the db
database:
  build:
    context: ./src/Database/
    dockerfile: Dockerfile
  container_name: database
  ports:
    - 1401:1433
  environment:
    ACCEPT_EULA: 'Y'
    SA_PASSWORD: 'password'
    ENVIRONMENT: 'local'
  networks:
    - my-network
and then a Dockerfile with an entrypoint
ENTRYPOINT ["/usr/src/app/entry-point.sh"]
And then a script that runs migrations.
#!/bin/bash
# wait for MSSQL server to start
export STATUS=1
i=0
MIGRATIONS=$(ls migrations/*.sql | sort -V)
SEEDS=$(ls seed/*.sql)
while [[ $STATUS -ne 0 ]] && [[ $i -lt 30 ]]; do
    i=$i+1
    /opt/mssql-tools/bin/sqlcmd -t 1 -U sa -P $SA_PASSWORD -Q "select 1" >> /dev/null
    STATUS=$?
done
if [ $STATUS -ne 0 ]; then
    echo "Error: MSSQL SERVER took more than thirty seconds to start up."
    exit 1
fi
echo "======= MSSQL SERVER STARTED ========" | tee -a ./config.log
# Run the setup script to create the DB and the schema in the DB
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database.sql
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database-user.sql
for f in $MIGRATIONS
do
    echo "Processing migration $f"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $f
done
# RUN THIS ONLY FOR ENVIRONMENT = local
for s in $SEEDS
do
    echo "Seeding $s"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
done
Currently everything works perfectly fine, except the seeds are added for all environments.
I only want to run the seed scripts if environment = local.
How can this condition be written into this script?
Alternatively, is there a cleaner way to do this?
Thanks
There are multiple ways to achieve your goal. Three that come to mind quickly are:
Insert and check the environment variable in the script (What you are trying to do now)
Have two versions of the script in the Container and change the entrypoint in the docker-compose file (either with environment variables or by using multiple compose files)
Build two different versions of the docker image for local and production environment
With your current setup the first alternative is the easiest:
# RUN THIS ONLY FOR ENVIRONMENT = local
if [[ "$ENVIRONMENT" == "local" ]]; then
    for s in $SEEDS
    do
        echo "Seeding $s"
        /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
    done
fi
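To check the behaviour, you can rebuild and start just the database service from the local compose file and watch the entrypoint output (service and container names taken from the question):
docker-compose -f docker-compose-local.yml up --build -d database
# the "Seeding ..." lines should only appear when ENVIRONMENT is 'local'
docker logs database | grep -E "Processing migration|Seeding"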

Postgres with liquibase in one docker image

I want to create a database for autotests in Kubernetes. I want to create an image (postg-my-app-v1) from the postgres image, adding the changelog files and Liquibase. When I deploy this image with Helm, I just want to specify containers - postg-my-app-v1, and it should start up a pod with the database and create the tables from the Liquibase changelog.
Currently I create the Dockerfile as below:
FROM postgres
ADD /changelog /liquibase/changelog
I don't understand how to add Liquibase to this image. Or must I use docker compose, or a Helm lifecycle postStart hook for Liquibase?
FROM docker-proxy.tcsbank.ru/liquibase/liquibase:3.10.x AS Liquibase
FROM docker-proxy.tcsbank.ru/postgres:9.6.12 AS Postgres
ENV POSTGRES_DB bpm
ENV POSTGRES_USER priest
ENV POSTGRES_PASSWORD Bpm_123
COPY --from=Liquibase /liquibase /liquibase
ENV JAVA_HOME /usr/local/openjdk-11
COPY --from=Liquibase $JAVA_HOME $JAVA_HOME
ENV LIQUIBASE_CHANGELOG /liquibase/changelog/
COPY /changelog $LIQUIBASE_CHANGELOG
COPY liquibase.sh /usr/local/bin/
COPY main.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/liquibase.sh && \
    chmod +x /usr/local/bin/main.sh && \
    ln -s /usr/local/bin/main.sh / && \
    ln -s /usr/local/bin/liquibase.sh /
ENTRYPOINT ["main.sh"]
main.sh
#!/bin/bash
bash liquibase.sh | awk '{print "liquiBase script: " $0}' &
bash docker-entrypoint.sh postgres
liquibase.sh
#!/bin/bash
for COUNTER in {1..120}
do
    sleep 1s
    echo "check db $COUNTER times"
    pg_isready
    if [ $? -eq 0 ]
    then
        break
    fi
done
echo "try execute liquibase"
bash liquibase/liquibase --url="jdbc:postgresql://localhost:5432/$POSTGRES_DB" --username=$POSTGRES_USER --password=$POSTGRES_PASSWORD --changeLogFile=/liquibase/changelog/changelog.xml update
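A quick way to verify the resulting image (image name from the question, credentials from the ENV lines in the Dockerfile above; the container name is just an example) is to build it, run it, and check that the Liquibase tracking tables appeared:
docker build -t postg-my-app-v1 .
docker run -d --name postg-test postg-my-app-v1
# once liquibase.sh has run, the changelog tracking tables should exist
docker exec postg-test psql -U priest -d bpm -c '\dt databasechangelog*'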

running pgadmin4 in docker with pre-configured servers

I'd love to run pgadmin4 in our infrastructure in such a way that the Postgres servers are preconfigured during the docker build / first start.
I've tried to modify the internally used /var/lib/pgadmin/pgadmin4.db SQLite DB on the first start, which however results in an error in the UI (once the particular Postgres server is selected):
definition of service "" not found
I've tried the following:
Directory structure:
find ./ -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'
|____
|____dump
| |____servergroup.csv
| |____server.csv
| |____import_db.sh
|____Dockerfile
Where Dockerfile is:
cat Dockerfile
# rebuild:
# docker build -t pgadmin4:3.0-custom .
# run:
# docker run --rm -it -e PGADMIN_DEFAULT_EMAIL=admin -e PGADMIN_DEFAULT_PASSWORD=admin -p8081:80 pgadmin4:3.0-custom
FROM dpage/pgadmin4:3.0
COPY dump/ /dump
RUN \
    apk add --no-cache sqlite && \
    chmod +x /dump/import_db.sh && \
    # we rely on the current entrypoint.sh impl
    sed -i '/python run_pgadmin.py/a \/dump\/import_db.sh' /entrypoint.sh && \
    cat /entrypoint.sh
In fact it just modifies https://github.com/postgres/pgadmin4/blob/master/pkg/docker/entrypoint.sh to run the import_db.sh script on the first start.
Where dump/import_db.sh is:
cat dump/import_db.sh
#!/bin/sh
echo ".tables" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
# remove header and `1,1,Servers` entry (would cause duplicates)
cat /dump/servergroup.csv | sed '1d' | grep -v 1,1,Servers > /tmp/servergroup.in.csv
echo "csv servergroup:"
cat /tmp/servergroup.in.csv
echo "DB servergroup:"
sqlite3 -csv -header /var/lib/pgadmin/pgadmin4.db "select * from servergroup;"
echo ".import /tmp/servergroup.in.csv servergroup" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
# remove header
cat /dump/server.csv | sed '1d' > /dump/server.in.csv
echo "csv server:"
cat /dump/server.in.csv
echo "DB server:"
sqlite3 -csv -header /var/lib/pgadmin/pgadmin4.db "select * from server;"
echo ".import /dump/server.in.csv server" | sqlite3 -csv /var/lib/pgadmin/pgadmin4.db
Csv files contents:
cat dump/server.csv
id,user_id,servergroup_id,name,host,port,maintenance_db,username,password,role,ssl_mode,comment,discovery_id,hostaddr,db_res,passfile,sslcert,sslkey,sslrootcert,sslcrl,sslcompression,bgcolor,fgcolor,service
1,1,2,servername,localhost,5432,postgres,postgres,"",,prefer,,,"","",,<STORAGE_DIR>/.postgresql/postgresql.crt,<STORAGE_DIR>/.postgresql/postgresql.key,,,0,,,
cat dump/servergroup.csv
id,user_id,name
2,1,my-group
1,1,Servers
Any idea how to fix my error? Or any other approach that could give me a pre-configured pgadmin4 docker container?
The current version of the dpage/pgadmin4 image is 4.24. This version has support for external configuration of the server definition list (servers.json):
{
  "Servers": {
    "test": {
      "Name": "test",
      "Group": "Servers",
      "Port": 5432,
      "Username": "postgres",
      "Host": "postgres",
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }
  }
}
Volume binding can be configured as below:
volumes:
  - ./servers.json:/pgadmin4/servers.json
The first time the container is started, server groups and servers will be configured automatically.
UPD: the JSON format has more fields, which are optional. Note that passwords cannot be imported/exported this way, for obvious security reasons.
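For a quick test without compose, the same file can also be mounted with plain docker run (credentials are placeholders, the target path is the one shown above):
docker run --rm -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=admin \
  -v "$(pwd)/servers.json:/pgadmin4/servers.json" \
  dpage/pgadmin4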
It looks like this changes the service column value to an empty string instead of NULL.
Try updating the value of the service column to NULL:
sqlite> UPDATE server SET service = NULL;
Commit the changes, restart pgAdmin4, and try connecting to that server again.

How do I get pcp to automatically attach nodes to postgres pgpool?

I'm using postgres 9.4.9, pgpool 3.5.4 on centos 6.8.
I'm having a really hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky dory.
So I figured until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
It'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass); set the PCPPASSFILE variable (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
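Putting Example 1 together with the values from the question (port 9898, user pgpool, password mypass), an end-to-end sketch would be:
# hostname:port:username:password, readable only by the current user
echo "127.0.0.1:9898:pgpool:mypass" > ~/.pcppass
chmod 0600 ~/.pcppass
# -w makes pcp_attach_node read the password from ~/.pcppass instead of prompting
pcp_attach_node -h 127.0.0.1 -p 9898 -U pgpool -w -n 1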
Script for auto-attaching the node
You can schedule this script with, for example, crontab.
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if (( $STATUS_0 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if (( $STATUS_1 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
exit 0
Yes, you can trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
An automated way to bring up a pgpool node that is down:
Copy this script into a file with execute permission at your desired location, owned by postgres, on all nodes.
Run the crontab -e command as the postgres user.
Finally, set that script to run every minute in crontab. To execute it every second you may create your own service and run it.
#!/bin/bash
# This script will bring up all pgpool down nodes
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
for server_node in ${server_node_list[@]}
do
    source $HOME/.bash_profile
    export PCPPASSFILE=/var/lib/pgsql/.pcppass
    node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3);
    if [[ $node_status == 3 ]]
    then
        pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
    fi
done
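An example crontab entry for the postgres user that runs the check every minute (the script path and log file are hypothetical):
* * * * * /var/lib/pgsql/attach_pgpool_nodes.sh >> /var/lib/pgsql/attach_pgpool_nodes.log 2>&1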

issues backing up postgres databases in cron

I am trying to back up Postgres databases. I am running a cron job to do so. The issue is that Postgres runs under the user postgres, and I don't think I can run a cron job under the ubuntu user. I tried to create a cron job under the postgres user and that also did not work. My script, if I log in as the postgres user, works just fine.
Here is my script
#!/bin/bash
# Location to place backups.
backup_dir="/home/postgres-backup/"
# String to append to the name of the backup files
backup_date=`date +%d-%m-%Y`
# Number of days you want to keep copies of your databases
number_of_days=30
databases=`psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`
for i in $databases; do
    if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
        echo Dumping $i to $backup_dir$i\_$backup_date
        pg_dump -Fc $i > $backup_dir$i\_$backup_date
    fi
done
find $backup_dir -type f -prune -mtime +$number_of_days -exec rm -f {} \;
if I do
sudo su - postgres
I see
-rwx--x--x 1 postgres postgres 570 Jan 12 20:48 backup_all_db.sh
and when I do
./backup_all_db.sh
it gets backed up in /home/postgres-backup/
However, with the cron job it's not working, regardless of whether I add the cron job under postgres or under ubuntu.
Here is my cron job:
0,30 * * * * /var/lib/pgsql/backup_all_db.sh 1> /dev/null 2> /home/cron.err
I will appreciate any help.
Enable user to run cron jobs
If the /etc/cron.allow file exists, then users must be listed in it in order to be allowed to run the crontab command. If the /etc/cron.allow file does not exist but the /etc/cron.deny file does, then users must not be listed in the /etc/cron.deny file in order to run crontab.
In the case where neither file exists, the default on current Ubuntu (and Debian, but not some other Linux and UNIX systems) is to allow all users to run jobs with crontab.
Add cron jobs
Use this command to add a cron job for the current user:
crontab -e
Use this command to add a cron job for a specified user (permissions are required):
crontab -u <user> -e
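For the backup job from the question that would look something like the following (the error log is redirected into the backup directory, which the postgres user can write to, unlike /home/cron.err):
sudo crontab -u postgres -e
# then add the line:
0,30 * * * * /var/lib/pgsql/backup_all_db.sh > /dev/null 2> /home/postgres-backup/cron.err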
Additional reading
man 5 crontab
Crontab in Ubuntu: https://help.ubuntu.com/community/CronHowto