Been googling around for this but I can't get the exact answer.
I am building a mobile app and I want to run additional migration scripts when the environment is "local".
I have a docker-compose-local.yml which builds the db
database:
  build:
    context: ./src/Database/
    dockerfile: Dockerfile
  container_name: database
  ports:
    - 1401:1433
  environment:
    ACCEPT_EULA: 'Y'
    SA_PASSWORD: 'password'
    ENVIRONMENT: 'local'
  networks:
    - my-network
and then a Dockerfile with an entrypoint
ENTRYPOINT ["/usr/src/app/entry-point.sh"]
And then a script that runs migrations.
#!/bin/bash
# wait for MSSQL server to start
export STATUS=1
i=0
MIGRATIONS=$(ls migrations/*.sql | sort -V)
SEEDS=$(ls seed/*.sql)
while [[ $STATUS -ne 0 ]] && [[ $i -lt 30 ]]; do
  i=$((i+1))
  /opt/mssql-tools/bin/sqlcmd -t 1 -U sa -P $SA_PASSWORD -Q "select 1" > /dev/null
  STATUS=$?
done
if [ $STATUS -ne 0 ]; then
  echo "Error: MSSQL SERVER took more than thirty seconds to start up."
  exit 1
fi
echo "======= MSSQL SERVER STARTED ========" | tee -a ./config.log
# Run the setup script to create the DB and the schema in the DB
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database.sql
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database-user.sql
for f in $MIGRATIONS
do
  echo "Processing migration $f"
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $f
done
# RUN THIS ONLY FOR ENVIRONMENT = local
for s in $SEEDS
do
  echo "Seeding $s"
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
done
Currently everything works perfectly fine, except the seeds are added for all environments.
I only want to run the seed scripts if environment = local.
How can this condition be written into this script?
Alternatively, is there a cleaner way to do this?
Thanks
There are multiple ways to achieve your goal. Three that come to mind quickly are:
Pass in and check the environment variable in the script (what you are trying to do now)
Have two versions of the script in the Container and change the entrypoint in the docker-compose file (either with environment variables or by using multiple compose files)
Build two different versions of the docker image for local and production environment
With your current setup the first alternative is the easiest:
# RUN THIS ONLY FOR ENVIRONMENT = local
if [[ "$ENVIRONMENT" == "local" ]]; then
  for s in $SEEDS
  do
    echo "Seeding $s"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
  done
fi
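If you later prefer the second alternative, a minimal sketch (the name of the second entrypoint script is an assumption) is to bake both scripts into the image and let the local compose file override the entrypoint:

# docker-compose-local.yml: switch to the seeding entrypoint only for local runs
database:
  entrypoint: ["/usr/src/app/entry-point-with-seeds.sh"]

You would then start the local stack with both files, e.g. docker-compose -f docker-compose.yml -f docker-compose-local.yml up, while the default compose file keeps the plain entrypoint baked into the image.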
Related
We have 100+ databases with daily backup schedules.
How can we check for failed backups across all of these PostgreSQL backup schedules?
pg_dump -h localhost -p 5432 -U postgres -d db1 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db2 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db3 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db4 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db5 -v -f "path/file.backup"
...
Like this, I have 100 backup schedules.
Try doing it in a for loop, for example:
#!/bin/bash
# create an indexed array with all your databases listed
database_names=( "1" "2" "3" )
# Declare an associative array to store dbname and dump status codes
declare -A all_dbdump_states
for db in "${database_names[#]}"; do
echo "Executing $db dump.."
pg_dump -h localhost -p 5432 -U postgres -d $db -v -f "path/file.backup"
dump_rc=$? # Save exit code of pg_dump process into variable
# After each for loop iteration, append data into array
all_dbdump_states[$db]+=$dump_rc
done
echo -e "\nListing status codes of all dumps:"
for db in "${!all_dbdump_states[#]}"; do
echo "Database [$db] status: ${all_dbdump_states[$db]}"
sleep 1
done
Here I'm echoing the pg_dump lines instead of running them, for easier testing, and I made a deliberate mistake in the echo command for the second database so that it fails with exit code 127:
#!/bin/bash
# create an indexed array with all your databases listed
database_names=( "1" "2" "3" )
# Declare an associative array to store dbname and dump status codes
declare -A all_dbdump_states
for db in "${database_names[#]}"; do
echo "Executing $db dump.."
if [[ $db -eq 2 ]]; then
ech "pg_dump -h localhost -p 5432 -U postgres -d 2 -v -f 'path/file.backup'" &>/dev/null
dump_rc=$? # Save exit code of pg_dump process into variable
all_dbdump_states[$db]+=$dump_rc
continue
fi
echo "pg_dump -h localhost -p 5432 -U postgres -d $db -v -f 'path/file.backup'" &>/dev/null
dump_rc=$? # Save exit code of pg_dump process into variable
# After each for loop iteration, append data into array
all_dbdump_states[$db]+=$dump_rc
done
echo -e "\nListing status codes of all dumps:"
for db in "${!all_dbdump_states[#]}"; do
echo "Database [$db] status: ${all_dbdump_states[$db]}"
sleep 1
done
Sample output:
$ ./test.sh
Executing 1 dump..
Executing 2 dump..
Executing 3 dump..
Listing status codes of all dumps:
Database [1] status: 0
Database [2] status: 127
Database [3] status: 0
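If you also want the script itself to signal failure (e.g. so cron or a monitoring tool can alert on it), a small addition at the end could look like this; it is a sketch building on the all_dbdump_states array above, not part of the original answer:

# Summarise failures and exit non-zero if any dump returned a non-zero status
failed=0
for db in "${!all_dbdump_states[@]}"; do
  if [[ ${all_dbdump_states[$db]} -ne 0 ]]; then
    echo "WARNING: dump of database [$db] failed with status ${all_dbdump_states[$db]}"
    failed=1
  fi
done
exit $failed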
I'm trying to configure continuous testing/integration with Odoo and a Postgres Docker container.
But I'm stuck on a problem: GitLab CI can't perform any operations on the Postgres container.
My goal is to load a database template into the Postgres container after starting it and before testing.
I tried the SSH executor after the shell executor, but I always get stuck on the same problem.
Note that all the commands here complete without problems when run manually on the runner; I tested it.
I wrote this YAML file:
variables:
  # Configure postgres service (https://hub.docker.com/_/postgres/)
  POSTGRES_DB: db
  POSTGRES_USER: odoo
  POSTGRES_PASSWORD: odoo
before_script:
  # Pull container versions
  - docker pull postgres:9.5
  - docker pull odoo:8.0
after_script:
  # Remove all used containers
  - docker stop $(docker ps -a -q) && docker rm $(docker ps -aq)
stages:
  - prepare
job1:
  stage: prepare
  # prepare postgres db
  script:
    # Launch postgres container
    - docker run -d -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD --name db postgres:9.5
    # Copy and restore db template
    - docker cp /home/myuser/odoov8_test.dump db:/home
    - docker exec -i db su -s /bin/sh - postgres -c "createdb odoov8_test && pg_restore -d odoov8_test --no-owner --verbose /home/odoov8_test.dump"
    # Launch odoo with its own addons folder (/own/path/to/addons:/mnt/extra-addons), the test database (-d), the modules to install, comma-separated without spaces, with all dependencies (-i), tests enabled (--test-enable), and stop after init (--stop-after-init)
    - docker run -v $CI_PROJECT_DIR:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t odoo:8.0 -- -d odoov8_test.dump -i crm,sale --test-enable --stop-after-init
I got this result:
Running with gitlab-ci-multi-runner 1.11.2 (0489844)
on Test docker odoo (7fafb15a)
Using Shell executor...
Running on debian-8-clean...
Fetching changes...
HEAD is now at 7d196ea Update .gitlab-ci.yml
From https://myserver.com/APSARL/addons-ext
7d196ea..47591ac master -> origin/master
Checking out 47591ac6 as master...
Skipping Git submodules setup
$ docker pull postgres:9.5
9.5: Pulling from library/postgres
Digest: sha256:751bebbc12716d7d9818678e91cbec8138e52dc4a084f0e81c58cd8b419930e5
Status: Image is up to date for postgres:9.5
$ docker pull odoo:8.0
8.0: Pulling from library/odoo
Digest: sha256:9deda039e0df28aaf515001dd1606ab74a16ed25504447edc2912bca9019cd43
Status: Image is up to date for odoo:8.0
$ docker run -d -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD --name db postgres:9.5
60a0c75fd55e953e6a25a3cc0f13093ec2f1ee96bfb8384ac19d00f740dd1d4e
$ docker cp /home/myuser/odoov8_test.dump db:/home
$ docker exec -i db su -s /bin/sh - postgres -c "createdb odoov8_test && pg_restore -d odoov8_test --no-owner --verbose /home/odoov8_test.dump"
createdb: could not connect to database template1: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Running after script...
$ docker stop $(docker ps -a -q) && docker rm $(docker ps -aq)
60a0c75fd55e
60a0c75fd55e
ERROR: Job failed: exit status 1
I use docker-compose to bring up a stack.
Relevant code:
db:
  build: ./dockerfiles/postgres
  container_name: postgres-container
  volumes:
    - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
    - ./dockerfiles/postgres/backups:/pg_backups
Dockerfile for Postgres:
FROM postgres:latest
RUN mkdir /pg_backups && > /etc/cron.d/pg_backup-cron && echo "00 22 * * * /backup.sh" >> /etc/cron.d/pg_backup-cron
ADD ./backup.sh /
RUN chmod +x /backup.sh
backup.sh
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/pg_backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /pg_backups
find /pg_backups -name "*.sql" -type f -mtime +30 -delete
exit 0
Cron simply does not launch the script. How can I fix that?
FINAL VERSION
Based on @Farhad Farahi's answer, below is the final result:
On host I made a script:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/pg_backups/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
  echo -e 'Please provide one of the arguments (example: ./run_after_install.sh add-cron-db-backup):
1) add-cron-db-backup
2) remove-cron-db-backup'
# In order to avoid task duplications in cron, the script checks if there is already a back-up job in cron
elif [[ $1 == add-cron-db-backup ]]; then
  ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
  echo "==>>> Backup task added to Cron"
# Remove back-up job from cron
elif [[ $1 == remove-cron-db-backup ]]; then
  ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
  echo "==>>> Backup task removed from Cron"
fi
This script adds a cron task on the host, which launches the script backup.sh (see above) inside the container.
For this implementation there is no need for a custom Dockerfile for Postgres, so the relevant part of docker-compose.yml should look like:
version: '2'
services:
  db:
    image: postgres:latest
    container_name: postgres-container
    volumes:
      - ./dockerfiles/postgres/pgdata:/var/lib/postgresql/data
      - ./dockerfiles/postgres/backups:/pg_backups
Things you should know:
The cron service is not started by default in the postgres library image.
When you change the cron config, you need to reload the cron service.
Recommendation:
Use docker host's cron and use docker exec to launch the periodic tasks.
Advantages of this approach:
Unified Configuration for all containers.
Avoids running multiple cron services in multiple containers (better use of system resources as well as less management overhead).
Honors the microservices philosophy.
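As a minimal sketch of that recommendation, reusing the container name and script path from the question (the log file location is an assumption), the host's crontab simply runs the in-container backup via docker exec:

# Host crontab entry (added via crontab -e): run the backup inside the container every day at 22:00.
# Note: no -t flag, since cron provides no TTY.
00 22 * * * docker exec postgres-container /backup.sh >> /var/log/pg_backup.log 2>&1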
Based on Farhad's answer, I created a file postgres_backup.sh on the host with the following content:
#!/bin/bash
# Creates a cron job which backs up the DB in Docker every day at 22:00 host time
croncmd_backup="docker exec -it postgres-container bash -c '/db_backups/script/backup.sh'"
cronjob_backup="00 22 * * * $croncmd_backup"
if [[ $# -eq 0 ]] ; then
  echo -e 'Please provide one of the arguments (example: ./postgres_backup.sh add-cron-db-backup):
1 > add-cron-db-backup
2 > remove-cron-db-backup'
elif [[ $1 == add-cron-db-backup ]]; then
  ( crontab -l | grep -v -F "$croncmd_backup" ; echo "$cronjob_backup" ) | crontab -
  echo "==>>> Backup task added to Local (not container) Cron"
elif [[ $1 == remove-cron-db-backup ]]; then
  ( crontab -l | grep -v -F "$croncmd_backup" ) | crontab -
  echo "==>>> Backup task removed from Cron"
fi
and I added a file /db_backups/script/backup.sh to the Postgres Docker image with the following content:
#!/bin/sh
# Dump DBs
now=$(date +"%d-%m-%Y_%H-%M")
pg_dump -h db -U postgres -d postgres > "/db_backups/backups/db_dump_$now.sql"
# remove all files (type f) modified longer than 30 days ago under /db_backups/backups
find /db_backups/backups -name "*.sql" -type f -mtime +30 -delete
exit 0
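To register the job, the wrapper script is run once on the host; you can then verify the crontab entry, for example:

chmod +x postgres_backup.sh
./postgres_backup.sh add-cron-db-backup     # adds the 22:00 job to the host crontab
crontab -l | grep postgres-container        # confirm the entry is present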
I'm using postgres 9.4.9, pgpool 3.5.4 on centos 6.8.
I'm having a major hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky dory.
So I figured until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
it'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
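The same setup can be scripted; a minimal sketch, with host, port, user and password as placeholders:

# Create the PCP password file for the current user and restrict its permissions
echo "127.0.0.1:9897:user:pass" > ~/.pcppass
chmod 0600 ~/.pcppass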
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass); the file content is 127.0.0.1:9897:user:pass (hostname:port:username:password); set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass); set the variable PCPPASSFILE (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Script to auto-attach the node
You can schedule this script with, for example, crontab (a sample entry follows the script).
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if (( $STATUS_0 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if (( $STATUS_1 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
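For scheduling, a crontab entry along these lines runs the check every minute (the script path and log file name are assumptions):

# Run the attach-check script every minute and keep its output for troubleshooting
* * * * * /appl/scripts/pgpool_attach_check.sh >> /appl/scripts/pgpool_attach_check.log 2>&1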
Yes, you can trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
An automated way to bring up a pgpool node that is marked down:
Copy the script below into a file with execute permission, owned by postgres, at your desired location on all nodes.
Run the crontab -e command as the postgres user.
Finally, set that script to run every minute in crontab. To execute it every second instead, you may create your own service and run it (see the sketch after the script below).
#!/bin/bash
# This script will up all pgpool down node
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
for server_node in "${server_node_list[@]}"
do
    source $HOME/.bash_profile
    export PCPPASSFILE=/var/lib/pgsql/.pcppass
    node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3);
    if [[ $node_status == 3 ]]
    then
        pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
    fi
done
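As a rough sketch of the "create your own service" idea mentioned above (the script and log paths are assumptions, not part of the original answer), the same check can be wrapped in a loop and run as a simple always-on service:

#!/bin/bash
# Runs the attach script continuously, once per second, instead of once per minute via cron.
# Assumes the script above was saved as /var/lib/pgsql/pgpool_attach_nodes.sh.
while true
do
    /var/lib/pgsql/pgpool_attach_nodes.sh >> /var/lib/pgsql/pgpool_attach_nodes.log 2>&1
    sleep 1
done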
So I have two Docker images which are used to host Postgres servers. One is for first-time setup and puts a .sh and .sql script into the appropriate directory for Postgres to load them up (which works as expected). The second image is for running any other time, when the database has already been created. The Dockerfiles for these look like this:
FROM postgres:9.3
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
ADD Setup.sh /docker-entrypoint-initdb.d/1.sh
ADD schema.sql /docker-entrypoint-initdb.d/2.sql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and
FROM postgres:9.3
MAINTAINER Andrew Broadbent <andrew.broadbent@manchester.ac.uk>
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
I run the first from the following shell script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
echo ""
read -p " Database name: " dbname
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "NEW_USER=$user" -e "NEW_PASSWORD=$password" -e "POSTGRES_DB=$dbname" -p 9001:5432 -P -d --name psql-data postgres-setup-datastore
and I run the second from the following:
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "POSTGRES_USER=$user" -e "POSTGRES_PASSWORD=$password" -p 9001:5432 -P --name psql-data postgres-runtime
If I run the first script, connect to the database, make some changes, remove the container, and then run the second script, the changes I made to the database aren't persisted. How do I persist them? I managed to get it working earlier, but I'm not sure what I've changed and can't seem to get it working any more.