Testing database creation in docker - postgresql

I have a script that creates my database (a script with all the required DDL and inserts). My goal is to test that the script is correct and that the database is created successfully, without exceptions.
I decided to use the Docker image postgres:latest for this.
My question is: can I run the Docker image so that my script is applied (I know I can run the script by copying it to /docker-entrypoint-initdb.d/), and immediately afterwards the database shuts down and the container exits with code 0? I want the database to shut down so I can automate the process and check the exit code in a test script.
I'd also be glad to hear other suggestions for automating the process.
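One way to automate this, as a sketch only (untested; the container name pg-test, the script name init.sql and the password are placeholders): start the container detached with the script mounted into /docker-entrypoint-initdb.d/, wait until the server accepts connections or the entrypoint dies, then stop the container and propagate its exit code. A failing init script makes the official entrypoint exit non-zero on its own.
#!/bin/sh
# Hedged sketch: run the DDL script in a throwaway container, then report
# the container's exit code. "pg-test" and "init.sql" are placeholder names.
docker run -d --name pg-test \
  -e POSTGRES_PASSWORD=secret \
  -v "$PWD/init.sql":/docker-entrypoint-initdb.d/init.sql \
  postgres:latest
# The entrypoint runs init scripts against a temporary, socket-only server
# before opening TCP, so pg_isready -h localhost only succeeds once init has
# finished; if an init script fails, the entrypoint exits and the loop stops.
for i in $(seq 1 60); do
  [ "$(docker inspect -f '{{.State.Running}}' pg-test)" = "false" ] && break
  docker exec pg-test pg_isready -h localhost -U postgres >/dev/null 2>&1 && break
  sleep 1
done
docker stop pg-test >/dev/null   # SIGTERM triggers a clean postgres shutdown
status=$(docker inspect -f '{{.State.ExitCode}}' pg-test)
docker rm pg-test >/dev/null
exit "$status"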

Related

Automatically initialize replica set for mongoDB in docker fails

I have a NodeJS Express App that depends on MongoDB change streams. For them to be available, MongoDB has to be configured to run as a replica set (even if there is only one node in that set).
I'm working on Windows 10 pro.
I'm trying to dockerize this App, basing the MongoDB container off the official mongo:5 image.
For this to work, I want an automated way of initializing the DB as a replica set. Tutorials I've found rely either on exec'ing into the container and running rs.initiate() from mongosh (or similar approaches), which is manual work I want to avoid, or on hacks like wait-for-it.sh, as here.
I feel there must be a better solution, based somehow on the paragraph "Initializing a fresh instance", from the docs.
It describes that
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d.
When exactly in the container lifecycle does that happen? After the container is initialized? Or after the DB is ready? Because this seems to be the perfect place for this initialization logic, which runs flawlessly when executed manually, from within the container.
However, placing
// initReplSet.js
print('Script running');
config={"_id":"rs0", "members":[{"_id":0,"host":"app-db:27017"}]};
print(JSON.stringify(rs.initiate(config)));
print('Script end');
fails with the error {"ok":0,"errmsg":"No host described in new configuration with {version: 1, term: 0} for replica set rs0 maps to this node","code":93,"codeName":"InvalidReplicaSetConfig"}, yet the database is available under the hostname app-db from other containers. This makes me feel that this code runs too early, before all other initialization logic (networking) is done.
Another approach is to place a bash script that executes code via mongosh. Here's what I've tried:
#!/bin/bash
mongosh "mongodb://app-db:27017/app_db" "initiateReplSet"
where initiateReplSet is
config={"_id":"rs0", "members":[{"_id":0,"host":"app-db:27017"}]}
rs.initiate(config)
exit
but this crashes the container with the error
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initiateReplSetWrapper.sh
{"t":{"$date":"2022-02-15T11:31:23.353+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}}
Warning: Could not access file: EACCES: permission denied, mkdir '/home/mongodb'
Current Mongosh Log ID: 620b8f0b04b7ad69b446768d
Connecting to: mongodb://app-db:27017/app_db?directConnection=true&appName=mongosh+1.1.9
Only the first line and the last three lines seem to actually belong to the bash script; the second line is repeated constantly.
I'm not sure whether the error originates from the permission-denied issue, or whether the DB really can't be accessed. However, specifying
RUN mkdir -p /home/mongodb/.mongodb
RUN chown -R 777 /home/mongodb
in the Dockerfile did not improve the situation (same error nevertheless).
Could you please explain either why this approach can not work, or how to make it work? Is there another, better, automated way to initialize the replica set? Could the docker image be improved to allow such initialization logic?
I just made it work with a wild experiment: I simply left out the config in my call to rs.initiate() in the JS script. For some reason the script then runs successfully and change streams become available to my NodeJS backend (presumably the default config uses an address that, unlike app-db, maps to the node at init time).
I will post everything that's needed to run a MongoDB docker with change streams enabled:
# Dockerfile
FROM mongo
# place the init script where the entrypoint picks it up on first start
COPY initiateReplSet.js /docker-entrypoint-initdb.d/
# arguments starting with "-" are passed on to mongod by the entrypoint
CMD ["--replSet", "rs0"]
// initiateReplSet.js
// no config object: initiate with defaults, which resolve to this node itself
rs.initiate()
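For completeness, building and starting it could look like this (a hedged sketch; the image tag, the network name app-net and the container name app-db are assumptions matching the hostname used above):
# build the image and run it under the hostname the app expects
docker build -t app-db .
docker network create app-net
docker run -d --name app-db --network app-net app-db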

Running a one off command in a docker cloud stack

I deployed a stack to Docker Cloud (cloud.docker.com, not in swarm mode).
Everything is running fine, but I have a postgres database, and a separate container with scripts that initialize the database structure (I need certain tables). I only need to run this once, so I thought about executing this container in the stack.
However, it doesn't seem to be possible to run a single container (the docker-cloud container commands don't have a run sub-command).
Is there a way to execute one-off scripts in the stack?

Backup Oracle Physical Standby through DBMS_SCHEDULER job

Friends!
We have a lot of primary databases, each with its physical standby database in a Data Guard configuration. Each primary database runs on its own server, and each physical standby on its own server.
In EM12c we've configured scheduler jobs to back up our primary databases. Unfortunately, when a server is really busy, the Agent suspends backup execution and we don't get a backup according to our schedule.
So we disabled our backup jobs in EM12c and want to perform the backups on the physical standby using the procedure DBMS_SCHEDULER.CREATE_JOB.
Since a physical standby is a read-only, per-block copy of the primary database, I would have to create the scheduler job on the primary, and it would then be applied to the standby.
So, the question is: is this possible? And if yes, how do I realize this in a script?
Something like this:
check database_role
if role='PHYSICAL STANDBY'
then execute backup script
else nothing to do..
If it's not possible, what is the best solution for this task? Is there a way to solve this without creating a cron task with its own script on each server? Is it possible to use one global script from the recovery catalog database?
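The role check itself is easy to script; a hedged sketch (the <pswd> placeholder and the stored script name match the crontab script further down):
#!/bin/sh
# Hedged sketch: run the backup only when this database is a physical standby.
ROLE=$(sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pages 0
select database_role from v\$database;
exit
EOF
)
if [ "$ROLE" = "PHYSICAL STANDBY" ]; then
  # <pswd> is a placeholder, as in the script below
  rman target / catalog rmancat/<pswd>@rmancat script 'backup_database'
fi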
Kris said that I can't run scheduled jobs from a physical standby database.
So I'll schedule my Linux script with crontab.
My Linux script is:
#!/usr/bin/bash
LOG_PATH=/home/oracle/scripts/logs; export LOG_PATH
TASK_NAME=backup_database_inc0; export TASK_NAME
CUR_DATE=`date +%Y.%m.%d-%H:%M`; export CUR_DATE
LOGFILE=$LOG_PATH/$TASK_NAME.$CUR_DATE.log; export LOGFILE
# run the global stored script from the recovery catalog, logging to $LOGFILE
rman target / catalog rmancat/<pswd>@rmancat script 'backup_database' log $LOGFILE
if [ $? -eq 0 ]
then
  mail -s "$ORACLE_UNQNAME Backup Status: SUCCESS" dba@server.mail < $LOGFILE
  exit 0
else
  mail -s "$ORACLE_UNQNAME Backup Status: FAILED" dba@server.mail < $LOGFILE
  exit 1
fi
I don't want to create a Linux file on each host just to call the global backup script from my recovery catalog. Is it possible to configure a centralized backup execution schedule for all hosts? Can I configure ssh from one host to all database hosts and execute my backup script there?
Thanks in advance for your answers.
I highly recommend using Enterprise Manager to run backup jobs. EM integrates nicely with the RMAN catalog and with each instance, so the job can consist of just the RMAN "execute global script" command; everything else is done by EM.
I have the job scheduled to run only on standbys through the EM job scheduler, without having to change anything during a switchover.
I just cascade the jobs: one step checks whether the target is a standby, and the backup step runs only if that check succeeds; otherwise the next step doesn't run.
This way the monitoring is also integrated with the global database monitoring, and you don't have to set up error catching inside a shell script at the OS level.

The best practices for PostgreSQL docker container initialization with some data

I've created a docker image with PostgreSQL running inside, exposing port 5432.
The image doesn't contain any database; the container is an empty PostgreSQL database server.
In (or during) the docker run command I'd like to:
attach a db file
create a db via SQL query execution
restore a db from a dump
I don't want to keep the data after the container is stopped; it's just a temporary development server.
I suspect it's possible to keep my docker run command string quite short/simple.
It's probably possible to mount some external folder with the db/sql/dump in the run command and then create the db during container initialization.
What is the recommended way, and what are the best practices, for accomplishing this task? Perhaps somebody can point me to corresponding docker examples.
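For the mount-based idea, something along these lines should already work if the image follows the official postgres image's /docker-entrypoint-initdb.d convention (a hedged sketch; dump.sql and the password are placeholders). With --rm and no data volume, nothing survives the container:
docker run --rm -p 5432:5432 \
  -e POSTGRES_PASSWORD=secret \
  -v "$PWD/dump.sql":/docker-entrypoint-initdb.d/dump.sql \
  postgres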
This is a good question, and probably something other folks have asked themselves more than once.
According to the docker guide you would not do this in a RUN command. Instead, you would create an ENTRYPOINT or CMD in your Dockerfile that calls a custom shell script rather than calling the postgres process directly. In this scenario the DB is created in a "real" filesystem, but then cleaned up during shutdown of the container.
How would this work? The container starts, calls the ENTRYPOINT or CMD as usual, and consumes the init script to get the DB filled. Then, at the moment the container is stopped, the same script is notified with a signal and manually drops the database content.
CMD ["cleanAndRun.sh"]
Here is a sketch of cleanAndRun.sh, taken from the Docker documentation and modified for your needs. Please remember it is only a sketch and needs modification:
#!/bin/sh
# The command run by the trap must also stop the DB; the dropdb call below is
# not enough by itself, it only demonstrates how to run something in the
# stop-container scenario!
trap "dropdb <params>" HUP INT QUIT TERM
# init your DB -every- time the container starts
<init script to clean and import dump>
# start postgres in the background and wait, so the shell can receive the
# signal and run the trap (a foreground postgres would block signal handling)
postgres &
wait $!
echo "exited $0"

How to run a mongo script from Heroku scheduler?

I have implemented a JavaScript script for my mongo database. This script is called getMetrics.js and I can execute it by running mongo getMetrics.js from my computer.
Now I want to execute that script automatically once per day. To do so, I have created a Heroku app and added the Scheduler add-on (https://devcenter.heroku.com/articles/scheduler).
My main problem is that the scheduled task would run the command mongo getMetrics.js, and this will fail because the mongo command is not installed in my Heroku app.
How can I run this script from Heroku?
Thanks a lot for your help.
I did the following in a similar case:
Download MongoDB for Linux: https://www.mongodb.com/download-center#community
The bin folder contains the mongo binary.
Make this binary available in your Heroku instance (e.g. if your Heroku app is configured with your git repo, check the binary in alongside your script).
Make sure the folder you keep the binary in is on the PATH; a safe place is inside /bin.
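The Scheduler task can then call the vendored binary directly; a hedged sketch (the bin/mongo path and a MONGODB_URI config var are assumptions):
# command entered in the Heroku Scheduler "run command" field
bin/mongo "$MONGODB_URI" getMetrics.js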