How to create a replica-set-enabled mongo service with GitHub Actions - mongodb

I want to leverage GitHub Actions' service containers to create a mongo instance with replica set support. This is what I have right now:
mongo:
  image: mongo:5.0.3
  volumes:
    - /d/test-db.d
  ports:
    - 27017:27017
  options: >-
    --entrypoint /usr/bin/docker run --name mongo_container --replSet test --bind_ip_all
    --health-cmd "bash -c \"if [ $(mongo mongo_container --quiet --eval 'rs.initiate().ok || rs.status().ok') -eq 1 ]; then exit 1; else exit 0; fi\""
    --health-interval 10s
    --health-timeout 5s
    --health-retries 5
    --name test
However, GitHub Actions gives me the error 'denied: requested access to the resource is denied' when trying to call docker run as the entrypoint. I suppose I'm not permitted to run docker run directly.
I also tried setting --replSet test and --bind_ip_all within the options array, but they are not recognized and throw an error.
I see nothing in the documentation regarding replica sets. Are they even supported as a service?
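(A workaround sketch, not from the original question and untested: the services block has no field for overriding the image's command, and it cannot run docker run for you, so one option is to drop the service and start the container in a regular step, passing the mongod flags directly and initiating the replica set once the server answers.)
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Start MongoDB with a single-node replica set
        run: |
          docker run -d --name mongo -p 27017:27017 mongo:5.0.3 --replSet test --bind_ip_all
          # retry until mongod accepts connections, then initiate (or confirm) the replica set
          until docker exec mongo mongo --quiet --eval 'rs.initiate().ok || rs.status().ok'; do
            sleep 2
          done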

Related

docker-compose create user in mongodb

What is the best solution to create a user and database in MongoDB using docker-compose?
mongo:
  restart: always
  image: mongo:latest
  container_name: "mongodb"
  environment:
    - MONGODB_USERNAME=test
    - MONGODB_PASSWORD=test123
    - MONGODB_DATABASE=test1
  volumes:
    - ./data/db:/var/micro-data/mongodb/data/db
    - ./setup:/setup
  ports:
    - 27017:27017
  command: mongod --smallfiles --logpath=/dev/null # --quiet
The MONGODB_* environment variables don't work for me.
With https://hub.docker.com/_/mongo/ you need to start the db with auth disabled, wait for mongo to spin up, create a user, restart the container with auth enabled.
https://hub.docker.com/r/bitnami/mongodb/ has a handy script added:
You can create a user with restricted access to a database while starting the container for the first time. To do this, provide the MONGODB_USERNAME, MONGODB_PASSWORD and MONGODB_DATABASE environment variables.
$ docker run --name mongodb \
-e MONGODB_USERNAME=my_user -e MONGODB_PASSWORD=password123 \
-e MONGODB_DATABASE=my_database bitnami/mongodb:latest
It seems like you have the Bitnami environment variables set up but use the original image (image: mongo:latest), where they have no effect.
So either use image: bitnami/mongodb:latest, or add the user manually.
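For the manual route, note that newer tags of the official image also understand initialization environment variables and run any scripts placed in /docker-entrypoint-initdb.d on first start. A minimal sketch (credentials and file layout are illustrative):
mongo:
  image: mongo:latest
  environment:
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=rootpass123
    - MONGO_INITDB_DATABASE=test1
  volumes:
    # e.g. ./setup/create-user.js calling db.createUser(...) for the restricted user
    - ./setup:/docker-entrypoint-initdb.d:ro
  ports:
    - 27017:27017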
Update:
Starting from v3.0 you can benefit from the localhost exception, so you don't need to restart the container. Instead, you can start it with authentication enabled, wait some time for the server to start listening, and create users from within the container, e.g.
docker exec mongo4 mongo test1 \
--eval 'db.createUser({user: "test", pwd: "test123", roles: [ "readWrite", "dbAdmin" ]});'
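To verify that the user was created, a quick check along the same lines (a sketch; db.auth returns 1 on success, and mongo4 is the container name from the example above):
docker exec mongo4 mongo test1 --eval 'db.auth("test", "test123");'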

Let other containers access the official mongo docker image

I have several Docker containers; one of them is the official MongoDB image.
Here's part of my docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo
    container_name: mongo01
    # command: ["mongod", "-f", "/etc/mongo/mongod.conf"]
    volumes:
      - ./data/mongodata:/data/db
      # - ./config/mongo:/etc/mongo
    restart: always
    ports:
      - "27017:27017"
I can access the mongo service from the host (my system), but according to Mongo's new security policy the default config limits access to 127.0.0.1 only. I know the relevant config; it's:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
If I could get the mongo image to read my custom config, I could resolve the problem. I tried mounting a custom config file (- ./config/mongo:/etc/mongo) and running mongod with command: ["mongod", "-f", "/etc/mongo/mongod.conf"], but it didn't work.
It seems mongod starts in the container as process 1, so trying to run it with a custom command doesn't work; even when I shut down mongod in the container with mongod --shutdown, it shut down the whole container. (I wanted to stop mongod and then rerun it with mongod --bind_ip_all.)
So the problem is: how can we change the mongo image's config file?
The mongo docker image already has an ENTRYPOINT set, and it is basically mongod, so in your command (CMD) you can pass extra arguments to mongod.
A simple docker run:
docker run -d mongo --bind_ip_all
or with Compose:
version: '3'
services:
  mongo:
    image: mongo
    command: ["--bind_ip_all"]
    ports:
      - "27017:27017"
The entrypoint for the official mongo image already contains a step to add --bind_ip_all as long as you don't explicitly bind a specific IP:
# MongoDB 3.6+ defaults to localhost-only binding
haveBindIp=
if _mongod_hack_have_arg --bind_ip "$@" || _mongod_hack_have_arg --bind_ip_all "$@"; then
	haveBindIp=1
elif _parse_config "$@" && jq --exit-status '.net.bindIp // .net.bindIpAll' "$jsonConfigFile" > /dev/null; then
	haveBindIp=1
fi
if [ -z "$haveBindIp" ]; then
	# so if no "--bind_ip" is specified, let's add "--bind_ip_all"
	set -- "$@" --bind_ip_all
fi
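Alternatively, since the entrypoint also parses a config file (the _parse_config / jq branch above), mounting a custom mongod.conf and passing it as the command should work too; an untested sketch (paths are illustrative):
version: '3'
services:
  mongo:
    image: mongo
    command: ["--config", "/etc/mongo/mongod.conf"]
    volumes:
      - ./config/mongo/mongod.conf:/etc/mongo/mongod.conf:ro
    ports:
      - "27017:27017"
where ./config/mongo/mongod.conf contains, for example:
net:
  port: 27017
  bindIp: 0.0.0.0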

I'm not able to connect to mongo from a Docker container

I'm new to Docker,
so I followed a tutorial (parts 6, 7 and 8) in order to use and learn Docker in a project.
The problem is that when I use docker-compose.yml to build the images on my laptop,
my stack_server can connect to mongo.
But if I build the images from Docker Hub and pull and run them separately on my laptop, my stack_server CAN'T connect to mongo.
Here is my docker-compose.yml:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: --smallfiles
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
However, my stack_client can connect to the stack_server.
Here are the commands I use to run my images (they are public):
docker run -i -t -p 27017:27017 mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
And my error log:
bryan@debian-dev7:~$ sudo docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
[sudo] password for bryan:
Express server listening on 8080, in development mode
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
at Object._errnoException (util.js:1021:11)
at _exceptionWithHostPort (util.js:1043:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1175:14)
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017' }
Here is my stack_server Dockerfile:
FROM node:latest
# Set in what directory commands will run
WORKDIR /home/app
# Put all our code inside that directory that lives in the container
ADD . /home/app
# Make sure NPM is not cached, remove everything first
RUN rm -rf /home/app/node_modules/npm \
&& rm -rf /home/app/node_modules
# Install dependencies
RUN npm install
# Tell Docker we are going to use this port
EXPOSE 8080
# The command to run our app when the container is run
CMD ["node", "app.js"]
First solution (now deprecated):
docker run -i -t -p 27017:27017 --name mongo mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
I had to name my mongo container and link it with the --link parameter when running my stack_server.
From the Docker documentation:
Warning: The --link flag is a deprecated legacy feature of Docker.
It may eventually be removed.
Unless you absolutely need to continue using it,
we recommend that you use user-defined networks to facilitate communication
between two containers instead of using --link.
One feature that user-defined networks do not support that you can do with
--link is sharing environmental variables between containers.
However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
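Following that recommendation, the --link runs above can be replaced with a user-defined network; an untested sketch using the same images:
docker network create app-net
docker run -d --name mongo --network app-net -p 27017:27017 mongo
docker run -d --network app-net -p 80:80 mik3fly4steri5k/stack_client
docker run -d --network app-net -p 8080:8080 mik3fly4steri5k/stack_server
# on app-net, containers resolve each other by name,
# so the server can reach the database at mongodb://mongo:27017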
Run your stack_server with the command below:
sudo docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
Note: the --link flag is deprecated.
Your container is running. But is it healthy? I recommend that you implement a healthcheck.
It would actually look pretty much the same:
(Docker-compose YAML)
healthcheck:
  test: curl -sS http://127.0.0.1:8080 || echo 1
  interval: 5s
  timeout: 10s
  retries: 3
Docker health checks are a neat little feature that lets you attach a shell command to a container and use it to check whether the container's content is alive.
I hope it can help you.
more info https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
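Since this question is about Mongo rather than an HTTP service, the same pattern with a Mongo-specific probe might look like this (a sketch; db.adminCommand("ping") is a standard server command, the timings are arbitrary):
healthcheck:
  test: mongo --quiet --eval 'db.adminCommand("ping").ok' || exit 1
  interval: 5s
  timeout: 10s
  retries: 3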
To start MongoDB on a desired port, you would do the following:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: mongod --port 27017
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
Let's look at the error:
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
The server is trying to connect to localhost instead of mongo. You need to configure the server to connect to mongo at mongo:27017.
mongo is the alias Docker creates when linking the containers to each other.
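In practice that means the connection string must come from configuration rather than being hard-coded to 127.0.0.1. A sketch (assuming the server reads a MONGO_URL environment variable; the variable name and database are invented here):
server:
  build: ./server
  environment:
    - MONGO_URL=mongodb://mongo:27017/mydb
  links:
    - mongo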

Docker-compose: Can I automate compose to run commands inside a container?

Okay, so I know I can automate my "docker run" instructions like this. Say I would do this without Compose:
First create the volume
docker volume create --name mongodb-shard-1-node-1
Then the container
docker run --name mongodb-node-1 -d -v mongodb-node-1:/data/db -p 27031:27017 --link mongo-node-2:mongo mongo --replSet rs0 --smallfiles --oplogSize 128
This would be the same as including this in the docker-compose.yml file:
mongodb-node-1:
  image: mongo
  volumes:
    - "mongodb-node-1:/data/db"
  ports:
    - "27031:27017"
  container_name: mongodb-node-1
  external_links:
    - "mongodb-node-3:mongo"
  command: --replSet rs0 --smallfiles --oplogSize 128
But I also have to run commands inside the mongodb shell. To do this, I first use exec to enter the shell like this:
docker exec -it mongodb-shard-1-node-1 mongo
Afterwards, inside the shell, I need to run commands such as
rs.initiate()
and others like
rs.addArb("172.17.0.6:27017")
etc...
Can I automate these last steps with docker-compose? Is it possible to automate this in docker at all?
You can't directly automate it like that, sadly.
As a workaround, you could extend the image with a shell script that starts Mongo and then runs the specified commands. You could even pass in that IP address in an environment variable if it needs to be modifiable.
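For example (an untested sketch; the script name and replica-set details are illustrative): start mongod in the background, wait until it answers, run the setup commands, then keep the process in the foreground:
#!/bin/sh
# init-replica.sh (hypothetical name), meant to replace the image's default command
mongod --replSet rs0 --bind_ip_all &
# wait until mongod accepts connections
until mongo --quiet --eval 'db.adminCommand("ping").ok' > /dev/null 2>&1; do
  sleep 1
done
# run the shell commands from the question
mongo --eval 'rs.initiate()'
# keep mongod in the foreground so the container stays alive
wait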
Kontena has leveraged the docker-compose.yml file format and introduced this kind of functionality by adding a post_start hook:
peer:
  image: mongo:3.2
  stateful: true
  command: --replSet kontena --smallfiles
  instances: 3
  hooks:
    post_start:
      - cmd: sleep 10
        name: sleep
        instances: 1
        oneshot: true
      - cmd: mongo --eval "printjson(rs.initiate());"
        name: rs_initiate
        instances: 1
        oneshot: true
      - cmd: mongo --eval "printjson(rs.add('%{project}-peer-2'))"
        name: rs_add2
        instances: 1
        oneshot: true
      - cmd: mongo --eval "printjson(rs.add('%{project}-peer-3'))"
        name: rs_add3
        instances: 1
        oneshot: true
https://github.com/kontena/examples/blob/master/mongodb-cluster/kontena.yml
Running kontena app deploy will deploy all three MongoDB peers and add them to the replica set.

Docker wait for postgresql to be running

I am using PostgreSQL with Django in my project. They run in different containers, and the problem is that I need to wait for Postgres before starting Django. Right now I'm doing it with sleep 5 in the command.sh file for the Django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the Postgres protocol.
Is there a way to do it?
I've spent some hours investigating this problem and found a solution.
Docker's depends_on only considers service startup, not readiness. As soon as the db is started, the app service tries to connect to it, but the db is not ready to receive connections yet. So you can check the db's health status in the app service and wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
  my-app:
    build: .
    command: su -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    volumes:
      - .:/app_directory
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
    volumes:
      - database:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  database:
In this case it's not necessary to create a .sh file.
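While debugging a healthcheck like the one above, you can watch the health state from the host with a standard docker command (the container name is whatever Compose assigned, e.g. <project>_db_1):
docker inspect --format '{{.State.Health.Status}}' <project>_db_1
# prints: starting, healthy, or unhealthy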
This will successfully wait for Postgres to start (specifically, the command: line below). Just replace npm start with whatever command you'd like to run after Postgres has started.
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
If you have psql you could simply add the following code to your .sh file:
RETRIES=5
until psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done
The simplest solution is a short bash script:
while ! nc -z HOST PORT; do sleep 1; done;
./run-smth-else;
The problem with your solution, tiziano, is that curl is not installed by default, and I wanted to avoid installing additional stuff. Anyway, I did what bereal said. Here is the script if anyone needs it:
import socket
import time
import os

port = int(os.environ["DB_PORT"])  # 5432

# retry until the TCP port accepts a connection;
# create a fresh socket on each attempt, since a failed socket can't be reused
while True:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('myproject-db', port))
        s.close()
        break
    except socket.error:
        time.sleep(0.1)
In your Dockerfile add wait and change your start command to use it:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
CMD /wait && npm start
Then, in your docker-compose.yml add a WAIT_HOSTS environment variable for your api service:
services:
  api:
    depends_on:
      - postgres
    environment:
      - WAIT_HOSTS=postgres:5432
  postgres:
    image: postgres
    ports:
      - "5432:5432"
This has the advantage that it supports waiting for multiple services:
environment:
  - WAIT_HOSTS=postgres:5432, mysql:3306, mongo:27017
For more details, please read their documentation.
wait-for-it is a small wrapper script which you can include in your application's image to poll a given host and port until it's accepting TCP connections.
It can be cloned in the Dockerfile with the command below:
RUN git clone https://github.com/vishnubob/wait-for-it.git
docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it/wait-for-it.sh", "db:5432", "--", "npm", "start"]
db:
image: postgres
Why not curl? Once Postgres is listening it accepts the TCP connection but sends no HTTP response, so curl fails with error 52 ('Empty reply from server'); grepping for '52' therefore detects a live server. Something like this:
while ! curl http://$POSTGRES_PORT_5432_TCP_ADDR:$POSTGRES_PORT_5432_TCP_PORT/ 2>&1 | grep '52'
do
  sleep 1
done
It works for me.
I managed to solve my issue by adding a health check to the docker-compose definition:
db:
  image: postgres:latest
  ports:
    - 5432:5432
  healthcheck:
    test: "pg_isready --username=postgres && psql --username=postgres --list"
    timeout: 10s
    retries: 20
then in the dependent service you can check the health status:
my-service:
  image: myApp:latest
  depends_on:
    kafka:
      condition: service_started
    db:
      condition: service_healthy
source: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
If the backend application itself has a PostgreSQL client, you can use the pg_isready command in an until loop. For example, suppose we have the following project directory structure,
.
├── backend
│   └── Dockerfile
└── docker-compose.yml
with a docker-compose.yml
version: "3"
services:
postgres:
image: postgres
backend:
build: ./backend
and a backend/Dockerfile
FROM alpine
RUN apk update && apk add postgresql-client
CMD until pg_isready --username=postgres --host=postgres; do sleep 1; done \
    && psql --username=postgres --host=postgres --list
where the 'actual' command is just a psql --list for illustration. Then running docker-compose build and docker-compose up produces output in which the result of the psql --list command only appears after pg_isready logs postgres:5432 - accepting connections, as desired.
By contrast, I have found that the nc -z approach does not work consistently. For example, if I replace the backend/Dockerfile with
FROM alpine
RUN apk update && apk add postgresql-client
CMD until nc -z postgres 5432; do echo "Waiting for Postgres..." && sleep 1; done \
    && psql --username=postgres --host=postgres --list
then docker-compose build followed by docker-compose up gives a different result: the psql command throws a FATAL error because the database system is still starting up.
In short, using an until pg_isready loop (as also recommended here) is the preferable approach IMO.
There are a couple of solutions, as other answers mention.
But don't make it complicated. Just let it fail fast, combined with restart: on-failure. Your service will open a connection to the db and may fail the first time. Just let it fail; Docker will restart your service until it goes green. Keep your service simple and business-focused.
version: '3.7'
services:
  postgresdb:
    hostname: postgresdb
    image: postgres:12.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=Ceo
  migrate:
    image: hanh/migration
    links:
      - postgresdb
    environment:
      - DATA_SOURCE=postgres://user:secret@postgresdb:5432/Ceo
    command: migrate sql --yes
    restart: on-failure # will restart until it succeeds
Check out restart policies.
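For reference, these are the four restart policies Compose accepts (a summary added for context, not part of the original answer):
restart: "no"            # the default; never restart automatically
restart: always          # always restart the container when it stops
restart: on-failure      # restart only when the exit code is non-zero
restart: unless-stopped  # like always, unless the container was manually stopped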
None of the other solutions worked for me, except the following:
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydbname
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "mydbname", "-U", "myusername" ]
      interval: 5s
      timeout: 5s
      retries: 5
  otherservice:
    image: otherserviceimage
    depends_on:
      postgres:
        condition: service_healthy
Thanks to this thread: https://github.com/peter-evans/docker-compose-healthcheck/issues/16
Sleeping until pg_isready returns true is unfortunately not always reliable. If your postgres container has at least one initdb script specified, postgres restarts after it is first started during its bootstrap procedure, so it might not be ready yet even though pg_isready has already returned true.
What you can do instead is wait until the docker logs for that instance contain the string "PostgreSQL init process complete; ready for start up.", and only then proceed with the pg_isready check.
Example:
start_postgres() {
  docker-compose up -d --no-recreate postgres
}

wait_for_postgres() {
  until docker-compose logs | grep -q "PostgreSQL init process complete; ready for start up." \
    && docker-compose exec -T postgres sh -c "PGPASSWORD=\$POSTGRES_PASSWORD PGUSER=\$POSTGRES_USER pg_isready --dbname=\$POSTGRES_DB" > /dev/null 2>&1; do
    printf "\rWaiting for postgres container to be available ... "
    sleep 1
  done
  printf "\rWaiting for postgres container to be available ... done\n"
}

start_postgres
wait_for_postgres
You can use the manage.py command "check" to test whether the database is available (waiting 2 seconds and checking again if not).
For instance, if you do this in your command.sh file before running the migration, Django has a valid DB connection while running the migration command:
...
echo "Waiting for db.."
python manage.py check --database default > /dev/null 2> /dev/null
until [ $? -eq 0 ]; do
  sleep 2
  python manage.py check --database default > /dev/null 2> /dev/null
done
echo "Connected."

# Migrate the last database changes
python manage.py migrate
...
PS: I'm not a shell expert, please suggest improvements.
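One possible simplification of that loop (a sketch of the same idea, using until's own exit-status test instead of $?):
echo "Waiting for db.."
until python manage.py check --database default > /dev/null 2>&1; do
  sleep 2
done
echo "Connected."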
#!/bin/sh

POSTGRES_VERSION=9.6.11
CONTAINER_NAME=my-postgres-container

# start the postgres container
docker run --rm \
  --name $CONTAINER_NAME \
  -e POSTGRES_PASSWORD=docker \
  -d \
  -p 5432:5432 \
  postgres:$POSTGRES_VERSION

# wait until postgres is ready to accept connections
until docker run \
  --rm \
  --link $CONTAINER_NAME:pg \
  postgres:$POSTGRES_VERSION pg_isready \
  -U postgres \
  -h pg; do sleep 1; done
An example for a Node.js and Postgres API:
#!/bin/bash
# entrypoint.dev.sh

echo "Waiting for postgres to get up and running..."

while ! nc -z postgres_container 5432; do
  # postgres_container is the host; in my case it is a Docker container.
  # You can use localhost, for example, if your database is running locally.
  echo "waiting for postgres to be listening..."
  sleep 0.1
done

echo "PostgreSQL started"
yarn db:migrate
yarn dev
# Dockerfile
FROM node:12.16.2-alpine
ENV NODE_ENV="development"
RUN mkdir -p /app
WORKDIR /app
COPY ./package.json ./yarn.lock ./
RUN yarn install
COPY . .
CMD ["/bin/sh", "./entrypoint.dev.sh"]
If you want to run it with a single-line command, you can just connect to the container and check whether postgres is running:
docker exec -it $DB_NAME bash -c "\
  until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' > /dev/null 2>&1; do \
    echo 'Waiting for postgres server....'; \
    sleep 1; \
  done; \
  exit; \
"
echo "DB Connected !!"
Inspired by @tiziano's answer and the lack of nc or pg_isready, it seems that in a recent docker python image (python:3.9 here) curl is installed by default, and I have the following check running in my entrypoint.sh:
postgres_ready() {
  $(which curl) http://$DBHOST:$DBPORT/ 2>&1 | grep '52'
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available.'
I tried a lot of methods (Dockerfile, docker-compose YAML, bash scripts); only the last one, a Makefile, worked for me:
docker-compose up --build -d postgres
sleep 2
docker-compose up --build -d app