I've added
command: bash -c './wait-for-it.sh -t 4 -s php:9000 -- bash run-ssh-on-php.sh'
to my docker-compose.yml:
php:
  build: docker/php
  user: "$LOCAL_USER_ID:$LOCAL_GROUP_ID"
  depends_on:
    - mysql
    - rabbitmq
    - mail
    - phantomjs
    - data
  volumes_from:
    - data
  ports:
    - "9000:9000"
  environment:
    - SYMFONY_ENV
  command: bash -c './wait-for-it.sh -t 4 -s php:9000 -- bash run-ssh-on-php.sh'
and it seems that it wasn't executed at all. How can I check whether it was? I tried adding touch somefile to the script, but nothing was created.
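One way to check whether the command was actually used (a sketch, assuming the service is named php as above, under the project's default container naming):

# Show the command Compose configured on the container
docker inspect --format '{{json .Config.Cmd}}' "$(docker-compose ps -q php)"

# wait-for-it prints its progress to the logs unless run with -q
docker-compose logs php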
I want to be able to recreate some base data from a dump whenever the mongo-data folder is deleted and docker-compose up is called.
The problem I'm facing is that the app container does not have mongo.
These are my files:
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
Dockerfile:
FROM node:14.15.5
RUN mkdir -p /testapp
WORKDIR /testapp
EXPOSE 3000
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
bash ./__backup__/db/restore.sh
bash ./__backup__/app/restore.sh
yarn install
yarn start:dev
__backup__/app/restore.sh:
#!/bin/bash
if [[ ! -d '/testapp/uploads' ]]
then
  tar -xvf ./uploads.tar.gz -C /testapp/
fi
__backup__/db/restore.sh:
#!/bin/bash
until mongo --eval "print(\"waited for connection\")"
do
  sleep 1
done
if [[ ! -d '/testapp/mongo-data' ]]
then
  mongorestore --archive=./db.dump
fi
Is there any way to run these restore.sh files after the mongo service is up, or to run mongo from the app container?
If I understand the question correctly, you want to restore MongoDB to a certain state every time your app launches, and you're asking whether there's a way to do it after the MongoDB container launches.
There's a tool called docker-compose-wait; quoting its GitHub README, it's a small command-line utility to wait for other docker images to be started while using docker-compose.
It's fairly simple to use: add it to the image, run /wait to wait for the services to be up, and then get on with whatever you want next.
So according to your current setup, your Dockerfile could be like this:
FROM node:14.15.5
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
RUN mkdir -p /testapp
WORKDIR /testapp
ADD . .
EXPOSE 3000
## Launch the wait tool and then your entrypoint.sh
ENTRYPOINT /wait && /testapp/entrypoint.sh
Your entrypoint.sh already calls the restore scripts, so it stays as it is. In your docker-compose.yml, add an environment variable that tells the wait tool which services to wait for:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
WAIT_HOSTS: mongo:27017
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
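One gap the wait tool doesn't close (my note, an assumption beyond the original answer): the node:14.15.5 image ships neither the mongo shell nor mongorestore, so the db restore script would still fail inside app. A sketch of running the restore through the mongo container instead, assuming db.dump sits in the project directory:

# Copy the dump into the running mongo container and restore it there;
# test_mongo is the container_name from the compose file above
docker cp ./__backup__/db/db.dump test_mongo:/tmp/db.dump
docker exec test_mongo mongorestore --archive=/tmp/db.dump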
This is my first time with Docker. I've been working on this problem for two days; it would make me very happy to find a solution.
I'm running the following docker-compose.yml with docker-compose up:
version: '3.3'
services:
  base:
    networks:
      - brain_storm-network
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    build: "./brain_storm"
  data_base:
    image: mongo
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - '27017:27017'
  api:
    build: "./brain_storm/api"
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - 5000:5000
    depends_on:
      - data_base
      - base
    restart: on-failure
The base Dockerfile inside ./brain_storm does the following:
FROM brain_storm-base:latest
RUN mkdir -p /usr/src/brain_storm/brain_storm
ADD . /usr/src/brain_storm/brain_storm
and the Dockerfile inside brain_storm/api is:
FROM brain_storm-base:latest
CMD cd /usr/src/brain_storm \
&& python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
I'm getting this error:
brain_storm_api_1 exited with code 1
api_1 | /usr/local/bin/python: Error while finding module specification for 'brain_storm.api' (ModuleNotFoundError: No module named 'brain_storm')
pwd says I'm in '/' rather than in the source directory when the base Dockerfile runs, so that might be the problem. But how do I solve it without hard-coding /home/user/brain_storm in the Dockerfile? I want to keep the location of the brain_storm folder general.
How can I make the Dockerfile see and take files from the current directory (the one the Dockerfile itself is in)?
You should probably define the WORKDIR instruction in both your Dockerfiles. WORKDIR sets the working directory of the Docker container at any given point; any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction that follows is executed in that directory:
base:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
COPY . .
api:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
CMD python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
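If it helps, a quick sanity check of the working directory (a sketch; the brain_storm-api tag is hypothetical):

# Build the api image; a trailing command to docker run overrides CMD,
# so pwd shows the WORKDIR in effect
docker build -t brain_storm-api ./brain_storm/api
docker run --rm brain_storm-api pwd   # should print /usr/src/brain_storm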
I'm new to Mongo, and I have a web app that uses Mongo to store data. I can get the app to run with docker-compose, but the data gets left out when I do. The Mongo data lives on a remote host, and I need to copy all of it into the Mongo container so that the dockerized app runs with the same data.
I've attempted to dump the data from the remote host onto the container, based on some code I found while researching this.
# Backup DB
docker run \
  --rm \
  --link running_mongo:mongo \
  -v /data/mongo/backup:/backup \
  mongo \
  bash -c 'mongodump --out /backup --host 10.22.150.7:27017'
# Download the dump
scp -r jsikala@10.22.150.7:/data/mongo/backup ./backup
The result I got from doing that is:
[jsikala@koala-jsikala koala]$ docker run --rm --link running_mongo:3.2.0 -v /data/mongo/backup:/backup mongo bash -c 'mongodump --out /backup --host 10.22.150.7:27017'
Unable to find image 'mongo:latest' locally
latest: Pulling from library/mongo
Digest: sha256:93c98ffc714faa1fa501297d35670a62835dbb7e62243cee0c491433ea523f30
Status: Image is up to date for mongo:latest
docker: Error response from daemon: could not get container for running_mongo: No such container: running_mongo.
See 'docker run --help'.
I assume I did something trivially wrong.
This is my docker-compose file, for a bit of context on what is supposed to happen:
version: "3"
volumes:
data:
external:
name: ${MONGO_VOLUME_NAME}
services:
rails:
image: rails2
container_name: koala_rails_${USER}
environment:
- KOALA_ENV
- RAILS_PORT
- KOALA_INGEST_URL=${INGEST_PROTOCOL}://ingest:${INGEST_PORT}
- KOALA_MONGO_URL=mongo_service:27017
- KOALA_REDIS_URL=redis_service:6379
- KOALA_PKI_IN_DEV
- KOALA_USER_ID_HEADER
- USER
- USERNAME
- KOALA_REGISTER_USER_URL
- KOALA_SECURITY_VALIDATOR_URL
- CERT_FILE_PEM=/usr/src/app/certs/public.pem
- PRIVATE_CERT_FILE_PEM=/usr/src/app/certs/private-key.pem
- SSL_CA_FILE=/usr/src/app/certs/ca.pem
- LOGNAME
- KOALA_SECRET_KEY_BASE
- KOALA_MONGO_USERNAME
- KOALA_MONGO_PASSWORD
- KOALA_HELP_URL
- KOALA_CONTACT_EMAIL
- KOALA_USE_CERTS
- BUNDLE_GEMFILE
- KOALA_SERVER_URL
- RAILS_SERVE_STATIC_FILES
- RAILS_LOG_TO_STDOUT
ports:
- "${RAILS_PORT}:${RAILS_PORT}"
volumes:
- ${CERT_FILE_PEM}:/usr/src/app/certs/public.pem
- ${PRIVATE_CERT_FILE_PEM}:/usr/src/app/certs/private-key.pem
- ${SSL_CA_FILE}:/usr/src/app/certs/ca.pem
links:
- mongo_service
- redis_service
- ingest
depends_on:
- mongo_service
- redis_service
mongo_service:
image: mongo:3.2.0
volumes:
- data:/data/db
ports:
- "27017:27017"
redis_service:
image: redis
restart: always
ports:
- "6379:6379"
ingest:
image: ingest
container_name: koala_ingest_${USER}
extra_hosts:
- csie.as.northgrum.com:10.8.131.12
environment:
- KOALA_ENV
- KOALA_CONFIG_FILE=/go/config.yml
- INGEST_PORT
- LOGNAME
- KOALA_JIRA_URL
- KOALA_JIRA_SESSION_URL
- CERT_FILE_PEM=/go/certs/public.pem
- PRIVATE_CERT_FILE_PEM=/go/certs/private-key.pem
- SSL_CA_FILE=/go/certs/ca.pem
- KOALA_REDIS_URL=redis_service:6379
- KOALA_MONGO_URL=mongo_service:27017
- KOALA_USE_CERTS
- KOALA_MONGO_USERNAME
- KOALA_MONGO_PASSWORD
- JIRA_USERNAME=jsikala
- JIRA_PASSWORD=changeme123
ports:
- "${INGEST_PORT}:${INGEST_PORT}"
volumes:
- ${CERT_FILE_PEM}:/go/certs/public.pem
- ${PRIVATE_CERT_FILE_PEM}:/go/certs/private-key.pem
- ${SSL_CA_FILE}:/go/certs/ca.pem
links:
- mongo_service
depends_on:
- mongo_service
- redis_service
Essentially, once the docker-compose file is run, the app should deploy with the same data it has on the remote host. Since I can't seem to get the data on the remote host exported/dumped onto my container, the app doesn't have the data it needs.
docker run is the command you'd use to start a new mongo container. You shouldn't need to start a new container just to dump data from an existing one.
If you want to run a command within the existing container, run docker ps to find your container's name, then docker exec to run a command inside it (or to connect to a shell within it).
You shouldn't need to connect to the container at all to run mongodump, though -- you should be able to run it from anywhere with the correct host, port, and credentials.
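For what it's worth, a rough sketch of those steps against this compose file (the container name placeholder and backup paths are assumptions):

# Find the name Compose gave the mongo_service container
docker ps --format '{{.Names}}\t{{.Image}}'

# Run mongodump inside that existing container
docker exec <mongo_container_name> mongodump --out /tmp/backup

# Or skip containers entirely and dump straight from the remote host
mongodump --host 10.22.150.7 --port 27017 --out ./backup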
I'm trying to mount my postgres.conf and pg_hba.conf using docker-compose, and I'm having difficulty understanding why it works when run with the docker CLI but not with docker-compose.
The following docker-compose file causes the container to crash with this error:
/usr/local/bin/docker-entrypoint.sh: line 176: /config_file=/etc/postgresql/postgres.conf: No such file or directory
docker-compose.yml
services:
  postgres-master:
    image: postgres:11.4
    container_name: postgres-master
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      - /home/agilob/dockers/pg/data:/var/lib/postgresql/data:rw
      - $PWD/pg:/etc/postgresql:rw
      - /etc/localtime:/etc/localtime:ro
    hostname: 'primary'
    environment:
      - PGHOST=/tmp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
      - MAX_CONNECTIONS=10
      - MAX_WAL_SENDERS=5
      - PG_MODE=primary
      - PUID=1000
      - PGID=1000
    ports:
      - "5432:5432"
    command: 'config_file=/etc/postgresql/postgres.conf hba_file=/etc/postgresql/pg_hba.conf'
This command works fine:
docker run -d --name some-postgres -v "$PWD/postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
It also works when I remove the command: section and run the same docker-compose file:
$ docker-compose -f postgres-compose.yml up -d
Recreating postgres-master ... done
$ docker exec -it postgres-master bash
root@primary:/# cd /etc/postgresql
root@primary:/etc/postgresql# ls
pg_hba.conf postgres.conf
The files are present in /etc/postgresql, and the files in $PWD/pg exist:
$ ls pg
pg_hba.conf postgres.conf
The following works fine, because command: replaces the image's CMD wholesale, so it has to name the postgres executable and pass each setting with a -c flag:
command: postgres -c config_file='/etc/postgresql/postgres.conf' -c 'hba_file=/etc/postgresql/pg_hba.conf'
I can't find a way to execute the following commands from a docker-compose.yml file:
rails db:setup
rails db:init_data
I tried to do that as follows and it failed:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ["rails", "db:setup"]
    command: ["rails", "db:init_data"]
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Any idea what's going wrong here? Thank you.
The source code is on GitHub.
You can do one of two things, in my opinion:
Change command: to the following, because two command keys are not allowed in a compose file:
command:
  - /bin/bash
  - -c
  - |
    rails db:setup
    rails db:init_data
Alternatively, use supervisord (see the supervisord web page).
The solution that worked for me was to remove the CMD instruction from the Dockerfile, because a command option in docker-compose.yml would have overridden it anyway.
So the Dockerfile will look like this:
FROM ruby:2.5.1
LABEL maintainer="DECATHLON"
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends nodejs
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install
COPY . /usr/src/app/
Then add the command option to the docker-compose file:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command:
      - |
        rails db:reset
        rails db:init_data
        rails s -p 3000 -b '0.0.0.0'
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
If the above solution does not work for somebody, there is an alternative:
Create a shell script in the project root and name it entrypoint.sh, for example:
#!/bin/bash
set -e
bundle exec rails db:reset
bundle exec rails db:migrate
exec "$#"
Declare the entrypoint option in the docker-compose file:
version: '3'
services:
  web:
    build: .
    entrypoint:
      - /bin/sh
      - ./entrypoint.sh
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ['./wait-for-it.sh', 'database:5432', '--', 'bundle', 'exec', 'rails', 's', '-p', '3000', '-b', '0.0.0.0']
  database:
    image: postgres:9.6
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
I also use the wait-for-it script to ensure the DB is started before Rails boots.
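One host-side detail worth checking (my note, an assumption rather than part of the original answer): because the project directory is bind-mounted and command: runs ./wait-for-it.sh directly, the script needs its executable bit set on the host:

# without this, startup can fail with a permission-denied error
chmod +x wait-for-it.sh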
Hope this helps. I pushed the modifications to the GitHub repo.