I want to be able to recreate some base data that is dumped when the mongo-data folder is deleted and docker-compose up is called.
The problem I'm facing is that the app container does not have mongo available.
These are my files:
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
Dockerfile:
FROM node:14.15.5
RUN mkdir -p /testapp
WORKDIR /testapp
EXPOSE 3000
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
sh ./__backup__/db/restore.sh
sh ./__backup__/app/restore.sh
yarn install
yarn start:dev
__backup__/app/restore.sh:
#!/bin/bash
if [[ ! -d '/testapp/uploads' ]]
then
tar -xvf ./uploads.tar.gz -C /testapp/
fi
__backup__/db/restore.sh:
#!/bin/bash
until mongo --eval "print(\"waited for connection\")"
do
sleep 1
done
if [[ ! -d '/testapp/mongo-data' ]]
then
mongorestore --archive ./db.dump
fi
Is there any way to run these restore.sh files after the mongo service is up, or to run mongo from the app container?
If I understand the question correctly, you want to restore the MongoDB to a certain state every time your app launches, and you're asking if there's a way to do it after MongoDB container launches.
There's a tool called docker-compose-wait. Quoting from its GitHub README, it's a small command-line utility to wait for other docker images to be started while using docker-compose.
It's fairly simple to use: add it to the image, run /wait to wait for the services to be up, and then get on to whatever you want next.
So according to your current setup, your Dockerfile could be like this:
FROM node:14.15.5
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
RUN mkdir -p /testapp
WORKDIR /testapp
ADD . .
EXPOSE 3000
## Launch the wait tool and then your entrypoint.sh
ENTRYPOINT /wait && /testapp/entrypoint.sh
Your entrypoint.sh already calls the restore scripts. In your docker-compose.yml, add an environment variable telling the tool which services to wait for:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
WAIT_HOSTS: mongo:27017
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
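With that in place, rebuilding and starting the stack is enough to exercise the wait step (a quick usage sketch; nothing here is specific to your app):

# rebuild the app image so the /wait binary is baked in, then start both services;
# the app's entrypoint.sh (and thus the restore scripts) only runs after mongo:27017 accepts TCP connections
docker-compose up --build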
I've searched over the web but couldn't find my answer anywhere.
I'm trying to run an API web service, using NestJS framework.
I'm running docker-compose that spins up the API server, a MongoDB instance, and a mongocryptd instance to allow Client-Side Field Level Encryption on my app.
I'm able to connect to the MongoDB instance, but not to the mongocryptd instance.
Docker-Compose file:
version: "3.7"
services:
api:
build:
context: .
dockerfile: Dockerfile
labels:
env: dev
args:
APP: appname
APP_PORT: 3000
ports:
- "3000:3000"
command: ["sh", "-c", "npm run start:app:dev"]
volumes:
- .:/app
mongodb:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
command: ["--auth"]
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: usr
MONGO_INITDB_ROOT_PASSWORD: pwd
ports:
- "27017:27017"
volumes: ["/private/var/services/mongodb:/data/db"]
mongocryptd:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
entrypoint: mongocryptd
restart: always
ports:
- "27020:27020"
volumes: ["/private/var/services/mongodb:/data/db"]
The Dockerfile used is mongo's official Dockerfile, supplied with args to build an enterprise version of the image that includes the enterprise features.
When trying to connect to the database from the app, I'm running:
MongooseModule.forRoot(`mongodb://usr:pwd@mongodb:27017`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  retryAttempts: 2,
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    extraOptions: {
      mongocryptdURI: `mongodb://mongocryptd:27020`,
      mongocryptdBypassSpawn: true
    }
  } as any
})
** This is the NestJS way of supplying the config. It's similar to mongoose: the first argument is the URI and the second is the settings object.
Without the autoEncryption options, I'm able to connect without any problems. That means that my database address is correct.
With the autoEncryption options, I'm getting MongooseServerSelectionError: connect ECONNREFUSED 172.25.0.4:27020 (the mongocryptd address). That means the IP is correct (DNS resolved), but the connection is refused. As I showed before, the port (27020) is published by the docker-compose file, and I even tried to add an EXPOSE step in the build itself.
BUT when I map the containers' network to the host (network_mode: "host"), the application is able to connect without any problems (changing the connection hostnames to localhost:27017 and localhost:27020, of course). So that must mean it's a Docker-related problem.
Additional things I've tried && a recap of what I tried:
Attach a volume to replace /etc/mongod.conf.orig with the following network configurations:
net:
  port: 27017
  bindIp: 0.0.0.0
  bindIpAll: true
Instead of attaching a volume, replacing the config above at the build step, before launching the mongo service.
I also tried changing the bindIp to the specific application IP that was given by the docker network.
All types of connection strings with & without user credentials, auth source, and default database.
Port 27020 is published in docker-compose & exposed on docker file.
I ran out of ideas. Any help is appreciated! :)
EDIT:
After more debugging, I can see that mongod is running with --bind_ip_all by default so changing the conf file shouldn't have an effect.
I also tried running mongocryptd with mongod's docker-entrypoint.sh entrypoint instead of overriding it.
Verify mongocryptd is running (ps awwxu, etc.)
Verify you can connect to it from bash on the same container where it is running using mongo.
Verify you can connect to it from host system using mongo.
Check mongocryptd logs (it's basically a mongod with some extra functionality).
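A minimal sketch of those checks, assuming the service name from the compose file above (mongocryptd, published on port 27020) and that ps and the mongo shell are available inside the image:

# is the daemon actually running inside the container?
docker-compose exec mongocryptd ps awwxu

# connect to it from inside the same container
docker-compose exec mongocryptd mongo --port 27020 --eval "db.isMaster()"

# connect to it from the host system (the compose file publishes 27020)
mongo --host localhost --port 27020 --eval "db.isMaster()"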
I had similar issues with mongocryptd, but I was setting up a PHP node instead of npm. I tried different solutions but didn't manage to succeed. I even tried a docker-compose.yaml similar to Asaf Kfir's, with the mongodb-enterprise-cryptd lib pre-installed, but had the same issue. Keep in mind that in the PHP node I was already able to install libmongocrypt-dev and mongodb-enterprise-cryptd via the Dockerfile. (I will leave the PHP Dockerfile below.)
I managed to link those three containers under the same IP address and verified with ncat that I could reach them. But when I tried to run tests from the PHP node, it started throwing:
MongoDB\Driver\Exception\BulkWriteException: Bulk write failed due to previous MongoDB\Driver\Exception\RuntimeException: key vault error: Invalid reply to find command.
I had this issue for two weeks and basically didn't know how to resolve it.
P.S. Remember these words: "The automatic feature of field level encryption is only available in MongoDB Enterprise 4.2 or later."
At that time my docker-compose.yaml file looked like this:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: unless-stopped
    tty: true
    ports:
      - "27017:27017"
      - "27020:27020"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: local-mongo-db
    container_name: mongodb
    restart: unless-stopped
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
  #MongoDB Service
  mongocryptd:
    image: local-mongocryptd
    container_name: mongocryptd
    entrypoint: mongocryptd
    restart: unless-stopped
    tty: true
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
volumes:
  dbdata:
    driver: local
The Mongo images were built from here: https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-with-docker/
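The build step presumably looked roughly like this (an assumption based on the MONGO_PACKAGE / MONGO_REPO build args and the local-* image names used above, not something from the original post):

# build the enterprise server and mongocryptd images from the official mongo Dockerfile
docker build -t local-mongo-db --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com .
docker build -t local-mongocryptd --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com .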
I managed to solve this issue like this:
Instead of building the mongo-enterprise image, I accidentally built with the official mongo:4.2 image and everything worked well. I don't know why MongoDB says that Enterprise is needed for encryption, because for me the mongo-enterprise encryption didn't work while the plain mongo:4.2 image worked perfectly.
working docker-compose.yaml:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: always
    tty: true
    ports:
      - "27017:27017"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: mongo:4.2
    container_name: mongodb
    restart: always
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
php node Dockerfile:
FROM php:7.4
RUN apt-get update && apt-get install -y zip unzip libzip-dev git mercurial zlib1g-dev libicu-dev libcurl4-gnutls-dev libssl-dev libssh2-1-dev libgmp-dev libpng-dev uuid-dev
RUN cd /tmp && git clone https://github.com/php/pecl-networking-ssh2 && cd /tmp/pecl-networking-ssh2 \
&& phpize && ./configure && make && make install \
&& echo "extension=ssh2.so" > /usr/local/etc/php/conf.d/ext-ssh2.ini \
&& rm -rf /tmp/ssh2
RUN docker-php-ext-configure gmp
RUN docker-php-ext-install zip json pdo pdo_mysql curl opcache bcmath sockets gmp gd
RUN docker-php-ext-install -j$(nproc) intl
RUN pecl install uuid pcov redis mongodb
RUN docker-php-ext-enable uuid pcov redis mongodb
RUN curl -sS https://get.symfony.com/cli/installer | bash -s -- --install-dir /usr/local/bin
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -L -sS "https://github.com/splitsh/lite/releases/download/v1.0.1/lite_linux_amd64.tar.gz" | tar xvz -C /usr/local/bin
RUN apt-get update
RUN apt-get install -y curl gpg wget
RUN sh -c 'curl -s https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg'
RUN echo "deb https://libmongocrypt.s3.amazonaws.com/apt/ubuntu bionic/libmongocrypt/1.0 universe" | tee /etc/apt/sources.list.d/libmongocrypt.list
RUN wget -qO - mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb http://repo.mongodb.com/apt/debian stretch/mongodb-enterprise/4.2 main" | tee /etc/apt/sources.list.d/mongodb-enterprise.list
RUN apt-get update
RUN apt-get install -y libmongocrypt-dev && apt-get install --no-install-recommends -y mongodb-enterprise-cryptd
I hope I helped. Cheers!
UPDATE:
My tests are running without libmongocrypt-dev lib, so I guess you only need mongodb-enterprise-cryptd.
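If in doubt, it's easy to check what actually ended up in the image (a sketch, assuming the local-base-php image name from the compose file and a Debian-based base image):

# list the crypt-related packages and confirm the mongocryptd binary is on the PATH
docker run --rm local-base-php sh -c 'dpkg -l | grep -E "libmongocrypt|mongodb-enterprise-cryptd"; which mongocryptd'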
I decided to use socat to forward the traffic.
Dockerfile:
# Build stage
FROM ubuntu:focal
ENV ENTRY_FILE=docker-entrypoint.sh
ENV MONGODB_PATH=/usr/src/mongodb
ENV ENTRY_POINT=$MONGODB_PATH/$ENTRY_FILE
RUN apt-get update && apt-get install -y sudo
RUN sudo apt-get install -y curl telnet vim socat libcurl4 libgssapi-krb5-2 libldap-2.4-2 libwrap0 libsasl2-2 libsasl2-modules libsasl2-modules-gssapi-mit snmp openssl liblzma5
# download and extract under $MONGODB_PATH so the bin/ symlinks and the entrypoint paths below resolve
WORKDIR $MONGODB_PATH
RUN curl -k -o mongodb.tgz "https://downloads.mongodb.com/linux/mongodb-linux-$(arch)-enterprise-ubuntu2004-5.0.10.tgz"
RUN tar -xf mongodb.tgz --strip-components=1
RUN sudo ln -s $MONGODB_PATH/bin/* /usr/local/bin/
RUN sudo mkdir -p /data/db
RUN sudo mkdir -p /data/log
RUN sudo chown `whoami` /data/db
RUN sudo chown `whoami` /data/log
COPY ./$ENTRY_FILE $ENTRY_POINT
RUN chmod +x $ENTRY_POINT
ENTRYPOINT $ENTRY_POINT
docker-entrypoint.sh:
#!/bin/sh
socat -d -d TCP-LISTEN:27017,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17017 &
socat -d -d TCP-LISTEN:27018,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17018 &
socat -d -d TCP-LISTEN:27019,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17019 &
socat -d -d TCP-LISTEN:27020,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17020 &
./bin/mongod --port 17017 --dbpath /data/db --logpath /data/log/mongod.log &
./bin/mongocryptd --port 17020 --logpath /data/log/mongocryptd.log
$(hostname -I | awk '{print $1}') is the remote IP (my docker host IP, 172.2.x.x); you can change it to your other container's IP.
docker-compose.yml:
version: "3.8"
networks:
ABC:
external: false
name: ABC
services:
mongo-database:
container_name: MongoDB
privileged: true
build:
context: .docker/db
dockerfile: Dockerfile
volumes:
- .mongo/db:/data/db
- .mongo/log:/data/log
ports:
- 27017-27020:27017-27020
networks:
- ABC
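After docker-compose up -d, the forwarding can be sanity-checked from the host, since 27017-27020 are published (a sketch; any TCP client works, nc/ncat shown here):

# mongod is reachable through the socat listener on 27017
nc -vz localhost 27017
# mongocryptd is reachable through the socat listener on 27020
nc -vz localhost 27020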
I am trying to deploy a KeystoneJS app on the server. Before that, I want to test it locally.
I started MongoDB locally. Then, using the Dockerfile from the documentation (https://www.keystonejs.com/guides/deployment), I can't run it. I get the error below:
✖ Connecting to database
Error: Server selection timed out after 30000 ms
    at /home/node/node_modules/@keystonejs/utils/dist/utils.cjs.prod.js:54:26
    at async executeDefaultServer (/home/node/node_modules/@keystonejs/keystone/bin/utils.js:109:3) {
  errors: {
    MongooseAdapter: MongoTimeoutError: Server selection timed out after 30000 ms
        at Timeout._onTimeout (/home/node/node_modules/mongodb/lib/core/sdam/server_selection.js:308:9)
        at listOnTimeout (internal/timers.js:531:17)
        at processTimers (internal/timers.js:475:7) {
      name: 'MongoTimeoutError',
      reason: [MongoNetworkError],
      [Symbol(mongoErrorContextSymbol)]: {}
    }
  }
}
error Command failed with exit code 1.
I googled; it seems it didn't know about the local mongodb://localhost:27017.
Then I decided to use docker-compose:
Here is the docker-compose.yml
version: '3'
services:
  app:
    container_name: my-admin
    restart: always
    build: .
    volumes:
      - .:/mycode
    environment:
      - MONGO_URI=mongodb://mongo:27017
    ports:
      - "80:3030"
    links:
      - mongo
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
When I run docker-compose up, I get the same error.
Also tried this:
const keystone = new Keystone({
  name: PROJECT_NAME,
  adapter: new Adapter({ mongoUri: "mongodb://mongo:27017/myapp" }),
  onConnect: initialiseData,
});
Any help? Thanks!
EDIT
Here is the Dockerfile
# https://docs.docker.com/samples/library/node/
ARG NODE_VERSION=12.10.0
# https://github.com/Yelp/dumb-init/releases
ARG DUMB_INIT_VERSION=1.2.2
# Build container
FROM node:${NODE_VERSION}-alpine AS build
ARG DUMB_INIT_VERSION
WORKDIR /home/node
RUN apk add --no-cache build-base python2 yarn && \
wget -O dumb-init -q https://github.com/Yelp/dumb-init/releases/download/v${DUMB_INIT_VERSION}/dumb-init_${DUMB_INIT_VERSION}_amd64 && \
chmod +x dumb-init
ADD . /home/node
RUN yarn install && yarn build && yarn cache clean
# Runtime container
FROM node:${NODE_VERSION}-alpine
WORKDIR /home/node
COPY --from=build /home/node /home/node
EXPOSE 3000
CMD ["./dumb-init", "yarn", "start"]
I have an image (gepick:latest) with a node app, created from this Dockerfile:
FROM centos:7
# Create app directory
WORKDIR /usr/src/app
RUN curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
RUN yum install -y nodejs
RUN curl --silent --location https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo
RUN rpm --import https://dl.yarnpkg.com/rpm/pubkey.gpg
RUN yum install -y yarn
RUN yarn
COPY . .
EXPOSE 8080
CMD [ "yarn", "test-matches-collecting-job"]
My goal is to run the tests in Docker, but they require MongoDB.
docker run gepick:latest:
...
Mongoose default connection error: MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
...
I tried linking a mongo:4 image's container (docker run --link 0d24c3a35d5a gepick:latest) but I get the same error.
When you launch your containers using a docker-compose YAML file, Docker bridges the containers together and lets you launch the mongo container before the other containers that rely on mongo being active. Try something like this:
cat my-docker-compose.yml
version: '3'
services:
  my-gepick:
    image: gepick:latest
    container_name: blah_gepick
    restart: always
    depends_on:
      - loudmongo
    volumes:
      - /cryptdata5/var/log/blobs:/blobs
      - /webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
  loudmongo:
    image: mongo
    container_name: loud_mongo
    restart: always
    ports:
      - 127.0.0.1:$GKE_MONGO_PORT:$GKE_MONGO_PORT
    volumes:
      - /cryptdata7/var/data/db:/data/db
so your launch sequence may look like
docker-compose -f /somedir/my-docker-compose.yml pull
docker-compose -f /somedir/my-docker-compose.yml up -d
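Once both services are up, the tests from the original image can be run inside the running container (a sketch using the service name above; it assumes the test suite reads MONGO_URL):

docker-compose -f /somedir/my-docker-compose.yml exec my-gepick yarn test-matches-collecting-job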
I am running a Rails API server with MongoDB. Everything worked perfectly fine until I started to move my server into Docker.
Unfortunately, whenever I stop my server (docker-compose down) and restart it, all data is lost and the db is completely empty.
This is my docker-compose file:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    command: mongod
    ports:
      - "27017:27017"
    environment:
      - MONGOID_ENV=test
    volumes:
      - /data/db
  api:
    build: .
    depends_on:
      - 'mongodb'
    ports:
      - "3001:3001"
    command: bundle exec rails server -p 3001 -b '0.0.0.0'
    environment:
      - RAILS_ENV=test
    links:
      - mongodb
And this is my dockerfile:
FROM ruby:2.5.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
COPY Gemfile* $APP_HOME/
RUN bundle install
COPY . $APP_HOME
RUN chown -R nobody:nogroup $APP_HOME
USER nobody
ENV RACK_ENV test
ENV MONGOID_ENV test
EXPOSE 3001
Any idea what's missing here?
Thanks,
Michael
In docker-compose, I think your "volumes" field in the mongodb service isn't quite right: writing just /data/db creates an anonymous volume, so a fresh, empty one gets attached whenever the containers are recreated after docker-compose down. I think
volumes:
  - /data/db
Should be:
volumes:
  - ./localFolder:/data/db
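A quick way to confirm the data now survives a restart (a sanity-check sketch, using the service name from the question's compose file):

# tear the stack down without -v (which would delete volumes), bring mongodb back up,
# and check that the databases are still there
docker-compose down
docker-compose up -d mongodb
docker-compose exec mongodb mongo --eval "db.adminCommand({ listDatabases: 1 })"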