Docker and MongoDB: Start Mongo and Import Data via Dockerfile

I need to use MongoDB with Docker. So far I am able to create a container, start the Mongo server, and access it from the host machine (via Compass).
What I want to do next is import data from a script into the Mongo database running in the container.
I'm getting the following error when trying to import the data:
Failed: error connecting to db server: no reachable servers
Here's what I'm doing...
docker-compose.yml:
version: '3.7'
services:
  mongodb:
    container_name: mongodb_db
    build:
      context: .
      dockerfile: .docker/db/Dockerfile
      args:
        DB_IMAGE: mongo:4.0.9
    ports:
      - 30001:27017
    environment:
      MONGO_DATA_DIR: /data/db
      MONGO_LOG_DIR: /dev/null
  db_seed:
    build:
      context: .
      dockerfile: .docker/db/seed/Dockerfile
      args:
        DB_IMAGE: mongo:4.0.9
    links:
      - mongodb
mongodb Dockerfile:
ARG DB_IMAGE
FROM ${DB_IMAGE}
CMD ["mongod", "--smallfiles"]
db_seed Dockerfile:
ARG DB_IMAGE
FROM ${DB_IMAGE}
RUN mkdir -p /srv/tmp/import
COPY ./app/import /srv/tmp/import
# set working directory
WORKDIR /srv/tmp/import
RUN mongoimport -h mongodb -d dbName --type csv --headerline -c categories --file=categories.csv # Failed: error connecting to db server: no reachable servers
RUN mongo mongodb/dbName script.js
What am I doing wrong here? How can I solve this issue?
I would like to keep the current file organisation (docker-compose, mongodb Dockerfile and db_seed Dockerfile).

I found the reason for the issue. The import command was being executed before the mongo service had started: RUN instructions execute at image build time, when no database is reachable.
To solve this I created a .sh script with the import commands and run it via ENTRYPOINT. This way the script is only executed when the db_seed container starts. Since the db_seed container depends on the mongodb container, the script runs after the mongo service is started.
db_seed Dockerfile:
ARG DB_IMAGE
FROM ${DB_IMAGE}
RUN mkdir -p /srv/tmp/import
COPY ./app/import /srv/tmp/import
ENTRYPOINT ["./srv/tmp/import/import.sh"]
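The answer doesn't show import.sh itself. A minimal sketch of what it could contain, reusing the mongoimport/mongo commands from the question and adding a readiness loop (the wait loop is an assumption, since links/depends_on only order container startup and don't wait for mongod to accept connections):
#!/bin/bash
# import.sh - hypothetical seed script; host "mongodb" and db "dbName" come from the question
set -e

# wait until mongod in the mongodb container accepts connections
until mongo --host mongodb --eval "db.runCommand({ ping: 1 })" > /dev/null 2>&1; do
  echo "waiting for mongodb..."
  sleep 1
done

cd /srv/tmp/import
mongoimport -h mongodb -d dbName --type csv --headerline -c categories --file=categories.csv
mongo mongodb/dbName script.js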

Related

Why does my MongoDB data disappear when I convert my Meteor App to use Docker?

I recently switched my meteor app to use Docker as I am trying to create a new microservice. Previously, I would deploy my app locally using meteor run, but I've switched to docker-compose up --build using a docker-compose.yml at the root of my project and a Dockerfile in my Meteor app's directory. I finally got things running, which is great, but all the data I persisted when launching the app via meteor run is not being correctly accessed. I know the data still exists because when I try launching the app with meteor run the data is restored from the previous sessions.
This leads me to believe I am not connecting to Mongo correctly through Docker, and would appreciate any help finding an answer.
FYI, I am connected to a mongo instance; it's just a freshly wiped DB.
docker-compose.yml:
version: '3'
services:
  aldoa:
    build:
      context: ./js/app
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    links:
      - mongo
    environment:
      ROOT_URL: ${APP_ROOT_URL:-http://localhost}
      MONGO_URL: mongodb://mongo:27017/meteor
      PORT: 3000
    volumes:
      - ./opt/app:/./js/app
  mongo:
    image: mongo:latest
    ports:
      - '27017:27017'
    command:
      - --storageEngine=wiredTiger
    volumes:
      - data:/data/db
volumes:
  data:
Thanks in advance!
When you launch meteor using meteor or meteor run, it runs in development mode. In that mode, unless you explicitly define the MONGO_URL environment variable, meteor will start its own mongo instance on port 3001 (or, to be precise, the meteor port specified with -p plus one).
In your docker deployment you are doing the right thing for production and use a separately launched mongo instance running on the default port (27017).
You can export your data from your development instance by running:
$ meteor
$ # in a separate terminal:
$ mongodump --port 3001 -d meteor
And then import that into the "real" mongo instance:
mongorestore /path/to/dump/created/above
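Putting the two steps together (mongodump writes to a ./dump directory by default; port 3001 is meteor's dev mongo, 27017 the dockerized one published to the host):
$ meteor                              # keep the dev instance running
$ mongodump --port 3001 -d meteor     # in a separate terminal; dumps to ./dump
$ mongorestore --port 27017 ./dump    # restore the dump into the dockerized mongo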

Recreation of MongoDB container doesn't re-run config.js file in docker-entrypoint-initdb.d

I have a problem with MongoDB. I am provisioning the file config.js into docker-entrypoint-initdb.d in my docker-compose file:
mongo:
  image: mongo:latest
  restart: always
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
    MONGO_INITDB_DATABASE: dev
  volumes:
    - ./config.js:/docker-entrypoint-initdb.d/config.js
    - mongodbdata:/data/db
The config.js file looks like this:
db.auth('root', 'example');
db = db.getSiblingDB('dev');
db.approver.insert({"email":"some#email.com","approverType":"APPROVER"});
db.approver.insert({"email":"someother#email.com","approverType":"ACCOUNTANCY"});
When I run docker-compose up -d for the first time, everything is fine, the two entries are inserted into the database.
But then, I want to add a third entry, and recreate the container:
db.auth('root', 'example');
db = db.getSiblingDB('dev');
db.approver.insert({"email":"some#email.com","approverType":"APPROVER"});
db.approver.insert({"email":"someother#email.com","approverType":"ACCOUNTANCY"});
db.approver.insert({"email":"another#email.com","approverType":"ACCOUNTANCY"});
When I run docker-compose up -d --force-recreate --no-deps mongo, nothing happens. The container is recreated, but the 3rd entry is not there.
Running docker exec -it dev_mongo_1 mongo docker-entrypoint-initdb.d/config.js returns:
MongoDB shell version v4.0.10
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d44b8e0a-a32c-4da0-a02b-c3f71d6073dd") }
MongoDB server version: 4.0.10
Error: Authentication failed.
Is there a way to recreate the container so the script is re-run? Or to run a mongo command that will re-run the script in a running container?
In the mongo image's startup script there is a check to determine whether initialization should be done:
https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/3.6/docker-entrypoint.sh#L225
# check for a few known paths (to determine whether we've already initialized and should thus skip our initdb scripts)
if [ -n "$shouldPerformInitdb" ]; then
...
So initialization is done only once, when the data directory is empty; since you persist DB state via mongodbdata:/data/db, it won't initialize again on recreation.
To fix that you can run docker-compose down -v, which will delete the data from your DB and let initialization run once again.
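A minimal sketch of the full cycle (note that -v wipes everything stored in the mongodbdata volume, not just the approver collection):
docker-compose down -v   # removes containers plus named and anonymous volumes
docker-compose up -d     # /data/db starts empty, so the docker-entrypoint-initdb.d scripts run again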

Why is data persisting in mongo docker container?

From my understanding, new containers should not persist data after they die.
Right now I have two Dockerfiles. One is for my application and the other is to seed a database.
This is my project structure
/
  package.json
  Dockerfile
  docker-compose.yml
  mongo-seeding/
    Dockerfile
    seedDatabase.mjs
This is what my seedDatabase looks like
import mongodb from "mongodb";
import data from "./dummyData";

// Connection URL
const url = "mongodb://mongo:27017/poc";

async function mongoSeeder() {
  try {
    // first check to see if the database exists
    const client = await mongodb.MongoClient.connect(url);
    const db = client.db("poc");
    const questionCollection = await db.createCollection("question");
    const answerCollection = await db.createCollection("answer");
    await questionCollection.insertMany(data.questions);
    await answerCollection.insertMany(data.answers);
    await client.close(); // close the connection so the seed process can exit
  } catch (e) {
    console.log(e);
  }
}

mongoSeeder();
This is what the Dockerfile for seeding the database looks like
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
COPY mongo-seeding mongo-seeding
RUN yarn add mongodb uuid faker
CMD ["yarn", "seed"]
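Note that the CMD assumes package.json defines a seed script; hypothetically something like the following, since Node 10 needs the --experimental-modules flag to load .mjs files:
"scripts": {
  "seed": "node --experimental-modules mongo-seeding/seedDatabase.mjs"
}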
This is what the other Dockerfile looks like
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn install
RUN yarn add mongodb uuid faker
RUN yarn global add nodemon babel-cli babel-preset-env
COPY . .
EXPOSE 3000
CMD ["yarn", "dev-deploy"]
This is what my docker-compose.yml looks like
version: "3"
services:
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "4040:27017"
    expose:
      - "4040"
  app:
    container_name: ingovguru
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
    expose:
      - "3000"
  seed:
    container_name: seed
    build:
      context: .
      dockerfile: ./mongo-seeding/Dockerfile
    links:
      - mongo
    depends_on:
      - mongo
      - app
Initially, I am seeding the database with 1000 questions and 200 answers.
Notice that I am not using any volumes, so nothing should be persisted.
I run
docker-compose build
docker-compose up
I jump into the mongo container, and I see that I have 1000 questions and 100 answers.
I then Ctrl-C and re-run.
I now have 2000 questions and 100 answers.
Why does this happen?
Even though you are not declaring any volumes, the mongo image itself declares volumes for /data/db and /data/configdb in containers, which cover database data (see Dockerfile#L87). With Docker Compose, these anonymous volumes are re-used between invocations of docker-compose up and docker-compose down.
To remove the volumes when you bring down a stack, you need to use the -v, --volumes option. This will remove the volumes, giving you a clean database on the next up.
docker-compose down --volumes
docker-compose down documentation
-v, --volumes    Remove named volumes declared in the `volumes`
                 section of the Compose file and anonymous volumes
                 attached to containers.
The problem you have is that the mongo image defines some volumes in its Dockerfile; you can see them by running:
docker history --no-trunc mongo | grep VOLUME
If you only stop the mongo container (you can check with docker ps -a | grep mongo), the created volumes are kept. If you want to remove the container and its volumes, you can remove the stopped mongo container with docker rm CONTAINER_ID, or remove all the containers created by compose with docker-compose down.
In your case, if you want to build and run the services, you have to do:
docker-compose build && docker-compose down && docker-compose up -d

docker-compose mongodb phoenix, [error] failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused

Hi, I am getting this error when I try to run docker-compose up on my yml file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# base image elixir to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup or something to do with the compatibility of docker and phoenix / docker and mongodb?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
The docs advise you to implement the logic to wait for mongodb to spin up and be ready to accept connections yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
# "db" is the compose service name; from inside the phoenix container, mongo is not on localhost
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
for which you need nc and wait-for-db.sh installed in the container.
There are plenty of other alternative tools to test if db container is listening on the target port.
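For completeness, a sketch of wiring this into the Dockerfile above (the package name and script path are assumptions; the elixir:1.6 image is Debian-based, so apt-get is available):
# hypothetical additions to the Dockerfile
RUN apt-get update && apt-get install -y netcat
COPY wait-for-db.sh /usr/local/bin/wait-for-db.sh
RUN chmod +x /usr/local/bin/wait-for-db.sh
# wait for mongo before starting phoenix
CMD wait-for-db.sh && mix phx.server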
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the mongodb url specified in docker-compose. Instead of localhost, it should be db, as named in my docker-compose.yml file.
For clarity, for the dev environment:
modify config/dev.exs (replace with the correct vars):
username: System.get_env("PGUSER"),
password: System.get_env("PGPASSWORD"),
database: System.get_env("PGDATABASE"),
hostname: System.get_env("PGHOST"),
port: System.get_env("PGPORT"),
create a .env file in the root folder of your project (replace with the vars relevant to the db service you use):
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added the port.
Host can be localhost when running locally, but should be the compose service name (mongodb or db here) or the relevant URL when working with docker-compose, a server, or k8s.
will update answer for prod config...

HapiJS web app inside docker container not responding

I am running HapiJS via docker-compose 2+.
.env
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://127.0.0.1:27017/mahrio
WEB_DOMAIN=http://127.0.0.1:6085
Deep down somewhere I am setting the HapiJS config via these .env files, but for Docker I understand I need to make some changes. No problem, I made a Docker-specific version:
docker.env
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://mongo:27017/mahrio
WEB_DOMAIN=http://0.0.0.0:6085
I've tried 0.0.0.0 and 127.0.0.1; neither works.
I can see everything seems to work, however when I go to localhost:6085 I get no response:
127.0.0.1 didn’t send any data.
Dockerfile
FROM node:carbon
# Create app directory
RUN mkdir -p /usr/src/mahrio
WORKDIR /usr/src/mahrio
COPY package*.json /usr/src/mahrio
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
COPY . /usr/src/mahrio
EXPOSE 6085
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
  app:
    build: .
    container_name: mahrio
    depends_on:
      - mongo
    env_file:
      - docker.env
    ports:
      - "6085:6085"
    restart: always
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./tmp:/data/db
    ports:
      - "27017:27017"
Any ideas? No errors are coming from nodejs, everything looks A-OK in the console, and I know it works outside docker just fine.
Edit: added the docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25e7a4c3f350 mahriomedium_app "npm start" 24 hours ago Up About a minute 0.0.0.0:6085->6085/tcp mahrio
c8d691777aa0 mongo "docker-entrypoint..." 3 days ago Up About a minute 0.0.0.0:27017->27017/tcp mongo
docker logs
> mahrio-medium#0.0.1 start /usr/src/mahrio
> node server/index.js
Running Development!
MongoDB Config...
Server running at: http://127.0.0.1:6085
MongoDB connected!
db connection opened
It turned out WEB_DOMAIN was the wrong env var.
The right var is NODE_URI, which needs to be set to 0.0.0.0.
All works now.
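For reference, the corrected docker.env would presumably look like this (assuming NODE_URI takes the same URL form that WEB_DOMAIN did; only the last line changes):
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://mongo:27017/mahrio
NODE_URI=http://0.0.0.0:6085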