docker-entrypoint-initdb.d not executing scripts - mongodb

I'm using docker-compose version 1.25.5, build 8a1c60f6 &
Docker version 19.03.8, build afacb8b
I'm trying to initialise the database with users in a MongoDB container using docker-entrypoint-initdb.d, but the JS scripts aren't executed when the container starts.
I know /data/db must be empty for them to execute, so I've been deleting it every time before starting. But they still don't execute. Going into the container and manually running mongo mongo-init.js works.
Not sure why it's not working when it should.
docker-compose.yml:
version: '3'
services:
  mongodb:
    container_name: mongodb
    image: tonyh308/cv-mongodb:1
    build: ./mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongo/mongodb:/data/db
    labels:
      com.qa.description: "MongoDb Container for application."
  # other services ...
Dockerfile:
FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
WORKDIR /data
ENTRYPOINT ["mongod", "--bind_ip_all"]
EXPOSE 27017
Feb 26 2021: still no solution
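A hedged guess at the cause, for anyone landing here: the mongo image runs the /docker-entrypoint-initdb.d scripts from its stock docker-entrypoint.sh, so a Dockerfile that replaces ENTRYPOINT with mongod directly bypasses them entirely. A minimal sketch of the Dockerfile with the stock entrypoint kept (COPY paths taken from the post above):

FROM mongo:3.6.18-xenial
COPY ./Customers /Customers
COPY ./test /test
# Keep the image's default ENTRYPOINT (docker-entrypoint.sh), which runs the
# /docker-entrypoint-initdb.d scripts on first init; pass mongod flags via CMD.
CMD ["mongod", "--bind_ip_all"]
EXPOSE 27017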

Related

SpringBoot and MongoDb integration in Docker

I have a Spring Boot application running in Docker, and I am using PostgreSQL as the database for this project; the database is also running in Docker.
Now I want to use MongoDB alongside PostgreSQL as a database for my Spring Boot application.
I added the MongoDB info to the docker-compose.yml file, created a new Dockerfile, and ran the application. After that, MongoDB was installed and running in Docker successfully.
I created an API to insert a document into a collection. When I hit the API I get an error; I think I am not able to connect to the MongoDB instance running in Docker.
Error: com.mongodb.MongoSocketOpenException: Exception opening socket
I think I have to configure MongoDB before doing any CRUD operations.
Can anyone please share a detailed MongoDB configuration with some examples, or provide some information that can help me achieve my task?
Thanks.
Docker-compose.yml
mongodb:
  build:
    context: mongodb
    args:
      DOCKER_ARTIFACTORY: ${DOCKER_ARTIFACTORY}
  container_name: "mongodb"
  image: mongo:6.0.4
  restart: always
  environment:
    - MONGODB_USER=${SPRING_DATASOURCE_USERNAME:-username}
    - MONGODB_PASSWORD=${SPRING_DATASOURCE_PASSWORD:-password}
  ports:
    - "27017:27017"
  volumes:
    - "/mongodata:/data/mongodb"
  networks:
    - somenetwork
Dockerfile
ARG DOCKER_ARTIFACTORY
FROM ${DOCKER_ARTIFACTORY}mongo:6.0.4
COPY init/mongodbsetup.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/mongodbsetup.sh
CMD ["mongod"]
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
The configuration above is a docker-compose file that includes both the MongoDB and PostgreSQL services.
I resolved the issue and am now able to connect and perform CRUD operations too.
Below are the changes I made.
1. docker-compose.yml
mongodb:
  build:
    context: mongodb
    args:
      DOCKER_ARTIFACTORY: ${DOCKER_ARTIFACTORY}
  container_name: mongodb
  hostname: mongodb
  restart: always
  environment:
    - MONGO_INITDB_DATABASE=databaseName
    - MONGO_INITDB_ROOT_USERNAME=username
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - 27017:27017
  volumes:
    - /mongodata:/data/mongodb
  networks:
    - somenetwork
2. Created the file below (filename: mongodbsetup.sh) under projectFolder/builder/mongodb/init:
#!/bin/bash
mongosh <<EOF
use code
EOF
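(For reference, init scripts like this only run on first initialization, when the data directory is empty. A slightly fuller sketch that also creates an application user; the user, password, and roles below are placeholders, not values from the original post:

#!/bin/bash
# Sketch: authenticate as the root user created via MONGO_INITDB_ROOT_* and
# add a dedicated user for the "code" database. "appuser"/"apppass" are hypothetical.
mongosh -u "$MONGO_INITDB_ROOT_USERNAME" -p "$MONGO_INITDB_ROOT_PASSWORD" --authenticationDatabase admin <<EOF
use code
db.createUser({ user: "appuser", pwd: "apppass", roles: [ { role: "readWrite", db: "code" } ] })
EOF
)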
3. Created the file below (filename: Dockerfile) under projectFolder/builder/mongodb:
ARG DOCKER_ARTIFACTORY
FROM ${DOCKER_ARTIFACTORY}mongo:6.0.4
COPY init/mongodbsetup.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/mongodbsetup.sh
CMD ["mongod"]
4. In the application.properties file, I added the properties below:
#MongoDB configurations
spring.data.mongodb.database=databasename
spring.data.mongodb.uri=mongodb://username:password@databaseurlOrIpAddress:27017
That's it. It's working.
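A quick way to sanity-check the connection from the host (an illustration, not part of the original answer) is to ping the server through mongosh, assuming the port is published as above and the root user lives in the default admin database:

mongosh "mongodb://username:password@localhost:27017/databaseName?authSource=admin" --eval "db.runCommand({ ping: 1 })"

If this prints { ok: 1 }, the credentials and port mapping are fine, and any remaining failure is on the application side.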

docker-compose - NestJS container cannot access RethinkDB container

Problem
I am trying to containerize a full-stack app. For now, I am putting the front-end aside, so I am trying to set up only three containers:
PostgreSQL
RethinkDB
NestJS
But when I try to run my containers with
docker-compose up
the NestJS container can't access the RethinkDB container.
Code
docker-compose.yaml
version: "3.9"
services:
opm_postgres:
container_name: opm_postgres_1
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: *******
POSTGRES_USER: postgres
volumes:
- 'opm_postgres:/var/lib/postgresql/data'
opm_adminer:
container_name: opm_adminer_1
image: adminer
restart: always
ports:
- 8085:8080
opm_rethink:
container_name: opm_rethink_1
image: rethinkdb
restart: always
ports:
- 28016:28015
- 8084:8080
volumes:
- 'opm_rethink:/data'
opm_back:
container_name: opm_back_1
build: ../OPM-back
restart: always
ports:
- "3000:3000"
volumes:
opm_postgres:
opm_rethink:
NestJS Dockerfile (coming from: Ultimate Guide: NestJS Dockerfile For Production [2022])
# Base image
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Creates a "dist" folder with the production build
RUN npm run build
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
Logs
(screenshots of the docker-compose up and docker ps output accompanied the original post)
Additional info
I used the container names as DB hosts, both for RethinkDB and PostgreSQL.
Also, when I comment the rethink part in my docker-compose.yaml, everything works fine, I can call a route on my NestJS API and it queries correctly in my PostgreSQL db. The problem seems to be specific to RethinkDB.
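One hedged guess worth ruling out: the 28016:28015 mapping only applies to the host. From inside the compose network, the NestJS container has to target the RethinkDB container's own port, 28015, using the service or container name as the host; 28016 will refuse connections there. A quick connectivity check (assuming the backend image has bash, which node:14 does):

# Open a raw TCP connection to RethinkDB from inside the NestJS container;
# "opm_rethink" is the compose service name and 28015 the in-network port.
docker-compose exec opm_back bash -c '(echo > /dev/tcp/opm_rethink/28015) && echo reachable'

If that prints "reachable", the network is fine and the driver's host/port configuration is the thing to fix.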

Knex Migration with Docker Compose Psql

I have a problem running Knex.js migrations inside my docker-compose setup.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database has even been created. Is there any way to run npm run db only after the database has been created?
NOTE: if I run the npm commands in the Docker terminal after the container has been built, everything works fine. Just FYI.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
First, in the docker-compose.yml file, use sh to give your command a contained shell environment to run in, i.e. sh -c 'npm run db'.
Secondly, use depends_on to make the server wait for the database container to start. Your docker-compose file would now be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready to accept connections. In this case it shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or if depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. It also links to ready-made tools, so you don't have to write your own script from scratch.
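For illustration, a minimal wrapper in the spirit of the scripts that page links to. This is a sketch, not the poster's code, and it assumes the postgres client tool pg_isready is available in the server image (a stock node image would need postgresql-client installed first):

#!/bin/sh
# wait-for-postgres.sh (hypothetical): block until Postgres accepts
# connections, then exec the real command.
until pg_isready -h postgres -p 5432 -U test; do
  echo "Postgres is unavailable - sleeping"
  sleep 1
done
exec "$@"

It would be wired up in the compose file with something like command: ["./wait-for-postgres.sh", "npm", "run", "db"].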

Docker run not working

When I use docker compose the application runs perfectly; however, when I use docker run nothing happens.
I have a REST API (Express & MongoDB) with an nginx proxy_pass in front of it.
Docker file:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
api:
build: .
environment:
- NODE_ENV=production
command: yarn start
volumes:
- .:/app
ports:
- "3000:3000"
tty: true
depends_on:
- mongodb
restart: always
nginx:
image: nginx
depends_on:
- api
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- mongodb
restart: always
mongodb:
image: mongo
ports:
- "27017:27017"
restart: always
When I use docker compose the application runs perfectly; however, when I use docker run nothing happens
That is expected, since docker run runs a single image, as opposed to docker compose, which runs a multi-container Docker application.
You need all of the containers to run, started in the right order, for anything to happen.
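To make that concrete, the compose file above corresponds roughly to the following manual commands (the network name and API image name here are illustrative, not taken from the post):

# Compose creates a network and starts the containers in dependency order;
# with plain docker run you have to do each step yourself.
docker network create app-net
docker run -d --name mongodb --network app-net mongo
docker run -d --name api --network app-net -e NODE_ENV=production -p 3000:3000 <your-api-image>
docker run -d --name nginx --network app-net -p 80:80 -v "$PWD/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" nginx

A single docker run starts just one of these containers, with none of the networking, dependency ordering, or port wiring the compose file provides.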

Why don't I lose postgresql data when I rebuild the docker image?

version: '3'
services:
db:
image: postgres
web:
build: .
command: python3 manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Why don't I lose data when running docker-compose build --force-rm --no-cache? If this is normal, why do we need to create a volume for the data folder?
When running the command docker-compose build --force-rm --no-cache, this will only build the web Docker image from the Dockerfile, which in your case is in the same directory.
This command will not stop the containers that you have previously started using this compose file, thus you won't lose any data when running it.
However, as soon as you remove the containers using docker-compose down, or clean up stopped containers with docker-compose rm, you won't find the postgres data when you restart the container.
If you want to persist the data and have the container pick it up when it is recreated, you need to give the postgres data volume a name, as shown below.
version: '3'
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  pgdata:
Now the postgres data won't be lost when the containers are recreated.
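As a usage note (commands illustrative), the lifecycle with the named volume looks like this:

docker-compose down      # removes the containers but keeps the named volume pgdata
docker-compose up -d     # the recreated db container reattaches to pgdata; data intact
docker-compose down -v   # -v also removes named volumes, so this is the one that loses data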