MongoDB error on Docker

I have a Dockerfile and a docker-compose file.
When I try to run docker-compose, this happens.
Does anyone know how to fix it?
Edit 1
dockerfile
FROM node
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 \
&& echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list \
&& apt-get update \
&& apt-get install -y mongodb-org
RUN useradd --user-group --create-home --shell /bin/false app &&\
npm install --global npm
ENV HOME=/home/app
COPY package.json $HOME/library/
RUN chown -R app:app $HOME/*
USER root
WORKDIR $HOME/library
RUN npm install --silent --progress=false
COPY . $HOME/library
RUN chown -R app:app $HOME/*
RUN npm install --build-from-source bcrypt
CMD ["npm", "start"]
docker-compose
version: '2'
services:
  db:
    image: mongo
    command: "mongod"
    ports:
      - "27018:27018"
  library:
    build:
      context: .
      dockerfile: Dockerfile
    command: node_modules/.bin/nodemon --exec npm start
    environment:
      NODE_ENV: development
    ports:
      - 3000:3000
    volumes:
      - .:/home/app/library
      - /home/app/library/node_modules
    links:
      - db
Edit 2
Running docker ps -a here
First container docker log here
Second container docker log here
Edit 3
mongoose.connect(config.database);
That's how I am connecting the database on server.js

Your Docker Compose file publishes the MongoDB port as 27018:27018, but the error from your Node application says it is trying to connect to 127.0.0.1:27017.
Because of how Docker's networking works (which is all kinds of awesome), you should change the port to 27017, not 27018, in your Compose file and update the MongoDB connection string in your Node app to point at db:27017.
You shouldn't use 127.0.0.1 within Docker, because that address refers to the container itself: it expects whatever you are connecting to to be running in the same container.
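For example, with config.database set along these lines (the database name library is just an assumption), Mongoose reaches the db service through Docker's embedded DNS:
// sketch of the corrected connection string in server.js
mongoose.connect('mongodb://db:27017/library');
Here db is the Compose service name, which resolves to the MongoDB container's IP on the Compose network.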

Related

How can I start Tortoise-ORM in a celery docker container?

I have an application that uses PostgreSQL and Celery. Each component runs in a different container. In the Celery container I am already connected to the Postgres database, but I don't know how to configure Tortoise-ORM to start in the Celery container, since I have a task in which I want to interact with the database using Tortoise.
This is my docker compose:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=fastapi_celery
      - POSTGRES_USER=fastapi_celery
      - POSTGRES_PASSWORD=fastapi_celery
  redis:
    image: redis:7-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: fastapi_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
      - db
This is my dockerfile:
FROM python:3.10-slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential \
# psycopg2 dependencies
&& apt-get install -y libpq-dev \
# Additional dependencies
&& apt-get install -y telnet netcat \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/fastapi/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/fastapi/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/fastapi/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/fastapi/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/fastapi/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
The task:
@shared_task()
def task_send_welcome_email(user_pk):
    from project.users.models import User
    user = User.filter(id=user_pk).first()
    logger.info(f'send email to {user.email} {user.id}')
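For illustration only, here is a minimal sketch of one way to drive Tortoise-ORM from inside the Celery container; the database URL is assembled from the Compose values above (service name db), and initialising the ORM per task is a simplification for the sketch, not the asker's setup:
import asyncio
import logging

from celery import shared_task
from tortoise import Tortoise

logger = logging.getLogger(__name__)

# Assumed URL, built from the Compose service name "db" and the POSTGRES_* values above
DB_URL = "postgres://fastapi_celery:fastapi_celery@db:5432/fastapi_celery"

async def _send_welcome_email(user_pk):
    # "project.users.models" is taken from the import in the task above
    await Tortoise.init(db_url=DB_URL, modules={"models": ["project.users.models"]})
    try:
        from project.users.models import User
        user = await User.filter(id=user_pk).first()
        logger.info(f'send email to {user.email} {user.id}')
    finally:
        await Tortoise.close_connections()

@shared_task()
def task_send_welcome_email(user_pk):
    # Celery tasks are synchronous, so run the async ORM code to completion here
    asyncio.run(_send_welcome_email(user_pk))
For a busy worker you would rather initialise the ORM once per worker process, for example from Celery's worker_process_init signal, instead of on every task.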

Connecting to a MongoCryptD instance in docker environment with Mongoose

I've searched over the web but couldn't find my answer anywhere.
I'm trying to run an API web service, using NestJS framework.
I'm running docker-compose that spins up the API server, a MongoDB instance, and a mongocryptd instance to allow Client-Side Field Level Encryption on my app.
I'm able to connect to the MongoDB instance, but not to the mongocryptd instance.
Docker-Compose file:
version: "3.7"
services:
api:
build:
context: .
dockerfile: Dockerfile
labels:
env: dev
args:
APP: appname
APP_PORT: 3000
ports:
- "3000:3000"
command: ["sh", "-c", "npm run start:app:dev"]
volumes:
- .:/app
mongodb:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
command: ["--auth"]
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: usr
MONGO_INITDB_ROOT_PASSWORD: pwd
ports:
- "27017:27017"
volumes: ["/private/var/services/mongodb:/data/db"]
mongocryptd:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
entrypoint: mongocryptd
restart: always
ports:
- "27020:27020"
volumes: ["/private/var/services/mongodb:/data/db"]
The Dockerfile used is Mongo's official Dockerfile, but supplied with args to build an enterprise version of the image, which includes the enterprise features.
When trying to connect to the database from the app, I'm running:
MongooseModule.forRoot(`mongodb://usr:pwd@mongodb:27017`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  retryAttempts: 2,
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    extraOptions: {
      mongocryptdURI: `mongodb://mongocryptd:27020`,
      mongocryptdBypassSpawn: true
    }
  } as any
})
** This is the NestJS way of supplying the config; it's similar to Mongoose: the first argument is the URI and the second is the settings object.
Without the autoEncryption options, I'm able to connect without any problems. That means that my database address is correct.
With the autoEncryption options, I'm getting MongooseServerSelectionError: connect ECONNREFUSED 172.25.0.4:27020 (the mongocryptd address). That means that the IP is correct (DNS resolved), but the connection is refused. As I showed before, the port (27020) is published by the docker-compose file, and I even tried to add an EXPOSE step in the build itself.
BUT when I map the network of the containers to host (network_mode: "host"), the application is able to connect without any problems (changing the connection DNS to localhost:27017 and 27020, of course). So that must mean it's a Docker-related problem.
Additional things I've tried, and a recap of what I tried:
Attach a volume to replace /etc/mongod.conf.orig with the following network configurations:
net:
  port: 27017
  bindIp: 0.0.0.0
  bindIpAll: true
Instead of attaching a volume, replacing that file at the build step, before launching the mongo service.
I also tried changing the bindIp to the specific application IP that was given by the docker network.
All types of connection strings with & without user credentials, auth source, and default database.
Port 27020 is published in docker-compose & exposed on docker file.
I ran out of ideas. Any help is appreciated! :)
EDIT:
After more debugging, I can see that mongod is running with --bind_ip_all by default, so changing the conf file shouldn't have an effect.
I also tried running mongocryptd with mongod's docker-entrypoint.sh entrypoint instead of overriding it.
Verify mongocryptd is running (ps awwxu, etc.)
Verify you can connect to it from bash on the same container where it is running using mongo.
Verify you can connect to it from host system using mongo.
Check mongocryptd logs (it's basically a mongod with some extra functionality); example commands follow below.
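For example, using the service name mongocryptd from the Compose file in the question (an assumption, not the asker's exact setup), the in-container connection check and the log check could look like:
docker-compose exec mongocryptd mongo --port 27020 --eval "db.runCommand({ ping: 1 })"
docker-compose logs mongocryptd
If the ping succeeds from inside the container but not from another container, the problem is in how mongocryptd is bound or published rather than in the process itself.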
I had similar issues with mongocryptd, but I was setting up a PHP node instead of an npm one. I tried different solutions but didn't manage to succeed. I even tried a docker-compose.yaml similar to Asaf Kfir's with the mongodb-enterprise-cryptd lib pre-installed, but had the same issue. Keep in mind that in the PHP node I was already able to install libmongocrypt-dev and
mongodb-enterprise-cryptd via the Dockerfile. (I will leave the PHP Dockerfile below.)
I managed to link those three containers under the same IP address and tested with ncat that I could reach them. But when I tried to run tests from the PHP node, it started throwing:
MongoDB\Driver\Exception\BulkWriteException: Bulk write failed due to previous MongoDB\Driver\Exception\RuntimeException: key vault error: Invalid reply to find command.
I had this issue for two weeks and basically didn't know how to resolve it.
P.S. remember these words: The automatic feature of field level encryption is only available in MongoDB Enterprise 4.2 or later
At that time my docker-compose.yaml file looked like that:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: unless-stopped
    tty: true
    ports:
      - "27017:27017"
      - "27020:27020"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: local-mongo-db
    container_name: mongodb
    restart: unless-stopped
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
  #MongoDB Service
  mongocryptd:
    image: local-mongocryptd
    container_name: mongocryptd
    entrypoint: mongocryptd
    restart: unless-stopped
    tty: true
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
volumes:
  dbdata:
    driver: local
The Mongo images were built from here: https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-with-docker/
I managed to solve this issue like this:
Instead of building the mongo-enterprise image, I accidentally built with the official mongo:4.2 image and everything worked well. I don't know why Mongo says that Enterprise is needed for encryption, because for me the mongo-enterprise encryption didn't work, while the original mongo:4.2 image worked perfectly. (mongocryptd itself still comes from the enterprise packages installed in the PHP image below.)
working docker-compose.yaml:
version: '3'
services:
  #PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: always
    tty: true
    ports:
      - "27017:27017"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  #MongoDB Service
  mongodb:
    image: mongo:4.2
    container_name: mongodb
    restart: always
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
php node Dockerfile:
FROM php:7.4
RUN apt-get update && apt-get install -y zip unzip libzip-dev git mercurial zlib1g-dev libicu-dev libcurl4-gnutls-dev libssl-dev libssh2-1-dev libgmp-dev libpng-dev uuid-dev
RUN cd /tmp && git clone https://github.com/php/pecl-networking-ssh2 && cd /tmp/pecl-networking-ssh2 \
&& phpize && ./configure && make && make install \
&& echo "extension=ssh2.so" > /usr/local/etc/php/conf.d/ext-ssh2.ini \
&& rm -rf /tmp/ssh2
RUN docker-php-ext-configure gmp
RUN docker-php-ext-install zip json pdo pdo_mysql curl opcache bcmath sockets gmp gd
RUN docker-php-ext-install -j$(nproc) intl
RUN pecl install uuid pcov redis mongodb
RUN docker-php-ext-enable uuid pcov redis mongodb
RUN curl -sS https://get.symfony.com/cli/installer | bash -s -- --install-dir /usr/local/bin
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -L -sS "https://github.com/splitsh/lite/releases/download/v1.0.1/lite_linux_amd64.tar.gz" | tar xvz -C /usr/local/bin
RUN apt-get update
RUN apt-get install -y curl gpg wget
RUN sh -c 'curl -s https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg'
RUN echo "deb https://libmongocrypt.s3.amazonaws.com/apt/ubuntu bionic/libmongocrypt/1.0 universe" | tee /etc/apt/sources.list.d/libmongocrypt.list
RUN wget -qO - mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb http://repo.mongodb.com/apt/debian stretch/mongodb-enterprise/4.2 main" | tee /etc/apt/sources.list.d/mongodb-enterprise.list
RUN apt-get update
RUN apt-get install -y libmongocrypt-dev && apt-get install --no-install-recommends -y mongodb-enterprise-cryptd
I hope I helped. Cheers!
UPDATE:
My tests are running without libmongocrypt-dev lib, so I guess you only need mongodb-enterprise-cryptd.
I decided to use socat to forward traffic.
Dockerfile:
# Build stage
FROM ubuntu:focal
ENV ENTRY_FILE=docker-entrypoint.sh
ENV MONGODB_PATH=/usr/src/mongodb
ENV ENTRY_POINT=$MONGODB_PATH/$ENTRY_FILE
RUN apt-get update && apt-get install -y sudo
RUN sudo apt-get install -y curl telnet vim socat libcurl4 libgssapi-krb5-2 libldap-2.4-2 libwrap0 libsasl2-2 libsasl2-modules libsasl2-modules-gssapi-mit snmp openssl liblzma5
RUN curl -k -o mongodb.tgz "https://downloads.mongodb.com/linux/mongodb-linux-$(arch)-enterprise-ubuntu2004-5.0.10.tgz"
RUN tar -xf mongodb.tgz --strip-components=1
RUN sudo ln -s $MONGODB_PATH/bin/* /usr/local/bin/
RUN sudo mkdir -p /data/db
RUN sudo mkdir -p /data/log
RUN sudo chown `whoami` /data/db
RUN sudo chown `whoami` /data/log
COPY ./$ENTRY_FILE $ENTRY_POINT
RUN chmod +x $ENTRY_POINT
ENTRYPOINT $ENTRY_POINT
docker-entrypoint.sh:
#!/bin/sh
# Forward the standard MongoDB ports on the container's network IP to the
# locally bound mongod/mongocryptd processes running on the 17xxx ports.
socat -d -d TCP-LISTEN:27017,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17017 &
socat -d -d TCP-LISTEN:27018,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17018 &
socat -d -d TCP-LISTEN:27019,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17019 &
socat -d -d TCP-LISTEN:27020,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17020 &
# Start mongod and mongocryptd on the internal ports the forwarders point at.
./bin/mongod --port 17017 --dbpath /data/db --logpath /data/log/mongod.log &
./bin/mongocryptd --port 17020 --logpath /data/log/mongocryptd.log
$(hostname -I | awk '{print $1}') is the remote IP (my Docker host IP, 172.2.x.x); you can change it to another container's IP.
docker-compose.yml:
version: "3.8"
networks:
ABC:
external: false
name: ABC
services:
mongo-database:
container_name: MongoDB
privileged: true
build:
context: .docker/db
dockerfile: Dockerfile
volumes:
- .mongo/db:/data/db
- .mongo/log:/data/log
ports:
- 27017-27020:27017-27020
networks:
- ABC
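With the forwarders in place, an application container on the ABC network can reach both mongod and mongocryptd through this single service on the standard ports. A sketch of the resulting Mongoose options, reusing keyVaultNamespace and kmsProviders from the question and assuming the service name mongo-database resolves on that network:
// sketch: both URIs point at the one forwarding container
mongoose.connect('mongodb://mongo-database:27017', {
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    extraOptions: {
      mongocryptdURI: 'mongodb://mongo-database:27020',
      mongocryptdBypassSpawn: true
    }
  }
});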

How to disable docker container restart

Postgres in Docker keeps restarting with a changed name after I stop it.
How do I disable the restart?
I've tried
docker update --restart=no my-container-ID
but when I stop the container it starts again with a new container ID:
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53e52dfc9015 postgres:latest "docker-entrypoint.s…" 5 hours ago Up 5 hours 5432/tcp startmarketplace_db.1.o2i5ig3cn0tba5a64r4vkrb8n
$docker stop 53e52dfc9015
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a75d1587c66d postgres:latest "docker-entrypoint.s…" 46 seconds ago Up 39 seconds 5432/tcp startmarketplace_db.1.5ukdrwdo1bc0tssf4rzdkjrta
Source code of Dockerfile:
FROM php:7.2-apache
RUN apt-get update \
&& apt-get install -y \
curl git unzip vim \
libpng-dev libpq-dev \
&& docker-php-ext-install gd pdo pdo_pgsql pgsql
# Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# xDebug
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_enable=on" >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_autostart=on" >> /usr/local/etc/php/conf.d/xdebug.ini
# PHP
ADD ./php.ini /usr/local/etc/php
# Apache
ADD ./virtualhost.conf /etc/apache2/sites-enabled
RUN a2enmod rewrite
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Source code of docker-compose.yml:
version: '3.1'
services:
  web:
    build: ./xxx
    ports:
      - "9001:80"
    volumes:
      - ./app/xxx:/var/www/html
    environment:
      XDEBUG_CONFIG: >
        remote_host=172.18.0.1
        idekey=xxx
      PHP_IDE_CONFIG: serverName=xxx
    links:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_DB: xxx
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
    ports:
      - "5432:5432"
That container is managed by Docker running in swarm mode (the task-style container name startmarketplace_db.1.… gives it away), so stopping it just makes the swarm schedule a replacement task. You need to remove the service or remove the entire stack.
For just the service:
docker service rm startmarketplace_db
For the entire stack:
docker stack rm startmarketplace
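If you only want the database stopped temporarily rather than removed, you can also scale the service down to zero replicas; the service name is the prefix of the container name shown by docker ps:
docker service ls
docker service scale startmarketplace_db=0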

Symfony app in Docker doesn't work

I created an app in Symfony with MongoDB and added it to a Docker image.
The image for MongoDB works fine, with the message: 2017-04-19T12:47:33.936+0000 I NETWORK [initandlisten] waiting for connections on port 27017
But the image for the app doesn't work; I receive the message:
stdin: is not a tty
hello
when I run: docker run docker_web_server:latest
I use this docker-compose file:
web_server:
  build: web_server/
  ports:
    - 5000:5000
  links:
    - mongo
  tty: true
  environment:
    SYMFONY__MONGO_ADDRESS: mongo
    SYMFONY__MONGO_PORT: 27017
mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
And Dockerfile for app is:
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
git \
curl \
php5-cli \
php5-json \
php5-intl
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
ADD entrypoint.sh /entrypoint.sh
ADD ./code /var/www
WORKDIR /var/www
#RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/entrypoint.sh" ]
entrypoint.sh
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # This will run a web-server on port 8000
What is the problem?
Am I calling the server from the Docker image incorrectly?
I expected to have the message:
[OK] Server running on http://127.0.0.1:8000
You should remove CMD ['echo', 'hello'] from your Dockerfile, as this is being passed as a parameter to your ENTRYPOINT.
You should also add tty: true to your web_server service definition.
I'm hoping entrypoint.sh runs php -S 0.0.0.0:8000 at the end. Please post it for further advice. If you do use php -S inside it, prefix it with exec so it takes over as the main process.
Edit since new information added:
I'd modify the entrypoint.sh to:
#!/bin/bash
rm -rf /var/www/app/cache/*
exec php -S 0.0.0.0:8000 # This will run a web server on port 8000
I'd get rid of the symfony_environment.sh and instead add the following to your web_server service:
environment:
  SYMFONY__MONGO_ADDRESS: mongo
  SYMFONY__MONGO_PORT: 27017
As a side-note, I contribute to a project called boilr that generates boilerplates, like this. I've even created a template for php/docker-compose projects, it would be worth checking out. I always keep it up-to-date with the best practices.
https://github.com/rawkode/boilr-docker-compose-php

PostgreSQL PGDATA from host in Docker-System

I want to run a web app with docker-compose. I have a CentOS 7 host running PostgreSQL and the Docker engine. My docker-compose setup includes a PostgreSQL image and should run with the PGDATA from the host system. But every time I run docker-compose I get the error:
initdb: directory "/var/lib/docker-postgresql" exists but is not empty
The docker-compose part for the postgresql-database looks like:
db:
  build: ./postgres/
  container_name: ps01
  volumes:
    - ./postgres:/tmp
    - /var/lib/pgsql/9.4:/var/lib/docker-postgresql
  expose:
    - "5432"
  environment:
    PGDATA: /var/lib/docker-postgresql
I mount the host's /var/lib/pgsql/9.4 into the Docker PostgreSQL container at /var/lib/docker-postgresql and set that path as the PGDATA environment variable.
The Dockerfile in ./postgres/ looks like:
FROM postgres:latest
ENV POSTGIS_MAJOR 2.3
ENV POSTGIS_VERSION 2.3.1+dfsg-1.pgdg80+1
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION \
postgis=$POSTGIS_VERSION \
&& rm -rf /var/lib/apt/lists/*
What should I do to share my Postgres-Data from the host?