MongoNetworkError when connecting from different container - mongodb

I'm a PHP dev new to NodeJS and I am struggling to get my NodeJS container to connect to my MongoDB container. As far as I can see I have all the correct NPM packages installed in my Dockerfile and the docker-compose file is correct. Please note that in the version below I have not added the containers to the same network or put a link to the db service into the nodejs container, although I did try both and got pretty much the same result.
I'm unsure why I get the error below when I bash into the nodejs container and run node app.js.
Error
[nodemon] clean exit - waiting for changes before restart
[nodemon] restarting due to changes...
[nodemon] starting `node app.js`
(node:92) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
Server is listening on port 3000
Could not connect to the database. Exiting now... { MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED localhost:27017]
    at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:431:11)
    at Pool.emit (events.js:193:13)
    at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:559:14)
    at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:973:11)
    at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:39:11)
    at callback (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:261:5)
    at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:286:7)
    at Object.onceWrapper (events.js:281:20)
    at Socket.emit (events.js:193:13)
    at emitErrorNT (internal/streams/destroy.js:91:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:17)
  name: 'MongoNetworkError',
  errorLabels: [ 'TransientTransactionError' ],
  [Symbol(mongoErrorContextSymbol)]: {} }
docker-compose.yml
version: '3.5' # We use version 3.5 syntax
services: # Here we define our service(s)
  frontend:
    container_name: angular
    build: ./angular_app
    volumes:
      - ./angular_app:/usr/src/app
    ports:
      - 4200:4200
    command: >
      bash -c "npm install && ng serve --host 0.0.0.0 --port 4200"
    depends_on:
      - api
  # NodeJS/Express service for API
  api:
    image: nodeexpress
    build:
      context: ./node_server
      dockerfile: Dockerfile
    volumes:
      - ./node_server:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
    links:
      - mongoservice
    depends_on:
      - mongoservice
  # Mongo database service
  mongoservice:
    image: mongo
    container_name: mongocontainer
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_MONGO_ROOTUSER}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_MONGO_ROOTPWD}
    ports:
      - ${DB_MONGO_EXTERNAL_PORT}:${DB_MONGO_INTERNAL_PORT}
    volumes:
      - ${DB_MONGO_VOLUME1}
volumes:
  data:
    external: true
networks:
  default:
    driver: bridge
Dockerfile (for api service - nodejs express)
FROM node:11-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
#RUN npm install mysql
RUN npm install mongodb --save
#RUN npm install --save body-parser express mysql2 sequelize helmet cors
RUN npm install --save body-parser express mongoose helmet cors
RUN npm install --save nocache
RUN npm install nodemon --save
EXPOSE 4300
#CMD ["npm", "run", "start"]
CMD [ "npm", "run", "start.dev" ]
app.js
const express = require('express');
const bodyParser = require('body-parser');
// create express app
const app = express();
// parse requests of content-type - application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: true }))
// parse requests of content-type - application/json
app.use(bodyParser.json());
// Configuring the database
const dbConfig = require('./config/database.config');
const mongoose = require('mongoose');
mongoose.Promise = global.Promise;
// Connecting to the database
// Connection string variants attempted:
// mongodb://root:secret@0.0.0.0:27017/angapp2
// mongodb://root:secret@127.0.0.1:27017/angapp2
// mongodb://root:secret@mongoservice:27017/angapp2
mongoose.connect('mongodb://root:secret@localhost:27017/angapp2', {
    useNewUrlParser: true
}).then(() => {
    console.log("Successfully connected to the database");
}).catch(err => {
    console.log('Could not connect to the database. Exiting now...', err);
    process.exit();
});
// define a simple route
app.get('/', (req, res) => {
    res.json({"message": "Welcome to EasyNotes application. Take notes quickly. Organize and keep track of all your notes."});
});
// Require Notes routes
require('./routes/note.routes.js')(app);
// listen for requests
app.listen(3000, () => {
    console.log("Server is listening on port 3000");
});
What I've tried:
Attempted the various connection string variants in terms of the host name, i.e. localhost, 127.0.0.1, 0.0.0.0, mongoservice
Also ran docker inspect <container-id> on the mongo service, got the internal IP address of the container, and tried that in the connection string
Added RUN npm install mongodb --save to the node server's Dockerfile
Managed to connect the Robo 3T GUI to the Mongo container without issue
Bashed into the Mongo service, logged into the DB, and ran some statements as a test that the service was working fine.

Maybe it's just me being blind, but it seems that you are trying to connect to your database on port 27017 while in your docker-compose file you set the database's port to 8081. Try matching the ports.
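The answer above focuses on the ports, but note that inside the Compose network containers reach each other by service name, and localhost inside the api container refers to the Node container itself. A minimal sketch of the connection string, with credentials taken from the question's app.js and authSource=admin assumed (users created through MONGO_INITDB_ROOT_* live in the admin database):

```javascript
// Sketch: connect via the compose service name instead of localhost.
// "mongoservice" is the service name from the docker-compose.yml above;
// authSource=admin is an assumption.
const user = 'root';
const pass = 'secret';
const host = 'mongoservice'; // NOT localhost: inside the api container,
                             // localhost is the Node container itself
const uri = `mongodb://${user}:${pass}@${host}:27017/angapp2?authSource=admin`;
// mongoose.connect(uri, { useNewUrlParser: true }) // as in app.js
```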

Related

Failed Authentication when connecting with Flask through PyMongo to MongoDB in Docker Compose

I'm using Docker Compose and trying to make two containers talk to each other. One runs a MongoDB database and the other one is a Flask app that needs to read data from the first one using PyMongo.
The Mongo image is defined with the following Dockerfile:
FROM mongo:6.0
ENV MONGO_INITDB_ROOT_USERNAME admin
ENV MONGO_INITDB_ROOT_PASSWORD admin-pwd
ENV MONGO_INITDB_DATABASE admin
COPY mongo-init.js /docker-entrypoint-initdb.d/
EXPOSE 27017
And my data is loaded through the following mongo-init.js script:
db.auth('admin', 'admin-pwd')
db = db.getSiblingDB('quiz-db')
db.createUser({
    user: 'quiz-admin',
    pwd: 'quiz-pwd',
    roles: [
        {
            role: 'readWrite',
            db: 'quiz-db'
        }
    ]
});
db.createCollection('questions');
db.questions.insertMany([
    {
        question: "Do you like sushi?",
        answers: {
            0: "Yes",
            1: "No",
            2: "Maybe"
        }
    }
]);
The Flask app is pretty straightforward. I'll skip the Dockerfile for this one as I don't think it's important to the issue. I try to connect to the database with the following code:
from flask import Flask, render_template
from pymongo import MongoClient
app = Flask(__name__)
MONGO_HOST = "questions-db"
MONGO_PORT = "27017"
MONGO_DB = "quiz-db"
MONGO_USER = "quiz-admin"
MONGO_PASS = "quiz-pwd"
uri = "mongodb://{}:{}@{}:{}/{}?authSource=quiz-db".format(MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
client = MongoClient(uri)
db=client["quiz-db"]
questions=list(db["questions"].find())
I'm not an expert when it comes to Mongo, but I've set authSource to 'quiz-db' since that's the database where I've created the user in the 'mongo-init.js' script. I tried to run the database container alone and I did successfully log in using mongosh with the user 'quiz-admin'. All the data is there and everything works fine.
The problem is only coming up when trying to connect from the Flask app. Here's my Docker compose file:
version: '3.9'
services:
  # Flask app
  app:
    build: ./app
    ports:
      - "8000:5000"
    depends_on:
      - "questions-db"
    networks:
      - mongo-net
  # Mongo database
  questions-db:
    build: ./questions_db
    hostname: questions-db
    container_name: questions-db
    ports:
      - "27017:27017"
    networks:
      - mongo-net
networks:
  mongo-net:
    driver: bridge
When I run 'docker compose up' I get the following error on the Flask container startup:
pymongo.errors.OperationFailure: command find requires authentication
full error: {'ok': 0.0, 'errmsg': 'command find requires authentication', 'code': 13, 'codeName': 'Unauthorized'}
MongoDB stores all user credentials in the admin database, unless you are using a really ancient version.
Use authSource=admin in the URI
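Concretely, the only change the answer calls for is in the URI's authSource (a sketch reusing the names from the question; everything else stays as posted):

```python
# Sketch: same host and credentials as the question, but authenticating
# against the admin database as the answer suggests.
MONGO_USER = "quiz-admin"
MONGO_PASS = "quiz-pwd"
MONGO_HOST = "questions-db"  # compose service name
MONGO_PORT = "27017"
MONGO_DB = "quiz-db"
uri = "mongodb://{}:{}@{}:{}/{}?authSource=admin".format(
    MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT, MONGO_DB)
```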

Running command during docker compose or docker build failed

I am trying to build mongo inside docker, and I want to create a database, a collection, and a document inside that collection. I tried with docker build, and below is my Dockerfile:
FROM mongo
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'var document = {"_id": "61912ebb4b6d7dcc7e689914","name": "Test Account","email":"test@test.net", "role": "admin", "company_domain": "test.net","type": "regular","status": "active","createdBy": "61901a01097cb16e554f5a19","twoFactorAuth": false, "password": "$2a$10$MPDjDZIboLlD8xpc/RfOouAAAmBLwEEp2ESykk/2rLcqcDJJEbEVS"}; db.Users.insert(document);'
EXPOSE 27017
and using Docker Compose
version: '3.9'
services:
  web:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
  demeter_db:
    image: "mongo"
    volumes:
      - ./mongodata:/data/db
    ports:
      - "27017:27017"
    command: mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
  demeter_redis:
    image: "redis"
I want to add the above records because the web server uses them in the backend. If there is a better way of doing it I would be thankful.
What I get is the below error
demeter_db_1 | Current Mongosh Log ID: 61dc697509ee790cc89fc7aa
demeter_db_1 | Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
demeter_db_1 | MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
When I connect to an interactive shell inside the mongo container and add them manually, things work fine.
root@8b20d117586d:/# mongosh 127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
Current Mongosh Log ID: 61dc64ee8a2945352c13c177
Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB: 5.0.5
Using Mongosh: 1.1.7
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting:
2022-01-10T16:52:14.717+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-01-10T16:52:15.514+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
------
{ ok: 1 }
root@8b20d117586d:/# exit
exit
Cheers
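For context on why this fails: RUN steps execute at image build time, when no mongod is running, which is why the mongosh calls in the Dockerfile get ECONNREFUSED. A common pattern (a sketch; the mongo-init.js file name is an assumption) is to mount the seed script into /docker-entrypoint-initdb.d, which the mongo image executes on first startup, the same mechanism the Flask question above uses:

```yaml
# Sketch for the demeter_db service; mongo-init.js would hold the
# createCollection/insert statements from the Dockerfile above.
demeter_db:
  image: "mongo"
  volumes:
    - ./mongodata:/data/db
    # Executed once, when the data directory is first initialized:
    - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
  ports:
    - "27017:27017"
```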

On Mac and on Raspi same error: EADDRINUSE: address already in use 0.0.0.0:9092

What happened:
I am running kafka and zookeeper with the docker-compose.yml below on my local machine. With server.js I was always able to run a node websocket server to transmit messages.
I am also connected to a Raspberry Pi 4 via ssh. The raspi is in the same network as my machine and also runs kafka and zookeeper.
Somehow I now get an error message (seen below) when I run server.js. I tried running the script on both the raspi and my macbook, and I always get the same message.
I tried the following
I searched for other processes using this port and I only found the docker container using it.
(base) ➜ server git:(branch_julian) ✗ lsof -i:9092
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 7145 jumue 74u IPv6 0x4d78cd0e1c9333df 0t0 TCP *:XmlIpcRegSvc (LISTEN)
I already restarted docker.
Closed the ssh connection to the pi
Scripts running
docker-compose.yml:
version: "3"
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: 'zookeeper'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    container_name: 'kafka'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
server.js
const WebSocket = require("ws");
var Kafka = require('no-kafka');
const wss = new WebSocket.Server({ port: "9092" });
wss.on("connection", ws => {
    console.log("Kafka is connected");
    // Create an instance of the Kafka consumer
    const consumer = new Kafka.SimpleConsumer;
    var data = function (messageSet, topic, partition) {
        messageSet.forEach(function (m) {
            console.log(topic, partition, m.offset, m.message.value.toString('utf8'));
            ws.send(m.message.value.toString('utf8'))
        });
    };
    ws.on("close", () => {
        console.log("Client has disconnected");
    });
    // Subscribe to the Kafka topic
    return consumer.init().then(function () {
        return consumer.subscribe('test', data);
    });
});
Error Message
(base) ➜ websocketserver git:(branch_me) ✗ node server/server.js
events.js:292
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE: address already in use 0.0.0.0:9092
at Server.setupListenHandle [as _listen2] (net.js:1314:16)
at listenInCluster (net.js:1362:12)
at doListen (net.js:1499:7)
at processTicksAndRejections (internal/process/task_queues.js:85:21)
Emitted 'error' event on WebSocketServer instance at:
at Server.WebSocketServer._onServerError (/Users/jumue/virtual7/tasks/0006_lui_dashboard/websocketserver/server/node_modules/ws/lib/WebSocketServer.js:82:50)
at Server.emit (events.js:315:20)
at emitErrorNT (net.js:1341:8)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
code: 'EADDRINUSE',
errno: -48,
syscall: 'listen',
address: '0.0.0.0',
port: 9092
}
In your server.js code, you are trying to create a WebSocket server on port 9092, which is already allocated by Kafka.
You should check the npm or GitHub documentation for no-kafka. You may configure Kafka to use another port and configure your SimpleConsumer (using connectionString) accordingly.
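In other words, the WebSocket server needs a port of its own; a minimal sketch (the 8080 default here is an assumption, any free port works):

```javascript
// Sketch: leave Kafka on 9092 and bind the WebSocket server elsewhere.
// WS_PORT and the 8080 default are assumptions, not from the question.
const WS_PORT = Number(process.env.WS_PORT || 8080);
// const wss = new WebSocket.Server({ port: WS_PORT }); // instead of 9092
```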

connecting scrapy container to mongo container

I am trying to spin up and connect two containers (mongo and a scrapy spider) using docker-compose. Being new to Docker, I've had a hard time troubleshooting networking ports (inside and outside the container). To respect your time I'll keep it short.
The problem:
Can't connect the spider to the mongo db container; I get a timeout error. I think it has to do with the IP address that I am trying to connect to from the container being incorrect. However, the spider works locally (the non-dockerized version) and can pass data to a running mongo container.
small edit to remove name and email from code.
error:
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 5feb8bdcf912ec8797c25497, topology_type: Single
pipeline code:
from scrapy.exceptions import DropItem
# scrapy log is deprecated
#from scrapy.utils import log
import logging
import scrapy
from itemadapter import ItemAdapter
import pymongo

class xkcdMongoDBStorage:
    """
    Class that handles the connection to MongoDB
    Input:
        MongoDB address and port
    Output:
        pipeline that stores scraped items
    """
    def __init__(self):
        # requires two arguments (address and port)
        # connecting to the db
        self.conn = pymongo.MongoClient(
            '127.0.0.1', 27017)  # works with local spider and a running mongo container
            # '0.0.0.0', 27017)
        dbnames = self.conn.list_database_names()
        if 'randallMunroe' not in dbnames:
            # creating the database
            self.db = self.conn['randallMunroe']
        else:
            # the database already exists, so we access it
            self.db = self.conn.randallMunroe

        # connecting to the collection
        dbCollectionNames = self.db.list_collection_names()
        if 'webComic' not in dbCollectionNames:
            self.collection = self.db['webComic']
        else:
            # the collection already exists, so we access it
            self.collection = self.db.webComic

    def process_item(self, item, spider):
        valid = True
        for data in item:
            if not data:
                valid = False
                raise DropItem("Missing {0}!".format(data))
        if valid:
            self.collection.insert(dict(item))
            logging.info("Question added to MongoDB database!")
        return item
Dockerfile for the spider
# base image
FROM python:3
# metadata info
LABEL maintainer="first last name" email="something@gmail.com"
# exposing container port to be the same as scrapy default
EXPOSE 6023
# set work directly so that paths can be relative
WORKDIR /usr/src/app
# copy to make usage of caching
COPY requirements.txt ./
#install dependencies
RUN pip3 install --no-cache-dir -r requirements.txt
# copy code itself from local file to image
COPY . .
CMD scrapy crawl xkcdDocker
docker-compose.yml:
version: '3'
services:
  db:
    image: mongo:latest
    container_name: NoSQLDB
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./data/bin:/data/db
    ports:
      - 27017:27017
    expose:
      - 27017
  xkcd-scraper:
    build: ./scraperDocker
    container_name: xkcd-scraper-container
    volumes:
      - ./scraperDocker:/usr/src/app/scraper
    ports:
      - 5000:6023
    expose:
      - 6023
    depends_on:
      - db
Thanks for the help
Try:
self.conn = pymongo.MongoClient('NoSQLDB',27017)
Within docker compose you reference other containers based on the service name.
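Since the compose file also defines root credentials, the full client call would look something like the following sketch (authSource=admin is an assumption for the MONGO_INITDB_ROOT_* user):

```python
# Sketch: reach the mongo container by its container_name ("NoSQLDB") or
# service name ("db"), with the credentials from the compose file above.
uri = "mongodb://root:password@NoSQLDB:27017/?authSource=admin"
# self.conn = pymongo.MongoClient(uri)  # in xkcdMongoDBStorage.__init__
```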

waiting service database running before others services running in Docker [duplicate]

This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 3 years ago.
I am trying to run my app, which depends_on my PostgreSQL service, in Docker.
Let's say my PostgreSQL database is not running now,
and in my docker-compose.yml:
version: "3"
services:
  myapp:
    depends_on:
      - db
    container_name: myapp
    build:
      context: .
      dockerfile: Dockerfile
    restart: on-failure
    ports:
      - "8100:8100"
  db:
    container_name: postgres
    restart: on-failure
    image: postgres:10-alpine
    ports:
      - "5555:5432"
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: 12345678
      POSTGRES_DB: dev
When I try docker-compose up -d, it creates the postgres service and then creates the myapp service. But it seems my PostgreSQL is not running yet when myapp finishes installing and starts, and it says:
my database server not running yet
How do I make myapp wait until the db service reports that my db is running?
The documentation of depends_on says that:
depends_on does not wait for db to be “ready” before starting myapp - only until it has been started.
So you'll have to check that your database is ready by yourself before running your app.
Docker has a documentation that explains how to write a wrapper script to do that:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
Then you can just call this script before running your app in your docker-compose file:
command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
There are also tools such as wait-for-it, dockerize or wait-for.
However these solutions have some limitations, and Docker says that:
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
This method will be more resilient.
Here is how I use a retry strategy in javascript:
async ensureConnection () {
  let retries = 5
  const interval = 1000
  while (retries) {
    try {
      await this.utils.raw('SELECT \'ensure connection\';')
      break
    } catch (err) {
      console.error(err)
      retries--
      console.info(`retries left: ${retries}, interval: ${interval} ms`)
      if (retries === 0) {
        throw err
      }
      await new Promise(resolve => setTimeout(resolve, interval))
    }
  }
}
Please have a look at: https://docs.docker.com/compose/startup-order/.
Docker-compose won't wait for your database, you need a way to check it externally (via script or retrying the connection as Mickael B. proposed). One of the solutions proposed in the above link is a wait-for.sh utility script - we used it in a project and it worked quite well.