Admin user creation fails in Rocket.Chat - docker-compose

I have been running Rocket.Chat on a cloud instance. I used the parameters specified in the Rocket.Chat document below for creating the admin user through docker-compose in the YAML file.
https://docs.rocket.chat/guides/administrator-guides/create-the-first-admin
I am not able to create an admin user even though my variables are correctly specified.
docker-compose.yaml
version: '3.8'
services:
  rocketchat:
    image: rocketchat/rocket.chat:latest
    container_name: $ROCKETCHAT_CONTAINER_NAME
    command: >
      bash -c
        "for i in `seq 1 30`; do
          node main.js &&
          s=$$? && break || s=$$?;
          echo \"Tried $$i times. Waiting 5 secs...\";
          sleep 5;
        done; (exit $$s)"
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    depends_on:
      - mongo
    environment:
      - PORT=3000
      - ROOT_URL=http://xxxxxxxxx:3000
      - MONGO_URL=mongodb://mongo:27017/rocketchat
      - MONGO_OPLOG_URL=mongodb://mongo:27017/local
      - MAIL_URL=smtp://smtp.email
      - ADMIN_USERNAME=admin
      - ADMIN_PASS=password
      - ADMIN_EMAIL=beulah@xxxxxx.com
    ports:
      - 3000:$ROCKETCHAT_PORT
    labels:
      - "traefik.backend=rocketchat"
      - "traefik.frontend.rule=Host: your.domain.tld"
    networks:
      - $ROCKETCHAT_NETWORK
  mongo:
    image: mongo:$MONGO_IMAGE_TAG
    container_name: $MONGO_CONTAINER_NAME
    restart: unless-stopped
    volumes:
      - ./data/db:/data/db
    command: mongod --smallfiles --oplogSize 128 --replSet rs0 --storageEngine=mmapv1
    env_file: .env
    labels:
      - "traefik.enable=false"
    networks:
      - $ROCKETCHAT_NETWORK
  mongo-init-replica:
    image: mongo:$MONGO_IMAGE_TAG
    container_name: $MONGO_REPLICA_CONTAINER_NAME
    command: >
      bash -c
        "for i in `seq 1 30`; do
          mongo mongo/rocketchat --eval \"
            rs.initiate({
              _id: 'rs0',
              members: [ { _id: 0, host: 'localhost:27017' } ]})\" &&
          s=$$? && break || s=$$?;
          echo \"Tried $$i times. Waiting 5 secs...\";
          sleep 5;
        done; (exit $$s)"
    depends_on:
      - mongo
    env_file: .env
    networks:
      - $ROCKETCHAT_NETWORK
networks:
  rocketchat:

I wasn't able to reproduce the problem, although there is one common pitfall you may have encountered. TL;DR: if you have run this several times on one machine in the same directory, it's most likely Mongo's storage. After the first setup it creates a ./data directory where it keeps user accounts and everything else. If the admin was created once, it won't go through this again.
Normally, if you run Rocket.Chat without these variables, it allows you to create an admin account via the web interface. When you set the environment variables, it may get into this piece of code:
programs/server/app/app.js:
...
if (process.env.ADMIN_PASS) {
  if (_.isEmpty(getUsersInRole('admin').fetch())) {
    ...
But as you can see, there is a second check for any user in the 'admin' role. In other words, the environment variables are only used when no one is in that role yet.
If it does use the variables, you will see something like this in the container logs:
Inserting admin user:
Name: Administrator
Email: beulah@nonexistent.domain
Username: admin
If it does not, you'll see a line like this:
Users with admin role already exist; Ignoring environment variables ADMIN_PASS
The most obvious reason this can happen is that you ran the compose file before with a different set of credentials, or registered an account via the web GUI. After that, the admin user was saved in the database, which (in your compose file) keeps its data outside the container, so it persists between restarts. If that earlier launch did happen and you want to start from the beginning, remove the ./data directory next to your compose file; that is where Mongo saves its data.
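If that is your case, here is a minimal reset sketch, assuming the compose file above and that losing the existing chat data is acceptable (the ./data path comes from the mongo service's volume mapping):

docker-compose down          # stop the stack
rm -rf ./data                # wipe Mongo's on-disk data
docker-compose up -d         # start again; the ADMIN_* variables now apply to a fresh database
docker-compose logs rocketchat | grep -i admin   # look for the "Inserting admin user" lines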

Related

How do I run a bash script in a docker container after it starts?

I'm trying to run a bash script after a Postgres container starts which 1) creates a new table within the Postgres DB, and 2) runs a copy command that dumps the contents of a csv file into the newly created table.
Currently, I'm specifying the execution of the script within my docker-compose.yml file using the "command" argument, but I find that it doesn't allow the Postgres container to successfully start. I receive the following information from the log:
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I remove the "command" argument everything is fine. Here is what my docker-compose.yml file looks like now:
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    expose: # new
      - 8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik
    depends_on:
      - db
    labels: # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - "/Users/theComputerPerson/:/tmp"
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik
    command: /bin/bash -c "/tmp/newtable.sh"
  traefik: # new
    image: traefik:v2.2
    ports:
      - 8008:80
      - 8081:8080
    volumes:
      - "./traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
volumes:
  postgres_data:
It may be worth noting that I'm trying to customize some aspects of this FastAPI project, and that you should look at the development files, not the production files. Please let me know if I can provide any additional information in the comments.
You are overriding the default container image startup command.
According to the official PostgreSQL container image page, you can extend the initialization by adding your .sh scripts (or even .sql files) to the /docker-entrypoint-initdb.d directory.
See https://hub.docker.com/_/postgres.
This approach has a caveat: the scripts are only executed when the database is initialized for the first time (that is, with an empty data directory), so they may not run against an existing volume.
Another approach is to override the default container image command, chaining yours on in shell style: postgres; /bin/bash -c "/tmp/newtable.sh";
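A minimal sketch of the first approach, written as a docker run equivalent so the paths are explicit; the ./initdb directory is an assumption, and the newtable.sh location is taken from the question's bind mount:

# put the init script where the entrypoint looks for it
mkdir -p ./initdb
cp /Users/theComputerPerson/newtable.sh ./initdb/
# mount it into /docker-entrypoint-initdb.d instead of overriding command:
docker run -d --name db \
  -e POSTGRES_USER=fastapi_traefik \
  -e POSTGRES_PASSWORD=fastapi_traefik \
  -e POSTGRES_DB=fastapi_traefik \
  -v "$PWD/initdb:/docker-entrypoint-initdb.d:ro" \
  postgres:13-alpine

In the compose file this corresponds to dropping the db service's command: line and adding ./initdb:/docker-entrypoint-initdb.d:ro to its volumes.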

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to docker and cloud hosting, this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy, with every file from every library residing on my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine other than the fact that my db image does not reference my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
I transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a further step by automating the Droplet provisioning to e.g. use a container-oriented OS and access your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is not relevant and probably makes little difference; it may require some tweaks to the answer, but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways that the DB state could be persisted: in-container (not ideal), using volume mounts (good), or otherwise.
Each is "moveable" but it would help if you could add your Compose file to your question so that we may see which approach is being used.
In full disclosure, I'm not familiar with the approach you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory onto this path. Presumably that directory contains at least one file that performs your intended initialization.
NB The volumes: data: section at the end of the Compose file appears redundant. You're actually using a host-mounted directory ./data, not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp and this provides 2 alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove: volumes: - ./data:/docker-entrypoint-initdb.d
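Either way, here is a hedged sanity check once the files are on the Droplet (the service and file names are taken from the compose files above):

docker-compose -f docker-compose-prod.yml up -d
docker-compose -f docker-compose-prod.yml exec db ls /docker-entrypoint-initdb.d   # the copied scripts should be listed
docker-compose -f docker-compose-prod.yml logs db | grep -i entrypoint             # the MySQL entrypoint logs each init file it runs

Keep in mind the MySQL entrypoint only runs these scripts while initializing an empty data directory, so they will not run against a database that already contains data.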
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
TreatID INT NOT NULL AUTO_INCREMENT,
TreatName VARCHAR(255) NOT NULL,
PRIMARY KEY (TreatId));
insert into treats (TreatName)
values
("Dried Salmon"),
("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the adminer UI (:8080), login (root|mypass) and browse the database frederik:

Swift Vapor 3 + PostgreSQL + Docker-Compose Correct configuration?

I'm currently building a package to test some DevOps configurations with AWS. The application is built with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, provided you have a local installation of PostgreSQL with username test and password test.
However, my API is not connecting to my DB and I am worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also while reading the Docker postgres docs I came across this in the "Caveats" section.
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. — postgres Docker Hub
I have not made those changes because I am not sure how to go about making that file or how the configuration would look. Has anyone with experience connecting to PostgreSQL and using Vapor as a back end done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That's theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to become available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh

set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]

How do I properly set up my Keystone.js app to run in docker with mongo?

I have built my app which runs fine locally. When I try to run it in docker (docker-compose up) it appears to start, but then throws an error message:
Creating mongodb ... done
Creating webcms ... done
Attaching to mongodb, webcms
...
Mongoose connection "error" event fired with:
MongoError: failed to connect to server [localhost:27017] on first connect
...
webcms exited with code 1
I have read that with Keystone.js you need to configure the Mongo location in the .env file, which I have:
MONGO_URI=mongodb://localhost:27017
Here is my Dockerfile:
# Use node 9.4.0
FROM node:9.4.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["node","keystone"]
...and my docker-compose
version: "2"
services:
# NodeJS app
web:
container_name: webcms
build: .
ports:
- 3000:3000
depends_on:
- mongo
# MongoDB
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db/mongo
ports:
- 27017:27017
When I run docker ps it confirms that mongo is up and running in a container...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e06e4a5cfe mongo "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:27017->27017/tcp mongodb
I am either missing some config or I have it configured incorrectly. Could someone tell me what that is?
Any help would be appreciated.
Thanks!
It is not working properly because you are sending the wrong host.
Your container does not understand what localhost:27017 is, since that is your computer's address and not the container's own address.
It is important to understand that each service has its own container with a different IP.
The beauty of docker-compose is that you do not need to know your container's address; it is enough to know your service name:
version: "2"
volumes:
db-data:
driver: local
services:
web:
build: .
ports:
- 3000:3000
depends_on:
- mongo
environment:
- MONGO_URI=mongodb://mongo:27017
mongo:
image: mongo
volumes:
- "db-data:/data/db/mongo"
ports:
- 27017:27017
Just run docker-compose up and you are all set.
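A quick way to confirm the service-name DNS is working, shown as a hedged sketch (the service names come from the compose file above, and node is assumed to be available in the web image, which it is for the question's node:9.4.0 base):

# resolve the "mongo" service name from a throwaway web container
docker-compose run --rm web node -e "require('dns').lookup('mongo', (err, addr) => console.log(err || addr))"

If this prints an IP address, a MONGO_URI of mongodb://mongo:27017 will resolve from inside the container.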
A couple of things that may help:
First. I am not sure what your error logs look like but buried in my error logs was:
...Error: The cookieSecret config option is required when running Keystone in a production environment.Update your app or environment config so this value is supplied to the Keystone constructor....
To solve this problem, in your Keystone entry file (e.g. index.js) make sure your Keystone constructor has the cookieSecret parameter set correctly: process.env.NODE_ENV === 'production'
Next. Change the mongo URI from the one Keystone generated (mongoUri: mongodb://localhost/my-keystone) to: mongoUri: 'mongodb://mongo:27017'. Docker needs this because it is the mongo container's address. This change should also be reflected in your docker-compose file under the MONGO_URI environment variable:
... environment: - MONGO_URI=mongodb://mongo:27017 ...
After these changes your Keystone constructor should look like this:
const keystone = new Keystone({
  adapter: new Adapter(adapterConfig),
  cookieSecret: process.env.NODE_ENV === 'production',
  sessionStore: new MongoStore({ url: 'mongodb://mongo:27017' }),
});
And your docker-compose file, something like this (I used a network instead of links for my docker-compose, as Docker has stated that links are a legacy option. I've included mine in case it's useful for anyone else):
version: "3.3"
services:
mongo:
image: mongo
networks:
- appNetwork
ports:
- "27017:27017"
environment:
- MONGO_URI=mongodb://mongo:27017
appservice:
build:
context: ./my-app
dockerfile: Dockerfile
networks:
- appNetwork
ports:
- "3000:3000"
networks:
appNetwork:
external: false
It is better to use MongoDB Atlas if you do not want complications. You can use it both locally and in deployment.
Simple steps to get the Mongo URL are available at https://www.mongodb.com/cloud/atlas
Then add an env variable:
CONNECT_TO=mongodb://your_url
To pass the .env file to Docker, use:
docker run --publish 8000:3000 --env-file .env --detach --name kb keystoneblog:1.0

How to create user in mongodb with docker-compose

I'm trying to create some kind of script that will create a Docker container with MongoDB and automatically create a user.
I can usually manage my Docker images with docker-compose, but this time I don't know how to do it.
Basically, here is what I have to do:
clean/destroy container (docker-compose down)
create a docker container with mongodb and start it (without --auth parameter)
execute a java script containing db.createUser()
stop the container
restart the same container with --auth parameter to allow login with the user created in the javascript
I can't figure out how to do that properly with docker-compose, because when it starts I have to give it the --auth command. If I do that, I cannot execute my JavaScript to add my user. MongoDB allows user creation without being logged in if there is no user yet and the --auth parameter is not provided.
I want to do this automatically; I do not want to run commands manually. The goal is to have a script that can be executed before each integration test run to start from a clean database.
Here is my project:
integration-test/src/test/resources/scripts/docker-compose.yml
mongodb:
  container_name: mongo
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - .:/setup
  command: --auth
integration-test/src/test/resources/scripts/docker-init.sh
docker-compose down
docker-compose up -d
sleep 1
docker exec mongo bash -c "mongo myDatabase /setup/mongodb-setup.js"
integration-test/src/test/resources/scripts/mongodb-setup.js
db.createUser(
  {
    user: "myUser",
    pwd: "myPassword",
    roles: [
      { role: "readWrite", db: "myDatabase" }
    ]
  })
Finding a way to restart a container with a new parameter (in this case --auth) would help, but I can't find how to do that (docker start does not take parameters).
Any idea how I should do what I would like?
If not, I can still delete everything from my database with some Java code or something else, but I would like a complete MongoDB Docker setup created with a script.
The official mongo image now supports the following environment variables, which can be used in docker-compose as below:
environment:
  - MONGO_INITDB_ROOT_USERNAME=user
  - MONGO_INITDB_ROOT_PASSWORD=password
  - MONGO_INITDB_DATABASE=test
more explanation at:
https://stackoverflow.com/a/42917632/1069610
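The same variables work outside Compose as well; a minimal docker run sketch with placeholder credentials:

# the root user is created on the first start, and mongod then requires authentication
docker run -d --name mongo -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=user \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  -e MONGO_INITDB_DATABASE=test \
  mongo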
This is how I do it. My requirement was to bring up a few containers along with MongoDB; the other containers expect a user to be present when they come up, and this worked for me. The good part is that mongoClientTemp exits after the command is executed, so the container doesn't stick around.
version: '2'
services:
  mongo:
    image: mongo:latest
    container_name: mongo
    ports:
      - "27017:27017"
    volumes:
      - /app/hdp/mongo/data:/data/db
  mongoClientTemp:
    image: mongo:latest
    container_name: mongoClientTemp
    links:
      - mongo:mongo
    command: mongo --host mongo --eval "db.getSiblingDB('dashboard').createUser({user:'db', pwd:'dbpass', roles:[{role:'readWrite',db:'dashboard'}]});"
    depends_on:
      - mongo
  another-container:
    image: another-image:v01
    container_name: another-container
    ports:
      - "8080:8080"
    volumes:
      - ./logs:/app/logs
    environment:
      - MONGODB_HOST=mongo
      - MONGODB_PORT=27017
    links:
      - mongo:mongo
    depends_on:
      - mongoClientTemp
EDIT: tutumcloud repository is deprecated and no longer maintained, see other answers
I suggest that you use environment variables to set mongo user, database and password. tutum (owned by Docker) published a very good image
https://github.com/tutumcloud/mongodb
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_USER="user" -e MONGODB_DATABASE="mydatabase" -e MONGODB_PASS="mypass" tutum/mongodb
You may convert these variables into docker-compose environment variables. You don't have to hard-code them.
environment:
  MONGODB_USER: "${db_user_env}"
  MONGODB_DATABASE: "${dbname_env}"
  MONGODB_PASS: "${db_pass}"
This configuration will read from your session's environment variables.
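For example (variable names taken from the snippet above, values are placeholders), you would export them in the shell session before bringing the stack up:

export db_user_env=user
export dbname_env=mydatabase
export db_pass=mypass
docker-compose up -d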
In your project directory, create another directory named docker-entrypoint-initdb.d; the file tree then looks like this:
📦Project-directory
┣ 📂docker-entrypoint-initdb.d
┃ ┗ 📜mongo-init.js
┗ 📜docker-compose.yaml
The docker-compose.yml contains:
version: "3.7"
services:
mongo:
container_name: container-mongodb
image: mongo:latest
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MONGO_INITDB_DATABASE: root-db
volumes:
- ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
mongo-init.js contains the JavaScript code to create a user with different roles.
print("Started Adding the Users.");
db = db.getSiblingDB("admin");
db.createUser({
user: "userx",
pwd: "1234",
roles: [{ role: "readWrite", db: "admin" }],
});
print("End Adding the User Roles.");
You can modify the mongo-init.js as you need.
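One caveat worth noting: the mongo image only runs /docker-entrypoint-initdb.d scripts while initializing an empty data directory, so to re-run a modified mongo-init.js you have to start from a clean volume, roughly:

docker-compose down -v   # removes the containers and their volumes (destroys existing data)
docker-compose up -d     # a fresh database is initialized and the init script runs again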
After reading the official mongo Docker page, I found that you can create an admin user a single time, even if the auth option is being used. This is not well documented, but it simply works (hope it is not a feature).
Therefore, you can keep using the auth option all the time.
I created a github repository with scripts wrapping up the commands to be used. The most important command lines to run are:
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo admin /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The first line will create the admin user (and mongo will not complain even with auth option). The second line will create your "normal" user, using the admin rights from the first one.
Mongo image provides the /docker-entrypoint-initdb.d/ path to deploy custom .js or .sh setup scripts.
Check this post to get more details :
How to create a DB for MongoDB container on start up?
file: docker-compose.yaml
mongo:
image: mongo:latest
volumes_from:
- data
ports:
- "27017:27017"
command: --auth
container_name: "db_mongodb"
data:
image: mongo:latest
volumes:
- /var/lib/mongo
- ./setup:/setup
command: "true"
container_name: "db_mongodb_data"
file: .buildMongo.sh
#!/bin/sh
docker-compose down
docker-compose up -d
sleep 1
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo myDb /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The create-admin.js and create-user.js files are commands that you run using the mongo shell, so they should be easy to understand. The real direction is like jzqa's answer: "environment variables".
So the question here is how to create a user. I think this answers at least that point; you can check the complete setup at https://github.com/Lus1t4nUm/mongo_docker_bootstrap
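For illustration only, here is a hedged one-liner with roughly the same effect as a minimal create-admin.js (the user, password, and role are placeholders, not the repository's actual script):

# create a root-level admin before any other user exists
docker exec db_mongodb mongo admin --eval 'db.createUser({user: "admin", pwd: "admin", roles: [{role: "root", db: "admin"}]})'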
To initialize Mongo with an initial user/password/database triple and initdb scripts using only one docker-compose.yml, without any extra configuration, you can use the bitnami/mongodb image.
In my case, my scripts under the /docker-entrypoint-initdb.d directory in the container did not run correctly after I set the environment variables MONGODB_USERNAME and MONGODB_PASSWORD (env variables specific to the bitnami image), because mongod runs with the --auth option automatically when you set these variables. Consequently, I got authentication errors while the container was executing the scripts.
That was because it was connecting to: mongodb://192.168.192.2:27017/compressors=disabled&gssapiServiceName=mongodb
TERMINAL LOG OF THE ERROR
FIRST DOCKER-COMPOSE FILE:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
INIT JS FILE UNDER ./mongodb/scripts PATH:
let db = connect("localhost:27017/some_db_name");
db.auth("some_username", "some_password");
let collections = db.getCollectionNames();
let storeFound = false;
let index;
for (index = 0; index < collections.length; index++) {
  if ("store" === collections[index]) {
    storeFound = true;
  }
}
if (!storeFound) {
  db.createCollection("store");
  db.store.createIndex({ "name": 1 });
}
So, I decided to add new environment variables to my docker-compose.yml after inspecting the file at https://github.com/bitnami/bitnami-docker-mongodb/blob/master/4.2/debian-10/rootfs/opt/bitnami/scripts/libmongodb.sh.
In this sh file, there is a function named mongodb_custom_init_scripts() for executing the scripts. To execute all the script files, it calls the mongodb_execute() method. In this method, after the mongod instance is up and running, the mongo client connects to the mongod instance using some parameters.
########################
# Execute an arbitrary query/queries against the running MongoDB service
# Stdin:
#   Query/queries to execute
# Arguments:
#   $1 - User to run queries
#   $2 - Password
#   $3 - Database where to run the queries
#   $4 - Host (default to result of get_mongo_hostname function)
#   $5 - Port (default $MONGODB_PORT_NUMBER)
#   $6 - Extra arguments (default $MONGODB_CLIENT_EXTRA_FLAGS)
# Returns:
#   None
########################
mongodb_execute() {
    local -r user="${1:-}"
    local -r password="${2:-}"
    local -r database="${3:-}"
    local -r host="${4:-$(get_mongo_hostname)}"
    local -r port="${5:-$MONGODB_PORT_NUMBER}"
    local -r extra_args="${6:-$MONGODB_CLIENT_EXTRA_FLAGS}"
    local result
    local final_user="$user"
    # If password is empty it means no auth, do not specify user
    [[ -z "$password" ]] && final_user=""

    local -a args=("--host" "$host" "--port" "$port")
    [[ -n "$final_user" ]] && args+=("-u" "$final_user")
    [[ -n "$password" ]] && args+=("-p" "$password")
    [[ -n "$extra_args" ]] && args+=($extra_args)
    [[ -n "$database" ]] && args+=("$database")
    "$MONGODB_BIN_DIR/mongo" "${args[@]}"
}
After that, I added new environment variables to my docker-compose file: MONGODB_ADVERTISED_HOSTNAME, MONGODB_PORT_NUMBER, and MONGODB_CLIENT_EXTRA_FLAGS.
So my final docker-compose.yml looks like:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
- MONGODB_ADVERTISED_HOSTNAME=localhost
- MONGODB_PORT_NUMBER=27017
- MONGODB_CLIENT_EXTRA_FLAGS=--authenticationDatabase=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
Now it was connecting with this URL:
mongodb://localhost:27017/?authSource=some_db_name&compressors=disabled&gssapiServiceName=mongodb
Add the --noauth option to the mongo command.
Extract from my docker-compose.yml file:
mongors:
  image: mongo:latest
  command: mongod --noprealloc --smallfiles --replSet mongors2 --dbpath /data/db --nojournal --oplogSize 16 --noauth
  environment:
    TERM: xterm
  volumes:
    - ./data/mongors:/data/db