I am able to run a single Mongo instance using the following Docker command:
docker run -it --rm -d -p 27017:27017 --user mongodb mongo:3.4
But I can't find out how to configure a config server and a query router (mongos), or how to add shards with replication.
Thanks in advance.
I used this tutorial myself: https://medium.com/@gargar454/deploy-a-mongodb-cluster-in-steps-9-using-docker-49205e231319#.mle6a8wmg
Step 1: create folders
create folders (locally, on all nodes):
sudo mkdir -p /dockerlocalstorage/data/mongodb
sudo mkdir -p /dockerlocalstorage/backup/mongodb
sudo mkdir -p /dockersharedstorage/config/mongodb/
Step 2: create keyfile
create the keyfile as the root user and give it the correct permissions:
sudo su
cd /dockersharedstorage/config/mongodb/
openssl rand -base64 741 > mongodb-keyfile
chmod 600 mongodb-keyfile
sudo chown 999 mongodb-keyfile
Depending on the mount type you might need to use /dockerlocalstorage/ to keep the keyfile. MongoDB will complain if the permissions are not set correctly (which can be harder to achieve on, let's say, a CIFS mount).
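A quick sanity check on the host (a small sketch, assuming the keyfile path from step 2):
ls -l /dockersharedstorage/config/mongodb/mongodb-keyfile
This should show -rw------- permissions and owner UID 999 (the mongodb user inside the container); if not, repeat the chmod/chown above.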
Step 3: setup first node
create the MongoDB container without auth/keyfile:
docker run --name mongodb \
-v /dockerlocalstorage/data/mongodb:/data/db \
-v /dockersharedstorage/config/mongodb:/opt/keyfile \
--hostname="dock01" \
-p 27017:27017 \
-d mongo
log in to the container:
docker exec -it mongodb mongo
create the root users; don't forget to change the passwords:
use admin
db.createUser({
  user: "admin",
  pwd: "PASSWORD",
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
});
db.createUser({
  user: "root",
  pwd: "PASSWORD",
  roles: [ { role: "root", db: "admin" } ]
});
exit and remove the container
exit
docker stop mongodb
docker rm mongodb
Step 4: start nodes
change the hostname (dock01, dock02, dock03) for each node:
docker run \
-d \
--name mongodb \
-v /dockerlocalstorage/data/mongodb:/data/db \
-v /dockerlocalstorage/backup/mongodb:/data/backup \
-v /dockersharedstorage/config/mongodb:/opt/keyfile \
--restart=always \
--hostname="dock01" \
-p 27017:27017 mongo \
--keyFile /opt/keyfile/mongodb-keyfile \
--replSet "SET_NAME"
Step 5: setup replication
connect to node 1:
docker exec -it mongodb mongo
use admin
db.auth("root", "PASSWORD");
initialize the replica set:
rs.initiate()
Check the replica set with: rs.conf() or rs.status()
add the other nodes:
rs.add("dock02:27017")
rs.add("dock03:27017")
Step 6: setup mongodump
edit crontab:
sudo su
crontab -l > tempcron
append the new cron entries to the file:
echo "0 4 * * * docker exec -it mongodb mongodump -u root -p PASSWORD -o /data/backup/daily" >> tempcron
echo "30 4 * * 5 docker exec -it mongodb mongodump -u root -p PASSWORD -o /data/backup/weekly" >> tempcron
install the new cron file:
crontab tempcron
rm tempcron
exit
You could use Docker swarm mode if you want a Docker-native MongoDB cluster (Docker >= 1.12 needed).
Have a look at this nice tutorial. It shows you how to get a MongoDB cluster with Docker, replicated and with a config server. Basically, the steps are:
Create multiple virtual machines (with docker-machine or whatever you use to create a new docker host)
Create the swarm (docker cluster of multiple machines)
Create a swarm overlay network to deal with all your mongodb traffic
Create the services on each swarm node (that will create your mongodb containers on your hosts)
Configure your mongodb replica set
This is a bit of work, but worth it: once you get there, you'll have the Docker swarm tooling to orchestrate your MongoDB cluster (see the sketch below).
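A rough sketch of those steps (machine, network and service names below are made up for illustration, not taken from the tutorial):

docker-machine create -d virtualbox mongo-node1
docker-machine create -d virtualbox mongo-node2
docker-machine create -d virtualbox mongo-node3

eval $(docker-machine env mongo-node1)
docker swarm init --advertise-addr $(docker-machine ip mongo-node1)
# run the printed "docker swarm join ..." command on mongo-node2 and mongo-node3

docker network create --driver overlay mongo-net

docker service create --name mongo1 --network mongo-net \
  --constraint node.hostname==mongo-node1 \
  mongo:3.4 mongod --replSet rs0
# repeat as mongo2/mongo3 for the other nodes, then run rs.initiate() and rs.add() from one of the containers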
Related
I used to create a PostgreSQL database for my old project with this command:
docker run --name oldpostgresqldb -e POSTGRES_USER=oldadmin -e POSTGRES_PASSWORD=secret -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres
Then I created the database with this command:
docker exec -i oldpostgresqldb psql -U oldadmin -c "CREATE DATABASE oldDB WITH ENCODING='UTF8' OWNER=oldadmin;"
When I started my new project, I stopped and removed all containers, images, volumes, etc., and even used docker system prune.
Now I am trying to create a new container and database with these commands:
docker run --name newpostgresqldb -e POSTGRES_USER=newadmin -e POSTGRES_PASSWORD=secret -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres
and
docker exec -i newpostgresqldb psql -U newadmin -c "CREATE DATABASE newDB WITH ENCODING='UTF8' OWNER=newadmin;"
But I receive psql: error: FATAL: role "newadmin" does not exist.
Furthermore, I can still manage oldDB with the oldadmin user in the newpostgresqldb container, because the oldadmin user still exists.
How can I delete old data and create new user and database using docker?
Thanks to @Ay0 I've found a solution. I just needed to clear the contents of the /data folder on the host manually.
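For reference, a sketch of that cleanup (assuming the /data bind mount from the commands above contains nothing else you need); the POSTGRES_USER/POSTGRES_PASSWORD variables are only applied when the data directory is empty, which is why the old role survived:

docker stop newpostgresqldb && docker rm newpostgresqldb
sudo rm -rf /data/*
docker run --name newpostgresqldb -e POSTGRES_USER=newadmin -e POSTGRES_PASSWORD=secret -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres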
I have been trying for weeks to back up my Docker MongoDB without any success.
My Docker Compose:
services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: user
      MONGO_INITDB_ROOT_PASSWORD: password
    networks:
      - internal
    volumes:
      - /opt/rpg/mongo/data:/data/db
Input: docker exec mongo sh -c 'exec mongodump -d rpg --archive' > /home/rpg/all-collections.archive
Output: Failed: error getting collections for database rpg: error running listCollections. Database: rpg Err: command listCollections requires authentication
Then I tried with a password:
Input: docker exec mongo sh -c 'exec mongodump -d rpg --uri="mongodb://user:password@mongo:27017/rpg?authSource=admin" --archive' > /home/rpg/all-collections.archive
Output: SASL authentication step: Authentication failed.
After weeks of trying, I got a connection with:
sudo docker run -it --rm --network internal mongo \
mongo --host mongo \
-u user \
-p password \
--authenticationDatabase admin \
rpg
Now I can see the collections and everything, but I still can't get the backup.
I also tried:
Input:
sudo docker run --rm --network internal mongo \
mongodump --host mongo \
-u user \
-p password \
--authenticationDatabase admin \
--db rpg
> ~/rpg2-collections.archive
Output:
Failed: error connecting to db server: server returned error on SASL authentication step: Authentication failed.
Without success. Can someone help me?
I was having the same authentication problem, and this command worked:
docker exec <docker_container_name> sh -c 'mongodump --uri="mongodb://username:password@localhost:27017/db_name?authSource=admin&readPreference=primary" --archive' > db.dump
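For restoring such an archive later, a similar pattern should work (a sketch; the container name and credentials are placeholders just like above):

docker exec -i <docker_container_name> sh -c 'mongorestore --uri="mongodb://username:password@localhost:27017/?authSource=admin" --archive' < db.dump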
I am doing it like this:
mongodump --host='{mongo_server}' --db='{mongo_db}' --collection='{collection_name}' --out={output_folder}
and it works just fine.
Though, as you can see, I am doing a collection-by-collection dump through a Python script on a separate host, because I wasn't able to back up everything at once.
So the first step is to get all collections from the container with an installed mongo client:
mongo {mongo_server}/{mongo_db} --eval 'db.getCollectionNames()' --quiet
My script also uploads each collection dump to an S3 bucket with s3cmd; that is why you are only seeing part of it...
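A minimal shell version of that loop (a sketch; {mongo_server}, {mongo_db} and {output_folder} are placeholders as above, and the s3cmd upload is left out):

for coll in $(mongo {mongo_server}/{mongo_db} --eval 'db.getCollectionNames().join("\n")' --quiet); do
    mongodump --host='{mongo_server}' --db='{mongo_db}' --collection="$coll" --out={output_folder}
done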
After weeks, I got a backup from the files themselves (not mongodump).
sudo tar czvf /home/backup.tar.gz /opt/rpg/mongo
And I did a repair and it worked on my PC.
mongod --repair --dbpath ./{{folder}}
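To restore that kind of file-level backup elsewhere, one option (a sketch, assuming the same paths as above) is to unpack the archive and point a container at the restored data directory:

sudo tar xzvf /home/backup.tar.gz -C /
docker run --rm -v /opt/rpg/mongo/data:/data/db mongo mongod --repair
docker run --name mongo-restored -d -v /opt/rpg/mongo/data:/data/db mongo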
I have set up a Docker container with the MongoDB image. By default it has no password set up. I made a user and assigned it roles, which works perfectly well. But the issue is that connecting is still possible without authentication.
Connect with Authentication > Right Username, Right Password -> CONNECTED
Connect with Authentication > Right Username, Wrong Password -> CONNECTION FAILED
Connection without Authentication > CONNECTED
I want the 3rd point to stop working.
Steps:
1) Run a docker instance without authentication
$ docker run --name container-name -d -p 27017:27017 -v ~/mongodb:/data/db mongo
2) Create a main administrator user with admin roles
$ mongo --port 27017
$ use admin;
$ db.createUser({user: "adminUserName",pwd: "adminPassword",roles: [{ role: "userAdminAnyDatabase", db: "admin" }})
This will create a user in the admin database with the "userAdminAnyDatabase" role. This is like a superuser.
3) Create User for a particular database
$ use yourDatabaseName
$ db.createUser({user: "dev-read-username",pwd: "dev-read-password",roles:["read"]})
-- User with "read" role
$ db.createUser({user: "dev-write-username",pwd: "dev-write-password",roles:["readWrite"]})
-- User with "readWrite" role
For the list of available roles, or how to create custom roles, please check https://docs.mongodb.com/manual/reference/built-in-roles/
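As a small illustration of a custom role (the role, user and collection names here are invented, and yourDatabaseName is the placeholder from above), run in that same database:

db.createRole({
  role: "ordersReadInsert",
  privileges: [{ resource: { db: "yourDatabaseName", collection: "orders" }, actions: ["find", "insert"] }],
  roles: []
})
db.createUser({user: "orders-user", pwd: "orders-password", roles: ["ordersReadInsert"]})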
4) Remove the docker container
$ docker ps -a
$ docker stop container_id
$ docker rm container_id
5) Run the docker instance with authentication enabled
$ docker run --name container-name -d -p 27017:27017 -v ~/mongodb:/data/db mongo --auth
I assume you might not have started the Docker container with --auth enabled. Once you start it with --auth enabled, clients that do not authenticate will not be able to read or write any data.
Run with the --auth option to enable authorization: docker run --name some-mongo -d mongo --auth
You should create an admin user. You can check whether an admin user exists using db.getSiblingDB('admin').system.users.find(), or create one like: db.createUser({ user: 'jsmith', pwd: 'some-initial-password', roles: [{ role: "userAdminAnyDatabase", db: "admin" }] });
Source: https://hub.docker.com/r/library/mongo/
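To verify that --auth is actually enforced, a quick check (reusing the some-mongo container and the jsmith user from above) is to run a command that needs authorization, first without and then with credentials:

docker exec -it some-mongo mongo --eval 'db.adminCommand({ listDatabases: 1 })'
docker exec -it some-mongo mongo -u jsmith -p some-initial-password --authenticationDatabase admin --eval 'db.adminCommand({ listDatabases: 1 })'

The first command should fail with "requires authentication", the second should list the databases.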
I've set up two services on my CoreOS instance. Briefly, the first one is an official mongo container and the other one is a custom-made image that's trying to connect to a "mongo" instance.
- name: living-mongo.service
  command: start
  enable: true
  content: |-
    [Unit]
    Description=Mongo
    Author=Living
    After=docker.service

    [Service]
    Restart=always
    RestartSec=10s
    ExecStartPre=-/usr/bin/docker stop mongo
    ExecStartPre=-/usr/bin/docker rm mongo
    ExecStart=/usr/bin/docker run --name mongo -p 27017:27017 --hostname mongo mongo:2.6
    ExecStop=/usr/bin/docker stop mongo
    ExecStopPost=-/usr/bin/docker rm mongo
- name: living-mongo-seed.service
  command: start
  enable: true
  content: |-
    [Unit]
    Description=Mongo Seed
    Author=Living
    Requires=living-mongo.service
    After=living-mongo.service

    [Service]
    User=core
    Type=oneshot
    ExecStartPre=-/usr/bin/docker stop mongo-seed
    ExecStartPre=-/usr/bin/docker rm mongo-seed
    ExecStart=/usr/bin/docker run --name mongo-seed --link mongo:mongo registry.living-digital-way.com/mongo-qa:v1
    ExecStop=/usr/bin/docker stop mongo-seed
Basically, first I start the mongo instance, and then I try to connect to it to feed some data into it:
# docker run --name mongo -p 27017:27017 --hostname mongo mongo:2.6
# docker run --name mongo-seed --link mongo:mongo registry.living-digital-way.com/mongo-qa:v1
When the second service is started, it's telling me:
Sep 12 14:12:21 core-01 docker[1672]: Status: Downloaded newer image for registry.living-digital-way.com/mongo-qa:v1
Sep 12 14:12:21 core-01 docker[1672]: 2016-09-12T14:12:21.704+0000 warning: Failed to connect to 172.17.0.4:27017, reason: errno:111 Connection refused
Sep 12 14:12:21 core-01 docker[1672]: couldn't connect to [mongo] couldn't connect to server mongo:27017 (172.17.0.4), connection attempt failed
Sep 12 14:12:21 core-01 docker[1672]: 2016-09-12T14:12:21.728+0000 warning: Failed to connect to 172.17.0.4:27017, reason: errno:111 Connection refused
Sep 12 14:12:21 core-01 docker[1672]: couldn't connect to [mongo] couldn't connect to server mongo:27017 (172.17.0.4), connection attempt failed
Once the system has started, I manually run the second service's command in a shell:
docker run --name mongo-seed --link mongo:mongo registry.living-digital-way.com/mongo-qa:v1
and it works fine.
What am I doing wrong?
EDIT
Custom docker image Dockerfile:
FROM mongo:2.6
MAINTAINER Living Digital Way
COPY ./clients.init.json .
COPY ./users.init.json .
COPY ./import.sh .
RUN ["chmod", "+x", "./import.sh"] # -> only required, if import.sh is not executable
CMD ["./import.sh"]
and import.sh:
mongoimport --host mongo --db lvdb --collection clients --type json --file ./clients.init.json --jsonArray --upsert --upsertFields client_id
mongoimport --host mongo --db lvdb --collection users --type json --file ./users.init.json --jsonArray --upsert --upsertFields username
You need to figure out where the connection is being dropped.
Start with inspecting the mongo container to ensure the public IP is correct.
docker inspect mongo_container | egrep "IPAddress[^,]+"
This will print the IP address - ensure this is correct.
Stand up the Container as a daemon (if not already)
docker run -it -d mongo
Get container ID
docker ps -a
Log into container
docker exec -it mongo_container_id bash
Install curl
apt-get update
apt-get install -y curl
Ping MongoDB Endpoint
curl localhost:27017
It should print out:
It looks like you are trying to access MongoDB over HTTP on the native driver port.
Press Ctrl+D to exit the container.
Run the same commands while the container is still up, but instead of localhost use the IP from this command:
docker inspect mongo_container | egrep "IPAddress[^,]+"
curl 172.17.4.124:27017
You should see the same output:
It looks like you are trying to access MongoDB over HTTP on the native driver port.
If not, then the IP, the port, the port mapping, the server, or the container is not set up correctly.
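If all of those checks pass and the failure only happens at boot, the seed container is most likely starting before mongod accepts connections (which matches the "Connection refused" in the log and the fact that running it by hand later works). One workaround, a sketch rather than part of the original answer, is to make import.sh wait for the server:

#!/bin/bash
# wait up to ~60 seconds for mongod to answer a ping before importing
for i in $(seq 1 30); do
    mongo --host mongo --quiet --eval 'db.adminCommand({ ping: 1 })' && break
    sleep 2
done
mongoimport --host mongo --db lvdb --collection clients --type json --file ./clients.init.json --jsonArray --upsert --upsertFields client_id
mongoimport --host mongo --db lvdb --collection users --type json --file ./users.init.json --jsonArray --upsert --upsertFields username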
To start the container, I am typing the following command:
sudo docker run -i -t -p 28000:27017 mongo:latest /usr/bin/mongod --smallfiles
But I want to open the shell in this container to type the mongo commands.
What command should I run to do the same?
You can run the interactive mongo shell by running the following command:
docker run -it -p 28000:27017 --name mongoContainer mongo:latest mongo
Otherwise, if your container is already running, you can use the exec command:
docker exec -it mongoContainer mongo
This is something I struggled with too, but I found a solution:
docker pull mongo
docker run --name CONTAINERNAME --restart=always -d -p 27017:27017 mongo mongod --auth
sudo docker exec -i -t CONTAINERNAME bash
mongo
use admin
db.createUser({user:"user", pwd:"password", roles:[{role:"root", db:"admin"}]})
exit && exit
Now you have a running Docker container with everything you need. If you want to connect from any client as the admin user, just run this:
mongo -u "user" -p "password" HOSTIP --authenticationDatabase "admin"
Extending and updating @vladzam's answer: if your container is already running in Docker, you can use the exec command with mongosh and login/password options like this:
docker exec -it database-dev mongosh -u "myLogin" -p "myPass"
Download the latest MongoDB Docker image from Docker Hub
sudo docker pull mongo
Now set up MongoDB container
docker run --name containername mongo
Interact with the database through the bash shell client
docker exec -it containername bash
Launch the MongoDB shell client
mongosh  # now it is mongosh to access the shell
It depends which version of MongoDB you're running.
Please see the differences here: The MongoDB Shell versus the Legacy mongo Shell.
For example, with Mongo 3 the executable was mongo:
$ docker run --rm mongo:3 which mongo mongosh mongod
/usr/bin/mongo
/usr/bin/mongod
With Mongo 5 both mongo and mongosh are present:
$ docker run --rm mongo:5 which mongo mongosh mongod
/usr/bin/mongo
/usr/bin/mongosh
/usr/bin/mongod
With Mongo 6 you can only use the newest mongosh:
$ docker run --rm mongo:6 which mongo mongosh mongod
/usr/bin/mongosh
/usr/bin/mongod
Now if you want to try it, run a MongoDB instance:
$ docker run -d -p 29000:27017 --name mongodb-6 mongo:6
Then connect a shell:
$ docker exec -it mongodb-6 mongosh
You'll get something like:
Current Mongosh Log ID: 632456e42dbc88aa0bfe612f
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.5.4
Using MongoDB: 6.0.1
Using Mongosh: 1.5.4
...
You can also pass options like:
$ docker exec -it mongodb-6 mongosh --version
docker exec -it mongoContainer mongosh
I tried mongo, mongod, and mongodb but they failed. Then I changed it to mongosh and it works!
Assuming you have mongo installed on your host computer, which was the case for me when I asked this question years ago, this is an alternate way I tried: open a new terminal
mongo 127.0.0.1:28000
Your mongo shell starts in this terminal now.