This is my mongod.conf:
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  authorization: enabled
And I'm starting it via systemctl which does this:
/usr/bin/mongod --quiet --auth --config /etc/mongod.conf
Created an admin user like so:
mongo
> use admin
> db.createUser({ user:"user001", pwd:"pwd001", roles:[{role:"root", db:"admin"}]})
I can access mongo on the EC2 instance itself via:
mongo -u user001 -p pwd001 --authenticationDatabase admin
However, when I try to access it from my local machine via:
mongo -u user001 -p pwd001 --authenticationDatabase admin --host 58.17.53.6
I get this error:
MongoDB shell version: 3.0.4
connecting to: 58.17.53.6:27017/test
2017-10-30T19:42:33.302-0700 W NETWORK Failed to connect to 58.17.53.6:27017 after 5000 milliseconds, giving up.
2017-10-30T19:42:33.304-0700 E QUERY Error: couldn't connect to server 58.17.53.6:27017 (58.17.53.6), connection attempt failed
at connect (src/mongo/shell/mongo.js:181:14)
at (connect):1:6 at src/mongo/shell/mongo.js:181
exception: connect failed
Also, my security group for the instance looks like this: [security group screenshot]
Any help would be appreciated!
I still can't prevent anonymous access to MongoDB after doing the steps below.
1- Start mongod (without auth) with this command
mongod --port 27017 --logpath D:\Files\Sessions\log\mongo.log --dbpath D:\Files\Sessions\data\db
2- Create mongod.conf file with this config
systemLog:
  destination: file
  path: "D:/Files/Sessions/log/mongo.log"
storage:
  dbPath: "D:/Files/Sessions/data/db"
net:
  bindIp: 127.0.0.1,localhost
  port: 27017
3- Execute mongod --config "D:\Files\Sessions\mongod.conf"
4- Create admin user with
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: passwordPrompt(), // or cleartext password
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
5- Update mongod.conf by adding this config to it
security:
authorization: enabled
6- Execute mongod --config "D:\Files\Sessions\mongod.conf"
In between the steps, I also executed this command many times:
mongod --port 27017 --logpath D:\Files\Sessions\log\mongo.log --dbpath D:\Files\Sessions\data\db --auth
In the end, I can create the new admin user and authenticate as it, but I can still also connect as an anonymous user without credentials!
Just in case:
mongo version: 4.2, Windows 10
What's wrong?
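For reference, the mongod.conf after step 5 should combine both fragments (paths as in the question):

```yaml
systemLog:
  destination: file
  path: "D:/Files/Sessions/log/mongo.log"
storage:
  dbPath: "D:/Files/Sessions/data/db"
net:
  bindIp: 127.0.0.1,localhost
  port: 27017
security:
  authorization: enabled
```

Note that `authorization: enabled` does not stop clients from *connecting* without credentials; it only rejects commands that require privileges. An "anonymous" shell that can open a connection but cannot read data is therefore expected behavior, not a misconfiguration.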
I have a mongoDB container which runs on an Azure VM, and I'm trying to connect to it from MongoDB Compass.
I have a public IP address for my VM; port 27017 is open on the VM and also in my mongo container.
I have authentication enabled, so to connect to mongo I enter the mongo container and run "mongo -u username -p password --authenticationDatabase admin".
When I try to connect from Compass I get a "connection timed out" error message.
docker container ls
Open ports on the VM
My compass login page
I solved it by changing the configuration file of the mongoDB container.
Step 1: make sure port 27017 is open on the VM.
Step 2: create a mongoDB configuration file as below, name it mongod.conf, and change the bindIp field to your host IP (replace <Host IP> with your host IP).
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /data/db
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,<Host IP>

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
Step 3: copy the mongod.conf file onto your VM and save it wherever you want.
Step 4: Run the command:
docker run -d -v <FolderPathOnTheVM>/mongod.conf:/etc/mongod.conf -p 27017:27017 mongo -f /etc/mongod.conf
Make sure you change <FolderPathOnTheVM> to the path of the mongod.conf file on the VM (the path from step 3).
Run netstat on port 27017 and check whether mongod is listening on localhost or on the public IP. If it's listening only on localhost, change it to your public IP in the mongodb config file and retry. Also, telnet from your local machine to the mongodb IP and port to check whether it's reachable at all.
From the same client, I can connect to my remote mongodb using this string:
mongodb://siteadmin:mypassword@myip:27017
But from the shell, when I run
mongo "mongodb://siteadmin:mypassword@myip:27017"
I get:
DB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20
@(auth):6:1
@(auth):1:2
exception: login failed
root@shadow:~/website-fron
Any idea?
My config is simple:
security:
  authorization: 'enabled'
net:
  port: 27017
  bindIp: 0.0.0.0
I am trying to setup a 3 node MongoDB cluster.
1) Started mongodb in all 3 nodes with the below config file.
net:
  bindIp: 0.0.0.0
  port: 10901
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/data"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE_PATH>"
sharding:
  clusterRole: "configsvr"
replication:
  replSetName: "configReplSet"
2) Created an admin user on one of the config nodes and am able to log in as that user.
mongo --port 10901 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
Now the console prompt says user:PRIMARY>
3) Created the replica set using the below command.
rs.initiate(
  {
    _id: "configReplSet",
    configsvr: true,
    members: [
      { _id: 0, host: "<IP1>:10901" },
      { _id: 1, host: "<IP2>:10901" },
      { _id: 2, host: "<IP3>:10901" }
    ]
  }
)
4) Executed rs.status() and got the proper output.
5) Started Mongo shards with the below config in all 3 instances.
net:
  bindIp: 0.0.0.0
  port: 10903
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/shard_data/"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE>"
sharding:
  clusterRole: "shardsvr"
replication:
  replSetName: "shardReplSet"
6) Created an admin user on one of the shard nodes as well and am able to log in with it.
mongo --port 10903 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
7) Created the shard replica set using the below command.
rs.initiate(
  {
    _id: "shardReplSet",
    members: [
      { _id: 0, host: "<IP1>:10903" },
      { _id: 1, host: "<IP2>:10903" },
      { _id: 2, host: "<IP3>:10903" }
    ]
  }
)
8) Started the router with the below config
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: <LOG_PATH_FOR_MONGOS>
# network interfaces
net:
  port: 10902
security:
  keyFile: <KEY_FILE>
processManagement:
  fork: true
sharding:
  configDB: configReplSet/<IP1>:10901,<IP2>:10901,<IP3>:10901
9) Connected to mongos using mongo
mongo --port 10902 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
Now I see the below in my console.
MongoDB server version: 3.4.2
mongos>
10) Now added each shard in the mongos interface. Since I have configured a replica set:
sh.addShard("shardReplSet/<IP1>:10903,<IP2>:10903,<IP3>:10903")
Issues :-
1) Unable to connect to MongoDB from a remote machine.
I am able to connect to the other nodes from within these 3 nodes.
From Node1,
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP1>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP2>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP3>
All three of the above connections work from Node1, Node2, and Node3.
But if I try to connect to these instances from my localhost, I get a timeout error.
I am able to ssh to these servers.
2) I am running the config server on port 10901, the shard on port 10903, and the router on port 10902, with config, shard, and router running on each node. Is this OK?
3) The DB paths for config and shard are different, and I had to create an admin user on each service (config, shard, router). Is this correct? I created a replica set for the config and shard servers, but not for the router. Is this OK?
4) Unable to connect to these instances from a remote MongoChef tool. I use the router port to connect to these instances? Is this correct? If so, do I need to run the router on each node?
5) Do we need to connect to port 10903, 10902, or 10901 to create new databases and new users for the DBs?
6) Is there anything else important to be added here?
Thanks
I've built a docker container running a mongodb instance that should be exposed to the host.
However, when I try to connect from the host to the mongodb container, the connection is denied.
This is my Dockerfile:
FROM mongo:latest
RUN mkdir -p /var/lib/mongodb && \
touch /var/lib/mongodb/.keep && \
chown -R mongodb:mongodb /var/lib/mongodb
ADD mongodb.conf /etc/mongodb.conf
VOLUME [ "/var/lib/mongodb" ]
EXPOSE 27017
USER mongodb
WORKDIR /var/lib/mongodb
ENTRYPOINT ["/usr/bin/mongod", "--config", "/etc/mongodb.conf"]
CMD ["--quiet"]
And this is the config file for MongoDB (/etc/mongodb.conf), where I bind the IP 0.0.0.0 explicitly, since answers here on SO suggest that binding 127.0.0.1 could be the root cause of my issue (but it isn't):
systemLog:
  destination: file
  path: /var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /var/lib/mongodb
net:
  bindIp: 0.0.0.0
The docker container is running, but a connection from the host is not possible:
host$ docker run -p 27017:27017 -d --name mongodb-test mongodb-image
host$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6ec958034a6f mongodb-image "/usr/bin/mongod --co" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp mongodb-test
Find the IP address:
host$ docker inspect 6ec958034a6f |grep IPA
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAMConfig": null,
"IPAddress": "172.17.0.2",
Try to connect:
host$ mongo 172.17.0.2:27017
MongoDB shell version v3.4.0
connecting to: mongodb://172.17.0.2:27017
2016-12-16T15:53:40.318+0100 W NETWORK [main] Failed to connect to 172.17.0.2:27017 after 5000 milliseconds, giving up.
2016-12-16T15:53:40.318+0100 E QUERY [main] Error: couldn't connect to server 172.17.0.2:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:234:13
#(connect):1:6
exception: connect failed
When I enter the container (docker exec), I can connect to mongo and list the test database successfully.
Use host.docker.internal with the exposed port: host.docker.internal:27017
Using localhost instead of the container IP allows the connection.
Combine it with the exposed port: localhost:27017
I tested the solution as it was stated in the comments, and it works.
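For context: the bridge address 172.17.0.2 is generally not routable from the host on Docker for Mac/Windows, while `-p 27017:27017` publishes the container port on the host's own interfaces, which is why localhost:27017 works where the bridge IP does not. The same setup sketched as a compose file (service and volume names are assumptions):

```yaml
services:
  mongodb-test:
    image: mongodb-image        # the image built from the Dockerfile above
    ports:
      - "27017:27017"           # host localhost:27017 -> container 27017
    volumes:
      - mongodb-data:/var/lib/mongodb
volumes:
  mongodb-data:
```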