MongoDB Version 5 Access Control With Two Read Replicas

I have three Ubuntu 20.04 servers:
db-master
db-node1
db-node2
I would like to know how to enable access control.
The servers work fine until the following configuration is added to /etc/mongod.conf. Thanks in advance:
$ mongo --version
MongoDB shell version v5.0.1
$ sudo vim /etc/mongod.conf
security:
  keyFile: "/home/ubuntu/security.keyFile"
  authorization: enabled
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: active (running)
$ sudo systemctl restart mongod
ubuntu@db-node2:~$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:

#operationProfiling:

replication:
  replSetName: "rs0"

#sharding:

## Enterprise-Only Options:

#auditLog:

security:
  keyFile: "/home/ubuntu/security.keyFile"
  authorization: enabled

#snmp:

Solution
$ sudo chown mongodb [PATH TO KEY FILE]
$ sudo chgrp root [PATH TO KEY FILE]
$ sudo chmod 400 [PATH TO KEY FILE]
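The keyfile must be owned and readable only by the account mongod runs as (user mongodb on Ubuntu). After fixing ownership and permissions, a restart should succeed; a quick check, assuming the keyfile path from the question:
$ sudo ls -l /home/ubuntu/security.keyFile   # expect something like: -r-------- 1 mongodb root
$ sudo systemctl restart mongod
$ sudo systemctl status mongod               # should now report: Active: active (running)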

Related

Error when configuring TLS/SSL for MongoDB

I have an existing MongoDB instance on my server. I wanted to enable SSL for it. I edited /etc/mongod.conf as follows, adding the TLS options:
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/privatekey.pem

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
security:
  authorization: enabled

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options:

#auditLog:

#snmp:
Whenever I try to restart MongoDB by running sudo service mongod restart I face the following error:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-12-24 04:32:58 IST; 2s ago
Docs: https://docs.mongodb.org/manual
Process: 72902 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
Main PID: 72902 (code=exited, status=1/FAILURE)
Dec 24 04:32:58 tcp-server-1 systemd[1]: Started MongoDB Database Server.
Dec 24 04:32:58 tcp-server-1 systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE
Dec 24 04:32:58 tcp-server-1 systemd[1]: mongod.service: Failed with result 'exit-code'.
But if I comment out the TLS options, it works fine. What am I doing wrong?
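One way to surface the actual startup error is to read the tail of the mongod log (the path comes from the config above). A common culprit is the mongod user being unable to read the PEM file, so checking its ownership and permissions is a reasonable first step; a sketch, not a confirmed diagnosis:
sudo tail -n 50 /var/log/mongodb/mongod.log        # the real error is usually logged here
ls -l /path/to/privatekey.pem                      # mongod (user mongodb on Ubuntu) must be able to read it
sudo chown mongodb:mongodb /path/to/privatekey.pem
sudo chmod 400 /path/to/privatekey.pem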

Mongo auth fails to log in when using mongod.conf

So this is the weirdest thing.
I have two CentOS 7 servers running Mongo. I now wanted to enforce authentication, so I added security.authorization: enabled to the mongod.conf file.
I already have a user on database "buzzztv".
So when I ran mongod --conf /etc/mongod.conf on the first server everything went fine.
Then I did the exact same thing on the second server and whenever I try to connect with one of the users I get the following error:
connecting to: mongodb://127.0.0.1:27017/?authSource=buzzztv&compressors=disabled&gssapiServiceName=mongodb
2020-02-20T13:02:35.166+0000 E QUERY [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-02-20T13:02:35.168+0000 F - [main] exception: connect failed
2020-02-20T13:02:35.168+0000 E - [main] exiting with code 1
Now if I run mongod --fork --logpath /var/log/mongodb/mongod.log --auth, the login works perfectly fine.
So obviously I could just run this command, but I want to use mongod.conf.
Here is my mongod.conf file; I checked, and it is a perfect copy of the file from the server on which it does work.
Any ideas?
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0 # 127.0.0.1 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
So after several painful hours of looking into it, I needed to change the storage section:
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
Apparently something was wrong with my /var/lib/mongo, so I backed up the data and created a new folder, /var/lib/mongodb.
Then I edited the mongod.conf file to:
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
Then it all worked fine. Now I'll just re-create the users, re-insert all the data, and I'm good to go.
Hope this saves someone the hours I've lost.
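One possible explanation for the manual run succeeding, offered as an assumption rather than a confirmed cause: mongod --fork --logpath ... --auth was started without --dbpath, so it would have fallen back to the default /data/db rather than /var/lib/mongo, i.e. a different data directory with its own users. When relocating dbPath on CentOS, it is also worth checking ownership and, with SELinux enforcing, the file context on the new directory; a minimal sketch:
sudo mkdir -p /var/lib/mongodb
sudo chown -R mongod:mongod /var/lib/mongodb
# With SELinux enforcing, the new path may also need the mongod context
# (mongod_var_lib_t is the type used by the RHEL/CentOS policy):
sudo semanage fcontext -a -t mongod_var_lib_t '/var/lib/mongodb(/.*)?'
sudo restorecon -R -v /var/lib/mongodb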

Mongod in recovering after enabling security in mongod.conf

I created 2 shards, each a 3-node replica set.
After enabling security in the mongod.conf file, every node was in the RECOVERING state. But if we disable/comment out security in the mongod.conf file, the nodes are normal.
All nodes were running on AWS.
OS: CentOS 7
Mongodb Version: 3.6.2
Background: Before enabling security, I created a user as below.
db.createUser(
  { user: "cxiroot", pwd: "root", roles: [ { role: "root", db: "admin" } ] },
  { w: "majority", wtimeout: 5000 }
);
I am able to authenticate with db.auth("cxiroot","root").
After enabling security and restarting the service, all nodes were in the RECOVERING state.
Basically, I am trying to do db.shutdownServer() with security enabled.
Log reporting:
2019-10-17T11:40:23.138-0400 I REPL_HB [replexec-1] Error in heartbeat (requestId: 1934) to 109.99.16.36:27018, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.112:27018", fromId: 1, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
2019-10-17T11:40:23.150-0400 I ACCESS [conn2] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.137:27018", fromId: 2, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/data/logs/mongod.log
storage:
  dbPath: /mongodb/data/db
  journal:
    enabled: true
processManagement:
  fork: true # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27018
  bindIp: x.x.x.x
security:
  authorization: 'enabled'
replication:
  replSetName: shard1rs
sharding:
  clusterRole: shardsvr
What's causing this problem?
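The Unauthorized ... replSetHeartbeat errors in the log indicate that, once authorization is enabled, the members can no longer authenticate to each other: replica set members need internal authentication (typically a shared keyfile), which the answer below sets up in detail. A minimal sketch of the required security block (the keyfile path here is hypothetical):
security:
  keyFile: /mongodb/data/mykeyfile   # hypothetical path; the same shared secret on every member
  authorization: enabled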
Edit 2019-10-22:
OP asked to change ports to reflect MongoDB recommendations. It took a bit of digging to find any recommendations from MongoDB. For all modern versions (i.e., 3.6 or later) there is no specific port recommended. Going back to the 3.0 docs, however, the default port for config servers is documented as 27019 (https://docs.mongodb.com/v3.0/tutorial/deploy-shard-cluster/). Looking at Ops Manager default deployments, we see shard replica sets default to port 27018.
I assume these ports are selected to help protect developers who are writing connection strings from accidentally connecting to a replica set directly and bypassing the MongoDB router (mongos). OK, so below I have changed the ports accordingly...
Here are instructions to set up 10 hosts to run a 2-shard sharded cluster on AWS. These instructions describe hardware setup only; they do not cover how to select a proper shard key, and no data is loaded. The breakdown of servers is as follows...
1 host for MONGOS
3 hosts for shard config server replica set - each host has a MONGOD installed.
3 hosts for shard0 replica set - each host has a MONGOD installed.
3 hosts for shard1 replica set - each host has a MONGOD installed.
This setup assumes a keyfile will be used for internal authentication. The keyfile is a shared secret stored in a file on all 10 hosts. When using keyfile authentication treat the keyfile like a password. It should be protected.
Generate a keyfile:
openssl rand -base64 741 > mykeyfile
For the sake of this tutorial, assume the keyfile generated is ...
bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p
I generated this using the command above. It is random and only used for this tutorial.
Install MongoDB:
This example uses MongoDB Enterprise, which requires a license purchased from MongoDB. To use the Community edition, change the repo definition.
This example assumes AWS hosts are AMI2 Amazon Linux.
Run this command on all 10 hosts.
echo '[mongodb-enterprise]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.com/yum/amazon/2/mongodb-enterprise/4.0/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc' | sudo tee /etc/yum.repos.d/mongodb-enterprise.repo
sudo yum -y install mongodb-enterprise
Distribute the keyfile to all hosts:
On all 10 AWS hosts issue the following...
echo "bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p" | sudo tee /var/run/mongodb/mykeyfile
sudo chown mongod.mongod /var/run/mongodb/mykeyfile
sudo chmod 400 /var/run/mongodb/mykeyfile
Setup the Config Servers:
Select 3 AWS instances as the config servers. On these three hosts run the following...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27019
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: configrs
sharding:
clusterRole: configsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
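Optionally, verify that each config server came up before proceeding (a quick sanity check; the port is the one set in the config above):
mongo --port 27019 --eval 'db.runCommand({ ping: 1 })'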
On the final config server, log into the mongo shell...
mongo
and initiate the replica set. This example uses host names from my AWS instances; change the host names to match yours. Notice the name of the replica set is configrs.
rs.initiate(
  {
    _id: "configrs",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-27-98.us-west-2.compute.internal:27019" },
      { _id: 1, host: "ip-172-31-17-202.us-west-2.compute.internal:27019" },
      { _id: 2, host: "ip-172-31-19-63.us-west-2.compute.internal:27019" }
    ]
  }
)
Add credentials to allow root-level access for administration. If the prompt says 'SECONDARY', wait about 1 minute; test by issuing the command use admin. Continue to wait until it says 'PRIMARY'. If it never says 'PRIMARY', you have problems and cannot proceed.
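Rather than watching the prompt, you can poll the member states directly from the shell (a small convenience using standard replica set helpers):
rs.status().members.map(function (m) { return m.name + " " + m.stateStr })
// or, for just this node:
db.isMaster().ismaster   // true once this node is PRIMARY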
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Configure Shard0:
Identify another 3 hosts for shard0.
Run the following on all 3 hosts...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard0
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the final host, log into the mongo shell...
mongo
Initiate the replica set. Notice replica set name shard0. Replace host names with yours...
rs.initiate(
  {
    _id: "shard0",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-21-228.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-221.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-17-145.us-west-2.compute.internal:27018" }
    ]
  }
)
... and create root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Configure shard1:
Select 3 remaining hosts for shard1 and apply these settings to all 3...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard1
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the last of these 3 hosts, start the mongo shell ...
mongo
... and initialize the replica set. Notice replica set name shard1. Replace host names with yours...
rs.initiate(
  {
    _id: "shard1",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-30-65.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-88.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-23-140.us-west-2.compute.internal:27018" }
    ]
  }
)
On the last of the 3, create a root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Setup MONGOS:
On the final host, reserved for the MONGOS router, create a MONGOS config file. Change the host names in the configDB reference to match your implementation.
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
processManagement:
fork: true
pidFilePath: /var/run/mongodb/mongos.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
bindIp: 0.0.0.0
port: 27017
sharding:
configDB: configrs/ip-172-31-27-98.us-west-2.compute.internal:27019,ip-172-31-17-202.us-west-2.compute.internal:27019,ip-172-31-19-63.us-west-2.compute.internal:27019
security:
keyFile: /var/run/mongodb/mykeyfile
" | sudo tee /etc/mongos.conf
Create a systemd unit to start the MONGOS as the user 'mongod'.
echo '[Unit]
Description=MongoDB Database Server Router
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongos.conf"
EnvironmentFile=-/etc/sysconfig/mongos
ExecStart=/usr/bin/mongos $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongos.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
' | sudo tee /usr/lib/systemd/system/mongos.service
Reload the systemctl daemon and start the MONGOS.
sudo systemctl daemon-reload
sudo systemctl start mongos
If you prefer to start the MONGOS manually instead of using the systemd scripts, use the following command. This is optional.
sudo -u mongod mongos -f /etc/mongos.conf
On the host having the MONGOS log into the mongo shell and authenticate...
mongo
use admin
db.auth("barry", "mypassword")
Initialize the sharding. Only one host per shard needs to be named; Mongo will discover all the other hosts in the shard. Change host names to match yours...
sh.addShard("shard0/ip-172-31-21-228.us-west-2.compute.internal:27018")
sh.addShard("shard1/ip-172-31-30-65.us-west-2.compute.internal:27018")
View the status of the sharding...
sh.status()
At this point sharding is prepared. No data is on these hosts, no databases or collections have been told to shard, and no shard key has been established. These instructions merely set up the hardware using keyfile internal authentication. See the MongoDB instructions for sharding databases.
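For illustration only (the database, collection, and shard key below are hypothetical; choosing a real shard key deserves its own analysis), the next steps would look something like:
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycollection", { userId: 1 })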
Conclusion:
If we utilize keyfile internal authentication, the entire cluster requires user authorization to perform tasks. System administrators can choose to implement SCRAM-SHA username/password authentication for users, x.509 certificate-based authentication, or others. If your cluster was not using any security before, applying keyfile internal authentication may come as a surprise, since client authentication is now required. My testing showed that applying authorization: 'enabled' while keyFile was defined caused no change in behavior. Example:
security:
  keyFile: /var/run/mongodb/mykeyfile
... behaved the same as ...
security:
  keyFile: /var/run/mongodb/mykeyfile
  authorization: 'enabled'
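Once the keyfile is in place, clients must therefore authenticate. For example, connecting through the MONGOS with the administrative user created earlier would look something like this (the host name is a placeholder):
mongo --host <mongos-host> --port 27017 -u barry -p mypassword --authenticationDatabase admin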

MongoDB version 3.6.2 can't restart after authentication is enabled

I have installed MongoDB 3.6.2 on a server with Ubuntu Server 16.04.
After installation all works fine. Now I need to enable auth on MongoDB. I have seen the guide and set securety: authorization: enabled in my mongod.conf; this is my conf file:
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

securety:
  authorization: enabled

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options:

#auditLog:

#snmp:
Now I'm trying to restart MongoDB with sudo systemctl restart mongod, but when I run the status command the MongoDB status is failed and I receive this error:
mongod.service - High-performance, schema-free document-oriented database
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: e
Active: failed (Result: exit-code) since gio 2018-01-25 13:02:23 CET; 2s ago
Docs: https://docs.mongodb.org/manual
Process: 1409 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited
Main PID: 1409 (code=exited, status=2
If I comment out the security section, the status goes back to active.
Use
security:
  authorization: enabled
instead of
securety:
  authorization: enabled
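After correcting the section name, a restart should confirm the fix (standard systemd commands, assuming the stock unit name):
sudo systemctl restart mongod
sudo systemctl status mongod   # should now report: Active: active (running)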

mongo wiredTiger db service start

I installed MongoDB 3.0.2 on Ubuntu 14.04.2 with wiredTiger configured as the storage engine.
I renamed the /etc/mongodbConfig.conf file to /etc/mongod.conf.
However,
sudo service mongod start
doesn't start MongoDB. Invoking the init script directly gives this error:
Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service mongod start
But when I use this, I get the following error:
initctl: Unknown job: mongod
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
My mongod.conf
storage:
  dbPath: "/data/wt"
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
    collectionConfig:
      blockCompressor: snappy
replication:
  oplogSizeMB: 1024
  replSetName: "rs0"
net:
  bindIp: "0.0.0.0"
  port: 27017
systemLog:
  destination: file
  path: "/var/log/mongodb/mongodb.log"