mongodb replicaset auth not working - mongodb

I have a problem with replica sets.
After I add the keyFile path to mongodb.conf I can't connect. This is my mongodb.conf:
logpath=/path/to/log
logappend=true
replSet = rsname
fork = true
keyFile = /path/to/key
And this is what is shown on the command line:
XXXX#XXXX:/etc$ sudo service mongodb restart
stop: Unknown instance:
mongodb start/running, process 10540
XXXX#XXXX:/etc$ mongo
MongoDB shell version: 2.4.6
connecting to: test
Mon Sep 30 18:44:20.984 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
XXXX#XXXX:/etc$
If I comment out the keyFile line in mongodb.conf it works fine.

I solved the problem.
It was related to the key file permissions. I fixed the permissions and ownership and it works like a charm:
As a root user I did:
$ chmod 700 keyfile
$ chown mongodb:mongodb keyfile
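A minimal sketch of generating a keyfile with suitable permissions, assuming the same file name and user as above (mongod refuses to start if the keyfile is readable by group or others, so 600 is a safe choice):
$ openssl rand -base64 741 > keyfile
$ chmod 600 keyfile
$ chown mongodb:mongodb keyfile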

If authentication were the problem you would get a different message (and you would still be able to start the shell without an authenticated session; it would just prevent you from running most of the commands).
This one looks more like a socket exception: there is no service listening where you are trying to connect. You can check with netstat whether the process is listening on the ip:port shown in the message. I assume the mongod process has not started, which can happen for several reasons; check the logs for the actual one. One possibility is that the keyfile does not exist at the specified path, or that the appropriate privileges have not been set on it.
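For example, assuming the default port 27017 and the paths from the config above:
$ netstat -plnt | grep 27017
$ ls -l /path/to/key
$ tail -n 50 /path/to/log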
Adding a keyfile automatically turns on the auth option too. This means you have to authenticate as a user, but you can bypass this authentication with the localhost exception. Read the documentation.
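A sketch of using the localhost exception to create the first admin user (the user name and password are placeholders; on 2.4 the shell method is db.addUser, on 2.6+ it is db.createUser):
$ mongo admin
> db.addUser({user: "admin", pwd: "<password>", roles: ["userAdminAnyDatabase"]})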

Related

How to execute 'service mongod start' using additional parameters in CentOS?

I am unable to start the MongoDB service after adding users to the admin db as well as my own db.
I set up MongoDB and started the service using the following command:
service mongod start
From the command prompt, I added a few users with roles such as dbOwner, clusterAdmin, readWrite, and read. Along with that I also changed the configuration in /etc/mongod.conf: in that file, I changed the port number, the IP addresses, the dbPath, and set security.authorization: enabled.
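For reference, a mongod.conf along those lines might look like this (a sketch using the port and dbPath from the command-line attempt further down; the bind address is a placeholder):
net:
  port: 27123
  bindIp: 127.0.0.1
storage:
  dbPath: /path/to/db
security:
  authorization: enabled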
Then I restarted the mongod service using the following command.
service mongod restart
After running this command, the mongod service stopped successfully, but it failed to start with only a 'FAILED' message.
I tried executing the following command:
mongod --port 27123 --dbpath /path/to/db --auth
It works.
Question: How to execute 'service mongod start' using additional parameters in CentOS?
MongoDB: 3.4
OS: CentOS 7
I found a solution:
mongod --config /etc/mongod.conf
Referred: https://docs.mongodb.com/manual/reference/configuration-options/#use-the-configuration-file
It starts a child process, and I can also stop the mongod service using the service mongod stop command.
But I don't know whether this is the correct approach or not.
I can't say exactly where the script used by the 'service' command lives on CentOS 7, but on Ubuntu 18.04 the mongod service script file is in
/lib/systemd/system/mongod.service
There you can change the user who executes the process and add any parameters you want, like --auth.
That said, if you ever executed mongod as root, some of the files under the db data path will be owned by root, making the database fail to start as another user. The fix I found for that is to manually chown to mongodb:mongodb (or whatever user you want to use) all the files inside the data directory that are owned by root.
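As a sketch, the relevant part of the unit file could be edited like this (the paths are assumptions; on CentOS 7 a native unit, when present, usually lives under /usr/lib/systemd/system/mongod.service, while older RPMs ship an init script in /etc/init.d/mongod instead):
[Service]
User=mongod
ExecStart=/usr/bin/mongod --auth --config /etc/mongod.conf
After editing, reload systemd and restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart mongod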
Hope this helps.
mongod.service file from the MongoDB GitHub repository

Cannot start mongodb back up stop: Unknown instance:

So I am trying to restart mongodb 3.2 and enable authorization, so I edited the /etc/mongod.conf file and added
security:
authorization:enabled
Then I saved the file and typed in
sudo service mongod restart
which showed that it restarted correctly, but when I looked at the running processes, mongod was not one of them.
And now I can't restart it at all.
Also, there was already a database with information in mongodb before I enabled authorization. I'm not sure if that is important to know.
I checked out the solutions here: Stop: Unknown instance mongodb (Ubuntu)
but I don't have a "fork = true" statement anywhere in /etc/mongod.conf.
Okay, so strangely enough, I did the following:
1) Went back to the /etc/mongod.conf file and commented out the authorization configuration.
2) Typed in sudo service mongod start, which gave me back "the job is already running".
3) Checked the running processes and mongodb was now running properly! I even accessed it through the mongo shell.
4) Then I stopped the service with sudo service mongod stop.
5) I went back and added the following exact syntax:
security
authorization: enabled
6) Saved the file and ran sudo service mongod start
and what do you know, it's working...
Can someone explain? That would be helpful.
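Not a definitive answer, but one likely explanation (an assumption, since the exact original file isn't reproduced here): mongod's config file is YAML, so the security block needs the child key indented and a space after the colon. A line like authorization:enabled, or a security block without proper nesting, is rejected at startup, so the service appears to start and then immediately exits. The accepted form is:
security:
  authorization: enabled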

"Failed to unlink socket file" error in MongoDB 3.0

I am new to MongoDB. I am trying to install MongoDB 3.0 on Ubuntu 13.0 LTS, which is a VM on a Windows 7 host. I have installed MongoDB successfully (packages etc.), but when I execute the command sudo service mongod start, I get the following error in the /var/log/mongodb/mongod.log log file. Can anyone help me understand this error? There is nothing on the internet related to this.
2015-04-23T00:12:00.876-0400 I CONTROL ***** SERVER RESTARTED *****
2015-04-23T00:12:00.931-0400 E NETWORK [initandlisten] Failed to unlink socket file /tmp/mongodb-27017.sock errno:1 Operation not permitted
2015-04-23T00:12:00.931-0400 I - [initandlisten] Fatal Assertion 28578
2015-04-23T00:12:00.931-0400 I - [initandlisten]
I have fixed this issue myself by deleting the mongodb-27017.sock file. I ran the service after deleting this file, and it worked fine. However, I am still not sure of the root cause of the issue. The output of the command ls -lat /tmp/mongodb-27017.sock is now
srwx------ 1 mongodb nogroup 0 Apr 23 06:24 /tmp/mongodb-27017.sock
As an alternative to the answer provided by KurioZ7, you can simply set the ownership of the .sock file to the current user:
sudo chown `whoami` /tmp/mongodb-27017.sock
This does the trick for me if I want to run mongod without sudo. If I delete the file as in KurioZ7's answer, I will simply get the same error the next time I restart my machine.
This issue occurs when you use the command
mongod
before using the command
sudo service mongod start
To fix the issue, either set appropriate permissions on the file
/tmp/mongodb-27017.sock
or remove the file
/tmp/mongodb-27017.sock
and then run
sudo service mongod start && mongod
The most likely cause of this is that the mongod process was at some point started by the root user, so the socket file (/tmp/mongodb-27017.sock) was owned by the root user. The mongod process usually runs under its own dedicated user, and that user did not have permission to delete the file.
The solution, as you already found out, was to delete it. MongoDB was then able to recreate it with the correct permissions. This should persist across reboots, as long as MongoDB is started via the init scripts, or under the correct user account.
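A quick way to confirm this diagnosis (the path comes straight from the error message):
$ ls -l /tmp/mongodb-27017.sock
If the owner shown is root, remove the file and start mongod through the init script as described above.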
$ sudo mongod
It solved the problem for me.
Change the ownership of the mongodb-27017.sock file in the /tmp directory and start mongod again.
cd /tmp
sudo chown mongodb:mongodb mongodb-27017.sock
sudo systemctl start mongod
For UNIX-based operating systems, as an alternative to the answer provided by Bastronaut, you could also have the .sock file saved to a folder over which mongod has full user rights (matching the way you are running mongod); that way mongod will also be able to remove the .sock file on shutdown. The default folder the .sock file is saved to is /tmp. To specify another folder, use a custom mongodb configuration file, for instance mongodb.conf, and add the following to it:
net:
  unixDomainSocket:
    pathPrefix: "anotherFolder"
After which you can run mongod with the command:
$ mongod --config /path/to/mongodb.conf
You can read the documentation on: https://docs.mongodb.org/manual/reference/configuration-options/#net.unixDomainSocket.pathPrefix
Manually restarting the mongod service after the reboot fixed the problem.
The long-term solution was to use a static host name instead of an IP address in the 'net' part of the mongod.conf file (I suspect the problem is that the IP address has not yet been assigned to the server when the mongod service starts).
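A sketch of what that 'net' section could look like (the host name is a placeholder; mongod resolves it at startup):
net:
  port: 27017
  bindIp: my-static-hostname.internal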
If you are having this problem using docker, refer to this question:
MongoDB docker container “Failed to unlink socket file”

Amazon Linux AMI with MongoDB installed - service FAILED to start

I have mongodb installed on a new Amazon Linux AMI after following the guide here. When running "service mongod start", though, I just get a Starting mongod FAILED message. Nothing else. The log file is also blank.
service mongod stop also yields FAILED.
service mongod status yields "mongod is stopped"
Any thoughts or next steps?
I faced the same problem with my master mongodb instance. I have a replica set of 3 instances on EC2, and after a reboot that we needed, mongod simply failed to start.
The first thing I did was change the ownership of the log file that is set in mongod.conf:
sudo chown mongod:mongod <LOG FILE>
After this I ran just sudo mongod, and that logs the reason the service won't start to the log file.
Mine turned out to be ownership issues with the WiredTiger conf files in the dbPath directory (they were owned by root instead of the mongod user).
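A sketch of how to spot and fix those files (the dbPath /var/lib/mongo is an assumption here; use the dbPath from your mongod.conf):
$ find /var/lib/mongo ! -user mongod -ls
$ sudo chown -R mongod:mongod /var/lib/mongo
$ sudo service mongod start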
What I do in that case is run
sudo killall mongod
and after that it starts OK. My suspicion is that there is already a running instance, and that's why it fails to start.

Issue with persistent mongoDB data on EC2

I'm trying to store data in a MongoDB database on Amazon EC2. I'm using starcluster to configure and start the EC2 instance. I have an EBS volume mounted at /root/data. I installed MongoDB following the instructions here. When I log in to the EC2 instance I am able to type mongo, which brings me to the mongo shell with the test database. I then added some data to a database, let's say database1, with various collections in it. When I exit the EC2 instance and terminate it using starcluster terminate mycluster, and then create a new, different instance, the database1 data is no longer shown in the mongo shell.
I have tried changing the dbpath in the /etc/mongodb.conf file to /root/data/mongodb, which is on the EBS volume, and then stopping and starting the mongodb service using sudo service mongodb stop and sudo service mongodb start. When I then try mongo again I receive
MongoDB shell version: 2.2.2
connecting to: test
Sat Jan 19 21:27:42 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
An additional issue is that whenever I terminate the EC2 instance, any changes I made to the config file disappear.
So my basic question is: how do I change where MongoDB stores its data on EC2 so that the data remains when I terminate one EC2 instance and then start another?
Edit:
In response to the first answer:
The directory does exist
I changed the owner to mongodb
I then issued the command sudo service mongodb stop
Checked to see if the port was released using netstat -anp | grep 27017. There was no output.
Restarted mongodb using sudo service mongodb start
Checked for port 27017 again and received no output.
Tried to connect to the mongo shell and received the same error message.
Changed the mongodb.conf back to the original settings, restarted mongodb as in the above steps, and tried to connect again. Same error.
The EBS volume is configured in the starcluster config to be reattached on each startup.
For the "connect failed" after you change /etc/mongodb.conf problem, you can check the log file specified in the /etc/mongodb.conf (probably at /var/log/mongodb/mongodb.log):
Check that the directory specified by dbpath exists.
Make sure it is writable by the "mongodb" user. Perhaps it's best to chown to mongodb.
Make sure mongod actually released the 27017 port before starting it using: netstat -anp | grep 27017
Wait a couple seconds for mongod to restart before launching mongo.
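A command sketch of those checks (the dbpath /root/data/mongodb comes from the question; the log path is an assumption based on the default Ubuntu package layout):
$ ls -ld /root/data/mongodb
$ sudo chown -R mongodb:mongodb /root/data/mongodb
$ netstat -anp | grep 27017
$ sudo service mongodb start
$ tail -n 50 /var/log/mongodb/mongodb.log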
It's not clear from your question whether you are using Starcluster EBS volumes for persistent storage. Note that ordinary EBS volumes do not automatically persist and reattach when you terminate an instance and start another; you would need to attach and mount them manually.
Once you get that working you'll probably want to create a custom Starcluster AMI with mongo properly installed and /etc/mongodb.conf appropriately modified.