MongoDB: Always receiving 127.0.0.1 when initializing primary node - mongodb

No matter how I initialize my primary node, it always gets the name "name" : "127.0.0.1:27017", so any attempt to add a remote node to the replica set fails with this message:
"errmsg" : "Either all host names in a replica set configuration must
be localhost references, or none must be; found 1 out of 2"
Here is my .conf
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: mongodb.primary, 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
How should I properly initialize my primary so that it does not bind to 127.0.0.1 only?
The mongodb.primary hostname above is resolvable from all machines.
Here is the full error:
rs0:PRIMARY> rs.add('mongodb.secondary1:27017')
{
  "operationTime" : Timestamp(1552552019, 1),
  "ok" : 0,
  "errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2",
  "code" : 103,
  "codeName" : "NewReplicaSetConfigurationIncompatible",
  "$clusterTime" : {
    "clusterTime" : Timestamp(1552552019, 1),
    "signature" : {
      "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId" : NumberLong(0)
    }
  }
}

The problem is clearly stated: the first replica set member is bound to localhost.
To bind to all IPs (not recommended without authentication, but fine for testing):
net:
  bindIp: 0.0.0.0
See the MongoDB documentation on net.bindIp for details.
In your case, the MongoDB instances at DO are most likely bound to localhost by default, so you may need to bind them to a different IP address using the net.bindIp configuration option. Be advised that this makes the MongoDB instance accessible to anything that can reach the chosen port, so enabling authentication is recommended whenever an instance is bound to an IP other than localhost.
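Note that binding alone does not rename a member that was already initiated as 127.0.0.1; the replica set configuration stores the member's host name. As a rough sketch (assuming mongodb.primary resolves from every node), you can either re-initiate the set with an explicit host, or rename the existing member before adding the secondary:
// Option 1: initiate with an explicit, non-localhost host name
// (run on a freshly started mongod that is not yet part of a replica set)
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb.primary:27017" }
  ]
})

// Option 2: rename the already-initiated member, then add the secondary
cfg = rs.conf()
cfg.members[0].host = "mongodb.primary:27017"  // replaces the 127.0.0.1 entry
rs.reconfig(cfg, { force: true })              // force may be needed when renaming the only member
rs.add("mongodb.secondary1:27017")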

Related

Data not moving to new shard servers mongodb

I have added a shard server to my MongoDB cluster. After adding the new shard server, I am getting this error on the primary node.
DBException thrown :: caused by ::
CannotImplicitlyCreateCollection{ ns: "config.system.sessions" }:
request doesn't allow collection to be created implicitly
The new shard server did not have any data earlier.
How did I add the shard server?
I created a file /etc/mongod.conf (just like on my other shard servers):
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shardReplicaSet10
storage:
  dbPath: /mnt/mongodb
systemLog:
  traceAllExceptions: true
  path: /mnt/log/mongodb/out.log
  logAppend: true
  logRotate: rename
  destination: file
processManagement:
  fork: true
net:
  bindIp: localhost,172.6.7.5
Then I have another member in this replica set with the same config file; only the bindIp is changed.
I connected to my mongos node and added the replica set:
sh.addShard("shardReplicaSet10/172.6.7.5:27018,172.6.7.6:27018")
I can confirm that I have initiated the replica set using rs.initiate().
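For reference, the initiation step was roughly like this (a sketch; the set name and addresses match the config and the sh.addShard() call above):
// run in the mongo shell on 172.6.7.5:27018, before sh.addShard()
rs.initiate({
  _id: "shardReplicaSet10",
  members: [
    { _id: 0, host: "172.6.7.5:27018" },
    { _id: 1, host: "172.6.7.6:27018" }
  ]
})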
It turns out that whenever we add a new shard replica set, we need to give it an identity manually. I tried the steps given in this answer
https://dba.stackexchange.com/a/227877
and it worked. On the primary of the shard replica set, simply do this:
use admin
and add identity
db.system.version.insert({
  "_id" : "shardIdentity",
  "clusterId" : ObjectId("5f098ac1077eb0e078fd5c1e"),
  "shardName" : "NameOfYourShardReplicaSet",
  "configsvrConnectionString" : "configReplicaSet/ipofconfigserver:27019,ipofotherconfigserver:27019"
})
You can get the cluster ID by running
sh.status()
on any mongos in the cluster.
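If the sh.status() output is hard to scan, the same cluster ID can also be read directly from the config database on a mongos; a quick sketch (output abbreviated):
// on a mongos: the cluster id is stored in config.version
db.getSiblingDB("config").version.findOne()
// -> { "_id" : 1, ..., "clusterId" : ObjectId("5f098ac1077eb0e078fd5c1e") }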

"There are no users authenticated" even though authentication is disabled

I just downloaded the new MongoDB 4.2.1 on Windows, and I just want to use it locally without authentication. I am able to run mongod plain and the server starts fine. I can connect to mongodb://localhost:27017, but when I try to do anything I get the error "there are no users authenticated".
I never had this issue in previous versions, so I'm wondering if 4.2 now has new restrictions that authentication must be enabled or something. Is that the case?
Edit: This is a new fresh install of MongoDB, and I've uninstalled all other versions. I haven't changed the config. All I have done is create the C:/data/db directory.
Edit 2:
Here is my config file:
storage:
  dbPath: C:\Program Files\MongoDB\Server\4.2\data
  journal:
    enabled: true
net:
  port: 27017
  bindIp: 127.0.0.1
Some more information from experimenting: MongoDB Compass gives me the error immediately upon connecting. A Node.js application is able to connect, but gets the error when attempting to write anything.
However, in the mongo shell I am able to connect and make write operations with no issues.
There are no commands being logged, only the initial startup output which all seems normal.
db._adminCommand( {getCmdLineOpts: 1}) output:
{
  "argv" : [
    "C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.exe",
    "--config",
    "C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg",
    "--service"
  ],
  "parsed" : {
    "config" : "C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg",
    "net" : {
      "bindIp" : "127.0.0.1",
      "port" : 27017
    },
    "service" : true,
    "storage" : {
      "dbPath" : "C:\\Program Files\\MongoDB\\Server\\4.2\\data",
      "journal" : {
        "enabled" : true
      }
    },
    "systemLog" : {
      "destination" : "file",
      "logAppend" : true,
      "path" : "C:\\Program Files\\MongoDB\\Server\\4.2\\log\\mongod.log"
    }
  },
  "ok" : 1
}
Well, for some reason it worked when connecting to 127.0.0.1 and not localhost. Never had that before.
I'm giving this as a "response" (instead of a "comment"), just so I can format things more clearly for you.
I happen to be running MongoDB 4.2.0 on a Linux VM, with no authentication ... and no problems.
SUGGESTIONS:
Check /etc/mongod.conf (Windows equivalent), and make sure authorization is COMPLETELY COMMENTED OUT (vs. "authorization: disabled").
Check /var/log/mongodb/mongod.log (Windows equivalent). If you find anything "significant", please copy/paste it into your post.
In "mongo", type db._adminCommand( {getCmdLineOpts: 1}) and ensure your runtime configuration settings match what you expect them to be.
Please keep us posted what you find!
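One more quick check worth mentioning: the connectionStatus command shows which users (if any) the current connection has authenticated as, e.g.:
// run in the mongo shell; with authorization disabled the lists are simply empty
db.runCommand({ connectionStatus: 1 })
// -> { "authInfo" : { "authenticatedUsers" : [ ], "authenticatedUserRoles" : [ ] }, "ok" : 1 }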
Make sure you have installed MongoDB.

How to migrate from MMAPv1 to WiredTiger with minimal downtime without mongodump/mongorestore

Most guides recommend using mongodump/mongorestore, but for large production databases the downtime can be very long.
You can use replication and an additional server for this, or the same server if the load allows.
You need three running MongoDB instances:
The server you want to migrate (keep in mind that WiredTiger is supported since 3.0).
A second MongoDB instance, which can run on an additional server. The database will be temporarily copied to it by replication.
A third MongoDB instance, the arbiter, which doesn't store data and only participates in the election of the primary. The arbiter can run on the additional server on a separate port.
In any case, you need to back up your database first. Running “mongodump” without parameters creates a “./dump” directory with the database dump. You can add the “--gzip” parameter to compress the result.
mongodump --gzip
Just in case, the command to restore:
mongorestore --gzip
It should be run from the directory that contains “./dump”, and the “--gzip” parameter should be added if it was used with “mongodump”.
Begin configuring the additional server. My target system is Linux RedHat without Internet access, so I downloaded and installed MongoDB from RPM packages manually. Add this section to /etc/mongod.conf:
replication:
  oplogSizeMB: 10240
  replSetName: REPLICA
Check that the net section looks like this, to allow access from other servers:
net:
  bindIp: 0.0.0.0
  port: 27017
and run:
service mongod start
Run the third MongoDB instance, the arbiter. It can run on the additional server on a different port. Create a temporary directory for the arbiter's database:
mkdir /tmp/mongo
chmod 777 -R /tmp/mongo
and run:
mongod --dbpath /tmp/mongo --port 27001 --replSet REPLICA \
--fork --logpath /tmp/mongo/db1.log
Now configure the main server. Edit /etc/mongod.conf
replication:
  oplogSizeMB: 10240
  replSetName: REPLICA
and restart MongoDB on the main server:
service mongod restart
Important: after restarting the main server, read operations may be unavailable. I was getting the following error:
{ "ok" : 0, "errmsg" : "node is recovering", "code" : 13436 }
So, as quickly as possible, connect to MongoDB on the main server via the “mongo” console and run the following command to configure replication:
rs.initiate(
  {
    _id: "REPLICA",
    members: [
      { _id: 0, host: "<IP address of main server>:27017", priority: 1.0 },
      { _id: 1, host: "<IP address of additional server>:27017", priority: 0.5 },
      { _id: 2, host: "<IP address of additional server(the arbiter)>:27001", arbiterOnly: true, priority: 0.5 }
    ]
  }
)
After this operation, all actions with MongoDB become available again and data synchronization starts.
I don't recommend running rs.initiate() on the main server without parameters, as most tutorials do, because the name of the main server will then default to the DNS name from /etc/hostname. That is not convenient for me, because I use IP addresses for communication in my projects.
To check the synchronization progress, you can call this from the “mongo” console:
rs.status()
Result example:
{
  "set" : "REPLICA",
  "date" : ISODate("2017-01-19T14:30:34.292Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "members" : [
    {
      "_id" : 0,
      "name" : "<IP address of main server>:27017",
      "health" : 1.0,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 165,
      "optime" : {
        "ts" : Timestamp(6377323060650835, 3),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2017-01-19T14:30:33.000Z"),
      "infoMessage" : "could not find member to sync from",
      "electionTime" : Timestamp(6377322974751490, 1),
      "electionDate" : ISODate("2017-01-19T14:30:13.000Z"),
      "configVersion" : 1,
      "self" : true
    },
    {
      "_id" : 1,
      "name" : "<IP address of additional server>:27017",
      "health" : 1.0,
      "state" : 5,
      "stateStr" : "STARTUP2",
      "uptime" : 30,
      "optime" : {
        "ts" : Timestamp(0, 0),
        "t" : NumberLong(-1)
      },
      "optimeDate" : ISODate("1970-01-01T00:00:00.000Z"),
      "lastHeartbeat" : ISODate("2017-01-19T14:30:33.892Z"),
      "lastHeartbeatRecv" : ISODate("2017-01-19T14:30:34.168Z"),
      "pingMs" : NumberLong(3),
      "syncingTo" : "<IP address of main server>:27017",
      "configVersion" : 1
    },
    {
      "_id" : 2,
      "name" : "<IP address of additional server (the arbiter)>:27001",
      "health" : 1.0,
      "state" : 7,
      "stateStr" : "ARBITER",
      "uptime" : 30,
      "lastHeartbeat" : ISODate("2017-01-19T14:30:33.841Z"),
      "lastHeartbeatRecv" : ISODate("2017-01-19T14:30:30.158Z"),
      "pingMs" : NumberLong(0),
      "configVersion" : 1
    }
  ],
  "ok" : 1.0
}
Once the “stateStr” of the additional server changes from “STARTUP2” to “SECONDARY”, our servers are synchronized.
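A small sketch of how to watch for that from the “mongo” console (member order as in the rs.status() output above):
// poll until the additional server (member index 1) reports SECONDARY
while (rs.status().members[1].stateStr !== "SECONDARY") {
  print("still syncing, state: " + rs.status().members[1].stateStr);
  sleep(5000);  // mongo shell helper; argument is in milliseconds
}
print("additional server is now SECONDARY");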
While we wait for the synchronization to finish, the client applications need to be modified slightly so they can work with all servers in the replica set.
If you use the ConnectionString, you should replace it with something like:
mongodb://<IP address of main server>:27017,<IP address of additional server>:27017,<IP address of additional server (the arbiter)>:27001/?replicaSet=REPLICA
If you use the legacy C++ mongo-cxx-driver, as I do, you should use mongo::DBClientReplicaSet instead of mongo::DBClientConnection and list all three servers in the connection parameters, including the arbiter.
There is a third option: you can simply change the MongoDB server IP in the clients after the PRIMARY/SECONDARY switch, but it's not a very clean approach.
After the synchronization has finished and the additional server's status has settled as SECONDARY, we need to swap PRIMARY and SECONDARY by executing the following commands in the “mongo” console on the main server. This is important, because the commands will not work on the additional server.
cfg = rs.conf()
cfg.members[0].priority = 0.5
cfg.members[1].priority = 1
cfg.members[2].priority = 0.5
rs.reconfig(cfg)
Then check server status by executing:
rs.status()
Stop MongoDB on the main server
service mongod stop
and simply delete the entire contents of the database directory. This is safe, because we have a working copy on the additional server and we made a backup at the beginning. Be careful: MongoDB doesn't create the database directory itself. If you've deleted it, you need not only to recreate it
mkdir /var/lib/mongo
but also to set its owner:
chown -R mongod:mongod /var/lib/mongo
Check that the wiredTiger storage engine is configured in /etc/mongod.conf. Since version 3.2 it is used by default:
storage:
  ...
  engine: wiredTiger
  ...
And run MongoDB:
service mongod start
The main server will automatically receive the replica set configuration from the secondary, and the data will be synced back into WiredTiger storage.
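Once the main server is back up, you can confirm which storage engine it is actually running from the “mongo” console; a quick check (output abbreviated):
// reports the storage engine of the running mongod
db.serverStatus().storageEngine
// -> { "name" : "wiredTiger", ... }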
After the synchronization is finished, switch the PRIMARY back. This operation should be performed on the additional server, because it is the PRIMARY now.
cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 0.5
cfg.members[2].priority = 0.5
rs.reconfig(cfg)
Revert the database clients to the old version or change the ConnectionString back.
Now turn off replication if necessary. Remove the two other members from the main server:
rs.remove("<IP address of additional server>:27017")
rs.remove("<IP address of additional server (the arbiter)>:27001")
Remove the entire “replication” section from /etc/mongod.conf and restart MongoDB:
service mongod restart
After this, we get the following warning when connecting via the “mongo” console:
2017-01-19T12:26:51.948+0300 I STORAGE [initandlisten] ** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset
2017-01-19T12:26:51.948+0300 I STORAGE [initandlisten] ** Restart with --replSet unless you are doing maintenance and no other clients are connected.
2017-01-19T12:26:51.948+0300 I STORAGE [initandlisten] ** The TTL collection monitor will not start because of this.
To get rid of it, you need to remove the “local” database. In its default state this database contains only the “startup_log” collection, so you can do this without fear via the “mongo” console
use local
db.dropDatabase()
and restart MongoDB:
service mongod restart
If you remove the “local” database before removing the “replication” section from /etc/mongod.conf, it is immediately recreated. That is why I could not get away with a single MongoDB restart.
On the additional server, perform the same actions:
remove the “replication” section from /etc/mongod.conf
restart MongoDB
drop the “local” database
restart again
Finally, just stop and remove the arbiter:
pkill -f /tmp/mongo
rm -r /tmp/mongo

ReplicaSetId conflict while adding node MongoDB

When I try to add a new node to my replica set, I get this message:
{
  "ok" : 0,
  "errmsg" : "Our replica set ID of 5890ad86c92c6c88e8573df0 did not match that of 10.0.253.3:27017, which is 5890a6b137e1380d1e697f2a",
  "code" : 103,
  "codeName" : "NewReplicaSetConfigurationIncompatible"
}
I had the same error and could not figure out why... I'm coming back to this post to share the solution in case others end up here.
Simply do not initialize the replica set on both servers:
I have two separate servers, X and Y, without MongoDB; X and Y are IP addresses or domain names.
Install mongodb on both servers
Edit the /etc/mongod.conf configuration file on both servers (e.g. sudo nano /etc/mongod.conf)
[in file] Replace bindIp: 127.0.0.1 with bindIp: 127.0.0.1,X on the X server
[in file] Replace bindIp: 127.0.0.1 with bindIp: 127.0.0.1,Y on the Y server
[in file] Replace #replication: with replication: on both servers
[in file] Add replSetName: "myReplicatName" line under replication: on both servers
Launch mongo with the configuration file on both servers
Only on server X, run mongo and type the following commands
Mongo commands:
rs.initiate({
  _id: "rs0",
  members: [{
    _id: 1,
    host: "X:27017"
  }]
});
rs.add("Y:YPORT");
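If server Y was already initiated by mistake (which is what produces the conflicting replica set ID), one way to reset it, assuming you can discard Y's replication metadata, is to restart Y temporarily without the replSetName setting, drop its local database, then re-enable replication and restart before running rs.add() from X again. A sketch from the mongo shell on Y:
// on server Y, started temporarily WITHOUT the replication/replSetName settings
use local
db.dropDatabase()  // removes the stale replica set configuration
// then re-enable replication in /etc/mongod.conf, restart mongod on Y,
// and run rs.add("Y:YPORT") again from the primary on X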

Kerberos authentication does not work in Mongodb Enterprise 3.2

Environment: Windows 2012 R2, MongoDB Enterprise 3.2.0 (an evaluation version).
I am logged in as muser1.
Why is it looking for a field "pwd"?
How can I fix it?
c:\MongoDB\scripts>mongo.exe --authenticationMechanism=GSSAPI --authenticationDatabase='$external' --username muser1@TEST.MNG
MongoDB shell version: 3.2.0
connecting to: test
2016-01-14T14:03:37.572-0800 E QUERY [thread1] Error: Missing expected field "pwd" :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1395:16
@(auth):6:1
@(auth):1:2
exception: login failed
The user exists:
MongoDB Enterprise > user = db.system.users.findOne({user: "muser1@TEST.MNG"})
{
  "_id" : "$external.muser1@TEST.MNG",
  "user" : "muser1@TEST.MNG",
  "db" : "$external",
  "credentials" : {
    "external" : true
  },
  "roles" : [
    {
      "role" : "readWrite",
      "db" : "test"
    }
  ]
}
Kerberos is configured correctly based on the 3.2 documentation, and DNS is configured correctly.
The MongoDB service is running under a domain account. SPNs exist for both the default service and the named service. I tried having just one of them configured, then the other, with no luck.
c:\MongoDB\scripts>setspn -L m1svr
Registered ServicePrincipalNames for CN=Mongo1,CN=Users,DC=test,DC=mng:
mongodb/m1.test.mng
MongoDB_M1_D1/m1.test.mng
Here is the startup config file:
# Data Node, with minimal oplog and no journal
net:
  port: 27017
systemLog:
  verbosity: '0'  # Debug level from 0-5
  destination: file
  path: C:\MongoDB\logs\m1-d1.log
  logAppend: false
storage:
  dbPath: C:\MongoDB\data\m1\D1
  journal:
    enabled: false
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      statisticsLogDelaySecs: 1
      journalCompressor: snappy
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
#replication:
#  oplogSizeMB: 10
#  replSetName: RSTest
security:
  authorization: enabled
#  sasl:
#    hostName: m1.test.mng
#    serviceName: MongoDB_M1_D1
#  keyFile: F:\config\key1.txt
#  clusterAuthMode: keyFile
setParameter:
  authenticationMechanisms: GSSAPI,SCRAM-SHA-1
I tried to add a pwd field to the user document (just in case), but no luck.
I am out of ideas.
When using Kerberos authentication, you need to specify the fully qualified domain name of the server you are connecting to as part of the connection string. Note that the domain portion also needs to be all caps, e.g.:
mongo.exe --host servername.TEST.MNG --authenticationMechanism=GSSAPI --authenticationDatabase='$external' --username muser1@TEST.MNG
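Equivalently, once connected with --host set to the FQDN, you can authenticate from inside the shell against the $external database; a sketch using the principal from the question:
// authenticate the Kerberos principal stored in $external
db.getSiblingDB("$external").auth({
  mechanism: "GSSAPI",
  user: "muser1@TEST.MNG"
})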