Cannot deploy MongoDB replica set on Windows? - mongodb

I want to deploy a MongoDB replica set with 1 primary and 2 secondaries from config files, as follows.
The first config file, for the primary node:
#primary node
#===============
dbpath = C:\data\rs0\1
directoryperdb = true
bind_ip = 192.168.2.104
port = 27017
logpath = C:\mongodb2.5.3\logs\primary.log
logappend = true
noauth = true
replSet = rs0
rest = true
The second config file, for a secondary node:
#secondary node
#===============
dbpath = C:\data\rs0\2
directoryperdb = true
bind_ip = 192.168.2.104
port = 27018
logpath = C:\mongodb2.5.3\logs\secondary1.log
logappend = true
noauth = true
replSet = rs0
rest = true
And the third config file, for the other secondary node:
#secondary node
#===============
dbpath = C:\data\rs0\3
directoryperdb = true
bind_ip = 192.168.2.104
port = 27019
logpath = C:\mongodb2.5.3\logs\secondary2.log
logappend = true
noauth = true
replSet = rs0
rest = true
But I got this error:
....
2013-11-28T16:26:59.734+0700 [initandlisten] options: { bind_ip: "192.168.2.104", config: "config\mongorep.conf", dbpath: "C:\data\rs0\1", directoryperdb: true, logappend: true, logpath: "C:\mongodb2.5.3\logs\primary.log", noauth: true, port: 27017, replSet: "rs0", rest: true }
2013-11-28T16:26:59.742+0700 [FileAllocator] allocating new datafile C:\data\rs0\1\local\local.ns, filling with zeroes...
2013-11-28T16:26:59.742+0700 [FileAllocator] creating directory C:\data\rs0\1\local\_tmp
2013-11-28T16:26:59.835+0700 [FileAllocator] done allocating datafile C:\data\rs0\1\local\local.ns, size: 16MB, took 0.092 secs
2013-11-28T16:26:59.836+0700 [FileAllocator] allocating new datafile C:\data\rs0\1\local\local.0, filling with zeroes...
2013-11-28T16:27:00.101+0700 [FileAllocator] done allocating datafile C:\data\rs0\1\local\local.0, size: 64MB, took 0.265 secs
2013-11-28T16:27:00.102+0700 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 360ms
2013-11-28T16:27:00.102+0700 [initandlisten] waiting for connections on port 27017
2013-11-28T16:27:00.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2013-11-28T16:27:00.103+0700 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
2013-11-28T16:27:00.104+0700 [websvr] admin web console waiting for connections on port 28017
2013-11-28T16:27:01.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2013-11-28T16:27:02.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2013-11-28T16:27:03.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2013-11-28T16:27:04.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2013-11-28T16:27:05.103+0700 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
And when I followed this guide exactly:
http://docs.mongodb.org/manual/tutorial/deploy-replica-set-for-testing/
I also got the same error
replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
I don't know what else I need to configure to solve this problem. Please help me, thanks a lot.

Run rs.initiate() in the mongo console after adding/changing the value of replSet in the config file.
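For example, a minimal sketch against the three nodes from the question above (the member list is an assumption based on those config files), run from a mongo shell connected to 192.168.2.104:27017:
rs.initiate({
    _id: "rs0",    // must match the replSet value in the config files
    members: [
        { _id: 0, host: "192.168.2.104:27017" },
        { _id: 1, host: "192.168.2.104:27018" },
        { _id: 2, host: "192.168.2.104:27019" }
    ]
})
rs.status()    // verify that all three members appear and reach PRIMARY/SECONDARY state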

You could write both startSet and initiate on one line; that makes sure initiate kicks in as soon as startSet is done.
replicaSet.startSet();replicaSet.initiate()
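This presumably refers to the ReplSetTest helper of the legacy mongo shell (started with mongo --nodb); a minimal sketch, with the set name and node count assumed from the question:
var replicaSet = new ReplSetTest({ name: "rs0", nodes: 3 });    // assumed name and node count
replicaSet.startSet();     // start the mongod processes
replicaSet.initiate();     // initiate the set as soon as startSet is done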

Are you on EC2 by chance? I had the same problem: when I ran rs.initiate() I got "couldn't initiate: can't find self in the replset config".
I read somewhere that on EC2 it always gets the localhost IP wrong, so I explicitly added the EC2 private IP in /etc/mongo.conf, e.g.:
bind_ip=10.1.1.12,127.0.0.1
and that fixed the problem for me; rs.initiate() now works.

Related

MongoDB Replication addition failing

I'm trying to add 2 secondaries to a MongoDB replica set after a successful initialization, but unfortunately it is failing.
repset_init.js file contents:
rs.add( { host: "10.0.1.170:27017" } )
rs.add( { host: "10.0.2.157:27017" } )
rs.add( { host: "10.0.3.88:27017" } )
The command I executed to add the replica set members:
mongo -u xxxxx -p yyyy --authenticationDatabase admin --port 27017 repset_init.js
The command hangs in the terminal, and below is the log output:
{"t":{"$date":"2021-06-09T12:29:34.939+00:00"},"s":"I", "c":"REPL", "id":21393, "ctx":"conn2","msg":"Found self in config","attr":{"hostAndPort":"MongoD-1:27017"}}
{"t":{"$date":"2021-06-09T12:29:34.939+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn2","msg":"Slow query","attr":{"type":"command","ns":"local.system.replset","appName":"MongoDB Shell","command":{"replSetReconfig":{"_id":"Shard_0","version":2,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"MongoD-1:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1},{"host":"10.0.2.157:27017","_id":1.0}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"60c0b3566991d93637465f55"}}},"lsid":{"id":{"$uuid":"263568b4-ec31-4ea6-8f72-69cec80c1a7c"}},"$db":"admin"},"numYields":0,"reslen":38,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":3}},"ReplicationStateTransition":{"acquireCount":{"w":5}},"Global":{"acquireCount":{"r":1,"w":4}},"Database":{"acquireCount":{"w":2,"W":1}},"Collection":{"acquireCount":{"w":2}},"Mutex":{"acquireCount":{"r":2}}},"flowControl":{"acquireCount":2,"timeAcquiringMicros":3},"storage":{},"protocol":"op_msg","durationMillis":151}}
{"t":{"$date":"2021-06-09T12:29:34.940+00:00"},"s":"I", "c":"REPL", "id":21215, "ctx":"ReplCoord-1","msg":"Member is in new state","attr":{"hostAndPort":"10.0.2.157:27017","newState":"STARTUP"}}
{"t":{"$date":"2021-06-09T12:29:34.941+00:00"},"s":"I", "c":"REPL", "id":4508702, "ctx":"conn2","msg":"Waiting for the current config to propagate to a majority of nodes"}
{"t":{"$date":"2021-06-09T12:33:55.701+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2021-06-09T12:33:55.701+00:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2021-06-09T12:34:35.029+00:00"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":{"hostAndPort":"10.0.2.157:27017","error":"ShutdownInProgress: Pool for 10.0.2.157:27017 has expired."}}
Additional details:
Shard_0:PRIMARY> rs.printSlaveReplicationInfo()
WARNING: printSlaveReplicationInfo is deprecated and may be removed in the next major release. Please use printSecondaryReplicationInfo instead.
source: 10.0.2.157:27017
syncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC) 1623243005 secs (450900.83 hrs) behind the primary
I'm able to reach the node on port 27017:
telnet 10.0.2.157 27017
Trying 10.0.2.157...
Connected to 10.0.2.157.
Escape character is '^]'.
My config file:
net:
  bindIp: 0.0.0.0
  port: 27017
  ssl: {}
processManagement:
  fork: "true"
  pidFilePath: /var/run/mongodb/mongod.pid
replication:
  replSetName: Shard_0
security:
  authorization: enabled
  keyFile: /etc/zzzzzkey.key
setParameter:
  authenticationMechanisms: SCRAM-SHA-256
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /data/dbdata
  engine: wiredTiger
systemLog:
  destination: file
  path: /data/log/mongodb.log
I'm initializing the replica set using the command below:
mongo --host 127.0.0.1 --port {{mongod_port}} --eval 'printjson(rs.initiate())'
I'm not sure what is causing this issue. Could you please help me?
The command looks a bit strange:
{
    "replSetReconfig": {
        "_id": "Shard_0",
        "members": [
            { "_id": 0, "host": "MongoD-1:27017", "arbiterOnly": false, "hidden": false, "priority": 1.0, "slaveDelay": 0, "votes": 1 },
            { "_id": 1.0, "host": "10.0.2.157:27017" }
        ]
    }
}
Why do you name your replica set Shard_0? Are you trying to set up a sharded cluster?
You add _id: 0, host: "MongoD-1:27017" and _id: 1.0, host: "10.0.2.157:27017", which is not consistent, i.e. you mixed IP address and hostname. The _id values 0 and 1.0 are also confusing.
What do your config files look like, and how did you start the MongoDB services?
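As an illustration only (the extra hostnames are hypothetical, not taken from your setup), a more consistent member list would stick to one naming scheme and plain integer _id values, e.g.:
rs.add( { _id: 1, host: "MongoD-2:27017" } )    // hypothetical hostname, matching the MongoD-1 style
rs.add( { _id: 2, host: "MongoD-3:27017" } )
Alternatively, keep every member IP-based rather than mixing the two.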

Mongod Replica set aborting after invariant() failure due to Stable timestamp Timestamp does not equal appliedThrough timestamp

I am a newbie to MongoDB. I was doing a POC on consuming documents using the Java client.
I am using version 4.2.5.
I have 3 instances of mongod running locally as a replica set, started as below:
mongod --port 27017 --dbpath /data/d1/ --replSet rs0 --bind_ip localhost
mongod --port 27018 --dbpath /data/d2/ --replSet rs0 --bind_ip localhost
mongod --port 27019 --dbpath /data/d3/ --replSet rs0 --bind_ip localhost
After a certain time, one or two of the instances abort, and when I try to start them again, I see the same error. I am not sure what causes it.
Any help would be appreciated.
Error:
2020-05-25T19:37:47.126+0530 I REPL [initandlisten] Rollback ID is 1
2020-05-25T19:37:47.128+0530 F - [initandlisten] Invariant failure !stableTimestamp || stableTimestamp->isNull() || appliedThrough.isNull() || *stableTimestamp == appliedThrough.getTimestamp() Stable timestamp Timestamp(1590410112, 1) does not equal appliedThrough timestamp { ts: Timestamp(1590410172, 1), t: 5 } src/mongo/db/repl/replication_recovery.cpp 412
2020-05-25T19:37:47.128+0530 F - [initandlisten]
***aborting after invariant() failure
2020-05-25T19:37:47.137+0530 F - [initandlisten] Got signal: 6 (Abort trap: 6).
0x109e10cc6 0x109e1054d 0x7fff5d3c9b5d 0xa00 0x7fff5d2836a6 0x109e04d4a 0x1083597af 0x1083722ba 0x108376eb9 0x108077c6c 0x108071744 0x108070999 0x7fff5d1de3d5 0x9
----- BEGIN BACKTRACE -----
"backtrace":[{"b":"10806F000","o":"1DA1CC6","s":"_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE"},{"b":"10806F000","o":"1DA154D","s":"_ZN5mongo12_GLOBAL__N_110abruptQuitEi"},{"b":"7FFF5D3C5000","o":"4B5D","s":"_sigtramp"},{"b":"0","o":"A00"},
...
...
...
...
{ "path" : "/System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement", "machType" : 6, "b" : "7FFF41F95000", "vmaddr" : "7FFF3C6CD000", "buildId" : "2A396FC07B7930889A82FB93C1181A57" }, { "path" : "/usr/lib/libxslt.1.dylib", "machType" : 6, "b" : "7FFF5C842000", "vmaddr" : "7FFF56F7A000", "buildId" : "EC50E503AEEE3F50956F55E4AF4584D9" }, { "path" : "/System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP", "machType" : 6, "b" : "7FFF4177E000", "vmaddr" : "7FFF3BEB6000", "buildId" : "EDD16B2E4F353E13B389CF77B3CAD4EB" } ] }}
mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x36) [0x109e10cc6]
mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0xBD) [0x109e1054d]
libsystem_platform.dylib(_sigtramp+0x1D) [0x7fff5d3c9b5d]
??? [0xa00]
libsystem_c.dylib(abort+0x7F) [0x7fff5d2836a6]
mongod(_ZN5mongo22invariantFailedWithMsgEPKcRKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEES1_j+0x33A) [0x109e04d4a]
mongod(_ZN5mongo4repl23ReplicationRecoveryImpl16recoverFromOplogEPNS_16OperationContextEN5boost8optionalINS_9TimestampEEE+0x43F) [0x1083597af]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl21_startLoadLocalConfigEPNS_16OperationContextE+0x3AA) [0x1083722ba]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl7startupEPNS_16OperationContextE+0xE9) [0x108376eb9]
mongod(_ZN5mongo12_GLOBAL__N_114_initAndListenEi+0x28FC) [0x108077c6c]
mongod(_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPcS2_+0xDA4) [0x108071744]
mongod(main+0x9) [0x108070999]
libdyld.dylib(start+0x1) [0x7fff5d1de3d5]
??? [0x9]
----- END BACKTRACE -----
Abort trap: 6
There appears to be a ticket for this issue experienced by another user. You may consider engaging with MongoDB developers in that ticket to provide the requested information.

MongoDB error trying to access my remote DB via public URL

I have an issue with my MongoDB on a remote server. When I bind the IP to 127.0.0.1, mongod starts fine, but when I add my public IP to the bind list, mongod fails with the error message below. How can I solve this error?
2016-07-10T11:14:15.678+0300 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1, XX.XXX.XX.XX", port: 27017 }, storage: { dbPath: "/var/lib/mongodb", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2016-07-10T11:14:15.710+0300 I NETWORK  [initandlisten] getaddrinfo(" XX.XXX.XX.XX") failed: Name or service not known
2016-07-10T11:14:15.710+0300 E NETWORK  [initandlisten] listen(): socket is invalid.
2016-07-10T11:14:15.710+0300 E STORAGE  [initandlisten] Failed to set up sockets during startup.
2016-07-10T11:14:15.710+0300 I CONTROL  [initandlisten] dbexit: rc: 48

Log only errors in MongoDB logs

Are there any options for logging only errors in the MongoDB log files?
With the current configuration, it seems that every request to the Mongo server is logged in the log files:
Wed Sep 17 08:08:07.030 [conn117] insert my_database.myCol ninserted:1 keyUpdates:0 locks(micros) w:243505 285ms
Wed Sep 17 08:08:54.447 [conn101] command anotherDatabase.$cmd command: { findandmodify: "myCol", query: { ... }, new: 0, remove: 0, upsert: 0, fields: {...}, update: {...} } update: {...} ntoreturn:1 idhack:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:124172 reslen:248 124ms
Wed Sep 17 08:10:24.370 [conn95] command my_database.$cmd command: { count: "cms.myCol", query: { ... }, fields: null } ntoreturn:1 keyUpdates:0 locks(micros) r:197368 reslen:48 197ms
...
The current configuration is:
# mongodb.conf
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
logappend=true
How can the configuration be updated to log only errors?
I'm running Mongo shell version 2.4.10:
$ mongo --version
MongoDB shell version: 2.4.10
Appending quiet=true will reduce a lot of the output.
It is probably impossible to suppress everything except errors at the current stage.
Appending slowms=<threshold> to the configuration file can reduce normal log output further.
The threshold is an integer value in milliseconds: if an operation's duration doesn't exceed it, the normal log line for that operation won't be written. The default value is 100.
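For example, a sketch of the configuration file from the question with both options appended (the 200 ms threshold is only an illustrative value):
# mongodb.conf
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
logappend=true
quiet=true
slowms=200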
Also, you can change this value another way while the instance is running:
var slowms = theValueYouWant;
var level = db.getProfilingStatus().was;
db.setProfilingLevel(level, slowms);

Set smallfiles in ShardingTest

I know there is a ShardingTest() object that can be used to create a testing sharding environment (see https://serverfault.com/questions/590576/installing-multiple-mongodb-versions-on-the-same-server), e.g.:
mongo --nodb
cluster = new ShardingTest({shards : 3, rs : false})
However, given that the disk space on my testing machine is limited and I'm getting "Insufficient free space for journal files" errors when using the above command, I'd like to set the smallfiles option. I have tried the following, with no luck:
cluster = new ShardingTest({shards : 3, rs : false, smallfiles: true})
How can smallfiles be enabled for a sharding test, please? Thanks!
A good way to determine how to use a MongoDB shell command is to type the command without the parentheses into the shell; instead of running it, the shell will print the source code for the command. So if you run
ShardingTest
at the command prompt you will see all of the source code. Around line 30 you'll see this comment:
// Allow specifying options like :
// { mongos : [ { noprealloc : "" } ], config : [ { smallfiles : "" } ], shards : { rs : true, d : true } }
which gives you the correct syntax to pass configuration parameters for mongos, config and shards (which apply to the non-replica-set mongods for all the shards). That is, instead of specifying a number for shards, you pass in an object. Digging further into the code:
else if( isObject( numShards ) ){
    tempCount = 0;
    for( var i in numShards ) {
        otherParams[ i ] = numShards[i];
        tempCount++;
    }
    numShards = tempCount;
This will take an object and use the subdocuments within it as option parameters for each shard. Using your example, that leads to:
cluster = new ShardingTest({shards : {d0:{smallfiles:''}, d1:{smallfiles:''}, d2:{smallfiles:''}}})
which from the output I can see is starting the shards with --smallfiles:
shell: started program mongod --port 30000 --dbpath /data/db/test0 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30001 --dbpath /data/db/test1 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30002 --dbpath /data/db/test2 --smallfiles --setParameter enableTestCommands=1
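If the config servers also run short of space, the comment quoted above suggests you could presumably pass config-server options the same way; an untested sketch following that syntax:
cluster = new ShardingTest({shards : {d0:{smallfiles:''}, d1:{smallfiles:''}, d2:{smallfiles:''}}, config : [ { smallfiles : "" } ]})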
Alternatively, since you now have the source code in front of you, you could modify the JavaScript to pass in smallfiles by default.
A thorough explanation of the invoking modes of ShardingTest() can be found in the source code of the function itself.
For example, you could set smallfiles for two shards as follows:
cluster = new ShardingTest({shards: {d0:{smallfiles:''}, d1:{smallfiles:''}}})