MongoDB - ReplicaSet - Failed to refresh key cache

I am deploying my first replica set on a Windows 10 machine with MongoDB 5.0.
For the purpose of the tutorial, I want all the servers running on my machine, on different ports.
A)
I created my first server with:
mongod --replSet rs0 --port 27018 --dbpath C:\data\R0S1
I opened another command line prompt and connected to it with mongo --port 27018.
I initiated the set with the command rs.initiate().
B)
I created my second and third servers with:
mongod --replSet rs0 --port 27019 --dbpath C:\data\R0S2
mongod --replSet rs0 --port 27020 --dbpath C:\data\R0S3
(I ran each of these commands in new command line prompts)
C)
I added the second and third servers to the set by connecting to the primary (on 27018
with mongo --port 27018) and running:
rs.add("localhost:27019")
rs.add("localhost:27020")
At this stage, everything has worked as expected.
Running rs.conf() gives me:
{
    "_id" : "rs0",
    "version" : 5,
    "term" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "secondaryDelaySecs" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "localhost:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "secondaryDelaySecs" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "localhost:27020",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "secondaryDelaySecs" : NumberLong(0),
            "votes" : 1
        }
    ],
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("61cc297329dca2f0673c2cff")
    }
}
D) In a new command line prompt, I created my fourth server (with the idea of making it an arbiter) with:
mongod --replSet rs0 --port 30000 --dbpath C:\data\arb
Here I have a problem.
The log file for this new server contains the following error:
{"t":{"$date":"2021-12-29T10:50:51.767+01:00"},"s":"I", "c":"-",
"id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to
refresh key cache","attr":{"error":"NotYetInitialized: Cannot use
non-local read concern until replica set is finished
initializing.","nextWakeupMillis":19200}}
When I connect to the primary and try to run rs.addArb("localhost:30000"), the command blocks and does nothing, I believe because of the above error on the server on port 30000.
Do you have any ideas on what's going on and how I could solve my issue?
---- edit 1 ----
Below is my mongod.cfg file:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: C:\Program Files\MongoDB\Server\5.0\data
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

#processManagement:
#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:

It might be due to the "majority" default write concern introduced in MongoDB 5.0.
Check whether the command below reports a default write concern of "majority"; if it does, change it to w: 2 before adding the arbiter.
db.adminCommand({ "getDefaultRWConcern" : 1 })
This might be a known MongoDB issue - https://www.mongodb.com/community/forums/t/single-node-replicaset-never-finishing-instanciating-error-cannot-use-non-local-read-concern-until-replica-set-is-finished-initializing/164815/3

Related

MongoDB single node replica set fails on restart of Azure VM

I'm running a MongoDB single node replica set on an Azure VM with Linux (Ubuntu 20.04).
Everything worked fine and I was able to start the replica set following the documentation from MongoDB.
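For reference, the set was initiated along these lines (a sketch reconstructed from the config shown below; the exact command was not in the original post):
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "cloud.visual-factories.com:27017" } ] })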
Today I restarted the VM from the Azure portal and the MongoDB server seems to refuse connections.
I got this log from mongod.log:
"c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
"c":"REPL", "id":21405, "ctx":"ReplCoord-0","msg":"Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat","attr":{"error":{"code":74,"codeName":"NodeNotFound","errmsg":"No host described in new configuration with {version: 2, term: 5} for replica set rs0 maps to this node"},"localConfig":{"_id":"rs0","version":2,"term":5,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"cloud.visual-factories.com:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"610b7cd7fb1f746f7ce41e65"}}}}}
"c":"REPL", "id":21392, "ctx":"ReplCoord-0","msg":"New replica set config in use","attr":{"config":{"_id":"rs0","version":2,"term":5,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"cloud.visual-factories.com:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"610b7cd7fb1f746f7ce41e65"}}}}}
"c":"REPL", "id":21394, "ctx":"ReplCoord-0","msg":"This node is not a member of the config"}
"c":"REPL", "id":21358, "ctx":"ReplCoord-0","msg":"Replica set state transition","attr":{"newState":"REMOVED","oldState":"STARTUP"}}
"c":"-", "id":20883, "ctx":"conn1","msg":"Interrupted operation as its client disconnected","attr":{"opId":2573}}
"c":"NETWORK", "id":22989, "ctx":"conn1","msg":"Error sending response to client. Ending connection from remote","attr":{"error":{"code":9001,"codeName":"SocketException","errmsg":"Broken pipe"},"remote":"127.0.0.1:55402","connectionId":1}}
This is my replication config:
{
    "_id" : "rs0",
    "version" : 2,
    "term" : 2,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0,
            "host" : "cloud.visual-factories.com:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("611419575ebb3f4f4ebe44ab")
    }
}
And my mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled
  keyFile: /etc/keyfile.txt

#operationProfiling:

replication:
  replSetName: "rs0"

#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
Any idea why this is happening?
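One possible recovery, sketched under the assumption that after the restart the node no longer resolves cloud.visual-factories.com to itself (e.g. /etc/hosts or DNS changed, hence the NodeNotFound error above): read the stored config locally and force a reconfig with a host entry that maps to this node.
// on the affected node, via mongo --port 27017 (authenticate first: authorization is enabled)
cfg = db.getSiblingDB("local").system.replset.findOne()   // the stored config is still readable locally
cfg.members[0].host = "cloud.visual-factories.com:27017"  // or any address that resolves to this node
rs.reconfig(cfg, { force: true })                         // force: the node is in REMOVED state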

mongos returns empty collection when queried from different applications

We use a sharded MongoDB configuration with 2 shards. There are several front applications, which retrieve data via the mongos service (installed on each application server).
The problem is that at some point the same query run from Application server #1 returns an empty collection, while run from Application server #3 it returns correct results.
The problem persists on Application server #1 until the flush command is run. As the documentation states:
"You should only need to run flushRouterConfig after movePrimary has
been run or after manually clearing the jumbo chunk flag."
But neither was the primary moved nor was the flag cleared.
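For reference, the flush we run is the generic router-cache refresh, executed against the affected mongos (the exact invocation was not in the original post):
db.adminCommand({ flushRouterConfig: 1 })  // rebuild this mongos's cached routing table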
Any ideas why this could happen?
Config file
# mongos.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: false
  logRotate: rename
  path: /var/log/mongos.log

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongos.pid  # location of pidfile

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,94.130.167.133,192.168.0.1  # Listen to local interface only, comment to listen on all interfaces.

security:
  keyFile: /var/lib/database/mongos/.keyfile

sharding:
  configDB: myproject-config/192.168.0.15:27027,192.168.0.16:27017,192.168.0.17:27017
printShardingStatus()
MongoDB shell version: 3.2.8
connecting to: cron0.myproject.smapps.org:27017/test
config.locks collection empty or missing. be sure you are connected to a mongos
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("2c5bc3fd9b2385s9613b0la5")
  }
  shards:
      { "_id" : "myproject0", "host" : "myproject0/192.168.0.11:27017,192.168.0.12:27017", "state" : 1, "tags" : [ ] }
      { "_id" : "myproject1", "host" : "myproject1/192.168.0.13:27017,192.168.0.18:27017", "state" : 1, "tags" : [ "game" ] }
      { "_id" : "myproject2", "host" : "myproject2/192.168.0.26:27017,192.168.0.28:27017", "state" : 1, "tags" : [ "game" ] }
  active mongoses:
      "4.0.6" : 14
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          8 : Success
  databases:
      { "_id" : "myproject", "primary" : "myproject0", "partitioned" : true, "version" : { "uuid" : BinData(4,"+YxjwrKzTuekDlKutwT1IA=="), "lastMod" : 1 } }
      ...

Sharded MongoDB cluster uses only one shard

I'm trying to create a sharded MongoDB cluster. I have 3 VMs to use and I was trying to get the following setup:
VM1) Node.js Application + mongos instance
VM2) CSRS PRIMARY + Shard
VM3) CSRS SECONDARY + Shard
I'm not interested in replication but I need sharding because I will have to query over a large number of results.
To achieve what is stated above I have done the following:
On both VM2 and VM3:
mongod --configsvr --replSet configM4C --bind_ip <ipVM2/3> --port 27040
mongod --shardsvr --replSet shardM4C --bind_ip <ipVM2/3> --port 27041
Then, connected to configM4C on VM2 (mongo --host <ipVM2> --port 27040), I ran:
rs.initiate({
  _id: "configM4C",
  configsvr: true,
  members: [
    { _id: 0, host: "<ipVM2>:27040" },
    { _id: 1, host: "<ipVM3>:27040" }
  ]
})
Then, connected to shardM4C on VM2 (mongo --host <ipVM2> --port 27041), I ran:
rs.initiate({
  _id: "shardM4C",
  members: [
    { _id: 0, host: "<ipVM2>:27041" },
    { _id: 1, host: "<ipVM3>:27041" }
  ]
})
Started mongos on VM1:
mongos --configdb configM4C/<ipVM2>:27040,<ipVM3>:27040 --port 27042
Connected to mongos on VM1 (mongo --port 27042):
sh.addShard("<ipVM2>:27041")
sh.addShard("<ipVM3>:27041")
sh.enableSharding("pings")
sh.shardCollection("pings.pings", {provider:1, from_zone: 1, to_zone: 1})
Anyway, at this point, if I run db.pings.getShardDistribution() I get:
Shard shardM4C at shardM4C/<ipVM3>:27041,<ipVM2>:27041
data : 19.47MiB docs : 102876 chunks : 1
estimated data per chunk : 19.47MiB
estimated docs per chunk : 102876
Totals
data : 19.47MiB docs : 102876 chunks : 1
Shard shardM4C contains 100% data, 100% docs in cluster, avg obj size on shard : 198B
sh.status():
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5b58a6b4986cc2d94694a3f9")
  }
  shards:
      { "_id" : "shardM4C", "host" : "shardM4C/10.0.0.4:27041,10.0.0.7:27041", "state" : 1 }
  active mongoses:
      "3.6.3" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 5
      Last reported error: Could not find host matching read preference { mode: "primary" } for set shardM4C
      Time of Reported error: Wed Jul 25 2018 21:44:24 GMT+0000 (UTC)
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "config", "primary" : "config", "partitioned" : true }
          config.system.sessions
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  shardM4C 1
              { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardM4C Timestamp(1, 0)
      { "_id" : "pings", "primary" : "shardM4C", "partitioned" : true }
          pings.pings
              shard key: { "provider" : 1 }
              unique: false
              balancing: true
              chunks:
                  shardM4C 1
              { "provider" : { "$minKey" : 1 } } -->> { "provider" : { "$maxKey" : 1 } } on : shardM4C Timestamp(1, 0)
      { "_id" : "test", "primary" : "shardM4C", "partitioned" : false }
And if I run a query and look at the executionStats I get:
...
"queryPlanner" : {
    "mongosPlannerVersion" : 1,
    "winningPlan" : {
        "stage" : "SINGLE_SHARD",
        "shards" : [
            {
                "shardName" : "shardM4C",
                "connectionString" : "shardM4C/<ipVM3>:27041,<ipVM2>:27041",
                "serverInfo" : {
                    "host" : "dpiscaglia-2",
                    "port" : 27041,
                    "version" : "3.6.3",
                    "gitVersion" : "9586e557d54ef70f9ca4b43c26892cd55257e1a5"
                },
...
Both results make me think that the query actually runs on only one shard, and that the second server is used only for replication and as a backup (which is not what I want). Am I correct?
If yes, could you please help me understand what I did wrong?
Thank you very much in advance for your help
Have a good day
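A sketch of what may have gone wrong (an editorial guess, not an answer from the original thread): both --shardsvr processes were initiated into the same replica set shardM4C, so sh.addShard registers a single shard and the second host is just a replica of the first. Two shards would need two distinct shard replica sets, e.g. with hypothetical names shardA and shardB:
// on VM2
mongod --shardsvr --replSet shardA --bind_ip <ipVM2> --port 27041
// on VM3
mongod --shardsvr --replSet shardB --bind_ip <ipVM3> --port 27041
// initiate each single-member set on its own VM, then register both from mongos:
sh.addShard("shardA/<ipVM2>:27041")
sh.addShard("shardB/<ipVM3>:27041")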

Error when trying to connect to a replica set in Mongo

When I try to connect to a mongo replica set in AWS I get this error:
slavenode:27017: [Errno -2] Name or service not
known,ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual ip address)
The code to connect is shown below:
client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB
try:
    db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("You are connected!")
client.close()
(where in Master-PublicIP and Slave-PublicIP I have the actual IPv4 Public IPs from the AWS console)
I already have a replica set and its configuration is:
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-XXX-XX-XX-XXX:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "SlaveNode:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and I have given the proper permissions with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing the myFirstDB from within the MongoClient()?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Because your next line then specifies which db you want to use:
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis of MongoClient():
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As #N3i1 suggested in the comments, I used the Public DNS (IPv4). There was an issue with the hosts that I had declared in /etc/hosts.
In this file I had defined the IPs of the master/slaves with some names. For some reason this didn't work. I deleted them and then reconfigured the replica set.
On the PRIMARY, in the mongo shell, I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
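A quick check after the reconfig (a sketch; it prints each member with its current state):
rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })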
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, change the Public DNS (IPv4) addresses to yours.

MongoDB secondary replica does not have collections as in primary

I have set up a MongoDB replica set on my local machine, and created a couple of collections in a database named "adaptive-db". From the mongo shell, when I connect to the primary and run show dbs, I can see and query my database "adaptive-db".
I then switched to the secondary, and ran rs.slaveOk(), expecting to see the "adaptive-db" created on the primary present on the secondary as well, but I don't see it.
Here are the commands I ran:
shell:bin user1$ ./mongo localhost:27017
MongoDB shell version: 3.0.2
connecting to: localhost:27017/test
rs0:PRIMARY> show dbs
adaptive-db 0.125GB
local 0.281GB
rs0:PRIMARY> use adaptive-db
switched to db adaptive-db
rs0:PRIMARY> show collections
people
student
system.indexes
rs0:PRIMARY> db.people.find().count()
6003
rs0:PRIMARY> exit
bye
shell:bin user1$ ./mongo localhost:27018
MongoDB shell version: 3.0.2
connecting to: localhost:27018/test
rs0:SECONDARY> show dbs
2015-06-25T11:16:40.751-0400 E QUERY Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
at Error (<anonymous>)
at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
at shellHelper.show (src/mongo/shell/utils.js:630:33)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.031GB
local 0.281GB
rs0:SECONDARY>
Can someone please explain why? Here is my rs.conf():
rs0:SECONDARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 7,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "localhost:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "localhost:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
Thanks
Run the command rs.slaveOk() on the secondary replica member; this allows the current connection to run read operations on that secondary member.
For reference:
http://docs.mongodb.org/manual/reference/method/rs.slaveOk/
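A minimal session against the secondary from the question (ports and database name as above) would look like:
./mongo localhost:27018
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> use adaptive-db
rs0:SECONDARY> show collections
rs0:SECONDARY> db.people.find().count()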
I'm guessing a little, but judging by the other collections you see, you are in the local database. Which, as the name suggests, is not replicated (it contains the oplog, startup_log and other things like that, which are specific to the instance and would get messed up if replicated).
Use a different database. Either connect to 127.0.0.1/somedb (use the appropriate IP/hostname), or run use somedb to switch databases while in the console. Then create collections and those should get replicated (to the database of the same name, of course - somedb in my example).
For newer versions of MongoDB, run the command rs.secondaryOk() on the secondary instead.