Grails 2.2.1
MongoDB GORM plugin 1.2
When running against a replica set, I am finding that stepping down the primary causes the following error to repeat indefinitely in the Java driver:
2013-09-09 16:00:19,655 [SimpleAsyncTaskExecutor-1] ERROR grails.app.services.plover.UserStreamAnalyzerService - Exception while handling status update event: org.springframework.data.mongodb.UncategorizedMongoDbException: not talking to master and retries used up; nested exception is com.mongodb.MongoException: not talking to master and retries used up
...
Caused by: org.springframework.data.mongodb.UncategorizedMongoDbException: not talking to master and retries used up; nested exception is com.mongodb.MongoException: not talking to master and retries used up
The stacktrace is here:
Caused by: com.mongodb.MongoException: not talking to master and retries used up
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:314)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:316)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:257)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:310)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:295)
at com.mongodb.DBCursor._check(DBCursor.java:368)
at com.mongodb.DBCursor._hasNext(DBCursor.java:459)
at com.mongodb.DBCursor._fill(DBCursor.java:518)
at com.mongodb.DBCursor.toArray(DBCursor.java:553)
at com.mongodb.DBCursor.toArray(DBCursor.java:542)
at org.grails.datastore.mapping.mongo.query.MongoQuery$MongoResultList.<init>(MongoQuery.java:908)
at org.grails.datastore.mapping.mongo.query.MongoQuery$36.doInDB(MongoQuery.java:536)
at org.grails.datastore.mapping.mongo.query.MongoQuery$36.doInDB(MongoQuery.java:508)
I have set up a local test environment to reproduce this problem. Here is the replica set status output (rs.status()) from that environment:
{
"set" : "rsMesh",
"date" : ISODate("2013-09-10T01:08:20Z"),
"myState" : 2,
"syncingTo" : "macbookpro.local:27018",
"members" : [
{
"_id" : 1,
"name" : "macbookpro.local:27018",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9940,
"optime" : {
"t" : 1378767619,
"i" : 5
},
"optimeDate" : ISODate("2013-09-09T23:00:19Z"),
"lastHeartbeat" : ISODate("2013-09-10T01:08:19Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "macbookpro.local:27019",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 9914,
"lastHeartbeat" : ISODate("2013-09-10T01:08:19Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0
},
{
"_id" : 3,
"name" : "macbookpro.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 10392,
"optime" : {
"t" : 1378767619,
"i" : 5
},
"optimeDate" : ISODate("2013-09-09T23:00:19Z"),
"self" : true
}
],
"ok" : 1
}
The replica set connection has been configured in DataSource.groovy as per the documentation:
grails {
    mongo {
        replicaSet = ["macbookpro.local:27017", "macbookpro.local:27018", "macbookpro.local:27019"]
    }
}
So I am not running standalone, the replica set members are synced, and all servers are healthy. But if I force a new server to become primary, then all access appears to fail, as if the driver were not redirecting queries to the new primary.
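For reference, this is roughly how I force the failover in the test environment (just a sketch; the 60-second freeze interval is arbitrary):
// run in the mongo shell connected to the current primary (macbookpro.local:27018 above);
// the primary steps down and will not stand for re-election for 60 seconds
rs.stepDown(60)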
What am I missing?
No one ever answered this, so I decided that failover recovery with a plain replica set was non-functional for my setup. Instead I moved to sharding, hoping that layering a mongos between the app server and the clusters themselves would provide enough protection.
The answer was a definitive "sort of". Before, when I stepped a primary down (or simulated a crash), the app server would hang indefinitely. Now I just get a few errors about not finding a primary in a given cluster, and then the system recovers. Not a desirable solution, but at least it is better than a permanent failure.
Situation: I have a MongoDB replica set spanning two computers.
One computer is a server that holds the primary node and the arbiter. This server is a live server and is always on. Its local IP used in replication is 192.168.0.4.
The second is a PC that the secondary node resides on; it is on for only a few hours a day. Its local IP used in replication is 192.168.0.5.
My expectation: I wanted the live server to be the main point of data interaction for my application, regardless of the state of the PC (whether it is reachable or not, since the PC is secondary), so I wanted to make sure that the server's node is always primary.
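(For reference, the member priorities shown in the config below were set with a reconfig along these lines; just a sketch, run from the primary's shell.)
// give the live server a much higher priority so it is preferred as primary
cfg = rs.conf()
cfg.members[0].priority = 10   // 192.168.0.4:27017 -- the live server
cfg.members[1].priority = 1    // 192.168.0.5:5051  -- the PC
rs.reconfig(cfg)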
The following is the result of rs.config():
liveSet:PRIMARY> rs.config()
{
"_id" : "liveSet",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.4:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 10,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.0.5:5051",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.0.4:5052",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
I have also set the storage engine to WiredTiger, if that matters.
What I actually get, and the problem: When I turn off the PC, or kill its mongod process, then the node on the server becomes secondary.
The following is the output on the server when I killed the PC's mongod process, while connected to the primary node's shell:
liveSet:PRIMARY>
2015-11-29T10:46:29.471+0430 I NETWORK Socket recv() errno:10053 An established connection was aborted by the software in your host machine. 127.0.0.1:27017
2015-11-29T10:46:29.473+0430 I NETWORK SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:27017]
2015-11-29T10:46:29.475+0430 I NETWORK DBClientCursor::init call() failed
2015-11-29T10:46:29.479+0430 I NETWORK trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2015-11-29T10:46:29.481+0430 I NETWORK reconnect 127.0.0.1:27017 (127.0.0.1) ok
liveSet:SECONDARY>
There are two things I am unsure about:
Considering this part of MongoDB documentation:
Replica sets use elections to determine which set member will become primary. Elections occur after initiating a replica set, and also any time the primary becomes unavailable.
An election occurs when the primary is not available (or when the set is initiated, but that case does not apply here). Yet the primary was always available, so why does an election happen?
Considering this part of the same documentation:
If a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
Regarding the phrase 'members become read-only': I have two nodes up versus one down, so this should not affect the replica set either.
Now my question: how do I keep the node on the server as primary when the node on the PC is not reachable?
Update:
This is the output of rs.status().
Thanks to Wan Bachtiar, this now makes the behavior obvious: the arbiter was not reachable.
liveSet:PRIMARY> rs.status()
{
"set" : "liveSet",
"date" : ISODate("2015-11-30T04:33:03.864Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.0.4:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1807553,
"optime" : Timestamp(1448796026, 1),
"optimeDate" : ISODate("2015-11-29T11:20:26Z"),
"electionTime" : Timestamp(1448857488, 1),
"electionDate" : ISODate("2015-11-30T04:24:48Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.0.5:5051",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 496,
"optime" : Timestamp(1448796026, 1),
"optimeDate" : ISODate("2015-11-29T11:20:26Z"),
"lastHeartbeat" : ISODate("2015-11-30T04:33:03.708Z"),
"lastHeartbeatRecv" : ISODate("2015-11-30T04:33:02.451Z"),
"pingMs" : 1,
"configVersion" : 2
},
{
"_id" : 2,
"name" : "192.168.0.4:5052",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"lastHeartbeat" : ISODate("2015-11-30T04:33:00.008Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"configVersion" : -1
}
],
"ok" : 1
}
liveSet:PRIMARY>
As stated in the documentation, if a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
In this case the primary has to step down if both the arbiter and the secondary are unreachable from it. rs.status() should help you determine the health of the replica set members.
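For example, a quick way to see which members this node currently considers healthy (a small shell sketch, run on the member you expect to remain primary):
// print the name, state and health of every member as seen from this node
rs.status().members.forEach(function (m) {
    print(m.name + "  stateStr=" + m.stateStr + "  health=" + m.health);
});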
One thing you should also watch is the primary's oplog size. The size of the oplog determines how long a replica set member can be down and still be able to catch up when it comes back online. The larger the oplog, the longer you can tolerate a member being down, as the oplog can hold more operations. If a member does fall too far behind, you must resynchronise it by removing its data files and performing an initial sync.
See Check the size of the Oplog for more info.
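For example, from the mongo shell on the primary:
// shows the configured oplog size and the time span of operations it currently holds
db.printReplicationInfo()
// shows how far each secondary is behind the primary
db.printSlaveReplicationInfo()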
Regards,
Wan.
I'm having trouble diagnosing an issue where my Java application's requests to MongoDB are not getting routed to the Nearest replica, and I hope someone can help. Let me start by explaining my configuration.
The Configuration:
I am running a MongoDB instance in production that is a sharded replica set. It is currently only a single shard (it hasn't gotten big enough yet to require a split). This single shard is backed by a 3-node replica set. Two nodes of the replica set live in our primary data center. The third node lives in our secondary datacenter and is prohibited from becoming the primary node.
We run our production application simultaneously in both data centers, however the instance in our secondary data center operates in "read-only" mode and never writes data into MongoDB. It only serves client requests for reads of existing data. The objective of this configuration is to ensure that if our primary datacenter goes down, we can still serve client read traffic.
We don't want to waste all of this hardware in our secondary datacenter, so even in happy times we actively load balance a portion of our read-only traffic to the instance of our application running in the secondary datacenter. This application instance is configured with readPreference=NEAREST and is pointed at a mongos instance running on localhost (version 2.6.7). The mongos instance is obviously configured to point at our 3-node replica set.
From a mongos:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("52a8932af72e9bf3caad17b5")
}
shards:
{ "_id" : "shard1", "host" : "shard1/failover1.com:27028,primary1.com:27028,primary2.com:27028" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard1" }
{ "_id" : "MyApplicationData", "partitioned" : false, "primary" : "shard1" }
From the failover node of the replica set:
shard1:SECONDARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2015-09-03T13:26:18Z"),
"myState" : 2,
"syncingTo" : "primary1.com:27028",
"members" : [
{
"_id" : 3,
"name" : "primary1.com:27028",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 674841,
"optime" : Timestamp(1441286776, 2),
"optimeDate" : ISODate("2015-09-03T13:26:16Z"),
"lastHeartbeat" : ISODate("2015-09-03T13:26:16Z"),
"lastHeartbeatRecv" : ISODate("2015-09-03T13:26:18Z"),
"pingMs" : 49,
"electionTime" : Timestamp(1433952764, 1),
"electionDate" : ISODate("2015-06-10T16:12:44Z")
},
{
"_id" : 4,
"name" : "primary2.com:27028",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 674846,
"optime" : Timestamp(1441286777, 4),
"optimeDate" : ISODate("2015-09-03T13:26:17Z"),
"lastHeartbeat" : ISODate("2015-09-03T13:26:18Z"),
"lastHeartbeatRecv" : ISODate("2015-09-03T13:26:18Z"),
"pingMs" : 53,
"syncingTo" : "primary1.com:27028"
},
{
"_id" : 5,
"name" : "failover1.com:27028",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8629159,
"optime" : Timestamp(1441286778, 1),
"optimeDate" : ISODate("2015-09-03T13:26:18Z"),
"self" : true
}
],
"ok" : 1
}
shard1:SECONDARY> rs.conf()
{
"_id" : "shard1",
"version" : 15,
"members" : [
{
"_id" : 3,
"host" : "primary1.com:27028",
"tags" : {
"dc" : "primary"
}
},
{
"_id" : 4,
"host" : "primary2.com:27028",
"tags" : {
"dc" : "primary"
}
},
{
"_id" : 5,
"host" : "failover1.com:27028",
"priority" : 0,
"tags" : {
"dc" : "failover"
}
}
],
"settings" : {
"getLastErrorModes" : {"ACKNOWLEDGED" : {}}
}
}
The Problem:
The problem is that requests which hit this mongos in our secondary datacenter seem to be getting routed to a replica running in our primary datacenter, not the nearest node, which is running in the secondary datacenter. This incurs a significant amount of network latency and results in bad read performance.
My understanding is that the mongos decides which node in the replica set to route the request to, and it is supposed to honor the ReadPreference from my Java driver's request. Is there a command I can run in the mongos shell to see the status of the replica set, including ping times to the nodes? Or some way to see logging of incoming requests that indicates which node in the replica set was chosen and why? Any advice at all on how to diagnose the root cause of my issue?
When the read preference is NEAREST, the system does not simply look for the minimum network latency: it may decide the primary is the nearest member if the network connection to it is good. However, the nearest read mode combined with a tag set selects the matching member with the lowest network latency. Either way, nearest may resolve to either the primary or a secondary. How mongos behaves with configured read preferences, in terms of network latency, is not clearly explained in the official docs.
http://docs.mongodb.org/manual/core/read-preference/#replica-set-read-preference-tag-sets
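As an illustration, a tagged nearest read from the mongo shell would look something like this (a sketch; "some_collection" is a placeholder, and the dc tag values are the ones from the rs.conf() shown earlier):
// prefer members tagged as being in the failover datacenter; the trailing empty
// document lets the read fall back to any member if no tagged member is available
db.getSiblingDB("MyApplicationData").some_collection.find().readPref("nearest", [ { "dc" : "failover" }, { } ])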
hope this helps
If I start mongos with the flag -vvvv (4x verbose), the log files contain request routing information, including the read preference used and the host to which requests were routed. For example:
2015-09-10T17:17:28.020+0000 [conn3] dbclient_rs say
using secondary or tagged node selection in shard1,
read pref is { pref: "nearest", tags: [ {} ] }
(primary : primary1.com:27028,
lastTagged : failover1.com:27028)
Despite the wording, when using nearest, the absolute fastest member isn't necessarily the one chosen. Instead, a random member is chosen from the pool of members whose latency falls within the calculated latency window.
The latency window is calculated by taking the fastest member's ping and adding replication.localPingThresholdMs, whose default is 15ms. You can read more about the algorithm here.
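To make that concrete, here is a rough sketch of the selection (illustration only, not driver source), using illustrative ping times roughly like what a mongos in the secondary datacenter might see:
// a member is eligible if its ping is within localPingThresholdMs of the fastest member
var localPingThresholdMs = 15; // default for replication.localPingThresholdMs
var members = [
    { name: "primary1.com:27028",  pingMs: 49 },
    { name: "primary2.com:27028",  pingMs: 53 },
    { name: "failover1.com:27028", pingMs: 1 }
];
var fastest = Math.min.apply(null, members.map(function (m) { return m.pingMs; }));
var eligible = members.filter(function (m) { return m.pingMs <= fastest + localPingThresholdMs; });
// one of the eligible members is then chosen at random
var chosen = eligible[Math.floor(Math.random() * eligible.length)];
print("eligible: " + eligible.map(function (m) { return m.name; }).join(", "));
print("chosen:   " + chosen.name);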
So what I do is combine nearest with tags, so that I can manually specify the member that I know is geographically closest.
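With the dc tags already present in rs.conf() above, this can also be expressed as connection-string options; something along these lines (a sketch; the host, port and database name are placeholders for the mongos the application already points at):
mongodb://localhost:27017/MyApplicationData?readPreference=nearest&readPreferenceTags=dc:failover
Adding a second, empty readPreferenceTags= option would allow falling back to any member if no tagged member is available.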
Problem Description
I have a three-member replica set and a PHP web front end that a) writes a record, and then b) does a .find() on the collection and returns all documents in the database.
To better understand how replica sets work, I did the following:
Stopped the mongo service on the primary server (mongohost1). The web page kept working.
Stopped the mongo service on the server that was promoted to primary (mongohost2). At this point, even though I have another mongo host (mongohost3) with the same database, the PHP web app fails with the above error message.
I was expecting that the system would let me at least read the records from the database, even if the write failed.
What I've checked / tried so far:
All of the hosts are reachable. I've tried pinging by hostname from each box and it all works.
Here's how the replica set has been configured as per mongohost3:
jlrs0:SECONDARY> cfg=rs.config()
{
"_id" : "jlrs0",
"version" : 5,
"members" : [
{
"_id" : 0,
"host" : "monghost1.test.mm.org:27017",
"priority" : 3
},
{
"_id" : 1,
"host" : "mongohost2.test.mm.org:27017",
"priority" : 2
},
{
"_id" : 2,
"host" : "mongohost3.test.mm.org:27017",
"priority" : 2
}
]
}
jlrs0:SECONDARY>
and the status of each member in the replica set per mongohost3:
jlrs0:SECONDARY> rs.status()
{
"set" : "jlrs0",
"date" : ISODate("2014-11-19T15:16:21Z"),
"myState" : 2,
"members" : [
{
"_id" : 0,
"name" : "mongohost1.test.mm.org:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(1416419914, 1),
"optimeDate" : ISODate("2014-11-19T17:58:34Z"),
"lastHeartbeat" : ISODate("2014-11-19T15:16:20Z"),
"lastHeartbeatRecv" : ISODate("2014-11-19T14:06:49Z"),
"pingMs" : 0
},
{
"_id" : 1,
"name" : "mongohost2.test.mm.org:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(1416419914, 5),
"optimeDate" : ISODate("2014-11-19T17:58:34Z"),
"lastHeartbeat" : ISODate("2014-11-19T15:16:17Z"),
"lastHeartbeatRecv" : ISODate("2014-11-19T14:10:58Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "mongohost3.test.mm.org:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 451417,
"optime" : Timestamp(1416419914, 5),
"optimeDate" : ISODate("2014-11-19T17:58:34Z"),
"self" : true
}
],
"ok" : 1
}
Here's the PHP code to connect:
$m = new MongoClient("mongodb://mongohost1.test.mm.org:27017,mongohost2.test.mm.org:27017,mongohost3.test.mm.org:27017/?replicaSet=jlrs0");
I'm still reading up on replica sets, so I'm sure it's something that I've missed or neglected to set up.
For example, I haven't set up an arbiter...
Not sure if it's related or not, but just in case, I thought I'd mention it. I'm not sure what else to check.
Thanks.
You need to set your read preference to primaryPreferred.
You need to specify that it is OK to read from a secondary when the primary is not available.
By default, it is not.
Link to documentation
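For example (a sketch reusing the connection string from the question), the read preference can be passed as a connection URI option; the legacy PHP driver also exposes this as MongoClient::RP_PRIMARY_PREFERRED via setReadPreference():
mongodb://mongohost1.test.mm.org:27017,mongohost2.test.mm.org:27017,mongohost3.test.mm.org:27017/?replicaSet=jlrs0&readPreference=primaryPreferred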
Please also check your PHP mongo PECL library version.
Before 1.5.6 there were two bugs related to PHP not selecting the primary server after a failure in a replica set.
Be sure to have pecl mongo at least 1.5.6.
I'm having trouble connecting to a replica set from my C++ application.
Everything is fine when connecting to a single mongo instance, but the application crashes when trying to connect to a replica set.
"Crashes" means that the process just disappears after entering ScopedDbConnection::getScopedDbConnection.
Below is my code. It was compiled on an EC2 instance running Amazon Linux with the g++ compiler.
I come from the Windows world and don't know how to extract more information about the crash (for example, a stack trace).
void run()
{
    syslog(LOG_INFO, "Before connection");
    // scoped_ptr<ScopedDbConnection> conn( ScopedDbConnection::getScopedDbConnection("54.83.49.200")); // works just fine
    // next line causes a crash
    scoped_ptr<ScopedDbConnection> conn( ScopedDbConnection::getScopedDbConnection("myreplset/54.83.49.200,54.83.53.241,54.83.52.158"));
    DBClientBase * client = conn->get();
    syslog(LOG_INFO, "After connection"); // never happens if connecting to replica set
    do_something(client);
    conn->done();
}
The MongoDB servers are installed on Amazon EC2 from pre-configured images (AMIs) provided by 10gen and are version 2.4.9.
The only change was setting up the replica set.
The C++ driver was compiled from MongoDB source version 2.4.9. The Boost version is 1.53.
Replica set status:
myreplset:PRIMARY> rs.status()
{
"set" : "myreplset",
"date" : ISODate("2014-03-26T22:15:11Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "54.83.49.200:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 595,
"optime" : Timestamp(1395872014, 1),
"optimeDate" : ISODate("2014-03-26T22:13:34Z"),
"self" : true
},
{
"_id" : 1,
"name" : "54.83.53.241:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 95,
"optime" : Timestamp(1395872014, 1),
"optimeDate" : ISODate("2014-03-26T22:13:34Z"),
"lastHeartbeat" : ISODate("2014-03-26T22:15:10Z"),
"lastHeartbeatRecv" : ISODate("2014-03-26T22:15:11Z"),
"pingMs" : 1,
"syncingTo" : "54.83.49.200:27017"
},
{
"_id" : 2,
"name" : "54.83.52.158:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 93,
"lastHeartbeat" : ISODate("2014-03-26T22:15:10Z"),
"lastHeartbeatRecv" : ISODate("2014-03-26T22:15:10Z"),
"pingMs" : 1
}
],
"ok" : 1
}
The firewall seems to be configured correctly.
Any help is very much appreciated.
You can add the port info and try:
scoped_ptr<ScopedDbConnection> conn( ScopedDbConnection::getScopedDbConnection("myreplset/54.83.49.200:27017,54.83.53.241:27017,54.83.52.158:27017"));
Refer to client/dbclientinterface.h: ConnectionString handles parsing the different ways to connect to mongo and determining the method. The replica set form is:
foo/server:port,server:port   (SET)
I am trying to configure a replica set with two nodes, but when I execute rs.add("node2") and then rs.status(), both nodes are set to PRIMARY. Also, when I run rs.status() on the other node, the only node that appears is the local one.
Edit1:
rs.status() output:
{
"set" : "rs0",
"date" : ISODate("2012-09-22T01:01:12Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "node1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 70968,
"optime" : Timestamp(1348207012000, 1),
"optimeDate" : ISODate("2012-09-21T05:56:52Z"),
"self" : true
},
{
"_id" : 1,
"name" : "node2:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 68660,
"optime" : Timestamp(1348205568000, 1),
"optimeDate" : ISODate("2012-09-21T05:32:48Z"),
"lastHeartbeat" : ISODate("2012-09-22T01:01:11Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Edit2: I tried doing the same thing with 3 different nodes and I got the same result (rs.status() says I have a replica set with three primary nodes). Is it possible that this problem is caused by some specific configuration of the network?
If you issue rs.initiate() on both members of the replica set before rs.add(), then both will come up as primary.
You should only run rs.initiate() on one member of the replica set: the one that you intend to be primary initially. Then you can rs.add() the other member to the replica set.
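In other words (a sketch using the host names from the question):
// on node1 only -- the member you intend to be the initial primary
rs.initiate()
rs.add("node2:27017")
// on node2: start mongod with the same --replSet name, but do NOT run rs.initiate();
// it will join as a secondary once it has been added from node1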
The answer above does not explain how to fix it. I got it working more or less by trial and error.
I cleaned out the data directory (as in rm -rf *) on all of the PRIMARY nodes except one, restarted them, and added them back. It seems to work.
Edit1
The nice little trick below did not seem to work for me, so I logged into the mongod console using mongo <hostname>:27018.
Here is what the shell session looks like:
rs2:PRIMARY> rs.conf()
{
"_id" : "rs2",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "ip-10-159-42-911:27018"
}
]
}
I decided to change it to secondary. So,
rs2:PRIMARY> var c = {
... "_id" : "rs2",
... "version" : 1,
... "members" : [
... {
... "_id" : 1,
... "host" : "ip-10-159-42-911:27018",
... "priority": 0.5
... }
... ]
... }
rs2:PRIMARY> rs.reconfig(c, { "force": true})
Mon Nov 11 19:46:39.244 DBClientCursor::init call() failed
Mon Nov 11 19:46:39.245 trying reconnect to ip-10-159-42-911:27018
Mon Nov 11 19:46:39.245 reconnect ip-10-159-42-911:27018 ok
reconnected to server after rs command (which is normal)
rs2:SECONDARY>
Now it is secondary. I do not know if there is a better way. But this seems to work.
HTH