MongoDB replica set unreachable

I am trying to configure a three-member MongoDB replica set on a single machine. I seem to have gotten into a bad state: two of my instances went down, leaving me with only secondary nodes. I tried to follow this: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
I got this error though:
rs0:SECONDARY> rs.reconfig(cfg, {force : true})
{
"errmsg" : "exception: need most members up to reconfigure, not ok : obfuscated_hostname:27019",
"code" : 13144,
"ok" : 0
}
When I look at the logs I see this:
Fri Aug 2 20:45:11.895 [initandlisten] options: { config: "/etc/mongodb1.conf",
dbpath: "/var/lib/mongodb1", logappend: "true", logpath: "/var/log/mongodb/mongodb1.log",
port: 27018, replSet: "rs0" }
Fri Aug 2 20:45:11.897 [initandlisten] journal dir=/var/lib/mongodb1/journal
Fri Aug 2 20:45:11.897 [initandlisten] recover begin
Fri Aug 2 20:45:11.897 [initandlisten] recover lsn: 0
Fri Aug 2 20:45:11.897 [initandlisten] recover /var/lib/mongodb1/journal/j._0
Fri Aug 2 20:45:11.899 [initandlisten] recover cleaning up
Fri Aug 2 20:45:11.899 [initandlisten] removeJournalFiles
Fri Aug 2 20:45:11.899 [initandlisten] recover done
Fri Aug 2 20:45:11.923 [initandlisten] waiting for connections on port 27018
Fri Aug 2 20:45:11.925 [websvr] admin web console waiting for connections on port 28018
Fri Aug 2 20:45:11.927 [rsStart] replSet I am hostname_obfuscated:27018
Fri Aug 2 20:45:11.927 [rsStart] replSet STARTUP2
Fri Aug 2 20:45:11.929 [rsHealthPoll] replset info hostname_obf:27017 thinks that we are down
Fri Aug 2 20:45:11.929 [rsHealthPoll] replSet member hostname_obf:27017 is up
Fri Aug 2 20:45:11.929 [rsHealthPoll] replSet member hostname_obf:27017 is now in state SECONDARY
Fri Aug 2 20:45:12.587 [initandlisten] connection accepted from ip_obf:52446 #1 (1 connection now open)
Fri Aug 2 20:45:12.587 [initandlisten] connection accepted from ip_obf:52447 #2 (2 connections now open)
Fri Aug 2 20:45:12.588 [conn1] end connection ip_obf:52446 (1 connection now open)
Fri Aug 2 20:45:12.928 [rsSync] replSet SECONDARY
I'm unable to connect to the mongo instances, even though the logs say they are up and running. Any ideas on what to do here?

You did not mention which version of MongoDB you are using, but I assume it is post-2.0.
I think the problem with your forced reconfiguration is that even after it, the set still needs a majority of the members in the configuration to be reachable (that is what "need most members up" refers to). Since you originally had 3 members and lost 2, there is no way to turn that single surviving node into a functioning replica set with the old configuration.
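The arithmetic behind that check, as a toy illustration (plain JavaScript, not a MongoDB API):

```javascript
// A strict majority of n voting members is "more than half": floor(n/2) + 1.
function majority(n) {
  return Math.floor(n / 2) + 1;
}
// With a 3-member set, majority(3) is 2, so a single surviving node
// cannot satisfy an ordinary (non-forced) reconfig on its own.
```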
Your only option for recovery is to bring up the surviving node as a stand-alone server, back up the database, and then create a new 3-node replica set from that data.
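That recovery path, sketched as shell commands (dbpath and port are taken from the log above; the backup location is an example, and the commands assume the standard mongod/mongodump/mongorestore binaries of this era):

```shell
# 1. Restart the surviving node WITHOUT the replSet option so it comes up standalone
mongod --dbpath /var/lib/mongodb1 --port 27018

# 2. From another terminal, back up the data from the standalone instance
mongodump --port 27018 --out /backup/rs0-recovery

# 3. Start three fresh mongod instances with --replSet rs0, restore the dump
#    into the one you will initiate, then build the new set from the mongo
#    shell with rs.initiate() and rs.add(...):
mongorestore --port 27018 /backup/rs0-recovery
```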

Yes, you can promote a single surviving secondary to primary, provided that secondary itself is running fine. Follow these steps:
Step 1: Connect to the member and check the current configuration:
rs.conf()
Step 2: Save the current configuration into a variable:
x = rs.conf()
Step 3: Keep only the _id, host, and port of the member that is to become primary:
x.members = [{"_id":1,"host" : "localhost.localdomain:27017"}]
Step 4: Force the reconfiguration:
rs.reconfig(x, {force:true})
The remaining member will then be promoted to primary.


MongoDB balancer has a big chunk difference

I have a sharded Mongo environment. Everything was OK, but recently I noticed that
the shards have a big difference in chunk counts:
chunks:
ProductionShardC 939
ProductionShardB 986
ProductionShardA 855
edPrimaryShard 1204
The balancer is running, and I can also see it in the locks collection:
db.locks.find( { _id : "balancer" } ).pretty()
{
"_id" : "balancer",
"process" : "ip-10-0-0-100:27017:1371132087:1804289383",
"state" : 2,
"ts" : ObjectId("51e1e5d75e1777de5f007ea5"),
"when" : ISODate("2013-07-13T23:42:15.660Z"),
"who" : "ip-10-0-0-100:27017:1371132087:1804289383:Balancer:846930886",
"why" : "doing balance round"
}
Here is /var/log/mongo/mongos.log from the mongos:
cat mongos.log
Sun Aug 4 15:33:29.859 [mongosMain] MongoS version 2.4.4 starting: pid=8520 port=27017 64-bit host=ip-10-0-0-100 (--help for usage)
Sun Aug 4 15:33:29.859 [mongosMain] git version: 4ec1fb96702c9d4c57b1e06dd34eb73a16e407d2
Sun Aug 4 15:33:29.859 [mongosMain] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Sun Aug 4 15:33:29.859 [mongosMain] options: { configdb: "10.0.1.200:27019,10.0.1.201:27019,10.0.1.202:27019", keyFile: "/media/Data/db/mongoKeyFile", logpath: "/var/log/mongo/mongos.log" }
Sun Aug 4 15:33:30.078 [mongosMain] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:30.079 [mongosMain] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:30.080 [mongosMain] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:30.092 [mongosMain] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:30.093 [mongosMain] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:30.093 [mongosMain] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:30.809 [mongosMain] waiting for connections on port 27017
Sun Aug 4 15:33:30.809 [Balancer] about to contact config servers and shards
Sun Aug 4 15:33:30.810 [websvr] admin web console waiting for connections on port 28017
Sun Aug 4 15:33:30.810 [Balancer] starting new replica set monitor for replica set edPrimaryShard with seed of 10.0.1.150:27017,10.0.1.151:27017,10.0.1.152:27017
Sun Aug 4 15:33:30.811 [Balancer] successfully connected to seed 10.0.1.150:27017 for replica set edPrimaryShard
Sun Aug 4 15:33:30.811 [Balancer] changing hosts to { 0: "10.0.1.150:27017", 1: "10.0.1.152:27017", 2: "10.0.1.151:27017" } from edPrimaryShard/
Sun Aug 4 15:33:30.811 [Balancer] trying to add new host 10.0.1.150:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.812 [Balancer] successfully connected to new host 10.0.1.150:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:30.812 [Balancer] trying to add new host 10.0.1.151:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] successfully connected to new host 10.0.1.151:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] trying to add new host 10.0.1.152:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] successfully connected to new host 10.0.1.152:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:31.013 [Balancer] Primary for replica set edPrimaryShard changed to 10.0.1.150:27017
Sun Aug 4 15:33:31.019 [Balancer] replica set monitor for replica set edPrimaryShard started, address is edPrimaryShard/10.0.1.150:27017,10.0.1.151:27017,10.0.1.152:27017
Sun Aug 4 15:33:31.019 [ReplicaSetMonitorWatcher] starting
Sun Aug 4 15:33:31.021 [Balancer] starting new replica set monitor for replica set ProductionShardA with seed of 10.0.1.160:27017,10.0.1.161:27017,10.0.1.162:27017
Sun Aug 4 15:33:31.021 [Balancer] successfully connected to seed 10.0.1.160:27017 for replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] changing hosts to { 0: "10.0.1.160:27017", 1: "10.0.1.162:27017", 2: "10.0.1.161:27017" } from ProductionShardA/
Sun Aug 4 15:33:31.022 [Balancer] trying to add new host 10.0.1.160:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] successfully connected to new host 10.0.1.160:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] trying to add new host 10.0.1.161:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.023 [Balancer] successfully connected to new host 10.0.1.161:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.023 [Balancer] trying to add new host 10.0.1.162:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.024 [Balancer] successfully connected to new host 10.0.1.162:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.187 [Balancer] Primary for replica set ProductionShardA changed to 10.0.1.160:27017
Sun Aug 4 15:33:31.232 [Balancer] replica set monitor for replica set ProductionShardA started, address is ProductionShardA/10.0.1.160:27017,10.0.1.161:27017,10.0.1.162:27017
Sun Aug 4 15:33:31.234 [Balancer] starting new replica set monitor for replica set ProductionShardB with seed of 10.0.1.170:27017,10.0.1.171:27017,10.0.1.172:27017
Sun Aug 4 15:33:31.235 [Balancer] successfully connected to seed 10.0.1.170:27017 for replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] changing hosts to { 0: "10.0.1.170:27017", 1: "10.0.1.172:27017", 2: "10.0.1.171:27017" } from ProductionShardB/
Sun Aug 4 15:33:31.237 [Balancer] trying to add new host 10.0.1.170:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] successfully connected to new host 10.0.1.170:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] trying to add new host 10.0.1.171:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] successfully connected to new host 10.0.1.171:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] trying to add new host 10.0.1.172:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] successfully connected to new host 10.0.1.172:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.361 [Balancer] Primary for replica set ProductionShardB changed to 10.0.1.170:27017
Sun Aug 4 15:33:31.379 [Balancer] replica set monitor for replica set ProductionShardB started, address is ProductionShardB/10.0.1.170:27017,10.0.1.171:27017,10.0.1.172:27017
Sun Aug 4 15:33:31.383 [Balancer] starting new replica set monitor for replica set ProductionShardC with seed of 10.0.1.180:27017,10.0.1.181:27017,10.0.1.182:27017
Sun Aug 4 15:33:31.383 [Balancer] successfully connected to seed 10.0.1.180:27017 for replica set ProductionShardC
Sun Aug 4 15:33:31.384 [Balancer] changing hosts to { 0: "10.0.1.180:27017", 1: "10.0.1.182:27017", 2: "10.0.1.181:27017" } from ProductionShardC/
Sun Aug 4 15:33:31.384 [Balancer] trying to add new host 10.0.1.180:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] successfully connected to new host 10.0.1.180:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] trying to add new host 10.0.1.181:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] successfully connected to new host 10.0.1.181:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] trying to add new host 10.0.1.182:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.386 [Balancer] successfully connected to new host 10.0.1.182:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.499 [Balancer] Primary for replica set ProductionShardC changed to 10.0.1.180:27017
Sun Aug 4 15:33:31.510 [Balancer] replica set monitor for replica set ProductionShardC started, address is ProductionShardC/10.0.1.180:27017,10.0.1.181:27017,10.0.1.182:27017
Sun Aug 4 15:33:31.513 [Balancer] config servers and shards contacted successfully
Sun Aug 4 15:33:31.513 [Balancer] balancer id: ip-10-0-0-100:27017 started at Aug 4 15:33:31
Sun Aug 4 15:33:31.513 [Balancer] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:31.514 [Balancer] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:31.514 [Balancer] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:31.537 [LockPinger] creating distributed lock ping thread for 10.0.1.200:27019,10.0.1.201:27019,10.0.1.202:27019 and process ip-10-0-0-100:27017:1375619611:1804289383 (sleeping for 30000ms)
Sun Aug 4 15:33:35.777 [mongosMain] connection accepted from 84.108.44.142:50916 #1 (1 connection now open)
Sun Aug 4 15:33:35.963 [conn1] authenticate db: admin { authenticate: 1, user: "root", nonce: "50c90ba9496d0a2d", key: "52390c478fffe89d03b776dd14e7c0d6" }
Sun Aug 4 15:33:37.704 [conn1] ChunkManager: time to load chunks for profiles.devices: 104ms sequenceNumber: 2 version: 2898|1177||51bb0e3a5e1777de5ffbf898 based on: (empty)
Sun Aug 4 15:33:37.712 [conn1] ChunkManager: time to load chunks for profiles.user_devices: 4ms sequenceNumber: 3 version: 92|25||51bb10be5e1777de5ffbf8d5 based on: (empty)
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.150:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.151:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.152:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.160:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.161:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.162:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.170:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.171:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.172:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.180:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.181:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.182:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:39.468 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.468 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.469 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.469 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: userFlags
Why is there such a big difference? One shard has 855 chunks and another 1204.
How can I fix it?

MongoDB provides a basic authentication system. Has it changed in version 2.2.3?

Scenario: Installed MongoDB 2.2.3 on the machine (Windows 64-bit)
Followed all the steps to enforce authentication on the MongoDB server:
Added a user to the admin database:
use admin
db.addUser('me_admin', '12345');
db.auth('me_admin','12345');
Ran the database server (the mongod.exe process) with the --auth option to enable authentication.
Followed all the answers to this similar question:
How to secure MongoDB with username and password
Issue: With the new version 2.2.3, I am not able to set up authentication. Following the same steps, I was able to set up authentication for version 2.0.8 on the same machine. But it's mentioned somewhere in the MongoDB docs that "Authentication on Localhost varies slightly between before and after version 2.2".
Question: What is the change, and how can I enforce authentication in the new versions, i.e. 2.2 onwards? Can anybody give some idea or solution to proceed with the new MongoDB 2.2.3?
Update:
I checked that authentication works the same on 2.2.3 when I start the mongod.exe process with the --auth parameter from the command prompt.
I was using the auth=true parameter in the config file as mentioned in the docs, but this did not work.
Research done:
When the mongod.cfg file contains the following configuration:
logpath=c:\mongodb\log\mongo.log, auth=true, profile=2
the log file contains the following:
Mon Mar 11 15:06:35 Trying to start Windows service 'MongoDB'
Mon Mar 11 15:06:35 Service running
Mon Mar 11 15:06:35 [initandlisten] MongoDB starting : pid=7152 port=27017 dbpath=\data\db\ 64-bit host=AMOL-KULKARNI
Mon Mar 11 15:06:35 [initandlisten] db version v2.2.3, pdfile version 4.5
Mon Mar 11 15:06:35 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08
Mon Mar 11 15:06:35 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Mon Mar 11 15:06:35 [initandlisten] options: { config: "C:\mongodb\mongod.cfg", logpath: "c:\mongodb\log\mongo.log auth=true profile=2", service: true }
Mon Mar 11 15:06:35 [initandlisten] journal dir=/data/db/journal
Mon Mar 11 15:06:35 [initandlisten] recover : no journal files present, no recovery needed
Mon Mar 11 15:06:35 [initandlisten] waiting for connections on port 27017
Mon Mar 11 15:06:35 [websvr] admin web console waiting for connections on port 28017
When I run mongod --auth from the command prompt, the following log is displayed:
Mon Mar 11 15:09:40 [initandlisten] MongoDB starting : pid=6536 port=27017 dbpath=\data\db\ 64-bit host=AMOL-KULKARNI
Mon Mar 11 15:09:40 [initandlisten] db version v2.2.3, pdfile version 4.5
Mon Mar 11 15:09:40 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08
Mon Mar 11 15:09:40 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Mon Mar 11 15:09:40 [initandlisten] options: { auth: true }
Mon Mar 11 15:09:40 [initandlisten] journal dir=/data/db/journal
Mon Mar 11 15:09:40 [initandlisten] recover : no journal files present, no recovery needed
Mon Mar 11 15:09:40 [initandlisten] waiting for connections on port 27017
Mon Mar 11 15:09:40 [websvr] admin web console waiting for connections on port 28017
Note the change in the options:
options:{ config: "C:\mongodb\mongod.cfg", logpath:
"c:\mongodb\log\mongo.log auth=true profile=2", service: true } //Does not work
options: { auth: true } //Works
It is interesting to observe that, when run from the config file,
logpath=c:\mongodb\log\mongo.log, auth=true, profile=2
got changed to:
logpath: "c:\mongodb\log\mongo.log auth=true profile=2", service: true
I know this is the issue. It should be:
logpath: "c:\mongodb\log\mongo.log", auth: true, profile: 2, service: true
So, the question is: how do I pass the auth=true parameter from the config file and run the mongod.exe process as a service on Windows 7?
The change is only minor, as described under the part you quoted:
In general if there are no users for the admin database, you may connect via the localhost interface. For sharded clusters running version 2.2, if mongod is running with auth then all users connecting over the localhost interface must authenticate, even if there aren’t any users in the admin database.
Basically, before 2.2, if you were in a sharded cluster you could connect to localhost and not be forced to authenticate when no users were found in the admin database. This means that if you set up a sharded cluster, it is wise to set up a default user, which you have already done.
Can anybody give some idea or solution to proceed the same with new MongoDB 2.2.3?
The new auth system will just be there; you don't need to do anything to enable it.
Found the solution.
To run the MongoDB process (mongod.exe) as a service with auth=true, the following has to be done when registering the MongoDB service itself (not mentioned in the docs).
The service has to be registered with the following command:
C:\mongodb\bin\mongod.exe --config C:\mongodb\mongod.cfg --auth --install
The mongod.cfg file then only needs to contain logpath=c:\mongodb\log\mongo.log.
Sharing this so that effort and time are not spent on the same issue again.
Happy exploring to all.. :-)
Note that this time the log contains:
Mon Mar 11 15:58:06 Trying to start Windows service 'MongoDB'
Mon Mar 11 15:58:06 Service running
Mon Mar 11 15:58:06 [initandlisten] MongoDB starting : pid=6800 port=27017 dbpath=\data\db\ 64-bit host=AMOL-KULKARNI
Mon Mar 11 15:58:06 [initandlisten] db version v2.2.3, pdfile version 4.5
Mon Mar 11 15:58:06 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08
Mon Mar 11 15:58:06 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Mon Mar 11 15:58:06 [initandlisten] options: { auth: true, config: "C:\mongodb\mongod.cfg", logpath: "c:\mongodb\log\mongo.log", service: true }
Mon Mar 11 15:58:06 [initandlisten] journal dir=/data/db/journal
Mon Mar 11 15:58:06 [initandlisten] recover : no journal files present, no recovery needed
Mon Mar 11 15:58:06 [initandlisten] waiting for connections on port 27017
Mon Mar 11 15:58:06 [websvr] admin web console waiting for connections on port 28017
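A side note, hedged as a reading of the logs above: the mangled logpath in the first options dump suggests all three options were written on a single comma-separated line, while the 2.2-era INI-style config file expects one option per line, like:

```ini
logpath=c:\mongodb\log\mongo.log
auth=true
profile=2
```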

MongoDB gives an error during startup

Whenever I try to play with mongo's interactive shell, it dies:
somekittens#DLserver01:~$ mongo
MongoDB shell version: 2.2.2
connecting to: test
Mon Dec 17 13:14:16 DBClientCursor::init call() failed
Mon Dec 17 13:14:16 Error: Error during mongo startup. :: caused by :: 10276 DBClientBase::findN: transport error: 127.0.0.1:27017 ns: admin.$cmd query: { whatsmyuri: 1 } src/mongo/shell/mongo.js:91
exception: connect failed
I'm able to repair the install (deleting mongodb.lock, etc) and get back to this point, but it'll only die again.
/var/log/mongodb/mongodb.log
Mon Dec 17 13:14:03
Mon Dec 17 13:14:03 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Mon Dec 17 13:14:03
Mon Dec 17 13:14:03 [initandlisten] MongoDB starting : pid=2674 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
Mon Dec 17 13:14:03 [initandlisten]
Mon Dec 17 13:14:03 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Mon Dec 17 13:14:03 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Mon Dec 17 13:14:03 [initandlisten] ** with --journal, the limit is lower
Mon Dec 17 13:14:03 [initandlisten]
Mon Dec 17 13:14:03 [initandlisten] db version v2.2.2, pdfile version 4.5
Mon Dec 17 13:14:03 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
Mon Dec 17 13:14:03 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
Mon Dec 17 13:14:03 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
Mon Dec 17 13:14:03 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
Mon Dec 17 13:14:03 [initandlisten] couldn't unlink socket file /tmp/mongodb-27017.sockerrno:1 Operation not permitted skipping
Mon Dec 17 13:14:03 [initandlisten] waiting for connections on port 27017
Mon Dec 17 13:14:03 [websvr] admin web console waiting for connections on port 28017
Mon Dec 17 13:14:16 [initandlisten] connection accepted from 127.0.0.1:57631 #1 (1 connection now open)
Mon Dec 17 13:14:16 Invalid operation at address: 0x819bb23 from thread: conn1
Mon Dec 17 13:14:16 Got signal: 4 (Illegal instruction).
Mon Dec 17 13:14:16 Backtrace:
0x8759eaa 0x817033a 0x81709ff 0x20e40c 0x819bb23 0x854cd54 0x85377d1 0x846b594 0x83e5591 0x83e6c15 0x81902b4 0x8746731 0x49ad4c 0x34ed3e
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x8759eaa]
/usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x3ba) [0x817033a]
/usr/bin/mongod(_ZN5mongo24abruptQuitWithAddrSignalEiP7siginfoPv+0x2af) [0x81709ff]
[0x20e40c]
/usr/bin/mongod(_ZNK5mongo7BSONObj4copyEv+0x33) [0x819bb23]
/usr/bin/mongod(_ZN5mongo11ParsedQuery4initERKNS_7BSONObjE+0x494) [0x854cd54]
/usr/bin/mongod(_ZN5mongo11ParsedQueryC1ERNS_12QueryMessageE+0x91) [0x85377d1]
/usr/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x34) [0x846b594]
/usr/bin/mongod() [0x83e5591]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x3d5) [0x83e6c15]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x84) [0x81902b4]
/usr/bin/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x421) [0x8746731]
/lib/i386-linux-gnu/libpthread.so.0(+0x6d4c) [0x49ad4c]
/lib/i386-linux-gnu/libc.so.6(clone+0x5e) [0x34ed3e]
Connecting using node's shell:
> mdb.open(function(err, db) { console.log(err) });
[Error: failed to connect to [localhost:27017]]
I've searched around for this error and found nothing of use. This is running on a fairly old server (Ubuntu 12.04 32-bit, 640MB RAM, 500MHz P2). How can I fix this?
There is an issue, "Invalid operation at address: 0x819b263 from thread: TTLMonitor", in the MongoDB JIRA list; I think it describes your case.
A new server may be the easiest solution; otherwise you would have to download the source code, make some modifications, and compile it yourself.

How can I fix "EMPTYUNREACHABLE" on deploying a test replset on my mac?

I'm trying to deploy a development/test replica set on my MacBook Pro using this document:
http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
I started 3 instances of mongod, on ports 10000, 10001, and 10002.
I used configuration files to start mongod. The configuration files are below:
rs0:
dbpath = /Users/Thomas/mongodb/data/rs0/
port = 10000
logpath = /Users/Thomas/mongodb/log/rs0.log
logappend = true
replSet = rs0
rs1:
dbpath = /Users/Thomas/mongodb/data/rs1/
port = 10001
logpath = /Users/Thomas/mongodb/log/rs1.log
logappend = true
replSet = rs0
rs2:
dbpath = /Users/Thomas/mongodb/data/rs2/
port = 10002
logpath = /Users/Thomas/mongodb/log/rs2.log
logappend = true
replSet = rs0
And used the following commands to start them:
mongod -f config/rs0.conf
mongod -f config/rs1.conf
mongod -f config/rs2.conf
Then I connected to it using mongo: mongo localhost:10001. But when I used the rs.initiate() command to initialize the replica set, it failed:
> rs.initiate()
{
"startupStatus" : 4,
"info" : "rs0",
"errmsg" : "all members and seeds must be reachable to initiate set",
"ok" : 0
}
and checked with rs.status():
> rs.status()
{
"startupStatus" : 4,
"errmsg" : "can't currently get local.system.replset config from self or any seed (EMPTYUNREACHABLE)",
"ok" : 0
}
The log shows it's an EMPTYUNREACHABLE error. How can I solve it?
***** SERVER RESTARTED *****
Mon Oct 15 22:02:31 [initandlisten] MongoDB starting : pid=568 port=10000 dbpath=/Users/Thomas/mongodb/data/rs0/ 64-bit host=bogon
Mon Oct 15 22:02:31 [initandlisten]
Mon Oct 15 22:02:31 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
Mon Oct 15 22:02:31 [initandlisten] db version v2.2.0, pdfile version 4.5
Mon Oct 15 22:02:31 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
Mon Oct 15 22:02:31 [initandlisten] build info: Darwin bs-osx-106-x86-64-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49
Mon Oct 15 22:02:31 [initandlisten] options: { config: "config/rs0.conf", dbpath: "/Users/Thomas/mongodb/data/rs0/", logappend: "true", logpath: "/Users/Thomas/mongodb/log/rs0.log", port: 10000, replSet: "rs0", rest: "true" }
Mon Oct 15 22:02:31 [initandlisten] journal dir=/Users/Thomas/mongodb/data/rs0/journal
Mon Oct 15 22:02:31 [initandlisten] recover : no journal files present, no recovery needed
Mon Oct 15 22:02:31 [websvr] admin web console waiting for connections on port 11000
Mon Oct 15 22:02:31 [initandlisten] waiting for connections on port 10000
Mon Oct 15 22:02:37 [rsStart] trying to contact bogon:10000
Mon Oct 15 22:02:43 [rsStart] couldn't connect to bogon:10000: couldn't connect to server bogon:10000
Mon Oct 15 22:02:49 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
Mon Oct 15 22:03:05 [rsStart] trying to contact bogon:10000
Mon Oct 15 22:03:11 [rsStart] couldn't connect to bogon:10000: couldn't connect to server bogon:10000
Mon Oct 15 22:03:17 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
Mon Oct 15 22:03:33 [rsStart] trying to contact bogon:10000
Mon Oct 15 22:03:39 [rsStart] couldn't connect to bogon:10000: couldn't connect to server bogon:10000
Mon Oct 15 22:03:45 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
Mon Oct 15 22:04:01 [rsStart] trying to contact bogon:10000
Mon Oct 15 22:04:07 [rsStart] couldn't connect to bogon:10000: couldn't connect to server bogon:10000
Mon Oct 15 22:04:13 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
Try setting bind_ip to 127.0.0.1, or add an entry for `bogon' to your /etc/hosts.
mongod appears to be using the local system's hostname(), but that name is not resolvable.
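For example, a hosts entry mapping the machine's hostname (here `bogon`, taken from the log above) to loopback is one workaround:

```ini
# /etc/hosts
127.0.0.1   localhost bogon
```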
I faced the exact same issue today. I was able to start mongodb, but not the replica sets, until I deleted the following line from the mongod.conf file:
bind_ip: "127.0.0.1"
I also noticed that the mongod.conf files for mongodb and for the replica sets were different and stored in different locations, maybe because I installed the latest version of mongodb with brew. I found my replica set config file at "/usr/local/etc/mongod.conf".
FYI, I believe you hit this bug in 2.2.0:
https://jira.mongodb.org/browse/SERVER-7367
It is now fixed and scheduled for release in 2.2.1. The workaround for 2.2.0 is basically to make sure that you use resolvable, reachable addresses for your set members, because 2.2.0 tries to reach even local instances over the network.

MongoDB - too many connections w/ node.js

I'm using node.js and MongoDB for my application. Whenever I use localhost, I have no problems and the application works fine. However, on the server, the database is limiting me to 50 connections. Here's an example of the log:
Wed Jul 27 13:33:29 [initandlisten] waiting for connections on port 27017
Wed Jul 27 13:33:29 [websvr] web admin interface listening on port 28017
Wed Jul 27 13:34:50 [initandlisten] connection accepted from 127.0.0.1:42035 #1
Wed Jul 27 13:35:16 [initandlisten] connection accepted from 127.0.0.1:42181 #2
Wed Jul 27 13:35:16 [initandlisten] connection accepted from 127.0.0.1:42182 #3
Wed Jul 27 13:35:25 [initandlisten] connection accepted from 127.0.0.1:42249 #4
...
Wed Jul 27 13:36:09 [initandlisten] connection accepted from 127.0.0.1:42518 #50
Wed Jul 27 13:36:10 [initandlisten] connection accepted from 127.0.0.1:42524 #51
Wed Jul 27 13:36:10 [initandlisten] can't create new thread, closing connection
I'm launching the process with the command mongod --maxConns=5000. Does anyone know what could be causing this connection limit?
Can you post the code you're using to connect? If you're connecting to the DB on each request then you'll quickly run out of connections. In most cases it's best to share the DB connection among requests, for example by connecting on app startup.
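A minimal sketch of that pattern (plain JavaScript; `connect` stands in for whatever actually opens the driver connection and is not from the question):

```javascript
// Open the database connection once and hand the same handle to every
// request, instead of reconnecting per request.
let db = null;

function initDb(connect) {
  // connect() performs the real driver connection (e.g. at app startup);
  // it is invoked at most once for the lifetime of the process.
  if (db === null) {
    db = connect();
  }
  return db;
}
```

Every request handler then calls initDb and gets the already-open handle, so the server holds one connection (or one pool) rather than one per request.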