mongorestore not working. collection is empty - mongodb

I am trying to dump a MongoDB collection to a file and then use that to restore it to another MongoDB instance.
Dumping:
mongodump --host 127.0.0.1 --port 27017 --username vespauser --password <passwd> --collection vespastats --db vespa --out /archive/vespa-archive/vespa-db-backup_001
connected to: 127.0.0.1:27017
2015-04-21T16:24:07.070-0400 DATABASE: vespa to /archive/vespa-archive/vespa-db-backup_testing01/vespa
2015-04-21T16:24:07.141-0400 vespa.system.indexes to /archive/vespa-archive/vespa-db-backup_testing01/vespa/system.indexes.bson
2015-04-21T16:24:07.148-0400 4 documents
2015-04-21T16:24:07.149-0400 vespa.vespastats to /archive/vespa-archive/vespa-db-backup_testing01/vespa/vespastats.bson
2015-04-21T16:24:07.316-0400 59724 documents
2015-04-21T16:24:08.118-0400 Metadata for vespa.vespastats to /archive/vespa-archive/vespa-db-backup_testing01/vespa/vespastats.metadata.json
Restoring:
mongorestore -v --drop --host 127.0.0.1 --port 27017 --username admin --password <passwd> /archive/vespa-archive/vespa-db-backup_001
2015-04-21T16:31:11.962-0400 creating new connection to:127.0.0.1:27017
2015-04-21T16:31:11.963-0400 [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-21T16:31:11.963-0400 connected to server 127.0.0.1:27017 (127.0.0.1)
2015-04-21T16:31:11.963-0400 connected connection!
connected to: 127.0.0.1:27017
2015-04-21T16:31:11.966-0400 /home/amurty/vespa-db/vespa-db-backup_testing01/vespa/vespastats.bson
2015-04-21T16:31:11.966-0400 going into namespace [vespa.vespastats]
2015-04-21T16:31:11.966-0400 dropping
file size: 88808161
59724 objects found
2015-04-21T16:31:13.730-0400 Creating index: { key: { _id: 1 }, name: "_id_", ns: "vespa.vespastats" }
2015-04-21T16:31:13.848-0400 Creating index: { key: { url: 1 }, name: "url_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.858-0400 Creating index: { key: { r_tstpm: 1 }, name: "r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.859-0400 Creating index: { key: { url: 1, r_tstpm: 1 }, name: "url_1_r_tstpm_1", ns: "vespa.vespastats", background: true }
From /var/log/mongodb/mongod.log:
2015-04-21T16:31:11.963-0400 [initandlisten] connection accepted from 127.0.0.1:58444 #23 (1 connection now open)
2015-04-21T16:31:11.964-0400 [conn23] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "admin", key: "xxx" }
2015-04-21T16:31:11.968-0400 [conn23] CMD: drop vespa.vespastats
2015-04-21T16:31:13.757-0400 [conn23] allocating new ns file /var/lib/mongo/vespa.ns, filling with zeroes...
2015-04-21T16:31:13.838-0400 [FileAllocator] allocating new datafile /var/lib/mongo/vespa.0, filling with zeroes...
2015-04-21T16:31:13.846-0400 [FileAllocator] done allocating datafile /var/lib/mongo/vespa.0, size: 64MB, took 0.007 secs
2015-04-21T16:31:13.847-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "vespa.vespastats" }
2015-04-21T16:31:13.848-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.857-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { url: 1 }, name: "url_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.857-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.858-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { r_tstpm: 1 }, name: "r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.859-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.860-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { url: 1, r_tstpm: 1 }, name: "url_1_r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.860-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.862-0400 [conn23] end connection 127.0.0.1:58444 (0 connections now open)
Now when I log in to my new MongoDB instance and check the collection size, I get a big 0:
# mongo
MongoDB shell version: 2.6.9
connecting to: test
> use vespa
switched to db vespa
> db.auth('vespauser', '<paswd>')
1
> db.vespastats.find()
> db.vespastats.count()
0
>

The collection may or may not exist in the database you are using; either way, the query does not return an error, just 0.
db.vespastats.find().count()
The issue is most likely that the data was restored into the test database. (The docs say the target database is picked up automatically from the dump, but I was able to reproduce this behaviour.)
Therefore
use test
db.vespastats.find().count()
would have returned the actual number of documents in the vespastats collection.
The issue is caused by not specifying the database name when running the mongorestore binary. Per the mongorestore docs, mongorestore --nsInclude=vespa.vespastats is the updated syntax (even though -d still works).
To confirm where the collection lands, I ran the restore twice and checked show dbs in the mongo shell before and after each run: the database size changes (though not immediately; it may still show 8 KB right after the restore).
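For reference, an explicit restore command for this dump might look like the following (a sketch based on the commands above; the --authenticationDatabase value is an assumption about where the admin user was created):
mongorestore --host 127.0.0.1 --port 27017 --username admin --password <passwd> --authenticationDatabase admin --nsInclude='vespa.vespastats' --drop /archive/vespa-archive/vespa-db-backup_001
On older tool versions the equivalent is --db vespa --collection vespastats, pointing at the vespastats.bson file inside the dump directory.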

Related

MongoDB index failing to create/build

There was a related post stating this was fixed in 4.x, but I'm hitting the error on stable 6.0.2 as well.
Index creation:
Enterprise rs0 [direct: primary] bs> db.keyvalue.createIndex({"key": 1},{unique:true,name: "kv_key_idx"})
kv_key_idx
BUT, only the default index is present:
Enterprise rs0 [direct: primary] bs>
Indexes for keyvalue:
[ { v: 2, key: { _id: 1 }, name: '_id_' } ]
From the logs:
{"t":{"$date":"2022-11-02T12:46:02.500+00:00"},"s":"I", "c":"INDEX", "id":20438, "ctx":"conn919","msg":"Index build: registering","attr":{"buildUUID":{"uuid":{"$uuid":"d1c79715-78e7-4e4d-ae7b-af96be6b3a6b"}},"namespace":"bs.keyvalue","collectionUUID":{"uuid":{"$uuid":"f168d08c-6182-4b89-935c-cf2564bd184d"}},"indexes":1,"firstIndex":{"name":"kv_key_idx"},"command":{"createIndexes":"keyvalue","v":2,"indexes":[{"unique":true,"name":"kv_key_idx","key":{"key":1}}],"ignoreUnknownIndexOptions":false}}}
and it very much is failing:
{"t":{"$date":"2022-11-02T12:46:02.516+00:00"},"s":"I", "c":"INDEX", "id":20448, "ctx":"conn919","msg":"Index build: failed because collection dropped","attr":{"buildUUID":{"uuid":{"$uuid":"d1c79715-78e7-4e4d-ae7b-af96be6b3a6b"}},"namespace":"bs.keyvalue","collectionUUID":{"uuid":{"$uuid":"f168d08c-6182-4b89-935c-cf2564bd184d"}},"exception":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Caught exception during index builder (d1c79715-78e7-4e4d-ae7b-af96be6b3a6b) initialization on namespace bs.keyvalue (f168d08c-6182-4b89-935c-cf2564bd184d). 1 index specs provided. First index spec: { v: 2, unique: true, key: { key: 1 }, name: \"kv_key_idx\" } :: caused by :: Collection not found: config.system.indexBuilds"}}}
The db & collection is very much there:
Enterprise rs0 [direct: primary] test> show dbs
READ__ME_TO_RECOVER_YOUR_DATA 40.00 KiB
admin 80.00 KiB
bs. 2.13 TiB
config 168.00 KiB
local 59.85 GiB
Enterprise rs0 [direct: primary] test> use bs
switched to db bs
Enterprise rs0 [direct: primary] bs> show collections
keyvalue
Enterprise rs0 [direct: primary] bs> use local
switched to db local
Enterprise rs0 [direct: primary] local> show collections
oplog.rs
replset.election
replset.initialSyncId
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
system.tenantMigration.oplogView [view]
system.views
What am I missing here? Thanks!
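One thing worth checking, since the failure above literally says Collection not found: config.system.indexBuilds: list the collections in the config database and see whether that internal collection really is gone (plain shell helpers, read-only):
use config
db.getCollectionNames()
If system.indexBuilds is absent from the list, the index builder's own bookkeeping collection is missing, not the keyvalue collection being indexed.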

Mongo: db.auth() fails on Windows

I'm trying to run a mongo instance in a Windows container.
I found this answer regarding authentication, but it does not work for me:
MongoDB: Server has startup warnings ''Access control is not enabled for the database''
I have a cfg file which I'm using to start mongo. My image is based on an existing mongo Docker image, on top of which I'm just copying my config file and instructing mongo to use it. I actually don't know if it really does this, but as far as I know the base image's CMD is overridden by my new CMD.
This is the Dockerfile:
FROM mongo:windowsservercore-1809
WORKDIR c:\
COPY .\mongod.Win.cfg .
CMD ["mongod", "--auth", "-f", "mongod.Win.cfg"]
And this is my mongod.Win.cfg:
storage:
  dbPath: C:\data\db
  journal:
    enabled: true
security:
  authorization: enabled
And I'm building the image in a docker-compose file:
invoice_db:
  build:
    context: ./Invoice.Db
    dockerfile: ./mongo.win.Dockerfile
  image: mongo:v1
  container_name: invoice-db
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: "admin"
    MONGO_INITDB_ROOT_PASSWORD: "pass"
  volumes:
    - invoice-data-volume:c:\data\db
  restart: unless-stopped
volumes:
  invoice-data-volume:
    name: invoice-data
When I shell into the container and try to log in as admin with the password pass, I get this:
PS C:\> mongo
MongoDB shell version v5.0.9
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }
MongoDB server version: 5.0.9
================
Warning: the "mongo" shell has been superseded by "mongosh",
which delivers improved usability and compatibility.The "mongo" shell has been deprecated and will be removed in
an upcoming release.
For installation instructions, see
https://docs.mongodb.com/mongodb-shell/install/
================
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
> use admin
switched to db admin
> db.auth("admin", "pass")
Error: Authentication failed.
0
> db.auth("admin", passwordPrompt())
Enter password:
Error: Authentication failed.
0
>
The logs from the running container:
{"t":{"$date":"2022-07-18T23:38:10.420+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { getCmdLineOpts: 1.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:18.120+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { listCollections: 1.0, filter: {}, nameOnly: true, authorizedCollections: true, maxTimeMS: 1000.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:21.712+03:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn1","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"admin","db":"admin"}}}
{"t":{"$date":"2022-07-18T23:38:21.713+03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"127.0.0.1:49160","extraInfo":{},"error":"UserNotFound: Could not find user "admin" for db "admin""}}
{"t":{"$date":"2022-07-18T23:38:25.438+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { listCollections: 1.0, filter: {}, nameOnly: true, authorizedCollections: true, maxTimeMS: 1000.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:32.311+03:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn1","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"admin","db":"admin"}}}
{"t":{"$date":"2022-07-18T23:38:32.312+03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"127.0.0.1:49160","extraInfo":{},"error":"UserNotFound: Could not find user "admin" for db "admin""}}
{"t":{"$date":"2022-07-18T23:38:37.028+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176717:28384][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 34, snapshot max: 34 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
{"t":{"$date":"2022-07-18T23:39:37.051+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176777:50893][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
{"t":{"$date":"2022-07-18T23:40:37.067+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176837:67089][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 39, snapshot max: 39 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
Can someone help me figure this out?
Try with:
db.auth({user:"admin", pwd:"pass", mechanism:"SCRAM"})

MongoDB replication doesn't start

We are trying to move from mongo 2.4.9 to 3.4. We have a lot of data, so we tried to set up replication, wait for the data to sync, and then swap the primary.
The configuration is done, but when replication is initiated the new server can't stabilize replication:
2017-07-07T12:07:22.492+0000 I REPL [replication-1] Starting initial sync (attempt 10 of 10)
2017-07-07T12:07:22.501+0000 I REPL [replication-1] sync source candidate: mongo-2.blabla.com:27017
2017-07-07T12:07:22.501+0000 I STORAGE [replication-1] dropAllDatabasesExceptLocal 1
2017-07-07T12:07:22.501+0000 I REPL [replication-1] ******
2017-07-07T12:07:22.501+0000 I REPL [replication-1] creating replication oplog of size: 6548MB...
2017-07-07T12:07:22.504+0000 I STORAGE [replication-1] WiredTigerRecordStoreThread local.oplog.rs already started
2017-07-07T12:07:22.505+0000 I STORAGE [replication-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2017-07-07T12:07:22.505+0000 I STORAGE [replication-1] Scanning the oplog to determine where to place markers for truncation
2017-07-07T12:07:22.519+0000 I REPL [replication-1] ******
2017-07-07T12:07:22.521+0000 I REPL [replication-1] Initial sync attempt finishing up.
2017-07-07T12:07:22.521+0000 I REPL [replication-1] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 9, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1499429233163), initialSyncAttempts: [ { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" }, { durationMillis: 0, status: "CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find", syncSource: "mongo-2.blabla.com:27017" } ] }
2017-07-07T12:07:22.521+0000 E REPL [replication-1] Initial sync attempt failed -- attempts left: 0 cause: CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find
2017-07-07T12:07:22.521+0000 F REPL [replication-1] The maximum number of retries have been exhausted for initial sync.
2017-07-07T12:07:22.522+0000 E REPL [replication-0] Initial sync failed, shutting down now. Restart the server to attempt a new initial sync.
2017-07-07T12:07:22.522+0000 I - [replication-0] Fatal assertion 40088 CommandNotFound: error while getting last oplog entry for begin timestamp: no such cmd: find at src/mongo/db/repl/replication_coordinator_impl.cpp 632
Please assist, guys: we have more than 100 GB of data, so a dump and restore would mean a lot of downtime.
Configurations:
3.4.5 new machine:
storage:
  dbPath: /mnt/dbpath
  journal:
    enabled: true
  engine: wiredTiger
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
replication:
  replSetName: prodTest
2.4.9 old machine with data:
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
logappend=true
port = 27017
The task was solved in the following way (a sketch of one hop follows below):
- create a replica set: master on v2.4, 3 slaves on v2.6
- stop the app, step down the master
- stop the new master and upgrade it to v3.0, start the master, then upgrade the slaves sequentially to 3.2 (each slave's db files removed and the new version started on the WiredTiger engine)
- step down the master, upgrade all slaves to 3.4
This process turns out to be very fast, because replica slave recovery of a 40 GB db takes around 30 minutes.
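For reference, one upgrade hop might look roughly like this (hostnames and the package step are placeholders; rs.stepDown() and rs.status() are standard shell helpers):
# on the current primary, hand leadership to another member
mongo --host mongo-1.blabla.com --eval 'rs.stepDown(120)'
# on the member being upgraded: stop the old binary and install the next version
sudo service mongod stop
# ...install the next major version's packages here (distro-specific)...
# when switching storage engines, clear dbPath so the node does a full initial sync
sudo rm -rf /mnt/dbpath/*
sudo service mongod start
# watch the node move through STARTUP2/RECOVERING back to SECONDARY
mongo --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'
Repeat per member and per version hop (2.6 -> 3.0 -> 3.2 -> 3.4); major versions cannot be skipped.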

Mongorestore not restoring data

I have an existing mongodump of a single collection that I am trying to restore. After running mongorestore, no errors show up, yet the data is not in the collection. Are there any known reasons this could happen? I would expect that if the data weren't inserted for some reason, an error would be written to the log.
To create and attempt to restore the dump, I followed the answer provided for this question:
How to use mongodump for 1 collection
I've created a new database on a different server and it has an empty collection. I've checked the mongo log file and there are no errors; it shows the connection opening and authenticating, then disconnecting on the next line.
mongorestore -vvvvv -u user -p 'password' --db=MyDatabase --collection=MyCollection dump1/MyCollection.bson
2015-03-04T18:20:31.331+0000 creating new connection to:127.0.0.1:27017
2015-03-04T18:20:31.332+0000 [ConnectBG] BackgroundJob starting: ConnectBG
2015-03-04T18:20:31.332+0000 connected to server 127.0.0.1:27017 (127.0.0.1)
2015-03-04T18:20:31.332+0000 connected connection!
connected to: 127.0.0.1
2015-03-04T18:20:31.333+0000 drillDown: dump1/MyCollection.bson
2015-03-04T18:20:31.333+0000 dump1/MyCollection.bson
2015-03-04T18:20:31.333+0000 going into namespace [MyDatabase.MyCollection]
Restoring to MyDatabase.MyCollection without dropping. Restored data will be inserted without raising errors; check your server log
file size: 94876
130 objects found
2015-03-04T18:20:31.336+0000 Creating index: { key: { _id: 1 }, name: "_id_", ns: "MyDatabase.MyCollection" }
2015-03-04T18:20:31.340+0000 Creating index: { key: { geometry: "2dsphere" }, name: "geometry_2dsphere", ns: "MyDatabase.MyCollection", 2dsphereIndexVersion: 2 }
Log file:
2015-03-04T18:20:31.333+0000 [conn874] authenticate db: MyDatabase { authenticate: 1, nonce: "xxx", user: "user", key: "xxx" }
2015-03-04T18:20:31.342+0000 [conn874] end connection 127.0.0.1:59420 (25 connections now open)
The query I am using on the origin and destination is:
db.MyCollection.find()
On the origin server, the collection has 130 elements, which is what is also shown in the mongorestore output "130 objects found".
Edit:
I added the --drop option to the mongorestore command. The log file output clearly shows that it is creating the index on an empty collection.
2015-03-20T15:03:57.565+0000 [conn61965] authenticate db: MyDatabase { authenticate: 1, nonce: "xxx", user: "user", key: "xxx" }
2015-03-20T15:03:57.566+0000 [conn61965] CMD: drop MyDatabase.MyCollection
2015-03-20T15:03:57.631+0000 [conn61965] build index on: MyDatabase.MyCollection properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "MyDatabase.MyCollection" }
2015-03-20T15:03:57.631+0000 [conn61965] added index to empty collection
2015-03-20T15:03:57.652+0000 [conn61965] build index on: MyDatabase.MyCollection properties: { v: 1, key: { geometry: "2dsphere" }, name: "geometry_2dsphere", ns: "MyDatabase.MyCollection", 2dsphereIndexVersion: 2 }
2015-03-20T15:03:57.652+0000 [conn61965] added index to empty collection
2015-03-20T15:03:57.654+0000 [conn61965] end connection 127.0.0.1:59456 (21 connections now open)
So the issue ended up being that the user I was trying to do the restore with had only the read and dbAdmin roles. I had made a separate user so that the regular user used by the application did not have administrative rights. After changing my user's role from read to readWrite, it worked as expected.
To be honest, if the user didn't have the correct permissions, I really would have expected the log to show an error of some sort when the restore runs without the necessary permission.
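For anyone hitting the same problem, the role change can be made from the mongo shell roughly like this (a sketch reusing the user and database names from this question; you need userAdmin privileges on the database to change roles):
use MyDatabase
db.grantRolesToUser("user", [ { role: "readWrite", db: "MyDatabase" } ])
db.revokeRolesFromUser("user", [ { role: "read", db: "MyDatabase" } ])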

MongoDs in ReplSet won't start after trying out some MapReduce

I was practicing some MapReduce inside my primary's mongo shell when it suddenly became a secondary. I SSHed into the two other VMs with the other secondaries and discovered that the mongods had been rendered inoperable. I killed them, issued mongod --config /etc/mongod.conf to kick them off, and entered the mongo shell. After a few seconds they were interrupted with:
2014-09-14T22:29:54.142-0500 DBClientCursor::init call() failed
2014-09-14T22:29:54.143-0500 trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2014-09-14T22:29:54.143-0500 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2014-09-14T22:29:54.143-0500 reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
>
This is from the logs of the two original secondaries in the replica set:
2014-09-14T22:09:21.879-0500 [rsBackgroundSync] replSet syncing to: vm-billing-001:27017
2014-09-14T22:09:21.880-0500 [rsSync] replSet still syncing, not yet to minValid optime 54165090:1
2014-09-14T22:09:21.882-0500 [rsBackgroundSync] replset setting syncSourceFeedback to vm-billing-001:27017
2014-09-14T22:09:21.886-0500 [rsSync] replSet SECONDARY
2014-09-14T22:09:21.886-0500 [repl writer worker 1] build index on: test.tmp.mr.CCS.nonconforming_1_inc properties: { v: 1, key: { 0: 1 }, name: "_temp_0", ns: "test.tmp.mr.CCS.nonconforming_1_inc" }
2014-09-14T22:09:21.887-0500 [repl writer worker 1] added index to empty collection
2014-09-14T22:09:21.887-0500 [repl writer worker 1] build index on: test.tmp.mr.CCS.nonconforming_1 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.tmp.mr.CCS.nonconforming_1" }
2014-09-14T22:09:21.887-0500 [repl writer worker 1] added index to empty collection
2014-09-14T22:09:21.888-0500 [repl writer worker 1] build index on: test.tmp.mr.CCS.nonconforming_1 properties: { v: 1, unique: true, key: { id: 1.0 }, name: "id_1", ns: "test.tmp.mr.CCS.nonconforming_1" }
2014-09-14T22:09:21.888-0500 [repl writer worker 1] added index to empty collection
2014-09-14T22:09:21.891-0500 [repl writer worker 2] ERROR: writer worker caught exception: :: caused by :: 11000 insertDocument :: caused by :: 11000 E11000 duplicate key error index: cisco.tmp.mr.CCS.nonconforming_1.$id_1 dup key: { : null } on: { ts: Timestamp 1410748561000|46, h: 9014687153249982311, v: 2, op: "i", ns: "cisco.tmp.mr.CCS.nonconforming_1", o: { _id: 14, value: 1.0 } }
2014-09-14T22:09:21.891-0500 [repl writer worker 2] Fatal Assertion 16360
2014-09-14T22:09:21.891-0500 [repl writer worker 2]
I can issue mongo --host ... --port ... from both of the VMs that can't start mongod and connect to the original primary, but I do see some connection-refused notes in the error log above.
My original primary mongod can still be connected to in the mongo shell, but it is a primary. I can kill it and restart it, and it will start up as a secondary.
How can I roll back to the last known state and restart my replica set?