Applying oplog but found duplicate key error - mongodb

The Mongo version is 3.0.6. I have a process that applies the oplog from one database to a destination database using mongodump and mongorestore with the --oplogReplay option.
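Roughly, the commands look like this (host names and paths here are placeholders, not the real ones):
mongodump --host source-host:27017 --oplog --out /backups/dump
mongorestore --host target-host:27017 --oplogReplay /backups/dump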
But I get duplicate key error messages many times. The source and target databases have the same structure (indexes and fields), so it should be impossible to have a duplicate record on the target, because it would have failed on the source DB first.
And the error message looks like this:
2017-08-20T00:55:55.900+0000 Failed: restore error: error applying oplog: applyOps: exception: E11000 duplicate key error collection: <collection_name> index: <field> dup key: { : null }
And today I found a mysterious message like this:
2017-08-25T01:02:14.134+0000 Failed: restore error: error applying oplog: applyOps: not master
What does this mean? Also, to my understanding mongorestore has a --stopOnError option, which implies that by default the restore process should skip any errors and move on. But I got the errors above and the restore process was terminated every time. :(

This does not answer your question directly, sorry for that, but...
If you need to apply oplog changes from database A to database B, it would be better to use the mongo-connector program than the mongodump/mongorestore pair.
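As a rough sketch (I haven't checked this against your versions, and the host names are placeholders), Mongo-to-Mongo replication with mongo-connector looks something like this:
pip install mongo-connector
mongo-connector -m source-host:27017 -t target-host:27017 -d mongo_doc_manager
It tails the source's oplog continuously (so the source must be running as a replica set), which avoids the dump-and-replay step entirely.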

Related

data corrupted in postgres - right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx"

I am new to Postgres, and we are using it for test reports. We had an issue with our environment that inserted duplicate keys into one of the tables, and since then we get this message when trying to run migration scripts:
error: migration failed: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx" in line 0: UPDATE log SET project_id = (SELECT project_id FROM item_project WHERE item_project.item_id=log.item_id LIMIT 1); (details: pq: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx")
I tried to run pg_dump and got this error:
pg_dump: error: query was: SELECT pg_catalog.pg_get_viewdef('457544'::pg_catalog.oid) AS viewdef
pg_dumpall: error: pg_dump failed on database "reportportal", exiting
Can anyone help here?
Restore your backup, and research what parameters you changed and what you did to end up with data corruption in the first place.
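For the restore itself, assuming you have a pg_dump backup of the reportportal database (file names here are hypothetical), the usual pattern is:
dropdb reportportal
createdb reportportal
pg_restore -d reportportal /path/to/reportportal.dump     # custom-format dump
psql -d reportportal -f /path/to/reportportal.sql         # plain SQL dump
Only one of the last two applies, depending on which format the dump was taken in.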

Error cloning collection using Cosmic Clone

I'm trying to clone an existing MongoDB collection running on Azure Cosmos DB to another collection in the same database using Cosmic Clone.
Access validation succeeds, but the process fails with the following error message:
Collection Copy log
Begin Document Migration.
Source Database: myDB Source Collection: X
Target Database: myDB Target Collection: Y
LogError
Error: Error reading JObject from JsonReader. Path '', line 0, position 0., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Main process exits with error
LogError
Error: One or more errors occurred., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Any ideas are appreciated.
I've not used this tool, but I took a quick look at its source and I'm fairly certain it is not designed to work with MongoDB collections in Cosmos DB.
If you're looking to copy a MongoDB collection, you're better off using native Mongo tools like mongodump and mongorestore.
More details here: https://docs.mongodb.com/database-tools/
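I haven't run this against Cosmos DB myself, but the pattern would be roughly the following (account name, port and key are placeholders; Cosmos DB's Mongo API requires TLS):
mongodump --host myaccount.documents.azure.com:10255 --ssl -u myaccount -p "<account-key>" --db myDB --collection X --out ./dump
mongorestore --host myaccount.documents.azure.com:10255 --ssl -u myaccount -p "<account-key>" --db myDB --collection Y ./dump/myDB/X.bson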

Is it possible to restore db from WALs twice?

I have a main database server whose WALs are periodically archived to S3. So S3 has a 'snapshot' of the database with all the corresponding latest WALs.
I have another (local) database server that I want to periodically update to match the state of the main database server.
So I once copied the "main" directory from S3 and applied all the WALs from S3 by using recovery.conf.
The only thing I've changed in this file is:
restore_command = 'aws s3 cp s3://%bucketName%/database/pg_wal/%f %p'
It was successful.
After some time I want to apply all the latest WALs from S3 to be "more synchronized" with the main database server. Is it possible to do this somehow? I know for certain that I did not make any updates or writes to my "copied" database server. When I try to do it in exactly the same way as before, I get the following errors (from stderr):
fatal error: An error occurred (404) when calling the HeadObject operation: Key "database/pg_wal/00000001000001EF0000001F" does not exist
fatal error: An error occurred (404) when calling the HeadObject operation: Key "database/pg_wal/00000002.history" does not exist
fatal error: An error occurred (404) when calling the HeadObject operation: Key "database/pg_wal/00000001.history" does not exist
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
This is a more detailed description of my procedure:
I have two directories on S3: basebackup and pg_wal. basebackup contains base, global, pg_logical, pg_multixact, pg_xact, PG_VERSION, and backup_label.
When I recovered it the first time, I did the following:
1. Stop PostgreSQL
2. aws s3 sync s3://%bucketname%/basebackup ~/10/main
3. mkdir the empty directories in ~/10/main
4. copy recovery.conf.sample into ~/10/main/recovery.conf
5. edit recovery.conf as above (see the sketch after this list)
6. start PostgreSQL
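For reference, the resulting recovery.conf effectively contains just this one non-comment setting, with everything else left at the sample defaults (%f is the WAL file name PostgreSQL asks for, the same names that appear in the 404 errors above, and %p is the local path it should be copied to):
restore_command = 'aws s3 cp s3://%bucketName%/database/pg_wal/%f %p'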
When I do it again after some time, I do steps 1, 4, 5, and 6 and get the errors described above.
Probably I need to somehow specify the first WAL from the S3 bucket to be restored, because we already restored some of them before? Or is it impossible at all?
There seems to be a lot wrong with your procedures:
A complete backup does not only consist of the files and directories you list above, but of the complete data directory (pg_wal/pg_xlog can be empty).
After the first recovery, PostgreSQL will choose a new timeline, rename backup_label and recovery.conf, and come up as a regular database.
You cannot resume recovering such a database. I don't know what exactly you did to get into recovery mode again, but you must have broken something.
Once a database has finished recovery, the only way to recover further is to restore the initial backup again and recover from the beginning.
Have you considered using point-in-time recovery with recovery_target_action = 'pause'? Then PostgreSQL will stay in recovery mode, and you can run queries against the database. To continue recovering, define a new recovery target and restart the server.
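A sketch of what that recovery.conf could look like in your case (the target time is only an example, and I haven't verified this against your setup):
restore_command = 'aws s3 cp s3://%bucketName%/database/pg_wal/%f %p'
recovery_target_time = '2019-01-31 12:00:00'
recovery_target_action = 'pause'
To replay further WAL later, move recovery_target_time forward and restart the server, as described above.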

mongodump lower version mongodb

I tried to use mongodump (version 3.2.5) to back up MongoDB (version 2.4.9). It succeeded, but I can't restore this backup. Why?
./mongorestore -h 127.0.0.1 -u xxx -p xxx --dir /home/jonkyon/mongo_2 --authenticationDatabase admin --drop
2016-04-25T19:08:24.028+0800 building a list of dbs and collections to restore from /home/jonkyon/mongo_2 dir
2016-04-25T19:08:24.029+0800 assuming users in the dump directory are from <= 2.4 (auth version 1)
2016-04-25T19:08:24.030+0800 cannot drop system collection products.system.users, skipping
2016-04-25T19:08:24.031+0800 reading metadata for products.system.users from /home/jonkyon/mongo_2/products/system.users.metadata.json
2016-04-25T19:08:24.031+0800 restoring products.system.users from /home/jonkyon/mongo_2/products/system.users.bson
2016-04-25T19:08:24.032+0800 error: E11000 duplicate key error index: products.system.users.$_id_ dup key: { : ObjectId('570e2f0ca19b9c2cb7e75905') }
2016-04-25T19:08:24.066+0800 restoring indexes for collection products.system.users from metadata
2016-04-25T19:08:24.068+0800 reading metadata for runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.metadata.json
2016-04-25T19:08:24.070+0800 finished restoring products.system.users (2 documents)
2016-04-25T19:08:24.070+0800 restoring runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.bson
2016-04-25T19:08:24.070+0800 restoring indexes for collection runoob.runoob from metadata
2016-04-25T19:08:24.071+0800 finished restoring runoob.runoob (2 documents)
2016-04-25T19:08:24.071+0800 restoring users from /home/jonkyon/mongo_2/admin/system.users.bson
2016-04-25T19:08:24.088+0800 Failed: restore error: error running merge command: no such cmd: _mergeAuthzCollections
The docs state the following: "The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores."
Since you are running MongoDB 2.4.9, I think you should avoid using a recent version of mongodump against such an old data store.
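As a sketch (the 2.4 binary path below is hypothetical), you can confirm the mismatch and then take the dump with tools that match the server:
mongodump --version        # reports 3.2.5
mongod --version           # reports 2.4.9
/opt/mongodb-2.4.9/bin/mongodump -h 127.0.0.1 -u xxx -p xxx --authenticationDatabase admin -o /home/jonkyon/mongo_2
Restoring that dump with the matching 2.4 mongorestore avoids the _mergeAuthzCollections command, which your 2.4.9 server does not implement (hence the "no such cmd" failure above).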

mongorestore failing because of DocTooLargeForCapped error

I'm trying to restore a collection like so:
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --port 1234 --noOptionsRestore
Here's the error output (timestamps removed):
using write concern: w='majority', j=false, fsync=false, wtimeout=0
checking for collection data in /path/to/MY_COLLECTION.bson
found metadata for collection at /path/to/MY_COLLECTION.metadata.json
reading metadata file from /path/to/MY_COLLECTION.metadata.json
skipping options restoration
restoring MY_DB.MY_COLLECTION from file /path/to/MY_COLLECTION.bson
file /path/to/MY_COLLECTION.bson is 241330 bytes
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
restoring indexes for collection MY_DB.MY_COLLECTION from metadata
Failed: restore error: MY_DB.MY_COLLECTION: error creating indexes for MY_DB.MY_COLLECTION: createIndex error: exception: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
The result of the restore is a database and collection with correct names but no documents.
OS: Ubuntu 14.04 running on Azure VM.
I just solved my own problem. See answer below.
The problem seemed to be that the mongod I was restoring into was running as the PRIMARY member of a replica set.
Once I commented out the following line in /etc/mongod.conf, it worked without problems:
replSet=REPL_SET_NAME --> #replSet=REPL_SET_NAME
I assume passing the correct replica set name to the mongorestore command (like in this question) could also work, but I haven't tried that yet.
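For reference, that would mean addressing the replica set rather than a bare host, roughly like this (set name and member address are placeholders):
mongorestore --host "REPL_SET_NAME/127.0.0.1:1234" --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --noOptionsRestore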