I want to convert entries from MongoDB's local oplog into the actual queries that produced them, so I can execute those queries and get an exact copy of the database.
Is there any package, built-in tool, or script for this?
It's not possible to get the exact query from the oplog entry because MongoDB doesn't save the query.
The oplog has an entry for each atomic modification performed. Multi-inserts/updates/deletes performed on the mongo instance using a single query are converted to multiple entries and written to the oplog collection. For example, if we insert 10,000 documents using Bulk.insert(), 10,000 new entries will be created in the oplog collection. Now the same can also be done by firing 10,000 Collection.insertOne() queries. The oplog entries would look identical! There is no way to tell which one actually happened.
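As a hedged illustration (assuming a replica set member to connect to and a hypothetical test.demo collection), here is a short pymongo sketch that inspects local.oplog.rs after both insert styles; on the versions this describes, each style leaves one "i" entry per document:

from pymongo import MongoClient

# Assumes a replica set member; the oplog only exists on replica set nodes.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
coll = client.test.demo  # hypothetical collection used for the comparison

# Style 1: one bulk call that inserts many documents.
coll.insert_many([{"n": i} for i in range(5)])

# Style 2: many individual insert calls.
for i in range(5, 10):
    coll.insert_one({"n": i})

# Either way the oplog holds one "i" (insert) entry per document,
# so the two styles are indistinguishable after the fact.
for entry in client.local.oplog.rs.find({"ns": "test.demo", "op": "i"}):
    print(entry["ts"], entry["op"], entry["o"])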
Sorry, but that is impossible.
The reason is that the oplog doesn't contain queries. The oplog includes only changes (insert, update, delete) to the data, and it is there for replication and redo.
Getting an exact copy of a database is called "replication", and that is of course supported by MongoDB.
To "replicate" changes to f.ex. one DB or collection, you can use https://www.mongodb.com/docs/manual/changeStreams/.
You can recover the operations from the oplog. The oplog defines multiple op types, for instance op: "i", "u", and "d" for insert, update, and delete. For these types, check the "o"/"o2" fields, which hold the corresponding documents and filters.
Then, based on the op type, call the corresponding driver API: db.collection.insert()/update()/delete().
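A rough sketch of such a replay loop in pymongo, assuming a source replica set member, a hypothetical target deployment, and a single hypothetical namespace; error handling and newer delta-format update entries are left out:

from pymongo import MongoClient

source = MongoClient("mongodb://source-host:27017/?replicaSet=rs0")  # assumed source member
target = MongoClient("mongodb://target-host:27017")                  # assumed target deployment

# Replay only one namespace here; "mydb.mycoll" is a hypothetical example.
for entry in source.local["oplog.rs"].find({"ns": "mydb.mycoll"}):
    db_name, coll_name = entry["ns"].split(".", 1)
    coll = target[db_name][coll_name]

    if entry["op"] == "i":        # insert: "o" holds the full document
        coll.replace_one({"_id": entry["o"]["_id"]}, entry["o"], upsert=True)
    elif entry["op"] == "u":      # update: "o2" is the filter, "o" the modification
        mod = entry["o"]
        # Note: newer servers log updates in a delta ("$v": 2 diff) format that
        # needs extra translation; this branch assumes the older entry formats.
        if any(k.startswith("$") for k in mod):
            coll.update_one(entry["o2"], mod)
        else:                     # full-document replacement style entry
            coll.replace_one(entry["o2"], mod, upsert=True)
    elif entry["op"] == "d":      # delete: "o" is the filter (usually just the _id)
        coll.delete_one(entry["o"])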
Related
I'd like to "rebuild" my collection atomically, which means delete all existing documents and populate it from scratch.
The thing is, since transactions are not supported, there is a small time gap during which the collection is empty, which is what I want to avoid.
Is there a way to perform such an action in an atomic manner, so there is no point at which the collection is empty?
You can build a new collection under a different name and then use the renameCollection command to rename the new collection over the existing one, dropping the existing collection (using the dropTarget=True option).
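A minimal pymongo sketch of that swap, with placeholder database/collection names and stand-in documents:

from pymongo import MongoClient

client = MongoClient()
db = client.mydb  # placeholder database name

# Build the replacement collection under a temporary name.
db.items_new.insert_many([{"x": i} for i in range(3)])  # stand-in for the real rebuild

# Atomically swap it in: items_new takes over the "items" name and the old
# "items" collection is dropped in the same step.
client.admin.command(
    "renameCollection", "mydb.items_new",
    to="mydb.items",
    dropTarget=True,
)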
There are several caveats though:
* The command will invalidate open cursors, which interrupts queries that are currently returning data.
* renameCollection blocks all database activity for the duration of the operation.
* renameCollection is not compatible with sharded collections.
* If the renameCollection operation does not complete, the target collection and indexes will not be usable and will require manual intervention to clean up.
You can find more info in the official docs.
I am confused about how MongoDB renames collections and how much time it will take to rename a very large collection.
Here is the scenario: I have a MongoDB collection with a lot of data (588 million documents), which slows down finds and inserts, so I am creating an archive collection to hold all this data.
For this I am planning to rename the old collection to oldcollectionname_archive and start with a fresh collection named oldcollectionname.
I am planning to do this with the following command:
db.oldCollectionName.renameCollection("oldCollectionName_archive")
But I am not sure how much time it will take.
I have read the MongoDB docs and many Stack Overflow answers about renaming collections, but I could not find any information on whether the size of the collection affects the time required to rename it.
Please help if anyone has knowledge about this or has had the same experience.
Note: I have already read about the other issues that can occur during renaming, in the MongoDB documentation and other SO answers.
From the MongoDB documentation (https://docs.mongodb.com/manual/reference/command/renameCollection/):
renameCollection has different performance implications depending on the target namespace.
If the target database is the same as the source database, renameCollection simply changes the namespace. This is a quick operation.
If the target database differs from the source database, renameCollection copies all documents from the source collection to the target collection. Depending on the size of the collection, this may take longer to complete. Other operations which require exclusive access to the affected databases will be blocked until the rename completes. See What locks are taken by some common client operations? for operations which require exclusive access to all databases.
Note that:
* renameCollection is not compatible with sharded collections.
* renameCollection fails if target is the name of an existing collection and you do not specify dropTarget: true.
I have renamed multiple collections with around 500M documents. It completes in ~0 time.
This is true for MongoDB 3.2 and 3.4, and I would guess also for older versions.
I have a Python application that iteratively goes through every document in a MongoDB (3.0.2) collection (typically between 10K and 1M documents) and adds new fields (probably doubling/tripling the number of fields in each document).
My initial thought was that I would upsert the entire revised documents (using pyMongo); now I'm questioning that:
Given that the revised documents are significantly bigger should I be inserting only the new fields, or just replacing the document?
Also, is it better to perform a write to the collection on a document by document basis or in bulk?
This is actually a great question that can be solved a few different ways depending on how you are managing your data.
If you are upserting additional fields, does this mean your data gains additional fields at a later point in time, with the only change being the addition of those fields? If so, you could set a TTL on your documents so that the old ones drop off over time. Keep in mind that if you do this, you will want to sort your results by descending _id so that the most recent additions are selected before the older ones.
The benefit of doing it this way is that you are continually writing new data, as opposed to seeking out and updating existing data, so it is faster.
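As a hedged sketch of that TTL approach (collection and field names are assumptions), you create a TTL index on a date field, keep appending fresh documents, and read the newest one back:

from datetime import datetime, timezone
from pymongo import MongoClient, DESCENDING

coll = MongoClient().mydb.readings  # placeholder collection

# Documents expire ~30 days after the value in "createdAt" (field name is an assumption).
coll.create_index("createdAt", expireAfterSeconds=30 * 24 * 3600)

# New writes append a fresh document instead of seeking out and updating the old one.
coll.insert_one({"sensor": 42, "value": 3.14, "createdAt": datetime.now(timezone.utc)})

# The current version of a record is the newest matching document (sort by _id descending).
latest = coll.find_one({"sensor": 42}, sort=[("_id", DESCENDING)])
print(latest)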
Regarding upserts vs. bulk inserts: bulk inserts are always faster than upserts, since upserting requires finding the original document first.
Given that the revised documents are significantly bigger should I be inserting only the new fields, or just replacing the document?
You really need to understand your data fully to determine what is best, but if the only change to the data is additional fields, or changes that only need to be considered from that point forward, then bulk inserting and setting a TTL on your older data is the better method from the standpoint of write operations, as opposed to seek, find, and update. When using this method you will want to use db.document.find_one() as opposed to db.document.find(), so that only your current record is returned.
Also, is it better to perform a write to the collection on a document by document basis or in bulk?
Bulk inserts will be faster than inserting each document sequentially.
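To make the trade-off from the question concrete, here is a hedged pymongo sketch of both write styles, batched with bulk_write; the collection, field names, and sample data are placeholders:

from pymongo import MongoClient, ReplaceOne, UpdateOne

coll = MongoClient().mydb.docs  # placeholder collection

# Hypothetical data: each _id maps to just the newly computed fields.
new_fields_by_id = {1: {"score": 0.9, "label": "a"}, 2: {"score": 0.1, "label": "b"}}

# Option 1: send only the new fields with $set; existing fields stay untouched.
set_ops = [UpdateOne({"_id": _id}, {"$set": fields})
           for _id, fields in new_fields_by_id.items()]

# Option 2: replace whole documents (requires having the full revised document in hand).
full_docs = [{"_id": 1, "orig": "x", "score": 0.9, "label": "a"}]  # illustrative only
replace_ops = [ReplaceOne({"_id": d["_id"]}, d, upsert=True) for d in full_docs]

# Either list goes to the server as one batched call instead of one round trip per document.
result = coll.bulk_write(set_ops, ordered=False)
print(result.matched_count, result.modified_count)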
I am using MongoDB 3.0 and the Java driver. I have a collection with 100,000+ entries. Each day there will be approximately 500 updates and approximately 500 inserts, which should be done in a batch. I will receive the updated documents containing the old fields plus some new ones that I have to store. I don't know which fields are newly added, and for each field I am maintaining a summary statistic. Since I don't know what the changes were, I will have to fetch the records that already exist to see the difference between the updated ones and the new ones, in order to set the summary statistics appropriately. So I wanted input on how this can be done efficiently.
Should I delete the existing records and insert them again, or should I update the 500 records? And should I consider doing 1,000 upserts if that has potential advantages?
Example use case
The initial record contains f=[185, 75, 186]. I will get the update request as f=[185, 75, 186, 1, 2, 3] for the same record. The summary statistics mentioned above store the counts of the ids in f, so the counts for 1, 2, 3 will be increased and the counts for 185, 75, 186 will remain the same.
Upserts are used to add a document if it does not exist. So if you're expecting new documents then yes, set {upsert: true}.
In order to update your statistics I think the easiest way is to redo the statistics if you were doing it in mongo (e.g. using the aggregation framework). If you index your documents properly it should be fine. I assume that your statistics update is an offline operation.
If you weren't doing the statistics in mongo then you can add another collection where you can save the updates along with the old fields (of course you update your current collection too) so you will know which documents have changed during the day. At the end of the day you can just remove this temporary/log collection once you've extracted the needed information.
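For the example use case above (counting the ids in f), here is a hedged sketch of recomputing that statistic with the aggregation framework in pymongo; the collection name is a placeholder:

from pymongo import MongoClient

coll = MongoClient().mydb.records  # placeholder collection

# Recompute how often each id appears across all documents' "f" arrays.
pipeline = [
    {"$unwind": "$f"},
    {"$group": {"_id": "$f", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in coll.aggregate(pipeline):
    print(row["_id"], row["count"])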
MongoDB maintains a log of every change in the oplog.rs capped collection in the local database. We create a tailable cursor on oplog.rs, starting from a timestamp, and every change in a database/collection is streamed through it. I believe this is the best way to identify changes in MongoDB. You can simply drop the document changes you are not interested in.
Further reading: http://docs.mongodb.org/manual/reference/glossary/#term-oplog
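A minimal pymongo sketch of such a tailable cursor; the starting timestamp and namespace filter are assumptions:

import time

from bson.timestamp import Timestamp
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # the oplog needs a replica set
oplog = client.local["oplog.rs"]

start = Timestamp(int(time.time()), 0)  # assumption: only follow changes from "now" onward

cursor = oplog.find(
    {"ts": {"$gt": start}, "ns": "mydb.mycoll"},  # hypothetical namespace filter
    cursor_type=CursorType.TAILABLE_AWAIT,
)
while cursor.alive:
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("o"))
    time.sleep(1)  # brief pause before polling again when no new entries arrive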
What type of locking does findAndModify() offer? Is it a write lock only, or read/write? Does it prevent simultaneous updates on the same record?
MongoDB has a global (per-instance) write lock, which serializes all updates across all data in the server (though different servers in a sharded cluster will each have their own independent locks). This means that at any given instant in time, only one update is taking place on the server, and therefore there is only ever one update in progress for any given document.
findAndModify doesn't do anything different in this regard than an ordinary update -- it just returns the document to you.
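For example, here is a hedged pymongo sketch of the findAndModify pattern via find_one_and_update, using a hypothetical counter document; the filter, the update, and the returned document are handled as one atomic operation on that document:

from pymongo import MongoClient, ReturnDocument

coll = MongoClient().mydb.counters  # placeholder collection

# Atomically increment the counter and get the resulting document back;
# no other update can interleave between the read and the write on this document.
doc = coll.find_one_and_update(
    {"_id": "jobs"},
    {"$inc": {"seq": 1}},
    upsert=True,
    return_document=ReturnDocument.AFTER,
)
print(doc["seq"])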
According to the MongoDB docs for findAndModify(), under MongoDB: Atomic Operations, it should.