I need to clean a MongoDB collection of 200 TB and delete documents with older timestamps. I am planning to build a new collection from the current one and run the delete query there, since running a delete on the collection that is in use would slow down the other requests to it. I have thought of cloning the collection either by taking a dump of it, or by writing a read-and-write script that reads from the present collection and writes to the cloned collection. My question: is a batched read/write operation (e.g. reading and writing 1000 documents at a time) faster than a dump?
EDIT:
I found this, this and this article, and want to know: is writing a script in the above-mentioned way the same as creating an ssh pipe of read and write? E.g., is a node/python script that fetches 1000 rows from a collection and inserts them into a clone collection the same as ssh *** ". /etc/profile; mongodump -h sourceHost -d yourDatabase … | mongorestore -h targetHost -d yourDatabase" ?
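For reference, the kind of batched script I have in mind looks roughly like this (a Python/pymongo sketch; the URI, database, and collection names are placeholders, not my real ones):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
source = client["yourDatabase"]["collection"]
target = client["yourDatabase"]["collection_clone"]

BATCH = 1000
batch = []
# Iterate in _id order so the scan is sequential; fetch 1000 docs per round trip.
for doc in source.find({}, batch_size=BATCH).sort("_id", 1):
    batch.append(doc)
    if len(batch) == BATCH:
        target.insert_many(batch, ordered=False)  # unordered inserts are faster
        batch = []
if batch:
    target.insert_many(batch, ordered=False)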
I would suggest this approach:
Rename the collection. Your application will immediately create a new, empty collection with the old name when it next tries to insert data. You may create some indexes at this point. (A minimal rename sketch follows this list.)
Run mongoexport/mongoimport to import the valid data, i.e. skipping the outdated documents.
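The rename in step 1 is a single command; a minimal sketch with pymongo (host and names are assumptions):
from pymongo import MongoClient

client = MongoClient("mongodb://sourceHost:27017")  # assumed host
db = client["yourDatabase"]
# Frees the old name; the application recreates the collection on its next insert.
db["collection"].rename("collection_old")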
Yes, in general mongodump/mongorestore might be faster; however, with mongoexport you can define a query and limit the data that is exported. It could look like this:
mongoexport --uri="mongodb://.../yourDatabase" --collection=collection --query='{"timestamp": {"$gt": {"$date": "2022-01-01T00:00:00Z"}}}' | mongoimport --uri="mongodb://.../yourDatabase" --collection=collection --numInsertionWorkers=10
Use the --numInsertionWorkers parameter to run multiple insert workers in parallel; it will speed up your inserts.
Do you run a sharded cluster? If yes, then you should use sh.splitAt() on the new collection to pre-split it; see How to copy a collection from one database to another in MongoDB.
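sh.splitAt() wraps the split admin command, so from a driver it could be issued roughly like this (a pymongo sketch; the mongos host, namespace, and split point are assumptions):
from pymongo import MongoClient

client = MongoClient("mongodb://mongosHost:27017")  # must connect to a mongos
# Pre-split the (already sharded) new collection at a chosen shard-key value,
# so the bulk load spreads across shards instead of hitting a single chunk.
client.admin.command({"split": "yourDatabase.collection2", "middle": {"_id": "someSplitValue"}})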
Related
When I want to remove all documents from my MongoDB collection comments, I do it with this command:
mongo $MONGODB_URI --eval 'db.comments.deleteMany({});'
However, this is super slow when there are millions of records inside the collection.
In a relational DB like Postgres I'd simply copy the structure of the table, create a comments2 table, drop the comments table, and rename comments2 to comments.
Is this possible to do in MongoDB as well?
Or are there any other tricks to speed up the process?
Thanks, the answers inspired my own solution. I forgot that MongoDB doesn't have a schema like a relational DB.
So what I did is this:
1. Dump an empty collection + the indexes of the collection
mongodump --host=127.0.0.1 --port=7001 --db=coral --collection=comments --query='{"id": "doesntexist"}' --out=./dump
This will create a folder ./dump with the contents comments.bson (empty) and comments.metadata.json
2. Drop the comments collection
mongo mongodb://127.0.0.1:7001/coral --eval 'db.comments.drop();'
3. Import new data new_comments.json (different from comments.bson)
mongoimport --uri=mongodb://127.0.0.1:7001/coral --file=new_comments.json --collection comments --numInsertionWorkers 12
This is way faster than first adding the indexes, and then importing.
4. Add indexes back
mongorestore --uri=mongodb://127.0.0.1:7001/coral --dir dump/coral --nsInclude coral.comments --numInsertionWorkersPerCollection 12
Note that --numInsertionWorkers speeds up the process by dividing the work across 12 parallel workers (here, one per CPU).
You can find out how many CPUs you have on macOS with:
sysctl -n hw.ncpu
Use db.cities.aggregate([{ $match: {} }, { $out: "collection2" }]) in case you can log in to the mongo prompt, and then simply drop the previous collection.
Otherwise, the approach you have posted is the one.
mongoexport.exe --host <host> --port <port> --db test --collection collection1 --out collection1.json
mongoimport.exe --host <host> --port <port> --db test --collection collection2 --file collection1.json
For MongoDB version >= 4.0 you can do this via db.comments.renameCollection("comments2"), but it is a somewhat resource-intensive operation, and for bigger collections you are better off doing mongodump/mongorestore. So the best action steps are:
mongodump -d x -c comments -o dump
>use x
>db.comments.drop()
mongorestore -d x -c comments2 dump/x/comments.bson
Please note that deleteMany({}) is an even more resource-intensive operation, since it creates a single oplog entry for every document you delete and propagates those entries to all replica set members.
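If you do have to delete in place rather than drop, one common mitigation (just a hedged sketch, not part of the steps above; URI assumed) is to delete in small batches so the oplog load is spread out and secondaries can keep up:
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
coll = client["x"]["comments"]

while True:
    # Fetch a batch of _ids, then delete exactly those documents.
    ids = [d["_id"] for d in coll.find({}, {"_id": 1}).limit(1000)]
    if not ids:
        break
    coll.delete_many({"_id": {"$in": ids}})
    time.sleep(0.1)  # brief pause between batches to ease replication lag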
I am trying to import/restore a single collection from within MongoDB (i.e. mongorestore cannot be accessed, I think ...?).
Is it possible? What is the command? Ideally, I'd like to include indexes as well. The backup has been produced by mongodump.
Specifically, I am using the IntelliShell from the excellent MongoChef. I perform other commands in this as well, such as renaming existing collections first.
I have a mongo collection with 2.5 million documents, and it may grow up to 3 million. I am using Spring Batch and am trying to copy that collection to another collection. The approaches I have used are as follows:
Inside a tasklet, I created a ProcessBuilder object and called a shell script which executes a mongo query. The content of the shell script is as follows:
> mongo $serverURL/$dbName js-file-to-execute.js
// js file contains the copy command (db.collection.copyTo('newCollection'))
For less data (< 200k documents) it works fine, but for 2 million documents it hangs the mongo server and the job fails with a SocketException.
Used a MongoTemplate and executed a query:
dbMongoTemplate.getDb().getCollection("collection").aggregate(Arrays.asList((DBObject) new BasicDBObject("$out","newCollection")));
This executes the mongo aggregation query db.collection.aggregate([{ $out: "newCollection" }]).
This also worked for collections with less data, but for the larger data set it keeps running until a socket timeout occurs and fails the job at the end.
Please suggest an efficient way to copy the data.
//Fastest way to copy a Collection in MongoDB
db.getCollection('OriginalCollection').aggregate([ { $out: "ClonedCollection" } ]);
This command copied a collection of 2 million records in about 2-3 minutes.
https://gist.github.com/tejzpr/ff37324a8c26d13fef08c318278c0718
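If you are running this from a driver rather than the shell, the equivalent call could look like this (a pymongo sketch; the URI and database name are assumptions):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
db = client["test"]  # assumed database name
# $out writes the pipeline result to ClonedCollection, replacing it if it exists.
db["OriginalCollection"].aggregate([{"$out": "ClonedCollection"}])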
To copy this collection I would suggest using mongodump/mongorestore:
mongodump --db databaseName --collection collectionName --out directory-path
then copy the directory-path directory to the target machine, and restore it there using
mongorestore --db databaseName --collection collectionName directory-path
This query is related to building a small MongoDB test database from a large existing database.
My plan to execute this is as follows:
a) Use mongodump with an aggregate query which specifies my conditions for the records to be copied over to the test database.
Will this idea work? From what I have read on forums, using a MongoDB query as is in a mongodump command will not work.
Any guidance on this is most appreciated.
You can use the following command to get the subset of the DB.
mongodump --db yourDatabase --collection yourCollection --query "your query here"
Note that --query requires --collection, and the query must be valid Extended JSON.
For more information, read the mongodump documentation.
I have an existing MongoDB dump and I would like to cherry-pick some of the data into a clean DB.
Is dumping a single collection and restoring it (mongodump & mongorestore) the way to do this?
You can do this by using the --filter '<JSON>' option on mongorestore.
It works like the first argument of db.collection.find().
If you just want to filter by collection, use --collection <collection>.
See the mongorestore documentation for more info.