Does anyone know a way to update a capped collection in Mongo 3.2? I had this working in 2.x, whereby I updated a collection and basically removed all its content so I knew it had been processed. This would then age out.
When I do the same in 3.2 I get the following error on the command line:
Cannot change the size of a document in a capped collection: 318 != 40
Here you can see I'm shrinking the document from 318 bytes to 40 bytes.
Is there a way to do this?
As mentioned in the MongoDB docs:
Changed in version 3.2.
If an update or a replacement operation changes the document size, the
operation will fail.
https://docs.mongodb.com/manual/core/capped-collections/
So your operation is changing the size of a document in the capped collection, which is not allowed in MongoDB 3.2+.
Capped collections are fixed-size collections, so the update fails when it changes a document's previously allocated size.
Changed in version 3.2.
If an update or a replacement operation changes the document size, the operation will fail.
For more information, please see: https://docs.mongodb.com/manual/core/capped-collections/
So the solution is to create a new, non-capped collection (if that fits your requirements). It works.
Shrinking a document in a capped collection is no longer allowed in 3.2. I did not find anything related to this in the documentation, but there is a rationale at https://jira.mongodb.org/browse/SERVER-20529
(basically, shrinking a document cannot be rolled back)
Your only option is to perform a same-size update, for example updating a boolean field.
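To make "same-size" concrete, here is an illustrative sketch of how flat BSON documents are sized (a simplified model covering only booleans, 32-bit integers, and strings, not the full BSON spec); it shows why flipping a boolean is size-neutral while shrinking a string is not:

```javascript
// Approximate BSON size of a flat document containing only
// booleans, 32-bit integers, and strings (simplified sketch).
function bsonSize(doc) {
  let size = 4 + 1; // int32 length prefix + trailing null byte
  for (const [key, value] of Object.entries(doc)) {
    size += 1 + key.length + 1; // type byte + key as null-terminated cstring
    if (typeof value === "boolean") size += 1;
    else if (typeof value === "number") size += 4; // assume int32
    else if (typeof value === "string") size += 4 + value.length + 1;
  }
  return size;
}

// Flipping a boolean keeps the document size constant...
console.log(bsonSize({ done: false }) === bsonSize({ done: true })); // true
// ...but emptying a string payload shrinks the document, which a
// capped collection in MongoDB 3.2+ rejects.
console.log(bsonSize({ payload: "some long content" }));
console.log(bsonSize({ payload: "" }));
```

So a "processed" flag that flips from false to true is a safe same-size update, whereas clearing out string content is exactly the shrink that triggers the error above.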
[Solved] This worked for me.
Create a new, non-capped collection and copy all your documents into it.
Copy the script below, replace the collection name, and paste it into your shell.
// replace "mycol" with your collection's name
db.createCollection("mycol_temp")
var cur = db.mycol.find()
while (cur.hasNext()) {
  var doc = cur.next();
  db.mycol_temp.insert(doc);
}
db.mycol.drop()
db.mycol_temp.renameCollection("mycol")
Now update() or remove() on its documents is accepted.
I'm confused about how MongoDB updates work.
In the following docs: https://docs.mongodb.com/manual/core/write-operations-atomicity/ says:
In MongoDB, a write operation is atomic on the level of a single
document, even if the operation modifies multiple embedded documents
within a single document.
When a single write operation modifies multiple documents, the
modification of each document is atomic, but the operation as a whole
is not atomic and other operations may interleave.
I guess it means: if I'm updating all fields of a document I will be unable to see a partial update:
If I get the document before the update I will see it without any change
If I get the document after the update I will see it with all the changes
For multiple documents, the same behavior applies to each one. I guess we could say there is a transaction per document update instead of one big transaction for all of them.
But let's say there are a lot of documents in the multi-update, and it takes a while to update all of them. What happens with queries from other threads during the update?
They will see the old version? Or they will be blocked until the update finishes?
Other updates to same documents are possible during this big update? If so, could this intermediate update exclude some document from the big update?
They will see the old version? Or they will be blocked until the update finishes?
I guess other threads may see the old version of a document or the new, depending on whether they query the document before or after the update is finished, but they will never see a partial update on a document (i.e. one field changed and another not changed).
Other updates to same documents are possible during this big update? If so, could this intermediate update exclude some document from the big update?
Instead of big or small updates, think of 2 threads doing an update on the same document. Thread 1 sets fields {a:1, b:2} and thread 2 sets {b:3, c:4}. If the original document is {a:0, b:0, c:0} then we can have two scenarios:
Update 1 is executed before update 2:
The document will finally be {a:1, b:3, c:4}.
Update 2 is executed before update 1:
The document will finally be {a:1, b:2, c:4}.
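The two interleavings can be sketched by modeling each thread's $set as a shallow merge into the current document (a simplification, but the per-field outcome matches what MongoDB produces, since each update is atomic on its document):

```javascript
const original = { a: 0, b: 0, c: 0 };
const update1 = { a: 1, b: 2 }; // thread 1's $set
const update2 = { b: 3, c: 4 }; // thread 2's $set

// Scenario 1: update 1 is applied first, then update 2
const result1 = Object.assign({}, original, update1, update2);
console.log(result1); // { a: 1, b: 3, c: 4 }

// Scenario 2: update 2 is applied first, then update 1
const result2 = Object.assign({}, original, update2, update1);
console.log(result2); // { a: 1, b: 2, c: 4 }
```

Either way, the later write wins for the overlapping field b, but no reader ever observes a document where only one field of an update has been applied.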
Can we save new records in descending order in MongoDB, so that the first saved document will be returned last in a find query? I do not want to use $sort, so the data should be pre-saved in descending order.
Is it possible?
Based on the description above, as an alternative if you do not want to use $sort, you can create a capped collection, which maintains the insertion order of documents in MongoDB.
For a more detailed description of capped collections in MongoDB, please refer to the documentation:
https://docs.mongodb.org/manual/core/capped-collections/
But please note that capped collections are fixed-size collections, so the oldest documents are automatically removed once the collection reaches its maximum size.
The order of the records is not guaranteed by MongoDB unless you add a $sort operator. Even if the records happen to be ordered on disk, there is no guarantee that MongoDB will always return the records in the same order. MongoDB does quite a bit of work under the hood and as your data grows in size, the query optimiser may pick a different execution plan and return the data in a different order.
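If a capped collection does fit the use case, reverse insertion order can be read back with a natural-order sort; this mongo shell sketch uses a placeholder collection name (and note it is technically still a sort, just one that relies on the on-disk order rather than an index):

```
// capped collections guarantee that natural order equals insertion order,
// so sorting on $natural: -1 returns the newest documents first
db.mycol.find().sort({ $natural: -1 })
```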
How can I resize a mongodb capped collection without losing data?
Is there a command for that, or could somebody provide a script?
This is what I do to resize capped collections:
db.runCommand({"convertToCapped": "log", size: 1000000000});
I already have a Capped Collection named "log". So I just run the "convertToCapped" on it again, specifying a new size. I've not tried it on reducing the size of the collection. That may be something that you'd need to use Scott Hernandez's version on. But this works for increasing the size of your capped collections without losing any data or your indexes.
EDIT: #JMichal is correct. Data is preserved, but indexes are not and will need to be recreated.
You basically need to create a new capped collection and copy the docs into it. This can be done very easily in JavaScript (the shell), or your language of choice.
db.createCollection("new", {capped:true, size:1073741824}); /* size in bytes */
db.old.find().forEach(function (d) {db.new.insert(d)});
db.old.renameCollection("bak", true);
db.new.renameCollection("old", true);
Note: Just make sure nobody is inserting/updating the old collection when you switch. If you run that code in a db.eval("....") it will lock the server while it runs.
There is a feature request for increasing the size of an existing collection: http://jira.mongodb.org/browse/SERVER-1864
https://www.mongodb.com/docs/manual/core/capped-collections/#change-a-capped-collection-s-size
New in version 6.0:
db.runCommand( { collMod: "log", cappedSize: 100000 } )
I am working on a MongoDB cluster.
One DB named bnccdb, with a collection named AnalysedLiterature. It has about 7,000,000 documents in it.
For each document, I want to add two keys and then update this document.
I am using the Java client, so I query each document, add both keys to the BasicDBObject, and then use the save() method to update it. I found it so slow that it would take nearly several weeks to update the whole collection.
I wonder whether the reason my update is so slow is that I am adding keys.
This causes a disk/block rearrangement in the background, so the operation becomes extremely time-consuming.
After I changed from save() to update(), the problem remains. This is my status information.
From the output of mongostat, it is very obvious that the fault rate is extremely high, but I don't know what caused it.
Can anyone help me?
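For reference, one common remedy is to push the key-writing to the server with a single multi-update using $set, instead of fetching and re-saving every document from the client; a sketch in the mongo shell, where key1/key2 and their values are placeholder assumptions:

```
// set both new keys on all documents in one server-side pass;
// multi: true applies the $set to every matching document
db.AnalysedLiterature.update(
  {},
  { $set: { key1: "value1", key2: "value2" } },
  { multi: true }
)
```

If the values differ per document, batching the writes with the shell's bulk API (initializeUnorderedBulkOp) cuts the per-document round trips instead.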