CouchDB compaction and deleted documents - version-control

When I delete a document, its revision is increased; when I then PUT the same document, its revision is increased again. After compaction, when I PUT the same document, its revision starts from 1 again, and when I then GET the document, I get a message that it was deleted.
After I PUT the same document a second time, I get a document with the revision from before compaction + 1, and a GET on this document correctly shows its actual state.
Why?

This is an instance of COUCHDB-1415, which happens if you delete a document and then attempt to insert the document again with exactly the same content. The workaround is to add changed data to the document before inserting a new revision after the delete happens.
From the bug report, it looks like it will be fixed in 2.0, which is being worked on at the moment.
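
For illustration, here is a minimal sketch of that workaround in Node.js, assuming a hypothetical database mydb and document mydoc on a local CouchDB; adding any changed field (here a timestamp) keeps the re-inserted content from being byte-identical to the deleted revision:

// Hypothetical workaround sketch for COUCHDB-1415 (Node 18+, global fetch).
// Assumes CouchDB at localhost:5984 with a database "mydb".
async function recreateAfterDelete(doc) {
  // Add a changed field so the re-inserted content differs from the
  // deleted revision; any modification avoids the bug.
  const body = { ...doc, recreatedAt: Date.now() };
  const res = await fetch("http://localhost:5984/mydb/mydoc", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json(); // { ok: true, id: "mydoc", rev: "..." } on success
}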

Related

Deleting and recreating the same document in a batch update in Google Cloud Firestore

There may be a situation in my app where the exact same document could be deleted and also added in the same batch update. The document ID and data would be exactly the same. If it matters, the deleteDocument batch operation is added first. My assumption is that the document would remain exactly as before. Is this correct?
It may not matter, but I am using Xcode, iOS, and Swift.
The operations in a batch update occur sequentially, in the order you add them. So for the described case, the document will appear unchanged at the end of the batch update. If the delete operation were instead added after the create/update operation, the document would be deleted. As Doug said, this is always the case.
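
A minimal sketch of the described case, using the Firestore Web SDK for illustration (the collection and document names are made up); the delete is added first, so the subsequent set wins and the document survives the commit:

// Hypothetical sketch (Firestore Web SDK v9): delete-then-set in one batch.
import { getFirestore, doc, writeBatch } from "firebase/firestore";

const db = getFirestore();
const ref = doc(db, "items", "item-1"); // hypothetical collection/doc

const batch = writeBatch(db);
batch.delete(ref);                      // applied first
batch.set(ref, { name: "same data" }); // applied second: recreates the doc
await batch.commit();                   // net effect: document unchanged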

MongoDB multiple update isolation

I'm confused about how MongoDB updates work.
The docs at https://docs.mongodb.com/manual/core/write-operations-atomicity/ say:
In MongoDB, a write operation is atomic on the level of a single document, even if the operation modifies multiple embedded documents within a single document.
When a single write operation modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic and other operations may interleave.
I guess it means: if I'm updating all fields of a document, I will never see a partial update:
If I GET the document before the update, I will see it without any change
If I GET the document after the update, I will see it with all the changes
For multiple documents, the same behavior applies to each one. I guess we could say there is a transaction per document update instead of one big transaction for all of them.
But let's say there are lots of documents in the multi-document update, and it takes a while to update all of them. What happens with queries from other threads during the update?
Will they see the old version? Or will they be blocked until the update finishes?
Are other updates to the same documents possible during this big update? If so, could such an intermediate update exclude some documents from the big update?
Will they see the old version? Or will they be blocked until the update finishes?
I guess other threads may see the old version of a document or the new one, depending on whether they query the document before or after its update is finished, but they will never see a partial update on a document (i.e. one field changed and another not changed).
Are other updates to the same documents possible during this big update? If so, could such an intermediate update exclude some documents from the big update?
Instead of big or small updates, think of 2 threads doing an update on the same document. Thread 1 sets fields {a:1, b:2} and thread 2 sets {b:3, c:4}. If the original document is {a:0, b:0, c:0} then we can have two scenarios:
Update 1 is executed before update 2:
The document will finally be {a:1, b:3, c:4}.
Update 2 is executed before update 1:
The document will finally be {a:1, b:2, c:4}.
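
As an illustration, here is a minimal mongo shell sketch of the first scenario, assuming a hypothetical collection named docs; running the two updates in this order reproduces the first result:

// Hypothetical mongo shell sketch of the two interleaved updates above.
db.docs.insertOne({ _id: 1, a: 0, b: 0, c: 0 })
db.docs.updateOne({ _id: 1 }, { $set: { a: 1, b: 2 } }) // thread 1's update
db.docs.updateOne({ _id: 1 }, { $set: { b: 3, c: 4 } }) // thread 2's update
db.docs.findOne({ _id: 1 }) // { _id: 1, a: 1, b: 3, c: 4 }
// Each updateOne is atomic per document: no reader ever observes, say,
// { a: 1, b: 0, ... } halfway through an update.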

MongoDB: change streams

The rc0 of MongoDB 3.6 was released last Friday, and I have tested the new change streams feature.
My tests show that I can retrieve the inserted/updated document (I did not test replace yet) when the operation occurs in the mongo shell.
But here is the thing: when I perform a delete operation in the mongo shell, I can't retrieve the document with the same Java code.
I know that the 3.6.0-beta2 driver is not final, but I'm wondering whether it is normal that the document can't be retrieved when it is deleted.
Right now I don't see why this feature would not be available. I know this is speculation, but I'd just like to have your opinion about it.
The change stream will trigger an event upon document deletion (see the manual), but since the document is already deleted by the time the event is triggered, the result includes just the document id/key and no field data.
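
A minimal mongo shell sketch (change streams need MongoDB 3.6+ and a replica set; the collection name items is made up) showing what a delete event actually carries:

// Hypothetical sketch: open a change stream, then delete a document
// from another shell and inspect the resulting event.
var cursor = db.items.watch()

// ... in another shell: db.items.deleteOne({ _id: 1 }) ...

var event = cursor.next()  // blocks until the next change arrives
event.operationType        // "delete"
event.documentKey          // { _id: 1 } -- only the key survives
// event.fullDocument is absent for deletes: the document is already gone,
// so there is no field data left to include in the event.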

Replace instead of update

I have a parent document with references. The question is: is it OK to delete all referenced documents and insert new ones, instead of updating old ones, inserting new ones, and deleting removed ones? In SQL this is not very good practice, because the index becomes fragmented.
When you start inserting documents into MongoDB, it puts each document right next to the previous one on disk. Thus, if a document gets bigger, it will no longer fit in the space it was originally written to and will be moved to another part of the collection.
I believe it's better to remove and insert in case we are not sure of the size; otherwise, if the updated document is bigger, we can face performance concerns from relocating it.
If I am not wrong, what you are trying to achieve is the behavior of document replacement; I believe you can use db.collection.findAndModify(), which has update and remove fields that can help you achieve your desired behavior.
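
For illustration, a minimal sketch of both uses of findAndModify in the mongo shell, with a made-up collection children and selector; passing a plain document (no update operators) as update replaces the matched document wholesale:

// Hypothetical sketch: full replacement of the matched document.
db.children.findAndModify({
  query: { parentId: 42 },              // hypothetical selector
  update: { parentId: 42, name: "new" } // replacement document
})

// The remove field deletes the matched document instead:
db.children.findAndModify({
  query: { parentId: 42 },
  remove: true
})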

MongoDB, Updating a capped collection

Does anyone know a way to update a document in a capped collection in Mongo 3.2? I had this working in 2.x, whereby I updated a document, basically removing all its content so I knew it had been processed; it would then age out.
When I do the same in 3.2, I get the following error on the command line:
Cannot change the size of a document in a capped collection: 318 != 40
Here you can see I'm shrinking the document from 318 bytes to 40 bytes.
Is there a way to do this?
As mentioned in the MongoDB docs:
Changed in version 3.2.
If an update or a replacement operation changes the document size, the operation will fail.
https://docs.mongodb.com/manual/core/capped-collections/
So your operation is changing the size of a document in the capped collection, which is not allowed in MongoDB 3.2+.
Capped collections are fixed-size collections, so an update fails when it changes a record's previously allocated size.
Changed in version 3.2.
If an update or a replacement operation changes the document size, the operation will fail.
For more information, please visit: https://docs.mongodb.com/manual/core/capped-collections/
So the solution is to create a new collection that is not capped (if this fits your requirement). It works.
Shrinking a document in a capped collection is no longer allowed in 3.2. I did not find anything related to this in the documentation, but there is a rationale at https://jira.mongodb.org/browse/SERVER-20529 (basically, shrinking a document cannot be rolled back).
Your only option is to find a same-size update, for example updating a boolean.
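
For illustration, a minimal mongo shell sketch of such a same-size update, assuming a hypothetical capped collection events with a pre-allocated boolean flag (a BSON boolean occupies one byte either way, so the document size is unchanged):

// Hypothetical same-size update in a capped collection.
db.createCollection("events", { capped: true, size: 4096 })
db.events.insertOne({ _id: 1, payload: "data", processed: false })

// Flipping the pre-allocated flag does not change the document's size,
// so the update is accepted even on 3.2+:
db.events.updateOne({ _id: 1 }, { $set: { processed: true } })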
[Solved] This works for me:
Create a new collection that is not capped and copy all your documents into it.
Copy the snippet below, replace the collection name, and paste it into your terminal.
// replace "mycol" with your collection's name
db.createCollection("mycol_temp")
var cur = db.mycol.find()
while (cur.hasNext()) {
  var doc = cur.next()       // fetch each document from the capped collection
  db.mycol_temp.insert(doc)  // copy it into the non-capped collection
}
db.mycol.drop()
db.mycol_temp.renameCollection("mycol")
Now update() or remove() on documents is accepted.