KPI for retryReads and retryWrites - MongoDB

With MongoDB 4.2, MongoDB supports retryWrites and retryReads. But do we have any KPI/statistics to know how many such retried writes/reads happened? Also, may I know whether the retry applies only if we use transactions?

do we have any KPI/statistics to know how many such retried writes/reads happened
There is one retry attempt per failed operation, and only for exceptions that are considered retryable. Effectively, though, all exceptions that are meaningful to users are retryable.
may I know whether the retry applies only if we use transactions
No, it is not related to transactions.
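For illustration, here is a minimal pymongo sketch (the connection string and collection names are assumptions, not from the question): retryable reads and writes are enabled through client options, apply to ordinary CRUD operations, and the driver transparently retries a failed operation once when the error is retryable.
import pymongo
client = pymongo.MongoClient("mongodb://localhost:27017/?retryWrites=true&retryReads=true")
coll = client.test.items
# If this insert hits a retryable error (a transient network error,
# a primary failover), the driver retries it once; no transaction involved.
coll.insert_one({"_id": 1, "qty": 10})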

Related

BeginBinaryImport in a transaction

As far as I can tell, the COPY command in Postgres supports transactions, but I don't see a way to specify a transaction with NpgsqlConnection.BeginBinaryImport. Is it not supported?
BeginBinaryImport implicitly participates in a transaction started before it. So just do NpgsqlConnection.BeginTransaction first, and then call BeginBinaryImport.

Check if MongoDB mutation will succeed without actually executing it

I wonder how/if a MongoDB mutation can be simulated. By "simulated" I mean performing an insert, update, or delete action without actually executing it. For example, I'd like to test whether a unique index will throw when trying to insert a duplicate value. I'm looking for functionality similar to Ethereum's estimate gas action, which throws on an invalid transaction before the transaction is actually sent to the network.
If you're using MongoDB 4.0 or newer, you can use transactions to simulate a dry run. Something like:
import pymongo
conn = pymongo.MongoClient()
with conn.start_session() as s:
    s.start_transaction()
    conn.test.test.insert_one({'_id': 1}, session=s)
    conn.test.test.delete_one({'_id': 2}, session=s)
    if ...dry run condition...:
        s.abort_transaction()
    else:
        s.commit_transaction()
You can abort_transaction() for your dry run, or commit otherwise, as in a typical SQL-style transaction. Similarly, a transaction will automatically abort if it encounters an error.
Note that transactions require a replica set and MongoDB >= 4.0 to function. See the manual page on transactions for more details.

Handling DB Failure during projection in cqrs

We are building a system using CQRS. Our projections are in MongoDB, and we are facing the following case. We have an event, say OrderCreated, and we need to produce a sequential order_no, for example #3, #4, etc. We could keep a sequence in a projection table, call an upsert method to get a new number, and then post a new command, GenerateOrderNumber. Now suppose a hardware failure occurs before this command is accepted: if we retry, we will get another number, which is not good. How do we solve such a use case in CQRS?
Our projections are in MongoDB <...>
now suppose a hardware failure occurs before the command is accepted
Most likely the described issue is not about CQRS or Event Sourcing itself, but about the projection storage, which in the question above is MongoDB.
You are trying to perform a potentially atomic operation without transaction guarantees. Since a hardware failure can happen at any moment, the database should provide the ability to roll back a failed atomic operation in the current transaction.
The best choice is native MongoDB transactions, which are available since version 4.0 - https://docs.mongodb.com/manual/core/transactions/ - and your code will look something like this:
session.startTransaction( … );
try {
  const lastNo = await eventsCollection.findOne( … )
  await eventsCollection.insertOne( …, lastNo + 1 )
  await session.commitTransaction()
} catch (error) {
  await session.abortTransaction()
}
If you have to use an older MongoDB version, transactions can still be emulated. But instead of using the built-in operators, you have to write a transaction log manually and, after reconnecting to the database, check it for broken transactions and revert them manually via the log (one rough shape of this is sketched below).
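One very rough shape of such a manual transaction log, sketched in pymongo (the txlog collection, its fields, and the revert rule are all hypothetical, meant only to show the write-ahead pattern):
import pymongo
client = pymongo.MongoClient()
db = client.app
def generate_no(order_id):
    # 1. Record the intent before touching any data.
    log_id = db.txlog.insert_one({"state": "pending", "order_id": order_id}).inserted_id
    # 2. Apply the guarded writes.
    counter = db.counters.find_one_and_update(
        {"_id": "order_no"}, {"$inc": {"seq": 1}},
        upsert=True, return_document=pymongo.ReturnDocument.AFTER)
    db.orders.update_one({"_id": order_id}, {"$set": {"no": counter["seq"]}})
    # 3. Mark the log entry complete.
    db.txlog.update_one({"_id": log_id}, {"$set": {"state": "done"}})
def recover():
    # After reconnecting, revert whatever is still pending
    # (the sequence may keep a gap, but no half-applied state survives).
    for entry in db.txlog.find({"state": "pending"}):
        db.orders.update_one({"_id": entry["order_id"]}, {"$unset": {"no": ""}})
        db.txlog.update_one({"_id": entry["_id"]}, {"$set": {"state": "reverted"}})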
You should do all actions via events, even generating the sequence number.
In your case I suggest using a saga:
build a projection for generating order_no
fire a new event, OrderCreated (after this point you have an Order aggregate with some unique id)
the saga, listening to this event, fires GenerateOrderNo (getting the next free number from the projection)
That way, any time you ask for a new order_no after a failure, it will be the same (a sketch of an idempotent generator follows below).
Please correct me if I understood you wrong.
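A minimal pymongo sketch of that idea (the counters and order_numbers collection names are my assumptions, not from the answer): the projection records which order a number was issued to, so retrying GenerateOrderNo after a failure returns the same number instead of allocating a new one.
import pymongo
client = pymongo.MongoClient()
db = client.projections
def order_no_for(order_id):
    # Idempotent retry: if this order already got a number, return it.
    assigned = db.order_numbers.find_one({"_id": order_id})
    if assigned:
        return assigned["no"]
    # Otherwise atomically increment the sequence...
    counter = db.counters.find_one_and_update(
        {"_id": "order_no"}, {"$inc": {"seq": 1}},
        upsert=True, return_document=pymongo.ReturnDocument.AFTER)
    # ...and record the assignment; a crash between these two steps
    # leaves a gap in the sequence, but never assigns two numbers to one order.
    db.order_numbers.insert_one({"_id": order_id, "no": counter["seq"]})
    return counter["seq"]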

Where to handle errors on database in mongoDB?

When a user is registering on the website, an e-mail needs to be provided, and it must be unique. I've made a unique index on the schema's email attribute, so if I try to save the document in the database, an error with code 11000 will be returned. My question is, with regard to the business layer and the data layer, should I just pass the document to the database and catch/check the error codes it returns, or should I check whether a user with that e-mail exists beforehand? I've been told that data integrity should be checked by the business layer before passing the document to the database, but I don't see the reason why I should do that, since I believe Mongo would be much faster raising the exception itself, given the index it already has. The only disadvantage I see in error code checking is that error codes might change (though I could abstract them) and the syntax might change.
There is the practical matter of speed and the fragility of "check-then-set" systems. If you try to check whether an email exists before you write the document keyed on email, there is a chance that between the time you check and the time you write, the conditions of the unique index are met and your write fails anyhow. This is a classic race condition. Further, it takes 2 queries to do check-then-set but only 1 query to do the insert and handle the failure. In my application I am having success with just letting the failure occur and reacting to the result.
As @JamesWahlin says, it is the difference between doing this all in one operation, or risking mixed results (along with the index check) from potential race conditions by adding the extra client read.
Definitely rely on the response of only insert from MongoDB here.
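To make that concrete, a minimal pymongo sketch of the insert-and-react approach (the users collection and register function are illustrative names, not from the question): the duplicate-key error code 11000 surfaces in the driver as a typed exception, so there is no need to match raw error codes.
import pymongo
from pymongo.errors import DuplicateKeyError
client = pymongo.MongoClient()
users = client.app.users
users.create_index("email", unique=True)
def register(email):
    try:
        users.insert_one({"email": email})
        return True  # e-mail was free; one round trip, no race window
    except DuplicateKeyError:  # raised for error code 11000
        return False  # tell the user the e-mail is already taken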

MongoDB critical section

I need to perform a few operations (reads and writes) on my MongoDB without another process interrupting. It's for an online game, and when a user sends resources to another, the following steps are performed:
1. Check his resource value
2. Abort if it's not enough
3. Insert a resource transaction
4. Decrement his resource value
5. Increment the other one's resource value
I'm concerned that while checking whether it's enough, or while inserting the resource transaction, some other transaction has already been inserted and the values have become invalid. How can I make sure that this part is executed in this order?
I can see two ways:
Use client side transactions to hold a "lock": http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
Or use versioning, whereby you hold a field with a version number that is $inc'd on every save and must be matched in your query whenever you go to save (a sketch follows below). A good example is within Vermongo: https://github.com/thiloplanz/v7files/wiki/Vermongo
Those seem to be the two most plausible ways I see of getting this done.
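A minimal pymongo sketch of the second option (field names such as resources and version are assumptions, not taken from Vermongo): the update matches only if the document still carries the version we read, so a concurrent save turns our write into a detectable no-op.
import pymongo
client = pymongo.MongoClient()
users = client.game.users
def spend(user_id, amount):
    while True:
        doc = users.find_one({"_id": user_id})
        if doc["resources"] < amount:
            return False  # not enough resources, abort
        result = users.update_one(
            {"_id": user_id, "version": doc["version"]},  # only if unchanged
            {"$inc": {"resources": -amount, "version": 1}})
        if result.modified_count == 1:
            return True  # our write won; the version moved forward
        # Someone else saved first; re-read and try again.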
"Transaction" is an almost forbidden word when talking about Mongo. But you can perform steps 1, 2 and 4 with a single atomic update, using $inc with the resource value as a query condition, and then perform steps 3 and 5 (sketched below). You will not have support for rolling a step back if a later step fails.
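A minimal pymongo sketch of that conditional update (collection and field names are assumptions): because the filter matches only while the sender still has at least the requested amount, steps 1, 2 and 4 collapse into one atomic operation.
import pymongo
client = pymongo.MongoClient()
db = client.game
def transfer(sender_id, receiver_id, amount):
    # Steps 1, 2 and 4 in one atomic update: the filter matches only
    # if the sender still has at least `amount`.
    result = db.users.update_one(
        {"_id": sender_id, "resources": {"$gte": amount}},
        {"$inc": {"resources": -amount}})
    if result.modified_count == 0:
        return False  # step 2: not enough resources, abort
    # Steps 3 and 5; there is no rollback if either of these fails.
    db.transactions.insert_one({"from": sender_id, "to": receiver_id, "amount": amount})
    db.users.update_one({"_id": receiver_id}, {"$inc": {"resources": amount}})
    return True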
I am an engineer at Tokutek.
TokuMX is a MongoDB replacement server that uses the same protocol and drivers and supports native multi-statement transactions on non-sharded setups. What you want can be accomplished with a serializable transaction, which takes document-level locks on the documents you touch. It would be done something like this:
> db.beginTransaction("serializable");
> if (resourcesInsufficient()) { db.rollbackTransaction(); }
> // insert and update
> db.commitTransaction();
Again, this is not supported with sharding, but it may be useful for your application. More details, features, and limitations are discussed here.