Making multiple operations atomic in sqflite - Flutter

I am creating a mobile application in Flutter where I need to add entries to a transaction table and also update the balance in a user table. Whenever a new transaction is added, I first check the current balance of the user, add the entry to the transaction table, and update the balance. My question is: how can I make the entire operation atomic? If the balance update fails for any reason, how can I roll back the entry in the transaction table as well?

That is exactly what SQLite transactions are for! (Apologies for the unfortunate naming collision: your table is also called transaction.)
More info on how to use transactions in sqflite here:
https://github.com/tekartik/sqflite/blob/master/sqflite/doc/sql.md#transaction
Some information copied here for convenience:
transaction
Transactions handle the 'all or nothing' scenario. If one command fails (and throws an error), all the other commands are reverted.
await db.transaction((txn) async {
  await txn.insert('my_table', {'name': 'my_name'});
  await txn.delete('my_table', where: 'name = ?', whereArgs: ['cat']);
});
Make sure to use the inner transaction object (txn in the code above) inside the transaction; using the db object itself will cause a deadlock.
You can throw an error during a transaction to cancel it.
When an error is thrown during a transaction, the action is cancelled right away and the previous commands in the transaction are reverted.
No other concurrent modification of the database (even from an outside process) can happen during a transaction.
The inner part of the transaction is called only once; it is up to the developer to handle a try-again loop, assuming it can succeed at some point (see the sketch below).
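Applied to the original question, the balance check, the insert, and the balance update can all live inside one transaction, with a retry loop around it. A minimal Dart sketch; the table and column names (user, txn_entry, balance) and the signed amount convention are illustrative, not from the question:

import 'package:sqflite/sqflite.dart';

Future<void> addTransactionWithRetry(Database db, int userId, double amount,
    {int maxAttempts = 3}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await db.transaction((txn) async {
        // read the current balance inside the transaction
        final rows = await txn.query('user',
            columns: ['balance'], where: 'id = ?', whereArgs: [userId]);
        final balance = rows.first['balance'] as double;
        if (balance + amount < 0) {
          // throwing reverts everything done so far in this transaction
          throw StateError('insufficient balance');
        }
        await txn.insert('txn_entry', {'user_id': userId, 'amount': amount});
        await txn.update('user', {'balance': balance + amount},
            where: 'id = ?', whereArgs: [userId]);
      });
      return; // success: both writes were committed together
    } on StateError {
      rethrow; // a business-rule failure will not succeed on retry
    } catch (e) {
      if (attempt == maxAttempts) rethrow; // give up after maxAttempts tries
    }
  }
}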

Related

Perform additional query at end of every transaction (pg-promise)

As a potential way of storing metadata about transactions, I would like to execute a query at the end of every transaction.
I have looked at adding logic inside the transact event, but there does not seem to be a way to make another request using the current transaction. Is there a way that this could be done using pg-promise? Is this an anti-pattern?
You can only query against the transaction connection while inside the transaction callback. You cannot do it by handling the transact event; it does not have the transaction connection available.
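For example, a minimal sketch with pg-promise; the connection string and the tx_metadata table are illustrative:

const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // illustrative

function txWithMetadata(work) {
  return db.tx(async t => {
    const result = await work(t); // the transaction's main queries
    // the extra query runs on the same transaction connection:
    await t.none('INSERT INTO tx_metadata(finished_at) VALUES (now())');
    return result;
  });
}

// usage:
// await txWithMetadata(t => t.none('UPDATE accounts SET ...'));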

Making POST requests idempotent

I have been looking for a way to design my API so that it is idempotent; part of that is making my POST routes idempotent, and I stumbled upon this article.
(If I have misunderstood something, please correct me!)
It gives a good explanation of the general idea, but it lacks examples of how the author implemented it himself.
Someone asked the writer of the article how he would guarantee atomicity, so the writer added a code example.
Essentially, his code example covers two cases.
The flow if everything goes well:
Open a transaction on the DB that holds the data that the POST request needs to change
Inside this transaction, execute the needed change
Set the Idempotency-Key and its value, which is the response to the client, in the Redis store
Set an expire time on that key
Commit the transaction
The flow if something inside the code goes wrong:
An exception occurs somewhere in the flow of the function.
A rollback of the transaction is performed.
Notice that the transaction that is opened is for a certain DB; let's call it A.
However, it does not cover the Redis store that he also uses, meaning that the rollback of the transaction will only affect DB A.
So it covers the case where something happens inside the code that makes it impossible to complete the transaction.
But what happens if the machine the code runs on crashes after it has already executed the "set expire time" step on the key, but before it commits the transaction?
In that case, the key will be present in the Redis store, but the transaction will not have been committed.
This results in a situation where the service is sure the needed changes have already happened when they actually didn't; the machine failed before it could finish.
I need to design the API in such a way that if either the change to the data or the setting of the key and value in Redis fails, both are rolled back.
What is the solution to this problem?
How can I guarantee atomicity between changing the needed data in one database and, at the same time, setting the key and the needed response in Redis, rolling both back if either fails? (Including the case where a machine crashes in the middle of the actions.)
Please add a code example when answering! I'm using the same technologies as in the article (Node.js, Redis, and Mongo for the data itself).
Thanks :)
Per the code example you shared in your question, the behavior you want is to make sure there was no crash on the server between the moment the idempotency key was set in Redis, saying this transaction already happened, and the moment the transaction was in fact persisted in your database.
However, when using Redis and another database together you have two independent points of failure, and two actions executed sequentially at different moments (and even if they were executed asynchronously at the same time, there is no guarantee the server wouldn't crash before both of them completed).
What you can do instead is include in your transaction an INSERT into a table holding relevant information about this request, including the idempotency key. Since the ACID properties ensure atomicity, either all the statements in the transaction execute successfully or none of them do, which means your idempotency key will be present in your database if and only if the transaction succeeded.
You can still use Redis, as it will provide faster results than your database.
A code example is provided below, but it might be good to think about how relevant the failure window between the Redis write and the database commit really is to your business (could it be treated with another strategy?) to avoid over-engineering.
async function execute(idempotentKey) {
  // `dbConnection` and `redisClient` are assumed to be initialized elsewhere.
  let db;
  try {
    // Append to the query statement an INSERT into an executions table;
    // this way the idempotency key is persisted atomically with the updates.
    const query = `
      UPDATE firsttable SET ...;
      UPDATE secondtable SET ...;
      INSERT INTO executions (idempotent_key, success) VALUES (:idempotent_key, true);
    `;
    db = await dbConnection();
    await db.beginTransaction();
    await db.execute(query);
    // we're setting a key on Redis with the value "false".
    await redisClient.setAsync(idempotentKey, 'false', 'EX', process.env.KEY_EXPIRE_TIME);
    /*
      If the server crashes exactly here, the idempotency key will be in Redis
      with "false" as its value. In that case there are two possibilities:
      the commit to the database succeeded or it didn't. If on the next
      request Redis returns "false", query the database to verify whether
      the transaction was executed.
    */
    await db.commit();
    // You can now set the key's value to "true" (keeping the expiry), meaning
    // the commit succeeded and you won't need to query the database to verify it.
    await redisClient.setAsync(idempotentKey, 'true', 'EX', process.env.KEY_EXPIRE_TIME);
  } catch (err) {
    if (db) await db.rollback();
    throw err;
  }
}
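On a later request, the verification step described in the comment could look like this (a sketch: getAsync and the row shape returned by db.execute are assumptions consistent with the example above):

async function wasExecuted(db, idempotentKey) {
  const cached = await redisClient.getAsync(idempotentKey);
  if (cached === 'true') return true; // the commit was confirmed earlier
  // "false" or missing: fall back to the database as the source of truth
  const rows = await db.execute(
    'SELECT 1 FROM executions WHERE idempotent_key = :idempotent_key',
    { idempotent_key: idempotentKey });
  return rows.length > 0;
}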

Update a row in a table respecting a constraint on another table

book:
  id: primary key, integer
  title: varchar
  borrowed: boolean
  borrowed_by_user_id: foreign key user.id
user:
  id: primary key, integer
  name: varchar
  blocked: boolean
The isolation level is READ COMMITTED, because it is the default level in PostgreSQL (this requirement is not from me).
I am using one database transaction to SELECT FOR UPDATE a book and lend it to a user if the book is not borrowed yet. The book is selected FOR UPDATE so it cannot be borrowed concurrently.
But there is another problem: we cannot allow lending a book to a blocked user. How can we ascertain that? Even if we check at the beginning that the user is not blocked, the result might not be correct, because a concurrent transaction could block the user after that check.
For example, a user can be blocked by a concurrent transaction from the admin's panel.
How do I solve that issue?
I see that I can use SERIALIZABLE. It requires handling errors, yes?
I am not sure how that CHECK works. Could you say more about it?
These are actually two questions.
About the books:
If you lock the book with SELECT ... FOR UPDATE as soon as you consider lending it out, this is an example of “pessimistic locking” and will block the book for all concurrent activity.
That is fine if the transactions are very short – specifically, if there is no user interaction between the locking and the end of the transaction.
Otherwise you should use “optimistic locking”. This can be done in several ways:
Use REPEATABLE READ transaction isolation. Then updating a book that has been modified since you read its data will lead to a serialization error (see the note at the end).
When selecting books, remember the values of the system columns ctid and xmin. Then update as follows:
UPDATE books SET ...
WHERE id = ...
AND ctid = original_ctid AND xmin = original_xmin;
If no row gets updated, somebody must have modified the book since you looked at it.
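For example, sketched with illustrative values (book 42, user 7; the remembered ctid and xmin are whatever the SELECT returned):

-- read the book and remember its system columns
SELECT ctid, xmin, title FROM books WHERE id = 42;

-- later: update only if the row is unchanged in the meantime
UPDATE books
SET borrowed = true, borrowed_by_user_id = 7
WHERE id = 42
  AND ctid = '(0,1)'   -- the remembered ctid
  AND xmin = '12345';  -- the remembered xmin
-- zero rows updated means somebody modified the book since the SELECT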
About the users:
Three ideas:
You use SERIALIZABLE transaction isolation (see the note at the end).
You maintain a counter on the user that contains the number of books the user has borrowed.
Then you can have a check constraint like
ALTER TABLE users ADD CHECK (NOT blocked OR books_borrowed = 0);
Such a check constraint is evaluated at the end of each statement and has to yield TRUE, else an error is thrown.
So either the transaction that borrows a book or the transaction that blocks the user must fail (both transactions have to modify the user).
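For example, the lending transaction has to bump the counter, and touching the user row is what makes it conflict with a concurrent block (ids are illustrative):

-- lending: modifies the user row, so the CHECK is re-evaluated
BEGIN;
UPDATE users SET books_borrowed = books_borrowed + 1 WHERE id = 7;
UPDATE books SET borrowed = true, borrowed_by_user_id = 7 WHERE id = 42;
COMMIT;

-- blocking: fails with a check violation while books_borrowed <> 0
UPDATE users SET blocked = true WHERE id = 7;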
Right before lending a book to a user, you run
SELECT blocked FROM users WHERE id = ... FOR UPDATE;
If you get TRUE, you abort the transaction, otherwise lend out the book.
A concurrent transaction that wants to block the user has to SELECT ... FOR UPDATE on the user as well and only then check if there are any books lent to that user.
That way, no inconsistency can happen: if you want to block a user, all concurrent transactions that want to lend a book to the user must either be completed, so that you see their effect, or they must wait until you are done blocking the user, whereupon they will fail.
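Sketched as the two cooperating transactions (ids are illustrative):

-- lend a book
BEGIN;
SELECT blocked FROM users WHERE id = 7 FOR UPDATE;
-- if blocked is true: ROLLBACK; otherwise:
UPDATE books SET borrowed = true, borrowed_by_user_id = 7
WHERE id = 42 AND NOT borrowed;
COMMIT;

-- block a user (waits for any in-flight lending transaction on this user)
BEGIN;
SELECT blocked FROM users WHERE id = 7 FOR UPDATE;
-- check that no books are currently lent to the user, then:
UPDATE users SET blocked = true WHERE id = 7;
COMMIT;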
Note about higher isolation levels:
If you run transactions at an isolation level of REPEATABLE READ or SERIALIZABLE, you can encounter serialization errors. These are not bugs in your program; they are normal and to be expected. If you encounter a serialization error, you have to roll back and try the same transaction again. That is the price you pay for not having to worry about race conditions.

OptimisticLockException with concurrent JPA update under Spring Boot

Here is the sample project where the exception is reproduced.
This sample illustrates the issue when many concurrent transactions are modifying the Account balance. An Account can have many Card entities bound to it. Transactions are related to an Order and take some time to complete. Each thread executes as follows:
client requests '/order/{hashId}' for first available Order by given card hash id
client starts new tx for given order - '/tx/{orderId}/start'
client completes tx - '/tx/{txId}/stop/{amount}' where the tx amount is subtracted from Account balance.
Entity Locking
Account and Order entities are versioned with @javax.persistence.Version. In the last step the Account entity is locked with a pessimistic write lock:
@Override
public Account getLockedAccount(Integer id) {
    final Account account = findOne(id);
    em.lock(account, LockModeType.PESSIMISTIC_WRITE);
    return account;
}
Testing
To test the concurrent access, use the JMeter script src/main/resources/StressTest.jmx. NB: extra libs have to be installed into the JMeter home to run the script, due to the usage of the JSON Path extractor. With these specific settings, on an average laptop you can get around 10% errors for the TxEnd request:
{
  "timestamp": 1425407408204,
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.springframework.orm.ObjectOptimisticLockingFailureException",
  "message": "Object of class [sample.data.jpa.domain.Account] with identifier [1]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [sample.data.jpa.domain.Account#1]",
  "path": "/tx/1443/stop/46.4"
}
Question
Despite using a pessimistic write lock, I still get the optimistic locking exception. Is there any other approach to ensure the integrity of the account without creating a task execution queue for all updates or synchronizing the methods?
UPD: A workaround with a task executor is in another branch. Spring's ThreadPoolTaskExecutor combined with a transactional task remediates the issue.
Between the find and the lock, the Account object may already have been modified by a concurrent transaction. You need to do both in one statement:
em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE)
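Applied to the getLockedAccount method from the question, that would be (a sketch):

@Override
public Account getLockedAccount(Integer id) {
    // find and lock in a single call, so no concurrent update
    // can slip in between the read and the lock
    return em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE);
}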

Continue insert when exception is raised in Postgres

Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then the insert should continue. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting, so the log entry is committed in its own transaction (see the sketch below).
If not, you'll have to simulate it using API calls: open a different connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
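A minimal Spring sketch of that trick; FailedRecordRepository and FailedRecord are illustrative names, not from the question:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LogService {

    private final FailedRecordRepository failedRecords; // hypothetical repository

    public LogService(FailedRecordRepository failedRecords) {
        this.failedRecords = failedRecords;
    }

    // REQUIRES_NEW suspends the caller's transaction and commits the log
    // entry independently, so it survives a rollback of the batch insert.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void logWarning(String message) {
        failedRecords.save(new FailedRecord(message)); // hypothetical entity
    }
}

The batch loop then catches the failure for each record, calls logWarning, and moves on to the next record.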