Database model design for single entry transaction between two accounts? - postgresql

I am building an app that helps people transfer money from one account to another. I have two tables, "Users" and "Transactions". The way I am currently handling transfers is:
1. Check if the sender has enough balance to make a transfer.
2. Deduct the balance from the sender and update the sender's balance.
3. Add the amount deducted from the sender's account to the recipient's account and then update the balance of the recipient.
4. Finally, write the transaction record to the "Transactions" table as a single entry like below:
id | transactionId | senderAccount | recipientAccount | amount
---+---------------+---------------+------------------+-------
 1 | ijiej33       | A             | B                |    100
So my question is: is recording a transaction as a single entry like the above a good practice, or will this kind of database model design produce challenges in the future?
Thanks

Check if the sender has enough balance to make a transfer.
Deduct the balance from the sender and update the sender's balance.
Yes, but.
If two concurrent connections attempt to deduct money from the sender at the same time, each may check that there is enough money for its own transaction and proceed, even though the balance is insufficient to cover both.
You must use a SELECT FOR UPDATE when checking. This will lock the row for the duration of the transaction (until COMMIT or ROLLBACK), and any concurrent connection attempting to also SELECT FOR UPDATE on the same row will have to wait.
Presumably the receiver account can always receive money, so there is no need to lock it explicitly, but the UPDATE will lock it anyway. And locks must always be acquired in the same order or you will get deadlocks.
For example, suppose one transaction locks rows 1 then 2, while another locks rows 2 then 1: the first locks row 1, the second locks row 2, then the first tries to lock row 2 but it is already locked, and the second tries to lock row 1 but it is also already locked by the other transaction. Both transactions will wait for each other forever until the deadlock detector nukes one of them.
One simple way to dodge this is to use ORDER BY:
SELECT ... FROM users WHERE user_id IN (sender_id, receiver_id)
ORDER BY user_id FOR UPDATE;
This will lock both rows in the order of their user_ids, which will always be the same.
Then you can do the rest of the procedure.
Since it is always a good idea to hold locks for the shortest amount of time, I'd recommend putting the whole thing inside a plpgsql stored procedure, including the COMMIT/ROLLBACK and error handling. Try to make the stored procedure fail-safe and atomic.
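A minimal sketch of such a procedure (PostgreSQL 11+), assuming a users table with user_id and balance columns and a transactions table as above; all names are illustrative, not from the original post:

CREATE OR REPLACE PROCEDURE transfer(p_sender bigint, p_recipient bigint, p_amount numeric)
LANGUAGE plpgsql
AS $$
DECLARE
    v_balance numeric;
BEGIN
    -- Lock both rows in user_id order to avoid deadlocks.
    PERFORM 1 FROM users
    WHERE user_id IN (p_sender, p_recipient)
    ORDER BY user_id
    FOR UPDATE;

    SELECT balance INTO v_balance FROM users WHERE user_id = p_sender;
    IF v_balance IS NULL OR v_balance < p_amount THEN
        RAISE EXCEPTION 'insufficient funds or unknown sender';
    END IF;

    UPDATE users SET balance = balance - p_amount WHERE user_id = p_sender;
    UPDATE users SET balance = balance + p_amount WHERE user_id = p_recipient;

    INSERT INTO transactions (sender_account, recipient_account, amount)
    VALUES (p_sender, p_recipient, p_amount);
END;
$$;

An uncaught RAISE EXCEPTION aborts the whole transaction, so the deduction, the credit, and the audit record either all happen or none do.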
Note, for security purposes, you should:
Store the balance of both accounts as it was before the money transfer occurred in the transactions table. You're already reading it in the SELECT FOR UPDATE, so you might as well use it. It will be useful for auditing.
For security: if a user gets their password stolen there's not much you can do, but if your application gets hacked it would be nice if the hacker could not issue global UPDATEs to all the account balances or mess with the audit tables. This means you need to read up on the subject and create several Postgres users/roles with suitable permissions for backup, the web application, and so on. Some tables, especially the transactions table, should have all UPDATE privileges revoked, with INSERT allowed only to the transaction stored procedures, for example. The aim is to make the audit tables impossible to modify: basically append-only from the point of view of the application code.
Likewise, you can handle updates to the balance via stored procedures and forbid the web application role from touching it directly. You could even require a user-specific security token passed as a parameter to the stored proc, to authenticate the app user to the database, so the database only allows transfers from the account of the user who is logged in, not just any user.
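One possible shape for that privilege setup, assuming the transfer procedure sketched above; the role name "webapp" and all object names are placeholders:

REVOKE ALL ON transactions FROM webapp;
GRANT SELECT ON transactions TO webapp;          -- read-only audit access
REVOKE UPDATE ON users FROM webapp;              -- balances only move via the proc
GRANT EXECUTE ON PROCEDURE transfer(bigint, bigint, numeric) TO webapp;
-- Run the procedure with its owner's rights, so only it can write:
ALTER PROCEDURE transfer(bigint, bigint, numeric) SECURITY DEFINER;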
Basically, if it involves money, it involves legislation, and you have to think about how not to go to jail when your web app gets hacked.

Related

XRPL: How to get the history of the balance of an account?

I would like to query the history of the balance of an XRPL account with the new WebSocket API.
For example, how do I check the balance of an account on a particular day?
I know that with the v2 API there was a possibility to query balance_changes, but this doesn't seem to be part of the new version.
For example:
https://data.ripple.com/v2/accounts/rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn/balance_changes?start=2018-01-01T00:00:00Z
How is this done with the new WebSocket API?
There's no convenient call in the WebSocket API to get this. I assume you want the XRP balance, not token/issued currency balances, which are stored in a different place.
One way to go about it is to make an account_tx call and then iterate through the metadata. Many, but not all, transactions will have a ModifiedNode entry of type AccountRoot; if that transaction changed the account's XRP balance, you can see the difference in the PreviousFields vs. FinalFields for that entry. The Look Up Transaction Results tutorial has some details on how to parse out metadata this way. There are some tricky edge cases here: for example, if you send a transaction that buys 10 drops of XRP in the exchange but burns 10 drops of XRP as a transaction cost, the metadata won't show a balance change because the net change was zero (+10, -10).
Another approach could be to estimate which ledger_index was most recently closed at a given time, then use account_info to look up the account's balance as of that ledger. The hard part is figuring out what the latest ledger index was at a given time. This is one of the places where the Data API was simply more convenient than the WebSocket API: there is no way to look up by date over WebSocket, so you have to try a ledger index, check the close time of that ledger, try another ledger index, check its date, and so on, essentially a manual binary search.

How to detect a negative integer in firebase?

I'm using ServerValue.increment in Flutter to update an inventory amount in Firebase. It is a nice solution when my users are offline, but I need to handle the following case:
User 1 reads the inventory of 40 (for example) and immediately goes offline.
User 2 reads the same inventory (40) and spends 10, so the online inventory is updated to 30.
User 1 spends 35 (less than 40). When he/she goes online again, the inventory is updated to -5 (30 - 35).
I would like to detect this negative number to execute a procedure. How can I detect it in Firebase?
I'm using ServerValue.increment in this way:
db.child('quantityInStock')
.set(ServerValue.increment(-quantityToReduce.round()));
How can I detect when quantityInStock ends up being a negative number in order to execute a new procedure automatically?
If the new value depends on the existing value in the way you describe, you have two options:
Use security rules to ensure the write operation is only allowed when there's enough inventory.
".write": "newData.val() >= 0"
Use a transaction so that your client actively checks the current value in order to determine the new value.
dataRef.runTransaction((MutableData transaction) async {
  // Only decrement when the current value is large enough.
  if (transaction.value >= 40) {
    transaction.value = transaction.value - 40;
  }
  return transaction;
});
Both approaches have advantages and disadvantages.
For example: using security rules in your scenario with an offline user may prevent your application code from knowing the write was rejected, as completion listeners are not persisted across app restarts.
Using a transaction you won't have this problem, but then your app will only work while the user is connected to the database: transactions don't work when the user is offline.

Update a row in a table respecting a constraint on another table

book:
  id: primary key, integer
  title: varchar
  borrowed: boolean
  borrowed_by_user_id: foreign key to user.id
user:
  id: primary key, integer
  name: varchar
  blocked: boolean
The isolation level is READ COMMITTED, because it is the default level in PostgreSQL (this requirement is not from me).
I am using one database transaction to SELECT FOR UPDATE a book and lend it to a user if the book is not borrowed yet. Since the book is selected FOR UPDATE, it cannot be borrowed concurrently.
But there is another problem: we cannot allow lending a book to a blocked user. How can we ascertain that? Even if we check at the beginning that the user is not blocked, the result might not be correct, because a concurrent transaction could block the user after that check.
For example, a user can be blocked by a concurrent transaction from the admin's panel.
How to solve that issue?
I see that I can use SERIALIZABLE. It requires handling errors, yes?
I am not sure how that CHECK works. Could you say more about it?
These are actually two questions.
About the books:
If you lock the book with SELECT ... FOR UPDATE as soon as you consider lending it out, this is an example of “pessimistic locking” and will block the book for all concurrent activity.
That is fine if the transactions are very short – specifically, if there is no user interaction between the locking and the end of the transaction.
Otherwise you should use “optimistic locking”. This can be done in several ways:
Use REPEATABLE READ transaction isolation. Then updating a book that has been modified since you read its data will lead to a serialization error (see the note at the end).
When selecting books, remember the values of the system columns ctid and xmin. Then update as follows:
UPDATE book SET ...
WHERE id = ...
AND ctid = original_ctid AND xmin = original_xmin;
If no row gets updated, somebody must have modified the book since you looked at it.
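To make that concrete, a sketch of the full optimistic flow; the literal id, ctid, and xmin values are placeholders for whatever your application remembered from the read:

-- Read phase: fetch the data plus the row version.
SELECT ctid, xmin, title, borrowed FROM book WHERE id = 42;
-- ... user decides to borrow the book ...
-- Write phase: succeeds only if the row is still the version we read.
UPDATE book
SET borrowed = true, borrowed_by_user_id = 7
WHERE id = 42
  AND ctid = '(0,1)'    -- value remembered from the SELECT
  AND xmin = '12345';   -- likewise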
About the users:
Three ideas:
You use SERIALIZABLE transaction isolation (see the note at the end).
You maintain a counter on the user that contains the number of books the user has borrowed.
Then you can have a check constraint like
ALTER TABLE users ADD CHECK (NOT blocked OR books_borrowed = 0);
Such a check constraint is evaluated at the end of each statement and has to yield TRUE, else an error is thrown.
So either the transaction that borrows a book or the transaction that blocks the user must fail (both transactions have to modify the user).
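For illustration, a sketch of the borrowing transaction under that scheme; the ids and the books_borrowed column name are assumptions:

BEGIN;
UPDATE book
SET borrowed = true, borrowed_by_user_id = 7
WHERE id = 42 AND NOT borrowed;

UPDATE users
SET books_borrowed = books_borrowed + 1
WHERE id = 7;
-- If a concurrent transaction has set blocked = true, this UPDATE
-- violates the check constraint and the whole transaction fails.
COMMIT;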
Right before lending a book to a user, you run
SELECT blocked FROM users WHERE id = ... FOR UPDATE;
If you get TRUE, you abort the transaction, otherwise lend out the book.
A concurrent transaction that wants to block the user has to SELECT ... FOR UPDATE on the user as well and only then check if there are any books lent to that user.
That way, no inconsistency can happen: if you want to block a user, all concurrent transactions that want to lend a book to the user must either be completed, so that you see their effect, or they must wait until you are done blocking the user, whereupon they will fail.
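A sketch of both sides of that protocol (ids are placeholders, table names as in the snippets above):

-- Lending side:
BEGIN;
SELECT blocked FROM users WHERE id = 7 FOR UPDATE;
-- abort here if blocked is TRUE
UPDATE book SET borrowed = true, borrowed_by_user_id = 7 WHERE id = 42;
COMMIT;

-- Blocking side:
BEGIN;
SELECT 1 FROM users WHERE id = 7 FOR UPDATE;   -- serializes with lenders
-- check that the user has no borrowed books, then:
UPDATE users SET blocked = true WHERE id = 7;
COMMIT;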
Note about higher isolation levels:
If you run transactions at an isolation level of REPEATABLE READ or SERIALIZABLE, you can encounter serialization errors. These are not bugs in your program; they are normal and to be expected. If you encounter a serialization error, you have to roll back and try the same transaction again. That is the price you pay for not having to worry about race conditions.

Concurrent create in OrientDB

For somewhat vague reasons, we are using a replicated OrientDB for our application.
It looks likely that we will have cases where a single record could be created twice. Here is how it happens:
we have user_document entities, which hold the ID of a user and the ID of a document; both users and documents are managed in another application and stored in another database;
that other application sends a broadcast event (via a rabbit-mq topic) when a new document is created;
several instances of our application receive this message and each create a user_document with the same pair of user_id and document_id.
If I understand correctly, we should use a UNIQUE index on the pair of these IDs and rely upon distributed transactions.
However, for various reasons (we have another team writing the layer between the application and the database), we probably cannot use UNIQUE, though it may sound stupid :)
What are our chances then?
Could we, for example, allow all instances to create redundant records and, immediately after creation, select by user_id and document_id and, if more than one record is found, delete the ones with the lexicographically higher own id?
Sure, you can do it that way.
You can try to use something like
DELETE FROM (SELECT FROM user_document where user_id=? and document_id=? skip 1)
However, note that without an index this approach may consume additional resources on the server, and you might see a significant slowdown if user_document has a large number of records.
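If you can create an index at all, even a non-unique one, it keeps that lookup cheap. In OrientDB SQL that would look something like this (the index name is a placeholder):

CREATE INDEX user_document_pair ON user_document (user_id, document_id) NOTUNIQUE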

GWT: Pragmatic unlocking of an entity

I have a GWT (+GAE) webapp that allows users to edit Customer entities. When a user starts editing, the lockedByUser attribute is set on the Customer entity. When the user finishes editing the Customer, the lockedByUser attribute is cleared.
No Customer entity can be modified by two users at the same time. If a user tries to open the Customer screen which is already opened by a different user, they get a "Customer XYZ is being modified by user ABC" message.
The question is: what is the most pragmatic and robust way to handle the case where the user forcefully closes the browser, so that the lockedByUser attribute is not cleared?
My first thought is a timer on the user side that would update the lockRefreshedTime every 30 seconds or so. A different user trying to modify the Customer would then look at the lockRefreshedTime and, if the refresh happened more than, say, 35 seconds ago, acquire the lock by setting the lockedByUser and updating the lockRefreshedTime.
Thanks,
Matyas
FWIW, your lock with expiry approach is the one used by WebDAV (and implemented in tools like Microsoft Word, for instance).
To cope with network latency, you should renew your lock at least half-way through the lock lifetime (e.g. if the lock expires after 2 minutes, renew it every minute).
Have a look there for much more details on how clients and servers should behave: https://www.rfc-editor.org/rfc/rfc4918#section-6 (note that, for example, they always assume failure is possible: "a client MUST NOT assume that just because the timeout has not expired, the lock still exists"; see https://www.rfc-editor.org/rfc/rfc4918#section-6.6 )
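Whatever store you use, the acquire-or-steal step must be atomic. As an illustration in SQL terms (the app in question uses GAE, where the equivalent would be a datastore transaction; the table and column names below just mirror the lockedByUser/lockRefreshedTime attributes):

-- Succeeds only if the lock is free or expired; zero rows updated
-- means another user still holds a live lock.
UPDATE customer
SET locked_by_user = 'ABC', lock_refreshed_time = now()
WHERE id = 42
  AND (locked_by_user IS NULL
       OR lock_refreshed_time < now() - interval '35 seconds');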
Another approach is to have an explicit lock/unlock flow, rather than an implicit one.
Alternatively, you could allow several users to update the customer at the same time, using a "one field at a time" approach: send an RPC to update a specific field on each ValueChangeEvent for that field. Handling conflicts (another user has updated the field) is then a bit easier, or could simply be ignored: if user A changed the customer's address from "foo" to "bar", it really means "set the field to 'bar'", not "change it from 'foo' to 'bar'". So if the actual value on the server has already been updated by user B from "foo" to "baz", that wouldn't be a problem: user A would probably still have set the value to "bar", and whether it changed from "foo" or from "baz" doesn't really matter.
Using a per-field approach, "implicit locks" (the time it takes to edit and send the changes to the server) are much shorter, because they're reduced to a single field.
The "challenge" then is to update the form in near real-time when another user saved a change to the edited customer; or you could choose to not do that (not try to do it in near real-time).
The way to go is this:
Execute code on window close in GWT
You have to ask the user to confirm that they really want to close the window while in edit mode.
If the user really wants to exit, you can then send an unlock call.