Atomic RPC calls

Could you tell me if an RPC is executed atomically?
For example, making a transfer between two accounts, I would have an RPC such as:
1. client.rpc.provide('xfer', (data, response) => {
2.   var srcWallet = getRecord(data.srcWalletId);
3.   var dstWallet = getRecord(data.dstWalletId);
4.   if (srcWallet.get('balance') >= data.xferAmount) {
5.     srcWallet.set('balance', srcWallet.get('balance') - data.xferAmount);
6.     dstWallet.set('balance', dstWallet.get('balance') + data.xferAmount);
7.   }
8. });
Is it certain that the srcWallet balance cannot change between line 4 and 5?

Just to clarify: deepstream does RPCs (request/response) via ds.rpc.make/ds.rpc.provide; your example refers to records.
The above example would appear to work in single-threaded JavaScript implementations, but you can't be sure that there isn't an incoming transaction already on its way to the client that changes the wallet's balance. To enforce rules like the above, use a Valve rule on the server, e.g.
_('src-wallet').balance >= data.xferAmount
Please find more at
https://deepstream.io/tutorials/core/permission-conf-simple/
https://deepstream.io/tutorials/core/permission-conf-advanced/
https://deepstream.io/tutorials/core/permissions-dynamic/
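For context, a permission file using such a rule could look roughly like the sketch below. Treat it as hypothetical: the wallet record naming and the use of a _() cross-reference inside an RPC rule are assumptions here, so check the tutorials above for the exact Valve syntax.
rpc:
  xfer:
    # reject the request unless the source wallet covers the amount
    request: "_('wallet/' + data.srcWalletId).balance >= data.xferAmount"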

Related

ServerValue.increment doesn't work properly when Internet goes down

The addition of ServerValue.increment() (Add increment() for atomic field value increments #2437) was great news, as it allows field values to be incremented atomically in the Firebase RTDB.
I have an application that keeps inventories, and this function has been key because it allows the inventory to be updated even when the user is temporarily offline. However, I started to notice that sometimes the operation executes twice, which completely corrupts the inventory.
To isolate the problem I ran the following test, which shows that ServerValue.increment() misbehaves when the connection goes from online to offline:
Make a for loop from 1 to 200:
for (var i = 1; i <= 200; i++) {
testBloc.incrementTest(i);
print('Pos: $i');
}
The function incrementTest(i) must increment two variables: position (counting one by one up to 200) and sum (adding 1 + 2 + 3 + ... + 200, which should total 20,100):
Future<bool> incrementTest(int value) async {
  try {
    db.child('test/position').set(ServerValue.increment(1));
    db.child('test/sum').set(ServerValue.increment(value));
  } catch (e) {
    print(e);
  }
  return true;
}
Note that db refers to the Firebase instance (FirebaseDatabase.instance.reference())
With this in place, here are the tests:
Test 1: 100% Online. PASSED
The function works properly, and both variables reach the correct result (in the Firebase console):
position: 200
sum: 20100
Test 2: 100% Offline. PASSED
To do this I used a physical device in airplane mode: I executed the for loop, and when it finished I deactivated airplane mode and checked the result in the Firebase console, which was correct:
position: 200
sum: 20100
Test 3: Start Online and then go to Offline. FAILED
It is a typical operating scenario when the Internet connection goes down, and even worse when connections are intermittent, you are traveling on a subway, or you are in a low-coverage area, which is exactly why offline persistence is a desired feature. To simulate it, I ran the for loop in online mode and, before it finished, put the physical device in airplane mode. Later I went online again to finish the test and checked the results in the Firebase console. The results were incorrect in all cases; across several runs the increment was erroneously repeated 10, 18, and 9 extra times.
How can I avoid this behavior?
Is there any other way to atomically increment a number in Firebase that works properly online/offline?
firebaser here
That's an interesting edge case in the increment behavior. Between the client and the server, neither can be certain whether the increment was executed, so it ends up being retried by the client upon reconnect. As far as I can tell, this problem can only occur with the increment operation: all the other write operations are idempotent, except for transactions, but those don't work while offline.
It is possible to ensure each increment happens only once, but it'll take some work:
First, add a nonce to the write operation that uniquely identifies it. You can use a push key for this, but any other UUID works fine too. Combine this with your original set() call into a single multi-path update, writing the nonce to a top-level node with a server-side timestamp as its value.
Now in your security rules for the top-level location, only allow the write if there is no existing data. This ensures the secondary writes you're seeing get rejected, and since security rules are checked across multi-path updates as a whole, the faulty increment will get rejected too.
You'll probably want to periodically clean up the node with the nonce keys, based on the timestamp values stored there. It won't matter for performance (you never query this location outside of the cleanup), but it helps control the storage cost of the nonces.
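To make that cleanup concrete, here's a hypothetical Dart sketch; the test/nonce location, the cutoff parameter, and the once()-based query API are assumptions for illustration only.
Future<void> cleanupNonces(int cutoffMillis) async {
  // Query nonce entries whose server-timestamp value is older than the cutoff
  final snapshot = await db
      .child('test/nonce')
      .orderByValue()
      .endAt(cutoffMillis)
      .once();
  if (snapshot.value == null) return;
  // Deletes write null, so a "!data.exists()" validate rule doesn't block them
  for (final key in Map<String, dynamic>.from(snapshot.value).keys) {
    await db.child('test/nonce/$key').remove();
  }
}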
I haven't used this approach for this specific use case yet, but I have done it for others. If you were to include a client-side retry, the above essentially builds your own multi-path transaction mechanism, which is what I needed it for in the past. Since you don't need that here, it's simpler without it.
Based on puf's answer above, you can proceed as follows:
Future<bool> incrementTest(int value, int dateOfToday) async {
  var id = db.push().key; // the nonce: a unique id for this write
  Map<String, dynamic> _updates = {
    'test/position': ServerValue.increment(1),
    'test/sum': ServerValue.increment(value),
    'test/nonce/$id': dateOfToday, // timestamp value, used later for cleanup
  };
  db.child('previousPath').update(_updates)
      .catchError((error) => print('Increment Duplication Rejected ${error.message}'));
  return true;
}
Then, in your Firebase security rules, you need to add a rule at the test/nonce/$id location, something like the following:
{
  "previousPath": {
    "test": {
      ".read": "auth != null",  // It depends on your root rules
      ".write": "auth != null", // It depends on your root rules
      "nonce": {
        "$nonce_id": {
          ".validate": "!data.exists()" // THE MAGIC IS HERE
        }
      }
    }
  }
}
This way, when the device (wrongly) tries to perform the write again, Firebase will reject it, since a write with that same ID already happened.
I hope this helps someone else!

Avoiding repetitive calls when creating reactfire hooks

When initializing a component using reactfire, each reactfire hook I add (e.g. useFirestoreDocData) triggers a re-render and therefore repeats all the initialization that precedes it. For example:
const MyComponent = props => {
  console.log(1);
  const firestore = useFirestore();
  console.log(2);
  const ref = firestore.doc('count/counter');
  console.log(3);
  const { value } = useFirestoreDocDataOnce(ref);
  console.log(4);
  ...
  return <span>{value}</span>;
};
will output:
1
1
2
3
1
2
3
4
This seems wasteful; is there a way to avoid it?
This is particularly problematic when I need the result of one reactfire hook to create another (e.g. retrieving data from one document to determine which other document to read), as it duplicates the server calls.
See React's documentation on Suspense, particularly the part "Approach 3: Render-as-You-Fetch (using Suspense)".
Reactfire uses this mechanism. It is not supposed to fetch more than once per call, even if the line is executed multiple times: the machinery behind it understands that a fetch is already done and moves on to the next one.
In your case, React tries to render your component, sees that it needs to fetch, stops rendering, and shows Suspense's fallback while fetching. When the fetch is done, it retries rendering your component, and since the fetch is now complete, the render finishes.
You can confirm in your network tab that each call is made only once.
I hope I'm clear; please don't hesitate to ask for more details if I'm not.
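For illustration, a minimal sketch (assuming a standard reactfire setup where the hooks suspend) of the Suspense boundary that displays a fallback during those interim render attempts:
const App = () => (
  // While MyComponent suspends on its Firestore fetch, React renders the
  // fallback; once the data is cached it re-runs the component to completion.
  <React.Suspense fallback={<span>loading…</span>}>
    <MyComponent />
  </React.Suspense>
);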

Function not executing properly after subscribe

I have a Mono object on which I have subscribed with doOnSuccess. In that callback I save the data to the DB (Couchbase, using ReactiveCouchbaseRepository). After that, I do not get any logs for LINE 1 and LINE 2.
But it works fine if I do not save the object, i.e. I do get the log for LINE 2.
Mono<User> result = context.getPayload(User.class);
result.doOnSuccess(user -> {
    System.out.println("############I got the user" + user);
    userRepository.save(user).doOnSuccess(user2 -> {
        System.out.println("user saved"); // LINE 1
    }).subscribe();
    System.out.println("############" + user); // LINE 2
}).subscribe();
Your code snippet is breaking a few rules you should follow closely:
You should not call subscribe from within a method/lambda that returns a reactive type such as Mono or Flux; this decouples the execution from the main task while both still operate on shared state. It often causes issues because two consumers end up reading the same stream. It's a bit like creating two separate threads that both try to read from the same output stream.
You should not do I/O operations in doOnXYZ operators. Those are "side-effect" operators, meant for things like logging or incrementing counters.
What you should do instead is chain Reactor operators to create a single reactive pipeline and return the reactive type for the final client to subscribe to. In a Spring WebFlux application, the HTTP clients (through the WebFlux engine) are doing the subscribing.
Your code snippet could look like this:
Mono<User> result = context.getPayload(User.class)
        .doOnSuccess(user -> System.out.println("############ Received user " + user))
        .flatMap(user -> userRepository.save(user))
        .doOnSuccess(user -> System.out.println("############ Saved " + user));
return result;

Simpy 3.0.4, setting resource priority

I am having trouble with resource priority in simpy. Consider the following code:
import simpy
env = simpy.Environment()
res = simpy.PriorityResource(env, capacity=1)
def go(id):
    with res.request(priority=id) as req:
        yield req
        print(id, res)
env.process(go(3))
env.process(go(2))
env.process(go(4))
env.process(go(5))
env.process(go(1))
env.run()
Lower number means higher priority, so I should get 1, 2, 3, 4, 5. But instead I am getting 3, 1, 2, 4, 5. So the first output is wrong; after that it's sorted!
Thanks in advance for your help.
This is correct. When "3" requests the resource, it is empty, so 3 gets the slot. The remaining processes have to queue and will get the resource in the order 1, 2, 4, 5.
If you use the PreemptiveResource instead (like request(priority=id, preempt=True)), 3 will still get the resource first but will be preempted by 2. 2 will then get preempted by 1. 2 and 3 would then have to request the resource again to gain access to it.
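A runnable sketch of that preemptive variant; the env.timeout is an addition here so that a process actually holds the resource long enough to be preempted:
import simpy

env = simpy.Environment()
res = simpy.PreemptiveResource(env, capacity=1)

def go(id):
    with res.request(priority=id, preempt=True) as req:
        try:
            yield req
            yield env.timeout(1)  # hold the resource so preemption can happen
            print(id, 'finished')
        except simpy.Interrupt:
            print(id, 'was preempted and would have to request again')

for i in (3, 2, 4, 5, 1):
    env.process(go(i))
env.run()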
I had the same problem when I needed to make a factory FIFO. At the time, I assigned a reaction time to each part and made it follow the previous part: only once the previous part had entered service at the resource did I make the next part issue its request. That solved the problem, but it seemed to slow the simulation down a little and added a reaction time to every part. It was basically a revamp of the factory process. I would love to see a feature where the part doesn't have to request again.
Can it be done in the present version?

Play 1.2.3 framework - Right way to commit transaction

We have a HTTP end-point that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to the user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
// The long-running operation continues here. It can involve further changes to instance "m"
//Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request completes, and I still want other callers to be notified of the change in status. I tried calling "m.em().flush(); JPA.em().getTransaction().commit();" after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call "JPA.em().getTransaction().begin();" afterwards and let Play handle the transaction from then on? If not, what is the best way to handle this use-case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to the user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following line:
doJob() {
    MyModel m = MyModel.findById(id);
    System.out.println(m.status); // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
That way the update is done in another transaction, which is committed at the end of the job.
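For illustration, a minimal sketch of what such a job might look like in Play 1.x; the class body is an assumption built from the names above, relying on Play wrapping each job execution in its own transaction and committing it when doJob() returns:
import play.jobs.Job;

public class MyModelUpdateJob extends Job {
    private final Long id;

    public MyModelUpdateJob(Long id) {
        this.id = id;
    }

    @Override
    public void doJob() {
        // Runs in its own transaction, separate from the HTTP request's
        MyModel m = MyModel.findById(id);
        m.status = ACTIVE;
        m.save(); // committed when this job's transaction completes
    }
}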
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to try optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article "Building on Quicksand". I am not sure you need a synchronized block in that case at all... try to go after being idempotent.
In your case, if user 1 and user 2 both call that method while it is PENDING, then it goes to ACTIVE (idempotent). Whichever of user 1 or user 2 wins, the outcome is the same as if you had the synchronized block anyway.
I am sure you have a more complex scenario not shown here, BUT READ that article "Building on Quicksand", as it really changes the traditional way of thinking and is how Google, Amazon, and other very large-scale systems operate.
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL players use, BUT only as a last resort ;) ;)
later,
Dean