The addition of ServerValue.increment() (Add increment() for atomic field value increments #2437) was great news, as it allows field values to be incremented atomically in the Firebase RTDB.
I have an application that keeps inventories, and this function has been key because it allows the inventory to be updated even when the user is temporarily offline. However, I started to notice that sometimes the increment is applied twice, which throws the inventory counts completely off.
To isolate the problem I ran the following test, which shows that ServerValue.increment() misbehaves when the connection goes from online to offline:
Run a for loop from 1 to 200:
for (var i = 1; i <= 200; i++) {
testBloc.incrementTest(i);
print('Pos: $i');
}
The function incrementTest(i) increments two values: position (counting one by one up to 200) and sum (adding 1 + 2 + 3 + ... + 200, which should total 20,100).
Future<bool> incrementTest(int value) async {
try {
db.child('test/position')
.set(ServerValue.increment(1));
db.child('test/sum')
.set(ServerValue.increment(value));
} catch (e) {
print(e);
}
return true;
}
Note that db refers to the database reference (FirebaseDatabase.instance.reference()).
With that in place, here are the tests:
Test 1: 100% Online. PASSED
The function works properly, and both values reach the correct result (in the Firebase console):
position: 200
sum: 20100
Test 2: 100% Offline. PASSED
To do this I put a physical device in airplane mode, ran the for loop, and once it finished executing I deactivated airplane mode and checked the result in the Firebase console, which was correct:
position: 200
sum: 20100
Test 3: Start Online and then go to Offline. FAILED
This is a typical operating scenario when the Internet connection goes down, and even more so when connections are intermittent: you are traveling on the subway or you are in a low-coverage area, which is exactly why offline persistence is a desired feature. To simulate it, I ran the for loop in online mode and, before it finished, put the physical device in airplane mode. Later I went back online to finish the test and checked the results in the Firebase console. The results were incorrect in every case. Here are some of the results:
As you can see, the increment was erroneously repeated 10, 18, and 9 extra times.
How can I avoid this behavior?
Is there any other way to atomically increment a number in Firebase that works properly both online and offline?
firebaser here
That's an interesting edge case in the increment behavior. Neither the client nor the server can be certain whether the increment was applied, so it ends up being retried by the client upon reconnect. As far as I can tell, this problem can only occur with the increment operation, since all other write operations are idempotent, except for transactions, which don't work while offline.
It is possible to ensure each increment happens only once, but it'll take some work:
First, add a nonce to the write operation that uniquely identifies it. You can use a push key for this, but any other UUID works fine too. Combine this with your original set() call into a single multi-path update, writing the nonce to a top-level node with a server-side timestamp as its value.
Now in your security rules for the top-level location, only allow the write if there is no existing data. This ensures the secondary writes you're seeing get rejected, and since security rules are checked across multi-path updates as a whole, the faulty increment will get rejected too.
You'll probably want to periodically clean up the node with the nonce keys, based on the timestamp values stored there. It won't matter for performance (since you never search this node outside of the cleanup), but it may help control the storage cost of the nonces.
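Purely as an illustration of that cleanup step (this sketch is not part of the answer: the nonce path, the cutoff, and the use of a server-side task with the Firebase Admin SDK for Java are all assumptions), a periodic sweep could look something like this:

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;
import com.google.firebase.database.ValueEventListener;

public class NonceCleanup {

    // Removes nonce entries whose timestamp value is older than the cutoff.
    // Assumes FirebaseApp has already been initialized and that each nonce
    // was written with a millisecond timestamp as its value.
    public static void cleanUpNonces(long cutoffMillis) {
        DatabaseReference nonces = FirebaseDatabase.getInstance()
                .getReference("test/nonce"); // adjust to wherever the nonces live
        Query expired = nonces.orderByValue().endAt((double) cutoffMillis);
        expired.addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                for (DataSnapshot stale : snapshot.getChildren()) {
                    // The Admin SDK bypasses security rules, so a
                    // !data.exists() validation does not block the delete.
                    stale.getRef().removeValueAsync();
                }
            }

            @Override
            public void onCancelled(DatabaseError error) {
                System.err.println("Nonce cleanup failed: " + error.getMessage());
            }
        });
    }
}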
I haven't used this approach for this specific use case yet, but I have done it for others. If you add a client-side retry, the above essentially becomes your own multi-path transaction mechanism, which is what I needed it for in the past. But since you don't need that here, it's simpler without it.
Based on puf's answer, you can proceed as follows:
Future<bool> incrementTest(int value, int dateOfToday) async {
var id = db.push().key;
Map<String, dynamic> _updates = {
'test/position': ServerValue.increment(1),
'test/sum': ServerValue.increment(value),
'test/nonce/$id': dateOfToday,
};
db.child('previousPath').update(_updates)
.catchError((error) => print('Increment Duplication Rejected ${error.message}'));
return true;
}
Then, in your Firebase security rules, you need to add a rule at the test/nonce/$id location, something like the following:
{
"previousPath": {
"test": {
".read": "auth != null", //It depends on your root rules
".write": "auth != null", //It depends on your root rules
"nonce": {
"$nonce_id": {
".validate": "!data.exists()" //THE MAGIC IS HERE
}
}
}
}
}
In this way, when the device (wrongly) tries to write to the database again, Firebase will reject the write, since a write with that same ID already exists.
I hope this helps someone else!
I am trying to send a message from Main to an agent that is in a specific state. I have done this many times before, but this time AnyLogic is throwing a NullPointerException.
Here is the code I am using to send the message:
User chosen_user = randomWhere(users, t -> t.inState(User.Innactive));
users.get(chosen_user.getIndex()).receive("message");
When I use traceln(chosen_users.getIndex()) to print the index, everything works fine. It is only when I pass the index to get() that the error occurs.
Even if I pass a literal index, say users.get(1).receive("message"), it still throws the same error (my population has 700 agents).
Any thoughts?
This will just be because, at some particular moment, no user is in the Innactive state, so your randomWhere call returns null.
You also don't need the indirection of going via the index; just send the message directly to the agent using the send function:
if (chosen_users != null) {
send("message", chosen_users);
}
(Your naming is misleading too since chosen_users is always one User agent (or null).)
I have a Moto MC9096 device, the EMDK SDK, VS2008, etc., all of the prerequisites.
I'm having an issue where, once I've scanned a barcode, the read event constantly repeats. Normally when this happens it's a flag or status that needs changing, but there are no obvious settings to stop it from reading again.
Code below:
private void Barcode_Read(object sender, ReaderData readerdata)
{
if (readerdata.Text != null)
{
if (readerdata.Text == "abc")
{
MessageBox.Show(readerdata.Text);
}
}
}
Notes
I've tried
bar.Dispose();
bar.Reader.Actions.Flush();
bar.ReaderData.Dispose() ;
with no success. EnabledScanner is set on form load and turned off during form close.
My expectation was that when the user scans a barcode, the read event fires once, but it keeps firing constantly after the user's first scan.
You might want to check the aimType property. By default it should be AIM_TYPE_TRIGGER, but other settings, such as AIM_TYPE_CONTINUOUS_READ, allow a single trigger pull to perform multiple scans, so perhaps that has been changed.
By default, the SDK installs some samples at file:///C:/Users/Public/Motorola%20EMDK%20for%20.NET/v2.9/SampLauncher2008.htm that show best practice.
We have an HTTP endpoint that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
m = MyModel.findById(id);
if (m.status == PENDING) {
m.status = ACTIVE;
} else {
//render a response back to user that the operation is not allowed
}
m.save(); //Is not expected to be called unless we set m.status = ACTIVE
}
//Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request completes. Since this is a long-running request, I do not want to wait until the request is complete, and I still want other callers to be notified of the change in status. I tried calling m.em().flush(); JPA.em().getTransaction().commit(); after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call JPA.em().getTransaction().begin(); and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
m = MyModel.findById(id);
if (m.status == PENDING) {
m.status = ACTIVE;
} else {
//render a response back to user that the operation is not allowed
}
m.save(); //Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following code:
public void doJob() {
MyModel m = MyModel.findById(id);
System.out.println(m.status); // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
thus the update will be done in another transaction that is committed at the end of the job.
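For reference, here is a minimal sketch of what that job could look like. It is an illustration rather than code from this answer: it assumes Play 1.x jobs (play.jobs.Job) and the PENDING/ACTIVE status values from the question, and it moves the findById / status change / save into doJob() so the change is committed as soon as the job finishes, instead of at the end of the long request.

import play.jobs.Job;

public class MyModelUpdateJob extends Job {

    private final Long modelId;

    public MyModelUpdateJob(Long modelId) {
        this.modelId = modelId;
    }

    @Override
    public void doJob() {
        // Runs in its own transaction, which Play commits when doJob() returns,
        // so other requests see the new status without waiting for the long request.
        MyModel m = MyModel.findById(modelId);
        if (m != null && m.status == MyModel.PENDING) { // assumed status constants
            m.status = MyModel.ACTIVE;
            m.save();
        }
    }
}

The request can then call new MyModelUpdateJob(id).now().get() as above and continue its long-running work afterwards.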
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to look at optimistic locking in your example, or (though I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all; try to go for being idempotent.
In your case, if user 1 and user 2 both call that method while it is pending, it goes to active either way (idempotent). Whether user 1 or user 2 wins, the outcome is the same as if you had the synchronized block anyway.
I am sure you have a more complex scenario not shown here, but do read the article Building on Quicksand, as it really changes the traditional way of thinking and is how Google, Amazon, and other very large-scale systems operate.
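To illustrate that idempotent route (again a sketch, not part of the original answer: it assumes Play 1.x's JPA helper and the PENDING/ACTIVE values from the question), the PENDING to ACTIVE transition can be written as a conditional update, so the database, rather than a synchronized block, decides which caller wins:

import play.db.jpa.JPA;

public class ModelTransitions {

    // Sketch: exactly one concurrent caller gets true; the others get false
    // and can render the "operation not allowed" response, with no lock.
    public static boolean activate(Long id) {
        int updated = JPA.em()
                .createQuery("UPDATE MyModel m SET m.status = :active "
                        + "WHERE m.id = :id AND m.status = :pending")
                .setParameter("active", MyModel.ACTIVE)   // assumed status values
                .setParameter("pending", MyModel.PENDING)
                .setParameter("id", id)
                .executeUpdate();
        return updated == 1;
    }
}

Calling it twice leaves the row ACTIVE either way; only the first call reports success.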
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL players use, but only as a last resort ;) ;)
later,
Dean
I'm in the process of writing a query manager for a WinForms application that, among other things, needs to be able to deliver real-time search results to the user as they're entering a query (think Google's live results, though obviously in a thick client environment rather than the web). Since the results need to start arriving as the user types, the search will get more and more specific, so I'd like to be able to cancel a query if it's still executing while the user has entered more specific information (since the results would simply be discarded, anyway).
If this were ordinary ADO.NET, I could obviously just use the DbCommand.Cancel function and be done with it, but we're using EF4 for our data access and there doesn't appear to be an obvious way to cancel a query. Additionally, opening System.Data.Entity in Reflector and looking at EntityCommand.Cancel shows a discouragingly empty method body, despite the docs claiming that calling this would pass it on to the provider command's corresponding Cancel function.
I have considered simply letting the existing query run and spinning up a new context to execute the new search (and just disposing of the existing query once it finishes), but I don't like the idea of a single client having a multitude of open database connections running parallel queries when I'm only interested in the results of the most recent one.
All of this is leading me to believe that there's simply no way to cancel an EF query once it's been dispatched to the database, but I'm hoping that someone here might be able to point out something I've overlooked.
TL/DR Version: Is it possible to cancel an EF4 query that's currently executing?
It looks like you have found a bug in EF, but when you report it to MS it will probably be treated as a documentation bug. Anyway, I don't like the idea of interacting directly with EntityCommand. Here is my example of how to kill the current query:
var thread = new Thread((param) =>
{
var currentString = param as string;
if (currentString == null)
{
// TODO OMG exception
throw new Exception();
}
AdventureWorks2008R2Entities entities = null;
try // Don't use using because it can cause race condition
{
entities = new AdventureWorks2008R2Entities();
ObjectQuery<Person> query = entities.People
.Include("Password")
.Include("PersonPhone")
.Include("EmailAddress")
.Include("BusinessEntity")
.Include("BusinessEntityContact");
// Improves performance of readonly query where
// objects do not have to be tracked by context
// Edit: But it doesn't work for this query because of includes
// query.MergeOption = MergeOption.NoTracking;
foreach (var record in query
.Where(p => p.LastName.StartsWith(currentString)))
{
// TODO fill some buffer and invoke UI update
}
}
finally
{
if (entities != null)
{
entities.Dispose();
}
}
});
thread.Start("P");
// Just for test
Thread.Sleep(500);
thread.Abort();
This is the result of playing with it for 30 minutes, so it probably shouldn't be considered a final solution. I'm posting it at least to get some feedback on possible problems with this approach. The main points are:
The context is handled inside the thread
Results are not tracked by the context
If you kill the thread, the query is terminated and the context is disposed (connection released)
If you kill the thread before starting a new one, you are still only ever using one connection
I checked in SQL Profiler that the query is started and terminated.
Edit:
Btw, another approach to simply stop the current query is from inside the enumeration:
public IEnumerable<T> ExecuteQuery<T>(IQueryable<T> query)
{
foreach (T record in query)
{
// Handle stop condition somehow
if (ShouldStop())
{
// Once you close enumerator, query is terminated
yield break;
}
yield return record;
}
}
[I am new to ADO.NET and the Entity Framework, so forgive me if this question seems odd.]
In my WPF application a user can switch between different databases at run time. When they do this, I want to do a quick check that the database is still available. What I have easily available is the ObjectContext. The test I am performing gets the count of records in a very small table; if it returns a result, the test passes, and if it throws an exception, it fails. I don't like this test, but it seemed the easiest to do with the ObjectContext.
I have tried setting the connection timeout both in the connection string and on the ObjectContext, and neither seems to change anything for the first scenario, while the second one is already fast, so it isn't noticeable whether it changes anything.
Scenario One
If the connection was down before the first access, it takes about 30 seconds before I get the exception that the underlying provider failed.
Scenario Two
If the database was up when I started the application and I have accessed it, and then the connection drops while I'm using it, the test is quick and returns almost instantly.
I want the first scenario described to be as quick as the second one.
Please let me know how best to resolve this, and if there is a better way to quickly test connectivity to a database, please advise.
There really is no easy or quick way to resolve this. The ConnectionTimeout value is ignored by the Entity Framework. The solution I used is a method that checks whether a context is valid by passing in the location you wish to validate and then getting the count from a known, very small table. If this throws an exception, the context is not valid; otherwise, it is. Here is some sample code showing this:
public bool IsContextValid(SomeDbLocation location)
{
bool isValid = false;
try
{
context = GetContext(location);
context.SomeSmallTable.Count();
isValid = true;
}
catch
{
isValid = false;
}
return isValid;
}
You may need to use context.Database.Connection.Open()