KafkaStreams: forwarding only updated keys from Transformer - apache-kafka

In my KafkaStreams app I have a registered local store (simple counters) that is updated in the transform method.
In the punctuate method I basically loop over the KV-store and push all the data to the output topic (even entries whose values haven't been updated).
One idea is to store the update timestamp for every key and forward only records updated since the last punctuate call.
But I think there should be a more convenient solution for that.
How can I make this more performant and forward only the updated entries?

As indicated in the comments from Matthias, keeping track of updated records is not supported at the moment.
Your approach of updating a timestamp in the value (or creating a "Value Wrapper" object that contains a timestamp you can modify) and checking whether an update has occurred since the last punctuate call is a valid one.
-Bill
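
For illustration, a minimal Java sketch of that value-wrapper approach; the store name "counters", the CountAndTimestamp class, and the 30-second wall-clock interval are assumptions rather than details from the original app (a real app would also need a serde for the wrapper):

    import java.time.Duration;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.Transformer;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.PunctuationType;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class DirtyOnlyTransformer
            implements Transformer<String, Long, KeyValue<String, Long>> {

        // Hypothetical wrapper: the counter plus the time of its last update.
        public static class CountAndTimestamp {
            long count;
            long updatedAt;
        }

        private ProcessorContext context;
        private KeyValueStore<String, CountAndTimestamp> store;
        private long lastPunctuateAt = 0L;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            this.context = context;
            this.store = (KeyValueStore<String, CountAndTimestamp>) context.getStateStore("counters");
            context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, now -> {
                try (KeyValueIterator<String, CountAndTimestamp> it = store.all()) {
                    while (it.hasNext()) {
                        KeyValue<String, CountAndTimestamp> entry = it.next();
                        // Forward only entries touched since the previous punctuation.
                        if (entry.value.updatedAt >= lastPunctuateAt) {
                            context.forward(entry.key, entry.value.count);
                        }
                    }
                }
                lastPunctuateAt = now;
            });
        }

        @Override
        public KeyValue<String, Long> transform(String key, Long value) {
            CountAndTimestamp current = store.get(key);
            if (current == null) current = new CountAndTimestamp();
            current.count += value;
            current.updatedAt = context.timestamp(); // record's event time
            store.put(key, current);
            return null; // all output is emitted from the punctuator
        }

        @Override
        public void close() {}
    }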

Related

Can I use Time as a globally unique event version?

I found time to be the best value for an event version.
I can merge perfectly independent events of different event sources on different servers whenever needed, without worrying about read-side event-order synchronization. I know which event (from server 1) happened before the other (from server 2) without needing a global sequential event-id generator, which would make all read sides depend on it.
As long as time is a globally, ever-increasing event version, different teams in a company can act as distributed event sources or event readers, and everyone can always rely on the contract.
The world's simplest notification from a write side to subscribed read sides, followed by a query pulling the recent changes from the underlying write side, can simplify everything.
Are there any side effects I'm not aware of?
Time is indeed increasing and you get a deterministic number; however, event versioning does not serve only the purpose of preventing conflicts. We always say that when we commit a new event to the event store, we send the new event version there as well, and it must match the expected version on the event store side, which must be the previous version plus exactly one. Whether there are a thousand or three million ticks between two events, I do not really care; that does not give me the information I need. But knowing whether I have missed an event along the way is critical. So I would not use anything other than an incremental counter, with events versioned per aggregate/stream.
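
A minimal sketch of that per-stream incremental versioning: the append is rejected unless the caller's expected version matches the stream's current version exactly. The class and method names are illustrative assumptions:

    import java.util.ArrayList;
    import java.util.List;

    public class EventStream {
        private final List<Object> events = new ArrayList<>();

        // The current version is simply the number of committed events.
        public synchronized long currentVersion() {
            return events.size();
        }

        // Append succeeds only when expectedVersion matches exactly; a gap or a
        // concurrent writer is detected immediately, which a timestamp cannot
        // guarantee.
        public synchronized void append(Object event, long expectedVersion) {
            if (expectedVersion != events.size()) {
                throw new IllegalStateException(
                        "Concurrency conflict: expected version " + expectedVersion
                        + " but stream is at " + events.size());
            }
            events.add(event);
        }
    }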

How does the .sync() function work?

I was looking at the diagram for a sync operation in the Kinto docs and I have a question.
Why is .sync() a pull.then(push).then(pull) instead of just pull.then(push)?
What do we need the last pull for?
When you do your push you will update the records' last_modified values, so at the end you will need to grab the new last_modified value of the list.
You may also have received some changes on the collection (made by another device) while you were pushing your changes.
Pulling after the push lets you grab the new last_modified value, the changes you've made, as well as the changes that were made in the meantime.
At this point you might think that grabbing the changes you've made is a bit silly (because you already know what you just pushed). It is basically the subject of this issue.
The idea is that you can also try to pull using the last_modified value of your last update as an If-Match header, with the last_modified value of the collection before your changes as a _since parameter, and excluding all the record IDs you've changed.
In that case you will get a 304 most of the time, or a list of changes that were made by others while you were doing your push.
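
To make the flow concrete, here is a hypothetical client sketched in Java; this is not Kinto's actual API, and the RemoteCollection interface is an assumption:

    import java.util.List;

    public class SyncSketch {

        // Assumed interface standing in for a remote collection.
        interface RemoteCollection {
            List<String> pullSince(long since);   // fetch changes newer than `since`
            void push(List<String> localChanges); // server bumps records' last_modified
            long lastModified();                  // collection's current timestamp
        }

        static long sync(RemoteCollection remote, List<String> localChanges, long since) {
            // 1. Pull: import changes made by other devices first.
            List<String> incoming = remote.pullSince(since);
            // ... merge `incoming` into the local store (omitted) ...

            // 2. Push: our writes get new last_modified values on the server.
            remote.push(localChanges);

            // 3. Pull again, still from the pre-push `since`: this returns our own
            //    records' new last_modified values plus anything written meanwhile.
            List<String> postPush = remote.pullSince(since);
            // ... merge `postPush`, skipping records we just pushed if desired ...

            return remote.lastModified(); // becomes `since` for the next sync
        }
    }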

breezejs update cache with changes from server

I am using breezejs in an offline-first manner, initially executing queries against a server and stashing the entities in local storage, where I query the entity manager cache.
When data changes on the server (by means of another app changing it using breeze) the client app synchronizes by just getting a new copy of the entities from the server.
This works great, but I am wondering if there is a way I can get only the changes from the server. I was thinking of maybe setting a revision GUID or timestamp on each record and then checking the metadata to see if it needs updating, but I really have no idea how to proceed.
So my question is can breeze be tweaked to allow for such a use case?
And is there maybe a better way that I am overlooking?
I think your direction is correct. If every table had a datetime column, e.g. "LastModified", that was updated on every record update, then you could add a filter to every Breeze query after the first one, requiring that date to be later than the last time you did this "rebase" query (or the initial load). So it's not supported out of the box, but you can get it to work yourself. A GUID per version is not really a good idea, as you would have to send all these GUIDs on every request and then check all of them; a timestamp makes more sense.
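
As a sketch of the "rebase" query this implies, here is the server-side shape in plain JDBC; the Customers table and LastModified column are assumptions, and Breeze itself would express the same filter as a client-side query predicate:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    public class DeltaFetch {
        // Returns only the rows changed since the last sync, not the full table.
        public static List<String> changedSince(Connection conn, Instant lastSync)
                throws SQLException {
            List<String> ids = new ArrayList<>();
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT Id FROM Customers WHERE LastModified > ?")) {
                ps.setTimestamp(1, Timestamp.from(lastSync));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getString("Id"));
                    }
                }
            }
            return ids;
        }
    }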

Determining whether the MongoDB save method really updates a record or not

My question is as stated in the title. When a request comes to my service for updating the related record in MongoDB, we use the "save" method.
However, I would like to understand whether the save method really updates the record or not.
In other words, I would like to know whether the content to be saved is the same as the existing content in MongoDB. Accordingly, even if the save method executes without any errors, is it possible to tell whether the record was really updated?
Thanks in advance
There are several ways to check this.
The first, after calling save, is to call the getLastError method. Within the console this is just db.getLastError().
This will tell you if an error occurred during the last operation. More details can be found at the following address: http://docs.mongodb.org/manual/core/write-operations/#write-concern.
Another way would be to call findAndModify; this will allow you to update the document and get either the old or the updated document back.
http://docs.mongodb.org/manual/reference/command/findAndModify/
Both of these are available in all of the official drivers.
The save method always writes the record.
There is no situation in Mongo where the write would not happen because the record that is being saved is identical to the record that's already there. The write would simply happen and "overwrite" existing data with new data (which happens to be identical).
The only way you can tell is by comparing the old and new documents - and that's a lot of extra work.
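
One way to do that comparison in a single round trip, sketched with the MongoDB Java driver (findOneAndReplace is the driver-level relative of findAndModify); the collection handle and the _id-based filter are assumptions:

    import static com.mongodb.client.model.Filters.eq;

    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.FindOneAndReplaceOptions;
    import com.mongodb.client.model.ReturnDocument;
    import org.bson.Document;

    public class SaveCheck {
        // Returns true if the write actually changed the stored content.
        // Assumes newDoc carries the same _id as the stored document, so the
        // equality check compares content rather than failing on the id field.
        public static boolean saveChangedSomething(MongoCollection<Document> coll,
                                                   Object id, Document newDoc) {
            Document before = coll.findOneAndReplace(
                    eq("_id", id), newDoc,
                    new FindOneAndReplaceOptions().returnDocument(ReturnDocument.BEFORE));
            if (before == null) {
                return false; // no matching document: nothing was replaced
            }
            return !before.equals(newDoc); // true = content really changed
        }
    }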

How do you manage concurrent access to forms?

We've got a set of forms in our web application that are managed by multiple staff members. The forms are common to all staff members. Right now we've implemented a locking mechanism, but the issue is that there's no reliable way of knowing when a user has logged out of the system and the form should be unlocked. I was wondering if there was a better way to manage concurrent users editing the same data.
You can use optimistic concurrency which is how the .Net data libraries are designed. Effectively you assume that usually no one will edit a row concurrently. When it occurs, you can either throw away the changes made, or try and create some nicer retry logic when you have two users edit the same row.
If you keep a copy of what was in the row when you started editing it and then write your update as:
UPDATE Table SET column = changedvalue
WHERE column1 = column1prev
  AND column2 = column2prev ...
If this updates zero rows, then you know that the row changed during the edit and you can then deal with it, or simply throw an error and tell the user to try again.
You could also create some retry logic: re-read the row from the database and check whether the change made by your user and the change made in the database can be safely combined; if so, do so automatically. Or you could present a choice to the user as to whether they still wish to make their change based on the values now in the database.
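
A minimal JDBC sketch of that zero-rows check, with assumed table and column names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OptimisticUpdate {
        public static void update(Connection conn, String newValue,
                                  String col1Prev, String col2Prev) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE MyTable SET column1 = ? WHERE column1 = ? AND column2 = ?")) {
                ps.setString(1, newValue);
                ps.setString(2, col1Prev);
                ps.setString(3, col2Prev);
                if (ps.executeUpdate() == 0) {
                    // Zero rows updated: someone changed the row during the edit.
                    throw new SQLException("Row was modified by another user; please retry.");
                }
            }
        }
    }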
Do something similar to what is done in many version control systems. Allow anyone to edit the data. When the user submits the form, the database is checked for changes. If the record has not been changed prior to this submission, allow it as usual. If both changes are the same, ignore the incoming (now redundant) change.
If the second change is different from the first, the record is now in conflict. The user is presented with a new form, which indicates which fields were changed by the conflicting update. It is then the user's responsibility to resolve the conflict (by updating both sets of changes), or to allow the existing update to stand.
As Spence suggested, what you need is optimistic concurrency. A standard website that does no accounting for whether the data has changed uses what I call "last write wins". Simply put, whichever connection saves to the database last, that version of the data is the one that sticks. In optimistic concurrency, you use a "first write wins" logic such that if two connections try to save the same row at the same time, the first one that commits wins and the second is rejected.
There are two pieces to this mechanism:
The rules by which you fail the second commit
How the system or the user handles the rejected commit.
Determining whether to reject the commit
Two approaches:
Comparison column that changes each time a commit happens
Compare the data with its committed version in the database.
The first one entails using something like SQL Server's rowversion data type, which is guaranteed to change each time the row changes. The upside is that it makes it simple to roll your own logic to determine whether something has changed. When you get the data, you pull the rowversion column's value, and when you commit, you compare that value with what is currently in the database. If they are different, the data has changed since you last retrieved it and you should reject the commit; otherwise, proceed to save the data.
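
A JDBC sketch of this comparison-column approach, with assumed table and column names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class RowversionCommit {
        // rowVersion is the 8-byte value read together with the row being edited.
        public static boolean tryCommit(Connection conn, long id, String newName,
                                        byte[] rowVersion) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE Customers SET Name = ? WHERE Id = ? AND RowVer = ?")) {
                ps.setString(1, newName);
                ps.setLong(2, id);
                ps.setBytes(3, rowVersion);
                // false = the rowversion changed since the read: reject the commit.
                return ps.executeUpdate() == 1;
            }
        }
    }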
The second one entails comparing the columns you pulled with their existing committed values in the database. As Spence suggested, if you attempt the update and no rows were updated, then clearly one of the criteria failed. This logic can get tricky when some of the values are null. Many object relational mappers and even .NET's DataTable and DataAdapter technology can help you handle this.
Handling the rejected commit
If you do not leave it up to the user, the form would show a message stating that the data has changed since they last edited it, and you would simply re-retrieve the data, overwriting their changes. As you can imagine, users aren't particularly fond of this solution, especially in a high-volume system where it might happen frequently.
A more sophisticated (and also more complicated) approach is to show the user what has changed and allow them to choose which items to try to re-commit. Behind the scenes, you would retrieve the data again, overwrite the values picked by the user with their entries, and try to commit again. In a high-volume system this will still be problematic, because by the time the user has tried to re-commit, the data may have changed yet again.
The checkout concept is effectively pessimistic concurrency, where users "lock" rows. As you have discovered, it is difficult to implement in a stateless environment. Users are notorious for simply closing their browser while they have something checked out, or for using the Back button to return to a set that was checked out and trying to recommit it. IMO, it is more trouble than it is worth to try to go this route in a web-based solution. Assuming you store the user name that last changed a given row, with optimistic concurrency you can inform the user whose changes are rejected who saved the data before them.
I have seen this done two ways. The first is to have a "checked out" column in your database table associated with that data. Your service would have to look for this flag to see if the record is being edited. You can have this expire after a time threshold is met (with a trigger) if the user doesn't commit changes. The second way is having a dedicated "checked out" table that stores IDs and object names (probably the table name). It would work the same way, and you would theoretically have less lookup time. I see concurrency issues using the second method, however.
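
A sketch of acquiring such a "checked out" flag atomically with a single conditional UPDATE; the table and column names are assumptions, and the interval syntax varies by database:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class FormCheckout {
        // The row is locked only if no one else holds it or the previous lock expired.
        public static boolean tryCheckOut(Connection conn, long formId, String user)
                throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE Forms SET CheckedOutBy = ?, CheckedOutAt = CURRENT_TIMESTAMP "
                    + "WHERE Id = ? AND (CheckedOutBy IS NULL "
                    + "OR CheckedOutAt < CURRENT_TIMESTAMP - INTERVAL '30' MINUTE)")) {
                ps.setString(1, user);
                ps.setLong(2, formId);
                return ps.executeUpdate() == 1; // true = we now hold the lock
            }
        }
    }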
Why do you need to look for session timeout? Just synchronize access to your data (forms or whatever) and that's it.
UPDATE: If you mean you have "long transactions", where the form is locked as soon as the user opens the editor and remains locked until the user commits changes, then:
either use optimistic locking, implemented by versioning the form's data table;
but note that optimistic locking can cause loss of work: if a user has been away for a long time, then tries to commit his changes and discovers that someone else has already updated the form, his work is lost. In this case you may want to implement explicit "locking" of the form, where a user "locks" the form as soon as he starts work on it. Another user will notice that the form is "locked" and can either communicate with the lock owner to resolve the issue, or "relock" the form for himself, losing all updates of the first user in the process.
We put in a very simple optimistic locking scheme that works like this:
every table has a last_update_date field in it
when the form is created, the last_update_date for the record is stored in a hidden input field
when the form is POSTed, the server checks the last_update_date in the database against the date in the hidden input field
If they match, then no one else has changed the record since the form was created, so the system updates the data.
If they don't match, then someone else has changed the record since the form was created. The system sends the user back to the form edit page and tells the user that someone else edited the record and they must reapply their changes.
It is very simple and works well enough.
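
A sketch of the POST-side check, folding the comparison and the new stamp into one conditional UPDATE; table and column names are assumptions:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class FormSave {
        // hiddenDate is the last_update_date value carried in the hidden input field.
        public static boolean save(Connection conn, long id, String data,
                                   Timestamp hiddenDate) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE Records SET Data = ?, last_update_date = CURRENT_TIMESTAMP "
                    + "WHERE Id = ? AND last_update_date = ?")) {
                ps.setString(1, data);
                ps.setLong(2, id);
                ps.setTimestamp(3, hiddenDate);
                // false = someone else changed the record; send the user back to the form.
                return ps.executeUpdate() == 1;
            }
        }
    }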
You can use "timestamp" column on your table. Refer: What is the mysterious 'timestamp' datatype in Sybase?
I understand that you want to avoid overwriting existing data with consecutively updates.
If so, when the user opens a screen you have to get last "timestamp" column to the client.
After changing data just before update, you should check the "timestamp" columns(yours and db) to make sure if anyone has changed tha data while he is editing.
If its changed you will alert an error and he has to startover. If it is not, update the data. Timestamp columns updated automatically.
The simplest method is to format your update statement to include the datetime when the record was last updated. For example:
UPDATE my_table SET my_column = new_val WHERE last_updated = <datetime when record was pulled from the db>
This way the update only succeeds if no one else has changed the record since the last read.
You can show the user a message on conflict by checking whether the update succeeded via a SELECT after the UPDATE.