ScalaFX: how to deal with a large set of changes to an Observable* collection

I am using an ObservableMap for data modeling and want to replace its entire contents. Initially the ObservableMap is empty, and it is then filled asynchronously with lots of elements.
Now the problem is that an onChanged event is fired for each and every entry, which creates far too many events and bogs down the GUI. I used pkgs.clear(); pkgs ++= newpkgs.
Is there a way to trigger only one onChanged, either by disabling the handler temporarily, or by having an operation on the map that updates all elements but fires only once afterwards?
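For concreteness, a minimal sketch of the setup described above; the String-keyed map, refreshGui and newpkgs are illustrative placeholders rather than the real code:

import scalafx.collections.ObservableMap

val pkgs = ObservableMap[String, String]()
def refreshGui(): Unit = ()  // stands in for the real GUI update

// Every single put/remove fires this handler, so clear() followed by ++=
// on a large map floods the GUI with change events.
val subscription = pkgs.onChange((map, change) => refreshGui())

val newpkgs = Map("pkgA" -> "1.0", "pkgB" -> "2.1")  // normally thousands of entries
pkgs.clear()
pkgs ++= newpkgs

// "Disabling the handler temporarily" would amount to calling
// subscription.cancel() before the bulk update, re-registering onChange
// afterwards, and calling refreshGui() once by hand.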

I'm not aware of a mechanism to disable/delay/buffer UI updates.
I don't know about other JavaFX view types, but I have experience with a similar situation using TableView, and I suspect other views behave similarly in this regard.
At least the TableView has a property called itemsProperty. When the data in itemsProperty is replaced using setItems (a wrapper for itemsProperty.set(value)), the table tries very hard to update just the minimum slice of itself.
However, for the optimizations to work, the key is that the items must be "value objects": hashCode and equals must be deep and based on the actual data being displayed, not on object identity.
In the case of TableView this may require elaborate rowFactory and cellFactory implementations. The reason is that the data in items can't be "preformatted" in any way, or it would spoil the optimizations inside TableView.
Realizing the above solved my update-churn problems (see the sketch below). Maybe other answers can provide further tips.
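To make that concrete, a minimal ScalaFX sketch of the idea, assuming a hypothetical Pkg row type; a case class gives the deep, data-based equals and hashCode the optimization relies on, and column/cell factory setup is omitted:

import scalafx.collections.ObservableBuffer
import scalafx.scene.control.TableView

// Rows as "value objects": equals/hashCode are derived from the data itself.
case class Pkg(name: String, version: String)

val table = new TableView[Pkg]()

// When the asynchronous load completes, swap in the whole result set with a
// single assignment instead of clearing and refilling the old list; the
// table then works out the minimal update itself.
def showResults(rows: Seq[Pkg]): Unit = {
  table.items = ObservableBuffer(rows: _*)
}

Whether this maps cleanly onto the original ObservableMap model depends on how the map feeds the view, so treat it as a direction rather than a drop-in fix.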

Related

Manually Setting rowCount in GWT DataGrid with a ListDataProvider does not work?

I am using a DataGrid together with a ListDataProvider to display various row data in my application. For most cases, it was fine to fetch everything from the server at once.
Now, it appears to be necessary to fetch data on pagination steps. This means my RPC call returns 10 items each time, plus the total count of possible results.
The total count is used to set up the attached SimplePager by manually calling datagrid.setRowCount(totalCount, true) after having set the row data. "After" is important here, because setRowData also triggers a setRowCount call with the concrete number of items (always 10 in my case).
The problem is that after the row count has been set manually, another participant, a ScheduledCommand, triggers a flushCommand, which in turn triggers a setRowCount call that sets the count back to 10. The consequence: the pager shows 1-10 of 10 and the pager controls are disabled.
How can I enforce a certain rowCount even if the ListDataProvider only has 10 items each time?
You might suggest using an AsyncDataProvider. However, there already is a quite complex generic design (an AbstractTablePresenter<DTO, ...> implementing all the logic to fetch data and push it to a generic display) which is backed by ListDataProviders. It is hard to explain, but I would actually prefer to keep using the ListDataProvider.
For my use case, the easiest fix was to subclass my AbstractTablePresenter for the on-demand case and use an AsyncDataProvider, which brings all the features I need. The damage to my design was less severe than expected (patting myself on the back ;-) ).
I tried subclassing ListDataProvider first, but the relationships between the data, rowCount, row-count events and the attached pager objects are so intertwined that you would end up overriding most of the methods of ListDataProvider and of your pager implementation.

Sorting cells in UITableView into sections, after TableView has been loaded

Right, so my UITableView loads and puts all the cells in alphabetical order. Information is then downloaded from a server and calculations are done, with the TableView being reloaded once everything is complete. Currently this is quite a simple procedure: once the information is downloaded from the server, the cells don't even move, they are left in their alphabetical order. Nothing really happens other than half of the information being filled in and small changes being made depending on the calculations. I was wondering if there is an easy way of putting the cells into sections depending on the calculations done after the download is complete. I did have an idea of creating 4 arrays (there will only ever be 4 sections) and, once isLoading is set to NO, changing the data source of the TableView to have sections; however, that sounds a bit... iffy. I know this is a theoretical question as opposed to a coding problem, but before I go and mess up my code in what is sure to be a stupidly inefficient way of doing things, is there an easy way of "assigning" UITableViewCells to sections?
My main issue with my way of doing it is that, should the user delete a cell, deleting the appropriate entry in Core Data will be a little tricky and prone to errors. This led me on to another idea: what if I added an extra attribute to my Core Data entity? That attribute would be assigned and then saved once the calculations were done. The problem with this is that no existing databases would work. There has to be a neat way of achieving this.
Thanks for the help. If you need me to post any code just say so and I will.
You should be fine if you implement the data source methods related to sections.
For example:
numberOfSectionsInTableView
sectionIndexTitlesForTableView.
Any time the table data is reloaded (e.g., [self.tableView reloadData]), these methods will be called and the data will be placed into their sections.
Keep in mind that the cells are just the visual representation of your model, which in this case is your fetched data. The cells are not assigned to sections; they are simply created however you specify for your model (via the table view data source and delegate methods).
Regarding deletion of entries while using Core Data, I suggest taking a look at NSFetchedResultsController. It will monitor any changes to your table's data and message its delegate, your table view controller, when updates are made.
For example, a deletion would start with the usual table view callback (i.e., tableView:commitEditingStyle:forRowAtIndexPath:). Within that method, you would then delete the entry from Core Data (e.g., [self.myDatabase.managedObjectContext deleteObject:entity]). Assuming you initialized the NSFetchedResultsController with the same managed object context, the deletion would be automatically reflected back to your user.
If you're using a remote DB, however, you'll also have to perform a save (however you've implemented that) to ensure the DB is updated too.
Note also that if you use an NSFetchedResultsController, you don't need to implement the section data source methods since NSFetchedResultsController can handle that for you. Just define the key-path in your data model that will return the section name when initializing the NSFetchedResultsController.

iPhone Strange CoreData Caching Performance Issues

I'm working on an app that uses Core Data and NSFetchedResultsController. A major component of the app is filtering down the items in an indexed table view based on a set of 15 or so pre-defined switches that correspond to a property or relationship of my managed objects. In most situations, I'm searching through a set of around 300-400 objects, so caching/performance is not an issue. Everything is very snappy with no caching required.
However, there is a part of my app that basically searches through all objects in my CD database (~15,000 items). Here, I'm trying to implement caching on the NSFetchedResultsController to improve performance. The 'cacheString' property for the NSFetchedResultsController is simply the predicate's string value. Whenever the user toggles a filter switch, I create a new predicate, create a new NSFetchedResultsController, and set the cache to the new predicate's string value. The first hit to get all the items (unfiltered) takes ~7 seconds, with subsequent hits taking less than one.
What's strange, though - and here's my problem - is that once I proceed to the 'next step' of the table view (I push a new view controller to the nav controller, passing it a reference to the NSFetchedResultsController's fetchedObjects), performance drops considerably. This next view is essentially a different representation (a horizontally paging scroll view) of the previous view's table list with one item on the screen at once. When I page from one item to the next, accessing the previous or next object in the fetchedObjects array locks up the phone for about 5 seconds. The 'lock up' duration increases the further you go into the fetchedObjects array. If 'i == 0', there is no perceivable lag. If 'i == 10,000', it takes about 15 seconds to access the next object. Nuts! If I disable caching (or it's a query that wasn't cached so it needed to pull fresh results), everything except for the initial filter query is fast and snappy with zero lag.
Does enabling caching ONLY cache indexing info for a table view and not the fetched objects themselves?
I'm not sure what the deal is here. I hope I explained this well enough - let me know if you want to see some code or need additional info.
Thanks!
Billy
Alright, I've found out what my problem was...
Basically, asking my NSFetchedResultsController for a managed object via objectAtIndexPath: is IMMENSELY faster than going directly to the fetchedObjects array and asking for objectAtIndex: (which, of course, is what I was doing), especially as your index gets into the thousands. I'm not 100% sure why that is, though. I'm guessing NSFetchedResultsController does some fancy stuff to efficiently pull out single objects rather than going straight to the raw data. So, I don't think the caching had anything to do with my performance issue.
Thanks to those who checked out my question. I hope this helps anyone else having similar issues.

How to keep track of objects deleted from an ObservableCollection in CRUD scenarios?

In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls.
The idea is we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database.
Self-Tracking Entities, as their name might suggest, track their state themselves.
When a new STE is created it has the Added state; when you modify a property it sets the Modified state. It can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior, you need to code it yourself.
In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them.
Something along the lines of:
protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
{
    // One shadow list of removed entities per collection, keyed by the
    // collection's hash code.
    var deletedEntities = new List<TEntity>();
    DeletedCollections[collection.GetHashCode()] = deletedEntities;

    collection.CollectionChanged += (o, a) =>
    {
        if (a.OldItems != null)
        {
            deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
        }
    };
}
Now if the user decides to save his changes to the server, I can get the list of removed items, and send them along:
ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
customers.RemoveAt(0);
MyServiceProxy.UpdateCustomers(customers);
At this point the UpdateCustomers method will check my shadow collection to see whether any items were removed, and send them along to the server side.
This approach works fine until you start to think about the life cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected, there is no way of knowing that we need to remove the corresponding shadow collection from our dictionary.
I came up with a somewhat complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection, and every few seconds I check whether the reference is still alive; if it isn't, I remove the shadow collection.
But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution.
EDIT:
In the end I decided to go with subclassing the ObservableCollection. The service proxy code is generated so it was a relatively simple task to change it to return my derived type.
Thanks for all the help!
Instead of rolling your own "weak reference + poll Is it Dead, Is it Alive" logic, you could use the HttpRuntime.Cache (available from all project types, not just web projects).
Add each shadow collection to the Cache, either with a generous timeout, or a delegate that can check if the original collection is still alive (or both).
It isn't dreadfully different to your own solution, but it does use tried and trusted .Net components.
Other than that, you're looking at extending ObservableCollection and using that new class instead (which I'd imagine is no small change), or changing/wrapping the UpdateCustomers method to remove the shadow collection from DeletedCollections.
Sorry I can't think of anything else, but hope this helps.
BW
If replacing ObservableCollection is a possibility (e.g. if you are using a common factory for all the collection instances), then you could subclass ObservableCollection and add a finalizer which cleans up the deleted items that belong to this collection.
Another alternative is to change the way you compute which items are deleted. You could maintain the original collection and give the client a shallow copy. When the collection comes back, you can compare the two to see which items are no longer present. If the collections are sorted, the comparison can be done in time linear in the size of the collections. If they're not sorted, the modified collection's values can be put in a hash table and used to look up each value in the original collection. If the entities have a natural id, then using that id as the key is a safe way of determining which items are not present in the returned collection, that is, which have been deleted. This also runs in linear time.
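To illustrate the hash-table variant, a small sketch (written in Scala, this page's main language, with a hypothetical Customer type carrying a natural id; the collections in question are of course .NET ObservableCollections):

case class Customer(id: Int, name: String)

// Ids of the returned (possibly modified) collection go into a set, so each
// membership test is O(1) and the whole comparison is linear overall.
def deletedSince(original: Seq[Customer], returned: Seq[Customer]): Seq[Customer] = {
  val stillPresent = returned.map(_.id).toSet
  original.filterNot(c => stillPresent.contains(c.id))
}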
Otherwise, your original solution doesn't sound that bad. In Java, a WeakReference can be registered with a ReferenceQueue, which lets you find out when the reference is cleared; there is no directly equivalent mechanism in .NET, but polling is a close approximation. I don't think this approach is so bad, and if it's working, why change it?
As an aside, aren't you concerned about GetHashCode() returning the same value for distinct collections? Using a weak reference to the collection might be more appropriate as the key, then there is no chance of a collision.
I think you're on a good path, and I'd consider refactoring in this situation. My experience is that in 99% of cases the garbage collector makes memory management awesome - almost no real work needed.
But in the 1% of cases it takes someone to realize that they've got to up the ante and go "old school" by firming up their caching/memory management in those areas. Hats off to you for realizing you're in that situation and for trying to avoid the IDisposable/WeakReference tricks. I think you'll really help the next guy who works in your code.
As for getting a solution, I think you've got a good grip on the situation:
- be clear when your objects need to be created
- be clear when your objects need to be destroyed
- be clear when your objects need to be pushed to the server
Good luck! Tell us how it goes :)

Any tips or best practices for adding a new item to a history while maintaining a maximum total number of items?

I'm working on some basic logging/history functionality for a Core Data iPhone app. I want to maintain a maximum number of history items.
My general plan is to ignore the maximum when adding a new item and enforce it whenever I need to fetch all the items anyway (e.g. for searching or browsing the history). Alternatively, I could do it when adding a new item: fetch the current items, add the new one, and delete the oldest one if we're at the maximum. The second way seems less efficient, since I would be fetching all the items when I otherwise wouldn't need to.
So, the questions:
Which way is better? Is there an even better way to do this that I'm not considering?
How many items would be a reasonable maximum? The history is used for text field autocompletion, so more items means better usability, unless the number of items is so huge that it's slowing stuff down.
Thanks!
Whichever method is easier to implement is the right one. You shouldn't bother with a more efficient/more complicated implementation unless it proves necessary.
If these objects are in a to-many relationship of some kind, I'd use the relationship to manage the maximum number. (Override add<Whatever>Object: and delete the extraneous items then).
If you're just fetching them, then that's really your only opportunity to filter them out. If you're using an NSArrayController, you might be able to implement a subclass that detects when new objects are added and chops off the extra ones.
If the items are added manually by the user, then you can safely use the clean-up-later method. With text data, a user won't enter more than a few hundred items at most, and text data takes up very little room. If the items are added by software, you have to check every so many entries or risk spill-over.
You might not want to spend a lot of time on this. An autocomplete history is not that big, usually just a few hundred entries. I would write it the simplest way, with clean-up later, and then fiddle with it only if you hit a definite performance bottleneck.
Remember, premature optimization is the root of all programming evil. That and the dweebs in marketing.