How to Remove No Longer Used OData Bindings? - sapui5

Assume a page that shows a complex data structure (for example, an article with many details). This view will be reused from time to time by rebinding it to different articles.
Now, I noticed that the ODataModel keeps all previously used article entities in memory (even if they are no longer bound to any control).
This leads to two issues:
Memory consumption increases over time (as long as the application is not reloaded).
If the application forces a refresh of the data model, all entities are reloaded (including the ones that are no longer used).
The second issue seems to be the bigger problem, because it slows the application down.
I have not found a solution to this problem. If I use refresh(true, true), it seems that all data is reloaded.
Is there a way to clean up the model?
Edit
Let's say you have a list of thousands of articles. The user can click on one of the articles and navigates to a detail screen for that article.
The OData model on the client side caches this article. To see it, do something like:
var oModel = this.getModel("modelName");
and look with the debugger into oModel.oData.
If the user now navigates back and chooses the next article, that one is cached as well.
If the user does this 1000 times, all 1000 articles are now in the model.
If you then trigger oModel.refresh(true);, the data of all 1000 articles is reloaded, not only the one bound to the view.
Now, my application is not about showing article information. It is a more complex structure with subitems. Each time the user visits this page, more data is cached (and re-fetched in case of a refresh call on the model).
Edit 2
The function updateBindings(bForceUpdate?) seems to help a little.
Anyhow, the data accumulation is still there in the ODataModel class.
That means: each visited data path stays in memory until the next full reload (F5) of the page. If someone uses such an application over a day, the data accumulates, and a refresh call on the model reads all of that data again, whether it is still bound to a view or not.

Try deleteCreatedEntry(oContext). Even though this is not the intended use case for this method, it might work for removing an entity from the model without triggering a backend request.
You could also check whether updateBindings(bForceUpdate?) only triggers an update on the entities that are actually bound.
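An untested sketch of combining the two suggestions, assuming a sap.ui.model.odata.v2.ODataModel and that the detail view's binding context points at the entity you want to drop (again, this is not the documented use of deleteCreatedEntry):
var oModel = this.getModel("modelName");
var oContext = this.getView().getBindingContext("modelName");
// not the documented use case: attempt to drop the cached entity without a backend request
oModel.deleteCreatedEntry(oContext);
// then let only the bindings that are still attached update themselves
oModel.updateBindings(true);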

1) I do not really understand your problem here. What is it exactly that you do? OData always holds the result of your request plus a queue of changes to that request. If you create lots of entries while your application is running, of course the memory consumption will increase. If you want to revert to the original request, you can use resetChanges(). This way the used memory should decrease again, but you lose all your changes to the model.
2) Maybe you should look into OData filtering (http://www.odata.org/getting-started/basic-tutorial/) so that you only load the entities you really want. If you only want part of an entity loaded, then you should maybe redesign your entities to avoid a lot of overhead.
It is hard to guess what your exact problem is.

Well, if you know exactly what you are doing, you can try something like this:
this.getModel("modelname").aBindings = []
A better solution would be to go through the aBindings array and remove only the redundant bindings.
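For example, something along these lines, keeping in mind that aBindings is an internal, undocumented array and that what counts as "redundant" is application-specific (the path check below is only an illustrative assumption):
var oModel = this.getModel("modelname");
oModel.aBindings = oModel.aBindings.filter(function (oBinding) {
    // hypothetical rule: drop every binding that points into the /Articles entity set
    return oBinding.getPath().indexOf("/Articles") !== 0;
});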

Related

How do I create actors that express hierarchical structures in akka?

I recently started developing with Akka event sourcing / cluster sharding, and thanks to the online resources I think I understand the basic concepts and how to create a simple application with them. I am, however, struggling to apply this methodology to a slightly more complex data structure:
As an example, let's think about webpages and URLs.
Each Page can be represented by an actor in the cluster (using the path of the page as its unique id, e.g. /questions/60037683).
On each page I can issue commands such as
Create page (so if the page does not exist, it will be created)
Edit page (editing the details of the page)
Get page content (and children)
Etc.
When issuing commands to single pages, everything is easy, as it is "written in the manual". But I have the added complexity that a web page can have children, so when creating a "child page" I need the parent to update its references to its children.
I thought of some possible approaches, but they feel incomplete.
Sending all events to the single WebPage actor and, when creating a page, finding the parent page (if any) and communicating to it that a new child has been added
Sending all events to the single WebPage actor and, when creating a page, sending the message to the parent, which then creates a new command telling the child to initialize
Creating an infrastructure actor such as a WebPageRepository that keeps track of the page tree and relays CRUD commands to all web page actors
My real problem is, I think, handling the return of Futures properly when relaying messages to other actors that have to actually perform the job.
I'm making a lot of confusion and some reading resources would be greatly appreciated.
Thanks for your time.
EDIT: the first version was talking about a generic, hierarchical, file-system-like structure. I updated it with the real purpose, web pages and URLs, and tried to clarify my issues better.
After some months of searching, I reached the conclusion that what I'm trying to do is have actors behave transactionally, so that when something is created the parent is also updated in a safe manner, meaning that if one operation fails, all operations that completed successfully are rolled back.
The best pattern for this, in my opinion, proved to be the saga pattern, which adds a bit of complexity to the whole process, but in the long run it does what I needed.
Basically, I ended up implementing the main actor as a standalone piece (as it should be) that can receive create commands and add-child commands.
There is then a saga actor which takes care of creating the content, adding the child to the parent and rolling back everything if something fails during the process.
If someone else has a better solution, I'll be glad to hear it.
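To make the rollback part concrete, here is a rough, language-agnostic sketch of what the saga coordinator does, written as plain JavaScript rather than Akka code (createPage, addChild and deletePage are hypothetical operations standing in for the commands sent to the actors):
async function createChildPage(pages, parentId, pageId) {
    const page = await pages.createPage(pageId);    // step 1: create the child page
    try {
        await pages.addChild(parentId, pageId);     // step 2: register it on the parent
    } catch (err) {
        await pages.deletePage(pageId);             // compensation: undo step 1
        throw err;                                  // surface the failure to the caller
    }
    return page;
}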

mvvmcross editing a complex object over multiple pages and saving data

I'm fairly new to MvvmCross and the MVVM model in general. I have been trying to create my own cross-platform app for several weeks now and I'm stuck on what would be good practice. I have two main problems that I hope somebody can help me with.
Question 1:
I have a complex model with many properties, sub-items, and sub-items within those sub-items. Also, many values are automatically calculated based on other values.
I implemented MvxNavigatingObject everywhere, and all values are correctly notified when changes occur. So far so good.
Now I want to let people use the app to change the values in my model. But because there are so many input fields, I want to divide the data over several pages. Each page has its own view model, of course, and that means reloading my large object every time the page changes.
To solve this, I created a DataHolderService, which is loaded as a singleton on all the view models. Then I let my viewmodels change the data in the DataHolderService and I never have to reload the data.
But I wonder, is this good practice? It feels a bit strange to be doing this. Are there other possibilities? Like using the same viewmodel on multiple pages?
Question 2
I would like to save my data to the database so it persists between sessions. I have a SQLite database and am able to save the data using a button. But if the user forgets to use the save button and the app is put in the background until the system eventually kills it, the data would be lost.
I therefore added a timer, which periodically saves the data to the database. But I can understand that this isn't very good practice. What would be a good way to save the data back into the database without having the user needing to press a save button? Is there an event/function that will fire before the view model is disposed?
It's a bit hard to understand exactly what you are trying to achieve, however.
Would it not be better to use a single list of questions rather than spreading them over multiple pages?
We created a similar page recently. If the data is shown as a checkbox, radio buttons, spinner, etc., we save it immediately when the value changes.
For saving text, we use a 1-second timer that starts when the user starts typing, is reset if the user changes the text within that time, and saves the text when it fires.
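The same debounce idea, sketched in plain JavaScript rather than MvvmCross/.NET code (saveToDatabase is a hypothetical persistence call):
var saveTimer = null;
function onTextChanged(newText) {
    clearTimeout(saveTimer);                // restart the 1-second window on every change
    saveTimer = setTimeout(function () {
        saveToDatabase(newText);            // hypothetical persistence call
    }, 1000);
}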

Why should the event store be on the write side?

Event sourcing is pitched as a bonus for a number of things, e.g. event history / audit trail, complete and consistent view regeneration, etc. Sounds great. I am a fan. But those are read-side implementation details, and you could accomplish the same by moving the event store completely to the read side as another subscriber... so why not?
Here's some thoughts:
The views/denormalizers themselves don't care about an event store. They just handle events from the domain.
Moving the event store to the read side still gives you event history / audit
You can still regenerate your views from the event store. Except now it need not be a write model leak. Give him read model citizenship!
Here seems to be one technical argument for keeping it on the write side. This from Greg Young at http://codebetter.com/gregyoung/2010/02/20/why-use-event-sourcing/:
There are however some issues that exist with using something that is storing a snapshot of current state. The largest issue revolves around the fact that you have introduced two models to your data. You have an event model and a model representing current state.
The thing I find interesting about this is the term "snapshot", which more recently has become a distinguished term in event sourcing as well. Introducing an event store on the write side adds some overhead to loading aggregates. You can debate just how much overhead, but it's apparently a perceived or anticipated problem, since there is now the concept of loading aggregates from a snapshot and all events since the snapshot. So now we have... two models again. And not only that, but the snapshotting suggestions I've seen are intended to be implemented as an infrastructure leak, with a background process going over your entire data store to keep things performant.
And after a snapshot is taken, events before the snapshot become 100% useless from the write perspective, except... to rebuild the read side! That seems wrong.
Another performance related topic: file storage. Sometimes we need to attach large binary files to entities. Conceptually, sometimes these are associated with entities, but sometimes they ARE the entities. Putting these in the event store means you have to physically load that data each and every time you load the entity. That's bad enough, but imagine several or hundreds of these in a large aggregate. Every answer I have seen to this is to basically bite the bullet and pass a uri to the file. That is a cop-out, and undermines the distributed system.
Then there's maintenance. Rebuilding views requires a process involving the event store. So now every view maintenance task you ever write further binds your write model into using the event store.. forever.
Isn't the whole point of CQRS that the use cases around the read model and write model are fundamentally incompatible? So why should we put read-model concerns on the write side, sacrificing flexibility and performance, and coupling them back up again? Why spend the time?
So all in all, I am confused. In all respects from where I sit, the event store makes more sense as a read model detail. You still achieve the many benefits of keeping an event store, but you don't over-abstract write side persistence, possibly reducing flexibility and performance. And you don't couple your read/write side back up by leaky abstractions and maintenance tasks.
So could someone please explain to me one or more compelling reasons to keep it on the write side? Or alternatively, why it should NOT go on the read side as a maintenance/reporting concern? Again, I'm not questioning the usefulness of the store. Just where it should go :)
This is a long-dead question that someone pointed me to. There are quite a few reasons why it's better to store events on the write side.
From my understanding, the architecture you are talking about is a very common one that I see ... fail. We store our domain model in a relational database and then put out events. You add the twist of saving those events on the read side in an event store. This will likely lead to a mess.
The first issue you will run into is in the publishing of your events. What happens when I save to the database and publish to, say, MSMQ, and I die in the middle? So DTC gets introduced between them. This is a huge thing to bring in; distributed transactions should be avoided like the plague. It is also quite inefficient, as I am probably making the data durable twice (once to the queue, once to the database). This will limit system throughput by a lot (DTC benchmarks of 200-300 messages/second are common, while with plain event appends 20-30k/second is common).
Some work around the need for DTC by putting a table in their database that holds the events and operates as a queue. This avoids the need for DTC, but you will still run into the next issue.
What happens when you have a bug? I know you would never write buggy code, but consider one of the junior/maintenance developers working on the project later. As an example, what happens when the change to the domain object and the event that is raised do not match? Say you set State on your domain object to "LA" (hardcoded) but you properly set State on the event to cmd.State ("CT").
How will you detect that such errors are occurring? The biggest problem with what is being discussed is that there are now two sources of "truth": the database on the write side and the event stream coming out. There is no way to prove that they are equivalent. This will cause all sorts of weird bugs down the line.
I think this is really an excellent question. Treating your aggregate as a sequence of events is useful in its own right on the write side, making command retries and the like easier. But I agree that it seems upsetting to work to create your events, then have to make yet another model of your object for persistence if you need this snapshotting performance improvement.
A system where your aggregates only stored snapshots, but sent events to the read-model for projection into read models would I think be called "CQRS", just not "Event Sourcing". If you kept the events around for re-projection, I guess you'd have a system that was very much both.
But then wouldn't you have three definitions? One for persisting your aggregates, one for communicating state changes, and any number more for answering queries?
In such a system it would be tempting to start answering queries by loading your aggregates and asking them questions directly. While this isn't forbidden by any means, it does tend to start causing those aggregates to accrete functionality they might not otherwise need, not to mention complicating threading and transactions.
One reason for having the event store on the write side might be to resolve concurrency issues before events become "facts" and get distributed/dispatched, e.g. through optimistic locking when committing to event streams. That way, on the write side you can make sure that concurrent "commits" to the same event stream (aggregate) are resolved: one of them gets through, and the other has to resolve the conflict in a smart way, by comparing events or by propagating the conflict to the client and thereby rejecting the command.
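A minimal sketch of that expected-version check, using a hypothetical in-memory store in plain JavaScript (real event stores expose an equivalent expected-version argument on their append operation):
var streams = new Map();    // stream id -> array of committed events
function append(streamId, expectedVersion, newEvents) {
    var events = streams.get(streamId) || [];
    if (events.length !== expectedVersion) {
        // another writer committed first: reject so the caller can re-read,
        // compare events and retry, or propagate the conflict to the client
        throw new Error("Concurrency conflict on stream " + streamId);
    }
    streams.set(streamId, events.concat(newEvents));
}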

Asynchronous data loading in Entity-Framework?

Did anyone hear about asynchronous execution of an EF query?
I want my items control to be filled right when the form loads, and the user should be able to view the list while the rest of the items are still being loaded.
Maybe by auto-splitting the execution into batches of items (i.e. a few queries per execution), all on the same connection.
I posted a feature suggestion to Microsoft; please go there and share your ideas as well.
Not wanting to sound like a commercial, but I noticed that the latest DevExpress grid offers features like this in their WPF grid. Essentially you want to load the visible number of items first, then load the rest in a background thread so your UI isn't freezing up. The background thread should probably load one additional page at a time and make it available to the UI thread.
It's something you would want to think about carefully and make sure you get it right, or simply buy a control that does the hard work for you.
I take from your link that this is a web app. Is that correct?
A query must complete and return data before rendering can begin, so an EF feature will not help you here. Rather, look at breaking up your process into several smaller processes that can run at once.
Keep in mind that ASP.NET cannot return a response to the browser until it is done rendering the HTML.
Let me assume you are executing a single query, getting the results back and displaying them on a page.
Best option: page your results. If you have 4000 records, show the first 50. If you show 200+ records to a user, they cannot digest that much information.
If that does not fit your needs, look at firing one query for the first 50 results. Then make Ajax calls for the remaining records and build the UI from there, in (reasonably sized) chunks.
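A rough client-side sketch of that chunked approach in plain JavaScript (the /articles?skip=&take= endpoint and the appendToList helper are assumptions for illustration, not part of the question):
async function loadInChunks(pageSize) {
    var skip = 0;
    while (true) {
        var response = await fetch("/articles?skip=" + skip + "&take=" + pageSize);
        var items = await response.json();
        if (items.length === 0) {
            break;                  // no more records to fetch
        }
        appendToList(items);        // hypothetical helper that adds rows to the UI
        skip += pageSize;
    }
}
loadInChunks(50);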

Wicket, page stack, and memory usage

A Wicket application serializes and caches all pages to support stateful components, as well as for supporting the back button, among other possible reasons. I have an application which uses setResponsePage to navigate from screen to screen. Over a pretty short amount of time the session gets rather large because all of the prior pages are stored in the session. For the most part, I only need the session to contain the current page, for obvious reasons, and perhaps the last 2 or 3 pages to allow easy navigation using the browser's back button.
Can I force a page to expire after I have navigated away from it and I know that I don't want to use the back button to return to that version of the page? More generally, what is the recommended way to deal with session growth in Wicket?
http://apache-wicket.1842946.n4.nabble.com/Wicket-Session-grows-too-big-real-fast-td1875816.html
If you use loads of domain objects on your page, which are possibly tightly coupled to other domain objects, be sure to avoid serializing them!
Have a look at
LoadableDetachableModel for wrapping domain objects
DataView and IDataProvider for displaying lists of domain objects
Thou shalt not stuff domain objects into instance variables of components.
Thou shalt not make domain object references final in order to use them in anonymous subclasses.
Thou shalt not pass a mere List of domain objects to a ListView.
Perhaps, when subclassing WebRequestCycle in your Application class, you might gain control of a page's lifetime in the pagemap... haven't tried it, though.
To avoid the session choking on the byte streams that page serialization keeps stacking up, with memory usage piling up along with it, you can use detachable models: their hooks let them arrange for their own storage and restoration at the beginning of each request cycle. This way you have complete control over models whose data does not need to stay in the session for pages that are no longer required or reachable through the 'Back' button.