Apache Wicket: for how long does a component exist in memory to remember its previous state?

I have a question: for how long does a Wicket component exist in memory to remember its previous state? Is there any time limit for that, for example a session timeout of about 20 minutes? And when there are lots of users, say 1 million users accessing the server, will Wicket stay stable or run into out-of-memory errors? Please explain the internal handling of requests in Wicket if possible.

Components in Wicket exist only as part of the component tree of a page. A stateful page will be kept around for the duration of the session, so its components will exist just as long.
However: By default, only the most recently rendered page will actually be in the session itself. Older pages are asynchronously serialized and stored on disk. These old pages are rarely needed and will simply be loaded again when requested. This way Wicket can respond quickly and at the same time keep a low memory footprint.
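As an illustration, the trade-off between memory and disk is tunable. Here is a minimal sketch in Java, assuming the IStoreSettings API of Wicket 6/7 (verify the setting names against your Wicket version; the HomePage class is a made-up placeholder):

import java.io.File;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.settings.IStoreSettings;
import org.apache.wicket.util.lang.Bytes;

public class MyApplication extends WebApplication {

    // Made-up placeholder page so the sketch is self-contained.
    public static class HomePage extends WebPage {}

    @Override
    public Class<? extends org.apache.wicket.Page> getHomePage() {
        return HomePage.class;
    }

    @Override
    protected void init() {
        super.init();
        IStoreSettings store = getStoreSettings();
        // Number of most recently used pages kept in memory per session;
        // older pages are serialized asynchronously to the file store.
        store.setInmemoryCacheSize(1);
        // Cap on disk space used per session for serialized pages.
        store.setMaxSizePerSession(Bytes.megabytes(10));
        // Folder the serialized pages are written to.
        store.setFileStoreFolder(new File(System.getProperty("java.io.tmpdir")));
    }
}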

Related

How to improve performance on mobile devices using Xamarin Forms

We are using Xamarin Forms with Prism. We have simple pages with a small amount of data to be displayed on each page, including simple calculations. We are using the Prism navigation service to navigate between pages. We are experiencing some latency from clicking a button to navigating to the next page. Data is fetched inside OnNavigatedTo, since the navigation parameter changes the data. Can someone shed some light on why there is latency? It is close to 1+ second and sometimes 2 seconds.
Also, it seems like each page is getting rendered twice: once before OnNavigatedTo, and then again when the data changes. OnPropertyChanged or OnCollectionChanged is fired from within OnNavigatedTo and seems to cause the second rendering.
Version 6.3.0 introduced the concept of OnNavigatingTo, while OnNavigatedTo has been around for a while. There is a distinct difference between the two. Understanding the order in which things occur should help you create a nicer user experience.
New Page is created
OnNavigatedFrom is called
OnNavigatingTo is called
New Page is pushed onto the Navigation Stack and becomes visible
OnNavigatedTo is called
Applications that have to reach out and fetch data often experience latency because it takes time to reach the remote service, get the data we want, and parse that data into a usable object. Many developers wanted to cut down on the UI having to refresh as the bindings were being updated after the page was already visible, which led to the introduction of OnNavigatingTo.
While neither callback reduces network latency, OnNavigatingTo gives you the ability to put the calling page into an IsBusy state that displays some sort of loading indicator; IsBusy is then set back to false when NavigateAsync completes, and your new page is displayed already loaded.
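Prism itself is C#, so as a language-agnostic sketch of that busy-flag pattern, here is a hypothetical Java view model (fetchArticle and navigateAsync are made-up stand-ins, not Prism APIs):

import java.util.concurrent.CompletableFuture;

public class NavigationBusyDemo {

    // Bound to a loading indicator on the calling page.
    private volatile boolean isBusy;

    public CompletableFuture<Void> openDetailPage(int articleId) {
        isBusy = true;                                     // show the indicator immediately
        return fetchArticle(articleId)                     // the slow remote call
                .thenCompose(this::navigateAsync)          // push the page once data is ready
                .whenComplete((v, err) -> isBusy = false); // hide the indicator
    }

    private CompletableFuture<String> fetchArticle(int id) {
        // Stand-in for the remote fetch done in OnNavigatingTo.
        return CompletableFuture.supplyAsync(() -> "article-" + id);
    }

    private CompletableFuture<Void> navigateAsync(String article) {
        // Stand-in for NavigationService.NavigateAsync(...).
        return CompletableFuture.completedFuture(null);
    }
}

The point is the sequencing: the indicator appears before the fetch starts, and the new page only becomes visible once its data is already loaded.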

What is page fault service time?

I am reading about operating systems and have a doubt regarding page fault service time.
Average memory access time = P(no page fault) × memory access time + P(page fault) × page fault service time
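As a worked example with made-up textbook numbers (a 100 ns memory access, an 8 ms page fault service time, and a fault probability p = 0.0001):
EAT = 0.9999 × 100 ns + 0.0001 × 8,000,000 ns ≈ 99.99 + 800 ≈ 900 ns
so even one fault in ten thousand accesses dominates the average.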
My doubt is: what does the page fault service time include?
As I understand it:
First, address translation is attempted in the TLB or the page table; when the entry is not found in the page table, a page fault has occurred. So the page has to be fetched from disk, and the entries get updated in the TLB as well as the page table.
Hence, page fault service time = TLB time + page table time + page fetch from disk
Can someone please confirm this?
What you are describing is academic bulls____. There are so many factors involved that a simple equation like that cannot describe the access time. Nonetheless, certain idiotic operating systems books put out stuff like that to sound intellectual (and professors like it for exam questions).
What these idiots are trying to say is that a page reference will be in memory or not in memory with the two probabilities adding up to 1.0. This is entirely meaningless because the relative probabilities are dynamic. If other processes start using memory, the likelihood of a page fault increases and if other processes stop using memory, the probability declines.
Then you have memory access times, which are not constant either. Accessing a cached memory location is faster than accessing a non-cached location, and accessing memory that is shared by multiple processors and interlocked is slower.
Then you have the page fault service time itself. There are soft and hard page faults. A page fault on a demand-zero page takes a different amount of time than one that has to be loaded from disk. Is the disk access cached or not cached? How much activity is there on the disk?
Oh, is the page table paged? If so, is the page fault on the page table or on the page itself? It could even be both.
Servicing a page fault:
The process enters the exception and interrupt handler.
The interrupt handler dispatches to the page fault handler.
The page fault handler has to find where the page is stored.
If the page is in memory (has been paged out but not written to disk), the handler just has to update the page table.
If the page is not in memory, the handler has to look up where the page is stored (this is system and type of memory specific).
The system has to allocate a physical page frame for the memory.
If this is a first reference to a demand zero page, there is no need to read from disk, just set everything to zero.
If the page is in a disk cache, get the page from that.
Otherwise read the page from disk to the page frame.
Reset the process's registers as appropriate.
Return to user mode.
Restart the instruction causing the fault.
(All of the above have gross simplifications.)
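To make the branching above concrete, here is a heavily simplified model of the decision flow in Java. Real fault handling is kernel code, so treat this purely as illustrative pseudocode; every type and method name is made up:

enum PageState { IN_MEMORY_UNMAPPED, DEMAND_ZERO, IN_DISK_CACHE, ON_DISK }

class PageFaultSketch {

    void handlePageFault(long virtualAddress, PageState state) {
        switch (state) {
            case IN_MEMORY_UNMAPPED:
                // Soft fault: the page never left RAM, so only the
                // page table entry needs fixing.
                updatePageTable(virtualAddress);
                break;
            case DEMAND_ZERO:
                long zeroFrame = allocateFrame();
                zeroFill(zeroFrame);                     // no disk I/O at all
                mapFrame(virtualAddress, zeroFrame);
                break;
            case IN_DISK_CACHE:
                long cachedFrame = allocateFrame();
                copyFromDiskCache(virtualAddress, cachedFrame);
                mapFrame(virtualAddress, cachedFrame);
                break;
            case ON_DISK:
                long diskFrame = allocateFrame();
                readFromDisk(virtualAddress, diskFrame); // the slow, hard-fault path
                mapFrame(virtualAddress, diskFrame);
                break;
        }
        // In every case, the faulting instruction is then restarted.
    }

    private void updatePageTable(long va) {}
    private long allocateFrame() { return 0L; }
    private void zeroFill(long frame) {}
    private void mapFrame(long va, long frame) {}
    private void copyFromDiskCache(long va, long frame) {}
    private void readFromDisk(long va, long frame) {}
}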
The TLB has nothing really to do with this except that the servicing time is marginally faster if the page table entry in question is in the TLB.
Hence, page fault service time = TLB time + page table time + page fetch from disk
Not at all.

How to Remove OData Bindings That Are No Longer Used?

Assume a page that shows a complex data structure (for example, an article with many details). This view will be reused from time to time by rebinding it to different articles.
Now, I noticed that the ODataModel keeps all used article entities in memory (even if they are no longer bound to any control).
This will lead to two issues:
Memory consumption increases over time (if the application is not reloaded).
If the application forces a refresh of the data model, all entities will be reloaded (even the unused ones).
The second issue seems to be the bigger problem, since it slows down the application.
I have not found a solution for this problem. If I use refresh(true, true), it seems all data will be reloaded.
Is there a way to clean the model?
Edit
Let's say you have a list of thousands of articles. The user can click on one of the articles and navigate to a detail screen for that article.
The OData model on the client side will cache this. To see it, do something like:
var oModel = this.getModel("modelName");
and look with the debugger into oModel.oData.
If the user now navigates back and chooses the next article, it will be cached as well.
If the user does this 1000 times, all 1000 articles are now in the model.
If you trigger oModel.refresh(true), all of this data (for 1000 articles) will be reloaded, not only the entity bound to the view.
Now, my application is not about showing article information; it is a more complex structure with subitems. Each time the user visits this page, more data is cached (and re-fetched whenever refresh is called on the model).
Edit 2
The function updateBindings(bForceUpdate?) seems to help a little bit.
Anyhow, the data accumulation is still there in the ODataModel class.
That means: each visited data path will stay in memory until the next full reload (F5) of the page. If someone uses such an application over a day, the data accumulates, and a refresh call on the model will read all of that data again, whether it is still bound to a view or not.
Try deleteCreatedEntry(oContext). Even though this is not the intended use case for this method, it might work to delete an entity from the model without triggering a backend request.
You could also try whether updateBindings(bForceUpdate?) only triggers an update on actually bound entities.
1) I do not really understand your problem here. What exactly is it that you do? OData always holds the result of your request plus a queue of changes to that request. If you create lots of entries while your application is running, the memory consumption will of course increase. If you want to revert to the original request, you can use resetChanges(). This way the used memory should decrease again, but you lose all your changes to the model.
2) Maybe you should look into OData filtering (http://www.odata.org/getting-started/basic-tutorial/) so that you only load the entities you really want. If you only want part of each entity loaded, then you should maybe redesign your entities to avoid a lot of overhead.
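For instance, a request along these lines (entity and property names are made up for illustration) asks the server to filter and project, so the client never caches what it does not need:

GET /ArticleService/Articles?$filter=Category eq 'Hardware'&$select=ID,Name,Price&$top=20

$filter restricts which entities come back, $select limits the properties per entity, and $top caps the result count.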
It is hard to speculate what your exact problem is.
Well, if you know exactly what you are doing, you can try something like this:
this.getModel("modelname").aBindings = []; // drops ALL bindings, including live ones
A better solution would be to go through the aBindings array and remove only the redundant bindings.

How does Scala's Lift manage state?

I'm quite impressed by what Lift 2.0 brings to the table with Actors and StatefulSnippets, etc., but I'm a little worried about the memory overhead of these things. My question is twofold:
How does Lift determine when to garbage collect state objects?
What does the memory footprint of a page request look like?
If a web crawler dances across the footprint of the site, is it going to open up enough state objects to drown a modest VPS (512 MB)? The answer is obviously application dependent, but I'm curious whether anyone has real-world figures they can throw at me.
Lift stores state information in a session, so once the session is destroyed the state associated with that session goes away.
Within the session, Lift tracks each page that state is allocated for (e.g., the mapping between an Ajax button in the browser and a function on the server) and has a heartbeat from the browser. Functions for pages that have not seen the heartbeat in 10 minutes are unreferenced so the JVM can garbage collect them. All of this is tunable, so you can change the heartbeat frequency, function lifespan, etc., but in practice the defaults work quite well.
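Lift's real implementation is Scala and differs in detail, but the mechanism is easy to model. A hypothetical Java sketch of heartbeat-based expiry of page callbacks:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PageCallbackRegistry {

    private static final long LIFESPAN_MS = 10 * 60 * 1000; // 10-minute default

    // page id -> timestamp of the last heartbeat seen from the browser
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    void heartbeat(String pageId) {
        lastSeen.put(pageId, System.currentTimeMillis());
    }

    // Run periodically: drop entries whose page stopped beating, so the
    // associated server-side functions become eligible for JVM GC.
    void reap() {
        long cutoff = System.currentTimeMillis() - LIFESPAN_MS;
        lastSeen.values().removeIf(timestamp -> timestamp < cutoff);
    }
}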
In terms of session explosion, yeah... that's a minor issue. Popular sites (including http://demo.liftweb.net/ ) experience it. The example code (see http://github.com/lift/lift/tree/master/examples/example/ ) detects sessions that were created by a single request and then abandoned, and expires those early. I'm running demo.liftweb.net with 256 MB of heap (that would fit in a 512 MB VPS); occasionally the session count rises over 1,000, but it is quickly tamped down for search-engine traffic.
I think the question about memory footprint was once answered somewhere on the mailing list, but I can’t find it at the moment.
Garbage collection is done after some idle time. There is, however, an example on the wiki which uses some better heuristics to kill off sessions spawned by web crawlers.
Of course, for your own project it makes sense to check memory consumption with something like VisualVM while spawning a couple of sessions yourself.

Memory mapped files and "soft" page faults. Unavoidable?

I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile(), and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work.
If anyone knows the OS implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while the other reads it, then the OS marks the pages that were written to as invalid. When the reading process accesses the memory areas that now belong to invalidated pages, this causes a soft page fault (by design), and the OS knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the size of the data written.
My experiments seem to bear out this theory. When I share data, I write one contiguous block; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient.
Note: if the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not intrinsically harmful, does anyone have guidelines as to what number per second counts as "excessive"?
Thanks.