Page frame and dirty page

I am reading operating system concepts and I read the following lines:
When evicting a page from a page frame, the page must be synchronized to disk if the page was "dirty".
What does this mean? Can anyone help me with this?
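In short, a "dirty" page is one that has been modified in memory since it was brought in, so the copy on disk is stale; a clean page can simply be discarded, but a dirty one must be written back before its frame is reused. A minimal sketch of that eviction check in C (all names here are hypothetical):

```c
/* Sketch only: the types and helpers are hypothetical. */
typedef struct {
    int frame;    /* physical frame number        */
    int present;  /* page is resident in memory   */
    int dirty;    /* set by hardware on any write */
} pte_t;

void write_page_to_disk(int frame);  /* hypothetical I/O helper */

void evict_page(pte_t *pte)
{
    if (pte->dirty) {
        /* The in-memory copy was modified, so the on-disk copy is
         * stale: synchronize (write back) before reusing the frame. */
        write_page_to_disk(pte->frame);
        pte->dirty = 0;
    }
    /* A clean page needs no write-back; just unmap it. */
    pte->present = 0;
}
```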

Related

How to use Network's Waterfall in Chrome Dev Tool to diagnose web rendering performance issue?

One of our web pages has a rendering performance issue: when the page is opened, the spinner is frozen or very laggy, and the page only finishes loading after 6-12 seconds. So I'm using the Network waterfall in Chrome DevTools to diagnose the issue, but I ran into a few scenarios where I don't understand what happened.
In the following screenshots, all the resources for the corresponding page load in a very short time, but the spinner is frozen for 6 or 9 seconds. I'm not sure what is happening after the resources are loaded and before the page completes loading; maybe the spinner is on the wrong thread or gets blocked somehow? What means should I use to find out the cause?
Scenario 1
Scenario 2
UPDATE
Network Screenshot
Timeline Screenshot
UPDATE
After checking the Event Log, I think the issue happens in the Angular digest cycle; that endpoint's response time should still be 780 ms.
Thanks for the detailed info. It'd be more helpful if you could link to the page, but I understand that's often not possible. I'll just provide some general guidance for people in the same boat; I don't know if I'll be able to completely answer this specific question, though.
In the Scenario 1 and Scenario 2 screenshots you can see that your resources load in 1 or 2 seconds. That's your cue that, while this is a page load issue, it has nothing to do with the network.
In the Timeline Screenshot you can see that your CPU usage is completely maxed out from about 1,900 ms to beyond 16,000 ms. So your page is forcing the browser to do a tremendous amount of work, probably in JavaScript.
To diagnose this, I'd investigate the flame chart (under Main), which you can see in the Timeline Screenshot. The longer the bar, the longer that function is taking to complete. Alternatively, if you see a small function being called thousands of times, that could be the cause. If you can optimize those calls, you can get your page visually loaded faster. You can also click the Self Time header in the UPDATE screenshot to rank the function calls by which took the most time.
Again, I don't know how helpful this answer is for this particular question, but I thought I'd try to rephrase the problem in a different, more general way.

What is Page fault service time?

I am reading about operating systems and have a doubt regarding page fault service time.
Average memory access time = (probability of no page fault) × (memory access time) + (probability of page fault) × (page fault service time)
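For example, with illustrative numbers (assumed here, not taken from any particular book): a 200 ns memory access time, an 8 ms page fault service time, and a page fault probability of 0.001 give

Average memory access time = 0.999 × 200 ns + 0.001 × 8,000,000 ns
                           = 199.8 ns + 8,000 ns
                           ≈ 8.2 µs

so even a rare page fault dominates the average.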
My doubt is: what does the page fault service time include?
According to me:
First, the address translation is looked up in the TLB or the page table. When the entry is not found in the page table, it means a page fault occurred, so the page has to be fetched from disk, and then the entries get updated in the TLB as well as in the page table.
Hence, page fault service time = TLB lookup time + page table lookup time + time to fetch the page from disk
Can someone please confirm this?
What you are describing is academic Bulls____. There are so many factors involved that a simple equation like that cannot describe the access time. Nonetheless, there are certain idiotic operating systems books that put out stuff like that to sound intellectual (and professors like it for exam questions).
What these idiots are trying to say is that a page reference will be in memory or not in memory with the two probabilities adding up to 1.0. This is entirely meaningless because the relative probabilities are dynamic. If other processes start using memory, the likelihood of a page fault increases and if other processes stop using memory, the probability declines.
Then you have memory access times. That is not constant either. Accessing a cached memory location is faster than a non-cached location. Accessing memory that is shared by multiple processors and interlocked is slower. This is not a constant either.
Then you have page fault service time. There are soft and hard page faults. A page fault on a demand-zero page takes a different amount of time than one that has to be loaded from disk. Is the disk access cached or not cached? How much activity is there on the disk?
Oh, is the page table paged? If so, is the page fault on the page table or on the page itself? It could even be both.
Servicing a page fault (a rough C sketch follows this list):
The process enters the exception and interrupt handler.
The interrupt handler dispatches to the page fault handler.
The page fault handler has to find where the page is stored.
If the page is in memory (has been paged out but not written to disk), the handler just has to update the page table.
If the page is not in memory, the handler has to look up where the page is stored (this is specific to the system and the type of memory).
The system has to allocate a physical page frame for the memory.
If this is a first reference to a demand zero page, there is no need to read from disk, just set everything to zero.
If the page is in a disk cache, get the page from that.
Otherwise read the page from disk to the page frame.
Reset the process's registers as appropriate.
Return to user mode.
Restart the instruction causing the fault.
(All of the above have gross simplifications.)
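Roughly, in C. This is a sketch only: every type and helper below is a hypothetical stand-in for OS-specific machinery, and real handlers are architecture-specific.

```c
/* Sketch of the steps above; all names are invented. */
typedef struct {
    int frame, present, resident, demand_zero, first_touch;
} pte_t;
struct process;

pte_t *lookup_pte(struct process *proc, const void *addr);
int    allocate_frame(void);                      /* may itself evict a page */
void   zero_fill(int frame);
int    in_disk_cache(const pte_t *pte);
void   copy_from_disk_cache(const pte_t *pte, int frame);
void   read_page_from_disk(const pte_t *pte, int frame);

void handle_page_fault(struct process *proc, const void *fault_addr)
{
    pte_t *pte = lookup_pte(proc, fault_addr);

    if (pte->resident) {
        /* Soft fault: the page never left memory (e.g. it was paged
         * out but its frame not yet reused); just fix the page table. */
        pte->present = 1;
        return;
    }

    int frame = allocate_frame();

    if (pte->demand_zero && pte->first_touch)
        zero_fill(frame);                  /* no disk read needed      */
    else if (in_disk_cache(pte))
        copy_from_disk_cache(pte, frame);  /* cheaper than real I/O    */
    else
        read_page_from_disk(pte, frame);   /* hard fault: wait for I/O */

    pte->frame   = frame;
    pte->present = 1;
    /* On return to user mode, the faulting instruction is restarted. */
}
```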
The TLB has nothing really to do with this except that the servicing time is marginally faster if the page table entry in question is in the TLB.
Hence, page fault service time = TLB lookup time + page table lookup time + time to fetch the page from disk
Not at all.

How are page faults handled with Page Size Extension?

I was trying to understand the concept of Page Size Extension, used in x86 processors, but was not able to relate it to the page fault mechanism. From my understanding, when a page fault occurs, the virtual address is written to a register and an error code is pushed onto the stack. But if we are using Page Size Extension, how does the page fault handler come to know what page size needs to be allocated? Can anyone help me with this?
There is a bit in the page directory entry. Intel calls this the PS (Page Size) bit. If the bit is set, it is a large page; if clear, a small page.
While Intel allows both page sizes to be in use simultaneously, I would wager that few OS implementations would support mixed page sizes.
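As a sketch in C: the PS bit really is bit 7 of a 32-bit page directory entry (and the faulting linear address really arrives in the CR2 register), but the function below and its names are invented for illustration.

```c
#include <stdint.h>

#define PDE_PRESENT (1u << 0)
#define PDE_PS      (1u << 7)   /* "page size" bit: set => 4 MiB page */

/* Hypothetical handler fragment for 32-bit x86 paging with PSE.
 * Returns the page size in bytes if the faulting address is already
 * mapped, or 0 if the PDE is not present (then the OS must consult
 * its own bookkeeping to choose a size). */
uint32_t mapped_page_size(uint32_t fault_addr, const uint32_t *page_dir)
{
    /* The top 10 bits of the linear address index the page directory. */
    uint32_t pde = page_dir[fault_addr >> 22];

    if (!(pde & PDE_PRESENT))
        return 0;
    return (pde & PDE_PS) ? (4u << 20)    /* 4 MiB large page */
                          : (4u << 10);   /* 4 KiB small page */
}
```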

Apache Wicket: for how long does a component exist in memory to remember its previous state?

I have a question: for how long does a Wicket component exist in memory to remember its previous state? Is there any time limit for that, for example a session timeout of about 20 minutes? And when there are lots of users, say 1 million users accessing the server, will Wicket stay stable or run into out-of-memory errors? Please explain the internal handling of requests in Wicket if possible.
Components in Wicket exist only as part of the component tree of a page. A stateful page will be kept around for the duration of the session, so its components will exist just as long.
However: By default, only the most recently rendered page will actually be in the session itself. Older pages are asynchronously serialized and stored on disk. These old pages are rarely needed and will simply be loaded again when requested. This way Wicket can respond quickly and at the same time keep a low memory footprint.

Memory mapped files and "soft" page faults. Unavoidable?

I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work.
If anyone knows the OS implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while another reads it, then the OS marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design), and the OS knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the size of the data written.
My experiments seem to bear out this theory. When I share data, I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient.
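For reference, my setup looks roughly like this sketch in C (simplified: error handling omitted and the mapping name invented):

```c
#include <windows.h>

#define SHARED_SIZE (64 * 1024)  /* illustrative size */

/* Writer side: create a named, pagefile-backed mapping and map a
 * view of it. The reader opens the same name (e.g. with
 * OpenFileMapping) and maps its own view. */
int main(void)
{
    HANDLE hMap = CreateFileMappingW(
        INVALID_HANDLE_VALUE,     /* pagefile-backed, no real file    */
        NULL,                     /* default security                 */
        PAGE_READWRITE,
        0, SHARED_SIZE,           /* high/low parts of the size       */
        L"Local\\MySharedBlock"); /* made-up name both sides agree on */

    char *view = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                       0, 0, SHARED_SIZE);

    /* Writing dirties the shared pages; when the other process first
     * touches them, the access is resolved with a soft fault (a
     * page-table fix-up, no disk I/O). */
    view[0] = 42;

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}
```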
Note: if the soft page faults are harmless, please say so. I've heard that at some point, if their number is excessive, the system's performance can suffer. If soft page faults are not intrinsically harmful, I'd like to hear any guidelines as to what number per second counts as "excessive".
Thanks.