I learned that with virtual memory, the penalty caused by a page fault is expensive. How do we reduce page faults? I saw one argument that says a smaller page size reduces page faults. Why is this true?
To see why smaller page sizes might reduce fault rates, consider an extreme example in the other direction. Assume you have 2GB of physical memory and pages that are 1GB in size. As soon as you allocate more than 2GB of virtual memory, you will have at least 3 pages, of which only 2 will fit in memory. Roughly 1 in 3 memory accesses, or more, would then cause a page fault.
Having smaller page sizes means you have more granularity, allowing the OS to perform more targeted swapping.
Of course (isn't it always that way), there are trade-offs. For one, smaller page sizes mean more pages, which means more overhead to manage them: larger page tables and more pressure on the TLB.
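To make that concrete, here is a toy simulation (my own sketch, not taken from any real OS) that replays the same three-address access pattern against 2GB of simulated physical memory, once with 1GB pages and once with 4KB pages, using LRU replacement:

    #include <stdio.h>

    #define MAX_FRAMES 1024

    /* Count page faults for a given page size, with `frames` frames of
       simulated physical memory, using LRU replacement. */
    static long count_faults(const long long *addrs, int n,
                             long long page_size, int frames)
    {
        long long resident[MAX_FRAMES];  /* page numbers currently in a frame */
        long long stamp[MAX_FRAMES];     /* last-use time, for LRU eviction */
        int used = 0;
        long faults = 0;

        for (int t = 0; t < n; t++) {
            long long page = addrs[t] / page_size;
            int hit = -1;
            for (int i = 0; i < used; i++)
                if (resident[i] == page) { hit = i; break; }
            if (hit >= 0) {                  /* page already resident */
                stamp[hit] = t;
                continue;
            }
            faults++;
            if (used < frames) {             /* a free frame is available */
                resident[used] = page;
                stamp[used] = t;
                used++;
            } else {                         /* evict the least recently used page */
                int victim = 0;
                for (int i = 1; i < used; i++)
                    if (stamp[i] < stamp[victim]) victim = i;
                resident[victim] = page;
                stamp[victim] = t;
            }
        }
        return faults;
    }

    int main(void)
    {
        const long long GB = 1LL << 30;
        long long addrs[300];

        /* Working set: three hot locations spread across 3GB of virtual
           address space, touched round-robin. */
        for (int i = 0; i < 300; i++)
            addrs[i] = (long long)(i % 3) * GB + 64;

        /* 2GB of physical memory: with 1GB pages that is 2 frames; with 4KB
           pages there would really be ~524288 frames, but 1024 is plenty here
           because only 3 distinct small pages are ever touched. */
        printf("1GB pages: %ld faults out of 300 accesses\n",
               count_faults(addrs, 300, GB, 2));
        printf("4KB pages: %ld faults out of 300 accesses\n",
               count_faults(addrs, 300, 4096, MAX_FRAMES));
        return 0;
    }

With the huge pages, the two frames thrash and every access faults; with 4KB pages, the three hot addresses fit comfortably and only the first three accesses fault.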
One method to reduce page faults is to use a memory allocator that is smart about placing allocations that are likely to be used at the same time on the same pages.
For example, at the application level, bucket allocators (example) allow an application to request a chunk of memory that the application will then allocate from. The application can use the bucket for specific phases of program execution and then release the bucket as a unit. This helps to minimize the memory fragmentation that might otherwise cause active and inactive parts of the program to receive allocations on the same physical page.
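As a minimal sketch of that idea (a simple bump-pointer arena, not any particular library's bucket allocator), the code below hands out pieces of one contiguous chunk and releases the whole chunk when the phase of execution ends, so allocations made together live and die on the same pages:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        unsigned char *base;   /* start of the chunk */
        size_t         size;   /* total bytes in the chunk */
        size_t         used;   /* bytes handed out so far */
    } arena_t;

    static int arena_init(arena_t *a, size_t size)
    {
        a->base = malloc(size);
        a->size = size;
        a->used = 0;
        return a->base ? 0 : -1;
    }

    /* Bump-pointer allocation; returns NULL when the arena is exhausted. */
    static void *arena_alloc(arena_t *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;       /* keep 16-byte alignment */
        if (a->used + n > a->size)
            return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* Release the whole phase's memory (and its pages) as a unit. */
    static void arena_release(arena_t *a)
    {
        free(a->base);
        a->base = NULL;
        a->size = a->used = 0;
    }

    int main(void)
    {
        arena_t phase;
        if (arena_init(&phase, 1 << 20) != 0)     /* 1MB for one phase */
            return 1;

        double *samples = arena_alloc(&phase, 1000 * sizeof *samples);
        char   *labels  = arena_alloc(&phase, 4096);
        if (samples && labels)
            printf("phase allocations share the same small set of pages\n");

        arena_release(&phase);                    /* phase over: give it all back */
        return 0;
    }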
I am reading about operating systems and have a doubt regarding page fault service time.
Average memory access time = (prob. of no page fault) * (memory access time) + (prob. of page fault) * (page fault service time)
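For concreteness, here is a tiny back-of-the-envelope use of that formula with made-up numbers (100 ns memory access time, 8 ms page fault service time), just to show what it is supposed to compute:

    #include <stdio.h>

    int main(void)
    {
        const double mem_ns   = 100.0;   /* assumed memory access time */
        const double fault_ns = 8e6;     /* assumed 8 ms fault service time */
        const double probs[]  = { 0.0, 1e-6, 1e-4, 1e-3 };

        for (int i = 0; i < 4; i++) {
            double p   = probs[i];
            double eat = (1.0 - p) * mem_ns + p * fault_ns;
            printf("p(fault) = %g  ->  average access time = %.1f ns\n", p, eat);
        }
        return 0;
    }

Even a fault probability of 1 in 10,000 pushes the average access time from 100 ns to roughly 900 ns, which is presumably the point of the formula.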
My doubt is: what does the page fault service time include?
As I understand it:
First, address translation is attempted in the TLB or the page table. When the entry is not found in the page table, a page fault has occurred, so the page has to be fetched from disk, and the entries are then updated in both the TLB and the page table.
Hence, page fault service time = TLB time + page table time + page fetch from disk.
Can someone please confirm this?
What you are describing is academic Bulls____. There are so many factors involved that a simple equation like that cannot describe the real access time. Nonetheless, there are certain idiotic operating systems books that put out stuff like that to sound intellectual (and professors like it for exam questions).
What these idiots are trying to say is that a page reference will be in memory or not in memory with the two probabilities adding up to 1.0. This is entirely meaningless because the relative probabilities are dynamic. If other processes start using memory, the likelihood of a page fault increases and if other processes stop using memory, the probability declines.
Then you have memory access time. That is not constant either. Accessing a cached memory location is faster than a non-cached location, and accessing memory that is shared by multiple processors and interlocked is slower still.
Then you have the page fault service time. There are soft and hard page faults. A page fault on a demand-zero page takes a different amount of time than one that has to be loaded from disk. Is the disk access cached or not cached? How much activity is there on the disk?
Oh, is the page table paged? If so, is the page fault on the page table or on the page itself? It could even be both.
Servicing a page fault:
The process enters the exception and interrupt handler.
The interrupt handler dispatches to the page fault handler.
The page fault handler has to find where the page is stored.
If the page is in memory (has been paged out but not written to disk), the handler just has to update the page table.
If the page is not in memory, the handler has to look up where the page is stored (this is system and type of memory specific).
The system has to allocate a physical page frame for the memory.
If this is a first reference to a demand zero page, there is no need to read from disk, just set everything to zero.
If the page is in a disk cache, get the page from that.
Otherwise read the page from disk to the page frame.
Reset the process's registers as appropriate.
Return to user mode.
Restart the instruction causing the fault.
(All of the above have gross simplifications.)
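As a rough illustration only, here is a user-space toy (not a real kernel handler) with a 16-byte "page size" and plain arrays standing in for page frames, the backing store, and the page table, showing the demand-zero versus read-from-disk cases above:

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE  16
    #define NUM_PAGES   4
    #define NUM_FRAMES  2

    enum state { DEMAND_ZERO, ON_DISK, PRESENT };

    struct pte {                 /* toy page-table entry */
        enum state state;
        int frame;               /* valid only when PRESENT */
    };

    static struct pte page_table[NUM_PAGES];
    static char frames[NUM_FRAMES][PAGE_SIZE];     /* "physical memory" */
    static char disk[NUM_PAGES][PAGE_SIZE];        /* "backing store" */
    static int  frame_owner[NUM_FRAMES] = { -1, -1 };
    static int  next_frame = 0;
    static long faults = 0;

    /* Service a fault on `page`: pick a frame, evict its old owner if any,
       fill the frame (zero-fill or "disk" read), and update the page table. */
    static void handle_fault(int page)
    {
        faults++;
        int frame = next_frame++ % NUM_FRAMES;     /* trivial round-robin eviction */
        int old = frame_owner[frame];
        if (old >= 0) {                            /* write the victim back */
            memcpy(disk[old], frames[frame], PAGE_SIZE);
            page_table[old].state = ON_DISK;
        }
        if (page_table[page].state == DEMAND_ZERO)
            memset(frames[frame], 0, PAGE_SIZE);   /* first touch: no I/O needed */
        else
            memcpy(frames[frame], disk[page], PAGE_SIZE);  /* "read from disk" */
        page_table[page].state = PRESENT;
        page_table[page].frame = frame;
        frame_owner[frame] = page;
    }

    /* Translate and read one byte, faulting first if the page is not resident. */
    static char access_byte(int page, int offset)
    {
        if (page_table[page].state != PRESENT)
            handle_fault(page);
        return frames[page_table[page].frame][offset];
    }

    int main(void)
    {
        strcpy(disk[1], "hello from disk");        /* 15 chars + NUL = 16 bytes */
        page_table[1].state = ON_DISK;             /* the rest stay demand-zero */

        printf("page 0, byte 0 = %d (demand-zero page)\n", access_byte(0, 0));
        printf("page 1, byte 0 = '%c' (loaded from \"disk\")\n", access_byte(1, 0));
        printf("simulated faults: %ld\n", faults);
        return 0;
    }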
The TLB has nothing really to do with this except that the servicing time is marginally faster if the page table entry in question is in the TLB.
Hence, page fault service time = TLB time + page table time + page fetch from disk.
Not at all.
I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what is the meaning of Dirty and Resident as they relate to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint - try to explain as if you were talking to an 8-year-old kid or a complete imbecile.
Thanks guys.
"Dirty memory" is memory which has been changed somehow - that's memory which the garbage collector has to look at, and then decide what to do with it. Depending on how you build your data structures, you could cause the garbage collector to mark a lot of memory as dirty, having each garbage collection cycle take longer than required. Keeping this number low means your program will run faster, and will be less likely to experience noticeable garbage collection pauses. For most people, this is not really a concern.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.
WWDC 2013 Session 410, "Fixing Memory Issues", explains this nicely. It is well worth a watch, since it also explains some of the practical implications of dirty, resident, and virtual memory.
When running the Allocations instrument, at any given moment my iPad app has less than 5MB of memory allocated. I've been very thorough and made sure everything is being released correctly. My app is a tab bar app that loads a lot of images, videos, and PDFs. I've made sure to handle this appropriately and empty caches, etc., to free up memory.
However, when I run the Activity Monitor instrument with my app running on my iPad, the memory usage of my app gradually increases, eventually reaching over 100MB, and the app crashes.
I'm not really sure what to do and there isn't a specific block of code that is causing issues. The entire app is a memory hog and I've never had this issue before.
Besides allocations, what else would cause my app to consume this much memory? Is there another tool I can use to trace what is using up memory?
Edit: As someone mentioned, I've used Build and Analyze to make sure all issues have been cleaned up.
Lots of times CGImages and other large media blobs do not show up in Allocations - they might show up as some small, innocent-looking object, but they point to some large object like an image that is allocated using 'weird' techniques (memory-mapped files, video card memory, etc.).
The Activity Monitor instrument, on the other hand, looks at memory in terms of the 4K pages loaded for your app, and thus includes these media blobs.
I don't know what your caching scheme is. Here is a scenario:
You need to load 50 100KB JPEGs, and the user will only ever see a maximum of 3 at once. 50 100KB images is 5MB of memory, so you can load all of the compressed JPEG data from the internet. But if you then create 50 CGImages from that data, each one will consume (assuming the JPEGs are 1000x1000 pixels at 4 bytes per pixel) about 4MB of memory, so keeping them all decoded in memory would take 200MB, which won't work. Instead you need to keep the 100KB compressed NSData blobs around and only create the 1 to 3 CGImages you actually need at any given moment. It's an art to keep things smooth and balanced.
In other words: in Allocations, look at the number of CGImageRefs, etc., that you have alive at any one time, and lower that number.
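To make that concrete, here is a rough command-line sketch of the pattern in plain C using ImageIO/Core Graphics on Apple platforms; the file-reading scaffolding is just for the demo, and the point is that the compressed CFData stays around while the decoded CGImage is created only when needed and released as soon as it is not:

    #include <stdio.h>
    #include <stdlib.h>
    #include <CoreGraphics/CoreGraphics.h>
    #include <ImageIO/ImageIO.h>

    /* Decode one compressed blob into a (large) bitmap only when it is needed. */
    static CGImageRef decode_image(CFDataRef compressed)
    {
        CGImageSourceRef src = CGImageSourceCreateWithData(compressed, NULL);
        if (src == NULL)
            return NULL;
        CGImageRef image = CGImageSourceCreateImageAtIndex(src, 0, NULL);
        CFRelease(src);
        return image;   /* caller must CGImageRelease() once it is off screen */
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s image.jpg\n", argv[0]);
            return 1;
        }

        /* The small, compressed blob: this is what you keep around for all
           50 images (a few MB in total). */
        FILE *f = fopen(argv[1], "rb");
        if (f == NULL)
            return 1;
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        fseek(f, 0, SEEK_SET);
        UInt8 *buf = malloc((size_t)len);
        fread(buf, 1, (size_t)len, f);
        fclose(f);
        CFDataRef blob = CFDataCreate(kCFAllocatorDefault, buf, len);
        free(buf);

        /* The big, decoded bitmap: create it only while the image is visible,
           and release it as soon as it scrolls away. */
        CGImageRef image = decode_image(blob);
        if (image != NULL) {
            printf("decoded %zu x %zu pixels (~%zu bytes as a bitmap)\n",
                   CGImageGetWidth(image), CGImageGetHeight(image),
                   CGImageGetWidth(image) * CGImageGetHeight(image) * 4);
            CGImageRelease(image);   /* this is the ~4MB you get back per image */
        }
        CFRelease(blob);
        return 0;
    }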
I am having a hard time understanding the difference between a virtual memory leak and a physical memory leak from the perspective of debugging a .NET application.
Can anyone elaborate on this concept, with an example of how we can have only one type of leak and not the other?
TIA
Virtual memory comprises ranges of a process's address-space that have been marked as available for its use. When you leak memory, virtual memory is almost always involved, since that is the only concept that most programs deal with.
Physical memory is usually consumed only when a program accesses virtual memory, for which the OS must provide physical memory to match. This rarely leaks independently of virtual memory, since it is under the control of the OS.
OTOH, a program can exercise more control over the allocation of physical memory by forcing certain virtual memory pages to remain mapped to physical memory (the mechanisms for this vary between OSes). In such cases, it is possible for a buggy program to leak physical memory.
A softer form of physical memory leak is when a program keeps touching pages of virtual memory that it doesn't logically need to access. This keeps such pages hot and stymies the operating system's efforts to keep the working set (the set of physically mapped pages) small.
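A small POSIX sketch (Linux/macOS) of that distinction; mlock() is one of the OS-specific pinning mechanisms mentioned above, and the sizes here are arbitrary:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 256UL * 1024 * 1024;      /* 256MB of virtual address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Virtual memory is "used" now, but almost no physical memory is,
           until pages are actually touched. Touch just the first 4MB: */
        memset(p, 1, 4UL * 1024 * 1024);

        /* Pinning those pages keeps them physically resident; leaking a
           pinned region leaks physical memory, not just address space. */
        if (mlock(p, 4UL * 1024 * 1024) != 0)
            perror("mlock (may need privileges / a higher RLIMIT_MEMLOCK)");

        getchar();       /* pause here: compare resident vs. virtual size in top */
        munlock(p, 4UL * 1024 * 1024);
        munmap(p, len);
        return 0;
    }

While the program is paused, tools like top show a large virtual size but a resident set of only a few MB; leaking the pinned region would leak physical memory rather than just address space.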
I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work.
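For reference, a minimal sketch of the kind of two-process setup described here (simplified, with made-up names, not the actual code): one process creates a named, pagefile-backed section and overwrites the whole block, the other opens the same name and reads it.

    #include <stdio.h>
    #include <string.h>
    #include <windows.h>

    #define SHARED_SIZE (64 * 1024)

    int main(int argc, char **argv)
    {
        BOOL writer = (argc > 1);    /* run with any argument to act as the writer */
        HANDLE h;

        if (writer)
            h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                   0, SHARED_SIZE, "Local\\DemoSharedBlock");
        else
            h = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE,
                                 "Local\\DemoSharedBlock");
        if (h == NULL) {
            printf("mapping failed: %lu\n", GetLastError());
            return 1;
        }

        char *view = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, SHARED_SIZE);
        if (view == NULL) {
            CloseHandle(h);
            return 1;
        }

        if (writer) {
            /* Writing the whole block dirties every page of the view. */
            memset(view, 0xAB, SHARED_SIZE);
        } else {
            /* The first read of each page in this process is a soft fault:
               the data is already in RAM, only this process's page table
               needs to be fixed up. */
            printf("first byte: 0x%02X\n", (unsigned char)view[0]);
        }

        UnmapViewOfFile(view);
        CloseHandle(h);
        return 0;
    }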
If anyone out there knows the O/S implementation details behind memory mapped files, I would appreciate comments on the following theory: if two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the size of the data write.
My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory mapped file instead of a TCP socket connection because I thought it would be more efficient.
Note: if the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not intrinsically harmful, I'd like to hear any guidelines as to what number per second counts as "excessive".
Thanks.