How does virtual memory support the use of more memory than is physically installed in a computer? - operating-system

How an operating system's use of virtual memory enables it to appear to support more memory than is physically installed in a computer
I'm not sure how to explain this in detail, but I'm thinking that because virtual memory is based on paging, a single process can demand more memory than the amount of physical memory installed. Therefore it "appears" to use more memory than the amount of physical memory.
But I'm not sure if that explains it :(

Basically, as the name suggests, virtual memory doesn't "exist" as such; it is not directly tied to physical memory.
The virtual memory of a process is backed by the disk, which holds the full contents of the process's address space. See virtual memory for some more insights. When a process is scheduled onto the processor, some parts of its memory are brought back into main memory through swapping. The pages the process needs at that moment are kept in main memory, and the pages that are rarely used (by any process) are swapped out, because main memory can't hold everything at the same time.
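You can see this lazy backing directly on Linux: an anonymous mmap reserves virtual address space up front, but the kernel only hands out physical frames when pages are actually touched. A minimal sketch (Linux-specific; it reads VmRSS from /proc/self/status):

```python
import mmap

def vm_rss_kb():
    # Resident set size in kB, read from /proc/self/status (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

SIZE = 64 * 1024 * 1024                  # ask for 64 MiB of virtual memory
buf = mmap.mmap(-1, SIZE)                # anonymous mapping: no frames used yet
untouched = vm_rss_kb()
for off in range(0, SIZE, mmap.PAGESIZE):
    buf[off] = 1                         # touch every page...
touched = vm_rss_kb()
print(touched - untouched)               # ...and only now does RSS grow, by ~64 MiB in kB
```

The mapping counts against the process's virtual size immediately, but the resident set only grows once the pages are written.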
Hope I helped :)

Related

Detect if system is currently paging to disk

I'm running algorithms that consume a lot of memory. For that reason I'm using memory-mapped files. The problem is that memory is allocated faster than the memory manager is able to write data to disk, and ultimately the system stalls because data allocation and paging interfere with each other. So I need to throttle the data processing when the memory manager is doing extensive paging.
I have already found out how to tell whether the disk is currently writing/in use, but I haven't found a way to see if that is due to paging.
So the question is whether there is a way to find out if the disk/memory manager is paging, or if there is an even better way to do it.
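There is no portable way, but on Linux one option is to sample the cumulative swap counters (pswpin/pswpout) in /proc/vmstat: a sustained non-zero delta means the memory manager is actively paging to disk, and you can throttle when the rate crosses a threshold you pick. A rough sketch (Windows exposes comparable data through performance counters such as Memory\Pages/sec):

```python
import time

def swap_counters():
    # Cumulative pages swapped in/out since boot, from /proc/vmstat (Linux).
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

def paging_rate(interval=1.0):
    # Pages swapped per second over a short sampling window.
    before = swap_counters()
    time.sleep(interval)
    after = swap_counters()
    return sum(after[k] - before[k] for k in after) / interval

print(paging_rate(0.5))   # ~0.0 on an idle machine; throttle when this spikes
```

The sampling interval and the threshold at which to back off are policy decisions; a short window reacts faster but is noisier.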

Stored Program Computer in modern computing

I was given this exact question on a quiz.
Does the question make any sense? My understanding is that the OS schedules a process and manages what instructions it needs the processor to execute next. This is because the OS is liable to pull all sorts of memory management tricks, especially in main memory where fragmentation is a way of life. I remember that there is supposed to be a special register on the processor called the program counter. In light of the scheduler and memory management done by the OS I have trouble figuring out the purpose of this register unless it is just for the OS. Is the concept of the Stored Program Computer really relevant to how a modern computer operates?
Hardware fetches machine code from main memory, at the address in the program counter (which increments on its own as instructions execute, or is modified by executing a jump or call instruction).
Software has to load the code into RAM (main memory) and start the process with its program counter pointing into that memory.
And yes, if the OS wants to page that memory out to disk (or lazily load it in the first place), hardware will trigger a page fault when the CPU tries to fetch code from an unmapped page.
But no, the OS does not feed instructions to the CPU one at a time.
(Unless you're debugging a program by putting the CPU into "single step" mode when returning to user-space for that process, so it traps after executing one instruction. Like x86's trap flag, for example. Some ISAs only have software breakpoints, not HW support for single stepping.)
But anyway, the OS itself is made up of machine code that runs on the CPU. CPU hardware knows how to fetch and execute instructions from memory. An OS is just a fancy program that can load and manage other programs. (Remember, in a von Neumann architecture, code is data.)
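The "code is data" point can be made concrete even from a high-level language; in CPython, for example, a function's compiled body is just a bytes object, and a string of source can be compiled into new executable code. A small illustration (CPython-specific internals):

```python
def double(x):
    return 2 * x

# The function's compiled form (CPython bytecode here) is just bytes in memory.
raw = double.__code__.co_code
print(type(raw), len(raw) > 0)

# And data can become code: compile a string, then execute it.
ns = {}
exec(compile("def triple(x):\n    return 3 * x", "<generated>", "exec"), ns)
print(ns["triple"](7))   # 21
```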
Even the OS depends on the processor architecture. Memory today is usually virtualized. That means the memory location seen by the program is not the real physical location, but is translated through one or more tables describing the actual location and some attributes (e.g. whether read/write/execute is allowed) for memory accesses. If the accessed virtual memory has not been loaded into main memory (these tables say so), an exception is generated, and the address of an exception handler is loaded into the program counter. This exception handler is provided by the OS and resides in main memory. So the program counter is quite relevant in today's computers, but the next instruction can be changed on the fly by exceptions (exceptions are also used for thread or process switching in preemptive multitasking systems).
Does the question make any sense?
Yes. It makes sense to me. It is a bit imprecise, but the meanings of each of the alternatives are sufficiently distinct to be able to say that D) is the best answer.
(In theory, you could create a von Neumann computer which was able to execute instructions out of secondary storage, registers or even the internet ... but it would be highly impractical for various reasons.)
My understanding is that the OS schedules a process and manages what instructions it needs the processor to execute next. This is because the OS is liable to pull all sorts of memory management tricks, especially in main memory where fragmentation is a way of life.
Fragmentation of main memory is not actually relevant. A modern machine uses special hardware (and page tables) to deal with that. From the perspective of executing code (application or kernel) this is all hidden. The code uses virtual addresses, and the hardware maps them to physical addresses. (This is even true when dealing with page faults, though special care will be taken to ensure that the code and page table entries for the page fault handler are in RAM pages that are never swapped out.)
I remember that there is supposed to be a special register on the processor called the program counter. In light of the scheduler and memory management done by the OS I have trouble figuring out the purpose of this register unless it is just for the OS.
The PC is fundamental. It contains the virtual memory address of the next instruction that the CPU is to execute. For application code AND for OS kernel code. When you switch between the application and kernel code, the value in the PC is updated as part of the context switch.
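A toy fetch-execute loop shows why the PC is fundamental: it is the single piece of state that decides what runs next, whether the instructions belong to an application or to the kernel. A deliberately simplified sketch (the opcodes and the accumulator machine are made up for illustration):

```python
# Toy stored-program machine: pc indexes into memory, the "CPU" fetches
# whatever instruction sits there, and a jump simply overwrites pc.
def run(memory):
    pc, acc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1                  # the pc increments on its own...
        if op == "ADD":
            acc += arg
        elif op == "JMP":
            pc = arg             # ...unless a jump modifies it
        elif op == "HALT":
            return acc

program = [("ADD", 5), ("JMP", 3), ("ADD", 100), ("ADD", 2), ("HALT", 0)]
print(run(program))   # 7: the JMP skips the ADD 100
```

A context switch amounts to saving this pc (and the rest of the register state) for one program and restoring it for another.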
Is the concept of the Stored Program Computer really relevant to how a modern computer operates?
Yes. Unless you are working on a special custom machine where (say) the program has been transformed into custom silicon.

What happens if the size of a program is larger than virtual memory?

I came across this question recently in a telephonic interview:
What happens if the size of a program is larger than the size of virtual memory?
Will it not be allowed to run or how does the os go about dealing with it?
Yes, it is possible to have a program that will run even if its total size is bigger than the address space.
Programs larger than the available address space have existed for a very long time. The common approach is to split the program into chunks that fit into the address space, and then load the remaining chunks sequentially or on demand.
If you have a player that can play a file, it will play the file. I'm not sure how this is related to the OS...
Yes, you definitely can. Overlaying is the mechanism used. The CPU brings into main memory only the part of the code that is to be executed and is currently needed. The rest of the code resides in secondary storage and can be brought in when needed.
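As a purely illustrative sketch (not how any particular OS implements it), the overlay idea can be simulated: each "chunk" of a program lives on disk and is loaded into a single resident slot only when called, evicting whatever was there before. The OverlayManager class and its run entry point are hypothetical names:

```python
import marshal, os, tempfile

class OverlayManager:
    def __init__(self):
        self.paths = {}    # chunk name -> compiled code stored on disk
        self.loaded = {}   # the single "overlay area" currently in memory

    def store(self, name, source):
        # Compile a chunk and park it in secondary storage.
        code = compile(source, name, "exec")
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            f.write(marshal.dumps(code))
        self.paths[name] = path

    def call(self, name, *args):
        if name not in self.loaded:        # chunk not resident:
            self.loaded.clear()            # evict the previous overlay,
            with open(self.paths[name], "rb") as f:
                code = marshal.loads(f.read())
            ns = {}
            exec(code, ns)                 # and bring this one in
            self.loaded[name] = ns["run"]
        return self.loaded[name](*args)

mgr = OverlayManager()
mgr.store("double", "def run(x):\n    return 2 * x")
mgr.store("square", "def run(x):\n    return x * x")
print(mgr.call("double", 21))   # loads the "double" chunk: 42
print(mgr.call("square", 5))    # evicts it, loads "square": 25
```

Only one chunk is ever resident at a time, so the whole program can be far larger than the "memory" the manager keeps live.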

What do "Dirty" and "Resident" mean in relation to Virtual Memory?

I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what is the meaning of Dirty and Resident, as relates to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint - try to explain as if you were talking to an 8-year old kid or a complete imbecile.
Thanks guys.
"Dirty memory" is memory which has been changed somehow - that's memory which the garbage collector has to look at, and then decide what to do with it. Depending on how you build your data structures, you could cause the garbage collector to mark a lot of memory as dirty, having each garbage collection cycle take longer than required. Keeping this number low means your program will run faster, and will be less likely to experience noticeable garbage collection pauses. For most people, this is not really a concern.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.
WWDC 2013 - 410 Fixing Memory Issues Explains this nicely. Well worth a watch since it also explains some of the practical implications of dirty, resident and virtual memory.
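The same distinction is visible from userspace on Linux: writing into a clean anonymous mapping turns its pages dirty, which shows up in the Private_Dirty totals of /proc/self/smaps (Instruments reports the analogous Mach counters on iOS/macOS). A Linux-specific sketch:

```python
import mmap

def private_dirty_kb():
    # Sum Private_Dirty (in kB) across all mappings in /proc/self/smaps.
    total = 0
    with open("/proc/self/smaps") as f:
        for line in f:
            if line.startswith("Private_Dirty:"):
                total += int(line.split()[1])
    return total

SIZE = 8 * 1024 * 1024
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
before = private_dirty_kb()
buf[:] = b"x" * SIZE          # write every page: clean pages become dirty
after = private_dirty_kb()
print(after - before)         # grows by roughly the mapping size, in kB
```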

Virtual vs Physical memory leak

I am just having a hard time understanding the difference between a virtual memory leak and a physical memory leak, from the perspective of debugging a .NET application.
Can anyone elaborate on this concept with an example of how we can have only one type of leak and not the other?
TIA
Virtual memory comprises ranges of a process's address-space that have been marked as available for its use. When you leak memory, virtual memory is almost always involved, since that is the only concept that most programs deal with.
Physical memory is usually consumed only when a program accesses virtual memory, for which the OS must provide physical memory to match. This rarely leaks independently of virtual memory, since it is under the control of the OS.
OTOH, a program can exercise more control over the allocation of physical memory by forcing certain virtual memory pages to remain mapped to physical memory (the mechanisms for this vary between OSes). In such cases, it is possible for a buggy program to leak physical memory.
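On Linux the usual pinning mechanism is mlock(2). A sketch via ctypes that pins a single page (it assumes a POSIX libc; the call may fail under a small RLIMIT_MEMLOCK, and a buggy program that mlocks pages and never releases them leaks physical memory, not just address space):

```python
import ctypes, ctypes.util, mmap

libc = ctypes.CDLL(ctypes.util.find_library("c") or None, use_errno=True)

page = mmap.mmap(-1, mmap.PAGESIZE)                    # one anonymous page
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))

# Pin the page into physical RAM; it can no longer be swapped out.
rc = libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))
print("locked" if rc == 0 else "mlock failed (e.g. RLIMIT_MEMLOCK)")

if rc == 0:
    # Forgetting this munlock (or the unmap) is the physical-memory leak.
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(mmap.PAGESIZE))
```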
A softer form of physical memory leak is when a program keeps touching pages of virtual memory that it doesn't logically need to access. This keeps such pages hot and stymies the operating system's efforts to keep the working set (the set of physically mapped pages) small.