I was reading about paging and swap space and I'm a little confused about how much space on the hard disk (and where) is used to page out / swap out frames. Let's think of the following scenario:
We have a single process which progressively touches new pages in virtual memory. Each time it touches a new page, we allocate a frame in physical memory.
But after a while, the frames in physical memory get exhausted and we have to choose a victim frame to be evicted from RAM.
I have the following doubts:
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
From what I've seen, swap space is usually around 1-2x the size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
It gets swapped out to the swap space; that is exactly what swap space is for. A system without swap space cannot use this feature of virtual memory. Virtual memory still has other benefits, though, like avoiding external fragmentation and enforcing memory protection.
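If you're on Linux, here is a minimal sketch (assuming the Linux-specific sysinfo() call is available) of how to query how much swap space exists and how much is still free, i.e. the space that evicted frames can be written to:

```c
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    /* totalswap/freeswap are reported in units of mem_unit bytes */
    printf("total swap: %llu MiB\n",
           (unsigned long long)si.totalswap * si.mem_unit / (1024 * 1024));
    printf("free  swap: %llu MiB\n",
           (unsigned long long)si.freeswap  * si.mem_unit / (1024 * 1024));
    return 0;
}
```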
From what I've seen, swap space is usually around 1-2x the size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
The total memory available to a process will be RAM + swap space. Imagine a computer with 1 GB of RAM + 1 GB of swap space and a process which requires 3 GB. The process's virtual memory needs exceed what is available. This will not work, because eventually the process will access all of that code/data. Basically, the process image is bigger than RAM + swap space, so at some point the computer simply does not have enough space to hold the process, and the kernel crashes it.
There are really only two options here: a page of the process is either stored in RAM directly or stored in the swap space. If there is no room in either of these for your process, the kernel has nowhere else to go, so it crashes the process.
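To see that limit in action, here is a rough, illustrative sketch in C (behaviour differs between systems; on Linux with the default overcommit policy the failure often shows up as the OOM killer terminating the process rather than malloc() returning NULL):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (64UL * 1024 * 1024)   /* allocate 64 MiB per step */

int main(void)
{
    size_t total = 0;

    for (;;) {
        char *p = malloc(CHUNK);
        if (p == NULL) {
            printf("malloc failed after %zu MiB\n", total >> 20);
            break;
        }
        /* touch every page so it really needs a frame or a swap slot */
        memset(p, 0xA5, CHUNK);
        total += CHUNK;
        printf("committed %zu MiB\n", total >> 20);
    }
    return 0;
}
```

Roughly, the loop keeps running until the touched pages no longer fit in RAM + swap, at which point either malloc() fails or the process is killed.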
Why is RAM size always smaller than secondary storage (HDD/SSD)? If you look at any device, you will run into the same question.
Why is RAM size always smaller than secondary storage (HDD/SSD)? If you look at any device, you will run into the same question.
The primary reason is price. For example (depending a lot on the type, etc.), RAM is currently around $4 per GiB and "rotating disk" HDD is around $0.04 per GiB, so RAM costs about 100 times as much per GiB.
Another reason is that HDD/SSD is persistent (the data remains when you turn the power off), and the amount of data you want to keep when power is turned off is typically much larger than the amount of data you don't want to keep. A special case of this is when you put a computer into a "hibernate" state, where the OS saves everything that is in RAM to disk and turns the power off, and then when power is turned on again it loads everything back into RAM so that it looks the same; for this, the amount of persistent storage needs to be larger than the amount of RAM.
Another (much smaller) reason is speed. It's not enough to be able to store data, you have to be able to access it too, and the speed of accessing data gets worse as the amount of storage increases. This holds true for all kinds of storage for different reasons (and is why you also have L1, L2, L3 caches ranging from "very small and very fast" to "larger and slower"). For RAM it's caused by the number of address lines and the size of "row select" circuitry. For HDD it's caused by seek times. For humans getting the milk out of a refrigerator it's "search time + movement speed" (faster to get the milk out of a tiny bar fridge than to walk around inside a large industrial walk-in refrigerator).
However, there are special cases (there are always special cases). For example, you might have a computer that boots from the network and then uses the network for persistent storage, so there is literally no secondary storage in the computer at all. Another special case is small embedded systems, where RAM is often larger than persistent storage.
I am trying to understand both paradigms of memory management; however, I fail to see the big picture and the difference between them. Paging consists of bringing fixed-size pages from secondary into primary storage in order to do some task requested by a process. Segmentation consists of assigning each unit in a process its own address space, so that the units are allowed to grow. I don't quite see how they are related, and that's because there are still a lot of holes in my understanding. Can someone fill them up?
I think you have something confused. One problem is that the term "segment" has multiple meanings.
Segmentation is a method of memory management. Memory is managed in segments that are of variable or fixed length, depending upon the processor. Segments originated on 16-bit processors as a means to access more than 64K of memory.
On the PDP-11, programmers used segments to map different memory into the 64K address space. At any given time a process could only access 64K of memory but the memory that made up that 64K could change.
The 8086 and its successors used segments with base registers. Each segment could address 64K (a limit that grew with later processors), but a process could have 4 segments at a time (more in later processors).
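As a small illustration, here is a sketch of how real-mode 8086 segmentation forms an address: the 16-bit segment value is shifted left by 4 bits and added to the 16-bit offset, giving a 20-bit linear address, so each segment is a 64 KiB window into a 1 MiB space. The example values are just for demonstration.

```c
#include <stdio.h>
#include <stdint.h>

/* Real-mode 8086 address formation: linear = segment * 16 + offset */
static uint32_t linear_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* 0xB800:0x0000 is the classic text-mode video memory address. */
    printf("0xB800:0x0000 -> 0x%05X\n", (unsigned)linear_address(0xB800, 0x0000));
    /* Different segment:offset pairs can alias the same linear address. */
    printf("0x1234:0x0010 -> 0x%05X\n", (unsigned)linear_address(0x1234, 0x0010));
    printf("0x1235:0x0000 -> 0x%05X\n", (unsigned)linear_address(0x1235, 0x0000));
    return 0;
}
```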
Paging allows a process to have a larger address space than there is physical memory available.
The 8086's successors used the kludge of paging on top of segments. However, that bit of ugliness has finally gone away in 64-bit mode.
You got your answer right there: paging deals with fixed-size pages in storage, while segmentation deals with units in a page. 'Segments' are objects in the class 'Page'.
Why is there a portion of virtual memory reserved for OS? Why is it limited to a certain size? This seems to be a universally known fact because when I googled I didn't find anyone asking similar questions.
If the OS segment (the part of VM reserved for the OS) is accessed, what happens?
How does the OS segment affect the translation between virtual and physical memory?
For example, if your virtual memory is 128KB, the first 32KB is allocated for seg 0 and the last 32KB for seg 1. Then you reserve the first 16KB for the OS seg. What happens to seg 0? Does its size shrink to 16KB because 16KB has been changed to the OS seg? Or does it stay the same?
Why is there a portion of virtual memory reserved for OS? Why is it limited to a certain size? This seems to be a universally known fact because when I googled I didn't find anyone asking similar questions.
The reason some area of the logical address space is reserved for the OS is that the same OS (the same physical memory) is shared by all processes, so it needs to be at the same location in every process's address space.
When an interrupt occurs, any process can be running, so the kernel-mode handler needs to be at the same location no matter which process is current.
Usually the reserved OS area is so large that the actual OS will never come close to using it all. So it is not really limited in size.
If the OS segment is accessed, what happens?
That depends upon how it is accessed. If a process accesses it in kernel mode (system call, interrupt, exception), that is normal. If it accesses the reserved area in user mode, that usually triggers an access violation of some kind. Some systems may make certain areas of system memory readable from user mode, but it is usually all write-protected.
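For instance, here is a minimal sketch assuming a 32-bit Linux-style layout where kernel space starts at 0xC0000000 (that boundary is an assumption and varies by system): touching such an address from user mode is expected to produce an access violation (SIGSEGV) rather than a successful read.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed kernel-space boundary on a 32-bit 3G/1G split. */
    volatile unsigned char *kernel_addr = (unsigned char *)0xC0000000u;

    printf("about to touch %p from user mode...\n", (void *)kernel_addr);
    unsigned char c = *kernel_addr;   /* expected: access violation, process gets SIGSEGV */
    printf("never reached: %u\n", c);
    return 0;
}
```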
How does the OS segment affect the translation between virtual and physical memory?
This is system dependent. Some systems make the user page tables pageable. The user page tables can then live in pageable areas of the system address space. In other words, the page tables are themselves in virtual/logical memory, giving an additional translation step for user addresses that does not occur for system addresses.
Doing the same for the system address space would cause a chicken-and-egg problem. In such a system, the system page tables are at physical locations (another reason everyone uses the same address range for system space).
Other systems use physical addresses for all page tables. In that case, the translation is the same for user and system addresses.
For example, if your virtual memory is 128KB, the first 32KB is allocated for seg 0 and the last 32KB for seg 1. Then you reserve the first 16KB for the OS seg. What happens to seg 0? Does its size shrink to 16KB because 16KB has been changed to the OS seg? Or does it stay the same?
This is not a good example; virtual memory is never this small. Imagine a 32-bit system instead. The virtual address space is 4 GB. The system assigns the first 3 GB to the user space and the last 1 GB to the system space.
All processes share the same 1 GB system space. Each has its own, unique 3 GB user space.
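To make the split visible, here is a small sketch that prints where a process's code, data, heap, and stack live. On a system with the 3 GB / 1 GB layout described above (an assumption; the split is OS- and configuration-dependent), all of these addresses fall below the kernel boundary.

```c
#include <stdio.h>
#include <stdlib.h>

int global_var;                                  /* static data */

int main(void)
{
    int stack_var;                               /* stack */
    int *heap_var = malloc(sizeof *heap_var);    /* heap */

    printf("code  : %p\n", (void *)main);
    printf("data  : %p\n", (void *)&global_var);
    printf("heap  : %p\n", (void *)heap_var);
    printf("stack : %p\n", (void *)&stack_var);

    free(heap_var);
    return 0;
}
```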
Beginner's question:
If paging is disabled and only segmentation is enabled (CR0.PE is set), does that mean that when a program is loaded into memory (RAM), its whole binary image is loaded and no part of it is swapped out, because a program is broken into fixed-size chunks (which can then be swapped out) only when paging is enabled? And if that's true, will this reduce the number of processes that can run in a RAM of a particular size, say 2 GB?
Likely, but not necessarily.
It depends on the operating system...
You could write an operating system that uses a segment to map part of the program into memory. When the program accesses memory outside the segment, you get a segmentation fault. The segmentation fault is then passed to the operating system, which could swap in some data from disk and modify the segmentation information before returning control to the program.
However, this is probably difficult and expensive to do, and I do not know of any operating system that acts this way.
As to the number of processes - you need to split the available memory into contiguous parts, one for each process. This is easy if processes do not grow; if they do, you need padding and may need to copy processes around, which is rather expensive...
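To illustrate the contiguous-allocation constraint, here is a minimal sketch (the data structures and sizes are made up for the example) of first-fit allocation over a fixed pool of "physical" memory: once the remaining holes are too small or too fragmented, a new process simply cannot be placed, and growing an existing one would mean finding a larger hole and copying it over.

```c
#include <stdio.h>
#include <stddef.h>

#define MEM_SIZE    (2UL * 1024 * 1024 * 1024)   /* pretend 2 GB of physical memory */
#define MAX_REGIONS 64

struct region { size_t base, size; int used; };

static struct region regions[MAX_REGIONS] = {
    { 0, MEM_SIZE, 0 }                           /* initially one big free hole */
};
static int nregions = 1;

/* First-fit: return the base of a free hole big enough, or (size_t)-1. */
static size_t alloc_contiguous(size_t size)
{
    for (int i = 0; i < nregions; i++) {
        if (!regions[i].used && regions[i].size >= size && nregions < MAX_REGIONS) {
            size_t base = regions[i].base;
            /* split the hole: the front becomes used, the rest stays free */
            for (int j = nregions; j > i + 1; j--)
                regions[j] = regions[j - 1];
            regions[i + 1].base = base + size;
            regions[i + 1].size = regions[i].size - size;
            regions[i + 1].used = 0;
            regions[i].size = size;
            regions[i].used = 1;
            nregions++;
            return base;
        }
    }
    return (size_t)-1;                           /* no contiguous hole big enough */
}

int main(void)
{
    size_t a = alloc_contiguous(512UL * 1024 * 1024);    /* fits at offset 0 */
    size_t b = alloc_contiguous(1024UL * 1024 * 1024);   /* fits after A */
    size_t c = alloc_contiguous(768UL * 1024 * 1024);    /* only 512 MB left: fails */

    printf("A=%zu B=%zu C=%s\n", a, b, c == (size_t)-1 ? "no room" : "ok");
    return 0;
}
```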
What limits the size of a memory-mapped file? I know it can't be bigger than the largest contiguous chunk of unallocated address space, and that there should be enough free disk space. But are there other limits?
You're being too conservative: a memory-mapped file can be larger than the address space. The view of the memory-mapped file is limited by OS memory constraints, but that's only the part of the file you're looking at at one time. (And I guess technically you could map multiple views of discontinuous parts of the file at once, so aside from overhead and page-length constraints, it's only the total number of bytes you're looking at that poses a limit. You could look at bytes [0 to 1024] and bytes [2^40 to 2^40 + 1024] with two separate views.)
In MS Windows, look at the MapViewOfFile function. It effectively takes a 64-bit file offset and a 32-bit length.
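As a hedged sketch of those calls (the file name "large.bin" and the 4 GiB offset are placeholders; the offset is passed as a high/low DWORD pair and must be a multiple of the system's allocation granularity, typically 64 KiB):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("large.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Map the whole file object; views are taken out of it piecemeal. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    ULONGLONG offset = 4ULL * 1024 * 1024 * 1024;   /* 4 GiB into the file */
    SIZE_T    length = 1024 * 1024;                 /* view only 1 MiB of it */

    const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ,
                                              (DWORD)(offset >> 32),
                                              (DWORD)(offset & 0xFFFFFFFF),
                                              length);
    if (view != NULL) {
        printf("first byte at offset 4 GiB: 0x%02X\n", view[0]);
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```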
This has been my experience when using memory-mapped files under Win32:
If you map the entire file into one segment, it normally taps out at around 750 MB, because it can't find a bigger contiguous block of memory. If you split it up into smaller segments, say 100 MB each, you can get around 1500-1800 MB depending on what else is running.
If you use the /3GB switch you can get more than 2 GB, up to about 2700 MB, but OS performance is penalized.
I'm not sure about 64-bit, I've never tried it but I presume the max file size is then limited only by the amount of physical memory you have.
Under Windows: "The size of a file view is limited to the largest available contiguous block of unreserved virtual memory. This is at most 2 GB minus the virtual memory already reserved by the process."
From MSDN.
I'm not sure about Linux/OS X/whatever else, but it's probably also related to address space.
Yes, there are limits to memory-mapped files. The most surprising one is:
Memory-mapped files cannot be larger than 2GB on 32-bit systems.
When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes.
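Here is a small POSIX sketch of that zero-fill behaviour using plain mmap() (the file name "grow.bin" and the one-page size are placeholders): the file is extended with ftruncate(), the new region is mapped, and the new bytes read back as zero.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("grow.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    size_t new_size = 4096;                      /* extend the (empty) file by one page */
    if (ftruncate(fd, new_size) != 0) { perror("ftruncate"); return 1; }

    unsigned char *p = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("byte 0 of the extended part: %u (zero-filled on POSIX)\n", p[0]);

    munmap(p, new_size);
    close(fd);
    return 0;
}
```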
Even on my 64-bit, 32GB RAM system, I get the following error if I try to read in one big numpy memory-mapped file instead of taking portions of it using byte-offsets:
OverflowError: memory mapped size must be positive
Big datasets are really a pain to work with.
The limit of the virtual address space is >16 terabytes on 64-bit Windows systems. The issue discussed here is most probably related to mixing DWORD with SIZE_T.
There should be no other limits. Aren't those enough? ;-)