Where are multiple stacks and heaps put in virtual memory?

I'm writing a kernel and need (and want) to put multiple stacks and heaps into virtual memory, but I can't figure out how to place them efficiently. How do normal programs do it?
How (or where) are stacks and heaps placed into the limited virtual memory provided by a 32-bit system, such that they have as much growing space as possible?
For example, when a trivial program is loaded into memory, the layout of its address space might look like this:
[ Code Data BSS Heap-> ... <-Stack ]
In this case the heap can grow as big as virtual memory allows (e.g. up to the stack), and I believe this is how the heap works for most programs. There is no predefined upper bound.
Many programs have shared libraries that are put somewhere in the virtual address space.
Then there are multi-threaded programs that have multiple stacks, one for each thread. And .NET programs have multiple heaps, all of which have to be able to grow one way or another.
I just don't see how this can be done reasonably efficiently without putting a predefined limit on the size of all heaps and stacks.

I'll assume you have the basics of your kernel done: a trap handler for page faults that can map a virtual memory page to RAM. One level up, you need a virtual memory address space manager from which usermode code can request address space. Pick a segment granularity that prevents excessive fragmentation; 64KB (16 pages) is a good number. Allow usermode code to both reserve space and commit space. A simple bitmap of 4GB/64KB = 64K x 2 bits to keep track of segment state gets the job done. The page fault trap handler also needs to consult this bitmap to know whether the page request is valid or not.
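To make that concrete, here's a minimal C sketch of such a 2-bit-per-segment state map; all names (seg_state_t, vm_map, fault_is_valid) are mine, not taken from any particular kernel:

    #include <stdint.h>

    #define SEG_SIZE  (64 * 1024)                    /* 64KB granularity          */
    #define SEG_COUNT (0x100000000ULL / SEG_SIZE)    /* 65536 segments cover 4GB  */

    typedef enum { SEG_FREE = 0, SEG_RESERVED = 1, SEG_COMMITTED = 2 } seg_state_t;

    /* 2 bits per segment, 4 segments per byte: 16KB for the whole map */
    static uint8_t vm_map[SEG_COUNT / 4];

    seg_state_t seg_get(uint32_t seg)
    {
        return (seg_state_t)((vm_map[seg / 4] >> ((seg % 4) * 2)) & 3);
    }

    void seg_set(uint32_t seg, seg_state_t s)
    {
        unsigned shift = (seg % 4) * 2;
        vm_map[seg / 4] = (uint8_t)((vm_map[seg / 4] & ~(3u << shift)) | ((unsigned)s << shift));
    }

    /* The page fault handler consults the map: a fault in a FREE
       segment is an access violation, anything else can be grown. */
    int fault_is_valid(uint32_t fault_addr)
    {
        return seg_get(fault_addr / SEG_SIZE) != SEG_FREE;
    }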
A stack is a fixed-size VM allocation, typically 1 megabyte. A thread usually only needs a handful of its pages, depending on function nesting level, so reserve the 1MB and commit only the top few pages. When the thread nests deeper, it trips a page fault and the kernel can simply map the extra page to RAM to allow the thread to continue. You'll want to mark the bottom few pages as special: when the thread page faults on those, you declare a stack overflow (this website's namesake).
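In rough C, the fault-handler policy for such a stack might look like the sketch below; map_page_to_ram and kill_thread_stack_overflow are hypothetical kernel routines standing in for whatever your kernel provides:

    #include <stdint.h>

    #define PAGE_SIZE   4096
    #define GUARD_PAGES 2            /* bottom of the 1MB reservation, never committed */

    extern void map_page_to_ram(uintptr_t page_addr);     /* hypothetical */
    extern void kill_thread_stack_overflow(void);         /* hypothetical */

    /* stack_base is the lowest address of the thread's 1MB reservation */
    int handle_stack_fault(uintptr_t fault_addr, uintptr_t stack_base)
    {
        if (fault_addr < stack_base + GUARD_PAGES * PAGE_SIZE) {
            kill_thread_stack_overflow();  /* fault landed in the guard region */
            return -1;
        }
        /* otherwise commit one more page and let the thread continue */
        map_page_to_ram(fault_addr & ~(uintptr_t)(PAGE_SIZE - 1));
        return 0;
    }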
The most important job of the heap manager is to prevent fragmentation. The best way to do that is to create a lookaside list that partitions heap requests by size. Everything less than 8 bytes comes from the first list of segments, 8 to 16 bytes from the second, 16 to 32 bytes from the third, and so on, with the bucket size increasing as you go up. You'll have to play with the bucket sizes to get the best balance. Very large allocations come directly from the VM address manager.
The first time an entry in the lookaside list is hit, you allocate a new VM segment. You subdivide the segment into smaller blocks with a linked list. When such an allocation is released, you add the block to the list of free blocks. All blocks have the same size regardless of the program request so there won't be any fragmentation. When the segment is fully used and no free blocks are available you allocate a new segment. When a segment contains nothing but free blocks you can return it to the VM manager.
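A toy version of that bucket scheme might look like the following; vm_alloc_segment() stands in for the VM address space manager described above, very large allocations and returning empty segments are left out for brevity, and the caller passes the size to heap_free() only to keep the sketch short (a real heap would recover it from a segment header):

    #include <stddef.h>

    #define SEG_SIZE (64 * 1024)
    #define BUCKETS  12                           /* 8, 16, 32, ... bytes        */

    extern void *vm_alloc_segment(size_t size);   /* hypothetical VM manager call */

    typedef struct block { struct block *next; } block_t;
    static block_t *free_list[BUCKETS];

    static int bucket_for(size_t n)               /* smallest bucket that fits n */
    {
        int b = 0;
        size_t cap = 8;
        while (cap < n) { cap <<= 1; b++; }
        return b;
    }

    void *heap_alloc(size_t n)
    {
        int b = bucket_for(n);
        size_t block_size = (size_t)8 << b;
        if (free_list[b] == NULL) {               /* first hit: carve a fresh segment */
            char *seg = vm_alloc_segment(SEG_SIZE);
            for (size_t off = 0; off + block_size <= SEG_SIZE; off += block_size) {
                block_t *blk = (block_t *)(seg + off);
                blk->next = free_list[b];
                free_list[b] = blk;
            }
        }
        block_t *blk = free_list[b];
        free_list[b] = blk->next;
        return blk;
    }

    void heap_free(void *p, size_t n)             /* blocks in a bucket are all one size */
    {
        int b = bucket_for(n);
        ((block_t *)p)->next = free_list[b];
        free_list[b] = (block_t *)p;
    }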
This scheme allows you to create any number of stacks and heaps.

Simply put, as your system resources are always finite, you can't go limitless.
Memory management always consists of several layers, each having its own well-defined responsibility. From the program's perspective, only the application-level manager is visible, and it is usually concerned just with its own single allocated heap. A level above that could deal with creating multiple heaps, if needed, out of (its) one global heap and assigning them to subprograms (each with its own memory manager). Above that could sit the standard malloc()/free(), and above those the operating system, dealing with pages and actual memory allocation per process (it is basically not concerned with multiple heaps, or even with user-level heaps in general).
Memory management is costly, and so is trapping into the kernel. Combining the two could impose a severe performance hit, so what appears to be actual heap management from the application's point of view is in fact implemented in user space (the C runtime library) for the sake of performance (and other reasons out of scope here).
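As an illustration of such layering, here's a minimal sketch (not any particular runtime's implementation) of an application-level arena that takes one big block from malloc() and then serves allocations without further trips toward the kernel:

    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char  *base;
        size_t used, cap;
    } arena_t;

    int arena_init(arena_t *a, size_t cap)
    {
        a->base = malloc(cap);          /* one trip through malloc/the OS */
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    void *arena_alloc(arena_t *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;     /* keep 16-byte alignment */
        if (a->used + n > a->cap) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    void arena_destroy(arena_t *a)      /* frees the whole sub-heap at once */
    {
        free(a->base);
    }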
When a shared (DLL) library is loaded at program startup, it will most probably be loaded alongside the CODE/DATA/etc. sections, so no heap fragmentation occurs. On the other hand, if it is loaded at runtime, there's pretty much no choice other than using up heap space.
Static libraries are, of course, simply linked into the CODE/DATA/BSS/etc sections.
At the end of the day, you'll need to impose limits on heaps and stacks so that they're not likely to overflow, but you can still allocate additional ones.
If one needs to grow beyond that limit, you can either
Terminate the application with error
Have the memory manager allocate/resize/move the memory block for that stack/heap and most probably defragment the heap (its own level) afterwards; that's why free() usually performs poorly.
Considering a pretty large 1KB stack frame on every call as an average (which might happen if the application developer is inexperienced), a 10MB stack would be sufficient for 10,240 nested calls. By the way, besides that, there's pretty much no need for more than one stack and heap per thread.

Related

When and how are the stack and heap actually created?

I know basic stuff, like that when you run a process it loads environment variables and the parameters onto the stack, or how the rsp and rbp registers behave when we call a function. But when, and by what, are the stack and heap actually created when we run a process?
The stack is allocated at launch time of the process with a default size. It also has a maximum size. If the stack grows past its default size, the kernel checks whether the stack is bigger than its maximum size. It allocates more pages to the stack if it is smaller or kills the process if it is bigger.
The heap is allocated a region of virtual memory but not of physical memory. The heap has a reserved region of virtual memory so that it can freely grow within it. When you ask for dynamic memory using the new operator in C++, you (eventually) make a system call into the kernel, asking it to back part of that reserved virtual address space with physical memory. Once a portion of the physical address space has been allocated to your process, you can use it in your application freely.
Most C++ standard library implementations have code that follows how the heap grows and attempts to avoid internal fragmentation. The smallest unit of allocation in most OSes is 4KB. If you ask for, let's say, 1024 bytes of data, one page is allocated to your process. The process thus has one full page allocated, and some memory is lost to internal fragmentation. The C++ standard library implementation therefore keeps track of the allocated pages for you, so that when you ask for more memory it will use the same page for your memory needs instead of requesting several pages for nothing.
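On POSIX systems you can approximate this reserve-then-commit behaviour yourself. The following sketch (sizes arbitrary) reserves address space with mmap(PROT_NONE) and commits the first page with mprotect():

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t reserve = 1 << 20;              /* reserve 1MB of address space */
        char *heap = mmap(NULL, reserve, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap == MAP_FAILED) return 1;

        /* commit the first 4KB page so it can actually be touched */
        if (mprotect(heap, 4096, PROT_READ | PROT_WRITE) != 0) return 1;
        heap[0] = 42;
        printf("committed first page, heap[0] = %d\n", heap[0]);

        munmap(heap, reserve);
        return 0;
    }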
There are four components in RAM: the code itself, global variables, the stack, and the heap.
In C++, the first three are laid out at compile/link time and come into existence when the program is loaded; heap allocations are created when you make them (e.g. int *ptr = new int[10];).
In Python, everything is a reference (the stack holds only the references; the actual values live on the heap), so all four components exist as soon as the interpreter starts running your code.
A stack frame is created on entry to every scope. The heap is used whenever you allocate.

How memory fragmentation is avoided in Swift

A GC's compaction and mark-and-sweep phases avoid heap memory fragmentation. So how is memory fragmentation avoided in Swift?
Are these statements correct?
Every time a reference count becomes zero, the allocated space gets added to an 'available' list.
For the next allocation, the frontmost chunk of memory which can fit the size is used.
Chunks of previously used memory will be reused to the best extent possible.
Is the 'available list' sorted by address location or size?
Will live objects be moved around for better compaction?
I did some digging in the assembly of a compiled Swift program, and I discovered that swift::swift_allocObject is the runtime function called when a new class object is instantiated. It calls SWIFT_RT_ENTRY_IMPL(swift_allocObject), which calls swift::swift_slowAlloc, which ultimately calls... malloc() from the C standard library. So Swift's runtime isn't doing the memory allocation; it's malloc() that does it.
malloc() is implemented in the C library (libc). You can see Apple's implementation of libc here. malloc() is defined in /gen/malloc.c. If you're interested in exactly what memory allocation algorithm is used, you can continue the journey down the rabbit hole from there.
Is the 'available list' sorted by address location or size?
That's an implementation detail of malloc that I welcome you to discover in the source code linked above.
1. Every time a reference count becomes zero, the allocated space gets added to an 'available' list.
Yes, that's correct. Except the "available" list might not be a list. Furthermore, this action isn't necessarily done by the Swift runtime library, but it could be done by the OS kernel through a system call.
2. For the next allocation, the frontmost chunk of memory which can fit the size is used.
Not necessarily frontmost. There are many different memory allocation schemes. The one you've thought of is called "first fit". Here are some example memory allocation techniques (from this site):
Best fit: The allocator places a process in the smallest block of unallocated memory in which it will fit. For example, suppose a process requests 12KB of memory and the memory manager currently has a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB blocks. The best-fit strategy will allocate 12KB of the 13KB block to the process.
First fit: There may be many holes in the memory, so the operating system, to reduce the amount of time it spends analyzing the available spaces, begins at the start of primary memory and allocates memory from the first hole it encounters large enough to satisfy the request. Using the same example as above, first fit will allocate 12KB of the 14KB block to the process.
Worst fit: The memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the remaining space. Using the same example as above, worst fit will allocate 12KB of the 19KB block to the process, leaving a 7KB block for future use.
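For the sake of illustration, here's a small C sketch of first fit and best fit over the example hole list from above. It's a toy, not how any production malloc is written:

    #include <stdio.h>

    static int first_fit(const int *holes, int n, int req)
    {
        for (int i = 0; i < n; i++)
            if (holes[i] >= req) return i;          /* first hole that fits */
        return -1;
    }

    static int best_fit(const int *holes, int n, int req)
    {
        int best = -1;
        for (int i = 0; i < n; i++)                 /* smallest hole that fits */
            if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
                best = i;
        return best;
    }

    int main(void)
    {
        int holes[] = {6, 14, 19, 11, 13};          /* unallocated blocks, in KB */
        int req = 12;
        printf("first fit picks the %dKB hole\n", holes[first_fit(holes, 5, req)]); /* 14 */
        printf("best fit picks the %dKB hole\n",  holes[best_fit(holes, 5, req)]);  /* 13 */
        return 0;
    }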
Objects will not be compacted during their lifetime. The libc handles memory fragmentation by the way it allocates memory. It has no way of moving around objects that have already been allocated.
Alexander’s answer is great, but there are a few other details somewhat related to memory layout. Garbage collection requires memory overhead for compaction, so the wasted space from malloc fragmentation isn’t really putting Swift at much of a disadvantage. Moving memory around also hurts battery life and performance, since it invalidates the processor cache. Apple’s memory management implementation can compress memory that hasn’t been accessed in a while. So even though virtual address space can be fragmented, the actual RAM is less fragmented thanks to compression. The compression also allows faster swaps to disk.
Less related, but one of the big reasons Apple picked reference counting has more to do with C calls than with memory layout. Apple’s solution works better if you are heavily interacting with C libraries. A garbage-collected system is normally an order of magnitude slower interacting with C, since it needs to halt garbage collection operations before the call. The overhead is usually about the same as a syscall in any language. Normally that doesn’t matter, unless you are calling C functions in a loop, such as with OpenGL or SQLite. Other threads/processes can normally use the processor resources while a C call is waiting for the garbage collector, so the impact is minimal if you can do your work in a few calls. In the future there may be advantages to Swift’s memory management when it comes to systems programming and Rust-like lifecycle memory management. It is on Swift’s roadmap, but Swift 4 is not yet suited for systems programming. (Typically in C# you would drop to managed C++ for systems programming and operations that make heavy use of C libraries.)

What is the rationale for small stack even when memory is available?

Recently, I was asked in an interview, why would you have a smaller stack when the available memory has no limit? Why would you have it in 1KB range even when you might have 4GB physical memory? Is this a standard design practice?
The other answers are good; I just thought I'd point out an important misunderstanding inherent in the question. How much physical memory you have is completely irrelevant. Having more physical memory is just an optimization; it prevents having to use disk as storage. The precious resource consumed by a stack is address space, not physical memory. The bits of the stack that aren't being used right now are not even going to reside in physical memory; they'll be paged out to disk. But as soon as they are committed, they are consuming virtual address space.
The smaller your stacks, the more of them you can have. A 1kB stack is pretty useless, as I can't think of an architecture that has pages that small. A more typical size is 128kB-1MB.
Since each thread has its own stack, the number of stacks you can have is an upper limit on the number of threads you can have. Some people complain about the fact that they can't create more than 2000 threads in a standard 2GB address space of a 32-bit Windows process, so it's not surprising that some people would want even smaller stacks to allow even more threads.
Also, consider that if a stack has to be completely reserved ahead of time, it is carving a chunk out of your address space that can't be returned until the stack isn't used anymore (i.e. the thread exits). That chunk of reserved address space then limits the size of a contiguous allocation you can make.
I don't know the "real" answer, but my guess is:
It's committed on-demand.
Do you really need it?
If the system uses 1 MiB for a stack, then a typical system with 1024 threads would be using 1 GiB of memory for (mostly) nothing... which may not be what you want, especially since you don't really need it.
One reason is, even though memory is huge these days, it is still not unlimited. A 32-bit process is normally limited to 4GB of address space (yes, you can use PAE to increase that, but that requires support from the OS and a return to a segmented memory model.) Each thread uses up some of that memory for its stack, and if a stack is megabytes in size -- whether it's paged in or not -- it's taking up a significant part of the app's address space.
The smaller the stack, the more threads you can squeeze into the app, and the more memory you have available for everything else. Ideally, you want a stack just large enough to handle all possible control flows through the thread, but small enough that you don't have wasted address space.
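For what it's worth, this trade-off is directly exposed by threading APIs. For example, with pthreads you can ask for a smaller per-thread stack (the 128KB below is an arbitrary illustrative choice; PTHREAD_STACK_MIN is the floor):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;                       /* real work would go here */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        /* deliberately small stack so more threads fit in the address space */
        pthread_attr_setstacksize(&attr, 128 * 1024);

        pthread_t t;
        if (pthread_create(&t, &attr, worker, NULL) == 0)
            pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }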
There are two things here. First, the limit on stack size puts a limit on the number of processes/threads in the system; and even then, the limit comes not from the size of physical memory but from the limit on addressable virtual memory. Second, processes/threads rarely need more stack space than that, and if they do, they can ask for it (libraries handle this seamlessly). So, when starting a new process/thread, it makes sense to give it a small stack.
Other answers here already mention the core concept, that the most significant consumed resource of a stack is address space (since its implementation requires chunks of contiguous address space) and that the default space consumed on windows for each thread is not insignificant.
However the full story is extremely nuanced (and can and will change over time) over many layers and levels.
This article by Mark Russinovich as part of his "Pushing the limits of Windows" series goes into extremely detailed levels of analysis. The work is in no way an introductory article though, and most people would not consider it the sort of thing that would be expected to be known in a job interview unless perhaps you were interviewing for a job in that particular field.
Maybe because every time you call a function, the OS has to allocate memory for that function's stack frame. Because functions can chain, several nested calls will incur more stack allocations. A large default stack size, like 4GiB, would be impractical. But that's just my guess...

What's the difference between "virtual memory" and "swap space"?

Can any one please make me clear what is the difference between virtual memory and swap space?
And why do we say that for a 32-bit machine the maximum virtual memory accessible is 4 GB only?
There's an excellent explanation of virtual memory over on Super User.
Simply put, virtual memory is a combination of RAM and disk space that running processes can use.
Swap space is the portion of virtual memory that is on the hard disk, used when RAM is full.
As for why a 32-bit CPU is limited to 4GB of virtual memory, it's addressed well here:
By definition, a 32-bit processor uses 32 bits to refer to the location of each byte of memory. 2^32 = 4.2 billion, which means a memory address that's 32 bits long can only refer to 4.2 billion unique locations (i.e. 4 GB).
There is some confusion regarding the term Virtual Memory; it actually refers to the following two very different concepts:
Using disk pages to extend the conceptual amount of physical memory a computer has - The correct term for this is actually Paging
An abstraction used by various OS/CPUs to create the illusion of each process running in a separate contiguous address space.
Swap space, OTOH, is the name of the portion of disk used to store additional RAM pages when not in use.
An important realization to make is that the former is transparently possible due to the hardware and OS support of the latter.
In order to make better sense of all this, you should consider how the "Virtual Memory" (as in definition 2) is supported by the CPU and OS.
Suppose you have a 32-bit pointer (64-bit pointers are similar, but use slightly different mechanisms). Once "Virtual Memory" has been enabled, the processor considers this pointer to be made of three parts:
The highest 10 bits are a Page Directory Entry
The following 10 bits are a Page Table Entry
The last 12 bits make up the Page Offset
Now, when the CPU tries to access the contents of a pointer, it first consults the Page Directory, a table of 1024 entries (on the x86 architecture, its location is pointed to by the CR3 register). The 10-bit Page Directory Entry is an index into this table and points to the physical location of a Page Table. That, in turn, is another table of 1024 entries, each of which holds a pointer into physical memory plus several important control bits (we'll get back to those later). Once a page has been found, the last 12 bits are used to find the address within that page.
There are many more details (TLBs, Large Pages, PAE, Selectors, Page Protection) but the short explanation above captures the gist of things.
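If it helps, the 10/10/12 split described above is easy to demonstrate in a few lines of C (the sample address is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t vaddr = 0xC0801ABC;                 /* any 32-bit virtual address */

        uint32_t pde_index = (vaddr >> 22) & 0x3FF;  /* top 10 bits  */
        uint32_t pte_index = (vaddr >> 12) & 0x3FF;  /* next 10 bits */
        uint32_t offset    =  vaddr        & 0xFFF;  /* low 12 bits  */

        printf("PDE %u, PTE %u, offset 0x%03X\n", pde_index, pte_index, offset);
        return 0;
    }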
Using this translation mechanism, an OS can use a different set of physical pages for each process, thus giving each process the illusion of having all the memory for itself (as each process gets its own Page Directory)
On top of this Virtual Memory, the OS may also add the concept of Paging. One of the control bits discussed earlier allows it to specify whether an entry is "Present". If it isn't present, an attempt to access that entry will raise a Page Fault exception. The OS can catch this exception and act accordingly. OSes supporting swapping/paging can thus decide to load a page from the swap space, fix the translation tables, and then issue the memory access again.
This is where the two terms combine, an OS supporting Virtual Memory and Paging can give processes the illusion of having more memory than actually present by paging (swapping) pages in and out of the swap area.
As to your last question (why it is said that a 32-bit CPU is limited to 4GB of Virtual Memory): this refers to "Virtual Memory" in the sense of definition 2, and is an immediate result of the pointer size. If the CPU can only use 32-bit pointers, you have only 32 bits to express different addresses, which gives you 2^32 bytes = 4GB of addressable memory.
Hope this makes things a bit clearer.
IMHO it is terribly misleading to treat swap space as equivalent to virtual memory. VM is a concept much more general than swap space. Among other things, VM allows processes to reference virtual addresses during execution, which are translated into physical addresses with the support of hardware and page tables. Thus processes need not concern themselves with how much physical memory the system has, or with where an instruction or datum actually resides in the physical memory hierarchy. VM provides this mapping. The referenced item (instruction or data) may be resident in L1, or L2, or RAM, or finally on disk, in which case it is loaded into main memory.
Swap space, on the other hand, is just a place in secondary storage where pages are kept when they are inactive. If there is not enough RAM, the OS may decide to swap out pages of a process to make room for another process's pages. The processor never executes instructions or reads/writes data directly from swap space.
Notice that it would be possible to have swap space in a system without VM: processes that directly access physical addresses could still have portions of their memory kept on disk.
Though the thread is quite old and has already been answered, I would still like to share the link below, as it is the simplest explanation I have found so far, and it has diagrams for better visualization.
Key difference: Virtual memory is an abstraction of the main memory. It extends the available memory of the computer by storing the inactive parts of the RAM's content on disk. Whenever that content is required, it is fetched back into RAM. Swap memory, or swap space, is the part of the hard disk drive that is used for virtual memory. Thus, the two terms are often used interchangeably.
Virtual memory is quite different from physical memory. Programmers get direct access to the virtual memory rather than the physical memory. It hides the details of the system's real physical memory, and it creates the illusion of a whole address space with addresses beginning at zero. It is composed of the available RAM plus disk space.
Swap memory is generally called swap space. Swap space refers to the portion of virtual memory that is reserved as a temporary storage location; it is utilized when the available RAM cannot meet the requirements of the system's memory. For example, in the Linux memory system, the kernel locates each page either in physical memory or in the swap space. The kernel also maintains a table tracking which pages have been swapped out and which are in physical memory.
Pages that have not been accessed for a long time are sent to the swap space area; this process is referred to as swapping out. If such a page is required again, it is swapped back into physical memory by swapping out a different page. Thus, one can conclude that swap memory and virtual memory are interconnected: swap space is one mechanism used to implement virtual memory.
difference-between-virtual-memory-and-swap-memory
"Virtual memory" is a generic term. In Windows, it is called as Paging or pagination. In Linux, it is called as Swap.

Total memory consumption of the system

Is it correct to assume that the total memory consumption (virtual + physical) of a system is the sum of the "Memory Usage" and "VM Size" columns shown by Task Manager in Windows?
Read these posts by Mark Russinovich:
http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
In modern Windows there really is no single truth about "Total Memory Consumption". It depends of course on the definition, but the real question is what you want to do with the answer.
Some processes like SQL-Server tend to use every byte of memory they can get their hands on, if you let them. The .NET CLR garbage collector monitors memory use and acts accordingly, trying to free more memory when it gets scarce.
So for instance you can have a system with 8 GB of physical memory, of which 90% is "used". How much of that memory is actually needed, is very hard to say. The same system may run on a 4 GB machine with no noticeable performance loss or any other issues.
If you want to explore some of the complexities of memory management under Windows, download "VMMap v2.0" from the former sysinternals site. It shows very detailed memory usage per process and may aid you in your quest.
To quote from VMMap's help:
VMMap categorizes memory into one of several types:
Image
The memory represents an executable file such as a .exe or .dll. The Details column shows the file's path.
Private
Private memory cannot be shared with other processes, is charged against the system commit limit, and typically contains application data.
Shareable
Shareable memory can be shared with other processes, is charged against the system commit limit and typically contains data shared between DLLs in different processes or inter-process communication messages. The Windows APIs refer to this type of memory as pagefile-backed sections.
Mapped File
The memory represents a file on disk and the Details column shows the file's path. Mapped files typically contain application data.
Heap
Heaps represent memory managed by the user-mode heap manager and, like Private memory, is charged against the system commit limit and contains application data.
Managed Heap
Managed heap represents memory that's allocated and used by the .NET garbage collector.
Stack
Stacks are memory used to store function parameters, local function variables and function invocation records for individual threads. Stacks are charged against the commit limit and typically grow on demand.
System
System memory is kernel-mode physical memory associated with the process. The vast majority of System memory consists of the process page tables.
Free
Free memory regions are spaces in the process address space that are not allocated.
Now you just need to define what types of memory you consider as "used", add these up for all processes, remove multiple duplicates and look at the number... There is a reason why in task manager or other tools, there is no single number labeled "Total Memory Consumption" :-)
No, physical memory and virtual memory may overlap. If a page is in virtual memory and is then paged into physical memory, the virtual memory is not necessarily freed; it may be reserved for when the page gets paged out again.