Is it possible to say that internal fragmentation occurs only in physical memory and external fragmentation occurs only in virtual memory?
If we can't say that, could you explain where internal and external fragmentation can happen?
I disagree that internal fragmentation occurs only in physical memory. The unused memory is also marked as used in the free list, and it is when allocating from this free list that the OS hands out more than what is needed. I would argue that internal fragmentation is not tied to any one type of memory; it is a property of the allocation algorithm the OS is using. It's an issue in both.
I am currently reading about operating systems and came across internal and external memory fragmentation.
Internal fragmentation arises with fixed-size partitioning. For example, paging is based on fixed-size partitioning, and hence paging suffers from internal fragmentation.
On the other hand, external fragmentation arises with variable-size partitioning.
For example, segmentation is based on dynamic variable-size partitioning, and hence segmentation suffers from external fragmentation.
So my doubt is: since there is internal fragmentation in paging, does paging have zero external fragmentation, or something so small that we can neglect it?
Similarly, does segmentation have zero internal fragmentation, or an amount small enough to be neglected?
Is my understanding right that internal fragmentation is tied to fixed-size partitioning schemes and external fragmentation to variable-size partitioning?
No. There can never be external fragmentation in fixed-size partitioning, because the leftover space inside a partition cannot be allocated to any other process. External fragmentation occurs only when there is enough total free space to satisfy a request, but the available space cannot be allocated because not enough of it is contiguous.
On the other hand, in variable-size partitioning there can never be internal fragmentation, because the leftover space can be allocated to another process of the same or smaller size (though the probability of such an allotment could be very low).
We can remove both internal and external fragmentation by using non-contiguous allocation together with variable-size partitioning.
As processes are loaded into and removed from memory, the free memory space is broken into little pieces, causing fragmentation... but how does this happen?
And what is the best solution to external fragmentation?
External fragmentation exists when there is enough total memory to satisfy a request (usually from a process), but the required memory is not available at a contiguous location, i.e., it is fragmented.
Solutions to external fragmentation:
1) Compaction: shuffling the fragmented memory into one contiguous region.
2) Virtual memory addressing, using paging and segmentation.
External Fragmentation
External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
See the following example:
0x0000 0x1000 0x2000
A B C //Allocated three blocks A, B, and C, of size 0x1000.
A C //Freed block B
Notice that the memory B occupied cannot be used for an allocation larger than B's size, even though it is now free.
External fragmentation can be reduced by compaction, i.e., shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic. External fragmentation is also avoided by using the paging technique.
The best solution to avoid external fragmentation is Paging.
Paging is a memory management technique usually used by virtual memory operating systems to help ensure that the data you need is available as quickly as possible.
For more, see this: What's the difference between operating system "swap" and "page"?
With paging there is no external fragmentation, but it does not avoid internal fragmentation.
What does the virtual memory space size depend on? Does it depend on the RAM or on the architecture or something else.
Basically it depends on the architecture (32-bit, 64-bit, and so on).
This is a very simplistic explanation of things, but the so-called "architecture" limits the size of the virtual address space. For example, a 32-bit architecture can address 2^32 memory addresses.
The size of the RAM limits the amount of physical memory that can be used, but not the virtual address space. (Potentially, the hard drive can be used to extend the available physical memory.)
Anyway, I recommend reading the wiki page on virtual memory.
Very simply, virtual memory is just a way of letting your software use more memory addresses than there is actual physical memory. When the data being accessed isn't already in physical memory, it is transparently read in from disk. When more physical memory is needed to do that, some of the current contents of physical memory (e.g. the least recently used pages) are temporarily written, or "swapped", out to disk. In other words, some of the physical memory becomes a kind of cache for a larger virtual memory space that includes the hard disk.
When I run a job on a node using PBS, I finally get in the job report:
resources_used.mem=1616024kb
resources_used.vmem=2350176kb
resources_used.walltime=00:06:32
What does the virtual memory really mean? I don't think there is a hard drive connected to each node.
Which kind of memory should I take into account when I try to increase the size of the problem, so that I don't hit the 16 GB capacity of the node: the normal memory (mem) or the virtual memory (vmem)?
Thanks
The vmem indicates how much memory your job was using in total. It used all of the available physical memory (see the mem value), and more. An operating system allows programs to allocate more memory than there is physical memory available.
If you're actively using more memory than there is physical memory available, you'll start seeing swap activity (data that was swapped out to disk being brought back into memory, and other stuff being put to disk). That's bad, it will basically kill your performance if it happens a lot.
So, as long as you're not actively using more than 16GB, you're fine. But the mem or vmem values won't tell you this, it depends on what the application is actually doing.
We know that malloc() and the new operator allocate memory from the heap dynamically, but where does the heap reside? Does each process have its own private heap in its address space for dynamic allocation, or does the OS have a global one shared by all processes? What's more, I read in a textbook that once a memory leak occurs, the leaked memory cannot be reused until we restart the computer. Is this claim right? If the answer is yes, how can we explain it?
Thanks for your reply.
Regards.
The memory is allocated from the user address space of your process's virtual memory, and all of it is reclaimed by the OS when the process terminates; there is no need to restart the computer.
Typically, the C runtime will use the various OS APIs to allocate memory, which becomes part of its process address space. Then, within that allocated memory, it creates a heap and serves your calls to malloc or new from that heap.
The reason for this is that OS APIs are often coarse-grained and require you to allocate memory in large chunks (such as a page size), whereas your application typically wants to allocate small amounts of memory at any one time.
You do not mention which OS you are interested in.
That means there is no direct answer.
Try looking into a book about operating systems,
e.g. Tanenbaum's.