Virtual memory is a core concept used by modern operating systems, but I got stuck answering a question about it and was not sure of my reasoning. Here is the question:
Suppose there are only a few applications running on a machine, such that the
physical memory of the system is more than the memory required by all the
applications. To support virtual memory, the OS needs to do a lot of work. So if
the running applications all fit in the physical memory, is virtual memory
really needed?
(Furthermore, the applications running together will always fit in RAM.)
Even when the memory usage of all applications fits in physical memory, virtual memory is still useful. VM can provide these features:
Privileged memory isolation (no app can touch the kernel or memory-mapped hardware devices)
Interprocess memory isolation (one app can't see another app's memory)
Static memory addresses (e.g. every app has main() at address 0x08000000)
Lazy memory (e.g. pages in the stack are allocated and zero-filled only when first accessed; see the sketch after this list)
Redirected memory (e.g. memory-mapped files)
Shared program code (if more than one instance of a program or library is running, its code only needs to be stored in memory once)
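To make the lazy-allocation point concrete, here is a minimal sketch (Linux/POSIX assumed): a large anonymous mapping consumes essentially no physical RAM until individual pages are first touched, at which point the kernel faults them in zero-filled.

```c
/* Minimal sketch of lazy, zero-filled memory (Linux/POSIX assumed). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1UL << 30;                    /* reserve 1 GiB of virtual space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* At this point almost no physical RAM is in use (check RSS in top).
     * Touching a few pages faults them in, zeroed, one at a time. */
    memset(p, 0xAB, 4096 * 10);                /* only ~10 pages become resident */

    printf("first byte: 0x%02x\n", (unsigned char)p[0]);
    munmap(p, len);
    return 0;
}
```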
While not strictly needed in this scenario, virtual memory is about more than just providing "more" memory than is physically available (swapping). For example, it helps avoid memory fragmentation (from an application's point of view), and, depending on how dynamic/shared libraries are implemented, it can help avoid relocation (relocation is when the dynamic linker has to adjust pointers in a library or executable that was just loaded).
A few more points to consider:
Buggy apps that don't handle failures in the memory allocation code
Buggy apps that leak allocated memory
Virtual memory reduces the severity of these bugs; a brief sketch of the first case follows.
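As a hedged illustration of the first bullet (the sizes here are invented for the example): with a generous virtual address space and overcommit, an unchecked allocation rarely returns NULL, so the missing check goes unnoticed; likewise, leaked pages that are never touched again can be paged out rather than pinning physical RAM.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 1000;

    /* Buggy pattern: no check. With a large virtual address space the OS
     * almost never returns NULL here, so the missing check stays hidden. */
    int *buggy = malloc(n * sizeof *buggy);

    /* Careful pattern: the check the buggy app omits. */
    int *careful = malloc(n * sizeof *careful);
    if (careful == NULL) {
        fprintf(stderr, "out of memory\n");
        free(buggy);
        return 1;
    }

    memset(careful, 0, n * sizeof *careful);
    free(careful);
    free(buggy);   /* forgetting this would be the second bullet: a leak */
    return 0;
}
```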
The other replies list valid reasons why virtual memory is useful, but
I would like to answer the question more directly: no, virtual memory
is not needed in the situation you describe, and not using virtual
memory can be the right trade-off in such situations.
Seymour Cray took the position that "Virtual memory leads to virtual
performance." and most (all?) Cray vector machines lacked virtual
memory. This usually leads to higher performance on the process level
(no translations needed, processes are contiguous in RAM) but can lead
to poorer resource usage on the system level (the OS cannot utilize
RAM fully since it gets fragmented on the process level).
So if a system is targeting maximum performance (as opposed to maximum
resource utilization) skipping virtual memory can make sense.
When you experience the severe performance (and stability) problems
often seen on modern Unix-based HPC cluster nodes when users
oversubscribe RAM and the system starts to page to disk, there is a
certain sympathy with the Cray model where the process either starts
and runs at max performance, or it doesn't start at all.
Related
In the context of operating systems, does an application have direct access to primary memory?
Speaking from a Linux perspective, and assuming we're talking about user-level (non-root) processes: no, they can't directly access physical memory, nor should they, for security and functionality reasons. The whole point of paging is to abstract physical memory away from applications, so that each one thinks it has the entire address space to itself while, behind the scenes, its memory may or may not be resident in physical memory (see: page faults, non-contiguous allocation, page replacement policies).
For root processes, though, there is at least one way I know of: through /dev/mem. This discussion mentions how to mmap /dev/mem to get access to specific physical addresses. Use at your own risk though.
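For completeness, a hedged sketch of that /dev/mem approach (Linux, root required; the physical address below is a made-up placeholder, so substitute a page-aligned region you actually own, such as a device's register window):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0x3F200000UL   /* hypothetical, page-aligned physical address */
#define MAP_LEN   4096UL

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of physical memory into this process's address space. */
    volatile uint32_t *regs = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, PHYS_ADDR);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("word at physical 0x%lx: 0x%08x\n", PHYS_ADDR, regs[0]);

    munmap((void *)regs, MAP_LEN);
    close(fd);
    return 0;
}
```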
What is the need for virtual memory on a 64-bit microprocessor? As far as I know, it can address around 16 exabytes of memory, so why do we still need paging?
In addition to providing virtual memory, paging is used to enforce memory protection: it provides separation between different applications, and between applications and the operating system. Paging also allows different applications to use the same linear address to access different memory locations.
The memory pager is also capable of doing other very useful things, such as mapping a file to memory and paging in from disk only the blocks that are actually used, mapping the same data into multiple processes with copy-on-write, giving each program only as much physical RAM as it actually uses, implementing shared memory, memory-mapped I/O and virtualization.
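As a small illustration of the file-mapping point (a minimal sketch assuming a POSIX system; the file path is just an example), the kernel only pages in the parts of the file that are actually touched:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* The file's contents now appear in the address space; nothing is
     * read from disk until the pages are actually accessed. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    fwrite(data, 1, st.st_size, stdout);        /* first access faults pages in */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```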
The main reason to have virtual memory is to be able to work with more data than the system has physical memory, but most of the underlying infrastructure (with the significant exception of the paging algorithm) would be needed anyway, and has hardware support.
In the future we may see paging go away. Another issue is that today we have systems with, say, 8 GB of physical memory on 64-bit processors; as soon as you need more than those 8 GB, you have to resort to paging out to disk. It should not be that long until we have computer systems with terabytes of memory, and paging to disk will not be necessary.
In that case we will need new operating systems and even new computer systems to take advantage of such large memory.
For example, I heard in class that global variables are just put in a specific location in memory. What is to prevent two programs from accidentally using the same memory location for different variables?
Also, do both programs use the same stack for their arguments and local variables? If so, what's to prevent the variables from interleaving with each other and messing up the indexing?
Just curious.
Most modern processors have a memory management unit (MMU) that gives the OS the ability to create protected, separate memory sections for each process, including a separate stack for each process. With the help of the MMU, the processor can restrict each process to modifying/accessing only memory that has been allocated to it. This prevents one process from writing into another process's memory space.
Most modern operating systems will use the features of the MMU to provide protection for each process.
Here are some useful links:
Memory Management Unit
Virtual Memory
This is something that modern operating systems do by loading each process in a separate virtual address space. Multiple processes may reference the same virtual address, but the operating system, helped by modern hardware, will map each one to a separate physical address, and make sure that one process cannot access physical memory allocated to another process1.
1 Debuggers are a notable exception: operating systems often provide special mechanisms for debuggers to attach to other processes and examine their memory space.
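To make the point above concrete, here is a minimal sketch (assuming a POSIX system): parent and child print the same virtual address for the same global variable, yet each process sees its own value, because the OS maps that address onto different physical pages for each process.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int global = 1;

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                 /* child: modify its private copy */
        global = 2;
        printf("child : &global=%p value=%d\n", (void *)&global, global);
        exit(0);
    }

    wait(NULL);                     /* parent: value unchanged despite same address */
    printf("parent: &global=%p value=%d\n", (void *)&global, global);
    return 0;
}
```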
The short answer to your question is that the operating system deals with these issues. They are very serious issues, and a significant part of an operating system's job is keeping everything in a separate space. The operating system tracks every running program and makes sure each one uses only its own space. This keeps the stacks separate, too: each program runs on its own stack assigned by the OS. How the OS does this assignment is actually a complex task.
I have a small question regarding virtual memory.
Normally, it is up to the OS to provide virtual memory, using disk space to expand the amount of memory that appears to be available to applications.
The OS frees physical memory by copying data out to disk and restoring it when needed.
However, is it possible for an application to manage its own "virtual memory" rather than leaving it to the OS, for example by writing objects to a file and then destroying them?
If so, is it advantageous to let the application manage its own virtual memory, or is it better to let the OS provide it?
Most applications would not even be able to tell that they are running under virtual memory, because the operating system performs address translation on every memory access the application makes.
This is a task definitely best left to the operating system unless you are working in a very low-level environment (in which case you are probably writing your own operating system anyway).
Aside from the fact that this requires kernel privileges to accomplish, you would need to take care not to corrupt other processes' memory.
The operating system is the best place for this kind of logic.
It is not just not advantageous for the application to manage its own virtual memory, it is not possible with standard operating systems (Windows, Unix, Linux, Mac OS X, etc.).
Translation from virtual address to physical address is done by the Memory Management Unit (MMU), which is hardware built into the processor, not part of the operating system software.
The only part of the process done by the operating system software is handling of page faults (swapping units of virtual memory to and from backing store), when the address translation finds a reference to a virtual address that is not currently mapped in physical memory.
What could be advantageous is for an application to minimize its use of virtual memory by writing out its own data to disk rather than allocating larger amounts of virtual memory. However, this will only yield a benefit if the application's disk i/o is more efficient than the operating system page handler's disk i/o - an unlikely scenario these days.
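For illustration, here is a minimal sketch of what "managing your own virtual memory" might look like (plain standard C; the buffer size and contents are invented): the application writes a cold buffer to a scratch file, frees the RAM, and reads it back later - exactly the work the OS pager already does, usually more efficiently.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 1 << 20;                       /* 1 MiB of "cold" data */
    char *buf = malloc(n);
    if (!buf) return 1;
    memset(buf, 'x', n);

    FILE *f = tmpfile();                      /* manual backing store */
    if (!f || fwrite(buf, 1, n, f) != n) return 1;
    free(buf);                                /* "page out": RAM released */

    /* much later: "page in" on demand */
    buf = malloc(n);
    if (!buf) return 1;
    rewind(f);
    if (fread(buf, 1, n, f) != n) return 1;

    printf("restored byte: %c\n", buf[0]);
    free(buf);
    fclose(f);
    return 0;
}
```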
I was recently asked a question: in a computer system, if the primary memory (RAM) is comparable in size to the secondary memory (HDD), is there still a need for virtual memory to be implemented in such a computer system?
Since paging and segmentation require context switching, which is purely processing overhead, would the benefits of virtual memory outweigh the processing overhead it requires?
Can someone help me with this question?
It is true that with virtual memory you are able to have your programs commit (i.e. allocate) a total of more memory than is physically available. However, this is only one of many benefits of having virtual memory, and it's not even the most important one. Personally, when I use a PC, I periodically check Task Manager to see how close I come to using my actual RAM. If I constantly go over, I go and buy more RAM.
The key attribute of all OSes that use virtual memory is that every process has its own isolated address space. That means you can have a machine with 1 GB of RAM and 50 processes running, but each one will still have 4 GB of addressable memory space (assuming a 32-bit OS). Why is that important? It's not that you can "fake things out" and use RAM that isn't there - as soon as you go over and swapping starts, the virtual memory manager will begin thrashing and performance will come to a halt. A much more important implication is that if each program has its own address space, there's no way it can write to some random memory location and affect another program.
That's the main advantage: stability/reliability. In Windows 95, you could write an application that would crash the entire operating system. In W2K and later, it is simply impossible to write a program that paves all over its own address space and crashes anything other than itself.
There are a few other advantages as well. When executables and DLLs are loaded into RAM, the virtual memory manager can detect when the same binary is loaded more than once, and it will make multiple processes share the same physical RAM. At the virtual memory level, it appears as if each process has its own copy, but at a lower level it all gets mapped to one spot. This speeds up program startup and also optimizes memory usage, since each DLL is only loaded once.
Virtual memory managers also allow you to perform file I/O by simply mapping files to pages in the virtual address space. In addition to being an interesting alternative way of working with files, this also allows for shared memory segments, where physical RAM with read/write pages is intentionally shared between processes for extremely efficient inter-process communication (IPC).
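As a hedged sketch of such a shared memory segment (POSIX shm_open assumed; the segment name "/vm_demo" is arbitrary), any process that opens and maps the same name ends up sharing the same physical pages:

```c
/* Compile with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/vm_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any other process that opens "/vm_demo" and maps it sees this data. */
    strcpy(shared, "hello from process A");
    printf("wrote: %s\n", shared);

    munmap(shared, 4096);
    close(fd);
    /* shm_unlink("/vm_demo") would remove the segment when done. */
    return 0;
}
```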
With all these benefits, if we consider that most of the time you still want to aim for having more physical RAM than the total commit size, and that modern CPUs have support for virtual address mapping built directly into the hardware, the overhead of having a virtual memory manager is actually very small. On the other hand, in environments where many applications from many different vendors run concurrently, a private process address space is priceless.
I'm going to dump my understanding of this matter, with absolutely no background credentials to back it up. Gonna get downvoted? :)
First up, by saying primary memory is comparable to secondary memory, I assume you mean in terms of space. (After all, accessing RAM is faster than accessing storage.)
Now, as I understand it,
Random access memory is limited by the address space, i.e. the range of addresses the operating system can use. A 32-bit operating system is limited to roughly 4 GB of RAM, while 64-bit operating systems can (theoretically) address about 16 exabytes of RAM, although Windows 7 caps it at 192 GB for the Ultimate edition, and Server 2008 supports up to 2 TB.
Of course, there are still multiple factors, such as
the cost of manufacturing RAM (a single 8 GB memory module is still in the hundreds of dollars)
the number of DIMM slots on motherboards (I've seen boards with 4 slots)
But for the purpose of this discussion let us ignore these limitations, and talk just about space.
Let us talk about how applications nowadays deal with memory. Applications do not know how much memory exists - for the most part, it simply requisitions it from the operating system. The operating system is the one responsible for managing which address spaces have been allocated to each application that is running. If it does not have enough, well, bad things happen.
But surely, with a theoretical 16 exabytes of RAM, you'd never run out?
Well, a famous person long ago supposedly said we'd never need more than 640 KB of RAM.
Because most applications nowadays are greedy (they take as much as the operating system is willing to give), if you ran enough applications on a powerful enough computer, you could theoretically exceed the capacity of physical memory. In that case, virtual memory would be required to make up the extra required memory.
So to answer your question: (in my humble opinion formed from limited knowledge on the matter,) yes you'd still need to implement virtual memory.
Obviously take all this and do your own research. I'm turning this into a community wiki so others can edit it or just delete it if it is plain wrong :)
Virtual memory working
It may not answer your whole question, but it seems like the answer to me.