A virtual address is described as a linear address in some places, and as a logical address in others.
I'd like to know which one is right, along with a clear explanation of the concept of a virtual address.
The concept of virtual addresses is that you have a fake/pretend address space and convert/map it somehow to the real/physical address space for one or more reasons (to improve flexibility, to improve portability, to improve security, etc.). How this is implemented in practice doesn't really affect the theoretical concept.
For the implementation of the concept on 80x86; virtual addresses are converted into linear addresses using segmentation, then linear addresses are converted into physical addresses using paging. However; segmentation can be configured so that "virtual = linear" (by setting segment bases to zero and segment limits to max., including in 64-bit code if FS and GS are configured so that they do nothing); and paging can be disabled resulting in "linear = physical"; and if neither segmentation nor paging are used you end up with "virtual = linear = physical".
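To make that two-step translation concrete, here is a minimal conceptual sketch in C of 32-bit protected-mode translation, assuming 4 KiB pages, two-level page tables, and identity-mapped page tables (so table entries can be followed as ordinary pointers). All names are illustrative; real hardware also checks segment limits, privilege levels, and present/permission bits.

```c
#include <stdint.h>

/* Step 1: segmentation - virtual (logical) address to linear address.
 * With a segment base of 0, virtual == linear. */
uint32_t virtual_to_linear(uint32_t segment_base, uint32_t offset)
{
    return segment_base + offset;
}

/* Step 2: paging - linear address to physical address via a two-level
 * page-table walk (4 KiB pages).  Assumes the page tables are identity
 * mapped so entries can be dereferenced directly; a real walk also
 * checks present and permission bits. */
uint32_t linear_to_physical(const uint32_t *page_directory, uint32_t linear)
{
    uint32_t dir_index   = (linear >> 22) & 0x3FF;  /* top 10 bits    */
    uint32_t table_index = (linear >> 12) & 0x3FF;  /* middle 10 bits */
    uint32_t page_offset =  linear        & 0xFFF;  /* low 12 bits    */

    const uint32_t *page_table =
        (const uint32_t *)(uintptr_t)(page_directory[dir_index] & ~0xFFFu);
    uint32_t frame = page_table[table_index] & ~0xFFFu;

    return frame | page_offset;
}
```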
Most operating systems for 80x86 don't use segmentation but do use paging, so virtual addresses can be described as linear addresses for most operating systems (and most applications) on 80x86. But "technically can" isn't a good reason to increase confusion, and almost nobody would call them linear addresses (instead of virtual addresses) without a reason - normally you'd only see the word "linear" used if the difference might matter.
For logical addresses, I have no idea where you saw that, and without context I'd (correctly or incorrectly) assume it's related to storage space and has nothing to do with memory (e.g. "logical block address" as an alternative to "cylinder, head, sector addressing" for old hard disks).
The three basic concepts you need to know:
Physical - An actual, specific device
Logical - A redirection to a device
Virtual - A simulated device
In ye olde days, before large-memory systems, virtual and logical were often conflated with regard to addresses. In reality, there is no such thing as a virtual address. A logical address can map to nothing at all, to a physical address, or to memory that is simulated virtually.
You can have virtual memory that is accessed by logical addresses.
I faced this question in my interview. I answered that there is no limit, since virtual memory is itself an imaginary thing, so we don't have any limit.
But I couldn't find a proper answer by googling.
Kindly help me out with this and explain the memory limit of virtual memory.
The maximum theoretical size for virtual memory is given by the size of a pointer. The largest number that can be represented by the pointer is the maximum theoretical size of virtual memory. The units are the minimal addressable memory unit (typically bytes).
Real operating systems sometimes impose additional restrictions.
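As a quick illustration of the "size of a pointer" point above, a C program can print its own pointer width and the largest pointer value; on a 32-bit build that largest value corresponds to roughly 4 GiB of addressable bytes, on a 64-bit build to a far larger theoretical limit than any OS actually provides.

```c
#include <stdio.h>
#include <stdint.h>

/* The theoretical upper bound on virtual memory is set by how many
 * distinct values a pointer can hold (units: bytes, assuming byte
 * addressing).  Real systems impose much lower practical limits. */
int main(void)
{
    printf("pointer width : %zu bits\n", sizeof(void *) * 8);
    printf("max address   : %ju\n", (uintmax_t)UINTPTR_MAX);
    return 0;
}
```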
There are a number of restrictions on virtual memory.
The address range of the underlying hardware.
Any subdivisions of the address space. Some ranges may be reserved (for example, system and user address spaces), and some may be invalid altogether. Example: the VAX divides the 32-bit address space evenly into two user spaces, a system space, and a reserved (unusable) space.
Limits the operating system imposes on page table size. Most systems have a parameter and/or account setting limiting this.
The size of the page file.
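One concrete, POSIX-specific example of such an OS-imposed restriction is the per-process address-space limit; the sketch below queries it with getrlimit(RLIMIT_AS), though the value and how strictly it is enforced vary by system.

```c
#include <stdio.h>
#include <sys/resource.h>

/* Query the cap on this process's total virtual address space.
 * Regardless of how far a pointer could theoretically reach, the OS
 * may refuse mappings beyond this limit. */
int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            puts("address-space limit: unlimited");
        else
            printf("address-space limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}
```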
I'm learning that segmentation in operating systems is based on dividing a program into different segments (these could represent a symbol table, the source text, the stack...), each a unit that starts at logical memory address 0. This is the virtual address that the MMU (?) uses, together with the offset, to get the real address.
An apparent benefit of segmentation is that, since each segment starts at address 0, multiple processes can take advantage of a single segment simultaneously (an example is the shared library).
However, I don't see how else segmentation can benefit programmers. What would be some examples?
Thanks!
Segmentation provides NO benefit to programmers. Segmentation is a kludge that developed to overcome architectural limits. The 16-bit PDP-11 computers could only address 64K of memory. The use of segmentation allowed the programmer to map memory in and out of the address space to access more memory.
The 8086 chip was retrograde. IBM set the computer industry back by years by using it for the PC rather than the 68000. The 8086 used segments to reduce the size of instructions. Rather than using 32 bits for an address, instructions could use an offset from a segment register.
In 64-bit mode, the abomination of segments in the Intel processors finally goes away.
While I was reading this Wikipedia article, http://en.wikipedia.org/wiki/Memory_management_unit#How_it_works, I came across the statement that the virtual address space (the range of addresses used by the processor) is divided into pages. But I had learnt that only the physical memory (RAM) is divided into pages. So how is the division of a process's virtual address space done?
Also, here the virtual address space is defined as the range of addresses used by the processor. Does the range of addresses used by the processor mean the width of the processor's address bus? So if I have a processor with a 32-bit address bus, and 4 GB (2^32) of RAM, are my physical and virtual address spaces the same?
Bear with me if the questions are too naive. I am still not getting a very clear visualization of address spaces. Thanks in advance.
The answer is specific to each OS, but in general terms it means that though each process gets, say, 32 bits worth of addressable memory, this memory space is divided into ranges or pages of a certain size.
Simplistically speaking when your process accesses an address, that location will be in a certain page. The OS will ensure that there is physical memory that is mapped to that location. However it may not be in the same address in physical ram.
When some other process addresses that location, the OS will map a page of physical RAM in at that point so that the location is addressable there too.
All the time, physical memory pages are being mapped to and from disk (so that you can have more memory than 32 bits worth), and virtual memory pages are being mapped to physical pages as just described.
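A small sketch of how such a mapping is usually organised: the virtual address is split into a virtual page number (which the OS looks up in the page tables) and an offset that is carried over unchanged. This assumes 4 KiB pages and an arbitrary example address.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assume 4 KiB pages */

int main(void)
{
    uint32_t vaddr  = 0x0804A123u;        /* arbitrary example address  */
    uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number        */
    uint32_t offset = vaddr % PAGE_SIZE;  /* offset within that page    */

    /* The OS maps vpn to some physical frame (or to disk); the offset
     * is the same in the virtual page and in the physical frame. */
    printf("vaddr  = 0x%08X\n", vaddr);
    printf("vpn    = %u\n", vpn);
    printf("offset = 0x%03X\n", offset);
    return 0;
}
```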
I really recommend reading the links in this question https://stackoverflow.com/questions/1437914/best-book-on-operating-systems
In operating system design the kernel is almost always mapped to a high virtual memory address, thus gaining control of the upper part of memory. The space left below is for applications running in user space, as described in an excellent way in "Linux 3/1 virtual address split".
What I'd like to know is why this design decision is made, or why the kernel does not use the lower part of memory? This is not really clear to me, or maybe I've overlooked something.
Edit: This question regards virtual addresses and not physical.
Some advantages of/reasons for such a design:
Applications don't need to care about the size and location of the kernel and may pretend they're the only ones in memory, starting at around 0 and spanning upwards, with minimal or no code and data relocations. Applications are hence easier to design and implement, and they may be less likely to have bugs related to memory management.
Applications may use smaller/shorter addresses/pointers and hence save some memory.
In the x86 CPU, 16-bit and 32-bit address spaces start at virtual address 0 and end at around 1 MB (for real and virtual 8086 modes), 16 MB (16-bit protected mode on the i80286+) and 4 GB (32-bit mode, unreal mode). Placing the kernel at lower addresses would reduce the range of addresses available to applications (e.g. a 16-bit app in 32-bit mode or a 32-bit app in 64-bit mode) and/or complicate their memory management. Moving the kernel to the top of the virtual address space generally makes sense on the x86.
There may be other reasons, usually platform-specific. On some platforms there may be little to no difference between the two options. Yet on others the preferred kernel location may be at the lower virtual addresses. Details matter.
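As a tiny sketch of what the high-kernel layout looks like in practice, assuming the classic 32-bit Linux 3/1 split the question links to (the 0xC0000000 boundary mirrors PAGE_OFFSET in that configuration, but it is configurable and this is not kernel code):

```c
#include <stdio.h>
#include <stdint.h>

/* Classic 3/1 split: user space is 0x00000000..0xBFFFFFFF, the kernel
 * is mapped at and above 0xC0000000 in every process's address space. */
#define KERNEL_SPLIT 0xC0000000u

int is_kernel_address(uint32_t vaddr) { return vaddr >= KERNEL_SPLIT; }

int main(void)
{
    printf("0x08048000 -> %s\n", is_kernel_address(0x08048000u) ? "kernel" : "user");
    printf("0xC1000000 -> %s\n", is_kernel_address(0xC1000000u) ? "kernel" : "user");
    return 0;
}
```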
I'm reading "Understanding Linux Kernel". This is the snippet that explains how Linux uses Segmentation which I didn't understand.
Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:
Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses.
One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation.
All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments.
I'm unable to understand 1st and last paragraph.
The 80x86 family of CPUs generate a real address by adding the contents of a CPU register called a segment register to that of the program counter. Thus by changing the segment register contents you can change the physical addresses that the program accesses. Paging does something similar by mapping the same virtual address to different real addresses. Linux uses the latter - the segment registers for Linux processes will always have the same unchanging contents.
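For the real-mode 8086 case the arithmetic is simple enough to show directly: the 16-bit segment value is shifted left by 4 and added to the offset. This is only the classic segment:offset calculation, not how protected-mode descriptors or paging work.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t segment = 0x1234;
    uint16_t offset  = 0x0010;

    /* Real-mode address formation: physical = segment * 16 + offset,
     * giving a 20-bit address.  Changing the segment register changes
     * which physical memory the same offsets refer to. */
    uint32_t physical = ((uint32_t)segment << 4) + offset;

    printf("%04X:%04X -> physical 0x%05X\n", segment, offset, physical);
    return 0;
}
```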
Segmentation and Paging are not at all redundant. The Linux OS fully incorporates demand paging, but it does not use memory segmentation. This gives all tasks a flat, linear, virtual address space of 32/64 bits.
Paging adds on another layer of abstraction to the memory address translation. With paging, linear memory addresses are mapped to pages of memory, instead of being translated directly to physical memory. Since pages can be swapped in and out of physical RAM, paging allows more memory to be allocated than what is physically available. Only pages that are being actively used need to be mapped into physical memory.
An alternative to page swapping is segment swapping, but it is generally much less efficient given that segments are usually larger than pages.
Segmentation of memory is a method of allocating multiple chunks of memory (per task) for different purposes and allowing those chunks to be protected from each other. In Linux a task's code, data, and stack sections are all mapped to a single segment of memory.
The 32-bit processors do not have a mode bit for disabling segmentation, but the same effect can be achieved by mapping the stack, code, and data spaces to the same range of linear addresses. The 32-bit offsets used by 32-bit processor instructions can cover a four-gigabyte linear address space.
Additionally, the Intel documentation states:
A flat model without paging minimally requires a GDT with one code and one data segment descriptor. A null descriptor in the first GDT entry is also required. A flat model with paging may provide code and data descriptors for supervisor mode and another set of code and data descriptors for user mode.
This is the reason for having one pair of CS/DS for kernel-privilege execution (ring 0), and one pair of CS/DS for user-privilege execution (ring 3).
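A hedged sketch of what such a flat-model GDT amounts to, in the spirit of the quoted Intel text: a null descriptor plus code/data descriptors for ring 0 and ring 3, each with base 0 and a 4 GiB limit so that virtual and linear addresses coincide. The struct below is simplified for readability; real descriptors pack these fields into 8 bytes, and the names are illustrative rather than Linux's actual definitions.

```c
#include <stdint.h>

struct seg_desc {
    uint32_t base;   /* segment base address             */
    uint32_t limit;  /* segment limit (here: 4 GiB - 1)  */
    uint8_t  dpl;    /* descriptor privilege level       */
    uint8_t  code;   /* 1 = code segment, 0 = data       */
};

const struct seg_desc flat_gdt[] = {
    { 0, 0x00000000u, 0, 0 },  /* mandatory null descriptor    */
    { 0, 0xFFFFFFFFu, 0, 1 },  /* kernel code segment (ring 0) */
    { 0, 0xFFFFFFFFu, 0, 0 },  /* kernel data segment (ring 0) */
    { 0, 0xFFFFFFFFu, 3, 1 },  /* user code segment   (ring 3) */
    { 0, 0xFFFFFFFFu, 3, 0 },  /* user data segment   (ring 3) */
};
```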
Summary: Segmentation provides a means to isolate and protect sections of memory. Paging provides a means to allocate more memory than what is physically available.
Windows uses the fs segment for local thread storage.
Therefore, Wine has to use it, and the Linux kernel needs to support it.
Modern operating systems (i.e. Linux, other Unixen, Windows NT, etc.) do not use the segmentation facility provided by the x86 processor. Instead, they use a flat 32-bit memory model. Each user-mode process has its own 32-bit virtual address space.
(Naturally the widths are expanded to 64 bits on x86_64 systems)
Intel first added protected-mode segmentation on the 80286 (the 8086 already had real-mode segments), and then paging on the 80386. Unix-like OSes typically use paging for virtual memory.
Anyway, since paging on x86 didn't support execute permissions until relatively recently (the NX bit), OpenWall Linux used segmentation to provide non-executable stack regions, i.e. it set the code segment limit to a lower value than the other segments' limits, and did some emulation to support trampolines on the stack.
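A purely conceptual sketch of the effect of that trick, with made-up limits: instruction fetches go through CS, whose limit is set below the stack region, while reads and writes go through DS/SS, which still cover it, so the stack stays usable as data but not as code.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical, illustrative limits - not OpenWall's actual values. */
#define CS_LIMIT   0xB0000000u  /* lowered code segment limit          */
#define DATA_LIMIT 0xC0000000u  /* data/stack segments still cover it  */

int fetch_allowed(uint32_t vaddr)  { return vaddr < CS_LIMIT; }    /* via CS    */
int access_allowed(uint32_t vaddr) { return vaddr < DATA_LIMIT; }  /* via DS/SS */

int main(void)
{
    uint32_t stack_addr = 0xBFFF0000u;  /* near the top of user space */
    printf("stack readable/writable: %d\n", access_allowed(stack_addr)); /* 1 */
    printf("stack executable       : %d\n", fetch_allowed(stack_addr));  /* 0 */
    return 0;
}
```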