When I was reading the book Operating System Design and Implementation, chapter 2 ("Process Creation"), I came across this passage:
The child's initial address space is a copy of the parent's, but
there are two distinct address spaces involved
This is a bit vague to me. It seems to say they have the same address space, but I believe that is not true.
Can anyone explain this in detail?
An address space is the range of addresses (values) that are visible to a program. For instance, a program's address space can run from 0x00000000 to 0xFFFFFFFF. The child and the parent have the same range of addresses, but, for instance, the address 0x00D543A7 refers to a different memory location in the parent than in the child. The OS (and to some extent the processor) takes care of address translation, so that two logical addresses from two different programs that have the same value map to different physical memory addresses.
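A quick way to see the "two distinct address spaces" in practice, assuming a POSIX system where new processes are created with fork() (a sketch, not anything specific to the book's example):

```python
import os

value = 42
pid = os.fork()      # the child starts with a copy of the parent's address space
if pid == 0:
    value = 99       # modifies only the child's copy
    os._exit(0)

os.waitpid(pid, 0)
# `value` has the same virtual address in both processes, yet the parent's
# copy is untouched: two distinct address spaces behind the same addresses.
assert value == 42
```

In C you could print &value in both branches and see the same virtual address in each process, even though the two processes hold different contents there.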
Related
With compile-time and load-time address binding, the logical address is the same as the physical address. My question is: if logical and physical addresses are identical under compile-time and load-time binding, why do we call it a logical address? Shouldn't the term "logical address" be used only for execution-time binding?
First of all, there are two types of binding:
1) Dynamic/Hardware-Based/Runtime Binding:
Here the MMU (Memory Management Unit) is responsible for translating logical addresses. Each program is assumed to be loaded at logical address zero. When the program starts running, the OS decides where it should reside in the actual physical memory and sets a special register, the base register, to that offset in physical memory. Each logical address is then translated in this manner:
Physical Address = Logical Address + Base Register Value.
*Note that if you look at the assembly code for the program, the addresses are logical; thus they don't change if you recompile.
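A minimal sketch of that rule, with an arbitrary example base value (a real OS chooses the base when it loads the program):

```python
BASE_REGISTER = 0x4000   # set by the OS at load time (example value)

def to_physical(logical):
    """Dynamic binding: the MMU adds the base register to every logical address."""
    return logical + BASE_REGISTER

# The program behaves as if it were loaded at logical address zero:
print(hex(to_physical(0x0000)))  # 0x4000
print(hex(to_physical(0x1234)))  # 0x5234
```

Relocating the process is then just a matter of copying its memory and changing the base register; the logical addresses baked into the code never change.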
2) Static/Software-Based Binding:
At compile time the OS knows where the process will reside in physical memory, so the addresses in the compiled/assembly code are the actual physical addresses. Note that these addresses may change if we recompile.
FINAL NOTE: I believe static binding is not used nowadays, so in practice logical addresses are always distinct from the physical addresses (dynamic binding).
Both are operating-system concepts. When I searched for address binding, I found that it is the mapping of logical addresses to physical addresses in main memory. When I searched for paging, it seemed to describe the same thing. I am confused about the difference between the two. Please help me with this question.
Thanks in advance.
Address binding can map a logical address to physical memory, or to a location on disk. Paging is a specific form of address binding. Pages are blocks of memory, all the same size. To map a logical address to a physical one in main memory or on disk, you first find the page (for example, a 4K block) it is in, and then where it is inside that page.
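For 4K pages, that two-step lookup starts with a divide and a remainder (equivalently, a shift and a mask); a small sketch:

```python
PAGE_SIZE = 4096  # a 4K block, as in the example above

def split(address):
    """Return (page number, offset within that page) for a logical address."""
    return address // PAGE_SIZE, address % PAGE_SIZE

page, offset = split(0x12345)
print(page, offset)  # page 18 (0x12), offset 837 (0x345)
```

The page number is then looked up in a table (or disk map) to find where that page currently lives; the offset is reused unchanged.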
I've been researching (on SO and elsewhere) the relationship between virtual addresses and physical addresses. I would appreciate it if someone could confirm if my understanding of this concept is correct.
The page table is classified as 'virtual space' and contains the virtual addresses of each page. It then maps to the 'physical space', which contains the physical addresses of each page.
A wikipedia diagram to make my explanation clearer:
https://upload.wikimedia.org/wikipedia/commons/3/32/Virtual_address_space_and_physical_address_space_relationship.svg
Is my understanding of this concept correct?
Thank you.
Not entirely correct.
Each program has its own virtual address space. Physically, there is only one address space: the machine's random-access memory. The per-program space is called "virtual" because to the user program it appears as if it has an address space all to itself.
Now, take the instruction mov 0x1234, %eax (AT&T) or MOV EAX, [0x1234] (Intel) as an example:
The CPU sends the virtual address 0x1234 to one of its parts, the MMU.
The MMU obtains the corresponding physical address from the page table.
The CPU retrieves the data from the RAM location the physical address refers to.
The concrete translation process depends heavily on the actual architecture and CPU.
The page table is classified as 'virtual space' and contains the virtual addresses of each page. It then maps to the 'physical space', which contains the physical addresses of each page.
This is not really correct. The page table defines a logical address space consisting of pages. The page table either maps a logical page to the physical page frame that holds it, or indicates that the page frame does not [yet] exist in memory, in which case you have a virtual mapping. A page is VIRTUAL when memory is simulated using disk space.
In the olde days, page tables always established a virtual address space. Now it is becoming increasingly common (e.g. in embedded systems) to use logical address translation without virtual memory (paging to disk). Thus, the terms "virtual memory" and "logical memory" are frequently conflated.
The physical address space exists only to the operating system. The process sees only a logical address space.
That is a bit of an oversimplification because the process becomes the operating system after an exception or interrupt and the kernel operates within a common logical address range. However, the operating system kernel does have to manage physical memory to some degree.
For example, some aspect of the page tables must use physical addresses. If the page tables used all logical addresses, then you'd have a chicken and egg problem for address translation. Various hardware systems address this problem in different ways.
Finally, the diagram you link to is a very poor illustration.
I was going through few of the lectures conducted at UC Berkeley on Virtual Memory
https://www.youtube.com/results?search_query=computer+science+194+-+lecture+14
"Computer Science 194 - Lecture 13_ Memory Management_ Mechanisms on various architectures, NUMA" and "Computer Science 194 - Lecture 14_ Virtual Memory Management, Swapping, Page Cache"
It's an excellent lecture. However, I am slightly confused on one point.
The lecture explains how segmentation and paging have been combined for addressing VM.
It goes on to explain the structure that current systems use for a VM address.
It also mentions that the virtual space visible to a process is private and that the address range for each process is the same. Each process views its address space as starting at 0 and extending to 4G, with different segment areas within this 4G space.
Questions :
Now, if the address range for each VM space is the same, how is it that two processes referring to the highest-level lookup table (the PageTblPtr), using the segment number as the index, are uniquely able to identify a row in that table? The segment address/number for each process may be the same; say both process A and process B have their data segment starting at address 'x' within their VM space.
Does that also mean that there could be as many as 6 entries in the PageTblPtr for a process, one for each of the 6 possible segments (CS, DS, etc.)?
Where is the PageTblPtr maintained?
Best Regards,
Varun
I'm not going to watch an 83-minute video lecture just to get the exact definitions of terms like PageTblPtr, but I'll try to provide some general answers:
Modern operating systems don't use segmentation. It's just a big flat address space, and all the segment selectors are set to encompass the whole thing. I don't think x86 processors even support segmentation when running in 64-bit mode.
Each process has its own page table, which defines the mappings for that process's virtual address space. Two processes can have different data at the same virtual address because their respective page tables point to different physical pages for the same virtual page.
Page tables are owned and managed by the OS kernel. When the kernel performs a task switch (changing which process it's running), one of the things it must do is activate the new process's page table instead of the old.
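That per-process mapping can be sketched with two toy page tables; all the frame numbers here are invented purely for illustration:

```python
PAGE_SIZE = 4096

# Toy page tables: virtual page number -> physical frame number.
page_table_A = {0: 7, 1: 3}
page_table_B = {0: 2, 1: 9}

def translate(page_table, vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# The same virtual address in the two processes names different physical memory:
print(hex(translate(page_table_A, 0x0123)))  # 0x7123
print(hex(translate(page_table_B, 0x0123)))  # 0x2123
```

A task switch, in this picture, is just changing which table the MMU consults.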
I'm trying to understand how does an operating system work when we want to assign some value to a particular virtual memory address.
My first question concerns whether the MMU handles everything between the CPU and the RAM. Is this true? From what one can read from Wikipedia, I'd say so:
A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware component responsible for handling accesses to memory requested by the CPU.
If that is the case, how can one tell the MMU that I want to get 8 bytes, 64 bytes, or 128 bytes, for example? What about writing?
If that is not the case, I'm guessing the MMU just translates virtual addresses to physical ones?
What happens when the MMU detects what we call a page fault? I guess it has to notify the CPU so the CPU can load the page off disk, or is the MMU able to do this itself?
Thanks
Devoured Elysium,
I'll attempt to answer your questions one by one but note, it might be a good idea to get your hands on a textbook for an OS course or an introductory computer architecture course.
The MMU consists of some hardware logic and state whose purpose is, indeed, to produce a physical address and provide/receive data to and from the memory controller. Actually, the job of memory translation is one that is taken care of by cooperating hardware and software (OS) mechanisms (at least in modern PCs). Once the physical address is obtained, the CPU has essentially done its job and now sends the address out on a bus which is at some point connected to the actual memory chips. In many systems this bus is called the Front-Side Bus (FSB), which is in turn connected to a memory controller. This controller takes the physical address supplied by the CPU and uses it to interact with the DRAM chips, and ultimately extract the bits in the correct rows and columns of the memory array. The data is then sent back to the CPU, which can now operate on it. Note that I'm not including caching in this description.
So no, the MMU does not interact directly with RAM, which I assume you are using to mean the physical DRAM chips. And you cannot tell the MMU that you want 8 bytes, or 24 bytes, or whatever, you can only supply it with an address. How many bytes that gets you depends on the machine you're on and whether it's byte-addressable or word-addressable.
Your last question urges me to remind you: the MMU is actually a part of the CPU--it sits on the same silicon die (although this was not always the case).
Now, let's take your example with the page fault. Suppose our user-level application wants to, as you said, set someAddress = 10. I'll take it in steps. Let's assume someAddress is 0xDEADBEEF, and let's ignore caches for now.
1) The application issues a store instruction to 0xDEADBEEF, which in x86 (AT&T syntax) might look something like
mov %eax, 0xDEADBEEF
where 10 is the value in the eax register.
2) 0xDEADBEEF in this case is a virtual address, which must be translated. Most of the time, the virtual to physical address translation will be available in a hardware structure called the Translation Lookaside Buffer (TLB), which will provide this translation to us very fast. Typically, it can do so in one clock cycle. If the translation is in the TLB, called a TLB hit, execution can continue immediately (i.e. the physical address corresponding to 0xDEADBEEF and the value 10 are sent out to the memory controller to be written).
3) Let's suppose, though, that the translation wasn't available in the TLB (a TLB miss). Then we must find the translation in the page tables, which are data structures in memory whose layout is defined by the hardware and which are managed by the OS. They simply contain entries that map a virtual address to a physical one (more accurately, a virtual page number to a physical page number). But these structures also reside in memory, and so must have addresses! The hardware contains a special register called cr3 which holds the physical address of the current page table. We can index into this page table using our virtual address, so the hardware takes the value in cr3, computes an address by adding an offset, and goes off to memory to fetch the page table entry (PTE). This PTE will (hopefully) contain the physical address corresponding to 0xDEADBEEF, in which case we put this mapping in the TLB (so we don't have to walk the page table again) and continue on our way.
4) But oh no! What if there is no PTE in the page tables for 0xDEADBEEF? This is a page fault, and this is where the Operating System comes into play. The PTE we got out of the page table existed, as in it was (let's assume) a valid memory address to access, but the OS had not created a VA->PA mapping for it yet, so it would have had a bit set to indicate that it is invalid. The hardware is programmed in such a way that when it sees this invalid bit upon an access, it generates an exception, in this case a page fault.
5) The exception causes the hardware to invoke the OS by jumping to a well known location--a piece of code called a handler. There can be many exception handlers, and a page fault handler is one of them. The page fault handler will know the address that caused the fault because it's stored in a register somewhere, and so will create a new mapping for our virtual address 0xDEADBEEF. It will do so by allocating a free page of physical memory and then saying "all virtual addresses between VA x and VA y will map to some address within this newly allocated page of physical memory". 0xDEADBEEF will be somewhere in that range, so the mapping is now securely in the page tables, and we can restart the instruction that caused the page fault (the mov).
6) Now, when we go through the page tables again, we will find a mapping and the PTE we pull out will have a nice physical address, the one we want to store to. We provide this with the value 10 to the memory controller and we're done!
Caches will change this game quite a bit, but I hope this serves to illustrate how paging works. Again, it would benefit you greatly to check out some OS/Computer Architecture books. I hope this was clear.
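Steps 1-6 above can be condensed into a toy simulation; everything here is drastically simplified (real TLBs and page tables are hardware-defined structures, and a real fault handler does far more than allocate a frame):

```python
PAGE_SIZE = 4096

tlb = {}             # virtual page number -> frame number (the fast cache)
page_table = {}      # vpn -> frame, managed by the "OS"
next_free_frame = [0]

def page_fault_handler(vpn):
    """Step 5: the OS allocates a free physical frame and installs a mapping."""
    frame = next_free_frame[0]
    next_free_frame[0] += 1
    page_table[vpn] = frame

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                        # step 2: TLB hit, done immediately
        return tlb[vpn] * PAGE_SIZE + offset
    if vpn not in page_table:             # step 4: no valid PTE -> page fault
        page_fault_handler(vpn)
    tlb[vpn] = page_table[vpn]            # step 3: refill the TLB from the PTE
    return tlb[vpn] * PAGE_SIZE + offset  # step 6: restart the access

first = translate(0xDEADBEEF)             # faults, gets frame 0: physical 0xEEF
assert translate(0xDEADBEEF) == first     # second access is a TLB hit
```

The offset within the page (0xEEF here) passes through translation unchanged; only the page number is mapped.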
There are data structures that describe which virtual addresses correspond to which physical addresses. The OS creates and manages these data structures, and the CPU uses them to translate virtual addresses into physical addresses.
For example, the OS might use these data structures to say "virtual addresses in the range from 0x00000000 to 0x00000FFF correspond to physical addresses 0x12340000 to 0x12340FFF"; and if software tries to read 4 bytes from the virtual address 0x00000468, then the CPU will actually read 4 bytes from the physical address 0x12340468.
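The arithmetic of that example can be checked in a couple of lines (a sketch of the range mapping only, not of any real page-table format):

```python
# The example region above: one 4 KiB range of virtual addresses.
VIRT_BASE, PHYS_BASE, SIZE = 0x00000000, 0x12340000, 0x1000

def translate(vaddr):
    assert VIRT_BASE <= vaddr < VIRT_BASE + SIZE, "address not mapped"
    return PHYS_BASE + (vaddr - VIRT_BASE)

print(hex(translate(0x00000468)))  # 0x12340468
```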
Typically everything is affected by the virtual->physical translation (except when the CPU is accessing the data structures that describe the translation itself). Also, there is usually some sort of translation cache built into the CPU to help reduce the overhead involved.