Difference between logical memory and physical memory

While studying the concept of paging in memory management, I came across the terms "logical memory" and "physical memory". Can anyone please tell me the difference between the two?
Does physical memory = hard disk
and logical memory = RAM?

There are three related concepts here:
Physical -- An actual device
Logical -- A translation to a physical device
Virtual -- A simulation of a physical device
The term "logical memory" is rarely used because we normally use the term "virtual memory" to cover both the virtual and logical translations of memory.
In an address translation, we have a page index and a byte index into that page.
The page index of the Nth page in the process could be called a logical address. The operating system redirects that ordinal page number to some arbitrary physical address.
The reason this is rarely called a logical address is that the page may itself be simulated using paging, becoming a virtual address.
Address translation is a combination of logical and virtual. The normal usage is to just call the whole thing "virtual memory."
We can imagine that in the future, as memory grows, paging will go away entirely. Instead of having virtual memory systems we will have logical memory systems.
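To make the page index / byte index split mentioned above concrete, here is a minimal sketch in C (assuming a hypothetical 4 KiB page size; real page sizes vary by architecture):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE  4096u   /* assumed page size: 4 KiB */
    #define PAGE_SHIFT 12      /* log2(PAGE_SIZE) */

    int main(void) {
        uint32_t logical = 0x1234;                     /* example logical address */
        uint32_t page    = logical >> PAGE_SHIFT;      /* page index  -> 1     */
        uint32_t offset  = logical & (PAGE_SIZE - 1);  /* byte index  -> 0x234 */
        printf("page %u, offset 0x%x\n", page, offset);
        return 0;
    }

The operating system's job is then to map that page index to whichever physical frame it chose for the page; the offset is carried over unchanged.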

Not a lot of clarity here thus far, here goes:
Physical Memory is what the CPU addresses on its address bus. It's the lowest level software can get to. Physical memory is organized as a sequence of 8-bit bytes, each with a physical address.
Every application having to manage its memory at a physical level is obviously not feasible. So, since the early days, CPUs introduced abstractions of memory known collectively as "Memory Management." These are all optional, but ubiquitous, CPU features managed by your kernel:
Linear Memory is what user-level programs address in their code. It's seen as a contiguous address space, but behind the scenes each linear address maps to a physical address. This allows user-level programs to address memory in a common way and leaves the management of physical memory to the kernel.
However, it's not quite that simple. User-level programs address linear memory using different memory models. One you may have heard of is the segmented memory model. Under this model, programs address memory using logical addresses. Each logical address refers to a table entry which maps it into the linear address space. In this way, the OS can break up an application into different parts of memory as a security feature (details out of scope here).
In Intel 64-bit (IA-32e, 64-bit submode), segmented memory is never used, and instead every program can address all 2^64 bytes of linear address space using a flat memory model. As the name implies, all of linear memory is available at a byte-accessible level. This is the most straightforward.
Finally we get to Virtual Memory. This is a feature of the CPU facilitated by the MMU, invisible to user-level programs and managed by the kernel. It allows linear (virtual) addresses to be mapped to physical addresses, organized as tables of pages ("page tables"). When virtual memory ("paging") is enabled, these tables can be loaded into the CPU, causing memory addresses referenced by a program to be translated to physical addresses transparently. Page tables are swapped in and out on the fly by the kernel as different programs run. This allows for optimization and security in process/memory management (details out of scope here).
Keep in mind, linear and virtual memory are independent features which can work in conjunction. If paging is disabled, linear addresses map one-to-one to physical addresses. When paging is enabled, linear addresses are translated through the page tables to physical addresses.
Notes:
This is all linux/x86 specific but the same concepts apply almost everywhere.
There are a ton of details I glossed over
If you want to know more, read The Intel® 64 and IA-32 Architectures Software Developer Manual, from where I plagiarized most of this
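A hands-on way to see the virtual-to-physical mapping described above: on Linux the kernel exposes the per-process translation (read-only) through /proc/self/pagemap. The following is a rough sketch, not production code; note that on recent kernels the frame number reads back as 0 unless you run it as root:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    int main(void) {
        long page_size = sysconf(_SC_PAGESIZE);
        int x = 42;                              /* some object whose page we look up */
        uintptr_t vaddr = (uintptr_t)&x;

        FILE *f = fopen("/proc/self/pagemap", "rb");
        if (!f) { perror("pagemap"); return 1; }

        uint64_t entry;                          /* one 64-bit entry per virtual page */
        if (fseek(f, (long)(vaddr / page_size) * 8, SEEK_SET) != 0 ||
            fread(&entry, sizeof entry, 1, f) != 1) {
            perror("read");
            fclose(f);
            return 1;
        }
        fclose(f);

        if (entry & (1ULL << 63)) {              /* bit 63: page is present in RAM */
            uint64_t pfn = entry & ((1ULL << 55) - 1);  /* bits 0-54: page frame number */
            printf("virtual %p -> physical frame %llu\n",
                   (void *)vaddr, (unsigned long long)pfn);
        } else {
            printf("page not present (swapped out, or PFN hidden without root)\n");
        }
        return 0;
    }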

I'd like to add a simple answer here.
Physical Memory: This is the memory that is actually present, and every process needs space here to execute its code.
Logical Memory:
To a user program the memory appears contiguous. Suppose a program needs 100 MB of space in memory. To this program, the virtual address space / logical address space starts from 0 and continues up to some finite number. These addresses are generated by the CPU, and the MMU then maps each virtual address to a real physical address through a page table or whatever other way the mapping is implemented.
Please correct me or add some more content here. Thanks !
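To add a little to this: once the address is split into (page number, offset), the page table supplies the frame. A toy, purely illustrative lookup (assuming 4 KiB pages and a made-up page table):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* made-up page table for one process: page_table[page] = physical frame */
    static const uint32_t page_table[] = { 7, 3, 9, 0 };

    static uint32_t translate(uint32_t logical) {
        uint32_t page   = logical / PAGE_SIZE;   /* which page            */
        uint32_t offset = logical % PAGE_SIZE;   /* where inside the page */
        uint32_t frame  = page_table[page];      /* real MMUs walk this in hardware */
        return frame * PAGE_SIZE + offset;       /* physical address      */
    }

    int main(void) {
        printf("logical 0x1234 -> physical 0x%x\n", translate(0x1234));  /* 0x3234 */
        return 0;
    }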

Physical memory is RAM; it is the actual main memory. A logical address is the address generated by the CPU. In paging, a logical address is mapped to a physical address with the help of page tables. A logical address consists of a page number and an offset.

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit—that is, the one loaded into the memory-address register of the memory—is commonly referred to as a physical address.

The physical address is the actual address of the frame where each page will be placed, whereas the logical address is the address generated by the CPU for each page.
What exactly is a frame?
Processes are brought from secondary memory and stored in main memory using the paging technique.
Processes are kept in secondary memory as non-contiguous pages, which means they are stored in scattered locations.
Those non-contiguous pages are loaded into frames of main memory by the operating system's paging mechanism.
The operating system divides main memory into frames of equal size (the frame size matches the page size), and pages retrieved from secondary memory for different processes can be stored there concurrently.

Related

What is the real use of logical addresses?

This is what I understood of logical addresses :
Logical addresses are used so that data on the physical memory do not get corrupted. By the use of logical addresses, the processes wont be able to access the physical memory directly, thereby ensuring that it cannot store data on already accessed physical memory locations and hence protecting data integrity.
I have a doubt whether it was really necessary to use logical addresses. The integrity of the data on the physical memory could have been preserved by using an algorithm or such which do not allow processes to access or modify memory locations which were already accessed by other processes.
"The integrity of the data on the physical memory could have been preserved by using an algorithm or such which do not allow processes to access or modify memory locations which were already accessed by other processes."
Short answer: it is practically impossible to devise an efficient algorithm of that kind that matches the performance of logical addressing.
The issue with such an algorithm is: how are you going to intercept each process's memory accesses? Without intercepting memory accesses, it is impossible to check whether a process has the privilege to access a certain memory region. If we really want to implement this algorithm, there are ways to intercept memory accesses without using the logical addresses provided by the MMU (memory management unit) on modern CPUs (assume you have a CPU without an MMU). However, those methods will not be as efficient as using an MMU. If your CPU does have an MMU, then although logical address translation is unavoidable, you could set up a one-to-one mapping to physical memory.
One way to intercept memory accesses without an MMU is to insert a kernel trap instruction before each memory access instruction in a program. Since we cannot trust user-level programs, this job cannot be delegated to a compiler. Thus, you could write an OS which does this job before it loads a program into memory: the OS scans through the binary of your program and inserts a kernel trap instruction before each memory access. By doing so, the kernel can inspect whether a memory access should be granted. However, this approach degrades your system's performance a lot, as every memory access, legal or not, traps into the kernel. And trapping into the kernel involves a context switch, which takes a lot of CPU cycles.
Can we do better? What about doing a static analysis of our programs' memory accesses before loading them into memory, so we only insert traps before illegal memory accesses? However, processes have no predefined execution order. Say you have programs A and B, and they both try to access the same memory region. Then which one should get it under our static analysis? We could randomly assign it to one of them, say B. But then how do we know when B will be done with this memory so we can give it to A and let it proceed? Say B uses this region to hold a global variable which is accessed multiple times throughout its life cycle. Do we wait until B completes before giving this region to A? What if B never ends?
Furthermore, a static analysis of memory accesses would be impossible in the presence of dynamic memory allocation. If either program A or B tries to allocate a memory region whose size depends on user input, then neither the OS nor our static analysis tool can know ahead of time where or how big the region will be, and thus the analysis cannot be done at all.
Thus, we have to fall back to trapping on every memory access and determining at runtime whether the access is legal. Sounds familiar? This is exactly the function of the MMU and logical addresses. With logical addresses, however, a trap is incurred if and only if an illegal access actually happens, rather than on every memory access.
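To see why intercepting every access in software is so costly, imagine (purely hypothetically; the names below are made up) that every load had to go through a check routine like this instead of being a single instruction:

    #include <stdint.h>
    #include <stdlib.h>

    /* hypothetical table of regions this process may touch */
    struct region { uintptr_t start, end; };
    static struct region allowed[16];
    static int n_allowed;

    /* every single load would be funneled through something like this,
       paying for a loop and a call (or worse, a kernel trap) each time */
    static uint8_t checked_read(uintptr_t addr) {
        for (int i = 0; i < n_allowed; i++)
            if (addr >= allowed[i].start && addr < allowed[i].end)
                return *(uint8_t *)addr;
        abort();   /* illegal access: in the proposal above, trap into the kernel */
    }

    int main(void) {
        static uint8_t buf[4];
        allowed[n_allowed++] =
            (struct region){ (uintptr_t)buf, (uintptr_t)(buf + sizeof buf) };
        return checked_read((uintptr_t)buf);   /* allowed; anything outside buf aborts */
    }

With an MMU the equivalent check happens in hardware on every access, and the kernel is only involved when the check fails (a page fault).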
Logical memory is presented by the OS to programs as if they were using physical memory. The extra layer (logical addresses) is needed for data-integrity purposes. You can think of logical addresses as the OS's own language for addresses: without this mapping, the OS would have no way to express which "actual" addresses are allowed to a given program. The logical address mapping removes this ambiguity, because the OS knows which logical address maps to which physical address and whether that physical location is allowed to that program. It performs the integrity checks on logical addresses rather than on physical memory, because it can freely change and manipulate the logical mapping, whereas doing the same directly on physical memory would affect the processes already running in that memory.
I would also like to mention that the base register and limit register are loaded by executing privileged instructions; privileged instructions are executed in kernel mode, and only the operating system runs in kernel mode, so user programs cannot change these registers directly. I hope I helped a little :)
There are some things that you need to understand.
First of all, once address translation is in effect, a program cannot reach physical memory directly. To compute a physical address, the CPU needs a logical address: the logical address is what gets translated into the physical address. So this is the basic need for logical addresses when accessing physical memory; without a logical address you cannot access it. This translation is necessary. If there were a system which did not use virtual/logical addresses, that system would be highly vulnerable to an attacker, who could access physical memory directly and manipulate useful data at any location.
Second, when a process runs, the CPU generates logical addresses for it once it is loaded into main memory. The purpose of logical addresses here is memory management: physical memory is limited compared to the total size of the processes, so memory has to be relocated and reused to obtain optimum efficiency. The MMU (Memory Management Unit) comes into play here. The physical address is computed by the MMU from the logical address. So logical addresses are generated on behalf of processes, and the MMU accesses physical memory based on those logical addresses.
This example will make it clear.
Suppose a piece of data lives at logical address 50 inside the process, and the process is currently loaded at physical address 100, so the base register holds 100. To reach the data, the MMU adds the base to the logical address: 100 + 50 = 150. If the OS later moves the process to physical address 300, only the base register changes (to 300); the program still uses logical address 50, and the MMU now produces 300 + 50 = 350. No matter how many times the data moves around in physical memory, the change is absorbed by the relocation registers, so the same logical address keeps giving access to the data wherever it physically resides.
I hope it helps.
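A minimal sketch of the base/limit relocation described in the example (illustrative only; the variable names and numbers are made up, and real systems use paging rather than a single base register):

    #include <stdio.h>
    #include <stdint.h>

    /* per-process relocation registers, loaded by the OS with privileged instructions */
    static uint32_t base_reg  = 100;   /* where the process currently sits in physical memory */
    static uint32_t limit_reg = 500;   /* size of the process's region */

    static uint32_t translate(uint32_t logical) {
        if (logical >= limit_reg) {                /* protection check */
            fprintf(stderr, "protection fault\n");
            return 0;
        }
        return base_reg + logical;                 /* relocation */
    }

    int main(void) {
        printf("logical 50 -> physical %u\n", translate(50));   /* 150 */
        base_reg = 300;          /* the OS moves the process; the program is unchanged */
        printf("logical 50 -> physical %u\n", translate(50));   /* 350 */
        return 0;
    }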

What is the purpose of logical addresses in an operating system? Why are they generated?

I want to know why the CPU generates logical addresses and then maps them to physical addresses with the help of the memory manager. Why do we need them?
Virtual addresses are required to run several programs on a computer.
Assume there is no virtual address mechanism. Compilers and link editors generate a memory layout with a given pattern. Instructions (the text segment) are positioned in memory starting from address 0. Then come the segments for initialized and uninitialized data (data and bss) and the dynamic memory (heap and stack). (See for instance https://www.geeksforgeeks.org/memory-layout-of-c-program/ if you have no idea about memory layout.)
When you run this program, it will occupy part of the memory, which will no longer be available to other processes, in a completely unpredictable way. For instance, addresses 0 to 1M may be occupied, or 0 to 16k, or 0 to 128M; it depends entirely on the program's characteristics.
If you now want to run a second program concurrently, where will its instructions and data go in memory? Memory addresses are generated by the compiler, which obviously does not know at compile time which memory will be free. And remember, memory addresses (for instructions or data) are essentially hard-coded in the program code.
A second problem happens when you want to run many processes and you run out of memory. In this situation, some processes are swapped out to disk and restored later. But when restored, a process will go wherever memory is free; again, this is unpredictable and would require modifying the internal addresses of the program.
Virtual memory simplifies all these tasks. When running a process (or restoring it after a swap), the system looks at free memory and fills page tables to create a mapping between virtual addresses (manipulated by the processor and always unchanged) and physical addresses (that depends on the free memory on the computer at a given time).
Logical address translation serves several functions.
One of these is to support the common mapping of a system address space to all processes. This makes it possible for any process to handle interrupt because the system addresses needed to handle interrupts are always in the same place, regardless of the process.
The logical translation system also handles page protection. This makes it possible to protect the common system address space from individual users messing with it. It also allows protecting parts of the user address space, such as making code and data read-only, to check for errors.
Logical translation is also a prerequisite for implementing virtual memory. In a virtual memory system, each process's address space is constructed in secondary storage (i.e. disk). Pages within the address space are brought into memory as needed. This kind of system would be impossible to implement if processes with large address spaces had to be mapped contiguously within physical memory.

Location of OS Kernel Data

I'm a beginner with operating systems, and I had a question about the OS Kernel.
I'm used to the standard notion of each user process having a virtual address space of stack, heap, data, and code. My question is that when a context switch occurs to the OS Kernel, is the code run in the kernel treated as a process with a stack, heap, data, and code?
I know there is a dedicated kernel stack, which the user program can't access. Is this located in the user program address space?
I know the OS needs to maintain some data structures in order to do its job, like the process control block. Where are these data structures located? Are they in user-program address spaces? Are they in some dedicated segment of memory for kernel data structures? Are they scattered all around physical memory wherever there is space?
Finally, I've seen some diagrams where OS code is located in the top portion of a user program's address space. Is the entire OS kernel located here? If not, where else does the OS kernel's code reside?
Thanks for your help!
Yes, the kernel has its own stack, heap, data structures, and code separate from those of each user process.
The code running in the kernel isn't treated as a "process" per se. The code is privileged meaning that it can modify any data in the kernel, set privileged bits in processor registers, send interrupts, interact with devices, execute privileged instructions, etc. It's not restricted like the code in a user process.
All of kernel memory and user process memory is stored in physical memory in the computer (or perhaps on disk if data has been swapped from memory).
The key to answering the rest of your questions is to understand the difference between physical memory and virtual memory. Remember that if you use a virtual memory address to access data, that virtual address is translated to a physical address before the data is fetched at the determined physical address.
Each process has its own virtual address space. This means that some virtual address a in one process can map to a different physical address than the same virtual address a in another process. Virtual memory has many important uses, but I'm not going to go into them here. The important point is that virtual memory enforces memory isolation. This means that process A cannot access the memory of process B. All of process A's virtual addresses map to some set of physical addresses and all of process B's virtual addresses map to a different set of physical addresses. As long as the two sets of physical addresses do not overlap, the processes cannot see or modify the memory of each other. User processes cannot access physical memory addresses directly - they can only make memory accesses with virtual addresses.
There are times when two processes may have some virtual addresses that do map to the same physical addresses, such as if they both mmap the same file, both use a shared library, etc.
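As a concrete, Linux-flavored aside, here is one way two processes end up with virtual pages backed by the same physical page. This sketch uses an anonymous MAP_SHARED mapping across fork(), a close cousin of the file-mmap case mentioned above:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* an anonymous shared mapping: after fork(), the parent's and the child's
           virtual pages for this mapping refer to the same physical page */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) return 1;
        *shared = 0;

        if (fork() == 0) {            /* child writes through its own virtual address */
            *shared = 42;
            _exit(0);
        }
        wait(NULL);
        printf("parent reads %d\n", *shared);   /* 42: both touched the same physical memory */
        munmap(shared, sizeof(int));
        return 0;
    }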
So now to answer your question about kernel address spaces and user address spaces.
The kernel can have a separate virtual address space from each user process. This is as simple as changing the page directory pointer in the cr3 register (in an x86 processor) on each context switch. Since the kernel has a different virtual address space, no user process can access kernel memory as long as none of the kernel's virtual memory addresses map to the same physical addresses as any of the virtual addresses in any address space for a user process.
This can lead to a minor problem. If a user process makes a system call and passes a pointer as a parameter (e.g. a pointer to a buffer in the read system call), how does the kernel know which physical address corresponds to that buffer? The virtual address in the pointer maps to a different physical address in kernel space, so the kernel cannot just dereference the pointer. There are two options:
The kernel can traverse the user process page directory/tables to find the physical address that corresponds to the buffer. The kernel can then read/write from/to that physical address.
The kernel can instead include all of its mappings in the user address space (at the top of the user address space, as you mentioned). Now, when the kernel receives a pointer through the system call, it can just access the pointer directly since it is sharing the address space with the process.
Kernels generally go with the second option, since it's more convenient and more efficient. Option 1 is less efficient because each time a context switch occurs the address space changes, so the TLB needs to be flushed and you lose all of your cached mappings. I'm simplifying things a bit here, since kernels have started doing things differently after the discovery of the Meltdown vulnerability.
This leads to another problem. If the kernel includes its mappings in the user process address space, what stops the user process from accessing kernel memory? The kernel sets protection bits in the page table that cause the processor to prohibit the user process from accessing the virtual addresses that map to physical addresses that contain kernel memory.
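This is also why, on Linux for example, kernel code does not dereference user pointers directly but goes through helpers like copy_from_user()/copy_to_user(), which validate the pointer and survive a fault on an unmapped page. A very rough sketch of what a write()-style handler does under option 2 (kernel-flavored illustration, not the actual Linux implementation; a real handler would also bound count):

    #include <linux/uaccess.h>   /* copy_from_user() */
    #include <linux/slab.h>      /* kmalloc(), kfree() */

    /* sketch of a write(fd, buf, count) handler; buf is a user-space pointer */
    long sketch_sys_write(int fd, const char __user *buf, size_t count)
    {
        char *kbuf = kmalloc(count, GFP_KERNEL);
        if (!kbuf)
            return -ENOMEM;

        /* copy_from_user() checks that buf lies in user space and returns the
           number of bytes it could NOT copy (e.g. because a page was unmapped) */
        if (copy_from_user(kbuf, buf, count)) {
            kfree(kbuf);
            return -EFAULT;
        }

        /* ... hand kbuf to the file system or device driver ... */

        kfree(kbuf);
        return count;
    }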
I'm used to the standard notion of each user process having a virtual address space of stack, heap, data, and code. My question is that when a context switch occurs to the OS Kernel, is the code run in the kernel treated as a process with a stack, heap, data, and code?
On every modern operating system I am aware of, there is NEVER a context switch to the kernel. The kernel executes in the context of a process (some systems use the fiction of a reduced process context).
The "kernel" executes when a process enters kernel mode through an exception or an interrupt.
Each process (thread) normally has its own kernel-mode stack, used after an exception. Usually there is a single interrupt stack for each processor.
https://books.google.com/books?id=FSX5qUthRL8C&pg=PA322&lpg=PA322&dq=vax+%22interrupt+stack%22&source=bl&ots=CIaxuaGXWY&sig=S-YsXBR5_kY7hYb6F2pLGjn5pn4&hl=en&sa=X&ved=2ahUKEwjrgvyX997fAhXhdd8KHdT7B8sQ6AEwCHoECAEQAQ#v=onepage&q=vax%20%22interrupt%20stack%22&f=false
I know there is a dedicated kernel stack, which the user program can't access. Is this located in the user program address space?
Each process has its own kernel stack. It is often in the user space with protected memory but could be in the system space. The interrupt stack is always in the system space.
Where are these data structures located? Are they in user-program address spaces?
They are generally in the system space. However, some systems do put some structures in the user space in protected memory.
Are they in some dedicated segment of memory for kernel data structures?
If they are in the user space, they are generally for an access mode more privileged than user mode and less privileged than kernel mode.
Are they scattered all around physical memory wherever there is space?
Things can be spread over physical memory pretty much at random.
The data structures in question are usually regular C structures situated in the RAM allotted to the kernel by the kernel allocator.
They are not usually accessible from regular processes because of the normal mechanisms for memory protection and paging (virtual memory).
A kind of exception to this are kernel threads, which have no userspace address space, so the code they execute is always kernel code working with kernel-space data structures, hence with the isolated kernel memory.
Now for the interesting part: 64-bit Linux uses a thing called the Direct Map for memory organization, which means that the full amount of available physical memory is mapped in the kernel page tables as one contiguous chunk. This is not true for 32-bit, where HIGHMEM was used to work around the limitation of 4 GB address spaces.
Since the kernel has all the physical RAM visible and available to its own allocator, the kernel data structures in question can be situated pretty randomly with respect to the physical addresses
You can google these terms to gain additional information:
PTI (page table isolation)
__copy_from_user (esp. on esoteric architectures where this function is not just a bitwise copy)
EPT (Intel nested paging in virtual machines)

Where are Logical addresses located?

I know that a user program generates logical addresses. Suppose there is a small code snippet in C. When an address is printed, the addresses are virtual addresses. My question is: where are those addresses fetched from? Where exactly do the allocated values and variables stay? In main memory or secondary memory? If in main memory, then why is there a physical address?
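For reference, the kind of snippet the question describes might look like this; every address it prints is a virtual (logical) address in the process's own address space, not a physical one:

    #include <stdio.h>
    #include <stdlib.h>

    int global = 1;                        /* data segment */

    int main(void) {
        int local = 2;                     /* stack */
        int *heap = malloc(sizeof *heap);  /* heap */

        printf("code  : %p\n", (void *)main);
        printf("global: %p\n", (void *)&global);
        printf("stack : %p\n", (void *)&local);
        printf("heap  : %p\n", (void *)heap);

        free(heap);
        return 0;
    }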
User mode programs only see logical addresses. Only the operating system (kernel mode) sees physical memory.
My question is where are those addresses fetched from?
Those are the logical addresses assigned by the program loader and linker.
where exactly do the allocated values and variables stay?At main memory or secondary memory?
In a virtual memory system, it may be in main memory or secondary storage.
If main memory then why there is physical address?
It is a logical address that is mapped to a physical address using page tables.
I am kind of new to computer architecture and operating systems, but I will try to answer as much as I can. As far as I understand logical addresses (which I still have trouble understanding: where are they fetched from, and where are they stored? I mean, these addresses (numbers) have to be stored somewhere, otherwise the CPU couldn't generate them by itself, right?), these addresses are assigned by the CPU, and how depends on the CPU architecture. Each process is assigned its own virtual/logical address space, and the logical addresses are translated to physical addresses by the CPU's Memory Management Unit (MMU).
Where exactly do the allocated values and variables stay? As user3344003 said, they may be in main memory or in secondary storage.
If main memory, then why is there a physical address? The reason lies in the concept of virtual memory. Each process has its own virtual address space and a page table. The process's logical addresses are mapped through this page table to physical memory (RAM). Whatever the logical address is, it gets mapped to a physical address. If physical memory gets full, the OS evicts some of the less-used or unused processes (or their pages) to secondary storage and brings the needed ones into RAM. That way multiple processes can run at the same time, and every process can assume it has all the space in RAM to itself. Without virtual memory, physical memory would fill up, processes might crash, and that might bring down the OS as well.
Hope it helps. I am still learning; if my understanding of logical addresses and virtual memory is wrong, please comment.

What's the difference between "virtual memory" and "swap space"?

Can anyone please make clear to me what the difference is between virtual memory and swap space?
And why do we say that for a 32-bit machine the maximum virtual memory accessible is 4 GB only?
There's an excellent explanation of virtual memory over on superuser.
Simply put, virtual memory is a combination of RAM and disk space that running processes can use.
Swap space is the portion of virtual memory that is on the hard disk, used when RAM is full.
As for why a 32-bit CPU is limited to 4 GB of virtual memory, it's addressed well here:
By definition, a 32-bit processor uses 32 bits to refer to the location of each byte of memory. 2^32 = 4,294,967,296, which means a memory address that's 32 bits long can only refer to about 4.3 billion unique locations (i.e. 4 GB).
There is some confusion regarding the term Virtual Memory, and it actually refers to the following two very different concepts
Using disk pages to extend the conceptual amount of physical memory a computer has - The correct term for this is actually Paging
An abstraction used by various OS/CPUs to create the illusion of each process running in a separate contiguous address space.
Swap space, OTOH, is the name of the portion of disk used to store additional RAM pages when not in use.
An important realization to make is that the former is transparently possible due to the hardware and OS support of the latter.
In order to make better sense of all this, you should consider how the "Virtual Memory" (as in definition 2) is supported by the CPU and OS.
Suppose you have a 32-bit pointer (64-bit pointers are similar, but use slightly different mechanisms). Once "Virtual Memory" has been enabled, the processor considers this pointer to be made up of three parts.
The highest 10 bits are a Page Directory Entry
The following 10 bits are a Page Table Entry
The last 12 bits make up the Page Offset
Now, when the CPU tries to access the contents of a pointer, it first consults the Page Directory, a table consisting of 1024 entries (in the x86 architecture its location is pointed to by the CR3 register). The 10-bit Page Directory Entry index selects an entry in this table, which points to the physical location of a Page Table. This, in turn, is another table of 1024 entries, each of which holds a pointer to a page in physical memory plus several important control bits. (We'll get back to these later.) Once a page has been found, the last 12 bits are used to find the address within that page.
There are many more details (TLBs, Large Pages, PAE, Selectors, Page Protection) but the short explanation above captures the gist of things.
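As a quick, illustrative sketch of that 10/10/12 split under classic (non-PAE) 32-bit x86 paging (the example address is arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t addr   = 0xC0123456;             /* example 32-bit linear address        */
        uint32_t pde    = (addr >> 22) & 0x3FF;   /* top 10 bits: page directory index    */
        uint32_t pte    = (addr >> 12) & 0x3FF;   /* next 10 bits: page table index       */
        uint32_t offset =  addr        & 0xFFF;   /* low 12 bits: offset in the 4 KiB page */
        printf("PDE %u, PTE %u, offset 0x%03x\n", pde, pte, offset);
        return 0;
    }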
Using this translation mechanism, an OS can use a different set of physical pages for each process, thus giving each process the illusion of having all the memory for itself (as each process gets its own Page Directory)
On top of this Virtual Memory, the OS may also add the concept of Paging. One of the control bits discussed earlier specifies whether an entry is "Present". If it isn't present, an attempt to access that entry results in a Page Fault exception. The OS can catch this exception and act accordingly. OSs supporting swapping/paging can thus decide to load a page from the swap space, fix the translation tables, and then issue the memory access again.
This is where the two terms combine, an OS supporting Virtual Memory and Paging can give processes the illusion of having more memory than actually present by paging (swapping) pages in and out of the swap area.
As to your last question (why is it said that a 32-bit CPU is limited to 4 GB of virtual memory): this refers to the "Virtual Memory" of definition 2, and is an immediate result of the pointer size. If the CPU can only use 32-bit pointers, you have only 32 bits to express different addresses, which gives you 2^32 bytes = 4 GB of addressable memory.
Hope this makes things a bit clearer.
IMHO it is terribly misleading to treat swap space as equivalent to virtual memory. VM is a concept much more general than swap space. Among other things, VM allows processes to reference virtual addresses during execution, which are translated into physical addresses with the support of the hardware and page tables. Thus processes do not need to concern themselves with how much physical memory the system has, or where the instruction or data actually resides in the physical memory hierarchy. VM provides this mapping. The referenced item (instruction or data) may be resident in L1, or L2, or RAM, or finally on disk, in which case it is loaded into main memory.
Swap space is just a place in secondary storage where pages are kept while they are inactive. If there is not enough RAM, the OS may decide to swap out pages of a process to make room for the pages of another process. The processor never executes instructions or reads/writes data directly from swap space.
Notice that it would be possible to have swap space in a system with no VM: processes that directly access physical addresses could still have portions of their memory kept on disk.
Though the thread is quite old and has already been answered, I would still like to share this link, as it is the simplest explanation I have found so far. The linked page also has diagrams for better visualization.
Key difference: Virtual memory is an abstraction of main memory. It extends the available memory of the computer by storing the inactive parts of the RAM's contents on disk. Whenever that content is required, it is fetched back into RAM. Swap memory, or swap space, is the part of the hard disk drive that is used for virtual memory. Because of this, the two terms are sometimes used interchangeably.
Virtual memory is quite different from physical memory. Programmers get direct access to the virtual memory rather than the physical memory. Virtual memory is an abstraction of main memory; it hides the details of the system's real physical memory. It extends the available memory of the computer by storing the inactive parts of the RAM's contents on disk, and when that content is required it fetches it back into RAM. Virtual memory creates the illusion of a whole address space with addresses beginning at zero. It is mainly preferred for the way it reduces physical memory requirements. It is composed of the available RAM plus disk space.
Swap memory is generally called swap space. Swap space refers to the portion of the virtual memory that is reserved as a temporary storage location on disk. Swap space is used when the available RAM is not enough to meet the system's memory requirements. For example, in the Linux memory system, the kernel locates each page either in physical memory or in the swap space. The kernel also maintains a table recording which pages have been swapped out and which pages are in physical memory.
Pages that have not been accessed for a long time are sent to the swap space area. This process is referred to as swapping out. If such a page is required again, it is swapped back into physical memory, usually by swapping out a different page. Thus, one can conclude that swap memory and virtual memory are interconnected: swap memory is used by the virtual memory technique.
difference-between-virtual-memory-and-swap-memory
"Virtual memory" is a generic term. In Windows, it is called as Paging or pagination. In Linux, it is called as Swap.