If you think the question is not phrased properly, please edit or correct it; I am asking based on what I googled and pieced together from the internet.
The CPU generates a logical address which is converted into a physical address, but my question is: how does the CPU generate the logical address for data that is stored on disk?
The CPU generates a logical address which is converted into a physical address, but the question here is: how does the CPU generate the logical address for data that is stored on disk?
It doesn't, at least not the way you're thinking it does.
Normally a program tries to access memory at a virtual address, but the CPU sees "virtual address isn't present" and complains to the OS (kernel) via a page fault. The page fault handler figures out what went wrong, loads the data from disk into RAM, maps the RAM into the virtual address space, then lets the program continue/retry as if nothing happened. The second time the CPU tries to execute the code the data is in RAM, so it works fine.
Of course the OS has to know the reason why data at a virtual address wasn't present, which means that the OS has to keep track of extra information that the CPU doesn't have - if the virtual address actually isn't valid at all (e.g. NULL), or if the data is in swap space (and where), or if the data is part of a memory mapped file (and which offset of which file).
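For a concrete (Linux/POSIX) illustration of demand paging, here is a minimal sketch: it maps a file and touches the mapping, so the first access triggers exactly the page-fault path described above. The file path is just an example assumption; any readable, non-empty file will do.

    /* Minimal sketch: demand paging via mmap on Linux/POSIX. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);   /* example: any readable, non-empty file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* mmap only sets up page-table bookkeeping; no data is read from disk yet. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* This dereference is the "virtual address isn't present" moment: the CPU
         * faults, the kernel reads the page from disk, and the access is retried. */
        printf("first byte of the file: %c\n", p[0]);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }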
There is a virtual address space and a physical address space. The virtual address space defines the address space of the program. Say a program has a 4 GB address space; in that case we can represent its addresses with 32 bits (2^32 = 4 GB), from 0 to 0xFFFFFFFF.
This is the space the program thinks it has.
During compilation and linking, the program is given logical addresses based on its address space.
After the program is loaded into memory, the program counter assigned to it points to these logical/virtual addresses, and the CPU only asks for these addresses, which is where the program's instructions are. The CPU doesn't know where the instructions actually sit in physical memory; it is up to the MMU to translate the addresses.
The main point is that the CPU doesn't really invent these addresses; they are the addresses given to the program at compile/link time, and the instructions in the program reference each other using them. So the CPU just looks at what the program counter points to and requests that instruction, which is located somewhere in physical memory.
Whenever it needs to fetch data, an operand, or the instruction pointed to by the PC, the CPU issues these (logical) addresses.
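To see these program-assigned virtual addresses from user space, here is a small C sketch that prints the addresses of its own code, data, heap, and stack. The exact values are not claimed from the original text; on a typical Linux build they change between runs with ASLR/PIE, but they are always virtual addresses, never physical ones.

    /* Sketch: the addresses a program sees are the virtual/logical addresses
     * laid out by the compiler, linker, and loader. */
    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 42;          /* data segment */
    int uninitialized_global;             /* bss segment  */

    int main(void)
    {
        int local = 0;                    /* stack */
        int *heap = malloc(sizeof *heap); /* heap  */

        printf("code  (&main):                 %p\n", (void *)&main);
        printf("data  (&initialized_global):   %p\n", (void *)&initialized_global);
        printf("bss   (&uninitialized_global): %p\n", (void *)&uninitialized_global);
        printf("heap  (malloc result):         %p\n", (void *)heap);
        printf("stack (&local):                %p\n", (void *)&local);

        free(heap);
        return 0;
    }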
Related
I want to know why the CPU generates logical addresses and then maps them into physical addresses with the help of the memory manager. Why do we need them?
Virtual addresses are required to run several programs on a computer.
Assume there is no virtual address mechanism. Compilers and link editors generate a memory layout with a given pattern. Instructions (the text segment) are positioned in memory from address 0. Then come the segments for initialized and uninitialized data (data and bss) and the dynamic memory (heap and stack). (See for instance https://www.geeksforgeeks.org/memory-layout-of-c-program/ if you have no idea about memory layout.)
When you run this program, it will occupy part of the memory, which will no longer be available to other processes, in a completely unpredictable way. For instance, addresses 0 to 1M will be occupied, or 0 to 16k, or 0 to 128M; it depends entirely on the program's characteristics.
If you now want to run a second program concurrently, where will its instructions and data go in memory? Memory addresses are generated by the compiler, which obviously cannot know at compile time which memory will be free. And remember, memory addresses (for instructions or data) are essentially hard-coded in the program code.
A second problem happens when you want to run many processes and you run out of memory. In this situation, some processes are swapped out to disk and restored later. But when restored, a process will go wherever memory is free, and again, that is unpredictable and would require modifying the program's internal addresses.
Virtual memory simplifies all these tasks. When running a process (or restoring it after a swap), the system looks at free memory and fills page tables to create a mapping between virtual addresses (manipulated by the processor and always unchanged) and physical addresses (which depend on which memory is free on the computer at a given time).
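As a rough mental model of that mapping, here is a toy, single-level "page table" in C with made-up frame numbers. Real MMUs use multi-level tables and hardware walks, so this is only a sketch of the split-translate-recombine idea.

    /* Toy sketch of page-table translation: split the virtual address into a
     * page number and an offset, look up the frame, recombine. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                      /* 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  16                      /* tiny toy address space */

    /* page_table[virtual page number] = physical frame number (made-up values) */
    static const uint32_t page_table[NUM_PAGES] = {
        [0] = 7, [1] = 3, [2] = 11, [3] = 2,   /* the rest default to frame 0 */
    };

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;         /* virtual page number   */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* offset within the page */
        uint32_t pfn    = page_table[vpn % NUM_PAGES]; /* frame chosen by the OS */
        return (pfn << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        uint32_t v = 0x00002ABC;                       /* page 2, offset 0xABC */
        printf("virtual 0x%08X -> physical 0x%08X\n", v, translate(v));
        return 0;
    }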
Logical address translation serves several functions.
One of these is to support a common mapping of the system address space into all processes. This makes it possible for any process to handle an interrupt, because the system addresses needed to handle interrupts are always in the same place, regardless of the process.
The logical translation system also handles page protection. This makes it possible to protect the common system address space from individual users messing with it. It also allows protecting the user address space, such as making code and data read-only, to catch errors.
Logical translation is also a prerequisite for implementing virtual memory. In a virtual memory system, each process's address space is constructed in secondary storage (i.e., disk). Pages within the address space are brought into memory as needed. This kind of system would be impossible to implement if processes with large address spaces had to be mapped contiguously into memory.
I'm a beginner with operating systems, and I had a question about the OS Kernel.
I'm used to the standard notion of each user process having a virtual address space of stack, heap, data, and code. My question is that when a context switch occurs to the OS Kernel, is the code run in the kernel treated as a process with a stack, heap, data, and code?
I know there is a dedicated kernel stack, which the user program can't access. Is this located in the user program address space?
I know the OS needs to maintain some data structures in order to do its job, like the process control block. Where are these data structures located? Are they in user-program address spaces? Are they in some dedicated segment of memory for kernel data structures? Are they scattered all around physical memory wherever there is space?
Finally, I've seen some diagrams where OS code is located in the top portion of a user program's address space. Is the entire OS kernel located here? If not, where else does the OS kernel's code reside?
Thanks for your help!
Yes, the kernel has its own stack, heap, data structures, and code separate from those of each user process.
The code running in the kernel isn't treated as a "process" per se. The code is privileged meaning that it can modify any data in the kernel, set privileged bits in processor registers, send interrupts, interact with devices, execute privileged instructions, etc. It's not restricted like the code in a user process.
All of kernel memory and user process memory is stored in physical memory in the computer (or perhaps on disk if data has been swapped from memory).
The key to answering the rest of your questions is to understand the difference between physical memory and virtual memory. Remember that if you use a virtual memory address to access data, that virtual address is translated to a physical address before the data is fetched at the determined physical address.
Each process has its own virtual address space. This means that some virtual address a in one process can map to a different physical address than the same virtual address a in another process. Virtual memory has many important uses, but I'm not going to go into them here. The important point is that virtual memory enforces memory isolation. This means that process A cannot access the memory of process B. All of process A's virtual addresses map to some set of physical addresses and all of process B's virtual addresses map to a different set of physical addresses. As long as the two sets of physical addresses do not overlap, the processes cannot see or modify the memory of each other. User processes cannot access physical memory addresses directly - they can only make memory accesses with virtual addresses.
There are times when two processes may have some virtual addresses that do map to the same physical addresses, such as if they both mmap the same file, both use a shared library, etc.
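A minimal sketch of that situation, using an anonymous MAP_SHARED mapping inherited across fork() (a related mechanism to mapping the same file): both processes end up with page-table entries pointing at the same physical page, here even at the same virtual address since the mapping is inherited.

    /* Sketch: two processes sharing one physical page via MAP_SHARED + fork. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }

        *shared = 0;
        pid_t pid = fork();
        if (pid == 0) {                 /* child: writes through its own page tables */
            *shared = 123;
            _exit(0);
        }
        waitpid(pid, NULL, 0);          /* parent: sees the child's write */
        printf("parent reads %d at %p\n", *shared, (void *)shared);
        return 0;
    }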
So now to answer your question about kernel address spaces and user address spaces.
The kernel can have a separate virtual address space from each user process. This is as simple as changing the page directory pointer in the cr3 register (in an x86 processor) on each context switch. Since the kernel has a different virtual address space, no user process can access kernel memory as long as none of the kernel's virtual memory addresses map to the same physical addresses as any of the virtual addresses in any address space for a user process.
This can lead to a minor problem. If a user process makes a system call and passes a pointer as a parameter (e.g. a pointer to a buffer in the read system call), how does the kernel know which physical address corresponds to that buffer? The virtual address in the pointer maps to a different physical address in kernel space, so the kernel cannot just dereference the pointer. There are two options:
The kernel can traverse the user process page directory/tables to find the physical address that corresponds to the buffer. The kernel can then read/write from/to that physical address.
The kernel can instead include all of its mappings in the user address space (at the top of the user address space, as you mentioned). Now, when the kernel receives a pointer through the system call, it can just access the pointer directly since it is sharing the address space with the process.
Kernels generally go with the second option, since it's more convenient and more efficient. Option 1 is less efficient because each time a context switch occurs, the address space changes, so the TLB needs to be flushed and you lose all of your cached mappings. I'm simplifying things a bit here, since kernels have started doing things differently in light of the recently discovered Meltdown vulnerability.
This leads to another problem. If the kernel includes its mappings in the user process address space, what stops the user process from accessing kernel memory? The kernel sets protection bits in the page table that cause the processor to prohibit the user process from accessing the virtual addresses that map to physical addresses that contain kernel memory.
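As a rough sketch of how those protection bits look on x86 (the bit positions are from the architecture manuals; the actual check is done by the MMU in hardware, not by code like this): kernel mappings simply leave the User/Supervisor bit clear, so any access from user mode faults.

    /* Sketch of the low permission flags in an x86 page-table entry. */
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_PRESENT  (1ull << 0)   /* page is mapped */
    #define PTE_WRITABLE (1ull << 1)   /* writes allowed */
    #define PTE_USER     (1ull << 2)   /* user mode (CPL 3) may access */

    static int user_can_access(uint64_t pte, int is_write)
    {
        if (!(pte & PTE_PRESENT)) return 0;
        if (!(pte & PTE_USER))    return 0;              /* kernel-only mapping */
        if (is_write && !(pte & PTE_WRITABLE)) return 0; /* read-only mapping   */
        return 1;
    }

    int main(void)
    {
        uint64_t kernel_pte = PTE_PRESENT | PTE_WRITABLE;            /* no PTE_USER */
        uint64_t user_pte   = PTE_PRESENT | PTE_WRITABLE | PTE_USER;
        printf("user access to kernel page: %d\n", user_can_access(kernel_pte, 0));
        printf("user access to user page:   %d\n", user_can_access(user_pte, 0));
        return 0;
    }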
Take a look at these slides for more information.
I'm used to the standard notion of each user process having a virtual address space of stack, heap, data, and code. My question is that when a context switch occurs to the OS Kernel, is the code run in the kernel treated as a process with a stack, heap, data, and code?
On every modern operating system I am aware of, there is NEVER a context switch to the kernel. The kernel executes in the context of a process (some systems use the fiction of a reduced process context).
The "kernel" executes when a process enters kernel mode through an exception or an interrupt.
Each process (thread) normally has its own kernel-mode stack, used after an exception. Usually there is a single interrupt stack for each processor.
https://books.google.com/books?id=FSX5qUthRL8C&pg=PA322&lpg=PA322&dq=vax+%22interrupt+stack%22&source=bl&ots=CIaxuaGXWY&sig=S-YsXBR5_kY7hYb6F2pLGjn5pn4&hl=en&sa=X&ved=2ahUKEwjrgvyX997fAhXhdd8KHdT7B8sQ6AEwCHoECAEQAQ#v=onepage&q=vax%20%22interrupt%20stack%22&f=false
I know there is a dedicated kernel stack, which the user program can't access. Is this located in the user program address space?
Each process has its own kernel stack. It is often in the user space with protected memory but could be in the system space. The interrupt stack is always in the system space.
Where are these data structures located? Are they in user-program address spaces?
They are generally in the system space. However, some systems do put some structures in the user space in protected memory.
Are they in some dedicated segment of memory for kernel data structures?
If they are in the user space, they are generally for an access mode more privileged than user mode and less privileged than kernel mode.
Are they scattered all around physical memory wherever there is space?
Things can be spread over physical memory pretty much at random.
The data structures in question are usually regular C structures situated in the RAM allotted to the kernel by the kernel allocator.
They are not usually accessible from regular processes because of the normal mechanisms for memory protection and paging (virtual memory).
A kind of exception to this is kernel threads, which have no userspace address space, so the code they execute is always kernel code working with kernel-space data structures, hence with the isolated kernel memory.
Now for the interesting part: 64-bit Linux uses a thing called the Direct Map for memory organization, which means that the full amount of available physical memory is mapped in the kernel page tables as one contiguous chunk. This is not true on 32-bit, where HIGHMEM was used to work around the 4 GB address-space limitation.
Since the kernel has all of physical RAM visible and available to its own allocator, the kernel data structures in question can be situated pretty much at random with respect to physical addresses.
You can google these terms to gain additional information:
PTI (page table isolation)
__copy_from_user (esp. on esoteric architectures where this function is not just a bitwise copy)
EPT (Intel nested paging in virtual machines)
I know that a user program generates logical addresses. Suppose there is a small code snippet in C. When an address is printed, the addresses are virtual addresses. My question is: where are those addresses fetched from? Where exactly do the allocated values and variables stay: in main memory or secondary memory? If in main memory, then why is there a physical address?
User mode programs only see logical addresses. Only the operating system (kernel mode) sees physical memory.
My question is where are those addresses fetched from?
Those are the logical addresses assigned by the program linker and loader.
Where exactly do the allocated values and variables stay? In main memory or secondary memory?
In a virtual memory system, it may be in main memory or secondary storage.
If in main memory, then why is there a physical address?
It is a logical address that is mapped to a physical address using page tables.
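On Linux you can even peek at that mapping yourself through /proc/self/pagemap. The sketch below is Linux-specific and assumes sufficient privilege: on recent kernels the frame number reads as zero without CAP_SYS_ADMIN.

    /* Sketch: look up the physical frame behind one of our own virtual addresses.
     * Each 64-bit pagemap entry holds the frame number in bits 0-54 and a
     * "present" flag in bit 63. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int value = 42;                        /* a stack variable to look up */
        uintptr_t vaddr = (uintptr_t)&value;   /* its virtual address         */
        long page_size = sysconf(_SC_PAGESIZE);

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        uint64_t entry;
        off_t offset = (off_t)(vaddr / page_size) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
            perror("pread"); return 1;
        }
        close(fd);

        uint64_t pfn = entry & ((1ull << 55) - 1);   /* bits 0-54: frame number */
        int present  = (entry >> 63) & 1;            /* bit 63: page is in RAM  */

        printf("virtual %p -> present=%d frame 0x%llx (phys ~0x%llx)\n",
               (void *)vaddr, present, (unsigned long long)pfn,
               (unsigned long long)(pfn * (uint64_t)page_size + vaddr % page_size));
        return 0;
    }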
I am kind of new to computer architecture and operating systems, but I will try to answer as much as I can. As far as I have understood logical addresses (which I still have trouble understanding, such as where they are fetched from or where they are stored; I mean these addresses (numbers) have to be stored somewhere, otherwise the CPU can't generate them by itself, right?), these addresses are assigned by the CPU/processor and depend on the CPU architecture. Each process is assigned its own virtual/logical addresses, and these logical addresses are translated to physical addresses by the CPU's Memory Management Unit (MMU).
Where exactly do the allocated values and variables stay? As user3344003 said, it may be in main memory or secondary storage.
If in main memory, then why is there a physical address? The reason lies in the concept of virtual memory. Each process has its own virtual address space and a page table. A process's logical addresses are mapped through this page table to physical memory (RAM). Whatever the logical address is, it gets mapped to a physical address. If physical memory fills up, the OS evicts some of the less used or unused processes to secondary storage and brings the needed ones into RAM. That way multiple processes can run at the same time, and every process assumes that it has all of the RAM to itself. Without virtual memory, physical memory would fill up, processes might crash, and the OS could go down as well.
Hope it helps. I am still learning; if my understanding of logical addresses and virtual memory is wrong, please comment.
How can the CPU generate a memory address? Addresses of memory cells are defined beforehand, so what does it mean to say that the CPU generates a memory address?
From the context you gave above, it appears that you are trying to understand memory management concepts in an OS course. Galvin is a great source, by the way, but I agree there are confusing statements like the one above in quite a few places in Galvin.
Nevertheless, to answer your question, I will try to give an idea of my understanding of it:
CPU Generates Memory Address -
The OS needs to make sure that all processes run in their own address space and do not illegally try to access the address space of another process or of the OS itself.
To do this, the OS makes use of the base register and the limit register. These registers can be accessed and modified only through special instructions, and those special instructions can be executed only by the OS, because the OS runs in kernel mode. User programs run in user mode, so they cannot access these registers and hence cannot modify the values of the base and limit registers.
Now consider a situation where you have written a program that has been compiled, and its executable (or rather, an instance of that program, i.e. a process) is residing in the input queue, ready to be brought into main memory and taken up by the CPU for execution. The OS will make sure that this process gets loaded only into the address space allotted to it.
So far, so good. Now let's say that one of the instructions in the process (which is now in memory, in its own address space) needs to access some data from some memory location. When the CPU takes this instruction for execution (in its fetch-decode-execute instruction cycle), it sees that, to complete the execution of this instruction, it needs to access data at some other memory location.
This memory location, as written in the instruction, is not an actual physical address; it is a relocatable address. The linker/loader converts it to an absolute address, and the CPU produces (or, if you like, GENERATES) the corresponding logical address. (The OS takes care of converting this logical address, generated by the CPU, to an actual physical address in one of the memory banks. This translation is actually done by the MMU, in hardware.)
This is my understanding of the phrase "CPU Generates a memory address"
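To make the base/limit scheme described above concrete, here is a toy C sketch of the bounds check and relocation; the register values are made up, and a real MMU does this in hardware, not in code.

    /* Toy model of base/limit relocation: check against the limit, add the base. */
    #include <stdint.h>
    #include <stdio.h>

    struct relocation_regs {
        uint32_t base;    /* start of the process's partition in physical memory */
        uint32_t limit;   /* size of the partition */
    };

    static int translate(const struct relocation_regs *regs,
                         uint32_t logical, uint32_t *physical)
    {
        if (logical >= regs->limit)
            return -1;                     /* would fault / trap to the OS */
        *physical = regs->base + logical;  /* relocation done by the MMU   */
        return 0;
    }

    int main(void)
    {
        struct relocation_regs regs = { .base = 0x300000, .limit = 0x40000 };
        uint32_t phys;

        if (translate(&regs, 0x1234, &phys) == 0)
            printf("logical 0x1234 -> physical 0x%X\n", phys);      /* 0x301234 */
        if (translate(&regs, 0x50000, &phys) != 0)
            printf("logical 0x50000 is outside the limit: trap\n");
        return 0;
    }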
This is an interview question I found on a website. The question says: "In virtual memory, can two different processes have the same address? When you answer 'No', which is correct, how can one process access another process's memory, for example how can the debugger access the variables and change them while debugging?"
What I understand is :
Two different processes can have the same virtual memory address. This is because each process has its own page table. Each process thinks it has 4 GB of memory on a 32-bit machine. So both P1 and P2 can access address 0xabcdef, but the physical memory location might be different. Isn't this right?
The debugger works on the same principle: two processes can use the same address, so it can modify variables etc. on the fly.
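A quick way to check this reasoning on Linux: fork a process and print the address and value of the same global in parent and child. The virtual address is identical in both, yet after the child's write the values differ, because copy-on-write gives each process its own physical page.

    /* Sketch: same virtual address in two processes, different physical pages. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 1;

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                       /* child */
            value = 99;                       /* triggers copy-on-write */
            printf("child:  &value=%p value=%d\n", (void *)&value, value);
            _exit(0);
        }
        waitpid(pid, NULL, 0);                /* parent */
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }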
Theoretically, every process executed by a user on any of today's popular OSes (Windows, Linux, Unix, Solaris, etc.) is initially allowed to use an address range of 4 GB (0x00000000 to 0xFFFFFFFF on a 32-bit platform), whether it is a simple hello-world program or a complex web container hosting the Stack Overflow site. It means every process's range VIRTUALLY starts at the same start address and ends at the same end address. So obviously every process has the same virtual addresses in its respective virtual address space range. So the answer to your first question is YES.
The difference comes when the OS executes processes. Modern OSes are multitasking and run more than one process at any point in time, so accommodating 4 GB for every process in main memory is not feasible at all. So OSes use a paging system, in which they divide the virtual address range (0x00000000 to 0xFFFFFFFF) into pages of 4 KB (not always). Before starting a process, the OS loads the pages needed initially into main memory, and then loads further virtual page ranges as required. This loading of virtual memory into physical memory (main memory) is called memory mapping. In this process you map a page's virtual address range to a physical address range (for example, the virtual address range 0x00000000 to 0x00001000 to the physical address range 0x00300000 to 0x00301000), based on which slots are free in main memory. At any point in time, only one virtual address range will be mapped to that particular physical address range, so the answer to your second question is NO.
BUT
The shared memory concept is an exception, where processes can share part of their virtual address range with each other; that range will be mapped to a common physical address space. So in this case the answer can be YES.
As an example, on Linux nearly every executable requires the libc.so library to run. Every process loads its required libraries and allocates them some virtual address page ranges in its address space. Now consider a scenario where you are executing hundreds of processes, each of which requires libc.so. If the OS allocated separate memory for libc.so in every process, you can imagine the level of duplication: it is highly possible that at any point in time you would have multiple copies of libc.so pages in main memory. To avoid this duplication, the OS maps libc.so into a virtual address range of every process that is backed by a single physical address range in main memory. So every process refers to that same physical range to execute any code in libc.so, and in this case processes share some physical address ranges as well.
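A quick, Linux-specific way to observe this from C (assuming /proc is mounted): print the /proc/self/maps lines that mention libc. Run it from two different programs and the same library shows up in both, backed by the same page-cache pages, possibly at different virtual addresses when ASLR is enabled.

    /* Sketch: show the libc mappings in this process's virtual address space. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) { perror("fopen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, maps)) {
            if (strstr(line, "libc"))   /* match e.g. libc.so.6 */
                fputs(line, stdout);
        }
        fclose(maps);
        return 0;
    }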
But there is no chance of two processes having the same physical address at the same time for the mappings backing their own private, malloc'ed virtual address ranges.
Hope it helps.
1)
Same physical memory address at the same time: NO
Same virtual memory address at the same time: YES (each one maps to a different physical address, or to swap space)
2) I think debuggers don't directly access the debugged process but communicate with a runtime in the debugged process to make those changes.
That said, the OS or processor instructions may provide a way to access or modify another process's memory if you have the right privileges. That doesn't mean it has the SAME address; it only means process 1 can say "access memory #address1 in process 2", and someone (processor / OS / runtime) will do that on behalf of process 1.
Yes, it's definitely possible for the same address to map to different physical memory depending on the process that's referencing it. This is in fact the case under Windows.
Each process has an address space of 4 GB on a 32-bit system. Where this memory really is, is managed by the OS. So in principle two different processes can have the same addresses that are local to each process.
Now, when one process has to read the memory of another process, it has to either communicate with the other process (memory-mapped files, etc.) or use debug APIs like OpenProcess/ReadProcessMemory.
What I am sure of is that one process cannot directly go and read the virtual memory of another process, at least in Win32, without the help of the OS.
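To make the OpenProcess/ReadProcessMemory route mentioned above concrete, here is a hedged Win32 sketch; the PID and remote address are placeholders you would obtain from the debuggee, and the call fails without sufficient access rights.

    /* Sketch: read another process's memory via the Win32 debug APIs. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        DWORD pid = 1234;                          /* placeholder: debuggee's PID        */
        LPCVOID remote_addr = (LPCVOID)0x10000000; /* placeholder: address in the target */
        char buffer[64];
        SIZE_T bytes_read = 0;

        HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                               FALSE, pid);
        if (h == NULL) {
            fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
            return 1;
        }

        /* The address is interpreted in the TARGET process's address space;
         * the OS performs the cross-address-space copy on our behalf. */
        if (ReadProcessMemory(h, remote_addr, buffer, sizeof buffer, &bytes_read))
            printf("read %zu bytes from process %lu\n", (size_t)bytes_read, pid);
        else
            fprintf(stderr, "ReadProcessMemory failed: %lu\n", GetLastError());

        CloseHandle(h);
        return 0;
    }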
Sometimes I feel like the "elder" in the Minolta commercial... In the 1960's Multics was created using Virtual Memory. The last Multics system was shut down October 30, 2000 at 17:08Z.
In Multics, only one copy of any program was present in memory, regardless of how many users were running it. So that means that each user process had both the same physical and virtual address for the program.
When I look at the Windows Task Manager and see multiple copies of a program (e.g. svchost.exe) I wonder why / how the revolutionary concepts in Multics were lost.