Are Condition Codes / Flags stored in the processor registers or the main memory? - operating-system

I'm new to this, so I want to make sure that my comprehension of what I read is correct.
Also, registers are always processor registers, and there are no other registers that are not part of the processor (like registers in primary/secondary memory), correct?

Most architectures have a dedicated register for storing flags. Modern x86, for example, has one 32-bit register (EFLAGS) that stores all of the flags. Storing the flags in main memory would make accessing them incredibly slow compared to a register. Some architectures also support moving the flags to another register or directly onto the stack, and vice versa.
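As a minimal sketch (assuming an x86-64 CPU and a compiler that accepts GCC-style inline assembly), the snippet below copies the flags register through the stack into a general-purpose register so C code can inspect it, which is exactly the kind of register-to-stack move described above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t flags;

    /* PUSHFQ pushes the 64-bit RFLAGS register onto the stack;
       POPQ then moves that copy into an ordinary general-purpose
       register, where C code can read it. */
    __asm__ volatile ("pushfq\n\tpopq %0" : "=r"(flags));

    printf("RFLAGS          = 0x%llx\n", (unsigned long long)flags);
    printf("carry flag (CF) = %llu\n", (unsigned long long)(flags & 0x1));        /* bit 0 */
    printf("zero flag  (ZF) = %llu\n", (unsigned long long)((flags >> 6) & 0x1)); /* bit 6 */
    return 0;
}
```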
When talking about registers, most people are referring to registers in a processor. That's not to say there aren't any registers in your PC besides the ones in your CPU. GPUs, for example, also have registers. Your memory could have a register to temporarily store read/write addresses or keep track of other information, but when looking at processors you usually won't need to know about those.

Related

What is the TCM's connection with the Icache in this RISC-V version?

In the middle of this page (https://github.com/ultraembedded/riscv), there is a block diagram of the core. I really do not know what the TCM is doing in the same block as the Icache. Is it an optional thing to have inside the CPU?
Some embedded systems provide dedicated memory for code and/or for data. On some of these systems, Tightly-Coupled Memory serves as a replacement for the (instruction) cache, while on other such systems this memory is in addition to and alongside a cache, covering a certain portion of the address space. This dedicated memory may be on the same chip as the processor.
This memory could be some kind of ROM or other memory that is initialized somehow prior to boot. In any case, TCM typically isn't backed by main memory, so it doesn't suffer cache misses or need the associated circuitry, and it usually offers high performance, like a cache when a hit occurs.
Some systems refer to this as Instruction Tightly Integrated Memory, ITIM, or Data Tightly Integrated Memory, DTIM.
When a system uses ITIM or DTIM, it performs more like a Harvard architecture than the Modified Harvard architecture of laptops and desktops.
The cache has no address space of its own. The CPU does not ask the cache for data; it simply asks for data at an address, and the memory system first checks whether that data is present in the cache. If it is, the data is returned from the cache; if not, it is fetched from RAM. All the processor does is ask for data; it does not care where the data came from. In the case of TCM, the CPU can read and write TCM directly, since it occupies a specific address range. Think of TCM as a small RAM that sits very close to the CPU.
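As a rough illustration (the base address and size below are hypothetical; the real values come from the SoC's memory map or linker script), accessing TCM is just an ordinary load/store to its fixed address range:

```c
#include <stdint.h>

/* Hypothetical TCM base address and size; real values come from the
   SoC's memory map / linker script, not from this sketch. */
#define TCM_BASE 0x10000000u
#define TCM_SIZE 0x00010000u   /* 64 KiB */

/* Because the TCM occupies ordinary addresses, the CPU reads and writes it
   with normal load/store instructions; no cache lookup or miss is involved. */
static volatile uint32_t *const tcm = (volatile uint32_t *)TCM_BASE;

void copy_hot_data_to_tcm(const uint32_t *src, uint32_t words) {
    for (uint32_t i = 0; i < words && i < TCM_SIZE / sizeof(uint32_t); i++)
        tcm[i] = src[i];    /* deterministic access time, unlike a cached access */
}
```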

What is the real use of logical addresses?

This is what I understood of logical addresses :
Logical addresses are used so that data in physical memory does not get corrupted. Because of logical addresses, processes cannot access physical memory directly, so a process cannot overwrite physical memory locations that are already in use, which protects data integrity.
I doubt whether it was really necessary to use logical addresses. The integrity of the data in physical memory could have been preserved by an algorithm that does not allow processes to access or modify memory locations already in use by other processes.
"The integrity of the data on the physical memory could have been preserved by using an algorithm or such which do not allow processes to access or modify memory locations which were already accessed by other processes."
Short answer: it is impossible to devise an efficient algorithm, as proposed, that matches the performance of logical addressing.
The issue with this algorithm is: how are you going to intercept each process's memory accesses? Without intercepting memory accesses, it is impossible to check whether a process has the privilege to access a certain memory region. If we really wanted to implement this algorithm, there are ways to intercept memory accesses without using the logical addresses provided by the MMU (memory management unit) on modern CPUs (assume you have a CPU without an MMU). However, those methods will not be as efficient as using an MMU. If your CPU does have an MMU, logical address translation is unavoidable, but you could set up a one-to-one mapping to physical memory.
One way to intercept memory accesses without an MMU is to insert a kernel trap instruction before each memory access instruction in a program. Since we cannot trust user-level programs, such a job cannot be delegated to the compiler. Instead, you could write an OS that does this job before it loads a program into memory: it scans through the program's binary and inserts a kernel trap instruction before each memory access. That way, the kernel can inspect whether each memory access should be granted. However, this approach degrades the system's performance badly, as every memory access, legal or not, traps into the kernel, and trapping into the kernel involves a context switch, which takes many CPU cycles.
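A minimal sketch of the per-access check the kernel would have to run inside each of those traps; the process and region structures here are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-process bookkeeping: the memory regions this process may touch. */
struct region { uintptr_t start; size_t len; };

struct process {
    struct region *regions;
    size_t region_count;
};

/* The check the kernel would have to run on EVERY trapped memory access.
   Each call implies a trap into the kernel and back, i.e. roughly a context
   switch's worth of overhead per load or store. */
bool access_allowed(const struct process *p, uintptr_t addr, size_t len) {
    for (size_t i = 0; i < p->region_count; i++) {
        const struct region *r = &p->regions[i];
        if (addr >= r->start && addr + len <= r->start + r->len)
            return true;
    }
    return false;   /* not inside any granted region: deny the access */
}
```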
Can we do better? What about doing a static analysis of the memory accesses of our programs before we load them into memory, so that we only insert traps before illegal memory accesses? However, processes have no predefined execution order. Say you have programs A and B that both try to access the same memory region. Which one should get it according to our static analysis? We could randomly assign it to one of them, say B. Then how do we know when B is done with this memory so we can give it to A and let it proceed? Say B uses this region to hold a global variable that is accessed multiple times throughout its life cycle. Do we wait until B completes before giving the region to A? What if B never ends?
Furthermore, a static analysis of memory accesses would be impossible in the presence of dynamic memory allocation. If either program A or B tries to allocate a memory region whose size depends on user input, then neither the OS nor our static analysis tool can know ahead of time where the region will be or how big it is, and thus cannot do the analysis at all.
Thus, we have to fall back to trapping on every memory access and determining at run time whether the access is legal. Sounds familiar? That is exactly the function of the MMU and logical addresses. The difference is that with logical addresses, a trap is incurred only when an illegal access actually happens, instead of on every memory access.
The OS presents logical addresses to programs as if they were using physical memory. This extra layer (the logical address) is needed for data-integrity purposes. You can think of logical addresses as the OS's own language for addresses: without this mapping, the OS could not tell which "actual" addresses a given program is allowed to use. Logical address mapping removes this ambiguity, so the OS knows which logical address maps to which physical address and whether that physical location is allowed for that program. It performs the integrity checks on logical addresses rather than on physical memory, because you can remap and manipulate logical addresses freely, but you cannot do the same with physical memory without affecting the processes already using it.
I would also like to mention that the base register and limit register are loaded by privileged instructions, privileged instructions execute only in kernel mode, and only the operating system runs in kernel mode; therefore user programs cannot modify these registers directly. I hope I helped a little :)
There are some things that you need to understand.
First of all, with memory management enabled, the CPU does not access physical memory directly. To reach a physical address, the CPU starts from a logical address, which is then translated into the physical address. That is the basic reason logical addresses are needed to access physical memory: without a logical address you cannot get at it, so this translation is necessary. If a system did not use virtual/logical addresses, it would be highly vulnerable to a hacker or intruder, who could access physical memory directly and manipulate useful data at any location.
Second, when a process runs, the CPU generates logical addresses for it while it executes out of main memory. The purpose of the logical address here is memory management: physical memory is small compared with the combined size of the processes, so memory has to be relocated and reused to obtain good efficiency. The MMU (memory management unit) comes into play here: the physical address is computed by the MMU from the logical address. So logical addresses are generated by the process, and the MMU accesses physical memory based on those logical addresses.
This example will make it clear.
Suppose the data is stored at physical address 50, the base register holds 50, and the data's logical address (its offset from the base) is 0. Now the OS relocates the process so the data sits at physical address 100: the base register is updated to 100, while the logical address stays 0. When the data is retrieved via its logical address, the MMU adds the offset 0 to the base 100 and accesses address 100. No matter how many times the data moves in physical memory, the move is absorbed by the base register, so the program keeps using the same logical address and still reaches the data wherever it physically lives now.
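Here is a small, hypothetical C sketch of base-and-limit translation that mirrors the example above (the register values are made up):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical MMU state for one process under base-and-limit relocation. */
struct mmu_regs {
    uint32_t base;    /* where the process starts in physical memory  */
    uint32_t limit;   /* size of the process's logical address space  */
};

/* Translate a logical address: add the base, but only if the logical
   address is within the limit; otherwise the hardware raises a fault. */
bool translate(const struct mmu_regs *m, uint32_t logical, uint32_t *physical) {
    if (logical >= m->limit)
        return false;               /* out of bounds: protection fault */
    *physical = m->base + logical;
    return true;
}

int main(void) {
    struct mmu_regs m = { .base = 50, .limit = 4096 };
    uint32_t phys;

    translate(&m, 0, &phys);        /* logical 0 -> physical 50          */
    m.base = 100;                   /* the OS relocates the process      */
    translate(&m, 0, &phys);        /* same logical 0 -> physical 100    */
    printf("logical 0 now maps to physical %u\n", phys);
    return 0;
}
```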
I hope it helps.

What is the purpose of logical addresses in an operating system? Why are they generated?

I want to know why the CPU generates logical addresses and then maps them to physical addresses with the help of the memory manager. Why do we need them?
Virtual addresses are required to run several programs on a computer.
Assume there is no virtual address mechanism. Compilers and link editors generate a memory layout with a given pattern. Instructions (the text segment) are positioned in memory starting from address 0. Then come the segments for initialized and uninitialized data (data and bss) and the dynamic memory (heap and stack). (See for instance https://www.geeksforgeeks.org/memory-layout-of-c-program/ if you have no idea about memory layout.)
When you run this program, it will occupy part of the memory, which will no longer be available to other processes, in a completely unpredictable way. For instance, addresses 0 to 1M may be occupied, or 0 to 16k, or 0 to 128M; it depends entirely on the program's characteristics.
If you now want to run a second program concurrently, where will its instructions and data go in memory? Memory addresses are generated by the compiler, which obviously does not know at compile time what memory will be free. And remember, memory addresses (for instructions or data) are essentially hard-coded into the program code.
A second problem happens when you want to run many processes and you run out of memory. In this situation, some processes are swapped out to disk and restored later. But when restored, a process will go wherever memory is free, and again this is unpredictable and would require modifying the internal addresses of the program.
Virtual memory simplifies all these tasks. When running a process (or restoring it after a swap), the system looks at free memory and fills the page tables to create a mapping between virtual addresses (manipulated by the processor and always unchanged) and physical addresses (which depend on the free memory in the computer at a given time).
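A conceptual sketch of the translation the MMU performs using those page tables (single-level table, made-up sizes; real hardware uses multi-level tables):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12
#define NUM_PAGES   1024u            /* hypothetical, single-level table */

/* One entry per virtual page: the physical frame it maps to, plus a valid bit. */
struct pte { uint32_t frame; bool valid; };

static struct pte page_table[NUM_PAGES];

/* What the MMU does conceptually on every access: split the virtual address
   into (page number, offset), look the page up, and rebuild a physical address. */
bool va_to_pa(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                              /* page fault */
    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}
```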
Logical address translation serves several functions.
One of these is to support the common mapping of a system address space into all processes. This makes it possible for any process to handle an interrupt, because the system addresses needed to handle interrupts are always in the same place, regardless of the process.
The logical translation system also handles page protection. This makes it possible to protect the common system address space from individual users messing with it. It also allows protecting the user address space, such as making code and data read-only, to check for errors.
Logical translation is also a prerequisite for implementing virtual memory. In a virtual memory system, each process's address space is constructed in secondary storage (i.e., disk). Pages within the address space are brought into memory as needed. This kind of system would be impossible to implement if processes with large address spaces had to be mapped contiguously within memory.
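As a rough sketch of how that page protection piggy-backs on the same translation, here is a hypothetical page-table-entry format with permission bits; real formats differ by architecture:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical page-table entry flags. */
#define PTE_VALID  (1u << 0)
#define PTE_WRITE  (1u << 1)
#define PTE_USER   (1u << 2)   /* clear => kernel-only, e.g. the shared system mapping */

struct pte { uint32_t frame; uint32_t flags; };

/* Checked by the MMU on every access: a write to a read-only page, or a
   user-mode touch of a kernel-only page, raises a protection fault. */
bool access_ok(const struct pte *e, bool is_write, bool user_mode) {
    if (!(e->flags & PTE_VALID))             return false;  /* page fault            */
    if (is_write && !(e->flags & PTE_WRITE)) return false;  /* e.g. read-only code   */
    if (user_mode && !(e->flags & PTE_USER)) return false;  /* protects system space */
    return true;
}
```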

Context Switch questions: What part of the OS is involved in managing the Context Switch?

I was asked to answer these questions about the OS context switch. The question is pretty tricky and I cannot find any answer in my textbook:
How many PCBs exist in a system at a particular time?
What are two situations that could cause a context switch to occur? (I think they are an interrupt and the termination of a process, but I am not sure.)
Hardware support can make a difference in the amount of time it takes to do the switch. What are two different approaches?
What part of the OS is involved in managing the Context Switch?
There can be any number of PCBs in the system at a given moment in time. Each PCB is linked to a process.
Timer interrupts in preemptive kernels, or a process voluntarily giving up the processor in cooperative kernels. And, of course, process termination and blocking on I/O operations.
I don't know the answer here, but see Marko's answer
One of the schedulers from the kernel.
3: A whole number of possible hardware optimisations
Small register sets (therefore less to save and restore on context switch)
'Dirty' flags for the floating point/vector processor register set - these allow the kernel to avoid saving that context if nothing has happened to it since it was switched in. FP/VP contexts are usually very large and a great many threads never use them. Some RTOSs provide an API to tell the kernel that a thread never uses FP/VP at all, eliminating even more context restores and some saves - particularly when a thread handling an ISR pre-empts another and then quickly completes, with the kernel immediately rescheduling the original thread.
Shadow register banks: seen on small embedded CPUs with on-board single-cycle SRAM. The CPU registers are memory backed, so switching banks is merely a case of switching the base address of the registers. This is usually achieved in a few instructions and is very cheap. The number of contexts is usually severely limited on these systems.
Shadow interrupt registers: shadow register banks for use in ISRs. An example is the classic ARM CPUs, which have a shadow bank of about 6 or 7 registers for the fast interrupt handler and slightly fewer shadowed registers for the regular one.
While not strictly a performance increase for context switching, this can help with the cost of context switching on the back of an ISR.
Physically rather than virtually mapped caches. A virtually mapped cache has to be flushed on a context switch if the MMU mapping is changed - which it will be in any multi-process environment with memory protection. However, a physically mapped cache means that virtual-to-physical address translation is a critical-path activity on load and store operations, and a lot of gates are expended on caching to improve performance. Virtually mapped caches were therefore a design choice on some CPUs designed for embedded systems.
The scheduler is the part of the operating system that manages context switching. It performs a context switch in one of the following situations:
1. Multitasking
2. Interrupt handling
3. User and kernel mode switching
Each process has its own PCB.
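For illustration, here is a stripped-down, hypothetical PCB and the flow of a context switch; a real kernel stores far more state and does the actual register save/restore in a short assembly stub, not in C:

```c
#include <stdint.h>
#include <string.h>

/* The register state that must be preserved across a switch. */
struct cpu_context {
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t gp_regs[16];     /* general-purpose register image */
    uint64_t pstate;          /* flags / program status word    */
};

/* A minimal, hypothetical PCB: real ones also hold the memory map,
   open files, credentials, accounting information, and more. */
struct pcb {
    int pid;
    int state;                /* READY, RUNNING, BLOCKED, ...   */
    struct cpu_context ctx;   /* saved register state           */
};

/* Stand-in for the real CPU registers, so this sketch is self-contained. */
static struct cpu_context cpu;

void context_switch(struct pcb *prev, struct pcb *next) {
    memcpy(&prev->ctx, &cpu, sizeof cpu);   /* save the outgoing process  */
    memcpy(&cpu, &next->ctx, sizeof cpu);   /* restore the incoming one   */
}
```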

How are the stack pointer and program status word maintained in multiprocessor architecture?

In a multi-processor architecture, how are registers organized?
For example, in a 4-core processor, 4 processes can run at the same time.
How are the stack pointer, program status register, and program counter organized?
What about other general purpose registers?
My guess is, each core will have a separate set of registers.
Imagine 4 completely separate computers, each with a single-core CPU. A 4-core computer is like that; except:
All CPUs share the same physical address space (and can all use the same RAM, PCI devices, etc)
Interrupt/IRQ controllers may be designed so the OS can tell them which CPU(s) should be interrupted by a given IRQ
CPUs are typically able to signal each other (e.g. "inter-processor interrupts")
Some CPUs may share some caches
Some CPUs may share some control registers (e.g. for things like power management, cache configuration, etc)
For modern CPUs, some CPUs may share some or all execution units (SMT, hyper-threading, etc)
For modern systems (where memory controller is built into the physical chip) some CPUs may share the same memory controller
Most of this is "invisible" to most software. Unless you're writing the part of an OS that controls power management, you don't need to care whether power management is shared between CPUs or not; unless you're writing an OS's/kernel's low-level IRQ handling, you don't need to care how IRQs reach device drivers; etc.
The same applies to how many CPUs actually exist. The OS/kernel normally ensures that applications only need to care about higher level abstractions (e.g. "threads"). How this higher level abstraction works depends on the OS - normally (for most OSs) the OS/kernel attempts to provide the illusion that all threads are running at the same time by switching between them "quickly enough" (where if there's only 4 CPUs a maximum of 4 threads actually do run at the exact same time), but it's usually far more complex than this (involving things like thread priorities, pre-emption rules, etc) and (even though it's relatively rare) it may be very different (e.g. for some systems the same thread may be run on multiple CPUs at the same time for fault tolerance/redundancy purposes; for some systems there might just be a queue of functions and their data, where multiple functions run at the same time; etc).
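A small Linux-specific illustration (assuming glibc's sched_getcpu()) of that "threads, not CPUs" abstraction: the same thread may be observed on different cores over time, each core having its own program counter, stack pointer, and register file:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The scheduler is free to migrate this thread between cores, so the
       reported CPU number may change from one iteration to the next. */
    for (int i = 0; i < 5; i++) {
        printf("iteration %d running on CPU %d\n", i, sched_getcpu());
        usleep(100 * 1000);   /* give the scheduler a chance to move us */
    }
    return 0;
}
```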
Multiprocessor means that there are at least two discrete processors on the same platform -- usually on the same motherboard
A subset is distributed multiprocessing, where two PCs, for example, are programmed to appear as a single system with two processors.
Multicore means that most or all of the CPU is replicated several times on a single chip.
- This also means that the stack pointer, status register, program counter, and all general-purpose registers are replicated per core.
Hyperthreading is a technique where the stages of one pipeline can be executing instructions from different hardware threads or processes at the same time.
At the OS level, multiprocessing means that everything a process consists of is switched out every now and then.
Multithreading is a lightweight variant of multiprocessing, where the threads share, for example, the same code segment, the same data segment, the same file descriptors, etc., but have their own stacks (and of course their own status registers and program counters).
Multiprocessing can also refer to the hardware architecture in general (multiple processors).
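A minimal POSIX threads sketch of that point about multithreading: the counter lives in the shared data segment, while each thread's local variable lives on its own private stack (the function and variable names here are made up):

```c
#include <pthread.h>
#include <stdio.h>

/* Shared between the threads: lives in the common data segment. */
static int shared_counter = 0;

static void *worker(void *arg) {
    int local = 0;   /* lives on THIS thread's private stack */
    local++;
    __sync_fetch_and_add(&shared_counter, 1);   /* shared data needs atomics or locks */
    printf("thread %ld: &local = %p (per-thread stack), &shared_counter = %p (shared)\n",
           (long)(size_t)arg, (void *)&local, (void *)&shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}
```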