Why place Mem[MA] in MB and then copy from MB to IR, rather than going straight from Mem[MA] to IR?

During the fetch stage of the fetch-execute cycle, why are the contents of the cell whose address is in MA (the memory address register) placed in MB (the memory buffer) and then copied to IR (the instruction register), rather than being placed directly in the IR?

In theory it would be possible to send instruction-fetch memory data directly to the IR (or to both the MB and the IR), but this would require extra hardware: wires and muxes.
You may notice that the architecture (depending on which one it is) makes use of few (one or two) buses, and this would effectively add another bus.  So I think that all we can say is that simplicity is the reason.  Back in the day when processors were this simple, transistor counts for integrated circuits were very limited.
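As a rough illustration, here is a minimal C sketch of the fetch micro-operations on a single-bus design (the register and memory names are just illustrative, not taken from any particular textbook machine).  Each step moves one value over the shared bus, which is why the memory's output has to be parked in MB before it can be routed into the IR on a later step:

    #include <stdint.h>

    #define MEM_WORDS 1024

    /* Illustrative single-bus machine state (names are assumptions). */
    static uint16_t mem[MEM_WORDS];
    static uint16_t PC, MA, MB, IR;

    /* Fetch on a single-bus design: one bus transfer per step, so the
     * memory data must be captured in MB before being sent to the IR. */
    static void fetch(void)
    {
        MA = PC;        /* step 1: PC -> MA over the bus            */
        MB = mem[MA];   /* step 2: Mem[MA] -> MB (memory interface) */
        IR = MB;        /* step 3: MB -> IR over the bus            */
        PC = PC + 1;    /* step 4: increment PC (often via the ALU) */
    }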
Going in the direction of making things more efficient: nowadays even simple processors separate instruction memory (usually a cache) from data memory (usually a cache).  This independence brings a number of improvements.  Take MIPS, for example, even the unpipelined single-cycle implementation:
First, the PC (program counter) register replaces the MA for the instruction-fetch side of things, and the IR replaces the MB (as if loading directly into that register, as you're suggesting).  But note that the IR can be reduced from a true register to a set of wires whose output is stable for the cycle, and which a decode unit can therefore work on directly.  (The stability comes from not sharing the instruction-memory hardware with the data-memory hardware; with only a single memory interface, data has to be copied around and stored somewhere so that the one interface can be shared between code and data.)
That saves not only the cycle you're referring to, transferring data from MB to IR, but also the cycle before it, needed to capture the data into the MB register in the first place.  (Generally speaking, enregistering some data costs a cycle, so if you can feed wires without enregistering, that's better, all other factors being equal.)
(Also, depending on the architecture you're looking at: the PC on MIPS uses a dedicated increment adder rather than sharing the main ALU and/or the buses for that increment, which could also save a cycle or two.)
Second, the data memory can meanwhile run concurrently with the instruction memory (a nice win), executing a data load from memory or a store to memory in parallel with the fetch of the next instruction.  The data side also forgoes the MB register as a temporary parking place; instead, memory data can be loaded directly into a processor register (the one specified by the load instruction).
Having two dedicated memories creates an independence that reduces the need for register capture while also allowing parallelism, at the cost of more hardware in the design.
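A minimal sketch of what that buys on a MIPS-like single-cycle datapath (heavily simplified; decode, the register file, and control are omitted, and the memory sizes are made up):

    #include <stdint.h>

    #define IMEM_WORDS 1024
    #define DMEM_WORDS 1024

    static uint32_t imem[IMEM_WORDS];   /* instruction memory */
    static uint32_t dmem[DMEM_WORDS];   /* data memory        */
    static uint32_t pc;

    /* One cycle of a simplified MIPS-like single-cycle machine:
     * no MA/MB/IR registers on the fetch path -- the instruction word
     * is just the stable output of the instruction memory this cycle. */
    static void cycle(void)
    {
        uint32_t instr = imem[pc / 4];   /* fetch: wires, not a register copy */
        pc += 4;                         /* dedicated +4 adder, not the ALU   */

        /* A load, for example, reads the data memory through its own
         * port, so it never competes with instruction fetch for a single
         * memory interface (and a pipelined version can overlap it with
         * the next instruction's fetch). */
        uint32_t addr   = 0;             /* placeholder effective address     */
        uint32_t loaded = dmem[addr / 4];
        (void)instr; (void)loaded;
    }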

Related

What is the TCM connection with the Icache in this RISC-V core?

In the middle of this page (https://github.com/ultraembedded/riscv), there is a block diagram of the core. I really do not know what the TCM is doing in the same block as the Icache. Is it an optional thing to have inside the CPU?
Some embedded systems provide dedicated memory for code and/or for data.  On some of these systems, Tightly-Coupled Memory serves as a replacement for the (instruction) cache, while on others it sits alongside a cache and covers a certain portion of the address space.  This dedicated memory may be on the same chip as the processor.
This memory could be some kind of ROM, or other memory that is initialized somehow prior to boot.  In any case, TCM typically isn't backed by main memory, so it doesn't suffer cache misses or need the associated circuitry, and it usually offers high performance, like a cache when a hit occurs.
Some systems refer to this as Instruction Tightly Integrated Memory, ITIM, or Data Tightly Integrated Memory, DTIM.
When a system uses ITIM or DTIM, it performs more like a Harvard architecture than the Modified Harvard architecture of laptops and desktops.
The cache has no address space. The CPU does not ask the cache for data; it just asks for data, and the memory controller first checks whether the data is present in the cache. If it is, the data is fetched from there; if not, the controller goes to RAM. All the processor does is ask for data; it does not care where the data came from. In the case of TCM, the CPU can directly write data to the TCM and read data from the TCM, since it has a specific address range. Think of the TCM as a RAM that is close to the CPU.
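A hedged sketch of that difference in C (the TCM window, size, and helper names are made up, not taken from the linked riscv core): the TCM occupies a fixed window of the address space and is accessed directly, while everything else goes through the transparent cache/main-memory path.

    #include <stdint.h>

    #define TCM_BASE  0x00010000u
    #define TCM_SIZE  0x00008000u          /* 32 KiB of tightly-coupled memory */

    static uint8_t tcm[TCM_SIZE];

    /* Stub standing in for the cache lookup; a miss would go to RAM. */
    static uint8_t cached_read(uint32_t addr) { (void)addr; return 0; }

    /* The TCM is just memory at a fixed address range: hits are guaranteed,
     * no tags, no miss handling.  The cache, by contrast, is transparent --
     * software never addresses "the cache", only the backing memory. */
    static uint8_t load_byte(uint32_t addr)
    {
        if (addr >= TCM_BASE && addr < TCM_BASE + TCM_SIZE)
            return tcm[addr - TCM_BASE];   /* direct, predictable-latency access */
        return cached_read(addr);          /* cache first, then main memory      */
    }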

HSA Data copy between RAM and GPU-RAM

Reading the wiki page about HSA, I found this block diagram.
I could not understand the benefit of passing a pointer over PCIe.
Does this avoid copying data from system memory to graphics memory?
As far as I understand, to process the data the pointer refers to, the GPU needs it to be present in graphics memory.
If you have separate graphics memory but you're doing HSA, you have to somehow unify the address spaces. The CPU can see graphics memory mapped into the physical address space, and the GPU can access main memory via DMA. You can set up the CPU and GPU with page tables that direct the same virtual addresses to the same place, which will require one of them to go (transparently) over the PCIe bus.
Where you save time and energy is that you don't have to copy everything you MIGHT want to access; the CPU and GPU access only the data they actually need to use.
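As a rough, hypothetical sketch of the contrast (the gpu_* functions below are stand-in names with placeholder bodies, not a real HSA or driver API):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in API: names, signatures, and bodies are
     * placeholders for illustration only. */
    static void *gpu_alloc(size_t bytes) { return malloc(bytes); }
    static void  gpu_copy_to_device(void *d, const void *s, size_t n) { memcpy(d, s, n); }
    static void  gpu_launch(void *data, size_t n) { (void)data; (void)n; }

    static void process_without_hsa(float *host_data, size_t n)
    {
        /* Classic discrete-GPU flow: copy the whole buffer up front,
         * whether or not the kernel touches all of it. */
        void *dev = gpu_alloc(n * sizeof(float));
        gpu_copy_to_device(dev, host_data, n * sizeof(float));
        gpu_launch(dev, n * sizeof(float));
    }

    static void process_with_hsa(float *host_data, size_t n)
    {
        /* Unified virtual addresses: the GPU dereferences the same pointer
         * the CPU uses; only the pages/lines it actually touches cross the
         * PCIe or DMA path. */
        gpu_launch(host_data, n * sizeof(float));
    }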

Are Condition Codes / Flags stored in the processor registers or the main memory?

I'm new to this, so I want to make sure that my comprehension of what I read is correct.
Also, registers are always processor registers, and there are no other registers that are not part of the processor (like registers in primary/secondary memory), correct?
Most architectures have a dedicated register for storing flags. Modern x86, for example, has one 32-bit register that stores all the flags. Storing the flags in main memory would make accessing them incredibly slow compared to a register. Some architectures do support moving the flags to another register or directly onto the stack, and vice versa.
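For intuition, here is a small C sketch of a made-up processor's flags register (the bit layout is invented, not x86's EFLAGS): the flags live in one register that the ALU updates as a side effect of each operation.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented flag layout for illustration -- not the real EFLAGS bits. */
    #define FLAG_ZERO   (1u << 0)
    #define FLAG_CARRY  (1u << 1)
    #define FLAG_SIGN   (1u << 2)

    static uint32_t flags;   /* the single flags register */

    /* An 8-bit add that sets condition codes as a side effect. */
    static uint8_t alu_add(uint8_t a, uint8_t b)
    {
        uint16_t wide = (uint16_t)a + b;
        uint8_t  r    = (uint8_t)wide;

        flags = 0;
        if (r == 0)      flags |= FLAG_ZERO;
        if (wide > 0xFF) flags |= FLAG_CARRY;
        if (r & 0x80)    flags |= FLAG_SIGN;
        return r;
    }

    int main(void)
    {
        alu_add(200, 100);   /* 300 wraps to 44 in 8 bits, so carry is set */
        printf("carry=%u zero=%u\n",
               !!(flags & FLAG_CARRY), !!(flags & FLAG_ZERO));
        return 0;
    }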
When talking about registers, most people are referring to registers in a processor. That's not to say there aren't any registers in your PC besides the ones in your CPU. GPUs, for example, also have registers. Your memory could have a register to temporarily store read/write addresses or keep track of other information, but when looking at processors you usually won't need to know about those.

MSI: Why do we need to write the line back when another CPU is going to overwrite it?

In the book "Computer Architecture", by Hennessy/Patterson, 5th ed, on page 360 they describe MSI protocol, and write something like:
If the line is in state "Exclusive" (Modified), then on receiving "Write Miss" from the bus the current CPU 1) writes back the line into the bus, and then 2) goes into "Invalid" state.
Why do we need to write-back the line, if it will be overwritten anyway by the successive write by the other CPU?
Is it connected with the fact that every CPU should see the same writes? (but I don't see why is it a problem not see this particular write by some other CPU)
Here is the protocol from their book (my question concerns the transition marked in green; the one marked in purple is clear: we need to write back in order to supply the line to the requesting CPU).
Writing the modified data back to memory is not strictly necessary in an MSI protocol. The state diagrams also seem to assume a system with low-cost memory access (data is supplied by memory even when it is found in the shared state in another cache) and a shared bus connecting to the memory interface.
However, the modified data cannot simply be dropped, as it could in the shared state, since the requesting processor might be modifying only part of the cache block (e.g., only one byte). Whatever portions of the block are not modified by the requesting processor must still be available, either in memory or at the requesting processor (the other processor has already invalidated its copy). With a shared bus and low-cost memory access, the cost difference between adding a write-back to memory and just communicating the data to the other processor is small.
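A minimal sketch of the transition in question (state names and structure are simplified relative to the book's diagram): on snooping a write miss for a line it holds Modified, a cache supplies/writes back the whole block before invalidating, because the requester may be writing only part of it.

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_BYTES 64

    typedef enum { INVALID, SHARED, MODIFIED } msi_state;

    typedef struct {
        msi_state state;
        uint8_t   data[BLOCK_BYTES];
    } cache_line;

    /* Bus/memory side, collapsed to one function for the sketch. */
    static uint8_t memory_block[BLOCK_BYTES];
    static void write_back(const uint8_t *block) { memcpy(memory_block, block, BLOCK_BYTES); }

    /* Snoop handler: another CPU issued a write miss for this block.
     * If we hold it Modified, ours is the only up-to-date copy, and the
     * writer may change only a byte or two -- so the rest of the block
     * must survive in memory and/or at the requester before we drop it. */
    static void on_bus_write_miss(cache_line *line)
    {
        if (line->state == MODIFIED)
            write_back(line->data);   /* supply the current block contents */
        line->state = INVALID;        /* then give up our copy             */
    }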
In addition, even on a word-addressed system with word-sized cache blocks, keeping the old data available allows the write miss request to be sent speculatively (as with out-of-order execution or prefetch-for-write) without correctness issues.
(Per-byte tracking of modified [or valid as a superset of modified] state would allow some data communication to be avoided at the cost of extra state bits and a more complex communication system.)

Where are multiple stacks and heaps put in virtual memory?

I'm writing a kernel and need (and want) to put multiple stacks and heaps into virtual memory, but I can't figure out how to place them efficiently. How do normal programs do it?
How (or where) are stacks and heaps placed into the limited virtual memory provided by a 32-bit system, such that they have as much growing space as possible?
For example, when a trivial program is loaded into memory, the layout of its address space might look like this:
[ Code Data BSS Heap-> ... <-Stack ]
In this case the heap can grow as big as virtual memory allows (e.g. up to the stack), and I believe this is how the heap works for most programs. There is no predefined upper bound.
Many programs have shared libraries that are put somewhere in the virtual address space.
Then there are multi-threaded programs that have multiple stacks, one for each thread. And .NET programs have multiple heaps, all of which have to be able to grow one way or another.
I just don't see how this is done reasonably efficiently without putting a predefined limit on the size of all the heaps and stacks.
I'll assume you have the basics of your kernel done: a trap handler for page faults that can map a virtual memory page to RAM. One level up, you need a virtual memory address space manager from which usermode code can request address space. Pick a segment granularity that prevents excessive fragmentation; 64 KB (16 pages) is a good number. Allow usermode code both to reserve space and to commit space. A simple bitmap of 4 GB / 64 KB = 64K x 2 bits keeps track of segment state and gets the job done. The page-fault trap handler also needs to consult this bitmap to know whether a page request is valid or not.
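A minimal sketch of that bookkeeping, assuming the 64 KB granularity suggested above (the encoding of the two state bits is my own choice):

    #include <stdint.h>

    #define SEG_SHIFT   16                        /* 64 KB segments         */
    #define SEG_COUNT   (1u << (32 - SEG_SHIFT))  /* 65536 segments in 4 GB */

    /* Two bits per segment: 0 = free, 1 = reserved, 2 = committed. */
    enum { SEG_FREE = 0, SEG_RESERVED = 1, SEG_COMMITTED = 2 };

    static uint8_t seg_bits[SEG_COUNT / 4];       /* 16 KB of state, total  */

    static unsigned seg_get(uint32_t seg)
    {
        return (seg_bits[seg / 4] >> ((seg % 4) * 2)) & 3u;
    }

    static void seg_set(uint32_t seg, unsigned state)
    {
        unsigned shift = (seg % 4) * 2;
        seg_bits[seg / 4] = (uint8_t)((seg_bits[seg / 4] & ~(3u << shift)) | (state << shift));
    }

    /* The page-fault handler can then ask: does this virtual address fall
     * in a reserved or committed segment, or is the fault a bad access? */
    static int address_is_valid(uint32_t vaddr)
    {
        return seg_get(vaddr >> SEG_SHIFT) != SEG_FREE;
    }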
A stack is a fixed-size VM allocation, typically 1 megabyte. A thread usually needs only a handful of pages of it, depending on function nesting depth, so reserve the 1 MB and commit only the top few pages. When the thread nests deeper, it trips a page fault and the kernel can simply map the extra page to RAM to let the thread continue. You'll also want to mark the bottom few pages as special: when the thread page-faults on those, you declare a stack overflow (this website's namesake).
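A sketch of how the page-fault handler might treat a fault inside a thread's reserved stack area (the structure, constants, and stubbed helpers are illustrative, not from any particular kernel):

    #include <stdint.h>

    #define PAGE_SIZE   4096u
    #define STACK_SIZE  (1u << 20)     /* 1 MB reserved per thread stack      */
    #define GUARD_PAGES 2u             /* bottom pages that must not be used  */

    struct thread_stack {
        uint32_t base;                 /* lowest address of the reservation   */
    };

    static int  map_page(uint32_t page_addr) { (void)page_addr; return 0; }  /* stub: commit a frame */
    static void raise_stack_overflow(void)   { }                             /* stub: kill the thread */

    /* Called from the page-fault handler when the faulting address lies
     * inside this thread's reserved stack range. */
    static void handle_stack_fault(struct thread_stack *stk, uint32_t fault_addr)
    {
        if (fault_addr < stk->base + GUARD_PAGES * PAGE_SIZE) {
            raise_stack_overflow();    /* fell off the bottom: overflow       */
            return;
        }
        map_page(fault_addr & ~(PAGE_SIZE - 1));   /* grow: commit that page  */
    }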
The most important job of the heap manager is to prevent fragmentation. The best way to do that is to create a lookaside list that partitions heap requests by size. Everything less than 8 bytes comes from the first list of segments, 8 to 16 bytes from the second, 16 to 32 from the third, and so on, increasing the bucket size as you go up. You'll have to play with the bucket sizes to get the best balance. Very large allocations come directly from the VM address manager.
The first time an entry in the lookaside list is hit, you allocate a new VM segment. You subdivide the segment into smaller blocks with a linked list. When such an allocation is released, you add the block to the list of free blocks. All blocks have the same size regardless of the program's request, so there won't be any fragmentation. When the segment is fully used and no free blocks are available, you allocate a new segment. When a segment contains nothing but free blocks, you can return it to the VM manager.
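A compressed sketch of the lookaside/bucket idea (sizes and the segment source are placeholders, malloc stands in for the VM manager, and a real implementation needs locking, per-block headers, and the segment-return logic described above):

    #include <stddef.h>
    #include <stdlib.h>

    #define SEG_SIZE    (64u * 1024u)     /* one VM segment per bucket refill */
    #define NUM_BUCKETS 8                 /* 8, 16, 32, ... 1024-byte blocks  */

    struct free_block { struct free_block *next; };

    static struct free_block *buckets[NUM_BUCKETS];   /* lookaside lists      */

    static void *vm_reserve_segment(void) { return malloc(SEG_SIZE); }  /* stand-in */

    static size_t bucket_index(size_t n)
    {
        size_t i = 0, size = 8;
        while (size < n) { size <<= 1; i++; }
        return i;
    }

    void *heap_alloc(size_t n)
    {
        size_t i = bucket_index(n);
        size_t block = (size_t)8 << i;

        if (i >= NUM_BUCKETS)                  /* very large: straight from the VM manager */
            return vm_reserve_segment();

        if (!buckets[i]) {                     /* bucket empty: carve up a fresh segment   */
            char *seg = vm_reserve_segment();
            for (size_t off = 0; off + block <= SEG_SIZE; off += block) {
                struct free_block *b = (struct free_block *)(seg + off);
                b->next = buckets[i];
                buckets[i] = b;
            }
        }

        struct free_block *b = buckets[i];     /* pop a block: every block in this bucket  */
        buckets[i] = b->next;                  /* is the same size, so no fragmentation    */
        return b;
    }

    void heap_free(void *p, size_t n)
    {
        size_t i = bucket_index(n);
        if (i >= NUM_BUCKETS) { free(p); return; }   /* large: back to the VM manager */
        struct free_block *b = p;
        b->next = buckets[i];
        buckets[i] = b;
    }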
This scheme allows you to create any number of stacks and heaps.
Simply put, since your system resources are always finite, you can't go limitless.
Memory management always consists of several layers, each with its own well-defined responsibility. From the program's perspective, only the application-level manager is visible, and it is usually concerned with a single allocated heap of its own. A level above that could deal with creating multiple heaps out of its one global heap, if needed, and assigning them to subprograms (each with its own memory manager). Above that could be the standard malloc()/free() that it uses, and above those is the operating system, dealing with pages and the actual memory allocation per process (it is basically not concerned with multiple heaps, or even with user-level heaps in general).
Memory management is costly, and so is trapping into the kernel. Combining the two could impose a severe performance hit, so what looks like the actual heap management from the application's point of view is really implemented in user space (the C runtime library) for the sake of performance (among other reasons out of scope for now).
When a shared (DLL) library is loaded at program startup, it will of course most probably be loaded next to CODE/DATA/etc., so no heap fragmentation occurs. On the other hand, if it is loaded at runtime, there's pretty much no option other than using up heap space.
Static libraries are, of course, simply linked into the CODE/DATA/BSS/etc sections.
At the end of the day, you'll need to impose limits on the heaps and stacks so that they're not likely to overflow, but you can allocate others.
If one needs to grow beyond that limit, you can either
Terminate the application with an error, or
Have the memory manager allocate/resize/move the memory block for that stack/heap, and most probably defragment the heap (at its own level) afterwards; that's why free() usually performs poorly.
Considering a pretty large 1 KB stack frame on every call as an average (which might happen if the application developer is inexperienced), a 10 MB stack would be sufficient for 10,240 nested calls. By the way, besides that, there's pretty much no need for more than one stack and one heap per thread.