Can someone briefly point out the differences between the memory bus and the address bus in computer architectures? Also, when you say "memory bus", does that imply you are referring to the data bus?
Beautifully explained here.
In isolation, the microprocessor, the memory and the input/output
ports are interesting components, but they cannot do anything useful.
In combination, they can form a complete system if they can
communicate with each other. This communication is accomplished over
bundles of signal wires (known as buses) that connect the parts of the
system together.
There are normally three types of bus in any processor system:
An address bus: this determines the location in memory that the processor will read data from or write data to.
A data bus: this contains the contents that have been read from the memory location or are to be written into the memory location.
A control bus: this manages the information flow between components, indicating whether the operation is a read or a write and ensuring that the operation happens at the right time.
Data bus:
The data bus is an electrical path that connects the CPU, memory, and the other hardware devices on the motherboard. The number of wires in the data bus affects the speed at which data can travel between components: since each wire can transfer one bit at a time, an 8-wire bus can move one byte at a time.
Address bus:
The reason the address bus is important is that the number of lines in it determines the maximum number of memory addresses: 8 address lines are enough to select 2^8 = 256 locations, while a 32-line address bus can select 2^32 locations (4 GiB).
A memory bus consists of an address bus (used to specify the memory address) and a data bus (used to carry the value being read from or written to that address).
When you read data from memory or write data to memory you operate with 2 different items, the address and the data. Somehow they have to be transferred between the CPU and memory. You can have two buses to transfer them independently. Or you can have just one and use it for both, one thing at a time.
Address and data buses may have different widths, that is, they may carry different number of bits.
Yes, "memory bus" usually means the data bus (the one that carries the memory data).
The data bus is a bidirectional bus used for fetching and storing data, whereas the address bus is a unidirectional bus used to specify the address.
Excellent narration here http://www.differencebetween.com/difference-between-address-bus-and-vs-data-bus/
Operating System: RHEL CentOS 7.9 (latest)
Operation:
Sending 500 MB chunks 21 times from one system to another, connected via Mellanox cables.
(Ethernet controller: Mellanox Technologies MT28908 Family [ConnectX-6])
(The registered memory region (500MB) is reused for all the 21 iterations.)
The gain in message send bandwidth when using aligned_alloc() (with the system page size of 4096 B) instead of malloc() for the registered memory is around 35 Gbps.
with malloc() : ~86 Gbps
with aligned_alloc() : ~121 Gbps
Since the CPU is not involved in these operations, how is the transfer faster with aligned memory?
Please provide useful reference links, if available, that explain this.
What change does aligned memory bring to the read/write operations?
Is it the address translation within the device that gets improved?
[Very limited information about this is available on the internet, hence asking here.]
RDMA operations use either MMIO or DMA to transfer data from the main memory to the NIC via the PCI bus - DMA is used for larger transfers.
The behavior you're observing can be entirely explained by the DMA component of the transfer. DMA operates at the physical level, and a contiguous region in the Virtual Address Space is unlikely to be mapped to a contiguous region in the physical space. This fragmentation incurs costs - there's more translation needed per unit of transfer, and DMA transfers get interrupted at physical page boundaries.
[1] https://www.kernel.org/doc/html/latest/core-api/dma-api-howto.html
[2] Memory alignment
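To make the allocation difference concrete, here is a minimal sketch (in C) comparing the two calls from the question. It only prints the page offset of each buffer; registering the buffer (e.g. with libibverbs' ibv_reg_mr()) and the actual RDMA transfer are omitted, and the 4096 B page size is an assumption taken from the question.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BUF_SIZE  (500UL * 1024 * 1024)  /* 500 MB chunk, as in the question */
#define PAGE_SIZE 4096UL                 /* assumed system page size         */

int main(void)
{
    /* malloc() only guarantees alignment for the largest standard type
     * (typically 16 B), so the buffer usually starts mid-page.           */
    void *unaligned = malloc(BUF_SIZE);

    /* aligned_alloc() (C11) returns a page-aligned pointer; the size must
     * be a multiple of the alignment (500 MB is a multiple of 4096 B).   */
    void *aligned = aligned_alloc(PAGE_SIZE, BUF_SIZE);

    if (!unaligned || !aligned) {
        perror("allocation failed");
        return 1;
    }

    printf("malloc        : %p (offset into page: %lu)\n",
           unaligned, (unsigned long)((uintptr_t)unaligned % PAGE_SIZE));
    printf("aligned_alloc : %p (offset into page: %lu)\n",
           aligned, (unsigned long)((uintptr_t)aligned % PAGE_SIZE));

    /* Either buffer could be passed to ibv_reg_mr(); the page-aligned one
     * maps onto whole physical pages, which is friendlier to the NIC's
     * DMA engine and its per-page address translation.                   */

    free(unaligned);
    free(aligned);
    return 0;
}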
I have two questions.
[image: memory region of the Cortex-M core CPU]
1- Is the memory of STM32 microcontrollers inside the Cortex-M core or outside of it? And if it is inside the core, why is it not shown in the block diagram of the Cortex-M generic user guide? [image: block diagram of the Cortex-M core]
2- I'm trying to understand the STM32 architecture, but I'm facing an ambiguity.
[image: USART block diagram]
As you can see in the picture, the reference manual says that the USART unit has some registers (e.g. the Data Register).
But these registers also appear in the memory region of the Cortex-M core (if the answer to the first question is "inside"). Where are they really? Are there two copies of each register? Do they reside in the Cortex-M core or in the peripheral itself?
Is this related to the definition of memory-mapped I/O?
The only storage that's inside the CPU core is the registers (including general-purpose and special-purpose registers). Everything else is external, including RAM and ROM.
The peripheral control registers exist, essentially, inside the peripheral. However they are accessed by the CPU in the same way that it accesses RAM or ROM; that's the meaning of the memory map, it shows you which addresses refer to RAM, ROM, peripheral registers, and other things. (Note that most of the memory map is unused - a 32-bit address space is capable of addressing 4GB of memory, and no microcontroller that I know of has anything like that much storage.) The appropriate component 'responds' to read and write requests on the memory bus depending on the address.
For a basic overview the Wikipedia page on memory-mapped IO is reasonably good.
Note that none of this is specific to the Cortex-M. Almost all modern microprocessor designs use memory mapping. Note also that the actual bus architecture of the Cortex-M is reasonably complex, so any understanding you gain from the Wikipedia article will be of an abstraction of the true implementation.
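As a minimal sketch of what "accessed in the same way as RAM or ROM" looks like in C: a peripheral register is just a volatile pointer to a fixed address. The address below is illustrative (it roughly matches USART1's data register on some STM32F4 parts); always take the real value from your device's reference manual.

#include <stdint.h>

/* Illustrative address only - check the memory map in your device's
 * reference manual for the actual peripheral base and register offset. */
#define USART1_DR  (*(volatile uint32_t *)0x40011004UL)

void send_byte(uint8_t b)
{
    /* To the core this is an ordinary store; the bus matrix routes it to
     * the USART instead of to RAM because of the address.                */
    USART1_DR = b;
}

uint8_t receive_byte(void)
{
    /* A load from the same address returns whatever the peripheral drives
     * onto the bus, which is why the pointer must be volatile.           */
    return (uint8_t)USART1_DR;
}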
Look at the image below, showing the block diagram of an STM32 Cortex-M4 processor.
I have highlighted the CPU Core (top left); and other components you can find inside the microcontroller.
The CPU "core", as its name implies, is just the "core"; but the microcontroller also integrates a Flash memory, a RAM, and a number of peripherals; almost everything outside the core (except debugging lines) is accessed by means of the bus matrix, this is equally true for ROM, RAM, and integrated peripherals.
Note that the main difference between a "microprocessor" and a "microcontroller" is that a the latter has dedicated peripherals on board.
Peripherals on STM32 devices are accessed by the CPU via memory-mapped I/O, look at the picture below:
As you can see, despite a linear address space from 0x00000000 to 0xFFFFFFFF, the address space is partitioned in "segments", f.e., program memory starting at 0x00000000, SRAM at 0x20000000, peripherals at 0x40000000. Specific peripheral registers can be read/written by pointers at specific offsets from the base address.
For this device, USARTS are mapped to APB1 area, thus in address range 0x40000000-0x4000A000. Note that the actual peripheral addresses can be different from device to device.
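Here is a hedged sketch of the usual CMSIS-style way to express this in C: a struct describing the register layout, placed at the peripheral's base address. The layout and the 0x40004400 base for USART2 are modeled on common STM32 parts, but both are illustrative - take the real offsets, base addresses, and bit positions from the reference manual for your exact device.

#include <stdint.h>

/* Register layout loosely modeled on an STM32F1-style USART. */
typedef struct {
    volatile uint32_t SR;   /* status register    */
    volatile uint32_t DR;   /* data register      */
    volatile uint32_t BRR;  /* baud rate register */
    volatile uint32_t CR1;  /* control register 1 */
} USART_TypeDef;

/* USART2 lives in the APB1 range mentioned above on many STM32 devices. */
#define USART2  ((USART_TypeDef *)0x40004400UL)

void usart2_send(uint8_t byte)
{
    /* Wait until the transmit data register is empty (TXE flag, bit 7 of
     * SR on the parts this sketch is modeled on).                        */
    while ((USART2->SR & (1u << 7)) == 0) {
        /* busy-wait */
    }
    /* This store goes out over the bus matrix to the USART's DR, not RAM. */
    USART2->DR = byte;
}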
Peripherals are connected to the core via buses. The address decoder knows which address is handled by which bus.
It is not only peripherals that are connected via buses; memories are connected the same way. Buses are interconnected via bridges, and those bridges know how to direct the traffic.
From the core's point of view, a peripheral register works the same way as a memory cell.
What about the gaps? Usually, if the address decoder does not recognize the address, it will generate an exception - a hardware error (called a HardFault in ARM terminology).
The details are very complicated and, unless you are going to design your own chip, are not needed by the register-level programmer.
While studying the concept of paging in memory management, I came across the terms "logical memory" and "physical memory". Can anyone please tell me the difference between the two?
Does physical memory = hard disk
and logical memory = RAM?
There are three related concepts here:
Physical -- An actual device
Logical -- A translation to a physical device
Virtual -- A simulation of a physical device
The term "logical memory" is rarely used because we normally use the term "virtual memory" to cover both the virtual and logical translations of memory.
In an address translation, we have a page index and a byte index into that page.
The page index of the Nth page in the process could be called logical memory. The operating system redirects the ordinal page number to some arbitrary physical address.
The reason this is rarely called logical memory is that the page may be simulated using paging, becoming a virtual address.
Address translation is a combination of logical and virtual. The normal usage is to just call the whole thing "virtual memory."
We can imagine that in the future, as memory grows, that paging will go away entirely. Instead of having virtual memory systems we will have logical memory systems.
Not a lot of clarity here thus far, here goes:
Physical Memory is what the CPU addresses on its address bus. It's the lowest level software can get to. Physical memory is organized as a sequence of 8-bit bytes, each with a physical address.
Every application having to manage its memory at a physical level is obviously not feasible. So, since the early days, CPUs introduced abstractions of memory known collectively as "Memory Management." These are all optional, but ubiquitous, CPU features managed by your kernel:
Linear Memory is what user-level programs address in their code. It's seen as a contiguous addresses space, but behind the scenes each linear address maps to a physical address. This allows user-level programs to address memory in a common way and leaves the management of physical memory to the kernel.
However, it's not so simple. User-level programs address linear memory using different memory models. One you may have heard of is the segmented memory model. Under this model, programs address memory using logical addresses. Each logical address refers to a table entry which maps to a linear address space. In this way, the o/s can break up an application into different parts of memory as a security feature (details out of scope for here)
In Intel 64-bit (IA-32e, 64-bit submode), segmented memory is never used, and instead every program can address all 2^64 bytes of linear address space using a flat memory model. As the name implies, all of linear memory is available at a byte-accessible level. This is the most straightforward.
Finally we get to Virtual Memory. This is a feature of the CPU facilitated by the MMU, totally unseen to user-level programs, and managed by the kernel. It allows physical addresses to be mapped to virtual addresses, organized as tables of pages ("page tables"). When virtual memory ("paging") is enabled, tables can be loaded into the CPU, causing memory addresses referenced by a program to be translated to physical addresses transparently. Page tables are swapped in and out on the fly by the kernel when different programs are run. This allows for optimization and security in process/memory management (details out of scope for here)
Keep in mind, Linear and Virtual memory are independent features which can work in conjunction. If paging is disabled, linear addresses map one-to-one with physical addresses. When enabled, linear addresses are mapped to virtual memory.
Notes:
This is all linux/x86 specific but the same concepts apply almost everywhere.
There are a ton of details I glossed over
If you want to know more, read the Intel® 64 and IA-32 Architectures Software Developer's Manual, from which I plagiarized most of this
I'd like to add a simple answer here.
Physical memory: This is the memory that is actually present, and every process needs space here to execute its code.
Logical memory:
To a user program, memory appears contiguous. Suppose a program needs 100 MB of space in memory; to this program, a virtual address space / logical address space starts at 0 and continues up to some finite number. These addresses are generated by the CPU, and the MMU then maps each virtual address to a real physical address through a page table (or however the mapping is implemented).
Please correct me or add some more content here. Thanks !
Physical memory is RAM; it actually belongs to main memory. A logical address is the address generated by the CPU. In paging, a logical address is mapped to a physical address with the help of page tables. A logical address consists of a page number and an offset.
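A small worked example of that page number + offset split, assuming 4 KiB pages and a made-up frame number standing in for the page-table lookup:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE  4096u  /* assumed page size */
#define PAGE_SHIFT 12     /* log2(PAGE_SIZE)   */

int main(void)
{
    uint32_t logical = 0x00012ABCu;                    /* arbitrary logical address */

    uint32_t page_number = logical >> PAGE_SHIFT;      /* index into the page table */
    uint32_t offset      = logical & (PAGE_SIZE - 1);  /* byte within the page      */

    /* The page table (set up by the OS, walked by the MMU) maps the page
     * number to a frame number; the offset is carried over unchanged.    */
    uint32_t frame_number = 0x00235u;                  /* pretend lookup result     */
    uint32_t physical = (frame_number << PAGE_SHIFT) | offset;

    printf("logical  0x%08X -> page 0x%X, offset 0x%X\n", logical, page_number, offset);
    printf("physical 0x%08X (frame 0x%X)\n", physical, frame_number);
    return 0;
}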
An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit—that is, the one loaded into the memory-address register of the memory—is commonly referred to as a physical address
The physical address is the actual address of the frame where each page will be placed, whereas the logical address is the address generated by the CPU for each page.
What exactly is a frame?
Processes are brought from secondary memory into main memory using the paging technique.
Processes are kept in secondary memory as non-contiguous pages, which means they are stored in scattered locations.
Those non-contiguous pages are loaded into frames in main memory by the paging operating system.
The operating system divides main memory into frames of equal size, and the pages of processes retrieved from secondary memory are stored in these frames, so several processes can be resident concurrently.
I've read about the difference between port-mapped I/O and memory-mapped I/O, but I can't figure out how memory-mapped I/O is implemented in modern operating systems (Windows or Linux).
What I know is that a part of the physical memory is reserved to communicate with the hardware, and there's an MMIO unit involved in taking care of the bus communication and other memory-related stuff.
How would a driver communicate with underlying hardware? What are the functions that the driver would use? Are the addresses to communicate with a video card fixed or is there some kind of "agreement" before using them?
I'm still rather confused
The following statement in your question is wrong:
What I know is that a part of the physical memory is reserved to communicate with the hardware
A part of the physical memory is not reserved for communication with the hardware. A part of the physical address space, to which the physical memory and memory-mapped I/O are mapped, is. This layout is permanent, but user programs do not see it directly - instead, they run in their own virtual address space, into which the kernel can map physical memory and I/O ranges wherever it wants.
You may want to read the following articles which I believe contain answers to most of your questions:
http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation
http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory
http://en.wikipedia.org/wiki/Memory-mapped_I/O
http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/IO/mapped.html
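To answer the "what functions would a driver use" part for Linux: a driver typically maps the device's register window with ioremap() and then uses the MMIO accessors readl()/writel(). A rough sketch follows; the device name, base address, size, and register offset are hypothetical placeholders - a real driver would get them from a PCI BAR or the device tree rather than hard-coding them.

#include <linux/io.h>
#include <linux/module.h>

/* Hypothetical values for illustration only. */
#define MYDEV_PHYS_BASE  0xFED00000UL   /* physical address of the register window */
#define MYDEV_REG_SIZE   0x1000         /* size of the window                      */
#define MYDEV_CTRL_REG   0x04           /* offset of a control register            */

static void __iomem *mydev_regs;

static int __init mydev_init(void)
{
    /* Map the device's physical registers into kernel virtual address space. */
    mydev_regs = ioremap(MYDEV_PHYS_BASE, MYDEV_REG_SIZE);
    if (!mydev_regs)
        return -ENOMEM;

    /* readl()/writel() perform the MMIO accesses with the proper width and
     * ordering; plain pointer dereferences are not used for device memory. */
    writel(0x1, mydev_regs + MYDEV_CTRL_REG);
    pr_info("mydev: status = 0x%08x\n", readl(mydev_regs));

    return 0;
}

static void __exit mydev_exit(void)
{
    iounmap(mydev_regs);
}

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");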
Essentially it is just a form of accessing data, as if you were writing to or reading from memory. But the hardware snoops on the address bus, and when it sees an address targeted at it, it simply picks up the data from the data bus.
Are you asking about memory-mapped files, or memory-mapped I/O?
Memory-mapped files are done by paging out the pages and intercepting page faults on those addresses. This is all done by the OS through negotiation between the file-system manager and the page-fault handler.
Memory-mapped I/O is done at the CPU/chipset level: certain address ranges are decoded so that reads and writes to them are routed over the system interconnect (e.g. QPI/PCIe) to the device instead of to RAM. This is all done by the processor interacting with the motherboard. The only other thing the OS needs to do is tell the MMU not to coalesce or cache those reads and writes, via the page-table write-through and cache-disable bits.
Please explain the difference between memory-mapped I/O and I/O-mapped I/O.
Uhm... unless I misunderstood, you're talking about two completely different things. I'll give you two very short explanations so you can google up what you need to know.
Memory-mapped I/O means mapping I/O hardware devices' memory into the main memory map. That is, there will be addresses in the computer's memory that won't actually correspond to your RAM, but to internal registers and memory of peripheral devices. This is the machine architecture Pointy was talking about.
There's also memory-mapped file I/O, which means taking (say) a file and having the OS load portions of it into memory for faster access later on. In Unix, this can be accomplished through mmap().
I hope this helped.
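A minimal sketch of that mmap() usage: it maps an existing file read-only into the process's address space and then reads it like ordinary memory. The file name is just a placeholder.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "example.txt";          /* placeholder file name */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only; the kernel brings pages in on demand
     * via page faults the first time each page is touched.               */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* The file contents can now be read like ordinary memory. */
    fwrite(data, 1, st.st_size, stdout);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}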
On x86 there are two different address spaces, one for memory, and another one for I/O ports.
The port address space is limited to 65536 ports, and is accessed using the IN/OUT instructions.
As an example, a video card's VGA functionality can be accessed using some I/O ports, but the framebuffer is memory-mapped.
Other CPU architectures only have one address space. In those architectures, all devices are memory-mapped.
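For completeness, here is a small sketch of the port address space being accessed from user space on x86 Linux, using ioperm()/inb() from <sys/io.h>. It reads the legacy keyboard-controller status port purely as an illustration; it needs root privileges and only works on x86.

/* x86 Linux only; run as root so ioperm() can grant access to the port. */
#include <stdio.h>
#include <sys/io.h>

#define KBD_STATUS_PORT 0x64   /* legacy keyboard controller status port */

int main(void)
{
    /* Ask the kernel for permission to access this single I/O port. */
    if (ioperm(KBD_STATUS_PORT, 1, 1) != 0) {
        perror("ioperm");
        return 1;
    }

    /* inb() compiles to an IN instruction: it reads from the separate
     * I/O address space, not from memory.                              */
    unsigned char status = inb(KBD_STATUS_PORT);
    printf("keyboard controller status: 0x%02x\n", status);

    ioperm(KBD_STATUS_PORT, 1, 0);  /* drop the permission again */
    return 0;
}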
Memory mapped I/O is mapped into the same address space as program memory and/or user memory, and is accessed in the same way.
Port mapped I/O uses a separate, dedicated address space and is accessed via a dedicated set of microprocessor instructions.
As 16-bit processors have become obsolete and been replaced by 32-bit and 64-bit processors in general use, reserving ranges of the memory address space for I/O is less of a problem, as the processor's memory address space is usually much larger than the space required for all the memory and I/O devices in a system.
Therefore, it has become more frequently practical to take advantage of the benefits of memory-mapped I/O.
The disadvantage to this method is that the entire address bus must be fully decoded for every device. For example, a machine with a 32-bit address bus would require logic gates to resolve the state of all 32 address lines to properly decode the specific address of any device. This increases the cost of adding hardware to the machine.
The advantage of an I/O-mapped I/O system is that less logic is needed to decode a discrete address, and therefore it costs less to add hardware devices to a machine. However, more instructions may be needed.
Ref:- Check This link
I have one more clear difference between the two: a memory-mapped I/O device is an I/O device that responds when the IO/M signal is low, while an I/O-mapped (peripheral-mapped) I/O device is one that responds when IO/M is high.
The difference between the two schemes lies within the microprocessor / microcontroller itself. Intel has, for the most part, used the I/O-mapped scheme for its microprocessors, while Motorola has used the memory-mapped scheme.
https://techdhaba.com/2018/06/16/memory-mapped-i-o-vs-i-o-mapped-i-o/