Memory-mapped I/O: who maps the addresses to the physical address space?

When we say that a device is memory mapped:
1. Who maps the addresses to the devices?
2. How are these address spaces decided, in terms of location and size?
3. Where are these mappings stored?
4. Do these address spaces vary across system boots?

Roughly:
1. The MMU hardware.
2. The kernel manages the MMU tables that the MMU hardware walks.
3. In a per-process structure. Under Linux, look at /proc/<pid>/maps to see all memory-mapped files and devices for a process.
4. They can, so you should not count on them being fixed.
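To see this in practice, here is a minimal C program that dumps its own map via /proc/self/maps; each printed line is one mapped region, including memory-mapped files and devices:

    #include <stdio.h>

    /* Dump this process's memory map. Each line of /proc/self/maps
     * describes one mapped region: address range, permissions, offset,
     * and the backing file or device, if any. */
    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        char line[512];

        if (!f)
            return 1;
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }
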
For further reading, I suggest the "Memory Mapping and DMA" chapter of Linux Device Drivers, this FAQ, and this Stack Overflow question.


Which firmware boots up when we turn on the power?

All I know is that it helps initialize the processor hardware and load the operating system.
First, I need to know what firmware is and how it works.
Showing a list of firmware types and what each does would probably be a good way to explain.
Typically, a microprocessor performs a start-up process that usually takes the form of "begin executing the code found at a specific address" or "look for a multibyte code at a fixed location and jump to the indicated address to begin execution".
Since the introduction of IC ROM, in its many variants (including, but not limited to, mask-programmed ROMs, programmable ROMs, and EPROMs), firmware boot programs have been shipped installed on computers. An early example of a computer booting from an external ROM was an Italian telephone switching computer called "Gruppi Speciali".
When a computer is turned off, its software remains stored on nonvolatile devices such as HDDs, CDs, DVDs, SD cards, USB drives, floppies, etc. When the computer is powered on, it does not yet have an operating system or its loader in RAM, so it first executes a relatively small program stored in ROM, along with a small amount of data needed to access the nonvolatile device(s) from which the operating system's programs and data can be loaded into RAM. This small program is known as a bootstrap loader.

Mapping of Host System Memory to PCI Domain Addresses

My understanding of PCI
The host CPU is responsible for assigning PCI domain addresses to all other devices on the PCI bus by setting each device's BAR registers in PCI configuration space.
The host CPU can map the PCI address domain into its own domain (i.e., the system domain), so that host-initiated "PCI memory transactions" with devices on the PCI bus can be performed using the host CPU's ordinary load/store instructions.
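For concreteness, here is a small userspace sketch that reads back BAR0 of one device through Linux sysfs; the device address 0000:00:02.0 is only an example and would need to be a real bus/device/function:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: read back BAR0 of one PCI device through Linux sysfs.
     * The device address 0000:00:02.0 is only an example. */
    int main(void)
    {
        FILE *f = fopen("/sys/bus/pci/devices/0000:00:02.0/config", "rb");
        uint32_t bar0;

        if (!f)
            return 1;
        fseek(f, 0x10, SEEK_SET);          /* BAR0 lives at offset 0x10 */
        if (fread(&bar0, sizeof bar0, 1, f) != 1)
            return 1;
        fclose(f);

        if (bar0 & 1)                      /* bit 0 set: I/O space BAR */
            printf("BAR0: I/O ports at 0x%x\n", (unsigned)(bar0 & ~0x3u));
        else                               /* bit 0 clear: memory BAR */
            printf("BAR0: memory at 0x%x\n", (unsigned)(bar0 & ~0xFu));
        return 0;
    }
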
Question:
Is it possible for the system memory, i.e., the host's main memory (actual RAM), to also be mapped to a PCI domain address, so that when the host system is the target of a "PCI memory transaction" initiated by a device on the PCI bus, main memory is read/written without the intervention of the host CPU?
Additional information: I am working on an embedded system with three SH4 processors communicating over a PCI bus.
There are two kinds of memory mapping in the PCIe world: inbound mapping and outbound mapping.
Inbound mapping: the memory space is located on the device, and the host CPU can access the mapped memory space.
Outbound mapping: the memory space is located on the host CPU, and the device can access the mapped memory space.
The two may look the same, but the difference is important. With this feature, you do not need any additional memory copy to communicate between the host CPU and the device.
I realize this is an old question, but I'd like to answer it anyway. When you say "transaction initiated by a device on the PCI bus", I assume you mean a read/write initiated by the device to access system memory (RAM). This is called bus mastering on the device (also referred to as DMA), and it can be done by having the host CPU allocate a DMA buffer (e.g., with dma_alloc_coherent()) and having the driver provide this DMA address to the device. Then yes, the device can read/write system memory without host CPU intervention.
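To make this concrete, here is a minimal, untested sketch of the driver side; the register offset MY_DEVICE_DMA_ADDR_REG is made up for illustration, since where the DMA address gets written is entirely device-specific:

    #include <linux/dma-mapping.h>
    #include <linux/io.h>
    #include <linux/pci.h>

    #define MY_DEVICE_DMA_ADDR_REG 0x10   /* hypothetical device register */

    /* Allocate a coherent DMA buffer and hand its bus address to the
     * device. cpu_addr is what the driver reads/writes; dma_handle is
     * the address the device puts on the bus to reach the same memory. */
    static int setup_dma(struct pci_dev *pdev, void __iomem *regs)
    {
        dma_addr_t dma_handle;
        void *cpu_addr;

        cpu_addr = dma_alloc_coherent(&pdev->dev, PAGE_SIZE,
                                      &dma_handle, GFP_KERNEL);
        if (!cpu_addr)
            return -ENOMEM;

        /* Tell the device where the buffer lives (register is made up). */
        iowrite32(lower_32_bits(dma_handle), regs + MY_DEVICE_DMA_ADDR_REG);
        return 0;
    }
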
Yes, it is very much possible. You can also use the DMA functionality of PCIe to "bypass" the CPU.

How does the OS detect hardware?

Does the OS get this information from the BIOS, or does it scan the buses on its own to detect what hardware is installed in the system? Having looked around online, different sources say different things: some say the BIOS detects the hardware and stores it in memory, which the OS then reads; others say the OS scans the buses (e.g., PCI) to learn of the hardware.
I would have thought that a modern OS would ignore the BIOS and do it itself.
Any help would be appreciated.
Thanks.
Generally speaking, most modern OSes (Windows and Linux) re-scan the detected hardware as part of the boot sequence. Trusting the BIOS to detect everything and have it set up properly has proven unreliable.
In a typical x86 PC, a combination of techniques is used to detect attached hardware.
PCI and PCI Express buses have a standard mechanism called Configuration Space that you can scan to get a list of attached devices. This includes devices installed in PCI/PCIe slots, as well as the controller(s) in the chipset (video controller, SATA, etc.).
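To give a flavor of what that scan looks like, below is a sketch of the legacy x86 access mechanism (I/O ports 0xCF8/0xCFC); the inl/outl/handle_device helpers are assumed to be provided by your platform:

    #include <stdint.h>

    /* Assumed platform helpers: raw port I/O (inline asm on bare
     * metal) and a callback invoked for each device found. */
    uint32_t inl(uint16_t port);
    void outl(uint16_t port, uint32_t value);
    void handle_device(int bus, int dev, uint16_t vendor, uint16_t device);

    #define PCI_CONFIG_ADDRESS 0xCF8
    #define PCI_CONFIG_DATA    0xCFC

    /* Read one 32-bit dword from PCI configuration space. */
    static uint32_t pci_config_read(uint8_t bus, uint8_t dev,
                                    uint8_t func, uint8_t offset)
    {
        uint32_t addr = (1u << 31)              /* enable bit */
                      | ((uint32_t)bus  << 16)
                      | ((uint32_t)dev  << 11)
                      | ((uint32_t)func << 8)
                      | (offset & 0xFC);
        outl(PCI_CONFIG_ADDRESS, addr);
        return inl(PCI_CONFIG_DATA);
    }

    /* Brute-force scan: a vendor ID of 0xFFFF means "no device here". */
    void pci_scan(void)
    {
        for (int bus = 0; bus < 256; bus++)
            for (int dev = 0; dev < 32; dev++) {
                uint32_t id = pci_config_read(bus, dev, 0, 0x00);
                if ((id & 0xFFFF) != 0xFFFF)
                    handle_device(bus, dev, id & 0xFFFF, id >> 16);
            }
    }
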
If an IDE or SATA controller is detected, the OS/BIOS must talk to the controller to get a list of attached drives.
If a USB controller is detected, the OS/BIOS loads a USB protocol stack and then enumerates the attached hubs and devices.
For "legacy" ISA devices, things are a little more complicated. Even if your motherboard does not have an ISA slot on it, you typically still have a number of "ISA" devices in the system (Serial Ports, Parallel Ports, etc). These devices typically lack a truly standardized auto-detection method. To detect these devices, there are 2 options:
Probe known addresses - Serial Ports are usually at 0x3F8, 0x2F8, 0x3E8, 0x2E8, so read from those addresses and see if there is something there that looks like a serial port UART. This is far from perfect. You may have a serial port at a non-standard address that are not scanned. You may also have a non-serial port device at one of those addresses that does not respond well to being probed. Remember how Windows 95 and 98 used to lock up a lot when detecting hardware during installation?
ISA Plug-n-Play - This standard was popular for a hot minute as ISA was phased out in favor of PCI. You probably will not encounter many devices that support this. I believe ISA PnP is disabled by default in Windows Vista and later, but I am struggling to find a source for that right now.
ACPI Enumeration - The OS can rely on the BIOS to describe these devices in ASL code. (See below.)
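Here is the probe sketch mentioned in the first option; inb/outb and found_serial are assumed helpers, and passing the scratch-register test only means something UART-like answers at that address:

    #include <stdint.h>

    /* Assumed platform helpers: raw port I/O (inline asm on bare
     * metal) and a callback invoked for each port found. */
    uint8_t inb(uint16_t port);
    void outb(uint16_t port, uint8_t value);
    void found_serial(uint16_t base);

    /* 16550-style presence test: the scratch register (offset 7) has
     * no side effects, so if two different values written there read
     * back intact, something UART-like probably lives at this base. */
    static int probe_serial(uint16_t base)
    {
        outb(base + 7, 0x55);
        if (inb(base + 7) != 0x55)
            return 0;
        outb(base + 7, 0xAA);
        return inb(base + 7) == 0xAA;
    }

    /* Probe the four conventional COM port base addresses. */
    void scan_serial(void)
    {
        static const uint16_t com_bases[] = { 0x3F8, 0x2F8, 0x3E8, 0x2E8 };

        for (int i = 0; i < 4; i++)
            if (probe_serial(com_bases[i]))
                found_serial(com_bases[i]);
    }
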
Additionally, there may be a number of non-PnP devices in the system at semi-fixed addresses, such as a TPM chip, HPET, or those "special" buttons on laptop keyboards. For these devices to be explained to the OS, the standard method is to use ACPI.
The BIOS ACPI tables should provide a list of on-motherboard devices to the OS. These tables are written in a language called ASL (or AML for the compiled form). At boot time, the OS reads in the ACPI tables and enumerates any described devices. Note that for this to work, the motherboard manufacturer must have written their ASL code correctly. This is not always the case.
And of course, if all of the auto-detection methods fail you, you may be forced to manually install a driver. You do this through the Add New Hardware Wizard in Windows. (The exact procedure varies depending on the Windows version you have installed.)
I see a lot of info here about system hardware, except for memory, one of the most important parts besides the CPU, which funnily enough isn't really mentioned either.
This is fair, because there are so many things to enumerate that you kind of lose sight of the forest for the trees.
For memory on x86/x86-64 platforms, you will want to query either the BIOS or EFI for a memory map. For BIOS, this is int 0x15 with eax = 0xE820. EFI has its own mechanism that provides similar information.
This shows you which memory ranges are reserved by hardware, etc., so your OS knows to leave them alone. (OK, you have to build that part too, of course. ;D)
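For reference, each entry returned by int 0x15 / eax = 0xE820 has roughly this layout (a sketch in C; issuing the call itself must be done from real mode or via a trampoline):

    #include <stdint.h>

    /* One BIOS E820 memory map entry, as returned by INT 15h, EAX=E820h. */
    struct e820_entry {
        uint64_t base;     /* physical start address of the range */
        uint64_t length;   /* size of the range in bytes */
        uint32_t type;     /* 1 = usable RAM, 2 = reserved,
                              3 = ACPI reclaimable, 4 = ACPI NVS, 5 = bad */
        uint32_t attrs;    /* ACPI 3.0 extended attributes; older BIOSes
                              return 20-byte entries without this field */
    } __attribute__((packed));
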
On other platforms, the OS is often configured for a fixed memory size, as on embedded systems. There is no BIOS for you there, and brute-forcing memory is unreliable at best. (As far as I know; I don't have much experience outside of x86/64!)
For the CPU, you will definitely want to look into MSRs, control registers, and the CPUID functions to enumerate the CPU and see what it is capable of. You can query, for example, whether 64-bit mode is supported, along with other features that might not be present on all CPUs.
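For example, checking for 64-bit long mode with the cpuid.h header shipped by GCC and Clang looks roughly like this (the long-mode flag is bit 29 of EDX in extended leaf 0x80000001):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Extended leaf 0x80000001: EDX bit 29 is the "LM" (long mode)
         * flag, i.e. 64-bit support. __get_cpuid returns 0 if the leaf
         * is not available on this CPU. */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            printf("64-bit long mode: %s\n",
                   (edx & (1u << 29)) ? "supported" : "not supported");
        return 0;
    }
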
For other hardware like PCI, I would recommend, as myron-semack said, looking into the PCI specification, PCI Express, and importantly ACPI, since implementing that will let you handle hardware and power management a bit more generically and according to newer standards.

BIOS and real mode

It is said that the BIOS program can only be seen in real mode. It is also known that the BIOS is stored in ROM, but what the CPU usually addresses is RAM. Does this mean that in real mode, some of the memory space is mapped to ROM, so that we can see the BIOS program?
The physical address space is more than just RAM. It contains ROM and memory-mapped devices, such as APICs and video memory. The main reason you cannot use the BIOS from outside of real mode is that it was written to be used in real mode. Some functions may work in 16-bit protected mode, and more will work in Virtual 8086 mode, but trying to call the wrong function could cause your system to crash. Also, interrupts work differently in protected mode than real mode, so you would have to remap the functions.
Another reason the BIOS could be unavailable outside of real mode is paging. Paging is the process of mapping virtual addresses to physical addresses. If the operating system uses paging, it could choose not to map the pages containing ROM into virtual memory at all, so they would effectively not be there, and therefore impossible to call. ROM still takes some of the physical address space, but is unavailable through virtual memory.
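To illustrate that last point, a kernel module can make the legacy BIOS area visible again by mapping it explicitly. This is a sketch, assuming an x86 machine where the BIOS occupies physical 0xF0000-0xFFFFF:

    #include <linux/io.h>
    #include <linux/module.h>

    /* Sketch: map the legacy BIOS area (physical 0xF0000-0xFFFFF) into
     * kernel virtual address space. The ROM is always in the physical
     * address space; it only becomes addressable once paging maps it. */
    static int __init bios_peek_init(void)
    {
        void __iomem *bios = ioremap(0xF0000, 0x10000);

        if (!bios)
            return -ENOMEM;
        pr_info("first BIOS bytes: %02x %02x\n",
                ioread8(bios), ioread8(bios + 1));
        iounmap(bios);
        return 0;
    }

    static void __exit bios_peek_exit(void) { }

    module_init(bios_peek_init);
    module_exit(bios_peek_exit);
    MODULE_LICENSE("GPL");
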

Direct communication between two PCI devices

I have a NIC and an HDD, both connected to PCIe slots in a Linux machine. Ideally, I'd like to funnel incoming packets to the HDD without involving the CPU, or involving it minimally. Is it possible to set up direct communication across the PCI bus like that? Does anyone have pointers on what to read up on to get started on a project like this?
Thanks all.
Not sure if you are asking about PCI or PCIe. You used both terms, and the answer is different for each.
If you are talking about a legacy PCI bus: The answer is "yes". Board to board DMA is doable. Video capture boards may DMA video frames directly into your graphics card memory for example.
In your example, the NIC could DMA directly to a storage device. However, the data would be quite "raw"; your NIC has no concept of a filesystem, for example. You also need to make sure you can program the NIC's DMA engine to stay within the confines of your SATA controller's registers. You don't want to walk off the end of the BAR!
If you are talking about a modern PCIe bus: The answer is "typically no, but it depends". Peer-to-peer bus transactions are a funny thing in the PCI Express spec. Root Complex devices are not required to support them.
In my testing, peer-to-peer DMA will work, if your devices are behind a PCIe switch (not directly plugged into the motherboard). However, if your devices are connected directly to the chipset (Root Complex), peer-to-peer DMA will not work, except in some special cases. The most notable special case would be the video capture example I mentioned earlier. The special cases are mentioned in the chipset datasheets.
We have tested peer-to-peer PCIe DMA with a few different Intel and AMD chipsets and found consistent behavior, though we have not tested the most recent generations of chipsets. (We have discussed the lack of peer-to-peer PCIe DMA support with Intel; not sure if our feedback has had any impact on their engineering department.)
Assuming that both the NIC and the HDD are Endpoints (or Legacy Endpoints), you cannot funnel traffic between them without involving the Root Complex (CPU).
PCIe, unlike PCI or PCI-X, is not a bus but a point-to-point link, so any transaction from an Endpoint device (say, the NIC) has to travel through the Root Complex (CPU) in order to reach another branch (the HDD).