How to access PCI Express configuration space via MMIO? - pci

I am new to PCI Express and want to read/write the PCI Express configuration space via MMIO addresses. I know how port-mapped I/O reads/writes the PCI Express config space via the 0xCF8 and 0xCFC port addresses (on x86), and I wrote a sample Linux kernel module that reads the PCI config space via port-mapped I/O, which worked fine. I want to do the same via MMIO/MMCFG access.
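For reference, the port-mapped access I already have working looks roughly like the following sketch of the legacy "type 1" 0xCF8/0xCFC mechanism (a minimal sketch, not my exact module code):

#include <linux/io.h>      /* outl()/inl() on x86 */
#include <linux/types.h>

/* Legacy "type 1" configuration access: write the target bus/device/function/
 * register address to port 0xCF8, then read the dword back from port 0xCFC. */
static u32 legacy_cfg_read32(u8 bus, u8 dev, u8 fn, u8 offset)
{
        u32 addr = 0x80000000u          /* enable bit */
                 | ((u32)bus << 16)
                 | ((u32)dev << 11)
                 | ((u32)fn  << 8)
                 | (offset & 0xFC);     /* dword-aligned offset, 0-255 only */

        outl(addr, 0xCF8);
        return inl(0xCFC);
}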
I also searched around but could not find a convincing answer. I am looking for the details and also a code sample to understand it better.
Any help is appreciated.

Hardware
The base address of the MMIO area for the configuration space of each PCIe device in a PCI segment group is given in the ACPI MCFG table.
The MCFG table lists, for each PCI segment group, the first and last (inclusive) bus numbers of the group and the base address of its extended configuration space.
The MCFG table is set up by the BIOS/UEFI based upon the value of PCIEXBAR (on my processor it is at offset 60h) in the Host Bridge/DRAM Registers device located at 00:00.0.
This is the usual address; the device has been integrated into the processor since the Nehalem architecture and its address has never changed.
You can google the datasheet for your processor generation to get the correct device address and register offset.
Also note that not all of the 256 MiB area may be mapped; my processor allows a 256/128/64 MiB mapping, with 128 MiB being the size selected by the BIOS/UEFI.
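To be explicit about the layout, each function gets a 4 KiB configuration window inside that area, so the MMIO address of a register is plain base-plus-offsets arithmetic (a minimal sketch; the base is whatever MCFG/PCIEXBAR reports):

#include <stdint.h>

/* ECAM/MMCFG address of a config register:
 *   1 MiB per bus, 32 KiB per device, 4 KiB per function,
 *   offsets 0x000-0xFFF cover the extended configuration space. */
static uint64_t ecam_address(uint64_t base, uint8_t bus, uint8_t dev,
                             uint8_t fn, uint16_t offset)
{
    return base
         + ((uint64_t)bus << 20)
         + ((uint64_t)dev << 15)
         + ((uint64_t)fn  << 12)
         + (offset & 0xFFF);
}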
Linux
I don't know how to correctly handle this in Linux; there are the pci_{read|write}_config_XXX functions, which seem to use the PCIe extended config space.
So accessing the config space should be very easy.
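For instance, something along these lines should work from a module (a minimal sketch; the PCI core picks the access method for you, MMCFG/ECAM when available, legacy ports otherwise):

#include <linux/kernel.h>
#include <linux/pci.h>

/* Read the vendor/device ID dword of the host bridge at 0000:00:00.0
 * using the kernel's own config accessors. */
static int config_read_example(void)
{
        struct pci_dev *dev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(0, 0));
        u32 id;

        if (!dev)
                return -ENODEV;

        pci_read_config_dword(dev, PCI_VENDOR_ID, &id);
        pr_info("0000:00:00.0 vendor/device = 0x%08x\n", id);

        pci_dev_put(dev);
        return 0;
}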
Alternatively, in case you want a lower-level approach, pci_mcfg_lookup will give the physical address of the extended configuration space for a PCI segment group and a bus range (you should be able to make it work by defining a resource structure with only the start and end fields set to the bus numbers).
Finally, you could get the address of the MCFG table and (re)parse it yourself; I don't know exactly how to get such an address in Linux.
There is an acpi_tb_find_table function where you can pass the signature of the table and null OEM and table IDs to get a table index.
At line 114 of the same file there is a piece of code that accesses a table by index; you can use it as documentation.
You probably have to import one or more symbols from the ACPI module.
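As a rough sketch of that last approach (assuming a kernel that exports acpi_get_table()/acpi_put_table()), you could locate the MCFG table, take the ECAM base of the first segment group, and read a config register of 00:00.0 through MMIO:

#include <linux/acpi.h>
#include <linux/io.h>
#include <linux/kernel.h>

static int mcfg_mmio_read_example(void)
{
        struct acpi_table_header *hdr;
        struct acpi_mcfg_allocation *alloc;
        void __iomem *cfg;
        u64 phys;

        if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MCFG, 0, &hdr)))
                return -ENODEV;

        /* The allocation entries follow the fixed-size MCFG header. */
        alloc = (struct acpi_mcfg_allocation *)((u8 *)hdr +
                        sizeof(struct acpi_table_mcfg));

        /* ECAM address of bus 0, device 0, function 0, register 0x00;
         * add (bus << 20) + (dev << 15) + (fn << 12) for other functions. */
        phys = alloc->address;

        cfg = ioremap(phys, 0x1000);
        if (cfg) {
                pr_info("ECAM base 0x%llx, 00:00.0 ID = 0x%08x\n",
                        (unsigned long long)alloc->address, readl(cfg));
                iounmap(cfg);
        }

        acpi_put_table(hdr);
        return 0;
}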

Related

Allocate Micropython array from a specific memory range

I'm using Micropython on an ESP32-S3. I'm setting up DMA from an ADC by manually configuring the appropriate registers using mem32. I need to reserve some space in the SRAM1 region of internal RAM for the linked list of descriptors, i.e. in the address range 0x3FC88000 to 0x3FCEFFFF.
Is there something like array.array() that lets me specify a range?
I assume either drivers or some other internal Micropython plumbing under the hood is already using some of that internal RAM address space, as I see non-zero values in it. I want to make sure my code doesn't stomp on anything being used (or vice versa).

How to access a physical memory address on ARM?

For example, without an MMU, the LEDs are fixed at 0x110002E0, and I can write values to this physical address to configure them.
Here is my question:
Where are these values written to, the LEDs' registers or DRAM?
If they are written to the LEDs' registers, what physical address should I write to so that the values are written to the same address in DRAM?

How do operating systems allow userspace programs to interact with kernelspace programs?

This isn't quite a question about a specific OS, but let's take Windows as an example. A userspace program uses the Windows API to communicate with kernelspace. However, I don't understand how that's possible. The API, according to MS websites, lives in userspace. In order to access kernelspace it has to be in kernelspace, if I understand it correctly. So what is the mechanism by which the windows API gets extra privileges to speak to kernelspace? In which space does that mechanism operate? Is this sort of thing universal to all modern PC OS's?
As you're already aware, there are a bunch of facilities exposed to userspace programs by the Windows kernel. (If you're curious, there's a list of system calls.) These system calls are all identified by a unique number, which isn't part of the publicly documented interface given by Microsoft. Instead, when you call a publicly exposed function from your program, there's a DLL installed when you install (or update) Windows that has an entry point which is just a normal, unprivileged user-mode function call. This DLL knows the mappings between the public interfaces and the available system calls in the currently running kernel. These mappings are not always 1:1, which allows for tweaks and enhancements without breaking existing code that uses the stable interfaces.
When some userland code calls one of these functions, its role is to prepare arguments for the system call and then initiate the jump into kernel mode. How exactly that jump occurs is specific to the architecture that Windows is currently running on. In fact it varies not just between x86 and Arm but even between AMD and Intel x86 systems. I'll talk just about the modern Intel x86 32-bit case (using the SYSENTER instruction) here for simplicity. On x86 most of the other variations are relatively minor; for instance, int 2Eh was used prior to SYSENTER support.
Early in boot up the operating system does a bunch of work to prepare for enabling a userland and system calls from it. Understanding this is critical to understanding how system calls really work.
First let's rewind a little and consider what exactly we mean by userland and kernelmode. On x86 when we talk about privileged vs un-privileged code we talk about "rings". There are actually 4 (ignoring hypervisors) but for various reasons nobody really used anything but ring0 (kernel) and ring3 (userland). When we run code on x86 the address that's being executed (EIP) and data that's being read/written come from segments.
Segments are mostly just a historical accident left over from the days before virtual addressing on x86 was a thing. They are, however, important for us here because there are special registers that define which segments are currently being used when we execute instructions or otherwise reference memory. Segments on x86 are all defined in a big table, called the Global Descriptor Table or GDT. (There's also a local descriptor table, the LDT, but that's not going to further the current discussion.) The important point for our discussion is that the (arcane) layout of the table entries includes 2 bits, called the DPL, which define the privilege level of the currently active segment. You'll notice that 2 bits is exactly enough to define 4 levels of privilege.
So in short, when we talk about "executing in kernel mode" we really just mean that our active code segment (CS) and data segment selectors point to entries in the GDT which have DPL set to 0. Likewise, for userland we have CS and data segment selectors pointing to GDT entries with DPL set to 3 and no access to kernel addresses. (There are other selectors too, but to keep it simple we'll just consider "code" and "data" for now.)
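To make the DPL bits concrete, here is a rough C view of one 8-byte GDT entry (the field names are illustrative, not taken from any particular header); the DPL lives in bits 5-6 of the access byte:

#include <stdint.h>

/* One x86 segment descriptor as it sits in the GDT. */
struct gdt_entry {
    uint16_t limit_low;
    uint16_t base_low;
    uint8_t  base_mid;
    uint8_t  access;                /* P (bit 7) | DPL (bits 5-6) | S (bit 4) | type (bits 0-3) */
    uint8_t  limit_high_and_flags;
    uint8_t  base_high;
} __attribute__((packed));

/* 0 = kernel (ring0), 3 = userland (ring3) */
static inline unsigned int descriptor_dpl(const struct gdt_entry *e)
{
    return (e->access >> 5) & 0x3;
}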
Back to early on during kernel boot up: during start up the kernel creates the GDT entries we need. (These have to be laid out in a specific order for SYSENTER to work, but that's mostly just an implementation detail). There are also some "machine specific registers" that control how our processor behaves. These can only be set by privileged code. Three of them that are important here are:
IA32_SYSENTER_ESP
IA32_SYSENTER_EIP
IA32_SYSENTER_CS
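Roughly, the boot-time setup amounts to something like this sketch, where the MSR numbers come from the Intel SDM and wrmsr() is just a thin wrapper around the privileged WRMSR instruction:

#include <stdint.h>

#define IA32_SYSENTER_CS   0x174
#define IA32_SYSENTER_ESP  0x175
#define IA32_SYSENTER_EIP  0x176

/* WRMSR (ring0 only): ECX selects the MSR, EDX:EAX holds the 64-bit value. */
static inline void wrmsr(uint32_t msr, uint64_t value)
{
    __asm__ volatile("wrmsr" : : "c"(msr),
                     "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
}

static void setup_sysenter(uint16_t kernel_cs, uintptr_t kernel_stack_top,
                           uintptr_t syscall_entry)
{
    wrmsr(IA32_SYSENTER_CS,  kernel_cs);        /* ring0 code segment; SS is derived from it */
    wrmsr(IA32_SYSENTER_ESP, kernel_stack_top); /* kernel stack used on entry */
    wrmsr(IA32_SYSENTER_EIP, syscall_entry);    /* kernel-mode entry point */
}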
Recall that we've got some code running in userland (ring3) that wants to transition to ring0. Let's assume that it has saved any registers it needs to per the calling convention and put arguments into the registers that the call expects. We then hit the SYSENTER instruction. (Actually it goes through KiFastSystemCall, I think.) The SYSENTER instruction is special. It modifies the current code and data segment selectors based on the value that the kernel set up in the machine-specific register IA32_SYSENTER_CS. (The stack/data segment values are computed as an offset from IA32_SYSENTER_CS.) Subsequently the stack pointer itself (ESP) is set to the kernel stack that was set up earlier for handling system calls and saved into the MSR IA32_SYSENTER_ESP, and likewise EIP, the instruction pointer, is loaded from IA32_SYSENTER_EIP.
Since the CS selector now points to a GDT entry with DPL set to 0 and EIP points to kernel mode code on a kernel stack we're running in the kernel at this point.
From here onwards the kernel mode code can read and write memory from both kernel and userspace (with some appropriate caution) to undertake the actual work needed to perform the system call. The arguments to the system call can be read from registers etc. according to the calling convention, but any arguments that are actually pointers back to userland or handles to kernel objects can be accessed to read larger blocks of data too.
When the system call is over the process is basically reversed and we end up back in userland with DPL 3 for the selectors.
It's the CPU that acts as the intermediary for transferring information between user memory space (accessible in user mode) and protected memory space (accessible in kernel mode), via CPU registers.
Here's an example:
Suppose a user writes a program in a higher-level language. When the program executes, the CPU generates virtual addresses.
Before any read/write operation occurs, the virtual address is converted to a physical address. Because the translation mechanism (the memory management unit) is only accessible in kernel mode, since it is stored in protected memory, the translation occurs in kernel mode; the physical address is finally saved into a CPU register, and only then does the read/write operation occur.

Is my understanding of the relationship between virtual addresses and physical addresses correct?

I've been researching (on SO and elsewhere) the relationship between virtual addresses and physical addresses. I would appreciate it if someone could confirm if my understanding of this concept is correct.
The page table is classified as 'virtual space' and contains the virtual addresses of each page. It then maps to the 'physical space', which contains the physical addresses of each page.
A wikipedia diagram to make my explanation clearer:
https://upload.wikimedia.org/wikipedia/commons/3/32/Virtual_address_space_and_physical_address_space_relationship.svg
Is my understanding of this concept correct?
Thank you.
Not entirely correct.
Each program has its own virtual address space. Technically, there is only one address space, the physical random-access memory. It's called "virtual" because, to the user program, it seems as if it has its own address space.
Now, take the instruction mov 0x1234, %eax (AT&T) or MOV EAX, [0x1234] (Intel) as an example:
The CPU sends the virtual address 0x1234 to one of its parts, the MMU.
The MMU obtains the corresponding physical address from the page table. This process of adjusting the address is also lovingly called "massaging."
The CPU retrieves the data from the RAM location the physical address refers to.
The concrete translation process depends heavily on the actual architecture and CPU.
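As a toy illustration of that split (assuming 4 KiB pages; the tiny lookup_frame() below is a stand-in for the real page-table structures, mapping virtual page n to physical frame n + 8):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* Stand-in for the page table: virtual page n -> physical frame n + 8. */
static uint32_t lookup_frame(uint32_t vpn)
{
    return (vpn & 0xF) + 8;
}

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* virtual page number */
    uint32_t offset = vaddr & PAGE_MASK;     /* byte offset inside the page */
    uint32_t pfn    = lookup_frame(vpn);     /* what the MMU gets from the page table */

    return (pfn << PAGE_SHIFT) | offset;     /* physical address */
}

int main(void)
{
    printf("0x1234 -> 0x%X\n", translate(0x1234));   /* prints 0x9234 */
    return 0;
}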
The page table is classified as 'virtual space' and contains the virtual addresses of each page. It then maps to the 'physical space', which contains the physical addresses of each page.
This is not really correct. The page table defines a logical address space consisting of pages. The page table maps logical pages to physical page frames, or it indicates that a page frame does not [yet] exist in memory, in which case you have a virtual mapping. A page is VIRTUAL when memory is simulated using disk space.
In the olde days, page tables always established a virtual address space. Now it is becoming increasingly common (e.g. in embedded systems) to use logical address translation without virtual memory (paging). Thus, the terms "virtual memory" and "logical memory" are frequently conflated.
The physical address space exists only to the operating system. The process sees only a logical address space.
That is a bit of an oversimplification because the process becomes the operating system after an exception or interrupt and the kernel operates within a common logical address range. However, the operating system kernel does have to manage physical memory to some degree.
For example, some aspect of the page tables must use physical addresses. If the page tables used all logical addresses, then you'd have a chicken and egg problem for address translation. Various hardware systems address this problem in different ways.
Finally, the diagram you link to is a very poor illustration.

Does the MMU mediate everything between the operating system and physical memory or is it just an address translator?

I'm trying to understand how an operating system works when we want to assign some value to a particular virtual memory address.
My first question concerns whether the MMU handles everything between the CPU and the RAM. Is this true? From what one can read from Wikipedia, I'd say so:
A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware component responsible for handling accesses to memory requested by the CPU.
If that is the case, how can one tell the MMU that I want to get 8 bytes, or 64 or 128 bytes, for example? What about writing?
If that is not the case, I'm guessing the MMU just translates virtual addresses to physical ones?
What happens when the MMU detects there will be what we call a page-fault? I guess it has to tell it to the CPU so the CPU loads the page itself off disk, or is the MMU able to do this?
Thanks
Devoured Elysium,
I'll attempt to answer your questions one by one but note, it might be a good idea to get your hands on a textbook for an OS course or an introductory computer architecture course.
The MMU consists of some hardware logic and state whose purpose is, indeed, to produce a physical address and provide/receive data to and from the memory controller. Actually, the job of memory translation is one that is taken care of by cooperating hardware and software (OS) mechanisms (at least in modern PCs). Once the physical address is obtained, the CPU has essentially done its job and now sends the address out on a bus which is at some point connected to the actual memory chips. In many systems this bus is called the Front-Side Bus (FSB), which is in turn connected to a memory controller. This controller takes the physical address supplied by the CPU and uses it to interact with the DRAM chips, and ultimately extract the bits in the correct rows and columns of the memory array. The data is then sent back to the CPU, which can now operate on it. Note that I'm not including caching in this description.
So no, the MMU does not interact directly with RAM, which I assume you are using to mean the physical DRAM chips. And you cannot tell the MMU that you want 8 bytes, or 24 bytes, or whatever, you can only supply it with an address. How many bytes that gets you depends on the machine you're on and whether it's byte-addressable or word-addressable.
Your last question urges me to remind you: the MMU is actually a part of the CPU--it sits on the same silicon die (although this was not always the case).
Now, let's take your example with the page fault. Suppose our user-level application wants to, like you said, set someAddress = 10. I'll take it in steps. Let's assume someAddress is 0xDEADBEEF and let's ignore caches for now.
1) The application issues a store instruction to 0xDEADBEEF, which, in x86, might look something like
mov %eax, 0xDEADBEEF
where 10 is the value in the eax register.
2) 0xDEADBEEF in this case is a virtual address, which must be translated. Most of the time, the virtual to physical address translation will be available in a hardware structure called the Translation Lookaside Buffer (TLB), which will provide this translation to us very fast. Typically, it can do so in one clock cycle. If the translation is in the TLB, called a TLB hit, execution can continue immediately (i.e. the physical address corresponding to 0xDEADBEEF and the value 10 are sent out to the memory controller to be written).
3) Let's suppose though, that the translation wasn't available in the TLB (called a TLB miss). Then we must find the translation in the page tables, which are structures in memory whose structure is defined by the hardware and managed by the OS. They simply contain entries that map a virtual address to a physical one (more accurately, a virtual page number to a physical page number). But these structures also reside in memory, and so must have addresses! The hardware contains a special register called cr3 which contains the physical address of the current page table. We can index into this page table using our virtual address, so the hardware takes the value in cr3, computes an address by adding an offset, and goes off to memory to fetch the page table entry (PTE). This PTE will (hopefully) contain the physical address corresponding to 0xDEADBEEF, in which case we put this mapping in the TLB (so we don't have to walk the page table again) and continue on our way.
4) But oh no! What if there is no PTE in the page tables for 0xDEADBEEF? This is a page fault, and this is where the Operating System comes into play. The PTE we got out of the page table existed, as in it was (let's assume) a valid memory address to access, but the OS had not created a VA->PA mapping for it yet, so it would have had a bit set to indicate that it is invalid. The hardware is programmed in such a way that when it sees this invalid bit upon an access, it generates an exception, in this case a page fault.
5) The exception causes the hardware to invoke the OS by jumping to a well known location--a piece of code called a handler. There can be many exception handlers, and a page fault handler is one of them. The page fault handler will know the address that caused the fault because it's stored in a register somewhere, and so will create a new mapping for our virtual address 0xDEADBEEF. It will do so by allocating a free page of physical memory and then saying "all virtual addresses between VA x and VA y will map to some address within this newly allocated page of physical memory". 0xDEADBEEF will be somewhere in that range, so the mapping is now securely in the page tables, and we can restart the instruction that caused the page fault (the mov).
6) Now, when we go through the page tables again, we will find a mapping and the PTE we pull out will have a nice physical address, the one we want to store to. We provide this with the value 10 to the memory controller and we're done!
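For the classic 32-bit, non-PAE x86 case described above, the index arithmetic the hardware performs on 0xDEADBEEF looks like this (just the slicing; the actual walk dereferences cr3, a page-directory entry and a page-table entry in memory):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t va = 0xDEADBEEF;

    uint32_t pd_index = (va >> 22) & 0x3FF;  /* 0x37A: index into the page directory */
    uint32_t pt_index = (va >> 12) & 0x3FF;  /* 0x2DB: index into the page table */
    uint32_t offset   =  va        & 0xFFF;  /* 0xEEF: offset within the 4 KiB page */

    printf("PD index 0x%03X, PT index 0x%03X, offset 0x%03X\n",
           pd_index, pt_index, offset);
    return 0;
}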
Caches will change this game quite a bit, but I hope this serves to illustrate how paging works. Again, it would benefit you greatly to check out some OS/Computer Architecture books. I hope this was clear.
There are data structures that describe which virtual addresses correspond to which physical addresses. The OS creates and manages these data structures, and the CPU uses them to translate virtual addresses into physical addresses.
For example, the OS might use these data structures to say "virtual addresses in the range from 0x00000000 to 0x00000FFF correspond to physical addresses 0x12340000 to 0x12340FFF"; and if software tries to read 4 bytes from the virtual address 0x00000468 then the CPU will actually read 4 bytes from the physical address 0x12340468.
Typically everything is affected by the virtual->physical translation (except for when the CPU is accessing the data structures that describe the translation). Also, usually there's some sort of translation cache built into the CPU to help reduce the overhead involved.
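As a tiny sanity check of that example mapping (the base addresses and the range are just the ones above):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t virt_base = 0x00000000, phys_base = 0x12340000;
    uint32_t va = 0x00000468;

    /* Translation keeps the offset within the mapped range. */
    uint32_t pa = phys_base + (va - virt_base);
    assert(pa == 0x12340468);
    return 0;
}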