How to get hardware ID of network interface card in UEFI program?

The hardware ID of a NIC looks like PCI\VEN_8086&DEV_153A&SUBSYS_309717AA&REV_04.
I want to retrieve it in a UEFI program, but I haven't found any hints in the UEFI specification.

What you need is EFI_PCI_IO_PROTOCOL.
Refer to UEFI spec 2.6, section 13.4, "EFI PCI I/O Protocol".
Get all PCI device handles by calling gBS->LocateHandleBuffer() with the EFI_PCI_IO_PROTOCOL GUID.
Get the EFI_PCI_IO_PROTOCOL instance attached to each PCI device handle (gBS->HandleProtocol()).
Call EFI_PCI_IO_PROTOCOL.Pci.Read() to read the PCI configuration space. Everything you need (device ID, vendor ID, subsystem IDs, revision) can be found in the PCI configuration space, as in the sketch below.
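A minimal EDK II-style sketch of those steps, assuming a standard UEFI application entry point and with error handling kept to a minimum; it prints a Windows-style hardware ID for every network-class (0x02) device it finds:

```c
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/UefiLib.h>
#include <Protocol/PciIo.h>
#include <IndustryStandard/Pci.h>

EFI_STATUS
EFIAPI
UefiMain (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  EFI_STATUS           Status;
  EFI_HANDLE           *Handles;
  UINTN                HandleCount;
  EFI_PCI_IO_PROTOCOL  *PciIo;
  PCI_TYPE00           Pci;
  UINTN                Index;

  // Step 1: find every handle that carries EFI_PCI_IO_PROTOCOL.
  Status = gBS->LocateHandleBuffer (
                  ByProtocol,
                  &gEfiPciIoProtocolGuid,
                  NULL,
                  &HandleCount,
                  &Handles
                  );
  if (EFI_ERROR (Status)) {
    return Status;
  }

  for (Index = 0; Index < HandleCount; Index++) {
    // Step 2: get the protocol instance attached to this handle.
    Status = gBS->HandleProtocol (
                    Handles[Index],
                    &gEfiPciIoProtocolGuid,
                    (VOID **)&PciIo
                    );
    if (EFI_ERROR (Status)) {
      continue;
    }

    // Step 3: read the type-00 configuration header.
    Status = PciIo->Pci.Read (
                          PciIo,
                          EfiPciIoWidthUint32,
                          0,
                          sizeof (Pci) / sizeof (UINT32),
                          &Pci
                          );
    if (EFI_ERROR (Status)) {
      continue;
    }

    // ClassCode[2] is the base class; network controllers are class 0x02.
    if (Pci.Hdr.ClassCode[2] == PCI_CLASS_NETWORK) {
      Print (
        L"PCI\\VEN_%04X&DEV_%04X&SUBSYS_%04X%04X&REV_%02X\n",
        Pci.Hdr.VendorId,
        Pci.Hdr.DeviceId,
        Pci.Device.SubsystemID,
        Pci.Device.SubsystemVendorID,
        Pci.Hdr.RevisionID
        );
    }
  }

  gBS->FreePool (Handles);
  return EFI_SUCCESS;
}
```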

Related

How does a modern operating system like Windows or Linux know the chipset-specific memory map?

The memory map of the peripherals is defined by the chipset. However, modern operating systems like Linux and Windows can boot on pretty much any chip (if compiled for the right architecture). As far as I know, memory-mapped devices like the USB host are not included in the architecture standard. How can the OS still boot, load the drivers, and function? I suppose there must be some specification where the chipset is described.
Formulated a little differently: how does the identification of the chipset work, what standards define the communication between the chipset and the processor so that it works on different hardware, and how does the kernel know the right physical addresses for the different peripherals?
Open systems typically use a device tree, which is a specification of the attached hardware and how it is attached. There is another system, ACPI, which supports legacy PCs. Either system permits an OS to locate and configure the buses and associated peripherals it needs.
It is never 100% as easy as that. For example, it is fine for the OS to know there is a SCSI controller on bus 1 at address 1000; but if the code for the SCSI driver isn't in the loaded OS image, then this knowledge is of little use, as it has no way to load the driver.
The Intel specification for ACPI attempts to fix this by having tiny driver implementations baked into the firmware of either the platform, the device itself, or both. Since the device doesn't necessarily know what sort of CPU it will run on, these mini drivers are written in a virtual instruction set (AML) which the host OS needs an interpreter for.
UEFI provides an alternative way of addressing the boot dependency, via a more generic mechanism that uses mini boot drivers for the same purpose.
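As an illustration of the device-tree half of this, a kernel or boot loader can look up a peripheral's physical address from the flattened device tree blob the firmware hands over. A minimal sketch using libfdt; the node path and cell layout here are assumptions for the example, not a real board description:

```c
#include <stdint.h>
#include <libfdt.h>

/* Look up the MMIO base of a (hypothetical) USB controller node in a
 * flattened device tree blob that is already mapped at `fdt`. */
uint64_t usb_mmio_base(const void *fdt)
{
    int node = fdt_path_offset(fdt, "/soc/usb@4a000000"); /* hypothetical path */
    if (node < 0)
        return 0;

    int len = 0;
    const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &len);
    if (reg == NULL || len < 8)
        return 0;

    /* The "reg" property is <address size> in big-endian cells; this
     * example assumes one 32-bit address cell and one 32-bit size cell. */
    return fdt32_to_cpu(reg[0]);
}
```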

How drivers work (e.g. PCIe and USB)

I am curious about how drivers in general work. I do understand basic concepts and also how a single driver operates. Where I am confused is how they work when multiple drivers are involved.
Let me explain my question through an example:
Suppose I have a PCIe and USB interface in HW. The primary interface to host (where driver, OS, applications reside) is PCIe. USB interface is accessible to host through PCIe.
So, in this case, I would have a driver for PCIe as well as for USB.
When data has to be transferred through USB by an application, the application would invoke system/OS calls. This would eventually land in the USB driver.
Is this correct?
Once the USB driver has completed its processing, the PCIe driver has to be called. Who does that? Is it the OS or the USB driver itself?
I would assume it is the OS, as otherwise it would break the basic modular philosophy. But a driver calling into the OS seems counterintuitive, as I always assumed the flow to be from application to OS to driver and HW.
Can anyone please shed some light on this topic?
Much like in user-space code, there exist standardized APIs for accessing various types of hardware in kernel land (exact usage varies by OS). As a result, it isn't really much of a stretch for one device driver to access another device's driver via these standardized APIs. (Warning: USB is a very complex protocol, and many details have been glossed over to keep a long post shorter.)
The original question focused on PCIe-to-USB cards. In this example I think it's helpful to think of there being three "layers" of drivers. The first layer is the PCIe bus controller driver, which controls PCIe-bus-specific functions such as mapping out MMIO for PCIe devices and supporting interrupts from those PCIe devices. The second layer is the USB host controller layer, which provides the functions for issuing standardized USB transactions. Finally, the USB device driver (like a USB keyboard driver) sits on top of the stack, using those standardized USB transactions to implement the functionality of the specific USB peripheral device. Calls from the keyboard driver will call down into functions in the USB host controller driver, which in turn may even call down into the PCIe driver. All of this is done in kernel space, even though many separate drivers are employed.
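To make the top of that stack concrete, here is a rough Linux-flavored sketch of a keyboard-style driver handing a transfer to the USB core, which routes it to whichever host controller driver owns the device; the endpoint number and interval are made up for the example:

```c
#include <linux/usb.h>
#include <linux/slab.h>

/* Completion callback: the host controller driver has finished the
 * interrupt transfer; the buffer now holds the HID report. */
static void kbd_irq_complete(struct urb *urb)
{
	/* A real driver would check urb->status and parse the report here. */
	usb_submit_urb(urb, GFP_ATOMIC);   /* resubmit to keep polling */
}

static int kbd_start_polling(struct usb_device *udev, void *buf, int len)
{
	struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);
	if (!urb)
		return -ENOMEM;

	usb_fill_int_urb(urb, udev,
			 usb_rcvintpipe(udev, 1),   /* IN endpoint 1 (hypothetical) */
			 buf, len,
			 kbd_irq_complete, NULL,
			 8 /* polling interval */);

	/* This call crosses the layer boundary: USB core -> host controller driver. */
	return usb_submit_urb(urb, GFP_KERNEL);
}
```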
Most PCIe devices do the bulk of their communication with the CPU via MMIO accesses, which appear as memory reads/writes to the processor. Generally no specific driver function is needed to perform the MMIO transfer of data between PCIe device and CPU (although there may be some simple access functions to do endian correction or deal with cache issues).
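For example, in a Linux driver, once a BAR has been ioremap()ed, touching device registers is just a matter of readl()/writel(); the register offset below is hypothetical:

```c
#include <linux/io.h>

/* Hypothetical status register at offset 0x10 inside an already-mapped BAR.
 * readl()/writel() take care of byte ordering for us. */
static inline u32 mydev_read_status(void __iomem *regs)
{
	return readl(regs + 0x10);
}

static inline void mydev_clear_status(void __iomem *regs)
{
	writel(0xffffffff, regs + 0x10);
}
```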
USB host controller drivers are interesting in that they conform to a standard (such as xHCI, the USB 3.0 host controller standard, which I'll use in this example) which dictates a standard device memory map and behavior. Thus there is usually a small chip-specific driver that performs non-standard initialization of the USB host controller device. Additionally, these chip-specific drivers will both retrieve the location of the xHCI standardized MMIO region and provide a way to receive interrupts from the xHCI controller (in this example, from PCIe interrupts).
Next, this standardized memory region and interrupt mechanism are passed to a generic xHCI host controller driver. The generic xHCI code does not care whether the device is PCIe; it just cares that it gets passed a memory region that follows the xHCI standard and that it receives the correct interrupts. The xHCI driver provides the generic USB transfer functions, which the USB keyboard driver in turn uses to initiate USB transactions.
For the most part, the xHCI driver is just going to do reads/writes to the MMIO region that was passed in. This allows the same common xHCI code to service a wide array of USB host controllers, many of which are not PCIe devices, effectively allowing the xHCI driver to abstract away the underlying hardware implementing the USB controller. So, for the example posed in the original question, the USB host controller standards are designed to hide the underlying hardware mechanisms and make for a more modular USB driver system.
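A rough sketch of that hand-off; struct hc_resources and generic_hcd_add() are hypothetical stand-ins (not the real Linux xHCI API), shown only to illustrate that the PCI glue reduces the device to a register window plus an interrupt:

```c
#include <linux/pci.h>
#include <linux/io.h>

/* What the generic (bus-agnostic) host controller driver needs to know. */
struct hc_resources {
	void __iomem *regs;   /* standardized register window (e.g. xHCI capability/operational regs) */
	int irq;              /* interrupt line to request */
};

/* Hypothetical entry point into the generic controller driver. */
int generic_hcd_add(struct hc_resources *res);

/* PCI-specific glue: all it does is translate "PCIe device" into hc_resources. */
static int myxhci_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct hc_resources res;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;
	pci_set_master(pdev);                 /* allow the controller to DMA */

	res.regs = pci_ioremap_bar(pdev, 0);  /* the standard fixes the layout behind this BAR */
	if (!res.regs)
		return -ENOMEM;
	res.irq = pdev->irq;

	/* From here on, the generic driver neither knows nor cares that this is PCIe. */
	return generic_hcd_add(&res);
}
```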

How do I find a PCI device using IPMI over the network?

I've been having quite a time trying to use IPMI tools (such as OpenIPMI, FreeIPMI, and ipmitool) to discover and monitor a PCI device in my server. On an IBM server, going through the IMM over the network with these IPMI tools, I can't seem to get any information on the PCI devices in the server. The IPMI tools only return basic information on the system, such as the BMC, chassis, power supplies, fans, etc. No information on the devices plugged into the PCI slots.
I've tried basic commands like "fru list", "sdr elist", etc. and haven't been able to get any information from the PCI slots.
Just hoping someone has had experience using these tools and is able to get information from the devices in the PCI slots.
Specifically, I would like to get the FRU information as well as device ID, I2C slave address, etc. for accessing the device.
Thanks for any information that you can provide...
There is no requirement in the IPMI spec that PCI connector A-side pins 40 and 41, which carry the SMBus, be routed to the BMC. A vendor may do it, but most do not.
Look at it this way: the BMC can turn off power to the PCI bus and main CPUs. You would not be able to read anything from them anyway.
This is why the AdvancedTCA specification requires management power and two IPMB buses to each blade slot. The AdvancedTCA spec requires that the IPMB bus from each slot be connected to the BMC. The blade can power up and use at most 15 watts to supply the IPM Controller, so you can read the data you are looking for without powering on the main CPUs.
Hank Bruning
JBlade

Mapping of Host System Memory to PCI domain Address

My understanding of PCI
The host CPU is responsible for assigning the PCI domain addresses to all other devices on the PCI bus by setting the devices' BAR registers in PCI configuration space.
The host CPU can map the PCI address domain into its own domain (i.e. the system domain), so that host-initiated "PCI memory transactions" with devices on the PCI bus can be achieved using simple load/store instructions of the host CPU.
Question:
Is it possible for the system memory, i.e. the main memory of the host (actual RAM), to also be mapped to a PCI domain address, so that when the host system is the target of a "PCI memory transaction" initiated by a device on the PCI bus, the main memory is read/written without the intervention of the host CPU?
Additional information: I am working on an embedded system with 3 SH4 processors communicating over a PCI bus.
There are two kinds of memory mapping in the PCIe world: inbound mapping and outbound mapping.
Inbound mapping: the memory space is located on the device, and the host CPU can access the mapped memory space.
Outbound mapping: the memory space is located on the host CPU, and the device can access the mapped memory space.
The two may look the same, but the difference is important. With this feature, you don't need any additional memory copy to communicate between the host CPU and the device.
I realize this is an old question, but I'd like to answer it anyway. When you say "transaction initiated by a device on PCI bus", I assume you mean a read/write initiated by the device to access system memory (RAM). This is called bus mastering on the device (also referred to as DMA), and it can be done by having the host CPU allocate a DMA buffer (i.e. with dma_alloc_coherent()) and having the driver provide this DMA address to the device. Then yes, the device can read/write to system memory without host CPU intervention.
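A minimal sketch of that pattern in a Linux PCI driver (the buffer size and register programming are placeholders): dma_alloc_coherent() returns both a CPU pointer and a bus address, and the bus address is what the device uses to master the bus and access RAM directly.

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define MYDEV_DMA_BUF_SIZE  4096   /* arbitrary example size */

static int mydev_setup_dma(struct pci_dev *pdev)
{
	void *cpu_addr;
	dma_addr_t bus_addr;
	int err;

	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
	if (err)
		return err;

	pci_set_master(pdev);   /* allow the device to initiate PCI memory transactions */

	/* cpu_addr is what the host CPU uses; bus_addr is what the device uses. */
	cpu_addr = dma_alloc_coherent(&pdev->dev, MYDEV_DMA_BUF_SIZE,
				      &bus_addr, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* Program bus_addr into a (hypothetical) device register; from then on
	 * the device can read/write this RAM without CPU involvement, e.g.:
	 * writel(lower_32_bits(bus_addr), regs + MYDEV_REG_DMA_ADDR); */

	return 0;
}
```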
Yes, it is very much possible. You can also use the DMA functionality of PCIe to 'bypass' the CPU.

Interfacing socket code with a Linux PCI driver

I have two devices that are interfaced with PCI. I also have code for both devices that uses generic socket code. (The devices were originally connected by MII/Ethernet.)
Now, I need to write a PCI device driver to transport packets back and forth between the two devices.
How do I access the file descriptors opened by the socket code? Is this the same as accessing a character device file?
The PCI driver has to somehow capture packets from read() and write() in the code.
Thanks!
The answers to your questions are: (1) You don't, and (2) no.
File descriptors are a user-space concept, and kernel drivers don't interact with user-space concepts. (Yes, they're implemented by the kernel, but other device drivers don't get to play with them directly, and shouldn't play with them even indirectly.)
What you do is implement methods that will receive data that is buffered in a kernel-accessible memory space, and send that to your hardware, and then receive data from your hardware and write it (when asked) to a buffer in kernel-accessible memory.
You'll do this by implementing the character device driver APIs as well as the PCI device driver APIs and then registering your driver as a PCI device, and then a character device. While some of these methods may refer to file structures, they will not be the user-land structures that you know and love.
For devices that implement the Ethernet protocols, life is easier because you implement the Net Device Interface instead. This way all you have to write is the parts necessary to get data to and from your hardware.
What you'll need is the specification for the device hardware: how you control the hardware using PCI registers and regions.
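To give a feel for the shape of the net-device route, here is a hedged skeleton (the PCI IDs, names, and the empty transmit path are placeholders, not a working driver for any particular card):

```c
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Register as a PCI driver and expose the device as a net_device, so
 * packets arrive through the normal socket stack instead of a char device. */

static netdev_tx_t mydev_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	/* A real driver would hand skb to the hardware via MMIO/DMA and free
	 * it from the TX-complete interrupt; here we just drop it. */
	dev_kfree_skb_any(skb);
	return NETDEV_TX_OK;
}

static const struct net_device_ops mydev_netdev_ops = {
	.ndo_start_xmit = mydev_xmit,
};

static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct net_device *ndev;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	ndev = alloc_etherdev(0);
	if (!ndev)
		return -ENOMEM;

	ndev->netdev_ops = &mydev_netdev_ops;
	SET_NETDEV_DEV(ndev, &pdev->dev);
	pci_set_drvdata(pdev, ndev);

	return register_netdev(ndev);   /* user-space sockets can now reach the device */
}

static void mydev_remove(struct pci_dev *pdev)
{
	struct net_device *ndev = pci_get_drvdata(pdev);

	unregister_netdev(ndev);
	free_netdev(ndev);
	pci_disable_device(pdev);
}

static const struct pci_device_id mydev_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },   /* placeholder vendor/device ID */
	{ }
};
MODULE_DEVICE_TABLE(pci, mydev_ids);

static struct pci_driver mydev_pci_driver = {
	.name     = "mydev",
	.id_table = mydev_ids,
	.probe    = mydev_probe,
	.remove   = mydev_remove,
};
module_pci_driver(mydev_pci_driver);

MODULE_LICENSE("GPL");
```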
The good news is, you don't have to do this alone -- there's a large community of kernel developers, and several good (and current) books on developing for the Linux kernel (see below).
References
Understanding the Linux Kernel
Linux Device Drivers
Essential Linux Device Drivers