How do I find a PCI device using IPMI over the network?

I've been having quite a time trying to use IPMI tools (such as OpenIPMI, FreeIPMI, and ipmitool) to discover and monitor a PCI device in my server. Going through the IMM on an IBM server over the network, I can't seem to get any information on the PCI devices with the IPMI tools. They only return basic information on the system, such as the BMC, chassis, power supplies, fans, etc., and nothing on the devices plugged into the PCI slots.
I've tried basic commands like "fru list", "sdr elist", etc., and haven't been able to get any information from the PCI slots.
Just hoping someone has had experience using these tools and is able to get information from the devices in the PCI slots.
Specifically, I would like to get the FRU information as well as device ID, I2C slave address, etc. for accessing the device.
Thanks for any information that you can provide...

There is no requirement in the IPMI spec that PCI connector side-A pins 40 and 41, which carry the SMBus, be routed to the BMC. A vendor may do it, but most do not.
Look at it this way: the BMC can turn off power to the PCI bus and the main CPUs. You would not be able to read anything from the cards in that state anyway.
This is why the AdvancedTCA specification requires management power and two IPMB buses to each blade slot, and requires that the IPMB bus from each slot be connected to the BMC. The blade can power up its IPM Controller on a maximum of 15 watts of management power, so you can read the data you are looking for without powering on the main CPUs.
Hank Bruning
JBlade

Activating DFU (USB programming) on STM32F303

I am building a board based on the STM32F303RET6.
The Processor Datasheet, page 17/section 3.5, mentions programming can be done "using USART1 (PA9/PA10), USART2 (PA2/PA3) or USB (PA11/PA12) through DFU (device firmware upgrade)"
I am using a NUCLEO board with this processor.
I have connected the Vdd, Gnd, D+ and D- pins of USB to a NUCLEO board and disabled the power from the add-on programmer board.
However, whenever I reboot it with BOOT0 high, USB never enumerates any device.
I am connecting the pins directly to the USB plug without any external resistor. The datasheet seems to suggest these are not needed.
To make things a bit trickier, this processor has the additional particularity of not having a BOOT1 pin; it is a software bit.
My question is: does the processor actually support DFU using the built-in bootloader?
If so, how should one go about starting it and programming via USB?
Thank you very much,
Pedro.
PS: ST actually has conflicting information about support for USB programming on this processor. While the Datasheet says it's supported, Application Note AN2606, page 81 (section 19), only mentions support for programming via USART1, USART2, and I2C. It references the USARTs, but it's unclear how they can be used.
I have connected the Vdd, Gnd, D+ and D- pins of USB to a NUCLEO board and disabled the power from the add-on programmer board.
Check the actual voltage and current on Vdd. The host might limit the current, or shut the port down if consumption exceeds 100 mA before enumeration. Try it with an external power supply.
I am connecting the pins directly to the USB plug without any external resistor.
You need a 1.5 kΩ pullup on D+ (full speed) or D- (low speed). This is shown in the STM32F3 Discovery schematics (that board has an OTG socket; ignore the ID line for regular 4-wire ports).
When there is no pullup, the host can't detect when the device is plugged in, and therefore won't enumerate it.
ST has actually got conflicting information about support for USB programming on this processor. While the Datasheet says it's supported, Application Note AN2606, page 81 (section 19) only mentions support for programming via USART1, USART 2 and I2C.
There is no conflicting information there. Section 19 on page 81 refers to other controllers.
The capabilities of your STM32F303RET6 are listed in table 36, section 18.1, on page 77 (as I've already pointed out). See also table 3 on page 23, line STM32F302xD(E)/303xD(E).
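On actually starting the bootloader without strapping BOOT0: you can also jump into system memory from your own application code. A minimal sketch follows, assuming the STM32F303 system-memory base of 0x1FFFD800 (taken from AN2606; verify it for your exact part) and assuming you de-initialize clocks, peripherals, and interrupts beforehand:

    #include <stdint.h>

    /* System-memory (bootloader) base for STM32F303 per AN2606 --
     * an assumption here; verify against the table for your exact part. */
    #define SYSTEM_MEMORY_BASE 0x1FFFD800UL

    typedef void (*boot_entry_t)(void);

    static void jump_to_bootloader(void)
    {
        /* Word 0 of system memory holds the bootloader's initial stack
         * pointer; word 1 holds its reset handler address. */
        uint32_t sp    = *(volatile uint32_t *)SYSTEM_MEMORY_BASE;
        uint32_t entry = *(volatile uint32_t *)(SYSTEM_MEMORY_BASE + 4U);

        /* Peripherals, clocks, and interrupts should be back in a
         * reset-like state before reaching this point. */
        __asm__ volatile ("msr msp, %0" : : "r" (sp));
        ((boot_entry_t)entry)();    /* never returns */
    }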

How drivers work (e.g. PCIe and USB)

I am curious about how drivers work in general. I understand the basic concepts and how a single driver operates. Where I am confused is how things work when multiple drivers are involved.
Let me explain my question through an example:
Suppose I have a PCIe and a USB interface in hardware. The primary interface to the host (where the driver, OS, and applications reside) is PCIe. The USB interface is accessible to the host through PCIe.
So, in this case, I would have a driver for PCIe as well as for USB.
When an application has to transfer data through USB, it would invoke system/OS calls, which would eventually land in the USB driver.
Is this correct?
Once the USB driver has completed its processing, PCIe calls have to be made. Who makes them? Is it the OS, or the USB driver itself?
I would assume the OS, as otherwise it would break the basic modular philosophy. But a driver calling into the OS seems counterintuitive, as I always assumed the flow to be from application to OS to driver to hardware.
Can anyone please shed some light on this topic?
Much like in user-space code, there exist standardized APIs for accessing various types of hardware in kernel land (exact usage varies by OS). As a result, it isn't much of a stretch for one device driver to access another device's driver via these standardized APIs. (Warning: USB is a very complex protocol, and many details have been glossed over to keep a long post shorter.)
The original question focused on PCIe-to-USB cards. In this example I think it's helpful to think of there being three "layers" of drivers. The first layer is the PCIe bus controller driver, which handles PCIe-specific functions such as mapping out MMIO for PCIe devices and supporting interrupts from those devices. The second layer is the USB host controller layer, which provides the functions for issuing standardized USB transactions. Finally, the USB device driver (like a USB keyboard driver) sits on top of the stack, using those standardized USB transactions to implement the functionality of the specific USB peripheral. Calls from the keyboard driver go down into the USB host controller driver, which in turn may call down into the PCIe driver. All of this happens in kernel space, even though many separate drivers are involved.
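To make the layering concrete, here is a hedged sketch (Linux-flavored C) of the top of that stack: the keyboard driver never touches the host controller directly; it builds an URB and submits it to the USB core, which routes it to whatever host controller driver sits underneath. The endpoint number and polling interval are assumptions for the sketch:

    #include <linux/usb.h>

    /* Inside a hypothetical USB keyboard driver: queue an interrupt-IN
     * transfer. The USB core forwards it to the bound HCD (e.g. xhci-hcd),
     * which does the MMIO/PCIe work; this driver never sees any of that. */
    static int kbd_start_io(struct usb_device *udev, struct urb *urb,
                            void *buf, int len, usb_complete_t done, void *ctx)
    {
        /* Endpoint 1 IN and an interval of 8 are assumptions here. */
        usb_fill_int_urb(urb, udev, usb_rcvintpipe(udev, 1),
                         buf, len, done, ctx, 8 /* polling interval */);
        return usb_submit_urb(urb, GFP_KERNEL);
    }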
Most PCIe devices do the bulk of their communication with the CPU via MMIO accesses, which appear as memory reads/writes to the processor. Generally no specific driver function is needed to perform the MMIO transfer of data between the PCIe device and the CPU (although there may be some simple access functions to do endian correction or deal with cache issues).
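Those simple access functions often look like this in a Linux driver, where ioread32()/iowrite32() take care of byte order and I/O ordering for the ioremap()ed region (the register offset here is invented):

    #include <linux/io.h>

    #define DEV_STATUS 0x10   /* hypothetical register offset */

    /* ioread32()/iowrite32() hide byte-order (device registers are usually
     * little-endian) and ordering concerns for the mapped region. */
    static u32 dev_read_status(void __iomem *regs)
    {
        return ioread32(regs + DEV_STATUS);
    }

    static void dev_clear_status(void __iomem *regs)
    {
        iowrite32(0, regs + DEV_STATUS);
    }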
USB host controller drivers are interesting in that they conform to a standard (such as xHCI, the USB 3.0 host controller interface, which I'll use in this example) which dictates a standard device memory map and behavior. Thus there is usually a chip-specific driver that performs any non-standard initialization of the USB host controller device. Additionally, these chip-specific drivers retrieve the location of the xHCI standardized MMIO region and provide a way to receive interrupts from the xHCI controller (in this example, via PCIe interrupts).
Next, this standardized memory region and interrupt mechanism are passed to a generic xHCI host controller driver. The generic xHCI code does not care whether the device is PCIe; it just cares that it gets a memory region that follows the xHCI standard and that it receives the correct interrupts. The xHCI driver provides the generic USB transfer functions, which the USB keyboard driver can in turn use to initiate USB transactions.
For the most part, the xHCI driver just does reads/writes to the MMIO region that was passed in. This allows the same common xHCI code to service a wide array of USB host controllers, many of which are not PCIe devices, effectively abstracting away the underlying hardware that implements the USB controller. Thus, for the example posed in the original question, the USB host controller standards are designed to hide the underlying hardware mechanisms and make for a more modular USB driver system.
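As a rough sketch of that glue pattern, under the assumption that the controller is a PCIe device: a chip-specific probe maps the standardized register region and hands it, plus the interrupt line, to a generic core. xhci_like_core_register() is an invented stand-in for the generic layer's entry point, not a real kernel API:

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Invented stand-in for the generic layer's entry point -- not a real
     * kernel API; the real xhci-pci/xhci-hcd split works along these lines. */
    extern int xhci_like_core_register(void __iomem *regs, int irq);

    static int glue_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *regs;
        int ret;

        ret = pci_enable_device(pdev);
        if (ret)
            return ret;

        ret = pci_request_regions(pdev, "xhci-like-glue");
        if (ret)
            goto err_disable;

        regs = pci_ioremap_bar(pdev, 0);   /* BAR 0: standardized registers */
        if (!regs) {
            ret = -ENOMEM;
            goto err_release;
        }

        /* Any chip-specific quirks/initialization happen here; from this
         * point on, the generic core only needs the registers and the IRQ. */
        return xhci_like_core_register(regs, pdev->irq);

    err_release:
        pci_release_regions(pdev);
    err_disable:
        pci_disable_device(pdev);
        return ret;
    }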

At boot time, how does the OS determine all the hardware?

I have these related questions:
Does anybody know how an OS gets to know all the hardware connected to the motherboard? (I guess this is called "hardware enumeration".)
How does it determine what kind of hardware resides at a specific I/O address (i.e., a serial, parallel, or other controller)?
How would one code a system module to do this job (assuming no OS is loaded yet, just the BIOS)?
I know the BIOS is just for validation and a user-friendly interface to configure hardware at boot time, with no real use after that for most modern OSes (Windows, Linux, etc.). Besides, I know that for the BIOS it should not be difficult to find all the hardware, because it is specifically tuned by the board manufacturer (who knows everything about it!). But for an OS or an application above the BIOS, that is a completely different story. Right?
Pre-PCI, this was much more difficult; you needed a trick for each product, and even then it was difficult to figure everything out. With USB and PCI you can scan the buses to find vendor and product IDs, and from there you go into product-specific discovery (which, like the old days, can be difficult). Sometimes the details for a board are protected by NDA, or worse, you just don't get to know unless you work there on the right team.
The general approach is detection-based (USB, PCIe, etc. vendor/product IDs): you load a driver, or write one for that family of product based on the documentation for that family. Since you mentioned BIOS, Windows, and Linux, that implies x86 or a PC, and that pretty much covers what is auto-detectable. Beyond that you rely on the user, who knows what hardware was installed in the system, and a driver that came with it; or you ask them in some way what is installed (a specific printer at the end of a cable, or out on the network, if not auto-detectable, is an easy-to-understand example).
In short, you take decades of experience in trying to succeed at this and apply it, and you still fail from time to time, since you are not in 100% control of all the hardware in the system; there are hundreds of vendors out there, each doing their own thing.
The BIOS enumerates PCI(e) on an x86 PC; on other platforms the OS might do it. Enumeration includes allocating address space for each device based on PCI-compliant rules, but within that address space you have to know how to program the specific board from vendor documentation, if available.
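For a feel of what that enumeration looks like at the lowest level, here is a freestanding sketch of the classic x86 mechanism: walk configuration space through I/O ports 0xCF8/0xCFC and read each function's vendor/device ID. It is simplified (no multifunction devices, no bridge recursion) and is the sort of code firmware or a hobby OS would run:

    #include <stdint.h>

    /* Port-I/O helpers (x86, GCC inline asm). */
    static inline void outl(uint16_t port, uint32_t val)
    {
        __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint32_t inl(uint16_t port)
    {
        uint32_t val;
        __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Read a 32-bit config-space register via the 0xCF8/0xCFC mechanism. */
    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint8_t off)
    {
        uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
                        (off & 0xFC);
        outl(0xCF8, addr);
        return inl(0xCFC);
    }

    static void pci_scan(void)
    {
        for (int bus = 0; bus < 256; bus++) {
            for (int dev = 0; dev < 32; dev++) {
                uint32_t id = pci_cfg_read32((uint8_t)bus, (uint8_t)dev, 0, 0);
                if ((id & 0xFFFF) == 0xFFFF)
                    continue;          /* no device at this slot */
                /* Low 16 bits: vendor ID; high 16 bits: device ID.
                 * This pair is the key used to select a driver. */
            }
        }
    }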
Sorry, my English is not very good.
Responding to questions 1 and 2:
During the boot process, only the hardware modules strictly necessary to find and start the OS are initialized.
These hardware modules are: motherboard, hard drive, RAM, graphics card, keyboard, mouse, screen (detected via the graphics card), network card, CD/DVD, and a few extra peripherals such as USB devices.
Each hardware module you connect to a computer has a controller, which is like a small BIOS storing all the information about the device: manufacturer, device type, protocols, etc.
The detection process is very simple: the BIOS has all the information about the motherboard and its communication ports hardcoded. At startup, the BIOS queries every system port, effectively asking "Who are you? What are you? How do you work?", and all attached devices answer by sending their information. In this way the computer knows how much RAM you have, whether a keyboard or mouse is present, which storage devices are available, the screen resolution, and so on.
Obviously, this process only works for the basic modules needed to boot the system; it does not work for complex peripherals that require specific drivers: printers, scanners, webcams, and so on. All these complex peripherals are set up by software once the OS has been loaded.
Responding to question 3:
If you want to load a specific module during boot, you must:
- Create a controller (driver) for this module.
- If the peripheral is too complex to handle everything from that controller, write the whole process for controlling the module directly in a BIOS module, or reprogram the IPL to manage that specific module.
I hope I've helped.
Actually, when you turn on your system, the BIOS starts the IPL (Initial Program Load) from ROM. It checks that all connected devices are working properly, including mandatory devices such as the keyboard, CMOS, hard disk, and so on. If a check fails, an error flag is raised and the BIOS reports the error through the video device.
If the checks succeed, control is returned to the BIOS for further processing.
This whole process is called POST (Power-On Self-Test).
Now the BIOS has control. It checks all secondary storage devices for an OS boot loader. If one is found, the bootloader is loaded into RAM.
The bootloader then executes from RAM; only at that point does the OS start to run.
If no bootloader is found, the BIOS shows a boot failure error.
I hope this clarifies your doubt.

Interfacing socket code with a Linux PCI driver

I have two devices that are interfaced with PCI. I also have code for both devices that uses generic socket code. (The devices were originally connected by MII/Ethernet.)
Now, I need to write a PCI device driver to transport packets back and forth between the two devices.
How do I access the file descriptors opened by the socket code? Is this the same as accessing a character device file?
The PCI driver has to somehow capture packets from read() and write() in the code.
Thanks!
The answers to your questions are: (1) You don't, and (2) no.
File descriptors are a user-space concept, and kernel drivers don't interact with user-space concepts. (Yes, they're implemented by the kernel, but other device drivers don't get to play with them directly, and shouldn't play with them even indirectly.)
What you do is implement methods that will receive data that is buffered in a kernel-accessible memory space, and send that to your hardware, and then receive data from your hardware and write it (when asked) to a buffer in kernel-accessible memory.
You'll do this by implementing the character device driver APIs as well as the PCI device driver APIs and then registering your driver as a PCI device, and then a character device. While some of these methods may refer to file structures, they will not be the user-land structures that you know and love.
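As a hedged skeleton of that double registration (every name here, such as pktdev, is invented for illustration): register with the PCI core for probing, expose a character device, and shuttle data between user buffers and kernel memory with copy_from_user()/copy_to_user():

    #include <linux/module.h>
    #include <linux/pci.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t pktdev_write(struct file *f, const char __user *ubuf,
                                size_t len, loff_t *off)
    {
        char kbuf[256];

        if (len > sizeof(kbuf))
            len = sizeof(kbuf);
        if (copy_from_user(kbuf, ubuf, len))
            return -EFAULT;
        /* ...hand kbuf to the hardware via MMIO/DMA here... */
        return len;
    }

    static const struct file_operations pktdev_fops = {
        .owner = THIS_MODULE,
        .write = pktdev_write,
        /* .read would pull received data from a driver-owned buffer */
    };

    static struct miscdevice pktdev_misc = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "pktdev",
        .fops  = &pktdev_fops,
    };

    static int pktdev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int ret = pci_enable_device(pdev);
        if (ret)
            return ret;
        return misc_register(&pktdev_misc);   /* creates /dev/pktdev */
    }

    static void pktdev_remove(struct pci_dev *pdev)
    {
        misc_deregister(&pktdev_misc);
        pci_disable_device(pdev);
    }

    static const struct pci_device_id pktdev_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) },  /* placeholder vendor/device IDs */
        { }
    };
    MODULE_DEVICE_TABLE(pci, pktdev_ids);

    static struct pci_driver pktdev_driver = {
        .name     = "pktdev",
        .id_table = pktdev_ids,
        .probe    = pktdev_probe,
        .remove   = pktdev_remove,
    };
    module_pci_driver(pktdev_driver);
    MODULE_LICENSE("GPL");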
For devices that implement the Ethernet protocols, life is easier because you implement the Net Device Interface instead. This way all you have to write is the parts necessary to get data to and from your hardware.
What you'll need are the specifications for the device hardware: how you control it using PCI registers and regions.
The good news is, you don't have to do this alone -- there's a large community of kernel developers, and several good (and current) books on developing for the Linux kernel (see below).
References
Understanding the Linux Kernel
Linux Device Drivers
Essential Linux Device Drivers

Direct communication between two PCI devices

I have a NIC card and a HDD both connected on PCIe slots in a Linux machine. Ideally, I'd like to funnel incoming packets to the HDD without involving the CPU, or involving it minimally. Is it possible to set up direct communication along the PCI bus like that? Does anyone have pointers as to what to read up on to get started on a project like this?
Thanks all.
Not sure if you are asking about PCI or PCIe. You used both terms, and the answer is different for each.
If you are talking about a legacy PCI bus: the answer is "yes". Board-to-board DMA is doable. Video capture boards may DMA video frames directly into your graphics card's memory, for example.
In your example, the NIC could likewise DMA directly to a storage device. However, the data would be quite "raw"; your NIC has no concept of a filesystem, for example. You also need to make sure you can program the NIC's DMA engine to stay within the confines of your SATA controller's registers. You don't want to walk off the end of the BAR!
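Staying within the BAR looks roughly like this in a Linux driver. The descriptor layout is entirely hypothetical, and the sketch assumes bus addresses equal CPU physical addresses (no IOMMU in the path):

    #include <linux/pci.h>

    /* Hypothetical descriptor for the NIC's DMA engine. */
    struct nic_dma_desc {
        u64 dst_bus_addr;
        u32 len;
    };

    /* Aim the NIC's DMA at a window inside the peer device's BAR 0,
     * refusing anything that would run past the end of the BAR. */
    static int point_nic_at_peer(struct pci_dev *peer, struct nic_dma_desc *d,
                                 u32 offset, u32 len)
    {
        resource_size_t bar_len = pci_resource_len(peer, 0);

        if ((u64)offset + len > bar_len)
            return -EINVAL;     /* would walk off the end of the BAR */

        d->dst_bus_addr = (u64)pci_resource_start(peer, 0) + offset;
        d->len = len;
        return 0;
    }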
If you are talking about a modern PCIe bus: the answer is "typically no, but it depends". Peer-to-peer bus transactions are a funny thing in the PCI Express spec; root complex devices are not required to support them.
In my testing, peer-to-peer DMA will work, if your devices are behind a PCIe switch (not directly plugged into the motherboard). However, if your devices are connected directly to the chipset (Root Complex), peer-to-peer DMA will not work, except in some special cases. The most notable special case would be the video capture example I mentioned earlier. The special cases are mentioned in the chipset datasheets.
We have tested peer-to-peer PCIe DMA with a few different Intel and AMD chipsets and found consistent behavior, though we have not tested the most recent generations of chipsets. (We have discussed the lack of peer-to-peer PCIe DMA support with Intel; I'm not sure whether our feedback has had any impact on their engineering department.)
Assuming that both the NIC card and the HDD are Endpoints (or Legacy Endpoints), you cannot funnel traffic between them without involving the Root Complex (CPU).
PCIe, unlike PCI or PCI-X, is not a shared bus but a point-to-point link, so any transaction from an Endpoint device (say, the NIC) has to travel through the Root Complex (CPU) in order to reach another branch (the HDD).