How to find separate cpu1 and cpu2 usage in snmpwalk? - centos

I want to get the usage of cpu1 and cpu2 separately using snmpwalk.
Mine is a dual-core CPU. Does anybody know the exact OID for the cpu1 and cpu2 usage?
I am using the CentOS operating system. Thank you

The OID HOST-RESOURCES-MIB::hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2) shows the CPU percentage per processor (averaged over the last minute). However, the index of that OID is HOST-RESOURCES-MIB::hrDeviceIndex (.1.3.6.1.2.1.25.3.2.1.1), so you won't get something convenient like ".1" for the first processor and ".2" for the second one. Instead, you'll probably get something like this:
snmpwalk -v2c -cpublic localhost HOST-RESOURCES-MIB::hrProcessorLoad
HOST-RESOURCES-MIB::hrProcessorLoad.196608 = INTEGER: 15
HOST-RESOURCES-MIB::hrProcessorLoad.196609 = INTEGER: 3
HOST-RESOURCES-MIB::hrProcessorLoad.196610 = INTEGER: 4
HOST-RESOURCES-MIB::hrProcessorLoad.196611 = INTEGER: 3
What matters is that each entry represents a different processor (or core, or whatever). Here, you can see that this box has four such processors.
To get something a little bit more descriptive for the kind of processor, you can check HOST-RESOURCES-MIB::hrDeviceDescr (.1.3.6.1.2.1.25.3.2.1.3). For example:
snmpwalk -v2c -cpublic localhost HOST-RESOURCES-MIB::hrDeviceDescr
HOST-RESOURCES-MIB::hrDeviceDescr.196608 = STRING: AuthenticAMD: AMD Phenom(tm) 9550 Quad-Core Processor
HOST-RESOURCES-MIB::hrDeviceDescr.196609 = STRING: AuthenticAMD: AMD Phenom(tm) 9550 Quad-Core Processor
HOST-RESOURCES-MIB::hrDeviceDescr.196610 = STRING: AuthenticAMD: AMD Phenom(tm) 9550 Quad-Core Processor
HOST-RESOURCES-MIB::hrDeviceDescr.196611 = STRING: AuthenticAMD: AMD Phenom(tm) 9550 Quad-Core Processor
HOST-RESOURCES-MIB::hrDeviceDescr.262145 = STRING: network interface lo
HOST-RESOURCES-MIB::hrDeviceDescr.262146 = STRING: network interface eth1
HOST-RESOURCES-MIB::hrDeviceDescr.786432 = STRING: Guessing that there's a floating point co-processor
Here, you can see that there are more things indexed by HOST-RESOURCES-MIB::hrDeviceIndex than just the processors. For example, the two network interfaces ("lo" and "eth1") are listed. Just be sure to ask for the indexes that match those of your processors.
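If you only want a single core, you can ask for that index directly with snmpget (the index and value below are just the ones from the example output above; yours will differ):
snmpget -v2c -cpublic localhost HOST-RESOURCES-MIB::hrProcessorLoad.196608
HOST-RESOURCES-MIB::hrProcessorLoad.196608 = INTEGER: 15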

Related

QSPI connection on STM32 microcontrollers with other peripherals instead of Flash memories

I will start a project which needs the QSPI protocol. The component I will use is a 16-bit ADC which supports QSPI with all combinations of clock phase and polarity. Unfortunately, I couldn't find any source on the internet about using QSPI on STM32 with components other than Flash memories. Now, my question: can I use the STM32's QSPI peripheral to communicate with other devices that support QSPI, or is it only meant to be used with memories?
The ADC component I want to use is: ADS9224R (16-bit, 3MSPS)
Here is the image from the datasheet that shows this device supports the full QSPI protocol.
Many thanks
page 33 of the datasheet
The STM32 QSPI peripheral can work in several modes. The Memory-Mapped mode is specifically designed for memories. The Indirect mode, however, can be used for any peripheral. In this mode you can specify the format of the commands that are exchanged: presence of an instruction, of an address, of data, etc.
See register QUADSPI_CCR.
QUADSPI supports indirect mode, where for each data transaction you manually specify the command, the number of address bytes, the number of data bytes, the number of lines used for each part of the communication, and so on. I don't know whether HAL supports all of that; it would probably be more efficient to work directly with the QUADSPI registers - there are simply too many levers and controls you need to set up, and if the library is missing something, things may not work the way you want, and QUADSPI is pretty unpleasant to debug. Luckily, after the initial setup, you probably won't need to change its settings very much.
In fact, some time ago, when I was learning QUADSPI, I wrote my own indirect read/write for a QUADSPI flash - purely a demo program for myself. With a bit of tweaking it shouldn't be hard to adapt it. From my personal experience, QUADSPI is a little hard at first; I spent a couple of weeks debugging it with a logic analyzer until I got it to work. Or maybe that was due to my general inexperience.
Below you can find one of my functions, which can be used after the initial setup of QUADSPI. The other communication functions are around the same length. You only need to set some settings in a few registers. Be careful with the order of your register manipulations - there is no "start communication" flag/bit/command. Communication starts automatically when you set certain parameters in specific registers. This is explicitly stated in the reference manual, QUADSPI section, which was the only documentation I used to write my code. There is surprisingly little information on QUADSPI available on the Internet, and even less at the register level.
Here is a piece from my basic example code on registers:
void QSPI_readMemoryBytesQuad(uint32_t address, uint32_t length, uint8_t destination[]) {
    while (QUADSPI->SR & QUADSPI_SR_BUSY); // Make sure no operation is going on
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; // Clear all flags
    QUADSPI->DLR = length - 1U; // Set number of bytes to read
    QUADSPI->CR = (QUADSPI->CR & ~(QUADSPI_CR_FTHRES)) | (0x00 << QUADSPI_CR_FTHRES_Pos); // Set FIFO threshold to 1
    /*
     * Set communication configuration register
     * Functional mode: Indirect read
     * Data mode: 4 Lines
     * Instruction mode: 4 Lines
     * Address mode: 4 Lines
     * Address size: 24 Bits
     * Dummy cycles: 6 Cycles
     * Instruction: Quad Output Fast Read
     *
     * Set 24-bit Address
     */
    QUADSPI->CCR =
        (QSPI_FMODE_INDIRECT_READ << QUADSPI_CCR_FMODE_Pos) |
        (QIO_QUAD << QUADSPI_CCR_DMODE_Pos) |
        (QIO_QUAD << QUADSPI_CCR_IMODE_Pos) |
        (QIO_QUAD << QUADSPI_CCR_ADMODE_Pos) |
        (QSPI_ADSIZE_24 << QUADSPI_CCR_ADSIZE_Pos) |
        (0x06 << QUADSPI_CCR_DCYC_Pos) |
        (MT25QL128ABA1EW9_COMMAND_QUAD_OUTPUT_FAST_READ << QUADSPI_CCR_INSTRUCTION_Pos);
    QUADSPI->AR = (0xFFFFFF) & address;
    /* ---------- Communication starts automatically ---------- */
    while (QUADSPI->SR & QUADSPI_SR_BUSY) {
        if (QUADSPI->SR & QUADSPI_SR_FTF) {
            *destination = *((uint8_t*) &(QUADSPI->DR)); // Read a byte from the data register, byte access
            destination++;
        }
    }
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; // Clear flags
}
It is a little crude, but it may be a good starting point for you, and it's well-tested and definitely works. You can find all my functions here (GitHub). Combine it with reading the QUADSPI section of the reference manual, and you should start to get a grasp of how to make it work.
Your job will be to determine what kind of commands, and in what format, you need to send to your QSPI slave device. That information is available in the device's datasheet. Make sure you send the command, the address and every other part of the transaction on the correct number of QUADSPI lines. For example, sometimes you need to have the command on 1 line and the data on all 4, all in the same transaction. Make sure you set dummy cycles, if they are required for some operation. Pay special attention to how you read the data that you receive via QUADSPI. You can read it in whole 32-bit words at once (if the incoming data is a whole number of 32-bit words). In my case - in the function provided here - I read it byte by byte, hence the scary-looking *destination = *((uint8_t*) &(QUADSPI->DR));, where I take the address of the data register, cast it to a pointer to uint8_t and dereference it. Otherwise, if you read DR just as QUADSPI->DR, your MCU reads a 32-bit word for every byte that arrives, and QUADSPI goes crazy, hangs, shows various errors and triggers FIFO threshold flags and so on. Just be mindful of how you read that register.
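To make the "command on 1 line, data on all 4" case more concrete, here is a rough, untested sketch in the same register style (my own illustration, not from my repository): the opcode 0x12 is a made-up placeholder, the transaction has no address or dummy phase, and the FIFO threshold is assumed to already be set to 1 during init, as in the function above. Check your ADC's datasheet for the real command format.
// Sketch: indirect read, instruction on 1 line, no address phase, data on 4 lines.
// Field encodings from the QUADSPI chapter of the reference manual:
// FMODE = 01 (indirect read), IMODE = 01 (1 line), ADMODE = 00 (none), DMODE = 11 (4 lines).
#define HYPOTHETICAL_READ_CMD 0x12u // made-up opcode, replace with the device's real one

void QSPI_readSlaveQuad(uint32_t length, uint8_t destination[]) {
    while (QUADSPI->SR & QUADSPI_SR_BUSY);
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF;
    QUADSPI->DLR = length - 1U; // Number of bytes to read
    QUADSPI->CCR =
        (0x1u << QUADSPI_CCR_FMODE_Pos) |  // indirect read
        (0x3u << QUADSPI_CCR_DMODE_Pos) |  // data on 4 lines
        (0x1u << QUADSPI_CCR_IMODE_Pos) |  // instruction on 1 line
        (0x0u << QUADSPI_CCR_ADMODE_Pos) | // no address phase
        (0x0u << QUADSPI_CCR_DCYC_Pos) |   // no dummy cycles (device-specific)
        (HYPOTHETICAL_READ_CMD << QUADSPI_CCR_INSTRUCTION_Pos);
    // With no address phase, writing CCR (i.e. the instruction) starts the transaction.
    while (QUADSPI->SR & QUADSPI_SR_BUSY) {
        if (QUADSPI->SR & QUADSPI_SR_FTF) {
            *destination++ = *(volatile uint8_t *) &(QUADSPI->DR); // byte access
        }
    }
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF;
}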

CANopen DBC edit - selecting non-sequential bytes for 16-bit data

I am trying to write a .dbc file for a CANopen data log (an example of one of the lines I am trying to use is below):
Time 884.163000, ID:2a1, Data Bytes (0)7b (1)00 (2)95 (3)68 (4)e5 (5)8e (6)49 (7)54
I have written .dbc files using both Motorola and Intel endianness using CANdb++, covering 16-bit data over 2 bytes, but this has always been with sequential bytes, i.e. (2),(3) or (5),(6).
The bytes I need to use for the particular data in the above example are (3) and (7) in Intel format (54,68 in this case). I have written a .dbc for just byte 3, shown in the snip below.
BO_ 673 Rig_Pressure: 4 Vector__XXX
SG_ Pressure_Multiplex M : 15|8#0+ (1,0) [0|0] "" Vector__XXX
SG_ Pressure m0 : 31|8#0- (0.45,58) [0.399999999999999|115.15] "Bar" Vector__XXX
I am asking if there is a way to modify the text file (or use CANdb++) to specify each bit, or to pick 2 non-sequential bytes in the .dbc - something like changing the start-bit definition, e.g.
31|8#0- to 31|8#0- 63|8#0-
I am far from a computer programmer, I much prefer GUI-based programs and am only starting out in learning Python, so please be gentle!!!
Thank you!

What is the latency of `clwb` and `ntstore` on Intel's Optane Persistent Memory?

In this paper, it is written that an 8-byte sequential write with clwb or ntstore on Optane PM has a latency of 90 ns and 62 ns respectively, and a sequential read has a latency of 169 ns.
But in my test with an Intel 5218R CPU, clwb takes about 700 ns and ntstore about 1200 ns. Of course, there are differences between my test method and the paper's, but the results are so much worse that they seem unreasonable. And my test is closer to actual usage.
During the test, did the Write Pending Queue of the CPU's iMC, or the WC buffer in the Optane PM, become the bottleneck and cause stalls, making the measured latency inaccurate? If so, is there a tool to detect it?
#include "libpmem.h"
#include "stdio.h"
#include "x86intrin.h"
//gcc aep_test.c -o aep_test -O3 -mclwb -lpmem
int main()
{
size_t mapped_len;
char str[32];
int is_pmem;
sprintf(str, "/mnt/pmem/pmmap_file_1");
int64_t *p = pmem_map_file(str, 4096 * 1024 * 128, PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
if (p == NULL)
{
printf("map file fail!");
exit(1);
}
if (!is_pmem)
{
printf("map file fail!");
exit(1);
}
struct timeval start;
struct timeval end;
unsigned long diff;
int loop_num = 10000;
_mm_mfence();
gettimeofday(&start, NULL);
for (int i = 0; i < loop_num; i++)
{
p[i] = 0x2222;
_mm_clwb(p + i);
// _mm_stream_si64(p + i, 0x2222);
_mm_sfence();
}
gettimeofday(&end, NULL);
diff = 1000000 * (end.tv_sec - start.tv_sec) + end.tv_usec - start.tv_usec;
printf("Total time is %ld us\n", diff);
printf("Latency is %ld ns\n", diff * 1000 / loop_num);
return 0;
}
Any help or correction is much appreciated!
The main reason is that repeatedly flushing the same cache line is delayed dramatically [1].
You are testing the average latency instead of the best-case latency like the FAST '20 paper does.
ntstore is more expensive than clwb, so its latency is higher; I guess there is a typo in your first paragraph.
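To illustrate the first point, here is a minimal tweak of the loop from your program (my own sketch, assuming 64-byte cache lines): write and flush a fresh cache line on every iteration instead of hitting the same line eight times in a row.
#include <stdint.h>
#include <x86intrin.h>

// Each cache line holds 8 int64_t elements (64 bytes / 8 bytes).
#define INT64_PER_LINE 8

// Same measurement loop as in the question, but stepping by a whole cache
// line, so clwb never re-flushes a line that was just flushed.
static void write_and_flush_per_line(int64_t *p, int loop_num)
{
    for (int i = 0; i < loop_num; i++)
    {
        int64_t *line = p + (size_t)i * INT64_PER_LINE; // next cache line
        *line = 0x2222;
        _mm_clwb(line);
        _mm_sfence();
    }
}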
appended on 4.14
Q: Is there a tool to detect a possible bottleneck in the WPQ or the buffers?
A: You can get a baseline while the PM is idle, and use this baseline to indicate a possible bottleneck.
Tools:
Intel Memory Bandwidth Monitoring
It reads two hardware counters from the performance monitoring unit (PMU) in the processor: 1) UNC_M_PMM_WPQ_OCCUPANCY.ALL, which counts the accumulated number of WPQ entries at each cycle, and 2) UNC_M_PMM_WPQ_INSERTS, which counts how many entries have been inserted into the WPQ. Then calculate the queueing delay of the WPQ: UNC_M_PMM_WPQ_OCCUPANCY.ALL / UNC_M_PMM_WPQ_INSERTS. [2]
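For example (made-up numbers, just to illustrate the formula): if UNC_M_PMM_WPQ_OCCUPANCY.ALL accumulates 1,000,000 entry-cycles over an interval in which UNC_M_PMM_WPQ_INSERTS counts 50,000 inserts, the average time an entry spends in the WPQ is 1,000,000 / 50,000 = 20 cycles.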
[1] Chen, Youmin, et al. "Flatstore: An efficient log-structured key-value storage engine for persistent memory." Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems. 2020.
[2] Imamura, Satoshi, and Eiji Yoshida. “The analysis of inter-process interference on a hybrid memory system.” Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region Workshops. 2020.
https://www.usenix.org/system/files/fast20-yang.pdf describes what they're measuring: the CPU side of doing one store + clwb + mfence for a cached write (see footnote 1). So it's the CPU-pipeline latency of getting a store "accepted" into something persistent.
This isn't the same thing as the store making it all the way to the Optane chips themselves; the Write Pending Queues (WPQs) of the memory controllers are part of the persistence domain on Cascade Lake Intel CPUs like yours; wikichip quotes an Intel image illustrating this.
Footnote 1: Also note that clwb on Cascade Lake works like clflushopt - it just evicts. So store + clwb + mfence in a loop test would test the cache-cold case, if you don't do something to load the line before the timed interval. (From the paper's description, I think they do). Future CPUs will hopefully properly support clwb, but at least CSL got the instruction supported so future libraries won't have to check CPU features before using it.
You're doing many stores, which will fill up any buffers in the memory controller or elsewhere in the memory hierarchy. So you're measuring throughput of a loop, not latency of one store plus mfence itself in a previously-idle CPU pipeline.
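For reference, one way to get closer to a one-shot latency number (this is my own sketch, not necessarily what the paper did) is to time a single store + clwb + sfence with rdtscp:
#include <stdint.h>
#include <x86intrin.h>

// Sketch of a "one-shot" measurement: time a single store + clwb + sfence
// with rdtscp. TSC ticks still have to be converted to nanoseconds using the
// TSC frequency, and a single sample is noisy, so in practice you would take
// the minimum over many widely spaced samples.
static inline uint64_t time_one_persist(int64_t *p)
{
    unsigned int aux;
    _mm_mfence();                   // let earlier memory operations drain
    uint64_t t0 = __rdtscp(&aux);
    *p = 0x2222;
    _mm_clwb(p);
    _mm_sfence();
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;                 // elapsed TSC ticks
}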
Separate from that, rewriting the same line repeatedly seems to be slower than sequential write, for example. This Intel forum post reports "higher latency" for "flushing a cacheline repeatedly" than for flushing different cache lines. (The controller inside the DIMM does do wear leveling, BTW.)
Fun fact: later generations of Intel CPUs (perhaps CPL or ICX) will have even the caches (L3?) in the persistence domain, hopefully making clwb even cheaper. IDK if that would affect back-to-back movnti throughput to the same location, though, or even clflushopt.
During the test, did the Write Pending Queue of CPU's iMC or the WC buffer in the optane PM become the bottleneck, causing blockage, and the measured latency has been inaccurate?
Yes, that would be my guess.
If this is the case, is there a tool to detect it?
I don't know, sorry.

Question on PCI Express (PCIe) configuration space access on VirtualBox

Hi, I'm trying to access the PCIe configuration space with the MMIO method from my own kernel.
Before I get to my question: my platform is Windows 10 with VirtualBox 6.0.10.
My virtual machine is set to the defaults except for the following:
Chipset set to ICH9
Core number set to 4
Memory set to 1 GB
Added an IDE controller (no HD connected)
After boot, the printout shows that the valid memory ranges are 0x0~0x9FC00 and 0x100000~0x3FEF0000, as displayed in the following screenshot.
Here type 1 is RAM, 2 is ROM or Reserved, 3 is ACPI Reclaim Memory and 4 is ACPI NVS Memory.
Furthermore, I retrieved the base address of the PCIe configuration memory map from the MCFG table, as shown in the following screenshot.
It can be seen that:
The configuration space base is 0x3F000000, which overlaps with the valid memory space.
The first 8 bytes, at 0x3F000000~0x3F000008, are all 0, though they should be the first 8 bytes of bus 0, device 0, function 0.
So should I not use VirtualBox, or do I need to do something else to enable PCIe MMIO access to the configuration space?
Thanks so much!!
You've probably parsed the "MCFG ACPI table" incorrectly, or used the wrong (virtual?) address for the "MCFG ACPI table" and forgot to check the signature and checksum.
The "Base_addr:" doesn't make sense, and the "Start_PCI_bus: 0, End_PCI_bus: 0" doesn't make sense either.

Question about Message Signaled Interrupts (MSI) on x86 LAPIC system

Hi, I'm writing a kernel and plan to use MSI interrupts for PCI devices.
However, I'm also quite confused by the documentation.
My understanding of MSI is as follows:
From the PCI device's point of view:
The documentation indicates that I need to find Capability ID = 0x05 to locate 3 registers: the Message Control (MCR), Message Address (MAR) and Message Data (MDR) registers.
MCR provides control functionality for the MSI interrupt.
MAR provides the physical address the PCI device will write to once an interrupt occurs.
MDR forms the actual data it will write to that physical address.
From the CPU's point of view:
The documentation shows that the Message Address register contains a fixed top of 0xFEE, followed by the destination ID (LAPIC ID) and other control bits, as follows:
The Message Data register contains the following information, including the interrupt vector:
After reading all of this, I am wondering: if the APIC_ID is 0x0, would the Message Address conflict with the Local APIC memory mapping, even though the addresses FEE00000~FEE00010 are reserved?
In addition, is it true that the vector number in MDR corresponds to the IDT vector number? In other words, if I set MAR = 0xFEE0000C (Destination ID = 0, using logical APIC ID) and MDR = 0x0032 (edge trigger, Vector = 50) and enable the MSI interrupt, then once the device issues an interrupt, would the CPU run the handler pointed to by IDT[50]? After that, do I write 0 to the EOI register to end it?
Finally, according to the documentation, the upper 32 bits of MAR are not used? Can anyone help with this?
Thanks a lot!
Your understanding of how to detect and program MSI in a PCI (or PCIe) device is correct.*
The message address controls the destination (which CPU the interrupt is sent to), while the message data contains the vector number. For normal interrupts, all bits of the message data should be 0 except for the low 8 bits, which contain the vector.
The vector is an index into the IDT, so if the message data is 0x0032, the interrupt is delivered through entry 50 of the IDT.**
If the Destination ID in an interrupt message is 0, the Message Address of the MSI does match the default address of the local APIC, but they do not conflict, because the APIC can only be written by the CPU and MSIs can only be written by devices.
On x86 platforms, the upper 32 bits of the message address must be 0. This can be done by setting the upper part of the message address to 0 or by programming the device to use a 32-bit message address (in which case the upper message address register is not used). The PCI spec was designed to work with systems where 64-bit MSI addresses are used, but x86 systems never use the upper 32 bits of the message address.
Reprogramming the APIC base address by writing to the APIC_BASE MSR does not affect the address range used for MSI; it is always 0xFEExxxxx.
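As a rough sketch of how those two values fit together (my own illustration, using the layout described above: destination ID in bits 19:12 of the message address, vector in the low 8 bits of the message data, everything else left at 0 for a fixed-delivery, edge-triggered interrupt; the helper names are made up):
#include <stdint.h>

// Hypothetical helpers for composing MSI address/data values on x86.
static inline uint32_t msi_address(uint8_t dest_apic_id)
{
    return 0xFEE00000u | ((uint32_t)dest_apic_id << 12); // fixed 0xFEE prefix + destination ID
}

static inline uint32_t msi_data(uint8_t vector)
{
    return vector; // fixed delivery mode, edge-triggered: all other bits 0
}

// Example: destination APIC ID 0, vector 50 (0x32), matching the question:
// msi_address(0) == 0xFEE00000, msi_data(50) == 0x00000032.
// These values are then written into the device's MAR and MDR through the MSI
// capability in its configuration space (with the upper MAR dword set to 0).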
* You should also look at the MSI-X capability, because some devices support MSI-X but not MSI. MSI-X is a bit more flexible, which inevitably makes it a bit more complicated.
** When using the MSI capability, the message data isn't exactly the value in the Message Data Register (MDR). The MSI capability allows the device to use several contiguous vectors. When the device sends an interrupt message, it replaces the low bits of the MDR with a different value depending on the interrupt cause within the device.