BPF verifier log is truncated, how to get the full log?

As shown below, the BPF verifier log is truncated at the end. How can I get the full log?
368=mmmmmmmm fp-376=mmmmmmmm fp-432=mmmmmmmm fp-440=inv fp-448=inv fp-456=map_value fp-464=inv
389: (73) *(u8 *)(r3 +322) = r1
390: (71) r1 = *(u8 *)(r2 +713)
R0=inv(id=0,umax_value=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R1_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R2_w=map_value(id=0,off=0,ks=260,vs=904,imm=0) R3=pkt(id=0,off=42,r=398,imm=0) R4_w=inv0 R6=invP0 R7=ctx(id=0,off=0,imm=0) R8=inv(id=0) R9=inv(id=9) R10=fp0 fp-32=????mmmm fp-40=mmmmmmmm
fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmmmmmm fp-112=mmmmmmmm fp-120=mmmmmmmm fp-128=mmmmmmmm fp-136=mmmmmmmm fp-144=mmmmmmmm
fp-152=mmmmmmmm fp-160=mmmmmmmm fp-168=mmmmmmmm fp-176=mmmmmmmm fp-184=mmmmmmmm fp-192=mmmmmmmm fp-200=mmmmmmmm fp-208=mmmmmmmm fp-216=mmmmmmmm fp-224=mmmmmmmm fp-232=mmmmmmmm fp-240=mmmmmmmm fp-248=mmmmmmmm fp-256=mmmmmmmm fp-264=mmmmmmmm fp-272=mmmmmmmm fp-280=mmmmmmmm fp-288=mmmmmmmm fp-296=mmmm????
fp-304=??mmmmmm fp-312=mmmmmmmm fp-320=mmmmmmmm fp-328=?mmmmmmm fp-336=mmmmmmmm fp-344= (truncated...)
Supplement:
With guidance from @Qeole, I have solved the problem.
The cilium/ebpf implementation can be used as a reference:
https://github.com/cilium/ebpf/commit/f365a1e12f0a2477c41ee907a917db6f9bd9cf72

You need to pass a larger buffer (and to indicate its length accordingly) to the verifier when you load your program.
The kernel receives a pointer to a union bpf_attr, which for loading programs starts like this:
struct { /* anonymous struct used by BPF_PROG_LOAD command */
    __u32         prog_type;   /* one of enum bpf_prog_type */
    __u32         insn_cnt;
    __aligned_u64 insns;
    __aligned_u64 license;
    __u32         log_level;   /* verbosity level of verifier */
    __u32         log_size;    /* size of user buffer */
    __aligned_u64 log_buf;     /* user supplied buffer */
    /* ... */
log_buf, of size log_size, is the buffer filled by the verifier. You don't usually set up these parameters yourself; how to do it depends on what you use to load your program. Most loaders rely on libbpf, and recent versions should automatically attempt to reload the program with a larger buffer in case of error, when the verifier output is truncated.
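If you load programs with the bpf() syscall directly, a minimal sketch looks like this (the program type, license and 1 MiB buffer size are illustrative assumptions; enlarge the buffer until the log fits):

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static char verifier_log[1 << 20];  /* 1 MiB log buffer, grow as needed */

    static int load_prog(const struct bpf_insn *insns, __u32 insn_cnt)
    {
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.prog_type = BPF_PROG_TYPE_XDP;               /* example program type */
        attr.insns     = (__u64)(unsigned long)insns;
        attr.insn_cnt  = insn_cnt;
        attr.license   = (__u64)(unsigned long)"GPL";
        attr.log_level = 1;                               /* ask the verifier for its log */
        attr.log_size  = sizeof(verifier_log);            /* size of the user buffer */
        attr.log_buf   = (__u64)(unsigned long)verifier_log;

        return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }

With libbpf-based loaders, the same three fields (log buffer, size and level) are exposed through the loading API, so with a recent libbpf you usually only need to update the library or pass a bigger buffer in your loader's options.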

Related

BPF Ring Buffer Invalid Argument (-22)?

I wanted to use eBPF's newest map type, BPF_MAP_TYPE_RINGBUF, but I can't find much information online on how to use it, so I am just doing some trial and error here. I defined and used it like this:
struct bpf_map_def SEC("maps") r_buf = {
.type = BPF_MAP_TYPE_RINGBUF,
.max_entries = 1 << 2,
};
SEC("lsm/task_alloc")
int BPF_PROG(task_alloc, struct task_struct *task, unsigned long clone_flags) {
uint32_t pid = task->pid;
bpf_ringbuf_output(&r_buf, &pid, sizeof(uint32_t), 0); //stores the pid value to the ring buffer
return 0;
}
But I got the following error when running:
libbpf: map 'r_buf': failed to create: Invalid argument(-22)
libbpf: failed to load object 'bpf_example_kern'
libbpf: failed to load BPF skeleton 'bpf_example_kern': -22
It seems like libbpf does not recognize BPF_MAP_TYPE_RINGBUF? I cloned the latest libbpf from GitHub and ran make and make install. I am using a Linux 5.8.0 kernel.
UPDATE: The issue seems to be resolved if I change max_entries to something like 4096 * 64, but I don't know why this is the case.
You are right: the problem is the size of the BPF_MAP_TYPE_RINGBUF map (the max_entries attribute in the libbpf map definition). It has to be a multiple of the memory page size (4096 bytes, at least on most popular platforms). That explains why it all worked when you specified 64 * 4096.
BTW, if you'd like to see some examples of using it, I'd start with BPF selftests:
user-space part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf.c
kernel (BPF) part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf.c
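As a concrete illustration of the size requirement, the original definition only needs its max_entries changed; 64 * 4096 is the value from the question's update, and any sufficiently large page-multiple size should work the same way:

    struct bpf_map_def SEC("maps") r_buf = {
        .type = BPF_MAP_TYPE_RINGBUF,
        .max_entries = 64 * 4096, /* must be a multiple of the page size */
    };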

How to pass user space data to dmaengine client usage call?

[EDITED]
I have a board on arm64 with an FPGA (SoC).
The task is simple:
make it possible to transfer data between a user-space application and kernel-space physical memory (device memory = FPGA registers), with and without DMA (streaming type). That DMA is on the board (ZynqMP / GDMA).
I will have several devices, on the FPGA and outside it, which should use this communication, but for now I'm working only with the FPGA DDR4 memory area.
Currently I see the following logical flow:
some initialization (DMA parameters and so on);
ioremap() the FPGA device area;
allocate a buffer (with kzalloc() or something else); this buffer will be handed to user space via the mmap fops;
build a scatterlist from the buffer (pseudo-code below);
use the scatterlist with the dmaengine to transfer data;
// scatterlist init pseudo-code
struct scatterlist sgl[2];
struct scatterlist *sge;
int i, buf_n, err_code;
__u8 *buffer; /* allocated earlier */

sg_init_table(sgl, ARRAY_SIZE(sgl));
for_each_sg(sgl, sge, ARRAY_SIZE(sgl), i) {
    struct page *pg = virt_to_page(buffer + i * PAGE_SIZE);
    dma_addr_t dma_handle = dma_map_page(&pdev->dev, pg, 0, PAGE_SIZE,
                                         direction /* DMA_TO_DEVICE */);
    if ((err_code = dma_mapping_error(&pdev->dev, dma_handle))) {
        dev_err(&pdev->dev, "dma page mapping failed! (code: %i)\n", err_code);
        break;
    }
    sg_set_page(sge, pg, PAGE_SIZE, 0);
}
dma_map_sg(&pdev->dev, sgl, ARRAY_SIZE(sgl), direction); /* with appropriate check */
Now here is what I don't understand: how or where is the destination controlled? I mean, I have allocated the buffer in RAM, made a scatterlist from it, and passed this list as an argument to the dmaengine functions for the transfer. But I don't set or use the ioremapped device memory area to store this buffer data! Does this DMA only work with an appropriate RAM memory area, so that I should copy the buffer to the device area? Or should I use the ioremapped area as my buffer?
Is this the right flow? Please point out my mistakes.
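For completeness, step 5 of the flow above ("use the scatterlist with the dmaengine") usually looks roughly like the sketch below with the generic dmaengine slave API. This is not from the question: the channel name "tx", fpga_phys_addr and the memory-to-device direction are assumptions for illustration, and nents should be the count returned by dma_map_sg(). For slave transfers the destination address is supplied through dma_slave_config, which is where the destination ends up being controlled.

    #include <linux/dmaengine.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>
    #include <linux/scatterlist.h>

    /* Sketch: submit an already-mapped scatterlist to a dmaengine slave channel. */
    static int submit_sg_to_fpga(struct platform_device *pdev,
                                 struct scatterlist *sgl, int nents,
                                 dma_addr_t fpga_phys_addr)
    {
        struct dma_chan *chan;
        struct dma_slave_config cfg = { 0 };
        struct dma_async_tx_descriptor *desc;

        chan = dma_request_chan(&pdev->dev, "tx"); /* channel name from the DT node, assumed */
        if (IS_ERR(chan))
            return PTR_ERR(chan);

        /* For slave (device) transfers, this is where the destination is set. */
        cfg.direction      = DMA_MEM_TO_DEV;
        cfg.dst_addr       = fpga_phys_addr;       /* bus address of the FPGA window, assumed */
        cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
        dmaengine_slave_config(chan, &cfg);

        desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT);
        if (!desc) {
            dma_release_channel(chan);
            return -EINVAL;
        }

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);             /* the transfer actually starts here */
        return 0;
    }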

Unable to use PC15 as GPIO input on stm32f030rc

I'm working on a project using an STM32F030RC. I need to use PC15 as a GPIO input, but it appears I'm unable to.
I understand that the PC14/PC15 pair is shared with the LSE oscillator, but of course I'm not using that function. Moreover, I am able to read the correct pin level on the PC14 GPIO. In the datasheet for my part, the PC15 pin is marked as an I/O with OSC32_OUT as an additional function: can it be used as an input at all?
For reference, this is the C code I'm using to test the functionality; I'm using libopencm3 for initialization.
#include <libopencm3/stm32/rcc.h>
#include <libopencm3/stm32/gpio.h>

static void clock_setup(void)
{
    rcc_clock_setup_in_hsi_out_48mhz();

    /* Enable GPIOA, GPIOB, GPIOC clocks. */
    rcc_periph_clock_enable(RCC_GPIOA);
    rcc_periph_clock_enable(RCC_GPIOB);
    rcc_periph_clock_enable(RCC_GPIOC);
    rcc_periph_clock_enable(RCC_DBGMCU);

    /* Configure PA5 as output, PC14/PC15 as inputs. */
    gpio_mode_setup(GPIOA, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, GPIO5);
    gpio_mode_setup(GPIOC, GPIO_MODE_INPUT, GPIO_PUPD_NONE, GPIO15);
    gpio_mode_setup(GPIOC, GPIO_MODE_INPUT, GPIO_PUPD_NONE, GPIO14);
}
int main(void)
{
    unsigned long long i = 0;

    clock_setup();

    /* Blink the LED (PA5) on the board. */
    while (1)
    {
        gpio_toggle(GPIOA, GPIO5); /* LED on/off */
        for (i = 0; i < 400000; i++) /* Wait a bit. */
            __asm__("nop");

        // This conditional is never entered
        if (gpio_get(GPIOC, GPIO15) == 0) {
            __asm__("nop");
            __asm__("nop");
            __asm__("nop");
        }

        // This one works
        if (gpio_get(GPIOC, GPIO14) == 0) {
            __asm__("nop");
            __asm__("nop");
            __asm__("nop");
        }
    }

    return 0;
}
PC14 & PC15 have the same configuration properties. Of course, there are some limitations regarding using these pins as outputs (including PC13), but it should be okay to use them as inputs as long as you don't activate LSE functionality.
PC14 & PC15 are GPIO inputs after power-up and considering that LSE is disabled by default, you should be able to use them directly even without any configuration.
As you don't have any problems with PC14, I suspect 3 possible causes:
1) A bug in the GPIO code the library provides. Although it's very unlikely, it's easy to test. You can remove the configuration code for PC14 & PC15, as they are GPIO inputs by default after power-up. This eliminates the possibility of a bug in the gpio_mode_setup() function. To avoid using the gpio_get() function, you can use the following code:
if ((GPIOC->IDR & (1 << 15)) == 0)
2) A bug in the clock configuration code the library provides. Again, this one is very unlikely, but you can test it by removing the rcc_clock_setup_in_hsi_out_48mhz() call. The MCU runs from the HSI at 8 MHz after power-up.
3) This can be a hardware problem. I suggest checking the voltage on PC15. Test it by physically connecting it to GND, and also measure PC14 for comparison. Of these 3 possible causes, this is the most probable one.
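To combine suggestions 1) and 2) in one quick test, something like the following sketch could be used (not from the answer; it assumes libopencm3, enables only the GPIOC peripheral clock, and leaves both pins in their power-up input state):

    #include <stdint.h>
    #include <libopencm3/stm32/rcc.h>
    #include <libopencm3/stm32/gpio.h>

    int main(void)
    {
        /* The port clock still has to be enabled to read the pins. */
        rcc_periph_clock_enable(RCC_GPIOC);

        while (1) {
            uint16_t idr = gpio_port_read(GPIOC); /* raw input data register */
            int pc14 = (idr >> 14) & 1;           /* reference pin that already works */
            int pc15 = (idr >> 15) & 1;           /* pin under test */
            (void)pc14;
            (void)pc15;                           /* inspect both in a debugger */
        }

        return 0;
    }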

As to the GPIOTE function (external interrupt) of the nRF52832

I'm having trouble controlling the GPIOTE function with the nRF52832 SDK.
When using SDK version 14.01, it seems that the GPIOTE function can't be used together with the BLE function. I used the code below, and it caused the system to hang. Why?
I'd like to know whether the GPIOTE function can be used together with the BLE function or not,
and whether there is another way to use it alongside BLE.
Thanks in advance for your support and kindness.
#define PIN_IN BUTTON_4
//#define PIN_OUT BSP_LED_3

void in_pin_handler(nrf_drv_gpiote_pin_t pin, nrf_gpiote_polarity_t action)
{
    printf("love %d: %d\n", (int)pin, (int)action);
    // nrf_drv_gpiote_out_toggle(PIN_OUT);
}

/**
 * @brief Function for configuring PIN_IN as an input (and PIN_OUT as an output),
 * and for configuring GPIOTE to give an interrupt on pin change.
 */
void gpio_external_int_init(void) //love_1108
{
    uint32_t err_code;

    err_code = nrf_drv_ppi_init();
    APP_ERROR_CHECK(err_code);

    err_code = nrf_drv_gpiote_init();
    APP_ERROR_CHECK(err_code);

    // (void)nrf_drv_gpiote_init();
    // nrf_drv_gpiote_out_config_t out_config = GPIOTE_CONFIG_OUT_SIMPLE(false);
    // err_code = nrf_drv_gpiote_out_init(PIN_OUT, &out_config);
    // APP_ERROR_CHECK(err_code);

    nrf_drv_gpiote_in_config_t in_config = GPIOTE_CONFIG_IN_SENSE_TOGGLE(false);
    in_config.pull = NRF_GPIO_PIN_PULLUP;

    err_code = nrf_drv_gpiote_in_init(PIN_IN, &in_config, in_pin_handler);
    APP_ERROR_CHECK(err_code);

    nrf_drv_gpiote_in_event_enable(PIN_IN, true);
}
While you don't provide much detail, such as what you mean by "used with BLE function", I have found an issue with the SDK example ble_app_template. In my case the cause was that the file bsp_btn_ble.c expects more buttons than my custom_board.h defines. So the function startup_event_extract() wants to check the state of BTN_ID_WAKEUP_BOND_DELETE, which does not exist on my hardware, and that causes an assertion. It is disturbing that BTN_ID_WAKEUP_BOND_DELETE and the other button IDs are defined in the .c file rather than being derived from custom_board.h.
So, trace the board initialization and you may find something like ASSERT(button_idx < BUTTONS_NUMBER), which caused the hang in my case.
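As a rough illustration (not from the answer), the button section of custom_board.h has to define enough button entries to satisfy the bsp_btn_ble module; something along these lines, where the pin numbers are placeholders to adapt to the actual hardware:

    /* Hypothetical fragment of custom_board.h; pin assignments are placeholders. */
    #define BUTTONS_NUMBER        4

    #define BUTTON_1              13
    #define BUTTON_2              14
    #define BUTTON_3              15
    #define BUTTON_4              16
    #define BUTTON_PULL           NRF_GPIO_PIN_PULLUP

    #define BUTTONS_ACTIVE_STATE  0
    #define BUTTONS_LIST          { BUTTON_1, BUTTON_2, BUTTON_3, BUTTON_4 }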

How are am335x GPIOs numbered in device tree?

I am trying to use a driver with a GPIO interrupt on a BeagleBone Black. My device tree has the following entry for my custom device:
&i2c1 {
    ...
    mydevice: mydevice@0c {
        compatible = "mydevice,mydevice";
        reg = <0x0c>;
        mag_irq_gpio = <&gpio1 13 0>; /* INT line */
    };
    ...
};
Its driver counterpart has this:
static int parse_dt(struct i2c_client *client)
{
    struct device_node *node = client->dev.of_node;
    struct mydev_data *data = i2c_get_clientdata(client);

    return of_property_read_u32(node, "mag_irq_gpio", &data->gpio);
}
The driver loads and works fine, except that the GPIO number is completely wrong. The property read function returns success, but reads 8 as the GPIO number, even if I put a different number in the device tree.
How am I supposed to pass a GPIO number as generic data? The interrupt works if I manually override the GPIO number inside my driver.
As per the comment by @sawdust,
<&gpio1 13 0>
denotes an array of three values. I solved the issue by manually calculating the GPIO number and passing it as a single number:
<14>
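Alternatively (not what the answer above did), the kernel can do the translation for you if the property is kept as a standard GPIO specifier. A hedged sketch, assuming a property named mag-irq-gpios and the legacy of_gpio helpers, with the same mydev_data structure as in the question:

    #include <linux/of_gpio.h>

    /* Device tree fragment (assumed property name): */
    /*     mag-irq-gpios = <&gpio1 13 GPIO_ACTIVE_HIGH>; */

    static int parse_dt(struct i2c_client *client)
    {
        struct device_node *node = client->dev.of_node;
        struct mydev_data *data = i2c_get_clientdata(client);
        int gpio;

        /* Translates the <&gpio1 13 0> specifier into a global GPIO number. */
        gpio = of_get_named_gpio(node, "mag-irq-gpios", 0);
        if (gpio < 0)
            return gpio;

        data->gpio = gpio;
        return 0;
    }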