I wanted to use eBPF's latest map type, BPF_MAP_TYPE_RINGBUF, but I can't find much information online on how to use it, so I am just doing some trial and error here. I defined and used it like this:
struct bpf_map_def SEC("maps") r_buf = {
    .type = BPF_MAP_TYPE_RINGBUF,
    .max_entries = 1 << 2,
};
SEC("lsm/task_alloc")
int BPF_PROG(task_alloc, struct task_struct *task, unsigned long clone_flags) {
uint32_t pid = task->pid;
bpf_ringbuf_output(&r_buf, &pid, sizeof(uint32_t), 0); //stores the pid value to the ring buffer
return 0;
}
But I got the following error when running:
libbpf: map 'r_buf': failed to create: Invalid argument(-22)
libbpf: failed to load object 'bpf_example_kern'
libbpf: failed to load BPF skeleton 'bpf_example_kern': -22
It seems like libbpf does not recognize BPF_MAP_TYPE_RINGBUF? I cloned the latest libbpf from GitHub and ran make and make install. I am using the Linux 5.8.0 kernel.
UPDATE: The issue seems to be resolved if I change max_entries to something like 4096 * 64, but I don't know why this is the case.
You are right, the problem is the size of the BPF_MAP_TYPE_RINGBUF map (the max_entries attribute in the libbpf map definition). It has to be a multiple of a memory page (which is 4096 bytes, at least on the most popular platforms). That explains why it all worked when you specified 64 * 4096.
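For example, keeping the legacy bpf_map_def style from your snippet, a sketch of a definition that satisfies that constraint could look like this (64 pages of 4096 bytes; the kernel also expects the size to be a power of two):

struct bpf_map_def SEC("maps") r_buf = {
    .type = BPF_MAP_TYPE_RINGBUF,
    .max_entries = 64 * 4096, /* page-sized multiple, also a power of two */
};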
BTW, if you'd like to see some examples of using it, I'd start with BPF selftests:
user-space part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf.c
kernel (BPF) part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf.c
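If it helps, a minimal user-space consumer built on libbpf's ring_buffer API might look roughly like the sketch below. This is only an illustration: handle_event and consume are made-up names, and how you obtain map_fd (e.g. from your skeleton's map) is up to your loader.

#include <stdio.h>
#include <stdint.h>
#include <bpf/libbpf.h>

/* Callback invoked for every record submitted to the ring buffer. */
static int handle_event(void *ctx, void *data, size_t size)
{
    uint32_t pid = *(uint32_t *)data;

    printf("task_alloc: pid %u\n", pid);
    return 0;
}

/* map_fd is the fd of the r_buf map, e.g. bpf_map__fd() on the skeleton's map. */
static int consume(int map_fd)
{
    struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);

    if (!rb)
        return -1;
    while (ring_buffer__poll(rb, 100 /* timeout, ms */) >= 0)
        ; /* keep polling until an error occurs */
    ring_buffer__free(rb);
    return 0;
}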
Related
As shown in the following, the BPF verifier log is truncated at the end. How can I get the full log?
368=mmmmmmmm fp-376=mmmmmmmm fp-432=mmmmmmmm fp-440=inv fp-448=inv fp-456=map_value fp-464=inv
389: (73) *(u8 *)(r3 +322) = r1
390: (71) r1 = *(u8 *)(r2 +713)
R0=inv(id=0,umax_value=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R1_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R2_w=map_value(id=0,off=0,ks=260,vs=904,imm=0) R3=pkt(id=0,off=42,r=398,imm=0) R4_w=inv0 R6=invP0 R7=ctx(id=0,off=0,imm=0) R8=inv(id=0) R9=inv(id=9) R10=fp0 fp-32=????mmmm fp-40=mmmmmmmm
fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmmmmmm fp-112=mmmmmmmm fp-120=mmmmmmmm fp-128=mmmmmmmm fp-136=mmmmmmmm fp-144=mmmmmmmm
fp-152=mmmmmmmm fp-160=mmmmmmmm fp-168=mmmmmmmm fp-176=mmmmmmmm fp-184=mmmmmmmm fp-192=mmmmmmmm fp-200=mmmmmmmm fp-208=mmmmmmmm fp-216=mmmmmmmm fp-224=mmmmmmmm fp-232=mmmmmmmm fp-240=mmmmmmmm fp-248=mmmmmmmm fp-256=mmmmmmmm fp-264=mmmmmmmm fp-272=mmmmmmmm fp-280=mmmmmmmm fp-288=mmmmmmmm fp-296=mmmm????
fp-304=??mmmmmm fp-312=mmmmmmmm fp-320=mmmmmmmm fp-328=?mmmmmmm fp-336=mmmmmmmm fp-344= (truncated...)
Supplement:
Under the guidance of @Qeole, I have solved the problem.
The cilium/ebpf implementation can be used as a reference:
https://github.com/cilium/ebpf/commit/f365a1e12f0a2477c41ee907a917db6f9bd9cf72
You need to pass a larger buffer (and to indicate its length accordingly) to the verifier when you load your program.
The kernel receives a pointer to a union bpf_attr, which for loading programs starts like this:
struct { /* anonymous struct used by BPF_PROG_LOAD command */
    __u32 prog_type;        /* one of enum bpf_prog_type */
    __u32 insn_cnt;
    __aligned_u64 insns;
    __aligned_u64 license;
    __u32 log_level;        /* verbosity level of verifier */
    __u32 log_size;         /* size of user buffer */
    __aligned_u64 log_buf;  /* user supplied buffer */
log_buf, of size log_size, is the buffer filled by the verifier. You don't usually set up these parameters yourself; how you should do it depends on what you use to load your program. Most loaders rely on libbpf, and recent versions should automatically attempt to reload the program with a larger buffer size when loading fails and the verifier output is truncated.
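For reference, a hand-rolled loader that wants the full log would fill those fields roughly as below. This is only a sketch using the raw bpf() syscall; the program type, license string, and buffer size are placeholders to adapt to your case.

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static char log_buf[1 << 20]; /* 1 MiB; grow it if the log is still truncated */

static int load_prog(const struct bpf_insn *insns, unsigned int insn_cnt)
{
    union bpf_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_XDP;      /* placeholder program type */
    attr.insns     = (unsigned long)insns;
    attr.insn_cnt  = insn_cnt;
    attr.license   = (unsigned long)"GPL";
    attr.log_level = 1;                      /* ask the verifier to emit its log */
    attr.log_buf   = (unsigned long)log_buf;
    attr.log_size  = sizeof(log_buf);

    return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}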
First of all, I was not using the libbpf API directly, nor BCC. Instead, I was trying to use the API of the skeleton generated by bpftool.
Control code:
obj_gen = bpf_xdp_c__open();
if (!obj_gen)
    goto cleanup;

ifindex = if_nametoindex("eth0");
if (!ifindex) {
    perror("if_nametoindex");
    return 1;
}

err = bpf_xdp_c__load(obj_gen);
BPF code:
// Simple XDP BPF program. Every packet will be dropped.
SEC("test")
int xdp_prog1(struct xdp_md *ctx) {
    char drop_message[] = "XDP PACKET DROP\n";
    bpf_trace_printk(&drop_message, sizeof(drop_message));
    return XDP_DROP;
}
So, after running, the error below was shown:
// libbpf: load bpf program failed: Invalid argument
// libbpf: failed to load program 'test'
// libbpf: failed to load object 'bpf_xdp_c'
// libbpf: failed to load BPF skeleton 'bpf_xdp_c': -22
After debugging, I noticed that the program type did not have the correct value; it was always 0. So I had to add the following code before the load call:
obj_gen->progs.xdp_prog1->type = BPF_PROG_TYPE_XDP;
Because struct bpf_program is defined inside libbpf.c, I had to redefine it in my own header in order for the compiler to find it. This workaround worked.
Q: Is there a better solution?
I found out what was wrong. After looking at the libbpf source code, I found this variable:
static const struct bpf_sec_def section_defs[] = {
    ...
    BPF_PROG_SEC("xdp", BPF_PROG_TYPE_XDP),
    ...
So I noticed that I had not defined the XDP section in my BPF program with SEC("xdp").
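For completeness, the working program only needed the section name changed; the body is the same as in the snippet above:

// With SEC("xdp"), libbpf matches the ELF section against its section_defs
// table and infers BPF_PROG_TYPE_XDP automatically, so no manual type override is needed.
SEC("xdp")
int xdp_prog1(struct xdp_md *ctx) {
    char drop_message[] = "XDP PACKET DROP\n";
    bpf_trace_printk(&drop_message, sizeof(drop_message));
    return XDP_DROP;
}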
Thanks @Qeole for your help.
Failing to use an existing rte_hash from a secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
}
else {
    h = rte_hash_create(&params);
}

// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries but are linked to the same DPDK library.
The key is added in the primary process as follows:
struct cdev_key {
    uint64_t len;
};

struct cdev_key key = { 0 };

if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in the secondary process as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run two separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up the hash entry added from the primary process in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, please remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to look up hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
Note: DPDK 19.02 has an inherent bug where it does not clean up /var/run/dpdk/; hence it is recommended to use 19.11.2 LTS.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;

struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes. As a result, the hashing function is not available in the secondary process. The suggested workaround is to do the hashing part in the primary process and have the secondary process access the hash table using the hash value instead of the key.
From the DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.
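As a rough sketch of that recommendation, assuming the cdev_key layout from the question and rte_jhash as the hash function the table was created with (lookup_in_secondary is just an illustrative name), the secondary process could compute the signature itself and use the _with_hash lookup variant:

#include <stdio.h>
#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

struct cdev_key {
    uint64_t len;
};

/* Hypothetical lookup in the secondary process: the signature is computed
 * locally with rte_jhash, so the hash-function pointer stored inside the
 * shared rte_hash object is never dereferenced. */
static int lookup_in_secondary(const struct rte_hash *h)
{
    struct cdev_key key = { 0 };
    void *data = NULL;

    hash_sig_t sig = rte_jhash(&key, sizeof(key), 0 /* hash_func_init_val */);
    int ret = rte_hash_lookup_with_hash_data(h, &key, sig, &data);

    if (ret < 0) {
        printf("lookup failed: %d\n", ret);
        return ret;
    }
    printf("found data %p\n", data);
    return 0;
}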
I am using Doxygen to parse the Linux kernel (https://github.com/torvalds/linux). After running for more than 20 hours, while generating the call graphs, it reports errors: QGArray::at: Absolute index xxxxxxxxxx out of range. I analyzed the source code and suspect that the cause might be the type of array_data->len in doxygen-master/qtools/qgarray.h:54:
struct array_data : public QShared {    // shared array
    array_data() { data=0; len=0; }
    char *data;                         // actual array data
    uint len;
};
I tried using a long type for len and rebuilding and reinstalling Doxygen, but parsing Linux will need another 20 hours to check it.
How can I fix this error properly?
I am using CubeMX 4.23.0 and the FW package for STM32F7 1.8.0.
The MCU is an STM32F746 on a Core746i board.
Everything is generated by CubeMX automatically.
main.c:
SCB_EnableICache();
SCB_EnableDCache();
HAL_Init();
SystemClock_Config();
MX_GPIO_Init();
MX_SDMMC1_SD_Init();
MX_USB_DEVICE_Init();
MX_FATFS_Init();
HAL_Delay(3000);
DebugString("start OK");
uint8_t res = 0;
FATFS SDFatFs;
FIL MyFile; /* File object */
char SD_Path[4];
res = f_mount(&SDFatFs, (TCHAR const*)SD_Path, 0);
sprintf(DebugStr, "f_mount = 0x%02X", res);
DebugString(DebugStr);
res = f_open(&MyFile, "test.txt", FA_READ);
sprintf(DebugStr, "f_open = 0x%02X", res);
DebugString(DebugStr);
sdmmc.c:
void MX_SDMMC1_SD_Init(void)
{
    hsd1.Instance = SDMMC1;
    hsd1.Init.ClockEdge = SDMMC_CLOCK_EDGE_RISING;
    hsd1.Init.ClockBypass = SDMMC_CLOCK_BYPASS_DISABLE;
    hsd1.Init.ClockPowerSave = SDMMC_CLOCK_POWER_SAVE_DISABLE;
    hsd1.Init.BusWide = SDMMC_BUS_WIDE_1B;
    hsd1.Init.HardwareFlowControl = SDMMC_HARDWARE_FLOW_CONTROL_DISABLE;
    hsd1.Init.ClockDiv = 7;

    //HAL_SD_Init(&hsd1);
    // ^^^^^ I also tried this here
    //HAL_SD_ConfigWideBusOperation(&hsd1, SDMMC_BUS_WIDE_4B)
    // ^^^^^ and this
}
In the case of f_mount(&SDFatFs, (TCHAR const*)SD_Path, 1) <- with 1 here (forced mount), the output is:
f_mount = 0x03
f_open = 0x01
With 0 (do not mount now), the output is:
f_mount = 0x00
f_open = 0x03
The 0x03 value is FR_NOT_READY, but the official documentation is pretty vague about it.
Things I've tried:
Adding HAL_SD_Init(&hsd1) to MX_SDMMC1_SD_Init(), since I didn't find where the SD card GPIO init happens.
A 2 GB no-name SD card and a 1 GB Transcend card.
Different hsd1.Init.ClockDiv values, from 3 to 255.
Resoldering everything completely.
Switching to a 4-bit wide bus using HAL_SD_ConfigWideBusOperation(&hsd1, SDMMC_BUS_WIDE_4B);
Turning pull-ups on and off.
But the card still does not mount. It is formatted as FAT and works on a PC; the files I've tried to open exist but are empty.
How do I get it to mount?
Thanks in advance!
There was a problem with that exact version of CubeMX.
Updating STM32CubeMX helped.
You can try `f_mount(0, "path", 0);` after the f_open call. It may work.
If the function with forced mounting failed with FR_NOT_READY, it means that the filesystem object has been registered successfully but the volume is currently not ready to work. The volume mount process will be attempted on the subsequent file/directory function.
If the implementation of the disk I/O layer lacks asynchronous media change detection, the application program needs to call f_mount after each media change to force the filesystem object to be cleared.
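A minimal sketch of that pattern, reusing the SDFatFs and SD_Path variables from the question (this is an illustration of the documented behaviour, not a tested fix):

FRESULT fr;

/* Forced mount: check the volume immediately instead of deferring. */
fr = f_mount(&SDFatFs, (TCHAR const*)SD_Path, 1);
if (fr == FR_NOT_READY) {
    /* The filesystem object is registered, but the volume is not ready yet
     * (card missing or not initialised). Once the media is present/changed,
     * call f_mount again; re-registering clears the stale filesystem object. */
    fr = f_mount(&SDFatFs, (TCHAR const*)SD_Path, 1);
}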
Changing all SDIO pins except SDIO_CK to pull-up, according to This Topic, works for me.
Try commenting out MX_USB_DEVICE_Init() and see what happens.