bpf_get_current_pid_tgid() returns a 'not found' error in a socket_filter type of bpf program on Linux 4.15 - bpf

I have a BPF program of the socket_filter type. I am trying to get the PID of the process involved in the current packet; it will then go into a BPF map for user space to use.
However, this call does not work; the function is not found.
Ubuntu 18.04 Bionic
Linux 4.15

The bpf_get_current_pid_tgid() helper is not currently allowed for BPF_PROG_TYPE_SOCKET_FILTER programs.
What is your use case for this? If you have a strong use case, the kernel maintainers would probably accept a patch adding support for it.
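For context, in program types where this helper is available (tracing programs such as kprobes), it returns the thread group ID and the thread ID packed into a single u64. A quick sketch of how user space would split the two halves (plain Python, values illustrative):

```python
# bpf_get_current_pid_tgid() returns (tgid << 32) | pid, where the
# upper 32 bits are the tgid (the "PID" as user space sees it) and
# the lower 32 bits are the pid (the kernel task/thread ID).
def split_pid_tgid(value):
    tgid = value >> 32           # user-space PID
    tid = value & 0xFFFFFFFF     # thread ID
    return tgid, tid

# Illustrative packed value: tgid 1234, thread ID 5678
packed = (1234 << 32) | 5678
print(split_pid_tgid(packed))  # (1234, 5678)
```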

Related

How to find out who loads specific Linux kernel module?

I built a certain driver as a module (m) for Linux, the spi-imx by NXP. Nonetheless, Linux probes this driver when booting. I'm struggling to find out what process, other module, or driver requests this spi-imx driver. A depmod does not show any dependencies between the spi-imx and other modules (except for the spidev as a submodule).
After some research, I found out that Linux automatically (?) calls modprobe when it detects a new device. So does Linux actually call modprobe because the ecSPI's status in the device tree is "okay"? If so, how can I prevent this? I would like to dynamically load the spi-imx from a user space application via modprobe. The story behind it: a coprocessor uses this SPI line in parallel to the Linux boot process. This of course interferes with and interrupts the coprocessor's use of the SPI line. When the coprocessor has finished its transfer via SPI (a boot mechanism as well), it should hand over the SPI line to Linux.
I'm very thankful for any kind of tips, links, hints and comments on this.
Thanks a lot for the answers. As you guys mentioned, I also found out that Linux itself probes the device if present ("okay").
One possible solution is to completely cut off the modprobe call via an entry like "install spi-imx /bin/false" in a *.conf file. But that makes it impossible to load the driver via modprobe at all, for Linux and for user space alike.
"blacklist spi-imx" inside a *.conf file located in /etc/modprobe.d/ is the way to prevent Linux from probing the driver when booting. After that, a modprobe from user space can successfully load the driver.
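For reference, the working approach boils down to a one-line configuration file (the file name is arbitrary, as long as it ends in .conf):

```
# /etc/modprobe.d/spi-imx.conf
# "blacklist" only suppresses automatic loading by alias at boot;
# an explicit "modprobe spi-imx" from user space still works.
# (An "install spi-imx /bin/false" line would block both.)
blacklist spi-imx
```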
Thanks again & best regards

AArch64 - GNU ld - multiple linker scripts (for kernel and userland)

I have started a bare-metal application for AArch64. The bare-metal application should implement a simple kernel (for memory/device management and exception handling) and a userland which can make syscalls, for example to output something over the UART via printf(). Currently I'm working on the kernel at EL1. The intent is to put kernel and userland in a single ELF binary, because I haven't implemented a filesystem driver and ELF support yet.
The kernel should reside at address 0xC0000000 and the main application (userland) at 0x40000000, for example. I will change these addresses later. Is it possible to pass two linker scripts to GNU ld? I realize that I must use different sections for kernel and userland.
Or, put another way:
Is my intent even possible? It's maybe a generic question, but I didn't find a similar question here.
From the LD manual (https://man7.org/linux/man-pages/man1/ld.1.html):
Multiple -T options accumulate.
Just use it like this: -T script1.ld -T script2.ld
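A minimal sketch of what that could look like for the layout described above (script names and section names are illustrative, not a complete bare-metal link):

```
/* kernel.ld -- places kernel code at 0xC0000000 */
SECTIONS
{
    . = 0xC0000000;
    .kernel.text : { kernel/*.o(.text*) }
}

/* user.ld -- places userland code at 0x40000000 */
SECTIONS
{
    . = 0x40000000;
    .user.text : { user/*.o(.text*) }
}
```

Both scripts are then passed on one command line, e.g. aarch64-none-elf-ld -T kernel.ld -T user.ld -o image.elf kernel.o user.o.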

Take kernel dump on-demand from user-space without kernel debugging (Windows)

What would be the simplest and most portable way (in the sense of only having to copy a few files to the target machine, like procdump is) to generate a kernel dump that has handle information?
procdump has the -mk option, which generates a limited dump file pertaining to the specified process. It is reported in WinDbg as:
Mini Kernel Dump File: Only registers and stack trace are available.
Most of the commands I try (!handle, !process 0 0) fail to read the data.
It seems that, officially, windbg and kd would generate such dumps (which would require kernel debugging).
A weird solution I found is using livekd with -ml: "Generate live dump using native support (Windows 8.1 and above only)". livekd still looks for kd.exe but does not use it, so I can trick it with an empty file, and it does not require kernel debugging. Any idea how that works?
LiveKD uses the undocumented NtSystemDebugControl API to capture the memory dump. While you can easily find information about that API online, the easiest thing to do is just use LiveKD.

XDP offloaded mode flags set is not working with bcc

I'm trying to run the tutorial XDP code that is provided with bcc.
The code I use is this script: bcc/examples/networking/xdp/xdp_drop_count.py.
To my understanding, the XDP flags work as follows (from that question):
#define XDP_FLAGS_SKB_MODE (1U << 1)
#define XDP_FLAGS_DRV_MODE (1U << 2)
#define XDP_FLAGS_HW_MODE (1U << 3)
So, doesn't this mean that if I change the flags bit to
flags |= 1 << 3
I should be able to run this code in hardware accelerated mode (offloaded)?
I have a NIC that supports XDP hardware-accelerated mode, and it works fine when I just attach a simple program with only one line of code:
return XDP_PASS;
attaching it in offloaded mode by using ip link set dev <interface> xdpoffload etc.
So I have confirmed my NIC is capable of loading an offloaded XDP program, but when I try the above, it gives me an error:
bpf: Attaching prog to enp4s0np1: Invalid argument
Traceback (most recent call last):
  File "xdp_drop_count.py", line 132, in <module>
    b.attach_xdp(device, fn, flags)
  File "/usr/lib/python2.7/dist-packages/bcc/__init__.py", line 723, in attach_xdp
    % (dev, errstr))
Exception: Failed to attach BPF to device enp4s0np1: No such file or directory
Also, when I set the flags to:
flags |= 1 << 2
I am not sure whether this actually runs the XDP program in driver mode.
Am I missing something?
Thank you in advance.
If you build bcc from sources
Since commit d147588, bcc has hardware offload support. To offload programs using bcc, you will need three things:
The XDP_FLAGS_HW_MODE bit (1U << 3) should be set in the flags passed to attach_xdp().
The name of the interface to which you want to offload the program should be given to BPF() with the device= parameter. It will allow bcc to offload the maps to the appropriate device. It is unnecessary if you don't have maps.
The interface's name should also be given to load_func, again with parameter device=, such that bcc tells the kernel where to offload the program.
Note that, with the latest bcc sources, the xdp_drop_count.py script has been updated to do all this for you when you pass the -H option:
sudo ./xdp_drop_count.py -H $ETHNAME
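For reference, the flag in step 1 is just the fourth bit; a quick sanity check of the flag values in plain Python (mirroring the defines quoted in the question, from the kernel's include/uapi/linux/if_link.h):

```python
# XDP attach flags from include/uapi/linux/if_link.h
XDP_FLAGS_SKB_MODE = 1 << 1  # generic (SKB) mode
XDP_FLAGS_DRV_MODE = 1 << 2  # native driver mode
XDP_FLAGS_HW_MODE = 1 << 3   # hardware offload

flags = 0
flags |= XDP_FLAGS_HW_MODE   # what "flags |= 1 << 3" requests
print(flags)  # 8
```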
For older versions of bcc
Older versions of bcc do not support hardware offload. You can use bpftool or ip (v4.16 or newer) instead, e.g.:
sudo ip link set dev $ETHNAME xdpoffload obj prog.o sec .text
sudo bpftool prog load prog.o /sys/fs/bpf/prog type xdp dev $ETHNAME
For a BPF program to be attached as an XDP program, it needs to be offloaded to the NIC first, when it is loaded on the system.
In your case, the b.load_func() provided by bcc does not offer any option for offloading programs when passing them to the kernel. So when you later call b.attach_xdp() with XDP_FLAGS_HW_MODE, the function fails, because it cannot find any program offloaded on the NIC.
Right now there is no workaround for offloading programs with bcc. As pchaigno mentioned, the function simply does not offer an option to indicate that the program should be offloaded.
It should not be too difficult to add support for offloading programs to bcc though, so it will probably be available in the future (especially if pchaigno feels like adding it :p). You would still need to replace the per-CPU array with a regular array in your program, as the former is not supported for offload at this time.
Regarding the mode in which your programs run, this is something you can check with bpftool net for example.

failed using cuda-gdb to launch program with CUPTI calls

I'm having this weird issue: I have a program that uses the CUPTI callback API to monitor the kernels in the program. It runs well when launched directly, but when I run it under cuda-gdb, it fails with the following error:
error: function cuptiSubscribe(&subscriber, (CUpti_CallbackFunc)my_callback, NULL) failed with error CUPTI_ERROR_NOT_INITIALIZED
I've tried all the examples in CUPTI/samples and concluded that programs using the callback API or the activity API fail under cuda-gdb. (They are all well behaved without cuda-gdb.) But the failure reason differs:
If I have calls from the activity API, then once I run it under cuda-gdb, it hangs for a minute and then exits with the error:
The CUDA driver has hit an internal error. Error code: 0x100ff00000001c. Further execution or debugging is unreliable. Please ensure that your temporary directory is mounted with write and exec permissions.
If I have calls from the callback API, like my own program, then it fails much sooner, with the same error as above:
CUPTI_ERROR_NOT_INITIALIZED
Any experience with this kind of issue? I'd really appreciate any help!
According to an NVIDIA forum posting here, also referred to here, the CUDA "tools" must be used exclusively, one at a time. These tools include:
CUPTI
any profiler
cuda-memcheck
a debugger
Only one of these can be "in use" on a code at a time. It should be fairly easy for developers to use a profiler, or cuda-memcheck, or a debugger independently. A possible takeaway for those using CUPTI, who also wish to use another CUDA "tool" on the same code, would be to provide a coding method to disable CUPTI use in their application when they wish to use another tool.
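One way to implement that takeaway, sketched in Python for brevity (the environment variable name MYAPP_DISABLE_CUPTI is an assumption; in a real CUDA application the same check would gate the cuptiSubscribe() call in C/C++):

```python
import os

# Sketch: skip all CUPTI setup when a disable flag is set, so the
# same binary can run under cuda-gdb (or another CUDA tool) with
# CUPTI out of the way. MYAPP_DISABLE_CUPTI is an illustrative name.
def cupti_enabled():
    return os.environ.get("MYAPP_DISABLE_CUPTI") != "1"

if cupti_enabled():
    pass  # subscribe CUPTI callbacks here
else:
    pass  # run without CUPTI so a debugger/profiler can attach
```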