Android 13 cannot load eBPF kprobe program - ebpf

On AOSP 13 I have two eBPF programs: one is a tracepoint type, the other a kprobe type. The tracepoint program works well. The kprobe program compiles successfully, and I push it to /system/etc/bpf/, but after a reboot it is not auto-loaded into /sys/fs/bpf/.
My example is very simple:
DEFINE_BPF_PROG("kprobe/tcp_sendmsg", AID_ROOT, AID_ROOT, kb_do_tcp_sendmsg)
(void *ctx) {
    return 0;
}
My kernel is 4.19.190, and CONFIG_KPROBES is set to y.
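For reference, here is the full source file as I understand the usual AOSP bpf_progs layout; the bpf_helpers.h include and the LICENSE macro are assumptions based on the in-tree examples, not something confirmed to be sufficient for kprobe programs:

// Minimal sketch of the complete .c file, assuming the usual AOSP bpf_progs boilerplate.
#include <bpf_helpers.h>  /* assumed: provides DEFINE_BPF_PROG and LICENSE */

DEFINE_BPF_PROG("kprobe/tcp_sendmsg", AID_ROOT, AID_ROOT, kb_do_tcp_sendmsg)
(void *ctx) {
    return 0;
}

LICENSE("Apache 2.0");  /* assumed: the bpfloader expects a license section */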

Related

STM32L4 USB not appearing on computer as device

I have an STM32 Nucleo-64 development board with an STM32L476RG MCU on it. I have made a small PCB that, among other things, brings the USB interface out to a connector.
I have followed the steps shown here: https://www.youtube.com/watch?v=rLnQ3W8gmjY
When I run the code in debug mode (in STM32CubeIDE v1.10.1) the USB never appears on my computer. I have also probed the USB data +/- signals and see no activity.
When I step through the code, I can see that something is failing immediately in:
MX_USB_DEVICE_Init()
USBD_Init()
The code that fails is:
USBD_StatusTypeDef USBD_Init(USBD_HandleTypeDef *pdev,
                             USBD_DescriptorsTypeDef *pdesc, uint8_t id)
{
  USBD_StatusTypeDef ret;

  /* Check whether the USB Host handle is valid */
  if (pdev == NULL)
  {
#if (USBD_DEBUG_LEVEL > 1U)
    USBD_ErrLog("Invalid Device handle");
#endif
    return USBD_FAIL;
  }
The exact line above that fails is if (pdev == NULL)
One thing to note is that the CubeMX tool did NOT add a handle declaration:
USBD_HandleTypeDef hUsbDeviceFS;
...despite me configuring the USB interface. I added my own line in main.c:
extern USBD_HandleTypeDef hUsbDeviceFS;
...thinking it might be defined elsewhere?
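(For reference: the extern line above only declares the handle; the actual definition normally lives in the CubeMX-generated usb_device.c, roughly like the sketch below. The FS_Desc / DEVICE_FS / CDC class names are the usual generated identifiers, not taken from this project.)

/* usb_device.c -- sketch of the typical CubeMX-generated content (assumed, not from this project) */
#include "usb_device.h"
#include "usbd_core.h"
#include "usbd_desc.h"
#include "usbd_cdc.h"
#include "usbd_cdc_if.h"

USBD_HandleTypeDef hUsbDeviceFS;  /* the definition that the extern in main.c refers to */

void MX_USB_DEVICE_Init(void)
{
  /* Typical generated sequence; the class (here CDC) depends on what was selected in CubeMX */
  if (USBD_Init(&hUsbDeviceFS, &FS_Desc, DEVICE_FS) != USBD_OK)
  {
    Error_Handler();
  }
  if (USBD_RegisterClass(&hUsbDeviceFS, &USBD_CDC) != USBD_OK)
  {
    Error_Handler();
  }
  if (USBD_CDC_RegisterInterface(&hUsbDeviceFS, &USBD_Interface_fops_FS) != USBD_OK)
  {
    Error_Handler();
  }
  if (USBD_Start(&hUsbDeviceFS) != USBD_OK)
  {
    Error_Handler();
  }
}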
Can someone please help me figure out what's going on?
Thanks!
I followed a video example online, but it's not working as expected. I was expecting to see the dev board appear as a USB device on my computer and spit out some text.

OSDev: OS fails when I deinit physical memory regions

I am making my own OS, and currently have no problems except for one.
I don't know if the problem is with my linker script or something else, but for some reason, when I attempt to de-init a region of physical memory, QEMU simply doesn't show anything on the screen.
As soon as I comment the function out, everything works just fine. I tried packing the binary file full of random junk as well, and the OS still works fine; everything displays in QEMU. I tried making heavy function calls for things the OS doesn't even support yet, and it still runs. But for some reason, when I call my function to de-init a region of used physical memory, the whole thing just crashes and doesn't work.
I am only including the linker script below; I will link my GitHub page for the OS project for anyone to skim through and possibly help out. This is seriously super annoying.
SECTIONS {
    .text 0x1F00 :
    {
        *(kernel_entry);
        *(.text*);
    }

    .idt BLOCK(.) : ALIGN(.)
    {
        _idt = .;
        . = . * 0x100;
    }

    .rodata :
    {
        *(.rodata*);
    }

    .bss :
    {
        *(.bss*);
    }

    end = .;
}
I did roll my own bootloader as well. The GitHub layout is as follows:
all bootloader content is in the folder Bootloader, and all kernel content (including the linker script) is in the folder Kernel.
GitHub page
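For context, the de-init call in question frees a region in the physical memory manager; in a bitmap-based PMM that is typically just clearing bits, along the lines of the hypothetical sketch below (names, bitmap size, and block size are made up, not taken from the repository):

#include <stdint.h>

#define PMM_BLOCK_SIZE 4096u                 /* hypothetical 4 KiB blocks */

static uint32_t pmm_bitmap[32768];           /* 1 bit per block; size is made up */

/* Hypothetical sketch: mark the blocks covering [base, base + length) as free again. */
static void pmm_deinit_region(uint32_t base, uint32_t length)
{
    uint32_t first = base / PMM_BLOCK_SIZE;
    uint32_t count = (length + PMM_BLOCK_SIZE - 1) / PMM_BLOCK_SIZE;

    for (uint32_t i = 0; i < count; ++i) {
        uint32_t bit = first + i;
        pmm_bitmap[bit / 32] &= ~(1u << (bit % 32));   /* clear = free */
    }
}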

Boost Asio tcp::iostream construction raises an Access Violation Exception on every second use

I am trying to use the implementation of std::iostream provided by boost::asio on top of boost::asio::ip::tcp::socket. My code replicates, almost line for line, the example published in Boost Asio's documentation:
#include <iostream>
#include <stdexcept>
#include <boost/asio.hpp>

int main()
{
  using boost::asio::ip::tcp;
  try
  {
    boost::asio::io_service io_service;
    tcp::endpoint endpoint(tcp::v4(), 8000);
    tcp::acceptor acceptor(io_service, endpoint);
    for (;;)
    {
      tcp::iostream stream; // <-- The exception is triggered on this line, on the second loop iteration.
      boost::system::error_code error_code;
      acceptor.accept(*stream.rdbuf(), error_code);
      std::cout << stream.rdbuf() << std::flush;
    }
  }
  catch (std::exception& exception)
  {
    std::cerr << exception.what() << std::endl;
  }
  return 0;
}
The only difference is the use I make of the resulting tcp::iostream: I forward everything I receive to the standard output.
When I compile this code with Visual Studio 2019 / toolset v142 and Boost from the NuGet package boost-vc142, I get an Access Violation Exception only on the second iteration of the for loop, in the function
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
  execution_context::service::key key;
  init_key<Service>(key, 0);
  factory_type factory = &service_registry::create<Service, io_context>;
  return *static_cast<Service*>(do_use_service(key, factory, &owner));
} // <-- The debugger shows the exception was raised on this line
in asio/detail/impl/service_registry.hpp. So on the first iteration everything goes as planned: the connection is accepted and the data shows up on the standard output; then, as soon as the stream is instantiated on the stack for the second time, the exception pops.
I don't have high confidence in the accuracy of this exception location as reported by the debugger. For some reason, the stack seems to be messed up and shows only one frame.
If the declaration of stream is moved out of the loop, no exception is raised any more, but then I need to call stream.close() at the end of the loop, or nothing shows up on the standard output except the data from the first client's connection.
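(For reference, that out-of-loop variant looks roughly like this sketch; same port, endpoint, and accept call as in the full program above:)

// Sketch of the workaround just described: a single stream object reused across connections.
tcp::iostream stream;
for (;;)
{
  boost::system::error_code error_code;
  acceptor.accept(*stream.rdbuf(), error_code);
  std::cout << stream.rdbuf() << std::flush;
  stream.close(); // without this, only the first connection's data shows up
}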
Basically, as soon as I try to instantiate more than one boost::asio::tcp::iostream (not necessarily at the same time), the exception is raised.
I tried the exact same code under Linux (Arch Linux, latest version of g++, same version of Boost) and everything works perfectly.
I could work around this issue by not using iostreams, but my idea is to feed the data received on the TCP socket to a parser which only accepts implementations of std::iostream, hence I would still need to wrap asio's TCP socket in a homebrewed (and mediocre) implementation of std::iostream.
Does anybody have an idea on what's wrong with this setup, if I missed a crucial #define somewhere or anything?
Update:
Subsequent investigation shows that the only situation where the access violation happens is when the executable is run from within Visual Studio (typ. from the menu Debug -> Start Debugging).
The build process seems to have no effect (calling directly cl.exe, using MSBuild, using devenv.exe).
Moreover, if the executable is run from a command prompt, and only then the debugger is attached, no access violation happens.
At this point, the issue is most likely not linked to the code itself.
Okay, it was exceedingly painful to test this on Windows.
Of course I first tried on Linux (clang/gcc) and MinGW 8.1 on Windows.
Then I bit the bullet and jumped through the hoops to get MSVC on the command line with the Boost packages¹.
I cheated by manually copying the .lib/.dll files for boost_{system,date_time,regex} into the working directory so the command line stayed "wieldy":
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_system-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_date_time-vc142
C:\work>C:\Users\sghee\Downloads\nuget.exe install boost_regex-vc142
(Be sure to get some coffee during those)
C:\work\> cl /EHsc test.cpp /I .\boost.1.72.0.0\lib\native\include /link
Now I can run test.exe
C:\work\> test.exe
And it listens fine, accepts connections (sequentially, not simultaneously). If you connect a second client while the first is still connected, it will be queued and be accepted only after the first disconnects. That's fine, because it's what you expect with the synchronous accept and loop.
I used Ncat.exe (from Nmap) to connect:
C:\Program Files (x86)\Nmap>.\ncat.exe localhost 8000
Quirk: the buffering was fine with the MSVC cl.exe build (linewise) as opposed to the MinGW behaviour, even though MinGW also uses ws2_32.dll. #trivia
I know this doesn't "help", but maybe you can compare notes and see what is different with your system.
Video Of Test
¹ (that's a tough job without VS and also I - obviously - ran out of space, because 50GiB for a VM can't be enough right)

Linux V4L driver - Polling camera input format

I am unfamiliar with Linux kernel development but I'm tasked with updating a kernel driver so that it will return a status code that can be read by an application. This will require that the driver poll the hardware a couple of times a second to see what camera format is being sent (PAL, NTSC, or none).
However, I'm at a loss on how this would be accomplished. I understand how the driver communicates with the hardware, but I don't understand how to pass this data to the application. Does this type of behavior require the use of an ioctl() call, or is this a read file operation? Also, if the application calls an ioctl() or read function and then needs to wait for the hardware to respond, will this create a performance issue?
For added information, I am working on a 2.6 version of the kernel. I'm working my way through "Linux Device Drivers, 3rd Ed.", but nothing stands out on how to address this specific issue. LDD3 makes it sound like ioctl() is only for sending commands to the driver. And since this is a V4L driver, reading from the device file will return image data, not the status information I want, I think.
I recommend checking the latest V4L2 API document, hosted on linuxtv.org: http://linuxtv.org/downloads/v4l-dvb-apis/
Userspace applications can call an IOCTL to query the current input format. The following userspace code can be used to query the kernel driver for the current video standard:
(quoting http://www.linuxtv.org/downloads/legacy/video4linux/API/V4L2_API/spec-single/v4l2.html#standard)
Example 1.5. Information about the current video standard
v4l2_std_id std_id;
struct v4l2_standard standard;

if (-1 == ioctl(fd, VIDIOC_G_STD, &std_id)) {
        /* Note when VIDIOC_ENUMSTD always returns EINVAL this
           is no video device or it falls under the USB exception,
           and VIDIOC_G_STD returning EINVAL is no error. */
        perror("VIDIOC_G_STD");
        exit(EXIT_FAILURE);
}

memset(&standard, 0, sizeof(standard));
standard.index = 0;

while (0 == ioctl(fd, VIDIOC_ENUMSTD, &standard)) {
        if (standard.id & std_id) {
                printf("Current video standard: %s\n", standard.name);
                exit(EXIT_SUCCESS);
        }
        standard.index++;
}

/* EINVAL indicates the end of the enumeration, which cannot be
   empty unless this device falls under the USB exception. */
if (errno == EINVAL || standard.index == 0) {
        perror("VIDIOC_ENUMSTD");
        exit(EXIT_FAILURE);
}

PIC24 Firmware Bootloader doesn't start loaded program

I know this might not be the best place for this question, but I tried the Microchip forum and haven't gotten a response yet. I am trying to get an HID bootloader project working on a prototype board that I built using a PIC24FJ64GB002. I modified the example HID Bootloader project to work with my board, and I modified the example HID Mouse project to work with my board as well. When I program my device with the bootloader code, it runs fine, and the Microchip bootloader Windows program finds the device and displays "Device attached." But when I try to load the hex file of the Mouse program onto my device, it says it completes successfully, yet the mouse program never runs. I am not sure if I am using the correct linker scripts. Has anyone done this, and do you know what linker scripts I should be using for the bootloader project and the loadable project?
I was able to get a breadboarded PIC24FJ64GB002 working with the Microchip HID bootloader and the Microchip HID mouse app.
The key thing is to use the correct linker scripts for the bootloader and the app.
Bootloader Linker changes:
MEMORY
{
  ...
  program (xr) : ORIGIN = 0x400,  LENGTH = 0x1000
  app_ivt      : ORIGIN = 0x1400, LENGTH = 0xC0
  ...
}

__CODE_BASE = 0x400;
App linker changes:
MEMORY
{
  ...
  app_ivt      : ORIGIN = 0x1400, LENGTH = 0xC0
  program (xr) : ORIGIN = 0x14C0, LENGTH = 0x96E8
  ...
}

__CODE_BASE = 0x200;
After you load the application via the bootloader, you must reset the device.
The following code at the beginning of main() in the bootloader is what causes the bootloader to jump to the application.
mInitSwitch2();

if ((sw2 == 1) && ((RCON & 0x83) != 0))
{
    __asm__("goto 0x1400");
}