Logical Block Address access above kworker level - eBPF

Currently, I am trying to trace Logical Block Address (LBA) accesses per process. I am aware of biosnoop.py, which probes blk_start_request. With my own program, though, the only processes I ever see are kworkers.
Two ideas to resolve this problem:
1. Find out which process the kworker received its current task from. This seems rather complicated to do in eBPF (if possible at all).
2. Probe another kernel function where the LBA can be intercepted or obtained in some other way. I tried looking around in the virtual file system but did not find anything useful. Any recommendations?

Related

How to debug in SimPy

I have a general question about how to debug in SimPy. Normal debugging tools don't seem to work, since everything runs on the event loop and you can't step through the code line by line to inspect what exists at any point in time.
Primarily, I'm interested in finding out what kinds of processes and callbacks exist at a particular time, and how to remove them at the appropriate point. Are there any best practices for debugging discrete event simulations in general?
I would just use a bunch of print()s.
One thing you might find useful is that primitives such as resources can be queried for their state. For example, you can ask a resource how many users it currently has with resource.count, and how big the queue to use the resource is with len(resource.queue).
All of these attributes can be found in the documentation; here is the resource example: https://simpy.readthedocs.io/en/latest/api_reference/simpy.resources.html

What exactly happens when an OS goes into kernel mode?

I find that neither my textbooks nor my googling skills give me a proper answer to this question. I know it depends on the operating system, but on a general note: what happens and why?
My textbook says that a system call causes the OS to go into kernel mode, given that it's not already there. This is needed because kernel mode is what has control over I/O devices and other things outside a specific process' address space. But if I understand it correctly, a switch to kernel mode does not necessarily mean a process context switch (where you save the current state of the process somewhere other than the CPU registers so that some other process can run).
Why is this? I was kinda thinking that some "admin" process was switched in, took care of the system call, and sent the result back to the process' address space, but I guess I'm wrong. I can't seem to grasp what ACTUALLY happens in a switch to and from kernel mode, and how this affects a process' ability to operate on I/O devices.
Thanks a lot :)
EDIT: bonus question: does a library call necessarily end up in a system call? If no, do you have any examples of library calls that do not end up in system calls? If yes, why do we have library calls?
Historically, system calls have been issued with interrupts. Linux used the 0x80 vector and Windows used the 0x2E vector to access system calls, with the function's index stored in the eax register. More recently, we started using the SYSENTER and SYSEXIT instructions. User applications run in Ring 3, also known as userspace or usermode. The CPU is very tricky here, and switching from kernel mode back to user mode requires special care: the kernel issues a special instruction called iret, which makes the CPU restore the state the application had in usermode. The only ways to get from usermode back to kernelmode are an interrupt or the already mentioned SYSENTER/SYSEXIT instruction pair. The interrupt path relies on a special structure called the Task State Segment, or TSS for short, which lets the CPU find where the kernel's stack is (SYSENTER instead takes the kernel stack pointer from a model-specific register). So a stack switch does happen, but not a full process context switch.
But what really happens?
When you issue a system call, the CPU looks up the TSS, gets its esp0 value, which is the kernel's stack pointer, and places it into esp. The CPU then looks up the interrupt vector's index in another special structure, the Interrupt Descriptor Table (IDT for short), and finds the address of the function that handles the system call. On the way in, the CPU pushes the user's stack segment and stack pointer, the flags register, the code segment, and the instruction pointer of the instruction right after the int instruction. After the system call has been serviced, the kernel issues an iret. The CPU then returns back to usermode and your application continues as normal.
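As a concrete illustration, here is a minimal sketch of issuing a system call the historical way, via int 0x80. It assumes 32-bit x86 Linux (syscall number 4 is write on i386; it will not build for x86-64):

int main(void) {
    const char msg[] = "hello via int 0x80\n";
    long ret;
    /* eax = syscall number (4 = write on i386),
       ebx/ecx/edx = the call's first three arguments */
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(1), "c"(msg), "d"(sizeof msg - 1)
                      : "memory");
    return ret < 0;
}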
Do all library calls end in system calls?
Well, most of them do, but there are some which don't. For example, take a look at memcpy and most of the other mem*/str* functions: they do their work entirely in user space.
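A quick way to convince yourself, sketched below: run this under strace and you will see a write() for the output, but nothing at all for the copy itself:

#include <stdio.h>
#include <string.h>

int main(void) {
    char src[] = "copied without any syscall";
    char dst[sizeof src];
    memcpy(dst, src, sizeof src);  /* pure user-space computation */
    puts(dst);                     /* this one does end in write() */
    return 0;
}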

Embedded Linux LED-flashing daemon: does it exist?

I've seen embedded boards before that have an LED that flashes like a heartbeat to show that the board is still executing code. I'd like to do something similar on an embedded Linux board I'm working on. Given that it's a fairly trivial bit of code, it seems likely to me that someone has already written a daemon for Linux that does this, but I haven't been able to find any evidence.
Note that OS X Server's heartbeatd and the High-Availability Linux heartbeat daemon are not what I'm looking for-- they both coordinate system availability over IP networks, or something like that.
Assuming what I'm looking for doesn't exist, I'm also interested in advice about how to write a daemon that toggles a pin while minimizing resource usage. At what update rate does cron become a stupid idea?
(I'd also rather not hear gushing about the LED on the sleeping MacBook Pro, if that seems relevant for some reason.)
Thanks.
The LED heartbeat is a built-in kernel function. Assuming you have a device driver for your LED, turning on the heartbeat is done thus:
$ echo "heartbeat" > /sys/class/leds/MyLed/trigger
To see the list of available triggers (MMC activity, heartbeat, etc.):
$ cat /sys/class/leds/MyLed/trigger
See drivers/leds/ledtrig-heartbeat.c and http://www.avrfreaks.net/wiki/index.php/Documentation:Linux/LEDs
The interesting thing about the heartbeat is that the pattern is dynamic. The basic pattern is thump-thump-pause, just like a human heartbeat. But the rate of the heartbeat is controlled by the load average! Light loads beat at about 50 beats per minute. Heavier loads cause faster beating until it maxes out at about 180 bpm.
I wouldn't use cron. It's just not the right tool. A very simple solution is to run a shell script from your inittab.
Example:
#!/bin/sh
# Heartbeat loop; init respawns it if it ever dies.
while true
do
    logger "blink!" # to be replaced with your board's LED toggle
    sleep 1
done
Save this to /bin/blink.sh, add the following line to your inittab, and have init reread the tab by running init q.
bl:2345:respawn:/bin/blink.sh
Of course you have to adjust the blink.sh script to your environment. How an LED can be toggled from user space (device driver file, sysfs entry, ...) is highly dependent on the particular board.
If you need something more efficient you might redo the loop in C, but it might not be worth the effort.
One thing to think about is what you want to signal with a pulsing LED. With the approach outlined above we can only show that the board is still alive (kernel is running, the process executing blink.sh is scheduled and blink.sh is doing what it is supposed to do). For some use cases this might be fine but more often you actually want to signal that the application running on an embedded board is still OK (doesn't hang, hasn't crashed, ...). To implement such functionality you need to integrate the code that toggles the LED into the main loop of your application.
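A minimal sketch of that idea in C, assuming the LED from the earlier example ("MyLed", with its trigger set to none so that user space owns the brightness file); the sysfs path and the loop body are placeholders:

#include <stdio.h>
#include <unistd.h>

/* Assumption: /sys/class/leds/MyLed exists and its trigger is "none". */
static void led_set(int on)
{
    FILE *f = fopen("/sys/class/leds/MyLed/brightness", "w");
    if (f) {
        fprintf(f, "%d", on);
        fclose(f);
    }
}

int main(void)
{
    int state = 0;
    for (;;) {
        /* ... one iteration of the real application work ... */
        state = !state;
        led_set(state);  /* the LED only blinks while this loop runs */
        sleep(1);
    }
}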

How to do a "kill_proc()" in Linux Kernel 2.6.31.5

Trying this free forum for developers. I am migrating a serial driver to kernel 2.6.31.5, and I have used various books and articles to solve problems going from 2.4.
Now I have a couple of kill_proc() calls that are not supported anymore in kernel 2.6.31.5.
What would be the fastest way to migrate this to the kernel 2.6.31.5 way of killing a thread? The books say to use kill(), but that does not seem to exist in 2.6.31.5. Using send_signal would be a good way, but how do I do this? There must be a task_struct or something; I wish I could just provide my PID and SIGTERM and go ahead and kill my thread, but it seems more complicated, having to set up a struct with parameters I do not know.
If anyone has a real example, or a link to a place with up-to-date info on kernel 2.6.31, I would be very thankful. Simply put, I need to kill my thread, and this is not supposed to be hard. ;)
This is my code now:
kill_proc(ex_pid, SIGTERM, 1);
/Jörgen
For use with kthreads, there is now kthread_stop(), which the caller (e.g. the module's exit function) can invoke. The kthread itself has to check kthread_should_stop() in its loop. Examples of this are readily available in the kernel source tree.
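A minimal sketch of the pattern (the demo_* names are made up for illustration):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

static struct task_struct *worker;

static int worker_fn(void *data)
{
    /* loop until the exit function asks us to stop */
    while (!kthread_should_stop()) {
        /* ... do the actual work here ... */
        ssleep(1);
    }
    return 0;
}

static int __init demo_init(void)
{
    worker = kthread_run(worker_fn, NULL, "demo_worker");
    return IS_ERR(worker) ? PTR_ERR(worker) : 0;
}

static void __exit demo_exit(void)
{
    kthread_stop(worker);  /* blocks until worker_fn returns */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");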

Writing a very basic debugger

Is it possible to write a program under windows that will cause a remote process thread to break (stop execution in that thread) upon reaching a predefined address?
I have been experimenting with the Windows Debug API, but it seems very limited when it comes to setting breakpoints. The DebugBreakProcess function seemed promising, but I can't find any examples on how to use this API call.
You need to use WriteProcessMemory to write a breakpoint (on x86, an opcode of 0xCC) to the address.
On x86, when the debuggee hits that point in the code, the 0xCC will raise an int 3 exception. Your debugger picks this up via WaitForDebugEvent, which returns a DEBUG_EVENT whose code is EXCEPTION_DEBUG_EVENT.
You then need to patch that address back to its original byte before continuing. If you want the breakpoint to fire again, you single-step over the instruction and then re-patch the breakpoint opcode. To single-step, set the trap flag in EFlags in the thread context.
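A minimal sketch of planting the breakpoint (set_breakpoint is a made-up helper name; hProcess needs PROCESS_VM_READ, PROCESS_VM_WRITE and PROCESS_VM_OPERATION access):

#include <windows.h>

/* Write an int 3 (0xCC) into the debuggee at addr, saving the
   original byte so it can be restored when the breakpoint hits. */
BOOL set_breakpoint(HANDLE hProcess, LPVOID addr, BYTE *saved)
{
    BYTE int3 = 0xCC;
    SIZE_T n;
    if (!ReadProcessMemory(hProcess, addr, saved, 1, &n) || n != 1)
        return FALSE;
    if (!WriteProcessMemory(hProcess, addr, &int3, 1, &n) || n != 1)
        return FALSE;
    /* make sure the CPU doesn't execute a stale copy of the old byte */
    FlushInstructionCache(hProcess, addr, 1);
    return TRUE;
}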
DebugBreakProcess is used to generate a remote break of a process you are debugging - it can't be used to break at an arbitrary point in the code.
Michael is right - also, if you want to break into an arbitrary process in the debugger once you've attached (i.e. if the user hits "Break into process" all of a sudden), the standard way is to create a remote thread whose routine issues an int3.
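A sketch of that technique (break_in is a made-up helper name; it points the remote thread at kernel32's DebugBreak, which executes an int3, and relies on kernel32.dll being mapped at the same base address in both processes):

#include <windows.h>

/* hProcess must be opened with enough access for CreateRemoteThread
   (PROCESS_CREATE_THREAD plus the VM access rights). */
HANDLE break_in(HANDLE hProcess)
{
    LPTHREAD_START_ROUTINE brk = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "DebugBreak");
    return CreateRemoteThread(hProcess, NULL, 0, brk, NULL, 0, NULL);
}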