Is it possible to write a program under windows that will cause a remote process thread to break (stop execution in that thread) upon reaching a predefined address?
I have been experimenting with the Windows Debug API, but it seems very limited when it comes to setting breakpoints. The DebugBreakProcess function seemed promising, but I can't find any examples on how to use this API call.
You need to use WriteProcessMemory to write a breakpoint (on x86, an opcode of 0xCC) to the address.
On x86, when the debuggee hits that point in the code, the 0xCC generates an int 3 exception. Your debugger picks this up: WaitForDebugEvent returns a DEBUG_EVENT whose dwDebugEventCode is EXCEPTION_DEBUG_EVENT (with EXCEPTION_BREAKPOINT as the exception code).
You then need to patch that address back to its original opcode before continuing. If you want the breakpoint to fire again, single-step over the restored instruction and then re-write the 0xCC. To single-step, set the trap flag (bit 8 of EFlags) in the thread context.
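For illustration, here's a minimal sketch of that arm/hit/re-arm cycle, assuming a 32-bit x86 debuggee attached via the Debug API. hTarget, hThread and breakAddr stand in for the handles and breakpoint address your debug loop already tracks; error handling is omitted:

```c
#include <windows.h>

static BYTE originalByte;
static const BYTE int3 = 0xCC;

void SetBreakpoint(HANDLE hTarget, LPVOID breakAddr)
{
    SIZE_T n;
    /* Save the original opcode byte, then overwrite it with int 3. */
    ReadProcessMemory(hTarget, breakAddr, &originalByte, 1, &n);
    WriteProcessMemory(hTarget, breakAddr, &int3, 1, &n);
    FlushInstructionCache(hTarget, breakAddr, 1);
}

/* Call this when WaitForDebugEvent reports EXCEPTION_BREAKPOINT
 * at breakAddr, before ContinueDebugEvent. */
void OnBreakpointHit(HANDLE hTarget, HANDLE hThread, LPVOID breakAddr)
{
    SIZE_T n;
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);

    /* Restore the original opcode and rewind EIP past the int 3. */
    WriteProcessMemory(hTarget, breakAddr, &originalByte, 1, &n);
    FlushInstructionCache(hTarget, breakAddr, 1);
    ctx.Eip -= 1;

    /* Set the trap flag (bit 8 of EFlags) to single-step, so the
     * 0xCC can be re-armed after one instruction executes. */
    ctx.EFlags |= 0x100;
    SetThreadContext(hThread, &ctx);
}
```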
DebugBreakProcess is used to generate a remote break of a process you are debugging - it can't be used to break at an arbitrary point in the code.
Michael is right - also, if you want to break into an arbitrary process in the debugger once you've attached (i.e. if the user hits "Break into process" all of a sudden), the standard way is to create a remote thread whose routine issues an int3.
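A minimal sketch of that trick, assuming kernel32.dll is mapped at the same base address in the debugger and the target (historically true, and still true within a boot session under ASLR); on modern systems you would normally just call DebugBreakProcess instead:

```c
#include <windows.h>

/* Break into a debuggee by running kernel32!DebugBreak (an int 3)
 * on a fresh thread inside the target process. */
BOOL RemoteBreak(HANDLE hTarget)
{
    LPTHREAD_START_ROUTINE pDebugBreak = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "DebugBreak");
    HANDLE hThread = CreateRemoteThread(hTarget, NULL, 0,
                                        pDebugBreak, NULL, 0, NULL);
    if (hThread == NULL)
        return FALSE;
    CloseHandle(hThread);
    return TRUE;
}
```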
Related
On an AUTOSAR real-time Operating System (OS), the software architecture is layered (user space, system-call interface, kernel space). The switching between user context and kernel context is handled by hardware-specific infrastructure, and the context-switching handler is typically written in assembly code.
IBM® Rational® Test RealTime v8.0.1 (RTRT) currently treats embedded assembly code as described in the Q&A below.
https://www.ibm.com/support/pages/how-treat-embedded-assembly-code ( ** )
RTRT uses code insertion technology (technically known as instrumentation) to insert its own code into the system under test in order to measure code coverage.
In my case, the OS has a fully pre-emptive design and no termination points. As a result, the OS runs until power is lost. If there is no work, the OS sleeps (normally an idle state, doing nothing). If any unexpected error or exception occurs, the OS shuts down and runs into an infinite loop. In short, the OS is always running.
I learnt from ( ** ) and made sure context switching works correctly.
But I don't know how to teach RTRT to finish its postprocessing (consisting of attolcov and attolpostpro) the right way. Note that the OS has already run all my tasks correctly, as confirmed with a debugger: the SHUTDOWN OS procedure executed correctly and the OS ended up in its infinite loop (such as while(1){};).
After RTRT ends all its processes, the coverage report of the OS module is still empty.
Based on the IBM guidelines for RTRT:
https://www.ibm.com/developerworks/community/forums/atom/download/attachment_14076432_RTRT_User_Guide.pdf?nodeId=de3b0048-968c-4111-897e-b73654af32af
RTRT provides two breakpoints that mark the logging point (priv_writeln) and the termination point (priv_close) of its process.
I already tried to drive execution from INFINITE (my OS) to priv_close (RTRT) by manipulating the PC register and all context-switching registers with the Lauterbach debugger, but the RTRT coverage report was still empty even though no errors occurred. No errors means that the context switch from kernel space to user space works and that the main() function returned correctly.
Solved the problem.
It definitely came from the Operating System's context-switching process.
In my case, I did a RAM dump to see what the user-context memory looked like (before starting the OS).
After that, I backed up all context areas and restored them, in the exact order, when coming back from Sleep or the infinite loop.
That way, RTRT can find the return point and reach the _exit of its own main() function to finish the report generation.
Finally, the code coverage report has been generated.
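Purely as an illustration of that backup/restore idea, here is a sketch in C. CTX_START and CTX_SIZE are hypothetical placeholders for the user-context RAM area located via the RAM dump; the real addresses, sizes and restore order are target-specific:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical placeholders: the user-context RAM area found in the
 * RAM dump taken before the OS was started. */
#define CTX_START ((uint8_t *)0x20000000u)
#define CTX_SIZE  ((size_t)0x400u)

static uint8_t ctx_backup[CTX_SIZE];

/* Call once before starting the OS, while the user context is intact. */
void BackupUserContext(void)
{
    memcpy(ctx_backup, CTX_START, CTX_SIZE);
}

/* Call from the Sleep/infinite-loop path instead of spinning forever;
 * once the context is restored, execution can return to main() so the
 * RTRT runtime reaches _exit (priv_close) and flushes its coverage data. */
void RestoreUserContext(void)
{
    memcpy(CTX_START, ctx_backup, CTX_SIZE);
}
```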
I know that an interrupt causes the OS to switch a CPU away from its current task and to run a kernel routine. In this case, the system has to save the current context of the process running on the CPU.
However, I would like to know whether or not a context switch occurs when any random process makes a system call.
"I would like to know whether or not a context switch occurs when any random process makes a system call."
Not precisely. Recall that a process can only make a system call if it's currently running -- there's no need to make a context switch to a process that's already running.
If a process makes a blocking system call (e.g, sleep()), there will be a context switch to the next runnable process, since the current process is now sleeping. But that's another matter.
There are generally two ways to cause a context switch: (1) a timer interrupt invokes the scheduler, which forcibly makes a context switch, or (2) the process yields. Most operating systems have a number of system services that cause the process to yield the CPU.
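As a small illustration of the second case, here's a Linux program using sched_yield() (any blocking call such as the sleep() mentioned above would do as well); running it under strace shows the kernel entry for each yield:

```c
#include <sched.h>
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 3; i++) {
        printf("iteration %d\n", i);
        sched_yield();   /* voluntarily give the CPU to another runnable process */
    }
    return 0;
}
```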
Well, I get your point. First, let me clear up a very basic idea about system calls.
When a process/program makes a syscall, it traps into the kernel to invoke the syscall handler: the kernel stack is loaded from the TSS and execution jumps through the syscall function table.
It's actually much the same as running a different part of that program itself; the only major change is that the kernel plays a role here and that piece of code is executed in ring 0.
Now to your question: "does a context switch occur when a random process makes a syscall?"
Well, nothing special happens. Things keep working just as before; the only difference is that the kernel stack address comes from that random process's TSS and execution enters the kernel through the syscall function table.
I'm not an expert, just a hobbyist. I played with the 68000 architecture in the past and I've always wondered about its TRAP instruction. This instruction is usually described as a "bridge" to an OS (in some systems it's not used in this regard, but that's a different story). How is this achieved? TRAP itself is a privileged instruction, so how does this OS-invoking mechanism work in user mode? My guess is that a privilege violation exception is triggered and the exception handler checks which particular instruction caused the exception. If it's a TRAP instruction, then the instruction is simply executed (maybe TRAP's operand, i.e. the TRAP vector number, is checked as well), now of course in supervisor mode. Am I right?
The TRAP instruction is not privileged, you can call it from either user mode or supervisor mode.
It's the TRAP instruction itself that forces the CPU into supervisor mode; depending on the #xx number you used, it then jumps through one of the 16 possible vectors in the memory area $80 to $BC.
TRAP also pushes the PC and SR values onto the stack, so when the handler finally returns, the CPU goes back to whatever mode was set up before you called TRAP.
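A minimal sketch of how user code might invoke an OS service through TRAP, assuming an m68k GCC toolchain and a hypothetical calling convention where the handler installed at the TRAP #15 vector ($BC) takes a function code in d0 and returns a result there:

```c
/* Invoke a hypothetical OS service via TRAP #15 on the 68000. */
static inline long os_call(long func)
{
    register long d0 __asm__("d0") = func;  /* function code for the handler */
    __asm__ volatile ("trap #15"            /* enters supervisor mode via vector $BC */
                      : "+d"(d0)
                      :
                      : "memory", "cc");
    return d0;                              /* result left in d0 by the handler */
}
```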
I find that neither my textbooks nor my googling skills give me a proper answer to this question. I know it depends on the operating system, but in general: what happens, and why?
My textbook says that a system call causes the OS to go into kernel mode, given that it's not already there. This is needed because kernel mode is what has control over I/O devices and other things outside a specific process' address space. But if I understand it correctly, a switch to kernel mode does not necessarily mean a process context switch (where you save the current state of the process somewhere other than the CPU so that another process can run).
Why is this? I was thinking that some "admin" process was switched in, took care of the system call, and sent the result back to the process' address space, but I guess I'm wrong. I can't seem to grasp what ACTUALLY happens in a switch to and from kernel mode, and how this affects a process' ability to operate on I/O devices.
Thanks a lot :)
EDIT: bonus question: does a library call necessarily end up in a system call? If no, do you have any examples of library calls that do not end up in system calls? If yes, why do we have library calls?
Historically, system calls have been issued with interrupts: Linux used the 0x80 vector and Windows NT used the 0x2E vector to enter the kernel, with the system call's index stored in the eax register. More recently, the SYSENTER and SYSEXIT instructions are used instead. User applications run in Ring 3 (userspace/usermode). Switching from kernel mode back to user mode requires special care: the kernel essentially fools the CPU into thinking it was interrupted from user mode and then issues a special instruction called iret. The only ways to get from usermode back into kernelmode are an interrupt or the already mentioned SYSENTER/SYSEXIT instruction pair. The interrupt path uses a special structure called the TaskStateSegment, or TSS for short, to find where the kernel's stack is (SYSENTER takes it from dedicated MSRs instead), so yes, it essentially involves a small task-state switch.
But what really happens?
When you issue a system call, the CPU looks up the TSS, gets its esp0 value, which is the kernel's stack pointer, and places it into esp. The CPU then looks up the interrupt vector's index in another special structure, the InterruptDescriptorTable, or IDT for short, and finds the address of the function that handles the system call. The CPU pushes the flags register, the code segment, the user's stack pointer and the instruction pointer of the instruction following the int instruction. After the system call has been serviced, the kernel issues an iret. The CPU then returns to usermode and your application continues as normal.
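As a concrete illustration of the interrupt-based path described above, here's a 32-bit Linux snippet that issues write(2) directly via int 0x80, bypassing libc (build with gcc -m32; syscall number 4 is __NR_write on i386):

```c
int main(void)
{
    static const char msg[] = "hello from int 0x80\n";
    long num = 4;                                   /* eax: __NR_write, result out */
    __asm__ volatile ("int $0x80"
                      : "+a"(num)
                      : "b"((long)1),               /* ebx: fd = stdout */
                        "c"(msg),                   /* ecx: buffer      */
                        "d"((long)(sizeof msg - 1)) /* edx: length      */
                      : "memory");
    return 0;
}
```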
Do all library calls end in system calls?
Well, most of them do, but some don't. For example, take a look at memcpy and the other string/memory routines: they run entirely in user space and never enter the kernel.
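To make the contrast concrete, a tiny example (Linux/POSIX assumed): running it under strace shows a write() system call, but nothing at all for the memcpy, which executes entirely in user space:

```c
#include <string.h>
#include <unistd.h>

int main(void)
{
    char src[] = "library call vs system call\n";
    char dst[sizeof src];

    memcpy(dst, src, sizeof src);      /* pure user-space library call */
    write(1, dst, sizeof dst - 1);     /* thin wrapper over a syscall  */
    return 0;
}
```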
I have a wxWidgets/GTK based application that works well, except for one installation on a Debian Squeeze ARM system. There it crashes as soon as the user activates its main window. To find the reason, I added a signal handler to the application and use libunwind from that signal handler to locate the source of the crash. In a test this worked fine: when the software writes to, e.g., address 0x0, libunwind correctly points me to the function where that happens.
But the results on the system where the crash actually appears are a bit strange; they seem to point outside of my application. One crash comes from a function with no name (libunwind returns an empty string here), and one is caused by "malloc_usable_size", a system function that should never die this way.
So... what to do next? All ideas, suggestions or other hints are welcome, since I'm not sure how to continue with this problem.
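For reference, a minimal sketch of this kind of signal-handler backtrace using libunwind's local API (not the original poster's code; note that fprintf is not async-signal-safe and is used here only as diagnostic last-resort output; link with -lunwind):

```c
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    unw_cursor_t cursor;
    unw_context_t ctx;
    unw_word_t ip, off;
    char name[128];

    unw_getcontext(&ctx);
    unw_init_local(&cursor, &ctx);
    while (unw_step(&cursor) > 0) {
        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        if (unw_get_proc_name(&cursor, name, sizeof name, &off) != 0)
            name[0] = '\0';          /* no symbol found, as described above */
        fprintf(stderr, "ip=0x%lx %s+0x%lx\n",
                (unsigned long)ip, name, (unsigned long)off);
    }
    _exit(128 + sig);
}

int main(void)
{
    signal(SIGSEGV, crash_handler);
    /* ... application code ... */
    return 0;
}
```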
Check for buffer overruns and for code that unexpectedly overwrites structures, pointers, or memory returned by library functions.
Check your code for invalid frees of library-allocated pointers.
Maybe using valgrind would also help.
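For instance, a typical memcheck run looks like this (./your-app is a placeholder for the application binary):

```
valgrind --tool=memcheck --track-origins=yes ./your-app
```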