Is there any way to display the object contents using !mdt (SOSEX extension) but using a register?
I know that !mdt 299281 displays the object at that address (if any), but what if I want to do !mdt edx (a register instead of a hex number)?
Yes, just use #edx. Here's an example with eax:
0:004> r
eax=023c3e64 ebx=00000000 ecx=00000000 edx=773af1da esi=00000000 edi=00000000
eip=7732000c esp=04acfe88 ebp=04acfeb4 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!DbgBreakPoint:
7732000c cc int 3
0:004> !mdt #eax
023c3e64 (System.Globalization.CultureData)
sRealName:023c1228 (System.String)
sWindowsName:023c1228 (System.String)
sName:023c1228 (System.String)
sParent:023c1228 (System.String)
[...]
I am trying socket programming on ARM; however, I am not able to understand how the values for the arguments are decided.
For example, this is the link for Azeria Labs.
I understand that for an ARM system call, R7 holds the syscall number, hence 281 in this case, and the arguments are passed in R0, R1, R2 and R3. But how do you decide that R0 (AF_INET) is 2 and R1 (SOCK_STREAM) is 1 when creating socket(AF_INET, SOCK_STREAM, 0)?
Finding the system call number was easy:
$ grep socket /usr/include/asm/unistd-common.h
#define __NR_socket (__NR_SYSCALL_BASE+281)
#define __NR_socketpair (__NR_SYSCALL_BASE+288)
Similarly, is there a way to find the values for the arguments?
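I suppose the values must come from the C headers somewhere; as a rough sketch (assuming a standard glibc toolchain), something like this should print them:
/* socket_consts.c - print the numeric values used for the socket() arguments.
   Build with: gcc socket_consts.c -o socket_consts */
#include <stdio.h>
#include <sys/socket.h>   /* defines AF_INET and SOCK_STREAM */

int main(void)
{
    /* These are the values that end up in R0 and R1 before the syscall. */
    printf("AF_INET     = %d\n", AF_INET);      /* typically 2 */
    printf("SOCK_STREAM = %d\n", SOCK_STREAM);  /* typically 1 */
    return 0;
}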
I found another resource for x86 assembly that takes a similar approach.
%assign SOCK_STREAM 1
%assign AF_INET 2
%assign SYS_socketcall 102
%assign SYS_SOCKET 1
%assign SYS_CONNECT 3
%assign SYS_SEND 9
%assign SYS_RECV 10
section .text
global _start
;--------------------------------------------------
;Functions to make things easier. :]
;--------------------------------------------------
_socket:
mov [cArray+0], dword AF_INET
mov [cArray+4], dword SOCK_STREAM
mov [cArray+8], dword 0
mov eax, SYS_socketcall
mov ebx, SYS_SOCKET
mov ecx, cArray
int 0x80
ret
Kindly let me know.
Thank you.
Linux alarmpi 4.4.34+ #3 Thu Dec 1 14:44:23 IST 2016 armv6l GNU/Linux
%include "init.inc"
[org 0x0]
[bits 16]
jmp 0x07C0:start_boot
start_boot:
mov ax, cs
mov ds, ax
mov es, ax
load_setup:
mov ax, SETUP_SEG
mov es, ax
xor bx, bx
mov ah, 2 ; copy data to es:bx from disk.
mov al, 1 ; read a sector.
mov ch, 0 ; cylinder 0
mov cl, 2 ; read data since sector 2.
mov dh, 0 ; Head = 0
mov dl, 0 ; Drive = 0
int 0x13 ; BIOS call.
jc load_setup
lea si, [msg_load_setup]
call print
jmp $
print:
print_beg:
mov ax, 0xB800
mov es, ax
xor di, di
print_msg:
mov al, byte [si]
mov byte [es:di], al
or al, al
jz print_end
inc di
mov byte [es:di], BG_TEXT_COLOR
inc di
inc si
jmp print_msg
print_end:
ret
msg_load_setup db "Loading setup.bin was completed." , 0
times 510-($-$$) db 0
dw 0xAA55
I want to load setup.bin to memory address zero, so I put the value 0 into the es register (SETUP_SEG = 0), and into bx as well. But it didn't work, so I have a question about this issue. My test results are below.
SETUP_SEG's value
0x0000 : fail
0x0010 : success
0x0020 : fail
0x0030 : fail
0x0040 : fail
0x0050 : success
I can't understand why this happens. All tests were carried out on VMware. Does anyone have an idea?
I'm not sure if this is your problem, but you're trying to load setup.bin into the real-mode IVT (Interrupt Vector Table). The IVT contains the location of each interrupt handler, so I'm assuming that your bootloader is overwriting them when it loads setup.bin into memory! Interrupts can be sneaky and tricky, since they can fire even if you didn't call them. Any interrupt vector you overwrote will likely cause undefined behavior when that interrupt is raised, which will cause some problems.
I suggest setting SETUP_SEG to a higher number like 0x2000 or 0x3000, but the lowest you could safely go is 0x07E0. The OSDev Wiki and Wikipedia have some helpful information on conventional memory and the real-mode memory map.
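To see why 0x07E0 is the floor: in real mode the physical address is segment * 16 + offset, so segment 0x0000 starts right on top of the IVT, while segment 0x07E0 is the first byte after the 512-byte boot sector loaded at 0x7C00. A small C sketch of that arithmetic, just for illustration:
#include <stdio.h>

/* Real-mode physical address = segment * 16 + offset. */
static unsigned long phys(unsigned segment, unsigned offset)
{
    return (unsigned long)segment * 16UL + offset;
}

int main(void)
{
    printf("IVT starts at         0x%05lX\n", phys(0x0000, 0));  /* 0x00000 */
    printf("Boot sector starts at 0x%05lX\n", phys(0x07C0, 0));  /* 0x07C00 */
    printf("First free byte at    0x%05lX\n", phys(0x07E0, 0));  /* 0x07E00 */
    return 0;
}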
I hope this helps!
So I have a crash dump, and using WinDbg I am able to get the stack trace. However, it appears to be skipping every other function. For instance, if the actual code is:
void a()
{
b();
}
void b()
{
c();
}
void c(){}
The stack trace would have a and c in the stack frames, but not b. Is this expected behavior, and is there something I can do to see the entire stack trace? I have been using the command "kn" to view the stack trace.
The compiler may optimize the code. One optimization is to inline methods so that the call and ret instructions and everything related (push, pop, etc.) are removed.
For demonstration purposes, I have added a std::cout statement to your code that we can identify, so the full code now reads:
#include "stdafx.h"
#include <iostream>
void c()
{
std::cout << "Hello world";
}
void b()
{
c();
}
void a()
{
b();
}
int _tmain(int argc, _TCHAR* argv[])
{
a();
return 0;
}
Walkthrough in debug build
In a typical debug build, there are no optimizations and you can see the methods. To follow the example, start WinDbg first and run the executable via File / Open Executable...
0:000> .symfix e:\debug\symbols
0:000> lm
start end module name
010d0000 010f3000 OptimizedInlined (deferred)
...
0:000> ld OptimizedInlined
Symbols loaded for OptimizedInlined
0:000> x OptimizedInlined!c
010e50c0 OptimizedInlined!c (void)
0:000> bp OptimizedInlined!c
0:000> bl
0 e 010e50c0 0001 (0001) 0:**** OptimizedInlined!c
0:000> g
Breakpoint 0 hit
eax=cccccccc ebx=7efde000 ecx=00000000 edx=00000001 esi=00000000 edi=003ffa00
eip=010e50c0 esp=003ff930 ebp=003ffa00 iopl=0 nv up ei pl nz na po nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000202
OptimizedInlined!c:
010e50c0 55 push ebp
0:000> u eip L18
OptimizedInlined!c [e:\...\optimizedinlined.cpp # 8]:
010e50c0 55 push ebp
010e50c1 8bec mov ebp,esp
010e50c3 81ecc0000000 sub esp,0C0h
010e50c9 53 push ebx
010e50ca 56 push esi
010e50cb 57 push edi
010e50cc 8dbd40ffffff lea edi,[ebp-0C0h]
010e50d2 b930000000 mov ecx,30h
010e50d7 b8cccccccc mov eax,0CCCCCCCCh
010e50dc f3ab rep stos dword ptr es:[edi]
010e50de 6854ca0e01 push offset OptimizedInlined!`string' (010eca54)
010e50e3 a1e4000f01 mov eax,dword ptr [OptimizedInlined!_imp_?coutstd (010f00e4)]
010e50e8 50 push eax
010e50e9 e8c9c1ffff call OptimizedInlined!ILT+690(??$?6U?$char_traitsDstdstdYAAAV?$basic_ostreamDU?$char_traitsDstd (010e12b7)
010e50ee 83c408 add esp,8
010e50f1 5f pop edi
010e50f2 5e pop esi
010e50f3 5b pop ebx
010e50f4 81c4c0000000 add esp,0C0h
010e50fa 3bec cmp ebp,esp
010e50fc e838c2ffff call OptimizedInlined!ILT+820(__RTC_CheckEsp) (010e1339)
010e5101 8be5 mov esp,ebp
010e5103 5d pop ebp
010e5104 c3 ret
Walkthrough in release build
In the release build, see how things change. The WinDbg commands are the same, but the methods c(), b() and a() cannot be found; the std::cout call has been inlined into wmain():
0:000> .symfix e:\debug\symbols
0:000> lm
start end module name
01360000 01367000 OptimizedInlined (deferred)
...
0:000> ld OptimizedInlined
Symbols loaded for OptimizedInlined
0:000> x OptimizedInlined!c
0:000> x OptimizedInlined!b
0:000> x OptimizedInlined!a
0:000> x OptimizedInlined!wmain
013612a0 OptimizedInlined!wmain (int, wchar_t **)
0:000> bp OptimizedInlined!wmain
0:000> bl
0 e 013612a0 0001 (0001) 0:**** OptimizedInlined!wmain
0:000> g
Breakpoint 0 hit
eax=6142f628 ebx=00000000 ecx=0087a6e8 edx=0019def8 esi=00000001 edi=00000000
eip=013612a0 esp=0042f8f8 ebp=0042f934 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
OptimizedInlined!wmain:
013612a0 8b0d44303601 mov ecx,dword ptr [OptimizedInlined!_imp_?coutstd (01363044)] ds:002b:01363044={MSVCP120!std::cout (646c6198)}
0:000> u eip L4
OptimizedInlined!wmain [e:\...\optimizedinlined.cpp # 23]:
013612a0 8b0d44303601 mov ecx,dword ptr [OptimizedInlined!_imp_?coutstd (01363044)]
013612a6 e895040000 call OptimizedInlined!std::operator<<<std::char_traits<char> > (01361740)
013612ab 33c0 xor eax,eax
013612ad c3 ret
As already mentioned by Lieven, this is a case where the compiler is inlining function calls.
There are two things that I would suggest:
a. If you have control over the code and build environment, build a non-optimized/debug version of the code. This will ensure that the compiler does not inline functions on its own (see the sketch below).
b. In WinDbg you can see the disassembly under View --> Disassembly. There you can see that the code for function b() is actually inlined under a().
If you are comfortable with assembly debugging, you can get the values of locals as well :)
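If rebuilding everything without optimizations is not practical, a related trick is to keep the optimized build but mark the interesting functions as non-inlinable so they keep their own stack frames; a minimal sketch using the MSVC-specific __declspec(noinline), applied to the example from the question:
#include <stdio.h>

/* __declspec(noinline) (MSVC-specific) stops the compiler from inlining these
   functions even in a release build, so a, b and c all appear in kn output. */
__declspec(noinline) void c(void) { printf("Hello world"); }
__declspec(noinline) void b(void) { c(); }
__declspec(noinline) void a(void) { b(); }

int main(void)
{
    a();
    return 0;
}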
I'm having problems analyzing a simple binary in IDA Pro.
While running a program, I dumped part of its memory (for example, an unpacked code section) into a file using WinDbg.
I would like to analyze it using IDA, but when I just load the binary, it only shows its raw data.
Of course the binary is not a full PE file, so I'm not expecting a deep analysis, just a nicer way to read the disassembly.
So the question is: how can I make IDA disassemble the binary?
Thanks! :)
Select an appropriate address and press C.
That is MakeCode(ea); IDA will convert the raw bytes to code and disassemble them.
Pasted below is a simple automation with an IDC script, but IDA's automation is IMHO subpar, so you should stick with manually pressing C in the user interface.
:dir /b
foo.dmp
foo.idc
:xxd foo.dmp
0000000: 6a10 6830 b780 7ce8 d86d ffff 8365 fc00 j.h0..|..m...e..
0000010: 64a1 1800 0000 8945 e081 7810 001e 0000 d......E..x.....
0000020: 750f 803d 0850 887c 0075 06ff 15f8 1280 u..=.P.|.u......
0000030: 7cff 750c ff55 0850 e8c9 0900 00 |.u..U.P.....
:type foo.idc
#include <idc.idc>
static main (void) {
auto len,temp,fhand;
len = -1; temp = 0;
while (temp < 0x3d && len != 0 ) {
len = MakeCode(temp);
temp = temp+len;
}
fhand = fopen("foo.asm","wb");
GenerateFile(OFILE_LST,fhand,0,0x3d,0x1F);
fclose(fhand);
Wait();
Exit(0);
}
:f:\IDA_FRE_5\idag.exe -c -B -S.\foo.idc foo.dmp
:head -n 30 foo.asm | tail
seg000:00000000 ; Segment type: Pure code
seg000:00000000 seg000 segment byte public 'CODE' use32
seg000:00000000 assume cs:seg000
seg000:00000000 assume es:nothing, ss:nothing, ds:nothing, fs:nothing, gs:nothing
seg000:00000000 push 10h
seg000:00000002 push 7C80B730h
seg000:00000007 call near ptr 0FFFF6DE4h
seg000:0000000C and dword ptr [ebp-4], 0
With WinDbg you can get the disassembly right from the command line, like this:
:cdb -c ".dvalloc /b 60000000 2000;.readmem foo.dmp 60001000 l?0n61;u 60001000 60001040;q" calc
0:000> cdb: Reading initial command '.dvalloc /b 60000000 2000;.readmem foo.dmp 60001000 l?0n61;u 60001000 60001040;q'
Allocated 2000 bytes starting at 60000000
Reading 3d bytes.
60001000 6a10 push 10h
60001002 6830b7807c push offset kernel32!`string'+0x88 (7c80b730)
60001007 e8d86dffff call 5fff7de4
6000100c 8365fc00 and dword ptr [ebp-4],0
60001010 64a118000000 mov eax,dword ptr fs:[00000018h]
60001016 8945e0 mov dword ptr [ebp-20h],eax
60001019 817810001e0000 cmp dword ptr [eax+10h],1E00h
60001020 750f jne 60001031
60001022 803d0850887c00 cmp byte ptr [kernel32!BaseRunningInServerProcess (7c885008)],0
60001029 7506 jne 60001031
6000102b ff15f812807c call dword ptr [kernel32!_imp__CsrNewThread (7c8012f8)]
60001031 ff750c push dword ptr [ebp+0Ch]
60001034 ff5508 call dword ptr [ebp+8]
60001037 50 push eax
60001038 e8c9090000 call 60001a06
6000103d 0000 add byte ptr [eax],al
6000103f 0000 add byte ptr [eax],al
quit:
In OllyDbg 1.10: View -> File -> (mask: any file) -> foo.dmp -> right-click -> Disassemble
I'm debugging a program that's crashing with WinDbg set as my post-mortem debugger. I have set a breakpoint at address 77f7f571. When it's triggered, I used to get the following:
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\WINDOWS\System32\ntdll.dll -
ntdll!DbgBreakPoint+0x1:
Then I followed the instructions from http://www.osronline.com/ShowThread.cfm?link=178221, and now I just get
ntdll!DbgBreakPoint+0x1:
I'd like to remove this breakpoint, but I can't get it listed or deleted. There's no output from bl, nor from bc or bd:
0:002> bl
0:002> bc *
0:002> bd *
This is not a line-based breakpoint; it looks like a manual call to DebugBreak(), as in the following program:
#include "stdafx.h"
#include "windows.h"
int _tmain()
{
DebugBreak();
return 0;
}
Internally, that method raises a break instruction exception. To control whether WinDbg stops due to that exception, use sxe bpe to stop and sxi bpe to ignore it.
To try this, compile the above application and run it under WinDbg (Ctrl+E). At the initial breakpoint, take over control:
(1c2c.6a8): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=00000000 ecx=779d0000 edx=0020e218 esi=fffffffe edi=00000000
eip=773e12fb esp=0038f9e8 ebp=0038fa14 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!LdrpDoDebuggerBreak+0x2c:
773e12fb cc int 3
0:000> sxe bpe; g
(1c2c.6a8): Break instruction exception - code 80000003 (first chance)
*** WARNING: Unable to verify checksum for DebugBreak.exe
eax=cccccccc ebx=7efde000 ecx=00000000 edx=00000001 esi=0038fd44 edi=0038fe10
eip=74d5322c esp=0038fd40 ebp=0038fe10 iopl=0 nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206
KERNELBASE!DebugBreak+0x2:
74d5322c cc int 3
0:000> g
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000 esi=77442100 edi=774420c0
eip=7735fd02 esp=0038fd78 ebp=0038fd94 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!ZwTerminateProcess+0x12:
7735fd02 83c404 add esp,4
After this experiment, type .restart. Then repeat the experiment with sxi bpe:
(109c.1c1c): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=00000000 ecx=be9e0000 edx=0009e028 esi=fffffffe edi=00000000
eip=773e12fb esp=002ff890 ebp=002ff8bc iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!LdrpDoDebuggerBreak+0x2c:
773e12fb cc int 3
0:000> sxi bpe; g
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000 esi=77442100 edi=774420c0
eip=7735fd02 esp=002ffc20 ebp=002ffc3c iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!ZwTerminateProcess+0x12:
7735fd02 83c404 add esp,4
As you can see, WinDbg no longer stops at KERNELBASE!DebugBreak+0x2 because of the exception.