Underscore issue linking a Fortran library (MUMPS) with precompiled C libraries (BLAS/LAPACK) - MATLAB

I'm trying to build the Fortran linear solver MUMPS for MATLAB. I have successfully compiled the object files and linked them into the MUMPS libraries.
Now I have to link against the MATLAB BLAS/LAPACK libraries (as they are already available and optimized), but I run into the classic missing-underscore symbol issue: undefined reference to dgemm_ when linking with
gfortran -o dsimpletest -O -I. dsimpletest.o ../lib/libdmumps.a ../lib/libmumps_common.a -L../PORD/lib/ -lpord -L./libmwlapack.a -L../libseq -lmpiseq -L./libmwblas.a -lgomp -lpthread
I have tried various combinations of -f(no-)underscoring, but without much success. Is there anything obvious that I might be missing?
EDIT: I'm using MinGW and compiling for Windows, so this might be the issue:
dumpbin /All libmwblas.lib | grep gemm
142D0 dgemm
142D0 __imp_dgemm
00000002 REL32 00000000 8 __imp_dgemm
00000000: BE 00 64 67 65 6D 6D 00 _.dgemm.
007 00000000 SECT1 notype External | dgemm
008 00000000 SECT5 notype External | __imp_dgemm
dgemm
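One way to double-check which spelling the runtime library actually exports is a quick ctypes probe. This is a minimal sketch assuming the matching libmwblas.dll from the MATLAB bin directory is on the loader path; the DLL name and location are assumptions, not something from the question:

# Probe MATLAB's BLAS DLL for decorated vs. undecorated symbol names.
# "libmwblas.dll" is an assumption; point it at your MATLAB bin directory.
import ctypes

blas = ctypes.CDLL("libmwblas.dll")
for name in ("dgemm", "dgemm_", "DGEMM"):
    print(name, "->", "exported" if hasattr(blas, name) else "not found")

If only plain dgemm shows up (as the dumpbin output above suggests), then objects built with gfortran's default -funderscoring, which reference dgemm_, cannot resolve against it.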

The ba command (Break on Access) in WinDbg is not working as advertised in the book Advanced Windows Debugging by Daniel Pravat and Mario Hewardt

I am reading the book Advanced Windows Debugging: Developing and Administering Reliable, Robust, and Secure Software by Daniel Pravat and Mario Hewardt. I have questions about Chapter 2.
I am using WinDBG 10.0.19041.1 X86 on Windows 10 Pro Version 2004 build 19041.572. I built 02sample.exe with Microsoft Visual Studio Community 2019 Version 16.7.6 with the GenerateDebugInformation property set to DebugFull. I am debugging the Debug|x86 configuration of 02sample.exe.
I am reading the section in chapter 2 named Setting a Breakpoint on Access and I am seeing differences between what is in the book and what I am experiencing.
The first difference in behavior is with the following command.
0:000> dt gGlobal
This command fails with the following error.
Symbol gGlobal not found.
The following command does work.
0:000> dt 02sample!gGlobal
*** WARNING: Unable to verify checksum for 02sample.exe
gGlobal
+0x000 m_ref : 0n0
The next difference in behavior is with the following command.
0:000> ba w4 gGlobal+0
Based on the following output, this appears to work.
0:000> bl
0 e Disable Clear 00969130 w 4 0001 (0001) 0:**** 02sample!gGlobal
However, the breakpoint is not being hit. I cannot figure out why.
The symbols are probably not yet loaded.
Try either .reload /f or .reload /f foo.exe
before attempting an unqualified dt gGlobal.
A qualified dt foo!gGlobal will always work, because it forces the symbols to load.
Check what is happening with !sym noisy:
0:000> !sym noisy
noisy mode - symbol prompts off
0:000> dt gGlobal
Symbol gGlobal not found.
0:000> dt gGlobal
Symbol gGlobal not found.
0:000> dt gGlobal
Symbol gGlobal not found.
0:000> dt awd!gGlobal
SYMSRV: BYINDEX: 0x2
snipxxxxxxxxxxxxxxx
DBGHELP: awd - public symbols & lines
C:\Users\XXX\Desktop\awd\Chapter2\Debug\awd.pdb
+0x000 m_ref : 0n0
0:000>
How does the book tell you to set the ba breakpoint?
You cannot set a ba breakpoint on a module while you are stopped at the initial system breakpoint,
because the system will reset the thread context.
You have to go to the entry point and then set the ba breakpoint, as WinDbg suggests. Did you do that?
:\>cdb -c "g #$exentry;ba w4 awd!gGlobal;g;u .;kb;q" awd.exe |awk "/Reading/,/quit/"
0:000> cdb: Reading initial command 'g #$exentry;ba w4 awd!gGlobal;g;u .;kb;q'
*** WARNING: Unable to verify checksum for awd.exe
Breakpoint 0 hit
awd!Global::Global+0x21:
00a23461 8b45fc mov eax,dword ptr [ebp-4]
00a23464 83c404 add esp,4
00a23467 3bec cmp ebp,esp
00a23469 e895ddffff call awd!ILT+510(__RTC_CheckEsp) (00a21203)
00a2346e 8be5 mov esp,ebp
00a23470 5d pop ebp
00a23471 c3 ret
00a23472 cc int 3
ChildEBP RetAddr Args to Child
0026f9d4 00a21687 0026f9ec 522c59df 00a21670 awd!Global::Global+0x21
0026f9dc 522c59df 00a21670 00a2b208 0026fa50 awd!`dynamic initializer for 'gGlobal''+0x17
0026f9ec 00a24a5e 00a2b000 00a2b30c 91b61bad ucrtbased!_initterm+0x3f
0026fa50 00a2498d 0026fa60 00a24d08 0026fa6c awd!__scrt_common_main_seh+0xbe
0026fa58 00a24d08 0026fa6c 7683ed6c 7ffde000 awd!__scrt_common_main+0xd
0026fa60 7683ed6c 7ffde000 0026faac 773337eb awd!wmainCRTStartup+0x8
0026fa6c 773337eb 7ffde000 761f96f6 00000000 kernel32!BaseThreadInitThunk+0xe
0026faac 773337be 00a21145 7ffde000 00000000 ntdll!__RtlUserThreadStart+0x70
0026fac4 00000000 00a21145 7ffde000 00000000 ntdll!_RtlUserThreadStart+0x1b
quit:
If by chance you mistook the entry point for wmain() and set the ba break on reaching wmain(), it will probably never be hit, because the code in question (the dynamic initializer for gGlobal, as the stack above shows) has already executed by then.

How to do hybrid user-mode/kernel-mode debugging?

Basically, I have a user mode program that calls kernel32.CreateProcessA() which internally calls kernel32.CreateProcessInternalW(). Within this function, I'm interested in what is happening inside ntdll.NtCreateSection() which attempts to map the executable in virtual memory. Once in this function, the program quickly sets up the kernel call as EAX=0x32 and executes the SYSENTER instruction.
Obviously I can't see beyond the call gate in a user-mode debugger. I have a little experience debugging kernel-mode drivers, so I loaded a copy of XP SP3 in a VMware window and used VirtualKD to connect the pipe to WinDbg (which I happen to be running inside IDA). After connecting the kernel debugger, I copied my user-mode EXE program and PDB onto the virtual machine, but I'm at a loss as to how to set the initial breakpoint in my user-mode program properly. I don't want to intercept all calls to the equivalent ntdll.ZwCreateSection(), which I believe to be on the other side of the call gate. Ideally, I'd like to break into the user-mode code and step through that call gate now that I'm using a kernel debugger, but I don't know what the first steps are.
I've done some googling and I've come close by setting a "ntsd -d" value in
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\myprocess.exe
This causes a break in the kernel debugger when I start my process, but I can't seem to set any breakpoints after the .breakin command I need to issue to IDA to get to the WinDbg prompt. I've been following this guide, where I locate my process with !process, switch to its context, and reload the symbols, but I'm having problems setting the breakpoint in my process or advancing past the initial breakpoint set by "ntsd -d". After getting the message that the breakpoint could not be resolved and a deferred breakpoint was added, I cannot seem to advance "into" the process without clearing the breakpoints, if that makes any sense. Here's the stack of where I seem to be when I hit that initial break:
ChildEBP RetAddr
b2b55ccc 8060e302 nt!RtlpBreakWithStatusInstruction
b2b55d44 8053d638 nt!NtSystemDebugControl+0x128
b2b55d44 7c90e4f4 nt!KiFastCallEntry+0xf8
0007b270 7c90de3c ntdll!KiFastSystemCallRet
0007b274 6d5f5ca6 ntdll!ZwSystemDebugControl+0xc
0007bd48 6d5f6102 dbgeng!DotCommand+0xd0d
0007de8c 6d5f7077 dbgeng!ProcessCommands+0x318
0007dec4 6d5bec6c dbgeng!ProcessCommandsAndCatch+0x1a
0007eedc 6d5bed4d dbgeng!Execute+0x113
0007ef0c 010052ce dbgeng!DebugClient::Execute+0x63
0007ff3c 010069fb ntsd!MainLoop+0x1ec
0007ff44 01006b31 ntsd!main+0x10e
0007ffc0 7c817067 ntsd!mainCRTStartup+0x125
0007fff0 00000000 kernel32!BaseProcessStart+0x23
To be honest, I'm not sure my PDB is being loaded, but I suspect it's probably not my immediate problem; my Modules pane is only showing kernel driver modules, not user-mode modules. When I was doing driver debugging in the past, I could see my driver image in this pane and whether or not the symbols had loaded, so I'm not sure what to expect for a user-mode image.
Without the image, I can't really expect the debugger to resolve any breakpoints.
I realize I may be going about this completely wrong but I'm not having any luck searching for how to do user-mode/kernel-mode hybrid debugging. Is there anyone out there that could point me in the right direction so I can step into this kernel mode function from a specific user-mode process? Or, at least set a proper kernel-mode breakpoint so it is only triggered as a result of my particular user-mode process?
UPDATE:
I loaded my module (happens to be named runlist.exe) in a user-mode debugger on the debugged OS (I happened to use OllyDbg). Once I was paused at a user-mode breakpoint only a couple instructions from SYSENTER, I suspended the OS using the kernel debugger. I then set the process context. The WinDbg command window contents were as follows:
WINDBG>!process 0 0 runlist.exe
PROCESS 820645a8 SessionId: 0 Cid: 01b4 Peb: 7ffd7000 ParentCid: 02b0
DirBase: 089c02e0 ObjectTable: e1671bb0 HandleCount: 8.
Image: runlist.exe
WINDBG>.process /i /r /p 820645a8
You need to continue execution (press 'g' <enter>) for the context
to be switched. When the debugger breaks in again, you will be in
the new process context.
WINDBG>g
This command cannot be passed to the WinDbg plugin directly, please use IDA Debugger menu to achieve the same result.
Break instruction exception - code 80000003 (first chance)
WINDBG>.reload /user
Loading User Symbols
....
Caching 'Modules'... ok
WINDBG>lmu
start end module name
00400000 00405000 runlist C (no symbols)
7c340000 7c396000 MSVCR71 (private pdb symbols) g:\symcache\msvcr71.pdb\630C79175C1942C099C9BC4ED019C6092\msvcr71.pdb
7c800000 7c8f6000 kernel32 (pdb symbols) e:\windows\symbols\dll\kernel32.pdb
7c900000 7c9af000 ntdll (pdb symbols) e:\windows\symbols\dll\ntdll.pdb
WINDBG>bp 0x7c90d16a
WINDBG>bl
0 e 7c90d16a 0001 (0001) ntdll!ZwCreateSection+0xa
Although I couldn't get my process's symbols to load with ".reload" (the PDB is in the same directory; I might need to copy it to my symbols dir), the breakpoint I care about is in ntdll anyway, so I set it on the address 0x7C90D16A, which the debugger recognized as being within ntdll.ZwCreateSection(). Oddly to me, in user-mode code this address resolves to ntdll.NtCreateSection(), but either way, that breakpoint was only 2 instructions from where I had my user-mode break. When I resumed the machine, my intention was to "run" the debugged process in user mode and have it trigger the kernel-mode breakpoint 2 instructions away. The kernel breakpoint was never hit, and the app resumed past this point. I can, however, set a breakpoint on ntdll!ZwCreateSection(), but then when resuming the OS the breakpoint is repeatedly hit by other processes, preventing me from getting back to the user-mode debugger so I can "run" to that location only within my own process.
UPDATE:
Merging the tips provided by @conio, the following steps worked for me:
1> after attaching kernel debugger and booting target OS, suspend the OS and apply some configuration options:
!gflag +ksl //allow sxe to report user-mode module load events under kernel debugger
sxe ld myproc.exe //cause kernel debugger break upon process load
.sympath+ <path> //path to HOST machine's user-mode app's symbols
2> run debugger to resume target OS
3> on the target, run the EXE we want to debug
4> the kernel debugger should break; now enter the following commands to switch to the user-mode context:
!process 0 0 myproc.exe //get address of EProcess structure (first number on 1st line after "PROCESS")
.process /i /r /p <eprocess*> //set kernel debugger to process context
g //continue execution to allow the context switch; debugger will break after switch complete
.reload /user //reload user symbols
lmu //ensure you have symbols although not really necessary in my particular case
5> now since I already know what happens in the user-mode side of ntdll.NtCreateSection(), I just went ahead and set a breakpoint for the kernel mode side of that function, but specifying that I want the breakpoint to occur only within the context of my process. This way, the breakpoint is not triggered OS wide:
bu /p <eprocess*> nt!NtCreateSection //set breakpoint in kernel side of function
g //run to break
6> if all goes as planned, the breakpoint will wake up the debugger on the kernel mode side of NtCreateSection(). I appreciate all the responses and tips!
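To trigger the breakpoint on demand, any call that reaches NtCreateSection will do. Below is a hypothetical stand-alone trigger, assuming Python is available on the target; ctypes mirrors the ZwCreateSection prototype quoted further down this thread, and python.exe would take the place of myproc.exe in the recipe above:

# Hypothetical trigger: create a small anonymous pagefile-backed section.
# Run inside the traced process context so the /p-scoped breakpoint fires.
import ctypes
from ctypes import wintypes

ntdll = ctypes.WinDLL("ntdll")

SECTION_ALL_ACCESS = 0x000F001F
PAGE_READWRITE = 0x04
SEC_COMMIT = 0x08000000

handle = wintypes.HANDLE()
max_size = ctypes.c_longlong(0x1000)  # one page

status = ntdll.NtCreateSection(
    ctypes.byref(handle),    # _Out_    PHANDLE        SectionHandle
    SECTION_ALL_ACCESS,      # _In_     ACCESS_MASK    DesiredAccess
    None,                    # _In_opt_ POBJECT_ATTRIBUTES (anonymous)
    ctypes.byref(max_size),  # _In_opt_ PLARGE_INTEGER MaximumSize
    PAGE_READWRITE,          # _In_     ULONG          SectionPageProtection
    SEC_COMMIT,              # _In_     ULONG          AllocationAttributes
    None,                    # _In_opt_ HANDLE         FileHandle (pagefile)
)
print(f"NTSTATUS=0x{status & 0xFFFFFFFF:08x}, handle={handle.value}")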
There are two ways to combine user-mode debugging with kernel-mode debugging, and you're mixing the two up.
The way you tried is to use the kernel-mode debugger to debug kernel-mode code, use the user-mode debugger (ntsd) to debug user-mode code, and control the user-mode debugger running on the target machine from the kernel debugger. That's what the -d flag to ntsd does. This method is described in the Controlling the User-Mode Debugger from the Kernel Debugger page and its subpages on MSDN.
What this does (more or less) is redirect ntsd input and output to the kernel debugger. The Modules pane - like the rest of the windows in WinDbg - belongs to the kernel debugger. Your only interaction with the user-mode debugger is through the tunnel the kernel debugger creates, and you can access it only through the command window. This is documented in the documentation for the -d flag:
-d
        Passes control of this debugger to the kernel debugger. If you are debugging CSRSS, this control redirection always is active, even if -d is not specified. (This option cannot be used during remote debugging -- use -ddefer instead.) See Controlling the User-Mode Debugger from the Kernel Debugger for details. This option cannot be used in conjunction with either the -ddefer option or the -noio option.
        Note  If you use WinDbg as the kernel debugger, many of the familiar features of WinDbg are not available in this scenario. For example, you cannot use the Locals window, the Disassembly window, or the Call Stack window, and you cannot step through source code. This is because WinDbg is only acting as a viewer for the debugger (NTSD or CDB) running on the target computer.
The second way, which is the one used in the guide you linked, is to use the kernel debugger to debug both kernel-mode code and user-mode code. No user-mode debugger. No ntsd. You said you followed the guide, but in fact you didn't; if you had, there wouldn't be any ntsd.
I suggest you start with this method, and use the user-mode debugger only if you find out you need it (because you want to use a user-mode extension, for example).
In order for the kernel debugger to work well with user-mode modules, you have to enable the "Enable loading of kernel debugger symbols" GlobalFlag. Use !gflag +ksl to do that.
Once you do that, break on the loading of your process using sxe ld:runlist, set the breakpoint (possibly with the /p option) and debug whatever it is that you want.
Just do that instead of all the ntsd mess.
Use ntsd -d and start debugging the executable on the target over a kd connection; you can then use kd as a user-mode debugger as well as a kernel debugger. Read the docs several times: it is not easy the first time, but after several tries you should get the hang of it. Read about .breakin etc.
How to break on the entry point of a program when debug in kernel mode with windbg?
Edited to add a demo of using ntsd -d.
Setup:
1) a VM running Windows XP SP3, with WinDbg version 6.12 installed in it
2) _NT_SYMBOL_PATH in the VM is set to z:\
3) z:\ is a mapped network drive that points to e:\symbols on the host
4) the host is running Windows 7 SP2
5) host WinDbg is 10.0.10586
Starting an application in the VM under ntsd and redirecting it to kd:
I opened a command prompt in the VM, navigated to the WinDbg installation directory, and issued ntsd -s -d calc (-s disables lazy symbol loading).
0:000> version
version
Windows XP Version 2600 (Service Pack 3) UP Free x86 compatible
Product: WinNt, suite: SingleUserTS
kernel32.dll version: 5.1.2600.5512 (xpsp.080413-2111)
Machine Name:
Debug session time: Thu Mar 16 16:44:29.222 2017
System Uptime: 0 days 0:10:12.941
Process Uptime: 0 days 0:01:40.980
Kernel time: 0 days 0:00:01.632
User time: 0 days 0:00:00.040
Live user mode: <Local>
Microsoft (R) Windows Debugger Version 6.12.0002.633 X86
Copyright (c) Microsoft Corporation. All rights reserved.
command line: 'ntsd -s -d calc' Debugger Process 0xA8
dbgeng: image 6.12.0002.633, built Tue Feb 02 01:38:31 2010
[path C:\Documents and Settings\admin\Desktop\Debugging Tools for Windows (x86)\dbgeng.dll]
WinDbg breaks on the system breakpoint, and the debug prompt becomes Input:\>
lm shows the symbols were loaded from z:\
CommandLine: calc
Symbol search path is: z:\
Executable search path is:
ModLoad: 01000000 0101f000 calc.exe
xxxxx
ntdll!DbgBreakPoint:
7c90120e cc int 3
0:000> lm
lm
start end module name
01000000 0101f000 calc (pdb symbols) z:\calc.pdb\3B7D84101\calc.pdb
77c10000 77c68000 msvcrt (export symbols) C:\WINDOWS\system32\msvcrt.dll
Executing until the AddressOfEntryPoint:
0:000> g #$exentry
g #$exentry
calc!WinMainCRTStartup:
01012475 6a70 push 70h
Setting a breakpoint in user mode and its kernel-mode counterpart at once:
0:000> bp ntdll!ZwCreateSection <--- user mode bp notice prompt 0:000
bp ntdll!ZwCreateSection
0:000> .breakin <<---- transferring to kd mode
.breakin
Break instruction exception - code 80000003 (first chance)
nt!RtlpBreakWithStatusInstruction:
804e3592 cc int 3
kd> !process 0 0 calc.exe <<----- looking for our process of interest
Failed to get VAD root
PROCESS ffae2020 SessionId: 0 Cid: 0410 Peb: 7ffde000 ParentCid: 00a8
DirBase: 04d87000 ObjectTable: e1bd5238 HandleCount: 26.
Image: calc.exe
kd> bp /p ffae2020 nt!NtCreateSection << setting a kernel-mode bp on the
counterpart, scoped to our process of interest; notice the prompt kd>
kd> g <<<---- return to user mode after setting a breakpoint
0:000> g <<<<<--------- executing in user mode
g
Now the calc process is running in user mode in the VM.
Click Help -> About (this will trigger a LoadLibrary, which needs a section, so we will break on our user-mode bp in the kernel debugger).
Breakpoint 0 hit
eax=00000000 ebx=00000000 ecx=00000001 edx=ffffffff esi=0007f368 edi=00000000
eip=7c90d160 esp=0007f22c ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!NtCreateSection:
7c90d160 b832000000 mov eax,32h
Now we can merrily trace around. Use t to trace (not p or g or any other execution command), so that you follow SYSENTER into the kernel:
0:000> t
t
eax=00000032 ebx=00000000 ecx=00000001 edx=ffffffff esi=0007f368 edi=00000000
eip=7c90d165 esp=0007f22c ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!ZwCreateSection+0x5:
7c90d165 ba0003fe7f mov edx,offset SharedUserData!SystemCallStub (7ffe0300)
0:000>
eax=00000032 ebx=00000000 ecx=00000001 edx=7ffe0300 esi=0007f368 edi=00000000
eip=7c90d16a esp=0007f22c ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!ZwCreateSection+0xa:
7c90d16a ff12 call dword ptr [edx] ds:0023:7ffe0300={ntdll!KiFastSystemCall (7c90e4f0)}
0:000>
eax=00000032 ebx=00000000 ecx=00000001 edx=7ffe0300 esi=0007f368 edi=00000000
eip=7c90e4f0 esp=0007f228 ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!KiFastSystemCall:
7c90e4f0 8bd4 mov edx,esp
0:000>
eax=00000032 ebx=00000000 ecx=00000001 edx=0007f228 esi=0007f368 edi=00000000
eip=7c90e4f2 esp=0007f228 ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!KiFastSystemCall+0x2:
7c90e4f2 0f34 sysenter
0:000>
Breakpoint 1 hit
nt!NtCreateSection:
805652b3 6a2c push 2Ch
When at the kernel-mode bp, .reload and look at the stack trace.
The second stack trace below is the same as the first, but with corrected symbols for
Shell32.dll. (The VM doesn't have internet access, so symbol download fails the first time,
so I drag-dropped that specific DLL out of the VM and fetched its symbols on the host using windbg -z shell32.dll and .reload; since the host's downstream store is network-mapped in the VM, the second trace properly loads the PDB and gives a correct stack trace without warnings.)
kd> kb
# ChildEBP RetAddr Args to Child
00 f8bb1d40 804de7ec 0007f368 0000000f 00000000 nt!NtCreateSection
01 f8bb1d40 7c90e4f4 0007f368 0000000f 00000000 nt!KiFastCallEntry+0xf8
02 0007f224 7c90d16c 7c91c993 0007f368 0000000f ntdll!KiFastSystemCallRet
03 0007f228 7c91c993 0007f368 0000000f 00000000 ntdll!NtCreateSection+0xc
04 0007f2a8 7c91c64a 0007f340 00000790 0007f300 ntdll!LdrpCreateDllSection+0x92
05 0007f388 7c91624a 000add00 0007f414 0007f93c ntdll!LdrpMapDll+0x28f
06 0007f648 7c9164b3 00000000 000add00 0007f93c ntdll!LdrpLoadDll+0x1e9
07 0007f8f0 7c801bbd 000add00 0007f93c 0007f91c ntdll!LdrLoadDll+0x230
08 0007f958 7c801d72 7ffddc00 00000000 00000000 kernel32!LoadLibraryExW+0x18e
09 0007f96c 7ca625a3 7ca625ac 00000000 00000000 kernel32!LoadLibraryExA+0x1f
WARNING: Stack unwind information not available. Following frames may be wrong.
0a 0007f990 010057b8 000700ac 000a7c84 00000000 SHELL32!SHCreateQueryCancelAutoPlayMoniker+0x2062d
0b 0007fbc4 010041ac 0000012e 00000111 01006118 calc!MenuFunctions+0x15d
0c 0007fcb4 01004329 0000012e 00000111 01006118 calc!RealProcessCommands+0x1b61
0d 0007fcdc 01006521 0000012e 0007fd6c 01006118 calc!ProcessCommands+0x2d
0e 0007fd04 7e418734 000700ac 00000111 0000012e calc!CalcWndProc+0x409
0f 0007fd30 7e418816 01006118 000700ac 00000111 USER32!InternalCallWinProc+0x28
10 0007fd98 7e4189cd 00000000 01006118 000700ac USER32!UserCallWinProcCheckWow+0x150
11 0007fdf8 7e418a10 0007fee8 00000000 0007ff1c USER32!DispatchMessageWorker+0x306
12 0007fe08 010021a7 0007fee8 7c80b731 000a1ee4 USER32!DispatchMessageW+0xf
13 0007ff1c 010125e9 000a7738 00000055 000a7738 calc!WinMain+0x256
14 0007ffc0 7c817067 00000000 00000000 7ffde000 calc!WinMainCRTStartup+0x174
15 0007fff0 00000000 01012475 00000000 78746341 kernel32!BaseProcessStart+0x23
Stack trace without warnings:
Breakpoint 0 hit
nt!NtCreateSection:
805652b3 6a2c push 2Ch
kd> kb
# ChildEBP RetAddr Args to Child
00 f8aa0d40 804de7ec 0007f368 0000000f 00000000 nt!NtCreateSection
01 f8aa0d40 7c90e4f4 0007f368 0000000f 00000000 nt!KiFastCallEntry+0xf8
02 0007f224 7c90d16c 7c91c993 0007f368 0000000f ntdll!KiFastSystemCallRet
03 0007f228 7c91c993 0007f368 0000000f 00000000 ntdll!NtCreateSection+0xc
04 0007f2a8 7c91c64a 0007f340 00000790 0007f300 ntdll!LdrpCreateDllSection+0x92
05 0007f388 7c91624a 000add00 0007f414 0007f93c ntdll!LdrpMapDll+0x28f
06 0007f648 7c9164b3 00000000 000add00 0007f93c ntdll!LdrpLoadDll+0x1e9
07 0007f8f0 7c801bbd 000add00 0007f93c 0007f91c ntdll!LdrLoadDll+0x230
08 0007f958 7c801d72 7ffdfc00 00000000 00000000 kernel32!LoadLibraryExW+0x18e
09 0007f96c 7ca625a3 7ca625ac 00000000 00000000 kernel32!LoadLibraryExA+0x1f
0a 0007f97c 7ca62e8e 003800dd 000a7c84 00000000 SHELL32!GetXPSP1ResModuleHandle+0x16
0b 0007f990 010057b8 000900ac 000a7c84 00000000 SHELL32!ShellAboutW+0x1f
0c 0007fbc4 010041ac 0000012e 00000111 01006118 calc!MenuFunctions+0x15d
0d 0007fcb4 01004329 0000012e 00000111 01006118 calc!RealProcessCommands+0x1b61
0e 0007fcdc 01006521 0000012e 0007fd6c 01006118 calc!ProcessCommands+0x2d
0f 0007fd04 7e418734 000900ac 00000111 0000012e calc!CalcWndProc+0x409
10 0007fd30 7e418816 01006118 000900ac 00000111 USER32!InternalCallWinProc+0x28
11 0007fd98 7e4189cd 00000000 01006118 000900ac USER32!UserCallWinProcCheckWow+0x150
12 0007fdf8 7e418a10 0007fee8 00000000 0007ff1c USER32!DispatchMessageWorker+0x306
13 0007fe08 010021a7 0007fee8 7c80b731 000a1ee4 USER32!DispatchMessageW+0xf
14 0007ff1c 010125e9 000a7738 00000055 000a7738 calc!WinMain+0x256
15 0007ffc0 7c817067 00000000 00000000 7ffda000 calc!WinMainCRTStartup+0x174
16 0007fff0 00000000 01012475 00000000 78746341 kernel32!BaseProcessStart+0x23
Dump the arguments to NtCreateSection:
kd> dds #esp l8
f8bb1d44 804de7ec nt!KiFastCallEntry+0xf8
f8bb1d48 0007f368
f8bb1d4c 0000000f
f8bb1d50 00000000
f8bb1d54 00000000
f8bb1d58 00000010
f8bb1d5c 01000000 calc!_imp__RegOpenKeyExA <PERF> (calc+0x0)
f8bb1d60 00000790
We know the seventh argument is a HANDLE, according to the prototype of the DDI:
NTSTATUS ZwCreateSection(
_Out_ PHANDLE SectionHandle,
_In_ ACCESS_MASK DesiredAccess,
_In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
_In_opt_ PLARGE_INTEGER MaximumSize,
_In_ ULONG SectionPageProtection,
_In_ ULONG AllocationAttributes,
_In_opt_ HANDLE FileHandle
);
kd> !handle 790
Failed to get VAD root
PROCESS ffae2020 SessionId: 0 Cid: 0410 Peb: 7ffde000 ParentCid: 00a8
DirBase: 04d87000 ObjectTable: e1bd5238 HandleCount: 29.
Image: calc.exe
Handle table at e1bd5238 with 29 entries in use
0790: Object: 8124b028 GrantedAccess: 00100020 Entry: e1032f20
Object: 8124b028 Type: (8127b900) File
ObjectHeader: 8124b010 (old version)
HandleCount: 1 PointerCount: 1
Directory Object: 00000000 Name: \WINDOWS\system32\xpsp1res.dll {HarddiskVolume1}
Return from kernel mode back to user mode and inspect the new section handle:
kd> g
eax=00000000 ebx=00000000 ecx=00000001 edx=ffffffff esi=0007f368 edi=00000000
eip=7c90d16c esp=0007f22c ebp=0007f2a8 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!ZwCreateSection+0xc:
7c90d16c c21c00 ret 1Ch
Checking the returned HANDLE value in user mode:
0:000> dd 7f368 l1
dd 7f368 l1
0007f368 0000078c
0:000> !handle 78c
!handle 78c
Handle 78c
Type Section
0:000> !handle 78c f
!handle 78c f
Handle 78c
Type Section
Attributes 0
GrantedAccess 0xf:
None
Query,MapWrite,MapRead,MapExecute
HandleCount 2
PointerCount 3
Name <none>
Object Specific Information
Section base address 0
Section attributes 0x1800000
Section max size 0x2f000
0:000>
If not satisfied, we can revert to kd, set the process context, and check the returned handle in kernel mode:
kd> !handle 78c f
Failed to get VAD root
PROCESS ffae2020 SessionId: 0 Cid: 0410 Peb: 7ffde000 ParentCid: 00a8
DirBase: 04d87000 ObjectTable: e1bd5238 HandleCount: 30.
Image: calc.exe
Handle table at e1bd5238 with 30 entries in use
078c: Object: e1088f30 GrantedAccess: 0000000f Entry: e1032f18
Object: e1088f30 Type: (8128b900) Section
ObjectHeader: e1088f18 (old version)
HandleCount: 1 PointerCount: 1
Now, if you continue execution, you can see the ModLoad output for the loaded library in WinDbg and the About dialog in the VM :)
kd> g
0:000> g
g
ModLoad: 10000000 1002f000 C:\WINDOWS\system32\xpsp1res.dll

Detect actual charset encoding in UTF

I need a good tool to detect the encoding of strings using some kind of mapping or heuristic method.
For example String: áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì
Expected: сохранив много приложений Java, можно занять всю доступную память
The encoding is ISO8859-5. When I try to detect it with the libraries below, the result is "UTF-8". Obviously the string was saved as UTF-8, but is there any heuristic way, using character mappings, to analyse the characters and match them with the correct encoding?
The usual encoding-detection libraries I tried:
- enca (aptitude install enca)
- chardet (aptitude install chardet)
- uchardet (aptitude install uchardet)
- http://tika.apache.org/
- http://npmjs.com/package/detect-encoding
- libencode-detect-perl
- http://www-archive.mozilla.org/projects/intl/UniversalCharsetDetection.html
- http://jchardet.sourceforge.net/
- http://grepcode.com/snapshot/repo1.maven.org/maven2/com.googlecode.juniversalchardet/juniversalchardet/1.0.3/
- http://lxr.mozilla.org/seamonkey/source/extensions/universalchardet/src/
- http://userguide.icu-project.org/
- http://site.icu-project.org
You need to unwrap the UTF-8 encoding and then pass it to a character-encoding detection library.
If random 8-bit data is encoded into UTF-8 (assuming an identity mapping, i.e. a C4 byte is assumed to represent U+00C4, as is the case with ISO-8859-1 and its superset Windows 1252), you end up with something like
Source: 8F 0A 20 FE 65
Result: C2 8F 0A 20 C3 BE 65
(because the UTF-8 encoding of U+008F is C2 8F, and U+00FE is C3 BE). You need to revert this encoding in order to obtain the source string, so that you can then identify its character encoding.
In Python, something like
#!/usr/bin/env python3
import chardet

mystery = 'áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì'
print(chardet.detect(mystery.encode('cp1252')))
Result:
{'confidence': 0.99, 'encoding': 'ISO-8859-5'}
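To verify the identity-mapping claim byte for byte, here is a small sketch; it uses latin-1 rather than cp1252 as the identity mapping, because Python's cp1252 codec leaves a few bytes (81, 8D, 8F, 90, 9D) undefined:

# Round-trip check of the 8F 0A 20 FE 65 example above.
src = bytes([0x8F, 0x0A, 0x20, 0xFE, 0x65])
mangled = src.decode('latin-1').encode('utf-8')
print(mangled.hex(' '))  # c2 8f 0a 20 c3 be 65
assert mangled.decode('utf-8').encode('latin-1') == src  # fully reversible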
On the Unix command line,
vnix$ echo 'áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì' |
> iconv -t cp1252 | chardet
<stdin>: ISO-8859-5 (confidence: 0.99)
or iconv -t cp1252 file | chardet to decode a file and pass it to chardet.
(For this to work successfully at the command line, you need to have your environment properly set up for transparent Unicode handling. I am assuming that your shell, your terminal, and your locale are adequately configured. Try a recent Ubuntu Live CD or something if your regular environment is stuck in the 20th century.)
In the general case, you cannot know that the incorrectly applied encoding is CP 1252 but in practice, I guess it's going to be correct (as in, yield correct results for this scenario) most of the time. In the worst case, you would have to loop over all available legacy 8-bit encodings and try them all, then look at the one(s) with the highest confidence rating from chardet. Then, the example above will be more complex, too -- the mapping from legacy 8-bit data to UTF-8 will no longer be a simple identity mapping, but rather involve a translation table as well (for example, a byte F5 might correspond arbitrarily to U+0092 or whatever).
(Incidentally, iconv -l spits out a long list of aliases, so you will get a lot of fundamentally identical results if you use that as your input. But here is a quick ad-hoc attempt at a brute-force loop over the candidate encodings.
#!/bin/sh
iconv -l |
grep -F -v -e UTF -e EUC -e 2022 -e ISO646 -e GB2312 -e 5601 |
while read enc; do
    echo 'áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì' |
    iconv -f utf-8 -t "${enc%//}" 2>/dev/null |
    chardet | sed "s%^[^:]*%${enc%//}%"
done |
grep -Fwive ascii -e utf -e euc -e 2022 -e None |
sort -k4rn
The output still contains a lot of chaff, but once you remove that, the verdict is straightforward.
It makes no sense to try any multi-byte encodings such as UTF-16, ISO-2022, GB2312, EUC_KR etc in this scenario. If you convert a string into one of these successfully, then the result will most definitely be in that encoding. This is outside the scope of the problem outlined above: a string converted from an 8-bit encoding into UTF-8 using the wrong translation table.
The ones which returned ascii definitely did something wrong; most of them will have received an empty input, because iconv failed with an error. In a Python script, error handling would be more straightforward.)
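For instance, a minimal Python 3 sketch of that brute-force loop (the candidate list is illustrative, not exhaustive):

#!/usr/bin/env python3
# Undo the assumed wrong decoding with each candidate codec, then let chardet
# rate the resulting bytes. Codecs that cannot represent the string are skipped.
import chardet

mystery = 'áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì'
candidates = ['latin-1', 'cp1252', 'cp1250', 'cp1251', 'cp1254', 'latin-2']

results = []
for codec in candidates:
    try:
        raw = mystery.encode(codec)  # revert the assumed mis-decoding
    except UnicodeEncodeError:
        continue                     # codec cannot represent the string
    guess = chardet.detect(raw)
    results.append((guess['confidence'], guess['encoding'], codec))

for confidence, encoding, codec in sorted(results, key=lambda r: r[0], reverse=True):
    print(f'{codec:10} -> {encoding} (confidence {confidence:.2f})')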
The string
сохранив много приложений Java, можно занять всю доступную память
is encoded in ISO8859-5 as bytes
E1 DE E5 E0 D0 DD D8 D2 20 DC DD DE D3 DE 20 DF E0 D8 DB DE D6 D5 DD D8 D9 20 4A 61 76 61 2C 20 DC DE D6 DD DE 20 D7 D0 DD EF E2 EC 20 D2 E1 EE 20 D4 DE E1 E2 E3 DF DD E3 EE 20 DF D0 DC EF E2 EC
The string
áÞåàÐÝØÒ ÜÝÞÓÞ ßàØÛÞÖÕÝØÙ Java, ÜÞÖÝÞ ×ÐÝïâì Òáî ÔÞáâãßÝãî ßÐÜïâì
is encoded in ISO-8859-1 as bytes
E1 DE E5 E0 D0 DD D8 D2 20 DC DD DE D3 DE 20 DF E0 D8 DB DE D6 D5 DD D8 D9 20 4A 61 76 61 2C 20 DC DE D6 DD DE 20 D7 D0 DD EF E2 EC 20 D2 E1 EE 20 D4 DE E1 E2 E3 DF DD E3 EE 20 DF D0 DC EF E2 EC
Look familiar? They are the same bytes, just interpreted differently by different charsets.
Any tool that would look at these bytes would not be able to tell you the charset automatically, as they are perfectly valid bytes in both charsets. You would have to tell the tool which charset to use when interpreting the bytes.
Any tool that tells you this particular byte sequence is encoded as UTF-8 is wrong. These are NOT valid UTF-8 bytes.
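You can see both interpretations side by side with a quick Python sketch:

# First word of the sample, as raw bytes (taken from the dumps above).
data = bytes.fromhex('E1 DE E5 E0 D0 DD D8 D2')
print(data.decode('iso8859-5'))  # сохранив
print(data.decode('iso8859-1'))  # áÞåàÐÝØÒ
data.decode('utf-8')             # raises UnicodeDecodeError: not valid UTF-8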

Why does perldoc evaluate 'Münster' as 'Muenster'

I have a simple POD text file:
$ cat test.pod
=encoding UTF-8
Münster
It is encoded in UTF-8, as per this literal hex dump of the file:
00000000 3d 65 6e 63 6f 64 69 6e 67 20 55 54 46 2d 38 0a |=encoding UTF-8.|
00000010 0a 4d c3 bc 6e 73 74 65 72 0a |.M..nster.|
0000001a
The "ü" is being encoded as the two bytes C3 and BC.
But when I run perldoc on the file it is turning my lovely formatted UTF-8 characters into ASCII.
What's more, it is correctly handling the German language convention of representing "ü" as "ue".
$ perldoc test.pod | cat
TEST(1) User Contributed Perl Documentation TEST(1)
Muenster
perl v5.16.3 2014-06-10 TEST(1)
Why is it doing this?
Is there an additional declaration I can put into my file to stop it from happening?
After additional investigation with App::perlbrew, I've found that the difference comes from the particular version of Pod::Perldoc:
perl         Pod::Perldoc  output
perl-5.10.1  3.14_04       Muenster
perl-5.12.5  3.15_02       Muenster
perl-5.14.4  3.15_04       Muenster
perl-5.16.2  3.17          Münster
perl-5.16.3  3.19          Muenster
perl-5.16.3  3.17          Münster
perl-5.17.3  3.17          Münster
perl-5.18.0  3.19          Muenster
perl-5.18.1  3.23          Münster
However, I would still like, if possible, a way to make Pod::Perldoc 3.14, 3.15, and 3.19 behave "correctly".
I found this RT ticket: http://rt.cpan.org/Public/Bug/Display.html?id=39000
This "bug" seems to have been introduced with Perl 5.10, and it was perhaps solved in later versions.
Also see: How can I use Unicode characters in Perl POD-derived man pages? and incorrect behaviour of perldoc with UTF-8 texts.
You should add the latest available version of Pod::Perldoc as a dependency.

windbg conflicting information

I have WinDBG 6.12.0002.633 x86 and I'm using it to view a post-mortem kdmp from a Windows Mobile 6 ARMV4I application.
When I go to analyze the callstack, I get a lot of unknowns. In the analysis, I can see in the *FAULTING_IP* section that the fault is in the tcpstk module (for which I also have symbols). But in the *STACK_TEXT* section, the tcpstk addresses appear as bare addresses, with no symbols.
Also, in the *MODULE_NAME* section, I get another unknown, even though it just said the faulting module was tcpstk.
The result of the !analyze -v command is:
1:128:armce> !analyze -v
***snip!***
FAULTING_IP:
tcpstk!_DerefIF+38 [\private\winceos\comm\tcpipw\ip\iproute.c # 1032]
01b0d6f0 ???????? ???
***snip!***
IP_ON_HEAP: 07b00090
The fault address in not in any loaded module, please check your build's rebase
log at <releasedir>\bin\build_logs\timebuild\ntrebase.log for module which may
contain the address if it were loaded.
FRAME_ONE_INVALID: 1
STACK_TEXT:
761efa6c 07b00090 : 7b858453 00000003 00000000 00000000 : 0x7b0d6f0
761efa7c 07b0020c : 7b858453 506f010a 00000000 00000000 : 0x7b00090
761efacc 78012d38 : 7b858453 506f010a 00000000 00000000 : 0x7b0020c
761efaf4 78013cdc module_78010000!AdapterBindingManager::NetUp+0xb4 [bar.cpp # 268]
761efb34 78014b78 module_78010000!AdapterBindingManager::EnterState+0x5e4 [bar.cpp # 1327]
761efda4 78015c08 module_78010000!AdapterBindingManager::ProcessEvent+0x8e4 [bar.cpp # 1298]
761efdd8 03f668dc module_78010000!MediaSense+0x25c [foo.cpp # 673]
761efe94 00000000 coredll_3f49000!ThreadBaseFunc+0x98 [\private\winceos\coreos\core\dll\apis.c # 633]
MODULE_NAME: Unknown_Module
IMAGE_NAME: Unknown_Image
DEBUG_FLR_IMAGE_TIMESTAMP: 0
STACK_COMMAND: ~128s ; kb
FAILURE_BUCKET_ID: INVALID_POINTER_WRITE_c0000005_Unknown_Image!Unknown
If I switch to the kp command, I can suddenly see that part of the callstack:
1:128:armce> kp
Child-SP RetAddr Call Site
761efa6c 01b0d6e0 tcpstk!_DerefIF(struct Interface * IF = 0x7b858453)+0x38 [\private\winceos\comm\tcpipw\ip\iproute.c # 1032]
761efa6c 00000000 tcpstk!_DerefIF(struct Interface * IF = 0x7b858453)+0x28 [\private\winceos\comm\tcpipw\ip\iproute.c # 1026]
Why isn't the !analyze -v command able to show the fully decoded callstack? Why does it show so many unknowns?
I think that WinDbg cannot debug ARM; I have not seen any documentation that states it is capable of debugging ARM, only x86 and x64 applications.
There is a WinDbg provided in the ARM toolkit that is the windowed version of armsd, which is not related to the Microsoft WinDbg.