View rooted objects in WinDbg

I am trying to investigate what looks like a memory leak in .NET using WinDbg.
However, when I try to examine the output of !dumpheap -type, I see that many, if not all, of the objects listed are "Free" objects. I'd like to filter the list to see if there are any that are rooted (have references to them).
I tried the following script:
.foreach (t {!dumpheap -mt 0000000091ea94f0 -short}) { .if(!gcroot ${t}) { !mdt ${t} } }
However, it does not output anything. Is there a way to filter out the output of !dumpheap to show only rooted objects?

Free "objects"
.NET uses a heap manager to keep track of memory. This makes it possible to allocate objects smaller than 64 kB, where 64 kB is the minimum memory the OS provides.
So, .NET gets at least 64 kB and then splits that into smaller pieces. Those pieces which are unused can be understood as objects of type Free.
To get a better overview of Free objects, use !dumpheap -stat -type Free. Those Free objects don't have a root, because they are not actually objects.
But you can also see a whole lot of other objects, including the sum of their sizes. Those are likely rooted.
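If you want to quantify that, here is a minimal pykd sketch (see the PyKd section below); it is only an illustration and assumes the usual MT / Count / TotalSize / Class Name column layout of !dumpheap -stat:
# Rough sketch: sum up "Free" vs. other sizes from "!dumpheap -stat".
# Assumes the usual column layout: MT, Count, TotalSize, Class Name.
from pykd import dbgCommand

free_bytes = other_bytes = 0
for line in dbgCommand("!dumpheap -stat").splitlines():
    parts = line.split()
    if len(parts) >= 4 and parts[1].isdigit() and parts[2].isdigit():
        size = int(parts[2])
        if parts[3] == "Free":
            free_bytes += size
        else:
            other_bytes += size

print("Free: %d bytes, other objects: %d bytes" % (free_bytes, other_bytes))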
Rooted objects
Unfortunately, commands like !gcroot don't have a boolean return value, so you need to use some tricky stuff. The basic .foreach loop is already quite good.
To get a comparable return value, we'll use the root count number, which is 1 in the following case:
0:004> !gcroot 02701078
HandleTable:
001f11ec (strong handle)
-> 02701078 System.OutOfMemoryException
Found 1 unique roots (run '!GCRoot -all' to see all roots).
Since the number of roots can be 1, 2, 3, etc., it seems more reliable to check for the "0 roots" case than for any specific value. Let's start like this:
.shell -ci"!gcroot ${t}" find "Found 0"
This keeps only the single line "Found 0 unique roots ..." if it is present; otherwise there is no output at all.
Then let's reduce the output to just the number: skip the first word ("Found") using /pS 1, process one word, and skip the rest (actually a maximum of 99 words) using /ps 99:
.foreach /pS 1 /ps 99(word {.shell -ci"!gcroot ${t}" find "Found 0"}) {.echo ${word}}
This will leave us with 0 only.
Next, you can use $scmp() to compare the string:
.if ($scmp("${word}","0")==0) {.echo nothing} .else {.echo something}
The whole script (formatted for readability, remove the line breaks and indentation):
.foreach (t {!dumpheap -short -mt 70c240b4}) {
    .foreach /pS 1 /ps 99 (word {.shell -ci"!gcroot ${t}" find "Found 0"}) {
        .if ($scmp("${word}","0")==0) {
            .echo nothing
        } .else {
            .echo something
        }
    }
}
In your case, replace .echo something by !mdt ${t}.
PyKd
Since the above script is hard to understand and error-prone, you might want to try PyKD. Look for dbgCommand() to execute debugger commands and get the result back as a string. You can then operate on that string with ordinary Python code, which is much easier than using WinDbg's built-in functions.
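For illustration, a rough sketch (not a tested drop-in: it reuses the method table from your question, relies on the "Found N unique roots" line shown above, and calls !mdt as you intended) could look like this:
# rooted.py - sketch: print details of rooted objects of one method table.
# Assumes SOS is loaded and that !gcroot ends with a "Found N unique roots" line.
from pykd import dbgCommand

for line in dbgCommand("!dumpheap -mt 0000000091ea94f0 -short").splitlines():
    addr = line.strip()
    if not addr:
        continue
    if "Found 0 unique roots" not in dbgCommand("!gcroot " + addr):
        print(dbgCommand("!mdt " + addr))   # object has at least one root
Save it next to pykd.pyd, then .load pykd.pyd and run it with !py rooted.py (the file name is arbitrary).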

Related

Does Perl's Glob have a limitation?

I am running the following, expecting it to return strings of 5 characters:
while (glob '{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z}'x5) {
    print "$_\n";
}
but it returns only 4 characters:
anbc
anbd
anbe
anbf
anbg
...
However, when I reduce the number of characters in the list:
while (glob '{a,b,c,d,e,f,g,h,i,j,k,l,m}'x5) {
    print "$_\n";
}
it returns correctly:
aamid
aamie
aamif
aamig
aamih
...
Can someone please tell me what I am missing here? Is there a limit of some sort, or is there a way around this?
If it makes any difference, it returns the same result in both perl 5.26 and perl 5.28.
The glob first creates all possible file name expansions, so it will first generate the complete list from the shell-style glob/pattern it is given. Only then will it iterate over it, if used in scalar context. That's why it's so hard (impossible?) to escape the iterator without exhausting it; see this post.
In your first example that's 26^5 strings (11_881_376), each five chars long. So a list of ~12 million strings, with a (naive) total in excess of 56 MB ... plus the overhead for a scalar, which I think at minimum is 12 bytes or so. So on the order of 100 MB, at the very least, right there in one list.†
I am not aware of any formal limits on lengths of things in Perl (other than in regex) but glob does all that internally and there must be undocumented limits -- perhaps some buffers are overrun somewhere, internally? It is a bit excessive.
As for a way around this -- generate that list of 5-char strings iteratively, instead of letting glob roll its magic behind the scenes. Then it absolutely should not have a problem.
However, I find the whole thing a bit too big for comfort, even in that case. I'd really recommend writing an algorithm that generates and provides one list element at a time (an "iterator"), and working with that.
There are good libraries that can do that (and a lot more), such as Algorithm::Loops, recommended in a previous post on this matter (and in a comment), Algorithm::Combinatorics (same comment), or Set::CrossProduct from another answer here ...
Also note that, while this is a clever use of glob, the library is meant to work with files. Apart from misusing it in principle, I think that it will check each of the ~12 million names for a valid entry! (See this page.) That's a lot of unneeded disk work. (And if you were to use "globs" like * or ?, on some systems it returns a list containing only strings that correspond to actual files, so you'd quietly get different results.)
† I'm getting 56 bytes for the size of a 5-char scalar. While that is for a declared variable, which may take a little more than an anonymous scalar, in the test program with length-4 strings the actual total size is indeed a good order of magnitude larger than the naively computed one. So the real thing may well be on the order of 1 GB, in one operation.
Update: A simple test program that generates that list of 5-char long strings (using the same glob approach) ran for 15-ish minutes on a server-class machine and took 725 MB of memory.
It did produce the right number of actual 5-char long strings, seemingly correct, on this server.
Everything has some limitation.
Here's a pure Perl module that can do it for you iteratively. It doesn't generate the entire list at once and you start to get results immediately:
use v5.10;
use Set::CrossProduct;

my $set = Set::CrossProduct->new( [ ([ 'a'..'z' ]) x 5 ] );

while ( my $item = $set->get ) {
    say join '', @$item;
}

How to get thread IDs from a memory dump in WinDbg?

I'm trying to create a script that runs thread-specific commands to output information for each thread into a separate file. Is there a way to get a thread ID that I can use in a command similar to this: ~${threadId}e !clrstack?
Ultimately here is what I'm shooting for:
.foreach(tid {!threads})
{
    .logopen c:\temp\${tid}.txt;
    ~${tid}e !dumpstack;
    ~${tid}e !clrstack;
    ~${tid}e !dso;
    ~${tid}e kb 200;
    .logclose
}
Here is what I got so far:
.foreach /pS 1 /pS2 /pS3 /pS4 /pS5 /ps 3 (l {!runaway})
{
.printf "${l}\n";
}
The issue with this is that it gives me the value I'm looking for together with the OSID; it looks like <wantedID>:<OSID>.
How can I separate the part I need or is there an easier way to get thread ID from a memory dump?
I'm a bit confused about the use of .foreach together with ~e, because that's somewhat redundant (limiting ~e to a single thread inside a .foreach over all threads doesn't give you a nested loop, it just duplicates the thread iteration).
Is the command
~*e .logopen /t d:\debug\log.txt; !dumpstack; !clrstack; !dso; kb 200; .logclose
sufficient for you? The log file name will not have the thread ID.
Regarding your skip tokens of .foreach, I think you can only use /pS and /ps once. Your statement is equivalent to
.foreach /pS 5 /ps 3 (l {!runaway}) { ... }
If the thread ID really matters
It seems that the OS thread IDs are really important.
Since using .foreach directly on a command like !threads, ~ or !runaway doesn't seem flexible and reliable enough, I propose using .shell with find to get at least some consistency in the output.
I'll use !teb to get the thread ID, because it is separated by a space and therefore the output of .shell find can be used as input for .foreach.
The complete command I came up with:
~*e .foreach /pS 3 /ps 20 (tid {.shell -ci "!teb" find "ClientId"}) { .logopen d:\debug\logs\log${tid}.txt; !dumpstack; !clrstack; !dso; kb 200; .logclose}
Using pykd as an extension
Using PyKd - Python extension for WinDbg, the result can be achieved like this:
Create a file tid.py, put it next to the pykd.pyd extension and give it the following content:
from pykd import *
threads = getProcessThreads()
for t in threads:
    print(hex(ptrPtr(t+0x24))[2:-1])
getProcessThreads() gives you the addresses of the TEBs. At offset 0x24 you can find the thread ID. ptrPtr() reads a pointer-sized value from that address, hex() is self-explanatory, [2: removes the 0x prefix and :-1] removes the trailing L (Python 2 appends an L to long integers).
In WinDbg
.load pykd.pyd
!py tid.py; *** Gives one thread ID per line, nice for .foreach
.foreach (tid {!py tid.py}) { .logopen d:\debug\logs\log_${tid}.txt; ~~[${tid}]s; !dumpstack; !clrstack; !dso; kb 200; .logclose}

Windbg - How can I Dump Strings which match a given filter

One can dump all the strings using the following command:
!dumpheap -type System.String
How can I dump or print only those strings which start with or contain a specific string?
Example: I am only interested in viewing the strings which contain "/my/app/request".
Use sosex instead of sos for this. It has a !strings command which allows you to filter strings using the /m:<filter> option.
Use !sosex.strings. See !sosex.help for options to filter strings based on content and/or length.
Not sure if !dumpheap supports that. You can always use .logopen to redirect the output to a file and post-process that. For a more elegant (and thus more complicated) solution, you can also use .shell to redirect the command output to a shell process for parsing. Here's an example:
http://blogs.msdn.com/b/baleixo/archive/2008/09/06/using-shell-to-search-text.aspx
You can also see the .shell documentation for more details:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff565339(v=vs.85).aspx
If you really want to go without SOSEX, then try
.foreach (string {!dumpheap -short -type System.String}) { .foreach (search {s -u ${string}+c ${string}+c+2*poi(${string}+8) "mySearchTerm"}) { du /c80 ${string}+c }}
It uses
!dumpheap to get all Strings on .NET heap
.foreach to iterate over them
s to search for a substring
.foreach again to find out if s found something
some offset calculations to get the first character (+c) of the string and the string length (+8), multiplied by 2 to get bytes instead of characters. These offsets need to be adapted for 64-bit applications
The /c80 is just for nicer output. You could also use !do ${string} instead of du /c80 ${string}+c if you like the .NET details of the String.
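If you have pykd loaded anyway (see the related WinDbg questions above), roughly the same thing can be written in Python. This is only a sketch: it hard-codes the same 32-bit offsets as the one-liner above (length at +8, first character at +c) and inherits the same caveats:
# Sketch: print System.String objects containing a substring (32-bit offsets).
from pykd import dbgCommand, ptrDWord, loadWChars

needle = "/my/app/request"
for line in dbgCommand("!dumpheap -short -type System.String").splitlines():
    token = line.strip()
    if not token:
        continue
    addr = int(token, 16)
    text = loadWChars(addr + 0xc, ptrDWord(addr + 0x8))   # chars at +c, length at +8
    if needle in text:
        print("%08x %s" % (addr, text))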

Limiting the amount of information printed by Perl debugger

One of my pet peeves with debugging Perl code (in the command line debugger, perl -d) is the fact that mistakenly printing (via the x command) the contents of a huge data structure is guaranteed to freeze up your terminal for forever and a half while hundreds of pages of data are printed. Especially if that happens across a slowish network.
As such, I'd like to be able to limit the amount of data that x prints.
I see two approaches, and I'd be willing to try either if someone knows how to do it.
Limit the amount of data any single command in debugger prints.
Better yet, somehow replace the built-in x command with a custom Perl method (which would calculate the "size" of the data structure, and refuse to print its contents without confirmation).
I'm specifically asking "how to replace x with custom code" - building a Good Enough "is the data structure too big" Perl method is something I can likely do on my own without too much effort, although I see enough pitfalls to make writing the "perfect" one a fairly frustrating endeavour. Heck, merely doing Data::Dumper->Dump and taking the length of the string might do the trick :)
Please note that I'm perfectly well aware of how to manually avoid the issue by recursively examining layers of datastructure (e.g. print the ref, print the count of keys/array elements, etc...)... the whole point is I want to be able to avoid thoughtlessly typing x $huge_pile_of_data without thinking - or stumbling on a bug populating said huge pile of data into what should be a scalar.
The x command takes an optional argument for the maximum depth to display. That's not quite the same as limiting the amount of data to N pages, but it's definitely useful to prevent overload.
DB<1> %h = (a => { b => { c => 1 } } )
DB<2> x %h
0  'a'
1  HASH(0x1d5ff44)
   'b' => HASH(0x1d61424)
      'c' => 1
DB<3> x 2 %h
0  'a'
1  HASH(0x1d5ff44)
   'b' => HASH(0x1d61424)
You can specify the default depth to print via the o command, e.g.
DB<1> o dumpDepth=1
Add that to your .perldb file to apply it to all debugger sessions.
Otherwise, it looks like the x command invokes DB::dumpit() which is just a wrapper for dumpval.pl (or, more specifically, the main::dumpValue() sub it defines). You could modify/replace that script as you see fit. I'm not sure how you'd make it interactive, though.
The | command in the debugger pipes another command's output to your pager, e.g.
DB<1> |x %huge_datastructure

Limit !dumpheap (windbg) output to n objects

When using WinDbg and running the !dumpheap command to see the addresses of objects, how can you limit the output to a specific number of objects? The only way I found was using CTRL+BREAK, plus a command line option mentioned on a blog: http://dotnetdebug.net/2005/07/04/dumpheap-parameters-and-some-general-information-on-sos-help-system/
-l X - Prints out only X items from each heap instead of all the objects.
Apparently -l no longer exists in SOS.dll
What are you actually looking for? Before looking at individual objects, it's usual to narrow the area of interest.
The -stat switch shows a summary, per type, of the objects on the heap.
!DumpHeap [-stat] [-min <size>] [-max <size>] [-thinlock] [-mt <MethodTable address>] [-type <partial type name>] [start [end]]
The -stat option restricts the output to the statistical type summary.
The -min option ignores objects that are less than the size parameter, specified in bytes.
The -max option ignores objects that are larger than the size parameter, specified in bytes.
The -thinlock option reports ThinLocks. For more information, see the SyncBlk command.
The -mt option lists only those objects that correspond to the specified MethodTable structure.
The -type option lists only those objects whose type name is a substring match of the specified string.
The start parameter begins listing from the specified address. The end parameter stops listing at the specified address.
(Reference: the SOS !DumpHeap documentation.)
By which criteria would you like to limit the number of objects in the output?
The -l option just limits the output by line count. This is useless: if it shows only the first 10 objects, the object you're looking for may not even be listed.
If the output is too long for WinDbg's output window, use .logopen to dump the objects into a file and then review the file in a text editor.
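That said, if you really just want the first n lines of the output, a tiny pykd script (see the pykd examples in the questions above) can truncate it for you; MyType here is just a placeholder type name:
# Sketch: keep only the first N lines of "!dumpheap" output (MyType is a placeholder).
from pykd import dbgCommand

N = 20
for line in dbgCommand("!dumpheap -type MyType").splitlines()[:N]:
    print(line)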
If you have some idea of what your object looks like, you can perform a loop over all objects
.foreach ( obj { !dumpheap -short -type MyType} )
and then decide with .if whether or not each object matches your criteria.
As an example, I was looking for a needle in a haystack: a specific Hashtable in a program with more than 3000 Hashtables on the heap. The command I tried to use was
.foreach ( obj { !dumpheap -short -type Hashtable }) {.if (poi(poi(${obj}+1c)) > 100) {!do ${obj}} }
1C is the offset of the count member of the hashtable.
100 is the number of items the Hashtable was expected to have at least.
Unfortunately it didn't work for Hashtables right away, because !dumpheap -type also listed HashtableEnumerators, which somehow crashed the debugger.
To dump Hashtables only, run !dumpheap -stat, figure out the method table of Hashtable, and run the command with -mt <methodtable> instead of -type <classname>, which gives
.foreach ( obj { !dumpheap -short -mt <MT of Hashtable> }) {.if (poi(poi(${obj}+1c)) > 100) {!do ${obj}} }
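For completeness, the same filter is arguably easier to read and tweak as a pykd script. This sketch merely mirrors the command above; the 0x1c offset, the double dereference and the <MT of Hashtable> placeholder are all taken from it:
# Sketch: pykd version of the .foreach/.if filter above.
from pykd import dbgCommand, ptrPtr

mt = "<MT of Hashtable>"                      # placeholder: take it from !dumpheap -stat
for line in dbgCommand("!dumpheap -short -mt " + mt).splitlines():
    token = line.strip()
    if not token:
        continue
    addr = int(token, 16)
    try:
        count = ptrPtr(ptrPtr(addr + 0x1c))   # poi(poi(obj+1c)) from the command above
    except Exception:
        continue                              # skip unreadable objects
    if count > 0x100:                         # the "100" above, in WinDbg's default hex radix
        print(dbgCommand("!do %x" % addr))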