Display callstack without method names - windbg

In WinDbg, I can get the callstack using the k command. For DLLs without symbols, it displays an incorrect method name and a large offset, e.g.
0018f9f0 77641148 syncSourceDll_x86!CreateTimerSyncBridge+0xc76a
Since I don't have symbols, I have to pass this information to the developer of the DLL. I don't know who will work on the bug or how much debugging knowledge they have, and I want to avoid the developer thinking the problem is in the CreateTimerSyncBridge() method.
Is there a way to get the callstack without method names, just with offsets?
At the moment I'm using the following workaround:
0:000> ? syncSourceDll_x86!CreateTimerSyncBridge+0xc76a
Evaluate expression: 1834469050 = 6d57c6ba
0:000> ? syncSourceDll_x86
Evaluate expression: 1834287104 = 6d550000
0:000> ? 6d57c6ba-6d550000
Evaluate expression: 181946 = 0002c6ba
So I can modify the callstack manually to
0018f9f0 77641148 syncSourceDll_x86!+0x2c6ba
But that's really hard to do for a lot of frames in a lot of threads.
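If it has to be done for many frames in many threads, the subtraction can be scripted outside the debugger. Below is a rough Python sketch (my own helper, not WinDbg functionality) that rebases a saved k stack against a saved lm module list. It relies on the fact that the call-site address of each frame is the return address printed on the previous frame, with the top frame using the current instruction pointer; the parsing patterns are illustrative and may need adjusting to your exact output.

import re

def parse_lm(lm_text):
    # 'lm' lines look like: '6d550000 6d5a0000   syncSourceDll_x86   (no symbols)'
    mods = []
    for line in lm_text.splitlines():
        m = re.match(r'([0-9a-f`]+)\s+([0-9a-f`]+)\s+(\S+)', line.strip(), re.I)
        if m:
            start = int(m.group(1).replace('`', ''), 16)
            end = int(m.group(2).replace('`', ''), 16)
            mods.append((start, end, m.group(3)))
    return mods

def rebase(k_text, mods, start_ip):
    # The symbol on each 'k' line resolves the call-site address of that
    # frame, which equals the previous frame's return address; the top
    # frame uses the current instruction pointer (start_ip).
    addr = start_ip
    for line in k_text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            print(line)
            continue
        child_sp, ret_addr = parts[:2]
        for start, end, name in mods:
            if start <= addr < end:
                print('%s %s %s+0x%x' % (child_sp, ret_addr, name, addr - start))
                break
        else:
            print(line)  # address not in any listed module; keep the line
        addr = int(ret_addr.replace('`', ''), 16)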

You can specify that symbols must match exactly by using stricter evaluation, either by starting WinDbg with the command line parameter -ses or by issuing the command:
.symopt +0x400
This option is off by default; if you wish to reset it, just remove the option again:
.symopt -0x400
See the MSDN docs: https://msdn.microsoft.com/en-us/library/windows/hardware/ff558827(v=vs.85).aspx#symopt_exact_symbols

Related

How to pinpoint where in the program a crash happened using .dmp and WinDbg?

I have a huge application (made in PowerBuilder) that crashes every once in a while, so it is hard to reproduce this error. We have it set up so that when a crash like this occurs, we receive a .dmp file.
I used WinDbg to analyze my .dmp file with the command !analyze -v. From this I can deduce that the error that occurred was an Access Violation C0000005. Based on the [0] and [1] parameters, it attempted to dereference a null pointer.
WinDbg also showed me STACK_TEXT consisting of around 30 lines, but I am not sure how to read it. From what I have seen I need to use some sort of symbols.
First line of my STACK_TEXT is this:
00000000`00efca10 00000000`75d7fa46 : 00000000`10df1ae0 00000000`0dd62828 00000000`04970000 00000000`10e00388 : pbvm!ob_get_runtime_class+0xad
From this, my goal is to analyze this file to figure out where exactly in the program this error happened or which function it was in. Is this something I will be able to find after further analyzing the stack trace?
How can I pinpoint where in the program a crash happened using .dmp and WinDbg so I can fix my code?
If you analyze a crash dump with !analyze -v, the lines after STACK_TEXT are the stack trace. The output is equivalent to kb, given that you have set the correct thread and context.
The output of kb is
Child EBP
Return address
First 4 values on the stack
Symbol
The backticks ` tell you that you are running in 64 bit and they split the 64 bit values in the middle.
On 32 bit, the first 4 parameters on the stack were often equivalent to the first 4 parameters to the function, depending on the calling convention.
On 64 bit, the stack is not as relevant any more, because with the 64 bit calling convention the first four parameters are passed via registers. Therefore you can probably ignore those values.
The interesting part is the symbol like pbvm!ob_get_runtime_class+0xad.
In front of ! is the module name, typically a DLL or EXE name. Look for something that you built. After the ! and before the + is a method name. After the + is the offset in bytes from the beginning of the function.
As long as you don't have functions with thousands of lines of code, that number should be small, like < 0x200. If the number is larger than that, it typically means that you don't have correct symbols. In that case, the method name is no longer reliable, since it's probably just the last known (the last exported) method name and a faaaar way from there, so don't trust it.
In case of pbvm!ob_get_runtime_class+0xad, pbvm is the DLL name, ob_get_runtime_class is the method name and +0xad is the offset within the method where the instruction pointer is.
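The splitting is mechanical, so if you have to scan a long STACK_TEXT, a few lines of Python can do it for you. This is just a sketch; the 0x200 threshold is the rule of thumb from above, not an official limit:

import re

def split_symbol(sym, max_offset=0x200):
    # 'pbvm!ob_get_runtime_class+0xad' -> module, function, offset
    m = re.match(r'([^!]+)!([^+]+)\+0x([0-9a-fA-F]+)', sym)
    if not m:
        return None
    module, func, offset = m.group(1), m.group(2), int(m.group(3), 16)
    suspect = offset > max_offset  # large offset: symbols are probably wrong
    return module, func, offset, suspect

print(split_symbol('pbvm!ob_get_runtime_class+0xad'))
# ('pbvm', 'ob_get_runtime_class', 173, False)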
To me (not knowing anything about PowerBuilder), PBVM sounds like the PowerBuilder virtual machine. So that's not your code; it's code compiled by Sybase. You'd need to look further down the call stack to find the culprit code in your DLL.
After reading Wikipedia, it seems that PowerBuilder does not necessarily compile to native code, but to intermediate P-Code instead. In this case you're probably out of luck, since your code is never really on the call stack and you need a special debugger or a WinDbg extension (which might not exist, like for Java). Run it with the -pbdebug command line switch or compile it to native code and let it crash again.

Can the MATLAB editor show the file from which text is displayed? [duplicate]

In MATLAB, how do you tell where in the code a variable is getting output?
I have about 10K lines of MATLAB code with about 4 people working on it. Somewhere, someone has dumped a variable in a MATLAB script in the typical way:
foo
Unfortunately, I do not know what variable is getting output. And the output is cluttering up other, more important output.
Any ideas?
p.s. Anyone ever try overwriting System.out? Since MATLAB and Java integration is so tight, would that work? A trick I've used in Java when faced with this problem is to replace System.out with my own version.
Ooh, I hate this too. I wish Matlab had a "dbstop if display" to stop on exactly this.
The mlint traversal from weiyin is a good idea. Mlint can't see dynamic code, though, such as arguments to eval() or string-valued figure handle callbacks. I've run into output like this from callbacks, where update_table() returns something under some conditions.
uicontrol('Style','pushbutton', 'Callback','update_table')
You can "duck-punch" a method in to built-in types to give you a hook for dbstop. In a directory on your Matlab path, create a new directory named "#double", and make a #double/display.m file like this.
function display(varargin)
builtin('display', varargin{:});
Then you can do
dbstop in double/display at 2
and run your code. Now you'll be dropped into the debugger whenever display is implicitly called by the omitted semicolon, including from dynamic code. Doing it for @double seems to cover char and cells as well. If it's a different type being displayed, you may have to experiment.
You could probably override the built-in disp() the same way. I think this would be analogous to a custom replacement for Java's System.out stream.
Needless to say, adding methods to built-in types is nonstandard, unsupported, very error-prone, and something to be very wary of outside a debugging session.
This is a typical pattern that mlint will help you find. Look at the right-hand side of the editor for the orange markers; they will help you find not only this issue, but many, many more. Notice also that mlint highlights the offending variable name.
If you have a line such as:
foo = 2
and there is no ";" on the end, then the output will be dumped to the screen with the variable name appearing first:
foo =
2
In this case, you should search the file for the string "foo =" and find the line missing a ";".
If you are seeing output with no variable name appearing, then the output is probably being dumped to the screen using either the DISP or FPRINTF function. Searching the file for "disp" or "fprintf" should help you find where the data is being displayed.
If you are seeing output with the variable name "ans" appearing, this is a case when a computation is being done, not being put in a variable, and is missing a ';' at the end of the line, such as:
size(foo)
In general, this is a bad practice for displaying what's going on in the code, since (as you have found out) it can be hard to find where these have been placed in a large piece of code. In this case, the easiest way to find the offending line is to use MLINT, as other answers have suggested.
I like the idea of "dbstop if display"; however, this is not a dbstop option that I know of.
If all else fails, there is still hope. Mlint is a good idea, but if there are many thousands of lines and many functions, then you may never find the offender. Worse, if this code has been sloppily written, there will be zillions of mlint flags that appear. How will you narrow it down?
A solution is to display your way there. I would overload the display function. Only temporarily, but this will work. If the output is being dumped to the command line as
ans =
stuff
or as
foo =
stuff
Then it has been written out with display. If it is coming out as just
stuff
then disp is the culprit. Why does it matter? Overload the offender. Create a new directory called @double inside some directory that is on your MATLAB search path (assuming the output is a double variable; if it is a char, then you will need an @char directory). Do NOT put the @double directory itself on the MATLAB search path, just put it in some directory that is on your path.
Inside this directory, put a new m-file called disp.m or display.m, depending upon your determination of what has done the command line output. The contents of the m-file will be a call to the function builtin, which will allow you to then call the builtin version of disp or display on the input.
Now, set a debugging point inside the new function. Every time output is generated to the screen, this function will be called. If there are multiple events, you may need to use the debugger to allow processing to proceed until the offender has been trapped. Eventually, this process will trap the offending line. Remember, you are in the debugger! Use the debugger to determine which function called disp, and where. You can step out of disp or display, or just look at the contents of dbstack to see what has happened.
When all is done and the problem repaired, delete this extra directory, and the disp/display function you put in it.
You could run mlint as a function and interpret the results.
>> I = mlint('filename','-struct');
>> isErrorMessage = arrayfun(@(S)strcmp(S.message,...
'Terminate statement with semicolon to suppress output (in functions).'),I);
>> I(isErrorMessage).line
This will only find missing semicolons in that single file. So this would have to be run on a list of files (functions) that are called from some main function.
If you wanted to find calls to disp() or fprintf(), you would need to read in the text of the file and use regular expressions to find the calls.
Note: If you are using a script instead of a function you will need to change the above message to read: 'Terminate statement with semicolon to suppress output (in scripts).'
Andrew Janke's overloading is a very useful tip.
The only other thing is that instead of using dbstop, I find the following works better, for the simple reason that putting a stop in display.m will cause execution to pause every time display.m is called, even if nothing is written.
This way, the stop will only be triggered when display is called to write non-empty output, and you won't have to step through a potentially very large number of useless display calls.
function display(varargin)
builtin('display', varargin{:});
% drop into the debugger only when something non-empty was printed
if ~isempty(varargin{1})
    keyboard
end
A foolproof way of locating such things is to iteratively step through the code in the debugger observing the output. This would proceed as follows:
Add a break point at the first line of the highest level script/function which produces the undesired output. Run the function/script.
Step over the lines (not stepping in) until you see the undesired output.
When you find the line/function which produces the output, either fix it, if it's in this file, or open the subfunction/script which is producing the output. Remove the break point from the higher level function, and put a break point in the first line of the lower-level function. Repeat from step 1 until the line producing the output is located.
Although a pain, you will find the line relatively quickly this way unless you have huge functions/scripts, which is bad practice anyway. If the scripts are like this, you could use a sort of partitioning approach to locate the line: put a break point at the start, then one halfway through, note which half of the function produces the output, then halve again, and so on until the line is located.
I had this problem with much smaller code and it's a bugger, so even though the OP found their solution, I'll post a small cheat I learned.
1) In the Matlab command prompt, turn on 'more'.
more on
2) Resize the prompt-y/terminal-y part of the window to a mere line of text in height.
3) Run the code. It will stop wherever it needed to print, as there isn't the space to print it (more is blocking on a [space] or [down] press).
4) Press [ctrl]-[C] to kill your program at the spot where it couldn't print.
5) Return your prompt-y area to normal size. Starting at the top of the trace, click on the clickable bits in the red text. These are your potential culprits. (Of course, you may need to have pressed [down], etc., to pass parts where the code was actually intended to print things.)
You'll need to traverse all your m-files (probably using a recursive function, or unix('find -type f -iname *.m')). Call mlint on each filename:
r = mlint(filename);
r will be a (possibly empty) structure with a message field. Look for the message that starts with "Terminate statement with semicolon to suppress output".

At which lines in my MATLAB code a variable is accessed?

I am defining a variable in the beginning of my source code in MATLAB. Now I would like to know at which lines this variable effects something. In other words, I would like to see all lines in which that variable is read out. This wish does not only include all accesses in the current function, but also possible accesses in sub-functions that use this variable as an input argument. In this way, I can see in a quick way where my change of this variable takes any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice but a command line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 required that this tool should track the effect of the variable inside the functions/subfunctions. I think this is:
very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.
in 90% of the cases, the assumption that the output of a function is influenced by the input is true. And the input and the output are part of the same statement (assigning the result of a function), so looking for where the variable is used as an argument should suffice to identify which output variables are affected.
There are perverse cases where functions will alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect will break assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all of them are modified.
Some other cases are inherently undecidable, because they don't depend on the computer's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char type when the user selects a file, and a double type when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), and the Find Files (for quick search where a variable is used) and depfun (for quick identification of function dependence) is way cheaper. But I would like to be wrong. :-)

Is there a way to add timing by default to ipython?

When using IPython, it's often convenient to see how long your commands take to run by using the %time magic function. When you use this often enough, you start to wish that you could just toggle a setting to get this metadata by default whenever you enter a query. Psql lets you do this with \timing. GHCi lets you do this with :set +s. Does IPython let you do this? And if not, why not?
The "ipythonic" way of timing code is using the %timeit or %%timeit magic function (respectively for online, and multi-line code).
These functions provide quite accurate results by running the code multiple times (the exact number is adaptive if not specified).
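For example, %timeit times a single statement, while %%timeit at the top of a cell times the whole cell (the code being timed here is just an illustration):

In [1]: %timeit sum(range(1000))

In [2]: %%timeit
   ...: data = list(range(1000))
   ...: total = sum(data)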
The global flag you are asking about does not exist in IPython. Furthermore, just adding %%timeit to all the cells will not work, because global variables are not modified when calling %%timeit. This "feature" is directly inherited from the timeit module.
For more info see this ipython issue.

Argument passing strategy - environment variables vs. command line

Most of the applications we developers write need to be externally parametrized at startup. We pass file paths, pipe names, TCP/IP addresses etc. So far I've been using the command line to pass these to the application being launched. I had to parse the command line in main and direct the arguments to where they're needed, which is of course a good design, but is hard to maintain for a large number of arguments. Recently I've decided to use the environment variables mechanism. They are global and accessible from anywhere, which is less elegant from an architectural point of view, but limits the amount of code.
These are my first (and possibly quite shallow) impressions on both strategies but I'd like to hear opinions of more experienced developers -- What are the ups and downs of using environment variables and command line arguments to pass arguments to a process? I'd like to take into account the following matters:
design quality (flexibility/maintainability),
memory constraints,
solution portability.
Remarks:
Ad. 1. This is the main aspect I'm interested in.
Ad. 2. This is a bit pragmatic. I know of some limits on Windows which are currently generous (over 32 kB for both the command line and the environment block). I guess this is not an issue though, since you should just use a file to pass tons of arguments if you need to.
Ad. 3. I know almost nothing of Unix, so I'm not sure whether both strategies are similarly usable there as on Windows. Elaborate on this if you please.
1) I would recommend avoiding environmental variables as much as possible.
Pros of environmental variables
easy to use because they're visible from anywhere. If lots of independent programs need a piece of information, this approach is a whole lot more convenient.
Cons of environmental variables
hard to use correctly because they're visible (delete-able, set-able) from anywhere. If I install a new program that relies on environmental variables, are they going to stomp on my existing ones? Did I inadvertently screw up my environmental variables when I was monkeying around yesterday?
My opinion
use command-line arguments for those arguments which are most likely to be different for each individual invocation of the program (e.g. n for a program which calculates n!)
use config files for arguments which a user might reasonably want to change, but not very often (e.g. display size when the window pops up)
use environmental variables sparingly -- preferably only for arguments which are expected not to change (e.g. the location of the Python interpreter)
your point They are global and accessible from anywhere, which is less elegant from architectural point of view, but limits the amount of code reminds me of justifications for the use of global variables ;)
My scars from experiencing first-hand the horrors of environmental variable overuse
two programs we need at work, which can't run on the same computer at the same time due to environmental clashes
multiple versions of programs with the same name but different bugs -- brought an entire workshop to its knees for hours because the location of the program was pulled from the environment, and was (silently, subtly) wrong.
2) Limits
If I were pushing the limits of either what the command line can hold, or what the environment can handle, I would refactor immediately.
I've used JSON in the past for a command-line application which needed a lot of parameters. It was very convenient to be able to use dictionaries and lists, along with strings and numbers. The application only took a couple of command line args, one of which was the location of the JSON file.
Advantages of this approach
didn't have to write a lot of (painful) code to interact with a CLI library -- it can be a pain to get many of the common libraries to enforce complicated constraints (by 'complicated' I mean more complex than checking for a specific key or alternation between a set of keys)
don't have to worry about the CLI libraries requirements for order of arguments -- just use a JSON object!
easy to represent complicated data (answering What won't fit into command line parameters?) such as lists
easy to use the data from other applications -- both to create and to parse programmatically
easy to accommodate future extensions
Note: I want to distinguish this from the .config-file approach -- this is not for storing user configuration. Maybe I should call this the 'command-line parameter-file' approach, because I use it for a program that needs lots of values that don't fit well on the command line.
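A minimal sketch of the idea, where the only real command line argument is the path to the JSON file (the parameter names are made up):

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('param_file', help='path to a JSON parameter file')
args = parser.parse_args()

with open(args.param_file) as f:
    params = json.load(f)  # dicts, lists, strings and numbers come for free

print(params['input_files'])     # e.g. a list, awkward to pass on the CLI
print(params.get('retries', 3))  # easy to extend later with a default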
3) Solution portability: I don't know a whole lot about the differences between Mac, PC, and Linux with regard to environmental variables and command line arguments, but I can tell you:
all three have support for environmental variables
they all support command line arguments
Yes, I know -- it wasn't very helpful. I'm sorry. But the key point is that you can expect a reasonable solution to be portable, although you would definitely want to verify this for your programs (for example, are command line args case sensitive on any platforms? on all platforms? I don't know).
One last point:
As Tomasz mentioned, it shouldn't matter to most of the application where the parameters came from.
You should abstract reading parameters using the Strategy pattern. Create an abstraction named ConfigurationSource having a readConfig(key) -> value method (or returning some Configuration object/structure) with the following implementations:
CommandLineConfigurationSource
EnvironmentVariableConfigurationSource
WindowsFileConfigurationSource - loading from a configuration file from C:/Document and settings...
WindowsRegistryConfigurationSource
NetworkConfigurationSource
UnixFileConfigurationSource - loading from a configuration file from /home/user/...
DefaultConfigurationSource - defaults
...
You can also use the Chain of Responsibility pattern to chain sources in various configurations, like: if a command line argument is not supplied, try the environment variable, and if everything else fails, return defaults.
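A minimal Python sketch of both ideas; the class names follow the list above, everything else is illustrative:

import os
import sys

class ConfigurationSource:
    def read_config(self, key):  # return None when the key is absent
        raise NotImplementedError

class CommandLineConfigurationSource(ConfigurationSource):
    def read_config(self, key):
        prefix = '--%s=' % key
        for arg in sys.argv[1:]:
            if arg.startswith(prefix):
                return arg[len(prefix):]
        return None

class EnvironmentVariableConfigurationSource(ConfigurationSource):
    def read_config(self, key):
        return os.environ.get(key.upper())

class DefaultConfigurationSource(ConfigurationSource):
    def __init__(self, defaults):
        self.defaults = defaults
    def read_config(self, key):
        return self.defaults.get(key)

class ChainedConfigurationSource(ConfigurationSource):
    def __init__(self, *sources):
        self.sources = sources
    def read_config(self, key):  # first source that knows the key wins
        for source in self.sources:
            value = source.read_config(key)
            if value is not None:
                return value
        return None

config = ChainedConfigurationSource(
    CommandLineConfigurationSource(),
    EnvironmentVariableConfigurationSource(),
    DefaultConfigurationSource({'log_level': 'info'}))
print(config.read_config('log_level'))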
Ad 1. This approach not only allows you to abstract reading configuration, but you can easily change the underlying mechanism without any effect on client code. You can also use several sources at once, falling back or gathering configuration from different sources.
Ad 2. Just choose whichever implementation is suitable. Of course some configuration entries won't fit for instance into command line arguments.
Ad 3. If some implementations aren't portable, have two, one silently ignored/skipped when not suitable for a given system.
I think this question has been answered rather well already, but I feel like it deserves a 2018 update. I feel like an unmentioned benefit of environment variables is that they generally require less boilerplate code to work with. This makes for cleaner, more readable code. However, a major disadvantage is that they remove a layer of isolation between different applications running on the same machine. I think this is where Docker really shines. My favorite design pattern is to exclusively use environment variables and run the application inside of a Docker container. This removes the isolation issue.
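For example (the image and variable names here are made up), each container sees only the variables it was started with, so two applications on the same machine no longer clash:
$ docker run -e LOG_LEVEL=debug myapp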
I generally agree with previous answers, but there is another important aspect: usability.
For example, in git you can create a repository with the .git directory outside of the working tree. To specify that, you can use the command line argument --git-dir or the environmental variable GIT_DIR.
Of course, if you change the current directory to another repository or inherit environmental variables in scripts, you can make mistakes. But if you need to type several git commands in a detached repository in one terminal session, this is extremely handy: you don't need to repeat the --git-dir argument.
Another example is GIT_AUTHOR_NAME. It seems that it doesn't even have a command line partner (however, git commit has an --author argument). GIT_AUTHOR_NAME overrides the user.name and author.name configuration settings.
In general, usage of command line or environmental arguments is equally simple on UNIX: one can use a command line argument
$ command --arg=myarg
or an environmental variable in one line:
$ ARG=myarg command
It is also easy to capture command line arguments in an alias:
alias cfg='git --git-dir=$HOME/.cfg/ --work-tree=$HOME' # for dotfiles
alias grep='grep --color=auto'
In general most arguments are passed through the command line. I agree with the previous answers that this is more functional and direct, and that environmental variables in scripts are like global variables in programs.
GNU libc says this:
The argv mechanism is typically used to pass command-line arguments specific to the particular program being invoked. The environment, on the other hand, keeps track of information that is shared by many programs, changes infrequently, and that is less frequently used.
Apart from what was said about dangers of environmental variables, there are good use cases of them. GNU make has a very flexible handling of environmental variables (and thus is very integrated with shell):
Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (-- and there is an option to change this behaviour) ...
Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things.
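For example, with a conventional makefile both of the following set the compiler switches, and per the quoted rules the second form overrides whatever is in the environment:
$ CFLAGS='-O2 -Wall' make
$ make CFLAGS='-O2 -Wall'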
Finally, I would stress that what matters most for a program is not the programmer, but the user experience. Maybe you included that in the design aspect, but internal and external design are pretty different entities.
And a few words about programming aspects. You didn't write what language you use, but let's imagine your tools allow you the best possible argument parsing. In Python I use argparse, which is very flexible and rich. To get the parsed arguments, one can use a command like
args = parser.parse_args()
args can be further split into parsed arguments (say args.my_option), but I can also pass them as a whole to my function. This solution is absolutely not "hard to maintain for a large number of arguments" (if your language allows that). Indeed, if you have many parameters and they are not used during argument parsing, pass them in a container to their final destination and avoid code duplication (which leads to inflexibility).
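A small sketch of that pattern (the option names are invented):

import argparse

def run(args):
    # final destination: receives the whole container, not ten parameters
    print(args.my_option, args.verbose)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--my-option', default='none')
    parser.add_argument('-v', '--verbose', action='count', default=0)  # -vvv
    args = parser.parse_args()
    run(args)

if __name__ == '__main__':
    main()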
And the very final comment is that it's much easier to parse environmental variables than command line arguments. An environmental variable is simply a pair, VARIABLE=value. Command line arguments can be much more complicated: they can be positional or keyword arguments, or subcommands (like git push). They can capture zero or several values (recall the command echo and flags like -vvv). See argparse for more examples.
And one more thing. Your worrying about memory is a bit disturbing. Don't write overgeneral programs. A library should be flexible, but a good program is useful without any arguments. If you need to pass a lot, this is probably data, not arguments. How to read data into a program is a much more general question with no single solution for all cases.