I have a crash dump file that I need to analyze with WinDbg to run some tests.
Due to restrictions I can't go into, my symbols folder may only contain the symbols needed to analyze this crash dump.
Is there a way to know the exact symbols needed by a dump? If it helps, I can first analyze this dump in another environment where all the symbols are available.
Thank you.
You can use !sym noisy to make WinDbg print out the symbols it needs and looks for.
If you set up a symbol path with a local cache, WinDbg will download into the local cache path only the symbols it needs.
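A minimal sketch of such a session, assuming C:\SymCache as a hypothetical local cache folder:
.sympath SRV*C:\SymCache*http://msdl.microsoft.com/download/symbols
!sym noisy
.reload /f
Afterwards, C:\SymCache contains only the PDBs that were actually requested for this dump; those are the files to copy to the restricted machine.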
If you load the dump on your machine and force it to load all the symbols, the lml command will show all loaded symbols along with where each module's symbols were loaded from; copy only those PDB files into your restricted target environment.
I'm not entirely sure this is possible. Analyzing a dump is a dynamic process, but you're looking for a static solution: it's not possible to know what symbols will be needed unless you already know what the problem is.
Even doing something as simple as saying "I will only provide symbols for the DLLs which have frames on the stack" is not enough. It's possible that memory corruption or a global variable from a DLL not on the stack could influence the behavior of the program, and leaving out symbols for that DLL could prevent diagnosis of the problem.
One approach, though, that will yield decent results is the following:
Load up the dump in the environment where all symbols are available.
Set the symbol path to that full symbol store.
Run "!analyze -v".
Dump the state of the modules at this point and include symbols for every DLL for which WinDbg loaded symbols.
You can also use the command:
lml
after running "!analyze -v" to display which symbols WinDbg loaded or attempted to load.
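Putting the steps together, a sketch of the whole pass (C:\FullSymbolStore is a hypothetical placeholder for the directory containing all symbols):
.sympath C:\FullSymbolStore
!analyze -v
lml
Every module for which lml reports loaded symbols is one whose PDB you should carry over to the restricted environment.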
I use WinDbg a fair bit to debug dumps that testers send me. Right now I'm experiencing very poor performance obtaining debugging symbols from Microsoft's symbol server. I'm not the only one, so I expect Microsoft are having server issues at the moment, but it's highlighted something about WinDbg's symbol caching which I haven't paid much attention to before.
Very often when I open a new dump it tries to download every PDB for all the Microsoft symbols. It caches them for the duration of my debugging session, and if I close WinDbg and open it again it often opens without having to download symbols. However, if I open a dump that I was using a few days earlier, it seems to have lost the symbols it downloaded previously. It seems to be purging old PDB files or some such.
I have my symbol path set up like this:
SRV*C:\SymCache*http://msdl.microsoft.com/download/symbols
So, question, is it possible to control how often WinDBG purges cached symbols? I'd be quite happy for it to fill a few GB with symbol files since disk is cheap...
No, symbols are not purged. I am not aware of any information that would tell WinDbg the age of symbols except the file timestamp, which is not reliable. (The pingme.txt file in the cache is just an empty file.)
As mentioned by @blabb in the comments, purging would be counter-productive, given that Microsoft offers downloadable symbol packages for offline use even for very old systems, e.g. Windows XP.
On the PC I got when I joined the company, the symbols from 2014 are still there.
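For what it's worth, the cache is just a directory tree that symsrv appends to and never trims: each PDB sits in a folder named by its GUID and age, e.g. (hypothetical path, GUID shortened):
C:\SymCache\ntdll.pdb\6192BFDB.../ntdll.pdb
Old files stay there until you delete them yourself.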
I have a memory-related problem in my application in a Solaris 9 environment, where the Tcl_DeleteInterp() function ends up calling a lot of free() and mutex_unlock() functions. To debug the problem I followed the steps below to compile Tcl on the Solaris server (with the TCL_MEM_DEBUG flag), but I still couldn't use the 'memory' command in my interpreter.
Ran the configure script on the server (./configure --prefix=<directory to install into> --enable-symbols=mem)
make clean all
make install (the Tcl libraries and the tclsh exe are copied to the path specified in step 1)
The compilation generated two libraries (libtcl8.4g.so and libtclstub8.4g.a); I copied libtcl8.4g.so as libtcl8.4.so to my app.
Copied the tcl8.4 directory as well.
I also copied tclsh8.4 to $PROVHOME/bin and created a soft link tclsh -> tclsh8.4.
From my application I linked the debug-symbol-enabled libraries at the place where I create the Tcl interpreter.
Initialized the Tcl interpreter using the Tcl_InitMemory() function (so that the memory command is registered in the supplied interpreter).
When I used the interpreter exe (tclsh) separately I could execute the memory command, but when I used the same exe from my application it doesn't work. Can someone help me understand the possible reason for this problem?
Also, how can I cross-verify that the libraries were compiled with the TCL_MEM_DEBUG flag?
Does the Tcl source tarball contain a Solaris directory where I have to build the libraries, or should I use the unix source directory for the Solaris platform as well?
Thanks
Are you using [mem] interactively (which does expansion of unambiguous short command names) and forgetting to use the full name ([memory]) in your scripts?
You're using Tcl embedded in your code? You need to call Tcl_InitMemory (passing in the handle to the interpreter where you want the memory command created) after creation of the interpreter and before you run user scripts, i.e., straight after the Tcl_CreateInterp gives you the handle (which should in turn come after the Tcl_FindExecutable call that initializes the shared parts of the library).
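A minimal sketch of that ordering in C (error handling trimmed; the script file name is a hypothetical placeholder):

#include <tcl.h>

int main(int argc, char *argv[])
{
    Tcl_FindExecutable(argv[0]);      /* initialize the shared parts of the library */
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_InitMemory(interp);           /* register [memory] in this interpreter; this only
                                         has an effect in a TCL_MEM_DEBUG build of Tcl */
    if (Tcl_Init(interp) != TCL_OK)
        return 1;
    Tcl_EvalFile(interp, "user_script.tcl");   /* user scripts run only after the above */
    Tcl_DeleteInterp(interp);
    Tcl_Finalize();
    return 0;
}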
You must also make sure that everything is built with that flag set, so that the correct memory allocation APIs are used both in Tcl and in your code where it integrates with Tcl, and you must make sure that you are linking against the debugging build. It's probably the linking that has gone wrong, but I've not done that level of development on Solaris for many years.
I think you'll find that “Getting a list of used libraries by a running process (unix)” is relevant to your problems.
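One way to check which libtcl your application actually resolves at run time (ldd is standard; pldd is the Solaris tool for a running process; the binary name and PID are placeholders):
ldd ./yourApp | grep libtcl      # libraries the binary will load at startup
pldd 1234                        # libraries mapped by the running process with PID 1234
If the application picks up a non-debug libtcl8.4.so from somewhere else on its library path, the [memory] command will be missing even though your rebuilt tclsh works.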
I'm having some issues with MATLAB compiled code on Mac. I've tested the same program on Windows and Linux as well, with no issues at all.
My program needs two folders to work properly, but it seems that Mac doesn't like them, because it can't see them. In contrast, Windows and Linux have no problem seeing and using those folders.
I just want to underline that I did, of course, put these folders in the deploytool package before creating the package.
Any idea?
Maybe I'm running the program improperly, setting the environment variables in the wrong way.
Edit:
This is the MATLAB error just after the program starts:
Warning: Name is nonexistent or not a directory: materials
But the materials folder is in my current directory, and I put it in the deploytool folder too, so why can't it see it? It seems like an addpath error, but why doesn't it appear on Linux and Windows?
Here are a few things to think about:
Have you set the permissions on the folders correctly? I would assume the permissions for OS X should be the same as you used on Linux, but perhaps you forgot to update them after you created the folders?
Is MATLAB running as the user you think it is? I don't know about MATLAB specifically, but it's possible it runs as a particular user depending on the environment you are in.
Is your error definitely that the folders are not found? Sometimes errors regarding disk I/O are vague or misleading, for example when they're due to permissions; see points 1 and 2.
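For points 1 and 2, a quick check from a terminal (the folder name is taken from the question; the chmod shown is just one possible fix):
ls -ld materials             # show the owner and permission bits of the folder
chmod -R a+rX materials      # grant everyone read access plus directory traversal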
I have compiled a MATLAB standalone exe which I can run on any computer that has the MATLAB Compiler Runtime installed.
However, starting the exe takes 20-30 seconds!
How can I measure the time accurately and, most important, how can I decrease it down to 1-2 seconds?
This is taken out of Yair Altman's blog:
A splash wrapper application can alleviate much of the pain of the slow startup of deployed (compiled) Matlab applications. A splash window solution can be found here. While such a splash wrapper is indeed useful, it may also be possible to achieve an actual speedup of the compiled app's startup using the MCR_CACHE_ROOT environment variable.
Normally, the MCR and the stand-alone executable are unpacked upon every startup into the user's temp dir and deleted when the user logs out. Apparently, when the MCR_CACHE_ROOT environment variable is set, these files are only unpacked once and kept for later reuse. If this report is indeed true, this could significantly speed up the startup time of a compiled application in subsequent invocations.
On Linux:
export MCR_CACHE_ROOT=/tmp/mcr_cache_root_$USER # local to host
mkdir -p $MCR_CACHE_ROOT
./myExecutable
On Windows:
REM set MCR_CACHE_ROOT=%TEMP%
set MCR_CACHE_ROOT=C:\Documents and Settings\Yair\Matlab Cache
myExecutable.exe
There are also ways to set this env variable permanently on Windows if needed...
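For instance, with the built-in setx command (the cache folder here is a hypothetical placeholder):
setx MCR_CACHE_ROOT "C:\MatlabCache"
Note that setx only affects processes started afterwards; the current console keeps its old environment.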
Setting MCR_CACHE_ROOT is especially important when running the executable from a network (NFS) location, since unpacking onto a network location can be quite slow. If the executable is run in parallel on different machines (for example, a computer cluster running a parallel program), this might even cause lock-ups when different nodes try to access the same network location. In both cases, the solution is to set MCR_CACHE_ROOT to a local folder (e.g., /tmp or %TEMP%). If you plan to run the executable again, then perhaps you should keep the extracted files for reuse rather than deleting them; otherwise, simply delete the temporary folder after the executable ends. In the following example, $RANDOM is a bash variable that expands to a random number:
export MCR_CACHE_ROOT=/tmp/mcr$RANDOM
./matlab_executable
rm -rf $MCR_CACHE_ROOT
Setting MCR_CACHE_ROOT can also be used to solve other performance bottlenecks in deployed applications, as explained in a MathWorks technical solution and a related article here.
In a related matter, a compiled Matlab executable may fail with a "Could not access the MCR component cache" error when Matlab cannot write to the MCR cache directory due to missing permissions. This can be avoided by setting MCR_CACHE_ROOT to a non-existent directory, or to a folder with global write permissions (/tmp or %TEMP% are usually such writable folders); see related posts here and here.
If you are using deploytool to compile your code, then under Project > Settings > Toolboxes on path, uncheck any toolboxes that aren't needed by your executable. I recently had this issue, and this step cut the executable file size in half and significantly reduced its start time.
0:025> !pe
Failed to load data access DLL, 0x80004005
Verify that 1) you have a recent build of the debugger (6.2.14 or newer)
2) the file mscordacwks.dll that matches your version of mscorwks.dll is
in the version directory
3) or, if you are debugging a dump file, verify that the file
mscordacwks_<arch>_<arch>_<version>.dll is on your symbol path.
4) you are debugging on the same architecture as the dump file.
For example, an IA64 dump file must be debugged on an IA64
machine.
You can also run the debugger command .cordll to control the debugger's
load of mscordacwks.dll. .cordll -ve -u -l will do a verbose reload.
If that succeeds, the SOS command should work on retry.
If you are debugging a minidump, you need to make sure that your executable
path is pointing to mscorwks.dll as well.
After running corflags.exe /32bit+ xxxx.exe so that it runs as a 32-bit process on a 64-bit Server 2003 machine, xxxx.exe usually crashes. I got this dump and, on the same machine, installed WinDbg (x86), but I can't use SOS. I googled this issue but couldn't find a working answer. I'm using the same machine, so why can't WinDbg find what it needs?
The problem is that the correct version of mscordacwks.dll can't be located. The DLL acts as an abstraction layer between the runtime and SOS, and thus it must correspond to the version of the runtime. There's an excellent write-up of the problem and its solution here: http://blogs.msdn.com/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx
Make sure to follow the advice on renaming the DLL very carefully, because if you get it wrong it doesn't work, and the error messages are not very helpful, IMO.
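Following the naming pattern from the error message above, for a 32-bit dump the steps look roughly like this (the CLR version 2.0.50727.3607 and the folders are hypothetical placeholders; take the real values from the machine that produced the dump):
copy C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscordacwks.dll C:\symbols\mscordacwks_x86_x86_2.0.50727.3607.dll
Then, inside WinDbg, add that folder to the symbol path and retry the verbose reload suggested by the error output:
.sympath+ C:\symbols
.cordll -ve -u -l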