Valgrind (memcheck) is not displaying the leak/error summary on CentOS. Help?

I typed in: valgrind ./myprogramname --tools-memcheck --leak-check=yes
But the summary of the memory leaks and errors is not printed when the program ends. I am running CentOS 5.5 and have even upgraded to the latest version of Valgrind to try to get this to work. I have seen it print out a summary of problems before when I had the leak-check option turned on. Has anybody ever run into this issue?
And I have even set --leak-check=full, among other things. It is as if Valgrind is not seeing the options that I am setting.

It looks to me like you are placing the arguments for valgrind in a location where they will be interpreted as arguments to your program.
Try
valgrind --tool=memcheck --leak-check=yes ./myprogramname
and see if that works better.
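If that works, you can ask for a fuller report as well; for example (both options are documented in the Valgrind manual):
valgrind --tool=memcheck --leak-check=full --show-reachable=yes ./myprogramname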


Slow debugging using PyDev in Eclipse

I would appreciate it very much if you helped me with the following most annoying problem:
I'm using PyDev in Eclipse on my Ubuntu 14.04 machine, and every time I run my code in debug mode, it takes around 3-4 minutes to start.
My research revealed that each "import" statement takes a very long time to run (without the import statements, the problem vanishes).
Can anyone tell me how I can overcome this problem?
Thanks!
I'm attaching:
1) my import statements.
2) my file tree (the file I'm running is in the folder "Gil").
3) and the debug window (during these 3-4 minutes, Eclipse keeps adding more and more lines there that just say "light.py" (the file I'm running)).
I'm only guessing here, but from your output in PyDev it seems you're executing something with multiprocessing, or something else which creates Python subprocesses (which is why I think you're seeing a new light.py entry in the debugger every time).
Without looking at your code it's a bit hard to guess what's actually happening, but I can give you some suggestions here:
Make your imports lazier (see the first sketch below). If you're always executing a new process which has to re-execute all the imports, that can indeed add quite a bit of time; imports in Python are usually slow, and even more so with a debugger in place. Maybe do a profile in regular (non-debug) mode to find out what's actually going on; if your code is open source or you can afford the price, http://www.pyvmmonitor.com/ can probably help you quite a bit here. If you haven't profiled your code before, you probably have low-hanging fruit which can give you a nice speedup.
Use only programmatic breakpoints with the remote debugger (see http://pydev.org/manual_adv_remote_debugger.html and the second sketch below); this will make your code run at regular speed until it hits the programmatic breakpoint.
If none of those help, please add more details on your code (are you using Stackless, greenlets, threads, multiple processes, etc.?). Also, 3-4 minutes may or may not be much; without knowing the original (non-debug) time to get there, it's hard to know.
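For illustration, a minimal lazy-import sketch (the heavy module here is just an example):
def process_frames():
    # pay the import cost only when (and if) this function actually runs,
    # instead of at program start-up
    import numpy as np
    return np.zeros(10)
And a programmatic breakpoint for the remote debugger (the pydevd sources ship with PyDev; the path below is a placeholder for your Eclipse install):
import sys
sys.path.append('/path/to/eclipse/plugins/org.python.pydev/pysrc')  # placeholder path
import pydevd
pydevd.settrace('localhost', port=5678)  # execution pauses here once the remote debugger is listening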

Perl memory leak from match operator

While investigating a long-running Perl program for memory leaks, I tried to use Test::LeakTrace.
Looking at one of the leaks it reports, I can narrow the leaking code down to just:
/$?/
So running: perl -MTest::LeakTrace::Script -e'/$?/' prints:
leaked SCALAR(0x10d3d48) from -e line 1.
Why is this? Do I need to worry about it?
Update: I also tried Devel::LeakTrace::Fast; it does not complain about the same code.
Assuming you do have a leak, then this:
perl -e'/$?/ for 1..1E9'
should make your process grow in memory, which you can check with:
ps -o rss,vsz <PID>
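To watch it over time, a shell one-liner like this works too (replace <PID> as above):
while sleep 5; do ps -o rss= -p <PID>; done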
In my case it stays stable all the way; you should check it for your setup. It could be that the leak your module detects is some late destruction. You could write a note to the module's authors asking them to help you figure out its output, and you might help them improve it...
BTW, another thing confirming "no leak" for me is that with
perl -MTest::LeakTrace::Script -e'/$?/ for 1..1000'
I don't see multiple leaked scalars, just one.
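You can also do the count programmatically with Test::LeakTrace's leaked_count (a sketch; the iteration count is arbitrary):
perl -MTest::LeakTrace -e'print leaked_count { /$?/ for 1..1000 }, "\n"'
If the match operator leaked on every iteration, this would print a number far larger than 1.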

Need help debugging a minidump with WinDbg

I've read a lot of similar questions, but I can't seem to find an answer to exactly what my problem is.
I've got a set of minidumps from a 32-bit application that was running on 64-bit Windows 2008. The 32-bit Visual Studio on my 32-bit Vista Business machine wouldn't touch them at all, so I've been trying to open them in WinDbg.
I don't have the EXACT corresponding .pdb files (we only started saving them AFTER this particular release), but I have .pdbs built by the same machine with the same code. I also have access to the exact executable that created the minidumps.
I found a nifty little application called ChkMatch that can make .pdbs match an executable... the only difference (according to ChkMatch) was age, so I matched my newer .pdbs to the original executable.
However, when I load it in WinDbg, it still says that it is a "mismatched pdb"; then, since I had set .symopt+0x40, it tries to load them anyway. I then get the warning:
*** WARNING: Unable to verify checksum for myexe.exe
I ran !lmi myexe and saw that, indeed, the checksum of the executable was in fact zero. From poking around a bit, I've found that the executable should have been built with the /release flag to have a checksum. That's all well and good, but I can't exactly go back in time and rebuild it (if I did, though, I'd definitely save the original .pdbs :-P ).
Is there anything I can do here? It seems a little ridiculous that I can't make things match at least well enough to get a call stack.
You don't need the checksum to get a call stack; this warning can be safely ignored.
To get the stack you need to issue the stack command (any variant of k).
If the minidumps are any good (i.e. they describe an actual fault), you should first try the auto-analysis !analyze -v, which will get you started.
Come back when you have exhausted your expertise :o)
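For example, a typical first pass (the $$ parts are WinDbg comments):
!analyze -v   $$ automated fault analysis
k             $$ call stack for the current thread
~* k          $$ call stacks for every thread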
If you're working with minidumps then you have to set your image path (Ctrl+I, or the .exepath command) to point to a location containing the images referenced in the dump. The trouble with minidumps is that they don't contain any code or data from the executables on the target, so you have to supply those yourself.
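Putting both answers together, a session start might look like this (both paths are placeholders for your own):
.exepath+ C:\dumps\bin    $$ folder containing myexe.exe
.sympath+ C:\dumps\pdbs   $$ folder containing the rebuilt .pdbs
.reload /f                $$ reload modules using the new paths
!analyze -v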
-scott

Finding a Perl memory leak

SOLVED see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (i.e. when it is run via cron, not via the command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (using backticks: `sftp ...`), calls OpenSSL to verify the file, then SFTPs more if more files are needed, and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least by what I've written; maybe underlying things use them?).
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from the crontab of a non-root user just once a day, and that (running without root privileges) caused a special infinite-loop situation.
Sorry guys, I feel kinda stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led me to finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. e.g. in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);
foreach my $varname (qw(varname1 varname2))
{
    # look up the package variable by name (a symbolic reference,
    # so this needs "no strict 'refs'" when running under strict)
    no strict 'refs';
    print "size used for variable $varname: " . total_size(${$varname}) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if things suddenly get a lot better; I'd start with the use of any external libraries.
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it is hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions, instead of doing a bunch of stuff in one session?
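If that is the case, one fix is to batch everything into a single sftp session; a minimal sketch (the file names and host are placeholders):
# one batched sftp session instead of one backtick call per file
open my $batch, '>', '/tmp/sftp.batch' or die "open: $!";
print {$batch} "get file1.tar\nget file2.tar\n";
close $batch;
system('sftp', '-b', '/tmp/sftp.batch', 'user@host') == 0
    or die "sftp failed: $?";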

What is the cause for “panic: free from wrong pool during global destruction.” in Term::ReadLine::Gnu?

In https://rt.cpan.org/Ticket/Display.html?id=37194#txn-641389 I reopened a bug concerning a Perl crash in conjunction with the libreadline XS bindings. I attached the necessary debug information, but until now there has been no acknowledgement from the maintainer. I want this finally fixed; it's a major inconvenience to not have readline in Devel::REPL and the Perl debugger. My Perl-guts and C fu are nearly non-existent, so I can't do the usual thing and produce a patch on my own. So I would like to employ your help; more eyeballs make bugs shallow, and all that.
My questions to you:
Can you reproduce this crash despite -DPERL_USE_SAFE_PUTENV? If yes, let's compare and find the common factor.
Do you know what the cause is, and how would you go about finding it?
I have a debugging perl and know how to use gdb, but where do I have to set a breakpoint to observe the crash properly?
readline 6.1.000 works fine for me here with Perl 5.10.0 & 5.10.1 (on Mac OS X 10.4, 10.5 & 10.6).
Also OK for me is Perl 5.8.8 & 5.10.1 on RedHat Enterprise Linux 5.3 (this time with readline 5.1).
There seem to be a lot of bug fixes between readline 5.2 & 6.1, so it might be worth trying a newer (or older!) readline than your 5.2.
/I3az/
The problem was that my perl was never built with safe putenv. The option is not -DPERL_USE_SAFE_PUTENV, but -Accflags="-DPERL_USE_SAFE_PUTENV".
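That is, the define has to reach the C compiler when Perl itself is configured; the Configure invocation would look something like this (a sketch):
sh Configure -des -Accflags='-DPERL_USE_SAFE_PUTENV'
You can check whether an existing perl was built with it via: perl -V:ccflags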
Doc patches to combat the mistake:
https://rt.cpan.org/Ticket/Display.html?id=59592
https://rt.perl.org/rt3/Ticket/Display.html?id=76618