Why is a memory leak not reported when perl is compiled with `DEBUG_LEAKING_SCALARS`? - perl

I compiled perl with DEBUG_LEAKING_SCALARS as described here.
CASE 1
I followed this DOC to test memory leak reporting:
env PERL_DESTRUCT_LEVEL=2 valgrind perl -e '@x; $x[0]=\@x'
==7216== Memcheck, a memory error detector
==7216== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==7216== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==7216== Command: perl -e @x;\ $x[0]=\\@x
==7216==
==7216==
==7216== HEAP SUMMARY:
==7216== in use at exit: 0 bytes in 0 blocks
==7216== total heap usage: 1,310 allocs, 1,310 frees, 171,397 bytes allocated
==7216==
==7216== All heap blocks were freed -- no leaks are possible
==7216==
==7216== For counts of detected and suppressed errors, rerun with: -v
==7216== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Nothing is reported.
CASE 2
I even tried this in my XS sub. Exactly:
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "XSUtils.h"
#include "ppport.h"
void
call_perl() {
    SV *sv;

    sv = sv_2mortal( newSVpv( "XS::Utils::hello", 0 ) );
    newSViv( 323 ); //<<<< SHOULD LEAK
    printf( "Hi 3\n" );

    ENTERSCOPE;
    CALLPERL( sv, G_DISCARD|G_NOARGS );
    LEAVESCOPE;
}
MODULE = XS::Utils    PACKAGE = XS::Utils

void
test()
    CODE:
        call_perl();
Link to the REPO
$ env PERL_DESTRUCT_LEVEL=2 valgrind perl -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
==7308== Memcheck, a memory error detector
==7308== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==7308== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==7308== Command: perl -Iblib/arch/ -Iblib/lib -MXS::Utils -e XS::Utils::test()
==7308==
Hi 3
Hello
==7308==
==7308== HEAP SUMMARY:
==7308== in use at exit: 1,502 bytes in 5 blocks
==7308== total heap usage: 12,876 allocs, 12,871 frees, 1,945,298 bytes allocated
==7308==
==7308== LEAK SUMMARY:
==7308== definitely lost: 0 bytes in 0 blocks
==7308== indirectly lost: 0 bytes in 0 blocks
==7308== possibly lost: 0 bytes in 0 blocks
==7308== still reachable: 1,502 bytes in 5 blocks
==7308== suppressed: 0 bytes in 0 blocks
==7308== Rerun with --leak-check=full to see details of leaked memory
==7308==
==7308== For counts of detected and suppressed errors, rerun with: -v
==7308== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Nothing is reported either.
CASE 3
I fixed the Devel::LeakTrace module (the FIX):
$ perl -MDevel::LeakTrace -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
Hi 3
Hello
Nothing is reported either.
CASE 4
Only Test::LeakTrace does its job:
$ perl -MTest::LeakTrace::Script=-verbose -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
Hi 3
Hello
leaked SCALAR(0x208e1c0) from -e line 1.
ALLOCATED at -e:1 by entersub (parent 0x0); serial 9642
SV = IV(0x208e1b0) at 0x208e1c0
  REFCNT = 1
  FLAGS = (IOK,pIOK)
  IV = 323
Why does the built-in perl tool report nothing about the leak?
What did I do wrong? How do I debug leaking memory with the DEBUG_LEAKING_SCALARS tool?

This is not really an answer, but here is what Dave Mitchell says:
The main purpose of DEBUG_LEAKING_SCALARS isn't to list leaked scalars
(!!)
It's to help in tracking down things generally related to leaked scalars
and refcount problems. its two main features are that it turns SV
allocation from being a macro to being a function so that you can easy
attach a breakpoint; and that it adds instrumentation to each SV showing
where it was allocated (as displayed by Devel::Peek).
But I would not know what to debug, because I do not know that anything is leaking, as in cases 1 to 3 described above. I was sure that:
newSViv( 323 );
did not leak.
So DEBUG_LEAKING_SCALARS should list leaked scalars.
Also, I found this comment in the perl commit history:
-[ 24088] By: davem on 2005/03/28 21:38:44
- Log: expand -DDEBUG_LEAKING_SCALARS to instrument the creation of each SV
To my mind this would be very useful for the task.

Related

ddrescue read non-tried blocks

I'm trying to rescue a 1TB disk which has read errors. Because I didn't have a free 1TB drive, I created a RAID 0 of two 500GB drives.
I used the command line from Wikipedia for the first run:
sudo ddrescue -f -n /dev/sdk /dev/md/md_test /home/user/rescue.map
ddrescue already completed this run after approximately 20 hours and more than 7000 read errors.
Now I'm trying to do a second run
sudo ddrescue -d -f -v -r3 /dev/sdk /dev/md/md_test /home/user/rescue.map
and read the non-tried blocks, but ddrescue gives me this:
GNU ddrescue 1.23
About to copy 1000 GBytes from '/dev/sdk' to '/dev/md/md_test'
Starting positions: infile = 0 B, outfile = 0 B
Copy block size: 128 sectors Initial skip size: 19584 sectors
Sector size: 512 Bytes
Press Ctrl-C to interrupt
Initial status (read from mapfile)
rescued: 635060 MB, tried: 0 B, bad-sector: 0 B, bad areas: 0
Current status
ipos: 1000 GB, non-trimmed: 0 B, current rate: 0 B/s
opos: 1000 GB, non-scraped: 0 B, average rate: 0 B/s
non-tried: 365109 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 635060 MB, bad areas: 0, run time: 0s
pct rescued: 63.49%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Copying non-tried blocks... Pass 1 (forwards)
ddrescue: Write error: Invalid argument
I can't figure out what this write error means; I already searched the manual for answers.
Any help is appreciated! Thx!
After a while I found the cause of the write error: the capacity of the corrupt drive is 931.5G, but the total capacity of the RAID 0 was just 931.3G.
I realized it while taking a closer look at the output of the lsblk command.
So I rebuilt the RAID 0 array with three 500G drives, and ddrescue now works as expected.

Getting CPU cycles from user mode dump

Process Explorer has columns for CPU time (down to milliseconds) and CPU Cycles. For WinDbg I am aware of the !runaway command, also !runaway 7 for more details, but it shows CPU time only.
Are the CPU cycles also available somehow in a user mode crash dump?
What I have tried:
I looked at dt nt!_KTHREAD and I see it has a CycleTime property
ntdll!_KTHREAD
+0x000 Header : _DISPATCHER_HEADER
+0x018 CycleTime : Uint8B
I tried to query that property in a !for_each_thread, but WinDbg responds that it's available in kernel mode only.
Why do I want those CPU cycles?
I am working on a training for JetBrains dotTrace. It has an option to count CPU cycles, and I'd like to explain where these cycles come from. The kernel structure above and Process Explorer are probably enough, but it would be awesome to see it live or post mortem in a user mode dump. I explain a lot of basics with WinDbg.
Following the implementation of GetProcessTimes() in ReactOS, you can see that the information is copied from the process' KPROCESS. So, indeed, it's only physically present in a dump that includes kernel memory.
C:\tw>ls -l
total 0
C:\tw>cdb -c ".dump /ma .\tw.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c "lm;!peb;.dump /ma .\tw1.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c ".ttime;q" -z tw.dmp | grep -B 3 quit
Created: Wed Apr 5 20:03:55.919 2017 ()
Kernel: 0 days 0:00:00.046
User: 0 days 0:00:00.000
quit:
C:\tw>cdb -c ".ttime;q" -z tw1.dmp | grep -B 3 quit
Created: Wed Apr 5 20:04:28.682 2017 ()
Kernel: 0 days 0:00:00.031
User: 0 days 0:00:00.000
quit:
C:\tw>

Using perl module Test::Valgrind for any executable

How would I use the Perl module Test::Valgrind for an executable written in C or C++? The documentation is not clear about it.
I've run some simple C and Perl programs and all tests pass for every file.
perl -MTest::Valgrind a.out
perl -MTest::Valgrind b.pl
ok 1 - InvalidFree
ok 2 - MismatchedFree
ok 3 - InvalidRead
ok 4 - InvalidWrite
ok 5 - InvalidJump
ok 6 - Overlap
ok 7 - InvalidMemPool
ok 8 - UninitCondition
ok 9 - UninitValue
ok 10 - SyscallParam
ok 11 - ClientCheck
ok 12 - Leak_DefinitelyLost
ok 13 - Leak_IndirectlyLost
ok 14 - Leak_PossiblyLost
ok 15 - Leak_StillReachable
So you just have to pass a different file argument from your shell.
UPD
perl -MTest::Valgrind -e 0 gives the same output as above, so that's not what you need, I guess.
If we run perl -MTest::Valgrind Valgrind/Test-Valgrind-1.14/samples/map.pl from the source distribution, we'll get a more believable result:
# ...
# 4,080 bytes in 1 blocks are still reachable in loss record 769 of 786
# malloc (/usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) [?:?]
# Perl_safesysmalloc (/usr/bin/perl) [?:?]
# ? (/usr/bin/perl) [?:?]
# Perl_newSV (/usr/bin/perl) [?:?]
# Perl_yylex (/usr/bin/perl) [?:?]
# Perl_yyparse (/usr/bin/perl) [?:?]
# ? (/usr/bin/perl) [?:?]
# Perl_pp_require (/usr/bin/perl) [?:?]
# Perl_runops_standard (/usr/bin/perl) [?:?]
# Perl_call_sv (/usr/bin/perl) [?:?]
# Perl_call_list (/usr/bin/perl) [?:?]
# ? (/usr/bin/perl) [?:?]
# Looks like you failed 3 tests of 15.
# Looks like your test exited with 3 just after 15.
You may also use the valgrind tool itself, which gives more options to control the test.
DESCRIPTION
Valgrind is a flexible program for debugging and profiling Linux executables. It consists of a core, which provides a
synthetic CPU in software, and a series of debugging and profiling tools.
I've checked this simple perl script:
#!/usr/bin/env perl
for (1..10) {
    for (1..324) {
        print $_ * $_;
    }
}
Then I ran valgrind ./test.pl and got this:
==16749== HEAP SUMMARY:
==16749== in use at exit: 224,640 bytes in 2,409 blocks
==16749== total heap usage: 8,846 allocs, 6,437 frees, 439,538 bytes allocated
==16749==
==16749== LEAK SUMMARY:
==16749== definitely lost: 9,001 bytes in 17 blocks
==16749== indirectly lost: 215,639 bytes in 2,392 blocks
==16749== possibly lost: 0 bytes in 0 blocks
==16749== still reachable: 0 bytes in 0 blocks
==16749== suppressed: 0 bytes in 0 blocks
==16749== Rerun with --leak-check=full to see details of leaked memory
==16749==
==16749== For counts of detected and suppressed errors, rerun with: -v
==16749== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Perl cannot allocate more than 1.1 GB on a Snow leopard Mac server with 32 GB RAM

I have a Mac server (snow leopard) with 32GB RAM. When I try to allocate more than 1.1GB RAM in Perl (v 5.10.0) I get an out of memory error. Here is the script that I used:
#!/usr/bin/env perl
# My snow leopard MAC server runs out of memory at >1.1 billion bytes. How
# can this be when I have 32 GB of memory installed? Allocating up to 4
# billion bytes works on a Dell Win7 12GB RAM machine.
# perl -v
# This is perl, v5.10.0 built for darwin-thread-multi-2level
# (with 2 registered patches, see perl -V for more detail)
use strict;
use warnings;
my $s;
print "Trying 1.1 GB...";
$s = "a" x 1100000000; # ok
print "OK\n\n";
print "Trying 1.2 GB...";
$s = '';
$s = "a" x 1200000000; # fails
print "..OK\n";
Here is the output that I get:
Trying 1.1 GB...OK
perl(96685) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
Trying 1.2 GB...
Any ideas why this is happening?
UPDATE 4:42pm 11/14/13
As per Kent Fredric (see two posts below), here are my ulimits. Virtual memory defaults to unlimited:
$ ulimit -a | grep bytes
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(23074) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
$ perl -E 'my $x = "a" x 1100000000; print "ok\n"'
ok
I tried setting virtual memory to 10 billion but to no avail.
$ ulimit -v 10000000000 # 10 billion
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(24275) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
You're using a 32-bit build of Perl (as shown by perl -V:ptrsize), but you need a 64-bit build. I recommend installing a local perl using perlbrew.
This can be achieved by passing -Duse64bitall to Configure when installing Perl.
This can be achieved by passing --64all to perlbrew install when installing Perl.
(For some odd reason, perl -V:use64bitall says this was done, but it clearly wasn't.)
This seems like it could be related to the problem. It's only really worthy of a comment, but it's too complex to post as one without it being entirely illegible.
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; print length($x)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 2000150514, heap peak: 2000141265, stack peak: 4896
Yes, that's 2 GB of memory for 1 GB of text.
Now with 2G ...
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.10.0
==========
2000000000
Memory usage summary: heap total: 4000151605, heap peak: 4000142092, stack peak: 4896
Yikes. That would certainly hit the 32-bit limit if you had one.
I was spoiled and doing my testing on 5.19.5, which has a notable improvement, namely copy-on-write strings, which greatly reduces memory consumption:
perlbrew exec --with=5.19.5 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.19.5
==========
2000000000
Memory usage summary: heap total: 2000157713, heap peak: 2000150396, stack peak: 5392
So either way, if you're using any version of Perl at all other than a development one, you need to expect it to eat twice the memory you need.
If there's a memory limit for some reason around the 2G window for 32bit processes, then you will hit that with a 1G string.
Why does Copy On Write matter?
Well, when you do
$a = $b
$a is a copy of $b
So when you do
$a = "a" x 1_000_000_000
First, it expands the right-hand side, creating a temporary, and then makes a copy to store in $a.
You can prove this by eliminating the copy as follows:
perlbrew exec --with=5.10.0 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 1000150047, heap peak: 1000140886, stack peak: 4896
See, all I did was remove the intermediate variable, and the memory usage halved!
:S
Though, because 5.19.5 only makes references to the original string and copies it when written to, it is efficient by default, so removing the intermediate variable has negligible benefit:
perlbrew exec --with=5.19.5 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.19.5
==========
1000000000
Memory usage summary: heap total: 1000154123, heap peak: 1000145146, stack peak: 5392
It could also be a Mac imposed limitation on per-process memory to prevent processes consuming too much system memory.
I don't know how valid this will be, but I assume Mac, being a Unix, has unix-like ulimits:
There are a few such memory limits, some excerpts from /etc/security/limits.conf
- core - limits the core file size (KB)
- data - max data size (KB)
- fsize - maximum filesize (KB)
- memlock - max locked-in-memory address space (KB)
- rss - max resident set size (KB)
- stack - max stack size (KB)
- as - address space limit (KB)
bash provides ways to limit and read these (somewhat); see info bash --index-search=ulimit.
For instance, ulimit -a | grep bytes emits this on my Linux machine:
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
And I can arbitrarily limit this within a scope:
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
ok
$ ulimit -v 200000
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
Out of memory!
panic: fold_constants JMPENV_PUSH returned 2 at -e line 1.
So ulimits are certainly something to look into.
I think I figured it out. I could not accept that Apple shipped a 32-bit Perl when their documentation says otherwise. From 'man perl':
64-BIT SUPPORT
Version 5.10.0 supports 64-bit execution (which is on by default). Version 5.8.8
only supports 32-bit execution.
Then I remembered: I had installed Fink on my Mac server, and it is, er, finicky about 32- and 64-bit issues. So I commented out
#test -r /sw/bin/init.sh && . /sw/bin/init.sh
from my .profile. Now I can at least allocate 14 GB of RAM (yeah!) on my 32 GB RAM server:
$ perl -E 'my $x = "a" x 14000000000; print "ok\n"'
ok
I tried 16GB but it hung for 5 minutes before I gave up. Now, a diff between perl -V for 32-bit and 64-bit tells the tale (but why still intsize=4?).
$ diff perlv.32 perlv.64
16c16
< intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
---
> intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
18c18
< ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
---
> ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
34,35c34,36
< PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_ITHREADS
< USE_LARGE_FILES USE_PERLIO USE_REENTRANT_API
---
> PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_64_BIT_ALL
> USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
> USE_PERLIO USE_REENTRANT_API
Thank you all for you help,
Paul

Does mmap allocate memory on the heap?

I was reading about mmap on Wikipedia and trying out this example: http://en.wikipedia.org/wiki/Mmap#Example_of_usage. I compiled this program with gcc and ran valgrind over it.
Here is valgrind output:
# valgrind a.out
==7018== Memcheck, a memory error detector
==7018== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==7018== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==7018== Command: a.out
==7018==
PID 7018: anonymous string 1, zero-backed string 1
PID 7019: anonymous string 1, zero-backed string 1
PID 7018: anonymous string 2, zero-backed string 2
==7018==
==7018== HEAP SUMMARY:
==7018== in use at exit: 0 bytes in 0 blocks
==7018== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==7018==
==7018== All heap blocks were freed -- no leaks are possible
==7018==
==7018== For counts of detected and suppressed errors, rerun with: -v
==7018== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
PID 7019: anonymous string 2, zero-backed string 2
==7019==
==7019== HEAP SUMMARY:
==7019== in use at exit: 0 bytes in 0 blocks
==7019== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==7019==
==7019== All heap blocks were freed -- no leaks are possible
==7019==
==7019== For counts of detected and suppressed errors, rerun with: -v
==7019== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
My question is:
Does mmap allocate memory on the heap? If not, what does munmap do?
On a Unix-like system, your program's address space consists of one or more virtual memory regions, each of which is mapped by the OS to physical memory, to a file, or to nothing at all.
The heap is, generally speaking, one specific memory region created by the C runtime, and managed by malloc (which in turn uses the brk and sbrk system calls to grow and shrink).
mmap is a way of creating new memory regions, independently of malloc (and so independently of the heap). munmap is simply its inverse, it releases these regions.
mmapped memory is neither heap nor stack. It is mapped into virtual address space of the calling process, but it's not allocated on the heap.