How can I make a static analysis call graph for Perl?

I am working on a moderately complex Perl program. As part of its development, it has to go through modifications and testing. Due to certain environment constraints, running this program frequently is not easy.
What I want is a static call-graph generator for Perl. It doesn't have to cover every edge case (e.g., redefining variables to be functions, or vice versa, in an eval).
(Yes, I know there is a run-time call-graph generating facility in Devel::DProfPP, but run-time profiling is not guaranteed to call every function. I need to be able to look at each function.)

Can't be done in the general case:
my $obj = Obj->new;
my $method = some_external_source();
$obj->$method();
However, it should be fairly easy to get a large number of the cases (run this program against itself):
#!/usr/bin/perl
use strict;
use warnings;
sub foo {
    bar();
    baz(quux());
}

sub bar {
    baz();
}

sub baz {
    print "foo\n";
}

sub quux {
    return 5;
}
my %calls;
while (<>) {
    next unless my ($name) = /^sub (\S+)/;
    while (<>) {
        last if /^}/;
        next unless my @funcs = /(\w+)\(/g;
        push @{$calls{$name}}, @funcs;
    }
}

use Data::Dumper;
print Dumper \%calls;
Note, this misses
calls to functions that don't use parentheses (e.g. print "foo\n";)
calls to functions that are dereferenced (e.g. $coderef->())
calls to methods that are strings (e.g. $obj->$method())
calls that put the open parenthesis on a different line
other things I haven't thought of
It incorrectly catches
commented functions (e.g. #foo())
some strings (e.g. "foo()")
other things I haven't thought of
If you want a better solution than that worthless hack, it is time to start looking into PPI, but even it will have problems with things like $obj->$method().
Just because I was bored, here is a version that uses PPI. It only finds function calls (not method calls). It also makes no attempt to keep the names of the subroutines unique (i.e. if you call the same subroutine more than once it will show up more than once).
#!/usr/bin/perl
use strict;
use warnings;
use PPI;
use Data::Dumper;
use Scalar::Util qw/blessed/;
sub is {
    my ($obj, $class) = @_;
    # parenthesize so the && binds before return (a bare "return ... and ..." returns early)
    return blessed($obj) && $obj->isa($class);
}
my $program = PPI::Document->new(shift);

my $subs = $program->find(
    sub { $_[1]->isa('PPI::Statement::Sub') and $_[1]->name }
);

die "no subroutines declared?" unless $subs;

for my $sub (@$subs) {
    print $sub->name, "\n";
    next unless my $function_calls = $sub->find(
        sub {
            $_[1]->isa('PPI::Statement') and
            $_[1]->child(0)->isa("PPI::Token::Word") and
            not (
                $_[1]->isa("PPI::Statement::Scheduled") or
                $_[1]->isa("PPI::Statement::Package") or
                $_[1]->isa("PPI::Statement::Include") or
                $_[1]->isa("PPI::Statement::Sub") or
                $_[1]->isa("PPI::Statement::Variable") or
                $_[1]->isa("PPI::Statement::Compound") or
                $_[1]->isa("PPI::Statement::Break") or
                $_[1]->isa("PPI::Statement::Given") or
                $_[1]->isa("PPI::Statement::When")
            )
        }
    );
    print map { "\t" . $_->child(0)->content . "\n" } @$function_calls;
}

I'm not sure it is 100% feasible (since Perl code cannot be statically analyzed in theory, due to BEGIN blocks and such - see a very recent SO discussion). In addition, subroutine references may make it very difficult to do even in places where BEGIN blocks don't come into play.
However, someone apparently made the attempt - I only know of it but have never used it, so buyer beware.

I don't think there is a "static" call-graph generator for Perl.
The next closest thing would be Devel::NYTProf.
Its main goal is profiling, but its output can tell you how many times a subroutine has been called, and from where.
If you need to make sure every subroutine gets called, you could also use Devel::Cover, which checks to make sure your test-suite covers every subroutine.
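For reference, both tools are driven from the command line once installed from CPAN; a rough sketch of the usual invocations (your_script.pl is a placeholder):
# profile a run, then turn the collected data into an HTML report
perl -d:NYTProf your_script.pl
nytprofhtml

# record coverage, then summarize it with the cover tool that ships with Devel::Cover
perl -MDevel::Cover your_script.pl
cover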

I recently stumbled across a script while trying to find an answer to this same question. The script (linked below) uses GraphViz to create a call graph of a Perl program or module. The output can be in a number of image formats.
http://www.teragridforum.org/mediawiki/index.php?title=Perl_Static_Source_Code_Analysis

I solved a similar problem recently, and would like to share my solution.
This tool was born out of desperation, untangling an undocumented part of a 30,000-line legacy script, in order to implement an urgent bug fix.
It reads the source code(s), uses GraphViz to generate a png, and then displays the image on-screen.
Since it uses simple line-by-line regexes, the formatting must be "sane" so that nesting can be determined.
If the target code is badly formatted, run it through a linter first.
Also, don't expect miracles such as parsing dynamic function calls.
The silver lining of a simple regex engine is that it can be easily extended for other languages.
The tool now also supports awk, bash, basic, dart, fortran, go, lua, javascript, kotlin, matlab, pascal, perl, php, python, r, raku, ruby, rust, scala, swift, and tcl.
https://github.com/koknat/callGraph

Related

redefining or overloading backticks

I have a lot of legacy code that shells out a lot. What I want to do is add a require, or make minimal code changes, so that the backticks do something different, for instance print the command instead of running it.
I tried using use subs, but I couldn't get it to take over backticks or qx (I did redefine system, which is one less thing to worry about).
I also tried to make a package:
package thingmbob;
use Data::Dumper;
use overload '``' => sub { CORE::print "things!:\t", Dumper \@_ };

# this works for some reason
$thingmbob::{'(``'}('ls');

# this does the standard backtick operation
`ls`
Unfortunately, I have no experience in OOP Perl and my google-fu skills are failing me. Could someone point me in the right direction?
Caveats:
I'm on a closed system with a few CPAN modules preinstalled; odds are I don't have any fancy modules available and I absolutely cannot get new ones.
I'm on perl 5.14.
Edit:
For the sake of completeness, I want to add my (mostly) final product:
use Data::Dumper;

BEGIN {
    *CORE::GLOBAL::readpipe = sub {
        print Dumper(\@_);
        # this inner readpipe was compiled before the override was installed,
        # so it calls the real built-in rather than recursing
        my @internal = readpipe($_[0]);
        if (wantarray) {
            return @internal;
        }
        else {
            return join('', @internal);
        }
    };
}
I want it to print what it's about to run and then run it. The wantarray check is important because without it scalar context does not work.
This perlmonks article explains how to do it. You can override the readpipe built-in.
EXPR is executed as a system command. The collected standard output of the command is returned. In scalar context, it comes back as a single (potentially multi-line) string. In list context, returns a list of lines (however you've defined lines with $/ (or $INPUT_RECORD_SEPARATOR in English)). This is the internal function implementing the qx/EXPR/ operator, but you can use it directly. The qx/EXPR/ operator is discussed in more detail in I/O Operators in perlop. If EXPR is omitted, uses $_ .
You need to put this into a BEGIN block, so it would make sense to not require, but use it instead to make it available as early as possible.
Built-ins are overridden using the CORE::GLOBAL:: namespace.
BEGIN {
    *CORE::GLOBAL::readpipe = sub {
        print "@_";
    };
}

print qx/ls/;
print `ls`;
This outputs:
ls1ls1
Where the ls is the @_ and the 1 is the return value of print inside the overridden sub.
Alternatively, there is ex::override, which does the same under the hood, but with less weird internals.

Using filehandles in Perl to alter actively running code

I've been learning about filehandles in Perl, and I was curious to see if there's a way to alter the source code of a program as it's running. For example, I created a script named "dynamic.pl" which contained the following:
use strict;
use warnings;
open(my $append, ">>", "dynamic.pl");
print $append "print \"It works!!\\n\";\n";
This program adds the line
print "It works!!\n";
to the end of it's own source file, and I hoped that once that line was added, it would then execute and output "It works!!"
Well, it does correctly append the line to the source file, but it doesn't execute it then and there.
So I assume that when perl executes a program, it loads it into memory and runs it from there. My question is: is there a way to access this loaded version of the program, so you can have a program that alters itself as it runs?
The missing piece you need is eval EXPR. This compiles, "evaluates", any string as code.
my $string = q[print "Hello, world!";];
eval $string;
This string can come from any source, including a filehandle.
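For example, a minimal sketch of pulling code out of a file and evaluating it (the file name here is made up):
open my $fh, '<', 'extra_code.pl' or die $!;   # hypothetical file containing Perl statements
my $snippet = do { local $/; <$fh> };          # slurp the whole file into one string
eval $snippet;
die $@ if $@;                                  # report compile/run errors from the evaluated code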
It also doesn't have to be a single statement. If you want to modify how a program runs, you can replace its subroutines.
use strict;
use warnings;
use v5.10;
sub speak { return "Woof!"; }
say speak();
eval q[sub speak { return "Meow!"; }];
say speak();
You'll get a Subroutine speak redefined warning from that. It can be suppressed with no warnings "redefine".
{
    # The block is so this "no warnings" only affects
    # the eval and not the entire program.
    no warnings "redefine";
    eval q[sub speak { return "Shazoo!"; }];
}
say speak();
Obviously this is a major security hole. There are many, many things to consider here, too many for an answer, and I strongly recommend you not do this and instead find a better solution to whatever problem you're trying to solve this way.
One way to mitigate the potential for damage is to use the Safe module. This is like eval, but it limits which built-in functions are available. It is by no means a panacea for the security issues.
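A minimal sketch of what that looks like (reval is Safe's restricted counterpart to eval; $untrusted_code stands in for whatever string you would otherwise have eval'd):
use Safe;
my $compartment = Safe->new;
my $result = $compartment->reval($untrusted_code);  # runs inside the restricted compartment
warn "evaluation failed: $@" if $@;                 # reval sets $@ on failure, just like eval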
With a warning about all kinds of issues, you can reload modules.
There are packages for that, for example, Module::Reload. Then you can write code that you intend to change in a module, change the source at runtime, and have it reloaded.
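A minimal sketch of the Module::Reload approach (assuming the module is installed):
use Module::Reload;
# ... edit the source of an already-loaded module on disk ...
Module::Reload->check;   # re-requires any loaded modules whose files have changed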
By hand you would delete that from %INC and then require, like
# ... change source code in the module ...
delete $INC{'ModuleWithCodeThatChanges.pm'};
require ModuleWithCodeThatChanges;
The only reason I can think of for doing this is experimentation and play. Otherwise, there are all kinds of concerns with doing something like this, and whatever your goal may be there are other ways to accomplish it.
Note: the question does specify a filehandle. However, I don't see that as really related to what I see as the heart of the question, which is modifying code at runtime.
The source file isn't used after it's been compiled.
You could just eval it.
use strict;
use warnings;
my $code = <<'__EOS__';
print "It works!!\n";
__EOS__

open(my $append_fh, ">>", "dynamic.pl")
    or die($!);

print($append_fh $code);

eval("$code; 1")
    or die($@);
There's almost definitely a better way to achieve your end goal here. BUT, you could recursively make exec() or system() calls -- the latter if you need a return value. Be sure to set up some condition or the dominoes will keep falling. Again, you should rethink this, unless it's just practice of some sort, or maybe I don't get it!
Each call should execute the latest state of the file; also be sure to close the file before each call.
i.e.,
exec("dynamic.pl");
or
my $retval = system("perl dynamic.pl");
Don't use eval ever.

I serialized my data in Perl with Data::Dumper. Now when I eval it I get "Global symbol "$VAR1" requires explicit package name"

I serialized my data to string in Perl using Data::Dumper. Now in another program I'm trying to deserialize it by using eval and I'm getting:
Global symbol "$VAR1" requires explicit package name
I'm using use warnings; use strict; in my program.
Here is how I'm evaling the code:
my $wiki_categories = eval($db_row->{categories});
die $@ if $@;
# ... use $wiki_categories ...
How can I disable my program dying because of "$VAR1" not being declared as my?
Should I append "my " before the $db_row->{categories} in the eval? Like this:
my $wiki_categories = eval("my ".$db_row->{categories});
I didn't test this yet, but I think it would work.
Any other ways to do this? Perhaps wrap it in some block, and turn off strict for that block? I haven't ever done it but I've seen it mentioned.
Any help appreciated!
This is normal. By default, when Data::Dumper serializes data, it outputs something like:
$VAR1 = ...your data...
To use Data::Dumper for serialization, you need to configure it a little. Terse is the most important option to set; it turns off the $VAR thing.
use Data::Dumper;

my $data = {
    foo => 23,
    bar => [qw(1 2 3)]
};

my $dumper = Data::Dumper->new([]);
$dumper->Terse(1);
$dumper->Values([$data]);

print $dumper->Dump;
Then the result can be evaled straight into a variable.
my $data = eval $your_dump;
You can do various tricks to shrink the size of Data::Dumper's output, but on the whole it's fast and space efficient. The major downsides are that it's Perl-only and wildly insecure. If anyone can modify your dump file, they own your program.
There are modules on CPAN which take care of this for you, and a whole lot more, such as Data::Serializer.
Your question has a number of implications, I'll try to address as many as I can.
First, read the perldoc for Data::Dumper. Setting $Data::Dumper::Terse = 1 may suffice for your needs. There are many options here in global variables, so be sure to localise them. But this changes the producer, not the consumer, of the data. I don't know how much control you have over that. Your question implies you're working on the consumer, but makes no mention of any control over the producer. Maybe the data already exists, and you have to use it as is.
The next implication is that you're tied to Data::Dumper. Again, the data may already exist, so too bad, use it. If this is not the case, I would recommend switching to another storable format. A fairly common one nowadays is JSON. While JSON isn't part of core perl, it's pretty trivial to install. It also makes this much easier. One advantage is that the data is useful in other languages, too. Another is that you avoid eval STRING which, should the data be compromised, could easily compromise your consumer.
The next item is just how to solve it as is. If the data exists, for example. A simple solution is to just add the my as you did. This works fine. Another one is to strip the $VAR1: (my $dumped = $db_row->{categories}) =~ s/^\s*\$\w+\s*=\s*//;. Another one is to put the "no warnings" right into the eval: eval ("no warnings; no strict; " . $db_row->{categories});.
Personally, I go with JSON whenever possible.
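For example, with the core JSON::PP module the round trip needs no string eval at all (a sketch; the field names are made up):
use JSON::PP qw(encode_json decode_json);
my $string = encode_json({ foo => 23, bar => [1, 2, 3] });  # store this instead of a Data::Dumper dump
my $data   = decode_json($string);                          # no $VAR1, no eval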
Your code would work as it stood except that the eval fails because $VAR1 is undeclared in the scope of the eval and use strict 'vars' is in effect.
Get around this by disabling strictures within as tight a block as possible. A do block does the trick, like this
my $wiki_categories = do {
    no strict 'vars';
    eval $db_row->{categories};
};

The good, the bad, and the ugly of lexical $_ in Perl 5.10+

Starting in Perl 5.10, it is now possible to lexically scope the context variable $_, either explicitly as my $_; or in a given / when construct.
Has anyone found good uses of the lexical $_? Does it make any constructs simpler / safer / faster?
What about situations where it makes things more complicated? Has the lexical $_ introduced any bugs into your code? (Since control structures that write to $_ will use the lexical version if it is in scope, this can change the behavior of the code if it contains any subroutine calls, due to the loss of dynamic scope.)
In the end, I'd like to construct a list that clarifies when to use $_ as a lexical, as a global, or when it doesn't matter at all.
NB: as of perl5-5.24 these experimental features are no longer part of perl.
IMO, one great thing to come out of lexical $_ is the new _ prototype symbol.
This allows you to specify a subroutine so that it will take one scalar or if none is provided it will grab $_.
So instead of writing:
sub foo {
    my $arg = @_ ? shift : $_;
    # Do stuff with $arg
}
I can write:
sub foo(_) {
    my $arg = shift;
    # Do stuff with $_ or first arg.
}
Not a big change, but it's just that much simpler when I want that behavior. Boilerplate removal is a good thing.
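As a quick usage sketch of that (_) prototype: a call with no arguments picks up $_ automatically, while an explicit argument still wins.
for (qw(a b c)) {
    foo();            # behaves like foo($_)
}
foo("explicit");      # uses the supplied argument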
Of course, this has the knock-on effect of changing the prototypes of several builtins (e.g. chr), which may break some code.
Overall, I welcome lexical $_. It gives me a tool I can use to limit accidental data munging and bizarre interactions between functions. If I decide to use $_ in the body of a function, by lexicalizing it, I can be sure that whatever code I call, $_ won't be modified in calling code.
Dynamic scope is interesting, but for the most part I want lexical scoping. Add to this the complications around $_. I've heard dire warnings about the inadvisability of simply doing local $_;--that it is best to use for ( $foo ) { } instead. Lexicalized $_ gives me what I want 99 times out of 100 when I have localized $_ by whatever means. Lexical $_ makes a great convenience and readability feature more robust.
The bulk of my work has had to work with perl 5.8, so I haven't had the joy of playing with lexical $_ in larger projects. However, it feels like this will go a long way to make the use of $_ safer, which is a good thing.
I once found an issue (bug would be way too strong of a word) that came up when I was playing around with the Inline module. This simple script:
use strict qw(vars subs);

for ('function') {
    $_->();
}

sub function {
    require Inline;
    Inline->bind(C => <<'__CODE__');
void foo()
{
}
__CODE__
}
fails with the error message "Modification of a read-only value attempted at /usr/lib/perl5/site_perl/5.10/Inline/C.pm line 380." Deep in the internals of the Inline module is a subroutine that wanted to modify $_, leading to the error message above.
Using
for my $_ ('function') { ...
or otherwise declaring my $_ is a viable workaround to this issue.
(The Inline module was patched to fix this particular issue).
[ Rationale: A short additional answer with a quick summary for perl newcomers that may be passing by. When searching for "perl lexical topic" one can end up here.]
By now (2015) I suppose it is common knowledge that the introduction of the lexical topic (my $_ and some related features) led to some unintended behaviors that were difficult to detect at the outset, and so it was marked as experimental and then entered a deprecation stage.
Partial summary of #RT119315:
One suggestion was for something like use feature 'lextopic'; to make use of a new lexical topic variable: $^_.
Another point made was that an "implicit name for the topicalizing operator ... other than $_" would work best when combined with explicitly lexical functions (e.g. lexical map or lmap). Whether these approaches would somehow make it possible to salvage given/when is not clear. In the afterlife of the experimental and deprecation phases, perhaps something may end up living on in the river of CPAN.
Haven't had any problems here, although I tend to follow somewhat of a "don't ask, don't tell" policy when it comes to Perl's magic. That is, the routines are not usually expected to rely on their peers screwing with non-lexical data as a side effect, nor to let them.
I've tested code against various 5.8 and 5.10 versions of perl, while using a 5.6 describing Camel for occasional reference. Haven't had any problems. Most of my stuff was originally done for perl 5.8.8.

Why does Perl::Critic dislike using shift to populate subroutine variables?

Lately, I've decided to start using Perl::Critic more often on my code. After programming in Perl for close to 7 years now, I've been settled in with most of the Perl best practices for a long while, but I know that there is always room for improvement. One thing that has been bugging me though is the fact that Perl::Critic doesn't like the way I unpack @_ for subroutines. As an example:
sub my_way_to_unpack {
    my $variable1 = shift @_;
    my $variable2 = shift @_;
    my $result = $variable1 + $variable2;
    return $result;
}
This is how I've always done it, and, as it's been discussed on both PerlMonks and Stack Overflow, it's not necessarily evil either.
Changing the code snippet above to...
sub perl_critics_way_to_unpack {
    my ($variable1, $variable2) = @_;
    my $result = $variable1 + $variable2;
    return $result;
}
...works too, but I find it harder to read. I've also read Damian Conway's book Perl Best Practices and I don't really understand how my preferred approach to unpacking falls under his suggestion to avoid using @_ directly, as Perl::Critic implies. I've always been under the impression that Conway was talking about nastiness such as:
sub not_unpacking {
    my $result = $_[0] + $_[1];
    return $result;
}
The above example is bad and hard to read, and I would never ever consider writing that in a piece of production code.
So in short, why does Perl::Critic consider my preferred way bad? Am I really committing a heinous crime unpacking by using shift?
Would this be something that people other than myself think should be brought up with the Perl::Critic maintainers?
The simple answer is that Perl::Critic is not following PBP here. The
book explicitly states that the shift idiom is not only acceptable, but
is actually preferred in some cases.
Running perlcritic with --verbose 11 explains the policies. It doesn't look like either of these explanations applies to you, though.
Always unpack @_ first at line 1, near
'sub xxx{ my $aaa= shift; my ($bbb,$ccc) = @_;}'.
Subroutines::RequireArgUnpacking (Severity: 4)
Subroutines that use `@_' directly instead of unpacking the arguments to
local variables first have two major problems. First, they are very hard
to read. If you're going to refer to your variables by number instead of
by name, you may as well be writing assembler code! Second, `@_'
contains aliases to the original variables! If you modify the contents
of a `@_' entry, then you are modifying the variable outside of your
subroutine. For example:

    sub print_local_var_plus_one {
        my ($var) = @_;
        print ++$var;
    }
    sub print_var_plus_one {
        print ++$_[0];
    }
    my $x = 2;
    print_local_var_plus_one($x); # prints "3", $x is still 2
    print_var_plus_one($x);      # prints "3", $x is now 3 !
    print $x;                    # prints "3"

This is spooky action-at-a-distance and is very hard to debug if it's
not intentional and well-documented (like `chop' or `chomp').
An exception is made for the usual delegation idiom
`$object->SUPER::something( @_ )'. Only `SUPER::' and `NEXT::' are
recognized (though this is configurable) and the argument list for the
delegate must consist only of `( @_ )'.
It's important to remember that a lot of the stuff in Perl Best Practices is just one guy's opinion on what looks the best or is the easiest to work with, and it doesn't matter if you do it another way. Damian says as much in the introductory text to the book. That's not to say it's all like that -- there are many things in there that are absolutely essential: using strict, for instance.
So as you write your code, you need to decide for yourself what your own best practices will be, and using PBP is as good a starting point as any. Then stay consistent with your own standards.
I try to follow most of the stuff in PBP, but Damian can have my subroutine-argument shifts and my unlesses when he pries them from my cold, dead fingertips.
As for Critic, you can choose which policies you want to enforce, and even create your own if they don't exist yet.
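For instance, a single entry in your .perlcriticrc is enough to switch off the policy discussed above (a sketch using the standard perlcriticrc syntax of a leading minus to disable a policy):
# in .perlcriticrc
[-Subroutines::RequireArgUnpacking]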
In some cases Perl::Critic cannot enforce PBP guidelines precisely, so it may enforce an approximation that attempts to match the spirit of Conway's guidelines. And it is entirely possible that we have misinterpreted or misapplied PBP. If you find something that doesn't smell right, please mail a bug report to bug-perl-critic@rt.cpan.org and we'll look into it right away.
Thanks,
-Jeff
I think you should generally avoid shift, if it is not really necessary!
Just ran into a code like this:
sub way {
    my $file = shift;
    if (!$file) {
        $file = 'newfile';
    }
    my $target = shift;
    my $options = shift;
}
If you start changing something in this code, there is a good chance you might accidentally change the order of the shifts, or maybe skip one, and everything goes south. Furthermore, it's hard to read, because you cannot be sure you really see all the parameters for the sub; some lines below there might be another shift somewhere... And if you use some regexes in between, they might replace the contents of $_ and weird stuff begins to happen...
A direct benefit of unpacking with my (...) = @_ is that you can just copy the (...) part, paste it where you call the method, and have a nice signature :) You can even use the same variable names beforehand and don't have to change a thing!
I think shift implies list operations where the length of the list is dynamic and you want to handle its elements one at a time, or where you explicitly need a list without the first element. But if you just want to assign the whole list to x parameters, your code should say so with my (...) = @_; no one has to wonder.
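A small sketch of that rule of thumb (the method name is made up): reach for shift when peeling off one distinguished leading argument, and use list assignment for everything else.
sub handler {
    my $self = shift;                  # the invocant is the one "special" leading argument
    my ($path, $mode, %options) = @_;  # the rest unpacked in a single readable line
    ...
}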