Concatenating strings in Perl with "join"

See Update Below
I am going through a whole bunch of Perl scripts that someone at my company wrote. He used join to concatenate strings. For example, he does this (taken hot out of a real Perl script):
$fullpath=join "", $Upload_Loc, "/", "$filename";
Instead of this:
$fullpath = "$Upload_Loc" . "/" . "$filename";
Or even just this:
$fullpath = "$Upload_Loc/$filename";
He's no longer here, but the people who are here tell me he concatenated strings in this way because it was somehow better. (They're not too clear why).
So, why would someone use join in this matter over using the . concatenate operator, or just typing the strings together as in the third example? Is there a valid reason for this style of coding?
I'm trying to clean up a lot of the mess here, and my first thought would be to end this practice. It makes the code harder to read, and I'm sure doing a join is not a very efficient way to concatenate strings. However, although I've been writing scripts in Perl since version 3.x, I don't consider myself a guru because I've never had a chance to hang around with people who were better at Perl than I am and could teach me Perl's deep inner secrets. I just want to make sure that my instinct is correct here before I make a fool of myself.
I've got better ways of doing that around here.
Update
People are getting confused. He didn't use join just for concatenating paths. Here's another example:
$hotfix=join "", "$app", "_", "$mod", "_", "$bld", "_", "$hf", ".zip";
Where as I would do something like this:
$hotfix = $app . "_" . $mod . "_" . $bld . "_" . "$hf.zip";
Or, more likely
$hotfix = "${app}_${mod}_${bld}_${hf}.zip";
Or maybe in this case, I might actually use join because the underscore causes problems:
$hotfix = join("_", $app, $mod, $bld, $hf) . ".zip";
My question is still: Is he doing something that real Perl hackers know, and that a newbie like me who's been doing this for only 15 years doesn't know about? Do people look at me concatenating strings using . or just putting them in quotes and say "Ha! What a noob! I bet he owns a Macintosh too!"
Or, did the previous guy just have a unique style of programming, much like my son's unique style of driving, which includes running head-on into trees?

I've done my fair share of commercial Perl development for "a well known online retailer", and I've never seen join used like that. Your third example would be my preferred alternative, as it's simple, clean and readable.
Like others here, I don't see any genuine value in using join as a performance enhancer. It might well perform marginally better than string interpolation but I can't imagine a real-world situation where the optimisation could be justified yet the code still written in a scripting language.
As this question demonstrates, esoteric programming idioms (in any language) just lead to a lot of misunderstanding. If you're lucky, the misunderstanding is benign. The developers I enjoy working alongside are the ones who code for readability and consistency and leave the Perl Golf for the weekends. :)
In short: yes, I think his unique style is akin to your son's unique style of driving. :)

I would consider
$fullpath = join "/", $Upload_Loc, $filename;
clearer than the alternatives. However, File::Spec has been in the core for a long time, so
use File::Spec::Functions qw( catfile );
# ...
$fullpath = catfile $Upload_Loc, $filename;
is much better. And, better yet, there is Path::Class:
use Path::Class;
my $fullpath = file($Upload_Loc, $filename);
Speed is usually not a factor I consider in concatenating file names and paths.
The example you give in your update:
$hotfix=join "", "$app", "_", "$mod", "_", "$bld", "_", "$hf", ".zip";
demonstrates why the guy is clueless. First, there is no need to interpolate those individual variables. Second, that is better written as
$hotfix = join '_', $app, $mod, $bld, "$hf.zip";
or, alternatively, as
$hotfix = sprintf '%s_%s_%s_%s.zip', $app, $mod, $bld, $hf;
with reducing unnecessary punctuation being my ultimate goal.

In general, unless the lists of items to be joined are huge, you will not see much of a performance difference changing them over to concatenations. The main concern is readability and maintainability, and in those cases, if the string interpolation form is clearer, you can certainly use that.
I would guess that this is just a personal coding preference of the original programmer.
In general, I use join when the length of the list is large/unknown, or if I am joining with something other than the empty string (or a single space for array interpolation). Otherwise, using . or simple string interpolation is usually shorter and easier to read.
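To make that rule of thumb concrete, here is a tiny illustration (the variable names are invented for the example):

# join when the list is long or of unknown length:
my $csv = join ", ", @fields;

# plain interpolation when there is a fixed, small number of pieces:
my $path = "$dir/$file";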

Perl compiles double-quoted strings into things with join and . catenation in them:
$ perl -MO=Deparse,-q -e '$fullpath = "$Upload_Loc/$filename"'
$fullpath = $Upload_Loc . '/' . $filename;
-e syntax OK
$ perl -MO=Deparse,-q -le 'print "Got @ARGV"'
BEGIN { $/ = "\n"; $\ = "\n"; }
print 'Got ' . join($", @ARGV);
-e syntax OK
which may inspire you to things like this:
$rx = do { local $" = "|"; qr{^(?:@args)$} };
as in:
$ perl -le 'print $rx = do { local $" = "\t|\n\t"; qr{ ^ (?xis: @ARGV ) $ }mx }' good stuff goes here
(?^mx: ^ (?xis: good |
stuff |
goes |
here ) $ )
Nifty, eh?

Interpolation is a little slower than joining a list. That said I've never known anyone to take it to this extreme.
You could use the Benchmark module to determine how much difference there is.
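For instance, a rough comparison of the three styles from the question might look like this (a minimal sketch; the values are dummies and results will vary with your perl version and data):

use Benchmark qw( cmpthese );

my ($app, $mod, $bld, $hf) = qw( app mod bld hf );

cmpthese( -2, {
    join        => sub { my $s = join "", $app, "_", $mod, "_", $bld, "_", $hf, ".zip" },
    concat      => sub { my $s = $app . "_" . $mod . "_" . $bld . "_" . $hf . ".zip" },
    interpolate => sub { my $s = "${app}_${mod}_${bld}_${hf}.zip" },
} );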
Also, you could ask this question over on http://perlmonks.org/. There are real gurus there who can probably give you the inner secrets much better than I can.

All of those approaches are fine.
Join can sometimes be more powerful than . concatenation, particularly when some of the things you are joining are arrays:
join "/", "~", @document_path_elements, $myDocument;

While recognizing that in all the examples I see here there are no significant performance differences, a series of concatenations, whether with . or with double-quotish interpolation, is indeed going to be less memory-efficient than a join, which precomputes the needed string buffer for the result instead of growing it several times (potentially even needing to move the partial result to a new location each time).
I have a problem with the criticism I see leveled here; there are many right ways to speak perl, and this is certainly one of them.
Inconsistent indentation, on the other hand...

Related

Exchange hash values without temporary variable

I have two hash values a couple of levels deep in a data structure that I would like to exchange and then switch back later.
$hashref->{$irrelevant}{$key1} and $hashref->{$irrelevant}{$key2}
Since they are such long names, ($a, $b) = ($b, $a) would be way too long for a single line of code.
Is there a way to do this elegantly, or am I stuck taking up three lines by exchanging with a temporary variable?
You people who hide "irrelevant" data meanings aren't doing anyone any favours. We still have to write a solution, but it has to be in abstract terms that make no sense either to you or me!
The neatest way I can think of is with a pair of hash slices
my $irrelevant_href = $hashref->{$irrelevant};
@{$irrelevant_href}{$key1, $key2} = @{$irrelevant_href}{$key2, $key1};
Create a sub to make it clear what the long line full of symbols is doing.
sub swap { ($_[0], $_[1]) = ($_[1], $_[0]) }
And it also makes the line shorter. (This works because the elements of @_ are aliases to the caller's variables, not copies, so assigning to them changes the originals.)
swap($hashref->{$irrelevant}{$key1}, $hashref->{$irrelevant}{$key2});
You could even use
swap(@{ $hashref->{$irrelevant} }{ $key1, $key2 });
There are far graver sins in coding than declaring a lexical temp variable, but since you're asking, let me throw an idea at you.
If you'll be doing any significant work with $hashref->{$irrelevant}, perhaps you should grab a copy of it specifically, to both make your code briefer for a maintenance programmer and cut out a level of dereferencing with every use.
For example:
# capture inner reference 'cause we'll be using it a lot anyway...
my $ir_h_ref = $hashref->{$irrelevant};
#stuff here
# Do swap with shorter reference chain
( $ir_h_ref->{$key1}, $ir_h_ref->{$key2} ) =
( $ir_h_ref->{$key2}, $ir_h_ref->{$key1} );
Now this new variable is probably not worth it just for the sake of the switch, but if you'll be doing much more with that hash in the same code block, it just may become attractive.

Evaluating escape sequences in perl

I'm reading strings from a file. Those strings contain escape sequences which I would like to have evaluated before processing them further. So I do:
$t = eval("\"$t\"");
which works fine. But I'm having doubts about the performance. If eval forks a perl process each time, it will be a performance killer. Another way I considered for doing the job was regexes, for which I have found related questions on SO.
My question: is there a better, more efficient way to do it?
EDIT: before calling eval in my example $t is containing \064\065\x20a\n. It is evaluated to 45 a<LF>.
It's not quite clear what the strings in the file look like and what you do to them before passing off to eval. There's something missing in the explanation.
If you simply want to undo C-style escaping (as also used in Perl), use Encode::Escape:
use Encode qw(decode);
use Encode::Escape qw();
my $string_with_unescaped_literals = decode 'ascii-escape', $string_with_escaped_literals;
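For instance, with the sample string from the question, a quick check might look like this (assuming Encode::Escape, installed from CPAN, handles the octal and hex escapes shown there):

use Encode qw( decode );
use Encode::Escape qw();

my $raw = '\064\065\x20a\n';          # literal backslash sequences, as read from a file
print decode( 'ascii-escape', $raw ); # should print "45 a" followed by a newline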
If you have placeholders in the file which look like Perl variables that you want to fill with values, then you are abusing eval as a poor man's templating engine. Use a real one that does not have the dangerous side effect of running arbitrary code.
$string =~ s/\\([rnt'"\\])/"qq|\\$1|"/gee
string eval can solve the problem too, but it brings up a host of security and maintenance issues, like @ in string
Oh gah, don't use eval for this, that's dangerous if someone provides it with input like "system('sync;reboot');"..
But, you could do something like this:
#!/usr/bin/perl
$string = 'foo\"ba\\\'r';
printf("%s\n", $string);
$string =~ s/\\([\"\'])/$1/g;
printf("%s\n", $string);

Why does Perl::Critic dislike using shift to populate subroutine variables?

Lately, I've decided to start using Perl::Critic more often on my code. After programming in Perl for close to 7 years now, I've been settled in with most of the Perl best practices for a long while, but I know that there is always room for improvement. One thing that has been bugging me though is the fact that Perl::Critic doesn't like the way I unpack @_ for subroutines. As an example:
sub my_way_to_unpack {
    my $variable1 = shift @_;
    my $variable2 = shift @_;
    my $result = $variable1 + $variable2;
    return $result;
}
This is how I've always done it, and, as has been discussed on both PerlMonks and Stack Overflow, it's not necessarily evil either.
Changing the code snippet above to...
sub perl_critics_way_to_unpack {
    my ($variable1, $variable2) = @_;
    my $result = $variable1 + $variable2;
    return $result;
}
...works too, but I find it harder to read. I've also read Damian Conway's book Perl Best Practices and I don't really understand how my preferred approach to unpacking falls under his suggestion to avoid using @_ directly, as Perl::Critic implies. I've always been under the impression that Conway was talking about nastiness such as:
sub not_unpacking {
    my $result = $_[0] + $_[1];
    return $result;
}
The above example is bad and hard to read, and I would never ever consider writing that in a piece of production code.
So in short, why does Perl::Critic consider my preferred way bad? Am I really committing a heinous crime by unpacking with shift?
Would this be something that people other than myself think should be brought up with the Perl::Critic maintainers?
The simple answer is that Perl::Critic is not following PBP here. The book explicitly states that the shift idiom is not only acceptable, but is actually preferred in some cases.
Running perlcritic with --verbose 11 explains the policies. It doesn't look like either of these explanations applies to you, though.
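For example, run against a small test file (the file name here is made up):

perlcritic --verbose 11 critic_test.pl

which produces something like: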
Always unpack @_ first at line 1, near
'sub xxx{ my $aaa= shift; my ($bbb,$ccc) = @_;}'.
Subroutines::RequireArgUnpacking (Severity: 4)
Subroutines that use `@_' directly instead of unpacking the arguments to
local variables first have two major problems. First, they are very hard
to read. If you're going to refer to your variables by number instead of
by name, you may as well be writing assembler code! Second, `@_'
contains aliases to the original variables! If you modify the contents
of a `@_' entry, then you are modifying the variable outside of your
subroutine. For example:

sub print_local_var_plus_one {
    my ($var) = @_;
    print ++$var;
}
sub print_var_plus_one {
    print ++$_[0];
}

my $x = 2;
print_local_var_plus_one($x); # prints "3", $x is still 2
print_var_plus_one($x);       # prints "3", $x is now 3 !
print $x;                     # prints "3"

This is spooky action-at-a-distance and is very hard to debug if it's
not intentional and well-documented (like `chop' or `chomp').
An exception is made for the usual delegation idiom
`$object->SUPER::something( @_ )'. Only `SUPER::' and `NEXT::' are
recognized (though this is configurable) and the argument list for the
delegate must consist only of `( @_ )'.
It's important to remember that a lot of the stuff in Perl Best Practices is just one guy's opinion on what looks the best or is the easiest to work with, and it doesn't matter if you do it another way. Damian says as much in the introductory text to the book. That's not to say it's all like that -- there are many things in there that are absolutely essential: using strict, for instance.
So as you write your code, you need to decide for yourself what your own best practices will be, and using PBP is as good a starting point as any. Then stay consistent with your own standards.
I try to follow most of the stuff in PBP, but Damian can have my subroutine-argument shifts and my unlesses when he pries them from my cold, dead fingertips.
As for Critic, you can choose which policies you want to enforce, and even create your own if they don't exist yet.
In some cases Perl::Critic cannot enforce PBP guidelines precisely, so it may enforce an approximation that attempts to match the spirit of Conway's guidelines. And it is entirely possible that we have misinterpreted or misapplied PBP. If you find something that doesn't smell right, please mail a bug report to bug-perl-critic@rt.cpan.org and we'll look into it right away.
Thanks,
-Jeff
I think you should generally avoid shift, if it is not really necessary!
Just ran into a code like this:
sub way {
    my $file = shift;
    if (!$file) {
        $file = 'newfile';
    }
    my $target = shift;
    my $options = shift;
}
If you start changing something in this code, there is a good chance you might accidentally change the order of the shifts, or maybe skip one, and everything goes south. Furthermore, it's hard to read, because you cannot be sure you have really seen all the parameters for the sub: some lines below there might be another shift somewhere... And if you use some regexes in between, they might replace the contents of $_ and weird stuff begins to happen...
A direct benefit of using the unpacking my (...) = @_ is you can just copy the (...) part and paste it where you call the method and have a nice signature :) you can even use the same variable names beforehand and don't have to change a thing!
I think shift implies list operations where the length of the list is dynamic and you want to handle its elements one at a time, or where you explicitly need a list without the first element. But if you just want to assign the whole list to x parameters, your code should say so with my (...) = @_; no one has to wonder.
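To illustrate the distinction I mean (the sub and method names are invented for the example):

# shift reads naturally when you consume the list a piece at a time:
sub dispatch {
    my $self = shift;     # peel off the invocant...
    $self->handle(@_);    # ...and pass the rest along untouched
}

# plain unpacking reads naturally when you just want named parameters:
sub full_path {
    my ($dir, $file) = @_;
    return "$dir/$file";
}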

How can I identify and remove redundant code in Perl?

I have a Perl codebase, and there are a lot of redundant functions and they are spread across many files.
Is there a convenient way to identify those redundant functions in the codebase?
Is there any simple tool that can verify my codebase for this?
You could use the B::Xref module to generate cross-reference reports.
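For example (the script name is a placeholder):

perl -MO=Xref,-oxref.txt yourscript.pl

That writes a cross-reference of subroutine and variable usage to xref.txt, which you can then scan for likely duplicates.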
I've run into this problem myself in the past. I've slapped together a quick little program that uses PPI to find subroutines. It normalizes the code a bit (whitespace normalized, comments removed) and reports any duplicates. Works reasonably well. PPI does all the heavy lifting.
You could make the normalization a little smarter by normalizing all variable names in each routine to $a, $b, $c and maybe doing something similar for strings. Depends on how aggressive you want to be.
#!perl
use strict;
use warnings;
use PPI;
my %Seen;
for my $file (@ARGV) {
    my $doc = PPI::Document->new($file);
    $doc->prune("PPI::Token::Comment");                     # strip comments
    my $subs = $doc->find('PPI::Statement::Sub') or next;   # skip files with no subs
    for my $sub (@$subs) {
        my $code = $sub->block;
        $code =~ s/\s+/ /g;                                 # normalize whitespace
        next if $code =~ /^{\s*}$/;                         # ignore empty routines
        if( $Seen{$code} ) {
            printf "%s in $file is a duplicate of $Seen{$code}\n", $sub->name;
        }
        else {
            $Seen{$code} = sprintf "%s in $file", $sub->name;
        }
    }
}
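If you save it as, say, find_dup_subs.pl (the name is arbitrary), you can point it at a whole tree:

perl find_dup_subs.pl $(find lib -name '*.pm')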
It may not be convenient, but the best tool for this is your brain. Go through all the code and get an understanding of its interrelationships. Try to see the common patterns. Then, refactor!
I've tagged your question with "refactoring". You may find some interesting material on this site filed under that subject.
If you are on Linux you might use grep to help you list all of the functions in your codebase. You will probably need to do what Ether suggests and really go through the code to understand it if you haven't already.
Here's an over-simplified example:
grep -r "sub " codebase/* > function_list
You can look for duplicates this way too. This idea may be less effective if you are using Perl's OOP capability.
It might also be worth mentioning NaturalDocs, a code documentation tool. This will help you going forward.

What's good practice for Perl special variables?

First off, does anyone have a comprehensive list of the Perl special variables?
Second, are there any tasks that are much easier using them? I always unset $/ to read in files all at once, and $| to automatically flush buffers, but I'm not sure of any others.
And third, should one use the Perl special variables, or be more explicit in their coding. Personally I'm a fan of using the special variables to manipulate the way code behaves, but I've heard others argue that it just confuses things.
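For reference, the two idioms mentioned in the question look like this when written defensively (the file name is invented; note the local so the change to $/ doesn't leak out of the block):

my $content = do {
    local $/;             # undef the input record separator: slurp mode
    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
    <$fh>;                # whole file in one read
};
$| = 1;                   # auto-flush output after every print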
They are all documented in perlvar.
Note that the long names are only usable if you use English qw( -no_match_vars ); first.
Always remember to local'ize your changes to the punctuation variables. Some of the punctuation variables are useful, others should not be used. For instance, $[ should never be used (it changes the base index of arrays, so local $[ = 1; will cause 1 to refer to the first item in a list or array). Others like $" are iffy. You have to balance the usefulness of not having to do the join manually. For instance, which of these is easier to understand?
local $" = " :: "; #"
my $s = "#a / #b / #c\n";
versus
my $sep = " :: ";
my $s = join(" / ", join($sep, #a), join($sep, #a), join($sep, #a)) . "\n";
or
my $s = join(" / ", map { join " :: ", #$_ }, \(#a, #b, #c)) . "\n";
1) As far as which ones I use often:
$! is quintessential for IO error handling
$@ for eval error handling when calling mis-designed libraries (like database ones) whose coders weren't considerate enough to code in decent error handling other than "die"
$_ for map/grep blocks, although I 100% agree with a poster above that using it for regular code is not a good practice.
$| for flushing buffers
2) As far as using punctuation vs. English names, I'll pick on Marc Bollinger's reply above although the same rebuttal goes for anyone arguing that there's no benefit to using English names.
"if you're using Perl, you're obviously not choosing it for neophyte readability"
Marc, I find that is not always (or rather almost never) true. Then again, 99% of my Perl experience is writing production Perl code for large companies, 90% of it full-fledged applications instead of 10-line hack scripts, so my analysis may not apply in other domains. The reasons I think Marc's view is wrong are:
I may be a Perl non-neophyte (to put it mildly), but some noob analyst hired a year ago - or an outsourced "genius" - probably is not. You may not want to confuse them any more than they already are. "If code was hard to write, it should be hard to read" is not exactly high on the list of good attitudes of professional developers, in any language.
When I'm up at 2am, half-asleep and troubleshooting a production problem, I really do not want to depend on the ability of my already-nearly-blind eyes to distinguish between $! and $|. Especially in code written by the before-mentioned "genius", who may not have known which one of them to use and switched them around.
When I'm reading code left unfinished by a guy who was cough "restructured" cough out of the company a year ago, I'd rather concentrate on the intricacies of screwy logic than the readability of the punctuation soup.
The three I use the most are $_, #_ and $!.
I like to use $_ when looping through an array, retrieving parameters (as pointed out by Motti, this is actually @_) or performing substitutions:
Example 1.1:
foreach (@items)
{
    print $_;
}
Example 1.2:
my $prm1 = shift; # implicit use of @_ or @ARGV depending on context
Example 1.3:
s/" "/""/ig; # implicit use of $_
I use $! in cases like this:
Example 2.1:
open(FILE, ">>myfile") || die "Error: $!";
I do agree though, it makes the code more confusing to someone not familiar with Perl. But confusing other people is one of the joys of knowing the language! :)
Typical ones I use are $_, @_, @ARGV, $!, $/. Other ones I comment heavily.
Brad notes that $@ is also a pretty common variable. (Error value from eval().)
I say use them--if you're using Perl, you're obviously not choosing it for neophyte readability. Any more-than-casual developer will likely have a browser/reference window open, and sifting through the perlvar manpage in one window is likely no less arduous than looking up definitions of (and assignments to!) global or external variables. As an example, I just recently encountered the new-in-5.10.x named capture buffers:
/^(?<myName>.*)$/;
# and later
my $capture = $+{'myName'};
And figuring out what was going on wasn't any harder than going into perlvar/perlre and reading a little bit.
I'd much rather find a bunch of wacky special vars in undocumented code than a bunch of wacky algorithms in undocumented code.