How to implement assert in Perl?

When trying to implement C's assert() macro in Perl, there is a fundamental problem. Consider this code first:
sub assert($$) {
my ($assertion, $failure_msg) = @_;
die $failure_msg unless $assertion;
}
# ...
assert($boolean, $message);
While this works, it's not like C: in C I'd simply write assert($foo <= $bar), but with this implementation I'd have to write assert($foo <= $bar, '$foo <= $bar'), i.e. repeat the condition as a string.
Now I wonder how to implement this efficiently. The easy variant seems to be passing only the string to assert() and using eval to evaluate it, but a string eval cannot see the caller's lexical variables. Even if it could, it would be quite inefficient, since the condition would be parsed and compiled on every call.
When passing the evaluated expression, I have no idea how to recover its source as a string, since it has already been evaluated by the time assert() sees it.
Another variant, assert(sub { $condition }), where it is probably easier to turn the code ref back into a string, is considered too ugly.
The construct assert(sub { (eval $_[0], $_[0]) }->("condition")); with
sub assert($$)
{
die "Assertion failed: $_[1]\n" unless $_[0];
}
would do, but is ugly to call.
What I am looking for is to write the condition only once, while still being able to reproduce the original (unevaluated) condition in the failure message and to evaluate the condition efficiently.
So what are more elegant solutions? Obviously this would be easier if Perl had a macro or comparable syntactic mechanism that allowed transforming the input before it is compiled or evaluated.

Use B::Deparse?
#!/usr/bin/perl
use strict;
use warnings;
use B::Deparse;
my $deparser = B::Deparse->new();
sub assert(&) {
my($condfunc) = @_;
my @caller = caller();
unless ($condfunc->()) {
my $src = $deparser->coderef2text($condfunc);
$src =~ s/^\s*use\s.*$//mg;
$src =~ s/^\s+(.+?)/$1/mg;
$src =~ s/(.+?)\s+$/$1/mg;
$src =~ s/[\r\n]+/ /mg;
$src =~ s/^\{\s*(.+?)\s*\}$/$1/g;
$src =~ s/;$//mg;
die "Assertion failed: $src at $caller[1] line $caller[2].\n";
}
}
my $var;
assert { 1 };
#assert { 0 };
assert { defined($var) };
exit 0;
Test output:
$ perl dummy.pl
Assertion failed: defined $var at dummy.pl line 26.

There are a load of assertion modules on CPAN. These are open source, so it's pretty easy to peek at them and see how they're done.
Carp::Assert is a low-magic implementation. It has links to a few more complicated assertion modules in its documentation, one of which is my module PerlX::Assert.
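For example, a typical Carp::Assert call looks roughly like this (a minimal sketch; see the module's documentation for the full interface):
use Carp::Assert;

my ($foo, $bar) = (1, 2);
assert($foo <= $bar) if DEBUG;   # DEBUG is a constant exported by Carp::Assert;
                                 # when assertions are turned off it is false and
                                 # the whole statement is optimized away at compile time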

Use caller and extract the line of source code that made the assertion?
sub assert {
my ($condition, $msg) = @_;
return if $condition;
if (!$msg) {
my ($pkg, $file, $line) = caller(0);
open my $fh, "<", $file;
my @lines = <$fh>;
close $fh;
$msg = "$file:$line: " . $lines[$line - 1];
}
die "Assertion failed: $msg";
}
assert(2 + 2 == 5);
Output:
Assertion failed: assert.pl:14: assert(2 + 2 == 5);
If you use Carp::croak instead of die, Perl will also report stack trace information and identify where the failing assertion was called.

One approach to any kind of "assertions" is to use a testing framework. It isn't as clean-cut as C's assert but then it is incomparably more flexible and manageable, while tests can still be freely embedded in code much like assert statements are.
A few very simple examples
use warnings;
use strict;
use feature 'say';
use Test::More 'no_plan';
Test::More->builder->output('/dev/null');
say "A few examples of tests, scattered around code\n";
like('may be', qr/(?:\w+\s+)?be/, 'regex');
cmp_ok('a', 'eq', 'a ', 'string equality');
my ($x, $y) = (1.7, 13);
cmp_ok($x, '==', $y, '$x == $y');
say "\n'eval' expression in a string so we can see the failing code\n";
my $expr = '$x**2 == $y';
ok(eval $expr, 'Quadratic') || diag explain $expr;
# ok(eval $expr, $expr);
with output
A few examples of tests, scattered around code
# Failed test 'string equality'
# at assertion.pl line 19.
# got: 'a'
# expected: 'a '
# Failed test '$x == $y'
# at assertion.pl line 20.
# got: 1.7
# expected: 13
'eval' expression in a string so we can see the failing code
# Failed test 'Quadratic'
# at assertion.pl line 26.
# $x**2 == $y
# Looks like you failed 3 tests of 4.
This is just a scattershot of examples, where the last one answers the question directly.
The module Test::More brings together a number of tools; there are many options in how to use it and how to manipulate output. See Test::Harness, and Test::Builder (used above), and a number of tutorials and SO posts.
I don't know how the above eval counts toward "elegant", but it does move you from individual, hand-maintained C-style assert statements toward a more easily manageable system.
Good assertions are meant and planned as systemic tests and code documentation but by their nature lack formal structure (and so may still end up scattered and ad-hoc). When done this way they come with a framework and can be managed and tuned with many tools, and as a suite.

Related

Perl - How to create commands that users can input in console?

I'm just starting in Perl and I'm quite enjoying it. I'm writing some basic functions, but what I really want to be able to do is to use those functions intelligently using console commands. For example, say I have a function adding two numbers. I'd want to be able to type in console "add 2, 4" and read the first word, then pass the two numbers as parameters in an "add" function. Essentially, I'm asking for help in creating some basic scripting using Perl ^^'.
I have some vague ideas about how I might do this in VB, but Perl, I have no idea where I'd start, or what functions would be useful to me. Is there something like VB.net's "Split" function where you can break down the contents of a scalar into an array? Is there a simple way to analyse one word at a time in a scalar, or iterate through a scalar until you hit a separator, for example?
I hope you can help, any suggestions are appreciated! Bear in mind, I'm no expert, I started Perl all of a few weeks ago, and I've only been doing VB.net half a year.
Thank you!
Edit: If you're not sure what to suggest and you know any simple/intuitive resources that might be of help, that would also be appreciated.
It's rather easy to make a script which dispatches to a command by name. Here is a simple example:
#!/usr/bin/env perl
use strict;
use warnings;
# take the command name off the @ARGV stack
my $command_name = shift;
# get a reference to the subroutine by name
my $command = __PACKAGE__->can($command_name) || die "Unknown command: $command_name\n";
# execute the command, using the rest of @ARGV as arguments
# and print the return with a trailing newline
print $command->(@ARGV);
print "\n";
sub add {
my ($x, $y) = @_;
return $x + $y;
}
sub subtract {
my ($x, $y) = @_;
return $x - $y;
}
This script (say it's named myscript.pl) can be called like
$ ./myscript.pl add 2 3
or
$ ./myscript.pl subtract 2 3
Once you have played with that for a while, you might want to take it further and use a framework for this kind of thing. There are several available, like App::Cmd or you can take the logic shown above and modularize as you see fit.
You want to parse command-line arguments. A space serves as the delimiter, so just run ./add.pl 2 3. Something like this:
$num1=$ARGV[0];
$num2=$ARGV[1];
print $num1 + $num2;
will print 5
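If you want the interactive form described in the question ("add 2, 4" typed at a prompt), a minimal sketch combining split with a dispatch table of code refs might look like this (the command names and the prompt are just examples):
#!/usr/bin/perl
use strict;
use warnings;

my %commands = (
    add      => sub { $_[0] + $_[1] },
    subtract => sub { $_[0] - $_[1] },
);

print "> ";
while (my $line = <STDIN>) {
    chomp $line;
    # split on commas and/or whitespace, so "add 2, 4" and "add 2 4" both work
    my ($name, @args) = split /[,\s]+/, $line;
    if (defined $name and length $name) {
        last if $name eq 'quit';
        if (my $code = $commands{$name}) {
            print $code->(@args), "\n";
        } else {
            print "Unknown command: $name\n";
        }
    }
    print "> ";
}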
Here is a short implementation of a simple scripting language.
Each statement is exactly one line long, and has the following structure:
Statement = [<Var> =] <Command> [<Arg> ...]
# This is a regular grammar, so we don't need a complicated parser.
Tokens are separated by whitespace. A command may take any number of arguments. These can either be the contents of variables $var, a string "foo", or a number (int or float).
As these are Perl scalars, there is no visible difference between strings and numbers.
Here is the preamble of the script:
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
strict and warnings are essential when learning Perl, else too much weird stuff would be possible. The use 5.010 declares a minimum version; it also enables the say builtin (like print, but it appends a newline).
Now we declare two global variables: The %env hash (table or dict) associates variable names with their values. %functions holds our builtin functions. The values are anonymous functions.
my %env;
my %functions = (
add => sub { $_[0] + $_[1] },
mul => sub { $_[0] * $_[1] },
say => sub { say $_[0] },
bye => sub { exit 0 },
);
Now comes our read-eval-loop (we don't print by default). The readline operator <> will read from the file specified as the first command line argument, or from STDIN if no filename is provided.
while (<>) {
next if /^\s*\#/; # skip comment lines
# parse the line. We get a destination $var, a $command, and any number of @args
my ($var, $command, @args) = parse($_);
# Execute the anonymous sub specified by $command with the @args
my $value = $functions{ $command }->(@args);
# Store the return value if a destination $var was specified
$env{ $var } = $value if defined $var;
}
That was fairly trivial. Now comes some parsing code. Perl “binds” regexes to strings with the =~ operator. Regexes may look like /foo/ or m/foo/. The /x flag allows us to include whitespace in the regex that doesn't match actual whitespace. The /g flag matches globally and enables the \G assertion, which anchors at the position where the last successful match ended. The /c flag is important for this m//gc style of parsing: it lets us consume one token at a time and prevents the regex engine's position in our string from being reset when a match fails.
sub parse {
my ($line) = @_; # get the $line argument
my ($var, $command, @args); # declare variables to be filled
# Test if this statement has a variable declaration
if ($line =~ m/\G\s* \$(\w+) \s*=\s* /xgc) {
$var = $1; # assign first capture if successful
}
# Parse the function of this statement.
if ($line =~ m/\G\s* (\w+) \s*/xgc) {
$command = $1;
# Test if the specified function exists in our %functions
if (not exists $functions{$command}) {
die "The command $command is not known\n";
}
} else {
die "Command required\n"; # Throw fatal exception on parse error.
}
# As long as our matches haven't consumed the whole string...
while (pos($line) < length($line)) {
# Try to match variables
if ($line =~ m/\G \$(\w+) \s*/xgc) {
die "The variable $1 does not exist\n" if not exists $env{$1};
push @args, $env{$1};
}
# Try to match strings
elsif ($line =~ m/\G "([^"]+)" \s*/xgc) {
push @args, $1;
}
# Try to match ints or floats
elsif ($line =~ m/\G (\d+ (?:\.\d+)? ) \s*/xgc) {
push @args, 0+$1;
}
# Throw error if nothing matched
else {
die "Didn't understand that line\n";
}
}
# return our -- now filled -- vars.
return $var, $command, @args;
}
Perl arrays can be handled like a linked list: shift removes and returns the first element (pop does the same with the last element), push adds an element to the end, and unshift adds one to the beginning.
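A quick illustration of those four operations:
my @queue = (1, 2, 3);
push    @queue, 4;         # (1, 2, 3, 4)  - append to the end
my $first = shift @queue;  # $first is 1, @queue is now (2, 3, 4)
unshift @queue, 0;         # (0, 2, 3, 4)  - prepend to the beginning
my $last  = pop @queue;    # $last is 4,   @queue is now (0, 2, 3)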
Our little programming language can execute simple programs like:
#!my_little_language
$a = mul 2 20
$b = add 0 2
$answer = add $a $b
say $answer
bye
If (1) our perl script is saved in my_little_language, set to be executable, and is in the system PATH, and (2) the above file in our little language saved as meaning_of_life.mll, and also set to be executable, then
$ ./meaning_of_life
should be able to run it.
Output is obviously 42. Note that our language doesn't yet have string manipulation or simple assignment to variables. Also, it would be nice to be able to call functions with the return value of other functions directly. This requires some sort of parens, or precedence mechanism. Also, the language requires better error reporting for batch processing (which it already supports).

Syntax errors at lines 24 and 26. I don't know why.

syntax error at bioinfo2.pl line 24, near ");"
syntax error at bioinfo2.pl line 26, near "}"
Execution of bioinfo2.pl aborted due to compilation errors.
print "Enter file name......\n\n";
chomp($samplefile = <STDIN>);
open(INFILE,"$samplefile") or die "Could not open $samplefile";
@residue_name= ();
@residue_count= ();
while($newline = <INFILE>)
{
if ($newline =~ /^ATOM/)
{
chomp $newline;
@columns = split //, $newline;
$res = join '', $columns[17], $columns[18], $columns[19];
splice @columns,0;
$flag=0
for ($i = 0; $i<scalar(@residue_name); $i++;)
{
if (@residue_name[i] == $res)
{
@residue_count[i] = @residue_count[i] + 1;
$flag=1;
}
}
if($flag==0)
{
push(@residue_name, $res);
}
for ($i = 0; $i<scalar(@residue_name); $i++)
{
print (@residue_name[i], "-------", @residue_count[i], "\n");
}
}
}
It might be advisable to use strict; use warnings. That forces you to declare your variables (you can do so with my), and rules out many possible errors.
Here are a few things that I noticed:
In Perl5 v10 and later, you can use the say function (use 5.010 or use feature 'say'). This works like print but adds a newline at the end.
Never use the two-arg form of open. This opens some security issues. Provide an explicit open mode. Also, you can use scalars as filehandles; this provides nice features like auto-closing of files.
open my $INFILE, '<', $samplefile or die "Can't open $samplefile: $!";
The $! variable contains the reason why the open failed.
If you want to retrieve a list of elements from an array, you can use a slice (multiple subscripts):
my $res = join '', @columns[17 .. 19]; # also, range operator ".."
Note that the sigil is now an @, because we take multiple elements.
The splice @columns, 0 is a fancy way of saying “delete all elements from the array, and return them”. This is not necessary (you don't read from that variable later). If you use lexical variables (declared with my), then each iteration of the while loop will receive a new variable. If you really want to remove the contents, you can undef @columns. This should be more efficient.
Actual error: You require a semicolon after $flag = 0 to terminate the statement before you can begin a loop.
Actual error: A C-style for-loop contains three expressions contained in parens. Your last semicolon divides them into 4 expressions, this is an error. Simply remove it, or look at my next tip:
C-style loops (for (foo; bar; baz) {}) are painful and error-prone. If you only iterate over a range (e.g. of indices), then you can use the range operator:
for my $i (0 .. $#residue_name) { ... }
The $# sigil gives the last index of an array.
When subscripting arrays (accessing array elements), then you have to include the sigil of the index:
$residue_name[$i]
Note that the sigil of the array is $, because we access only one element.
The pattern $var = $var + 1 can be shortened to $var++. This uses the increment operator.
The $flag == 0 could be abbreviated to !$flag, as all numbers except zero are considered true.
Here is a reimplementation of the script. It takes the filename as a command line argument; this is more flexible than prompting the user.
#!/usr/bin/perl
use strict; use warnings; use 5.010;
my $filename = $ARGV[0]; # @ARGV holds the command line args
open my $fh, "<", $filename or die "Can't open $filename: $!";
my @residue_name;
my @residue_count;
while(<$fh>) { # read into "$_" special variable
next unless /^ATOM/; # start a new iteration if regex doesn't match
my $number = join "", (split //)[17 .. 19]; # who needs temp variables?
my $push_number = 1; # self-documenting variable names
for my $i (0 .. $#residue_name) {
if ($residue_name[$i] == $number) {
$residue_count[$i]++;
$push_number = 0;
}
}
push @residue_name, $number if $push_number;
# are you sure you want to print this after every input line?
# I'd rather put this outside the loop.
for my $i (0 .. $#residue_name) {
say $residue_name[$i], ("-" x 7), $residue_count[$i]; # "x" repetition operator
}
}
And here is an implementation that may be faster for large input files: We use hashes (lookup tables), instead of looping through arrays:
#!/usr/bin/perl
use strict; use warnings; use 5.010;
my $filename = $ARGV[0]; # @ARGV holds the command line args
open my $fh, "<", $filename or die "Can't open $filename: $!";
my %count_residue; # this hash maps the numbers to counts
# automatically guarantees that every number has one count only
while(<$fh>) { # read into "$_" special variable
next unless /^ATOM/; # start a new iteration if regex doesn't match
my $number = join "", (split //)[17 .. 19]; # who needs temp variables?
if (exists $count_residue{$number}) {
# if we already have an entry for that number, we increment:
$count_residue{$number}++;
} else {
# We add the entry, and initialize to zero
$count_residue{$number} = 0;
}
# The above if/else initializes new numbers (seen once) to zero.
# If you want to count starting with one, replace the whole if/else by
# $count_residue{$number}++;
# print out all registered residues in numerically ascending order.
# If you want to sort them by their count, descending, then use
# sort { $count_residue{$b} <=> $count_residue{$a} } ...
for my $num (sort {$a <=> $b} keys %count_residue) {
say $num, ("-" x 7), $count_residue{$num};
}
}
It took me a while to track down all the various errors. As others have said, use use warnings; and use strict;
Rule #1: Whenever you see syntax error pointing to a perfectly good line, you should always see if the line before is missing a semicolon. You forgot the semicolon after $flag=0.
In order to track down all the issues, I've rewritten your code into a more modern syntax:
#! /usr/bin/env perl
use strict;
use warnings;
use autodie;
print "Enter file name......\n\n";
chomp (my $samplefile = <STDIN>);
open my $input_file, '<:crlf', $samplefile;
my @residue_name;
my @residue_count;
while ( my $newline = <$input_file> ) {
chomp $newline;
next if $newline !~ /^ATOM/; #Eliminates the internal `if`
my @columns = split //, $newline;
my $res = join '', $columns[17], $columns[18], $columns[19];
my $flag = 0;
for my $i (0..$#residue_name) {
if ( $residue_name[$i] == $res ) {
$residue_count[$i]++;
$flag = 1;
}
}
if ( $flag == 0 ) {
push @residue_name, $res;
}
for my $i (0..$#residue_name) {
print "$residue_name[$i] ------- $residue_count[$i]\n";
}
}
close $input_file;
Here's a list of changes:
Lines 2 & 3: Always use use strict; and use warnings;. These will help you track down about 90% of your program errors.
Line 4: Use use autodie;. This will eliminate the need for checking whether a file opened or not.
Line 7 (and others): Using use strict; requires you to predeclare variables. Thus, you'll see my whenever a variable is first used.
Line 8: Use the three parameter open and use local variables for file handles instead of globs (i.e. $file_handle vs. FILE_HANDLE). The main reason is that local variables are easier to pass into subroutines than globs.
Lines 9 & 10: No need to initialize the arrays; just declaring them is enough.
Line 13: Always chomp as soon as you read in.
Line 14: Doing this eliminates an entire inner if statement that embraced the body of your while loop. Code blocks (such as if, while, and for) get hard to follow when they grow too long or are nested too deeply inside one another. Using next in this way allows me to eliminate that if block.
Line 17: Here's where you missed the semicolon which gave you your first syntax error. The main thing is I eliminated the very confusing splice command. If you want to zero out your array, you could have simply said @columns = (); which is much clearer. However, since @columns is now in scope only in the while loop, I no longer have to blank it out since it will be redefined for each line of your file.
Line 18: This is a much cleaner way of looping through all the indices of your array. Note that $#residue_name gives you the last index of @residue_name while scalar @residue_name gives you the number of elements. This is a very important distinction! If I have @array = (0, 1, 2, 3, 4), $#array will be 4, but scalar @array will be 5. Using the C style for loop can be a bit confusing when doing this. Should you use > or >=? Using (0..$#residue_name) is obvious and eliminates the chance of errors, such as the extra semicolon inside your C style for statement. Because of the chance of errors and the complexity of the syntax, the developers who created Python decided not to allow C style for loops at all.
Line 19 (and others): Using warnings pointed out that you did @residue_name[i] and it had several issues. First of all, you should use $residue_name[...] when indexing an array, and second of all, i is a bareword, not the counter variable; you meant $i. Thus @residue_name[i] becomes $residue_name[$i].
Line 20: If you're incrementing a variable, use $foo++; or $foo += 1; and not $foo = $foo + 1;. The first two make it easier to see that you're incrementing a variable and not recalculating its value.
Line 29: One of the great features of Perl is that variables can be interpolated inside quotes, so you can put everything inside a single set of quotes. By the way, you should use . and not , if you do break up a print statement into multiple pieces. The , is a list operator, which means that what you print out depends upon the value of $,. The $, is a Perl variable that says what to print between the items of a list passed to print.
Please don't take this as criticism of your coding abilities. Many Perl books and courses seem to teach Perl as it was back in the Perl 3.0 days. When I first learned Perl, it was at Perl 3.0, and much of my syntax would have looked like yours. However, Perl 5.x has been out for quite a while and contains many features that make programming easier and the code cleaner to read.
It took me a while to get out of Perl 3.0 habits and into Perl 4.0 and later Perl 5.0 habits. You learn by looking at what others do, and asking questions on forums like Stack Overflow.
I still can't say your code will work. I don't have your input, so I can't test it against that. However, by using this code as the basis of your program, debugging these errors should be pretty easy.

How would I do the equivalent of Prototype's Enumerator.detect in Perl with the least amount of code?

Lately I've been thinking a lot about functional programming. Perl offers quite a few tools to go that way, however there's something I haven't been able to find yet.
Prototype has the function detect for enumerators, the descriptions is simply this:
Enumerator.detect(iterator[, context]) -> firstElement | undefined
Finds the first element for which the iterator returns true.
Enumerator in this case is any list while iterator is a reference to a function, which is applied in turn on each element of the list.
I am looking for something like this to apply in situations where performance is important, i.e. when stopping upon encountering a match saves time by disregarding the rest of the list.
I am also looking for a solution that would not involve loading any extra module, so if possible it should be done with builtins only. And if possible, it should be as concise as this for example:
my @result = map function @array;
You say you don't want a module, but this is exactly what the first function in List::Util does. That's a core module, so it should be available everywhere.
use List::Util qw(first);
my $first = first { some condition } @array;
If you insist on not using a module, you could copy the implementation out of List::Util. If somebody knew a faster way to do it, it would be in there. (Note that List::Util includes an XS implementation, so that's probably faster than any pure-Perl approach. It also has a pure-Perl version of first, in List::Util::PP.)
Note that the value being tested is passed to the subroutine in $_ and not as a parameter. This is a convenience when you're using the first { some condition } @values form, but is something you have to remember if you're using a regular subroutine. Some more examples:
use 5.010; # I want to use 'say'; nothing else here is 5.10 specific
use List::Util qw(first);
say first { $_ > 3 } 1 .. 10; # prints 4
sub wanted { $_ > 4 }; # note we're using $_ not $_[0]
say first \&wanted, 1 .. 10; # prints 5
my $want = \&wanted; # Get a subroutine reference
say first \&$want, 1 .. 10; # This is how you pass a reference in a scalar
# someFunc expects a parameter instead of looking at $_
say first { someFunc($_) } 1 .. 10;
Untested since I don't have Perl on this machine, but:
sub first(&@) {
my $pred = shift;
die "First argument to 'first' must be a sub" unless ref $pred eq 'CODE';
for my $val (@_) {
return $val if $pred->($val);
}
return undef;
}
Then use it as:
my $first = first { sub performing test } @list;
Note that this doesn't distinguish between no matches in the list and one of the elements in the list being an undefined value and having that match.
Just since its not here, a Perl function definition of first that localizes $_ for its block:
sub first (&@) {
my $code = shift;
for (@_) {return $_ if $code->()}
undef
}
my @array = 1 .. 10;
say first {$_ > 5} @array; # prints 6
While it will work fine, I don't advocate using this version, since List::Util is a core module (installed by default), and its implementation of first will usually use the XS version (written in C) which is much faster.

How can I translate a shell script to Perl?

I have a shell script, a pretty big one. Now my boss says I must rewrite it in Perl.
Is there any way to write a Perl script and use the existing shell code as-is in my Perl script, something similar to Inline::C?
Is there something like Inline::Shell? I had a look at the Inline module, but it seems to support only programming languages.
I'll answer seriously. I do not know of any program to translate a shell script into Perl, and I doubt any interpreter module would provide the performance benefits. So I'll give an outline of how I would go about it.
Now, you want to reuse your code as much as possible. In that case, I suggest selecting pieces of that code, writing a Perl version of each, and then calling the Perl script from the main script. That will enable you to do the conversion in small steps, assert that each converted part is working, and improve your Perl knowledge gradually.
As you can call outside programs from a Perl script, you can even replace some bigger logic with Perl, and call smaller shell scripts (or other commands) from Perl to do something you don't feel comfortable yet to convert. So you'll have a shell script calling a perl script calling another shell script. And, in fact, I did exactly that with my own very first Perl script.
Of course, it's important to select well what to convert. I'll explain, below, how many patterns common in shell scripts are written in Perl, so that you can identify them inside your script, and create replacements by as much cut&paste as possible.
First, both Perl scripts and shell scripts are code plus functions; i.e., anything which is not a function declaration is executed in the order it is encountered. You don't need to declare functions before use, though. That means the general layout of the script can be preserved, though the ability to keep things in memory (like a whole file, or a processed form of it) makes it possible to simplify tasks.
A Perl script, in Unix, starts with something like this:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
#other libraries
(rest of the code)
The first line, obviously, points to the command to be used to run the script, just like normal shells do. The following two "use" lines make the language more strict, which should decrease the amount of bugs you encounter because you don't know the language well (or plain did something wrong). The third use line imports the "Dumper" function of the "Data::Dumper" module. It's useful for debugging purposes. If you want to know the value of an array or hash table, just print Dumper(whatever).
Note also that comments are just like shell's, lines starting with "#".
Now, you call external programs and pipe to or pipe from them. For example:
open THIS, "cat $ARGV[0] |";
That will run cat, passing "$ARGV[0]", which would be $1 on shell -- the first argument passed to it. The result of that will be piped into your Perl script through "THIS", which you can use to read that from it, as I'll show later.
You can use "|" at the beginning or end of the mode string, to indicate "pipe to" or "pipe from" and specify a command to be run, and you can also use ">" or ">>" at the beginning, to open a file for writing with or without truncation, "<" to explicitly indicate opening a file for reading (the default), or "+<" and "+>" for read and write. Notice that the latter will truncate the file first.
Another syntax for "open", which will avoid problems with files with such characters in their names, is having the opening mode as a second argument:
open THIS, "-|", "cat $ARGV[0]";
This will do the same thing. The mode "-|" stands for "pipe from" and "|-" stands for "pipe to". The rest of the modes can be used as they were (>, >>, <, +>, +<). While there is more than this to open, it should suffice for most things.
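A few three-argument examples of the modes just described (the file names here are only placeholders):
open my $in,   "<",  "input.txt"  or die "Can't read input.txt: $!";        # read
open my $out,  ">>", "output.log" or die "Can't append to output.log: $!";  # append
open my $sort, "|-", "sort -u"    or die "Can't run sort: $!";              # pipe to a command
print {$sort} "banana\napple\napple\n";
close $sort;   # the sorted, de-duplicated lines appear on STDOUT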
But you should avoid calling external programs as much as possible. You could open the file directly, by doing open THIS, "$ARGV[0]";, for example, and have much better performance.
So, what external programs you could cut out? Well, almost everything. But let's stay with the basics: cat, grep, cut, head, tail, uniq, wc, sort.
CAT
Well, there isn't much to be said about this one. Just remember that, if possible, read the file only once and keep it in memory. If the file is huge you won't do that, of course, but there are almost always ways to avoid reading a file more than once.
Anyway, the basic syntax for cat would be:
my $filename = "whatever";
open FILE, "$filename" or die "Could not open $filename!\n";
while(<FILE>) {
print $_;
}
close FILE;
This opens a file and prints all its contents ("while(<FILE>)" will loop until EOF, assigning each line to "$_"), then closes it again.
If I wanted to direct the output to another file, I could do this:
my $filename = "whatever";
my $anotherfile = "another";
open (FILE, "$filename") || die "Could not open $filename!\n";
open OUT, ">", "$anotherfile" or die "Could not open $anotherfile for writing!\n";
while(<FILE>) {
print OUT $_;
}
close FILE;
This will print the line to the file indicated by "OUT". You can use STDIN, STDOUT and STDERR in the appropriate places as well, without having to open them first. In fact, "print" defaults to STDOUT, and "die" defaults to "STDERR".
Notice also the "or die ..." and "|| die ...". The operators or and || mean the following command will only be executed if the first one returns false (which means an empty string, undef, 0, and the like). The die command stops the script with an error message.
The main difference between "or" and "||" is priority. If "or" was replaced by "||" in the examples above, it would not work as expected, because the line would be interpreted as:
open FILE, ("$filename" || die "Could not open $filename!\n");
Which is not at all what is expected. As "or" has a lower priority, it works. In the line where "||" is used, the parameters to open are passed between parentheses, making it possible to use "||".
In fact, there is an even shorter idiom which does pretty much what cat does:
while(<>) {
print $_;
}
That will print all files in the command line, or anything passed through STDIN.
GREP
So, how would our "grep" script work? I'll assume "grep -E", because that's easier in Perl than simple grep. Anyway:
my $pattern = $ARGV[0];
shift @ARGV;
while(<>) {
print $_ if /$pattern/o;
}
The "o" modifier on the pattern instructs Perl to compile that pattern only once, thus gaining you speed. Note the style "something if cond": it means it will only execute "something" if the condition is true. Finally, "/$pattern/" alone is the same as "$_ =~ m/$pattern/", which means: match $_ against the regex pattern indicated. If you want standard grep behavior, i.e. just substring matching, you could write:
print $_ if $_ =~ "$pattern";
CUT
Usually you do better using regex capture groups to get the exact string than cut, much as you would do with "sed". Anyway, here are two ways of reproducing cut:
while(<>) {
my @array = split ",";
print $array[3], "\n";
}
That will get you the fourth column of every line, using "," as separator. Note @array and $array[3]. The @ sigil means "array" should be treated as, well, an array. It will receive an array composed of each column of the currently processed line. Next, the $ sigil means $array[3] is a scalar value: it will return the single column you are asking for.
This is not a good implementation, though, as "split" will scan the whole string. I once reduced a process from 30 minutes to 2 seconds just by not using split -- the lines were rather large, though. Anyway, the following has superior performance if the lines are expected to be big and the columns you want are near the beginning:
while(<>) {
my ($column) = /^(?:[^,]*,){3}([^,]*),/;
print $column, "\n";
}
This leverages regular expressions to get the desired information, and only that.
If you want positional columns, you can use:
while(<>) {
print substr($_, 5, 10), "\n";
}
Which will print 10 characters starting from the sixth (again, 0 means the first character).
HEAD
This one is pretty simple:
my $printlines = abs(shift);
my $lines = 0;
my $current;
while(<>) {
if($ARGV ne $current) {
$lines = 0;
$current = $ARGV;
}
print "$_" if $lines < $printlines;
$lines++;
}
Things to note here: I use "ne" to compare strings. $ARGV always holds the name of the file currently being read, so I keep track of it to restart my counting once I start reading a new file. Also note the more traditional block syntax for "if", right alongside the postfix one.
I also use a simplified syntax to get the number of lines to be printed. When you use "shift" by itself it will assume "shift @ARGV". Also, note that shift, besides modifying @ARGV, will return the element that was shifted out of it.
As with a shell, there is no distinction between a number and a string -- you just use it. Even things like "2"+"2" will work. In fact, Perl is even more lenient, cheerfully treating anything non-number as a 0, so you might want to be careful there.
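A few one-liners showing that leniency:
print "2" + "2", "\n";    # 4    - strings behave as numbers with numeric operators
print "abc" + 1, "\n";    # 1    - a non-numeric string counts as 0 (warns under "use warnings")
print 10 . 20, "\n";      # 1020 - numbers behave as strings with the . (concatenation) operator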
This script is very inefficient, though, as it reads each file in full, not only the required lines. Let's improve it, and see a couple of important keywords in the process:
my $printlines = abs(shift);
my @files;
if(scalar(@ARGV) == 0) {
@files = ("-");
} else {
@files = @ARGV;
}
for my $file (@files) {
next unless -f $file && -r $file;
open FILE, "<", $file or next;
my $lines = 0;
while(<FILE>) {
last if $lines == $printlines;
print "$_";
$lines++;
}
close FILE;
}
The keywords "next" and "last" are very useful. First, "next" will tell Perl to go back to the loop condition, getting the next element if applicable. Here we use it to skip a file unless it is truly a file (not a directory) and readable. It will also skip if we couldn't open the file even then.
Then "last" is used to immediately jump out of a loop. We use it to stop reading the file once we have reached the required number of lines. It's true we read one line too many, but having "last" in that position shows clearly that the lines after it won't be executed.
There is also "redo", which will go back to the beginning of the loop, but without reevaluating the condition nor getting the next element.
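A compact illustration of "next" and "last" outside of file handling:
for my $n (1 .. 5) {
    next if $n == 2;    # skip 2 and continue with 3
    last if $n == 4;    # leave the loop entirely when we reach 4
    print "$n\n";       # prints 1 and 3
}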
TAIL
I'll do a little trick here.
my $skiplines = abs(shift);
my @lines;
my $current = "";
while(<>) {
if($ARGV ne $current) {
print @lines;
undef @lines;
$current = $ARGV;
}
push @lines, $_;
shift @lines if $#lines == $skiplines;
}
print @lines;
Ok, I'm combining "push", which appends a value to an array, with "shift", which takes something from the beginning of an array. If you want a stack, you can use push/pop or shift/unshift. Mix them, and you have a queue. I keep my queue at the desired length using $#lines, which gives me the index of the last element in the array. You could also get the number of elements in @lines with scalar(@lines).
UNIQ
Now, uniq only eliminates repeated consecutive lines, which should be easy with what you have seen so far. So instead I'll eliminate all duplicated lines, consecutive or not:
my $current = "";
my %lines;
while(<>) {
if($ARGV ne $current) {
undef %lines;
$current = $ARGV;
}
print $_ unless defined($lines{$_});
$lines{$_} = "";
}
Now here I'm keeping the whole file in memory, inside %lines. The use of the % sigil indicates this is a hash table. I'm using the lines as keys, and storing nothing as the value -- as I have no interest in the values. I check whether the key exists with "defined($lines{$_})", which tests if the value associated with that key is defined or not; the keyword "unless" works just like "if", but with the opposite effect, so it only prints a line if that line has NOT been seen before.
Note, too, the syntax $lines{$_} = "" as a way to store something in a hash table. Note the use of {} for hash table, as opposed to [] for arrays.
WC
This will actually use a lot of stuff we have seen:
my $current;
my %lines;
my %words;
my %chars;
while(<>) {
$lines{"$ARGV"}++;
$chars{"$ARGV"} += length($_);
$words{"$ARGV"} += scalar(grep {$_ ne ""} split /\s/);
}
for my $file (keys %lines) {
print "$lines{$file} $words{$file} $chars{$file} $file\n";
}
Three new things. Two are the "+=" operator, which should be obvious, and the "for" expression. Basically, a "for" will assign each element of the array to the variable indicated. The "my" is there to declare the variable, though it's unneeded if declared previously. I could have an @array variable inside those parentheses. The "keys %lines" expression will return, as an array, the keys (the filenames) which exist in the hash table "%lines". The rest should be obvious.
The third thing, which I actually added only when revising the answer, is the "grep". The format here is:
grep { code } array
It will run "code" for each element of the array, passing the element as "$_". Then grep will return all elements for which the code evaluates to "true" (not 0, not "", etc). This avoids counting empty strings resulting from consecutive spaces.
Similar to "grep" there is "map". Instead of filtering, it returns an array formed by the results of "code" for each element. For example:
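my @words   = ("foo", "quux", "perl");
my @lengths = map { length $_ } @words;   # (3, 4, 4)
my @upper   = map { uc $_ } @words;       # ("FOO", "QUUX", "PERL")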
SORT
Finally, sort. This one is easy too:
my @lines;
my $current = "";
while(<>) {
if($ARGV ne $current) {
print sort @lines;
undef @lines;
$current = $ARGV;
}
push @lines, $_;
}
print sort @lines;
Here, "sort" will sort the array. Note that sort can receive a function to define the sorting criteria. For instance, if I wanted to sort numbers I could do this:
my @lines;
my $current = "";
while(<>) {
if($ARGV ne $current) {
print sort {$a <=> $b} @lines;
undef @lines;
$current = $ARGV;
}
push @lines, $_;
}
print sort {$a <=> $b} @lines;
Here "$a" and "$b" receive the elements to be compared. "<=>" returns -1, 0 or 1 depending on whether the number is less than, equal to or greater than the other. For strings, "cmp" does the same thing.
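For example, a case-insensitive string sort:
my @names = ("banana", "Apple", "cherry");
print join(" ", sort { lc($a) cmp lc($b) } @names), "\n";   # Apple banana cherry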
HANDLING FILES, DIRECTORIES & OTHER STUFF
As for the rest, basic mathematical expressions should be easy to understand. You can test certain conditions about files this way:
for my $file (@ARGV) {
print "$file is a file\n" if -f "$file";
print "$file is a directory\n" if -d "$file";
print "I can read $file\n" if -r "$file";
print "I can write to $file\n" if -w "$file";
}
I'm not trying to be exhaustive here; there are many other such tests. You can also use "glob" patterns, like shell's "*" and "?", like this:
for my $file (glob("*")) {
print $file;
print "*" if -x "$file" && ! -d "$file";
print "/" if -d "$file";
print "\t";
}
If you combine that with "chdir", you can emulate "find" as well:
sub list_dir($$) {
my ($dir, $prefix) = @_;
my $newprefix = $prefix;
if ($prefix eq "") {
$newprefix = $dir;
} else {
$newprefix .= "/$dir";
}
chdir $dir;
for my $file (glob("*")) {
print "$prefix/" if $prefix ne "";
print "$dir/$file\n";
list_dir($file, $newprefix) if -d "$file";
}
chdir "..";
}
list_dir(".", "");
Here we see, finally, a function. A function is declared with the syntax:
sub name (params) { code }
Strictly speaking, "(params)" is optional. The declared prototype I used, "($$)", means I'm receiving two scalar parameters. I could have "@" or "%" in there as well. The array "@_" holds all the parameters passed. The line "my ($dir, $prefix) = @_" is just a simple way of assigning the first two elements of that array to the variables $dir and $prefix.
This function does not return anything (it's a procedure, really), but you can have functions which return values just by adding "return something;" to it, and have it return "something".
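For example, a small function that returns a value:
sub join_path {
    my ($dir, $file) = @_;
    return "$dir/$file";
}
print join_path("/tmp", "notes.txt"), "\n";   # prints /tmp/notes.txt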
The rest of it should be pretty obvious.
MIXING EVERYTHING
Now I'll present a more involved example. I'll show some bad code to explain what's wrong with it, and then show better code.
For this first example, I have two files: names.txt, which has names and phone numbers, and systems.txt, with systems and the name of the person responsible for each. Here they are:
names.txt
John Doe, (555) 1234-4321
Jane Doe, (555) 5555-5555
The Boss, (666) 5555-5555
systems.txt
Sales, Jane Doe
Inventory, John Doe
Payment, That Guy
I want, then, to print the first file, with the system appended to the name of the person, if that person is responsible for that system. The first version might look like this:
#!/usr/bin/perl
use strict;
use warnings;
open FILE, "names.txt";
while(<FILE>) {
my ($name) = /^([^,]*),/;
my $system = get_system($name);
print $_ . ", $system\n";
}
close FILE;
sub get_system($) {
my ($name) = @_;
my $system = "";
open FILE, "systems.txt";
while(<FILE>) {
next unless /$name/o;
($system) = /([^,]*)/;
}
close FILE;
return $system;
}
This code won't work, though. Perl will complain that the function was used too early for the prototype to be checked, but that's just a warning. It will give an error on line 8 (the first while loop), complaining about a readline on a closed filehandle. What happened here is that "FILE" is global, so the function get_system is changing it. Let's rewrite it, fixing both things:
#!/usr/bin/perl
use strict;
use warnings;
sub get_system($) {
my ($name) = @_;
my $system = "";
open my $filehandle, "systems.txt";
while(<$filehandle>) {
next unless /$name/o;
($system) = /([^,]*)/;
}
close $filehandle;
return $system;
}
open FILE, "names.txt";
while(<FILE>) {
my ($name) = /^([^,]*),/;
my $system = get_system($name);
print $_ . ", $system\n";
}
close FILE;
This won't give any error or warnings, nor will it work. It prints just the systems, but not the names and phone numbers! What happened? Well, what happened is that we are making a reference to "$_" after calling get_system, but, by reading the file, get_system is overwriting the value of $_!
To avoid that, we'll make $_ local inside get_system. This will give it a local scope, and the original value will then be restored once returned from get_system:
#!/usr/bin/perl
use strict;
use warnings;
sub get_system($) {
my ($name) = @_;
my $system = "";
local $_;
open my $filehandle, "systems.txt";
while(<$filehandle>) {
next unless /$name/o;
($system) = /([^,]*)/;
}
close $filehandle;
return $system;
}
open FILE, "names.txt";
while(<FILE>) {
my ($name) = /^([^,]*),/;
my $system = get_system($name);
print $_ . ", $system\n";
}
close FILE;
And that still doesn't work! It prints a newline between the name and the system. Well, Perl reads the line including any newline it might have. There is a neat command which will remove newlines from strings, "chomp", which we'll use to fix this problem. And since not every name has a system, we might as well avoid printing the comma when that happens:
#!/usr/bin/perl
use strict;
use warnings;
sub get_system($) {
my ($name) = @_;
my $system = "";
local $_;
open my $filehandle, "systems.txt";
while(<$filehandle>) {
next unless /$name/o;
($system) = /([^,]*)/;
}
close $filehandle;
return $system;
}
open FILE, "names.txt";
while(<FILE>) {
my ($name) = /^([^,]*),/;
my $system = get_system($name);
chomp;
print $_;
print ", $system" if $system ne "";
print "\n";
}
close FILE;
That works, but it also happens to be horribly inefficient. We read the whole systems file for every line in the names file. To avoid that, we'll read all data from systems once, and then use that to process names.
Now, sometimes a file is so big you can't read it into memory. When that happens, you should try to read into memory any other file needed to process it, so that you can do everything in a single pass for each file. Anyway, here is the first optimized version of it:
#!/usr/bin/perl
use strict;
use warnings;
our %systems;
open SYSTEMS, "systems.txt";
while(<SYSTEMS>) {
my ($system, $name) = /([^,]*),(.*)/;
$systems{$name} = $system;
}
close SYSTEMS;
open NAMES, "names.txt";
while(<NAMES>) {
my ($name) = /^([^,]*),/;
chomp;
print $_;
print ", $systems{$name}" if defined $systems{$name};
print "\n";
}
close NAMES;
Unfortunately, it doesn't work. No system ever appears! What has happened? Well, let's look into what "%systems" contains, by using Data::Dumper:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
our %systems;
open SYSTEMS, "systems.txt";
while(<SYSTEMS>) {
my ($system, $name) = /([^,]*),(.*)/;
$systems{$name} = $system;
}
close SYSTEMS;
print Dumper(%systems);
open NAMES, "names.txt";
while(<NAMES>) {
my ($name) = /^([^,]*),/;
chomp;
print $_;
print ", $systems{$name}" if defined $systems{$name};
print "\n";
}
close NAMES;
The output will be something like this:
$VAR1 = ' Jane Doe';
$VAR2 = 'Sales';
$VAR3 = ' That Guy';
$VAR4 = 'Payment';
$VAR5 = ' John Doe';
$VAR6 = 'Inventory';
John Doe, (555) 1234-4321
Jane Doe, (555) 5555-5555
The Boss, (666) 5555-5555
Those $VAR1/$VAR2/etc. are how Dumper displays a hash table passed as a flat list. The odd-numbered entries are the keys, and the even-numbered entries that follow are the values. Now we can see that each name in %systems has a preceding space! Silly regex mistake; let's fix it:
#!/usr/bin/perl
use strict;
use warnings;
our %systems;
open SYSTEMS, "systems.txt";
while(<SYSTEMS>) {
my ($system, $name) = /^\s*([^,]*?)\s*,\s*(.*?)\s*$/;
$systems{$name} = $system;
}
close SYSTEMS;
open NAMES, "names.txt";
while(<NAMES>) {
my ($name) = /^\s*([^,]*?)\s*,/;
chomp;
print $_;
print ", $systems{$name}" if defined $systems{$name};
print "\n";
}
close NAMES;
So, here, we are aggressively removing any spaces from the beginning or end of name and system. There are other ways to form that regex, but that's beside the point. There is still one problem with this script, which you'll have seen if your "names.txt" and/or "systems.txt" files have an empty line at the end. The warnings look like this:
Use of uninitialized value in hash element at ./exemplo3e.pl line 10, <SYSTEMS> line 4.
Use of uninitialized value in hash element at ./exemplo3e.pl line 10, <SYSTEMS> line 4.
John Doe, (555) 1234-4321, Inventory
Jane Doe, (555) 5555-5555, Sales
The Boss, (666) 5555-5555
Use of uninitialized value in hash element at ./exemplo3e.pl line 19, <NAMES> line 4.
What happened here is that nothing went into the "$name" variable when the empty line was processed. There are many ways around that, but I chose the following:
#!/usr/bin/perl
use strict;
use warnings;
our %systems;
open SYSTEMS, "systems.txt" or die "Could not open systems.txt!";
while(<SYSTEMS>) {
my ($system, $name) = /^\s*([^,]+?)\s*,\s*(.+?)\s*$/;
$systems{$name} = $system if defined $name;
}
close SYSTEMS;
open NAMES, "names.txt" or die "Could not open names.txt!";
while(<NAMES>) {
my ($name) = /^\s*([^,]+?)\s*,/;
chomp;
print $_;
print ", $systems{$name}" if defined($name) && defined($systems{$name});
print "\n";
}
close NAMES;
The regular expressions now require at least one character for name and system, and we test to see if "$name" is defined before we use it.
CONCLUSION
Well, then, these are the basic tools to translate a shell script. You can do MUCH more with Perl, but that was not your question, and it wouldn't fit here anyway.
Just as a basic overview of some important topics,
A Perl script that might be attacked by hackers needs to be run with the -T (taint) option, so that Perl will complain about any untrusted input which has not been properly handled.
There are libraries, called modules, for database access, XML handling, Telnet, HTTP, and other protocols. In fact, there are myriads of modules which can be found on CPAN.
As mentioned by someone else, if you make use of AWK or SED, you can translate those into Perl with A2P and S2P.
Perl can be written in an Object Oriented way; a bare-bones sketch follows this list.
There are multiple versions of Perl. As of this writing, the stable one is 5.8.8 and there is a 5.10.0 available. There is also a Perl 6 in development, but experience has taught everyone not to wait too eagerly for it.
There is a free, good, hands-on, hard & fast book about Perl called Learning Perl The Hard Way. Its style is similar to this very answer. It might be a good place to go from here.
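Since Object Oriented Perl was mentioned above, here is a bare-bones sketch of the classic bless-based style (the class and methods are made up for illustration):
package Counter;
sub new       { my ($class) = @_; return bless { count => 0 }, $class; }
sub increment { my ($self)  = @_; return ++$self->{count}; }

package main;
my $c = Counter->new;
$c->increment;
print $c->increment, "\n";   # prints 2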
I hope this helped.
DISCLAIMER
I'm NOT trying to teach Perl, and you will need to have at least some reference material. There are guidelines to good Perl habits, such as using "use strict;" and "use warnings;" at the beginning of the script, to make it less lenient of badly written code, or using STDOUT and STDERR on the print lines, to indicate the correct output pipe.
This is stuff I agree with, but I decided it would detract from the basic goal of showing patterns for common shell script utilities.
I don't know what's in your shell script, but don't forget there are tools like
a2p - awk-to-perl
s2p - sed-to-perl
and perhaps more. Worth taking a look around.
You may find that due to Perl's power/features, it's not such a big job, in that you may have been jumping through hoops with various bash features and utility programs to do something that comes out of Perl natively.
Like any migration project, it's useful to have some canned regression tests to run with both solutions, so if you don't have those, I'd generate those first.
I'm surprised no-one has yet mentioned the Shell module that is included with core Perl, which lets you execute external commands using function-call syntax. For example (adapted from the synopsis):
use Shell qw(cat ps cp);
$passwd = cat '</etc/passwd';
@pslines = ps '-ww';
cp "/etc/passwd", "/tmp/passwd";
Provided you use parens, you can even call other programs in the $PATH that you didn't mention on the use line, e.g.:
gcc('-o', 'foo', 'foo.c');
Note that Shell gathers up the subprocess's STDOUT and returns it as a string or array. This simplifies scripting, but it is not the most efficient way to go and may cause trouble if you rely on a command's output being unbuffered.
The module docs mention some shortcomings, such as that shell internal commands (e.g. cd) cannot be called using the same syntax. In fact they recommend that the module not be used for production systems! But it could certainly be a helpful crutch to lean on until you get your code ported across to "proper" Perl.
The inline shell thingy is called system. If you have user-defined functions you're trying to expose to Perl, you're out of luck. However, you can run short bits of shell using the same environment as your running Perl program. You can also gradually replace parts of the shell script with Perl. Start writing a module that replicates the shell script functionality and insert Perly bits into the shell script until you eventually have mostly Perl.
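For instance, a single shell step can be kept as-is while the rest of the logic moves to Perl (the command and file names below are only placeholders):
my $status = system('sort -u data.txt > sorted.txt');
die "shell step failed\n" if $status != 0;   # system returns the child's exit status; 0 means success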
There's no shell-to-Perl translator. There was a long-running joke about a csh-to-Perl translator that you could email your script to, but that was really just Tom Christiansen translating it for you to show you how cool Perl was back in the early 90s. Randal Schwartz uploaded a sh-to-Perl translator, but you have to check the upload date: it was April Fool's Day. His script merely wrapped everything in system.
Whatever you do, don't lose the original shell script. :)
I agree that learning Perl and trying to write Perl instead of shell is for the greater good. I did the transfer once with the help of the "Replace" function of Notepad++.
However, I had a similar problem to the one initially asked while I was trying to create a Perl wrapper around a shell script (that could execute it).
I came with the following code that works in my case.
It might help.
#!perl
use strict;
use Data::Dumper;
use Cwd;
#Variables read from shell
our %VAR;
open SH, "<$ARGV[0]" or die "Error while trying to read $ARGV[0] ($!)\n";
my @SH=<SH>;
close SH;
sh2perl(@SH);
#Subroutine to execute shell from Perl (read from array)
sub sh2perl {
#Variables
my %case; #To store data from conditional block of "case"
my %if; #To store data from conditional block of "if"
foreach my $line (@_) {
#Remove blanks at the beginning and EOL character
$line=~s/^\s*//;
chomp $line;
#Comments and blank lines
if ($line=~/^(#.*|\s*)$/) {
#Do nothing
}
#Conditional block - Case
elsif ($line=~/case.*in/..$line=~/esac/) {
if ($line=~/case\s*(.*?)\s*\in/) {
$case{'var'}=transform($1);
} elsif ($line=~/esac/) {
delete $case{'curr_pattern'};
#Run conditional block
my $case;
map { $case=$_ if $case{'var'}=~/$_/ } @{$case{'list_patterns'}};
$case ? sh2perl(@{$case{'patterns'}->{$case}}) : sh2perl(@{$case{'patterns'}->{"*"}});
} elsif ($line=~/^\s*(.*?)\s*\)/) {
$case{'curr_pattern'}=$1;
push(@{$case{'list_patterns'}}, $case{'curr_pattern'}) unless ($line=~m%\*\)%)
} else {
push(@{$case{'patterns'}->{ $case{'curr_pattern'} }}, $line);
}
}
#Conditional block - if
elsif ($line=~/^if/..$line=~/^fi/) {
if ($line=~/if\s*\[\s*(.*\S)\s*\];/) {
$if{'condition'}=transform($1);
$if{'curr_cond'}="TRUE";
} elsif ($line=~/fi/) {
delete $if{'curr_cond'};
#Run conditional block
$if{'condition'} ? sh2perl(@{$if{'TRUE'}}) : sh2perl(@{$if{'FALSE'}});
} elsif ($line=~/^else/) {
$if{'curr_cond'}="FALSE";
} else {
push(@{$if{ $if{'curr_cond'} }}, $line);
}
}
#echo
elsif($line=~/^echo\s+"?(.*?[^"])"?\s*$/) {
my $str=$1;
#echo with redirection
if ($str=~m%[>\|]%) {
eval { system(transform($line)) };
if ($@) { warn "Error while evaluating $line: $@\n"; }
#print new line
} elsif ($line=~/^echo ""$/) {
print "\n";
#default
} else {
print transform($str),"\n";
}
}
#cd
elsif($line=~/^\s*cd\s+(.*)/) {
chdir $1;
}
#export
elsif($line=~/^export\s+((\w+).*)/) {
my ($var,$exported)=($2,$1);
if ($exported=~/^(\w+)\s*=\s*(.*)/) {
while($exported=~/(\w+)\s*=\s*"?(.*?\S)"?\s*(;(?:\s*export\s+)?|$)/g) { $VAR{$1}=transform($2); }
}
# export($var,$VAR{$var});
$ENV{$var}=$VAR{$var};
print "Exported variable $var = $VAR{$var}\n";
}
#Variable assignment
elsif ($line=~/^(\w+)\s*=\s*(.*)$/) {
$1 eq "" or $VAR{$1}=""; #Empty variable
while($line=~/(\w+)\s*=\s*"?(.*?\S)"?\s*(;|$)/g) {
$VAR{$1}=transform($2);
}
}
#Source
elsif ($line=~/^source\s*(.*\.sh)/) {
open SOURCE, "<$1" or die "Error while trying to open $1 ($!)\n";
my @SOURCE=<SOURCE>;
close SOURCE;
sh2perl(@SOURCE);
}
#Default (assuming running command)
else {
eval { map { system(transform($_)) } split(";",$line); };
if ($@) { warn "Error while doing system on \"$line\": $@\n"; }
}
}
}
sub transform {
my $src=$_[0];
#Variables $1 and similar
$src=~s/\$(\d+)/$ARGV[$1-1]/ge;
#Commands stored in variables "$(<cmd>)"
eval {
while ($src=~m%\$\((.*)\)%g) {
my ($cmd,$new_cmd)=($1,$1);
my $curr_dir=getcwd;
$new_cmd=~s/pwd/echo $curr_dir/g;
$src=~s%\$\($cmd\)%`$new_cmd`%e;
chomp $src;
}
};
if ($@) { warn "Wrong assessment for variable $_[0]:\n=> $@\n"; return "ERROR"; }
#Other variables
$src=~s/\$(\w+)/$VAR{$1}/g;
#Backticks
$src=~s/`(.*)`/`$1`/e;
#Conditions
$src=~s/"(.*?)"\s*==\s*"(.*?)"/"$1" eq "$2" ? 1 : 0/e;
$src=~s/"(.*?)"\s*!=\s*"(.*?)"/"$1" ne "$2" ? 1 : 0/e;
$src=~s/(\S+)\s*==\s*(\S+)/$1 == $2 ? 1 : 0/e;
$src=~s/(\S+)\s*!=\s*(\S+)/$1 != $2 ? 1 : 0/e;
#Return Result
return $src;
}
You could start your "Perl" script with:
#!/bin/bash
Then, assuming bash was installed at that location, perl would automatically invoke the bash interpreter to run it.
Edit: If the script is run directly (./script), the OS handles the #! line itself and Perl is never involved. And perlrun documents that when you do run perl script, Perl execs the interpreter named on a non-perl #! line instead of itself, so either way the code ends up being run by bash.

What are some elegant features or uses of Perl?

What? Perl Beautiful? Elegant? He must be joking!
It's true, there's some ugly Perl out there. And by some, I mean lots. We've all seen it.
Well duh, it's symbol soup. Isn't it?
Yes there are symbols. Just like 'math' has 'symbols'. It's just that we programmers are more familiar with the standard mathematical symbols. We grew to accept the symbols from our mother languages, whether that be ASM, C, or Pascal. Perl just decided to have a few more.
Well, I think we should get rid of all the unnecessary symbols. Makes the code look better.
The language for doing so already exists. It's called Lisp. (and soon, perl 6.)
Okay, smart guy. Truth is, I can already invent my own symbols. They're called functions and methods. Besides, we don't want to reinvent APL.
Oh, fake alter ego, you are so funny! It's really true, Perl can be quite beautiful. It can be quite ugly, as well. With Perl, TIMTOWTDI.
So, what are your favorite elegant bits of Perl code?
Perl facilitates the use of lists/hashes to implement named parameters, which I consider very elegant and a tremendous aid to self-documenting code.
my $result = $obj->method(
flux_capacitance => 23,
general_state => 'confusion',
attitude_flags => ATTITUDE_PLEASANT | ATTITUDE_HELPFUL,
);
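On the receiving side, the method just unpacks the trailing list into a hash, which is what makes defaults so pleasant; this body is a hypothetical sketch, not part of the original answer:
sub method {
    my ($self, %args) = @_;
    # named parameters arrive as an ordinary hash, so defaults are one line each
    my $flux  = exists $args{flux_capacitance} ? $args{flux_capacitance} : 0;
    my $state = exists $args{general_state}    ? $args{general_state}    : 'calm';
    return "flux=$flux, state=$state";
}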
My favourite pieces of elegant Perl code aren't necessarily elegant at all. They're meta-elegant, and allow you to get rid of all those bad habits that many Perl developers have slipped into. It would take me hours or days to show them all in the detail they deserve, but as a short list they include:
autobox, which turns Perl's primitives into first-class objects.
autodie, which causes built-ins to throw exceptions on failure (removing most needs for the or die... construct); a short sketch follows this list. See also my autodie blog and video.
Moose, which provides an elegant, extensible, and correct way of writing classes in Perl.
MooseX::Declare, which provides syntactic awesomeness when using Moose.
Perl::Critic, your personal, automatic, extensible and knowledgeable code reviewer. See also this Perl-tip.
Devel::NYTProf, which provides me the most detailed and usable profiling information I've seen in any programming language. See also Tim Bunce's Blog.
PAR, the Perl Archiver, for bundling distributions and even turning whole programs into stand-alone executable files. See also this Perl-tip.
Perl 5.10, which provides some stunning regexp improvements, smart-match, the switch statement, defined-or, and state variables.
Padre, the only Perl editor that integrates the best bits of the above, is cross-platform, and is completely free and open source.
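As a tiny illustration of the autodie item above (the filename is made up for the example):
use autodie;                      # open, close, etc. now throw exceptions on failure
open my $fh, '<', 'config.txt';   # no "or die ..." needed; a failure raises an exception
my @lines = <$fh>;
close $fh;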
If you're too lazy to follow links, I recently did a talk at Linux.conf.au about most of the above. If you missed it, there's a video of it on-line (ogg theora). If you're too lazy to watch videos, I'm doing a greatly expanded version of the talk as a tutorial at OSCON this year (entitled doing Perl right).
All the best,
Paul
I'm surprised no one mentioned the Schwartzian Transform.
my @sorted =
map { $_->[0] }
sort { $a->[1] <=> $b->[1] }
map { [ $_, expensive_func($_) ] }
@elements;
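As a concrete (if contrived) worked example, with length() standing in for expensive_func() so it is only called once per element:
my @words  = qw(kangaroo ox emu wallaby);
my @by_len =
    map  { $_->[0] }
    sort { $a->[1] <=> $b->[1] }
    map  { [ $_, length $_ ] }    # length() plays the role of expensive_func()
    @words;
# @by_len is now ("ox", "emu", "wallaby", "kangaroo")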
And in the absence of a slurp operator,
my $file = do { local $/; readline $fh };
Have a list of files the user wants your program to process? Don't want to accidentally process a program, folder, or nonexistent file? Try this:
@files = grep { -T } @files;
And, like magic, you've weeded out all the inappropriate entries. Don't want to ignore them silently? Add this line before the last one:
warn "Not a file: $_" foreach grep { !-T } #files;
Prints a nice warning message for every file that it can't process to standard error. The same thing without using grep would look like this:
my @good;
foreach (@files) {
if(-T) {
push @good, $_;
} else {
warn "Not a file: $_";
}
}
grep (and map) can be used to make code shorter while still keeping it very readable.
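map gives the same kind of compression when the goal is to transform the list rather than filter it; for instance, a hypothetical follow-on to the @files example:
my @sizes = map { -s $_ } @files;   # file size in bytes for each remaining file
which replaces another four-line foreach/push loop.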
The "or die" construct:
open my $fh, "<", $filename
or die "could not open $filename: $!";
The use of qr// to create grammars:
#!/usr/local/ActivePerl-5.10/bin/perl
use strict;
use warnings;
use feature ':5.10';
my $non_zero = qr{[1-9]};
my $zero = qr{0};
my $decimal = qr{[.]};
my $digit = qr{$non_zero+ | $zero}x;
my $non_zero_natural = qr{$non_zero+ $digit*}x;
my $natural = qr{$non_zero_natural | $zero}x;
my $integer = qr{-? $non_zero_natural | $zero}x;
my $real = qr{$integer (?: $decimal $digit)?}x;
my %number_types = (
natural => qr/^$natural$/,
integer => qr/^$integer$/,
real => qr/^$real$/
);
for my $n (0, 3.14, -5, 300, "4ever", "-0", "1.2.3") {
my @types = grep { $n =~ $number_types{$_} } keys %number_types;
if (@types) {
say "$n is of type", @types == 1 ? " " : "s ", "@types";
} else {
say "$n is not a number";
}
}
Anonymous subroutines used to factor out duplicate code:
my $body = sub {
#some amount of work
};
$body->();
$body->() while $continue;
instead of
#some amount of work
while ($continue) {
#some amount of work again
}
Hash based dispatch tables:
my %dispatch = (
foo => \&foo,
bar => \&bar,
baz => \&baz
);
while (my $name = iterator()) {
die "$name not implemented" unless exists $dispatch{$name};
$dispatch{$name}->();
}
instead of
while (my $name = iterator()) {
if ($name eq "foo") {
foo();
} elsif ($name eq "bar") {
bar();
} elsif ($name eq "baz") {
baz();
} else {
die "$name not implemented";
}
}
Three-line classes with constructors, getter/setters and type validation:
{
package Point;
use Moose;
has ['x', 'y'] => (isa => 'Num', is => 'rw');
}
package main;
my $point = Point->new( x => '8', y => '9' );
$point->x(25);
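The type validation shows up as soon as you hand the Num-constrained attribute something that is not a number; wrapping the call in eval (my own addition) keeps the demonstration from aborting:
eval { $point->x('not a number') };
print "rejected: $@" if $@;   # Moose reports that the value fails the 'Num' type constraint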
A favorite example of mine is Perl's implementation of a factorial calculator. In Perl 5, it looks like so:
use List::Util qw/reduce/;
sub factorial {
reduce { $a * $b } 1 .. $_[0];
}
This returns undef (false) if the argument is less than 1, which includes strings that don't start with a number since they numify to 0; for numeric arguments it returns the factorial, rounding the argument down first if it is a fraction.
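A quick check of that behaviour (the calls are my own illustration):
print factorial(5), "\n";     # 120
print factorial(3.7), "\n";   # 6 -- the range 1 .. 3.7 stops at 3
print defined factorial(0) ? "a number" : "undef", "\n";   # undef -- reduce over an empty list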
And looking forward to Perl 6, it looks like this:
sub factorial {
[*] 1..$^x
}
And also (from the blog in the link above) you can even implement this as an operator:
sub postfix:<!>(Int $x) {
[*] 1..($x || 1)
}
and then use it in your code like so:
my $fact5 = 5!;
If you have a comma separated list of flags, and want a lookup table for them, all you have to do is:
my %lookup = map { $_ => 1 } split /,/, $flags;
Now you can simply test for which flags you need like so:
if ( $lookup{FLAG} ) {
print "Ayup, got that flag!";
}
I am surprised no one has mentioned this. It's a masterpiece in my opinion:
#!/usr/bin/perl
$==$';
$;||$.| $|;$_
='*$ ( ^#(%_+&~~;# ~~/.~~
;_);;.);;#) ;~~~~;_,.~~,.* +,./|~
~;_);#-, .;.); ~ ~,./##-__);#-);~~,.*+,.
/|);;;~~#-~~~~;.~~,. /.);;.,./#~~#-;.;#~~#-;;
;;,.*+,./.);;#;./#,./ |~~~~;#-(#-__#-__&$#%^';$__
='`'&'&';$___="````" |"$[`$["|'`%",';$~=("$___$__-$[``$__"|
"$___"| ("$___$__-$[.%")).("'`"|"'$["|"'#").
'/.*?&([^&]*)&.*/$'.++$=.("/``"|"/$[`"|"/#'").(";`/[\\`\\`$__]//`;"
|";$[/[\\$[\\`$__]//`;"|";#/[\\\$\\.$__]//'").'#:=("#-","/.",
"~~",";#",";;",";.",",.",");","()","*+","__","-(","/#",".%","/|",
";_");#:{#:}=$%..$#:;'.('`'|"$["|'#')."/(..)(..)/".("```"|"``$["|
'#("').'(($:{$'.$=.'}<<'.(++$=+$=).')|($:{$'.$=.'}))/'.("```;"|
"``$[;"|"%'#;").("````'$__"|"%$[``"|"%&!,").${$[};`$~$__>&$=`;$_=
'*$(^#(%_+&#-__~~;#~~#-;.;;,.(),./.,./|,.-();;#~~#-);;;,.;_~~#-,./.,
./#,./#~~#-);;;,.(),.;.~~#-,.,.,.;_,./#,.-();;#~~#-,.;_,./|~~#-,.
,.);););#-#-__~~;#~~#-,.,.,.;_);~~~~#-);;;,.(),.*+);;# ~~#-,
./|,.*+,.,.);;;);*+~~#-,.*+,.;;,.;.,./.~~#-,.,.,.;_) ;~~~
~#-,.;;,.;.,./#,./.);*+,.;.,.;;#-__~~;#~~#-,.;;,.* +);;
#);#-,./#,./.);*+~~#-~~.%~~.%~~#-;;__,. /.);;##- __#-
__ ~~;;);/#;#.%;#/.;#-(#-__~~;;;.;_ ;#.%~~~~ ;;()
,.;.,./#,. /#,.;_~~#- ););,.;_ );~~,./ #,.
;;;./#,./| ~~~~;#-(#- __,.,.,. ;_);~~~ ~#
-~~());; #);#-,./#, .*+);;; ~~#-~~
);~~);~~ *+~~#-);-( ~~#-#-_ _~~#-
~~#-);; #,./#,.;., .;.);# -~~#-;
#/.;#-( ~~#-#-__ ~~#-~~ #-);#
-);~~, .*+,./ |);;;~ ~#-~~
;;;.; _~~#-# -__);. %;#-(
#-__# -__~~;# ~~#-;; ;#,.
;_,.. %);#-,./#, .*+,
..%, .;.,./|) ;;;)
;;#~ ~#-,.*+,. ,.~~
#-); *+,.;_);;.~ ~););
~~,.; .~~#-);~~,.;., ./.,.;
;,.*+ ,./|,.); ~~#- );;;,.(
),.*+); ;#~~/|#-
__~~;#~~ $';$;;
I absolutely love Black Perl (link to version rewritten to compile under Perl 5). It compiles, but as far as I can tell it doesn't actually do anything.
That's what you get for a language written by a linguist from a pragmatic perspective rather than from a theoretical perspective.
Moving on from that, you can think about the Perl that people complain about as pidgin Perl (perfectly useful, but not expressive, and beware of trying to express anything complex in it), and the stuff that @pjf is talking about as "proper" Perl, the language of Shakespeare, Hemingway, Hume and so on. [edit: err, though easier to read than Hume and less dated than Shakespeare.] [re-edit and hopefully less alcoholic than Hemingway]
Adding to the love of map and grep, we can write a simple command-line parser.
my %opts = map { $_ => 1 } grep { /^-/ } @ARGV;
If we want, we can set each flag to its index in @ARGV:
my %opts = map { $ARGV[$_] => $_ } grep { $ARGV[$_] =~ /^-/ } 0 .. $#ARGV;
That way, if a flag has an argument, we can get the argument like this:
if( defined( $opts{-e} ) ) {
my $arg = $ARGV[ $opts{-e} ];
# do -e stuff for $arg
}
Of course, some people will cry that we're reinventing the wheel and we should use getopt or some variant thereof, but honestly, this was a fairly easy wheel to reinvent. Plus, I don't like getopt.
If you don't like how long some of those lines are, you can always use intermediate variables or just convenient line breaks (hey, Python fanatics? You hear that? We can put one line of code across two lines and it still works!) to make it look better:
my %opts = map { $ARGV[$_] => $_ }
grep { $ARGV[$_] =~ /^-/ } 0 .. $#ARGV;
This file parsing mechanism is compact and easy to customize (skip blank lines, skip lines starting with X, etc.).
open(H_CONFIG, "< $file_name") or die("Error opening file: $file_name! ($!)");
while (<H_CONFIG>)
{
chomp; # remove the trailing newline
next if $_ =~ /^\s*$/; # skip lines that are blank
next if $_ =~ /^\s*#/; # skip lines starting with comments
# do something with the line
}
I use this type of construct in diverse build situations, where I need to either pre- or post-process payload files (S-records, etc.) or C files, or gather directory information for a 'smart build'.
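One possible customization, sketched under the assumption that the file holds simple key=value pairs (the original snippet does not specify a format):
my %config;
while (<H_CONFIG>)
{
   chomp;                    # remove the trailing newline
   next if /^\s*$/;          # skip blank lines
   next if /^\s*#/;          # skip comment lines
   my ($key, $value) = split /\s*=\s*/, $_, 2;
   $config{$key} = $value;
}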
My favourite elegant Perl feature is that it uses different operators for numerical values and string values.
my $string = 1 . 2;          # "12" -- . always concatenates
my $number = "1" + "2";      # 3    -- + always adds
my $unambiguous = 1 . "2";   # "12"
Compare this to other dynamic languages such as JavaScript, where "+" is used for concatenation and addition.
var string = "1" + "2";
var number = 1 + 2;
var ambiguous = 1 + "2";
Or to dynamic languages such as Python and Ruby that require type coercion between strings and numerical values.
string = "1" + "2"
number = 1 + 2
throws_exception = 1 + "2"
In my opinion Perl gets this so right and the other languages get it so wrong.
Poorer typists like me who get cramps hitting the shift key too often and have an almost irrational fear of using a semicolon started writing our Perl code in python formatted files. :)
e.g.
>>> k = 5
>>> reduce(lambda i,j: i*j, range(1,k+1),1)
120
>>> k = 0
>>> reduce(lambda i,j: i*j, range(1,k+1),1)
1