In Perl, how can I implement map using grep?

Some time ago I was asked the “strange” question of how I would implement map with grep.
Today I tried to do it, and here is what came out. Did I squeeze everything out of Perl, or are there other, more clever hacks?
#!/usr/bin/env perl
use strict;
use warnings;
use 5.010;

sub my_map (&@) {
    grep { $_ = $_[0]->($_) } @_[1..$#_];
}

my @arr = (1,2,3,4);

# list context
say (my_map sub {$_+1}, @arr);

# scalar context
say "".my_map {$_+1} @arr;

say "the array from outside: @arr";
say "builtin map:", (map {$_+1} @arr);

Are you sure they didn't ask how to implement grep with map? That's actually useful sometimes.
grep { STMTs; EXPR } LIST
can be written as
map { STMTs; EXPR ? $_ : () } LIST
(With one difference: grep returns lvalues, and map doesn't.)
Knowing this, one can compact
map { $_ => 1 } grep { defined } @list
to
map { defined ? ($_ => 1) : () } @list
(I prefer the "uncompressed" version, but the "compressed" version is probably a little faster.)
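To see the equivalence in action, here is a small check with made-up data; both forms build the same lookup hash and skip the undefined entries:
my @list = (1, undef, 2, undef, 3);
my %uncompressed = map { $_ => 1 } grep { defined } @list;
my %compressed   = map { defined ? ($_ => 1) : () } @list;
print join(',', sort keys %uncompressed), "\n";   # 1,2,3
print join(',', sort keys %compressed),   "\n";   # 1,2,3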
As for implementing map using grep, well, you can take advantage of grep's looping and aliasing properties.
map { STMTs; EXPR } LIST
can be written as
my @rv;
grep { STMTs; push @rv, EXPR } LIST;
@rv
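To illustrate the aliasing property mentioned above, a minimal sketch with made-up data: the list grep returns aliases the original elements, so modifying it in a foreach writes straight through to the source array (doing the same through map's return list would only touch copies):
my @nums = (1, 2, 3, 4);
$_ *= 10 for grep { $_ % 2 } @nums;   # the odd elements are modified in place
print "@nums\n";                      # 10 2 30 4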

My attempt at this largely pointless academic exercise is:
sub my_map (&@) {
    my $code = shift;
    my @return_list;
    grep {
        push @return_list, $code->($_);
    } @_;
    return @return_list;
}
Using grep for this is a bit of a waste, because the return list of map may not be 1:1 with the input list (as in my %hash = map { $_ => 1 } @array;), so you need something more generic than grep's return list. The upshot is that any looping construct that lets you build up the result list would work:
sub my_map (&@) {
    my $code = shift;
    my @return_list;
    push @return_list, $code->($_) for @_;
    return @return_list;
}
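A usage sketch with made-up data: because the block may return any number of values per element, the hash-building case mentioned above works just as well as a plain one-to-one mapping:
my @array = qw(a b c);
my %hash  = my_map { $_ => 1 } @array;    # (a => 1, b => 1, c => 1)
my @cubes = my_map { $_ ** 3 } 1 .. 4;    # (1, 8, 27, 64)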

I'm not entirely sure what you mean (what aspect of map is to be emulated by grep), but a typical map scenario could be, e.g., y = x**3:
...
my @list1 = (1,2,3,4,5);
my @list2 = map $_**3, @list1;
...
With grep, if you had to, you could almost make it look like map (but destroying the original list):
...
my @list2 = grep { $_ **= 3; 1 } @list1;
...
by simply writing through the aliases to the original list elements. The problem here is the unwanted modification of the original list; that is something map would never do.
Therefore, we could just generate another list in a subroutine, modify that one, and leave the original list untouched. In a slight modification of Sinan Ünür's solution, this would read:
sub gap (&@) {
    my $code = shift;
    my @list = @_;
    grep $_ = $code->($_), @list;
    @list
}
my @arr = (1 .. 5);
# original map
print join ',', map { $_**3 } @arr;
# 1,8,27,64,125
# grep map
print join ',', gap { $_**3 } @arr;
# 1,8,27,64,125
# test original array
print join ',', @arr;
# 1,2,3,4,5 => untouched
Regards
rbo

Related

Use of references to elements in @_ to avoid duplicating code

Is it safe to take references to elements of @_ in a subroutine in order to avoid duplicating code? I also wonder whether the following is good practice or can be simplified. I have a subroutine mod_str that takes an option saying whether a string argument should be modified in place or not:
use feature qw(say);
use strict;
use warnings;

my $str = 'abc';
my $mstr = mod_str( $str, in_place => 0 );
say $mstr;
mod_str( $str, in_place => 1 );
say $str;

sub mod_str {
    my %opt;
    %opt = @_[1..$#_];
    if ( $opt{in_place} ) {
        $_[0] =~ s/a/A/g;
        # .. do more stuff with $_[0]
        return;
    }
    else {
        my $str = $_[0];
        $str =~ s/a/A/g;
        # .. do more stuff with $str
        return $str;
    }
}
In order to avoid repeating/duplicating code in the if and else blocks above, I tried to improve mod_str:
sub mod_str {
    my %opt;
    %opt = @_[1..$#_];
    my $ref;
    my $str;
    if ( $opt{in_place} ) {
        $ref = \$_[0];
    }
    else {
        $str = $_[0]; # make copy
        $ref = \$str;
    }
    $$ref =~ s/a/A/g;
    # .. do more stuff with $$ref
    $opt{in_place} ? return : return $$ref;
}
The "in place" flag changes the function's interface to the point where it should be a new function. It will simplify the interface, testing, documentation and the internals to have two functions. Rather than having to parse arguments and have a big if/else block, the user has already made that choice for you.
Another way to look at it is that the in_place option will always be set to a constant. Because it fundamentally changes how the function behaves, there's no sensible case where you'd write in_place => $flag.
Once you do that, the reuse becomes more obvious. Write one function to do the operation in place. Write another which calls that on a copy.
sub mod_str_in_place {
    # ...Do work on $_[0]...
    return;
}

sub mod_str {
    my $str = $_[0];  # string is copied
    mod_str_in_place($str);
    return $str;
}
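A usage sketch, on the assumption that mod_str_in_place performs the s/a/A/g substitution from the question on its argument:
my $s = 'abcabc';
my $modified = mod_str($s);    # $s stays 'abcabc'; $modified is 'AbcAbc'
mod_str_in_place($s);          # $s is now 'AbcAbc'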
In the absence of the disgraced given, I like using for as a topicalizer. This effectively aliases $_ to either $_[0] or the local copy, depending on the value of the in_place hash element. It's directly comparable to your $ref, but with aliases, and a lot cleaner.
I see no reason to return a useless undef / () in the case that the string is modified in place; the subroutine may as well return the new value of the string. (I suspect the old value might be more useful, after the fashion of $x++, but that makes for uglier code!)
I'm not sure whether this is readable code to anyone but me, so comments are welcome!
use strict;
use warnings;

my $ss = 'abcabc';
printf "%s %s\n", mod_str($ss), $ss;

$ss = 'abcabc';
printf "%s %s\n", mod_str($ss, in_place => 1), $ss;

sub mod_str {
    my ($copy, %opt) = @_;
    for ( $opt{in_place} ? $_[0] : $copy ) {
        s/a/A/g;
        # .. do more stuff with $_
        return $_;
    }
}
output
AbcAbc abcabc
AbcAbc AbcAbc

Creating subs on the fly from eval-ed string in perl

I need to transform data structures from a list of arrays into a tree-like one. I know the depth of the tree before I start processing the data, but I want to keep things flexible so I can re-use the code.
So I landed upon the idea of generating a subref on the fly (from within a Moose-based module) to go from array to tree. Like this (in a simplified way):
use Data::Dump qw/dump/;

sub create_tree_builder {
    my $depth = shift;
    return eval join '', 'sub { $_[0]->{$_[',
        join(']}->{$_[', (1..$depth)),
        ']} = $_[', $depth + 1 , '] }';
}

my $s = create_tree_builder(5);
my $tree = {};
$s->($tree, qw/one two three four five/, 'a value');
print dump $tree;
# prints
# {
# one => { two => { three => { four => { five => "a value" } } } },
# }
This opened up worlds to me, and I'm finding cool uses for this process of eval-ing a parametrically generated string into a function all over the place (clearly, a solution in search of problems).
However, it feels a little too good to be true, almost.
Any advice against this practice? Or suggestion for improvements?
I can see clearly that eval-ing arbitrary input might not be the safest thing, but what else?
Follow up
Thanks for all the answers. I used amon's code and benchmarked a bit, like this:
use Benchmark qw(:all);

$\ = "\n";

sub create_tree_builder {
    my $depth = shift;
    return eval join '', 'sub { $_[0]->{$_[',
        join(']}->{$_[', (1..$depth)),
        ']} = $_[', $depth + 1 , '] }';
}

my $s = create_tree_builder(5);

$t = sub {
    $_[0] //= {};
    my ($tree, @keys) = @_;
    my $value = pop @keys;
    $tree = $tree->{shift @keys} //= {} while @keys > 1;
    $tree->{$keys[0]} = $value;
};

cmpthese(900000, {
    'eval'  => sub { $s->($tree, qw/one two three four five/, 'a value') },
    'build' => sub { $t->($tree, qw/one two three four five/, 'a value') },
});
The results (rates, higher is better) are clearly in favour of the eval'ed factory rather than the generic tree builder:
           Rate build  eval
build  326087/s    --  -79%
eval  1525424/s  368%    --
I'll admit I could have done that before. I'll try with more random trees (rather than assigning the same element over and over) but I see no reason that the results should be different.
Thanks a lot for the help.
It is very easy to write a generalized subroutine to build such a nested hash. It is much simpler that way than writing a factory that will produce such a subroutine for a specific number of hash levels.
use strict;
use warnings;

sub tree_assign {
    # Create an empty tree if one was not given, using an alias to the original argument
    $_[0] //= {};
    my ($tree, @keys) = @_;
    my $value = pop @keys;
    $tree = $tree->{shift @keys} //= {} while @keys > 1;
    $tree->{$keys[0]} = $value;
}

tree_assign(my $tree, qw/one two three four five/, 'a value');

use Data::Dump;
dd $tree;
output
{
one => { two => { three => { four => { five => "a value" } } } },
}
Why this might be a bad idea
Maintainability.
Code that is eval'd has to be eval'd inside the programmer's head first, and that is not always an easy task. Essentially, eval'ing is obfuscation.
Speed.
eval re-runs the Perl parser and compiler before normal execution resumes. However, the same technique can be used to save start-up time by deferring compilation of subroutines until they are needed; this is not such a case.
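For illustration, a minimal sketch of that deferred-compilation idea (the sub body is made up): the string is parsed and compiled only on the first call, not at start-up.
my $compiled;
my $lazy_square = sub {
    $compiled //= eval 'sub { my ($n) = @_; $n * $n }' or die $@;
    return $compiled->(@_);
};
print $lazy_square->(7), "\n";   # 49; the string eval ran just now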
There is more than one way to do it.
I like anonymous subroutines, but you don't have to use an eval to construct them. They are closures anyway. Something like
...;
return sub {
    my ($tree, $keys, $value) = @_;
    $#$keys >= $depth or die "need moar keys";
    $tree = $tree->{$keys->[$_]} for 0 .. $depth - 1;
    $tree->{$keys->[$depth]} = $value;
};
and
$s->($tree, [qw(one two three four five)], "a value");
would do something surprisingly similar. (Actually, using $depth now looks like a design error; the complete path is already specified by the keys. Therefore, creating a normal, named subroutine would probably be best.)
Understanding what the OP is doing a little better based on their comments, and riffing on Borodin's code, I'd suggest an interface change. Rather than writing a subroutine to apply a value deep in a tree, I'd write a subroutine to create an empty subtree and then work on that subtree. This allows you to work efficiently on the subtree without having to walk the tree on every operation.
package Root;
use Mouse;

has root =>
    is      => 'ro',
    isa     => 'HashRef',
    default => sub { {} };

sub init_subtree {
    my $self = shift;
    my $tree = $self->root;
    for my $key (@_) {
        $tree = $tree->{$key} //= {};
    }
    return $tree;
}

my $root = Root->new;
my $subtree = $root->init_subtree(qw/one two three four/);

# Now you can quickly work with the subtree without having
# to walk down every time. This loop's performance is only
# dependent on the number of keys you're adding, rather than
# the number of keys TIMES the depth of the subtree.
my $val = 0;
for my $key ("a".."c") {
    $subtree->{$key} = $val++;
}

use Data::Dump;
dd $root;
Data::Diver is your friend:
use Data::Diver 'DiveVal', 'DiveRef';
my $tree = {};
DiveVal( $tree, qw/one two three four five/ ) = 'a value';
# or if you hate lvalue subroutines:
${ DiveRef( $tree, qw/one two three four five/ ) } = 'a value';
use Data::Dump 'dump';
print dump $tree;

"Passing arguments to subroutine"-question?

Is routine2 OK too, or shouldn't I do this? (I don't need a copy of @list in the subroutine.)
#!/usr/bin/perl
use 5.012;
use warnings;

my @list = 0 .. 9;

sub routine1 {
    my $list = shift;
    for (@$list) { $_++ };
    return $list
}

my $l = routine1( \@list );
say "@$l";

sub routine2 {
    for (@list) { $_++ };
}

routine2();
say "@list";
If it works for you, then it's OK too. But the first sub can do the job for any array you pass to it, which makes it more general.
P.S. Remember that @_ contains aliases for the parameters passed to the function. So you could also use this:
sub increment { $_++ for @_ }
increment(@list);
If you're worried about making the syntax look nice, try this:
sub routine3 (\@) {
    for (@{$_[0]}) { $_++ }
}

my @list = (0 .. 9);
routine3(@list);
say "@list"; # prints 1 .. 10
This declares routine3 with a prototype: it takes an array argument by reference. So $_[0] is a reference to @list, and no rather unsightly \ is needed by the caller. (Some people discourage prototypes, so take this as you will. I like them.)
But unless this is a simplification for what your actual routine does, what I'd do is this:
my @list = 0 .. 9;
my @new_list = map { $_ + 1 } @list;
say "@new_list";
Unless the routine is actually really complicated, and it's somehow vital that you modify the original array, I'd just use map. Especially with map, you can plug in a subroutine:
sub complex_operation { ... }
my @new_list = map { complex_operation($_) } @list;
Of course, you could prototype complex_operation with (_) and then just write map(complex_operation, @list); but I like the bracket syntax personally.
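In case that (_) variant is unfamiliar, here is a small sketch with a made-up operation; a bare call defaults to $_, so map can invoke the sub without brackets or an explicit argument:
sub complex_operation (_) { my ($n) = @_; return $n ** 2 + 1 }

my @list = 0 .. 9;
my @new_list = map(complex_operation, @list);   # the (_) prototype supplies $_
print "@new_list\n";   # 1 2 5 10 17 26 37 50 65 82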

How can I cleanly turn a nested Perl hash into a non-nested one?

Assume a nested hash structure %old_hash ..
my %old_hash;
$old_hash{"foo"}{"bar"}{"zonk"} = "hello";
.. which we want to "flatten" (sorry if that's the wrong terminology!) to a non-nested hash using the sub &flatten(...) so that ..
my %h = &flatten(\%old_hash);
die unless($h{"zonk"} eq "hello");
The following definition of &flatten(...) does the trick:
sub flatten {
    my $hashref = shift;
    my %hash;
    my %i = %{$hashref};
    foreach my $ii (keys(%i)) {
        my %j = %{$i{$ii}};
        foreach my $jj (keys(%j)) {
            my %k = %{$j{$jj}};
            foreach my $kk (keys(%k)) {
                my $value = $k{$kk};
                $hash{$kk} = $value;
            }
        }
    }
    return %hash;
}
While the code given works it is not very readable or clean.
My question is two-fold:
In what ways does the given code not correspond to modern Perl best practices? Be harsh! :-)
How would you clean it up?
Your method is not best practice because it doesn't scale. What if the nested hash is six, ten levels deep? The repetition should tell you that a recursive routine is probably what you need.
sub flatten {
    my ($in, $out) = @_;
    for my $key (keys %$in) {
        my $value = $in->{$key};
        if ( defined $value && ref $value eq 'HASH' ) {
            flatten($value, $out);
        }
        else {
            $out->{$key} = $value;
        }
    }
}
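A usage sketch with the question's example data; the caller supplies the output hash reference:
my %old_hash;
$old_hash{"foo"}{"bar"}{"zonk"} = "hello";

my %flat;
flatten(\%old_hash, \%flat);
print $flat{zonk}, "\n";   # hello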
Alternatively, good modern Perl style is to use CPAN wherever possible. Data::Traverse would do what you need:
use Data::Traverse;

sub flatten {
    my %hash = @_;
    my %flattened;
    traverse { $flattened{$a} = $b } \%hash;
    return %flattened;
}
As a final note, it is usually more efficient to pass hashes by reference to avoid them being expanded out into lists and then turned into hashes again.
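To illustrate that last point, here is a sketch of the by-reference convention (the name flatten_ref is made up); nothing is flattened into a key/value list on the way in or out:
use Data::Traverse;

sub flatten_ref {
    my ($hashref) = @_;
    my %flattened;
    traverse { $flattened{$a} = $b } $hashref;
    return \%flattened;
}

my $flat = flatten_ref(\%old_hash);   # e.g. $flat->{zonk} is "hello"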
First, I would use perl -c to make sure it compiles cleanly, which it does not. So, I'd add a trailing } to make it compile.
Then, I'd run it through perltidy to improve the code layout (indentation, etc.).
Then, I'd run perlcritic (in "harsh" mode) to automatically tell me what it thinks are bad practices. It complains that:
Subroutine does not end with "return"
Update: the OP essentially changed every line of code after I posted my answer above, but I believe it still applies. It's not easy shooting at a moving target :)
There are a few problems with your approach that you need to figure out. First off, what happens in the event that there are two leaf nodes with the same key? Does the second clobber the first, is the second ignored, or should the output contain a list of them? Here is one approach. First we construct a flat list of key/value pairs using a recursive function that can deal with any hash depth:
my %data = (
    foo  => {bar => {baz => 'hello'}},
    fizz => {buzz => {bing => 'world'}},
    fad  => {bad => {baz => 'clobber'}},
);

sub flatten {
    my $hash = shift;
    map {
        my $value = $$hash{$_};
        ref $value eq 'HASH'
            ? flatten($value)
            : ($_ => $value)
    } keys %$hash
}

print join( ", " => flatten \%data), "\n";
# baz, clobber, bing, world, baz, hello

my %flat = flatten \%data;
print join( ", " => %flat ), "\n";
# baz, hello, bing, world   # lost (baz => clobber)
A fix could be something like this, which will create a hash of array refs containing all the values:
sub merge {
    my %out;
    while (@_) {
        my ($key, $value) = splice @_, 0, 2;
        push @{ $out{$key} }, $value;
    }
    %out
}

my %better_flat = merge flatten \%data;
In production code, it would be faster to pass references between the functions, but I have omitted that here for clarity.
Is it your intent to end up with a copy of the original hash or just a reordered result?
Your code starts with one hash (the original hash that is used by reference) and makes two copies %i and %hash.
The statement my %i = %{$hashref} is not necessary. You are copying the entire hash to a new hash. In either case (whether you want a copy or not) you can use references to the original hash.
You are also losing data if a hash nested inside the hash has the same key as another one. Is this intended?

Traversing a multi-dimensional hash in Perl

If you have a hash (or reference to a hash) in Perl with many dimensions and you want to iterate across all values, what's the best way to do it? In other words, if we have
$f->{$x}{$y}, I want something like
foreach ($x, $y) (deep_keys %{$f})
{
}
instead of
foreach $x (keys %$f)
{
    foreach $y (keys %{$f->{$x}})
    {
    }
}
Stage one: don't reinvent the wheel :)
A quick search on CPAN throws up the incredibly useful Data::Walk. Define a subroutine to process each node, and you're sorted:
use Data::Walk;

my $data = {};   # some complex hash/array mess

sub process {
    print "current node $_\n";
}

walk \&process, $data;
And Bob's your uncle. Note that if you want to pass it a hash to walk, you'll need to pass a reference to it (see perldoc perlref), as follows (otherwise it'll try and process your hash keys as well!):
walk \&process, \%hash;
For a more comprehensive solution (but harder to find at first glance on CPAN), use Data::Visitor::Callback or its parent module; this has the advantage of giving you finer control of what you do, and (just for extra street cred) is written using Moose.
Here's an option. This works for arbitrarily deep hashes:
sub deep_keys_foreach
{
    my ($hashref, $code, $args) = @_;

    while (my ($k, $v) = each(%$hashref)) {
        my @newargs = defined($args) ? @$args : ();
        push(@newargs, $k);
        if (ref($v) eq 'HASH') {
            deep_keys_foreach($v, $code, \@newargs);
        }
        else {
            $code->(@newargs);
        }
    }
}

deep_keys_foreach($f, sub {
    my ($k1, $k2) = @_;
    print "inside deep_keys, k1=$k1, k2=$k2\n";
});
This sounds to me as if Data::Diver or Data::Visitor are good approaches for you.
Keep in mind that Perl lists and hashes do not have dimensions and so cannot be multidimensional. What you can have is a hash item that is set to reference another hash or list. This can be used to create fake multidimensional structures.
Once you realize this, things become easy. For example:
sub f($) {
    my $x = shift;
    if( ref $x eq 'HASH' ) {
        foreach( values %$x ) {
            f($_);
        }
    } elsif( ref $x eq 'ARRAY' ) {
        foreach( @$x ) {
            f($_);
        }
    }
}
Add whatever else needs to be done besides traversing the structure, of course.
One nifty way to do what you need is to pass a code reference to be called from inside f. By using sub prototyping you could even make the calls look like Perl's grep and map functions.
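Here is a sketch of that idea (the name deep_each is made up): the (&@) prototype lets the callback be passed as a bare block, grep/map style, and the walker calls it once per non-reference leaf:
sub deep_each (&@) {
    my ($code, @nodes) = @_;
    for my $node (@nodes) {
        if( ref $node eq 'HASH' ) {
            &deep_each($code, values %$node);   # the & form bypasses the prototype check
        } elsif( ref $node eq 'ARRAY' ) {
            &deep_each($code, @$node);
        } else {
            local $_ = $node;                   # so the block can use $_
            $code->($node);
        }
    }
}

my $data = { a => [ 1, 2 ], b => { c => 3 } };
deep_each { print "leaf: $_\n" } $data;         # prints each leaf; hash order varies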
You can also fudge multi-dimensional arrays if you always have all of the key values, or you just don't need to access the individual levels as separate arrays:
$arr{"foo",1} = "one";
$arr{"bar",2} = "two";
while(($key, $value) = each(%arr))
{
#keyValues = split($;, $key);
print "key = [", join(",", #keyValues), "] : value = [", $value, "]\n";
}
This uses the subscript separator "$;" as the separator for multiple values in the key.
There's no way to get the semantics you describe because foreach iterates over a list one element at a time. You'd have to have deep_keys return a LoL (list of lists) instead. Even that doesn't work in the general case of an arbitrary data structure. There could be varying levels of sub-hashes, some of the levels could be ARRAY refs, etc.
The Perlish way of doing this would be to write a function that can walk an arbitrary data structure and apply a callback at each "leaf" (that is, non-reference value). bmdhacks' answer is a starting point. The exact function would vary depending one what you wanted to do at each level. It's pretty straightforward if all you care about is the leaf values. Things get more complicated if you care about the keys, indices, etc. that got you to the leaf.
It's easy enough if all you want to do is operate on values, but if you want to operate on keys, you need to specify how the levels will be recoverable.
a. For instance, you could specify keys as "$level1_key.$level2_key.$level3_key" (or with any separator), representing the levels.
b. Or you could have a list of keys.
I recommend the latter.
The level can be recovered from the length of @$key_stack,
and the most local key is $key_stack->[-1].
The path can be reconstructed by join( '.', @$key_stack ).
Code:
use constant EMPTY_ARRAY => [];
use strict;
use Scalar::Util qw<reftype>;

sub deep_keys (\%) {
    sub deeper_keys {
        my ( $key_ref, $hash_ref ) = @_;
        return [ $key_ref, $hash_ref ] if reftype( $hash_ref ) ne 'HASH';
        my @results;
        while ( my ( $key, $value ) = each %$hash_ref ) {
            my $k = [ @{ $key_ref || EMPTY_ARRAY }, $key ];
            push @results, deeper_keys( $k, $value );
        }
        return @results;
    }
    return deeper_keys( undef, shift );
}
foreach my $kv_pair ( deep_keys %$f ) {
    my ( $key_stack, $value ) = @$kv_pair;
    ...
}
This has been tested in Perl 5.10.
If you are working with tree data going more than two levels deep, and you find yourself wanting to walk that tree, you should first consider that you will make a lot of extra work for yourself by reimplementing everything manually on hashes of hashes of hashes, when there are a lot of good alternatives available (search CPAN for "Tree").
Not knowing what your data requirements actually are, I'm going to blindly point you at a tutorial for Tree::DAG_Node to get you started.
That said, Axeman is correct, a hashwalk is most easily done with recursion. Here's an example to get you started if you feel you absolutely must solve your problem with hashes of hashes of hashes:
#!/usr/bin/perl
use strict;
use warnings;

my %hash = (
    "toplevel-1" =>
    {
        "sublevel1a" => "value-1a",
        "sublevel1b" => "value-1b"
    },
    "toplevel-2" =>
    {
        "sublevel1c" =>
        {
            "value-1c.1" => "replacement-1c.1",
            "value-1c.2" => "replacement-1c.2"
        },
        "sublevel1d" => "value-1d"
    }
);

hashwalk( \%hash );

sub hashwalk
{
    my ($element) = @_;
    if( ref($element) =~ /HASH/ )
    {
        foreach my $key (keys %$element)
        {
            print $key," => \n";
            hashwalk($$element{$key});
        }
    }
    else
    {
        print $element,"\n";
    }
}
It will output:
toplevel-2 =>
sublevel1d =>
value-1d
sublevel1c =>
value-1c.2 =>
replacement-1c.2
value-1c.1 =>
replacement-1c.1
toplevel-1 =>
sublevel1a =>
value-1a
sublevel1b =>
value-1b
Note that you CAN NOT predict in what order the hash elements will be traversed unless you tie the hash via Tie::IxHash or similar — again, if you're going to go through that much work, I recommend a tree module.