I have to write the rows generated by running an SQL query to a file.
# Run the SQL script.
my $dbh = get_dbh($source);
my $qry = $dbh->prepare("$sql_data");
$qry->execute();
# Dump the data to file.
open(my $fh_write, ">", "$filename");
while (my @data = $qry->fetchrow_array())
{
print {$fh_write} join("\t", @data) . "\n";
}
close($fh_write);
Clearly I am doing two things in one function:
Running the SQL query.
Writing the data to file.
Is there a way to do this using SRP?
There are lots of rows in the data, so returning the array of rows from a separate function might not be a nice idea.
You could split it up into two different functions. One would query the database, and the other would write data to a file.
sub run_query {
my ( $sql, @args ) = @_;
# if you truly want separation of concerns,
# you need to connect $dbh somewhere else
my $sth = $dbh->prepare($sql);
$sth->execute(@args);
# this creates an iterator
return sub {
return $sth->fetchrow_arrayref;
};
}
This function takes an SQL query and some arguments (remember to use placeholders!) and runs the query. It returns a code reference that closes over $sth. Every time that reference is invoked, one row of results will be fetched. When the statement handle $sth is exhausted, fetchrow_arrayref will return undef, which is handed through, and you're done iterating. That might seem like overkill, but stay with me for a moment.
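For example, a call with a placeholder might look like this (the table and column names here are made up purely for illustration):
my $iter = run_query(
    'SELECT * FROM orders WHERE customer_id = ?',  # hypothetical table and column
    $customer_id,                                  # bound safely by DBI
);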
Next, we make a function to write data to a file.
sub write_to_file {
my ( $filename, $iter ) = @_;
open my $fh, '>', $filename or die $!;
while ( my $data = $iter->() ) {
print $fh join( "\t", @{$data} ), "\n";
}
return;
}
This takes a filename and an iterator, which is a code reference. It opens the file, and then iterates until there is no more data left. Every line is written to the file. We don't need to close $fh explicitly because it's a lexical filehandle that will be closed implicitly once it goes out of scope at the end of the function anyway.
What you've done now is define an interface. Your write_to_file function's interface is that it takes a file name and an iterator that always returns an array reference of fields.
Let's put this together.
my $iter = run_query('SELECT * FROM orders');
write_to_file( 'orders.csv', $iter );
Two lines of code. One runs the query, the other one writes the data. Looks pretty separated to me.
The good thing about this approach is that now you can also write other things to a file with the same code. The following code could for example talk to some API. The iterator that it returns again gives us one row of results per invocation.
sub api_query {
my ($customer_id) = @_;
my $api = API::Client->new;
my $res = $api->get_orders($customer_id); # returns [ {}, {}, {}, ... ]
my $i = 0;
return sub {
return if $i > $#{ $res };   # all rows have been returned
return $res->[$i++];
}
}
You could drop this into the above example instead of run_query() and it would work, because this function returns something that adheres to the same interface. You could just as well make a write_to_api or write_to_slack_bot function that has the same partial interface. One of the parameters would be the same kind of iterator. Now those are exchangeable too.
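For example, a rough sketch of such a consumer; the $client object and its post_row method are invented names, just to show the shape of the shared interface:
sub write_to_api {
    my ( $client, $iter ) = @_;
    while ( my $data = $iter->() ) {
        # $data is an array reference of fields, exactly as write_to_file expects
        $client->post_row($data);    # hypothetical API call
    }
    return;
}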
Of course this whole example is very contrived. In reality it highly depends on the size and complexity of your program.
If it's a script that runs as a cronjob that does nothing but create this report once a day, you should not care about this separation of concerns. The pragmatic approach would likely be the better choice.
Once you have a lot of those, you'd start caring a bit more. Then my above approach might be viable. But only if you really need to have things flexible.
Not every concept is always applicable, and not every concept always makes sense.
Please keep in mind that there are tools that are better suited for those jobs. Instead of making your own CSV file you can use Text::CSV_XS. Or you could use an ORM like DBIx::Class and have ResultSet objects as your interface.
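For instance, a minimal sketch of the same writer built on Text::CSV_XS (tab-separated here to match the original code; the iterator interface stays the same):
use Text::CSV_XS;

sub write_to_csv {
    my ( $filename, $iter ) = @_;
    my $csv = Text::CSV_XS->new( { binary => 1, sep_char => "\t", eol => "\n" } );
    open my $fh, '>', $filename or die $!;
    while ( my $data = $iter->() ) {
        $csv->print( $fh, $data );    # handles quoting and embedded separators for you
    }
    return;
}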
You should normally use a separate function for each job, but in your situation, doing things the way you are doing them currently makes much more sense than sticking to SRP.
Related
I have two files here, newFile and LookupFile (both of which are huge files).
The contents of newFile will be searched for in LookupFile and further processing happens. This script is working fine; however, it is taking a long time to execute. Could you please let me know what can be done here to increase the performance? Could you please let me know if we can convert the files into a hash to increase performance?
My file looks like below
NewFile and LookupFile:
acl sourceipaddress subnet destinationipaddress subnet portnumber
.
.
Script:
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp::Tiny 'read_file';
use File::Copy;
use Data::Dumper;
use File::Copy qw(copy);
my %options = (
LookupFile => {
type => "=s",
help => "File name",
variable => 'gitFile',
required => 1,
}, newFile => {
type => "=s",
help => "file containing the acl lines to checked for",
variable => 'newFile',
required => 1,
} );
$opts->addOptions(%options);
$opts->parse();
$opts->validate();
my $newFile = $opts->getOption('newFile');
my $LookupFile = $opts->getOption('LookupFile');
my @LookupFile = read_file ("$LookupFile");
my @newFile = read_file ("$newFile");
@LookupFile = split (/\n/,$LookupFile[0]);
@newLines = split (/\n/,$newFile[0]);
open FILE1, "$newFile" or die "Could not open file: $! \n";
while(my $line = <FILE1>)
{
chomp($line);
my @columns = split(' ',$line);
$var = @columns;
my $fld1;
my $cnt;
my $fld2;
my $fld3;
my $fld4;
my $fld5;
my $dIP;
my $sIP;
my $sHOST;
my $dHOST;
if(....)
if (....) further checks and processing
)
First thing to do before any optimization is to profile your code. Rather than guessing, this will tell you what lines are taking up the most time, and how often they're called. Devel::NYTProf is a good tool for the job.
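A typical profiling run looks something like this (assuming Devel::NYTProf is installed; yourscript.pl is a placeholder):
perl -d:NYTProf yourscript.pl
nytprofhtml    # writes an HTML report to ./nytprof/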
This is a problem.
my @LookupFile = read_file ("$LookupFile");
my @newFile = read_file ("$newFile");
@LookupFile = split (/\n/,$LookupFile[0]);
@newLines = split (/\n/,$newFile[0]);
read_file reads the whole file in as one big string (it should be my $contents = read_file(...); using an array is awkward). Then it splits the whole thing on newlines, copying everything in the file. This is slow, hard on memory, and unnecessary.
Instead, use read_lines. This will split the file into lines as it reads, avoiding a costly copy.
my @lookups = read_lines($LookupFile);
my @new = read_lines($newFile);
Next problem is $newFile is opened again and iterated through line by line.
open FILE1, "$newFile" or die "Could not open file: $! \n";
while(my $line = <FILE1>) {
This is a waste as you've already read that file into memory. Use one or the other. However, in general, it's better to work with files line-by-line than to slurp them all into memory.
The above will speed things up, but it doesn't get at the crux of the problem. This is likely the real problem...
The contents in newFile will be searched in LookupFile and further processing happens.
You didn't show what you're doing, but I'm going to imagine it looks something like this...
for my $line (@lines) {
for my $thing (@lookups) {
...
}
}
That is, for each line in one file, you're looking at every line in the other. This is what is known as an O(n^2) algorithm, meaning that as you double the size of the files, you quadruple the time.
If each file has 10 lines, it will take 100 (10^2) turns through the inner loop. If they have 100 lines, it will take 10,000 (100^2). With 1,000 lines, it will take 1,000,000 (1,000^2).
With O(n^2) as the sizes get bigger things get very slow very quickly.
Could you please let me know if we can convert files into hash to increase performance?
You've got the right idea. You could convert the lookup file to a hash to speed things up. Let's say they're both lists of words.
# input
foo
bar
biff
up
down
# lookup
foo
bar
baz
And you want to check if any lines in input match any lines in lookup.
First you'd read lookup in and turn it into a hash. Then you'd read input and check if each line is in the hash.
use strict;
use warnings;
use autodie;
use v5.10;
...
# Populate `%lookup`
my %lookup;
{
open my $fh, '<', $lookupFile;
while(my $line = <$fh>) {
chomp $line;
$lookup{$line} = 1;
}
}
# Check if any lines are in %lookup
open my $fh, '<', $inputFile;
while(my $line = <$fh>) {
chomp $line;
print $line if $lookup{$line};
}
This way you only iterate through each file once. This is an O(n) algorithm, meaning it scales linearly, because hash lookups are basically instantaneous. If each file has 10 lines, it will only take 10 iterations of each loop. If they have 100 lines it will only take 100 iterations of each loop. 1000 lines, 1000 iterations.
Finally, what you really want to do is skip all this and create a database for your data and search that. SQLite is a SQL database that requires no server, just a file. Put your data in there and perform SQL queries on it using DBD::SQLite.
While this means you have to learn SQL, and there is a cost to building and maintaining the database, this is fast and most importantly very flexible. SQLite can do all sorts of searches quickly without you having to write a bunch of extra code. SQL databases are very common, so learning SQL is a very good investment.
Since you're splitting the file up with my @columns = split(' ',$line); it's probably a file with many fields in it. That will likely map to a SQL table very well.
SQLite can even import files like that for you. See this answer for details on how to do that.
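For example, a rough sketch of loading the lookup file into SQLite with DBD::SQLite and querying it (the table and column names are made up to match the acl-style lines shown in the question):
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=lookup.db', '', '', { RaiseError => 1 } );
$dbh->do('CREATE TABLE IF NOT EXISTS lookup (acl TEXT, src TEXT, src_mask TEXT, dst TEXT, dst_mask TEXT, port TEXT)');

my $insert = $dbh->prepare('INSERT INTO lookup VALUES (?, ?, ?, ?, ?, ?)');
open my $fh, '<', $LookupFile or die $!;
while ( my $line = <$fh> ) {
    chomp $line;
    $insert->execute( split ' ', $line, 6 );   # assumes six whitespace-separated fields
}

# searching becomes a query instead of a nested loop
my $rows = $dbh->selectall_arrayref(
    'SELECT * FROM lookup WHERE src = ? AND dst = ?',
    undef, $src, $dst );   # $src and $dst are whatever you parsed out of newFile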
I'm trying to make a simple select from a database; the thing is that I want the same script to be able to select from any of the tables in it. I have gotten everything solved up until the point where I need to bind the columns to variables; since they must be generated dynamically, I just don't know how to do it.
Here's the code:
if($op eq "SELECT"){
if ($whr){
$query1 = "SELECT $colsf FROM $tab WHERE $whr";
}else{
$query1 = "SELECT $colsf FROM $tab";
}
$seth = $dbh->prepare($query1);
$seth->execute();
foreach $cajas (@columnas) {
$seth->bind_col(*$dynamically_generated_var*);
}
print $q->br();
print $q->br();
print $q->br();
The variable @columnas contains the names of the selected columns (which vary a lot), and I need a variable assigned for each of the columns in the $seth->bind_col() call.
How can I achieve this?
Using bind_col will not gain you anything here. As you have already figured out, that's used to bind a fixed number of results to a set of variables. But you do not have a fixed set.
Thinking in terms of "oh, I can just create them dynamically" is a very common mistake. It will get you into all kinds of trouble later. Perl has a data structure specifically for this use case: the hash.
DBI has a bunch of functions built in for retrieving data after execute. One of those is fetchrow_hashref. It will return the results as a hash reference, with one key per column, one row at a time.
while (my $res = $sth->fetchrow_hashref) {
p $res; # p is from Data::Printer
}
Let's assume the result looks like this:
$res = {
id => 1,
color => 'red',
}
You can access the color by saying $res->{color}. The perldocs on perlref and perlreftut have a lot of info about this.
Note that the best practice for naming statement handle variables is $sth.
In your case, you have a dynamic number of columns. Those have to be joined to be in the format of col1, col2, col3. I guess you have already done that in $colsf. The table is pretty obvious in $tab, so we only have the $whr left.
This part is tricky. It's important to always sanitize your input, especially in a CGI environment. With DBI this is best done by using placeholders. They will take care of all the escaping for you, and they are easy to use.
my $sth = $dbh->prepare('select cars from garage where color=?');
$sth->execute($color);
Now we don't need to care if the color is red, blue or ' and 1; --, which might have broken stuff. If it's all very dynamic, use $dbh->quote instead.
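If you do end up interpolating a value yourself, quote it first; a small sketch using the same example:
my $safe_color = $dbh->quote($color);
my $sth = $dbh->prepare("select cars from garage where color = $safe_color");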
Let's put this together in your code.
use strict;
use warnings;
use DBI;
# ...
# the columns
my $colsf = join ',', @some_list_of_column_names; # also check those!
# the table name
my $table = $q->param('table');
die 'invalid table name' if $table =~ /[^a-zA-Z0-9_]/; # input checking
# where
# I'm skipping this part as I don't know where it is coming from
if ($op eq 'SELECT') {
my $sql = "SELECT $colsf FROM $table";
$sql .= " WHERE $whr" if $whr;
my $sth = $dbh->prepare($sql) or die $dbh->errstr;
$sth->execute;
my @headings = @{ $sth->{NAME} }; # see https://metacpan.org/pod/DBI#NAME1
while (my $res = $sth->fetchrow_hashref) {
# do stuff here
}
}
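As an illustration, the "# do stuff here" part could print every column by name, whatever the selected columns happen to be; a rough sketch:
# inside the while loop above, in place of "do stuff here"
for my $col (@headings) {
    print "$col: ", defined $res->{$col} ? $res->{$col} : '', "\n";
}
print "\n";    # blank line between rows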
I have a script that uses a custom module EVTConf which is just a wrapper around DBI.
It has the username and password hard coded so we don't have to write the username and password in every script.
I want to see the data that the query picks up - but it does not seem to pick up anything from the query - just a bless statement.
What is bless?
#!/sbcimp/dyn/data/scripts/perl/bin/perl
use EVTConf;
EVTConf::makeDBConnection(production);
$dbh = $EVTConf::dbh;
use Data::Dumper;
my %extend_hash = %{$_[0]};
my $query = "select level_id, e_risk_symbol, e_exch_dest, penny, specialist from etds_extend";
if (!$dbh) {
print "Error connecting to DataBase; $DBI::errstr\n";
}
my $cur_msg = $dbh->prepare($query) or die "\n\nCould not prepare statement: ".$dbh->errstr;
$cur_msg->execute();
$cur_msg->fetchrow_array;
print Dumper($cur_msg) ;
This is what I get:
Foohost:~/walt $
Foohost:~/walt $ ./Test_extend_download_parse_the_object
$VAR1 = bless( {}, 'DBI::st' );
$cur_msg is a statement handle (hence it is blessed into class DBI::st). You need something like:
my $cur_msg = $dbh->prepare($query) or die "…";
$cur_msg->execute();
my @row;
while (@row = $cur_msg->fetchrow_array)
{
print "@row\n";
# print Dumper(\@row);
}
only you need to be a bit more careful about how you actually print the data than I was. There are a number of other fetching methods, such as fetchrow_arrayref, fetchrow_hashref, fetchall_arrayref. All the details are available via perldoc DBI at the command line or the DBI page on CPAN.
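For example, fetchrow_hashref hands you each row keyed by column name (the columns here are the ones from the query in the question, fetched after $cur_msg->execute()):
while ( my $row = $cur_msg->fetchrow_hashref ) {
    print "$row->{level_id}\t$row->{e_risk_symbol}\t$row->{penny}\n";
}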
You can see what the official documentation says about bless by using perldoc -f bless. It is a way of associating a variable with a class, and the class in this example is DBI::st, the DBI statement handle class. Your $dbh would be in class DBI::db, for example.
What is the best way to print the results?
The best way to print them out depends on what you know about the result set.
You might choose:
printf "%-12s %6.2f\n", $row[0], $row[3];
if you know that there are only two fields you're interested in (though in that case, why not select just the two columns you're interested in? It costs a little time to process elements 1 and 2 if they're unused).
You might choose:
foreach my $val (@row) { print "$val\n"; }
You might choose:
for (my $i = 0; $i < scalar(@row); $i++)
{
printf "%-12s = %s\n", $cur_msg->{NAME}[$i], $row[$i];
}
to print out the column name as well as the value. There are many other possibilities too, but those cover the key ones.
As noted by Borodin in his comment, you should be using use strict; and use warnings; automatically and reflexively in your Perl code. There's one variable that is not handled strictly in the code you show, namely $dbh. 'Tis easily remedied; add my before it where it is assigned. But it is a good idea to ensure that you use them all the time. Using them allows you to avoid unexpected behaviours that you weren't aware of and weren't intending to exploit.
open(INFILE1,"INPUT.txt");
my $modfile = 'Data.txt';
open MODIFIED,'>',$modfile or die "Could not open $modfile : $!";
for (;;) {
my $line1 = <INFILE1>;
last if not defined $line1;
my $line2 = <INFILE1>;
last if not defined $line2;
my ($tablename1, $colname1,$sql1) = split(/\t/, $line1);
my ($tablename2, $colname2,$sql2) = split(/\t/, $line2);
if ($tablename1 eq $tablename2)
{
my $sth1 = $dbh->prepare($sql1);
$sth1->execute;
my $hash_ref1 = $sth1->fetchall_hashref('KEY');
my $sth2 = $dbh->prepare($sql2);
$sth2->execute;
my $hash_ref2 = $sth2->fetchall_hashref('KEY');
my @fieldname = split(/,/, $colname1);
my $colcnt=0;
my $rowcnt=0;
foreach $key1 ( keys(%{$hash_ref1}) )
{
foreach (@fieldname)
{
$colname =$_;
my $strvalue1='';
@val1 = $hash_ref1->{$key1}->{$colname};
if (defined @val1)
{
my @filtered = grep /@val1/, @metadata;
my $strvalue1 = substr(@filtered[0],index(@filtered[0],'||') + 2);
}
my $strvalue2='';
@val2 = $hash_ref2->{$key1}->{$colname};
if (defined @val2)
{
my @filtered = grep /@val2/, @metadata2;
my $strvalue2 = substr(@filtered[0],index(@filtered[0],'||') + 2);
}
if ($strvalue1 ne $strvalue2 )
{
$colcnt = $colcnt + 1;
print MODIFIED "$tablename1\t$colname\t$strvalue1\t$strvalue2\n";
}
}
}
if ($colcnt>0)
{
print "modified count is $colcnt\n";
}
%$hash_ref1 = ();
%$hash_ref2 = ();
}
The program reads an input file in which every line contains three strings separated by tabs. The first is the table name, the second is all the column names with commas in between, and the third contains the SQL to be run. As this utility is doing a comparison of data, there are two rows for every table name, one for each DB. So data needs to be picked from each respective DB and then compared column by column.
The SQL returns an ID in the result set, and if the value is coming from the DB then it needs to be translated to a string by reading from an array (that array contains 100K records with key and value separated by ||).
Now I ran this for one set of tables which contains 18K records in each DB. There are 8 columns picked from the DB in each SQL statement. So for every record out of the 18K, and then for every one of the 8 fields in that record, this script is taking a lot of time.
My question is whether someone can look and see if it can be improved so that it takes less time.
File contents sample
INPUT.TXT
TABLENAME COL1,COL2 select COL1,COL2 from TABLENAME where ......
TABLENAMEB COL1,COL2 select COL1,COL2 from TABLENAMEB where ......
The metadata array contains something like this (there are two, i.e. one for each DB):
111||Code 1
222||Code 2
Please suggest
Your code does look a bit unusual, and could gain clarity from using subroutines vs. just using loops and conditionals. Here are a few other suggestions.
The excerpt
for (;;) {
my $line1 = <INFILE1>;
last if not defined $line1;
my $line2 = <INFILE1>;
last if not defined $line2;
...;
}
is overly complicated: not everyone knows the C-ish for(;;) idiom. You have lots of code duplication. And aren't you actually saying "loop while I can read two lines"?
while (defined(my $line1 = <INFILE1>) and defined(my $line2 = <INFILE1>)) {
...;
}
Yes, that line is longer, but I think it's a bit more self-documenting.
Instead of doing
if ($tablename1 eq $tablename2) { the rest of the loop }
you could say
next if $tablename1 ne $tablename2;
the rest of the loop;
and save a level of indentation. Better indentation means better readability, which makes it easier to write good code. And better code might perform better.
What are you doing at foreach $key1 (keys ...) — something tells me you didn't use strict! (Just a hint: lexical variables with my can perform slightly better than global variables)
Also, doing $colname = $_ inside a for-loop is a dumb thing, for the same reason.
for my $key1 (keys ...) {
...;
for my $colname (#fieldname) { ... }
}
my $strvalue1='';
@val1 = $hash_ref1->{$key1}->{$colname};
if (defined @val1)
{
my @filtered = grep /@val1/, @metadata;
my $strvalue1 = substr(@filtered[0],index(@filtered[0],'||') + 2);
}
I don't think this does what you think it does.
From $hash_ref1 you retrieve a single element, then assign that element to an array (a collection of multiple values).
Then you call defined on this array. An array cannot be undefined, and what you are doing is quite deprecated. Calling the defined function on a collection returns info about the memory management, but does not indicate ① whether the array is empty or ② whether the first element in that array is defined.
Interpolating an array into a regex isn't likely to be useful: the elements of the array are joined with the value of $", usually a space, and the resulting string is treated as a regex. This will wreak havoc if there are metacharacters present.
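A tiny illustration of why that is risky (made-up values):
my @val = ('10.20.30.40');
print "matched\n" if '10x20y30z40' =~ /@val/;   # matches, because the dots are regex wildcards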
When you only need the first value of a list, you can force list context, but assign to a single scalar like
my ($filtered) = produce_a_list;
This frees you from weird subscripts you don't need and that only slow you down.
Then you assign to a $strvalue1 variable you just declared. This shadows the outer $strvalue1. They are not the same variable. So after the if branch, you still have the empty string in $strvalue1.
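A minimal demonstration of that shadowing problem:
my $strvalue1 = '';
if (1) {
    my $strvalue1 = 'found something';   # a new, inner variable
}
print "[$strvalue1]\n";                  # prints [], the outer variable never changed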
I would write this code like
my $val1 = $hash_ref1->{$key1}{$colname};
my $strvalue1 = defined $val1
? do {
my ($filtered) = grep /\Q$val1/, @metadata;
substr $filtered, 2 + index $filtered, '||'
} : '';
But this would be even cheaper if you pre-split #metadata into pairs and test for equality with the correct field. This would remove some of the bugs that are still lurking in that code.
$x = $x + 1 is commonly written $x++.
Emptying the hashrefs at the end of the iteration is unnecessary: the hashrefs are assigned new values at the next iteration of the loop. Also, it is unnecessary to assist Perl's garbage collection for such simple tasks.
About the metadata: 100K records is a lot, so either put it in a database itself, or at the very least into a hash. Especially for so many records, using a hash is a lot faster than looping through all entries and using slow regexes … aargh!
Create the hash from the file, once at the beginning of the program
my %metadata;
while (<METADATA>) {
chomp;
my ($key, $value) = split /\|\|/;
$metadata{$key} = $value; # assumes each key only has one value
}
Simply look up the key inside the loop
my $strvalue1 = defined $val1 ? $metadata{$val1} // '' : ''
That should be so much faster.
(Oh, and please consider using better names for variables. $strvalue1 doesn't tell me anything, except that it is a stringy value (d'oh). $val1 is even worse.)
This is not really an answer but it won't really fit well in a comment either so, until you provide some more information, here are some observations.
Inside you inner for loop, there is:
@val1 = $hash_ref1->{$key1}->{$colname};
Did you mean @val1 = @{ $hash_ref1->{$key1}->{$colname} };?
Later, you check if (defined @val1). What did you really want to check? As perldoc -f defined points out:
Use of "defined" on aggregates (hashes and arrays) is
deprecated. It used to report whether memory for that aggregate
had ever been allocated. This behavior may disappear in future
versions of Perl. You should instead use a simple test for size:
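For instance (the usual test the docs are referring to):
if (@an_array) { print "has array elements\n" }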
In your case, if (defined @val1) will always be true.
Then, you have my @filtered = grep /@val1/, @metadata; Where did @metadata come from? What did you actually intend to check?
Then you have my $strvalue1 = substr(@filtered[0],index(@filtered[0],'||') + 2);
There is some interesting stuff going on in there.
You will need to verbalize what you are actually trying to do.
I strongly suspect there is a single SQL query you can run that will give you what you want but we first need to know what you want.
In the DBI documentation, this is the recommended code for executing a query many times:
$sth = $dbh->prepare_cached($statement);
$sth->execute(@bind);
$data = $sth->fetchall_arrayref(@attrs);
$sth->finish;
However, I see that many* query methods allow passing a prepared and cached statement handle in place of a query string, which makes this possible:
$sth = $dbh->prepare_cached($statement);
$data = $dbh->selectall_arrayref($sth, \%attrs, @bind);
Is there anything wrong with this approach? I haven't seen it used in the wild.
FWIW, I have benchmarked these two implementations. And the second approach appears marginally (4%) faster, when querying for two consecutive rows using fetchall_arrayref in the first implementation vs selectall_arrayref in the second.
* The full list of query methods which support this are:
selectrow_arrayref - normal method with prepared statements is fetchrow_arrayref
selectrow_hashref - " " fetchrow_hashref
selectall_arrayref - " " fetchall_arrayref
selectall_hashref - " " fetchall_hashref
selectcol_arrayref (doesn't really count, as it has no parallel method using the first code path as described above - so the only way to use prepared statements with this method is to use the second code path above)
There's nothing wrong with it, as long as you were planning to do only one fetch. When you use the select*_* methods, all the data comes back in one chunk. My DBI code more often looks like this:
$sth = $dbh->prepare_cached($statement);
$sth->execute(@bind);
while (my $row = $sth->fetch) { # alias for fetchrow_arrayref
# do something with @$row here
}
There's no equivalent to this using a select*_* method.
If you're going to call fetchall_* (or you're only fetching 1 row), then go ahead and use a select*_* method with a statement handle.
No, there's nothing wrong with that approach. There is something wrong with your benchmark or its analysis, though.
You've claimed that
$sth->execute(@bind);
$data = $sth->fetchall_arrayref(@attrs);
$sth->finish;
is slower than a call to
sub selectall_arrayref {
my ($dbh, $stmt, $attr, @bind) = @_;
my $sth = (ref $stmt) ? $stmt : $dbh->prepare($stmt, $attr)
or return;
$sth->execute(@bind) || return;
my $slice = $attr->{Slice}; # typically undef, else hash or array ref
if (!$slice and $slice=$attr->{Columns}) {
if (ref $slice eq 'ARRAY') { # map col idx to perl array idx
$slice = [ @{$attr->{Columns}} ]; # take a copy
for (@$slice) { $_-- }
}
}
my $rows = $sth->fetchall_arrayref($slice, my $MaxRows = $attr->{MaxRows});
$sth->finish if defined $MaxRows;
return $rows;
}
Maybe if you got rid of the useless call to finish you'll find the first faster? Note that benchmarks with less than 5% difference are not very telling; the accuracy isn't that high.
Update: s/faster than/slower than/
The performance difference should not be between selectall_arrayref() and fetchall_arrayref() but between fetchall_arrayref() and doing a fetch() in a loop yourself. fetchall_arrayref() may be faster as it is hand optimized in C.
The docs for fetchall_arrayref discuss performance...
If $max_rows is defined and greater than or equal to zero then it is
used to limit the number of rows fetched before returning.
fetchall_arrayref() can then be called again to fetch more rows. This
is especially useful when you need the better performance of
fetchall_arrayref() but don't have enough memory to fetch and return
all the rows in one go.
Here's an example (assumes RaiseError is enabled):
my $rows = []; # cache for batches of rows
while( my $row = ( shift(@$rows) || # get row from cache, or reload cache:
shift(@{$rows=$sth->fetchall_arrayref(undef,10_000)||[]}) )
) {
...
}
That might be the fastest way to fetch and process lots of rows using
the DBI, but it depends on the relative cost of method calls vs memory
allocation.
A standard "while" loop with column binding is often faster because the
cost of allocating memory for the batch of rows is greater than the
saving by reducing method calls. It's possible that the DBI may provide
a way to reuse the memory of a previous batch in future, which would
then shift the balance back towards fetchall_arrayref().
So that's a definitive "maybe". :-)
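If you want to check the balance for your own workload, a rough sketch using the core Benchmark module (the connection, table, and column names here are placeholders):
use Benchmark qw(cmpthese);

cmpthese( -5, {
    fetch_loop => sub {
        my $sth = $dbh->prepare_cached('SELECT id, name FROM some_table');
        $sth->execute;
        while ( my $row = $sth->fetch ) { }    # row by row
    },
    fetchall => sub {
        my $sth = $dbh->prepare_cached('SELECT id, name FROM some_table');
        $sth->execute;
        my $data = $sth->fetchall_arrayref;    # all rows at once
    },
} );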
I don't think there's really any advantage to using one over the other, other than that the first uses three lines and the second uses one (less possibility for bugs with the second method). The first might be more commonly used because the documentation states that the "typical method call sequence for a SELECT statement is prepare, execute, fetch, fetch, ... execute, fetch, fetch, ..." and gives this example:
$sth = $dbh->prepare("SELECT foo, bar FROM table WHERE baz=?");
$sth->execute( $baz );
while ( @row = $sth->fetchrow_array ) {
print "@row\n";
}
Now, I'm not suggesting that programmers actually read the documentation (heaven forbid!) but given its prominence near the top of the documentation in a section designed to show you how to use the module, I would suspect that the more-verbose method is slightly more preferred by the module's author. As to why, your guess is as good as mine.