How to include a "diff" in a Perl test? [duplicate] - perl

Possible Duplicate:
How can I use Perl to determine whether the contents of two files are identical?
If I am writing a Perl module test and, for example, I want to check that an output file is exactly what is expected, using an external command like diff means the test might fail on operating systems that don't provide the diff command. What would be a simple way to do something like diff on files that doesn't rely on external commands? I understand that there are modules on CPAN which can do file diffs, but I would rather not complicate the build process unless necessary.

File::Compare, in core since 5.004.
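A minimal sketch of using it in a test (the filenames here are placeholders):
use Test::More tests => 1;
use File::Compare;

# compare() returns 0 when the two files have identical contents,
# 1 when they differ, and -1 on error
is( compare('got.txt', 'expected.txt'), 0, 'output file matches expected file' );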

When testing and looking for differences in files or strings I always use Test::Differences, which uses Text::Diff. I know that you would probably prefer a non-module solution, but looking for differences has many corner cases, so it is not trivial. Also, I write this answer more for googlers (just in case you already know these modules).
I like the table output of this module. It is very convenient when there are only a few differences.
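A minimal sketch of how that looks in a test (it assumes you have already slurped both files into strings):
use Test::More tests => 1;
use Test::Differences;

# on failure, prints a side-by-side table of the differing lines
eq_or_diff( $got_contents, $expected_contents, 'output file matches expected' );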

Why not just read and compare the two files in perl? Something like...
sub readfile
{
    local $/ = undef;   # slurp mode: read the whole file at once
    open my $fh, "<", $_[0]
        or die "Can't read '$_[0]': $!";
    my $contents = <$fh>;
    close $fh or die "Can't close '$_[0]': $!";
    return $contents;
}
my $expected = readfile("expected_results");
my $actual   = readfile("actual_results");
if ($expected ne $actual) {   # string comparison, not numeric !=
    die "Got wrong results!";
}
(If you're concerned about multiple OS portability, you may also need to do something about line endings, either in your test program or here, because some OSs use CRLF instead of LF to separate lines in text files. If you want to handle it here, a regular expression replace will do the trick.)
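For example, after slurping each file you could normalize line endings before comparing; a one-line sketch:
$contents =~ s/\r\n/\n/g;   # treat CRLF files the same as LF files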

Related

How to pipe to and read from the same tempfile handle without race conditions?

Was debugging a perl script for the first time in my life and came over this:
$my_temp_file = File::Temp->tmpnam();
system("cmd $blah | cmd2 > $my_temp_file");
open(FIL, "$my_temp_file");
...
unlink $my_temp_file;
This works pretty much like I want, except the obvious race conditions in lines 1-3. Even if using proper tempfile() there is no way (I can think of) to ensure that the file streamed to at line 2 is the same opened at line 3. One solution might be pipes, but the errors during cmd might occur late because of limited pipe buffering, and that would complicate my error handling (I think).
How do I:
Write all output from cmd $blah | cmd2 into a tempfile opened file handle?
Read the output without re-opening the file (risking race condition)?
You can open a pipe to a command and read its contents directly with no intermediate file:
open my $fh, '-|', 'cmd', $blah;
while( <$fh> ) {
...
}
With short output, backticks might do the job, although in this case you have to be more careful to scrub the inputs so they aren't misinterpreted by the shell:
my $output = `cmd $blah`;
There are various modules on CPAN that handle this sort of thing, too.
Some comments on temporary files
The comments mentioned race conditions, so I thought I'd write a few things for those wondering what people are talking about.
In the original code, Andreas uses File::Temp, a module from the Perl Standard Library. However, they use the tmpnam POSIX-like call, which has this caveat in the docs:
Implementations of mktemp(), tmpnam(), and tempnam() are provided, but should be used with caution since they return only a filename that was valid when function was called, so cannot guarantee that the file will not exist by the time the caller opens the filename.
This is discouraged, and tmpnam was removed from the POSIX module in Perl v5.22.
That is, you get back the name of a file that does not exist yet. After you get the name, you don't know if that filename was made by another program. And, that unlink later can cause problems for one of the programs.
The "race condition" comes in when two programs that probably don't know about each other try to do the same thing as roughly the same time. Your program tries to make a temporary file named "foo", and so does some other program. They both might see at the same time that a file named "foo" does not exist, then try to create it. They both might succeed, and as they both write to it, they might interleave or overwrite the other's output. Then, one of those programs think it is done and calls unlink. Now the other program wonders what happened.
In the malicious exploit case, some bad actor knows a temporary file will show up, so it recognizes a new file and gets in there to read or write data.
But this can also happen within the same program. Two or more versions of the same program run at the same time and try to do the same thing. With randomized filenames, it is probably exceedingly rare that two running programs will choose the same name at the same time. However, we don't care how rare something is; we care how devastating the consequences are should it happen. And, rare is much more frequent than never.
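For the curious, the usual defence is to have the kernel create the file atomically, which is essentially what File::Temp does for you under the hood; a minimal sketch:
use Fcntl qw(O_RDWR O_CREAT O_EXCL);

# O_EXCL makes sysopen fail if the file already exists, so two programs
# cannot both believe they created it
sysopen my $fh, "/tmp/myapp.$$", O_RDWR | O_CREAT | O_EXCL, 0600
    or die "Could not create temp file: $!";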
File::Temp
Knowing all that, File::Temp handles the details of ensuring that you get a filehandle:
my( $fh, $name ) = File::Temp->tempfile;
This uses a default template to create the name. When the filehandle goes out of scope, File::Temp also cleans up the mess.
{
my( $fh, $name ) = File::Temp->tempfile;
print $fh ...;
...;
} # file cleaned up
Some systems might automatically clean up temp files, although I haven't cared about that in years. Typically it was a batch thing (say once a week).
I often go one step further by giving my temporary filenames a template, where the Xs are literal characters the module recognizes and fills in with randomized characters:
my( $fh, $name ) = File::Temp->tempfile(
    sprintf "$0-%d-XXXXXX", time );
I'm often doing this while I'm developing things so I can watch the program make the files (and in which order) and see what's in them. In production I probably want to obscure the source program name ($0) and the time; I don't want to make it easier to guess who's making which file.
A scratchpad
I can also open a temporary file with open by not giving it a filename. This is useful when the data never needs to exist outside the program. Opening it read-write means you can output some stuff then move around that file (we show a fixed-length record example in Learning Perl):
open(my $tmp, "+>", undef) or die ...
print $tmp "Some stuff\n";
seek $tmp, 0, 0;
my $line = <$tmp>;
File::Temp opens the temp file in O_RDWR mode so all you have to do is use that one file handle for both reading and writing, even from external programs. The returned file handle is overloaded so that it stringifies to the temp file name so you can pass that to the external program. If that is dangerous for your purpose you can get the fileno() and redirect to /dev/fd/<fileno> instead.
All you have to do is mind your seeks and tells. :-) Just remember to always set autoflush!
use File::Temp;
use Data::Dump;
my $fh = File::Temp->new;
$fh->autoflush;
system "ls /tmp/*.txt >> $fh" and die $!;
my @lines = <$fh>;
printf "%s\n\n", Data::Dump::pp(\@lines);
print $fh "How now brown cow\n";
seek $fh, 0, 0 or die $!;
my @lines2 = <$fh>;
printf "%s\n", Data::Dump::pp(\@lines2);
Which prints
[
"/tmp/cpan_htmlconvert_DPzx.txt\n",
"/tmp/cpan_htmlconvert_DunL.txt\n",
"/tmp/cpan_install_HfUe.txt\n",
"/tmp/cpan_install_XbD6.txt\n",
"/tmp/cpan_install_yzs9.txt\n",
]
[
"/tmp/cpan_htmlconvert_DPzx.txt\n",
"/tmp/cpan_htmlconvert_DunL.txt\n",
"/tmp/cpan_install_HfUe.txt\n",
"/tmp/cpan_install_XbD6.txt\n",
"/tmp/cpan_install_yzs9.txt\n",
"How now brown cow\n",
]
HTH

Faster way to read a file line by line in perl

I have to read a big Unix file line by line using Perl. The script takes more than 2 minutes to run for a big file, but less time for a small file.
I am using the following code:
open(FILE, "filename");
while (<FILE>) {
}
Please let me know a way to parse the file faster.
What do you mean by a "big file" and a "small file"? What size are those files? How many lines do they have?
Unless your big file is absolutely huge, it seems likely that what is slowing your program down is not the reading from a file, but whatever you're doing in the while loop. To prove me wrong, you'd just need to run your program with nothing in the while loop to see how long that takes.
Assuming that I'm right, then you need to work out what section of your processing is causing the problems. Without seeing that code we, obviously, can't be any help there. But that's where a tool like Devel::NYTProf would be useful.
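For example, a typical Devel::NYTProf session looks something like this (assuming the module is installed):
perl -d:NYTProf yourscript.pl   # writes profile data to ./nytprof.out
nytprofhtml                     # turns nytprof.out into an HTML report
The report shows per-line and per-subroutine timings, which tells you exactly where the two minutes are going.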
I'm not sure where you learned your Perl from, but the idiom you're using to open your file is rather outdated. These days we would a) use lexical variables as filehandles, b) use the 3-argument version of open() and c) always check the return value from open() and take appropriate action.
open(my $fh, '<', 'filename')
or die "Cannot open 'filename': $!\n";
while ( <$fh> ) {
...
}
If you have the memory, my @array = <$fh>; then look through the array.

Writing a macro in Perl

open $FP, '>', $outfile or die $outfile." Cannot open file for writing\n";
I have this statement a lot of times in my code.
I want to keep the format same for all of those statements, so that when something is changed, it is only changed at one place.
In Perl, how should I go about resolving this situation?
Should I use macros or functions?
I have seen this SO thread How can I use macros in Perl?, but it doesn't say much about how to write a general macro like
#define fw(FP, outfile) open $FP, '>', \
$outfile or die $outfile." Cannot open file for writing\n";
First, you should write that as:
open my $FP, '>', $outfile or die "Could not open '$outfile' for writing:$!";
including the reason why open failed.
If you want to encapsulate that, you can write:
use Carp;
sub openex {
my ($mode, $filename) = @_;
open my $h, $mode, $filename
or croak "Could not open '$filename': $!";
return $h;
}
# later
my $FP = openex('>', $outfile);
Starting with Perl 5.10.1, autodie is in the core and I will second Chas. Owens' recommendation to use it.
Perl 5 really doesn't have macros (there are source filters, but they are dangerous and ugly, so ugly even I won't link you to the documentation). A function may be the right choice, but you will find that it makes it harder for new people to read your code. A better option may be to use the autodie pragma (it is core as of Perl 5.10.1) and just cut out the or die part.
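For example, with autodie the open shrinks to this (a sketch; $outfile is assumed to hold your filename):
use autodie;

open my $FP, '>', $outfile;   # dies with a descriptive message on failure
print $FP "some output\n";
close $FP;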
Another option, if you use Vim, is to use snipMate. You just type fw<tab>FP<tab>outfile<tab> and it produces
open my $FP, '>', $outfile
or die "Couldn't open $outfile for writing: $!\n";
The snipMate text is
snippet fw
open my $${1:filehandle}, ">", $${2:filename variable}
or die "Couldn't open $$2 for writing: $!\n";
${3}
I believe other editors have similar capabilities, but I am a Vim user.
There are several ways to handle something similar to a C macro in Perl: a source filter, a subroutine, Template::Toolkit, or use features in your text editor.
Source Filters
If you gotta have a C / CPP style preprocessor macro, it is possible to write one in Perl (or, actually, any language) using a precompile source filter. You can write fairly simple to complex Perl classes that operate on the text of your source code and perform transformations on it before the code goes to the Perl compiler. You can even run your Perl code directly through a CPP preprocessor to get the exact type of macro expansions you get in C / CPP using Filter::CPP.
Damian Conway's Filter::Simple is part of the Perl core distribution. With Filter::Simple, you could easily write a simple module to perform the macro you are describing. An example:
package myopinion;
# save in your Perl's @INC path as "myopinion.pm"...
use Filter::Simple;
FILTER {
s/Hogs/Pigs/g;
s/Hawgs/Hogs/g;
}
1;
Then a Perl file:
use myopinion;
print join(' ',"Hogs", 'Hogs', qq/Hawgs/, q/Hogs/, "\n");
print "In my opinion, Hogs are Hogs\n\n";
Output:
Pigs Pigs Hogs Pigs
In my opinion, Pigs are Pigs
If you rewrote the FILTER block to make the substitution for your desired macro, Filter::Simple should work fine. Filter::Simple can also be restricted to making substitutions in only parts of your code, such as the executable part but not the POD, only in strings, or only in code.
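For instance, Filter::Simple's FILTER_ONLY lets you restrict where substitutions happen; a sketch reusing the Hogs example above:
use Filter::Simple;

# apply the substitution to code only, leaving strings and POD untouched
FILTER_ONLY
    code => sub { s/Hogs/Pigs/g };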
Source filters are not widely used, in my experience. I have mostly seen them in lame attempts to encrypt Perl source code or in humorous Perl obfuscators. In other words, I know it can be done this way, but I personally don't know enough about them to recommend them or to say not to use them.
Subroutines
Sinan Ünür's openex subroutine is a good way to accomplish this. I will only add that a common older idiom that you will see involves passing a reference to a typeglob, like this:
sub opensesame {
my $fn=shift;
local *FH;
return open(FH,$fn) ? *FH : undef;
}
$fh=opensesame('> /tmp/file');
Read perldata for why it is this way...
Template Toolkit
Template::Toolkit can be used to process Perl source code. For example, you could write a template along the lines of:
[% fw(fp, outfile) %]
running that through Template::Toolkit can result in expansion and substitution to:
open my $FP, '>', $outfile or die "$outfile could not be opened for writing:$!";
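For illustration, the macro itself might be defined with Template::Toolkit's MACRO directive (the names fw, fp, and outfile are carried over from the hypothetical example above):
[% MACRO fw(fp, outfile) BLOCK -%]
open my $[% fp %], '>', $[% outfile %] or die "Could not open for writing: $!";
[%- END %]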
Template::Toolkit is most often used to separate the messy HTML and other presentation code from the application code in web apps. Template::Toolkit is very actively developed and well documented. If your only use is a macro of the type you are suggesting, it may be overkill.
Text Editors
Chas. Owens has a method using Vim. I use BBEdit and could easily write a Text Factory to replace the skeleton of a open with the precise and evolving open that I want to use. Alternately, you can place a completion template in your "Resources" directory in the "Perl" folder. These completion skeletons are used when you press the series of keys you define. Almost any serious editor will have similar functionality.
With BBEdit, you can even use Perl code in your text replacement logic. I use Perl::Critic this way. You could use Template::Toolkit inside BBEdit to process the macros with some intelligence. It can be set up so the source code is not changed by the template until you output a version to test or compile; the editor is essentially acting as a preprocessor.
Two potential issues with using a text editor. First, it is a one-way / one-time transform. If you want to change what your "macro" does, you can't, since the previous text of your "macro" was already used; you have to change each instance manually. Second, if you use a template form, you can't send the macro version of the source code to someone else, because of the preprocessing that is being done inside the editor.
Don't Do This!
If you type perl -h to get valid command switches, one option you may see is:
-P run program through C preprocessor before compilation
Tempting! Yes, you can run your Perl code through the C preprocessor and expand C style macros and have #defines. Put down that gun; walk away; don't do it. There are many platform incompatibilities and language incompatibilities.
You get issues like this:
#!/usr/bin/perl -P
#define BIG small
print "BIG\n";
print qq(BIG\n);
Prints:
BIG
small
In Perl 5.12 the -P switch has been removed...
Conclusion
The most flexible solution here is just to write a subroutine. All your code is visible in the subroutine, easily changed, and the call is shorter. There's no real downside other than, potentially, the readability of your code.
Template::Toolkit is widely used. You can write complex replacements that act like macros or even more complex than C macros. If your need for macros is worth the learning curve, use Template::Toolkit.
For very simple cases, use the one way transforms in an editor.
If you really want C style macros, you can use Filter::CPP. This may have the same incompatibilities as the perl -P switch. I cannot recommend this; just learn the Perl way.
If you want to run Perl one liners and Perl regexs against your code before it compiles, use Filter::Simple.
And don't use the -P switch. You can't on newer versions of Perl anyway.
For something like open I think it's useful to include close in your factorized routine. Here's an approach that looks a bit weird but encapsulates a typical open/close idiom.
our $FP;   # the "current filehandle" that the block prints to
sub with_file_do(&$$) {
    my ($code, $mode, $file) = @_;
    open my $fp, $mode, $file or die "Could not open '$file': $!";
    local $FP = $fp;
    $code->(); # perhaps wrap in an eval
    close $fp;
}
# usage
with_file_do {
print $FP "whatever\n";
# other output things with $FP
} '>', $outfile;
Having the open params specified at the end is a bit weird, but it allows you to avoid having to specify the sub keyword.

Are there reasons to ever use the two-argument form of open(...) in Perl?

Are there any reasons to ever use the two-argument form of open(...) in Perl rather than the three-or-more-argument versions?
The only reason I can come up with is the obvious observation that the two-argument form is shorter. But assuming that verbosity is not an issue, are there any other reasons that would make you choose the two-argument form of open(...)?
One- and two-arg open applies any default layers specified with the -C switch or open pragma. Three-arg open does not. In my opinion, this functional difference is the strongest reason to choose one or the other (and the choice will vary depending what you are opening). Which is easiest or most descriptive or "safest" (you can safely use two-arg open with arbitrary filenames, it's just not as convenient) take a back seat in module code; in script code you have more discretion to choose whether you will support default layers or not.
Also, one-arg open is needed for Damian Conway's file slurp operator
$_ = "filename";
$contents = readline!open(!((*{!$_},$/)=\$_));
Imagine you are writing a utility that accepts an input file name. People with reasonable Unix experience are used to substituting - for STDIN. Perl handles that automatically only when the magical form is used where the mode characters and file name are one string, else you have to handle this and similar special cases yourself. This is a somewhat common gotcha, I am surprised no one has posted that yet. Proof:
use IO::File qw();
my $user_supplied_file_name = '-';
IO::File->new($user_supplied_file_name, 'r') or warn "IO::File/non-magical mode - $!\n";
IO::File->new("<$user_supplied_file_name") or warn "IO::File/magical mode - $!\n";
open my $fh1, '<', $user_supplied_file_name or warn "non-magical open - $!\n";
open my $fh2, "<$user_supplied_file_name" or warn "magical open - $!\n";
__DATA__
IO::File/non-magical mode - No such file or directory
non-magical open - No such file or directory
Another small difference: the two-argument form trims whitespace around the filename.
$foo = " fic";
open(MH, ">$foo");
print MH "toto\n";
writes to a file named fic. On the other hand,
$foo = " fic";
open(MH, ">", $foo);
print MH "toto\n";
will write to a file whose name begins with a space.
For short admin scripts with user input (or configuration file input), not having to bother with such details as trimming filenames is nice.
The two argument form of open was the only form supported by some old versions of perl.
If you're opening from a pipe, the three argument form isn't really helpful. Getting the equivalent of the three argument form involves doing a safe pipe open (open(FILE, '|-')) and then executing the program.
So for simple pipe opens (e.g. open(FILE, 'ps ax |')), the two argument syntax is much more compact.
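Worth noting: on perls that support it (5.8 and later, on platforms with fork), open can also take a command list with the '-|' mode, which skips the shell entirely; a sketch:
open my $ps, '-|', 'ps', 'ax' or die "Can't run ps: $!";
while (<$ps>) {
    # process each line of ps output
}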
I think William's post pretty much hits it. Otherwise, the three-argument form is going to be more clear, as well as safer.
See also:
What's the best way to open and read a file in Perl?
Why is three-argument open calls with autovivified filehandles a Perl best practice?
One reason to use the two-argument version of open is if you want to open something which might be a pipe, or a file. If you have one function
sub strange
{
my ($file) = @_;
open my $input, $file or die $!;
}
then you want to call this either with a filename like "file":
strange ("file");
or a pipe like "zcat file.gz |"
strange ("zcat file.gz |");
Depending on the situation, the two-argument version may be used; you will actually see the above construction in "legacy" Perl. However, the most sensible thing might be to open the filehandle appropriately and pass the filehandle to the function, rather than the file name, as in the sketch below.
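Something like this (the zcat pipe is just an illustration):
sub strange
{
    my ($input) = @_;            # a filehandle, however the caller opened it
    while (my $line = <$input>) {
        # process $line
    }
}

open my $fh, '-|', 'zcat', 'file.gz' or die "Can't run zcat: $!";
strange($fh);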
When you are combining a string or using a variable, it can be rather unclear whether '<' or '>' etc is in already. In such cases, I personally prefer readability, which means, I use the longer form:
open($FILE, '>', $varfn);
When you simply use a constant, I prefer the ease of typing (and, actually, consider the short version more readable anyway, or at least equal to the long version).
open($FILE, '>somefile.xxx');
I'm guessing you mean open(FH, '<filename.txt') as opposed to open(FH, '<', 'filename.txt') ?
I think it's just a matter of preference. I always use the former out of habit.

How do I serve a large file for download with Perl?

I need to serve a large file (500+ MB) for download from a location that is not accessible to the web server. I found the question Serving large files with PHP, which is identical to my situation, but I'm using Perl instead of PHP.
I tried simply printing the file line by line, but this does not cause the browser to prompt for download before grabbing the entire file:
use Tie::File;
open my $fh, '<', '/path/to/file.txt';
tie my @file, 'Tie::File', $fh
    or die "Could not open file: $!";
my $size_in_bytes = -s $fh;
print "Content-type: text/plain\n";
print "Content-Length: $size_in_bytes\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
for my $line (@file) {
    print $line;
}
untie @file;
close $fh;
exit;
Does Perl have an equivalent to PHP's readfile() function (as suggested with PHP) or is there a way to accomplish what I'm trying to do here?
If you just want to slurp input to output, this should do the trick.
use Carp ();
{ #Lexical For FileHandle and $/
open my $fh, '<' , '/path/to/file.txt' or Carp::croak("File Open Failed");
local $/ = undef;
print scalar <$fh>;
close $fh or Carp::carp("File Close Failed");
}
I guess this is in response to the "Does Perl have a PHP readfile equivalent" part, and my answer would be "it doesn't really need one".
I've used PHP's manual file IO controls and they're a pain; Perl's are just so easy to use by comparison that shelling out for a one-size-fits-all function seems like overkill.
Also, you might want to look at X-SendFile support, and basically send a header to your webserver to tell it what file to send: http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/ ( assuming of course it has permissions enough to access the file, but the file is just NOT normally accessible via a standard URI )
Edit: Noted, it is better to do it in a loop. I tested the above code with a hard drive and it does implicitly try to store the whole thing in an invisible temporary variable and eat all your RAM.
Alternative using blocks
The following improved code reads the given file in blocks of 8192 chars, which is much more memory efficient, and gets a throughput respectably comparable with my disk raw read rate. ( I also pointed it at /dev/full for fits and giggles and got a healthy 500mb/s throughput, and it didn't eat all my rams, so that must be good )
{
open my $fh , '<', '/dev/sda' ;
local $/ = \8192; # this tells IO to use 8192 char chunks.
print $_ while defined ( $_ = scalar <$fh> );
close $fh;
}
Applying jrockway's suggestions
{
open my $fh , '<', '/dev/sda5' ;
print $_ while ( sysread $fh, $_ , 8192 );
close $fh;
}
This literally doubles performance, ... and in some cases gets me better throughput than dd does O_o.
The readline function is called readline (and can also be written as
<>).
I'm not sure what problem you're having. Perhaps that for loops
aren't lazily evaluated (which they're not). Or, perhaps Tie::File is
screwing something up? Anyway, the idiomatic Perl for reading a file
a line at a time is:
open my $fh, '<', $filename or die ...;
while(my $line = <$fh>){
# process $line
}
No need to use Tie::File.
Finally, you should not be handling this sort of thing yourself. This
is a job for a web framework. If you were using
Catalyst (or
HTTP::Engine), you would
just say:
open my $fh, '<', $filename ...
$c->res->body( $fh );
and the framework would automatically serve the data in the file
efficiently. (Using stdio via readline is not a good idea here, it's
better to read the file in blocks from the disk. But who cares, it's
abstracted!)
You could use my Sys::Sendfile module. It should be highly efficient (as it uses sendfile under the hood), but not entirely portable (only Linux, FreeBSD and Solaris are currently supported).
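Usage is minimal; a sketch, assuming $socket is the connection to the client:
use Sys::Sendfile;

open my $in, '<', '/path/to/file.txt' or die "Can't open file: $!";
sendfile $socket, $in, -s $in;   # the kernel copies the file to the socket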
When you say "this does not cause the browser to prompt for download" -- what's "the browser"?
Different browsers behave differently, and IE is particularly wilful, it will ignore headers and decide for itself what to do based on reading the first few kb of the file.
In other words, I think your problem may be at the client end, not the server end.
Try lying to "the browser" and telling it the file is of type application/octet-stream. Or why not just zip the file, especially as it's so huge.
Don't use for/foreach (<$input>) because it reads the whole file at once and then iterates over it. Use while (<$input>) instead. The sysread solution is good, but the sendfile is the best performance-wise.
Answering the (original) question ("Does Perl have an equivalent to PHP's readline() function ... ?"), the answer is "the angle bracket syntax":
open my $fh, '<', '/path/to/file.txt';
while (my $line = <$fh>) {
print $line;
}
Getting the content-length with this method isn't necessarily easy, though, so I'd recommend staying with Tie::File.
NOTE
Using:
for my $line (<$filehandle>) { ... }
(as I originally wrote) copies the contents of the file to a list and iterates over that. Using
while (my $line = <$filehandle>) { ... }
does not. When dealing with small files the difference isn't significant, but when dealing with large files it definitely can be.
Answering the (updated) question ("Does Perl have an equivalent to PHP's readfile() function ... ?"), the answer is slurping. There are a couple of syntaxes, but Perl6::Slurp seems to be the current module of choice.
The implied question ("why doesn't the browser prompt for download before grabbing the entire file?") has absolutely nothing to do with how you're reading in the file, and everything to do with what the browser thinks is good form. I would guess that the browser sees the mime-type and decides it knows how to display plain text.
Looking more closely at the Content-Disposition problem, I remember having similar trouble with IE ignoring Content-Disposition. Unfortunately I can't remember the workaround. IE has a long history of problems here (old page, refers to IE 5.0, 5.5 and 6.0). For clarification, however, I would like to know:
What kind of link are you using to point to this big file (i.e., are you using a normal <a href="perl_script.cgi?filename.txt"> link, or are you using Javascript of some kind)?
What system are you using to actually serve the file? For instance, does the webserver make its own connection to the other computer without a webserver, and then copy the file to the webserver and then send the file to the end user, or does the user make the connection directly to the computer without a webserver?
In the original question you wrote "this does not cause the browser to prompt for download before grabbing the entire file" and in a comment you wrote "I still don't get a download prompt for the file until the whole thing is downloaded." Does this mean that the file gets displayed in the browser (since it's just text), that after the browser has downloaded the entire file you get a "where do you want to save this file" prompt, or something else?
I have a feeling that there is a chance the HTTP headers are getting stripped out at some point or that a Cache-control header is getting added (which apparently can cause trouble).
I've successfully done it by telling the browser it was of type application/octet-stream instead of type text/plain. Apparently most browsers prefer to display text/plain inline instead of giving the user a download dialog option.
It's technically lying to the browser, but it does the job.
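In CGI terms that is just a different Content-Type header in front of the same body; a sketch:
print "Content-Type: application/octet-stream\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
# ... then stream the file contents as before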
The most efficient way to serve a large file for download depends on a web-server you use.
In addition to @Kent Fredric's X-Sendfile suggestion:
File Downloads Done Right have some links that describe how to do it for Apache, lighttpd (mod_secdownload: security via url generation), nginx. There are examples in PHP, Ruby (Rails), Python which can be adopted for Perl.
Basically it boils down to:
Configure paths, and permissions for your web-server.
Generate valid headers for the redirect in your Perl app (Content-Type, Content-Disposition, Content-length?, X-Sendfile or X-Accel-Redirect, etc).
There are probably CPAN modules and web-framework plugins that do exactly that, e.g., @Leon Timmermans mentioned Sys::Sendfile in his answer.