CGI script's message not rendering in browser? - perl

I'm trying to copy some files from one network share to another using File::Copy.
This is my code:
#!C:/strawberry/perl/bin/perl.exe
use File::Copy;
print "Content-type: text/html\n\n";
print "<H1>Hello World</H1>\n";
copy("s:\\nl\\cover\\config.jsp", "s:\\temp\\config.jsp")
or die "File cannot be copied.";
print "this is not displayed";
Why is the 'die' message not rendering?

If you are running this under a web server (and presumably you are, since you are sending a "Content-Type" header), any error messages you emit using die and warn will go to the server's error log.
Further, if you are invoking this as CGI, note that you are lying to the browser: you claim to be sending HTML, but what you send is not valid HTML.
Especially if you are just learning Perl, you should make an effort to dot all your i's and cross all your t's:
#!C:/strawberry/perl/bin/perl.exe
use strict; # every time
use warnings; # every time
use CGI qw(:cgi);
use CGI::Carp qw(fatalsToBrowser); # only during debugging
use File::Copy;
use File::Spec::Functions qw(catfile);
$| = 1;
# prefer portable ways of dealing with filenames
# see http://search.cpan.org/perldoc/File::Spec
my $source = catfile(qw(S: nl cover config.jsp));
my $target = catfile(qw(S: temp config.jsp));
print header('text/plain');
if ( copy $source => $target ) {
print "'$source' was copied to '$target'\n";
}
else {
print "'$source' was not copied to '$target'\n";
# you can use die if you want the error message to
# go to the error log and an "Internal Server Error"
# to be shown to the web site visitor.
# die "'$source' was not copied to '$target'\n";
}
See the CGI documentation for the function-oriented interface's import lists.

Are you sending your stderr to the stdout stream as well? All your prints will go to stdout, which is presumably connected to a browser, given your HTML output.
However, die writes to the stderr stream. This is likely to go not to the browser window but to an error log of some sort. Exactly where it goes depends on what Perl is running within.
One way to check is to print something instead of dying in the or clause.
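For example, a minimal sketch of that check against the script from the question ($! carries the operating-system error text):
copy("s:\\nl\\cover\\config.jsp", "s:\\temp\\config.jsp")
    or print "Copy failed: $!\n";   # goes to stdout (the browser), not the error log
print "this line is now reached even when the copy fails\n";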
So, some questions:
How are you running it?
If on the command line, show us the exact command.
If in a web server of some sort, tell us which one so we can find the logs for you.

die sends messages to STDERR, which will wind up in the web server's error logs, not on the screen. There are some CGI modules that offer you greater control over error-handling, or you could install a $SIG{__DIE__} handler (if you don't know what that is, then don't worry -- you don't need to), but when I want a quick-and-dirty way to debug my CGI scripts, I put this at the top of the script:
#! /usr/bin/perl
$src = join '', <DATA>;
eval $src;
print "Content-type: text/plain\n\n$@\n" if $@;
__END__
... my cgi script starts here ...
This loads the script into a variable, uses eval to run the Perl interpreter on that variable's contents, and prints any errors to standard output (the browser window) with a valid header.
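For completeness, a hedged sketch of the $SIG{__DIE__} idea mentioned above (you do not need it for the quick-and-dirty approach):
# Print a plain-text header and the message before the process exits.
# Real code should check $^S first, since this handler also fires for
# exceptions that an eval is about to catch; this is only an illustration.
$SIG{__DIE__} = sub {
    print "Content-type: text/plain\n\n";
    print "Fatal error: $_[0]";
};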

copy("s:\\nl\\cover\\config.jsp", "s:\\temp\\config.jsp")
or die "File cannot be copied.";
print "this is not displayed";
Only one of these messages should ever be displayed and it's unclear which you're asking about.
The question says you're wondering why the die message isn't being displayed; to me, that implies that you're not seeing the message "File cannot be copied." and the most obvious reason for this is that the copy operation is succeeding, but see also the previous responses about looking in the error log if you're running this under CGI.
The text of the messages, though, suggests that you actually mean you're not seeing the message "this is not displayed". (Why else would you mention that it isn't displayed?) In that case, the reason you're not seeing it is because die causes the program to exit. After the copy fails and the die executes, your program is dead. Terminated. It has shuffled off this mortal CPU and joined the stack eternal. It wouldn't print "this is not displayed" if you put four million volts through it. It is an ex-process.

After editing your code, it's apparent that your die is seen as a command and probably needs to be escaped. Note how it is rendered on Stack Overflow in blue (indicating that it is a keyword). Try switching to a synonym like "shutdown" instead.

Related

How to run a local program with user input in Perl

I'm trying to get user input from a web page written in Perl and send it to a local program (blastp), then display the results.
This is what I have right now:
(input code)
print $q->p, "Your database: $bd",
$q->p, "Your protein is: $prot",
$q->p, "Executing...";
print $q->p, system("blastp","-db $bd","-query $prot","-out results.out");
Now, I've done a little research, but I can't quite grasp how you're supposed to do things like this in Perl. I've tried opening a file, writing to it, and sending it over to blastp as an input, but I was unsuccessful.
For reference, this line produces a successful output file:
kold#sazabi ~/BLAST/pataa $ blastp -db pataa -query ../teste.fs -out results.out
I may need to force the bd to load from an absolute path, but that shouldn't be difficult.
edit: Yeah, the DBs have an environmental variable, that's fixed. Ok, all I need is to get the input into a file, pass it to the command, and then print the output file to the CGI page.
edit2: for clarification:
I am receiving user input in $prot, I want to pass it over to blastp in -query, have the program blastp execute, and then print out to the user the results.out file (or just have a link to it, since blastp can output in HTML)
EDIT:
All right, fixed everything I needed to fix. The big problem was me not seeing what was going wrong: I had to install Capture::Tiny and print out stderr, which was when I realized the environment variable wasn't getting set correctly, so BLAST wasn't finding my databases. Thanks for all the help!
Write $prot to the file. Assuming you need to do it as-is without processing the text to split it or something:
For a fixed file name (may be problematic):
use File::Slurp;
write_file("../teste.fs", $prot, "\n") or print_error_to_web();
# Implement the latter to print error in nice HTML format
For a temp file (better):
use File::Temp qw(tempfile);   # tempfile() comes from File::Temp
my ($fh, $filename) = tempfile( $template, DIR => "..", UNLINK => 1 );
# You can also create temp directory which is even better, via tempdir()
print $fh "$prot\n";
close $fh;
Step 2: Run your command as you indicated:
my $rc = system("$BLASTP_PATH/blastp", "-db", "pataa"
,"-query", "../teste.fs", "-out", "results.out");
# Process $rc for errors
# Use qx[] instead of system() if you want to capture
# standard output of the command
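Since the comment above says to process $rc for errors, here is a minimal sketch of what that could look like (see perldoc -f system; $q is the CGI object from the question):
if ($rc == -1) {
    print $q->p, "failed to execute blastp: $!";
}
elsif ($rc != 0) {
    print $q->p, "blastp exited with status ", $rc >> 8;   # the real exit code
}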
Step 3: Read the output file in:
use File::Slurp;
my $out_file_text = read_file("results.out");
Send back to web server
print $q->p, $out_file_text;
The above code has multiple issues (e.g. you need better file/directory paths, more error handling etc...) but should start you on the right track.

Perl script: how to tell when a process has finished

This is what I have created so far regarding your advice:
#!/usr/bin/perl -w
use strict;
use CGI qw(:standard);
#some variables
my $message = "please wait, loading data...\n";
#First build the web page
print header;
print start_html('Hello World');
print "<H1>we need love, peace and harmony</H1>\n";
print "<p>$message</p>\n";
#Establish a pipeline between the bash and my script.
my $bash_command = '/love/peace/harmony/./lovepeace.bash';
open(my $pipe, '-|', $bash_command) or die $!;
while (my $line = <$pipe>){
# Do something with each line.
print "<p>$line</p>\n";
}
#when is the job done...?
print end_html;
When I call that .pl script in my browser, everything works nice :-) But a few questions are still on my mind:
When I call this website, it is busy loading some values from the pipe. Since there are about 10 values, it's rather quick (2-4 seconds), but if I had 100+ values, the user would have to wait a while. Since I cannot have a progress bar, I should provide some information to the user, like: "Loading data, please wait..."
And when the job is done, this message should say: "Job done" or something similar.
• How do I tell when the process is finished?
• Can I reload the page when the job is done?
• Is there any chance of using my own stylesheet within this Perl CGI?
Regards,
JJ
Randal Schwartz's Watching long processes through CGI might be helpful here.
As for using your own stylesheet, you can just specify that in the <head>...</head> section you are emitting.
It might make more sense to implement this as Ajax: allow the page to fully render, then have an Ajax request watch the progress. Your page may not display nicely if the browser doesn't see the end of the page until the process is finished.
EDIT: Sorry, incomplete thought. A more thorough approach to seeing the line-by-line output would be to kick off that script as a background process writing its output to a log, then have the Ajax server code (a separate CGI) return the lines processed so far and a flag saying whether the process has exited.
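A hypothetical sketch of that separate status CGI (the log and done-flag file names are invented for illustration):
#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(header);
print header('text/plain');
# Assumes the background job appends its output to /tmp/lovepeace.log
# and creates /tmp/lovepeace.done when it exits.
if (open my $log, '<', '/tmp/lovepeace.log') {
    print while <$log>;
    close $log;
}
print -e '/tmp/lovepeace.done' ? "STATUS: done\n" : "STATUS: running\n";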
When your external process is complete, <$pipe> will return undef and your while loop
while (my $line = <$pipe>) { ... }
will finish. (Until then, <$pipe> will block while waiting for input to become available.) Therefore you can tell that the job is done when the while loop ends. If you can, you'll want to configure your external command (/love/peace/harmony/./lovepeace.bash) not to buffer its output.
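For example, a minimal sketch that extends the loop from the question to report completion ($? holds the child's wait status once the pipe is closed):
while (my $line = <$pipe>) {
    print "<p>$line</p>\n";
}
close $pipe;              # returns false if the child exited non-zero
my $status = $? >> 8;     # the child's actual exit code
print $status == 0 ? "<p>Job done</p>\n"
                   : "<p>Job failed (exit $status)</p>\n";
print end_html;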

How can I send multiple images in a server push Perl CGI program?

I am a beginner in Perl CGI etc. I was experimenting with server-push concept with a piece of Perl code. It is supposed to send a jpeg image to the client every three seconds.
Unfortunately nothing seems to work. Can somebody help identify the problem?
Here is the code:
use strict;
# turn off io buffering
$|=1;
print "Content-type: multipart/x-mixed-replace;";
print "boundary=magicalboundarystring\n\n";
print "--magicalboundarystring\n";
#list the jpg images
my(@file_list) = glob "*.jpg";
my($file) = "";
foreach $file (@file_list)
{
open FILE, "<", $file or die "Cannot open file $file: $!"; # "<" to read; ">" would clobber the image
binmode FILE; # jpeg data is binary
print "Content-type: image/jpeg\n\n";
while ( <FILE> )
{
print "$_";
}
close FILE;
print "\n--magicalboundarystring\n";
sleep 3;
next;
}
EDIT: turned off i/o buffering, added "use strict"; "@file_list" and "$file" are made local
Flush the output.
Most probably, the server is keeping the response in the buffer. You may want to flush STDOUT after every print, or enable autoflush once.
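A minimal sketch of both options (IO::Handle gives file handles flush and autoflush methods):
use IO::Handle;
STDOUT->autoflush(1);   # same effect as $| = 1: flush after every print
# ...or flush by hand after each multipart section:
print "--magicalboundarystring\n";
STDOUT->flush;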
Have a look at http://www.abiglime.com/webmaster/articles/cgi/032498.htm
[quote]
To use the script below, you'll need to implement what's called a "non-parsed" CGI on your site. Normally, the web server will buffer all output from your CGI program until the program finishes. We don't want that to happen here. With Apache, it's quite easy: if the name of your CGI program starts with "nph-", it won't be parsed. Also, change the glob "/some/path/*" to the path where you want to look for files.
[/quote]

IPC::Open3 Fails Running Under Apache

I have a module that uses IPC::Open3 (or IPC::Open2, both exhibit this problem) to call an external binary (bogofilter in this case) and feed it some input via the child-input filehandle, then reads the result from the child-output handle. The code works fine when run in most environments. However, the main use of this module is in a web service that runs under Apache 2.2.6. And under that environment, I get the error:
Cannot fdopen STDOUT: Invalid argument
This only happens when the code runs under Apache. Previously, the code constructed a horribly complex command, which included a here-document for the input, and ran it with back-ticks. THAT worked, but was very slow and prone to breaking in unique and perplexing ways. I would hate to have to revert to the old version, but I cannot crack this.
Could it be because mod_perl 2 closes STDOUT? I just discovered this and posted about it:
http://marc.info/?l=apache-modperl&m=126296015910250&w=2
I think it's a nasty bug, but no one seems to care about it thus far. Post a follow up on the mod_perl list if your problem is related and you want it to get attention.
Jon
Bogofilter returns different exit codes for spam/nonspam.
You can "fix" this by redirecting stdout to /dev/null
system("bogofilter < $input > /dev/null") >> 8;
will return 0 for spam, 1 for nonspam, 2 for unknown (the >> 8 is needed because system returns the raw wait status with the exit code in the high byte; shifting recovers it).
Note: the lack of an environment may also prevent bogofilter from finding its wordlist, so pass that in explicitly as well:
system("bogofilter -d /path/to/.bogofilter/ < $input > /dev/null") >> 8;
(where /path/to/.bogofilter contains the wordlist.db)
You can't retrieve the actual rating that bogofilter gave that way, but it does get you something.
If your code is only going to be run on Linux/Unix systems it is easy to write an open3 replacement that does not fail because STDOUT is not a real file handle:
use POSIX ();   # for dup2() and _exit()
sub my_open3 {
    # untested!
    pipe my($inr), my($inw) or die;
    pipe my($outr), my($outw) or die;
    pipe my($errr), my($errw) or die;
    my $pid = fork;
    unless ($pid) {
        defined $pid or die;
        POSIX::dup2(fileno $inr, 0);    # dup2 wants numeric file descriptors
        POSIX::dup2(fileno $outw, 1);
        POSIX::dup2(fileno $errw, 2);
        exec @_;
        POSIX::_exit(1);
    }
    return ($inw, $outr, $errr);
}
my ($in, $out, $err) = my_open3('ls /etc/');
Caveat Emptor: I am not a perl wizard.
As @JonathanSwartz suggested, I believe the issue is that apache2 mod_perl closes STDIN and STDOUT. That shouldn't be relevant to what IPC::Open3 is doing, but it has a bug in it, described here.
In summary (this is the part I'm not super clear on), open3 tries to attach the child process's STDIN/OUT/ERR to your process's, or duplicate them if that was what was requested. Due to some undocumented ways that open('>&=X') works, it generally works fine, except in the case where STDIN/OUT/ERR are closed.
Another link that gets deep into the details.
One solution is to fix IPC::Open3, as described in both of those links. The other, which worked for me, is to temporarily open STDIN/OUT in your mod_perl code and then close it afterwards:
my ($save_stdin,$save_stdout);
open $save_stdin, '>&STDIN';
open $save_stdout, '>&STDOUT';
open STDIN, '>&=0';
open STDOUT, '>&=1';
#make your normal IPC::Open3::open3 call here
close(STDIN);
close(STDOUT);
open STDIN, '>&', $save_stdin;
open STDOUT, '>&', $save_stdout;
Also, I noticed a bunch of complaints around the net about IPC::Run3 suffering from the same problems, so if anyone runs into the same issue, I suspect the same solution would work.

How do I serve a large file for download with Perl?

I need to serve a large file (500+ MB) for download from a location that is not accessible to the web server. I found the question Serving large files with PHP, which is identical to my situation, but I'm using Perl instead of PHP.
I tried simply printing the file line by line, but this does not cause the browser to prompt for download before grabbing the entire file:
use Tie::File;
open my $fh, '<', '/path/to/file.txt';
tie my @file, 'Tie::File', $fh
or die "Could not open file: $!";
my $size_in_bytes = -s $fh;
print "Content-type: text/plain\n";
print "Content-Length: $size_in_bytes\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
for my $line (@file) {
print $line;
}
untie @file;
close $fh;
exit;
Does Perl have an equivalent to PHP's readfile() function (as suggested with PHP) or is there a way to accomplish what I'm trying to do here?
If you just want to slurp input to output, this should do the trick.
use Carp ();
{ #Lexical For FileHandle and $/
open my $fh, '<' , '/path/to/file.txt' or Carp::croak("File Open Failed");
local $/ = undef;
print scalar <$fh>;
close $fh or Carp::carp("File Close Failed");
}
I guess this is in response to "Does Perl have a PHP readfile equivalent?", and my answer would be "it doesn't really need one".
I've used PHP's manual file IO controls and they're a pain; Perl's are just so easy to use by comparison that shelling out for a one-size-fits-all function seems overkill.
Also, you might want to look at X-Sendfile support: you send a header telling your webserver which file to serve: http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/ (assuming of course it has permission to access the file, even though the file is NOT normally accessible via a standard URI).
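A hedged sketch of that header approach (assumes Apache with mod_xsendfile enabled; the path is only illustrative):
print "X-Sendfile: /not/in/docroot/file.txt\n";
print "Content-Type: text/plain\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
# The web server intercepts X-Sendfile and streams the file itself,
# so the CGI never reads the file into memory.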
Edit: Noted, it is better to do it in a loop. I tested the slurp code above with a hard drive and it does implicitly try to store the whole thing in an invisible temporary variable and eat all your RAM.
Alternative using blocks
The following improved code reads the given file in blocks of 8192 chars, which is much more memory efficient, and gets throughput respectably comparable to my raw disk read rate. (I also pointed it at /dev/full for fits and giggles and got a healthy 500mb/s throughput, and it didn't eat all my RAM, so that must be good.)
{
open my $fh , '<', '/dev/sda' ;
local $/ = \8192; # this tells IO to use 8192 char chunks.
print $_ while defined ( $_ = scalar <$fh> );
close $fh;
}
Applying jrockway's suggestions:
{
open my $fh , '<', '/dev/sda5' ;
print $_ while ( sysread $fh, $_ , 8192 );
close $fh;
}
This literally doubles performance, ... and in some cases gets me better throughput than dd does O_o.
The readline function is called readline (and can also be written as <>).
I'm not sure what problem you're having. Perhaps it's that for loops aren't lazily evaluated (which they're not). Or perhaps Tie::File is screwing something up? Anyway, the idiomatic Perl for reading a file a line at a time is:
open my $fh, '<', $filename or die ...;
while(my $line = <$fh>){
# process $line
}
No need to use Tie::File.
Finally, you should not be handling this sort of thing yourself. This is a job for a web framework. If you were using Catalyst (or HTTP::Engine), you would just say:
open my $fh, '<', $filename ...
$c->res->body( $fh );
and the framework would automatically serve the data in the file efficiently. (Using stdio via readline is not a good idea here; it's better to read the file in blocks from the disk. But who cares, it's abstracted!)
You could use my Sys::Sendfile module. It should be highly efficient (as it uses sendfile under the hood), but not entirely portable (only Linux, FreeBSD and Solaris are currently supported).
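A minimal sketch of how it might be used (this assumes the sendfile($out, $in, $count) calling form and an output handle that is a real socket, which a plain CGI setup may not give you):
use Sys::Sendfile;
open my $in, '<', '/path/to/file.txt' or die "open: $!";
print "Content-Type: application/octet-stream\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
sendfile \*STDOUT, $in, -s $in;   # zero-copy transfer inside the kernel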
When you say "this does not cause the browser to prompt for download" -- what's "the browser"?
Different browsers behave differently, and IE is particularly wilful: it will ignore headers and decide for itself what to do based on reading the first few kB of the file.
In other words, I think your problem may be at the client end, not the server end.
Try lying to "the browser" and telling it the file is of type application/octet-stream. Or why not just zip the file, especially as it's so huge.
Don't use for/foreach (<$input>), because it reads the whole file at once and then iterates over it. Use while (<$input>) instead. The sysread solution is good, but sendfile is the best performance-wise.
Answering the (original) question ("Does Perl have an equivalent to PHP's readline() function ... ?"), the answer is "the angle bracket syntax":
open my $fh, '<', '/path/to/file.txt';
while (my $line = <$fh>) {
print $line;
}
Getting the content-length with this method isn't necessarily easy, though, so I'd recommend staying with Tie::File.
NOTE
Using:
for my $line (<$filehandle>) { ... }
(as I originally wrote) copies the contents of the file to a list and iterates over that. Using
while (my $line = <$filehandle>) { ... }
does not. When dealing with small files the difference isn't significant, but when dealing with large files it definitely can be.
Answering the (updated) question ("Does Perl have an equivalent to PHP's readfile() function ... ?"), the answer is slurping. There are a couple of syntaxes, but Perl6::Slurp seems to be the current module of choice.
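For illustration, a slurp idiom that needs no module at all (a sketch; localizing $/ makes the next read return the whole file):
my $contents = do {
    local $/;   # undef the input record separator = slurp mode
    open my $fh, '<', '/path/to/file.txt' or die "open: $!";
    <$fh>;
};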
The implied question ("why doesn't the browser prompt for download before grabbing the entire file?") has absolutely nothing to do with how you're reading in the file, and everything to do with what the browser thinks is good form. I would guess that the browser sees the mime-type and decides it knows how to display plain text.
Looking more closely at the Content-Disposition problem, I remember having similar trouble with IE ignoring Content-Disposition. Unfortunately I can't remember the workaround. IE has a long history of problems here (old page, refers to IE 5.0, 5.5 and 6.0). For clarification, however, I would like to know:
What kind of link are you using to point to this big file (i.e., are you using a normal <a href="perl_script.cgi?filename.txt"> link, or are you using JavaScript of some kind)?
What system are you using to actually serve the file? For instance, does the web server make its own connection to the other computer (the one without a web server), copy the file locally, and then send it to the end user; or does the user connect directly to the computer without a web server?
In the original question you wrote "this does not cause the browser to prompt for download before grabbing the entire file" and in a comment you wrote "I still don't get a download prompt for the file until the whole thing is downloaded." Does this mean that the file gets displayed in the browser (since it's just text), that after the browser has downloaded the entire file you get a "where do you want to save this file" prompt, or something else?
I have a feeling that there is a chance the HTTP headers are getting stripped out at some point or that a Cache-control header is getting added (which apparently can cause trouble).
I've successfully done it by telling the browser it was of type application/octet-stream instead of type text/plain. Apparently most browsers prefer to display text/plain inline instead of giving the user a download dialog option.
It's technically lying to the browser, but it does the job.
The most efficient way to serve a large file for download depends on the web server you use.
In addition to @Kent Fredric's X-Sendfile suggestion:
File Downloads Done Right has some links that describe how to do it for Apache, lighttpd (mod_secdownload: security via url generation), and nginx. There are examples in PHP, Ruby (Rails), and Python which can be adapted for Perl.
Basically it boils down to:
Configure paths and permissions for your web server.
Generate valid headers for the redirect in your Perl app (Content-Type, Content-Disposition, Content-Length?, X-Sendfile or X-Accel-Redirect, etc.); a sketch follows below.
There are probably CPAN modules or web-framework plugins that do exactly that; e.g., @Leon Timmermans mentioned Sys::Sendfile in his answer.
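As a concrete illustration of the second step, a hedged sketch for nginx (assumes an internal location /protected/ is configured in nginx.conf to map to the hidden directory; the names are invented):
print "X-Accel-Redirect: /protected/file.txt\n";
print "Content-Type: application/octet-stream\n";
print "Content-Disposition: attachment; filename=file.txt\n\n";
# nginx intercepts X-Accel-Redirect and serves the file itself,
# so the Perl process never streams the 500+ MB body.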