Get local file size in Perl web development

My host does not allow uploading big files to the server with PHP, so I need to upload with Perl.
I need to display the uploaded file's size. Here is the relevant part of my code:
open ( UPLOADFILE, ">$upload_dir/$filename" ) or die "$!";
binmode UPLOADFILE;
while ( <$upload_filehandle> ) {
    print UPLOADFILE;
}
close UPLOADFILE;
$filename = ">$upload_dir/$filename";
use File::stat;
my $stat = stat($filename);
say $stat->size;
say stat($filename)->size;
But I got an error:
Can't call method "size" on an undefined value at upload.cgi line 56.
Update
After Borodin's answer I updated to this new code, which is now giving an Internal Server Error:
#!/usr/bin/perl -wT
use strict;
use CGI;
use CGI::Carp qw ( fatalsToBrowser );
use File::Basename;
use diagnostics;
use File::stat;
use warnings;
use 5.010;
# ... some code ...
my $upload_filehandle = $query->upload("photo");
open ( UPLOADFILE, ">$upload_dir/$filename" ) or die "$!";
binmode UPLOADFILE;
while ( <$upload_filehandle> ) {
    print UPLOADFILE;
}
close UPLOADFILE;
my $full_name = "$upload_dir/$filename";
{
    open my $up_fh, '>:raw', $full_name or die qq{Unable to open "$full_name" for output: $!};
    print $up_fh $_ while <$upload_filehandle>;
}
printf "%.2fMB uploaded\n", (-s $full_name) / (1024 * 1024);
print $query->header ( );

Your variable $filename is set to ">$upload_dir/$filename", which is wrong. The > is the mode for the open call and isn't part of the file name.
You want
my $filename = "$upload_dir/$filename";
Also, rather than using File::stat etc. you can just use the -s file test operator
say -s $filename;
which is much more concise
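For what it's worth, the two approaches return the same number; a minimal self-contained sketch (the temporary file is created just for illustration):

```perl
use strict;
use warnings;
use feature 'say';
use File::stat;
use File::Temp qw(tempfile);

# Create a small temporary file so the example stands alone
my ( $tmp_fh, $tmp_path ) = tempfile();
print $tmp_fh "hello\n";
close $tmp_fh;

say stat($tmp_path)->size;    # File::stat's object interface
say -s $tmp_path;             # the -s file test: same value, less code
```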
I would code the whole thing like this
my $full_name = "$upload_dir/$filename";
{
    open my $up_fh, '>:raw', $full_name or die qq{Unable to open "$full_name" for output: $!};
    print $up_fh $_ while <$upload_filehandle>;
}
printf "%.2fMB uploaded\n", (-s $full_name) / (1024 * 1024);
Update
You're misunderstanding what my code does. You're opening the upload file twice now -- mine replaces all of that. It should look like this below
The reason you're getting the error is that you're printing the file size before the HTTP header. I don't know what you want your response to look like, but you probably want to wrap it in HTML and print it after $query->header()
#!/usr/bin/perl -T
use strict;
use warnings;
use 5.010;
use CGI;
use CGI::Carp qw ( fatalsToBrowser );
use File::Basename;
#... SOME CODE
my $upload_filehandle = $query->upload('photo');
my $full_name = "$upload_dir/$filename";
{
open my $uploaded_fh, '>:raw', $full_name or die qq{Unable to open "$full_name" for output: $!};
print $uploaded_fh $_ while <$upload_filehandle>;
}
my $size = -s $full_name;
print $query->header('text/plain');
printf "\n%dB file uploaded\n", $size;

Related

Downloading text files: Perl

The files are not downloading, please help.
#!/usr/bin/perl -w
require HTTP::Response;
require LWP::UserAgent;
open (INPUT, "ndb_id_file.txt") or die "can't open ndb_id_file.txt";
@input = <INPUT>;
foreach $line (@input) {
$ua = LWP::UserAgent->new;
$ua->env_proxy('http');
$ua->proxy(['http', 'ftp'],'http://144020019:*******#netmon.****.ac.in:80');
response =
$ua->get('www.ndbserver.rutgers.edu/files/ftp/NDB/coordinates/na-biol/$line');
if ($response->is_success) {
$content = $response->content();
open(OUT,">$line.pdb") or die "Output file $line cannot be produced... Error...";
print (OUT "$content");
}
}
There are a number of problems with your program. The major ones are in this line:
response = $ua->get('www.ndbserver.rutgers.edu/files/ftp/NDB/coordinates/na-biol/$line');
You are trying to assign to response, which is not a variable name
the value of $line is not being inserted into the URL because you are using single quotes
The contents of $line end with a linefeed, which should be removed using chomp
The URL has no scheme — it should start with http://
Apart from those points, you should fix these issues
You must always use strict and use warnings at the top of every Perl program you write. Adding -w on the shebang line is far inferior
You should use LWP::UserAgent rather than require it. And there is no need to also load HTTP::Response, as it is loaded as part of LWP
You should always use the three-parameter form of open with lexical file handles. And if the open fails you should print a die string that includes the value of $! which gives the reason for the failure
You should use while to read data from a file one line at a time, unless you have a good reason to need all of it in memory at once
There is no need to create a new user agent $ua every time around the loop. Just make one and use it to fetch every URL
You should use decoded_content instead of content to fetch the content of an HTTP::Response message in case it is compressed
Here is a program that includes all of those fixes. I haven't been able to test it but it does compile
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
my $in_file = 'ndb_id_file.txt';
open my $fh, '<', $in_file or die qq{Unable to open "$in_file" for input: $!};
my $ua = LWP::UserAgent->new;
$ua->env_proxy('http');
$ua->proxy(['http', 'ftp'], 'http://144020019:*******#netmon.****.ac.in:80');
while ( my $line = <$fh> ) {
    chomp $line;
    my $url = "http://www.ndbserver.rutgers.edu/files/ftp/NDB/coordinates/na-biol/$line";
    my $response = $ua->get($url);
    unless ( $response->is_success ) {
        warn $response->status_line;
        next;
    }
    my $content = $response->decoded_content;
    my $out_file = "$line.pdb";
    open my $out_fh, '>', $out_file or die qq{Unable to open "$out_file" for output: $!};
    print $out_fh $content;
    close $out_fh or die qq{Unable to close "$out_file": $!};
}

Outputting specifically named file to same directory as infile - Perl

I've got the following code:
#!/usr/bin/perl
use strict;
use warnings;
my $usage = "Usage: $0 <infile.txt> <outfile.txt>\n";
my $infile = shift or die $usage;
my $outfile = shift or die $usage;
open (my $data, "<", $infile) or die "There was a problem opening: $!";
my @primers = <$data>;
close $data;
chomp @primers;
use Algorithm::Combinatorics qw(combinations);
my $strings = \@primers;
my $iter = combinations($strings, 2);
open(my $fh, '>', $outfile);
while (my $c = $iter->next) {
    print $fh join('', @$c) . "\n";
}
This works just fine. However, I would prefer that the user not have to specify the output directory and filename. Is there an easy way to have Perl print the output to the same directory as the infile, while also giving the output file a specific name such as 'output.txt'?
Any pointers would be greatly appreciated!
Thanks.
SOLVED:
#!/usr/bin/perl
use strict;
use warnings;
my $usage = "Usage: $0 <infile.txt>\n";
my $infile = shift or die $usage;
use File::Basename;
my $DIR = dirname($infile);
my $outfile = $DIR . "/results.txt" or die $usage;
open (my $data, "<", $infile) or die "There was a problem opening: $!";
my @primers = <$data>;
close $data;
chomp @primers;
use Algorithm::Combinatorics qw(combinations);
my $strings = \@primers;
my $iter = combinations($strings, 2);
open(my $fh, '>', $outfile);
while (my $c = $iter->next) {
    print $fh join('', @$c) . "\n";
}
print ("Finished. The results are located at $outfile\n\n");
If I understand you correctly, you are trying to write the output to the same directory as input file. If so, you can use File::Basename module to get the directory of the input file.
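As a minimal sketch of that suggestion (the input path is hypothetical):

```perl
use strict;
use warnings;
use File::Basename qw(dirname);

# hypothetical input path, for illustration only
my $infile  = '/data/primers/infile.txt';
my $outfile = dirname($infile) . '/results.txt';
# $outfile is now '/data/primers/results.txt'
```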
How about
my $outfile = $infile . ".combinations";
Or, better yet, use STDIN and STDOUT.
(Also, check that your outfile was opened successfully.)

How to display binary data on browser using perl?

I am trying to send the (raw) contents of a file to the browser, but the file is delivered as an attachment, even though there is no attachment disposition in the headers.
Can anyone suggest how I can display raw data in the browser using Perl?
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser);
use File::Copy qw( copy );
use File::Spec::Functions qw( catfile );
use POSIX qw(strftime);
use Time::Local;
use HTTP::Status qw(:constants :is status_message);
use Digest::MD5 qw(md5 md5_hex md5_base64);
use File::Basename;
use URI;
my $extfile = '/home/suresh/clientrequest.txt';
open(FH, ">>$extfile") or die "Cannot open file";
my $query = CGI->new;
my $stcode = status_message(206);
my $uri =$ENV{'REQUEST_URI'};
my $rdate =strftime("%a, %d %b %Y %H:%M:%S %Z", localtime());
print FH "Client request: ", $ENV{'REQUEST_URI'}, "\n";
my $dir = '/srv/samba/Assets';
#my $dir = '/home/suresh/Assets';
my $nffFile = fileparse ("$uri", qr/\.[^.]*/);
my $fullFname = $nffFile . ".nff";
my $path = catfile($dir, $fullFname);
my $filesize = -s $path;
print FH "Size of the file: ", $filesize, "\n";
#Search requested asset files
opendir(DIR, $dir);
my @files = readdir(DIR);
if (grep($_ =~ /$fullFname/, @files)) {
    print FH "Found the requested NFF file: ", $fullFname, "\n";
    open my $fh, '<:raw', $path;
    print "$ENV{SERVER_PROTOCOL} 206 $stcode";
    print $query->header(
        -'Accept-Range'   => 'bytes',
        -'Date'           => $rdate,
        -'Content-Range'  => '0-188/$filesize',
        -'Content-Length' => $filesize,
        -'Content-Type'   => 'application/octet-stream',
        -'Connection'     => 'Keep-Alive',
        -'Media-Type'     => 'application/octet-stream',
    );
    binmode STDOUT, ':raw';
    copy $fh => \*STDOUT;
    close $fh
        or die "Cannot close '$path': $!";
} else {
    print $query->header('text/plain', '404 File not Found!');
    print FH "Requested NFF file: ", $fullFname, " not found!!\n\n";
}
closedir(DIR);
Solved. It was a simple change: I needed to set the content type to text/html to achieve this.
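For the record, whether a browser renders a response or offers it as a download is governed chiefly by the Content-Disposition header rather than by the Content-Type alone; a raw-header sketch (the MIME type here is a made-up example):

```perl
use strict;
use warnings;

# 'inline' asks the browser to render the body; 'attachment' forces a download.
# image/png is a hypothetical type -- use the served file's real MIME type.
my $headers = "Content-Type: image/png\r\n"
            . "Content-Disposition: inline\r\n"
            . "\r\n";
print $headers;
# ... then print the raw file bytes on STDOUT ...
```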

Read Increment Then Write to a text file in perl

I have this little Perl script which opens a txt file, reads the number in it, then overwrites the file with the number incremented by 1. I can open and read from the file, and I can write to the file, but I'm having issues overwriting. In addition, I'm wondering if there is a way to do this without opening the file twice. Here's my code:
#!/usr/bin/perl
open (FILE, "<", "data.txt") or die "$! error trying to append";
undef $/;
$number = <FILE>;
$number = int($number);
$myNumber = $number++;
print $myNumber+'\n';
close(FILE);
open(FILE, ">data.txt") or die "$! error";
print FILE $myNumber;
close(FILE);
Change the line
$myNumber = $number++;
to
$myNumber = $number+1;
That should solve the problem.
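The underlying gotcha is that post-increment yields the variable's value from before the bump; a tiny sketch:

```perl
use strict;
use warnings;

my $number   = 5;
my $myNumber = $number++;   # post-increment: yields the OLD value, then bumps $number
                            # $myNumber is 5, $number is 6

my $n = 5;
my $m = ++$n;               # pre-increment: bumps first, then yields the new value
                            # $m is 6, $n is 6
```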
Below is how you could do by opening the file just once:
open(FILE, "+<data.txt") or die "$! error";
undef $/;
$number = <FILE>;
$number = int($number);
$myNumber = $number+1;
seek(FILE, 0, 0);
truncate(FILE, tell FILE);
print $myNumber, "\n";
print FILE $myNumber;
close(FILE);
It's good that you used the three-argument form of open the first time; you also needed to do that in your second open. Also, you should use lexical variables, i.e., those which begin with my, in your script, even for your file handles.
You can just increment the variable that holds the number instead of passing it to a new variable. Also, it's a good idea to use chomp. That said, consider the following option:
#!/usr/bin/env perl
use strict;
use warnings;
undef $/;
open my $fhIN, "<", "data.txt" or die "Error trying to open for reading: $!";
chomp( my $number = <$fhIN> );
close $fhIN;
$number++;
open my $fhOUT, ">", "data.txt" or die "Error trying to open for writing: $!";
print $fhOUT $number;
close $fhOUT;
Another option is to use the Module File::Slurp, letting it handle all the I/O operations:
#!/usr/bin/env perl
use strict;
use warnings;
use File::Slurp qw/edit_file/;
edit_file { chomp; $_++ } 'data.txt';
Try this:
#!/usr/bin/perl
use strict;
use warnings;
my $file = "data.txt";
my $number = 0;
my $fh;
if ( -e $file ) {
    open $fh, "+<", $file or die "Opening '$file' failed, because $!\n";
    $number = <$fh>;
    seek( $fh, 0, 0 );
} else { # if no data.txt exists - yet
    open $fh, ">", $file or die "Creating '$file' failed, because $!\n";
}
$number++;
print "$number\n";
print $fh $number;
close( $fh );
If you're using a bash shell, and you save the code to test.pl, you can test it with:
for i in {1..10}; do ./test.pl; done
Then 'cat data.txt' should show a 10.

Why is my image download CGI script written in Perl not working?

#!/usr/bin/perl
use CGI ':standard';
use CGI::Carp qw(fatalsToBrowser);
my $files_location;
my $ID;
my @fileholder;
$files_location = "C:\Users\user\Documents\hello\icon.png";
open(DLFILE, "<$files_location") ;
@fileholder = <DLFILE>;
close (DLFILE) ;
print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=$ID\n\n";
print @fileholder;
When I run this script, instead of returning the icon.png file it returns the download.pl (the name of the script given above) with no content inside it. What is the issue?
The script I am currently using:
#!C:\Perl64\bin\perl.exe -w
use CGI qw(:standard);
use File::Copy qw( copy );
use File::Spec::Functions qw( catfile );
use constant IMG_DIR => catfile(qw( D:\ ));
serve_logo(IMG_DIR);
sub serve_logo {
    my ($dir) = @_;
    my $cgi = CGI->new;
    my $file = "icon.png";
    #print $file;
    defined ($file) or die "Invalid image name in CGI request\n";
    send_file($cgi, $dir, $file);
    return;
}
sub send_file {
    my ($cgi, $dir, $file) = @_;
    my $path = catfile($dir, $file);
    open my $fh, '<:raw', $path or die "Cannot open '$path': $!";
    print $cgi->header( -type => 'application/octet-stream', -attachment => $file, );
    binmode STDOUT, ':raw';
    copy $fh => \*STDOUT, 8_192;
    close $fh or die "Cannot close '$path': $!";
    return;
}
There are quite a few issues. The first one is the fact that you are using @fileholder = <DLFILE>; to slurp a binary file. On Windows, automatic conversion of line endings will wreak havoc on the contents of that file.
Other issues are:
You are not checking the return value of open. We don't even know if open succeeded.
You never assign a value to $ID, meaning you're sending "filename=\n\n" in your response.
You are slurping a binary file, making the memory footprint of your program proportional to the size of the binary file. Robust programs don't do that.
You use CGI.pm, but you neither make real use of it nor appear to have read its docs.
You're using a bareword (i.e. package global) filehandle.
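The slurping point can be addressed with a chunked raw copy, so memory use stays constant regardless of file size; a sketch, not a drop-in for the script above (the helper name stream_file is made up):

```perl
use strict;
use warnings;

# Copy a file to an already-open handle in 8 KiB chunks, with no
# line-ending translation on either side.
sub stream_file {
    my ( $path, $out ) = @_;
    open my $in, '<:raw', $path or die "Cannot open '$path': $!";
    binmode $out, ':raw';
    while ( read( $in, my $buf, 8_192 ) ) {
        print {$out} $buf;
    }
    close $in or die "Cannot close '$path': $!";
}
```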
The fundamental reason, however, is that open fails. Why does open fail? Simple:
C:\temp> cat uu.pl
#!/usr/bin/env perl
use strict; use warnings;
my $files_location = "C:\Users\user\Documents\hello\icon.png";
print "$files_location\n";
Let's try running that, shall we?
C:\temp> uu
Unrecognized escape \D passed through at C:\temp\uu.pl line 5.
Unrecognized escape \h passed through at C:\temp\uu.pl line 5.
Unrecognized escape \i passed through at C:\temp\uu.pl line 5.
C:SERSSERDOCUMENTSHELLOICON.PNG
Here is a short script illustrating a better way:
use CGI qw(:standard);
use File::Copy qw( copy );
use File::Spec::Functions qw( catfile );
use constant IMG_DIR => catfile(qw(
E:\ srv localhost images
));
serve_logo(IMG_DIR);
sub serve_logo {
    my ($dir) = @_;
    my %mapping = (
        'big'    => 'logo-1600x1200px.png',
        'medium' => 'logo-800x600.png',
        'small'  => 'logo-400x300.png',
        'thumb'  => 'logo-200x150.jpg',
        'icon'   => 'logo-32x32.gif',
    );
    my $cgi = CGI->new;
    my $file = $mapping{ $cgi->param('which') };
    defined ($file)
        or die "Invalid image name in CGI request\n";
    send_file($cgi, $dir, $file);
    return;
}
sub send_file {
    my ($cgi, $dir, $file) = @_;
    my $path = catfile($dir, $file);
    open my $fh, '<:raw', $path
        or die "Cannot open '$path': $!";
    print $cgi->header(
        -type       => 'application/octet-stream',
        -attachment => $file,
    );
    binmode STDOUT, ':raw';
    copy $fh => \*STDOUT, 8_192;
    close $fh
        or die "Cannot close '$path': $!";
    return;
}
I also posted a detailed explanation on my blog.
It took me a while to figure out what was wrong, so for those who end up here (as I did) with random issues serving large files, here's my advice:
Avoid File::Copy, as it's bugged for this purpose.
When serving data through CGI, syswrite can return undef ($! being 'Resource temporarily unavailable') for some time.
File::Copy stops in that case (returns 0, sets $!), failing to transfer the entire file (or stream).
There are many options to work around that: retrying the syswrite, or using blocking sockets. I'm not sure which one is best, though!
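The retry idea can be sketched like this (the helper name write_all is made up; it assumes the destination may be a non-blocking handle):

```perl
use strict;
use warnings;

# Keep calling syswrite until every byte is out, pausing briefly when the
# handle reports EAGAIN/EWOULDBLOCK instead of treating that as a failure.
sub write_all {
    my ( $fh, $data ) = @_;
    my $off = 0;
    while ( $off < length $data ) {
        my $wrote = syswrite( $fh, $data, length($data) - $off, $off );
        if ( defined $wrote ) {
            $off += $wrote;
        }
        elsif ( $!{EAGAIN} || $!{EWOULDBLOCK} ) {
            select( undef, undef, undef, 0.05 );    # back off, then retry
        }
        else {
            die "syswrite failed: $!";
        }
    }
    return $off;
}
```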