File upload using a Perl CGI script not working - perl

The following code doesn't have any syntax errors, but it still doesn't work. Can I use the server IP (like 100.100.100.100) for $Domain, and what path should be given for $directory (I mean, should I add the server IP or domain name)? Please help.
#!/usr/bin/perl
use CGI;

$CGI::POST_MAX = 100 * 1024;
$CGI::DISABLE_UPLOADS = 0;

$Referer = $ENV{HTTP_REFERER};
$Domain  = "xxx.com";

$cgi = new CGI;
$file=$cgi->upload('text');

print $cgi->header,
      $cgi->start_html(
          -title => 'CGI.pm File Upload'
      );

print <<EOF;
<form action="" method="post" enctype="multipart/form-data">
<input type="file" name="text" size=60><br>
<input type="submit" value="Upload">
</form>
EOF

if ($file)
{
    if ($Referer =~ "$Domain")
    {
        $directory="var/www/cgi-bin/uploads";
        open UPLOAD, ">$directory$file";
        binmode UPLOAD;
        while (<$file>) { print UPLOAD; }
        close UPLOAD;
    }
}

$cgi->end_html;
exit;

Looks like you need to read the documentation on file upload basics again. The sample code they have is:
use autodie;

# undef may be returned if it's not a valid file handle
if ( my $io_handle = $q->upload('field_name') ) {
    open ( my $out_file, '>>', '/usr/local/web/users/feedback' );
    while ( my $bytesread = $io_handle->read( $buffer, 1024 ) ) {
        print $out_file $buffer;
    }
}
There are some stylistic differences from your code, but the important thing to note is that when your code runs this line:
$file=$cgi->upload('text');
Then $file contains an open filehandle. It does not contain the filename. This means that there are at least three errors in these lines of your code:
$directory="var/www/cgi-bin/uploads";
open UPLOAD, ">$directory$file";
The value you store in $directory should almost certainly start with a / (so it's /var/www/cgi-bin/uploads).
You also need another / between $directory and $file (otherwise, it will contain something like /var/www/cgi-bin/uploadsmyfile.dat).
You need to call $cgi->param('text') to get the name of the file that is being uploaded.
This is what is stopping your program from working. The upload section of your code should look like this:
my $filename = $cgi->param('text');
my $fh = $cgi->upload('text');
my $directory = '/var/www/cgi-bin/uploads';
open my $upload_fh, '>', "$directory/$filename"
or die "Can't open '$directory/$filename': $!";
print $upload_fh $_ while <$fh>;
Note that I've made some stylistic improvements here:
Used 3-argument version of open()
Used lexical filehandles
Checked the success of the open() call and killed the program with a useful error message if it fails
All in all, you seem to have learned CGI programming from a resource that is about twenty years out of date. Your code looks like it comes from the 1990s.
A few other tips:
Always use strict and use warnings.
Indirect object notation (new CGI) is potentially very confusing. Use CGI->new instead.
We've known that the HTML-generation functions in CGI.pm are a terrible idea since the end of the last millennium. Please don't use them. Many good templating solutions are available for Perl.
Writing a CGI program in 2017 is a terrible idea. Take a look at CGI::Alternatives for an introduction to Modern Perl Web Development tools.
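For illustration, here is a minimal sketch of the same upload handler written with Mojolicious::Lite, one of the frameworks that CGI::Alternatives covers. The route path and upload directory are assumptions, the field name "text" is carried over from the question, and the client-supplied filename is used unchecked, so a real application would need to sanitise it first:
use Mojolicious::Lite;

# Accept a multipart upload from a field named "text" (as in the question).
post '/upload' => sub {
    my $c      = shift;
    my $upload = $c->req->upload('text')
        or return $c->render( text => 'No file uploaded', status => 400 );

    # WARNING: the filename comes straight from the client; sanitise it
    # before using it like this in production.
    $upload->move_to( '/var/www/cgi-bin/uploads/' . $upload->filename );

    $c->render( text => 'Uploaded ' . $upload->filename );
};

app->start;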

Related

Perl: Open a file from a URL

I wanted to know how to open a file from a URL rather than a local file and I found the following answer on another thread:
use IO::String;
my $handle = IO::String->new(get("google.com"));
my @lines = <$handle>;
close $handle;
This works perfectly... on my PC...
But when I transferred the code over to my hosted server, it complains that it can't find the IO module. So is there another way to open a file from a URL that doesn't require any external modules (or uses one that is pretty much installed on every server)?
You can install PerlIO::http, which will give you an input layer for opening a filehandle from a URL via open. This thing is not included in the Perl core, but it will work with Perls as early as 5.8.9.
Once you've installed it, all you need to do is open with a layer :http in the mode argument. There is nothing to use here. That happens automatically.
open my $fh, '<:http', 'https://metacpan.org/recent';
You can then read from $fh like a regular file. Under the hood it will take care of getting the data over the wire.
while (my $line = <$fh>) { ... }
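Putting those two fragments together, a complete minimal sketch might look like this (it assumes PerlIO::http has been installed from CPAN, and uses the example URL from above):
use strict;
use warnings;

# The :http layer is loaded on demand; no explicit "use" is needed.
open my $fh, '<:http', 'https://metacpan.org/recent'
    or die "Can't open URL: $!";

while (my $line = <$fh>) {
    print $line;
}

close $fh;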
There is no way to "open a file from a URL" as you ask. Well, I suppose you could throw something together using the progress() callback from LWP::UserAgent, but even then I don't think it would work how you want it to.
But you can make something that looks like it's doing what you want pretty easily. Actually, what we're really doing is pulling all the data back from the URL and then opening a filehandle on a string that contains that data.
use LWP::Simple;
my $data = get('https://google.com');
open my $url_fh, '<', \$data or die $!;
# Now $url_fh is a filehandle wrapped around your data.
# Treat it like any other filehandle.
while (<$url_fh>) {
print;
}
Your problem was that IO::String wasn't installed. But there's no need to install it, as it's simple enough to do what it does with standard Perl features (simply open a filehandle on a reference to a string).
Update: IO::String is completely unnecessary here. Not only because you can do what it does very simply, by just opening a filehandle on a reference to your string, but also because all you want to do is to read a file from a web site into an array. And in that case, your code is simply:
use LWP::Simple;
my $url = 'something';
my @records = split /\n/, get($url);
You might even consider adding some error handling.
use LWP::Simple;
my $url = 'something';
my $data = get($url);
die "No data found\n" unless defined $data;
my @array = split /\n/, $data;

Read and write a textfile with Perl

I am trying to open and read a text file and then write the content of this file line by line into an HTML file. So far, I've come up with this:
use strict;
use locale;
my (@datei, $i);
open (FHIN,"HSS_D.txt") || die "couldn't open file $!";
@datei = <in>;
close FHIN;
open (FHOUT, ">pz2.html");
print FHOUT "<HTML>\n";
print FHOUT "<HEAD>\n";
print FHOUT "<TITLE>pz 2</TITLE>\n";
print FHOUT '<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">';
print FHOUT "\n</HEAD>\n";
print FHOUT "<BODY>\n";
for ($i = 0; $i < @datei; $i++) {
    print FHOUT "<p>$datei[$i]</p>\n";
}
print FHOUT "</BODY></html>\n";
close (FHOUT);
However, I get a compilation error every time and I can't figure out what's wrong. Thanks for your help!
If you had enabled warnings via use warnings or use warnings qw(all)—which you should always do—you would have seen something like this:
Name "main::in" used only once: possible typo at foo.pl line 6.
That is, of course, this line:
@datei = <in>;
The root cause of the problem is that you opened a filehandle named FHIN, but you tried to read from a filehandle named in. However, the whole operation would be better written using lexical filehandles and the three-argument form of open, which is considered a best practice:
open(my $fh, '<', 'HSS_D.txt') or die "couldn't open file $!";
As an aside, I've voted to close this question as off-topic because it is about a problem that was caused by a simple typographical error.
Problem in your script
You are reading from the wrong filehandle into your array; that is your problem. @datei = <in> should be:
@datei = <FHIN>;
Some other things you should know:
1) Always put use warnings and use strict at the top of the program.
2) Don't store the whole file in an array; instead, process the file line by line:
while (my $line = <FHIN>)
{
    # do your stuff here
}
3) Use the three-argument form of open with a lexical filehandle, like this:
open my $fh, '<', 'filename' or die "couldn't open file: $!";
4) To access each element of an array, you can use Perl's foreach instead of a C-style loop:
for my $element (@array)
{
    ...
}
If you instead want to iterate through the array by index, use the following form:
for my $index (0 .. $#array)
{
    ...
}
Here, .. is the range operator, and $#array gives the last index of the array.
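Putting these points together, a corrected sketch of the original script (reusing the file names from the question) might look like this:
use strict;
use warnings;
use locale;

# Read the input file line by line and write each line out as an HTML paragraph.
open my $in,  '<', 'HSS_D.txt' or die "couldn't open input file: $!";
open my $out, '>', 'pz2.html'  or die "couldn't open output file: $!";

print $out "<HTML>\n<HEAD>\n<TITLE>pz 2</TITLE>\n";
print $out '<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">';
print $out "\n</HEAD>\n<BODY>\n";

while (my $line = <$in>) {
    chomp $line;
    print $out "<p>$line</p>\n";
}

print $out "</BODY></html>\n";

close $in;
close $out or die "couldn't close output file: $!";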

Out of memory when serving a very big binary file over HTTP

The code below is the original code of a Perl CGI script we are using. It seems to work even for very big files, but not for really huge ones.
The current code is :
$files_location = $c->{target_dir}.'/'.$ID;
open(DLFILE, "<$files_location") || Error('open', 'file');
@fileholder = <DLFILE>;
close (DLFILE) || Error ('close', 'file');
print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=$name\n\n";
print @fileholder;
binmode $DLFILE;
If I understand the code correctly, it is loading the whole file into memory before "printing" it. Of course, I suppose it would be a lot better to load and send it in chunks? But after having read many forums and tutorials, I am still not sure how to do it best with standard Perl libraries...
Last question: why is "binmode" specified at the end?
Thanks a lot for any hint or advice.
I have no idea what binmode $DLFILE is for. $DLFILE has nothing to do with the file handle DLFILE, and it's a bit late to set the binmode of the file now that it has been read to the end. It's probably just a mistake.
You can use this instead. It uses modern Perl best practices and reads and sends the file in 8K chunks.
The file name seems to be made from $ID, so I'm not sure that $name would be correct, but I can't tell.
Make sure to keep the braces, as the block makes Perl restore the old value of $/ and close the open file handle.
my $files_location = "$c->{target_dir}/$ID";

{
    print "Content-Type: application/x-download\n";
    print "Content-Disposition: attachment; filename=$name\n\n";

    open my $fh, '<:raw', $files_location or Error('open', "file $files_location");
    local $/ = \( 8 * 1024 );
    print while <$fh>;
}
You're pulling the entire file at once into memory. Best to loop over the file line-by-line, which eliminates this problem.
Note also that I've modified the code to use the proper 3-arg open, and to use a lexical file handle instead of a global bareword one.
open my $fh, '<', $files_location or die $!;
print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=$name\n\n";
while (my $line = <$fh>) {
    print $line;
}
The binmode call appears to be useless in the context of what you've shown here, as $DLFILE doesn't appear to be a valid, in-use variable (add use strict; and use warnings; at the top of your script...)
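For completeness, here is a sketch of the same chunked approach using an explicit read() loop instead of localising $/. It reuses the $files_location and $name variables from the question, and keeps memory use bounded regardless of how long the lines are:
open my $fh, '<:raw', $files_location
    or die "Can't open '$files_location': $!";

print "Content-Type: application/x-download\n";
print "Content-Disposition: attachment; filename=$name\n\n";

# Send the file 8K at a time; read() returns 0 at end of file.
my $buffer;
while (my $bytes_read = read($fh, $buffer, 8 * 1024)) {
    print $buffer;
}

close $fh;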

Getstore to Buffer, not using temporary files

I started Perl recently and have mixed together quite a few things to get what I want.
My script gets the content of a webpage and writes it to a file.
Then I open a filehandle, feed the file report.html into it (sorry, I'm not a native English speaker, I don't know how to say it better) and parse it.
I write every line I encounter to a new file, except lines containing a specific color.
It works, but I'd like to try another way that doesn't require me to create a temporary "report.html" file.
Furthermore, I'd like to print my result directly to a file; I don't want to have to use a shell redirection '>'. That would mean my script has to be called by another .sh script, and I don't want that.
use strict;
use warnings;
use LWP::Simple;
my $report = "report.html";
getstore('http://test/report.php', 'report.html')
    or die 'Unable to get page\n';
open my $fh2, "<$report" or die("could not open report file : $!\n");
while (<$fh2>)
{
print if (!(/<td style="background-color:#71B53A;"/ .. //));
}
close($fh2);
Thanks for your help
If you have the HTML content in a variable, you can use an open call on that variable, like this:
my $var = "your html content\ncomes here\nstored into this variable";
open my $fh, '<', \$var;
# .. just do the things you like to $fh
You can try the get function from the LWP::Simple module ;)
For your second question, use open like open $fh, '<', $filepath. You can use perldoc -f open to see more info.
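Combining both suggestions, here is a sketch that avoids the temporary report.html entirely and writes the result straight to an output file. The output file name result.txt is an assumption, and the filter condition is copied verbatim from the question:
use strict;
use warnings;
use LWP::Simple;

# Fetch the page into a scalar instead of storing it with getstore().
my $content = get('http://test/report.php')
    or die "Unable to get page\n";

# Open a read handle on the in-memory string, and open the result file directly.
open my $in,  '<', \$content    or die "could not open in-memory handle: $!";
open my $out, '>', 'result.txt' or die "could not open result file: $!";

while (<$in>) {
    print {$out} $_ if !(/<td style="background-color:#71B53A;"/ .. //);
}

close $in;
close $out;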

What's the best way to open and read a file in Perl?

Please note - I am not looking for the "right" way to open/read a file, or the way I should open/read a file every single time. I am just interested to find out what way most people use, and maybe learn a few new methods at the same time :)
A very common block of code in my Perl programs is opening a file and reading or writing to it. I have seen so many ways of doing this, and my style on performing this task has changed over the years a few times. I'm just wondering what the best (if there is a best way) method is to do this?
I used to open a file like this:
my $input_file = "/path/to/my/file";
open INPUT_FILE, "<$input_file" || die "Can't open $input_file: $!\n";
But I think that has problems with error trapping.
Adding a parenthesis seems to fix the error trapping:
open (INPUT_FILE, "<$input_file") || die "Can't open $input_file: $!\n";
I know you can also assign a filehandle to a variable, so instead of using "INPUT_FILE" like I did above, I could have used $input_filehandle - is that way better?
For reading a file, if it is small, is there anything wrong with globbing, like this?
my @array = <INPUT_FILE>;
or
my $file_contents = join( "\n", <INPUT_FILE> );
or should you always loop through, like this:
my @array;
while (<INPUT_FILE>) {
    push(@array, $_);
}
I know there are so many ways to accomplish things in Perl; I'm just wondering if there are preferred/standard methods of opening and reading in a file?
There are no universal standards, but there are reasons to prefer one or another. My preferred form is this:
open( my $input_fh, "<", $input_file ) || die "Can't open $input_file: $!";
The reasons are:
You report errors immediately. (Replace "die" with "warn" if that's what you want.)
Your filehandle is now reference-counted, so once you're no longer using it, it will be closed automatically. If you use the global name INPUT_FILEHANDLE, then you have to close the file manually, or it will stay open until the program exits.
The read-mode indicator "<" is separated from the $input_file, increasing readability.
The following is great if the file is small and you know you want all lines:
my #lines = <$input_fh>;
You can even do this, if you need to process all lines as a single string:
my $text = join('', <$input_fh>);
For long files you will want to iterate over lines with while, or use read.
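For illustration, here is a small self-contained sketch of both options; the file name input.txt is just a placeholder:
use strict;
use warnings;

open my $input_fh, '<', 'input.txt' or die "Can't open input.txt: $!";

# Line by line:
while ( my $line = <$input_fh> ) {
    print $line;
}

# Or, rewinding and re-reading in fixed-size chunks with read():
seek $input_fh, 0, 0;
my $buffer;
while ( read( $input_fh, $buffer, 64 * 1024 ) ) {
    print $buffer;
}

close $input_fh;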
If you want the entire file as a single string, there's no need to iterate through it.
use strict;
use warnings;
use Carp;
use English qw( -no_match_vars );
my $data = q{};
{
local $RS = undef; # This makes it just read the whole thing,
my $fh;
croak "Can't open $input_file: $!\n" if not open $fh, '<', $input_file;
$data = <$fh>;
croak 'Some Error During Close :/ ' if not close $fh;
}
The above satisfies perlcritic --brutal, which is a good way to test for 'best practices' :). $input_file is still undefined here, but the rest is kosher.
Having to write 'or die' everywhere drives me nuts. My preferred way to open a file looks like this:
use autodie;
open(my $image_fh, '<', $filename);
While that's very little typing, there are a lot of important things to note which are going on:
We're using the autodie pragma, which means that all of Perl's built-ins will throw an exception if something goes wrong. It eliminates the need for writing or die ... in your code, it produces friendly, human-readable error messages, and has lexical scope. It's available from the CPAN.
We're using the three-argument version of open. It means that even if we have a funny filename containing characters such as <, > or |, Perl will still do the right thing. In my Perl Security tutorial at OSCON I showed a number of ways to get 2-argument open to misbehave. The notes for this tutorial are available for free download from Perl Training Australia.
We're using a scalar file handle. This means that we're not going to be coincidently closing someone else's file handle of the same name, which can happen if we use package file handles. It also means strict can spot typos, and that our file handle will be cleaned up automatically if it goes out of scope.
We're using a meaningful file handle. In this case it looks like we're going to write to an image.
The file handle ends with _fh. If we see us using it like a regular scalar, then we know that it's probably a mistake.
If your files are small enough that reading the whole thing into memory is feasible, use File::Slurp. It reads and writes full files with a very simple API, plus it does all the error checking so you don't have to.
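A minimal File::Slurp sketch (the module comes from CPAN; the file names here are just placeholders):
use File::Slurp;

my $text  = read_file('input.txt');    # whole file as a single string
my @lines = read_file('input.txt');    # or as a list of lines
write_file('copy.txt', $text);         # write the contents back out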
There is no best way to open and read a file. It's the wrong question to ask. What's in the file? How much data do you need at any point? Do you need all of the data at once? What do you need to do with the data? You need to figure those out before you think about how you need to open and read the file.
Is anything that you are doing now causing you problems? If not, don't you have better problems to solve? :)
Most of your question is merely syntax, and that's all answered in the Perl documentation (especially perlopentut). You might also like to pick up Learning Perl, which answers most of the problems you have in your question.
Good luck, :)
It's true that there are as many best ways to open a file in Perl as there are
$files_in_the_known_universe * $perl_programmers
...but it's still interesting to see who usually does it which way. My preferred form of slurping (reading the whole file at once) is:
use strict;
use warnings;
use IO::File;
my $file = shift @ARGV or die "what file?";
my $fh = IO::File->new( $file, '<' ) or die "$file: $!";
my $data = do { local $/; <$fh> };
$fh->close();
# If you didn't just run out of memory, you have:
printf "%d characters (possibly bytes)\n", length($data);
And when going line-by-line:
my $fh = IO::File->new( $file, '<' ) or die "$file: $!";
while ( my $line = <$fh> ) {
print "Better than cat: $line";
}
$fh->close();
Caveat lector of course: these are just the approaches I've committed to muscle memory for everyday work, and they may be radically unsuited to the problem you're trying to solve.
I once used the
open (FILEIN, "<", $inputfile) or die "...";
my @FileContents = <FILEIN>;
close FILEIN;
boilerplate regularly. Nowadays, I use File::Slurp for small files that I want to hold completely in memory, and Tie::File for big files that I want to scalably address and/or files that I want to change in place.
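For reference, a small Tie::File sketch; the file name is a placeholder. The file's lines appear as array elements, and changes to the array are written back to the file in place:
use Tie::File;

tie my @lines, 'Tie::File', 'big_file.txt'
    or die "Can't tie file: $!";

$lines[0] = 'A new first line';      # rewrites just that record on disk
push @lines, 'Appended at the end';  # grows the file

untie @lines;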
For OO, I like:
use FileHandle;
...
my $handle = FileHandle->new( "< $file_to_read" );
croak( "Could not open '$file_to_read'" ) unless $handle;
...
my $line1 = <$handle>;
my $line2 = $handle->getline;
my #lines = $handle->getlines;
$handle->close;
Read the entire file $file into variable $text with a single line
$text = do {local(@ARGV, $/) = $file; <>};
or as a function
$text = load_file($file);
sub load_file {local(@ARGV, $/) = @_; <>}
If these programs are just for your productivity, whatever works! Build in as much error handling as you think you need.
Reading in a whole file if it's large may not be the best way long-term to do things, so you may want to process lines as they come in rather than load them up in an array.
One tip I got from one of the chapters in The Pragmatic Programmer (Hunt & Thomas) is that you might want to have the script save a backup of the file for you before it goes to work slicing and dicing.
The || operator has higher precedence than the comma, so it binds to the "<$input_file" string and is evaluated first, before the result is passed to open; since that string is always true, the die never executes. In the code you've mentioned, use the low-precedence or operator instead, and you won't have that problem:
open INPUT_FILE, "<$input_file"
or die "Can't open $input_file: $!\n";
Damian Conway does it this way:
$data = readline!open(!((*{!$_},$/)=\$_)) for "filename";
But I don't recommend that to you.