CGI script can't create file - Perl

I have the following CGI script that uses a module to create a PNG file and then display it:
#!/usr/bin/perl
use RRDs;
use CGI;
main:
{
    my $cgi = new CGI;
    my $filename = $cgi->param('filename');
    print "Content-type: text/html\n\n";
    my $curr_time  = time();
    my $start_time = $curr_time - 3600;
    RRDs::graph("$filename", "--start", "$start_time",
        "DEF:DiskC=c.rrd:DiskC:AVERAGE", "AREA:DiskC#0000FF");
    my $err = RRDs::error();
    print "<HTML><HEAD><TITLE>Disk C Utilization</TITLE></HEAD><BODY>"
        . "<H1>Disk C Utilization</H1><BR>$err<img src=\"$filename\"/></BODY></HTML>";
}
RRDs::graph reports that it can't create the PNG file. If I run this script on the command line it works fine, so I think it's a matter of permissions. I have already set chmod 755 on the cgi-script folder. What do you suggest? Is this related to the Apache2 settings?

Um, check the logs :) CGI Help Guide
$filename is not a filename you want to use as-is; it can be anything the browser sends, even F:/bar/bar/bar/bar/bar/bar/bar/UHOH.png.
It's unlikely that F:/bar/bar/bar/bar/bar/bar/bar/UHOH.png exists on your server.
You want to generate a filename yourself, maybe like this:
sub WashFilename {
    use File::Basename;
    my $basename = basename( shift );

    # untaint: only keep a-z, A-Z, 0-9 and dot
    $basename = join '', $basename =~ m/([.a-zA-Z0-9])/g;

    # $basename is now, hopefully, file.ext,
    # so to ensure uniqueness, we adulterate it :)
    my $id = $$ . '-' . time;
    my( $file, $ext ) = split /\./, $basename, 2;
    return join '.', grep defined, $file, $id, $ext;
} ## end sub WashFilename
You also want to place the file in a directory of your webserver that will serve images (NOT cgi-bin)
And you also want to use File::Type::WebImages to determine the web image file type using magic, and to make sure the filename has the appropriate extension, so your server will send the appropriate headers.
And don't forget to chmod the file appropriately

chmod 777 solved my problem!

Don't store a document to file unless you have a reason to: it's slow and introduces all kinds of potential hazards.
You don't appear to reuse the same image on different requests, which might be such a reason.
Instead, teach your CGI script to output the image directly, depending on how it is called (or write a second one that does).
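For reference, a minimal sketch of the emit-it-directly approach. serve_png is a made-up helper, and in the real script the bytes would come from RRDs::graph (rrdtool can reportedly write the graph to stdout when given "-" as the filename, but verify that with your version):

```perl
use strict;
use warnings;

# Hypothetical helper: stream a scalar of PNG bytes as a CGI response
# with the correct Content-type, instead of saving a file under the docroot.
sub serve_png {
    my ($bytes, $out) = @_;
    binmode $out;    # don't let the layer mangle binary data
    print {$out} "Content-type: image/png\r\n\r\n";
    print {$out} $bytes;
}

# Demonstration with an in-memory filehandle standing in for STDOUT:
open my $fake_stdout, '>', \my $response or die $!;
serve_png( "fake-png-bytes", $fake_stdout );
close $fake_stdout;
print $response =~ m{^Content-type: image/png} ? "ok\n" : "bad\n";
```

In the CGI itself you would pass \*STDOUT instead of the in-memory handle, and nothing ever touches the disk.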

Related

Get the path for a similarly named file in perl, where only the extension differs?

I'm trying to write an Automator service, so I can chuck this into a right-click menu in the gui.
I have a filepath to a txt file, and there is a similarly named file that varies only in the file extension. This can be a pdf or a jpg, or potentially any other extension, no way to know beforehand. How can I get the filepath to this other file (there will only be one such)?
$other_name =~ s/txt$/!(txt)/;
$other_name =~ s/ /?/g;
my @test = glob "$other_name";
In Bash, I'd just turn on the extglob option, change the "txt" at the end to "!(txt)", and then do glob expansion. But I'm not even sure that's available in Perl. And since the filepaths always have spaces (in one of the near-root directory names), that further complicates things. I've read through the glob() documentation at http://perldoc.perl.org/functions/glob.html and tried every variation of quoting (the example code above shows my attempt after having given up, where I just remove all the spaces entirely).
It seems like I'm able to put modules inside the script, so this doesn't have to be bare perl (just ran a test).
Is there an elegant or at least simple way to accomplish this?
You can extract everything in the filename up to extension, then run a glob with that and filter out the unneeded .txt. This is one of those cases where you need to protect the pattern in the glob with a double set of quotes, for spaces.
use warnings;
use strict;
use feature qw(say);

my $file = "dir with space/file with spaces.txt";

# Pull the full name without extension
my ($basefname) = $file =~ m/(.*)\.txt$/;

# Get all files with that name and filter out the unneeded .txt
my @other_exts = grep { not /\.txt$/ } glob(qq{"$basefname.*"});
say for @other_exts;
With a toy structure like this

dir with space/
    file with spaces.pdf
    file with spaces.txt

the output is

dir with space/file with spaces.pdf
This recent post has more on related globs.
Perl's glob doesn't support an extglob-style "not" construct. You have to find all files with the same stem and any extension, and remove the one ending with .txt.
This program shows the idea. It splits the original file name into a stem part and a suffix part, and uses the stem to form a glob pattern. The grep removes any result that ends with the original suffix
It picks only the first matching file name if there is more than one candidate. $other_name will be set to undef if no matching file was found
The original file name is expected as a parameter on the command line
The result is printed to STDOUT; I don't know what you need for your right-click menu
The line use File::Glob ':bsd_glob' is necessary if you are working with file paths that contain spaces, as it seems you are
use strict;
use warnings 'all';
use File::Glob ':bsd_glob';
my ($stem, $suffix) = shift =~ /(.*)(\..*)/;
my ($other_name) = grep ! /$suffix$/i, glob "$stem.*";
print $other_name, "\n";
This is an example based on the File::Basename core module:
use File::Basename;
my $fullname = "/path/to/my/filename.txt";
my ($name, $path, $suffix) = fileparse($fullname, qw/.txt/);
my $new_filename = $path . $name . ".pdf";
# $name --> filename
# $path --> /path/to/my/
# $suffix --> .txt
# $new_filename --> /path/to/my/filename.pdf

Copy the last modified Dir from one location to another using Perl

Fairly new to Perl, so this is most likely not the best code, which is why I am posting. I got this to work but was wondering if there is a better way. I do not have the ability to download modules. I am copying the last modified directory in a build folder from one server to another server. The argument lets me choose which build directory to use.
Thanks
#!C:\strawberry\perl
use warnings;
use strict;
use File::Copy::Recursive;
my $NewFolder = `(dir /o-d/ad/b \\\\myserver1.name.com\\builds\\$ARGV[0] | head -1)`;
chomp($NewFolder);
my $dir1 = "\\\\myserver1.name.com\\builds\\$ARGV[0]/$NewFolder";
my $dir2 = "\\\\myserver2.name.com\\builds\\$ARGV[0]/Backup/$NewFolder";
File::Copy::Recursive::dircopy $dir1, $dir2 or die "Copy failed: $!";
Use forward slashes. It just makes your code easier to read:
$dir1 = "\\\\myserver1.name.com\\builds\\$ARGV[0]/$NewFolder";
vs.
$dir1 = "//myserver1.name.com/builds/$ARGV[0]/$NewFolder";
Also, don't make system calls where Perl can do the job. For example, Perl can see the last modification date of a file via stat. Even better is the File::stat module, which makes the stat command much easier to use.
Don't use @ARGV throughout your programs. Instead, read the values from @ARGV into your own variables. It makes your program easier to understand, and your own variables have limited scope while @ARGV is global.
Use modern conventions. Variable names should be in all lower case, and use underscores to separate out words. That is $new_folder vs. $NewFolder. Is this arbitrary? Yes, but it's a convention followed by most Perl developers. It means not wondering if the variable is $newFolder, $NewFolder, or $newfolder because you know by these rules it is $new_folder.
And finally, use autodie. This will kill your program whenever a file operation fails, which turns Perl from a check-every-function-for-errors language into an exception-based language. This way, you don't have to worry about whether you remembered to check for a failed I/O operation.
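A minimal illustration of what autodie changes (a sketch, not part of the program below):

```perl
use strict;
use warnings;
use autodie;   # open, close, etc. now throw exceptions on failure

# With autodie in scope, a failing open() dies instead of quietly
# returning false, so the eval catches an exception.
my $ok = eval {
    open my $fh, '<', '/no/such/file/hopefully-missing';
    1;
};
print $ok ? "open succeeded\n" : "autodie threw: $@";
```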
Here's a completely untested example:
use strict;
use warnings;
use autodie;
use File::Copy::Recursive qw(dircopy);  # optional module
use File::stat;

use constant {
    ORIG_SERVER => '//myserver1.name.com/builds',
    TO_SERVER   => '//myserver2.name.com/builds',
};

my $from_directory = shift;

#
# Find newest directory
#
opendir my $dir_fh, ORIG_SERVER . "/$from_directory";
my $newest_directory;
while ( my $sub_directory = readdir $dir_fh ) {
    next if $sub_directory eq "." or $sub_directory eq "..";
    next unless -d ORIG_SERVER . "/$from_directory/$sub_directory";
    if ( not defined $newest_directory ) {
        $newest_directory = $sub_directory;
        next;
    }
    my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
    my $sub_directory_stat    = stat ORIG_SERVER . "/$from_directory/$sub_directory";
    if ( $sub_directory_stat->mtime > $newest_directory_stat->mtime ) {
        $newest_directory = $sub_directory;
    }
}
closedir $dir_fh;

dircopy ORIG_SERVER . "/$from_directory/$newest_directory",
    TO_SERVER . "/$from_directory/Backup/$newest_directory";
My program is a lot longer than your program because your program depends upon various operating system commands, like dir and head, and I don't believe head is a standard Windows OS command. Instead, I read each entry under that directory in my loop. Anything that's not a directory I toss (next unless -d), and I toss out the special directories . and ..
After that, I use stat to find the youngest directory, which to me means the one with the newest modification time. Note that Unix doesn't store creation time; however, according to perlport, ctime is creation time on Win32, so you might prefer that instead of mtime.
If I didn't use File::stat, instead of this:
my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
my $sub_directory_stat    = stat ORIG_SERVER . "/$from_directory/$sub_directory";
if ( $sub_directory_stat->mtime > $newest_directory_stat->mtime ) {
    $newest_directory = $sub_directory;
}
I could have done this:
my $newest  = ORIG_SERVER . "/$from_directory/$newest_directory";
my $sub_dir = ORIG_SERVER . "/$from_directory/$sub_directory";
if ( (stat $sub_dir)[9] > (stat $newest)[9] ) {
    $newest_directory = $sub_directory;
}
The stat command without File::stat returns a list of values, and I could have simply used element [9] of that list. However, what is 9? Even though it would have saved me a few lines of code and loading an extra Perl module, it's better to use File::stat.
One thing you'll notice is that constants don't interpolate, which means I have to keep doing things like this:
my $newest_directory_stat = stat ORIG_SERVER . "/$from_directory/$newest_directory";
However, you can use this bit of Perlish black magic to interpolate constants inside quotes:
my $newest_directory_stat = stat "@{[ORIG_SERVER]}/$from_directory/$newest_directory";
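That @{[ ... ]} construct works for any expression, not just constants; a tiny self-contained sketch:

```perl
use strict;
use warnings;
use constant GREETING => 'hello';

# Constants don't interpolate directly, but @{[ ... ]} evaluates an
# expression inside a double-quoted string via an anonymous array deref.
my $plain = "GREETING, world";       # no interpolation: stays literal
my $magic = "@{[GREETING]}, world";  # interpolates the constant
print "$magic\n";
```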
Hope that helps.

... architecting & implementing help w/ Perl & REST

I really just need some basic software engineering architecting guidance... When I say 'architecting', I mean pseudo-code or statements I can go after in the Perl book I have, and hopefully some online examples...
I'm using Perl and I'm trying to parse Medline/Pubmed file paths on an Unix system in order to finally pass the PMID from each path to a pmid2doi conversion website (http://www.pmid2doi.org/).
The structure of each link is a filepath of the form...
/xxxxx/xxxxx/xxxxx/xxxxx/xxxxx/UNC00000000000042/00223468/v45i3/S0022346809003820
|<-PMID->|
where 00223468 is the PMID.
$ ls
18507872 main.pdf main.raw main.xml
So far, I've got something that looks like this (in bash):
for doi in `find . -name "*.xml" | awk -F\/ '{print $2}' ` #this extracts the PMID
do
echo $doi
wget pmid2doi website
done
Now I need help translating it to Perl.
The pmid2doi website requires inputting the PMID in order to get back the DOI.
This is what I need to get running in Perl, and I need a little help in designing implementing this...
Given your development background, I would be grateful for any insights, or recommended sites for generating regular expressions in Perl.
thanks very much!
Additionally:
I see that the pmid2doi website says the REST API expects the website plus the DOI value.
So I need to find some example Perl code that gets a REST value from a URL.
In REST I just prepare the URL as specified and then the returned result should be the value I want.
Something like this maybe?
use File::Find;
use REST::Client;
use Carp;

my $client = REST::Client->new( $an_url );
File::Find::find( sub {
    return unless m/\.xml$/;
    open( my $fh, '<', $File::Find::name )
        or do { carp "Could not open $File::Find::name!"; return };
    my $doi;
    while ( <$fh> ) {
        next unless ( $doi ) = m{[^/]*/([^/]*)};
        $client->GET( join( '/', $base, $doi ) );
        do_stuff_with_content( $client->responseContent );
    }
    close $fh;
} => '.' );
Am I misunderstanding things, or could you just do this in Perl locally and not bother with the website? No need to hit up a remote host and deal with latency, usage limits, etc...
https://metacpan.org/module/Bio::DB::EUtilities
Edit: I am, that's a CGI client for NCBI. From the docs: 'A separate project, currently present in BioPerl-Run, utilizes the SOAP-based interface for eUtils.'
https://bitbucket.org/reece/docdepot/src/ca32360f6fa4/archive/perl-version/bin/doi-to-pmid
#!/usr/bin/perl
# doi-to-pmid -- get the PubMed ids for a doi
use strict;
use warnings;
use Bio::DB::EUtilities;
my @pmids = doi_to_pmids( $ARGV[0] );
exit 1 unless @pmids;
print join("\n", @pmids), "\n";
exit;

sub doi_to_pmids {
    my $doi = $_[0];
    my $f = Bio::DB::EUtilities->new(
        -eutil => 'esearch',
        -db    => 'pubmed',
        -term  => "$doi [doi]",
    );
    my @ids = $f->get_ids();
    return @ids;
}

Check for existence of directory in Perl with wildcard

I need to check whether any of a set of directories exist in a Perl script. The directories are named in the format XXXX*YYY - I need to check for each XXXX and enter an if statement if true.
In my script I have two variables $monitor_location (contains the path to the root directory being scanned) and $clientid (contains the XXXX).
The code snippet below has been expanded to show more of what I'm doing. I have a query which returns each client ID, I'm then looping for each record returned and trying to calculate the disk space used by that client ID.
I have the following code so far (doesn't work):
# loop for each client
while ( ($clientid, $email, $name, $max_record) = $query_handle1->fetchrow_array() )
{
    # add leading zeroes to client ID if needed
    $clientid = sprintf "%04s", $clientid;

    # scan file system to check how much recording space has been used
    if (-d "$monitor_location/$clientid\*") {
        # there are some call recordings for this client
        $str = `du -c $monitor_location/$clientid* | tail -n 1 2>/dev/null`;
        $str =~ /^(\d+)/;
        $client_recspace = $1;
        print "Client $clientid has used $client_recspace of $max_record\n";
    }
}
To be clear, I want to enter the if statement if there are any folders that start with XXXX.
Hope this makes sense! Thanks
You can use glob to expand the wildcard:
for my $dir (grep -d, glob "$monitor_location/$clientid*") {
...
}
I have a "thing" against glob. (It seems to only work once (for me), meaning you couldn't re-glob that same dir again later in the same script. It's probably just me, though.)
I prefer readdir(). This is definitely longer, but it WFM.
chdir($monitor_location) or die;
opendir(my $dh, ".") or die;
my @items = grep(-d, grep(/^$clientid/, readdir($dh)));
closedir($dh);
Everything in @items matches what you want.
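For what it's worth, the "works once" behaviour usually comes from calling glob in scalar context, where it acts as a stateful iterator; in list context it can be repeated freely. A small self-contained sketch (using a temp directory rather than the poster's layout):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Create a throwaway directory with two subdirectories to glob against.
my $dir = tempdir( CLEANUP => 1 );
for my $name (qw(0001_a 0001_b)) {
    mkdir "$dir/$name" or die $!;
}

# List context: each call re-runs the pattern, so globbing twice works.
my @first  = grep -d, glob "$dir/0001*";
my @second = grep -d, glob "$dir/0001*";
printf "%d %d\n", scalar @first, scalar @second;
```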

Perl File Name Change

I am studying and extending a Perl script written by others. It has a line:
@pub = `ls $sourceDir | grep '\.htm' | grep -v Default | head -550`;
foreach (@pub) {
    my $docName = $_;
    chomp($docName);
    $docName =~ s/\.htm$//g;
    ...
}
I know that it uses a UNIX command firstly to take out all the htm files, then get rid of file extension.
Now I need to do one more thing, which is also very important: I need to change the names of the actual files stored on disk, replacing the white space with underscores. I am stuck here because I am not sure whether I should follow his code style, achieving this with UNIX commands, or do it in Perl. The point is that I need to modify the real files on disk, not the strings that held the file names.
Thanks.
Something like this should help (not tested)
use File::Basename;
use File::Spec;
use File::Copy;
use strict;
my @files = grep { ! /Default/ } glob("$sourceDir/*.htm");

# I didn't implement the "head -550" part as I don't understand the point,
# but you can easily do it using the splice() function.

foreach my $file (@files) {
    next unless -f $file;            # don't rename directories!
    my $dirname   = dirname($file);  # the file's directory, so we rename only the file itself
    my $file_name = basename($file); # file name, for renaming
    my $new_file_name = $file_name;
    $new_file_name =~ s/ /_/g;       # replace all spaces with underscores
    rename($file, File::Spec->catfile($dirname, $new_file_name))
        or die $!;                   # error handling - what if we couldn't rename?
}
It will be faster to use File::Copy to move the files to their new names than to shell out: the backtick approach forks off a new process and spawns a new shell, which takes more memory and is slower than doing it within Perl itself.
Edit: you can get rid of all that backtick business, too, like this:
my @files = grep { !/Default/ } glob "$sourceDir/*.htm";
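A minimal sketch of that File::Copy route, using a temp directory as a stand-in for $sourceDir. In real code, apply the substitution to the basename only (as in the answer above) so directory names containing spaces are left alone:

```perl
use strict;
use warnings;
use File::Copy qw(move);
use File::Temp qw(tempdir);

# Build a throwaway file whose name contains a space.
my $dir = tempdir( CLEANUP => 1 );
open my $fh, '>', "$dir/my file.htm" or die $!;
close $fh;

# Rename every .htm file, turning spaces into underscores, via move().
for my $file (glob "$dir/*.htm") {
    ( my $new = $file ) =~ s/ /_/g;   # spaces -> underscores
    move( $file, $new ) or die "move failed: $!";
}
print -e "$dir/my_file.htm" ? "renamed\n" : "missing\n";
```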